Regina Tyshkevich
Regina Iosifovna Tyshkevich (Belarusian: Рэгіна Іосіфаўна Тышкевіч; 20 October 1929 – 17 November 2019[1]) was a Belarusian mathematician, an expert in graph theory, Doctor of Physical and Mathematical Sciences, and professor at the Belarusian State University.[2][3]
Regina Tyshkevich
Born: Regina Iosifovna Tyshkevich, October 20, 1929, Minsk, Byelorussian SSR, Soviet Union
Died: November 17, 2019 (aged 90)
Nationality: Belarusian
Alma mater: Belarusian State University
Fields: Mathematics
Institutions: Belarusian State University
Her main scientific interests included intersection graphs, degree sequences, and the reconstruction conjecture. She was also known for independently introducing and investigating the class of split graphs and for her contributions to line graphs of hypergraphs.
In 1998, she was awarded the Belarus State Prize for her book Lectures on Graph Theory.[2] Also of note is her three-volume textbook An Introduction into Mathematics, written together with two colleagues.
In October 2009 an international conference, "Discrete Mathematics, Algebra, and their Applications", sponsored by the Central European Initiative, was held in Minsk, Belarus in honor of her 80th birthday.[4]
Regina Tyshkevich was a direct descendant of the Tyszkiewicz magnate family, and her colleagues therefore sometimes called her "the countess of graph theory", a pun in Russian: the word "граф" (graf) means both "count" and "graph".[2]
Books and selected publications
• (With Dmitry Suprunenko) "Commutative Matrices", 1968, Academic Press ISBN 0-12-677050-6
• Russian original: "Perestanovochnye matritsy" 1966, 2nd edition: 2003, ISBN 5-354-00437-3
• (With Emilichev, V. A., Melnikov, O. I., Sarvanov, V. I.) "Lectures on Graph Theory", B. I. Wissenschaftsverlag, 1994 ISBN 3-411-17121-9
• Russian original: "Lektsii po teorii grafov", 1990
• (With O. Melnikov, V. Sarvanov, et al.) "Exercises in Graph Theory", Kluwer Academic Publishers, 1998, ISBN 0-7923-4906-7
• "Linear Algebra and Analytical Geometry" (Линейная алгебра и аналитическая геометрия)
• S. G. Kononov, R. I. Tyshkevich, V. I. Yanchevsky, "Введение в математику" ("An Introduction into Mathematics"), 3 volumes, Minsk, Belarusian State University, 2003
• R. I. Tyshkevich, "Decomposition of graphical sequences and unigraphs", Discrete Math., 2000, Vol. 220, pp. 201–238.
• Yury Metelsky, Regina Tyshkevich: Line Graphs of Helly Hypergraphs. SIAM Journal on Discrete Mathematics 16(3): 438-448 (2003)
State awards
• 1979: Certificate of Honour of the Ministry of Higher and Secondary Education of the BSSR "For many years of fruitful scientific and methodological work" (Почетная грамота Министерства высшего и среднего образования БССР «За многолетнюю плодотворную научно-методическую деятельность»);[5]
• 1985: Veteran of Labour Medal (Медаль «Ветеран труда»);[5]
• 1992: honorary title "Honoured Worker of Public Education of the Republic of Belarus" (почетное звание «Заслуженный работник народного образования Республики Беларусь»);[5]
• 1998: Belarus State Prize (Государственная премия Республики Беларусь);[5]
• 2009: Medal of Frantsysk Skaryna[5]
References
1. "Мехмат соболезнует родным и близким Тышкевич Регины Иосифовны". Belarusian State University (in Russian). November 18, 2019. Retrieved November 19, 2019.
2. Артеага, Вера (October 28, 2006), "Графиня» теории графов [A Countess of Graph Theory]", Республика, archived from the original on September 26, 2007{{citation}}: CS1 maint: unfit URL (link).(retrieved February 8, 2007); (archive; text-only, (retrieved May 11, 2016))
3. https://scholar.google.com/scholar?q=Regina+Tyshkevich&hl=en&lr=&btnG=Search - On Google scholar
4. Conference announcement.
5. "Тышкевич Регина Иосифовна", a Belarus State University webpage (retrieved May 9, 2016)
Regiomontanus' angle maximization problem
In mathematics, Regiomontanus' angle maximization problem is a famous optimization problem[1] posed by the 15th-century German mathematician Johannes Müller[2] (also known as Regiomontanus). The problem is as follows:
A painting hangs on a wall. Given the heights of the top and bottom of the painting above the viewer's eye level, how far from the wall should the viewer stand in order to maximize the angle subtended by the painting at the viewer's eye?
If the viewer stands too close to the wall or too far from the wall, the angle is small; somewhere in between it is as large as possible.
The same approach applies to finding the optimal place from which to kick a ball in rugby.[3] Nor is it necessary that the picture be at right angles to the ground: we might be looking at a window of the Leaning Tower of Pisa, or a realtor might be showing off the advantages of a skylight in a sloping attic roof.
Solution by elementary geometry
There is a unique circle passing through the top and bottom of the painting and tangent to the eye-level line. By elementary geometry, if the viewer's position were to move along the circle, the angle subtended by the painting would remain constant. All positions on the eye-level line except the point of tangency are outside of the circle, and therefore the angle subtended by the painting from those points is smaller.
By Euclid's Elements III.36 (alternatively the power-of-a-point theorem), the distance from the wall to the point of tangency is the geometric mean of the heights of the top and bottom of the painting. This means, in turn, that if we reflect the bottom of the picture in the line at eye-level and draw the circle with the segment between the top of the picture and this reflected point as diameter, the circle intersects the line at eye-level in the required position (by Elements II.14).
Solution by calculus
In the present day, this problem is widely known because it appears as an exercise in many first-year calculus textbooks (for example that of Stewart [4]).
Let
a = the height of the bottom of the painting above eye level;
b = the height of the top of the painting above eye level;
x = the viewer's distance from the wall;
α = the angle of elevation of the bottom of the painting, seen from the viewer's position;
β = the angle of elevation of the top of the painting, seen from the viewer's position.
The angle we seek to maximize is β − α. The tangent of the angle increases as the angle increases; therefore it suffices to maximize
$\tan(\beta -\alpha )={\frac {\tan \beta -\tan \alpha }{1+\tan \beta \tan \alpha }}={\frac {{\frac {b}{x}}-{\frac {a}{x}}}{1+{\frac {b}{x}}\cdot {\frac {a}{x}}}}=(b-a){\frac {x}{x^{2}+ab}}.$
Since b − a is a positive constant, we only need to maximize the fraction that follows it. Differentiating, we get
${d \over dx}\left({\frac {x}{x^{2}+ab}}\right)={\frac {ab-x^{2}}{(x^{2}+ab)^{2}}}\qquad {\begin{cases}{}>0&{\text{if }}0\leq x<{\sqrt {ab\,{}}},\\{}=0&{\text{if }}x={\sqrt {ab\,{}}},\\{}<0&{\text{if }}x>{\sqrt {ab\,{}}}.\end{cases}}$
Therefore the angle increases as x goes from 0 to √ab and decreases as x increases from √ab. The angle is therefore as large as possible precisely when x = √ab, the geometric mean of a and b.
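The result is easy to check numerically; the following sketch (with illustrative heights a = 1 and b = 3, not taken from the text) maximizes the subtended angle over a grid of distances and compares the maximizer with √ab.

```python
# Numerical check of Regiomontanus' problem: the optimal viewing distance
# should be the geometric mean of the two heights (a and b are illustrative).
import numpy as np

a, b = 1.0, 3.0                              # bottom and top heights above eye level
x = np.linspace(0.01, 10.0, 100_000)         # candidate viewing distances
angle = np.arctan(b / x) - np.arctan(a / x)  # subtended angle beta - alpha

print(x[np.argmax(angle)])   # ~1.732
print(np.sqrt(a * b))        # sqrt(ab) = 1.732...
```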
Solution by algebra
We have seen that it suffices to maximize
${\frac {x}{x^{2}+ab}}.$
This is equivalent to minimizing the reciprocal:
${\frac {x^{2}+ab}{x}}=x+{\frac {ab}{x}}.$
Observe that this last quantity is equal to
$\left({\sqrt {x}}-{\sqrt {\frac {ab}{x}}}\,\right)^{2}+2{\sqrt {ab\,{}}}.$
(Click "show" at right to see the algebraic details or "hide" to hide them.)
Recall that
$(u-v)^{2}=u^{2}-2uv+v^{2}.$
Thus when we have $u^{2}+v^{2}$, we can insert the middle term $-2uv$ (compensating by adding $2uv$ back) to obtain a perfect square. We have
$x+{\frac {ab}{x}}.$
If we regard $x$ as $u^{2}$ and $ab/x$ as $v^{2}$, then $u={\sqrt {x}}$ and $v={\sqrt {ab/x}}$, and so
$2uv=2{\sqrt {x}}{\sqrt {\frac {ab}{x}}}=2{\sqrt {ab\,{}}}.$
Thus we have
${\begin{aligned}x+{\frac {ab}{x}}&=u^{2}+v^{2}=\underbrace {\left(u^{2}-2uv+v^{2}\right)} _{\text{a perfect square}}+2uv\\&=(u-v)^{2}+2uv=\left({\sqrt {x}}-{\sqrt {\frac {ab}{x}}}\,\right)^{2}+2{\sqrt {ab\,{}}}.\end{aligned}}$
This is as small as possible precisely when the square is 0, and that happens when x = √ab. Alternatively, we might cite this as an instance of the inequality between the arithmetic and geometric means.
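The completed-square identity can also be verified symbolically; here is a minimal sketch using SymPy (the symbol names are my own).

```python
# Verify that x + ab/x = (sqrt(x) - sqrt(ab/x))^2 + 2*sqrt(ab) for x, a, b > 0.
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
lhs = x + a*b/x
rhs = (sp.sqrt(x) - sp.sqrt(a*b/x))**2 + 2*sp.sqrt(a*b)
print(sp.simplify(lhs - rhs))  # prints 0
```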
References
1. Heinrich Dörrie, 100 Great Problems of Elementary Mathematics: Their History and Solution, Dover, 1965, pp. 369–370
2. Eli Maor, Trigonometric Delights, Princeton University Press, 2002, pages 46–48
3. Jones, Troy; Jackson, Steven (2001), "Rugby and Mathematics: A Surprising Link among Geometry, the Conics, and Calculus" (PDF), Mathematics Teacher, 94 (8): 649–654, doi:10.5951/MT.94.8.0649.
4. James Stewart, Calculus: Early Transcendentals, Fifth Edition, Brooks/Cole, 2003, page 340, exercise 58
Domain of holomorphy
In mathematics, in the theory of functions of several complex variables, a domain of holomorphy is a domain which is maximal in the sense that there exists a holomorphic function on this domain which cannot be extended to a bigger domain.
Formally, an open set $\Omega $ in the n-dimensional complex space ${\mathbb {C} }^{n}$ is called a domain of holomorphy if there do not exist non-empty open sets $U\subset \Omega $ and $V\subset {\mathbb {C} }^{n}$, where $V$ is connected, $V\not \subset \Omega $ and $U\subset \Omega \cap V$, such that for every holomorphic function $f$ on $\Omega $ there exists a holomorphic function $g$ on $V$ with $f=g$ on $U$.
In the $n=1$ case, every open set is a domain of holomorphy: we can define a holomorphic function with zeros accumulating everywhere on the boundary of the domain, which must then be a natural boundary for any domain of definition of its reciprocal. For $n\geq 2$ this is no longer true, as follows from Hartogs' lemma.
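A standard one-variable illustration: the open unit disc $\Delta =\{z\in \mathbb {C} :|z|<1\}$ is a domain of holomorphy, since the lacunary series $f(z)=\sum _{k=0}^{\infty }z^{2^{k}}$ is holomorphic on $\Delta $ and, by the Ostrowski–Hadamard gap theorem, has the unit circle as a natural boundary, so it extends holomorphically to no larger open set.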
Equivalent conditions
For a domain $\Omega $ the following conditions are equivalent:
1. $\Omega $ is a domain of holomorphy
2. $\Omega $ is holomorphically convex
3. $\Omega $ is pseudoconvex
4. $\Omega $ is Levi convex: for every sequence $S_{n}\subseteq \Omega $ of analytic compact surfaces such that $S_{n}\rightarrow S,\ \partial S_{n}\rightarrow \Gamma $ for some set $\Gamma $, we have $S\subseteq \Omega $ ($\partial \Omega $ cannot be "touched from inside" by a sequence of analytic surfaces)
5. $\Omega $ has the local Levi property: for every point $x\in \partial \Omega $ there exist a neighbourhood $U$ of $x$ and a function $f$ holomorphic on $U\cap \Omega $ such that $f$ cannot be extended to any neighbourhood of $x$
The implications $1\Leftrightarrow 2$, $3\Leftrightarrow 4$, $1\Rightarrow 4$, and $3\Rightarrow 5$ are standard results (for $1\Rightarrow 3$, see Oka's lemma). The main difficulty lies in proving $5\Rightarrow 1$, i.e. constructing a global holomorphic function which admits no extension out of non-extendable functions defined only locally. This is called the Levi problem (after E. E. Levi); it was first solved by Kiyoshi Oka, and later by Lars Hörmander using methods from functional analysis and partial differential equations (a consequence of the solution of the ${\bar {\partial }}$-problem).
Properties
• If $\Omega _{1},\dots ,\Omega _{n}$ are domains of holomorphy, then their intersection $\Omega =\bigcap _{j=1}^{n}\Omega _{j}$ is also a domain of holomorphy.
• If $\Omega _{1}\subseteq \Omega _{2}\subseteq \dots $ is an ascending sequence of domains of holomorphy, then their union $\Omega =\bigcup _{n=1}^{\infty }\Omega _{n}$ is also a domain of holomorphy (see the Behnke–Stein theorem).
• If $\Omega _{1}$ and $\Omega _{2}$ are domains of holomorphy, then $\Omega _{1}\times \Omega _{2}$ is a domain of holomorphy.
• The first Cousin problem is always solvable in a domain of holomorphy; this is also true, with additional topological assumptions, for the second Cousin problem.
See also
• Behnke–Stein theorem
• Levi pseudoconvex
• solution of the Levi problem
• Stein manifold
References
• Steven G. Krantz. Function Theory of Several Complex Variables, AMS Chelsea Publishing, Providence, Rhode Island, 1992.
• Boris Vladimirovich Shabat, Introduction to Complex Analysis, AMS, 1992
This article incorporates material from Domain of holomorphy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Regression analysis
In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome' or 'response' variable, or a 'label' in machine learning parlance) and one or more independent variables (often called 'predictors', 'covariates', 'explanatory variables' or 'features'). The most common form of regression analysis is linear regression, in which one finds the line (or a more complex linear combination) that most closely fits the data according to a specific mathematical criterion. For example, the method of ordinary least squares computes the unique line (or hyperplane) that minimizes the sum of squared differences between the true data and that line (or hyperplane). For specific mathematical reasons (see linear regression), this allows the researcher to estimate the conditional expectation (or population average value) of the dependent variable when the independent variables take on a given set of values. Less common forms of regression use slightly different procedures to estimate alternative location parameters (e.g., quantile regression or Necessary Condition Analysis[1]) or estimate the conditional expectation across a broader collection of non-linear models (e.g., nonparametric regression).
Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Second, in some situations regression analysis can be used to infer causal relationships between the independent and dependent variables. Importantly, regressions by themselves only reveal relationships between a dependent variable and a collection of independent variables in a fixed dataset. To use regressions for prediction or to infer causal relationships, respectively, a researcher must carefully justify why existing relationships have predictive power for a new context or why a relationship between two variables has a causal interpretation. The latter is especially important when researchers hope to estimate causal relationships using observational data.[2][3]
History
The earliest form of regression was the method of least squares, which was published by Legendre in 1805,[4] and by Gauss in 1809.[5] Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the Sun (mostly comets, but also later the then newly discovered minor planets). Gauss published a further development of the theory of least squares in 1821,[6] including a version of the Gauss–Markov theorem.
The term "regression" was coined by Francis Galton in the 19th century to describe a biological phenomenon. The phenomenon was that the heights of descendants of tall ancestors tend to regress down towards a normal average (a phenomenon also known as regression toward the mean).[7][8] For Galton, regression had only this biological meaning,[9][10] but his work was later extended by Udny Yule and Karl Pearson to a more general statistical context.[11][12] In the work of Yule and Pearson, the joint distribution of the response and explanatory variables is assumed to be Gaussian. This assumption was weakened by R.A. Fisher in his works of 1922 and 1925.[13][14][15] Fisher assumed that the conditional distribution of the response variable is Gaussian, but the joint distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821.
In the 1950s and 1960s, economists used electromechanical desk calculators to calculate regressions. Before 1970, it sometimes took up to 24 hours to receive the result from one regression.[16]
Regression methods continue to be an area of active research. In recent decades, new methods have been developed for robust regression, regression involving correlated responses such as time series and growth curves, regression in which the predictor (independent variable) or response variables are curves, images, graphs, or other complex data objects, regression methods accommodating various types of missing data, nonparametric regression, Bayesian methods for regression, regression in which the predictor variables are measured with error, regression with more predictor variables than observations, and causal inference with regression.
Regression model
In practice, researchers first select a model they would like to estimate and then use their chosen method (e.g., ordinary least squares) to estimate the parameters of that model. Regression models involve the following components:
• The unknown parameters, often denoted as a scalar or vector $\beta $.
• The independent variables, which are observed in data and are often denoted as a vector $X_{i}$ (where $i$ denotes a row of data).
• The dependent variable, which is observed in data and often denoted using the scalar $Y_{i}$.
• The error terms, which are not directly observed in data and are often denoted using the scalar $e_{i}$.
In various fields of application, different terminologies are used in place of dependent and independent variables.
Most regression models propose that $Y_{i}$ is a function (regression function) of $X_{i}$ and $\beta $, with $e_{i}$ representing an additive error term that may stand in for un-modeled determinants of $Y_{i}$ or random statistical noise:
$Y_{i}=f(X_{i},\beta )+e_{i}$
The researchers' goal is to estimate the function $f(X_{i},\beta )$ that most closely fits the data. To carry out regression analysis, the form of the function $f$ must be specified. Sometimes the form of this function is based on knowledge about the relationship between $Y_{i}$ and $X_{i}$ that does not rely on the data. If no such knowledge is available, a flexible or convenient form for $f$ is chosen. For example, a simple univariate regression may propose $f(X_{i},\beta )=\beta _{0}+\beta _{1}X_{i}$, suggesting that the researcher believes $Y_{i}=\beta _{0}+\beta _{1}X_{i}+e_{i}$ to be a reasonable approximation for the statistical process generating the data.
Once researchers determine their preferred statistical model, different forms of regression analysis provide tools to estimate the parameters $\beta $. For example, least squares (including its most common variant, ordinary least squares) finds the value of $\beta $ that minimizes the sum of squared errors $\sum _{i}(Y_{i}-f(X_{i},\beta ))^{2}$. A given regression method will ultimately provide an estimate of $\beta $, usually denoted ${\hat {\beta }}$ to distinguish the estimate from the true (unknown) parameter value that generated the data. Using this estimate, the researcher can then use the fitted value ${\hat {Y_{i}}}=f(X_{i},{\hat {\beta }})$ for prediction or to assess the accuracy of the model in explaining the data. Whether the researcher is intrinsically interested in the estimate ${\hat {\beta }}$ or the predicted value ${\hat {Y_{i}}}$ will depend on context and their goals. As described in ordinary least squares, least squares is widely used because the estimated function $f(X_{i},{\hat {\beta }})$ approximates the conditional expectation $E(Y_{i}|X_{i})$.[5] However, alternative variants (e.g., least absolute deviations or quantile regression) are useful when researchers want to model other functions $f(X_{i},\beta )$.
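As a concrete sketch of this workflow, the following Python snippet simulates data from a simple linear model and estimates $\beta $ by ordinary least squares (the data and parameter values are illustrative assumptions, not from the text).

```python
# Minimal OLS sketch: simulate Y_i = b0 + b1*X_i + e_i, then estimate beta.
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 + 0.7 * x + rng.normal(0.0, 1.0, n)   # true beta = (2.0, 0.7)

X = np.column_stack([np.ones(n), x])          # design matrix with intercept column
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_fitted = X @ beta_hat                       # fitted values Y_hat_i
print(beta_hat)                               # estimates close to (2.0, 0.7)
```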
It is important to note that there must be sufficient data to estimate a regression model. For example, suppose that a researcher has access to $N$ rows of data with one dependent and two independent variables: $(Y_{i},X_{1i},X_{2i})$. Suppose further that the researcher wants to estimate a bivariate linear model via least squares: $Y_{i}=\beta _{0}+\beta _{1}X_{1i}+\beta _{2}X_{2i}+e_{i}$. If the researcher only has access to $N=2$ data points, then they could find infinitely many combinations $({\hat {\beta }}_{0},{\hat {\beta }}_{1},{\hat {\beta }}_{2})$ that explain the data equally well: any combination can be chosen that satisfies ${\hat {Y}}_{i}={\hat {\beta }}_{0}+{\hat {\beta }}_{1}X_{1i}+{\hat {\beta }}_{2}X_{2i}$, all of which lead to $\sum _{i}{\hat {e}}_{i}^{2}=\sum _{i}({\hat {Y}}_{i}-({\hat {\beta }}_{0}+{\hat {\beta }}_{1}X_{1i}+{\hat {\beta }}_{2}X_{2i}))^{2}=0$ and are therefore valid solutions that minimize the sum of squared residuals. To understand why there are infinitely many options, note that the system of $N=2$ equations is to be solved for 3 unknowns, which makes the system underdetermined. Alternatively, one can visualize infinitely many 3-dimensional planes that go through $N=2$ fixed points.
More generally, to estimate a least squares model with $k$ distinct parameters, one must have $N\geq k$ distinct data points. If $N>k$, then there does not generally exist a set of parameters that will perfectly fit the data. The quantity $N-k$ appears often in regression analysis, and is referred to as the degrees of freedom in the model. Moreover, to estimate a least squares model, the independent variables $(X_{1i},X_{2i},...,X_{ki})$ must be linearly independent: one must not be able to reconstruct any of the independent variables by adding and multiplying the remaining independent variables. As discussed in ordinary least squares, this condition ensures that $X^{T}X$ is an invertible matrix and therefore that a unique solution ${\hat {\beta }}$ exists.
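The identifiability condition can be seen directly in code: with $N=2$ rows and three parameters, $X^{T}X$ is singular, so no unique least-squares solution exists. A small sketch, with arbitrary numbers:

```python
# With N = 2 data points and k = 3 parameters, X^T X cannot be inverted.
import numpy as np

X = np.array([[1.0, 2.0, 3.0],   # row i holds (1, X_1i, X_2i), intercept included
              [1.0, 5.0, 1.0]])
print(np.linalg.matrix_rank(X.T @ X))  # 2 < 3, so (X^T X) is singular
```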
Underlying assumptions
By itself, a regression is simply a calculation using the data. In order to interpret the output of regression as a meaningful statistical quantity that measures real-world relationships, researchers often rely on a number of classical assumptions. These assumptions often include:
• The sample is representative of the population at large.
• The independent variables are measured with no error.
• Deviations from the model have an expected value of zero, conditional on covariates: $E(e_{i}|X_{i})=0$
• The variance of the residuals $e_{i}$ is constant across observations (homoscedasticity).
• The residuals $e_{i}$ are uncorrelated with one another. Mathematically, the variance–covariance matrix of the errors is diagonal.
A handful of conditions are sufficient for the least-squares estimator to possess desirable properties: in particular, the Gauss–Markov assumptions imply that the parameter estimates will be unbiased, consistent, and efficient in the class of linear unbiased estimators. Practitioners have developed a variety of methods to maintain some or all of these desirable properties in real-world settings, because these classical assumptions are unlikely to hold exactly. For example, modeling errors-in-variables can lead to reasonable estimates when the independent variables are measured with error. Heteroscedasticity-consistent standard errors allow the variance of $e_{i}$ to change across values of $X_{i}$. Correlated errors that exist within subsets of the data or follow specific patterns can be handled using clustered standard errors, geographically weighted regression, or Newey–West standard errors, among other techniques. When rows of data correspond to locations in space, the choice of how to model $e_{i}$ within geographic units can have important consequences.[17][18] The subfield of econometrics is largely focused on developing techniques that allow researchers to make reasonable conclusions in real-world settings, where the classical assumptions do not hold exactly.
Linear regression
Main article: Linear regression
See simple linear regression for a derivation of these formulas and a numerical example
In linear regression, the model specification is that the dependent variable, $y_{i}$, is a linear combination of the parameters (but need not be linear in the independent variables). For example, in simple linear regression for modeling $n$ data points there is one independent variable, $x_{i}$, and two parameters, $\beta _{0}$ and $\beta _{1}$:
straight line: $y_{i}=\beta _{0}+\beta _{1}x_{i}+\varepsilon _{i},\quad i=1,\dots ,n.\!$
In multiple linear regression, there are several independent variables or functions of independent variables.
Adding a term in $x_{i}^{2}$ to the preceding regression gives:
parabola: $y_{i}=\beta _{0}+\beta _{1}x_{i}+\beta _{2}x_{i}^{2}+\varepsilon _{i},\ i=1,\dots ,n.\!$
This is still linear regression; although the expression on the right hand side is quadratic in the independent variable $x_{i}$, it is linear in the parameters $\beta _{0}$, $\beta _{1}$ and $\beta _{2}.$
In both cases, $\varepsilon _{i}$ is an error term and the subscript $i$ indexes a particular observation.
Returning our attention to the straight line case: Given a random sample from the population, we estimate the population parameters and obtain the sample linear regression model:
${\widehat {y}}_{i}={\widehat {\beta }}_{0}+{\widehat {\beta }}_{1}x_{i}.$
The residual, $e_{i}=y_{i}-{\widehat {y}}_{i}$, is the difference between the value of the dependent variable predicted by the model, ${\widehat {y}}_{i}$, and the true value of the dependent variable, $y_{i}$. One method of estimation is ordinary least squares. This method obtains parameter estimates that minimize the sum of squared residuals, SSR:
$SSR=\sum _{i=1}^{n}e_{i}^{2}$
Minimization of this function results in a set of normal equations, a set of simultaneous linear equations in the parameters, which are solved to yield the parameter estimators, ${\widehat {\beta }}_{0},{\widehat {\beta }}_{1}$.
In the case of simple regression, the formulas for the least squares estimates are
${\widehat {\beta }}_{1}={\frac {\sum (x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{\sum (x_{i}-{\bar {x}})^{2}}}$
${\widehat {\beta }}_{0}={\bar {y}}-{\widehat {\beta }}_{1}{\bar {x}}$
where ${\bar {x}}$ is the mean (average) of the $x$ values and ${\bar {y}}$ is the mean of the $y$ values.
Under the assumption that the population error term has a constant variance, the estimate of that variance is given by:
${\hat {\sigma }}_{\varepsilon }^{2}={\frac {SSR}{n-2}}$
This is called the mean square error (MSE) of the regression. The denominator is the sample size reduced by the number of model parameters estimated from the same data, $(n-p)$ for $p$ regressors or $(n-p-1)$ if an intercept is used.[19] In this case, $p=1$ so the denominator is $n-2$.
The standard errors of the parameter estimates are given by
${\hat {\sigma }}_{\beta _{1}}={\hat {\sigma }}_{\varepsilon }{\sqrt {\frac {1}{\sum (x_{i}-{\bar {x}})^{2}}}}$
${\hat {\sigma }}_{\beta _{0}}={\hat {\sigma }}_{\varepsilon }{\sqrt {{\frac {1}{n}}+{\frac {{\bar {x}}^{2}}{\sum (x_{i}-{\bar {x}})^{2}}}}}={\hat {\sigma }}_{\beta _{1}}{\sqrt {\frac {\sum x_{i}^{2}}{n}}}.$
Under the further assumption that the population error term is normally distributed, the researcher can use these estimated standard errors to create confidence intervals and conduct hypothesis tests about the population parameters.
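The closed-form estimates, the MSE, and the standard errors above translate directly into code; here is a minimal sketch with made-up data.

```python
# Simple linear regression "by hand", following the formulas above.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 2.9, 4.2, 4.9, 6.1])   # illustrative data
n = len(x)

sxx = np.sum((x - x.mean())**2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx   # slope estimate
b0 = y.mean() - b1 * x.mean()                        # intercept estimate

resid = y - (b0 + b1 * x)
sigma2 = np.sum(resid**2) / (n - 2)                  # MSE (p = 1, with intercept)

se_b1 = np.sqrt(sigma2 / sxx)                        # standard errors
se_b0 = np.sqrt(sigma2 * (1.0/n + x.mean()**2 / sxx))
print(b0, b1, se_b0, se_b1)
```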
General linear model
For a derivation, see linear least squares
For a numerical example, see linear regression
In the more general multiple regression model, there are $p$ independent variables:
$y_{i}=\beta _{1}x_{i1}+\beta _{2}x_{i2}+\cdots +\beta _{p}x_{ip}+\varepsilon _{i},\,$
where $x_{ij}$ is the $i$-th observation on the $j$-th independent variable. If the first independent variable takes the value 1 for all $i$, $x_{i1}=1$, then $\beta _{1}$ is called the regression intercept.
The least squares parameter estimates are obtained from $p$ normal equations. The residual can be written as
$\varepsilon _{i}=y_{i}-{\hat {\beta }}_{1}x_{i1}-\cdots -{\hat {\beta }}_{p}x_{ip}.$
The normal equations are
$\sum _{i=1}^{n}\sum _{k=1}^{p}x_{ij}x_{ik}{\hat {\beta }}_{k}=\sum _{i=1}^{n}x_{ij}y_{i},\ j=1,\dots ,p.\,$
In matrix notation, the normal equations are written as
$\mathbf {(X^{\top }X){\hat {\boldsymbol {\beta }}}={}X^{\top }Y} ,\,$
where the $ij$ element of $\mathbf {X} $ is $x_{ij}$, the $i$ element of the column vector $Y$ is $y_{i}$, and the $j$ element of ${\hat {\boldsymbol {\beta }}}$ is ${\hat {\beta }}_{j}$. Thus $\mathbf {X} $ is $n\times p$, $Y$ is $n\times 1$, and ${\hat {\boldsymbol {\beta }}}$ is $p\times 1$. The solution is
$\mathbf {{\hat {\boldsymbol {\beta }}}=(X^{\top }X)^{-1}X^{\top }Y} .\,$
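In code, the normal equations can be solved directly (a sketch with simulated data; in practice numerically stabler routines such as QR-based least squares are preferred):

```python
# Solve (X^T X) beta_hat = X^T y for the general linear model.
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.1, n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)   # close to the true (1.0, -2.0, 0.5)
```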
Diagnostics
Once a regression model has been constructed, it may be important to confirm the goodness of fit of the model and the statistical significance of the estimated parameters. Commonly used checks of goodness of fit include the R-squared, analyses of the pattern of residuals and hypothesis testing. Statistical significance can be checked by an F-test of the overall fit, followed by t-tests of individual parameters.
Interpretations of these diagnostic tests rest heavily on the model's assumptions. Although examination of the residuals can be used to invalidate a model, the results of a t-test or F-test are sometimes more difficult to interpret if the model's assumptions are violated. For example, if the error term does not have a normal distribution, then in small samples the estimated parameters will not follow normal distributions, which complicates inference. With relatively large samples, however, a central limit theorem can be invoked such that hypothesis testing may proceed using asymptotic approximations.
Limited dependent variables
Limited dependent variables, which are response variables that are categorical variables or are variables constrained to fall only in a certain range, often arise in econometrics.
The response variable may be non-continuous ("limited" to lie on some subset of the real line). For binary (zero or one) variables, if analysis proceeds with least-squares linear regression, the model is called the linear probability model. Nonlinear models for binary dependent variables include the probit and logit model. The multivariate probit model is a standard method of estimating a joint relationship between several binary dependent variables and some independent variables. For categorical variables with more than two values there is the multinomial logit. For ordinal variables with more than two values, there are the ordered logit and ordered probit models. Censored regression models may be used when the dependent variable is only sometimes observed, and Heckman correction type models may be used when the sample is not randomly selected from the population of interest. An alternative to such procedures is linear regression based on polychoric correlation (or polyserial correlations) between the categorical variables. Such procedures differ in the assumptions made about the distribution of the variables in the population. If the variable is a positive count of occurrences of an event, then count models like Poisson regression or the negative binomial model may be used.
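As one illustration of a nonlinear model for a binary dependent variable, the following sketch fits a logit model with scikit-learn (the data are simulated and the coefficient values are illustrative assumptions).

```python
# Minimal logit (logistic regression) sketch for a binary (0/1) response.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                             # two predictors
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5*X[:, 0] - X[:, 1])))  # true success probabilities
y = rng.binomial(1, p)                                    # binary outcomes

model = LogisticRegression().fit(X, y)
print(model.intercept_, model.coef_)   # estimated logit coefficients
```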
Nonlinear regression
When the model function is not linear in the parameters, the sum of squares must be minimized by an iterative procedure. This introduces many complications which are summarized in Differences between linear and non-linear least squares.
Prediction (interpolation and extrapolation)
Further information: Predicted response and Prediction interval
Regression models predict a value of the Y variable given known values of the X variables. Prediction within the range of values in the dataset used for model-fitting is known informally as interpolation. Prediction outside this range of the data is known as extrapolation. Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values.
It is generally advised that when performing extrapolation, one should accompany the estimated value of the dependent variable with a prediction interval that represents the uncertainty. Such intervals tend to expand rapidly as the values of the independent variable(s) move outside the range covered by the observed data.
For these reasons among others, some argue that it may be unwise to undertake extrapolation.[21]
However, this does not cover the full set of modeling errors that may be made: in particular, the assumption of a particular form for the relation between Y and X. A properly conducted regression analysis will include an assessment of how well the assumed form is matched by the observed data, but it can only do so within the range of values of the independent variables actually available. This means that any extrapolation is particularly reliant on the assumptions being made about the structural form of the regression relationship. Best-practice advice here is that a linear-in-variables and linear-in-parameters relationship should not be chosen simply for computational convenience, but that all available knowledge should be deployed in constructing a regression model. If this knowledge includes the fact that the dependent variable cannot go outside a certain range of values, this can be made use of in selecting the model – even if the observed dataset has no values particularly near such bounds. The implications of this step of choosing an appropriate functional form for the regression can be great when extrapolation is considered. At a minimum, it can ensure that any extrapolation arising from a fitted model is "realistic" (or in accord with what is known).
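To make the widening of prediction intervals concrete, here is a sketch for simple linear regression under the classical normal-error assumptions (the data and the extrapolation point are illustrative).

```python
# 95% prediction interval at a point x0 outside the observed x-range.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])
n = len(x)

sxx = np.sum((x - x.mean())**2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b0 = y.mean() - b1 * x.mean()
s = np.sqrt(np.sum((y - (b0 + b1*x))**2) / (n - 2))   # residual standard error

x0 = 10.0                                             # well outside [1, 6]
se = s * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / sxx)  # grows with |x0 - mean(x)|
t = stats.t.ppf(0.975, df=n - 2)
yhat = b0 + b1 * x0
print(yhat - t*se, yhat + t*se)
```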
Power and sample size calculations
There are no generally agreed methods for relating the number of observations versus the number of independent variables in the model. One method conjectured by Good and Hardin is $N=m^{n}$, where $N$ is the sample size, $n$ is the number of independent variables and $m$ is the number of observations needed to reach the desired precision if the model had only one independent variable.[22] For example, a researcher is building a linear regression model using a dataset that contains 1000 patients ($N$). If the researcher decides that five observations are needed to precisely define a straight line ($m$), then the maximum number of independent variables the model can support is 4, because
${\frac {\log 1000}{\log 5}}=4.29.$
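Inverting the rule $N=m^{n}$ gives $n=\log N/\log m$, which the example computes as follows (a two-line sketch).

```python
import math
N, m = 1000, 5
print(math.log(N) / math.log(m))  # 4.29..., so at most 4 independent variables
```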
Other methods
Although the parameters of a regression model are usually estimated using the method of least squares, other methods which have been used include:
• Bayesian methods, e.g. Bayesian linear regression
• Percentage regression, for situations where reducing percentage errors is deemed more appropriate.[23]
• Least absolute deviations, which is more robust in the presence of outliers, leading to quantile regression
• Nonparametric regression, which requires a large number of observations and is computationally intensive
• Scenario optimization, leading to interval predictor models
• Distance metric learning, in which a meaningful distance metric for the given input space is learned by search.[24]
Software
All major statistical software packages perform least squares regression analysis and inference. Simple linear regression and multiple regression using least squares can be done in some spreadsheet applications and on some calculators. While many statistical software packages can perform various types of nonparametric and robust regression, these methods are less standardized. Different software packages implement different methods, and a method with a given name may be implemented differently in different packages. Specialized regression software has been developed for use in fields such as survey analysis and neuroimaging.
See also
• Anscombe's quartet
• Curve fitting
• Estimation theory
• Forecasting
• Fraction of variance unexplained
• Function approximation
• Generalized linear model
• Kriging (a linear least squares estimation algorithm)
• Local regression
• Modifiable areal unit problem
• Multivariate adaptive regression spline
• Multivariate normal distribution
• Pearson correlation coefficient
• Quasi-variance
• Prediction interval
• Regression validation
• Robust regression
• Segmented regression
• Signal processing
• Stepwise regression
• Taxicab geometry
• Linear trend estimation
References
1. Necessary Condition Analysis
2. David A. Freedman (27 April 2009). Statistical Models: Theory and Practice. Cambridge University Press. ISBN 978-1-139-47731-4.
3. R. Dennis Cook; Sanford Weisberg Criticism and Influence Analysis in Regression, Sociological Methodology, Vol. 13. (1982), pp. 313–361
4. A.M. Legendre. Nouvelles méthodes pour la détermination des orbites des comètes, Firmin Didot, Paris, 1805. “Sur la Méthode des moindres quarrés” appears as an appendix.
5. Chapter 1 of: Angrist, J. D., & Pischke, J. S. (2008). Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press.
6. C.F. Gauss. Theoria combinationis observationum erroribus minimis obnoxiae. (1821/1823)
7. Mogull, Robert G. (2004). Second-Semester Applied Statistics. Kendall/Hunt Publishing Company. p. 59. ISBN 978-0-7575-1181-3.
8. Galton, Francis (1989). "Kinship and Correlation (reprinted 1989)". Statistical Science. 4 (2): 80–86. doi:10.1214/ss/1177012581. JSTOR 2245330.
9. Francis Galton. "Typical laws of heredity", Nature 15 (1877), 492–495, 512–514, 532–533. (Galton uses the term "reversion" in this paper, which discusses the size of peas.)
10. Francis Galton. Presidential address, Section H, Anthropology. (1885) (Galton uses the term "regression" in this paper, which discusses the height of humans.)
11. Yule, G. Udny (1897). "On the Theory of Correlation". Journal of the Royal Statistical Society. 60 (4): 812–54. doi:10.2307/2979746. JSTOR 2979746.
12. Pearson, Karl; Yule, G.U.; Blanchard, Norman; Lee, Alice (1903). "The Law of Ancestral Heredity". Biometrika. 2 (2): 211–236. doi:10.1093/biomet/2.2.211. JSTOR 2331683.
13. Fisher, R.A. (1922). "The goodness of fit of regression formulae, and the distribution of regression coefficients". Journal of the Royal Statistical Society. 85 (4): 597–612. doi:10.2307/2341124. JSTOR 2341124. PMC 1084801.
14. Ronald A. Fisher (1954). Statistical Methods for Research Workers (Twelfth ed.). Edinburgh: Oliver and Boyd. ISBN 978-0-05-002170-5.
15. Aldrich, John (2005). "Fisher and Regression". Statistical Science. 20 (4): 401–417. doi:10.1214/088342305000000331. JSTOR 20061201.
16. Rodney Ramcharan. Regressions: Why Are Economists Obsessed with Them? March 2006. Accessed 2011-12-03.
17. Fotheringham, A. Stewart; Brunsdon, Chris; Charlton, Martin (2002). Geographically weighted regression: the analysis of spatially varying relationships (Reprint ed.). Chichester, England: John Wiley. ISBN 978-0-471-49616-8.
18. Fotheringham, AS; Wong, DWS (1 January 1991). "The modifiable areal unit problem in multivariate statistical analysis". Environment and Planning A. 23 (7): 1025–1044. doi:10.1068/a231025. S2CID 153979055.
19. Steel, R.G.D, and Torrie, J. H., Principles and Procedures of Statistics with Special Reference to the Biological Sciences., McGraw Hill, 1960, page 288.
20. Rouaud, Mathieu (2013). Probability, Statistics and Estimation (PDF). p. 60.
21. Chiang, C.L, (2003) Statistical methods of analysis, World Scientific. ISBN 981-238-310-7 - page 274 section 9.7.4 "interpolation vs extrapolation"
22. Good, P. I.; Hardin, J. W. (2009). Common Errors in Statistics (And How to Avoid Them) (3rd ed.). Hoboken, New Jersey: Wiley. p. 211. ISBN 978-0-470-45798-6.
23. Tofallis, C. (2009). "Least Squares Percentage Regression". Journal of Modern Applied Statistical Methods. 7: 526–534. doi:10.2139/ssrn.1406472. SSRN 1406472.
24. YangJing Long (2009). "Human age estimation by metric learning for regression problems" (PDF). Proc. International Conference on Computer Analysis of Images and Patterns: 74–82. Archived from the original (PDF) on 2010-01-08.
Further reading
• William H. Kruskal and Judith M. Tanur, ed. (1978), "Linear Hypotheses," International Encyclopedia of Statistics. Free Press, v. 1,
Evan J. Williams, "I. Regression," pp. 523–41.
Julian C. Stanley, "II. Analysis of Variance," pp. 541–554.
• Lindley, D.V. (1987). "Regression and correlation analysis," New Palgrave: A Dictionary of Economics, v. 4, pp. 120–23.
• Birkes, David and Dodge, Y., Alternative Methods of Regression. ISBN 0-471-56881-3
• Chatfield, C. (1993) "Calculating Interval Forecasts," Journal of Business and Economic Statistics, 11. pp. 121–135.
• Draper, N.R.; Smith, H. (1998). Applied Regression Analysis (3rd ed.). John Wiley. ISBN 978-0-471-17082-2.
• Fox, J. (1997). Applied Regression Analysis, Linear Models and Related Methods. Sage
• Hardle, W., Applied Nonparametric Regression (1990), ISBN 0-521-42950-1
• Meade, Nigel; Islam, Towhidul (1995). "Prediction intervals for growth curve forecasts". Journal of Forecasting. 14 (5): 413–430. doi:10.1002/for.3980140502.
• A. Sen, M. Srivastava, Regression Analysis — Theory, Methods, and Applications, Springer-Verlag, Berlin, 2011 (4th printing).
• T. Strutz: Data Fitting and Uncertainty (A practical introduction to weighted least squares and beyond). Vieweg+Teubner, ISBN 978-3-8348-1022-9.
• Stulp, Freek, and Olivier Sigaud. Many Regression Algorithms, One Unified Model: A Review. Neural Networks, vol. 69, Sept. 2015, pp. 60–79. https://doi.org/10.1016/j.neunet.2015.05.005.
• Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons.
• Chicco, Davide; Warrens, Matthijs J.; Jurman, Giuseppe (2021). "The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation". PeerJ Computer Science. 7 (e623): e623. doi:10.7717/peerj-cs.623. PMC 8279135. PMID 34307865.
External links
• "Regression analysis", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Earliest Uses: Regression – basic history and references
• What is multiple regression used for? – Multiple regression
• Regression of Weakly Correlated Data – how linear regression mistakes can appear when Y-range is much smaller than X-range
Differentiable computing
General
• Differentiable programming
• Information geometry
• Statistical manifold
• Automatic differentiation
• Neuromorphic engineering
• Pattern recognition
• Tensor calculus
• Computational learning theory
• Inductive bias
Concepts
• Gradient descent
• SGD
• Clustering
• Regression
• Overfitting
• Hallucination
• Adversary
• Attention
• Convolution
• Loss functions
• Backpropagation
• Normalization (Batchnorm)
• Activation
• Softmax
• Sigmoid
• Rectifier
• Regularization
• Datasets
• Augmentation
• Diffusion
• Autoregression
Applications
• Machine learning
• In-context learning
• Artificial neural network
• Deep learning
• Scientific computing
• Artificial Intelligence
• Language model
• Large language model
Hardware
• IPU
• TPU
• VPU
• Memristor
• SpiNNaker
Software libraries
• TensorFlow
• PyTorch
• Keras
• Theano
• JAX
• Flux.jl
Implementations
Audio–visual
• AlexNet
• WaveNet
• Human image synthesis
• HWR
• OCR
• Speech synthesis
• Speech recognition
• Facial recognition
• AlphaFold
• DALL-E
• Midjourney
• Stable Diffusion
Verbal
• Word2vec
• Seq2seq
• BERT
• LaMDA
• Bard
• NMT
• Project Debater
• IBM Watson
• GPT-2
• GPT-3
• ChatGPT
• GPT-4
• GPT-J
• Chinchilla AI
• PaLM
• BLOOM
• LLaMA
Decisional
• AlphaGo
• AlphaZero
• Q-learning
• SARSA
• OpenAI Five
• Self-driving car
• MuZero
• Action selection
• Auto-GPT
• Robot control
People
• Yoshua Bengio
• Alex Graves
• Ian Goodfellow
• Stephen Grossberg
• Demis Hassabis
• Geoffrey Hinton
• Yann LeCun
• Fei-Fei Li
• Andrew Ng
• Jürgen Schmidhuber
• David Silver
Organizations
• Anthropic
• EleutherAI
• Google DeepMind
• Hugging Face
• OpenAI
• Meta AI
• Mila
• MIT CSAIL
Architectures
• Neural Turing machine
• Differentiable neural computer
• Transformer
• Recurrent neural network (RNN)
• Long short-term memory (LSTM)
• Gated recurrent unit (GRU)
• Echo state network
• Multilayer perceptron (MLP)
• Convolutional neural network
• Residual network
• Autoencoder
• Variational autoencoder (VAE)
• Generative adversarial network (GAN)
• Graph neural network
• Portals
• Computer programming
• Technology
• Categories
• Artificial neural networks
• Machine learning
Least squares and regression analysis
Computational statistics
• Least squares
• Linear least squares
• Non-linear least squares
• Iteratively reweighted least squares
Correlation and dependence
• Pearson product-moment correlation
• Rank correlation (Spearman's rho
• Kendall's tau)
• Partial correlation
• Confounding variable
Regression analysis
• Ordinary least squares
• Partial least squares
• Total least squares
• Ridge regression
Regression as a
statistical model
Linear regression
• Simple linear regression
• Ordinary least squares
• Generalized least squares
• Weighted least squares
• General linear model
Predictor structure
• Polynomial regression
• Growth curve (statistics)
• Segmented regression
• Local regression
Non-standard
• Nonlinear regression
• Nonparametric
• Semiparametric
• Robust
• Quantile
• Isotonic
Non-normal errors
• Generalized linear model
• Binomial
• Poisson
• Logistic
Decomposition of variance
• Analysis of variance
• Analysis of covariance
• Multivariate AOV
Model exploration
• Stepwise regression
• Model selection
• Mallows's Cp
• AIC
• BIC
• Model specification
• Regression validation
Background
• Mean and predicted response
• Gauss–Markov theorem
• Errors and residuals
• Goodness of fit
• Studentized residual
• Minimum mean-square error
• Frisch–Waugh–Lovell theorem
Design of experiments
• Response surface methodology
• Optimal design
• Bayesian design
Numerical approximation
• Numerical analysis
• Approximation theory
• Numerical integration
• Gaussian quadrature
• Orthogonal polynomials
• Chebyshev polynomials
• Chebyshev nodes
Applications
• Curve fitting
• Calibration curve
• Numerical smoothing and differentiation
• System identification
• Moving least squares
|
Wikipedia
|
Regression toward the mean
In statistics, regression toward the mean (also called reversion to the mean, and reversion to mediocrity) is the phenomenon where if one sample of a random variable is extreme, the next sampling of the same random variable is likely to be closer to its mean.[1][2][3] Furthermore, when many random variables are sampled and the most extreme results are intentionally picked out, it refers to the fact that (in many cases) a second sampling of these picked-out variables will result in "less extreme" results, closer to the initial mean of all of the variables.
Mathematically, the strength of this "regression" effect depends on whether all of the random variables are drawn from the same distribution, or whether there are genuine differences in the underlying distributions for each random variable. In the first case, the "regression" effect is statistically likely to occur, but in the second case, it may occur less strongly or not at all.
Regression toward the mean is thus a useful concept to consider when designing any scientific experiment, data analysis, or test that intentionally selects the "most extreme" events: it indicates that follow-up checks may be needed to avoid jumping to false conclusions about these events, which may be "genuine" extreme events, a completely meaningless selection due to statistical noise, or a mix of the two.[4]
Conceptual examples
Simple example: students taking a test
Consider a class of students taking a 100-item true/false test on a subject. Suppose that all students choose randomly on all questions. Then, each student's score would be a realization of one of a set of independent and identically distributed random variables, with an expected mean of 50. Naturally, some students will score substantially above 50 and some substantially below 50 just by chance. If one selects only the top scoring 10% of the students and gives them a second test on which they again choose randomly on all items, the mean score would again be expected to be close to 50. Thus the mean of these students would "regress" all the way back to the mean of all students who took the original test. No matter what a student scores on the original test, the best prediction of their score on the second test is 50.
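This pure-chance case is easy to check by simulation. The sketch below is a minimal illustration; the class size of 10,000, the random seed, and the top-10% cutoff are assumed parameters chosen for illustration, not part of the original example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 10_000, 100

# Pure guessing: each score is Binomial(100, 0.5), with expected mean 50.
test1 = rng.binomial(n_items, 0.5, size=n_students)
test2 = rng.binomial(n_items, 0.5, size=n_students)  # independent retest

top = test1 >= np.quantile(test1, 0.9)   # roughly the top-scoring 10%
print(test1[top].mean())  # well above 50, since these students were selected for luck
print(test2[top].mean())  # regresses all the way back to about 50
```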
If choosing answers to the test questions was not random – i.e. if there were no luck (good or bad) or random guessing involved in the answers supplied by the students – then all students would be expected to score the same on the second test as they scored on the original test, and there would be no regression toward the mean.
Most realistic situations fall between these two extremes: for example, one might consider exam scores as a combination of skill and luck. In this case, the subset of students scoring above average would be composed of those who were skilled and had not especially bad luck, together with those who were unskilled, but were extremely lucky. On a retest of this subset, the unskilled will be unlikely to repeat their lucky break, while the skilled will have a second chance to have bad luck. Hence, those who did well previously are unlikely to do quite as well in the second test, since the luck that contributed to their original scores cannot be counted on to repeat.
The following is an example of this second kind of regression toward the mean. A class of students takes two editions of the same test on two successive days. It has frequently been observed that the worst performers on the first day will tend to improve their scores on the second day, and the best performers on the first day will tend to do worse on the second day. The phenomenon occurs because student scores are determined in part by underlying ability and in part by chance. For the first test, some will be lucky, and score more than their ability, and some will be unlucky and score less than their ability. Some of the lucky students on the first test will be lucky again on the second test, but more of them will have (for them) average or below average scores. Therefore, a student who was lucky and over-performed their ability on the first test is more likely to have a worse score on the second test than a better score. Similarly, students who unluckily score less than their ability on the first test will tend to see their scores increase on the second test. The larger the influence of luck in producing an extreme event, the less likely the luck will repeat itself in multiple events.
Other examples
If your favourite sports team won the championship last year, what does that mean for their chances for winning next season? To the extent this result is due to skill (the team is in good condition, with a top coach, etc.), their win signals that it is more likely they will win again next year. But the greater the extent this is due to luck (other teams embroiled in a drug scandal, favourable draw, draft picks turned out to be productive, etc.), the less likely it is they will win again next year.[5]
If a business organisation has a highly profitable quarter, despite the underlying reasons for its performance being unchanged, it is likely to do less well the next quarter.[6]
Baseball players who hit well in their rookie season are likely to do worse their second; the "sophomore slump". Similarly, regression toward the mean is an explanation for the Sports Illustrated cover jinx — periods of exceptional performance which results in a cover feature are likely to be followed by periods of more mediocre performance, giving the impression that appearing on the cover causes an athlete's decline.[7]
History
Discovery
The concept of regression comes from genetics and was popularized by Sir Francis Galton during the late 19th century with the publication of Regression towards mediocrity in hereditary stature.[8] Galton observed that extreme characteristics (e.g., height) in parents are not passed on completely to their offspring. Rather, the characteristics in the offspring regress toward a mediocre point (a point which has since been identified as the mean). By measuring the heights of hundreds of people, he was able to quantify regression to the mean, and estimate the size of the effect. Galton wrote that, "the average regression of the offspring is a constant fraction of their respective mid-parental deviations". This means that the difference between a child and its parents for some characteristic is proportional to its parents' deviation from typical people in the population. If its parents are each two inches taller than the averages for men and women, then, on average, the offspring will be shorter than its parents by some factor (which, today, we would call one minus the regression coefficient) times two inches. For height, Galton estimated this coefficient to be about 2/3: the height of an individual will measure around a midpoint that is two thirds of the parents' deviation from the population average.
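As a worked reading of Galton's estimate (the two-inch figure below is illustrative only, not Galton's own data):

```latex
% Parents each 2 inches above their respective sex averages, so the
% mid-parental deviation is 2 inches; Galton's coefficient is about 2/3.
\[
  \text{expected offspring deviation} \approx \tfrac{2}{3}\times 2\,\text{in} \approx 1.33\,\text{in},
  \qquad
  \text{expected shortfall} = \bigl(1-\tfrac{2}{3}\bigr)\times 2\,\text{in} \approx 0.67\,\text{in}.
\]
```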
Galton also published these results[9] using the simpler example of pellets falling through a Galton board to form a normal distribution centred directly under their entrance point. These pellets might then be released down into a second gallery corresponding to a second measurement. Galton then asked the reverse question: "From where did these pellets come?"
The answer was not 'on average directly above'. Rather it was 'on average, more towards the middle', for the simple reason that there were more pellets above it towards the middle that could wander left than there were in the left extreme that could wander to the right, inwards.[10]
Evolving usage of the term
Galton coined the term "regression" to describe an observable fact in the inheritance of multi-factorial quantitative genetic traits: namely that traits of the offspring of parents who lie at the tails of the distribution often tend to lie closer to the centre, the mean, of the distribution. He quantified this trend, and in doing so invented linear regression analysis, thus laying the groundwork for much of modern statistical modelling. Since then, the term "regression" has been used in other contexts, and it may be used by modern statisticians to describe phenomena such as sampling bias which have little to do with Galton's original observations in the field of genetics.
Galton's explanation for the regression phenomenon he observed in biology was stated as follows: "A child inherits partly from his parents, partly from his ancestors. Speaking generally, the further his genealogy goes back, the more numerous and varied will his ancestry become, until they cease to differ from any equally numerous sample taken at haphazard from the race at large."[8] Galton's statement requires some clarification in light of knowledge of genetics: Children receive genetic material from their parents, but hereditary information (e.g. values of inherited traits) from earlier ancestors can be passed through their parents (and may not have been expressed in their parents). The mean for the trait may be nonrandom and determined by selection pressure, but the distribution of values around the mean reflects a normal statistical distribution.
The population-genetic phenomenon studied by Galton is a special case of "regression to the mean"; the term is often used to describe many statistical phenomena in which data exhibit a normal distribution around a mean.
Importance
Regression toward the mean is a significant consideration in the design of experiments.
Take a hypothetical example of 1,000 individuals of a similar age who were examined and scored on the risk of experiencing a heart attack. Statistics could be used to measure the success of an intervention on the 50 who were rated at the greatest risk, as measured by a test with a degree of uncertainty. The intervention could be a change in diet, exercise, or a drug treatment. Even if the interventions are worthless, the test group would be expected to show an improvement on their next physical exam, because of regression toward the mean. The best way to combat this effect is to divide the group randomly into a treatment group that receives the treatment, and a group that does not. The treatment would then be judged effective only if the treatment group improves more than the untreated group.
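A sketch of this hypothetical screening example follows; the latent "true risk" model and all numeric values (risk scale, noise level, seed) are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
true_risk = rng.normal(50, 10, size=1_000)      # assumed stable latent risk
exam1 = true_risk + rng.normal(0, 10, 1_000)    # noisy first examination
exam2 = true_risk + rng.normal(0, 10, 1_000)    # noisy second examination

worst = np.argsort(exam1)[-50:]   # the 50 rated at greatest risk on exam 1
print(exam1[worst].mean())        # extreme, partly because of unlucky noise
print(exam2[worst].mean())        # noticeably lower, with no intervention at all
```

Comparing the selected group against a randomly assigned untreated control group, as the text recommends, removes this spurious "improvement" from the estimated treatment effect.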
Alternatively, a group of disadvantaged children could be tested to identify the ones with most college potential. The top 1% could be identified and supplied with special enrichment courses, tutoring, counseling and computers. Even if the program is effective, their average scores may well be lower when the test is repeated a year later. However, in these circumstances it may be considered unethical to have a control group of disadvantaged children whose special needs are ignored. A mathematical calculation for shrinkage can adjust for this effect, although it will not be as reliable as the control group method (see also Stein's example).
The effect can also be exploited for general inference and estimation. The hottest place in the country today is more likely to be cooler tomorrow than hotter still. The best performing mutual fund over the last three years is more likely to see relative performance decline than improve over the next three years. The most successful Hollywood actor of this year is more likely to see a lower gross than a higher gross for his or her next movie. The baseball player with the highest batting average by the All-Star break is more likely to have a lower average than a higher average over the second half of the season.
Misunderstandings
The concept of regression toward the mean can be misused very easily.
In the student test example above, it was assumed implicitly that what was being measured did not change between the two measurements. Suppose, however, that the course was pass/fail and students were required to score above 70 on both tests to pass. Then the students who scored under 70 the first time would have no incentive to do well, and might score worse on average the second time. The students just over 70, on the other hand, would have a strong incentive to study and concentrate while taking the test. In that case one might see movement away from 70, scores below it getting lower and scores above it getting higher. It is possible for changes between the measurement times to augment, offset or reverse the statistical tendency to regress toward the mean.
Statistical regression toward the mean is not a causal phenomenon. A student with the worst score on the test on the first day will not necessarily increase his score substantially on the second day due to the effect. On average, the worst scorers improve, but that is only true because the worst scorers are more likely to have been unlucky than lucky. To the extent that a score is determined randomly, or that a score has random variation or error, as opposed to being determined by the student's academic ability or being a "true value", the phenomenon will have an effect. A classic mistake in this regard was in education. The students that received praise for good work were noticed to do more poorly on the next measure, and the students who were punished for poor work were noticed to do better on the next measure. The educators decided to stop praising and keep punishing on this basis.[11] Such a decision was a mistake, because regression toward the mean is not based on cause and effect, but rather on random error in a natural distribution around a mean.
Although extreme individual measurements regress toward the mean, the second sample of measurements will be no closer to the mean than the first. Consider the students again. Suppose the tendency of extreme individuals is to regress 10% of the way toward the mean of 80, so a student who scored 100 the first day is expected to score 98 the second day, and a student who scored 70 the first day is expected to score 71 the second day. Those expectations are closer to the mean than the first day scores. But the second day scores will vary around their expectations; some will be higher and some will be lower. For extreme individuals, we expect the second score to be closer to the mean than the first score, but for all individuals, we expect the distribution of distances from the mean to be the same on both sets of measurements.
Related to the point above, regression toward the mean works equally well in both directions. We expect the student with the highest test score on the second day to have done worse on the first day. And if we compare the best student on the first day to the best student on the second day, regardless of whether it is the same individual or not, there is no tendency to regress toward the mean going in either direction. We expect the best scores on both days to be equally far from the mean.
Regression fallacies
Many phenomena tend to be attributed to the wrong causes when regression to the mean is not taken into account.
An extreme example is Horace Secrist's 1933 book The Triumph of Mediocrity in Business, in which the statistics professor collected mountains of data to prove that the profit rates of competitive businesses tend toward the average over time. In fact, there is no such effect; the variability of profit rates is almost constant over time. Secrist had only described the common regression toward the mean. One exasperated reviewer, Harold Hotelling, likened the book to "proving the multiplication table by arranging elephants in rows and columns, and then doing the same for numerous other kinds of animals".[12]
The calculation and interpretation of "improvement scores" on standardized educational tests in Massachusetts probably provides another example of the regression fallacy. In 1999, schools were given improvement goals. For each school, the Department of Education tabulated the difference in the average score achieved by students in 1999 and in 2000. It was quickly noted that most of the worst-performing schools had met their goals, which the Department of Education took as confirmation of the soundness of their policies. However, it was also noted that many of the supposedly best schools in the Commonwealth, such as Brookline High School (with 18 National Merit Scholarship finalists) were declared to have failed. As in many cases involving statistics and public policy, the issue is debated, but "improvement scores" were not announced in subsequent years and the findings appear to be a case of regression to the mean.
The psychologist Daniel Kahneman, winner of the 2002 Nobel Memorial Prize in Economic Sciences, pointed out that regression to the mean might explain why rebukes can seem to improve performance, while praise seems to backfire.[13]
I had the most satisfying Eureka experience of my career while attempting to teach flight instructors that praise is more effective than punishment for promoting skill-learning. When I had finished my enthusiastic speech, one of the most seasoned instructors in the audience raised his hand and made his own short speech, which began by conceding that positive reinforcement might be good for the birds, but went on to deny that it was optimal for flight cadets. He said, "On many occasions I have praised flight cadets for clean execution of some aerobatic maneuver, and in general when they try it again, they do worse. On the other hand, I have often screamed at cadets for bad execution, and in general they do better the next time. So please don't tell us that reinforcement works and punishment does not, because the opposite is the case." This was a joyous moment, in which I understood an important truth about the world: because we tend to reward others when they do well and punish them when they do badly, and because there is regression to the mean, it is part of the human condition that we are statistically punished for rewarding others and rewarded for punishing them. I immediately arranged a demonstration in which each participant tossed two coins at a target behind his back, without any feedback. We measured the distances from the target and could see that those who had done best the first time had mostly deteriorated on their second try, and vice versa. But I knew that this demonstration would not undo the effects of lifelong exposure to a perverse contingency.
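Kahneman's two-coin demonstration is easy to reproduce in simulation. In the sketch below, accuracy is pure noise, an assumption matching the spirit, though not the exact protocol, of his demonstration; the sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000
throw1 = np.abs(rng.normal(0, 1, n))   # distance from target, first attempt
throw2 = np.abs(rng.normal(0, 1, n))   # distance from target, second attempt

best = np.argsort(throw1)[: n // 10]   # the closest 10% on the first attempt
print((throw2[best] > throw1[best]).mean())   # a large majority do worse next time
```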
The regression fallacy is also explained in Rolf Dobelli's The Art of Thinking Clearly.
UK law enforcement policies have encouraged the visible siting of static or mobile speed cameras at accident blackspots. This policy was justified by a perception that there is a corresponding reduction in serious road traffic accidents after a camera is set up. However, statisticians have pointed out that, although there is a net benefit in lives saved, failure to take into account the effects of regression to the mean results in the beneficial effects being overstated.[14][15][16]
Statistical analysts have long recognized the effect of regression to the mean in sports; they even have a special name for it: the "sophomore slump". For example, Carmelo Anthony of the NBA's Denver Nuggets had an outstanding rookie season in 2004. It was so outstanding that he could not be expected to repeat it: in 2005, Anthony's numbers had dropped from his rookie season. The reasons for the "sophomore slump" abound, as sports rely on adjustment and counter-adjustment, but luck-based excellence as a rookie is as good a reason as any. Regression to the mean in sports performance may also explain the apparent "Sports Illustrated cover jinx" and the "Madden Curse". John Hollinger has an alternative name for the phenomenon of regression to the mean: the "fluke rule", while Bill James calls it the "Plexiglas Principle".
Because popular lore has focused on regression toward the mean as an account of declining performance of athletes from one season to the next, it has usually overlooked the fact that such regression can also account for improved performance. For example, if one looks at the batting average of Major League Baseball players in one season, those whose batting average was above the league mean tend to regress downward toward the mean the following year, while those whose batting average was below the mean tend to progress upward toward the mean the following year.[17]
Other statistical phenomena
Regression toward the mean simply says that, following an extreme random event, the next random event is likely to be less extreme. In no sense does the future event "compensate for" or "even out" the previous event, though this is assumed in the gambler's fallacy (and the variant law of averages). Similarly, the law of large numbers states that in the long term, the average will tend toward the expected value, but makes no statement about individual trials. For example, following a run of 10 heads on a flip of a fair coin (a rare, extreme event), regression to the mean states that the next run of heads will likely be less than 10, while the law of large numbers states that in the long term, this event will likely average out, and the average fraction of heads will tend to 1/2. By contrast, the gambler's fallacy incorrectly assumes that the coin is now "due" for a run of tails to balance out.
The opposite effect is regression to the tail, resulting from a distribution with non-vanishing probability density toward infinity.[18]
Definition for simple linear regression of data points
This is the definition of regression toward the mean that closely follows Sir Francis Galton's original usage.[8]
Suppose there are n data points {yi, xi}, where i = 1, 2, ..., n. We want to find the equation of the regression line, i.e. the straight line
$y=\alpha +\beta x,\,$
which would provide a "best" fit for the data points. (Note that a straight line may not be the appropriate regression curve for the given data points.) Here the "best" will be understood as in the least-squares approach: such a line that minimizes the sum of squared residuals of the linear regression model. In other words, numbers α and β solve the following minimization problem:
Find $\min _{\alpha ,\,\beta }Q(\alpha ,\beta )$, where $Q(\alpha ,\beta )=\sum _{i=1}^{n}{\hat {\varepsilon }}_{i}^{\,2}=\sum _{i=1}^{n}(y_{i}-\alpha -\beta x_{i})^{2}\ $
Using calculus it can be shown that the values of α and β that minimize the objective function Q are
${\begin{aligned}&{\hat {\beta }}={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}={\frac {{\overline {xy}}-{\bar {x}}{\bar {y}}}{{\overline {x^{2}}}-{\bar {x}}^{2}}}={\frac {\operatorname {Cov} [x,y]}{\operatorname {Var} [x]}}=r_{xy}{\frac {s_{y}}{s_{x}}},\\&{\hat {\alpha }}={\bar {y}}-{\hat {\beta }}\,{\bar {x}},\end{aligned}}$
where rxy is the sample correlation coefficient between x and y, sx is the standard deviation of x, and sy is correspondingly the standard deviation of y. Horizontal bar over a variable means the sample average of that variable. For example: ${\overline {xy}}={\tfrac {1}{n}}\textstyle \sum _{i=1}^{n}x_{i}y_{i}\ .$
Substituting the above expressions for ${\hat {\alpha }}$ and ${\hat {\beta }}$ into $y=\alpha +\beta x,\,$ yields fitted values
${\hat {y}}={\hat {\alpha }}+{\hat {\beta }}x,\,$
which yields
${\frac {{\hat {y}}-{\bar {y}}}{s_{y}}}=r_{xy}{\frac {x-{\bar {x}}}{s_{x}}}$
This shows the role rxy plays in the regression line of standardized data points.
If −1 < rxy < 1, then we say that the data points exhibit regression toward the mean. In other words, if linear regression is the appropriate model for a set of data points whose sample correlation coefficient is not perfect, then there is regression toward the mean. The predicted (or fitted) standardized value of y is closer to its mean than the standardized value of x is to its mean.
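A numerical check of this standardized form follows; the synthetic sample and the assumed slope of 0.6 are illustrative, and any imperfectly correlated data would do.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=5_000)
y = 0.6 * x + rng.normal(size=5_000)   # correlated, but not perfectly

r = np.corrcoef(x, y)[0, 1]
beta = r * y.std() / x.std()           # slope formula from the text
alpha = y.mean() - beta * x.mean()
y_hat = alpha + beta * x

z_fit = (y_hat - y.mean()) / y.std()   # standardized fitted values
z_x = (x - x.mean()) / x.std()         # standardized predictor
print(np.allclose(z_fit, r * z_x))     # True: fitted z-scores are shrunk by r
```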
Definitions for bivariate distribution with identical marginal distributions
Restrictive definition
Let X1, X2 be random variables with identical marginal distributions with mean μ. In this formalization, the bivariate distribution of X1 and X2 is said to exhibit regression toward the mean if, for every number c > μ, we have
μ ≤ E[X2 | X1 = c] < c,
with the reverse inequalities holding for c < μ.[19][20]
The following is an informal description of the above definition. Consider a population of widgets. Each widget has two numbers, X1 and X2 (say, its left span (X1) and right span (X2)). Suppose that the probability distributions of X1 and X2 in the population are identical, and that the means of X1 and X2 are both μ. We now take a random widget from the population, and denote its X1 value by c. (Note that c may be greater than, equal to, or smaller than μ.) We have no access to the value of this widget's X2 yet. Let d denote the expected value of X2 of this particular widget. (i.e. Let d denote the average value of X2 of all widgets in the population with X1 = c.) If the following condition is true:
Whatever the value c is, d lies between μ and c (i.e. d is closer to μ than c is),
then we say that X1 and X2 show regression toward the mean.
This definition accords closely with the current common usage, evolved from Galton's original usage, of the term "regression toward the mean". It is "restrictive" in the sense that not every bivariate distribution with identical marginal distributions exhibits regression toward the mean (under this definition).[20]
Theorem
If a pair (X, Y) of random variables follows a bivariate normal distribution, then the conditional mean E(Y|X) is a linear function of X. The correlation coefficient r between X and Y, along with the marginal means and variances of X and Y, determines this linear relationship:
${\frac {E(Y\mid X)-E[Y]}{\sigma _{y}}}=r{\frac {X-E[X]}{\sigma _{x}}},$
where E[X] and E[Y] are the expected values of X and Y, respectively, and σx and σy are the standard deviations of X and Y, respectively.
Hence the conditional expected value of Y, given that X is t standard deviations above its mean (and that includes the case where it's below its mean, when t < 0), is rt standard deviations above the mean of Y. Since |r| ≤ 1, Y is no farther from the mean than X is, as measured in the number of standard deviations.[21]
Hence, if 0 ≤ r < 1, then (X, Y) shows regression toward the mean (by this definition).
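The theorem can be checked empirically. The sketch below constructs a bivariate normal pair with an assumed correlation r = 0.5 and conditions on X being about two standard deviations above its mean.

```python
import numpy as np

rng = np.random.default_rng(4)
r, n = 0.5, 2_000_000
x = rng.normal(size=n)
y = r * x + np.sqrt(1 - r**2) * rng.normal(size=n)   # Corr(x, y) = r

t = 2.0
band = np.abs(x - t) < 0.05    # X roughly t standard deviations above its mean
print(y[band].mean())          # close to r * t = 1.0, as the theorem predicts
```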
General definition
The following definition of reversion toward the mean has been proposed by Samuels as an alternative to the more restrictive definition of regression toward the mean above.[19]
Let X1, X2 be random variables with identical marginal distributions with mean μ. In this formalization, the bivariate distribution of X1 and X2 is said to exhibit reversion toward the mean if, for every number c, we have
μ ≤ E[X2 | X1 > c] < E[X1 | X1 > c], and
μ ≥ E[X2 | X1 < c] > E[X1 | X1 < c]
This definition is "general" in the sense that every bivariate distribution with identical marginal distributions exhibits reversion toward the mean, provided some weak criteria are satisfied (non-degeneracy and weak positive dependence as described in Samuels's paper[19]).
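Samuels's tail inequalities can be checked on the same kind of synthetic pair; the normal construction and positive dependence below are assumptions that satisfy the stated weak criteria.

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 1_000_000, 0.5
x1 = rng.normal(size=n)
x2 = r * x1 + np.sqrt(1 - r**2) * rng.normal(size=n)   # identical marginals, mean 0

c = 1.0
tail = x1 > c
# Expect: mu = 0 <= E[X2 | X1 > c] < E[X1 | X1 > c]
print(x2[tail].mean(), x1[tail].mean())
```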
Alternative definition in financial usage
Jeremy Siegel uses the term "return to the mean" to describe a financial time series in which "returns can be very unstable in the short run but very stable in the long run." More quantitatively, it is one in which the standard deviation of average annual returns declines faster than the inverse of the square root of the holding period, implying that the process is not a random walk (for which the standard deviation of the average would decline exactly as the inverse square root), but that periods of lower returns are systematically followed by compensating periods of higher returns, as is the case in many seasonal businesses, for example.[22]
See also
• Hardy–Weinberg principle
• Internal validity
• Law of large numbers
• Martingale (probability theory)
• Regression dilution
• Selection bias
References
1. Everitt, B. S. (August 12, 2002). The Cambridge Dictionary of Statistics (2 ed.). Cambridge University Press. ISBN 978-0521810999.
2. Upton, Graham; Cook, Ian (21 August 2008). Oxford Dictionary of Statistics. Oxford University Press. ISBN 978-0-19-954145-4.
3. Stigler, Stephen M (1997). "Regression toward the mean, historically considered". Statistical Methods in Medical Research. 6 (2): 103–114. doi:10.1191/096228097676361431. PMID 9261910.
4. Chiolero, A; Paradis, G; Rich, B; Hanley, JA (2013). "Assessing the Relationship between the Baseline Value of a Continuous Variable and Subsequent Change Over Time". Frontiers in Public Health. 1: 29. doi:10.3389/fpubh.2013.00029. PMC 3854983. PMID 24350198.
5. "A statistical review of 'Thinking, Fast and Slow' by Daniel Kahneman". Burns Statistics. November 11, 2013. Retrieved January 1, 2022.
6. "What is regression to the mean? Definition and examples". conceptually.org. Retrieved October 25, 2017.
7. Goldacre, Ben (April 4, 2009). Bad Science. Fourth Estate. p. 39. ISBN 978-0007284870.
8. Galton, F. (1886). "Regression towards mediocrity in hereditary stature". The Journal of the Anthropological Institute of Great Britain and Ireland. 15: 246–263. doi:10.2307/2841583. JSTOR 2841583.
9. Galton, Francis (1889). Natural Inheritance. London: Macmillan.
10. Stigler, Stephen M. (June 17, 2010). "Darwin, Galton and the Statistical Enlightenment". Journal of the Royal Statistical Society, Series A. 173 (3): 469–482, 477. doi:10.1111/j.1467-985X.2010.00643.x. ISSN 1467-985X. S2CID 53333238.
11. Kahneman, Daniel (October 1, 2011). Thinking Fast and Slow. Farrar, Straus and Giroux. ISBN 978-0-374-27563-1.
12. Secrist, Horace; Hotelling, Harold; Rorty, M. C.; Gini, Corrado; King, Wilford I. (June 1934). "Open Letters". Journal of the American Statistical Association. 29 (186): 196–205. doi:10.1080/01621459.1934.10502711. JSTOR 2278295.
13. Defulio, Anthony (2012). "Quotation: Kahneman on Contingencies". Journal of the Experimental Analysis of Behavior. 97 (2): 182. doi:10.1901/jeab.2012.97-182. PMC 3292229.
14. Webster, Ben (December 16, 2005). "Speed camera benefits overrated". The Times. Retrieved January 1, 2022.(subscription required)
15. Mountain, L. (2006). "Safety cameras: Stealth tax or life-savers?". Significance. 3 (3): 111–113. doi:10.1111/j.1740-9713.2006.00179.x.
16. Maher, Mike; Mountain, Linda (2009). "The sensitivity of estimates of regression to the mean". Accident Analysis & Prevention. 41 (4): 861–8. doi:10.1016/j.aap.2009.04.020. PMID 19540977.
17. For an illustration see Nate Silver, "Randomness: Catch the Fever!", Baseball Prospectus, May 14, 2003.
18. Flyvbjerg, Bent (5 October 2020). "The law of regression to the tail: How to survive Covid-19, the climate crisis, and other disasters". Environmental Science & Policy. 114: 614–618. doi:10.1016/j.envsci.2020.08.013. ISSN 1462-9011. PMC 7533687. PMID 33041651.
19. Samuels, Myra L. (November 1991). "Statistical Reversion Toward the Mean: More Universal than Regression Toward the Mean". The American Statistician. 45 (4): 344–346. doi:10.2307/2684474. JSTOR 2684474.
20. Schmittlein, David C (August 1989). "Surprising Inferences from unsurprising Observations: Do Conditional Expectations really regress to the Mean?". The American Statistician. 43 (3): 176–183. doi:10.2307/2685070. JSTOR 2685070.
21. Chernick, Michael R.; Friis, Robert H. (March 17, 2003). Introductory Biostatistics for the Health Sciences. Wiley-Interscience. p. 272. ISBN 978-0-471-41137-6.
22. Siegel, Jeremy (November 27, 2007). Stocks for the Long Run (4th ed.). McGraw–Hill. pp. 13, 28–29. ISBN 978-0071494700.
Further reading
• J.M. Bland and D.G. Altman (June 1994). "Statistic Notes: Regression towards the mean". British Medical Journal. 308 (6942): 1499. doi:10.1136/bmj.308.6942.1499. PMC 2540330. PMID 8019287. Article, including a diagram of Galton's original data.
• Edward J. Dudewicz & Satya N. Mishra (1988). "Section 14.1: Estimation of regression parameters; Linear models". Modern Mathematical Statistics. John Wiley & Sons. ISBN 978-0-471-81472-6.
• Francis Galton (1886). "Regression towards mediocrity in hereditary stature" (PDF). The Journal of the Anthropological Institute of Great Britain and Ireland. 15: 246–263. doi:10.2307/2841583. JSTOR 2841583.
• Donald F. Morrison (1967). "Chapter 3: Samples from the Multivariate Normal Population". Multivariate Statistical Methods. McGraw-Hill. ISBN 978-0-534-38778-5.
• Stephen M. Stigler (1999). "Chapter 9". Statistics on the Table. Harvard University Press.
• Myra L. Samuels (November 1991). "Statistical Reversion Toward the Mean: More Universal than Regression Toward the Mean". The American Statistician. 45 (4): 344–346. doi:10.2307/2684474. JSTOR 2684474.
• Stephen Senn. Regression: A New Mode for an Old Meaning, The American Statistician, Vol 44, No 2 (May 1990), pp. 181–183.
• Regression Toward the Mean and the Study of Change, Psychological Bulletin
• A non-mathematical explanation of regression toward the mean.
• A simulation of regression toward the mean.
• Amanda Wachsmuth, Leland Wilkinson, Gerard E. Dallal. Galton's Bend: An Undiscovered Nonlinearity in Galton's Family Stature Regression Data and a Likely Explanation Based on Pearson and Lee's Stature Data (A modern look at Galton's analysis.)
• Massachusetts standardized test scores, interpreted by a statistician as an example of regression: see discussion in sci.stat.edu and its continuation.
• Gary Smith, What the Luck: The Surprising Role of Chance in Our Everyday Lives, New York: Overlook, London: Duckworth. ISBN 978-1-4683-1375-8.
External links
• Media related to Regression toward the mean at Wikimedia Commons
|
Wikipedia
|
Control variates
The control variates method is a variance reduction technique used in Monte Carlo methods. It exploits information about the errors in estimates of known quantities to reduce the error of an estimate of an unknown quantity.[1] [2][3]
Underlying principle
Let the unknown parameter of interest be $\mu $, and assume we have a statistic $m$ such that the expected value of m is μ: $\mathbb {E} \left[m\right]=\mu $, i.e. m is an unbiased estimator for μ. Suppose we calculate another statistic $t$ such that $\mathbb {E} \left[t\right]=\tau $ is a known value. Then
$m^{\star }=m+c\left(t-\tau \right)\,$
is also an unbiased estimator for $\mu $ for any choice of the coefficient $c$. The variance of the resulting estimator $m^{\star }$ is
${\textrm {Var}}\left(m^{\star }\right)={\textrm {Var}}\left(m\right)+c^{2}\,{\textrm {Var}}\left(t\right)+2c\,{\textrm {Cov}}\left(m,t\right).$
By differentiating the above expression with respect to $c$, it can be shown that choosing the optimal coefficient
$c^{\star }=-{\frac {{\textrm {Cov}}\left(m,t\right)}{{\textrm {Var}}\left(t\right)}}$
minimizes the variance of $m^{\star }$, and that with this choice,
${\begin{aligned}{\textrm {Var}}\left(m^{\star }\right)&={\textrm {Var}}\left(m\right)-{\frac {\left[{\textrm {Cov}}\left(m,t\right)\right]^{2}}{{\textrm {Var}}\left(t\right)}}\\&=\left(1-\rho _{m,t}^{2}\right){\textrm {Var}}\left(m\right)\end{aligned}}$
where
$\rho _{m,t}={\textrm {Corr}}\left(m,t\right)\,$
is the correlation coefficient of $m$ and $t$. The greater the value of $\vert \rho _{m,t}\vert $, the greater the variance reduction achieved.
In the case that ${\textrm {Cov}}\left(m,t\right)$, ${\textrm {Var}}\left(t\right)$, and/or $\rho _{m,t}\;$ are unknown, they can be estimated across the Monte Carlo replicates. This is equivalent to solving a certain least squares system; therefore this technique is also known as regression sampling.
When the expectation of the control variable, $\mathbb {E} \left[t\right]=\tau $, is not known analytically, it is still possible to increase the precision in estimating $\mu $ (for a given fixed simulation budget), provided that two conditions are met: (1) evaluating $t$ is significantly cheaper than computing $m$; (2) the magnitude of the correlation coefficient $|\rho _{m,t}|$ is close to unity.[3]
Example
We would like to estimate
$I=\int _{0}^{1}{\frac {1}{1+x}}\,\mathrm {d} x$
using Monte Carlo integration. This integral is the expected value of $f(U)$, where
$f(U)={\frac {1}{1+U}}$
and U follows a uniform distribution [0, 1]. Using a sample of size n denote the points in the sample as $u_{1},\cdots ,u_{n}$. Then the estimate is given by
$I\approx {\frac {1}{n}}\sum _{i}f(u_{i}).$
Now we introduce $g(U)=1+U$ as a control variate with a known expected value $\mathbb {E} \left[g\left(U\right)\right]=\int _{0}^{1}(1+x)\,\mathrm {d} x={\tfrac {3}{2}}$ and combine the two into a new estimate
$I\approx {\frac {1}{n}}\sum _{i}f(u_{i})+c\left({\frac {1}{n}}\sum _{i}g(u_{i})-3/2\right).$
Using $n=1500$ realizations and an estimated optimal coefficient $c^{\star }\approx 0.4773$, we obtain the following results:

                     Estimate    Variance
Classical estimate   0.69475     0.01947
Control variates     0.69295     0.00060
The variance was significantly reduced after using the control variates technique. (The exact result is $I=\ln 2\approx 0.69314718$.)
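A minimal sketch of this example follows, with $c^{\star }$ estimated from the same samples (so exact numbers will vary slightly from the table above depending on the seed):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_500
u = rng.uniform(size=n)
f = 1.0 / (1.0 + u)          # integrand samples, E[f] = ln 2
g = 1.0 + u                  # control variate, with known E[g] = 3/2

c_star = -np.cov(f, g)[0, 1] / np.var(g, ddof=1)   # estimated optimal coefficient
plain = f.mean()
controlled = f.mean() + c_star * (g.mean() - 1.5)

print(plain, controlled, np.log(2))   # both near ln 2; the second is much tighter
```

Because f(u) = 1/(1 + u) is strongly (negatively) correlated with g(u) = 1 + u, the estimated coefficient comes out near 0.477, matching the value quoted above.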
See also
• Antithetic variates
• Importance sampling
Notes
1. Lemieux, C. (2017). "Control Variates". Wiley StatsRef: Statistics Reference Online: 1–8. doi:10.1002/9781118445112.stat07947. ISBN 9781118445112.
2. Glasserman, P. (2004). Monte Carlo Methods in Financial Engineering. New York: Springer. ISBN 0-387-00451-3 (p. 185)
3. Botev, Z.; Ridder, A. (2017). "Variance Reduction". Wiley StatsRef: Statistics Reference Online: 1–6. doi:10.1002/9781118445112.stat07975. ISBN 9781118445112.
References
• Ross, Sheldon M. (2002) Simulation 3rd edition ISBN 978-0-12-598053-1
• Averill M. Law & W. David Kelton (2000), Simulation Modeling and Analysis, 3rd edition. ISBN 0-07-116537-1
• S. P. Meyn (2007) Control Techniques for Complex Networks, Cambridge University Press. ISBN 978-0-521-88441-9. Downloadable draft (Section 11.4: Control variates and shadow functions)
|
Wikipedia
|
Regressive discrete Fourier series
In applied mathematics, the regressive discrete Fourier series (RDFS) is a generalization of the discrete Fourier transform where the Fourier series coefficients are computed in a least squares sense and the period is arbitrary, i.e., not necessarily equal to the length of the data. It was first proposed by Arruda (1992a, 1992b). It can be used to smooth data in one or more dimensions and to compute derivatives from the smoothed curve, surface, or hypersurface.
Technique
One-dimensional regressive discrete Fourier series
The one-dimensional RDFS proposed by Arruda (1992a) can be formulated in a very straightforward way. Given a sampled data vector (signal) $x_{n}=x(t_{n})$, one can write the algebraic expression:
$x_{n}=\sum _{k=-q}^{q}X_{k}e^{\frac {-i2\pi kt_{n}}{T}}+\varepsilon _{n},t_{n}{\text{ arbitrary }},\quad n=1,\dots ,N.\,$
Typically $t_{n}=n\,\Delta t$, but this is not necessary.
The above equation can be written in matrix form as
$x=WX+\varepsilon ,\,$ where $W$ is the $N\times (2q+1)$ matrix with entries $W_{nk}=e^{\frac {-i2\pi kt_{n}}{T}},\ k=-q,\dots ,q$.
The least squares solution of the above linear system of equations can be written as:
${\hat {X}}=(W^{H}W)^{-1}W^{H}x\,$
where $W^{H}$ is the conjugate transpose of $W$, and the smoothed signal is obtained from:
${\hat {x}}=W{\hat {X}}\,$
The first derivative of the smoothed signal ${\hat {x}}$ can be obtained from:
${\frac {d{\hat {x}}}{dt}}(t_{n})=\sum _{k=-q}^{q}{\frac {-i2\pi k}{T}}{\hat {X}}_{k}e^{\frac {-i2\pi kt_{n}}{T}},\quad n=1,\dots ,N.\,$
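A minimal NumPy sketch of these steps follows. The toy signal, noise level, and the arbitrary period T are assumptions for illustration; the least-squares solve plays the role of $(W^{H}W)^{-1}W^{H}x$.

```python
import numpy as np

rng = np.random.default_rng(7)
N, q, T = 200, 10, 2.5                  # samples, harmonics, arbitrary period
t = np.linspace(0.0, 1.0, N)            # record deliberately shorter than T
x = np.sin(2 * np.pi * 2 * t) + 0.2 * rng.normal(size=N)   # noisy signal

k = np.arange(-q, q + 1)
W = np.exp(-2j * np.pi * np.outer(t, k) / T)    # N x (2q+1) basis matrix

# Least-squares Fourier coefficients, equivalent to (W^H W)^{-1} W^H x
X_hat, *_ = np.linalg.lstsq(W, x.astype(complex), rcond=None)
x_smooth = (W @ X_hat).real                         # smoothed signal
dx_dt = (W @ (-2j * np.pi * k / T * X_hat)).real    # analytic first derivative
```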
Two-dimensional regressive discrete Fourier series (RDFS)
The two-dimensional, or bidimensional, RDFS proposed by Arruda (1992b) can also be formulated in a straightforward way. Here the equally spaced data case will be treated for the sake of simplicity. The general non-equally-spaced and arbitrary-grid cases are given in the reference (Arruda, 1992b). Given a sampled data matrix (bidimensional signal) $x_{mn}=x(\xi _{m},\nu _{n}),m=1,\dots ,M;\ n=1,\dots ,N;$ one can write the algebraic expression:
$x_{mn}=\sum _{k=-p}^{p}\sum _{l=-q}^{q}X_{kl}e^{\frac {-i2\pi k\xi _{m}}{L_{\xi }}}e^{\frac {-i2\pi l\nu _{n}}{L_{\nu }}}+\varepsilon _{mn},\quad m=1,\dots ,M;\ n=1,\dots ,N.\,$
The above equation can be written in matrix form for a rectangular grid. For the equally spaced sampling case $\xi _{m}=m\Delta \xi ,\ \nu _{n}=n\Delta \nu \,$ we have:
$x_{mn}=\sum _{k=-p}^{p}\sum _{l=-q}^{q}X_{kl}e^{\frac {-i2\pi km\Delta \xi }{L_{\xi }}}e^{\frac {-i2\pi ln\Delta \nu }{L_{\nu }}}+\epsilon _{mn},\quad m=1,\dots ,M;\ n=1,\dots ,N.\,$
The least squares solution may be shown to be:
${\hat {X}}=(W_{L_{\xi }}^{H}W_{L_{\xi }})^{-1}W_{L_{\xi }}^{H}xW_{L_{\nu }}^{*}(W_{L_{\nu }}W_{L_{\nu }}^{H})^{-1}\,$
and the smoothed bidimensional surface is given by:
${\hat {x}}=W_{L_{\xi }}{\hat {X}}W_{L_{\nu }}^{t}\,$
where the superscript $H$ denotes the conjugate transpose of a matrix, $*$ its complex conjugate, and $t$ its transpose.
Differentiation with respect to $\xi {\text{ and }}\nu $ can be easily implemented analogously to the one-dimensional case (Arruda, 1992b).
Current applications
• Spatially dense data condensation applications: Arruda (1993) applied the RDFS to condense spatially dense measurements made with a laser Doppler vibrometer prior to applying modal analysis parameter estimation methods. More recently, Vanherzeele et al. (2006, 2008a) proposed a generalized and an optimized RDFS for the same kind of application. A review of optical measurement processing using the RDFS was published by Vanherzeele et al. (2009).
• Spatial derivative applications: Batista et al. (2009) applied the RDFS to obtain spatial derivatives of bidimensional measured vibration data to identify material properties from transverse modes of rectangular plates.
• SHM applications: Vanherzeele et al. (2008b) applied a generalized version of the RDFS to tomographic reconstruction.
Software
Recently, a package implementing the one- and two-dimensional RDFS was developed to make the method easier to use in the free and open-source software R:
• An R package for RDFS at GitHub
See also
• Discrete Fourier transform
• Fourier series
References
• Arruda, J.R.F., 1992a: Analysis of non-equally spaced data using a Regressive discrete Fourier series. Journal of Sound and Vibration, 156(3), 571–574.
• Arruda, J.R.F., 1992b: Surface smoothing and partial spatial derivatives using a regressive discrete Fourier series. Mechanical Systems and Signal Processing, 6(1), 41–50.
• Arruda, J.R.F., 1993: Spatial domain modal analysis of lightly-damped structures using laser velocimeters. Journal of Vibration and Acoustics, 115, 225–231.
• Batista, F.B., Albuquerque, E.L., Arruda, J.R.F., Dias Jr., M., 2009: Identification of the bending stiffness of symmetric laminates using regressive discrete Fourier series and finite differences. Journal of Sound and Vibration, 320, 793–807.
• Vanherzeele, J., Guillaume, P., Vanlanduit, S., Verboten, P., 2006: Data reduction using a generalized regressive discrete Fourier series, Journal of Sound and Vibration, 298, 1–11.
• Vanherzeele, J., Vanlanduit, S., Guillaume, P., 2008a: Reducing spatial data using an optimized regressive discrete Fourier series, Journal of Sound and Vibration, 309, 858–867.
• Vanherzeele, J., Longo, R., Vanlanduit, S., Guillaume, P., 2008b: Tomographic reconstruction using a generalized regressive discrete Fourier series, Mechanical Systems and Signal Processing, 22, 1237–1247.
• Vanherzeele, J., Vanlanduit, S., Guillaume, P., 2009: Processing optical measurements using a regressive discrete Fourier series, Optical and lasers in engineering, 47, 461–472.
|
Wikipedia
|
Progressive function
In mathematics, a progressive function ƒ ∈ L2(R) is a function whose Fourier transform is supported by positive frequencies only:
$\mathop {\rm {supp}} {\hat {f}}\subseteq \mathbb {R} _{+}.$
It is called regressive if and only if the time reversed function f(−t) is progressive, or equivalently, if
$\mathop {\rm {supp}} {\hat {f}}\subseteq \mathbb {R} _{-}.$
The complex conjugate of a progressive function is regressive, and vice versa.
The space of progressive functions is sometimes denoted $H_{+}^{2}(R)$, which is known as the Hardy space of the upper half-plane. This is because a progressive function has the Fourier inversion formula
$f(t)=\int _{0}^{\infty }e^{2\pi ist}{\hat {f}}(s)\,ds$
and hence extends to a holomorphic function on the upper half-plane $\{t+iu:t,u\in R,u\geq 0\}$
by the formula
$f(t+iu)=\int _{0}^{\infty }e^{2\pi is(t+iu)}{\hat {f}}(s)\,ds=\int _{0}^{\infty }e^{2\pi ist}e^{-2\pi su}{\hat {f}}(s)\,ds.$
Conversely, every holomorphic function on the upper half-plane which is uniformly square-integrable on every horizontal line will arise in this manner.
Regressive functions are similarly associated with the Hardy space on the lower half-plane $\{t+iu:t,u\in \mathbb {R} ,u\leq 0\}$.
This article incorporates material from progressive function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Regula falsi
In mathematics, the regula falsi, method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations.
As an example, consider problem 26 in the Rhind papyrus, which asks for a solution of (written in modern notation) the equation x + x/4 = 15. This is solved by false position.[1] First, guess that x = 4 to obtain, on the left, 4 + 4/4 = 5. This guess is a good choice since it produces an integer value. However, 4 is not the solution of the original equation, as it gives a value which is three times too small. To compensate, multiply x (currently set to 4) by 3 and substitute again to get 12 + 12/4 = 15, verifying that the solution is x = 12.
Modern versions of the technique employ systematic ways of choosing new test values and are concerned with the questions of whether or not an approximation to a solution can be obtained, and if it can, how fast can the approximation be found.
Two historical types
Two basic types of false position method can be distinguished historically, simple false position and double false position.
Simple false position is aimed at solving problems involving direct proportion. Such problems can be written algebraically in the form: determine x such that
$ax=b,$
if a and b are known. The method begins by using a test input value x′, and finding the corresponding output value b′ by multiplication: ax′ = b′. The correct answer is then found by proportional adjustment, $x={\tfrac {b}{b'}}\,x'$.
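For instance, this rule reproduces the Rhind papyrus solution above, where $a=1+{\tfrac {1}{4}}$ and $b=15$: the test input $x'=4$ gives $b'=5$, and the proportional adjustment yields
$x={\frac {b}{b'}}\,x'={\frac {15}{5}}\times 4=12.$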
Double false position is aimed at solving more difficult problems that can be written algebraically in the form: determine x such that
$f(x)=ax+c=0,$
if it is known that
$f(x_{1})=b_{1},\qquad f(x_{2})=b_{2}.$
Double false position is mathematically equivalent to linear interpolation. Using a pair of test inputs and the corresponding pair of outputs, the result of this algorithm, given by[2]
$x={\frac {b_{1}x_{2}-b_{2}x_{1}}{b_{1}-b_{2}}},$
would be memorized and carried out by rote. Indeed, the rule as given by Robert Recorde in his Ground of Artes (c. 1542) is:[2]
Gesse at this woorke as happe doth leade.
By chaunce to truthe you may procede.
And firste woorke by the question,
Although no truthe therein be don.
Suche falsehode is so good a grounde,
That truth by it will soone be founde.
From many bate to many mo,
From to fewe take to fewe also.
With to much ioyne to fewe againe,
To to fewe adde to manye plaine.
In crossewaies multiplye contrary kinde,
All truthe by falsehode for to fynde.
For an affine linear function,
$f(x)=ax+c,$
double false position provides the exact solution, while for a nonlinear function f it provides an approximation that can be successively improved by iteration.
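The exactness in the affine case can be checked directly by substituting $b_{1}=ax_{1}+c$ and $b_{2}=ax_{2}+c$ into the formula above:
$x={\frac {b_{1}x_{2}-b_{2}x_{1}}{b_{1}-b_{2}}}={\frac {(ax_{1}+c)x_{2}-(ax_{2}+c)x_{1}}{a(x_{1}-x_{2})}}={\frac {c(x_{2}-x_{1})}{a(x_{1}-x_{2})}}=-{\frac {c}{a}},$
which is exactly the root of $f(x)=ax+c$.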
History
The simple false position technique is found in cuneiform tablets from ancient Babylonian mathematics, and in papyri from ancient Egyptian mathematics.[3][1]
Double false position arose in late antiquity as a purely arithmetical algorithm. In the ancient Chinese mathematical text called The Nine Chapters on the Mathematical Art (九章算術),[4] dated from 200 BC to AD 100, most of Chapter 7 was devoted to the algorithm. There, the procedure was justified by concrete arithmetical arguments, then applied creatively to a wide variety of story problems, including one involving what we would call secant lines on a conic section. A more typical example is this "joint purchase" problem involving an "excess and deficit" condition:[5]
Now an item is purchased jointly; everyone contributes 8 [coins], the excess is 3; everyone contributes 7, the deficit is 4. Tell: The number of people, the item price, what is each? Answer: 7 people, item price 53.[6]
Between the 9th and 10th centuries, the Egyptian mathematician Abu Kamil wrote a now-lost treatise on the use of double false position, known as the Book of the Two Errors (Kitāb al-khaṭāʾayn). The oldest surviving writing on double false position from the Middle East is that of Qusta ibn Luqa (10th century), an Arab mathematician from Baalbek, Lebanon. He justified the technique by a formal, Euclidean-style geometric proof. Within the tradition of medieval Muslim mathematics, double false position was known as hisāb al-khaṭāʾayn ("reckoning by two errors"). It was used for centuries to solve practical problems such as commercial and juridical questions (estate partitions according to rules of Quranic inheritance), as well as purely recreational problems. The algorithm was often memorized with the aid of mnemonics, such as a verse attributed to Ibn al-Yasamin and balance-scale diagrams explained by al-Hassar and Ibn al-Banna, all three being mathematicians of Moroccan origin.[7]
Leonardo of Pisa (Fibonacci) devoted Chapter 13 of his book Liber Abaci (AD 1202) to explaining and demonstrating the uses of double false position, terming the method regulis elchatayn after the al-khaṭāʾayn method that he had learned from Arab sources.[7] In 1494, Pacioli used the term el cataym in his book Summa de arithmetica, probably taking the term from Fibonacci. Other European writers would follow Pacioli and sometimes provided a translation into Latin or the vernacular. For instance, Tartaglia translates the Latinized version of Pacioli's term into the vernacular "false positions" in 1556.[8] Pacioli's term nearly disappeared in the 16th century European works and the technique went by various names such as "Rule of False", "Rule of Position" and "Rule of False Position". Regula Falsi appears as the Latinized version of Rule of False as early as 1690.[2]
Several 16th century European authors felt the need to apologize for the name of the method in a science that seeks to find the truth. For instance, in 1568 Humphrey Baker says:[2]
The Rule of falsehoode is so named not for that it teacheth anye deceyte or falsehoode, but that by fayned numbers taken at all aduentures, it teacheth to finde out the true number that is demaunded, and this of all the vulgar Rules which are in practise) is ye most excellence.
Numerical analysis
The method of false position provides an exact solution for linear functions, but more direct algebraic techniques have supplanted its use for these functions. However, in numerical analysis, double false position became a root-finding algorithm used in iterative numerical approximation techniques.
Many equations, including most of the more complicated ones, can be solved only by iterative numerical approximation. This consists of trial and error, in which various values of the unknown quantity are tried. That trial-and-error may be guided by calculating, at each step of the procedure, a new estimate for the solution. There are many ways to arrive at a calculated-estimate and regula falsi provides one of these.
Given an equation, move all of its terms to one side so that it has the form, f (x) = 0, where f is some function of the unknown variable x. A value c that satisfies this equation, that is, f (c) = 0, is called a root or zero of the function f and is a solution of the original equation. If f is a continuous function and there exist two points a0 and b0 such that f (a0) and f (b0) are of opposite signs, then, by the intermediate value theorem, the function f has a root in the interval (a0, b0).
There are many root-finding algorithms that can be used to obtain approximations to such a root. One of the most common is Newton's method, but it can fail to find a root under certain circumstances and it may be computationally costly since it requires a computation of the function's derivative. Other methods are needed and one general class of methods are the two-point bracketing methods. These methods proceed by producing a sequence of shrinking intervals [ak, bk], at the kth step, such that (ak, bk) contains a root of f.
Two-point bracketing methods
These methods start with two x-values, initially found by trial-and-error, at which f (x) has opposite signs. Under the continuity assumption, a root of f is guaranteed to lie between these two values, that is to say, these values "bracket" the root. A point strictly between these two values is then selected and used to create a smaller interval that still brackets a root. If c is the point selected, then the smaller interval goes from c to the endpoint where f (x) has the sign opposite that of f (c). In the improbable case that f (c) = 0, a root has been found and the algorithm stops. Otherwise, the procedure is repeated as often as necessary to obtain an approximation to the root to any desired accuracy.
The point selected in any current interval can be thought of as an estimate of the solution. The different variations of this method involve different ways of calculating this solution estimate.
Preserving the bracketing and ensuring that the solution estimates lie in the interior of the bracketing intervals guarantees that the solution estimates will converge toward the solution, a guarantee not available with other root finding methods such as Newton's method or the secant method.
The simplest variation, called the bisection method, calculates the solution estimate as the midpoint of the bracketing interval. That is, if at step k, the current bracketing interval is [ak, bk], then the new solution estimate ck is obtained by,
$c_{k}={\frac {a_{k}+b_{k}}{2}}.$
This ensures that ck is between ak and bk, thereby guaranteeing convergence toward the solution.
Since the bracketing interval's length is halved at each step, the bisection method's error is, on average, halved with each iteration. Hence, every three iterations the method gains approximately a factor of $2^{3}=8\approx 10$, i.e. roughly a decimal place, in accuracy.
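For illustration, the bisection update can be sketched in a few lines of C (the function name bisect and the stopping rule are illustrative choices added here, not part of the original presentation):

#include <math.h>

/* Minimal bisection sketch: assumes f is continuous and that f(a) and f(b)
   have opposite signs; halves the bracket until it is shorter than tol. */
double bisect(double (*f)(double), double a, double b, double tol) {
    double fa = f(a);
    while (fabs(b - a) > tol) {
        double c = (a + b) / 2;   /* midpoint estimate */
        double fc = f(c);
        if (fc == 0)
            return c;             /* exact root found */
        if (fa * fc < 0) {
            b = c;                /* root lies in [a, c] */
        } else {
            a = c;                /* root lies in [c, b] */
            fa = fc;
        }
    }
    return (a + b) / 2;
}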
The regula falsi (false position) method
The convergence rate of the bisection method could possibly be improved by using a different solution estimate.
The regula falsi method calculates the new solution estimate as the x-intercept of the line segment joining the endpoints of the function on the current bracketing interval. Essentially, the root is being approximated by replacing the actual function by a line segment on the bracketing interval and then using the classical double false position formula on that line segment.[9]
More precisely, suppose that in the k-th iteration the bracketing interval is (ak, bk). Construct the line through the points (ak, f (ak)) and (bk, f (bk)), as illustrated. This line is a secant or chord of the graph of the function f. In point-slope form, its equation is given by
$y-f(b_{k})={\frac {f(b_{k})-f(a_{k})}{b_{k}-a_{k}}}(x-b_{k}).$
Now choose ck to be the x-intercept of this line, that is, the value of x for which y = 0, and substitute these values to obtain
$f(b_{k})+{\frac {f(b_{k})-f(a_{k})}{b_{k}-a_{k}}}(c_{k}-b_{k})=0.$
Solving this equation for ck gives:
$c_{k}=b_{k}-f(b_{k}){\frac {b_{k}-a_{k}}{f(b_{k})-f(a_{k})}}={\frac {a_{k}f(b_{k})-b_{k}f(a_{k})}{f(b_{k})-f(a_{k})}}.$
This last symmetrical form has a computational advantage:
As a solution is approached, ak and bk will be very close together, and nearly always of the same sign. Such a subtraction can lose significant digits. Because f (bk) and f (ak) are always of opposite sign the “subtraction” in the numerator of the improved formula is effectively an addition (as is the subtraction in the denominator too).
At iteration number k, the number ck is calculated as above and then, if f (ak) and f (ck) have the same sign, set ak + 1 = ck and bk + 1 = bk, otherwise set ak + 1 = ak and bk + 1 = ck. This process is repeated until the root is approximated sufficiently well.
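One unmodified update of the bracket can be sketched in C as follows (the name falsi_step is an illustrative choice; the complete Illinois variant appears under Example code below):

/* One plain regula falsi step: assumes f(*a) and f(*b) have opposite signs. */
void falsi_step(double (*f)(double), double *a, double *b) {
    double fa = f(*a), fb = f(*b);
    /* x-intercept of the chord through (a, f(a)) and (b, f(b)) */
    double c = (*a * fb - *b * fa) / (fb - fa);
    double fc = f(c);
    if (fc * fa > 0)
        *a = c;   /* f(c) and f(a) share a sign: replace the left endpoint */
    else
        *b = c;   /* otherwise replace the right endpoint */
}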
The above formula is also used in the secant method, but the secant method always retains the last two computed points, and so, while it is slightly faster, it does not preserve bracketing and may not converge.
The fact that regula falsi always converges, and has versions that do well at avoiding slowdowns, makes it a good choice when speed is needed. However, its rate of convergence can drop below that of the bisection method.
Analysis
Since the initial end-points a0 and b0 are chosen such that f (a0) and f (b0) are of opposite signs, at each step, one of the end-points will get closer to a root of f. If the second derivative of f is of constant sign (so there is no inflection point) in the interval, then one endpoint (the one where f also has the same sign) will remain fixed for all subsequent iterations while the converging endpoint is updated. As a result, unlike the bisection method, the width of the bracket does not tend to zero (unless the zero is at an inflection point around which sign(f) = −sign(f″)). As a consequence, the linear approximation to f (x), which is used to pick the false position, does not improve as rapidly as possible.
One example of this phenomenon is the function
$f(x)=2x^{3}-4x^{2}+3x$
on the initial bracket [−1,1]. The left end, −1, is never replaced (it does not change during the first iterations, and after the first three iterations f″ is negative on the interval), and thus the width of the bracket never falls below 1. Hence, the right endpoint approaches 0 at a linear rate (the number of accurate digits grows linearly, with a rate of convergence of 2/3).
For discontinuous functions, this method can only be expected to find a point where the function changes sign (for example at x = 0 for 1/x or the sign function). In addition to sign changes, it is also possible for the method to converge to a point where the limit of the function is zero, even if the function is undefined (or has another value) at that point (for example at x = 0 for the function given by f (x) = abs(x) − x2 when x ≠ 0 and by f (0) = 5, starting with the interval [-0.5, 3.0]). It is mathematically possible with discontinuous functions for the method to fail to converge to a zero limit or sign change, but this is not a problem in practice since it would require an infinite sequence of coincidences for both endpoints to get stuck converging to discontinuities where the sign does not change, for example at x = ±1 in
$f(x)={\frac {1}{(x-1)^{2}}}+{\frac {1}{(x+1)^{2}}}.$
The method of bisection avoids this hypothetical convergence problem.
Improvements in regula falsi
Though regula falsi always converges, usually considerably faster than bisection, there are situations that can slow its convergence – sometimes to a prohibitive degree. That problem isn't unique to regula falsi: Other than bisection, all of the numerical equation-solving methods can have a slow-convergence or no-convergence problem under some conditions. Sometimes, Newton's method and the secant method diverge instead of converging – and often do so under the same conditions that slow regula falsi's convergence.
Still, though regula falsi is one of the best methods, even in its original unimproved version it would often be the best choice; for example, when Newton's method isn't used because the derivative is prohibitively time-consuming to evaluate, or when Newton's method and successive substitutions have failed to converge.
Regula falsi's failure mode is easy to detect: The same end-point is retained twice in a row. The problem is easily remedied by picking instead a modified false position, chosen to avoid slowdowns due to those relatively unusual unfavorable situations. A number of such improvements to regula falsi have been proposed; two of them, the Illinois algorithm and the Anderson–Björk algorithm, are described below.
The Illinois algorithm
The Illinois algorithm halves the y-value of the retained end point in the next estimate computation when the new y-value (that is, f (ck)) has the same sign as the previous one (f (ck − 1)), meaning that the end point of the previous step will be retained. Hence:
$c_{k}={\frac {{\frac {1}{2}}f(b_{k})a_{k}-f(a_{k})b_{k}}{{\frac {1}{2}}f(b_{k})-f(a_{k})}}$
or
$c_{k}={\frac {f(b_{k})a_{k}-{\frac {1}{2}}f(a_{k})b_{k}}{f(b_{k})-{\frac {1}{2}}f(a_{k})}},$
down-weighting one of the endpoint values to force the next ck to occur on that side of the function.[10] The factor ½ used above looks arbitrary, but it guarantees superlinear convergence (asymptotically, the algorithm will perform two regular steps after any modified step, and has order of convergence 1.442). There are other ways to pick the rescaling which give even better superlinear convergence rates.[11]
The above adjustment to regula falsi is called the Illinois algorithm by some scholars.[10][12] Ford (1995) summarizes and analyzes this and other similar superlinear variants of the method of false position.[11]
Anderson–Björck algorithm
Suppose that in the k-th iteration the bracketing interval is [ak, bk] and that the functional value of the new calculated estimate ck has the same sign as f (bk). In this case, the new bracketing interval [ak + 1, bk + 1] = [ak, ck] and the left-hand endpoint has been retained. (So far, that's the same as ordinary Regula Falsi and the Illinois algorithm.)
But, whereas the Illinois algorithm would multiply f (ak) by 1/2, the Anderson–Björck algorithm multiplies it by m, where m has one of the two following values:[13]
${\begin{aligned}m'&=1-{\frac {f(c_{k})}{f(b_{k})}},\\m&={\begin{cases}m'&{\text{if }}m'>0,\\{\frac {1}{2}}&{\text{otherwise.}}\end{cases}}\end{aligned}}$
For simple roots, Anderson–Björck performs very well in practice.[14]
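Only the scaling factor differs from the Illinois step; as a sketch in C (the names are illustrative additions):

/* Anderson–Björck scaling (sketch): when the new estimate c falls on b's
   side, down-weight the retained endpoint's value f(a) by m. */
double anderson_bjorck_factor(double fb, double fc) {
    double m = 1.0 - fc / fb;     /* m' = 1 - f(c)/f(b) */
    return (m > 0.0) ? m : 0.5;   /* fall back to the Illinois factor 1/2 */
}
/* usage in the update: fa *= anderson_bjorck_factor(fb, fc); b = c; fb = fc; */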
ITP method
Main article: ITP method
Given $\kappa _{1}\in (0,\infty ),\kappa _{2}\in \left[1,1+\phi \right)$, $n_{1/2}\equiv \lceil \log _{2}((b_{0}-a_{0})/(2\epsilon ))\rceil $ and $n_{0}\in [0,\infty )$, where $\phi $ is the golden ratio ${\tfrac {1}{2}}(1+{\sqrt {5}})$, in each iteration $j=0,1,2,\ldots $ the ITP method calculates the point $x_{\text{ITP}}$ in three steps:
1. [Interpolation Step] Calculate the bisection and the regula falsi points: $x_{1/2}\equiv {\frac {a+b}{2}}$ and $x_{f}\equiv {\frac {bf(a)-af(b)}{f(a)-f(b)}}$ ;
2. [Truncation Step] Perturb the estimator towards the center: $x_{t}\equiv x_{f}+\sigma \delta $ where $\sigma \equiv {\text{sign}}(x_{1/2}-x_{f})$ and $\delta \equiv \min\{\kappa _{1}|b-a|^{\kappa _{2}},|x_{1/2}-x_{f}|\}$ ;
3. [Projection Step] Project the estimator to minmax interval: $x_{\text{ITP}}\equiv x_{1/2}-\sigma \rho _{k}$ where $\rho _{k}\equiv \min \left\{\epsilon 2^{n_{1/2}+n_{0}-j}-{\frac {b-a}{2}},|x_{t}-x_{1/2}|\right\}$.
The value of the function $f(x_{\text{ITP}})$ at this point is queried, and the interval is then reduced to bracket the root by keeping the sub-interval with function values of opposite sign on each end. This three-step procedure guarantees that the estimate enjoys the minmax properties of the bisection method as well as the superlinear convergence of the secant method, and it has been observed to outperform both bisection and interpolation-based methods on smooth and non-smooth functions.[15]
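Assembled into C, a sketch under the stated assumptions (here taking f(a) < 0 < f(b); the function name itp and the parameter handling are illustrative, not a definitive implementation):

#include <math.h>

/* ITP sketch: assumes f(a) < 0 < f(b); kappa1 > 0, 1 <= kappa2 < 1 + phi,
   n0 >= 0; eps is the target half-width of the final bracket. */
double itp(double (*f)(double), double a, double b,
           double eps, double kappa1, double kappa2, int n0) {
    double fa = f(a), fb = f(b);
    int n_half = (int)ceil(log2((b - a) / (2 * eps)));
    int n_max = n_half + n0;
    for (int j = 0; b - a > 2 * eps; j++) {
        /* 1. Interpolation: bisection and regula falsi points */
        double x_half = (a + b) / 2;
        double x_f = (fb * a - fa * b) / (fb - fa);
        /* 2. Truncation: perturb x_f toward the midpoint by delta */
        double sigma = (x_half >= x_f) ? 1.0 : -1.0;
        double delta = fmin(kappa1 * pow(b - a, kappa2), fabs(x_half - x_f));
        double x_t = x_f + sigma * delta;
        /* 3. Projection onto the shrinking minmax interval */
        double r = eps * pow(2.0, n_max - j) - (b - a) / 2;
        double x_itp = (fabs(x_t - x_half) <= r) ? x_t : x_half - sigma * r;
        /* reduce the bracket, keeping opposite signs at the endpoints */
        double y = f(x_itp);
        if (y > 0)      { b = x_itp; fb = y; }
        else if (y < 0) { a = x_itp; fa = y; }
        else            return x_itp;   /* exact root */
    }
    return (a + b) / 2;
}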
Practical considerations
When solving one equation, or just a few, using a computer, the bisection method is an adequate choice. Although bisection isn't as fast as the other methods—when they're at their best and don't have a problem—bisection nevertheless is guaranteed to converge at a useful rate, roughly halving the error with each iteration – gaining roughly a decimal place of accuracy with every 3 iterations.
For manual calculation, by calculator, one tends to want to use faster methods, and they usually, but not always, converge faster than bisection. But a computer, even using bisection, will solve an equation, to the desired accuracy, so rapidly that there's no need to try to save time by using a less reliable method—and every method is less reliable than bisection.
An exception would be if the computer program had to solve equations very many times during its run. Then the time saved by the faster methods could be significant.
Then, a program could start with Newton's method, and, if Newton's isn't converging, switch to regula falsi, maybe in one of its improved versions, such as the Illinois or Anderson–Björck versions. Or, if even that isn't converging as well as bisection would, switch to bisection, which always converges at a useful, if not spectacular, rate.
When the change in y has become very small, and x is also changing very little, then Newton's method most likely will not run into trouble, and will converge. So, under those favorable conditions, one could switch to Newton's method if one wanted the error to be very small and wanted very fast convergence.
Example: Growth of a bulrush
In chapter 7 of The Nine Chapters, a root finding problem can be translated to modern language as follows:
Excess And Deficit Problem #11:
• A bulrush grew 3 units on its first day. At the end of each day, the plant is observed to have grown by 1/2 of the previous day's growth.
• A club-rush grew 1 unit on its first day. At the end of each day, the plant has grown by 2 times as much as the previous day's growth.
• Find the time [in fractional days] that the club-rush becomes as tall as the bulrush.
Answer: $(2+{\frac {6}{13}})$ days; the height is $(4+{\frac {8}{10}}+{\frac {6}{130}})$ units.
Explanation:
• Suppose it is day 2. The club-rush is shorter than the bulrush by 1.5 units.
• Suppose it is day 3. The club-rush is taller than the bulrush by 1.75 units. ∎
To understand this, we shall model the heights of the plants on day n (n = 1, 2, 3...) after a geometric series.
$B(n)=\sum _{i=1}^{n}3\cdot {\frac {1}{2^{i-1}}}\quad $Bulrush
$C(n)=\sum _{i=1}^{n}1\cdot 2^{i-1}\quad $Club-rush
For cleaner notation, let $\ k=i-1~.$ Rewrite the plant-height series $\ B(n),\ C(n)\ $ in terms of k and invoke the geometric-series sum formula.
$\ B(n)=\sum _{k=0}^{n-1}3\cdot {\frac {1}{2^{k}}}=3\left({\frac {1-({\tfrac {1}{2}})^{n-1+1}}{1-{\tfrac {1}{2}}}}\right)=6\left(1-{\frac {1}{2^{n}}}\right)$
$\ C(n)=\sum _{k=0}^{n-1}2^{k}={\frac {~~1-2^{n}}{\ 1-2\ }}=2^{n}-1\ $
Now, use regula falsi to find the root of $\ (C(n)-B(n))\ $
$\ F(n):=C(n)-B(n)={\frac {6}{2^{n}}}+2^{n}-7\ $
Set $\ x_{1}=2\ $ and compute $\ F(x_{1})=F(2)\ $ which equals −1.5 (the "deficit").
Set $\ x_{2}=3\ $ and compute $\ F(x_{2})=F(3)\ $ which equals 1.75 (the "excess").
Estimated root (1st iteration):
$\ {\hat {x}}~=~{\frac {~x_{1}F(x_{2})-x_{2}F(x_{1})~}{F(x_{2})-F(x_{1})}}~=~{\frac {~2\times 1.75+3\times 1.5~}{1.75+1.5}}~\approx ~2.4615\ $
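Carrying the iteration one step further (a check added here, beyond the classical text): $F(2.4615)\approx -0.403$, still a deficit, so the root lies between 2.4615 and 3, and the second estimate is
$\ {\hat {x}}~\approx ~{\frac {~2.4615\times 1.75+3\times 0.403~}{1.75+0.403}}~\approx ~2.562~,$
approaching the exact crossing: substituting $\ u=2^{n}\ $ into $\ F(n)=0\ $ gives $\ u^{2}-7u+6=0\ $, hence $\ u=6\ $ and $\ n=\log _{2}6\approx 2.585~.$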
Example code
This example program, written in the C programming language, is an example of the Illinois algorithm. To find the positive number x where cos(x) = x3, the equation is transformed into a root-finding form f (x) = cos(x) - x3 = 0.
#include <stdio.h>
#include <math.h>

double f(double x) {
    return cos(x) - x*x*x;
}

/* a,b: endpoints of an interval where we search
   e: half of upper bound for relative error
   m: maximal number of iterations
*/
double FalsiMethod(double (*f)(double), double a, double b, double e, int m) {
    double c, fc;
    int n, side = 0;
    /* starting values at endpoints of interval */
    double fa = f(a);
    double fb = f(b);

    for (n = 0; n < m; n++) {
        c = (fa * b - fb * a) / (fa - fb);
        if (fabs(b - a) < e * fabs(b + a))
            break;
        fc = f(c);

        if (fc * fb > 0) {
            /* fc and fb have same sign, copy c to b */
            b = c; fb = fc;
            if (side == -1)
                fa /= 2;   /* Illinois: halve the retained endpoint's value */
            side = -1;
        } else if (fa * fc > 0) {
            /* fc and fa have same sign, copy c to a */
            a = c; fa = fc;
            if (side == +1)
                fb /= 2;   /* Illinois: halve the retained endpoint's value */
            side = +1;
        } else {
            /* fc * fa and fc * fb are non-positive: c is (close to) a root */
            break;
        }
    }
    return c;
}

int main(void) {
    printf("%0.15f\n", FalsiMethod(&f, 0, 1, 5E-15, 100));
    return 0;
}
After running this code, the final answer is approximately 0.865474033101614.
See also
• ITP method, a variation with guaranteed minmax and superlinear convergence
• Ridders' method, another root-finding method based on the false position method
• Brent's method
References
1. Katz, Victor J. (1998), A History of Mathematics (2nd ed.), Addison Wesley Longman, p. 15, ISBN 978-0-321-01618-8
2. Smith, D. E. (1958) [1925], History of Mathematics, vol. II, Dover, pp. 437–441, ISBN 978-0-486-20430-7
3. Chabert, Jean-Luc, ed. (2012) [1999]. "3. Methods of False Position". A History of Algorithms: From the Pebble to the Microchip. Springer. pp. 86–91. ISBN 978-3-642-18192-4.
4. Needham, Joseph (1959). Mathematics and the Sciences of the Heavens and the Earth. Science and Civilisation in China. Vol. 3. Cambridge University Press. pp. 147–. ISBN 978-0-521-05801-8.
5. "Nine chapters". www-groups.dcs.st-and.ac.uk. Retrieved 2019-02-16.
6. Shen, Kangshen; Crossley, John N.; Lun, Anthony Wah-Cheung (1999). The Nine Chapters on the Mathematical Art: Companion and Commentary. Oxford University Press. p. 358. ISBN 978-7-03-006101-0.
7. Schwartz, R. K. (2004). Issues in the Origin and Development of Hisab al-Khata'ayn (Calculation by Double False Position). Eighth North African Meeting on the History of Arab Mathematics. Radès, Tunisia. Available online at: http://facstaff.uindy.edu/~oaks/Biblio/COMHISMA8paper.doc and "Archived copy" (PDF). Archived from the original (PDF) on 2014-05-16. Retrieved 2012-06-08.
8. General Trattato, vol. I, Venice, 1556, p. fol. 238, v, Regola Helcataym (vocabulo Arabo) che in nostra lingua vuol dire delle false Positioni
9. Conte, S.D.; Boor, Carl de (1965). Elementary Numerical Analysis: an algorithmic approach (2nd ed.). McGraw-Hill. p. 40. OCLC 1088854304.
10. Dahlquist, Germund; Björck, Åke (2003) [1974]. Numerical Methods. Dover. pp. 231–232. ISBN 978-0486428079.
11. Ford, J. A. (1995), Improved Algorithms of Illinois-type for the Numerical Solution of Nonlinear Equations, Technical Report, University of Essex Press, CiteSeerX 10.1.1.53.8676, CSM-257
12. Dowell, M.; Jarratt, P. (1971). "A modified regula falsi method for computing the root of an equation". BIT. 11 (2): 168–174. doi:10.1007/BF01934364. S2CID 50473598.
13. King, Richard F. (October 1983). "Anderson-Bjorck for Linear Sequences". Mathematics of Computation. 41 (164): 591–596. doi:10.2307/2007695. JSTOR 2007695.
14. Galdino, Sérgio (2011). "A family of regula falsi root-finding methods". Proceedings of 2011 World Congress on Engineering and Technology. 1. Retrieved 9 September 2016.
15. Oliveira, I. F. D.; Takahashi, R. H. C. (2020-12-06). "An Enhancement of the Bisection Method Average Performance Preserving Minmax Optimality". ACM Transactions on Mathematical Software. 47 (1): 5:1–5:24. doi:10.1145/3423597. ISSN 0098-3500. S2CID 230586635.
Further reading
• Burden, Richard L.; Faires, J. Douglas (2000). Numerical Analysis (7th ed.). Brooks/Cole. ISBN 0-534-38216-9.
• Sigler, L.E. (2002). Fibonacci's Liber Abaci, Leonardo Pisano's Book of Calculation. Springer-Verlag. ISBN 0-387-40737-5.
• Roberts, A.M. (2020). "Mathematical Philology in the Treatise on Double False Position in an Arabic Manuscript at Columbia University". Philological Encounters. 5 (3–4): 3–4. doi:10.1163/24519197-BJA10007. S2CID 229538951. (On a previously unpublished treatise on Double False Position in a medieval Arabic manuscript.)
Root-finding algorithms
Bracketing (no derivative)
• Bisection method
• Regula falsi
• ITP Method
Newton
• Newton's method
Quasi-Newton
• Muller's method
• Secant method
Hybrid methods
• Brent's method
• Ridders' method
Polynomial methods
• Bairstow's method
• Jenkins–Traub method
• Laguerre's method
Regular tuning
Among alternative guitar-tunings, regular tunings have equal musical intervals between the paired notes of their successive open strings.
Regular tunings
For regular guitar-tunings, the distance between consecutive open-strings is a constant musical-interval, measured by semitones on the chromatic circle. The chromatic circle lists the twelve notes of the octave.
Basic information
Aliases: Uniform tunings, all-interval tunings
Advanced information
Advantages: Provides new material for improvisation by advanced guitarists
Disadvantages: Makes it difficult to play music written for standard tuning.
Regular tunings (semitones)
Trivial (0)
Minor thirds (3)
Major thirds (4)
All fourths (5)
Augmented fourths (6)
New standard (7, 3)
All fifths (7)
Minor sixths (8)
Guitar tunings
Guitar tunings assign pitches to the open strings of guitars. Tunings can be described by the particular pitches that are denoted by notes in Western music. By convention, the notes are ordered from lowest to highest. The standard tuning defines the string pitches as E, A, D, G, B, and E. Between the open-strings of the standard tuning are three perfect-fourths (E–A, A–D, D–G), then the major third G–B, and the fourth perfect-fourth B–E.
In contrast, regular tunings have constant intervals between their successive open-strings:
• 3 semitones (minor third): Minor-thirds, or Diminished tuning
• 4 semitones (major third): Major-thirds or Augmented tuning,
• 5 semitones (perfect fourth): All-fourths tuning,
• 6 semitones (augmented fourth, tritone, or diminished fifth): Augmented-fourths tuning,
• 7 semitones (perfect fifth): All-fifths tuning
For the regular tunings, chords may be moved diagonally around the fretboard, as well as vertically for the repetitive regular tunings (minor thirds, major thirds, and augmented fourths). Regular tunings thus often appeal to new guitarists and also to jazz-guitarists, as they facilitate key transpositions without requiring a completely new set of fingerings for the new key. On the other hand, some conventional major/minor system chords are easier to play in standard tuning than in regular tuning.[1] Left-handed guitarists may use the chord charts from one class of regular tunings for its left-handed tuning; for example, the chord charts for all-fifths tuning may be used for guitars strung with left-handed all-fourths tuning.
The class of regular tunings has been named and described by Professor William Sethares. Sethares's 2001 chapter Regular tunings (in his revised 2010–2011 Alternate tuning guide) is the leading source for this article.[1] This article's descriptions of particular regular-tunings use other sources also.
Standard and alternative guitar-tunings: A review
In standard tuning, the C-major chord has three shapes because of the irregular major-third between the G- and B-strings.
This summary of standard tuning also introduces the terms for discussing alternative tunings.
Standard
Standard tuning has the following open-string notes:
E2–A2–D3–G3–B3–E4.
In standard tuning, the separation of the second (B), and third (G) string is by a major-third interval, which has a width of four semitones.
Standard tuning: chromatic note progression
String | Open | 1st fret (index) | 2nd fret (middle) | 3rd fret (ring) | 4th fret (little)
1st string | E4 | F4 | F♯4/G♭4 | G4 | G♯4/A♭4
2nd string | B3 | C4 | C♯4/D♭4 | D4 | D♯4/E♭4
3rd string | G3 | G♯3/A♭3 | A3 | A♯3/B♭3 | B3
4th string | D3 | D♯3/E♭3 | E3 | F3 | F♯3/G♭3
5th string | A2 | A♯2/B♭2 | B2 | C3 | C♯3/D♭3
6th string | E2 | F2 | F♯2/G♭2 | G2 | G♯2/A♭2
The irregularity has a price. Chords cannot be shifted around the fretboard in the standard tuning E–A–D–G–B–E, which requires four chord-shapes for the major chords. There are separate chord-forms for chords having their root note on the third, fourth, fifth, and sixth strings.[2]
Alternative
Alternative ("alternate") tuning refers to any open-string note-arrangement other than standard tuning. Such alternative tuning arrangements offer different chord voicing and sonorities. Alternative tunings necessarily change the chord shapes associated with standard tuning, which eases the playing of some, often "non-standard", chords at the cost of increasing the difficulty of some traditionally-voiced chords. As with other scordatura tuning, regular tunings may require re-stringing the guitar with different string gauges. For example, all-fifths tuning has been difficult to implement on conventional guitars, due to the extreme high pitch required from the top string. Even a common approximation to all-fifths tuning, new standard tuning, requires a special set of strings.
Properties
Chords can be shifted diagonally in regular tunings, such as major-thirds (M3) tuning.
With standard tuning, and with all tunings, chord patterns can be moved twelve frets down, where the notes repeat in a higher octave.
For the standard tuning, there is exactly one interval of a third between the second and third strings, and all the other intervals are fourths. Working around the irregular third of standard tuning, guitarists have to memorize chord-patterns for at least three regions: The first four strings tuned in perfect fourths; two or more fourths and the third; and one or more initial fourths, the third, and the last fourth.
In contrast, regular tunings have constant intervals between their successive open-strings. In fact, the class of each regular tuning is characterized by its musical interval as shown by the following list:
• 3 semitones (minor third): Minor-thirds tuning,
• 4 semitones (major third): Major-thirds tuning,
• 5 semitones (perfect fourth): All-fourths tuning,
• 6 semitones (augmented fourth, tritone, or diminished fifth): Augmented-fourths tuning,
• 7 semitones (perfect fifth): All-fifths tuning
The regular tunings whose number of semitones s divides 12 (the number of notes in the octave) repeat their open-string notes (raised one octave) after 12/s strings, as the code sketch following this list also illustrates. For example,
• having three semitones in its interval, minor-thirds tuning repeats its open notes after four (12/3) strings;
• having four semitones in its interval, major-thirds tuning repeats its open notes after three (12/4) strings;
• having six semitones in its interval, augmented-fourths tuning repeats its notes after two (12/6) strings.
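As an illustration of this arithmetic, the sketch below generates the open-string notes of any regular tuning from a starting pitch and a constant semitone step (the MIDI-style note numbering and the note-name table are assumptions added here, not taken from the cited sources):

#include <stdio.h>

static const char *NAMES[12] = {"C", "C#", "D", "D#", "E", "F",
                                "F#", "G", "G#", "A", "A#", "B"};

/* Print the open strings of a regular tuning: start is a MIDI-style note
   number (60 = C4), step is the tuning's interval in semitones. */
void print_regular_tuning(int start, int step, int strings) {
    for (int s = 0; s < strings; s++) {
        int note = start + s * step;
        printf("%s%d ", NAMES[note % 12], note / 12 - 1);
    }
    printf("\n");
}

int main(void) {
    print_regular_tuning(40, 4, 6); /* major thirds from E2: repeats after 12/4 = 3 strings */
    print_regular_tuning(36, 7, 6); /* all fifths from C2: C2 G2 D3 A3 E4 B4 */
    return 0;
}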
Regular tunings have symmetrical scales all along the fretboard. This makes it simpler to translate chords into new keys. For the regular tunings, chords may be moved diagonally around the fretboard.
The shifting of chords is especially simple for the regular tunings that repeat their open strings, in which case chords can be moved vertically: Chords can be moved three strings up (or down) in major-thirds tuning,[3] and chords can be moved two strings up (or down) in augmented-fourths tuning. Regular tunings thus appeal to new guitarists and also to jazz-guitarists, whose improvisation is simplified by regular intervals.
Particular conventional chords are more difficult to play
On the other hand, particular traditional chords may be more difficult to play in a regular tuning than in standard tuning. It can be difficult to play conventional chords especially in augmented-fourths tuning and all-fifths tuning,[1] in which the wide (tritone and perfect-fifth) intervals require hand stretching. Some chords that are conventional in folk music are difficult to play even in all-fourths and major-thirds tunings, which do not require more hand-stretching than standard tuning.[4] On the other hand, minor-thirds tuning features many barre chords with repeated notes,[5] properties that appeal to beginners.
Frets covered by the hand
The chromatic scale climbs from one string to the next after a number of frets that is specific to each regular tuning. The chromatic scale climbs after exactly four frets in major-thirds tuning, so reducing the extensions of the little and index fingers ("hand stretching").[6] For other regular tunings, the successive strings have intervals that are minor thirds, perfect fourths, augmented fourths, or perfect fifths; thus, the fretting hand covers three, five, six, or seven frets respectively to play a chromatic scale. (Of course, the lowest chromatic-scale uses the open strings and so requires one less fret to be covered.)
Examples
The following regular tunings are discussed by Sethares, who also mentions other regular tunings that are difficult to play or have had little musical interest, to date.
Minor thirds
C2–E♭2–G♭2–A2–C3–E♭3,[7][8] or
B2–D3–F3–A♭3–B3–D4[9]
In each minor-thirds (m3) tuning, every interval between successive strings is a minor third. Thus each repeats its open-notes after four strings. In the minor-thirds tuning beginning with C, the open strings contain the notes (C, E♭, G♭) of the diminished C triad.[7]
Minor-thirds tuning features many barre chords with repeated notes,[5] properties that appeal to acoustic guitarists and to beginners. Doubled notes have different sounds because of differing "string widths, tensions and tunings, and [they] reinforce each other, like the doubled strings of a twelve string guitar add chorusing and depth," according to William Sethares.[7]
Achieving the same range as a standard-tuned guitar using minor-thirds tuning would require a nine-string guitar (e.g. E♭2–G♭2–A2–C3–E♭3–G♭3–A3–C4–E♭4).
Major thirds
Chords vertically shift.
In major-thirds tuning, chords are inverted by raising notes by three strings on the same frets. The inversions of a C major chord are shown.[10]
Major-thirds tuning repeats its notes after three strings.
Major-thirds tuning is a regular tuning in which the musical intervals between successive strings are each major thirds.[11][12] Like minor-thirds tuning (and unlike all-fourths and all-fifths tuning), major-thirds tuning is a repetitive tuning; it repeats its octave after three strings, which again simplifies the learning of chords and improvisation;[13] similarly, minor-thirds tuning repeats itself after four strings while augmented-fourths tuning repeats itself after two strings.
Neighboring the standard tuning is the major-thirds tuning that has the open strings
E2–G♯2–B♯2–E3–G♯3–B♯3 (or F♭2–A♭2–C3–F♭3–A♭3–C4).[4]
With six strings, major-thirds tuning has a smaller range than standard tuning; with seven strings, the major-thirds tuning covers the range of standard tuning on six strings.[11][12] With the repetition of three open-string notes, each major-thirds tuning provides the guitarist with many options for fingering chords. Indeed, the fingering of two successive frets suffices to play pure major and minor chords, while the fingering of three successive frets suffices to play seconds, fourths, sevenths, and ninths.[11][14]
Even greater range is possible with guitars with eight strings.[4][15]
Major-thirds tuning was heavily used in 1964 by the American jazz-guitarist Ralph Patt to facilitate his style of improvisation.[4][16]
All fourths
E2–A2–D3–G3–C4–F4
This tuning is like that of the lowest four strings in standard tuning.[17][18] Consequently, of all the regular tunings, it is the closest approximation to standard tuning, and thus it best allows the transfer of a knowledge of chords from standard tuning to a regular tuning. Jazz musician Stanley Jordan plays guitar in all-fourths tuning; he has stated that all-fourths tuning "simplifies the fingerboard, making it logical".[19]
For all-fourths tuning, all twelve major chords (in the first or open positions) are generated by two chords, the open F major chord and the D major chord. The regularity of chord-patterns reduces the number of finger positions that need to be memorized.[20]
The left-handed involute of an all-fourths tuning is an all-fifths tuning. All-fourths tuning is based on the perfect fourth (five semitones), and all-fifths tuning is based on the perfect fifth (seven semitones). Consequently, chord charts for all-fifths tunings may be used for left-handed all-fourths tuning.[21]
Augmented fourths
C2–F♯2–C3–F♯3–C4–F♯4 and B1–F2–B2–F3–B3–F4, etc.
Between the all-fifths and all-fourths tunings are augmented-fourth tunings, which are also called "diminished-fifths" or "tritone" tunings. It is a repetitive tuning that repeats its notes after two strings. With augmented-fourths tunings, the fretboard has greatest symmetry.[22] In fact, every augmented-fourths tuning lists the notes of all the other augmented-fourths tunings on the frets of its fretboard. Professor Sethares wrote that
"The augmented-fourth interval is the only interval whose inverse is the same as itself. The augmented-fourths tuning is the only tuning (other than the 'trivial' tuning C–C–C–C–C–C) for which all chords-forms remain unchanged when the strings are reversed. Thus the augmented-fourths tuning is its own 'lefty' tuning."[23]
Of all the augmented-fourths tunings, the C2–F♯2–C3–F♯3–C4–F♯4 tuning is the closest approximation to the standard tuning, and its fretboard is displayed next:
Tritone:[23] each fret displays the open strings of an augmented-fourths tuning
String | Open (0th fret) | 1st fret | 2nd fret | 3rd fret | 4th fret | 5th fret
1st string | F♯4 | G4 | G♯4 | A4 | A♯4 | B4
2nd string | C4 | C♯4 | D4 | D♯4 | E4 | F4
3rd string | F♯3 | G3 | G♯3 | A3 | A♯3 | B3
4th string | C3 | C♯3 | D3 | D♯3 | E3 | F3
5th string | F♯2 | G2 | G♯2 | A2 | A♯2 | B2
6th string | C2 | C♯2 | D2 | D♯2 | E2 | F2
An augmented-fourths tuning "makes it very easy for playing half-whole scales, diminished 7 licks, and whole tone scales," stated guitarist Ron Jarzombek.[24]
All fifths: "Mandoguitar"
C2–G2–D3–A3–E4–B4
All-fifths tuning is a tuning in intervals of perfect fifths like that of a mandolin, cello or violin; other names include "perfect fifths" and "fifths".[25] Consequently, classical compositions written for violin or guitar may be adapted to all-fifths tuning more easily than to standard tuning.
When he was asked whether tuning in fifths facilitates "new intervals or harmonies that aren't readily available in standard tuning", Robert Fripp responded, "It's a more rational system, but it's also better sounding—better for chords, better for single notes." To build chords, Fripp uses "perfect intervals in fourths, fifths and octaves", so avoiding minor thirds and especially major thirds,[26] which are sharp in equal temperament tuning (in comparison to thirds in just intonation). It is a challenge to adapt conventional guitar-chords to new standard tuning, which is based on all-fifths tuning.[27] Some closely voiced jazz chords become impractical in NST and all-fifths tuning.[28]
It has a wide range, thus its implementation can be difficult. The high B4 requires a taut, thin string, and consequently is prone to breaking. This can be ameliorated by using a shorter scale length guitar, by shifting to a different key, or by shifting down a fifth. All-fifths tuning was used by the jazz-guitarist Carl Kress.
The left-handed involute of an all-fifths tuning is an all-fourths tuning. All-fifths tuning is based on the perfect fifth (seven semitones), and all-fourths tuning is based on the perfect fourth (five semitones). Consequently, chord charts for all-fifths tunings are used for left-handed all-fourths tuning.[21]
All-fifths tuning has also been approximated by the "Through The Looking Glass" guitar[29] of Kei Nakano, who has played it since 2015. This tuning mirrors the stringing of other string instruments, including the guitar, and can be adapted to any other guitar tuning; a right-handed guitar set up this way can generally be used as a left-handed guitar, and vice versa.
New standard tuning
All-fifths tuning has been approximated with tunings that avoid the high B4 or the low C2. The B4 has been replaced with a G4 in the new standard tuning (NST) of King Crimson's Robert Fripp. The original version of NST was all-fifths tuning. However, in the 1980s, Fripp never attained the high B4 of all-fifths tuning. While he could attain A4, the strings' lifetimes were too short. Experimenting with a G string, Fripp succeeded. "Originally, seen in 5ths. all the way, the top string would not go to B. so, as on a tenor banjo, I adopted an A on the first string. These kept breaking, so G was adopted."[30] In 2012, Fripp experimented with a 0.007-gauge A string;[31][32] if successful, the experiment could lead to "the NST 1.2", CGDAE-A, according to Fripp.[31] Fripp's NST has been taught in Guitar Craft courses.[33][34] Guitar Craft and its successor Guitar Circle have taught Fripp's tuning to three thousand students.[35]
Extreme intervals
For regular tunings, intervals wider than a perfect fifth or narrower than a minor third have, thus far, had limited interest.
Wide intervals
Two regular-tunings based on sixths, having intervals of minor sixths (eight semitones) and of major sixths (nine semitones), have received scholarly discussion.[36] The chord charts for minor-sixths tuning are useful for left-handed guitarists playing in major-thirds tuning; the chord charts for major-sixths tuning, for left-handed guitarists playing in minor-thirds tuning.[21]
The regular tunings with minor-seventh (ten semitones) or major-seventh (eleven semitones) intervals would make conventional major/minor chord-playing very difficult, as would octave intervals.[21]
Narrow intervals
There are regular-tunings that have as their intervals either zero semi-tones (unison), one semi-tone (minor second), or two semi-tones (major second). These tunings tend to increase the difficulty in playing the major/minor system chords of conventionally tuned guitars.[21]
The "trivial" class of unison tunings (such as C3–C3–C3–C3–C3–C3) are each their own left-handed tuning.[21] Unison tunings are briefly discussed in the article on ostrich tunings. Having exactly one note, unison tunings are also ostrich tunings, which have exactly one pitch class (but may have two or more octaves, for example, E2, E3, and E4'); non-unison ostrich tunings are not regular.
Left-handed involution
See also: Interval inversion, chord inversion, and involution (mathematics)
The class of regular tunings is preserved under the involution from right-handed to left-handed tunings, as observed by William Sethares.[21] The present discussion of left-handed tunings is of interest to musical theorists, mathematicians, and left-handed persons, but may be skipped by other readers.
For left-handed guitars, the ordering of the strings reverses the ordering of the strings for right-handed guitars. For example, the left-handed involute of the standard tuning E–A–D–G–B–E is the "lefty" tuning E–B–G–D–A–E. Similarly, the "left-handed" involute of the "lefty" tuning is the standard ("righty") tuning.[21]
The reordering of open-strings in left-handed tunings has an important consequence. The chord fingerings for the right-handed tunings must be changed for left-handed tunings. However, the left-handed involute of a regular tuning is easily recognized: it is another regular tuning. Thus the chords for the involuted regular-tuning may be used for the left-handed involute of a regular tuning.
For example, the left-handed version of all-fourths tuning is all-fifths tuning, and the left-handed version of all-fifths tuning is all-fourths tuning. In general, the left-handed involute of the regular tuning based on the interval with $n$ semitones is the regular tuning based on its involuted interval with $12-n$ semitones: All-fourths tuning is based on the perfect fourth (five semitones), and all-fifths tuning is based on the perfect fifth (seven semitones), as mentioned previously.[21] The following table summarizes the lefty-righty pairings discussed by Sethares.[21]
Left-handed tunings[21]
Right-handedLeft-handed
Minor thirdsMajor sixths
Major thirdsMinor sixths
All fourthsAll fifths
Augmented fourthsDiminished fifths
All fifthsAll fourths
Minor sixthsMajor thirds
Major sixthsMinor thirds
The left-handed involute of a left-handed involute is the original right-handed tuning. The left-handed version of the trivial tuning C–C–C–C–C–C is also C–C–C–C–C–C. Among non-trivial tunings, only the class of augmented-fourths tunings is fixed under the lefty involution.[21][22]
Summary
The principal regular-tunings have their properties summarized in the following table:
Regular tuning | Interval (number of semitones) | Repetition | Advantages (each facilitates learning and improvisation) | Disadvantages (none use standard tuning's open chords) | Left-handed involution[21] | Guitarist(s)
Major thirds | Major third (4) | After 3 strings | Chromatic scale on four successive frets, hence reduced hand-stretching: major and minor chords are played on 2 successive frets, others (seconds, fourths, sevenths, and ninths) on 3[14] | Smaller range (without 7 strings); only three open notes | Minor-sixths tuning | Ralph Patt
All fourths | Perfect fourth (5) | Non-repetitive[37] | Uses chords from lowest 4 strings of standard tuning; same tuning as bass guitar | Difficult to play folk chords | All-fifths tuning | Stanley Jordan
Augmented fourths | Tritone (6) | After 2 strings | Symmetry ("left-handed") | Only 2 open notes | Augmented-fourths tuning | Shawn Lane
All fifths | Perfect fifth (7) | Non-repetitive[37] | Wide scope facilitates ensemble playing and single-note picking (rather than conventional chords); natural for all-fifths music (violin, cello, mandolin) | Very difficult to play conventional chord-voicings; requires extreme (light and heavy) strings | All-fourths tuning | Carl Kress, Kei Nakano
Notes
1. Sethares (2001)
2. Denyer (1992, p. 119)
3. Griewank (2010, p. 3)
4. Patt, Ralph (14 April 2008). "The major 3rd tuning". Ralph Patt's jazz web page. ralphpatt.com. Cited by Sethares (2011) and Griewank (2010, p. 1). Retrieved 10 June 2012.
5. Sethares (2001, pp. 54–55)
6. Griewank (2010, p. 9)
7. Sethares (2001, pp. 54)
8. "ACD#F#AC: Minor thirds (m3)". Guitar tunings database. 3 February 2013. Retrieved 4 February 2013.
9. "G#BDFG#B: Minor thirds (m3)". Guitar tunings database. 3 February 2013. Retrieved 4 February 2013.
10. Kirkeby (2012, "Fretmaps, major chords: Major Triads")
11. Sethares (2001, pp. 56)
12. Griewank (2010)
13. Kirkeby, Ole (1 March 2012). "Major thirds tuning". m3guitar.com. Cited by Sethares (2011) and Griewank (2010, p. 1). Archived from the original on 11 April 2015. Retrieved 10 June 2012.
14. Griewank (2010, p. 2)
15. In the table, the last row is labeled the "7th string" so that the low C tuning can be displayed without needing another table; the term "7th string" does not appear in the sources.
Similarly, the terms "-1st string" and "0th string" do not appear in the sources, which do discuss guitars having seven-eight strings.
16. Griewank (2010, p. 1)
17. Sethares (2001, pp. 58–59)
18. Bianco, Bob (1987). Guitar in Fourths. New York City: Calliope Music. ISBN 0-9605912-2-2. OCLC 16526869.
19. Ferguson (1986, p. 76): Ferguson, Jim (1986). "Stanley Jordan". In Casabona, Helen; Belew, Adrian (eds.). New directions in modern guitar. Guitar Player basic library. Hal Leonard Publishing Corporation. pp. 68–76. ISBN 0881884235.
20. Sethares (2001, p. 52)
21. Sethares (2001, p. 53)
22. Sethares (2001, "The augmented fourths tuning" 60–61)
23. Sethares (2001, "The augmented fourths tuning", p. 60)
24. Turner, Steve (30 December 2005). "Interview with Ron Jarzombek". RonJarzombek.com. Retrieved 23 May 2012..
25. Sethares (2001, "The mandoguitar tuning" 62–63)
26. Mulhern (1986): Mulhern, Tom (January 1986). "On the discipline of craft and art: An interview with Robert Fripp". Guitar Player. 20: 88–103. Retrieved 8 January 2013.
27. Musicologist Eric Tamm wrote that despite "considerable effort and search I just could not find a good set of chords whose sound I liked" for rhythm guitar. (Tamm 2003, Chapter 10: Postscript)
28. Sethares (2001, "The mandoguitar tuning", pp. 62–63)
29. "日本特許第6709929号 【発明の名称】弦楽器 【特許権者】中野 圭". patents.google.com. Retrieved 30 June 2023.
30. Fripp, Robert (5 February 2010). "Robert Fripp's diary: Friday, 5th February 2010". Discipline Global Mobile, DGM Live!. Archived from the original on 11 November 2013.
31. Fripp, Robert (22 April 2012). "Robert Fripp's diary: Sunday, 22nd April 2012". Discipline Global Mobile, DGM Live!. Archived from the original on 11 November 2013.
32. Octave4Plus of Gary Goodman
33. Tamm, Eric (2003) [1990], Robert Fripp: From crimson king to crafty master (Progressive Ears ed.), Faber and Faber (1990), ISBN 0-571-16289-4, archived from the original on 26 October 2011, retrieved 25 March 2012 Zipped Microsoft Word Document
34. Zwerdling, Daniel (5 September 1998). "California Guitar Trio". All Things Considered (NPR Weekend ed.). Washington DC: National Public Radio. Retrieved 25 March 2012.
35. Fripp (2011, p. 3): Fripp, Robert (2011). Pozzo, Horacio (ed.). Seven Guitar Craft themes: Definitive scores for guitar ensemble. "Original transcriptions by Curt Golden", "Layout scores and tablatures: Ariel Rzezak and Theo Morresi" (First limited ed.). Partitas Music. ISMN 979-0-9016791-7-7. DGM Sku partitas001.
36. Sethares (2001, pp. 64–67)
37. No repetition occurs in six strings; repetition occurs after 12 strings.
References
• Denyer, Ralph (1992). "Playing the guitar ('How the guitar is tuned', pp. 68–69, and 'Alternative tunings', pp. 158–159)". The guitar handbook. Special contributors Isaac Guillory and Alastair M. Crawford (Fully revised and updated ed.). London and Sydney: Pan Books. pp. 65–160. ISBN 0-330-32750-X.
• Griewank, Andreas (1 January 2010), Tuning guitars and reading music in major thirds, Matheon preprints, vol. 695, Berlin, Germany: DFG research center "MATHEON, Mathematics for key technologies" Berlin, MSC-Classification 97M80 Arts. Music. Language. Architecture. Postscript file and Pdf file, archived from the original on 8 November 2012
• Sethares, Bill (2001). "Regular tunings". Alternate tuning guide (PDF). Madison, Wisconsin: University of Wisconsin; Department of Electrical Engineering. pp. 52–67. Retrieved 19 May 2012.
• Sethares, Bill (10 January 2009) [2001]. Alternate tuning guide (PDF). Madison, Wisconsin: University of Wisconsin; Department of Electrical Engineering. Retrieved 19 May 2012.
• Sethares, William A. (18 May 2012). "Alternate tuning guide". Madison, Wisconsin: University of Wisconsin; Department of Electrical Engineering. Retrieved 8 December 2012.
Further reading
• Allen, Warren (22 September 2011) [30 December 1997]. "WA's encyclopedia of guitar tunings". Archived from the original on 13 July 2012. Retrieved 27 June 2012. (Recommended by Marcus, Gary (2012). Guitar zero: The science of learning to be musical. Oneworld. p. 234. ISBN 9781851689323.)
• Sethares, William A. (12 May 2012). "Alternate tuning guide: Interactive". Retrieved 27 June 2012. Uses Wolfram Cdf player.
• Weissman, Dick (2006). Guitar Tunings: A Comprehensive Guide. Routledge. ISBN 9780415974417. LCCN 0415974410.
External links
The Wikibook Guitar has a page on the topic of: Alternative tunings
Major thirds
• Professors Andreas Griewank and William Sethares each recommend discussions of major-thirds tuning by two jazz-guitarists (Sethares 2011, "Regular tunings"; Griewank 2010, p. 1):
• Ole Kirkeby for 6- and 7-string guitars: Charts of intervals major chords, and minor chords, and recommended gauges for strings.
• Ralph Patt for 6-, 7-, and 8-string guitars: Charts of scales, chords, and chord-progressions.
All fourths
• Yahoo group for all-fourths tuning
New standard tuning
• Courses in New Standard Tuning are offered by Guitar Circle, the successor of Guitar Craft:
• Guitar Circle of Europe
• Guitar Circle of Latin America
• Guitar Circle of North America
Guitar tunings
General
• Standard
• DADGAD
• Nashville
Open (Slide and slack-key guitar)
Tuning | Repetitive | Overtones | Other (often most popular)
Open A | A-C♯-E-A-C♯-E | A-A-E-A-C♯-E | E-A-C♯-E-A-E
Open B | B-D♯-F♯-B-D♯-F♯ | B-B-F♯-B-D♯-F♯ | B-F♯-B-F♯-B-D♯
Open C | C-E-G-C-E-G | C-C-G-C-E-G | C-G-C-G-C-E
Open D | D-F♯-A-D-F♯-A | D-D-A-D-F♯-A | D-A-D-F♯-A-D
Open E | E-G♯-B-E-G♯-B | E-E-B-E-G♯-B | E-B-E-G♯-B-E
Open F | F-A-C-F-A-C | F-F-C-F-A-C | C-F-C-F-A-F
Open G | G-B-D-G-B-D | G-G-D-G-B-D | D-G-D-G-B-D
Regular (semitones)
• Unison (0)
• Minor thirds (3)
• Major thirds (4)
• All fourths (5)
• Augmented fourths (6)
• New standard (7, 3)
• All fifths (7)
Repetitive (open pitches)
• Trivial (1)
• Augmented fourths (2)
• Major thirds (3)
• English open-C (3)
• Russian open-G (3)
• Minor thirds (4)
Miscellaneous
• Terz
• Bass guitar
• Steel guitar (C6, E9)
• Other instruments
• Musical tuning
• William Sethares
• List
• Category
Regular polyhedron
A regular polyhedron is a polyhedron whose symmetry group acts transitively on its flags. A regular polyhedron is highly symmetrical, being all of edge-transitive, vertex-transitive and face-transitive. In classical contexts, many different equivalent definitions are used; a common one is that the faces are congruent regular polygons which are assembled in the same way around each vertex.
A regular polyhedron is identified by its Schläfli symbol of the form {n, m}, where n is the number of sides of each face and m the number of faces meeting at each vertex. There are 5 finite convex regular polyhedra (the Platonic solids), and four regular star polyhedra (the Kepler–Poinsot polyhedra), making nine regular polyhedra in all. In addition, there are five regular compounds of the regular polyhedra.
The regular polyhedra
There are five convex regular polyhedra, known as the Platonic solids; four regular star polyhedra, the Kepler–Poinsot polyhedra; and five regular compounds of regular polyhedra:
Platonic solids
Main article: Platonic solid
• Tetrahedron {3, 3}, χ = 2
• Cube {4, 3}, χ = 2
• Octahedron {3, 4}, χ = 2
• Dodecahedron {5, 3}, χ = 2
• Icosahedron {3, 5}, χ = 2
Kepler–Poinsot polyhedra
Main article: Kepler–Poinsot polyhedra
• Small stellated dodecahedron {5/2, 5}, χ = −6
• Great dodecahedron {5, 5/2}, χ = −6
• Great stellated dodecahedron {5/2, 3}, χ = 2
• Great icosahedron {3, 5/2}, χ = 2
Regular compounds
Main article: Polytope compound § Regular compounds
• Two tetrahedra, 2 {3, 3}, χ = 4
• Five tetrahedra, 5 {3, 3}, χ = 10
• Ten tetrahedra, 10 {3, 3}, χ = 0
• Five cubes, 5 {4, 3}, χ = −10
• Five octahedra, 5 {3, 4}, χ = 10
Characteristics
Equivalent properties
The property of having a similar arrangement of faces around each vertex can be replaced by any of the following equivalent conditions in the definition:
• The vertices of a convex regular polyhedron all lie on a sphere.
• All the dihedral angles of the polyhedron are equal
• All the vertex figures of the polyhedron are regular polygons.
• All the solid angles of the polyhedron are congruent.[1]
Concentric spheres
A convex regular polyhedron has all three of the following related spheres (other polyhedra lack at least one kind), sharing its centre:
• An insphere, tangent to all faces.
• An intersphere or midsphere, tangent to all edges.
• A circumsphere, passing through all vertices.
Symmetry
The regular polyhedra are the most symmetrical of all the polyhedra. They lie in just three symmetry groups, which are named after the Platonic solids:
• Tetrahedral
• Octahedral (or cubic)
• Icosahedral (or dodecahedral)
Any shapes with icosahedral or octahedral symmetry will also contain tetrahedral symmetry.
Euler characteristic
The five Platonic solids have an Euler characteristic of 2. This simply reflects that the surface is a topological 2-sphere, and so is also true, for example, of any polyhedron which is star-shaped with respect to some interior point.
Interior points
The sum of the distances from any point in the interior of a regular polyhedron to the sides is independent of the location of the point (this is an extension of Viviani's theorem.) However, the converse does not hold, not even for tetrahedra.[2]
Duality of the regular polyhedra
In a dual pair of polyhedra, the vertices of one polyhedron correspond to the faces of the other, and vice versa.
The regular polyhedra show this duality as follows:
• The tetrahedron is self-dual, i.e. it pairs with itself.
• The cube and octahedron are dual to each other.
• The icosahedron and dodecahedron are dual to each other.
• The small stellated dodecahedron and great dodecahedron are dual to each other.
• The great stellated dodecahedron and great icosahedron are dual to each other.
The Schläfli symbol of the dual is just the original written backwards, for example the dual of {5, 3} is {3, 5}.
History
See also: Regular polytope § History of discovery
Prehistory
Stones carved in shapes resembling clusters of spheres or knobs have been found in Scotland and may be as much as 4,000 years old. Some of these stones show not only the symmetries of the five Platonic solids, but also some of the relations of duality amongst them (that is, that the centres of the faces of the cube give the vertices of an octahedron). Examples of these stones are on display in the John Evans room of the Ashmolean Museum at Oxford University. Why these objects were made, or how their creators gained the inspiration for them, is a mystery. There is doubt regarding the mathematical interpretation of these objects, as many have non-Platonic forms, and perhaps only one has been found to be a true icosahedron, as opposed to a reinterpretation of the icosahedron dual, the dodecahedron.[3]
It is also possible that the Etruscans preceded the Greeks in their awareness of at least some of the regular polyhedra, as evidenced by the discovery near Padua (in Northern Italy) in the late 19th century of a dodecahedron made of soapstone, and dating back more than 2,500 years (Lindemann, 1987).
Greeks
The earliest known written records of the regular convex solids originated from Classical Greece. When these solids were all discovered and by whom is not known, but Theaetetus (an Athenian) was the first to give a mathematical description of all five (Van der Waerden, 1954), (Euclid, book XIII). H.S.M. Coxeter (Coxeter, 1948, Section 1.9) credits Plato (400 BC) with having made models of them, and mentions that one of the earlier Pythagoreans, Timaeus of Locri, used all five in a correspondence between the polyhedra and the nature of the universe as it was then perceived – this correspondence is recorded in Plato's dialogue Timaeus. Euclid's reference to Plato led to their common description as the Platonic solids.
One might characterise the Greek definition as follows:
• A regular polygon is a (convex) planar figure with all edges equal and all corners equal.
• A regular polyhedron is a solid (convex) figure with all faces being congruent regular polygons, the same number arranged all alike around each vertex.
This definition rules out, for example, the square pyramid (since although all the faces are regular, the square base is not congruent to the triangular sides), or the shape formed by joining two tetrahedra together (since although all faces of that triangular bipyramid would be equilateral triangles, that is, congruent and regular, some vertices have 3 triangles and others have 4).
This concept of a regular polyhedron would remain unchallenged for almost 2000 years.
Regular star polyhedra
Regular star polygons such as the pentagram (star pentagon) were also known to the ancient Greeks – the pentagram was used by the Pythagoreans as their secret sign, but they did not use them to construct polyhedra. It was not until the early 17th century that Johannes Kepler realised that pentagrams could be used as the faces of regular star polyhedra. Some of these star polyhedra may have been discovered by others before Kepler's time, but Kepler was the first to recognise that they could be considered "regular" if one removed the restriction that regular polyhedra be convex. Two hundred years later Louis Poinsot also allowed star vertex figures (circuits around each corner), enabling him to discover two new regular star polyhedra along with rediscovering Kepler's. These four are the only regular star polyhedra, and have come to be known as the Kepler–Poinsot polyhedra. It was not until the mid-19th century, several decades after Poinsot published, that Cayley gave them their modern English names: (Kepler's) small stellated dodecahedron and great stellated dodecahedron, and (Poinsot's) great icosahedron and great dodecahedron.
The Kepler–Poinsot polyhedra may be constructed from the Platonic solids by a process called stellation. The reciprocal process to stellation is called facetting (or faceting). Every stellation of one polyhedron is dual, or reciprocal, to some facetting of the dual polyhedron. The regular star polyhedra can also be obtained by facetting the Platonic solids. This was first done by Bertrand around the same time that Cayley named them.
By the end of the 19th century there were therefore nine regular polyhedra – five convex and four star.
Regular polyhedra in nature
Each of the Platonic solids occurs naturally in one form or another.
The tetrahedron, cube, and octahedron all occur as crystals. These by no means exhaust the possible forms of crystals (Smith, 1982, p. 212), of which there are 48. Neither the regular icosahedron nor the regular dodecahedron is amongst them, but crystals can have the shape of a pyritohedron, which is visually almost indistinguishable from a regular dodecahedron. Truly icosahedral crystals may be formed by quasicrystalline materials, which are very rare in nature but can be produced in a laboratory.
A more recent discovery is of a series of new types of carbon molecule, known as the fullerenes (see Curl, 1991). Although C60, the most easily produced fullerene, looks more or less spherical, some of the larger varieties (such as C240, C480 and C960) are hypothesised to take on the form of slightly rounded icosahedra, a few nanometres across.
Regular polyhedra appear in biology as well. The coccolithophore Braarudosphaera bigelowii has a regular dodecahedral structure, about 10 micrometres across.[4] In the early 20th century, Ernst Haeckel described a number of species of radiolarians, some of whose shells are shaped like various regular polyhedra.[5] Examples include Circoporus octahedrus, Circogonia icosahedra, Lithocubus geometricus and Circorrhegma dodecahedra; the shapes of these creatures are indicated by their names.[5] The outer protein shells of many viruses form regular polyhedra. For example, HIV is enclosed in a regular icosahedron, as is the head of a typical myovirus.[6][7]
• The coccolithophore Braarudosphaera bigelowii has a regular dodecahedral structure
• The radiolarian Circogonia icosahedra has a regular icosahedral structure
• A myovirus typically has a regular icosahedral capsid (head) about 100 nanometers across.
In ancient times the Pythagoreans believed that there was a harmony between the regular polyhedra and the orbits of the planets. In the 17th century, Johannes Kepler studied data on planetary motion compiled by Tycho Brahe and for a decade tried to establish the Pythagorean ideal by finding a match between the sizes of the polyhedra and the sizes of the planets' orbits. His search failed in its original objective, but out of this research came Kepler's discoveries of the Kepler solids as regular polytopes, the realisation that the orbits of planets are not circles, and the laws of planetary motion for which he is now famous. In Kepler's time only five planets (excluding the earth) were known, nicely matching the number of Platonic solids. Kepler's work, and the discovery since that time of Uranus and Neptune, have invalidated the Pythagorean idea.
Around the same time as the Pythagoreans, Plato described a theory of matter in which the five elements (earth, air, fire, water and spirit) each comprised tiny copies of one of the five regular solids. Matter was built up from a mixture of these polyhedra, with each substance having different proportions in the mix. Two thousand years later Dalton's atomic theory would show this idea to be along the right lines, though not related directly to the regular solids.
Further generalisations
The 20th century saw a succession of generalisations of the idea of a regular polyhedron, leading to several new classes.
Regular skew apeirohedra
Main article: Regular skew apeirohedron
In the first decades, Coxeter and Petrie allowed "saddle" vertices with alternating ridges and valleys, enabling them to construct three infinite folded surfaces which they called regular skew polyhedra.[8] Coxeter offered a modified Schläfli symbol {l,m|n} for these figures, with {l,m} implying the vertex figure, with m regular l-gons around a vertex. The n defines n-gonal holes. Their vertex figures are regular skew polygons, vertices zig-zagging between two planes.
Infinite regular skew polyhedra in 3-space (partially drawn)
{4,6|4}
{6,4|4}
{6,6|3}
Regular skew polyhedra
Main article: Regular skew polyhedron
Finite regular skew polyhedra exist in 4-space. These finite regular skew polyhedra in 4-space can be seen as a subset of the faces of uniform 4-polytopes. They have planar regular polygon faces, but regular skew polygon vertex figures.
Two dual solutions are related to the 5-cell, two dual solutions are related to the 24-cell, and an infinite set of self-dual duoprisms generate regular skew polyhedra as {4, 4 | n}. In the infinite limit these approach a duocylinder and look like a torus in their stereographic projections into 3-space.
Finite regular skew polyhedra in 4-space (shown in orthogonal Coxeter plane projections A4 and F4, and in stereographic projection):
• {4, 6 | 3}: 30 {4} faces, 60 edges, 20 vertices
• {6, 4 | 3}: 20 {6} faces, 60 edges, 30 vertices
• {4, 8 | 3}: 288 {4} faces, 576 edges, 144 vertices
• {8, 4 | 3}: 144 {8} faces, 576 edges, 288 vertices
• {4, 4 | n}: n² {4} faces, 2n² edges, n² vertices
Regular polyhedra in non-Euclidean and other spaces
Studies of non-Euclidean (hyperbolic and elliptic) and other spaces such as complex spaces, discovered over the preceding century, led to the discovery of more new polyhedra such as complex polyhedra which could only take regular geometric form in those spaces.
Regular polyhedra in hyperbolic space
In H3 hyperbolic space, paracompact regular honeycombs have Euclidean tiling facets and vertex figures that act like finite polyhedra. Such tilings have an angle defect that can be closed by bending one way or the other. If the tiling is properly scaled, it will close as an asymptotic limit at a single ideal point. These Euclidean tilings are inscribed in a horosphere just as polyhedra are inscribed in a sphere (which contains zero ideal points). The sequence extends when hyperbolic tilings are themselves used as facets of noncompact hyperbolic tessellations, as in the heptagonal tiling honeycomb {7,3,3}; they are inscribed in an equidistant surface (a 2-hypercycle), which has two ideal points.
Regular tilings of the real projective plane
Another group of regular polyhedra comprise tilings of the real projective plane. These include the hemi-cube, hemi-octahedron, hemi-dodecahedron, and hemi-icosahedron. They are (globally) projective polyhedra, and are the projective counterparts of the Platonic solids. The tetrahedron does not have a projective counterpart as it does not have pairs of parallel faces which can be identified, as the other four Platonic solids do.
Hemi-cube
{4,3}
Hemi-octahedron
{3,4}
Hemi-dodecahedron
{5,3}
Hemi-icosahedron
{3,5}
These occur as dual pairs in the same way as the original Platonic solids do. Their Euler characteristics are all 1.
Abstract regular polyhedra
Further information: Abstract regular polytope
By now, polyhedra were firmly understood as three-dimensional examples of more general polytopes in any number of dimensions. The second half of the century saw the development of abstract algebraic ideas such as Polyhedral combinatorics, culminating in the idea of an abstract polytope as a partially ordered set (poset) of elements. The elements of an abstract polyhedron are its body (the maximal element), its faces, edges, vertices and the null polytope or empty set. These abstract elements can be mapped into ordinary space or realised as geometrical figures. Some abstract polyhedra have well-formed or faithful realisations, others do not. A flag is a connected set of elements of each dimension – for a polyhedron that is the body, a face, an edge of the face, a vertex of the edge, and the null polytope. An abstract polytope is said to be regular if its combinatorial symmetries are transitive on its flags – that is to say, that any flag can be mapped onto any other under a symmetry of the polyhedron. Abstract regular polytopes remain an active area of research.
Five such regular abstract polyhedra, which cannot be realised faithfully, were identified by H. S. M. Coxeter in his book Regular Polytopes (1977) and again by J. M. Wills in his paper "The combinatorially regular polyhedra of index 2" (1987). All five have C2×S5 symmetry but can only be realised with half the symmetry, that is C2×A5 or icosahedral symmetry.[9][10][11] They are all topologically equivalent to toroids. Their construction, by arranging n faces around each vertex, can be repeated indefinitely as tilings of the hyperbolic plane. In the diagrams below, the hyperbolic tiling images have colors corresponding to those of the polyhedra images.
The five polyhedra, with the corresponding hyperbolic tilings:
• Medial rhombic triacontahedron: dual of {5,4}6; (v,e,f) = (24,60,30); vertex figure {5}, {5/2}; faces: 30 rhombi; tiling {4, 5}; χ = −6
• Dodecadodecahedron: {5,4}6; (v,e,f) = (30,60,24); vertex figure (5.5/2)2; faces: 12 pentagons and 12 pentagrams; tiling {5, 4}; χ = −6
• Medial triambic icosahedron: dual of {5,6}4; (v,e,f) = (24,60,20); vertex figure {5}, {5/2}; faces: 20 hexagons; tiling {6, 5}; χ = −16
• Ditrigonal dodecadodecahedron: {5,6}4; (v,e,f) = (20,60,24); vertex figure (5.5/3)3; faces: 12 pentagons and 12 pentagrams; tiling {5, 6}; χ = −16
• Excavated dodecahedron: {6,6}6; (v,e,f) = (20,60,20); faces: 20 hexagrams; tiling {6, 6}; χ = −20
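The listed Euler characteristics can be recomputed from the (v, e, f) triples; a short Python sketch (illustrative only):

    # Illustrative sketch: chi = v - e + f for the five polyhedra listed above.
    for v, e, f in [(24, 60, 30), (30, 60, 24), (24, 60, 20),
                    (20, 60, 24), (20, 60, 20)]:
        print(v - e + f)   # -6, -6, -16, -16, -20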
Petrie dual
The Petrie dual of a regular polyhedron is a regular map whose vertices and edges correspond to the vertices and edges of the original polyhedron, and whose faces are the set of skew Petrie polygons.[12]
Regular petrials
• Petrial tetrahedron {3,3}π: (v,e,f) = (4,6,3), χ = 1; faces: 3 skew squares; related figure {4,3}3 = {4,3}/2 = {4,3}(2,0)
• Petrial cube {4,3}π: (v,e,f) = (8,12,4), χ = 0; faces: 4 skew hexagons; related figure {6,3}3 = {6,3}(2,0)
• Petrial octahedron {3,4}π: (v,e,f) = (6,12,4), χ = −2; faces: 4 skew hexagons; related figure {6,4}3 = {6,4}(4,0)
• Petrial dodecahedron {5,3}π: (v,e,f) = (20,30,6), χ = −4; faces: 6 skew decagons; related figure {10,3}5
• Petrial icosahedron {3,5}π: (v,e,f) = (12,30,6), χ = −12; faces: 6 skew decagons; related figure {10,5}3
Spherical polyhedra
Main article: Spherical polyhedron
The usual five regular polyhedra can also be represented as spherical tilings (tilings of the sphere):
Tetrahedron
{3,3}
Cube
{4,3}
Octahedron
{3,4}
Dodecahedron
{5,3}
Icosahedron
{3,5}
Small stellated dodecahedron
{5/2,5}
Great dodecahedron
{5,5/2}
Great stellated dodecahedron
{5/2,3}
Great icosahedron
{3,5/2}
Regular polyhedra that can only exist as spherical polyhedra
See also: Hosohedron and Dihedron
For a regular polyhedron whose Schläfli symbol is {m, n}, the number of polygonal faces may be found by:
$N_{2}={\frac {4n}{2m+2n-mn}}$
The Platonic solids known to antiquity are the only integer solutions for m ≥ 3 and n ≥ 3. The restriction m ≥ 3 enforces that the polygonal faces must have at least three sides.
When considering polyhedra as a spherical tiling, this restriction may be relaxed, since digons (2-gons) can be represented as spherical lunes, having non-zero area. Allowing m = 2 admits a new infinite class of regular polyhedra, which are the hosohedra. On a spherical surface, the regular polyhedron {2, n} is represented as n abutting lunes, with interior angles of 2π/n. All these lunes share two common vertices.[13]
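A short Python sketch (illustrative only) enumerates the solutions of this face-count formula, recovering the five Platonic solids for m, n ≥ 3 and the hosohedra once m = 2 is allowed:

    # Illustrative sketch: N2 = 4n / (2m + 2n - mn) for Schläfli symbol {m, n}.
    # The denominator must be positive for a finite polyhedron.
    def n_faces(m, n):
        d = 2 * m + 2 * n - m * n
        return (4 * n) // d if d > 0 and (4 * n) % d == 0 else None

    # With m, n >= 3 the only solutions are the five Platonic solids:
    print([(m, n, n_faces(m, n))
           for m in range(3, 7) for n in range(3, 7) if n_faces(m, n)])
    # [(3, 3, 4), (3, 4, 8), (3, 5, 20), (4, 3, 6), (5, 3, 12)]

    # Allowing m = 2 yields the hosohedra {2, n}, with N2 = n for every n:
    print([n_faces(2, n) for n in range(2, 7)])   # [2, 3, 4, 5, 6]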
A regular dihedron, {n, 2}[13] (2-hedron) in three-dimensional Euclidean space can be considered a degenerate prism consisting of two (planar) n-sided polygons connected "back-to-back", so that the resulting object has no depth, analogously to how a digon can be constructed with two line segments. However, as a spherical tiling, a dihedron can exist in nondegenerate form, with two n-sided faces covering the sphere, each face being a hemisphere, and vertices on a great circle. It is regular if the vertices are equally spaced.
Digonal dihedron
{2,2}
Trigonal dihedron
{3,2}
Square dihedron
{4,2}
Pentagonal dihedron
{5,2}
Hexagonal dihedron
{6,2}
... {n,2}
Digonal hosohedron
{2,2}
Trigonal hosohedron
{2,3}
Square hosohedron
{2,4}
Pentagonal hosohedron
{2,5}
Hexagonal hosohedron
{2,6}
... {2,n}
The hosohedron {2,n} is dual to the dihedron {n,2}. Note that when n = 2, we obtain the polyhedron {2,2}, which is both a hosohedron and a dihedron. All of these have Euler characteristic 2.
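A minimal Python sketch (illustrative only) confirms the Euler characteristic claim from the element counts described above:

    # Illustrative sketch: hosohedra and dihedra both have Euler characteristic 2.
    for n in range(2, 12):
        V, E, F = 2, n, n    # hosohedron {2, n}: 2 vertices, n edges, n lunes
        assert V - E + F == 2
        V, E, F = n, n, 2    # dihedron {n, 2}: n vertices, n edges, 2 faces
        assert V - E + F == 2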
See also
• Quasiregular polyhedron
• Semiregular polyhedron
• Uniform polyhedron
• Regular polytope
References
1. Cromwell, Peter R. (1997). Polyhedra. Cambridge University Press. p. 77. ISBN 0-521-66405-5.
2. Chen, Zhibo, and Liang, Tian. "The converse of Viviani's theorem", The College Mathematics Journal 37(5), 2006, pp. 390–391.
3. The Scottish Solids Hoax.
4. Hagino, K., Onuma, R., Kawachi, M. and Horiguchi, T. (2013) "Discovery of an endosymbiotic nitrogen-fixing cyanobacterium UCYN-A in Braarudosphaera bigelowii (Prymnesiophyceae)". PLoS One, 8(12): e81749. doi:10.1371/journal.pone.0081749.
5. Haeckel, E. (1904). Kunstformen der Natur. Available as Haeckel, E. Art forms in nature, Prestel USA (1998), ISBN 3-7913-1990-6. Online version at Kurt Stüber's Biolib (in German)
6. "Myoviridae". Virus Taxonomy. Elsevier. 2012. pp. 46–62. doi:10.1016/b978-0-12-384684-6.00002-1. ISBN 9780123846846.
7. STRAUSS, JAMES H.; STRAUSS, ELLEN G. (2008). "The Structure of Viruses". Viruses and Human Disease. Elsevier. pp. 35–62. doi:10.1016/b978-0-12-373741-0.50005-2. ISBN 9780123737410. S2CID 80803624.
8. Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999, ISBN 0-486-40919-8 (Chapter 5: Regular Skew Polyhedra in three and four dimensions and their topological analogues, Proceedings of the London Mathematics Society, Ser. 2, Vol 43, 1937.)
9. The Regular Polyhedra (of index two), David A. Richter
10. Regular Polyhedra of Index Two, I Anthony M. Cutler, Egon Schulte, 2010
11. Regular Polyhedra of Index Two, II Beitrage zur Algebra und Geometrie 52(2):357–387 · November 2010, Table 3, p.27
12. McMullen, Peter; Schulte, Egon (2002), Abstract Regular Polytopes, Encyclopedia of Mathematics and its Applications, vol. 92, Cambridge University Press, p. 192, ISBN 9780521814966
13. Coxeter, Regular polytopes, p. 12
• Bertrand, J. (1858). Note sur la théorie des polyèdres réguliers, Comptes rendus des séances de l'Académie des Sciences, 46, pp. 79–82.
• Haeckel, E. (1904). Kunstformen der Natur. Available as Haeckel, E. Art forms in nature, Prestel USA (1998), ISBN 3-7913-1990-6, or online at http://caliban.mpiz-koeln.mpg.de/~stueber/haeckel/kunstformen/natur.html
• Smith, J. V. (1982). Geometrical And Structural Crystallography. John Wiley and Sons.
• Sommerville, D. M. Y. (1930). An Introduction to the Geometry of n Dimensions. E. P. Dutton, New York. (Dover Publications edition, 1958). Chapter X: The Regular Polytopes.
• Coxeter, H.S.M.; Regular Polytopes (third edition). Dover Publications Inc. ISBN 0-486-61480-8
• 'there are 48 regular polyhedra' jan Misali
External links
• Weisstein, Eric W. "Regular Polyhedron". MathWorld.
|
Wikipedia
|
Regular 4-polytope
In mathematics, a regular 4-polytope is a regular four-dimensional polytope. They are the four-dimensional analogues of the regular polyhedra in three dimensions and the regular polygons in two dimensions.
There are six convex and ten star regular 4-polytopes, giving a total of sixteen.
History
The convex regular 4-polytopes were first described by the Swiss mathematician Ludwig Schläfli in the mid-19th century.[1] He discovered that there are precisely six such figures.
Schläfli also found four of the regular star 4-polytopes: the grand 120-cell, great stellated 120-cell, grand 600-cell, and great grand stellated 120-cell. He skipped the remaining six because he would not allow forms that failed the Euler characteristic on cells or vertex figures (for zero-hole tori: F − E + V = 2). That excludes cells and vertex figures such as the great dodecahedron {5,5/2} and small stellated dodecahedron {5/2,5}.
Edmund Hess (1843–1903) published the complete list in his 1883 German book Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder.
Construction
The existence of a regular 4-polytope $\{p,q,r\}$ is constrained by the existence of the regular polyhedra $\{p,q\},\{q,r\}$, which form its cells and vertex figures, and by the dihedral angle constraint
$\sin {\frac {\pi }{p}}\sin {\frac {\pi }{r}}>\cos {\frac {\pi }{q}}$
which ensures that the cells meet to form a closed 3-surface.
The six convex and ten star polytopes described are the only solutions to these constraints.
There are four nonconvex Schläfli symbols {p,q,r} that have valid cells {p,q} and vertex figures {q,r}, and pass the dihedral test, but fail to produce finite figures: {3,5/2,3}, {4,3,5/2}, {5/2,3,4}, {5/2,3,5/2}.
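A short Python sketch (illustrative only; the test function name is invented here) applies the dihedral-angle inequality to all pairings of Platonic cells and vertex figures, recovering exactly the six convex solutions:

    from math import sin, cos, pi

    # Illustrative sketch: test whether r cells {p, q} close up around an edge.
    def closes_up(p, q, r):
        return sin(pi / p) * sin(pi / r) > cos(pi / q)

    # Candidate cells and vertex figures are the five Platonic solids {p, q};
    # a vertex figure {q, r} must share q with the cell {p, q}.
    platonic = [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]
    convex = [(p, q, r) for (p, q) in platonic for (s, r) in platonic
              if s == q and closes_up(p, q, r)]
    print(convex)
    # [(3, 3, 3), (3, 3, 4), (3, 3, 5), (3, 4, 3), (4, 3, 3), (5, 3, 3)]

    # Star polytopes pass the same test with p, q, r allowed to be 5/2:
    print(closes_up(5 / 2, 5, 3))   # True (the small stellated 120-cell)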
Regular convex 4-polytopes
The regular convex 4-polytopes are the four-dimensional analogues of the Platonic solids in three dimensions and the convex regular polygons in two dimensions.
Five of the six are clearly analogues of the five corresponding Platonic solids. The sixth, the 24-cell, has no regular analogue in three dimensions. However, there exists a pair of irregular solids, the cuboctahedron and its dual the rhombic dodecahedron, which are partial analogues to the 24-cell (in complementary ways). Together they can be seen as the three-dimensional analogue of the 24-cell.
Each convex regular 4-polytope is bounded by a set of 3-dimensional cells which are all Platonic solids of the same type and size. These are fitted together along their respective faces (face-to-face) in a regular fashion.
Properties
Like their 3-dimensional analogues, the convex regular 4-polytopes can be naturally ordered by size as a measure of 4-dimensional content (hypervolume) for the same radius. Each greater polytope in the sequence is rounder than its predecessor, enclosing more content[2] within the same radius. The 4-simplex (5-cell) is the limit smallest case, and the 120-cell is the largest. Complexity (as measured by comparing configuration matrices or simply the number of vertices) follows the same ordering.
Regular convex 4-polytopes
Symmetry group A4 B4 F4 H4
Name 5-cell
Hyper-tetrahedron
5-point
16-cell
Hyper-octahedron
8-point
8-cell
Hyper-cube
16-point
24-cell
24-point
600-cell
Hyper-icosahedron
120-point
120-cell
Hyper-dodecahedron
600-point
Schläfli symbol {3, 3, 3} {3, 3, 4} {4, 3, 3} {3, 4, 3} {3, 3, 5} {5, 3, 3}
Mirror dihedrals 𝝅/3 𝝅/3 𝝅/3 𝝅/2 𝝅/2 𝝅/2 | 𝝅/3 𝝅/3 𝝅/4 𝝅/2 𝝅/2 𝝅/2 | 𝝅/4 𝝅/3 𝝅/3 𝝅/2 𝝅/2 𝝅/2 | 𝝅/3 𝝅/4 𝝅/3 𝝅/2 𝝅/2 𝝅/2 | 𝝅/3 𝝅/3 𝝅/5 𝝅/2 𝝅/2 𝝅/2 | 𝝅/5 𝝅/3 𝝅/3 𝝅/2 𝝅/2 𝝅/2
Vertices 5 tetrahedral | 8 octahedral | 16 tetrahedral | 24 cubical | 120 icosahedral | 600 tetrahedral
Edges 10 triangular | 24 square | 32 triangular | 96 triangular | 720 pentagonal | 1200 triangular
Faces 10 triangles | 32 triangles | 24 squares | 96 triangles | 1200 triangles | 720 pentagons
Cells 5 tetrahedra | 16 tetrahedra | 8 cubes | 24 octahedra | 600 tetrahedra | 120 dodecahedra
Tori 1 5-tetrahedron | 2 8-tetrahedron | 2 4-cube | 4 6-octahedron | 20 30-tetrahedron | 12 10-dodecahedron
Inscribed 120 in 120-cell | 675 in 120-cell | 2 16-cells | 3 8-cells | 25 24-cells | 10 600-cells
Great polygons (none) | 2 squares x 3 | 4 rectangles x 4 | 4 hexagons x 4 | 12 decagons x 6 | 100 irregular hexagons x 4
Petrie polygons 1 pentagon | 1 octagon | 2 octagons | 2 dodecagons | 4 30-gons | 20 30-gons
Long radius $1$ $1$ $1$ $1$ $1$ $1$
Edge length ${\sqrt {\tfrac {5}{2}}}\approx 1.581$ ${\sqrt {2}}\approx 1.414$ $1$ $1$ ${\tfrac {1}{\phi }}\approx 0.618$ ${\tfrac {1}{\phi ^{2}{\sqrt {2}}}}\approx 0.270$
Short radius ${\tfrac {1}{4}}$ ${\tfrac {1}{2}}$ ${\tfrac {1}{2}}$ ${\sqrt {\tfrac {1}{2}}}\approx 0.707$ ${\sqrt {\tfrac {\phi ^{4}}{8}}}\approx 0.926$ ${\sqrt {\tfrac {\phi ^{4}}{8}}}\approx 0.926$
Area $10\left({\tfrac {5{\sqrt {3}}}{8}}\right)\approx 10.825$ $32\left({\sqrt {\tfrac {3}{4}}}\right)\approx 27.713$ $24$ $96\left({\sqrt {\tfrac {3}{16}}}\right)\approx 41.569$ $1200\left({\tfrac {\sqrt {3}}{4\phi ^{2}}}\right)\approx 198.48$ $720\left({\tfrac {\sqrt {25+10{\sqrt {5}}}}{8\phi ^{4}}}\right)\approx 90.366$
Volume $5\left({\tfrac {5{\sqrt {5}}}{24}}\right)\approx 2.329$ $16\left({\tfrac {1}{3}}\right)\approx 5.333$ $8$ $24\left({\tfrac {\sqrt {2}}{3}}\right)\approx 11.314$ $600\left({\tfrac {\sqrt {2}}{12\phi ^{3}}}\right)\approx 16.693$ $120\left({\tfrac {15+7{\sqrt {5}}}{4\phi ^{6}{\sqrt {8}}}}\right)\approx 18.118$
4-Content ${\tfrac {\sqrt {5}}{24}}\left({\tfrac {\sqrt {5}}{2}}\right)^{4}\approx 0.146$ ${\tfrac {2}{3}}\approx 0.667$ $1$ $2$ ${\tfrac {{\text{Short}}\times {\text{Vol}}}{4}}\approx 3.863$ ${\tfrac {{\text{Short}}\times {\text{Vol}}}{4}}\approx 4.193$
The following table lists some properties of the six convex regular 4-polytopes. The symmetry groups of these 4-polytopes are all Coxeter groups and given in the notation described in that article. The number following the name of the group is the order of the group.
• 5-cell (pentachoron, pentatope, 4-simplex): family n-simplex (An); Schläfli symbol {3,3,3}; 5 vertices, 10 edges, 10 faces {3}, 5 cells {3,3}; vertex figure {3,3}; self-dual; symmetry group A4, [3,3,3], order 120
• 16-cell (hexadecachoron, 4-orthoplex): family n-orthoplex (Bn); Schläfli symbol {3,3,4}; 8 vertices, 24 edges, 32 faces {3}, 16 cells {3,3}; vertex figure {3,4}; dual 8-cell; symmetry group B4, [4,3,3], order 384
• 8-cell (octachoron, tesseract, 4-cube, hypercube): family n-cube (Bn); Schläfli symbol {4,3,3}; 16 vertices, 32 edges, 24 faces {4}, 8 cells {4,3}; vertex figure {3,3}; dual 16-cell; symmetry group B4, [4,3,3], order 384
• 24-cell (icositetrachoron, octaplex, polyoctahedron, pO): family Fn; Schläfli symbol {3,4,3}; 24 vertices, 96 edges, 96 faces {3}, 24 cells {3,4}; vertex figure {4,3}; self-dual; symmetry group F4, [3,4,3], order 1152
• 600-cell (hexacosichoron, tetraplex, polytetrahedron, pT): family n-pentagonal polytope (Hn); Schläfli symbol {3,3,5}; 120 vertices, 720 edges, 1200 faces {3}, 600 cells {3,3}; vertex figure {3,5}; dual 120-cell; symmetry group H4, [5,3,3], order 14400
• 120-cell (hecatonicosachoron, dodecacontachoron, dodecaplex, polydodecahedron, pD): family n-pentagonal polytope (Hn); Schläfli symbol {5,3,3}; 600 vertices, 1200 edges, 720 faces {5}, 120 cells {5,3}; vertex figure {3,3}; dual 600-cell; symmetry group H4, [5,3,3], order 14400
John Conway advocated the names simplex, orthoplex, tesseract, octaplex or polyoctahedron (pO), tetraplex or polytetrahedron (pT), and dodecaplex or polydodecahedron (pD).[3]
Norman Johnson advocated the names n-cell, or pentachoron, hexadecachoron, tesseract or octachoron, icositetrachoron, hexacosichoron, and hecatonicosachoron (or dodecacontachoron), coining the term polychoron as a 4D analogue of the 3D polyhedron and the 2D polygon, from the Greek roots poly ("many") and choros ("room" or "space").[4][5]
Since the Euler characteristic of every 4-polytope is zero, we have the 4-dimensional analogue of Euler's polyhedral formula:
$N_{0}-N_{1}+N_{2}-N_{3}=0\,$
where Nk denotes the number of k-faces in the polytope (a vertex is a 0-face, an edge is a 1-face, etc.).
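Using the element counts tabulated above, a minimal Python sketch (illustrative only) verifies this:

    # Illustrative sketch: N0 - N1 + N2 - N3 = 0 for the six convex regular
    # 4-polytopes, using (vertices, edges, faces, cells) from the table above.
    counts = {
        "5-cell":   (5, 10, 10, 5),
        "16-cell":  (8, 24, 32, 16),
        "8-cell":   (16, 32, 24, 8),
        "24-cell":  (24, 96, 96, 24),
        "600-cell": (120, 720, 1200, 600),
        "120-cell": (600, 1200, 720, 120),
    }
    for name, (n0, n1, n2, n3) in counts.items():
        assert n0 - n1 + n2 - n3 == 0, name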
The topology of any given 4-polytope is defined by its Betti numbers and torsion coefficients.[6]
As configurations
A regular 4-polytope can be completely described as a configuration matrix containing counts of its component elements. The rows and columns correspond to vertices, edges, faces, and cells. The diagonal numbers (upper left to lower right) say how many of each element occur in the whole 4-polytope. The non-diagonal numbers say how many of the column's element occur in or at the row's element. For example, there are 2 vertices in each edge (each edge has 2 vertices), and 2 cells meet at each face (each face belongs to 2 cells), in any regular 4-polytope. The configuration for the dual polytope can be obtained by rotating the matrix by 180 degrees.[7][8]
5-cell
{3,3,3}
16-cell
{3,3,4}
8-cell
{4,3,3}
24-cell
{3,4,3}
600-cell
{3,3,5}
120-cell
{5,3,3}
${\begin{bmatrix}{\begin{matrix}5&4&6&4\\2&10&3&3\\3&3&10&2\\4&6&4&5\end{matrix}}\end{bmatrix}}$ ${\begin{bmatrix}{\begin{matrix}8&6&12&8\\2&24&4&4\\3&3&32&2\\4&6&4&16\end{matrix}}\end{bmatrix}}$ ${\begin{bmatrix}{\begin{matrix}16&4&6&4\\2&32&3&3\\4&4&24&2\\8&12&6&8\end{matrix}}\end{bmatrix}}$ ${\begin{bmatrix}{\begin{matrix}24&8&12&6\\2&96&3&3\\3&3&96&2\\6&12&8&24\end{matrix}}\end{bmatrix}}$ ${\begin{bmatrix}{\begin{matrix}120&12&30&20\\2&720&5&5\\3&3&1200&2\\4&6&4&600\end{matrix}}\end{bmatrix}}$ ${\begin{bmatrix}{\begin{matrix}600&4&6&4\\2&1200&3&3\\5&5&720&2\\20&30&12&120\end{matrix}}\end{bmatrix}}$
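A minimal Python sketch (illustrative only) checks the 180-degree rotation rule on the 8-cell and 16-cell matrices above:

    # Illustrative sketch: rotating a configuration matrix by 180 degrees
    # (reverse the rows, then reverse each row) gives the dual's configuration.
    def rotate_180(matrix):
        return [row[::-1] for row in reversed(matrix)]

    eight_cell = [[16,  4,  6,  4],
                  [ 2, 32,  3,  3],
                  [ 4,  4, 24,  2],
                  [ 8, 12,  6,  8]]

    sixteen_cell = [[8,  6, 12,  8],
                    [2, 24,  4,  4],
                    [3,  3, 32,  2],
                    [4,  6,  4, 16]]

    assert rotate_180(eight_cell) == sixteen_cell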
Visualization
The following table shows some 2-dimensional projections of these 4-polytopes. Various other visualizations can be found in the external links below. The Coxeter-Dynkin diagram graphs are also given below the Schläfli symbol.
A4 = [3,3,3]B4 = [4,3,3]F4 = [3,4,3]H4 = [5,3,3]
5-cell16-cell8-cell24-cell600-cell120-cell
{3,3,3}{3,3,4}{4,3,3}{3,4,3}{3,3,5}{5,3,3}
Solid 3D orthographic projections
Tetrahedral
envelope
(cell/vertex-centered)
Cubic envelope
(cell-centered)
Cubic envelope
(cell-centered)
Cuboctahedral
envelope
(cell-centered)
Pentakis icosidodecahedral
envelope
(vertex-centered)
Truncated rhombic
triacontahedron
envelope
(cell-centered)
Wireframe Schlegel diagrams (Perspective projection)
Cell-centered
Cell-centered
Cell-centered
Cell-centered
Vertex-centered
Cell-centered
Wireframe stereographic projections (3-sphere)
Regular star (Schläfli–Hess) 4-polytopes
The Schläfli–Hess 4-polytopes are the complete set of 10 regular self-intersecting star polychora (four-dimensional polytopes).[10] They are named in honor of their discoverers: Ludwig Schläfli and Edmund Hess. Each is represented by a Schläfli symbol {p,q,r} in which one of the numbers is 5/2. They are thus analogous to the regular nonconvex Kepler–Poinsot polyhedra, which are in turn analogous to the pentagram.
Names
The names given here were coined by John Conway, extending Cayley's names for the Kepler–Poinsot polyhedra: along with stellated and great, he adds a grand modifier. Conway offered these operational definitions:
1. stellation – replaces edges by longer edges in same lines. (Example: a pentagon stellates into a pentagram)
2. greatening – replaces the faces by large ones in same planes. (Example: an icosahedron greatens into a great icosahedron)
3. aggrandizement – replaces the cells by large ones in same 3-spaces. (Example: a 600-cell aggrandizes into a grand 600-cell)
John Conway names the 10 forms from three regular-celled 4-polytopes: pT = polytetrahedron {3,3,5} (a tetrahedral 600-cell), pI = polyicosahedron {3,5,5/2} (an icosahedral 120-cell), and pD = polydodecahedron {5,3,3} (a dodecahedral 120-cell), with prefix modifiers g, a, and s for great, (ag)grand, and stellated. The final stellation, the great grand stellated polydodecahedron, contains them all as gaspD.
Symmetry
All ten polychora have [3,3,5] (H4) hexacosichoric symmetry. They are generated from 6 related Goursat tetrahedra rational-order symmetry groups: [3,5,5/2], [5,5/2,5], [5,3,5/2], [5/2,5,5/2], [5,5/2,3], and [3,3,5/2].
Each group has 2 regular star-polychora, except for two groups which are self-dual, having only one. So there are 4 dual-pairs and 2 self-dual forms among the ten regular star polychora.
Properties
Note:
• There are 2 unique vertex arrangements, matching those of the 120-cell and 600-cell.
• There are 4 unique edge arrangements, which are shown as wireframe orthographic projections.
• There are 7 unique face arrangements, shown as solids (face-colored) orthographic projections.
The cells (polyhedra), their faces (polygons), the polygonal edge figures and polyhedral vertex figures are identified by their Schläfli symbols.
For each star polychoron below: name; Conway name (abbreviation); Schläfli symbol; cells C {p,q}; faces F {p}; edges E {r}; vertices V {q,r}; density; Euler characteristic χ.
• Icosahedral 120-cell; polyicosahedron (pI); {3,5,5/2}; C = 120 {3,5}; F = 1200 {3}; E = 720 {5/2}; V = 120 {5,5/2}; density 4; χ = 480
• Small stellated 120-cell; stellated polydodecahedron (spD); {5/2,5,3}; C = 120 {5/2,5}; F = 720 {5/2}; E = 1200 {3}; V = 120 {5,3}; density 4; χ = −480
• Great 120-cell; great polydodecahedron (gpD); {5,5/2,5}; C = 120 {5,5/2}; F = 720 {5}; E = 720 {5}; V = 120 {5/2,5}; density 6; χ = 0
• Grand 120-cell; grand polydodecahedron (apD); {5,3,5/2}; C = 120 {5,3}; F = 720 {5}; E = 720 {5/2}; V = 120 {3,5/2}; density 20; χ = 0
• Great stellated 120-cell; great stellated polydodecahedron (gspD); {5/2,3,5}; C = 120 {5/2,3}; F = 720 {5/2}; E = 720 {5}; V = 120 {3,5}; density 20; χ = 0
• Grand stellated 120-cell; grand stellated polydodecahedron (aspD); {5/2,5,5/2}; C = 120 {5/2,5}; F = 720 {5/2}; E = 720 {5/2}; V = 120 {5,5/2}; density 66; χ = 0
• Great grand 120-cell; great grand polydodecahedron (gapD); {5,5/2,3}; C = 120 {5,5/2}; F = 720 {5}; E = 1200 {3}; V = 120 {5/2,3}; density 76; χ = −480
• Great icosahedral 120-cell; great polyicosahedron (gpI); {3,5/2,5}; C = 120 {3,5/2}; F = 1200 {3}; E = 720 {5}; V = 120 {5/2,5}; density 76; χ = 480
• Grand 600-cell; grand polytetrahedron (apT); {3,3,5/2}; C = 600 {3,3}; F = 1200 {3}; E = 720 {5/2}; V = 120 {3,5/2}; density 191; χ = 0
• Great grand stellated 120-cell; great grand stellated polydodecahedron (gaspD); {5/2,3,3}; C = 120 {5/2,3}; F = 720 {5/2}; E = 1200 {3}; V = 600 {3,3}; density 191; χ = 0
See also
• Regular polytope
• List of regular polytopes
• Infinite regular 4-polytopes:
• One regular Euclidean honeycomb: {4,3,4}
• Four compact regular hyperbolic honeycombs: {3,5,3}, {4,3,5}, {5,3,4}, {5,3,5}
• Eleven paracompact regular hyperbolic honeycombs: {3,3,6}, {6,3,3}, {3,4,4}, {4,4,3}, {3,6,3}, {4,3,6}, {6,3,4}, {4,4,4}, {5,3,6}, {6,3,5}, and {6,3,6}.
• Abstract regular 4-polytopes:
• 11-cell {3,5,3}
• 57-cell {5,3,5}
• Uniform 4-polytope uniform 4-polytope families constructed from these 6 regular forms.
• Platonic solid
• Kepler-Poinsot polyhedra — regular star polyhedron
• Star polygon — regular star polygons
• 4-polytope
• 5-polytope
• 6-polytope
References
Citations
1. Coxeter 1973, p. 141, §7-x. Historical remarks.
2. Coxeter 1973, pp. 292–293, Table I(ii): The sixteen regular polytopes {p,q,r} in four dimensions: [An invaluable table providing all 20 metrics of each 4-polytope in edge length units. They must be algebraically converted to compare polytopes of unit radius.]
3. Conway, Burgiel & Goodman-Strass 2008, Ch. 26. Higher Still
4. "Convex and abstract polytopes", Programme and abstracts, MIT, 2005
5. Johnson, Norman W. (2018). "§ 11.5 Spherical Coxeter groups". Geometries and Transformations. Cambridge University Press. pp. 246–. ISBN 978-1-107-10340-5.
6. Richeson, David S. (2012). "23. Henri Poincaré and the Ascendancy of Topology". Euler's Gem: The Polyhedron Formula and the Birth of Topology. Princeton University Press. pp. 256–. ISBN 978-0-691-15457-2.
7. Coxeter 1973, § 1.8 Configurations
8. Coxeter, Complex Regular Polytopes, p.117
9. Conway, Burgiel & Goodman-Strass 2008, p. 406, Fig 26.2
10. Coxeter, Star polytopes and the Schläfli function f{α,β,γ) p. 122 2. The Schläfli-Hess polytopes
Bibliography
• Coxeter, H.S.M. (1973) [1948]. Regular Polytopes (3rd ed.). New York: Dover.
• Coxeter, H.S.M. (1969). Introduction to Geometry (2nd ed.). Wiley. ISBN 0-471-50458-0.
• D.M.Y. Sommerville (2020) [1930]. "X. The Regular Polytopes". Introduction to the Geometry of n Dimensions. Courier Dover. pp. 159–192. ISBN 978-0-486-84248-6.
• Conway, John H.; Burgiel, Heidi; Goodman-Strass, Chaim (2008). "26. Regular Star-polytopes". The Symmetries of Things. pp. 404–8. ISBN 978-1-56881-220-5.
• Hess, Edmund (1883). "Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder".
• Hess, Edmund (1885). "Uber die regulären Polytope höherer Art". Sitzungsber Gesells Beförderung Gesammten Naturwiss Marburg: 31–57.
• Sherk, F. Arthur; McMullen, Peter; Thompson, Anthony C.; Weiss, Asia Ivic, eds. (1995). Kaleidoscopes: Selected Writings of H.S.M. Coxeter. Wiley. ISBN 978-0-471-01003-6.
• (Paper 10) Coxeter, H.S.M. (1989). "Star Polytopes and the Schlafli Function f(α,β,γ)". Elemente der Mathematik. 44 (2): 25–36.
• Coxeter, H.S.M. (1991). Regular Complex Polytopes (2nd ed.). Cambridge University Press. ISBN 978-0-521-39490-1.
• McMullen, Peter; Schulte, Egon (2002). "Abstract Regular Polytopes" (PDF).
External links
• Weisstein, Eric W. "Regular polychoron". MathWorld.
• Jonathan Bowers, 16 regular 4-polytopes
• Regular 4D Polytope Foldouts
• Catalog of Polytope Images A collection of stereographic projections of 4-polytopes.
• A Catalog of Uniform Polytopes
• Dimensions 2 hour film about the fourth dimension (contains stereographic projections of all regular 4-polytopes)
• Reguläre Polytope
• The Regular Star Polychora
• Hypersolids
Regular 4-polytopes
Convex
5-cell • 8-cell • 16-cell • 24-cell • 120-cell • 600-cell
• {3,3,3}
• pentachoron
• 4-simplex
• {4,3,3}
• tesseract
• 4-cube
• {3,3,4}
• hexadecachoron
• 4-orthoplex
• {3,4,3}
• icositetrachoron
• octaplex
• {5,3,3}
• hecatonicosachoron
• dodecaplex
• {3,3,5}
• hexacosichoron
• tetraplex
Star
icosahedral 120-cell • small stellated 120-cell • great 120-cell • grand 120-cell • great stellated 120-cell • grand stellated 120-cell • great grand 120-cell • great icosahedral 120-cell • grand 600-cell • great grand stellated 120-cell
• {3,5,5/2}
• icosaplex
• {5/2,5,3}
• stellated dodecaplex
• {5,5/2,5}
• great dodecaplex
• {5,3,5/2}
• grand dodecaplex
• {5/2,3,5}
• great stellated dodecaplex
• {5/2,5,5/2}
• grand stellated dodecaplex
• {5,5/2,3}
• great grand dodecaplex
• {3,5/2,5}
• great icosaplex
• {3,3,5/2}
• grand tetraplex
• {5/2,3,3}
• great grand stellated dodecaplex
|
Wikipedia
|
5-polytope
In geometry, a five-dimensional polytope (or 5-polytope) is a polytope in five-dimensional space, bounded by (4-polytope) facets, pairs of which share a polyhedral cell.
Graphs of three regular and three uniform 5-polytopes.
5-simplex (hexateron)
5-orthoplex, 211
(Pentacross)
5-cube
(Penteract)
Expanded 5-simplex
Rectified 5-orthoplex
5-demicube, 121
(Demipenteract)
Definition
A 5-polytope is a closed five-dimensional figure with vertices, edges, faces, cells, and 4-faces. A vertex is a point where five or more edges meet. An edge is a line segment where four or more faces meet, and a face is a polygon where three or more cells meet. A cell is a polyhedron, and a 4-face is a 4-polytope. Furthermore, the following requirements must be met:
1. Each cell must join exactly two 4-faces.
2. Adjacent 4-faces are not in the same four-dimensional hyperplane.
3. The figure is not a compound of other figures which meet the requirements.
Characteristics
The topology of any given 5-polytope is defined by its Betti numbers and torsion coefficients.[1]
The value of the Euler characteristic used to characterise polyhedra does not generalize usefully to higher dimensions, whatever their underlying topology. This inadequacy of the Euler characteristic to reliably distinguish between different topologies in higher dimensions led to the discovery of the more sophisticated Betti numbers.[1]
Similarly, the notion of orientability of a polyhedron is insufficient to characterise the surface twistings of toroidal polytopes, and this led to the use of torsion coefficients.[1]
Classification
5-polytopes may be classified based on properties like "convexity" and "symmetry".
• A 5-polytope is convex if its boundary (including its cells, faces and edges) does not intersect itself and the line segment joining any two points of the 5-polytope is contained in the 5-polytope or its interior; otherwise, it is non-convex. Self-intersecting 5-polytopes are also known as star polytopes, from analogy with the star-like shapes of the non-convex Kepler-Poinsot polyhedra.
• A uniform 5-polytope has a symmetry group under which all vertices are equivalent, and its facets are uniform 4-polytopes. The faces of a uniform polytope must be regular.
Main article: Uniform 5-polytope
• A semi-regular 5-polytope contains two or more types of regular 4-polytope facets. There is only one such figure, called a demipenteract.
• A regular 5-polytope has all identical regular 4-polytope facets. All regular 5-polytopes are convex.
Main article: List_of_regular_polytopes § Convex_4
• A prismatic 5-polytope is constructed by a Cartesian product of two lower-dimensional polytopes. A prismatic 5-polytope is uniform if its factors are uniform. The hypercube is prismatic (product of a square and a cube), but is considered separately because it has symmetries other than those inherited from its factors.
• A 4-space tessellation is the division of four-dimensional Euclidean space into a regular grid of polychoral facets. Strictly speaking, tessellations are not polytopes as they do not bound a "5D" volume, but we include them here for the sake of completeness because they are similar in many ways to polytopes. A uniform 4-space tessellation is one whose vertices are related by a space group and whose facets are uniform 4-polytopes.
Regular 5-polytopes
Regular 5-polytopes can be represented by the Schläfli symbol {p,q,r,s}, with s {p,q,r} polychoral facets around each face.
There are exactly three such convex regular 5-polytopes:
1. {3,3,3,3} - 5-simplex
2. {4,3,3,3} - 5-cube
3. {3,3,3,4} - 5-orthoplex
For the three convex regular 5-polytopes, their elements are:
• 5-simplex {3,3,3,3}: 6 vertices, 15 edges, 20 faces, 15 cells, 6 4-faces; symmetry A5, order 120
• 5-cube {4,3,3,3}: 32 vertices, 80 edges, 80 faces, 40 cells, 10 4-faces; symmetry BC5, order 3840
• 5-orthoplex {3,3,3,4} (also {3,3,3^{1,1}}): 10 vertices, 40 edges, 80 faces, 80 cells, 32 4-faces; symmetry BC5, order 3840 (also 2×D5)
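These element counts also follow from standard combinatorial formulas (stated here as well-known facts, not drawn from the table itself); a short Python sketch (illustrative only):

    from math import comb

    # Illustrative sketch: k-face counts for n = 5, with 0 <= k <= 4.
    #   n-simplex:   C(n+1, k+1)
    #   n-cube:      2**(n-k) * C(n, k)
    #   n-orthoplex: 2**(k+1) * C(n, k+1)
    n = 5
    simplex   = [comb(n + 1, k + 1) for k in range(n)]           # [6, 15, 20, 15, 6]
    cube      = [2**(n - k) * comb(n, k) for k in range(n)]      # [32, 80, 80, 40, 10]
    orthoplex = [2**(k + 1) * comb(n, k + 1) for k in range(n)]  # [10, 40, 80, 80, 32]
    print(simplex, cube, orthoplex)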
Uniform 5-polytopes
Main article: Uniform 5-polytope
For three of the semiregular 5-polytopes, their elements are:
• Expanded 5-simplex t0,4{3,3,3,3}: 30 vertices, 120 edges, 210 faces, 180 cells, 162 4-faces; symmetry 2×A5, order 240
• 5-demicube {3,3^{2,1}} = h{4,3,3,3}: 16 vertices, 80 edges, 160 faces, 120 cells, 26 4-faces; symmetry D5, order 1920 (½BC5)
• Rectified 5-orthoplex t1{3,3,3,4} = t1{3,3,3^{1,1}}: 40 vertices, 240 edges, 400 faces, 240 cells, 42 4-faces; symmetry BC5, order 3840 (also 2×D5)
The expanded 5-simplex is the vertex figure of the uniform 5-simplex honeycomb. The vertex figure of the 5-demicube honeycomb is a rectified 5-orthoplex, and its facets are the 5-orthoplex and the 5-demicube.
Pyramids
Pyramidal 5-polytopes, or 5-pyramids, can be generated by a 4-polytope base in a 4-space hyperplane connected to a point off the hyperplane. The 5-simplex is the simplest example with a 4-simplex base.
See also
• List of regular polytopes#Five-dimensional regular polytopes and higher
References
1. Richeson, D.; Euler's Gem: The Polyhedron Formula and the Birth of Topoplogy, Princeton, 2008.
• T. Gosset: On the Regular and Semi-Regular Figures in Space of n Dimensions, Messenger of Mathematics, Macmillan, 1900
• A. Boole Stott: Geometrical deduction of semiregular from regular polytopes and space fillings, Verhandelingen der Koninklijke Akademie van Wetenschappen te Amsterdam, Eerste Sectie 11,1, Amsterdam, 1910
• H.S.M. Coxeter:
• H.S.M. Coxeter, M.S. Longuet-Higgins and J.C.P. Miller: Uniform Polyhedra, Philosophical Transactions of the Royal Society of London, London, 1954
• H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10]
• (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
• Klitzing, Richard. "5D uniform polytopes (polytera)".
External links
• Polytopes of Various Dimensions, Jonathan Bowers
• Uniform Polytera, Jonathan Bowers
• Multi-dimensional Glossary, Garrett Jones
Fundamental convex regular and uniform polytopes in dimensions 2–10
Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn
Regular polygon Triangle Square p-gon Hexagon Pentagon
Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron
Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell
Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube
Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221
Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321
Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421
Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube
Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube
Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope
Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
|
Wikipedia
|
Decagon
In geometry, a decagon (from the Greek δέκα déka and γωνία gonía, "ten angles") is a ten-sided polygon or 10-gon.[1] The total sum of the interior angles of a simple decagon is 1440°.
Regular decagon
• Type: regular polygon
• Edges and vertices: 10
• Schläfli symbol: {10}, t{5}
• Symmetry group: dihedral (D10), order 2×10
• Internal angle (degrees): 144°
• Properties: convex, cyclic, equilateral, isogonal, isotoxal
• Dual polygon: self
Regular decagon
A regular decagon has all sides of equal length and each internal angle equal to 144°.[1] Its Schläfli symbol is {10},[2] and it can also be constructed as a truncated pentagon, t{5}, a quasiregular decagon alternating two types of edges.
Side length
The picture shows a regular decagon with side length $a$ and radius $R$ of the circumscribed circle.
• The triangle $E_{10}E_{1}M$ has two equally long legs with length $R$ and a base with length $a$
• The circle around $E_{1}$ with radius $a$ intersects $]M\,E_{10}[$ in a point $P$ (not designated in the picture).
• Now the triangle ${E_{10}E_{1}P}\;$ is an isosceles triangle with vertex $E_{1}$ and with base angles $m\angle E_{1}E_{10}P=m\angle E_{10}PE_{1}=72^{\circ }\;$.
• Therefore $m\angle PE_{1}E_{10}=180^{\circ }-2\cdot 72^{\circ }=36^{\circ }\;$. So $\;m\angle ME_{1}P=72^{\circ }-36^{\circ }=36^{\circ }\;$ and hence $\;E_{1}MP\;$ is also an isosceles triangle with vertex $P$. The length of its legs is $a$, so the length of $[P\,E_{10}]$ is $R-a$.
• The isosceles triangles $E_{10}E_{1}M\;$ and $PE_{10}E_{1}\;$ have equal angles of 36° at the vertex, and so they're similar, hence: $\;{\frac {a}{R}}={\frac {R-a}{a}}$
• Multiplication with the denominators $R,a>0$ leads to the quadratic equation: $\;a^{2}=R^{2}-aR\;$
• This equation for the side length $a\,$ has one positive solution: $\;a={\frac {R}{2}}(-1+{\sqrt {5}})$
So the regular decagon can be constructed with ruler and compass.
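A quick numerical cross-check of the derived side length, as a Python sketch (illustrative only):

    from math import sin, pi, sqrt, isclose

    # Illustrative sketch: the side of a regular decagon inscribed in a circle
    # of radius R is the chord subtending 36 degrees, 2*R*sin(pi/10), which
    # matches the derived value R*(sqrt(5) - 1)/2.
    R = 1.0
    assert isclose(2 * R * sin(pi / 10), R * (sqrt(5) - 1) / 2)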
Further conclusions
$\;R={\frac {2a}{{\sqrt {5}}-1}}={\frac {a}{2}}({\sqrt {5}}+1)\;$ and the base height of $\Delta \,E_{10}E_{1}M\,$ (i.e. the length of $[M\,D]$) is $h={\sqrt {R^{2}-(a/2)^{2}}}={\frac {a}{2}}{\sqrt {5+2{\sqrt {5}}}}\;$ and the triangle has the area: $A_{\Delta }={\frac {a}{2}}\cdot h={\frac {a^{2}}{4}}{\sqrt {5+2{\sqrt {5}}}}$.
Area
The area of a regular decagon of side length a is given by:[3]
$A={\frac {5}{2}}a^{2}\cot \left({\frac {\pi }{10}}\right)={\frac {5}{2}}a^{2}{\sqrt {5+2{\sqrt {5}}}}\simeq 7.694208843\,a^{2}$
In terms of the apothem r (see also inscribed figure), the area is:
$A=10\tan \left({\frac {\pi }{10}}\right)r^{2}=2r^{2}{\sqrt {5\left(5-2{\sqrt {5}}\right)}}\simeq 3.249196962\,r^{2}$
In terms of the circumradius R, the area is:
$A=5\sin \left({\frac {\pi }{5}}\right)R^{2}={\frac {5}{2}}R^{2}{\sqrt {\frac {5-{\sqrt {5}}}{2}}}\simeq 2.938926261\,R^{2}$
An alternative formula is $A=2.5da$ where d is the distance between parallel sides, or the height when the decagon stands on one side as base, or the diameter of the decagon's inscribed circle. By simple trigonometry,
$d=2a\left(\cos {\tfrac {3\pi }{10}}+\cos {\tfrac {\pi }{10}}\right),$
and it can be written algebraically as
$d=a{\sqrt {5+2{\sqrt {5}}}}.$
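The formulas above agree numerically; a short Python sketch (illustrative only, using the standard relations a = 2R sin(π/10) and r = R cos(π/10)):

    from math import sin, cos, tan, pi, sqrt, isclose

    # Illustrative sketch: the four area formulas agree for any circumradius R.
    R = 1.0
    a = 2 * R * sin(pi / 10)        # side length
    r = R * cos(pi / 10)            # apothem
    d = a * sqrt(5 + 2 * sqrt(5))   # distance between parallel sides

    A = 5 * sin(pi / 5) * R**2                     # circumradius formula
    assert isclose(A, 2.5 * a**2 / tan(pi / 10))   # side-length formula
    assert isclose(A, 10 * tan(pi / 10) * r**2)    # apothem formula
    assert isclose(A, 2.5 * d * a)                 # A = 2.5 d a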
Sides
A regular decagon has 10 sides and is equilateral. It has 35 diagonals.
Construction
As 10 = 2 × 5, a power of two times a Fermat prime, it follows that a regular decagon is constructible using compass and straightedge, or by an edge-bisection of a regular pentagon.[4]
Construction of decagon
Construction of pentagon
An alternative (but similar) method is as follows:
1. Construct a pentagon in a circle by one of the methods shown in constructing a pentagon.
2. Extend a line from each vertex of the pentagon through the center of the circle to the opposite side of that same circle. Where each line cuts the circle is a vertex of the decagon. In other words, the image of a regular pentagon under a point reflection with respect of its center is a concentric congruent pentagon, and the two pentagons have in total the vertices of a concentric regular decagon.
3. The five corners of the pentagon constitute alternate corners of the decagon. Join these points to the adjacent new points to form the decagon.
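The constructibility statement can be cross-checked against the Gauss–Wantzel criterion, which says a regular n-gon is constructible exactly when Euler's totient φ(n) is a power of two; a short Python sketch (illustrative only, with invented helper names):

    from math import gcd

    # Illustrative sketch of the Gauss-Wantzel constructibility test.
    def phi(n):
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    def constructible(n):
        p = phi(n)
        return p & (p - 1) == 0    # True iff phi(n) is a power of two

    print([n for n in range(3, 21) if constructible(n)])
    # [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20] -- includes the decagon, n = 10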
Nonconvex regular decagon
The length ratio of the two unequal edges of a golden triangle is the golden ratio, denoted by $\Phi $, or its multiplicative inverse:
$\Phi -1={\frac {1}{\Phi }}=2\,\cos 72\,^{\circ }={\frac {1}{\,2\,\cos 36\,^{\circ }}}={\frac {\,{\sqrt {5}}-1\,}{2}}{\text{.}}$
So we can get the properties of a regular decagonal star through a tiling by golden triangles that fills this star polygon.
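A quick numerical confirmation of these identities, as a Python sketch (illustrative only):

    from math import cos, pi, sqrt, isclose

    # Illustrative sketch: numeric check of the golden-ratio identities above.
    Phi = (1 + sqrt(5)) / 2
    assert isclose(Phi - 1, 1 / Phi)
    assert isclose(1 / Phi, 2 * cos(72 * pi / 180))
    assert isclose(1 / Phi, 1 / (2 * cos(36 * pi / 180)))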
The golden ratio in decagon
In both the construction with a given circumcircle[5] and the construction with a given side length, the golden ratio, dividing a line segment by exterior division, is the determining construction element.
• In the construction with given circumcircle the circular arc around G with radius GE3 produces the segment AH, whose division corresponds to the golden ratio.
${\frac {\overline {AM}}{\overline {MH}}}={\frac {\overline {AH}}{\overline {AM}}}={\frac {1+{\sqrt {5}}}{2}}=\Phi \approx 1.618{\text{.}}$
• In the construction with given side length[6] the circular arc around D with radius DA produces the segment E10F, whose division corresponds to the golden ratio.
${\frac {\overline {E_{1}E_{10}}}{\overline {E_{1}F}}}={\frac {\overline {E_{10}F}}{\overline {E_{1}E_{10}}}}={\frac {R}{a}}={\frac {1+{\sqrt {5}}}{2}}=\Phi \approx 1.618{\text{.}}$
Decagon with given circumcircle,[5] animation
Decagon with a given side length,[6] animation
Symmetry
The regular decagon has Dih10 symmetry, order 20. There are 3 subgroup dihedral symmetries: Dih5, Dih2, and Dih1, and 4 cyclic group symmetries: Z10, Z5, Z2, and Z1.
These 8 symmetries can be seen in 10 distinct symmetries on the decagon, a larger number because the lines of reflection can either pass through vertices or edges. John Conway labels these by a letter and group order.[7] Full symmetry of the regular form is r20 and no symmetry is labeled a1. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines pass through both edges and vertices. Cyclic symmetries in the middle column are labeled as g for their central gyration orders.
Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g10 subgroup has no degrees of freedom but can be seen as directed edges.
The highest symmetry irregular decagons are d10, an isogonal decagon constructed by five mirrors which can alternate long and short edges, and p10, an isotoxal decagon, constructed with equal edge lengths, but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular decagon.
Dissection
5-cube projection, 40 rhomb dissection
Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m-1)/2 parallelograms.[8] In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the regular decagon, m=5, and it can be divided into 10 rhombs, with examples shown below. This decomposition can be seen as 10 of 80 faces in a Petrie polygon projection plane of the 5-cube. A dissection is based on 10 of 30 faces of the rhombic triacontahedron. The list OEIS: A006245 defines the number of solutions as 62, with 2 orientations for the first symmetric form, and 10 orientations for the other 6.
Regular decagon dissected into 10 rhombi
5-cube
Skew decagon
3 regular skew zig-zag decagons
{5}#{ } {5/2}#{ } {5/3}#{ }
A regular skew decagon is seen as zig-zagging edges of a pentagonal antiprism, a pentagrammic antiprism, and a pentagrammic crossed-antiprism.
A skew decagon is a skew polygon with 10 vertices and edges but not existing on the same plane. The interior of such a decagon is not generally defined. A skew zig-zag decagon has vertices alternating between two parallel planes.
A regular skew decagon is vertex-transitive with equal edge lengths. In 3-dimensions it will be a zig-zag skew decagon and can be seen in the vertices and side edges of a pentagonal antiprism, pentagrammic antiprism, and pentagrammic crossed-antiprism with the same D5d, [2+,10] symmetry, order 20.
These can also be seen in these 4 convex polyhedra with icosahedral symmetry. The polygons on the perimeter of these projections are regular skew decagons.
Orthogonal projections of polyhedra on 5-fold axes
Dodecahedron
Icosahedron
Icosidodecahedron
Rhombic triacontahedron
Petrie polygons
The regular skew decagon is the Petrie polygon for many higher-dimensional polytopes, shown in these orthogonal projections in various Coxeter planes:[9] The number of sides in the Petrie polygon is equal to the Coxeter number, h, for each symmetry family.
A9: 9-simplex. D6: 411, 131. B5: 5-orthoplex, 5-cube.
See also
• Decagonal number and centered decagonal number, figurate numbers modeled on the decagon
• Decagram, a star polygon with the same vertex positions as the regular decagon
References
1. Sidebotham, Thomas H. (2003), The A to Z of Mathematics: A Basic Guide, John Wiley & Sons, p. 146, ISBN 9780471461630.
2. Wenninger, Magnus J. (1974), Polyhedron Models, Cambridge University Press, p. 9, ISBN 9780521098595.
3. The elements of plane and spherical trigonometry, Society for Promoting Christian Knowledge, 1850, p. 59. Note that this source uses a as the edge length and gives the argument of the cotangent as an angle in degrees rather than in radians.
4. Ludlow, Henry H. (1904), Geometric Construction of the Regular Decagon and Pentagon Inscribed in a Circle, The Open Court Publishing Co..
5. Green, Henry (1861), Euclid's Plane Geometry, Books III–VI, Practically Applied, or Gradations in Euclid, Part II, London: Simpkin, Marshall,& CO., p. 116. Retrieved 10 February 2016.
6. Köller, Jürgen (2005), Regelmäßiges Zehneck, → 3. Section "Formeln, Ist die Seite a gegeben ..." (in German). Retrieved 10 February 2016.
7. John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss (2008), The Symmetries of Things, ISBN 978-1-56881-220-5 (Chapter 20, Generalized Schläfli symbols, Types of symmetry of a polygon, pp. 275–278)
8. Coxeter, Mathematical Recreations and Essays, thirteenth edition, p. 141
9. Coxeter, Regular polytopes, 12.4 Petrie polygon, pp. 223-226.
External links
• Weisstein, Eric W. "Decagon". MathWorld.
• Definition and properties of a decagon, with interactive animation
|
Wikipedia
|
Regular Figures
Regular Figures is a book on polyhedra and symmetric patterns, by Hungarian geometer László Fejes Tóth. It was published in 1964 by Pergamon in London and Macmillan in New York.
Topics
Regular Figures is divided into two parts, "Systematology of the Regular Figures" and "Genetics of the Regular Figures", each in five chapters.[1] Although the first part represents older and standard material, much of the second part is based on a large collection of research works by Fejes Tóth, published over the course of approximately 25 years, and on his previous exposition of this material in a 1953 German-language text.[2]
The first part of the book covers many of the same topics as a previously published book, Regular Polytopes (1947), by H. S. M. Coxeter,[3][4] but with a greater emphasis on group theory and the classification of symmetry groups.[1][4] Its first three chapters describe the symmetries that two-dimensional geometric objects can have: the 17 wallpaper groups of the Euclidean plane in the first chapter, with the first English-language presentation of the proof of their classification by Evgraf Fedorov, the regular spherical tilings in chapter two, and the uniform tilings of the hyperbolic plane in chapter three. Also mentioned is the Voderberg tiling by non-convex enneagons, as an example of a systematically-constructed tiling that lacks all symmetry (prefiguring the discovery of aperiodic tilings). The fourth chapter describes symmetric polyhedra, including the five Platonic solids, the 13 Archimedean solids, and the five parallelohedra also enumerated by Fedorov, which come from the discrete translational symmetries of Euclidean space. The fifth and final chapter of this section of the book extends this investigation into higher dimensions and the regular polytopes.[5]
The second part of the book concerns the principle that many of these symmetric patterns and shapes can be generated as the solutions to optimization problems, such as the Tammes problem of arranging a given number of points on a sphere so as to maximize the minimum distance between pairs of points. Isoperimetric inequalities for polyhedra and problems of packing density and covering density of sphere packings and coverings are also included, and the proofs make frequent use of Jensen's inequality. This part is organized into chapters in the same order as the first part of the book: Euclidean, spherical, and hyperbolic plane geometry, solid geometry, and higher-dimensional geometry.[1][2][5]
The book is heavily illustrated, including examples of ornamental patterns with the symmetries described,[2] and twelve two-color stereoscopic images.[1] Applications of its material, touched on in the book, include art and decoration, crystallography, urban planning, and the study of plant growth.[5]
Audience and reception
Reviewer W. L. Edge writes that the book's exposition combines "lightness of touch and conciseness of exposition in a quite delightful way",[2] and H. S. M. Coxeter similarly writes that the book has "everything that could be desired in a mathematical monograph: a pleasant style, careful explanation ..., [and] a great variety of topics with a single unifying idea".[5]
C. A. Rogers finds some of the proofs in the second part unconvincing and incomplete.[4] Patrick du Val complains that the level of difficulty is uneven, with the second part of the book being significantly more technical than the first, but nevertheless recommends it "to specialists in this field",[6] while Michael Goldberg calls the book "an excellent reference work".[7] Although calling the content of the book excellent, J. A. Todd complains that its production is marred by poor typographic quality.[3]
See also
• List of books about polyhedra
References
1. Sherk, F. A., "Review of Regular Figures", Mathematical Reviews, MR 0165423
2. Edge, W. L. (October 1965), "Review of Regular Figures", The Mathematical Gazette, 49 (369): 343–345, doi:10.2307/3612913, JSTOR 3612913
3. Todd, J. A. (December 1964), "Review of Regular Figures", Proceedings of the Edinburgh Mathematical Society, 14 (2): 174–175, doi:10.1017/s0013091500026055
4. Rogers, C. A. (1965), "Review of Regular Figures", Journal of the London Mathematical Society, s1-40 (1): 378, doi:10.1112/jlms/s1-40.1.378a
5. Coxeter, H. S. M. (December 4, 1964), "Geometry", Science, New Series, 146 (3649): 1288, doi:10.1126/science.146.3649.1288, JSTOR 1714987
6. Du Val, Patrick (August–September 1966), "Review of Regular Figures", American Mathematical Monthly, 73 (7): 799, doi:10.2307/2314022, JSTOR 2314022
7. Goldberg, Michael (April 1965), "Review of Regular Figures", Mathematics of Computation, 19 (89): 166, doi:10.2307/2004137, JSTOR 2004137
Further reading
• Florian, A., "Review of Regular Figures", zbMATH (in German), Zbl 0134.15705
|
Wikipedia
|
Regular Hadamard matrix
In mathematics, a regular Hadamard matrix is a Hadamard matrix whose row and column sums are all equal. While the order of a Hadamard matrix must be 1, 2, or a multiple of 4, regular Hadamard matrices carry the further restriction that the order be a square number. The excess, denoted E(H), of a Hadamard matrix H of order n is defined to be the sum of the entries of H. The excess satisfies the bound $|E(H)|\leq n^{3/2}$. A Hadamard matrix attains this bound if and only if it is regular.
Parameters
If $n=4u^{2}$ is the order of a regular Hadamard matrix, then the excess is $\pm 8u^{3}$ and the row and column sums all equal $\pm 2u$. It follows that each row has $2u^{2}\pm u$ positive entries and $2u^{2}\mp u$ negative entries. The orthogonality of rows implies that any two distinct rows have exactly $u^{2}\pm u$ positive entries in common. If H is interpreted as the incidence matrix of a block design, with 1 representing incidence and −1 representing non-incidence, then H corresponds to a symmetric 2-(v,k,λ) design with parameters $(4u^{2},2u^{2}\pm u,u^{2}\pm u)$. A design with these parameters is called a Menon design.
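As a sanity check of these parameters in the smallest case u = 1, n = 4, here is an illustrative numpy sketch (the particular matrix is one standard choice, not specified above):

```python
import numpy as np

# A regular Hadamard matrix of order n = 4u^2 with u = 1.
H = np.array([[ 1,  1,  1, -1],
              [ 1,  1, -1,  1],
              [ 1, -1,  1,  1],
              [-1,  1,  1,  1]])

n, u = 4, 1
assert np.array_equal(H @ H.T, n * np.eye(n))               # Hadamard: orthogonal rows
assert set(H.sum(axis=0)) == set(H.sum(axis=1)) == {2 * u}  # regular: all sums 2u
assert H.sum() == 8 * u**3                                  # excess E(H) = 8u^3 = n^(3/2)
assert all((row == 1).sum() == 2 * u**2 + u for row in H)   # 2u^2 + u = 3 positives/row
```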
Construction
Unsolved problem in mathematics:
Which square numbers can be the order of a regular Hadamard matrix?
A number of methods for constructing regular Hadamard matrices are known, and some exhaustive computer searches have been done for regular Hadamard matrices with specified symmetry groups, but it is not known whether every even perfect square is the order of a regular Hadamard matrix. Bush-type Hadamard matrices are regular Hadamard matrices of a special form, and are connected with finite projective planes.
History and naming
Like Hadamard matrices more generally, regular Hadamard matrices are named after Jacques Hadamard. Menon designs are named after P. Kesava Menon, and Bush-type Hadamard matrices are named after Kenneth A. Bush.
|
Wikipedia
|
Nonagon
In geometry, a nonagon (/ˈnɒnəɡɒn/) or enneagon (/ˈɛniəɡɒn/) is a nine-sided polygon or 9-gon.
Regular enneagon (nonagon)
A regular enneagon (nonagon)
Type: Regular polygon
Edges and vertices: 9
Schläfli symbol: {9}
Coxeter–Dynkin diagrams
Symmetry group: Dihedral (D9), order 2×9
Internal angle (degrees): 140°
Properties: Convex, cyclic, equilateral, isogonal, isotoxal
Dual polygon: Self
The name nonagon is a prefix hybrid formation, from Latin nonus ("ninth") + gonon, attested already in the 16th century in French as nonogone and in English from the 17th century. The name enneagon comes from Greek enneagonon (εννεα, "nine" + γωνον (from γωνία = "corner")), and is arguably more correct,[1] though less common than "nonagon".
Regular nonagon
A regular nonagon is represented by Schläfli symbol {9} and has internal angles of 140°. The area of a regular nonagon of side length a is given by
$A={\frac {9}{4}}a^{2}\cot {\frac {\pi }{9}}=(9/2)ar=9r^{2}\tan(\pi /9)=(9/2)R^{2}\sin(2\pi /9)\simeq 6.18182\,a^{2},$
where the radius r of the inscribed circle of the regular nonagon is
$r=(a/2)\cot(\pi /9)$
and where R is the radius of its circumscribed circle:
$R={\sqrt {(a/2)^{2}+r^{2}}}=r\sec(\pi /9)=(a/2)\csc(\pi /9).$
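A quick numerical check of these formulas for a = 1 (an illustrative Python sketch, not part of the source text):

```python
from math import pi, tan, sin, sqrt

a = 1.0                       # side length
r = (a / 2) / tan(pi / 9)     # inradius:     r = (a/2) cot(pi/9)
R = (a / 2) / sin(pi / 9)     # circumradius: R = (a/2) csc(pi/9)

A1 = (9 / 4) * a**2 / tan(pi / 9)      # (9/4) a^2 cot(pi/9)
A2 = (9 / 2) * a * r                   # (9/2) a r
A3 = (9 / 2) * R**2 * sin(2 * pi / 9)  # (9/2) R^2 sin(2 pi/9)
assert abs(A1 - A2) < 1e-12 and abs(A1 - A3) < 1e-12
assert abs(R - sqrt((a / 2)**2 + r**2)) < 1e-12
print(round(A1, 5))  # 6.18182
```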
Construction
Although a regular nonagon is not constructible with compass and straightedge (as 9 = 3², which is not a product of distinct Fermat primes), there are very old methods of construction that produce very close approximations.[2]
It can also be constructed using neusis, or by allowing the use of an angle trisector.
Symmetry
The regular enneagon has Dih9 symmetry, order 18. There are 2 subgroup dihedral symmetries: Dih3 and Dih1, and 3 cyclic group symmetries: Z9, Z3, and Z1.
These 6 symmetries can be seen in 6 distinct symmetries on the enneagon. John Conway labels these by a letter and group order.[4] Full symmetry of the regular form is r18 and no symmetry is labeled a1. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines pass through both edges and vertices. Cyclic symmetries in the middle column are labeled as g for their central gyration orders.
Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g9 subgroup has no degrees of freedom but can be seen as directed edges.
Tilings
The regular enneagon cannot tile the Euclidean plane on its own, but it can tessellate the plane with gaps, which can be filled with regular hexagons and isosceles triangles. In the notation of symmetrohedra this tiling is called H(*;3;*;[2]), with H representing *632 hexagonal symmetry in the plane.
Graphs
The K9 complete graph is often drawn as a regular enneagon with all 36 edges connected. This graph also represents an orthographic projection of the 9 vertices and 36 edges of the 8-simplex.
8-simplex (8D)
Pop culture references
• They Might Be Giants have a song entitled "Nonagon" on their children's album Here Come the 123s. It refers to both an attendee at a party at which "everybody in the party is a many-sided polygon" and a dance they perform at this party.[5]
• Slipknot's logo is also a version of a nonagon, being a nine-pointed star made of three triangles, referring to the nine members.
• King Gizzard & the Lizard Wizard have an album titled 'Nonagon Infinity', the album art featuring a nonagonal complete graph. The album consists of nine songs and repeats cyclically.
Architecture
Temples of the Baháʼí Faith, called Baháʼí Houses of Worship, are required to be nonagonal.
The U.S. Steel Tower is an irregular nonagon.
Garsų Gaudyklė in Lithuania is another nonagonal structure.
See also
• Enneagram (nonagram)
• Trisection of the angle 60°, Proximity construction
References
1. Eric W. Weisstein. "Nonagon". MathWorld--A Wolfram Web Resource. Retrieved 24 October 2018.
2. J. L. Berggren, "Episodes in the Mathematics of Medieval Islam", pp. 82–85, Springer-Verlag New York, Inc., 1st edition 1986. Retrieved on 11 December 2015.
3. Ernst Bindel, Helmut von Kügelgen. "KLASSISCHE PROBLEME DES GRIECHISCHEN ALTERTUMS IM MATHEMATIKUNTERRICHT DER OBERSTUFE" (PDF). Erziehungskunst. Bund der Freien Waldorfschulen Deutschlands. pp. 234–237. Retrieved on 14 July 2019.
4. John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss (2008), The Symmetries of Things, ISBN 978-1-56881-220-5 (Chapter 20, Generalized Schläfli symbols, Types of symmetry of a polygon, pp. 275–278)
5. TMBW.net
External links
• Properties of a Nonagon (with interactive animation)
|
Wikipedia
|
Kleene algebra
In mathematics, a Kleene algebra (/ˈkleɪni/ KLAY-nee; named after Stephen Cole Kleene) is an idempotent (and thus partially ordered) semiring endowed with a closure operator.[1] It generalizes the operations known from regular expressions.
Definition
Various inequivalent definitions of Kleene algebras and related structures have been given in the literature.[2] Here we will give the definition that seems to be the most common nowadays.
A Kleene algebra is a set A together with two binary operations + : A × A → A and · : A × A → A and one function * : A → A, written as a + b, ab and a* respectively, so that the following axioms are satisfied.
• Associativity of + and ·: a + (b + c) = (a + b) + c and a(bc) = (ab)c for all a, b, c in A.
• Commutativity of +: a + b = b + a for all a, b in A
• Distributivity: a(b + c) = (ab) + (ac) and (b + c)a = (ba) + (ca) for all a, b, c in A
• Identity elements for + and ·: There exists an element 0 in A such that for all a in A: a + 0 = 0 + a = a. There exists an element 1 in A such that for all a in A: a1 = 1a = a.
• Annihilation by 0: a0 = 0a = 0 for all a in A.
The above axioms define a semiring. We further require:
• + is idempotent: a + a = a for all a in A.
It is now possible to define a partial order ≤ on A by setting a ≤ b if and only if a + b = b (or equivalently: a ≤ b if and only if there exists an x in A such that a + x = b; with any definition, a ≤ b ≤ a implies a = b). With this order we can formulate the last four axioms about the operation *:
• 1 + a(a*) ≤ a* for all a in A.
• 1 + (a*)a ≤ a* for all a in A.
• if a and x are in A such that ax ≤ x, then a*x ≤ x
• if a and x are in A such that xa ≤ x, then x(a*) ≤ x [3]
Intuitively, one should think of a + b as the "union" or the "least upper bound" of a and b and of ab as some multiplication which is monotonic, in the sense that a ≤ b implies ax ≤ bx. The idea behind the star operator is a* = 1 + a + aa + aaa + ... From the standpoint of programming language theory, one may also interpret + as "choice", · as "sequencing" and * as "iteration".
Examples
Notational correspondence between Kleene algebras and regular expressions:
• Kleene algebra: +, ·, *, 0, 1
• Regular expressions: | (alternation), juxtaposition (not written), *, ∅, ε
Let Σ be a finite set (an "alphabet") and let A be the set of all regular expressions over Σ. We consider two such regular expressions equal if they describe the same language. Then A forms a Kleene algebra. In fact, this is a free Kleene algebra in the sense that any equation among regular expressions follows from the Kleene algebra axioms and is therefore valid in every Kleene algebra.
Again let Σ be an alphabet. Let A be the set of all regular languages over Σ (or the set of all context-free languages over Σ; or the set of all recursive languages over Σ; or the set of all languages over Σ). Then the union (written as +) and the concatenation (written as ·) of two elements of A again belong to A, and so does the Kleene star operation applied to any element of A. We obtain a Kleene algebra A with 0 being the empty set and 1 being the set that only contains the empty string.
Let M be a monoid with identity element e and let A be the set of all subsets of M. For two such subsets S and T, let S + T be the union of S and T and set ST = {st : s in S and t in T}. S* is defined as the submonoid of M generated by S, which can be described as {e} ∪ S ∪ SS ∪ SSS ∪ ... Then A forms a Kleene algebra with 0 being the empty set and 1 being {e}. An analogous construction can be performed for any small category.
The linear subspaces of a unital algebra over a field form a Kleene algebra. Given linear subspaces V and W, define V + W to be the sum of the two subspaces, and 0 to be the trivial subspace {0}. Define V · W = span {v · w | v ∈ V, w ∈ W}, the linear span of the product of vectors from V and W respectively. Define 1 = span {I}, the span of the unit of the algebra. The closure of V is the direct sum of all powers of V.
$V^{*}=\bigoplus _{i=0}^{\infty }V^{i}$
Suppose M is a set and A is the set of all binary relations on M. Taking + to be the union, · to be the composition and * to be the reflexive transitive closure, we obtain a Kleene algebra.
Every Boolean algebra with operations $\lor $ and $\land $ turns into a Kleene algebra if we use $\lor $ for +, $\land $ for · and set a* = 1 for all a.
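The two-element Boolean case is small enough to check the star axioms exhaustively; a minimal sketch (illustrative Python, names are mine):

```python
# Two-element Boolean Kleene algebra: + is "or", . is "and", a* = 1.
A = (0, 1)
plus  = lambda a, b: a | b
times = lambda a, b: a & b
star  = lambda a: 1
leq   = lambda a, b: plus(a, b) == b     # a <= b  iff  a + b = b

for a in A:
    assert leq(plus(1, times(a, star(a))), star(a))    # 1 + a a* <= a*
    assert leq(plus(1, times(star(a), a)), star(a))    # 1 + a* a <= a*
    for x in A:
        if leq(times(a, x), x):
            assert leq(times(star(a), x), x)           # ax <= x  =>  a*x <= x
        if leq(times(x, a), x):
            assert leq(times(x, star(a)), x)           # xa <= x  =>  xa* <= x
print("all star axioms hold")
```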
A quite different Kleene algebra can be used to implement the Floyd–Warshall algorithm, which computes the shortest path length for every two vertices of a weighted directed graph, by means of Kleene's algorithm, which computes a regular expression for every two states of a deterministic finite automaton. Using the extended real number line, take a + b to be the minimum of a and b and ab to be the ordinary sum of a and b (with the sum of +∞ and −∞ being defined as +∞). a* is defined to be the real number zero for nonnegative a and −∞ for negative a. This is a Kleene algebra with zero element +∞ and one element the real number zero. A weighted directed graph can then be considered as a deterministic finite automaton, with each transition labelled by its weight. For any two graph nodes (automaton states), the regular expressions computed by Kleene's algorithm evaluate, in this particular Kleene algebra, to the shortest path length between the nodes.[4]
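A minimal sketch of this min-plus algebra in Python (the function names are mine, not from the source); the triple loop is Kleene's algorithm, structurally the same as Floyd–Warshall:

```python
import math

INF = math.inf          # Kleene-algebra zero element
ONE = 0.0               # Kleene-algebra one element

def plus(a, b):  return min(a, b)                  # a + b := min(a, b)
def times(a, b): return a + b                      # a . b := ordinary sum
def star(a):     return 0.0 if a >= 0 else -INF    # a* as defined above

def shortest_paths(w):
    """Kleene's algorithm over the min-plus algebra. w[i][j] is the edge
    weight, INF if there is no edge. Negative weights (where star would
    return -inf) are not exercised in this sketch."""
    n = len(w)
    # start from 1 + W on the diagonal, W elsewhere
    d = [[plus(ONE, w[i][j]) if i == j else w[i][j] for j in range(n)]
         for i in range(n)]
    for k in range(n):
        skk = star(d[k][k])
        for i in range(n):
            for j in range(n):
                d[i][j] = plus(d[i][j], times(times(d[i][k], skk), d[k][j]))
    return d

w = [[INF, 3, 8], [INF, INF, 2], [1, INF, INF]]
print(shortest_paths(w))   # row 0 is [0, 3, 5]: node 0 reaches node 2 via node 1
```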
Properties
Zero is the smallest element: 0 ≤ a for all a in A.
The sum a + b is the least upper bound of a and b: we have a ≤ a + b and b ≤ a + b and if x is an element of A with a ≤ x and b ≤ x, then a + b ≤ x. Similarly, a1 + ... + an is the least upper bound of the elements a1, ..., an.
Multiplication and addition are monotonic: if a ≤ b, then
• a + x ≤ b + x,
• ax ≤ bx, and
• xa ≤ xb
for all x in A.
Regarding the star operation, we have
• 0* = 1 and 1* = 1,
• a ≤ b implies a* ≤ b* (monotonicity),
• an ≤ a* for every natural number n, where an is defined as n-fold multiplication of a,
• (a*)(a*) = a*,
• (a*)* = a*,
• 1 + a(a*) = a* = 1 + (a*)a,
• ax = xb implies (a*)x = x(b*),
• ((ab)*)a = a((ba)*),
• (a+b)* = a*(b(a*))*, and
• pq = 1 = qp implies q(a*)p = (qap)*.[5]
If A is a Kleene algebra and n is a natural number, then one can consider the set Mn(A) consisting of all n-by-n matrices with entries in A. Using the ordinary notions of matrix addition and multiplication, one can define a unique *-operation so that Mn(A) becomes a Kleene algebra.
History
Kleene introduced regular expressions and gave some of their algebraic laws.[6][7] Although he did not define Kleene algebras, he asked for a decision procedure for equivalence of regular expressions.[8] Redko proved that no finite set of equational axioms can characterize the algebra of regular languages.[9] Salomaa gave complete axiomatizations of this algebra; however, they depend on problematic inference rules.[10] The problem of providing a complete set of axioms, which would allow derivation of all equations among regular expressions, was intensively studied by John Horton Conway under the name of regular algebras,[11] however, the bulk of his treatment was infinitary. In 1981, Kozen gave a complete infinitary equational deductive system for the algebra of regular languages.[12] In 1994, he gave the above finite axiom system, which uses unconditional and conditional equalities (considering a ≤ b as an abbreviation for a + b = b), and is equationally complete for the algebra of regular languages, that is, two regular expressions a and b denote the same language only if a = b follows from the above axioms.[13]
Generalization (or relation to other structures)
Kleene algebras are a particular case of closed semirings, also called quasi-regular semirings or Lehmann semirings, which are semirings in which every element has at least one quasi-inverse satisfying the equation: a* = aa* + 1 = a*a + 1. This quasi-inverse is not necessarily unique.[14][15] In a Kleene algebra, a* is the least solution to the fixpoint equations: X = aX + 1 and X = Xa + 1.[15]
Closed semirings and Kleene algebras appear in algebraic path problems, a generalization of the shortest path problem.[15]
See also
• Action algebra
• Algebraic structure
• Kleene star
• Regular expression
• Star semiring
• Valuation algebra
Notes and references
1. Marc Pouly; Jürg Kohlas (2011). Generic Inference: A Unifying Theory for Automated Reasoning. John Wiley & Sons. p. 246. ISBN 978-1-118-01086-0.
2. For a survey, see: Kozen, Dexter (1990). "On Kleene algebras and closed semirings" (PDF). In Rovan, Branislav (ed.). Mathematical foundations of computer science, Proc. 15th Symp., MFCS '90, Banská Bystrica/Czech. 1990. Lecture Notes Computer Science. Vol. 452. Springer-Verlag. pp. 26–47. Zbl 0732.03047.
3. Kozen (1990), sect.2.1, p.3
4. Gross, Jonathan L.; Yellen, Jay (2003), Handbook of Graph Theory, Discrete Mathematics and Its Applications, CRC Press, p. 65, ISBN 9780203490204.
5. Kozen (1990), sect.2.1.2, p.5
6. S.C. Kleene (Dec 1951). Representation of Events in Nerve Nets and Finite Automata (PDF) (Technical report). U.S. Air Force / RAND Corporation. p. 98. RM-704. Here: sect.7.2, p.52
7. Kleene, Stephen C. (1956). "Representation of Events in Nerve Nets and Finite Automata" (PDF). Automata Studies, Annals of Mathematical Studies. Princeton Univ. Press. 34. Here: sect.7.2, p.26-27
8. Kleene (1956), p.35
9. V.N. Redko (1964). "On defining relations for the algebra of regular events" (PDF). Ukrainskii Matematicheskii Zhurnal. 16 (1): 120–126. (In Russian)
10. Arto Salomaa (Jan 1966). "Two complete axiom systems for the algebra of regular events" (PDF). Journal of the ACM. 13 (1): 158–169. doi:10.1145/321312.321326. S2CID 8445404.
11. Conway, J.H. (1971). Regular algebra and finite machines. London: Chapman and Hall. ISBN 0-412-10620-5. Zbl 0231.94041. Chap.IV.
12. Dexter Kozen (1981). "On induction vs. *-continuity" (PDF). In Dexter Kozen (ed.). Proc. Workshop Logics of Programs. Lect. Notes in Comput. Sci. Vol. 131. Springer. pp. 167–176.
13. Dexter Kozen (May 1994). "A Completeness Theorem for Kleene Algebras and the Algebra of Regular Events" (PDF). Information and Computation. 110 (2): 366–390. doi:10.1006/inco.1994.1037. — An earlier version appeared as: Dexter Kozen (May 1990). A Completeness Theorem for Kleene Algebras and the Algebra of Regular Events (Technical report). Cornell. p. 27. TR90-1123.
14. Jonathan S. Golan (30 June 2003). Semirings and Affine Equations over Them. Springer Science & Business Media. pp. 157–159. ISBN 978-1-4020-1358-4.
15. Marc Pouly; Jürg Kohlas (2011). Generic Inference: A Unifying Theory for Automated Reasoning. John Wiley & Sons. pp. 232 and 248. ISBN 978-1-118-01086-0.
• Kozen, Dexter. "CS786 Spring 04, Introduction to Kleene Algebra".
Further reading
• Peter Höfner (2009). Algebraic Calculi for Hybrid Systems. BoD – Books on Demand. pp. 10–13. ISBN 978-3-8391-2510-6. The introduction of this book reviews advances in the field of Kleene algebra made in the last 20 years, which are not discussed in the article above.
|
Wikipedia
|
Apeirogon
In geometry, an apeirogon (from Ancient Greek ἄπειρος apeiros 'infinite, boundless', and γωνία gonia 'angle') or infinite polygon is a polygon with an infinite number of sides. Apeirogons are the two-dimensional case of infinite polytopes. In some literature, the term "apeirogon" may refer only to the regular apeirogon, with an infinite dihedral group of symmetries.[1]
The regular apeirogon
Edges and vertices: ∞
Schläfli symbol: {∞}
Coxeter–Dynkin diagrams
Internal angle (degrees): 180°
Dual polygon: Self-dual
Definitions
Classical constructive definition
Given a point A0 in a Euclidean space and a translation S, define the point Ai to be the point obtained from i applications of the translation S to A0, so Ai = Si(A0). The set of vertices Ai with i any integer, together with edges connecting adjacent vertices, is a sequence of equal-length segments of a line, and is called the regular apeirogon as defined by H. S. M. Coxeter.[1]
A regular apeirogon can be defined as a partition of the Euclidean line E1 into infinitely many equal-length segments. It generalizes the regular n-gon, which may be defined as a partition of the circle S1 into finitely many equal-length segments.[2]
Modern abstract definition
An abstract polytope is a partially ordered set P (whose elements are called faces) with properties modeling those of the inclusions of faces of convex polytopes. The rank (or dimension) of an abstract polytope is determined by the length of the maximal ordered chains of its faces, and an abstract polytope of rank n is called an abstract n-polytope.[3]: 22–25
For abstract polytopes of rank 2, this means that: A) the elements of the partially ordered set are sets of vertices with either zero vertex (the empty set), one vertex, two vertices (an edge), or the entire vertex set (a two-dimensional face), ordered by inclusion of sets; B) each vertex belongs to exactly two edges; C) the undirected graph formed by the vertices and edges is connected.[3]: 22–25 [4]: 224
An abstract polytope is called an abstract apeirotope if it has infinitely many elements; an abstract 2-apeirotope is called an abstract apeirogon.[3]: 25
In an abstract polytope, a flag is a collection of one face of each dimension, all incident to each other (that is, comparable in the partial order); an abstract polytope is called regular if it has symmetries (structure-preserving permutations of its elements) that take any flag to any other flag. In the case of a two-dimensional abstract polytope, this is automatically true; the symmetries of the apeirogon form the infinite dihedral group.[3]: 31
Pseudogon
The regular pseudogon is a partition of the hyperbolic line H1 (instead of the Euclidean line) into segments of length 2λ, as an analogue of the regular apeirogon.[2]
Realizations
Definition
A realization of an abstract apeirogon is defined as a mapping from its vertices to a finite-dimensional geometric space (typically a Euclidean space) such that every symmetry of the abstract apeirogon corresponds to an isometry of the images of the mapping.[3]: 121 [4]: 225 Two realizations are called congruent if the natural bijection between their sets of vertices is induced by an isometry of their ambient Euclidean spaces.[3]: 126 [4]: 229 The classical definition of an apeirogon as an equally-spaced subdivision of the Euclidean line is a realization in this sense, as is the convex subset in the hyperbolic plane formed by the convex hull of equally-spaced points on a horocycle.[5] Other realizations are possible in higher-dimensional spaces.
Symmetries of a realization
The infinite dihedral group G of symmetries of a realization V of an abstract apeirogon P is generated by two reflections, the product of which translates each vertex of P to the next.[3]: 140–141 [4]: 231 The product of the two reflections can be decomposed as a product of a non-zero translation, finitely many rotations, and a possibly trivial reflection.[3]: 141 [4]: 231
Moduli space of realizations
Generally, the moduli space of realizations of an abstract polytope is a convex cone of infinite dimension.[3]: 127 [4]: 229–230 The realization cone of the abstract apeirogon has uncountably infinite algebraic dimension and cannot be closed in the Euclidean topology.[3]: 141 [4]: 232
Classification of Euclidean apeirogons
The realizations of two-dimensional abstract polytopes (including both polygons and apeirogons), in Euclidean spaces of at most three dimensions, can be classified into six types:
• convex polygons,
• star polygons,
• regular apeirogons in the Euclidean line,
• infinite skew polygons (infinite zig-zag polygons in the Euclidean plane),
• antiprisms (including star prisms and star antiprisms), and
• infinite helical polygons (evenly spaced points along a helix).[6]
Abstract apeirogons may be realized in all of these ways, in some cases mapping infinitely many different vertices of an abstract apeirogon onto finitely many points of the realization. An apeirogon also admits star polygon realizations and antiprismatic realizations with a non-discrete set of infinitely many points.
Generalizations
Higher dimension
Main articles: Apeirotope and Apeirohedron
Apeirohedra are the 3-dimensional analogues of apeirogons, and are the infinite analogues of polyhedra.[7] More generally, n-apeirotopes or infinite n-polytopes are the n-dimensional analogues of apeirogons, and are the infinite analogues of n-polytopes.[3]: 22–25
See also
• Apeirogonal tiling
• Apeirogonal prism
• Apeirogonal antiprism
• Teragon, a fractal generalized polygon that also has infinitely many sides
References
1. Coxeter, H. S. M. (1948). Regular polytopes. London: Methuen & Co. Ltd. p. 45.
2. Johnson, Norman W. (2018). "11: Finite Symmetry Groups". Geometries and transformations. Cambridge University Press. p. 226. ISBN 9781107103405.
3. McMullen, Peter; Schulte, Egon (December 2002). Abstract Regular Polytopes (1st ed.). Cambridge University Press. ISBN 0-521-81496-0.
4. McMullen, Peter (1994), "Realizations of regular apeirotopes", Aequationes Mathematicae, 47 (2–3): 223–239, doi:10.1007/BF01832961, MR 1268033, S2CID 121616949
5. Buchanan, Kristopher; Flores, Carlos; Wheeland, Sara; Jensen, Jeffrey; Grayson, David; Huff, Gregory (2017). Transmit beamforming for radar applications using circularly tapered random arrays. 2017 IEEE Radar Conference (Radar Conf). pp. 0112–0117. doi:10.1109/RADAR.2017.7944181. ISBN 978-1-4673-8823-8. S2CID 38429370.
6. Grünbaum, B. (1977). "Regular polyhedra – old and new". Aequationes Mathematicae. 16 (1–2): 119. doi:10.1007/BF01836414. S2CID 125049930.
7. Coxeter, H. S. M. (1937). "Regular Skew Polyhedra in Three and Four Dimensions". Proc. London Math. Soc. 43: 33–62.
External links
• Russell, Robert A. "Apeirogon". MathWorld.
• Olshevsky, George. "Apeirogon". Glossary for Hyperspace. Archived from the original on 4 February 2007.
|
Wikipedia
|
Apeirotope
In geometry, an apeirotope or infinite polytope is a generalized polytope which has infinitely many facets.
Definition
Abstract apeirotope
An abstract n-polytope is a partially ordered set P (whose elements are called faces) such that P contains a least face and a greatest face, each maximal totally ordered subset (called a flag) contains exactly n + 2 faces, P is strongly connected, and if a and b are two faces whose ranks differ by two, then there are exactly two faces lying strictly between a and b.[1][2] An abstract polytope is called an abstract apeirotope if it has infinitely many faces.[3]
An abstract polytope is called regular if its automorphism group Γ(P) acts transitively on all of the flags of P.[4]
Classification
There are two main geometric classes of apeirotope:[5]
• honeycombs in n dimensions, which completely fill an n-dimensional space.
• skew apeirotopes, comprising an n-dimensional manifold in a higher space
Honeycombs
Main article: Honeycomb (geometry)
In general, a honeycomb in n dimensions is an infinite example of a polytope in n + 1 dimensions.
Tilings of the plane and close-packed space-fillings of polyhedra are examples of honeycombs in two and three dimensions respectively.
A line divided into infinitely many finite segments is an example of an apeirogon.
Skew apeirotopes
Skew apeirogons
Main article: Skew apeirogon
A skew apeirogon in two dimensions forms a zig-zag line in the plane. If the zig-zag is even and symmetrical, then the apeirogon is regular.
Skew apeirogons can be constructed in any number of dimensions. In three dimensions, a regular skew apeirogon traces out a helical spiral and may be either left- or right-handed.
Infinite skew polyhedra
Main article: Regular skew apeirohedron
There are three regular skew apeirohedra, which look rather like polyhedral sponges:
• 6 squares around each vertex, Coxeter symbol {4,6|4}
• 4 hexagons around each vertex, Coxeter symbol {6,4|4}
• 6 hexagons around each vertex, Coxeter symbol {6,6|3}
There are thirty regular apeirohedra in Euclidean space.[6] These include those listed above, as well as (in the plane) polytopes of type {∞,3}, {∞,4}, {∞,6} and, in 3-dimensional space, blends of these with either an apeirogon or a line segment, together with the "pure" 3-dimensional apeirohedra (12 in number).
References
1. McMullen & Schulte (2002), pp. 22–25.
2. McMullen (1994), p. 224.
3. McMullen & Schulte (2002), p. 25.
4. McMullen & Schulte (2002), p. 31.
5. Grünbaum (1977).
6. McMullen & Schulte (2002, Section 7E)
Bibliography
• Grünbaum, B. (1977). "Regular Polyhedra—Old and New". Aequationes Mathematicae. 16: 1–20.
• McMullen, Peter (1994), "Realizations of regular apeirotopes", Aequationes Mathematicae, 47 (2–3): 223–239, doi:10.1007/BF01832961, MR 1268033
• McMullen, Peter; Schulte, Egon (2002), Abstract Regular Polytopes, Encyclopedia of Mathematics and its Applications, vol. 92, Cambridge: Cambridge University Press, doi:10.1017/CBO9780511546686, ISBN 0-521-81496-0, MR 1965665
|
Wikipedia
|
Chiliagon
In geometry, a chiliagon (/ˈkɪliəɡɒn/) or 1,000-gon is a polygon with 1,000 sides. Philosophers commonly refer to chiliagons to illustrate ideas about the nature and workings of thought, meaning, and mental representation.
Regular chiliagon
A regular chiliagon
Type: Regular polygon
Edges and vertices: 1000
Schläfli symbol: {1000}, t{500}, tt{250}, ttt{125}
Coxeter–Dynkin diagrams
Symmetry group: Dihedral (D1000), order 2×1000
Internal angle (degrees): 179.64°
Properties: Convex, cyclic, equilateral, isogonal, isotoxal
Dual polygon: Self
Regular chiliagon
A regular chiliagon is represented by Schläfli symbol {1,000} and can be constructed as a truncated 500-gon, t{500}, or a twice-truncated 250-gon, tt{250}, or a thrice-truncated 125-gon, ttt{125}.
The measure of each internal angle in a regular chiliagon is 179°38'24", or ${\frac {499\pi }{500}}$ rad. The area of a regular chiliagon with sides of length a is given by
$A=250a^{2}\cot {\frac {\pi }{1000}}\simeq 79577.2\,a^{2}$
This result differs from the area of its circumscribed circle by less than 4 parts per million.
Because 1,000 = 2³ × 5³, the number of sides is neither a product of distinct Fermat primes nor a power of two. Thus the regular chiliagon is not a constructible polygon. Indeed, it is not even constructible with the use of an angle trisector, as the number of sides is neither a product of distinct Pierpont primes, nor a product of powers of two and three. Therefore, construction of a chiliagon requires other techniques such as the quadratrix of Hippias, Archimedean spiral, or other auxiliary curves. For example, a 9° angle can first be constructed with compass and straightedge, which can then be quintisected (divided into five equal parts) twice using an auxiliary curve to produce the 21'36" internal angle required.
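These figures are easy to verify numerically; a small Python sketch (illustrative only):

```python
from math import pi, tan, sin

n, a = 1000, 1.0
interior = (n - 2) * 180 / n            # 179.64 degrees = 179 deg 38' 24"
A = (n / 4) * a**2 / tan(pi / n)        # area: 250 a^2 cot(pi/1000)
R = a / (2 * sin(pi / n))               # circumradius
circle = pi * R**2                      # area of the circumscribed circle

print(interior)                         # 179.64
print(round(A, 1))                      # 79577.2
print((circle - A) / circle * 1e6)      # about 3.3, i.e. under 4 parts per million
```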
Philosophical application
René Descartes uses the chiliagon as an example in his Sixth Meditation to demonstrate the difference between pure intellection and imagination. He says that, when one thinks of a chiliagon, he "does not imagine the thousand sides or see them as if they were present" before him – as he does when one imagines a triangle, for example. The imagination constructs a "confused representation," which is no different from that which it constructs of a myriagon (a polygon with ten thousand sides). However, he does clearly understand what a chiliagon is, just as he understands what a triangle is, and he is able to distinguish it from a myriagon. Therefore, the intellect is not dependent on imagination, Descartes claims, as it is able to entertain clear and distinct ideas when imagination is unable to.[1] Philosopher Pierre Gassendi, a contemporary of Descartes, was critical of this interpretation, believing that while Descartes could imagine a chiliagon, he could not understand it: one could "perceive that the word 'chiliagon' signifies a figure with a thousand angles [but] that is just the meaning of the term, and it does not follow that you understand the thousand angles of the figure any better than you imagine them."[2]
The example of a chiliagon is also referenced by other philosophers. David Hume points out that it is "impossible for the eye to determine the angles of a chiliagon to be equal to 1.996 right angles, or make any conjecture, that approaches this proportion."[3] Gottfried Leibniz comments on a use of the chiliagon by John Locke, noting that one can have an idea of the polygon without having an image of it, and thus distinguishing ideas from images.[4] Immanuel Kant refers instead to the enneacontahexagon (96-gon), but responds to the same question raised by Descartes.[5]
Henri Poincaré uses the chiliagon as evidence that "intuition is not necessarily founded on the evidence of the senses" because "we can not represent to ourselves a chiliagon, and yet we reason by intuition on polygons in general, which include the chiliagon as a particular case."[6]
Inspired by Descartes's chiliagon example, Roderick Chisholm and other 20th-century philosophers have used similar examples to make similar points. Chisholm's "speckled hen", which need not have a determinate number of speckles to be successfully imagined, is perhaps the most famous of these.[7]
Symmetry
The regular chiliagon has Dih1000 dihedral symmetry, order 2000, represented by 1,000 lines of reflection. Dih1000 has 15 dihedral subgroups: Dih500, Dih250, Dih125, Dih200, Dih100, Dih50, Dih25, Dih40, Dih20, Dih10, Dih5, Dih8, Dih4, Dih2, and Dih1. It also has 16 more cyclic symmetries as subgroups: Z1000, Z500, Z250, Z125, Z200, Z100, Z50, Z25, Z40, Z20, Z10, Z5, Z8, Z4, Z2, and Z1, with Zn representing rotational symmetry through multiples of 2π/n radians.
John Conway labels these lower symmetries with a letter and order of the symmetry follows the letter.[8] He gives d (diagonal) with mirror lines through vertices, p with mirror lines through edges (perpendicular), i with mirror lines through both vertices and edges, and g for rotational symmetry. a1 labels no symmetry.
These lower symmetries allow degrees of freedom in defining irregular chiliagons. Only the g1000 subgroup has no degrees of freedom but can be seen as directed edges.
Chiliagram
A chiliagram is a 1,000-sided star polygon. There are 199 regular forms[note 1] given by Schläfli symbols of the form {1000/n}, where n is an integer between 2 and 500 that is coprime to 1,000. There are also 300 regular star figures in the remaining cases.
For example, the regular {1000/499} star polygon is constructed by 1000 nearly radial edges. Each star vertex has an internal angle of 0.36 degrees.[note 2]
{1000/499}
Central area with moiré patterns
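The count of 199 regular forms can be checked directly from the coprimality condition (illustrative Python):

```python
from math import gcd

# n in [2, 500] with gcd(n, 1000) = 1 gives a regular star polygon {1000/n}.
star_polygons = [m for m in range(2, 501) if gcd(m, 1000) == 1]
print(len(star_polygons))        # 199 regular chiliagrams
print(499 - len(star_polygons))  # 300 regular star figures for the remaining n
```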
See also
• Myriagon
• Megagon
• Philosophy of Mind
• Philosophy of Language
Notes
1. 199 = 500 cases − 1 (convex) − 100 (multiples of 5) − 250 (multiples of 2) + 50 (multiples of 2 and 5)
2. 0.36 = 180(1 − 2/(1000/499)) = 180(1 − 998/1000) = 180(2/1000) = 180/500
References
1. Meditation VI by Descartes (English translation).
2. Sepkoski, David (2005). "Nominalism and constructivism in seventeenth-century mathematical philosophy". Historia Mathematica. 32: 33–59. doi:10.1016/j.hm.2003.09.002.
3. David Hume, The Philosophical Works of David Hume, Volume 1, Black and Tait, 1826, p. 101.
4. Jonathan Francis Bennett (2001), Learning from Six Philosophers: Descartes, Spinoza, Leibniz, Locke, Berkeley, Hume, Volume 2, Oxford University Press, ISBN 0198250924, p. 53.
5. Immanuel Kant, "On a Discovery," trans. Henry Allison, in Theoretical Philosophy After 1791, ed. Henry Allison and Peter Heath, Cambridge UP, 2002 [Akademie 8:121].
6. Henri Poincaré (1900) "Intuition and Logic in Mathematics" in William Bragg Ewald (ed) From Kant to Hilbert: A Source Book in the Foundations of Mathematics, Volume 2, Oxford University Press, 2007, ISBN 0198505361, p. 1015.
7. Roderick Chisholm, "The Problem of the Speckled Hen", Mind 51 (1942): pp. 368–373. "These problems are all descendants of Descartes's 'chiliagon' argument in the sixth of his Meditations" (Joseph Heath, Following the Rules: Practical Reasoning and Deontic Constraint, Oxford: OUP, 2008, p. 305, note 15).
8. The Symmetries of Things, Chapter 20
|
Wikipedia
|
Regular conditional probability
In probability theory, regular conditional probability is a concept that formalizes the notion of conditioning on the outcome of a random variable. The resulting conditional probability distribution is a parametrized family of probability measures called a Markov kernel.
Definition
Conditional probability distribution
Consider two random variables $X,Y:\Omega \to \mathbb {R} $. The conditional probability distribution of Y given X is a two-variable function $\kappa _{Y|X}:\mathbb {R} \times {\mathcal {B}}(\mathbb {R} )\to [0,1]$, defined case by case as follows.
If the random variable X is discrete, then
$\kappa _{Y|X}(x,A)=P(Y\in A|X=x)={\begin{cases}{\frac {P(Y\in A,X=x)}{P(X=x)}}&{\text{ if }}P(X=x)>0\\{\text{arbitrary value}}&{\text{ otherwise}}.\end{cases}}$
If the random variables X, Y are continuous with joint density $f_{X,Y}(x,y)$, then
$\kappa _{Y|X}(x,A)={\begin{cases}{\frac {\int _{A}f_{X,Y}(x,y)\mathrm {d} y}{\int _{\mathbb {R} }f_{X,Y}(x,y)\mathrm {d} y}}&{\text{ if }}\int _{\mathbb {R} }f_{X,Y}(x,y)\mathrm {d} y>0\\{\text{arbitrary value}}&{\text{ otherwise}}.\end{cases}}$
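A minimal discrete illustration in Python (the joint distribution below is invented for the example):

```python
import numpy as np

# Hypothetical joint pmf P(X = x, Y = y), x in {0, 1}, y in {0, 1, 2}.
joint = np.array([[0.10, 0.20, 0.10],   # x = 0
                  [0.15, 0.15, 0.30]])  # x = 1

def kappa(x, A):
    """kappa_{Y|X}(x, A) = P(Y in A | X = x) in the discrete case."""
    px = joint[x].sum()
    if px == 0:
        return 0.0                      # arbitrary value on the null set, as above
    return sum(joint[x, y] for y in A) / px

print(kappa(0, {1, 2}))                 # 0.75 (up to rounding)
# For fixed x, A -> kappa(x, A) is a probability measure:
assert abs(sum(kappa(0, {y}) for y in (0, 1, 2)) - 1) < 1e-12
```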
A more general definition can be given in terms of conditional expectation. Consider a function $e_{Y\in A}:\mathbb {R} \to [0,1]$ satisfying
$e_{Y\in A}(X(\omega ))=\mathbb {E} [1_{Y\in A}|X](\omega )$
for almost all $\omega $. Then the conditional probability distribution is given by
$\kappa _{Y|X}(x,A)=e_{Y\in A}(x).$
As with conditional expectation, this can be further generalized to conditioning on a sigma algebra ${\mathcal {F}}$. In that case the conditional distribution is a function $\Omega \times {\mathcal {B}}(\mathbb {R} )\to [0,1]$:
$\kappa _{Y|{\mathcal {F}}}(\omega ,A)=\mathbb {E} [1_{Y\in A}|{\mathcal {F}}]$
Regularity
For working with $\kappa _{Y|X}$, it is important that it be regular, that is:
1. For almost all x, $A\mapsto \kappa _{Y|X}(x,A)$ is a probability measure
2. For all A, $x\mapsto \kappa _{Y|X}(x,A)$ is a measurable function
In other words $\kappa _{Y|X}$ is a Markov kernel.
The second condition holds trivially, but the proof of the first is more involved. It can be shown that if Y is a random element $\Omega \to S$ in a Radon space S, there exists a $\kappa _{Y|X}$ that satisfies the first condition.[1] It is possible to construct more general spaces where a regular conditional probability distribution does not exist.[2]
Relation to conditional expectation
For discrete and continuous random variables, the conditional expectation can be expressed as
${\begin{aligned}\mathbb {E} [Y|X=x]&=\sum _{y}y\,P(Y=y|X=x)\\\mathbb {E} [Y|X=x]&=\int y\,f_{Y|X}(x,y)\mathrm {d} y\end{aligned}}$
where $f_{Y|X}(x,y)$ is the conditional density of Y given X.
This result can be extended to measure theoretical conditional expectation using the regular conditional probability distribution:
$\mathbb {E} [Y|X](\omega )=\int y\,\kappa _{Y|\sigma (X)}(\omega ,\mathrm {d} y)$ .
Formal definition
Let $(\Omega ,{\mathcal {F}},P)$ be a probability space, and let $T:\Omega \rightarrow E$ be a random variable, defined as a Borel-measurable function from $\Omega $ to its state space $(E,{\mathcal {E}})$. One should think of $T$ as a way to "disintegrate" the sample space $\Omega $ into $\{T^{-1}(x)\}_{x\in E}$. Using the disintegration theorem from measure theory, this allows us to "disintegrate" the measure $P$ into a collection of measures, one for each $x\in E$. Formally, a regular conditional probability is defined as a function $\nu :E\times {\mathcal {F}}\rightarrow [0,1],$ called a "transition probability", where:
• For every $x\in E$, $\nu (x,\cdot )$ is a probability measure on ${\mathcal {F}}$. Thus we provide one measure for each $x\in E$.
• For all $A\in {\mathcal {F}}$, $\nu (\cdot ,A)$ (a mapping $E\to [0,1]$) is ${\mathcal {E}}$-measurable, and
• For all $A\in {\mathcal {F}}$ and all $B\in {\mathcal {E}}$[3]
$P{\big (}A\cap T^{-1}(B){\big )}=\int _{B}\nu (x,A)\,P{\big (}T^{-1}(dx){\big )}.$
where $P\circ T^{-1}$ is the pushforward measure $T_{*}P$ of the distribution of the random element $T$, and the integration is over $x\in \mathrm {supp} \,T$, i.e. the support of $T_{*}P$. Specifically, if we take $B=E$, then $A\cap T^{-1}(E)=A$, and so
$P(A)=\int _{E}\nu (x,A)\,P{\big (}T^{-1}(dx){\big )}$,
where $\nu (x,A)$ can be written, in more familiar terms, as $P(A\ |\ T=x)$.
Alternate definition
Consider a Radon space $\Omega $ (that is, a probability space whose measure is defined on a Radon space endowed with the Borel sigma-algebra) and a real-valued random variable T. As discussed above, in this case there exists a regular conditional probability with respect to T. Moreover, we can alternatively define the regular conditional probability for an event A given a particular value t of the random variable T in the following manner:
$P(A|T=t)=\lim _{U\supset \{T=t\}}{\frac {P(A\cap U)}{P(U)}},$
where the limit is taken over the net of open neighborhoods U of t as they become smaller with respect to set inclusion. This limit is defined if and only if the probability space is Radon, and only in the support of T. This is the restriction of the transition probability to the support of T. To describe this limiting process rigorously:
For every $\epsilon >0,$ there exists an open neighborhood U of the event {T=t}, such that for every open V with $\{T=t\}\subset V\subset U,$
$\left|{\frac {P(A\cap V)}{P(V)}}-L\right|<\epsilon ,$
where $L=P(A|T=t)$ is the limit.
See also
• Conditioning (probability)
• Disintegration theorem
• Adherent point
• Limit point
References
1. Klenke, Achim. Probability theory : a comprehensive course (Second ed.). London. ISBN 978-1-4471-5361-0.
2. Faden, A.M., 1985. The existence of regular conditional probabilities: necessary and sufficient conditions. The Annals of Probability, 13(1), pp.288-298.
3. D. Leao Jr. et al. Regular conditional probability, disintegration of probability and Radon spaces. Proyecciones. Vol. 23, No. 1, pp. 15–29, May 2004, Universidad Católica del Norte, Antofagasta, Chile PDF
External links
• Regular Conditional Probability on PlanetMath
|
Wikipedia
|
Regular element of a Lie algebra
In mathematics, a regular element of a Lie algebra or Lie group is an element whose centralizer has dimension as small as possible. For example, in a complex semisimple Lie algebra, an element $X\in {\mathfrak {g}}$ is regular if its centralizer in ${\mathfrak {g}}$ has dimension equal to the rank of ${\mathfrak {g}}$, which in turn equals the dimension of some Cartan subalgebra ${\mathfrak {h}}$ (note that in earlier papers, an element of a complex semisimple Lie algebra was termed regular if it is semisimple and the kernel of its adjoint representation is a Cartan subalgebra). An element $g\in G$ of a Lie group $G$ is regular if its centralizer has dimension equal to the rank of $G$.
Basic case
In the specific case of ${\mathfrak {gl}}_{n}(\mathbb {k} )$, the Lie algebra of $n\times n$ matrices over an algebraically closed field $\mathbb {k} $ (such as the complex numbers), a regular element $M$ is an element whose Jordan normal form contains a single Jordan block for each eigenvalue (in other words, the geometric multiplicity of each eigenvalue is 1). The centralizer of a regular element is the set of polynomials of degree less than $n$ evaluated at the matrix $M$, and therefore the centralizer has dimension $n$ (which equals the rank of ${\mathfrak {gl}}_{n}$, but is not necessarily an algebraic torus).
If the matrix $M$ is diagonalisable, then it is regular if and only if there are $n$ different eigenvalues. To see this, notice that $M$ will commute with any matrix $P$ that stabilises each of its eigenspaces. If there are $n$ different eigenvalues, then this happens only if $P$ is diagonalisable on the same basis as $M$; in fact $P$ is a linear combination of the first $n$ powers of $M$, and the centralizer is an algebraic torus of complex dimension $n$ (real dimension $2n$); since this is the smallest possible dimension of a centralizer, the matrix $M$ is regular. However if there are equal eigenvalues, then the centralizer is the product of the general linear groups of the eigenspaces of $M$, and has strictly larger dimension, so that $M$ is not regular.
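This can be checked numerically: since vec(MX − XM) = (I ⊗ M − Mᵀ ⊗ I) vec(X), the centralizer of M is the kernel of that Kronecker operator. A sketch with numpy (illustrative, not from the article):

```python
import numpy as np

def centralizer_dim(M: np.ndarray) -> int:
    """dim{X : MX = XM}, via vec(MX - XM) = (I (x) M - M^T (x) I) vec(X)."""
    n = M.shape[0]
    K = np.kron(np.eye(n), M) - np.kron(M.T, np.eye(n))
    return n * n - np.linalg.matrix_rank(K)

# Distinct eigenvalues -> regular: centralizer dimension = rank of gl_3 = 3.
assert centralizer_dim(np.diag([1., 2., 3.])) == 3
# Repeated eigenvalue, diagonalisable -> not regular: dimension 2^2 + 1 = 5.
assert centralizer_dim(np.diag([1., 1., 2.])) == 5
# A single 3x3 nilpotent Jordan block -> regular again: dimension 3.
assert centralizer_dim(np.diag([1., 1.], k=1)) == 3
```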
For a connected compact Lie group $G$, the regular elements form an open dense subset, made up of $G$-conjugacy classes of the elements in a maximal torus $T$ which are regular in $G$. The regular elements of $T$ are themselves explicitly given as the complement of a set in $T$, a set of codimension-one subtori corresponding to the root system of $G$. Similarly, in the Lie algebra ${\mathfrak {g}}$ of $G$, the regular elements form an open dense subset which can be described explicitly as adjoint $G$-orbits of regular elements of the Lie algebra of $T$, the elements outside the hyperplanes corresponding to the root system.[1]
Definition
Let ${\mathfrak {g}}$ be a finite-dimensional Lie algebra over an infinite field.[2] For each $x\in {\mathfrak {g}}$, let
$p_{x}(t)=\det(t-\operatorname {ad} (x))=\sum _{i=0}^{\dim {\mathfrak {g}}}a_{i}(x)t^{i}$
be the characteristic polynomial of the adjoint endomorphism $\operatorname {ad} (x):y\mapsto [x,y]$ of ${\mathfrak {g}}$. Then, by definition, the rank of ${\mathfrak {g}}$ is the least integer $r$ such that $a_{r}(x)\neq 0$ for some $x\in {\mathfrak {g}}$ and is denoted by $\operatorname {rk} ({\mathfrak {g}})$.[3] For example, since $a_{\dim {\mathfrak {g}}}(x)=1$ for every x, ${\mathfrak {g}}$ is nilpotent (i.e., each $\operatorname {ad} (x)$ is nilpotent by Engel's theorem) if and only if $\operatorname {rk} ({\mathfrak {g}})=\dim {\mathfrak {g}}$.
Let ${\mathfrak {g}}_{\text{reg}}=\{x\in {\mathfrak {g}}|a_{\operatorname {rk} ({\mathfrak {g}})}(x)\neq 0\}$. By definition, a regular element of ${\mathfrak {g}}$ is an element of the set ${\mathfrak {g}}_{\text{reg}}$.[3] Since $a_{\operatorname {rk} ({\mathfrak {g}})}$ is a polynomial function on ${\mathfrak {g}}$, with respect to the Zariski topology, the set ${\mathfrak {g}}_{\text{reg}}$ is an open subset of ${\mathfrak {g}}$.
Over $\mathbb {C} $, ${\mathfrak {g}}_{\text{reg}}$ is a connected set (with respect to the usual topology),[4] but over $\mathbb {R} $, it is only a finite union of connected open sets.[5]
A Cartan subalgebra and a regular element
Over an infinite field, a regular element can be used to construct a Cartan subalgebra, a self-normalizing nilpotent subalgebra. Over a field of characteristic zero, this approach constructs all the Cartan subalgebras.
Given an element $x\in {\mathfrak {g}}$, let
${\mathfrak {g}}^{0}(x)=\bigcup _{n\geq 0}\ker(\operatorname {ad} (x)^{n}:{\mathfrak {g}}\to {\mathfrak {g}})$
be the generalized eigenspace of $\operatorname {ad} (x)$ for eigenvalue zero. It is a subalgebra of ${\mathfrak {g}}$.[6] Note that $\dim {\mathfrak {g}}^{0}(x)$ is the same as the (algebraic) multiplicity[7] of zero as an eigenvalue of $\operatorname {ad} (x)$; i.e., the least integer m such that $a_{m}(x)\neq 0$ in the notation in § Definition. Thus, $\operatorname {rk} ({\mathfrak {g}})\leq \dim {\mathfrak {g}}^{0}(x)$ and the equality holds if and only if $x$ is a regular element.[3]
The statement is then that if $x$ is a regular element, then ${\mathfrak {g}}^{0}(x)$ is a Cartan subalgebra.[8] Thus, $\operatorname {rk} ({\mathfrak {g}})$ is the dimension of at least some Cartan subalgebra; in fact, $\operatorname {rk} ({\mathfrak {g}})$ is the minimum dimension of a Cartan subalgebra. More strongly, over a field of characteristic zero (e.g., $\mathbb {R} $ or $\mathbb {C} $),[9]
• every Cartan subalgebra of ${\mathfrak {g}}$ has the same dimension; thus, $\operatorname {rk} ({\mathfrak {g}})$ is the dimension of an arbitrary Cartan subalgebra,
• an element x of ${\mathfrak {g}}$ is regular if and only if ${\mathfrak {g}}^{0}(x)$ is a Cartan subalgebra, and
• every Cartan subalgebra is of the form ${\mathfrak {g}}^{0}(x)$ for some regular element $x\in {\mathfrak {g}}$.
A regular element in a Cartan subalgebra of a complex semisimple Lie algebra
For a Cartan subalgebra ${\mathfrak {h}}$ of a complex semisimple Lie algebra ${\mathfrak {g}}$ with the root system $\Phi $, an element of ${\mathfrak {h}}$ is regular if and only if it is not in the union of hyperplanes $ \bigcup _{\alpha \in \Phi }\{h\in {\mathfrak {h}}\mid \alpha (h)=0\}$.[10] This is because: for $r=\dim {\mathfrak {h}}$,
• For each $h\in {\mathfrak {h}}$, the characteristic polynomial of $\operatorname {ad} (h)$ is $ t^{r}\left(t^{\dim {\mathfrak {g}}-r}-\sum _{\alpha \in \Phi }\alpha (h)t^{\dim {\mathfrak {g}}-r-1}+\cdots \pm \prod _{\alpha \in \Phi }\alpha (h)\right)$, so the coefficient $a_{r}(h)=\pm \prod _{\alpha \in \Phi }\alpha (h)$ is nonzero exactly when $\alpha (h)\neq 0$ for every root $\alpha $, i.e., when $h$ lies outside every root hyperplane.
This characterization is sometimes taken as the definition of a regular element (especially when only regular elements in Cartan subalgebras are of interest).
Notes
1. Sepanski, Mark R. (2006). Compact Lie Groups. Springer. p. 156. ISBN 978-0-387-30263-8.
2. Editorial note: the definition of a regular element over a finite field is unclear.
3. Bourbaki 1981, Ch. VII, § 2.2. Definition 2.
4. Serre 2001, Ch. III, § 1. Proposition 1.
5. Serre 2001, Ch. III, § 6.
6. This is a consequence of the binomial-type formula for ad: since $\operatorname {ad} (x)$ is a derivation, $\operatorname {ad} (x)^{n}[y,z]=\sum _{i=0}^{n}{\binom {n}{i}}[\operatorname {ad} (x)^{i}y,\operatorname {ad} (x)^{n-i}z]$.
7. Recall that the geometric multiplicity of an eigenvalue of an endomorphism is the dimension of the eigenspace while the algebraic multiplicity of it is the dimension of the generalized eigenspace.
8. Bourbaki 1981, Ch. VII, § 2.3. Theorem 1.
9. Bourbaki 1981, Ch. VII, § 3.3. Theorem 2.
10. Procesi 2007, Ch. 10, § 3.2.
References
• Bourbaki, N. (1981), Groupes et Algèbres de Lie, Éléments de Mathématique, Hermann
• Fulton, William; Harris, Joe (1991), Representation Theory, A First Course, Graduate Texts in Mathematics, vol. 129, Berlin, New York: Springer-Verlag, ISBN 978-0-387-97495-8, MR 1153249
• Procesi, Claudio (2007), Lie Groups: an approach through invariants and representation, Springer, ISBN 9780387260402
• Serre, Jean-Pierre (2001), Complex Semisimple Lie Algebras, Springer, ISBN 3-5406-7827-1
Regular embedding
In algebraic geometry, a closed immersion $i:X\hookrightarrow Y$ of schemes is a regular embedding of codimension r if each point x in X has an open affine neighborhood U in Y such that the ideal of $X\cap U$ is generated by a regular sequence of length r. A regular embedding of codimension one is precisely an effective Cartier divisor.
Not to be confused with regular scheme.
Examples and usage
For example, if X and Y are smooth over a scheme S and if i is an S-morphism, then i is a regular embedding. In particular, every section of a smooth morphism is a regular embedding.[1] If $\operatorname {Spec} B$ is regularly embedded into a regular scheme, then B is a complete intersection ring.[2]
The notion is used, for instance, in an essential way in Fulton's approach to intersection theory. The important fact is that when i is a regular embedding, if I is the ideal sheaf of X in Y, then the normal sheaf, the dual of $I/I^{2}$, is locally free (thus a vector bundle) and the natural map $\operatorname {Sym} (I/I^{2})\to \oplus _{0}^{\infty }I^{n}/I^{n+1}$ is an isomorphism: the normal cone $\operatorname {Spec} (\oplus _{0}^{\infty }I^{n}/I^{n+1})$ coincides with the normal bundle.
Non-examples
One non-example is a scheme which isn't equidimensional. For example, the scheme
$X={\text{Spec}}\left({\frac {\mathbb {C} [x,y,z]}{(xz,yz)}}\right)$
is the union of $\mathbb {A} ^{2}$ and $\mathbb {A} ^{1}$. Then, the embedding $X\hookrightarrow \mathbb {A} ^{3}$ isn't regular, since $X$ has dimension $1$ at every non-origin point of the $z$-axis but dimension $2$ at every non-origin point of the $xy$-plane.
Local complete intersection morphisms and virtual tangent bundles
A morphism of finite type $f:X\to Y$ is called a (local) complete intersection morphism if each point x in X has an open affine neighborhood U so that f |U factors as $U{\overset {j}{\to }}V{\overset {g}{\to }}Y$ where j is a regular embedding and g is smooth.[3] For example, if f is a morphism between smooth varieties, then f factors as $X\to X\times Y\to Y$ where the first map is the graph morphism and so is a complete intersection morphism. Notice that this definition is compatible with the one in EGA IV for the special case of flat morphisms.[4]
Let $f:X\to Y$ be a local-complete-intersection morphism that admits a global factorization: it is a composition $X{\overset {i}{\hookrightarrow }}P{\overset {p}{\to }}Y$ where $i$ is a regular embedding and $p$ a smooth morphism. Then the virtual tangent bundle is an element of the Grothendieck group of vector bundles on X given as:[5]
$T_{f}=[i^{*}T_{P/Y}]-[N_{X/P}]$,
where $T_{P/Y}=\Omega _{P/Y}^{\vee }$ is the relative tangent sheaf of $p$ (which is locally free since $p$ is smooth) and $N$ is the normal sheaf $({\mathcal {I}}/{\mathcal {I}}^{2})^{\vee }$ (where ${\mathcal {I}}$ is the ideal sheaf of $X$ in $P$), which is locally free since $i$ is a regular embedding.
More generally, if $f\colon X\rightarrow Y$ is any local complete intersection morphism of schemes, its cotangent complex $L_{X/Y}$ is perfect of Tor-amplitude [−1, 0]. If moreover $f$ is locally of finite type and $Y$ locally Noetherian, then the converse is also true.[6]
These notions are used for instance in the Grothendieck–Riemann–Roch theorem.
Non-Noetherian case
SGA 6 Exposé VII uses the following slightly weaker form of the notion of a regular embedding, which agrees with the one presented above for Noetherian schemes:
First, given a projective module E over a commutative ring A, an A-linear map $u:E\to A$ is called Koszul-regular if the Koszul complex determined by it is acyclic in dimension > 0 (consequently, it is a resolution of the cokernel of u).[7] Then a closed immersion $X\hookrightarrow Y$ is called Koszul-regular if the ideal sheaf determined by it is such that, locally, there are a finite free A-module E and a Koszul-regular surjection from E to the ideal sheaf.[8]
It is this Koszul regularity that was used in SGA 6 [9] for the definition of local complete intersection morphisms; it is indicated there that Koszul-regularity was intended to replace the definition given earlier in this article and that had appeared originally in the already published EGA IV.[10]
(These questions arise because the discussion of zero divisors is tricky for non-Noetherian rings, in that one cannot use the theory of associated primes.)
See also
• Regular submanifold
Notes
1. Sernesi 2006, D. Notes 2.
2. Sernesi 2006, D.1.
3. SGA 6 1971, Exposé VIII, Definition 1.1.; Sernesi 2006, D.2.1.
4. EGA IV 1967, Definition 19.3.6, p. 196
5. Fulton 1998, Appendix B.7.5.
6. Illusie 1971, Proposition 3.2.6 , p. 209
7. SGA 6 1971, Exposé VII. Definition 1.1. NB: We follow the terminology of the Stacks project.
8. SGA 6 1971, Exposé VII, Definition 1.4.
9. SGA 6 1971, Exposé VIII, Definition 1.1.
10. EGA IV 1967, § 16 no 9, p. 45
References
• Berthelot, Pierre; Alexandre Grothendieck; Luc Illusie, eds. (1971). Séminaire de Géométrie Algébrique du Bois Marie - 1966-67 - Théorie des intersections et théorème de Riemann-Roch - (SGA 6) (Lecture notes in mathematics 225) (in French). Berlin; New York: Springer-Verlag. xii+700. doi:10.1007/BFb0066283. ISBN 978-3-540-05647-8. MR 0354655.
• Fulton, William (1998), Intersection theory, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics], vol. 2, Berlin, New York: Springer-Verlag, ISBN 978-3-540-62046-4, MR 1644323, section B.7
• Grothendieck, Alexandre; Dieudonné, Jean (1967). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Quatrième partie". Publications Mathématiques de l'IHÉS. 32: 5–361. doi:10.1007/bf02732123. MR 0238860., section 16.9, p. 46
• Illusie, Luc (1971), Complexe Cotangent et Déformations I, Lecture Notes in Mathematics 239 (in French), Berlin, New York: Springer-Verlag, ISBN 978-3-540-05686-7
• Sernesi, Edoardo (2006). Deformations of Algebraic Schemes. Physica-Verlag. ISBN 9783540306153.
Enneagram (geometry)
In geometry, an enneagram (🟙 U+1F7D9) is a nine-pointed plane figure. It is sometimes called a nonagram, nonangle, or enneagon.[1]
Enneagram
Enneagrams shown as sequential stellations
Edges and vertices: 9
Symmetry group: dihedral (D9)
Internal angles: 100° ({9/2}), 20° ({9/4})
The word 'enneagram' combines the numeral prefix ennea- with the Greek suffix -gram. The gram suffix derives from γραμμῆς (grammēs) meaning a line.[2]
Regular enneagram
A regular enneagram is a 9-sided star polygon. It is constructed using the same points as the regular enneagon, but the points are connected in fixed steps. Two forms of regular enneagram exist:
• One form connects every second point and is represented by the Schläfli symbol {9/2}.
• The other form connects every fourth point and is represented by the Schläfli symbol {9/4}.
There is also a star figure, {9/3} or 3{3}, made from the regular enneagon points but connected as a compound of three equilateral triangles.[3][4] (If the triangles are alternately interlaced, this results in a Brunnian link.) This star figure is sometimes known as the star of Goliath, after {6/2} or 2{3}, the star of David.[5]
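The step construction is easy to experiment with. A minimal Python sketch (the helper `enneagram` is ours) shows why steps 2 and 4 give single polygons while step 3 closes early, so that {9/3} is a compound rather than a single star polygon:

```python
from math import cos, sin, pi

def enneagram(step):
    """Visit the nine enneagon vertices, advancing `step` points per edge,
    until the path returns to its starting vertex."""
    pts = [(cos(2 * pi * k / 9), sin(2 * pi * k / 9)) for k in range(9)]
    path, k = [0], step % 9
    while k != 0:
        path.append(k)
        k = (k + step) % 9
    return [pts[i] for i in path]

print(len(enneagram(2)))   # 9: {9/2} visits all points in one circuit
print(len(enneagram(4)))   # 9: {9/4} likewise
print(len(enneagram(3)))   # 3: one triangle of the compound {9/3} = 3{3}
```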
(Images: the complete graph K9; the regular star {9/2}; the regular compound {9/3} or 3{3}; the regular star {9/4}.)
Other enneagram figures
The final stellation of the icosahedron has 2-isogonal enneagram faces. It is a 9/4 wound star polyhedron, but the vertices are not equally spaced.
The Fourth Way teachings and the Enneagram of Personality use an irregular enneagram consisting of an equilateral triangle and an irregular hexagram based on 142857.
(Images: the Bahá'í nine-pointed star; a {9/3} enneagram.)
The nine-pointed star or enneagram can also symbolize the nine gifts or fruits of the Holy Spirit.[6]
In popular culture
• The heavy metal band Slipknot previously used the {9/3} star figure enneagram[7] and currently uses the {9/4} polygon as a symbol. The prior figure can be seen on the cover of All Hope Is Gone.
See also
• List of regular star polygons
• Baháʼí symbols
References
1. "Between a square rock and a hard pentagon: Fractional polygons". 28 September 2017.
2. γραμμή, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus.
3. Grünbaum, B. and G. C. Shephard; Tilings and Patterns, New York: W. H. Freeman & Co., (1987), ISBN 0-7167-1193-1.
4. Grünbaum, B.; Polyhedra with Hollow Faces, Proc of NATO-ASI Conference on Polytopes ... etc. (Toronto 1993), ed T. Bisztriczky et al., Kluwer Academic (1994) pp. 43-70.
5. Weisstein, Eric W. "Nonagram". mathworld.wolfram.com.
6. Our Christian Symbols by Friedrich Rest (1954), ISBN 0-8298-0099-9, page 13.
7. "slipknot". eBay.
Bibliography
• John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, ISBN 978-1-56881-220-5 (Chapter 26. pp. 404: Regular star-polytopes Dimension 2)
External links
• Media related to Enneagram at Wikimedia Commons
• Nonagram -- from Wolfram MathWorld
Similarity (network science)
Similarity in network analysis occurs when two nodes (or other more elaborate structures) fall in the same equivalence class.
There are three fundamental approaches to constructing measures of network similarity: structural equivalence, automorphic equivalence, and regular equivalence.[1] The three concepts form a hierarchy: every set of structural equivalences is also a set of automorphic and regular equivalences, and every set of automorphic equivalences is also a set of regular equivalences. Not all regular equivalences are necessarily automorphic or structural, and not all automorphic equivalences are necessarily structural.[2]
Visualizing similarity and distance
Clustering tools
Agglomerative hierarchical clustering of nodes on the basis of the similarity of their profiles of ties to other nodes provides a joining tree or dendrogram that visualizes the degree of similarity among cases, and can be used to find approximate equivalence classes.[2]
Multi-dimensional scaling tools
Main article: Multidimensional scaling
Usually, our goal in equivalence analysis is to identify and visualize "classes" or clusters of cases. In using cluster analysis, we are implicitly assuming that the similarity or distance among cases reflects a single underlying dimension. It is possible, however, that there are multiple "aspects" or "dimensions" underlying the observed similarities of cases. Factor or components analysis could be applied to correlations or covariances among cases. Alternatively, multi-dimensional scaling could be used (non-metric for data that are inherently nominal or ordinal; metric for valued).[2]
MDS represents the patterns of similarity or dissimilarity in the tie profiles among the actors (when applied to adjacency or distances) as a "map" in multi-dimensional space. This map lets us see how "close" actors are, whether they "cluster" in multi-dimensional space and how much variation there is along each dimension.[2]
Structural equivalence
Two vertices of a network are structurally equivalent if they share many of the same neighbors.
There is no actor who has exactly the same set of ties as actor A, so actor A is in a class by itself. The same is true for actors B, C, D and G. Each of these nodes has a unique set of edges to other nodes. E and F, however, fall in the same structural equivalence class. Each has only one edge; and that tie is to B. Since E and F have exactly the same pattern of edges with all the vertices, they are structurally equivalent. The same is true in the case of H and I.[2]
Structural equivalence is the strongest form of similarity. In many real networks exact equivalence is rare, and it is useful to relax the criterion and measure approximate equivalence instead.
A closely related concept is institutional equivalence: two actors (e.g., firms) are institutionally equivalent if they operate in the same set of institutional fields.[3] While structurally equivalent actors have identical relational patterns or network positions, institutional equivalence captures the similarity of institutional influences that actors experience from being in the same fields, regardless of how similar their network positions are. For example, two banks in Chicago might have very different patterns of ties (e.g., one may be a central node, and the other may be in a peripheral position) such that they are not structural equivalents, but because they both operate in the field of finance and banking and in the same geographically defined field (Chicago), they will be subject to some of the same institutional influences.[3]
Cosine similarity
A simple count of common neighbors for two vertices is not on its own a very good measure: one must also account for the vertices' degrees and for how many common neighbors other pairs of vertices have. Cosine similarity takes these considerations into account and allows for vertices of varying degree. Salton proposed regarding the i-th and j-th rows/columns of the adjacency matrix as two vectors and using the cosine of the angle between them as a similarity measure. The cosine similarity of i and j is then the number of common neighbors divided by the geometric mean of their degrees.[4]
Its value lies in the range from 0 to 1. The value 1 indicates that the two vertices have exactly the same neighbors, while the value 0 means that they have no common neighbors. Cosine similarity is technically undefined if one or both of the vertices has degree zero, but by convention it is taken to be 0 in these cases.[1]
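A short sketch of the measure (assuming NumPy; the helper `cosine_similarity` is ours, with the zero-degree convention applied explicitly):

```python
import numpy as np

def cosine_similarity(A):
    """Pairwise cosine similarity of vertices from adjacency matrix A:
    number of common neighbors over the geometric mean of the degrees."""
    common = A @ A                        # entry (i, j): common neighbors of i and j
    deg = A.sum(axis=1)
    denom = np.sqrt(np.outer(deg, deg))   # geometric means of degree pairs
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom > 0, common / denom, 0.0)  # convention: 0 for degree 0

# Path graph 0-1-2: vertices 0 and 2 share their single neighbor, vertex 1.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(cosine_similarity(A)[0, 2])   # 1.0: identical neighborhoods
```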
Pearson coefficient
Main article: Pearson product-moment correlation coefficient
Pearson product-moment correlation coefficient is an alternative method to normalize the count of common neighbors. This method compares the number of common neighbors with the expected value that count would take in a network where vertices are connected randomly. This quantity lies strictly in the range from -1 to 1.[1]
Euclidean distance
Main article: Euclidean distance
Euclidean distance is equal to the number of neighbors that differ between two vertices. It is rather a dissimilarity measure, since it is larger for vertices which differ more. It could be normalized by dividing by its maximum value. The maximum means that there are no common neighbors, in which case the distance is equal to the sum of the degrees of the vertices.[1]
Automorphic equivalence
Formally "Two vertices are automorphically equivalent if all the vertices can be re-labeled to form an isomorphic graph with the labels of u and v interchanged. Two automorphically equivalent vertices share exactly the same label-independent properties."[5]
More intuitively, actors are automorphically equivalent if we can permute the graph in such a way that exchanging the two actors has no effect on the distances among all actors in the graph.
Suppose the graph describes the organizational structure of a company. Actor A is the central headquarter, actors B, C, and D are managers. Actors E, F and H, I are workers at smaller stores; G is the lone worker at another store.
Even though actor B and actor D are not structurally equivalent (they do have the same boss, but not the same workers), they do seem to be "equivalent" in a different sense. Both manager B and D have a boss (in this case, the same boss), and each has two workers. If we swapped them, and also swapped the four workers, all of the distances among all the actors in the network would be exactly identical.
There are actually five automorphic equivalence classes: {A}, {B, D}, {C}, {E, F, H, I}, and {G}. Note that the less strict definition of "equivalence" has reduced the number of classes.[2]
Regular equivalence
Formally, "Two actors are regularly equivalent if they are equally related to equivalent others." In other words, regularly equivalent vertices are vertices that, while they do not necessarily share neighbors, have neighbors who are themselves similar.[5]
Two mothers, for example, are equivalent, because each has a similar pattern of connections with a husband, children, etc. The two mothers do not have ties to the same husband or the same children, so they are not structurally equivalent. Because different mothers may have different numbers of husbands and children, they will not be automorphically equivalent. But they are similar because they have the same relationships with some member or members of another set of actors (who are themselves regarded as equivalent because of the similarity of their ties to a member of the set "mother").[2]
In the graph there are three regular equivalence classes. The first is actor A; the second is composed of the three actors B, C, and D; the third consists of the remaining five actors E, F, G, H, and I.
The easiest class to see is the five actors across the bottom of the diagram (E, F, G, H, and I). These actors are regularly equivalent to one another because:
1. they have no tie with any actor in the first class (that is, with actor A) and
2. each has a tie with an actor in the second class (either B or C or D).
Each of the five actors, then, has an identical pattern of ties with actors in the other classes.
Actors B, C, and D form a class similarly. B and D actually have ties with two members of the third class, whereas actor C has a tie to only one member of the third class, but this doesn't matter, as there is a tie to some member of the third class.
Actor A is in a class by itself, defined by:
1. a tie to at least one member of class two and
2. no tie to any member of class three.[2]
See also
• Similarity measure
• Blockmodeling
References
1. Newman, M.E.J. 2010. Networks: An Introduction. Oxford, UK: Oxford University Press.
2. Hanneman, Robert A. and Mark Riddle. 2005. Introduction to social network methods. Riverside, CA: University of California, Riverside ( published in digital form at http://faculty.ucr.edu/~hanneman/ )
3. Marquis, Christopher; Tilcsik, András (2016-10-01). "Institutional Equivalence: How Industry and Community Peers Influence Corporate Philanthropy". Organization Science. 27 (5): 1325–1341. doi:10.1287/orsc.2016.1083. hdl:1813/44734. ISSN 1047-7039.
4. Salton G., Automatic Text Processing: The Transformation, Analysis and Retrieval of Information by Computer, Addison-Wesley, Reading, MA (1989)
5. Borgatti, Steven, Martin Everett, and Linton Freeman. 1992. UCINET IV Version 1.0 User's Guide. Columbia, SC: Analytic Technologies.
Regular graph
In graph theory, a regular graph is a graph where each vertex has the same number of neighbors; i.e. every vertex has the same degree or valency. A regular directed graph must also satisfy the stronger condition that the indegree and outdegree of each internal vertex are equal to each other.[1] A regular graph with vertices of degree k is called a k‑regular graph or regular graph of degree k. Also, from the handshaking lemma, a k-regular graph with odd k must have an even number of vertices.
Regular graphs of degree at most 2 are easy to classify: a 0-regular graph consists of disconnected vertices, a 1-regular graph consists of disconnected edges, and a 2-regular graph consists of a disjoint union of cycles and infinite chains.
A 3-regular graph is known as a cubic graph.
A strongly regular graph is a regular graph where every adjacent pair of vertices has the same number λ of common neighbors, and every non-adjacent pair of vertices has the same number μ of common neighbors. The smallest graphs that are regular but not strongly regular are the cycle graph and the circulant graph on 6 vertices.
The complete graph Km is strongly regular for any m.
A theorem by Nash-Williams says that every k‑regular graph on 2k + 1 vertices has a Hamiltonian cycle.
Existence
A $k$-regular graph of order $n$ exists if and only if $n\geq k+1$ and $nk$ is even.
Proof: In a complete graph every pair of distinct vertices is joined by a unique edge, so the maximum possible degree is $n-1$; hence $k\leq n-1$, i.e. $n\geq k+1$. Also, a $k$-regular graph of order $n$ has $nk/2$ edges by the handshaking lemma, so $nk$ must be even. Conversely, when these conditions hold, a $k$-regular graph is easily constructed by choosing appropriate parameters for a circulant graph, as in the sketch below.
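A construction sketch assuming NetworkX (the wrapper `regular_graph` is our own, not a library function): for even $k$, connect each vertex of a cycle to its $k/2$ nearest neighbours on each side; for odd $k$ (which forces $n$ to be even), also connect each vertex to the diametrically opposite one.

```python
import networkx as nx

def regular_graph(n, k):
    """Build a k-regular graph on n vertices as a circulant graph.

    Requires n >= k + 1 and n * k even (the existence conditions above).
    """
    if n < k + 1 or (n * k) % 2 != 0:
        raise ValueError("no k-regular graph on n vertices exists")
    offsets = list(range(1, k // 2 + 1))   # each offset contributes degree 2
    if k % 2 == 1:                         # odd k: n is even, add the n/2 chord,
        offsets.append(n // 2)             # which contributes degree 1
    return nx.circulant_graph(n, offsets)

G = regular_graph(10, 3)
assert all(d == 3 for _, d in G.degree())
```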
Algebraic properties
Let A be the adjacency matrix of a graph. Then the graph is regular if and only if ${\textbf {j}}=(1,\dots ,1)$ is an eigenvector of A.[2] Its eigenvalue will be the constant degree of the graph. Eigenvectors corresponding to other eigenvalues are orthogonal to ${\textbf {j}}$, so for such eigenvectors $v=(v_{1},\dots ,v_{n})$, we have $\sum _{i=1}^{n}v_{i}=0$.
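A quick numerical check of this eigenvector property (a sketch assuming NumPy), using the 2-regular cycle graph $C_6$:

```python
import numpy as np

# Adjacency matrix of the 6-cycle C6, a 2-regular graph: vertex i is
# adjacent to (i-1) mod 6 and (i+1) mod 6.
A = np.roll(np.eye(6, dtype=int), 1, axis=1) + np.roll(np.eye(6, dtype=int), -1, axis=1)
j = np.ones(6)
print(A @ j)   # [2. 2. 2. 2. 2. 2.]: j is an eigenvector with eigenvalue k = 2
```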
A regular graph of degree k is connected if and only if the eigenvalue k has multiplicity one. The "only if" direction is a consequence of the Perron–Frobenius theorem.[2]
There is also a criterion for regular and connected graphs: a graph is connected and regular if and only if the matrix of ones J, with $J_{ij}=1$, is in the adjacency algebra of the graph (meaning it is a linear combination of powers of A).[3]
Let G be a k-regular graph with diameter D and eigenvalues of adjacency matrix $k=\lambda _{0}>\lambda _{1}\geq \cdots \geq \lambda _{n-1}$. If G is not bipartite, then
$D\leq {\frac {\log {(n-1)}}{\log(\lambda _{0}/\lambda _{1})}}+1.$[4]
Generation
Fast algorithms exist to enumerate, up to isomorphism, all regular graphs with a given degree and number of vertices.[5]
See also
• Random regular graph
• Strongly regular graph
• Moore graph
• Cage graph
• Highly irregular graph
References
1. Chen, Wai-Kai (1997). Graph Theory and its Engineering Applications. World Scientific. pp. 29. ISBN 978-981-02-1859-1.
2. Cvetković, D. M.; Doob, M.; and Sachs, H. Spectra of Graphs: Theory and Applications, 3rd rev. enl. ed. New York: Wiley, 1998.
3. Curtin, Brian (2005), "Algebraic characterizations of graph regularity conditions", Designs, Codes and Cryptography, 34 (2–3): 241–248, doi:10.1007/s10623-004-4857-4, MR 2128333.
4. Meringer, Markus (1999). "Fast generation of regular graphs and construction of cages" (PDF). Journal of Graph Theory. 30 (2): 137–146. doi:10.1002/(SICI)1097-0118(199902)30:2<137::AID-JGT7>3.0.CO;2-G.
External links
Wikimedia Commons has media related to Regular graphs.
• Weisstein, Eric W. "Regular Graph". MathWorld.
• Weisstein, Eric W. "Strongly Regular Graph". MathWorld.
• GenReg software and data by Markus Meringer.
• Nash-Williams, Crispin (1969), Valency Sequences which force graphs to have Hamiltonian Circuits, University of Waterloo Research Report, Waterloo, Ontario: University of Waterloo
Honeycomb (geometry)
In geometry, a honeycomb is a space filling or close packing of polyhedral or higher-dimensional cells, so that there are no gaps. It is an example of the more general mathematical tiling or tessellation in any number of dimensions. Its dimension can be clarified as n-honeycomb for a honeycomb of n-dimensional space.
Honeycombs are usually constructed in ordinary Euclidean ("flat") space. They may also be constructed in non-Euclidean spaces, such as hyperbolic honeycombs. Any finite uniform polytope can be projected to its circumsphere to form a uniform honeycomb in spherical space.
Classification
There are infinitely many honeycombs, which have only been partially classified. The more regular ones have attracted the most interest, while a rich and varied assortment of others continues to be discovered.
The simplest honeycombs to build are formed from stacked layers or slabs of prisms based on some tessellations of the plane. In particular, copies of any parallelepiped can fill space, with the cubic honeycomb being special because it is the only regular honeycomb in ordinary (Euclidean) space. Another interesting family is the Hill tetrahedra and their generalizations, which can also tile space.
Uniform 3-honeycombs
A 3-dimensional uniform honeycomb is a honeycomb in 3-space composed of uniform polyhedral cells, and having all vertices the same (i.e., the group of [isometries of 3-space that preserve the tiling] is transitive on vertices). There are 28 convex examples in Euclidean 3-space,[1] also called the Archimedean honeycombs.
A honeycomb is called regular if the group of isometries preserving the tiling acts transitively on flags, where a flag is a vertex lying on an edge lying on a face lying on a cell. Every regular honeycomb is automatically uniform. However, there is just one regular honeycomb in Euclidean 3-space, the cubic honeycomb. Two are quasiregular (made from two types of regular cells):
Type | Cells
Regular cubic honeycomb | Cubes
Quasiregular honeycombs | Octahedra and tetrahedra
The tetrahedral-octahedral honeycomb and gyrated tetrahedral-octahedral honeycombs are generated by 3 or 2 positions of slab layers of cells, each alternating tetrahedra and octahedra. Infinitely many unique honeycombs can be created by higher-order patterns of repeating these slab layers.
Space-filling polyhedra
See also: Stereohedron, Plesiohedron, and Parallelohedron
A honeycomb having all cells identical within its symmetries is said to be cell-transitive or isochoric. In the 3-dimensional euclidean space, a cell of such a honeycomb is said to be a space-filling polyhedron.[2] A necessary condition for a polyhedron to be a space-filling polyhedron is that its Dehn invariant must be zero,[3][4] ruling out any of the Platonic solids other than the cube.
Five space-filling polyhedra can tessellate 3-dimensional euclidean space using translations only. They are called parallelohedra:
1. Cubic honeycomb (or variations: cuboid, rhombic hexahedron or parallelepiped)
2. Hexagonal prismatic honeycomb[5]
3. Rhombic dodecahedral honeycomb
4. Elongated dodecahedral honeycomb[6]
5. Bitruncated cubic honeycomb or truncated octahedra[7]
Honeycomb | Cell | Edge lengths
Cubic | Cube (parallelepiped) | 3
Hexagonal prismatic | Hexagonal prism | 3+1
Rhombic dodecahedral | Rhombic dodecahedron | 4
Elongated dodecahedral | Elongated dodecahedron | 4+1
Bitruncated cubic | Truncated octahedron | 6
Other known examples of space-filling polyhedra include:
• The triangular prismatic honeycomb
• The gyrated triangular prismatic honeycomb
• The triakis truncated tetrahedral honeycomb. The Voronoi cells of the carbon atoms in diamond are this shape.[8]
• The trapezo-rhombic dodecahedral honeycomb[9]
• Isohedral tilings[10]
Other honeycombs with two or more polyhedra
Sometimes, two[11] or more different polyhedra may be combined to fill space. Besides many of the uniform honeycombs, another well-known example is the Weaire–Phelan structure, adopted from the structure of clathrate hydrate crystals.[12]
The periodic unit of the Weaire–Phelan structure.
A honeycomb by left and right-handed versions of the same polyhedron.
Non-convex 3-honeycombs
Documented examples are rare. Two classes can be distinguished:
• Non-convex cells which pack without overlapping, analogous to tilings of concave polygons. These include a packing of the small stellated rhombic dodecahedron, as in the Yoshimoto Cube.
• Overlapping of cells whose positive and negative densities 'cancel out' to form a uniformly dense continuum, analogous to overlapping tilings of the plane.
Hyperbolic honeycombs
In 3-dimensional hyperbolic space, the dihedral angle of a polyhedron depends on its size. The regular hyperbolic honeycombs thus include two with four or five dodecahedra meeting at each edge; their dihedral angles are then π/2 and 2π/5, both of which are less than that of a Euclidean dodecahedron. Apart from this effect, the hyperbolic honeycombs obey the same topological constraints as Euclidean honeycombs and polychora.
The 4 compact and 11 paracompact regular hyperbolic honeycombs and many compact and paracompact uniform hyperbolic honeycombs have been enumerated.
Four regular compact honeycombs in H3
{5,3,4}
{4,3,5}
{3,5,3}
{5,3,5}
11 paracompact regular honeycombs
{6,3,3}
{6,3,4}
{6,3,5}
{6,3,6}
{4,4,3}
{4,4,4}
{3,3,6}
{4,3,6}
{5,3,6}
{3,6,3}
{3,4,4}
Duality of 3-honeycombs
For every honeycomb there is a dual honeycomb, which may be obtained by exchanging:
• cells for vertices
• faces for edges
These are just the rules for dualising 4-polytopes, except that the usual finite method of reciprocation about a concentric hypersphere can run into problems.
The more regular honeycombs dualise neatly:
• The cubic honeycomb is self-dual.
• That of octahedra and tetrahedra is dual to that of rhombic dodecahedra.
• The slab honeycombs derived from uniform plane tilings are dual to each other in the same way that the tilings are.
• The duals of the remaining Archimedean honeycombs are all cell-transitive and have been described by Inchbald.[13]
Self-dual honeycombs
Honeycombs can also be self-dual. All n-dimensional hypercubic honeycombs with Schläfli symbols {4,3^{n−2},4} are self-dual.
See also
Wikimedia Commons has media related to Honeycombs (geometry).
• List of uniform tilings
• Regular honeycombs
• Infinite skew polyhedron
• Plesiohedron
References
1. Grünbaum (1994). "Uniform tilings of 3-space". Geombinatorics 4(2)
2. Weisstein, Eric W. "Space-filling polyhedron". MathWorld.
3. Debrunner, Hans E. (1980), "Über Zerlegungsgleichheit von Pflasterpolyedern mit Würfeln", Archiv der Mathematik (in German), 35 (6): 583–587, doi:10.1007/BF01235384, MR 0604258, S2CID 121301319.
4. Lagarias, J. C.; Moews, D. (1995), "Polytopes that fill $\mathbb {R} ^{n}$ and scissors congruence", Discrete and Computational Geometry, 13 (3–4): 573–583, doi:10.1007/BF02574064, MR 1318797.
5. Uniform space-filling using triangular, square, and hexagonal prisms
6. Uniform space-filling using only rhombo-hexagonal dodecahedra
7. Uniform space-filling using only truncated octahedra
8. John Conway (2003-12-22). "Voronoi Polyhedron. geometry.puzzles". Newsgroup: geometry.puzzles. Usenet: Pine.LNX.4.44.0312221226380.25139-100000@fine318a.math.Princeton.EDU.
9. X. Qian, D. Strahs and T. Schlick, J. Comput. Chem. 22(15) 1843–1850 (2001)
10. O. Delgado-Friedrichs and M. O'Keeffe. Isohedral simple tilings: binodal and by tiles with <16 faces. Acta Crystallogr. (2005) A61, 358-362
11. Archived 2015-06-30 at the Wayback Machine Gabbrielli, Ruggero. A thirteen-sided polyhedron which fills space with its chiral copy.
12. Pauling, Linus. The Nature of the Chemical Bond. Cornell University Press, 1960
13. Inchbald, Guy (July 1997), "The Archimedean honeycomb duals", The Mathematical Gazette, 81 (491): 213–219, doi:10.2307/3619198, JSTOR 3619198.
Further reading
• Coxeter, H. S. M.: Regular Polytopes.
• Williams, Robert (1979). The Geometrical Foundation of Natural Structure: A Source Book of Design. Dover Publications, Inc. pp. 164–199. ISBN 0-486-23729-X. Chapter 5: Polyhedra packing and space filling
• Critchlow, K.: Order in space.
• Pearce, P.: Structure in nature is a strategy for design.
• Goldberg, Michael Three Infinite Families of Tetrahedral Space-Fillers Journal of Combinatorial Theory A, 16, pp. 348–354, 1974.
• Goldberg, Michael (1972). "The space-filling pentahedra". Journal of Combinatorial Theory, Series A. 13 (3): 437–443. doi:10.1016/0097-3165(72)90077-5.
• Goldberg, Michael The Space-filling Pentahedra II, Journal of Combinatorial Theory 17 (1974), 375–378.
• Goldberg, Michael (1977). "On the space-filling hexahedra". Geometriae Dedicata. 6. doi:10.1007/BF00181585. S2CID 189889869.
• Goldberg, Michael (1978). "On the space-filling heptahedra". Geometriae Dedicata. 7 (2): 175–184. doi:10.1007/BF00181630. S2CID 120562040.
• Goldberg, Michael Convex Polyhedral Space-Fillers of More than Twelve Faces. Geom. Dedicata 8, 491-500, 1979.
• Goldberg, Michael (1981). "On the space-filling octahedra". Geometriae Dedicata. 10 (1–4): 323–335. doi:10.1007/BF01447431. S2CID 189876836.
• Goldberg, Michael (1982). "On the Space-filling Decahedra". {{cite journal}}: Cite journal requires |journal= (help)
• Goldberg, Michael (1982). "On the space-filling enneahedra". Geometriae Dedicata. 12 (3). doi:10.1007/BF00147314. S2CID 120914105.
External links
• Olshevsky, George. "Honeycomb". Glossary for Hyperspace. Archived from the original on 4 February 2007.
• Five space-filling polyhedra, Guy Inchbald, The Mathematical Gazette 80, November 1996, p.p. 466-475.
• Raumfueller (Space filling polyhedra) by T.E. Dorozinski
• Weisstein, Eric W. "Space-Filling Polyhedron". MathWorld.
List of regular polytopes and compounds
This article lists the regular polytopes and regular polytope compounds in Euclidean, spherical and hyperbolic spaces.
Example regular polytopes
• Regular (2D) polygons: convex {5}, star {5/2}
• Regular (3D) polyhedra: convex {5,3}, star {5/2,5}
• Regular 4D polytopes: convex {5,3,3}, star {5/2,5,3}
• Regular 2D tessellations: Euclidean {4,4}, hyperbolic {5,4}
• Regular 3D tessellations: Euclidean {4,3,4}, hyperbolic {5,3,4}
The Schläfli symbol describes every regular tessellation of an n-sphere, Euclidean and hyperbolic spaces. A Schläfli symbol describing an n-polytope equivalently describes a tessellation of an (n − 1)-sphere. In addition, the symmetry of a regular polytope or tessellation is expressed as a Coxeter group, which Coxeter expressed identically to the Schläfli symbol, except delimiting by square brackets, a notation that is called Coxeter notation. Another related symbol is the Coxeter–Dynkin diagram: with no ringed nodes it represents the symmetry group alone, and with a ring on the first node it represents the regular polytope or tessellation. For example, the cube has Schläfli symbol {4,3}, and its octahedral symmetry is written [4,3] in Coxeter notation.
The regular polytopes are grouped by dimension and subgrouped by convex, nonconvex and infinite forms. Nonconvex forms use the same vertices as the convex forms, but have intersecting facets. Infinite forms tessellate a one-lower-dimensional Euclidean space.
Infinite forms can be extended to tessellate a hyperbolic space. Hyperbolic space is like normal space at a small scale, but parallel lines diverge at a distance. This allows vertex figures to have negative angle defects, like making a vertex with seven equilateral triangles and allowing it to lie flat. It cannot be done in a regular plane, but can be at the right scale of a hyperbolic plane.
A more general definition of regular polytopes which do not have simple Schläfli symbols includes regular skew polytopes and regular skew apeirotopes with nonplanar facets or vertex figures.
Overview
This table shows a summary of regular polytope counts by dimension.
Note that the Euclidean and hyperbolic tilings are given one dimension more than what would be expected. This is because of an analogy with finite polytopes: a convex regular n-polytope can be seen as a tessellation of (n−1)-dimensional spherical space. Thus the three regular tilings of the Euclidean plane (by triangles, squares, and hexagons) are listed under dimension three rather than two.
Dim. | Finite convex | Finite star | Finite skew | Euclidean convex | Hyperbolic compact convex | Hyperbolic compact star | Hyperbolic paracompact convex | Compound convex | Compound star
1 | 1 | none | none | 1 | none | none | none | none | none
2 | ∞ | ∞ | ∞ | 1 | 1 | none | none | ∞ | ∞
3 | 5 | 4 | ? | 3 | ∞ | ∞ | ∞ | 5 | none
4 | 6 | 10 | ? | 1 | 4 | none | 11 | 26 | 20
5 | 3 | none | ? | 3 | 5 | 4 | 2 | none | none
6 | 3 | none | ? | 1 | none | none | 5 | none | none
7 | 3 | none | ? | 1 | none | none | none | 3 | none
8 | 3 | none | ? | 1 | none | none | none | 6 | none
9+ | 3 | none | ? | 1 | none | none | none | [lower-alpha 1] | none
1. ${\begin{cases}2,&{\text{if the number of dimensions is of the form }}2^{k}\\1,&{\text{if the number of dimensions is of the form }}2^{k}-1\\0,&{\text{otherwise}}\\\end{cases}}$
There are no Euclidean regular star tessellations in any number of dimensions.
One dimension
A Coxeter diagram represents mirror "planes" as nodes, and puts a ring around a node if a point is not on the plane. A dion { } is a point p and its mirror image point p', and the line segment between them.
A one-dimensional polytope or 1-polytope is a closed line segment, bounded by its two endpoints. A 1-polytope is regular by definition and is represented by Schläfli symbol { },[1][2] or a Coxeter diagram with a single ringed node. Norman Johnson calls it a dion[3] and gives it the Schläfli symbol { }.
Although trivial as a polytope, it appears as the edges of polygons and other higher dimensional polytopes.[4] It is used in the definition of uniform prisms, as in the Schläfli symbol { }×{p}: the Cartesian product of a line segment and a regular polygon.[5]
Two dimensions (polygons)
The two-dimensional polytopes are called polygons. Regular polygons are equilateral and cyclic. A p-gonal regular polygon is represented by Schläfli symbol {p}.
Usually only convex polygons are considered regular, but star polygons, like the pentagram, can also be considered regular. They use the same vertices as the convex forms, but connect in an alternate connectivity which passes around the circle more than once to be completed.
Star polygons should be called nonconvex rather than concave because the intersecting edges do not generate new vertices and all the vertices exist on a circle boundary.
Convex
The Schläfli symbol {p} represents a regular p-gon, with dihedral symmetry Dp, [p]: triangle {3} (2-simplex), square {4} (2-orthoplex, 2-cube), pentagon {5} (2-pentagonal polytope), hexagon {6}, heptagon {7}, octagon {8}, nonagon (enneagon) {9}, decagon {10}, hendecagon {11}, dodecagon {12}, tridecagon {13}, tetradecagon {14}, pentadecagon {15}, hexadecagon {16}, heptadecagon {17}, octadecagon {18}, enneadecagon {19}, icosagon {20}, ..., p-gon {p}.
Spherical
The regular digon {2} can be considered to be a degenerate regular polygon. It can be realized non-degenerately in some non-Euclidean spaces, such as on the surface of a sphere or torus: for example, a digon can be realised non-degenerately as a spherical lune. A monogon {1} could also be realised on the sphere as a single point with a great circle through it.[6] However, a monogon is not a valid abstract polytope because its single edge is incident to only one vertex rather than two.
The monogon {1} has symmetry D1, [ ]; the digon {2} has symmetry D2, [2].
Stars
There exist infinitely many regular star polytopes in two dimensions, whose Schläfli symbols consist of rational numbers {n/m}. They are called star polygons and share the same vertex arrangements of the convex regular polygons.
In general, for any natural number n, there are regular n-pointed star polygons with Schläfli symbols {n/m} for all m such that m < n/2 (strictly speaking {n/m} = {n/(n−m)}) and m and n are coprime; as such, all stellations of a polygon with a prime number of sides will be regular stars. Cases where m and n are not coprime are called compound polygons.
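This coprimality condition is easy to enumerate; a minimal Python sketch (the helper `star_polygons` is ours):

```python
from math import gcd

def star_polygons(n):
    """Schläfli symbols {n/m} of the regular star polygons with n points:
    2 <= m < n/2 with gcd(m, n) == 1 (m = 1 gives the convex {n};
    shared factors give compound polygons instead)."""
    return [(n, m) for m in range(2, (n + 1) // 2) if gcd(m, n) == 1]

print(star_polygons(9))    # [(9, 2), (9, 4)]: the enneagrams {9/2} and {9/4}
print(star_polygons(10))   # [(10, 3)]: the decagram {10/3}
```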
Pentagram {5/2}; heptagrams {7/2} and {7/3}; octagram {8/3}; enneagrams {9/2} and {9/4}; decagram {10/3}; ...; n-grams {p/q}. Each {p/q} has dihedral symmetry Dp, [p].
Regular star polygons up to 20 sides: {11/2}, {11/3}, {11/4}, {11/5}, {12/5}, {13/2}, {13/3}, {13/4}, {13/5}, {13/6}, {14/3}, {14/5}, {15/2}, {15/4}, {15/7}, {16/3}, {16/5}, {16/7}, {17/2}, {17/3}, {17/4}, {17/5}, {17/6}, {17/7}, {17/8}, {18/5}, {18/7}, {19/2}, {19/3}, {19/4}, {19/5}, {19/6}, {19/7}, {19/8}, {19/9}, {20/3}, {20/7}, {20/9}.
Like the monogon and digon, some star polygons, for example {3/2}, {5/3}, {5/4}, {7/4}, and {9/5}, can exist only as spherical tilings; these do not appear to have been studied in detail.
There also exist failed star polygons, such as the piangle, which do not cover the surface of a circle finitely many times.[7]
Skew polygons
In 3-dimensional space, a regular skew polygon is called an antiprismatic polygon, with the vertex arrangement of an antiprism, and a subset of edges, zig-zagging between top and bottom polygons.
Example regular skew zig-zag polygons: hexagon {3}#{ } (D3d, [2+,6]), octagon {4}#{ } (D4d, [2+,8]), and decagons {5}#{ }, {5/2}#{ }, {5/3}#{ } (D5d, [2+,10]).
In 4-dimensions a regular skew polygon can have vertices on a Clifford torus and related by a Clifford displacement. Unlike antiprismatic skew polygons, skew polygons on double rotations can include an odd-number of sides.
They can be seen in the Petrie polygons of the convex regular 4-polytopes, seen as regular plane polygons in the perimeter of Coxeter plane projection:
Pentagon: 5-cell; octagon: 16-cell; dodecagon: 24-cell; triacontagon: 600-cell.
Three dimensions (polyhedra)
In three dimensions, polytopes are called polyhedra:
A regular polyhedron with Schläfli symbol {p,q} has a regular face type {p} and regular vertex figure {q}.
A vertex figure (of a polyhedron) is a polygon, seen by connecting those vertices which are one edge away from a given vertex. For regular polyhedra, this vertex figure is always a regular (and planar) polygon.
Existence of a regular polyhedron {p,q} is constrained by an inequality, related to the vertex figure's angle defect:
${\begin{aligned}&{\frac {1}{p}}+{\frac {1}{q}}>{\frac {1}{2}}:{\text{Polyhedron (existing in Euclidean 3-space)}}\\[6pt]&{\frac {1}{p}}+{\frac {1}{q}}={\frac {1}{2}}:{\text{Euclidean plane tiling}}\\[6pt]&{\frac {1}{p}}+{\frac {1}{q}}<{\frac {1}{2}}:{\text{Hyperbolic plane tiling}}\end{aligned}}$
By enumerating the permutations, we find five convex forms, four star forms and three plane tilings, all with polygons {p} and {q} limited to: {3}, {4}, {5}, {5/2}, and {6}.
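This enumeration can be replayed mechanically. A small Python sketch with exact rational arithmetic (the bound p, q ≤ 6 is harmless here, since 1/p + 1/q < 1/2 whenever p ≥ 7 and q ≥ 3):

```python
from fractions import Fraction

half = Fraction(1, 2)
pairs = [(p, q) for p in range(3, 7) for q in range(3, 7)]

# 1/p + 1/q > 1/2: the five Platonic solids.
print([pq for pq in pairs if Fraction(1, pq[0]) + Fraction(1, pq[1]) > half])
# [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]

# 1/p + 1/q = 1/2: the three Euclidean plane tilings.
print([pq for pq in pairs if Fraction(1, pq[0]) + Fraction(1, pq[1]) == half])
# [(3, 6), (4, 4), (6, 3)]
```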
Beyond Euclidean space, there is an infinite set of regular hyperbolic tilings.
Convex
The five convex regular polyhedra are called the Platonic solids. The vertex figure is given with each vertex count. All these polyhedra have an Euler characteristic (χ) of 2.
Name | Schläfli {p,q} | Faces {p} | Edges | Vertices {q} | Symmetry | Dual
Tetrahedron (3-simplex) | {3,3} | 4 {3} | 6 | 4 {3} | Td, [3,3], (*332) | (self)
Hexahedron, cube (3-cube) | {4,3} | 6 {4} | 12 | 8 {3} | Oh, [4,3], (*432) | Octahedron
Octahedron (3-orthoplex) | {3,4} | 8 {3} | 12 | 6 {4} | Oh, [4,3], (*432) | Cube
Dodecahedron | {5,3} | 12 {5} | 30 | 20 {3} | Ih, [5,3], (*532) | Icosahedron
Icosahedron | {3,5} | 20 {3} | 30 | 12 {5} | Ih, [5,3], (*532) | Dodecahedron
Spherical
In spherical geometry, regular spherical polyhedra (tilings of the sphere) exist that would otherwise be degenerate as polytopes. These are the hosohedra {2,n} and their dual dihedra {n,2}. Coxeter calls these cases "improper" tessellations.[8]
The first few cases (n from 2 to 6) are listed below.
Hosohedra
Name | Schläfli {2,p} | Faces {2}π/p | Edges | Vertices {p} | Symmetry | Dual
Digonal hosohedron | {2,2} | 2 {2}π/2 | 2 | 2 {2}π/2 | D2h, [2,2], (*222) | Self
Trigonal hosohedron | {2,3} | 3 {2}π/3 | 3 | 2 {3} | D3h, [2,3], (*322) | Trigonal dihedron
Square hosohedron | {2,4} | 4 {2}π/4 | 4 | 2 {4} | D4h, [2,4], (*422) | Square dihedron
Pentagonal hosohedron | {2,5} | 5 {2}π/5 | 5 | 2 {5} | D5h, [2,5], (*522) | Pentagonal dihedron
Hexagonal hosohedron | {2,6} | 6 {2}π/6 | 6 | 2 {6} | D6h, [2,6], (*622) | Hexagonal dihedron
Dihedra
Name | Schläfli {p,2} | Faces {p} | Edges | Vertices {2} | Symmetry | Dual
Digonal dihedron | {2,2} | 2 {2}π/2 | 2 | 2 {2}π/2 | D2h, [2,2], (*222) | Self
Trigonal dihedron | {3,2} | 2 {3} | 3 | 3 {2}π/3 | D3h, [3,2], (*322) | Trigonal hosohedron
Square dihedron | {4,2} | 2 {4} | 4 | 4 {2}π/4 | D4h, [4,2], (*422) | Square hosohedron
Pentagonal dihedron | {5,2} | 2 {5} | 5 | 5 {2}π/5 | D5h, [5,2], (*522) | Pentagonal hosohedron
Hexagonal dihedron | {6,2} | 2 {6} | 6 | 6 {2}π/6 | D6h, [6,2], (*622) | Hexagonal hosohedron
Star-dihedra and hosohedra {p/q,2} and {2,p/q} also exist for any star polygon {p/q}.
Stars
The regular star polyhedra are called the Kepler–Poinsot polyhedra and there are four of them, based on the vertex arrangements of the dodecahedron {5,3} and icosahedron {3,5}:
As spherical tilings, these star forms overlap the sphere multiple times, called its density, being 3 or 7 for these forms. The tiling images show a single spherical polygon face in yellow.
Name | Schläfli {p,q} | Faces {p} | Edges | Vertices {q} | χ | Density | Symmetry | Dual
Small stellated dodecahedron | {5/2,5} | 12 {5/2} | 30 | 12 {5} | −6 | 3 | Ih, [5,3], (*532) | Great dodecahedron
Great dodecahedron | {5,5/2} | 12 {5} | 30 | 12 {5/2} | −6 | 3 | Ih, [5,3], (*532) | Small stellated dodecahedron
Great stellated dodecahedron | {5/2,3} | 12 {5/2} | 30 | 20 {3} | 2 | 7 | Ih, [5,3], (*532) | Great icosahedron
Great icosahedron | {3,5/2} | 20 {3} | 30 | 12 {5/2} | 2 | 7 | Ih, [5,3], (*532) | Great stellated dodecahedron
There are infinitely many failed star polyhedra. These are also spherical tilings with star polygons in their Schläfli symbols, but they do not cover a sphere finitely many times. Some examples are {5/2,4}, {5/2,9}, {7/2,3}, {5/2,5/2}, {7/2,7/3}, {4,5/2}, and {3,7/3}.
Skew polyhedra
Regular skew polyhedra are generalizations to the set of regular polyhedron which include the possibility of nonplanar vertex figures.
For 4-dimensional skew polyhedra, Coxeter offered a modified Schläfli symbol {l,m|n} for these figures, with {l,m} implying the vertex figure, m l-gons around a vertex, and n-gonal holes. Their vertex figures are skew polygons, zig-zagging between two planes.
The regular skew polyhedra, represented by {l,m|n}, follow this equation:
2 sin(π/l) sin(π/m) = cos(π/n)
Four of them can be seen in 4-dimensions as a subset of faces of four regular 4-polytopes, sharing the same vertex arrangement and edge arrangement:
{4,6|3}, {6,4|3}, {4,8|3}, {8,4|3}
Four dimensions
Regular 4-polytopes with Schläfli symbol $\{p,q,r\}$ have cells of type $\{p,q\}$, faces of type $\{p\}$, edge figures $\{r\}$, and vertex figures $\{q,r\}$.
• A vertex figure (of a 4-polytope) is a polyhedron, seen by the arrangement of neighboring vertices around a given vertex. For regular 4-polytopes, this vertex figure is a regular polyhedron.
• An edge figure is a polygon, seen by the arrangement of faces around an edge. For regular 4-polytopes, this edge figure will always be a regular polygon.
The existence of a regular 4-polytope $\{p,q,r\}$ is constrained by the existence of the regular polyhedra $\{p,q\},\{q,r\}$. A suggested name for 4-polytopes is "polychoron".[9]
Each will exist in a space dependent upon this expression:
$\sin \left({\frac {\pi }{p}}\right)\sin \left({\frac {\pi }{r}}\right)-\cos \left({\frac {\pi }{q}}\right)$
$>0$ : Hyperspherical 3-space honeycomb or 4-polytope
$=0$ : Euclidean 3-space honeycomb
$<0$ : Hyperbolic 3-space honeycomb
These constraints allow for 21 forms: 6 are convex, 10 are nonconvex, one is a Euclidean 3-space honeycomb, and 4 are hyperbolic honeycombs.
The Euler characteristic $\chi $ for convex 4-polytopes is zero: $\chi =V+F-E-C=0$
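A Python sketch of this classification, restricted to candidates whose cells and vertex figures are Platonic solids (so the 10 star forms are outside its scope):

```python
from math import sin, cos, pi, isclose

platonic = [(3, 3), (3, 4), (4, 3), (3, 5), (5, 3)]

# Candidate {p,q,r}: both {p,q} and {q,r} must be Platonic solids.
for (p, q) in platonic:
    for (q2, r) in platonic:
        if q2 != q:
            continue
        value = sin(pi / p) * sin(pi / r) - cos(pi / q)
        if isclose(value, 0.0, abs_tol=1e-12):
            kind = "Euclidean 3-space honeycomb"    # only {4,3,4}
        elif value > 0:
            kind = "convex 4-polytope"              # the 6 convex forms
        else:
            kind = "hyperbolic 3-space honeycomb"   # the 4 hyperbolic forms
        print(f"{{{p},{q},{r}}}: {kind}")
```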
Convex
The 6 convex regular 4-polytopes are shown in the table below. All these 4-polytopes have an Euler characteristic (χ) of 0.
Name | Schläfli {p,q,r} | Cells {p,q} | Faces {p} | Edges {r} | Vertices {q,r} | Dual {r,q,p}
5-cell (4-simplex) | {3,3,3} | 5 {3,3} | 10 {3} | 10 {3} | 5 {3,3} | (self)
8-cell (4-cube, tesseract) | {4,3,3} | 8 {4,3} | 24 {4} | 32 {3} | 16 {3,3} | 16-cell
16-cell (4-orthoplex) | {3,3,4} | 16 {3,3} | 32 {3} | 24 {4} | 8 {3,4} | Tesseract
24-cell | {3,4,3} | 24 {3,4} | 96 {3} | 96 {3} | 24 {4,3} | (self)
120-cell | {5,3,3} | 120 {5,3} | 720 {5} | 1200 {3} | 600 {3,3} | 600-cell
600-cell | {3,3,5} | 600 {3,3} | 1200 {3} | 720 {5} | 120 {3,5} | 120-cell
(Images: for each of the six convex regular 4-polytopes, wireframe Petrie-polygon skew orthographic projections; solid orthographic projections in their envelopes (tetrahedral for the 5-cell; cubic for the 8-cell and 16-cell; cuboctahedral for the 24-cell; truncated rhombic triacontahedron for the 120-cell; pentakis icosidodecahedral for the 600-cell); wireframe Schlegel diagrams; and wireframe stereographic projections.)
Spherical
Di-4-topes and hoso-4-topes exist as regular tessellations of the 3-sphere.
Regular di-4-topes (2 facets) include: {3,3,2}, {3,4,2}, {4,3,2}, {5,3,2}, {3,5,2}, {p,2,2}, and their hoso-4-tope duals (2 vertices): {2,3,3}, {2,4,3}, {2,3,4}, {2,3,5}, {2,5,3}, {2,2,p}. 4-polytopes of the form {2,p,2} are the same as {2,2,p}. There are also the cases {p,2,q} which have dihedral cells and hosohedral vertex figures.
Regular hoso-4-topes as 3-sphere honeycombs
Schläfli {2,p,q} | Cells {2,p}π/q | Faces {2}π/p,π/q | Edges | Vertices | Vertex figure {p,q} | Symmetry | Dual
{2,3,3} | 4 {2,3}π/3 | 6 {2}π/3,π/3 | 4 | 2 | {3,3} | [2,3,3] | {3,3,2}
{2,4,3} | 6 {2,4}π/3 | 12 {2}π/4,π/3 | 8 | 2 | {4,3} | [2,4,3] | {3,4,2}
{2,3,4} | 8 {2,3}π/4 | 12 {2}π/3,π/4 | 6 | 2 | {3,4} | [2,4,3] | {4,3,2}
{2,5,3} | 12 {2,5}π/3 | 30 {2}π/5,π/3 | 20 | 2 | {5,3} | [2,5,3] | {3,5,2}
{2,3,5} | 20 {2,3}π/5 | 30 {2}π/3,π/5 | 12 | 2 | {3,5} | [2,5,3] | {5,3,2}
Stars
There are ten regular star 4-polytopes, which are called the Schläfli–Hess 4-polytopes. Their vertices are based on the convex 120-cell {5,3,3} and 600-cell {3,3,5}.
Ludwig Schläfli found four of them and skipped the last six because he would not allow forms that failed the Euler characteristic on cells or vertex figures (for zero-hole tori: F+V−E=2). Edmund Hess (1843–1903) completed the full list of ten in his German book Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder (1883).
There are 4 unique edge arrangements and 7 unique face arrangements from these 10 regular star 4-polytopes, shown as orthogonal projections:
Name | Schläfli {p,q,r} | Cells {p,q} | Faces {p} | Edges {r} | Vertices {q,r} | Density | χ | Symmetry group | Dual {r,q,p}
Icosahedral 120-cell (faceted 600-cell) | {3,5,5/2} | 120 {3,5} | 1200 {3} | 720 {5/2} | 120 {5,5/2} | 4 | 480 | H4, [5,3,3] | Small stellated 120-cell
Small stellated 120-cell | {5/2,5,3} | 120 {5/2,5} | 720 {5/2} | 1200 {3} | 120 {5,3} | 4 | −480 | H4, [5,3,3] | Icosahedral 120-cell
Great 120-cell | {5,5/2,5} | 120 {5,5/2} | 720 {5} | 720 {5} | 120 {5/2,5} | 6 | 0 | H4, [5,3,3] | Self-dual
Grand 120-cell | {5,3,5/2} | 120 {5,3} | 720 {5} | 720 {5/2} | 120 {3,5/2} | 20 | 0 | H4, [5,3,3] | Great stellated 120-cell
Great stellated 120-cell | {5/2,3,5} | 120 {5/2,3} | 720 {5/2} | 720 {5} | 120 {3,5} | 20 | 0 | H4, [5,3,3] | Grand 120-cell
Grand stellated 120-cell | {5/2,5,5/2} | 120 {5/2,5} | 720 {5/2} | 720 {5/2} | 120 {5,5/2} | 66 | 0 | H4, [5,3,3] | Self-dual
Great grand 120-cell | {5,5/2,3} | 120 {5,5/2} | 720 {5} | 1200 {3} | 120 {5/2,3} | 76 | −480 | H4, [5,3,3] | Great icosahedral 120-cell
Great icosahedral 120-cell (great faceted 600-cell) | {3,5/2,5} | 120 {3,5/2} | 1200 {3} | 720 {5} | 120 {5/2,5} | 76 | 480 | H4, [5,3,3] | Great grand 120-cell
Grand 600-cell | {3,3,5/2} | 600 {3,3} | 1200 {3} | 720 {5/2} | 120 {3,5/2} | 191 | 0 | H4, [5,3,3] | Great grand stellated 120-cell
Great grand stellated 120-cell | {5/2,3,3} | 120 {5/2,3} | 720 {5/2} | 1200 {3} | 600 {3,3} | 191 | 0 | H4, [5,3,3] | Grand 600-cell
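The χ column above is easy to cross-check; the following Python sketch verifies χ = V − E + F − C against the table's counts:

```python
# chi = V - E + F - C for the ten Schlaefli-Hess star 4-polytopes.
# Tuples hold (cells, faces, edges, vertices, expected chi) from the table.
stars = {
    "{3,5,5/2}":   (120, 1200, 720, 120, 480),
    "{5/2,5,3}":   (120, 720, 1200, 120, -480),
    "{5,5/2,5}":   (120, 720, 720, 120, 0),
    "{5,3,5/2}":   (120, 720, 720, 120, 0),
    "{5/2,3,5}":   (120, 720, 720, 120, 0),
    "{5/2,5,5/2}": (120, 720, 720, 120, 0),
    "{5,5/2,3}":   (120, 720, 1200, 120, -480),
    "{3,5/2,5}":   (120, 1200, 720, 120, 480),
    "{3,3,5/2}":   (600, 1200, 720, 120, 0),
    "{5/2,3,3}":   (120, 720, 1200, 600, 0),
}
for symbol, (c, f, e, v, chi) in stars.items():
    assert v - e + f - c == chi, symbol
print("all ten Euler characteristics match the table")
```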
There are 4 failed potential regular star 4-polytope permutations: {3,5/2,3}, {4,3,5/2}, {5/2,3,4}, and {5/2,3,5/2}. Their cells and vertex figures exist, but they do not cover a hypersphere with a finite number of repetitions.
Five and more dimensions
In five dimensions, a regular polytope can be named as $\{p,q,r,s\}$, where $\{p,q,r\}$ is the 4-face type, $\{p,q\}$ is the cell type, $\{p\}$ is the face type, $\{s\}$ is the face figure, $\{r,s\}$ is the edge figure, and $\{q,r,s\}$ is the vertex figure.
A vertex figure (of a 5-polytope) is a 4-polytope, seen by the arrangement of neighboring vertices to each vertex.
An edge figure (of a 5-polytope) is a polyhedron, seen by the arrangement of faces around each edge.
A face figure (of a 5-polytope) is a polygon, seen by the arrangement of cells around each face.
A regular 5-polytope $\{p,q,r,s\}$ exists only if $\{p,q,r\}$ and $\{q,r,s\}$ are regular 4-polytopes.
The space it fits in is based on the expression:
${\frac {\cos ^{2}\left({\frac {\pi }{q}}\right)}{\sin ^{2}\left({\frac {\pi }{p}}\right)}}+{\frac {\cos ^{2}\left({\frac {\pi }{r}}\right)}{\sin ^{2}\left({\frac {\pi }{s}}\right)}}$
$<1$ : Spherical 4-space tessellation or 5-space polytope
$=1$ : Euclidean 4-space tessellation
$>1$ : hyperbolic 4-space tessellation
Enumerating these constraints produces 3 convex polytopes, zero nonconvex polytopes, 3 Euclidean 4-space tessellations, and 5 hyperbolic 4-space tessellations. There are no non-convex regular polytopes in five dimensions or higher.
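The classification follows from the sign of the expression above. A small Python sketch; the helper name and numeric tolerance are illustrative choices, not from the source:

```python
from math import sin, cos, pi

def space_of(p, q, r, s):
    """Classify {p,q,r,s} by cos^2(pi/q)/sin^2(pi/p) + cos^2(pi/r)/sin^2(pi/s)."""
    value = cos(pi / q) ** 2 / sin(pi / p) ** 2 + cos(pi / r) ** 2 / sin(pi / s) ** 2
    if value < 1 - 1e-9:
        return "spherical: a 5-polytope"
    if value > 1 + 1e-9:
        return "hyperbolic 4-space tessellation"
    return "Euclidean 4-space tessellation"

print(space_of(3, 3, 3, 3))  # 5-simplex: spherical
print(space_of(4, 3, 3, 4))  # tesseractic honeycomb: Euclidean
print(space_of(5, 3, 3, 5))  # order-5 120-cell honeycomb: hyperbolic
```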
Convex
In dimensions 5 and higher, there are only three kinds of convex regular polytopes.[10]
Name | Schläfli symbol {p1,...,pn−1} | k-faces | Facet type | Vertex figure | Dual
n-simplex | {3^(n−1)} | ${{n+1} \choose {k+1}}$ | {3^(n−2)} | {3^(n−2)} | Self-dual
n-cube | {4,3^(n−2)} | $2^{n-k}{n \choose k}$ | {4,3^(n−3)} | {3^(n−2)} | n-orthoplex
n-orthoplex | {3^(n−2),4} | $2^{k+1}{n \choose {k+1}}$ | {3^(n−2)} | {3^(n−3),4} | n-cube
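The three k-face formulas are straightforward to evaluate; the sketch below (helper names are hypothetical) reproduces, for example, the 6-cube row of the tables that follow:

```python
from math import comb

def simplex_faces(n, k):    # k-faces of the n-simplex {3^(n-1)}
    return comb(n + 1, k + 1)

def cube_faces(n, k):       # k-faces of the n-cube {4,3^(n-2)}
    return 2 ** (n - k) * comb(n, k)

def orthoplex_faces(n, k):  # k-faces of the n-orthoplex {3^(n-2),4}
    return 2 ** (k + 1) * comb(n, k + 1)

print([cube_faces(6, k) for k in range(6)])
# [64, 192, 240, 160, 60, 12] -- vertices, edges, ..., 5-faces of the 6-cube
```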
There are also improper cases where some numbers in the Schläfli symbol are 2. For example, {p,q,r,...2} is an improper regular spherical polytope whenever {p,q,r...} is a regular spherical polytope, and {2,...p,q,r} is an improper regular spherical polytope whenever {...p,q,r} is a regular spherical polytope. Such polytopes may also be used as facets, yielding forms such as {p,q,...2...y,z}.
5 dimensions
Name | Schläfli {p,q,r,s} | Facets {p,q,r} | Cells {p,q} | Faces {p} | Edges | Vertices | Face figure {s} | Edge figure {r,s} | Vertex figure {q,r,s}
5-simplex | {3,3,3,3} | 6 {3,3,3} | 15 {3,3} | 20 {3} | 15 | 6 | {3} | {3,3} | {3,3,3}
5-cube | {4,3,3,3} | 10 {4,3,3} | 40 {4,3} | 80 {4} | 80 | 32 | {3} | {3,3} | {3,3,3}
5-orthoplex | {3,3,3,4} | 32 {3,3,3} | 80 {3,3} | 80 {3} | 40 | 10 | {4} | {3,4} | {3,3,4}
5-simplex
5-cube
5-orthoplex
6 dimensions
Name | Schläfli | Vertices | Edges | Faces | Cells | 4-faces | 5-faces | χ
6-simplex | {3,3,3,3,3} | 7 | 21 | 35 | 35 | 21 | 7 | 0
6-cube | {4,3,3,3,3} | 64 | 192 | 240 | 160 | 60 | 12 | 0
6-orthoplex | {3,3,3,3,4} | 12 | 60 | 160 | 240 | 192 | 64 | 0
6-simplex
6-cube
6-orthoplex
7 dimensions
Name | Schläfli | Vertices | Edges | Faces | Cells | 4-faces | 5-faces | 6-faces | χ
7-simplex | {3,3,3,3,3,3} | 8 | 28 | 56 | 70 | 56 | 28 | 8 | 2
7-cube | {4,3,3,3,3,3} | 128 | 448 | 672 | 560 | 280 | 84 | 14 | 2
7-orthoplex | {3,3,3,3,3,4} | 14 | 84 | 280 | 560 | 672 | 448 | 128 | 2
7-simplex
7-cube
7-orthoplex
8 dimensions
Name | Schläfli | Vertices | Edges | Faces | Cells | 4-faces | 5-faces | 6-faces | 7-faces | χ
8-simplex | {3,3,3,3,3,3,3} | 9 | 36 | 84 | 126 | 126 | 84 | 36 | 9 | 0
8-cube | {4,3,3,3,3,3,3} | 256 | 1024 | 1792 | 1792 | 1120 | 448 | 112 | 16 | 0
8-orthoplex | {3,3,3,3,3,3,4} | 16 | 112 | 448 | 1120 | 1792 | 1792 | 1024 | 256 | 0
8-simplex
8-cube
8-orthoplex
9 dimensions
Name | Schläfli | Vertices | Edges | Faces | Cells | 4-faces | 5-faces | 6-faces | 7-faces | 8-faces | χ
9-simplex | {3^8} | 10 | 45 | 120 | 210 | 252 | 210 | 120 | 45 | 10 | 2
9-cube | {4,3^7} | 512 | 2304 | 4608 | 5376 | 4032 | 2016 | 672 | 144 | 18 | 2
9-orthoplex | {3^7,4} | 18 | 144 | 672 | 2016 | 4032 | 5376 | 4608 | 2304 | 512 | 2
9-simplex
9-cube
9-orthoplex
10 dimensions
Name | Schläfli | Vertices | Edges | Faces | Cells | 4-faces | 5-faces | 6-faces | 7-faces | 8-faces | 9-faces | χ
10-simplex | {3^9} | 11 | 55 | 165 | 330 | 462 | 462 | 330 | 165 | 55 | 11 | 0
10-cube | {4,3^8} | 1024 | 5120 | 11520 | 15360 | 13440 | 8064 | 3360 | 960 | 180 | 20 | 0
10-orthoplex | {3^8,4} | 20 | 180 | 960 | 3360 | 8064 | 13440 | 15360 | 11520 | 5120 | 1024 | 0
10-simplex
10-cube
10-orthoplex
...
Non-convex
There are no non-convex regular polytopes in five dimensions or higher, excluding hosotopes formed from lower-dimensional non-convex regular polytopes.
Regular projective polytopes
A projective regular (n+1)-polytope exists when an original regular n-spherical tessellation, {p,q,...}, is centrally symmetric. Such a polytope is named hemi-{p,q,...} and contains half as many elements. Coxeter gives it the symbol {p,q,...}/2, while McMullen writes {p,q,...}h/2 with h the Coxeter number.[11]
Even-sided regular polygons have hemi-2p-gon projective polygons, {2p}/2.
There are 4 regular projective polyhedra, related to 4 of the 5 Platonic solids.
The hemi-cube and hemi-octahedron generalize as hemi-n-cubes and hemi-n-orthoplexes in any dimension.
Regular projective polyhedra
3-dimensional regular hemi-polytopes
Name | Coxeter / McMullen | Faces | Edges | Vertices | χ
Hemi-cube | {4,3}/2, {4,3}3 | 3 | 6 | 4 | 1
Hemi-octahedron | {3,4}/2, {3,4}3 | 4 | 6 | 3 | 1
Hemi-dodecahedron | {5,3}/2, {5,3}5 | 6 | 15 | 10 | 1
Hemi-icosahedron | {3,5}/2, {3,5}5 | 10 | 15 | 6 | 1
Regular projective 4-polytopes
In four dimensions, 5 of the 6 convex regular 4-polytopes generate projective 4-polytopes (the 5-cell is not centrally symmetric). The 3 special cases are the hemi-24-cell, hemi-600-cell, and hemi-120-cell.
4-dimensional regular hemi-polytopes
Name | Coxeter symbol | McMullen symbol | Cells | Faces | Edges | Vertices | χ
Hemi-tesseract | {4,3,3}/2 | {4,3,3}4 | 4 | 12 | 16 | 8 | 0
Hemi-16-cell | {3,3,4}/2 | {3,3,4}4 | 8 | 16 | 12 | 4 | 0
Hemi-24-cell | {3,4,3}/2 | {3,4,3}6 | 12 | 48 | 48 | 12 | 0
Hemi-120-cell | {5,3,3}/2 | {5,3,3}15 | 60 | 360 | 600 | 300 | 0
Hemi-600-cell | {3,3,5}/2 | {3,3,5}15 | 300 | 600 | 360 | 60 | 0
Regular projective 5-polytopes
There are only 2 convex regular projective hemi-polytopes in dimensions 5 or higher: they are the hemi versions of the regular hypercube and orthoplex. They are tabulated below in dimension 5, for example:
Name | Schläfli | 4-faces | Cells | Faces | Edges | Vertices | χ
Hemi-penteract | {4,3,3,3}/2 | 5 | 20 | 40 | 40 | 16 | 1
Hemi-pentacross | {3,3,3,4}/2 | 16 | 40 | 40 | 20 | 5 | 1
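A hemi-polytope has half as many of each element as its centrally symmetric original, so the tables above can be regenerated from the standard counts; a minimal sketch:

```python
# Halve each element count (vertices, edges, faces, ...) and recompute chi.
def hemi(counts):
    return tuple(c // 2 for c in counts)

originals = {
    "hemi-cube":      (8, 12, 6),            # cube: V, E, F
    "hemi-tesseract": (16, 32, 24, 8),       # tesseract: V, E, F, C
    "hemi-penteract": (32, 80, 80, 40, 10),  # 5-cube: V, E, F, C, 4-faces
}
for name, counts in originals.items():
    halved = hemi(counts)
    chi = sum((-1) ** k * c for k, c in enumerate(halved))
    print(name, halved, "chi =", chi)
# hemi-cube (4, 6, 3) chi = 1
# hemi-tesseract (8, 16, 12, 4) chi = 0
# hemi-penteract (16, 40, 40, 20, 5) chi = 1
```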
Apeirotopes
An apeirotope or infinite polytope is a polytope which has infinitely many facets. An n-apeirotope is an infinite n-polytope: a 2-apeirotope or apeirogon is an infinite polygon, a 3-apeirotope or apeirohedron is an infinite polyhedron, etc.
There are two main geometric classes of apeirotope:[12]
• Regular honeycombs in n dimensions, which completely fill an n-dimensional space.
• Regular skew apeirotopes, comprising an n-dimensional manifold in a higher space.
One dimension (apeirogons)
The straight apeirogon is a regular tessellation of the line, subdividing it into infinitely many equal segments. It has infinitely many vertices and edges. Its Schläfli symbol is {∞}.
It exists as the limit of the p-gon as p tends to infinity, as follows:
Name | Monogon | Digon | Triangle | Square | Pentagon | Hexagon | Heptagon | p-gon | Apeirogon
Schläfli | {1} | {2} | {3} | {4} | {5} | {6} | {7} | {p} | {∞}
Symmetry | D1, [ ] | D2, [2] | D3, [3] | D4, [4] | D5, [5] | D6, [6] | D7, [7] | Dp, [p] | D∞, [∞]
Apeirogons in the hyperbolic plane, most notably the regular apeirogon, {∞}, can have a curvature just like finite polygons of the Euclidean plane, with the vertices circumscribed by horocycles or hypercycles rather than circles.
Regular apeirogons that are scaled to converge at infinity have the symbol {∞} and exist on horocycles, while more generally they can exist on hypercycles.
{∞} {πi/λ}
Apeirogon on horocycle
Apeirogon on hypercycle
Above are two regular hyperbolic apeirogons in the Poincaré disk model, the right one shows perpendicular reflection lines of divergent fundamental domains, separated by length λ.
Skew apeirogons
A skew apeirogon in two dimensions forms a zig-zag line in the plane. If the zig-zag is even and symmetrical, then the apeirogon is regular.
Skew apeirogons can be constructed in any number of dimensions. In three dimensions, a regular skew apeirogon traces out a helical spiral and may be either left- or right-handed.
2-dimensions 3-dimensions
Zig-zag apeirogon
Helix apeirogon
Euclidean tilings
There are three regular tessellations of the plane. All three have an Euler characteristic (χ) of 0.
Name | Schläfli {p,q} | Symmetry
Square tiling (quadrille) | {4,4} | p4m, [4,4], (*442)
Triangular tiling (deltille) | {3,6} | p6m, [6,3], (*632)
Hexagonal tiling (hextille) | {6,3} | p6m, [6,3], (*632)
There are two improper regular tilings: {∞,2}, an apeirogonal dihedron, made from two apeirogons, each filling half the plane; and secondly, its dual, {2,∞}, an apeirogonal hosohedron, seen as an infinite set of parallel lines.
{∞,2},
{2,∞},
Euclidean star-tilings
There are no regular plane tilings of star polygons. There are many enumerations that fit in the plane (1/p + 1/q = 1/2), like {8/3,8}, {10/3,5}, {5/2,10}, {12/5,12}, etc., but none repeat periodically.
Hyperbolic tilings
Tessellations of hyperbolic 2-space are hyperbolic tilings. There are infinitely many regular tilings in H2. As stated above, every positive integer pair {p,q} such that 1/p + 1/q < 1/2 gives a hyperbolic tiling. In fact, for the general Schwarz triangle (p, q, r) the same holds true for 1/p + 1/q + 1/r < 1.
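The inequality can be tested exactly with rational arithmetic; a small sketch (the function name is an illustrative choice):

```python
from fractions import Fraction

def surface_of(p, q):
    """Classify the regular tiling {p,q} by the sign of 1/p + 1/q - 1/2."""
    excess = Fraction(1, p) + Fraction(1, q) - Fraction(1, 2)
    if excess > 0:
        return "spherical (a polyhedron)"
    if excess == 0:
        return "Euclidean tiling"
    return "hyperbolic tiling"

print(surface_of(5, 3))  # dodecahedron: spherical
print(surface_of(6, 3))  # hextille: Euclidean
print(surface_of(7, 3))  # order-3 heptagonal tiling: hyperbolic
```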
There are a number of different ways to display the hyperbolic plane, including the Poincaré disc model, which maps the plane into a circle, as shown below. All of the polygon faces in the tilings below are equal-sized; they only appear to shrink near the boundary because of the projection, much like the effect of a camera's fisheye lens.
There are infinitely many flat regular 3-apeirotopes (apeirohedra) as regular tilings of the hyperbolic plane, of the form {p,q}, with p+q<pq/2. (previously listed above as tessellations)
• {3,7}, {3,8}, {3,9} ... {3,∞}
• {4,5}, {4,6}, {4,7} ... {4,∞}
• {5,4}, {5,5}, {5,6} ... {5,∞}
• {6,4}, {6,5}, {6,6} ... {6,∞}
• {7,3}, {7,4}, {7,5} ... {7,∞}
• {8,3}, {8,4}, {8,5} ... {8,∞}
• {9,3}, {9,4}, {9,5} ... {9,∞}
• ...
• {∞,3}, {∞,4}, {∞,5} ... {∞,∞}
A sampling:
Regular hyperbolic tiling table
Spherical (improper/Platonic)/Euclidean/hyperbolic (Poincaré disc: compact/paracompact/noncompact) tessellations with their Schläfli symbol
p \ q | 2 | 3 | 4 | 5 | 6 | 7 | 8 | ... ∞ | ... iπ/λ
2 | {2,2} | {2,3} | {2,4} | {2,5} | {2,6} | {2,7} | {2,8} | {2,∞} | {2,iπ/λ}
3 | {3,2} | {3,3} (tetrahedron) | {3,4} (octahedron) | {3,5} (icosahedron) | {3,6} (deltille) | {3,7} | {3,8} | {3,∞} | {3,iπ/λ}
4 | {4,2} | {4,3} (cube) | {4,4} (quadrille) | {4,5} | {4,6} | {4,7} | {4,8} | {4,∞} | {4,iπ/λ}
5 | {5,2} | {5,3} (dodecahedron) | {5,4} | {5,5} | {5,6} | {5,7} | {5,8} | {5,∞} | {5,iπ/λ}
6 | {6,2} | {6,3} (hextille) | {6,4} | {6,5} | {6,6} | {6,7} | {6,8} | {6,∞} | {6,iπ/λ}
7 | {7,2} | {7,3} | {7,4} | {7,5} | {7,6} | {7,7} | {7,8} | {7,∞} | {7,iπ/λ}
8 | {8,2} | {8,3} | {8,4} | {8,5} | {8,6} | {8,7} | {8,8} | {8,∞} | {8,iπ/λ}
∞ | {∞,2} | {∞,3} | {∞,4} | {∞,5} | {∞,6} | {∞,7} | {∞,8} | {∞,∞} | {∞,iπ/λ}
iπ/λ | {iπ/λ,2} | {iπ/λ,3} | {iπ/λ,4} | {iπ/λ,5} | {iπ/λ,6} | {iπ/λ,7} | {iπ/λ,8} | {iπ/λ,∞} | {iπ/λ,iπ/λ}
The tilings {p, ∞} have ideal vertices, on the edge of the Poincaré disc model. Their duals {∞, p} have ideal apeirogonal faces, meaning that they are inscribed in horocycles. One could go further (as is done in the table above) and find tilings with ultra-ideal vertices, outside the Poincaré disc, which are dual to tiles inscribed in hypercycles; in what is symbolised {p, iπ/λ} above, infinitely many tiles still fit around each ultra-ideal vertex.[13] (Parallel lines in extended hyperbolic space meet at an ideal point; ultraparallel lines meet at an ultra-ideal point.)[14]
Hyperbolic star-tilings
There are 2 infinite forms of hyperbolic tilings whose faces or vertex figures are star polygons: {m/2, m} and their duals {m, m/2} with m = 7, 9, 11, .... The {m/2, m} tilings are stellations of the {m, 3} tilings while the {m, m/2} dual tilings are facetings of the {3, m} tilings and greatenings of the {m, 3} tilings.
The patterns {m/2, m} and {m, m/2} continue for odd m < 7 as polyhedra: when m = 5, we obtain the small stellated dodecahedron and great dodecahedron, and when m = 3, the case degenerates to a tetrahedron. The other two Kepler–Poinsot polyhedra (the great stellated dodecahedron and great icosahedron) do not have regular hyperbolic tiling analogues. If m is even, depending on how we choose to define {m/2}, we can either obtain degenerate double covers of other tilings or compound tilings.
Name | Schläfli | Face type {p} | Vertex figure {q} | Density | Symmetry | Dual
Order-7 heptagrammic tiling | {7/2,7} | {7/2} | {7} | 3 | *732, [7,3] | Heptagrammic-order heptagonal tiling
Heptagrammic-order heptagonal tiling | {7,7/2} | {7} | {7/2} | 3 | *732, [7,3] | Order-7 heptagrammic tiling
Order-9 enneagrammic tiling | {9/2,9} | {9/2} | {9} | 3 | *932, [9,3] | Enneagrammic-order enneagonal tiling
Enneagrammic-order enneagonal tiling | {9,9/2} | {9} | {9/2} | 3 | *932, [9,3] | Order-9 enneagrammic tiling
Order-11 hendecagrammic tiling | {11/2,11} | {11/2} | {11} | 3 | *11.3.2, [11,3] | Hendecagrammic-order hendecagonal tiling
Hendecagrammic-order hendecagonal tiling | {11,11/2} | {11} | {11/2} | 3 | *11.3.2, [11,3] | Order-11 hendecagrammic tiling
Order-p p-grammic tiling | {p/2,p} | {p/2} | {p} | 3 | *p32, [p,3] | p-grammic-order p-gonal tiling
p-grammic-order p-gonal tiling | {p,p/2} | {p} | {p/2} | 3 | *p32, [p,3] | Order-p p-grammic tiling
Skew apeirohedra in Euclidean 3-space
There are three regular skew apeirohedra in Euclidean 3-space, with regular skew polygon vertex figures.[15][16][17] They share the same vertex arrangement and edge arrangement as 3 convex uniform honeycombs.
• 6 squares around each vertex: {4,6|4}
• 4 hexagons around each vertex: {6,4|4}
• 6 hexagons around each vertex: {6,6|3}
Regular skew polyhedra
{4,6|4}
{6,4|4}
{6,6|3}
There are thirty regular apeirohedra in Euclidean 3-space.[19] These include those listed above, as well as 8 other "pure" apeirohedra, all related to the cubic honeycomb, {4,3,4}, with others having skew polygon faces: {6,6}4, {4,6}4, {6,4}6, {∞,3}a, {∞,3}b, {∞,4}.*3, {∞,4}6,4, {∞,6}4,4, and {∞,6}6,3.
Skew apeirohedra in hyperbolic 3-space
There are 31 regular skew apeirohedra in hyperbolic 3-space:[20]
• 14 are compact: {8,10|3}, {10,8|3}, {10,4|3}, {4,10|3}, {6,4|5}, {4,6|5}, {10,6|3}, {6,10|3}, {8,8|3}, {6,6|4}, {10,10|3},{6,6|5}, {8,6|3}, and {6,8|3}.
• 17 are paracompact: {12,10|3}, {10,12|3}, {12,4|3}, {4,12|3}, {6,4|6}, {4,6|6}, {8,4|4}, {4,8|4}, {12,6|3}, {6,12|3}, {12,12|3}, {6,6|6}, {8,6|4}, {6,8|4}, {12,8|3}, {8,12|3}, and {8,8|4}.
Tessellations of Euclidean 3-space
There is only one non-degenerate regular tessellation of 3-space (honeycombs), {4, 3, 4}:[21]
Name | Schläfli {p,q,r} | Cell type {p,q} | Face type {p} | Edge figure {r} | Vertex figure {q,r} | χ | Dual
Cubic honeycomb | {4,3,4} | {4,3} | {4} | {4} | {3,4} | 0 | Self-dual
Improper tessellations of Euclidean 3-space
There are six improper regular tessellations, pairs based on the three regular Euclidean tilings. Their cells and vertex figures are all regular hosohedra {2,n}, dihedra, {n,2}, and Euclidean tilings. These improper regular tilings are constructionally related to prismatic uniform honeycombs by truncation operations. They are higher-dimensional analogues of the order-2 apeirogonal tiling and apeirogonal hosohedron.
Schläfli {p,q,r} | Cell type {p,q} | Face type {p} | Edge figure {r} | Vertex figure {q,r}
{2,4,4} | {2,4} | {2} | {4} | {4,4}
{2,3,6} | {2,3} | {2} | {6} | {3,6}
{2,6,3} | {2,6} | {2} | {3} | {6,3}
{4,4,2} | {4,4} | {4} | {2} | {4,2}
{3,6,2} | {3,6} | {3} | {2} | {6,2}
{6,3,2} | {6,3} | {6} | {2} | {3,2}
Tessellations of hyperbolic 3-space
There are 15 flat regular honeycombs of hyperbolic 3-space:[22] (previously listed above as tessellations)
• 4 are compact: {3,5,3}, {4,3,5}, {5,3,4}, and {5,3,5}
• while 11 are paracompact: {3,3,6}, {6,3,3}, {3,4,4}, {4,4,3}, {3,6,3}, {4,3,6}, {6,3,4}, {4,4,4}, {5,3,6}, {6,3,5}, and {6,3,6}.
4 compact regular honeycombs
{5,3,4}
{5,3,5}
{4,3,5}
{3,5,3}
4 of 11 paracompact regular honeycombs
{3,4,4}
{3,6,3}
{4,4,3}
{4,4,4}
Tessellations of hyperbolic 3-space can be called hyperbolic honeycombs. There are 15 hyperbolic honeycombs in H3, 4 compact and 11 paracompact.
4 compact regular honeycombs
Name | Schläfli {p,q,r} | Cell type {p,q} | Face type {p} | Edge figure {r} | Vertex figure {q,r} | χ | Dual
Icosahedral honeycomb | {3,5,3} | {3,5} | {3} | {3} | {5,3} | 0 | Self-dual
Order-5 cubic honeycomb | {4,3,5} | {4,3} | {4} | {5} | {3,5} | 0 | {5,3,4}
Order-4 dodecahedral honeycomb | {5,3,4} | {5,3} | {5} | {4} | {3,4} | 0 | {4,3,5}
Order-5 dodecahedral honeycomb | {5,3,5} | {5,3} | {5} | {5} | {3,5} | 0 | Self-dual
There are also 11 paracompact H3 honeycombs (those with infinite (Euclidean) cells and/or vertex figures): {3,3,6}, {6,3,3}, {3,4,4}, {4,4,3}, {3,6,3}, {4,3,6}, {6,3,4}, {4,4,4}, {5,3,6}, {6,3,5}, and {6,3,6}.
11 paracompact regular honeycombs
Name | Schläfli {p,q,r} | Cell type {p,q} | Face type {p} | Edge figure {r} | Vertex figure {q,r} | χ | Dual
Order-6 tetrahedral honeycomb | {3,3,6} | {3,3} | {3} | {6} | {3,6} | 0 | {6,3,3}
Hexagonal tiling honeycomb | {6,3,3} | {6,3} | {6} | {3} | {3,3} | 0 | {3,3,6}
Order-4 octahedral honeycomb | {3,4,4} | {3,4} | {3} | {4} | {4,4} | 0 | {4,4,3}
Square tiling honeycomb | {4,4,3} | {4,4} | {4} | {3} | {4,3} | 0 | {3,4,4}
Triangular tiling honeycomb | {3,6,3} | {3,6} | {3} | {3} | {6,3} | 0 | Self-dual
Order-6 cubic honeycomb | {4,3,6} | {4,3} | {4} | {6} | {3,6} | 0 | {6,3,4}
Order-4 hexagonal tiling honeycomb | {6,3,4} | {6,3} | {6} | {4} | {3,4} | 0 | {4,3,6}
Order-4 square tiling honeycomb | {4,4,4} | {4,4} | {4} | {4} | {4,4} | 0 | Self-dual
Order-6 dodecahedral honeycomb | {5,3,6} | {5,3} | {5} | {6} | {3,6} | 0 | {6,3,5}
Order-5 hexagonal tiling honeycomb | {6,3,5} | {6,3} | {6} | {5} | {3,5} | 0 | {5,3,6}
Order-6 hexagonal tiling honeycomb | {6,3,6} | {6,3} | {6} | {6} | {3,6} | 0 | Self-dual
Noncompact solutions exist as Lorentzian Coxeter groups, and can be visualized with open domains in hyperbolic space (the fundamental tetrahedron having ultra-ideal vertices). All honeycombs that have hyperbolic cells or vertex figures and do not have a 2 in their Schläfli symbol are noncompact.
Spherical (improper/Platonic)/Euclidean/hyperbolic(compact/paracompact/noncompact) honeycombs {p,3,r}
{p,3} \ r | 2 | 3 | 4 | 5 | 6 | 7 | 8 | ... ∞
{2,3} | {2,3,2} | {2,3,3} | {2,3,4} | {2,3,5} | {2,3,6} | {2,3,7} | {2,3,8} | {2,3,∞}
{3,3} | {3,3,2} | {3,3,3} | {3,3,4} | {3,3,5} | {3,3,6} | {3,3,7} | {3,3,8} | {3,3,∞}
{4,3} | {4,3,2} | {4,3,3} | {4,3,4} | {4,3,5} | {4,3,6} | {4,3,7} | {4,3,8} | {4,3,∞}
{5,3} | {5,3,2} | {5,3,3} | {5,3,4} | {5,3,5} | {5,3,6} | {5,3,7} | {5,3,8} | {5,3,∞}
{6,3} | {6,3,2} | {6,3,3} | {6,3,4} | {6,3,5} | {6,3,6} | {6,3,7} | {6,3,8} | {6,3,∞}
{7,3} | {7,3,2} | {7,3,3} | {7,3,4} | {7,3,5} | {7,3,6} | {7,3,7} | {7,3,8} | {7,3,∞}
{8,3} | {8,3,2} | {8,3,3} | {8,3,4} | {8,3,5} | {8,3,6} | {8,3,7} | {8,3,8} | {8,3,∞}
{∞,3} | {∞,3,2} | {∞,3,3} | {∞,3,4} | {∞,3,5} | {∞,3,6} | {∞,3,7} | {∞,3,8} | {∞,3,∞}
{p,4,r}
{p,4} \ r | 2 | 3 | 4 | 5 | 6 | ∞
{2,4} | {2,4,2} | {2,4,3} | {2,4,4} | {2,4,5} | {2,4,6} | {2,4,∞}
{3,4} | {3,4,2} | {3,4,3} | {3,4,4} | {3,4,5} | {3,4,6} | {3,4,∞}
{4,4} | {4,4,2} | {4,4,3} | {4,4,4} | {4,4,5} | {4,4,6} | {4,4,∞}
{5,4} | {5,4,2} | {5,4,3} | {5,4,4} | {5,4,5} | {5,4,6} | {5,4,∞}
{6,4} | {6,4,2} | {6,4,3} | {6,4,4} | {6,4,5} | {6,4,6} | {6,4,∞}
{∞,4} | {∞,4,2} | {∞,4,3} | {∞,4,4} | {∞,4,5} | {∞,4,6} | {∞,4,∞}
{p,5,r}
{p,5} \ r | 2 | 3 | 4 | 5 | 6 | ∞
{2,5} | {2,5,2} | {2,5,3} | {2,5,4} | {2,5,5} | {2,5,6} | {2,5,∞}
{3,5} | {3,5,2} | {3,5,3} | {3,5,4} | {3,5,5} | {3,5,6} | {3,5,∞}
{4,5} | {4,5,2} | {4,5,3} | {4,5,4} | {4,5,5} | {4,5,6} | {4,5,∞}
{5,5} | {5,5,2} | {5,5,3} | {5,5,4} | {5,5,5} | {5,5,6} | {5,5,∞}
{6,5} | {6,5,2} | {6,5,3} | {6,5,4} | {6,5,5} | {6,5,6} | {6,5,∞}
{∞,5} | {∞,5,2} | {∞,5,3} | {∞,5,4} | {∞,5,5} | {∞,5,6} | {∞,5,∞}
{p,6,r}
{p,6} \ r | 2 | 3 | 4 | 5 | 6 | ∞
{2,6} | {2,6,2} | {2,6,3} | {2,6,4} | {2,6,5} | {2,6,6} | {2,6,∞}
{3,6} | {3,6,2} | {3,6,3} | {3,6,4} | {3,6,5} | {3,6,6} | {3,6,∞}
{4,6} | {4,6,2} | {4,6,3} | {4,6,4} | {4,6,5} | {4,6,6} | {4,6,∞}
{5,6} | {5,6,2} | {5,6,3} | {5,6,4} | {5,6,5} | {5,6,6} | {5,6,∞}
{6,6} | {6,6,2} | {6,6,3} | {6,6,4} | {6,6,5} | {6,6,6} | {6,6,∞}
{∞,6} | {∞,6,2} | {∞,6,3} | {∞,6,4} | {∞,6,5} | {∞,6,6} | {∞,6,∞}
{p,7,r}
{p,7} \ r | 2 | 3 | 4 | 5 | 6 | ∞
{2,7} | {2,7,2} | {2,7,3} | {2,7,4} | {2,7,5} | {2,7,6} | {2,7,∞}
{3,7} | {3,7,2} | {3,7,3} | {3,7,4} | {3,7,5} | {3,7,6} | {3,7,∞}
{4,7} | {4,7,2} | {4,7,3} | {4,7,4} | {4,7,5} | {4,7,6} | {4,7,∞}
{5,7} | {5,7,2} | {5,7,3} | {5,7,4} | {5,7,5} | {5,7,6} | {5,7,∞}
{6,7} | {6,7,2} | {6,7,3} | {6,7,4} | {6,7,5} | {6,7,6} | {6,7,∞}
{∞,7} | {∞,7,2} | {∞,7,3} | {∞,7,4} | {∞,7,5} | {∞,7,6} | {∞,7,∞}
{p,8,r}
{p,8} \ r | 2 | 3 | 4 | 5 | 6 | ∞
{2,8} | {2,8,2} | {2,8,3} | {2,8,4} | {2,8,5} | {2,8,6} | {2,8,∞}
{3,8} | {3,8,2} | {3,8,3} | {3,8,4} | {3,8,5} | {3,8,6} | {3,8,∞}
{4,8} | {4,8,2} | {4,8,3} | {4,8,4} | {4,8,5} | {4,8,6} | {4,8,∞}
{5,8} | {5,8,2} | {5,8,3} | {5,8,4} | {5,8,5} | {5,8,6} | {5,8,∞}
{6,8} | {6,8,2} | {6,8,3} | {6,8,4} | {6,8,5} | {6,8,6} | {6,8,∞}
{∞,8} | {∞,8,2} | {∞,8,3} | {∞,8,4} | {∞,8,5} | {∞,8,6} | {∞,8,∞}
{p,∞,r}
{p,∞} \ r | 2 | 3 | 4 | 5 | 6 | ∞
{2,∞} | {2,∞,2} | {2,∞,3} | {2,∞,4} | {2,∞,5} | {2,∞,6} | {2,∞,∞}
{3,∞} | {3,∞,2} | {3,∞,3} | {3,∞,4} | {3,∞,5} | {3,∞,6} | {3,∞,∞}
{4,∞} | {4,∞,2} | {4,∞,3} | {4,∞,4} | {4,∞,5} | {4,∞,6} | {4,∞,∞}
{5,∞} | {5,∞,2} | {5,∞,3} | {5,∞,4} | {5,∞,5} | {5,∞,6} | {5,∞,∞}
{6,∞} | {6,∞,2} | {6,∞,3} | {6,∞,4} | {6,∞,5} | {6,∞,6} | {6,∞,∞}
{∞,∞} | {∞,∞,2} | {∞,∞,3} | {∞,∞,4} | {∞,∞,5} | {∞,∞,6} | {∞,∞,∞}
There are no regular hyperbolic star-honeycombs in H3: all forms with a regular star polyhedron as cell, vertex figure or both end up being spherical.
Ideal vertices now appear when the vertex figure is a Euclidean tiling, becoming inscribable in a horosphere rather than a sphere. They are dual to ideal cells (Euclidean tilings rather than finite polyhedra). As the last number in the Schläfli symbol rises further, the vertex figure becomes hyperbolic, and vertices become ultra-ideal (so the edges do not meet within hyperbolic space). In honeycombs {p, q, ∞} the edges intersect the Poincaré ball only in one ideal point; the rest of the edge has become ultra-ideal. Continuing further would lead to edges that are completely ultra-ideal, both for the honeycomb and for the fundamental simplex (though still infinitely many {p, q} would meet at such edges). In general, when the last number of the Schläfli symbol becomes ∞, faces of codimension two intersect the Poincaré hyperball only in one ideal point.[13]
Tessellations of Euclidean 4-space
There are three kinds of infinite regular tessellations (honeycombs) that can tessellate Euclidean four-dimensional space:
3 regular Euclidean honeycombs
Name | Schläfli {p,q,r,s} | Facet type {p,q,r} | Cell type {p,q} | Face type {p} | Face figure {s} | Edge figure {r,s} | Vertex figure {q,r,s} | Dual
Tesseractic honeycomb | {4,3,3,4} | {4,3,3} | {4,3} | {4} | {4} | {3,4} | {3,3,4} | Self-dual
16-cell honeycomb | {3,3,4,3} | {3,3,4} | {3,3} | {3} | {3} | {4,3} | {3,4,3} | {3,4,3,3}
24-cell honeycomb | {3,4,3,3} | {3,4,3} | {3,4} | {3} | {3} | {3,3} | {4,3,3} | {3,3,4,3}
Projected portion of {4,3,3,4}
(Tesseractic honeycomb)
Projected portion of {3,3,4,3}
(16-cell honeycomb)
Projected portion of {3,4,3,3}
(24-cell honeycomb)
There are also the two improper cases {4,3,4,2} and {2,4,3,4}.
There are three flat regular honeycombs of Euclidean 4-space:[21]
• {4,3,3,4}, {3,3,4,3}, and {3,4,3,3}.
There are seven flat regular convex honeycombs of hyperbolic 4-space:[22]
• 5 are compact: {3,3,3,5}, {5,3,3,3}, {4,3,3,5}, {5,3,3,4}, {5,3,3,5}
• 2 are paracompact: {3,4,3,4}, and {4,3,4,3}.
There are four flat regular star honeycombs of hyperbolic 4-space:[22]
• {5/2,5,3,3}, {3,3,5,5/2}, {3,5,5/2,5}, and {5,5/2,5,3}.
Tessellations of hyperbolic 4-space
There are seven convex regular honeycombs and four star-honeycombs in H4 space.[23] Five convex ones are compact, and two are paracompact.
Five compact regular honeycombs in H4:
5 compact regular honeycombs
Name | Schläfli {p,q,r,s} | Facet type {p,q,r} | Cell type {p,q} | Face type {p} | Face figure {s} | Edge figure {r,s} | Vertex figure {q,r,s} | Dual
Order-5 5-cell honeycomb | {3,3,3,5} | {3,3,3} | {3,3} | {3} | {5} | {3,5} | {3,3,5} | {5,3,3,3}
120-cell honeycomb | {5,3,3,3} | {5,3,3} | {5,3} | {5} | {3} | {3,3} | {3,3,3} | {3,3,3,5}
Order-5 tesseractic honeycomb | {4,3,3,5} | {4,3,3} | {4,3} | {4} | {5} | {3,5} | {3,3,5} | {5,3,3,4}
Order-4 120-cell honeycomb | {5,3,3,4} | {5,3,3} | {5,3} | {5} | {4} | {3,4} | {3,3,4} | {4,3,3,5}
Order-5 120-cell honeycomb | {5,3,3,5} | {5,3,3} | {5,3} | {5} | {5} | {3,5} | {3,3,5} | Self-dual
The two paracompact regular H4 honeycombs are: {3,4,3,4}, {4,3,4,3}.
2 paracompact regular honeycombs
Name | Schläfli {p,q,r,s} | Facet type {p,q,r} | Cell type {p,q} | Face type {p} | Face figure {s} | Edge figure {r,s} | Vertex figure {q,r,s} | Dual
Order-4 24-cell honeycomb | {3,4,3,4} | {3,4,3} | {3,4} | {3} | {4} | {3,4} | {4,3,4} | {4,3,4,3}
Cubic honeycomb honeycomb | {4,3,4,3} | {4,3,4} | {4,3} | {4} | {3} | {4,3} | {3,4,3} | {3,4,3,4}
Noncompact solutions exist as Lorentzian Coxeter groups, and can be visualized with open domains in hyperbolic space (the fundamental 5-cell having some parts inaccessible beyond infinity). All honeycombs which are not shown in the set of tables below and do not have 2 in their Schläfli symbol are noncompact.
Spherical/Euclidean/hyperbolic(compact/paracompact/noncompact) honeycombs {p,q,r,s}
q=3, s=3
p \ r | 3 | 4 | 5
3 | {3,3,3,3} | {3,3,4,3} | {3,3,5,3}
4 | {4,3,3,3} | {4,3,4,3} | {4,3,5,3}
5 | {5,3,3,3} | {5,3,4,3} | {5,3,5,3}
q=3, s=4
p \ r | 3 | 4
3 | {3,3,3,4} | {3,3,4,4}
4 | {4,3,3,4} | {4,3,4,4}
5 | {5,3,3,4} | {5,3,4,4}
q=3, s=5
p \ r | 3 | 4
3 | {3,3,3,5} | {3,3,4,5}
4 | {4,3,3,5} | {4,3,4,5}
5 | {5,3,3,5} | {5,3,4,5}
q=4, s=3
p \ r | 3 | 4
3 | {3,4,3,3} | {3,4,4,3}
4 | {4,4,3,3} | {4,4,4,3}
q=4, s=4
p \ r | 3 | 4
3 | {3,4,3,4} | {3,4,4,4}
4 | {4,4,3,4} | {4,4,4,4}
q=4, s=5
p \ r | 3 | 4
3 | {3,4,3,5} | {3,4,4,5}
4 | {4,4,3,5} | {4,4,4,5}
q=5, s=3
p \ r | 3 | 4
3 | {3,5,3,3} | {3,5,4,3}
4 | {4,5,3,3} | {4,5,4,3}
Star tessellations of hyperbolic 4-space
There are four regular star-honeycombs in H4 space, all compact:
4 compact regular star-honeycombs
Name | Schläfli {p,q,r,s} | Facet type {p,q,r} | Cell type {p,q} | Face type {p} | Face figure {s} | Edge figure {r,s} | Vertex figure {q,r,s} | Dual | Density
Small stellated 120-cell honeycomb | {5/2,5,3,3} | {5/2,5,3} | {5/2,5} | {5/2} | {3} | {3,3} | {5,3,3} | {3,3,5,5/2} | 5
Pentagrammic-order 600-cell honeycomb | {3,3,5,5/2} | {3,3,5} | {3,3} | {3} | {5/2} | {5,5/2} | {3,5,5/2} | {5/2,5,3,3} | 5
Order-5 icosahedral 120-cell honeycomb | {3,5,5/2,5} | {3,5,5/2} | {3,5} | {3} | {5} | {5/2,5} | {5,5/2,5} | {5,5/2,5,3} | 10
Great 120-cell honeycomb | {5,5/2,5,3} | {5,5/2,5} | {5,5/2} | {5} | {3} | {5,3} | {5/2,5,3} | {3,5,5/2,5} | 10
Five dimensions (6-apeirotopes)
There is only one flat regular honeycomb of Euclidean 5-space: (previously listed above as tessellations)[21]
• {4,3,3,3,4}
There are five flat regular honeycombs of hyperbolic 5-space, all paracompact: (previously listed above as tessellations)[22]
• {3,3,3,4,3}, {3,4,3,3,3}, {3,3,4,3,3}, {3,4,3,3,4}, and {4,3,3,4,3}
Tessellations of Euclidean 5-space
The hypercubic honeycomb is the only family of regular honeycombs that can tessellate each dimension, five or higher, formed by hypercube facets, four around every ridge.
Name | Schläfli {p1, p2, ..., pn−1} | Facet type | Vertex figure | Dual
Square tiling | {4,4} | {4} | {4} | Self-dual
Cubic honeycomb | {4,3,4} | {4,3} | {3,4} | Self-dual
Tesseractic honeycomb | {4,3^2,4} | {4,3^2} | {3^2,4} | Self-dual
5-cube honeycomb | {4,3^3,4} | {4,3^3} | {3^3,4} | Self-dual
6-cube honeycomb | {4,3^4,4} | {4,3^4} | {3^4,4} | Self-dual
7-cube honeycomb | {4,3^5,4} | {4,3^5} | {3^5,4} | Self-dual
8-cube honeycomb | {4,3^6,4} | {4,3^6} | {3^6,4} | Self-dual
n-hypercubic honeycomb | {4,3^(n−2),4} | {4,3^(n−2)} | {3^(n−2),4} | Self-dual
In E5, there are also the improper cases {4,3,3,4,2}, {2,4,3,3,4}, {3,3,4,3,2}, {2,3,3,4,3}, {3,4,3,3,2}, and {2,3,4,3,3}. In En, {4,3n−3,4,2} and {2,4,3n−3,4} are always improper Euclidean tessellations.
Tessellations of hyperbolic 5-space
There are 5 regular honeycombs in H5, all paracompact, which include infinite (Euclidean) facets or vertex figures: {3,4,3,3,3}, {3,3,4,3,3}, {3,3,3,4,3}, {3,4,3,3,4}, and {4,3,3,4,3}.
There are no compact regular tessellations of hyperbolic space of dimension 5 or higher and no paracompact regular tessellations in hyperbolic space of dimension 6 or higher.
5 paracompact regular honeycombs
Name | Schläfli {p,q,r,s,t} | Facet type {p,q,r,s} | 4-face type {p,q,r} | Cell type {p,q} | Face type {p} | Cell figure {t} | Face figure {s,t} | Edge figure {r,s,t} | Vertex figure {q,r,s,t} | Dual
5-orthoplex honeycomb | {3,3,3,4,3} | {3,3,3,4} | {3,3,3} | {3,3} | {3} | {3} | {4,3} | {3,4,3} | {3,3,4,3} | {3,4,3,3,3}
24-cell honeycomb honeycomb | {3,4,3,3,3} | {3,4,3,3} | {3,4,3} | {3,4} | {3} | {3} | {3,3} | {3,3,3} | {4,3,3,3} | {3,3,3,4,3}
16-cell honeycomb honeycomb | {3,3,4,3,3} | {3,3,4,3} | {3,3,4} | {3,3} | {3} | {3} | {3,3} | {4,3,3} | {3,4,3,3} | Self-dual
Order-4 24-cell honeycomb honeycomb | {3,4,3,3,4} | {3,4,3,3} | {3,4,3} | {3,4} | {3} | {4} | {3,4} | {3,3,4} | {4,3,3,4} | {4,3,3,4,3}
Tesseractic honeycomb honeycomb | {4,3,3,4,3} | {4,3,3,4} | {4,3,3} | {4,3} | {4} | {3} | {4,3} | {3,4,3} | {3,3,4,3} | {3,4,3,3,4}
Since there are no regular star n-polytopes for n ≥ 5, that could be potential cells or vertex figures, there are no more hyperbolic star honeycombs in Hn for n ≥ 5.
Tessellations of hyperbolic 6-space and higher
There are no regular compact or paracompact tessellations of hyperbolic space of dimension 6 or higher. However, any Schläfli symbol of the form {p,q,r,s,...} not covered above (p,q,r,s,... natural numbers above 2, or infinity) will form a noncompact tessellation of hyperbolic n-space.[13]
Compound polytopes
Main article: Polytope compound
Two dimensional compounds
For any natural number n, there are regular n-pointed star polygons with Schläfli symbols {n/m} for all m such that m < n/2 (strictly speaking, {n/m} = {n/(n−m)}) and m and n are coprime. When m and n are not coprime, the star polygon obtained will be a regular polygon with n/m sides. A new figure is obtained by rotating these regular n/m-gons one vertex to the left on the original polygon until the number of vertices rotated equals n/m minus one, and combining these figures. An extreme case of this is where n/m is 2, producing a figure consisting of n/2 straight line segments; this is called a degenerate star polygon.
In other cases where n and m have a common factor, a star polygon for a lower n is obtained, and rotated versions can be combined. These figures are called star figures, improper star polygons or compound polygons. The same notation {n/m} is often used for them, although authorities such as Grünbaum (1994) regard (with some justification) the form k{n} as being more correct, where usually k = m.
A further complication comes when we compound two or more star polygons, as for example two pentagrams, differing by a rotation of 36°, inscribed in a decagon. This is correctly written in the form k{n/m}, as 2{5/2}, rather than the commonly used {10/4}.
Coxeter's extended notation for compounds is of the form c{m,n,...}[d{p,q,...}]e{s,t,...}, indicating that d distinct {p,q,...}'s together cover the vertices of {m,n,...} c times and the facets of {s,t,...} e times. If no regular {m,n,...} exists, the first part of the notation is removed, leaving [d{p,q,...}]e{s,t,...}; the opposite holds if no regular {s,t,...} exists. The dual of c{m,n,...}[d{p,q,...}]e{s,t,...} is e{t,s,...}[d{q,p,...}]c{n,m,...}. If c or e are 1, they may be omitted. For compound polygons, this notation reduces to {nk}[k{n/m}]{nk}: for example, the hexagram may be written thus as {6}[2{3}]{6}.
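The reduction from {n/m} to the star figure k{n′/m′} is a gcd computation; the sketch below (the helper name is hypothetical) implements the rule described above:

```python
from math import gcd

def star_form(n, m):
    """Return '{n/m}' when n, m are coprime (a regular star polygon);
    otherwise return the star figure k{n'/m'} with k = gcd(n, m)."""
    g = gcd(n, m)
    if g == 1:
        return f"{{{n}/{m}}}" if m > 1 else f"{{{n}}}"
    return f"{g}{star_form(n // g, m // g)}"

print(star_form(6, 2))   # 2{3}: the hexagram {6}[2{3}]{6} as a star figure
print([star_form(10, m) for m in range(1, 5)])
# ['{10}', '2{5}', '{10/3}', '2{5/2}']
```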
Examples for n=2..10, nk≤30
2{2}
3{2}
4{2}
5{2}
6{2}
7{2}
8{2}
9{2}
10{2}
11{2}
12{2}
13{2}
14{2}
15{2}
2{3}
3{3}
4{3}
5{3}
6{3}
7{3}
8{3}
9{3}
10{3}
2{4}
3{4}
4{4}
5{4}
6{4}
7{4}
2{5}
3{5}
4{5}
5{5}
6{5}
2{5/2}
3{5/2}
4{5/2}
5{5/2}
6{5/2}
2{6}
3{6}
4{6}
5{6}
2{7}
3{7}
4{7}
2{7/2}
3{7/2}
4{7/2}
2{7/3}
3{7/3}
4{7/3}
2{8}
3{8}
2{8/3}
3{8/3}
2{9}
3{9}
2{9/2}
3{9/2}
2{9/4}
3{9/4}
2{10}
3{10}
2{10/3}
3{10/3}
2{11}
2{11/2}
2{11/3}
2{11/4}
2{11/5}
2{12}
2{12/5}
2{13}
2{13/2}
2{13/3}
2{13/4}
2{13/5}
2{13/6}
2{14}
2{14/3}
2{14/5}
2{15}
2{15/2}
2{15/4}
2{15/7}
Regular skew polygons also create compounds, seen in the edges of a prismatic compound of antiprisms, for instance:
Regular compound skew polygon
Compound skew squares: two {2}#{ } and three {2}#{ }; compound skew hexagons: two {3}#{ }; compound skew decagons: two {5/3}#{ }
Three dimensional compounds
A regular polyhedron compound can be defined as a compound which, like a regular polyhedron, is vertex-transitive, edge-transitive, and face-transitive. With this definition there are 5 regular compounds.
Symmetry: [4,3], Oh (2 {3,3}); [5,3]+, I (5 {3,3}); [5,3], Ih (10 {3,3}, 5 {4,3}, 5 {3,4})
Duality: 2 {3,3} and 10 {3,3} are self-dual; 5 {3,3} is dual to its mirror image; 5 {4,3} and 5 {3,4} are a dual pair
Coxeter: {4,3}[2{3,3}]{3,4}, {5,3}[5{3,3}]{3,5}, 2{5,3}[10{3,3}]2{3,5}, 2{5,3}[5{4,3}], [5{3,4}]2{3,5}
Coxeter's notation for regular compounds is given in the table above, incorporating Schläfli symbols. The material inside the square brackets, [d{p,q}], denotes the components of the compound: d separate {p,q}'s. The material before the square brackets denotes the vertex arrangement of the compound: c{m,n}[d{p,q}] is a compound of d {p,q}'s sharing the vertices of an {m,n} counted c times. The material after the square brackets denotes the facet arrangement of the compound: [d{p,q}]e{s,t} is a compound of d {p,q}'s sharing the faces of {s,t} counted e times. These may be combined: thus c{m,n}[d{p,q}]e{s,t} is a compound of d {p,q}'s sharing the vertices of {m,n} counted c times and the faces of {s,t} counted e times. This notation can be generalised to compounds in any number of dimensions.[24]
Euclidean and hyperbolic plane compounds
There are eighteen two-parameter families of regular compound tessellations of the Euclidean plane. In the hyperbolic plane, five one-parameter families and seventeen isolated cases are known, but the completeness of this listing has not yet been proven.
The Euclidean and hyperbolic compound families 2 {p,p} (4 ≤ p ≤ ∞, p an integer) are analogous to the spherical stella octangula, 2 {3,3}.
A few examples of Euclidean and hyperbolic regular compounds:
• 2 {4,4} (self-dual): {{4,4}} or a{4,4} or {4,4}[2{4,4}]{4,4}
• 2 {6,3} and 2 {3,6} (duals): [2{6,3}]{3,6} and a{6,3} or {6,3}[2{3,6}]
• 2 {∞,∞} (self-dual): {{∞,∞}} or a{∞,∞} or {4,∞}[2{∞,∞}]{∞,4}
• 3 {6,3} and 3 {3,6} (duals): 2{3,6}[3{6,3}]{6,3} and {3,6}[3{3,6}]2{6,3}
• 3 {∞,∞} (self-dual)
Four dimensional compounds
Orthogonal projections
75 {4,3,3} 75 {3,3,4}
Coxeter lists 32 regular compounds of regular 4-polytopes in his book Regular Polytopes.[25] McMullen adds six in his paper New Regular Compounds of 4-Polytopes, in which he also proves that the list is now complete.[26] In the following tables, the superscript (var) indicates that the labeled compounds are distinct from the other compounds with the same symbols.
Self-dual regular compounds
Compound | Constituent | Symmetry | Vertex arrangement | Cell arrangement
120 {3,3,3} | 5-cell | [5,3,3], order 14400[25] | {5,3,3} | {3,3,5}
120 {3,3,3}(var) | 5-cell | order 1200[26] | {5,3,3} | {3,3,5}
720 {3,3,3} | 5-cell | [5,3,3], order 14400[26] | 6{5,3,3} | 6{3,3,5}
5 {3,4,3} | 24-cell | [5,3,3], order 14400[25] | {3,3,5} | {5,3,3}
Regular compounds as dual pairs
Compound 1 | Compound 2 | Symmetry | Vertex arr. (1) | Cell arr. (1) | Vertex arr. (2) | Cell arr. (2)
3 {3,3,4}[27] | 3 {4,3,3} | [3,4,3], order 1152[25] | {3,4,3} | 2{3,4,3} | 2{3,4,3} | {3,4,3}
15 {3,3,4} | 15 {4,3,3} | [5,3,3], order 14400[25] | {3,3,5} | 2{5,3,3} | 2{3,3,5} | {5,3,3}
75 {3,3,4} | 75 {4,3,3} | [5,3,3], order 14400[25] | 5{3,3,5} | 10{5,3,3} | 10{3,3,5} | 5{5,3,3}
75 {3,3,4} | 75 {4,3,3} | [5,3,3], order 14400[25] | {5,3,3} | 2{3,3,5} | 2{5,3,3} | {3,3,5}
75 {3,3,4} | 75 {4,3,3} | order 600[26] | {5,3,3} | 2{3,3,5} | 2{5,3,3} | {3,3,5}
300 {3,3,4} | 300 {4,3,3} | [5,3,3]+, order 7200[25] | 4{5,3,3} | 8{3,3,5} | 8{5,3,3} | 4{3,3,5}
600 {3,3,4} | 600 {4,3,3} | [5,3,3], order 14400[25] | 8{5,3,3} | 16{3,3,5} | 16{5,3,3} | 8{3,3,5}
25 {3,4,3} | 25 {3,4,3} | [5,3,3], order 14400[25] | {5,3,3} | 5{5,3,3} | 5{3,3,5} | {3,3,5}
There are two different compounds of 75 tesseracts: one shares the vertices of a 120-cell, while the other shares the vertices of a 600-cell. It immediately follows therefore that the corresponding dual compounds of 75 16-cells are also different.
Self-dual star compounds
Compound | Symmetry | Vertex arrangement | Cell arrangement
5 {5,5/2,5} | [5,3,3]+, order 7200[25] | {5,3,3} | {3,3,5}
10 {5,5/2,5} | [5,3,3], order 14400[25] | 2{5,3,3} | 2{3,3,5}
5 {5/2,5,5/2} | [5,3,3]+, order 7200[25] | {5,3,3} | {3,3,5}
10 {5/2,5,5/2} | [5,3,3], order 14400[25] | 2{5,3,3} | 2{3,3,5}
Regular star compounds as dual pairs
Compound 1 | Compound 2 | Symmetry | Vertex arr. (1) | Cell arr. (1) | Vertex arr. (2) | Cell arr. (2)
5 {3,5,5/2} | 5 {5/2,5,3} | [5,3,3]+, order 7200[25] | {5,3,3} | {3,3,5} | {5,3,3} | {3,3,5}
10 {3,5,5/2} | 10 {5/2,5,3} | [5,3,3], order 14400[25] | 2{5,3,3} | 2{3,3,5} | 2{5,3,3} | 2{3,3,5}
5 {5,5/2,3} | 5 {3,5/2,5} | [5,3,3]+, order 7200[25] | {5,3,3} | {3,3,5} | {5,3,3} | {3,3,5}
10 {5,5/2,3} | 10 {3,5/2,5} | [5,3,3], order 14400[25] | 2{5,3,3} | 2{3,3,5} | 2{5,3,3} | 2{3,3,5}
5 {5/2,3,5} | 5 {5,3,5/2} | [5,3,3]+, order 7200[25] | {5,3,3} | {3,3,5} | {5,3,3} | {3,3,5}
10 {5/2,3,5} | 10 {5,3,5/2} | [5,3,3], order 14400[25] | 2{5,3,3} | 2{3,3,5} | 2{5,3,3} | 2{3,3,5}
There are also fourteen partially regular compounds that are either vertex-transitive or cell-transitive but not both. The seven vertex-transitive partially regular compounds are the duals of the seven cell-transitive partially regular compounds.
Partially regular compounds as dual pairs
Compound 1 (vertex-transitive) | Compound 2 (cell-transitive) | Symmetry
2 16-cells[28] | 2 tesseracts | [4,3,3], order 384[25]
25 24-cells(var) | 25 24-cells(var) | order 600[26]
100 24-cells | 100 24-cells | [5,3,3]+, order 7200[25]
200 24-cells | 200 24-cells | [5,3,3], order 14400[25]
5 600-cells | 5 120-cells | [5,3,3]+, order 7200[25]
10 600-cells | 10 120-cells | [5,3,3], order 14400[25]
Partially regular star compounds as dual pairs
Compound 1 (vertex-transitive) | Compound 2 (cell-transitive) | Symmetry
5 {3,3,5/2} | 5 {5/2,3,3} | [5,3,3]+, order 7200[25]
10 {3,3,5/2} | 10 {5/2,3,3} | [5,3,3], order 14400[25]
Although the 5-cell and 24-cell are both self-dual, their dual compounds (the compound of two 5-cells and compound of two 24-cells) are not considered to be regular, unlike the compound of two tetrahedra and the various dual polygon compounds, because they are neither vertex-regular nor cell-regular: they are not facetings or stellations of any regular 4-polytope. However, they are vertex-, edge-, face-, and cell-transitive.
Euclidean 3-space compounds
The only regular Euclidean compound honeycombs are an infinite family of compounds of cubic honeycombs, all sharing vertices and faces with another cubic honeycomb. This compound can have any number of cubic honeycombs. The Coxeter notation is {4,3,4}[d{4,3,4}]{4,3,4}.
Five dimensions and higher compounds
There are no regular compounds in five or six dimensions. There are three known seven-dimensional compounds (16, 240, or 480 7-simplices), and six known eight-dimensional ones (16, 240, or 480 8-cubes or 8-orthoplexes). There is also one compound of n-simplices in n-dimensional space provided that n is one less than a power of two, and also two compounds (one of n-cubes and a dual one of n-orthoplexes) in n-dimensional space if n is a power of two.
The Coxeter notation for these compounds are (using αn = {3n−1}, βn = {3n−2,4}, γn = {4,3n−2}):
• 7-simplexes: cγ7[16cα7]cβ7, where c = 1, 15, or 30
• 8-orthoplexes: cγ8[16cβ8]
• 8-cubes: [16cγ8]cβ8
The general cases (where n = 2^k and d = 2^(2^k − k − 1), k = 2, 3, 4, ...; evaluated in the sketch after this list):
• Simplexes: γn−1[dαn−1]βn−1
• Orthoplexes: γn[dβn]
• Hypercubes: [dγn]βn
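The component counts d grow very quickly with k; a one-loop sketch evaluating the formula above:

```python
# d = 2^(2^k - k - 1) components, in dimension n - 1 or n with n = 2^k.
for k in range(2, 6):
    n = 2 ** k
    d = 2 ** (2 ** k - k - 1)
    print(f"k={k}: n={n}, d={d}")
# k=2: n=4, d=2;  k=3: n=8, d=16 (the 16 7-simplices above);
# k=4: n=16, d=2048;  k=5: n=32, d=67108864
```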
Euclidean honeycomb compounds
A known family of regular Euclidean compound honeycombs in five or more dimensions is an infinite family of compounds of hypercubic honeycombs, all sharing vertices and faces with another hypercubic honeycomb. This compound can have any number of hypercubic honeycombs. The Coxeter notation is δn[dδn]δn where δn = {∞} when n = 2 and {4,3n−3,4} when n ≥ 3.
Abstract polytopes
The abstract polytopes arose out of an attempt to study polytopes apart from the geometrical space they are embedded in. They include the tessellations of spherical, Euclidean and hyperbolic space, tessellations of other manifolds, and many other objects that do not have a well-defined topology, but instead may be characterised by their "local" topology. There are infinitely many in every dimension. See this atlas for a sample. Some notable examples of abstract regular polytopes that do not appear elsewhere in this list are the 11-cell, {3,5,3}, and the 57-cell, {5,3,5}, which have regular projective polyhedra as cells and vertex figures.
The elements of an abstract polyhedron are its body (the maximal element), its faces, edges, vertices and the null polytope or empty set. These abstract elements can be mapped into ordinary space or realised as geometrical figures. Some abstract polyhedra have well-formed or faithful realisations, others do not. A flag is a connected set of elements of each dimension - for a polyhedron that is the body, a face, an edge of the face, a vertex of the edge, and the null polytope. An abstract polytope is said to be regular if its combinatorial symmetries are transitive on its flags - that is to say, that any flag can be mapped onto any other under a symmetry of the polyhedron. Abstract regular polytopes remain an active area of research.
Five such regular abstract polyhedra, which can not be realised faithfully, were identified by H. S. M. Coxeter in his book Regular Polytopes (1977) and again by J. M. Wills in his paper "The combinatorially regular polyhedra of index 2" (1987).[29] They are all topologically equivalent to toroids. Their construction, by arranging n faces around each vertex, can be repeated indefinitely as tilings of the hyperbolic plane. In the diagrams below, the hyperbolic tiling images have colors corresponding to those of the polyhedra images.
Polyhedron | Vertex figure | Faces | Tiling | χ
Medial rhombic triacontahedron | {5}, {5/2} | 30 rhombi | {4, 5} | −6
Dodecadodecahedron | (5.5/2)2 | 12 pentagons, 12 pentagrams | {5, 4} | −6
Medial triambic icosahedron | {5}, {5/2} | 20 hexagons | {6, 5} | −16
Ditrigonal dodecadodecahedron | (5.5/3)3 | 12 pentagons, 12 pentagrams | {5, 6} | −16
Excavated dodecahedron | | 20 hexagrams | {6, 6} | −20
These occur as dual pairs as follows:
• The medial rhombic triacontahedron and dodecadodecahedron are dual to each other.
• The medial triambic icosahedron and ditrigonal dodecadodecahedron are dual to each other.
• The excavated dodecahedron is self-dual.
See also
• Polygon
• Regular polygon
• Star polygon
• Polyhedron
• Regular polyhedron (5 regular Platonic solids and 4 Kepler–Poinsot solids)
• Uniform polyhedron
• Petrial
• 4-polytope
• Regular 4-polytope (16 regular 4-polytopes, 4 convex and 10 star (Schläfli–Hess))
• Uniform 4-polytope
• Tessellation
• Tilings of regular polygons
• Convex uniform honeycomb
• Regular polytope
• Uniform polytope
• Regular map (graph theory)
Notes
1. Coxeter (1973), p. 129.
2. McMullen & Schulte (2002), p. 30.
3. Johnson, N.W. (2018). "Chapter 11: Finite symmetry groups". Geometries and Transformations. 11.1 Polytopes and Honeycombs, p. 224. ISBN 978-1-107-10340-5.
4. Coxeter (1973), p. 120.
5. Coxeter (1973), p. 124.
6. Coxeter, Regular Complex Polytopes, p. 9
7. Duncan, Hugh (28 September 2017). "Between a square rock and a hard pentagon: Fractional polygons". chalkdust.
8. Coxeter (1973), pp. 66–67.
9. Abstracts (PDF). Convex and Abstract Polytopes (May 19–21, 2005) and Polytopes Day in Calgary (May 22, 2005).
10. Coxeter (1973), Table I: Regular polytopes, (iii) The three regular polytopes in n dimensions (n>=5), pp. 294–295.
11. McMullen & Schulte (2002), "6C Projective Regular Polytopes" pp. 162-165.
12. Grünbaum, B. (1977). "Regular Polyhedra—Old and New". Aequationes Mathematicae. 16 (1–2): 1–20. doi:10.1007/BF01836414. S2CID 125049930.
13. Roice Nelson and Henry Segerman, Visualizing Hyperbolic Honeycombs
14. Irving Adler, A New Look at Geometry (2012 Dover edition), p.233
15. Coxeter, H.S.M. (1938). "Regular Skew Polyhedra in Three and Four Dimensions". Proc. London Math. Soc. 2. 43: 33–62. doi:10.1112/plms/s2-43.1.33.
16. Coxeter, H.S.M. (1985). "Regular and semi-regular polytopes II". Mathematische Zeitschrift. 188 (4): 559–591. doi:10.1007/BF01161657. S2CID 120429557.
17. Conway, John H.; Burgiel, Heidi; Goodman-Strauss, Chaim (2008). "Chapter 23: Objects with Primary Symmetry, Infinite Platonic Polyhedra". The Symmetries of Things. Taylor & Francis. pp. 333–335. ISBN 978-1-568-81220-5.
18. McMullen & Schulte (2002), p. 224.
19. McMullen & Schulte (2002), Section 7E.
20. Garner, C.W.L. (1967). "Regular Skew Polyhedra in Hyperbolic Three-Space". Can. J. Math. 19: 1179–1186. doi:10.4153/CJM-1967-106-9. S2CID 124086497. Note: His paper says there are 32, but one is self-dual, leaving 31.
21. Coxeter (1973), Table II: Regular honeycombs, p. 296.
22. Coxeter (1999), "Chapter 10".
23. Coxeter (1999), "Chapter 10" Table IV, p. 213.
24. Coxeter (1973), p. 48.
25. Coxeter (1973). Table VII, p. 305
26. McMullen (2018).
27. Klitzing, Richard. "Uniform compound stellated icositetrachoron".
28. Klitzing, Richard. "Uniform compound demidistesseract".
29. David A. Richter. "The Regular Polyhedra (of index two)".
References
• Coxeter, H. S. M. (1999), "Chapter 10: Regular Honeycombs in Hyperbolic Space", The Beauty of Geometry: Twelve Essays, Mineola, NY: Dover Publications, Inc., pp. 199–214, ISBN 0-486-40919-8, LCCN 99035678, MR 1717154. See in particular Summary Tables II,III,IV,V, pp. 212–213.
• Originally published in Coxeter, H. S. M. (1956), "Regular honeycombs in hyperbolic space" (PDF), Proceedings of the International Congress of Mathematicians, 1954, Amsterdam, vol. III, Amsterdam: North-Holland Publishing Co., pp. 155–169, MR 0087114, archived from the original (PDF) on 2015-04-02.
• Coxeter, H. S. M. (1973) [1948]. Regular Polytopes (Third ed.). New York: Dover Publications. ISBN 0-486-61480-8. MR 0370327. OCLC 798003. See in particular Tables I and II: Regular polytopes and honeycombs, pp. 294–296.
• Johnson, Norman W. (2012), "Regular inversive polytopes" (PDF), International Conference on Mathematics of Distances and Applications (July 2–5, 2012, Varna, Bulgaria), pp. 85–95 Paper 27
• McMullen, Peter; Schulte, Egon (2002), Abstract Regular Polytopes, Encyclopedia of Mathematics and its Applications, vol. 92, Cambridge: Cambridge University Press, doi:10.1017/CBO9780511546686, ISBN 0-521-81496-0, MR 1965665, S2CID 115688843
• McMullen, Peter (2018), "New Regular Compounds of 4-Polytopes", New Trends in Intuitive Geometry, Bolyai Society Mathematical Studies, 27: 307–320, doi:10.1007/978-3-662-57413-3_12, ISBN 978-3-662-57412-6.
• Nelson, Roice; Segerman, Henry (2015). "Visualizing Hyperbolic Honeycombs". arXiv:1511.02851 [math.HO]. hyperbolichoneycombs.org/
• Sommerville, D. M. Y. (1958), An Introduction to the Geometry of n Dimensions, New York: Dover Publications, Inc., MR 0100239. Reprint of 1930 ed., published by E. P. Dutton. See in particular Chapter X: The Regular Polytopes.
External links
• The Platonic Solids
• Kepler-Poinsot Polyhedra
• Regular 4d Polytope Foldouts
• Multidimensional Glossary (Look up Hexacosichoron and Hecatonicosachoron)
• Polytope Viewer
• Polytopes and optimal packing of p points in n dimensional spheres
• An atlas of small regular polytopes
• Regular polyhedra through time I. Hubard, Polytopes, Maps and their Symmetries
• Regular Star Polytopes, Nan Ma
Icosagon
In geometry, an icosagon or 20-gon is a twenty-sided polygon. The sum of any icosagon's interior angles is 3240 degrees.
Regular icosagon
Type: Regular polygon
Edges and vertices: 20
Schläfli symbol: {20}, t{10}, tt{5}
Symmetry group: Dihedral (D20), order 2×20
Internal angle (degrees): 162°
Properties: Convex, cyclic, equilateral, isogonal, isotoxal
Dual polygon: Self
Regular icosagon
The regular icosagon has Schläfli symbol {20}, and can also be constructed as a truncated decagon, t{10}, or a twice-truncated pentagon, tt{5}.
One interior angle in a regular icosagon is 162°, meaning that one exterior angle would be 18°.
The area of a regular icosagon with edge length t is
$A={5}t^{2}(1+{\sqrt {5}}+{\sqrt {5+2{\sqrt {5}}}})\simeq 31.5687t^{2}.$
In terms of the radius R of its circumcircle, the area is
$A={\frac {5R^{2}}{2}}({\sqrt {5}}-1);$
since the area of the circle is $\pi R^{2},$ the regular icosagon fills approximately 98.36% of its circumcircle.
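Both closed forms, and the 98.36% figure, can be checked numerically against the generic regular-polygon area formula (n/4)t²·cot(π/n); a short sketch:

```python
from math import sqrt, tan, pi

t = 1.0
closed = 5 * t**2 * (1 + sqrt(5) + sqrt(5 + 2 * sqrt(5)))
generic = (20 / 4) * t**2 / tan(pi / 20)
print(closed, generic)        # both ~31.5687

R = 1.0
area = (5 * R**2 / 2) * (sqrt(5) - 1)
print(area / (pi * R**2))     # ~0.9836 of the circumcircle
```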
Uses
The Big Wheel on the popular US game show The Price Is Right has an icosagonal cross-section.
The Globe, the outdoor theater used by William Shakespeare's acting company, was discovered to have been built on an icosagonal foundation when a partial excavation was done in 1989.[1]
As a golygonal path, the swastika is considered to be an irregular icosagon.[2]
A regular square, pentagon, and icosagon can completely fill a plane vertex.
Construction
As 20 = 2^2 × 5, a regular icosagon is constructible using a compass and straightedge, or by an edge-bisection of a regular decagon, or a twice-bisected regular pentagon:
Construction of a regular icosagon
Construction of a regular decagon
The golden ratio in an icosagon
• In the construction with given side length, the circular arc around C with radius CD divides the segment E20F in the golden ratio.
${\frac {\overline {E_{20}E_{1}}}{\overline {E_{1}F}}}={\frac {\overline {E_{20}F}}{\overline {E_{20}E_{1}}}}={\frac {1+{\sqrt {5}}}{2}}=\varphi \approx 1.618$
Symmetry
The regular icosagon has Dih20 symmetry, order 40. There are 5 subgroup dihedral symmetries: (Dih10, Dih5), and (Dih4, Dih2, and Dih1), and 6 cyclic group symmetries: (Z20, Z10, Z5), and (Z4, Z2, Z1).
These 10 symmetries can be seen in 16 distinct symmetries on the icosagon, a larger number because the lines of reflection can either pass through vertices or edges. John Conway labels these by a letter and group order.[3] Full symmetry of the regular form is r40 and no symmetry is labeled a1. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines pass through both edges and vertices. Cyclic symmetries in the middle column are labeled as g for their central gyration orders.
Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g20 subgroup has no degrees of freedom but can be seen as directed edges.
The highest symmetry irregular icosagons are d20, an isogonal icosagon constructed by ten mirrors which can alternate long and short edges, and p20, an isotoxal icosagon, constructed with equal edge lengths, but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular icosagon.
Dissection
[Images: 20-gons dissected into rhombs, in regular and isotoxal forms.]
Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m−1)/2 parallelograms.[4] In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the icosagon, m = 10, and it can be divided into 45 rhombs: 5 squares and 4 sets of 10 rhombs. This decomposition is based on a Petrie polygon projection of a 10-cube, with 45 of 11520 faces. The list OEIS: A006245 enumerates the number of solutions as 18,410,581,880, including up to 20-fold rotations and chiral forms in reflection.
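The count m(m−1)/2 = 45 and the split into 5 squares plus 4 sets of 10 rhombs can be recovered by pairing the 10 edge directions of the zonogon; a small sketch (variable names are illustrative):

```python
from collections import Counter

m = 10                                  # the icosagon as a 2m-gon zonogon
print(m * (m - 1) // 2)                 # 45 parallelograms in total

# Each rhomb is spanned by two of the m edge directions (18 degrees apart);
# group the 45 pairs by the rhomb's acute (or right) angle.
angles = Counter(min((j - i) * 180 // m, 180 - (j - i) * 180 // m)
                 for i in range(m) for j in range(i + 1, m))
print(sorted(angles.items()))
# [(18, 10), (36, 10), (54, 10), (72, 10), (90, 5)]
```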
Dissection into 45 rhombs
10-cube
Related polygons
An icosagram is a 20-sided star polygon, represented by the symbol {20/n}. There are three regular forms given by Schläfli symbols: {20/3}, {20/7}, and {20/9}. There are also six regular star figures (compounds) using the same vertex arrangement: 2{10}, 4{5}, 5{4}, 2{10/3}, 4{5/2}, and 10{2}.
n | 1 | 2 | 3 | 4 | 5
Form | Convex polygon | Compound | Star polygon | Compound | Compound
Symbol | {20/1} = {20} | {20/2} = 2{10} | {20/3} | {20/4} = 4{5} | {20/5} = 5{4}
Interior angle | 162° | 144° | 126° | 108° | 90°
n | 6 | 7 | 8 | 9 | 10
Form | Compound | Star polygon | Compound | Star polygon | Compound
Symbol | {20/6} = 2{10/3} | {20/7} | {20/8} = 4{5/2} | {20/9} | {20/10} = 10{2}
Interior angle | 72° | 54° | 36° | 18° | 0°
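The interior angles in the table follow the usual star-polygon formula 180°(p − 2q)/p applied to {20/n}; a one-loop check:

```python
for n in range(1, 11):
    print(f"{{20/{n}}}: {180 * (20 - 2 * n) / 20:.0f} degrees")
# 162, 144, 126, 108, 90, 72, 54, 36, 18, 0 -- matching the table above
```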
Deeper truncations of the regular decagon and decagram can produce isogonal (vertex-transitive) intermediate icosagram forms with equally spaced vertices and two edge lengths.[5]
A regular icosagram, {20/9}, can be seen as a quasitruncated decagon, t{10/9}={20/9}. Similarly a decagram, {10/3} has a quasitruncation t{10/7}={20/7}, and finally a simple truncation of a decagram gives t{10/3}={20/3}.
Icosagrams as truncations of regular decagons and decagrams, {10}, {10/3}
Quasiregular Quasiregular
t{10}={20}
t{10/9}={20/9}
t{10/3}={20/3}
t{10/7}={20/7}
Petrie polygons
The regular icosagon is the Petrie polygon for a number of higher-dimensional polytopes, shown in orthogonal projections in Coxeter planes:
A19 B10 D11 E8 H4 ½2H2 2H2
19-simplex
10-orthoplex
10-cube
11-demicube
(421)
600-cell
Grand antiprism
10-10 duopyramid
10-10 duoprism
It is also the Petrie polygon for the icosahedral 120-cell, small stellated 120-cell, great icosahedral 120-cell, and great grand 120-cell.
References
1. Muriel Pritchett, University of Georgia "To Span the Globe" Archived 10 June 2010 at the Wayback Machine, see also Editor's Note, retrieved on 10 January 2016
2. Weisstein, Eric W. "Icosagon". MathWorld.
3. John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, (2008) The Symmetries of Things, ISBN 978-1-56881-220-5 (Chapter 20, Generalized Schläfli symbols, Types of symmetry of a polygon, pp. 275-278)
4. Coxeter, Mathematical recreations and Essays, Thirteenth edition, p.141
5. The Lighter Side of Mathematics: Proceedings of the Eugène Strens Memorial Conference on Recreational Mathematics and its History, (1994), Metamorphoses of polygons, Branko Grünbaum
External links
• Naming Polygons and Polyhedra
• icosagon
Regular ideal
In mathematics, especially ring theory, a regular ideal can refer to multiple concepts.
In operator theory, a right ideal ${\mathfrak {i}}$ in a (possibly) non-unital ring A is said to be regular (or modular) if there exists an element e in A such that $ex-x\in {\mathfrak {i}}$ for every $x\in A$.[1]
In commutative algebra a regular ideal refers to an ideal containing a non-zero divisor.[2][3] This article will use "regular element ideal" to help distinguish this type of ideal.
A two-sided ideal ${\mathfrak {i}}$ of a ring R can also be called a (von Neumann) regular ideal if for each element x of ${\mathfrak {i}}$ there exists a y in ${\mathfrak {i}}$ such that xyx=x.[4][5]
Finally, regular ideal has been used to refer to an ideal J of a ring R such that the quotient ring R/J is a von Neumann regular ring.[6] This article will use "quotient von Neumann regular" to refer to this type of regular ideal.
Since the adjective regular has been overloaded, this article adopts the alternative adjectives modular, regular element, von Neumann regular, and quotient von Neumann regular to distinguish between concepts.
Properties and examples
Modular ideals
The notion of modular ideals permits the generalization of various characterizations of ideals in a unital ring to non-unital settings.
A two-sided ideal ${\mathfrak {i}}$ is modular if and only if $A/{\mathfrak {i}}$ is unital. In a unital ring, every ideal is modular since choosing e=1 works for any right ideal. So, the notion is more interesting for non-unital rings such as Banach algebras. From the definition it is easy to see that an ideal containing a modular ideal is itself modular.
Somewhat surprisingly, it is possible to prove that even in rings without identity, a modular right ideal is contained in a maximal right ideal.[7] However, it is possible for a ring without identity to lack modular right ideals entirely.
The intersection of all maximal right ideals which are modular is the Jacobson radical.[8]
Examples
• In the non-unital ring of even integers, the ideal (6) is modular ($e=4$) while (4) is not (see the sketch below).
• Let M be a simple right A-module. If x is a nonzero element in M, then the annihilator of x is a modular maximal right ideal in A.
• If A is a ring without maximal right ideals, then A cannot have even a single modular right ideal.
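The first bullet can be checked mechanically. Below is a minimal brute-force sketch in Python, not part of the cited sources; it uses the observation that whether e·x − x lies in nZ depends only on e and x modulo 2n, so testing even residue representatives suffices.

```python
# Modularity of the ideals (6) and (4) inside the non-unital ring 2Z
# of even integers, by brute force over residue representatives.

def is_modular(n):
    """Is the ideal nZ modular in 2Z?  I.e., is there an even e with
    e*x - x in nZ for every even x?  The condition depends only on
    e and x modulo 2n, so even residues modulo 2n suffice."""
    evens = range(0, 2 * n, 2)
    return any(all((e * x - x) % n == 0 for x in evens) for e in evens)

print(is_modular(6))  # True  -- e = 4 works: 4x - x = 3x lies in 6Z for even x
print(is_modular(4))  # False -- x = 2 defeats every even candidate e
```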
Regular element ideals
Every ring with unity has at least one regular element ideal: the trivial ideal R itself. Regular element ideals of commutative rings are essential ideals. In a semiprime right Goldie ring, the converse holds: essential ideals are all regular element ideals.[9]
Since the product of two regular elements (=non-zerodivisors) of a commutative ring R is again a regular element, it is apparent that the product of two regular element ideals is again a regular element ideal. Clearly any ideal containing a regular element ideal is again a regular element ideal.
Examples
• In an integral domain, every nonzero element is a regular element, and so every nonzero ideal is a regular element ideal.
• The nilradical of a commutative ring is composed entirely of nilpotent elements, and therefore no element can be regular. This gives an example of an ideal which is not a regular element ideal.
• In an Artinian ring, each element is either invertible or a zero divisor. Because of this, such a ring only has one regular element ideal: just R.
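The Artinian bullet is easy to see concretely in Z/12Z, where the regular elements (non-zero-divisors) coincide with the units. A small Python sketch, offered only as an illustration:

```python
# Regular elements and regular element ideals in Z/12Z.
n = 12
R = range(n)

regular = [a for a in R if all((a * x) % n != 0 for x in R if x != 0)]
units   = [a for a in R if any((a * x) % n == 1 for x in R)]
print(regular)            # [1, 5, 7, 11]
print(regular == units)   # True: in this Artinian ring, regular = invertible

# Principal ideals containing a regular element: only those equal to R.
ideals = {a: {(a * x) % n for x in R} for a in R}
print([a for a, I in ideals.items() if any(e in regular for e in I)])
# [1, 5, 7, 11] -- each of these generates the whole ring
```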
Von Neumann regular ideals
From the definition, it is clear that R is a von Neumann regular ring if and only if R is a von Neumann regular ideal. The following statement is a relevant lemma for von Neumann regular ideals:
Lemma: For a ring R and proper ideal J containing an element a, there exists an element y in J such that a = aya if and only if there exists an element r in R such that a = ara. Proof: The "only if" direction is a tautology. For the "if" direction, we have a = ara = arara. Since a is in J, so is rar, and so by setting y = rar we have the conclusion.
As a consequence of this lemma, it is apparent that every ideal of a von Neumann regular ring is a von Neumann regular ideal. Another consequence is that if J and K are two ideals of R such that J⊆K and K is a von Neumann regular ideal, then J is also a von Neumann regular ideal.
If J and K are two ideals of R, then K is von Neumann regular if and only if both J is a von Neumann regular ideal and K/J is a von Neumann regular ring.[10]
Every ring has at least one von Neumann regular ideal, namely {0}. Furthermore, every ring has a maximal von Neumann regular ideal containing all other von Neumann regular ideals, and this ideal is given by
$M=\{x\in R\mid RxR{\text{ is a von Neumann regular ideal }}\}$.
Examples
• As noted above, every ideal of a von Neumann regular ring is a von Neumann regular ideal.
• It is well known that a local ring which is also a von Neumann regular ring is a division ring. Let R be a local ring which is not a division ring, and denote the unique maximal right ideal by J. Then R cannot be von Neumann regular, but R/J, being a division ring, is a von Neumann regular ring. Consequently, J cannot be a von Neumann regular ideal, even though it is maximal.
• A simple domain which is not a division ring has the minimum possible number of von Neumann regular ideals: only the {0} ideal.
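For a commutative example, the maximal von Neumann regular ideal M described above can be computed by brute force in Z/12Z, where RxR is just the principal ideal (x). A minimal sketch, assuming nothing beyond the definitions:

```python
# The maximal von Neumann regular ideal
#   M = { x in R : RxR is a von Neumann regular ideal }
# computed by brute force in the commutative ring R = Z/12Z.

n = 12
R = range(n)

def ideal(x):                       # principal ideal (x) = xR in Z/nZ
    return {(x * r) % n for r in R}

def vn_regular(I):                  # every a in I has y in I with a*y*a == a
    return all(any((a * y * a) % n == a for y in I) for a in I)

M = sorted(x for x in R if vn_regular(ideal(x)))
print(M)   # [0, 4, 8] -- the ideal (4); e.g. 4*4*4 = 64 = 4 (mod 12)
```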
Quotient von Neumann regular ideals
If J and K are quotient von Neumann regular ideals, then so is J∩K.
If J⊆K are proper ideals of R and J is quotient von Neumann regular, then so is K. This is because quotients of R/J are all von Neumann regular rings, and an isomorphism theorem for rings establishes that R/K≅(R/J)/(K/J). In particular, if A is any ideal in R, the ideal A+J is quotient von Neumann regular if J is.
Examples
• Every proper ideal of a von Neumann regular ring is quotient von Neumann regular.
• Any maximal ideal in a commutative ring is a quotient von Neumann regular ideal since R/M is a field. This is not true in general because for noncommutative rings R/M may only be a simple ring, and may not be von Neumann regular.
• Let R be a local ring which is not a division ring, with maximal right ideal M. Then M is a quotient von Neumann regular ideal, since R/M is a division ring, but R is not a von Neumann regular ring.
• More generally in any semilocal ring the Jacobson radical J is quotient von Neumann regular, since R/J is a semisimple ring, hence a von Neumann regular ring.
References
1. Jacobson 1956.
2. Non-zero-divisors in commutative rings are called regular elements.
3. Larsen & McCarthy 1971, p. 42.
4. Goodearl 1991, p. 2.
5. Kaplansky 1969, p. 112.
6. Burton, D.M. (1970) A first course in rings and ideals. Addison-Wesley. Reading, Massachusetts .
7. Jacobson 1956, p. 6.
8. Kaplansky 1948, Lemma 1.
9. Lam 1999, p. 342.
10. Goodearl 1991, p.2.
Bibliography
• Goodearl, K. R. (1991). von Neumann regular rings (2 ed.). Malabar, FL: Robert E. Krieger Publishing Co. Inc. pp. xviii+412. ISBN 0-89464-632-X. MR 1150975.
• Jacobson, Nathan (1956). Structure of rings. American Mathematical Society, Colloquium Publications, vol. 37. Prov., R. I.: American Mathematical Society. pp. vii+263. MR 0081264.
• Kaplansky, Irving (1948), "Dual rings", Ann. of Math., 2, 49 (3): 689–701, doi:10.2307/1969052, ISSN 0003-486X, JSTOR 1969052, MR 0025452
• Kaplansky, Irving (1969). Fields and Rings. The University of Chicago Press.
• Lam, Tsit-Yuen (1999). Lectures on modules and rings. Graduate Texts in Mathematics No. 189. Berlin, New York: Springer-Verlag. ISBN 978-0-387-98428-5. MR 1653294.
• Larsen, Max. D.; McCarthy, Paul J. (1971). "Multiplicative theory of ideals". Pure and Applied Mathematics. New York: Academic Press. 43: xiv, 298. MR 0414528.
• Zhevlakov, K.A. (2001) [1994], "Modular ideal", Encyclopedia of Mathematics, EMS Press
Abstract polytope
In mathematics, an abstract polytope is an algebraic partially ordered set which captures the dyadic property of a traditional polytope without specifying purely geometric properties such as points and lines.
A geometric polytope is said to be a realization of an abstract polytope in some real N-dimensional space, typically Euclidean. This abstract definition allows more general combinatorial structures than traditional definitions of a polytope, thus allowing new objects that have no counterpart in traditional theory.
Introductory concepts
Traditional versus abstract polytopes
In Euclidean geometry, two shapes that are not similar can nonetheless share a common structure. For example, a square and a trapezoid both comprise an alternating chain of four vertices and four sides, which makes them quadrilaterals. They are said to be isomorphic or “structure preserving”.
This common structure may be represented in an underlying abstract polytope, a purely algebraic partially ordered set which captures the pattern of connections (or incidences) between the various structural elements. The measurable properties of traditional polytopes such as angles, edge-lengths, skewness, straightness and convexity have no meaning for an abstract polytope.
What is true for traditional polytopes (also called classical or geometric polytopes) may not be so for abstract ones, and vice versa. For example, a traditional polytope is regular if all its facets and vertex figures are regular, but this is not necessarily so for an abstract polytope.[1]
Realizations
A traditional polytope is said to be a realization of the associated abstract polytope. A realization is a mapping or injection of the abstract object into a real space, typically Euclidean, to construct a traditional polytope as a real geometric figure.
The six quadrilaterals shown are all distinct realizations of the abstract quadrilateral, each with different geometric properties. Some of them do not conform to traditional definitions of a quadrilateral and are said to be unfaithful realizations. A conventional polytope is a faithful realization.
Faces, ranks and ordering
In an abstract polytope, each structural element (vertex, edge, cell, etc.) is associated with a corresponding member of the set. The term face is used to refer to any such element e.g. a vertex (0-face), edge (1-face) or a general k-face, and not just a polygonal 2-face.
The faces are ranked according to their associated real dimension: vertices have rank 0, edges rank 1 and so on.
Incident faces of different ranks, for example, a vertex F of an edge G, are ordered by the relation F < G. F is said to be a subface of G.
F, G are said to be incident if either F = G or F < G or G < F. This usage of "incidence" also occurs in finite geometry, although it differs from traditional geometry and some other areas of mathematics. For example, in the square ABCD, edges AB and BC are not abstractly incident (although they are both incident with vertex B).
A polytope is then defined as a set of faces P with an order relation <. Formally, P (with <) will be a (strict) partially ordered set, or poset.
Least and greatest faces
Just as the number zero is necessary in mathematics, so also every set has the empty set ∅ as a subset. In an abstract polytope ∅ is by convention identified as the least or null face and is a subface of all the others. Since the least face is one level below the vertices or 0-faces, its rank is −1 and it may be denoted as F−1. Thus F−1 ≡ ∅ and the abstract polytope also contains the empty set as an element.[2] It is not usually realized.
There is also a single face of which all the others are subfaces. This is called the greatest face. In an n-dimensional polytope, the greatest face has rank = n and may be denoted as Fn. It is sometimes realized as the interior of the geometric figure.
These least and greatest faces are sometimes called improper faces, with all others being proper faces.[3]
A simple example
The faces of the abstract quadrilateral or square are shown in the table below:
Face type | Rank (k) | Count | k-faces
Least | −1 | 1 | F−1
Vertex | 0 | 4 | a, b, c, d
Edge | 1 | 4 | W, X, Y, Z
Greatest | 2 | 1 | G
The relation < comprises a set of pairs, which here include
F−1<a, ... , F−1<X, ... , F−1<G, ... , b<Y, ... , c<G, ... , Z<G.
Order relations are transitive, i.e. F < G and G < H implies that F < H. Therefore, to specify the hierarchy of faces, it is not necessary to give every case of F < H, only the pairs where one is the successor of the other, i.e. where F < H and no G satisfies F < G < H.
The edges W, X, Y and Z are sometimes written as ab, ad, bc, and cd respectively, but such notation is not always appropriate.
All four edges are structurally similar and the same is true of the vertices. The figure therefore has the symmetries of a square and is usually referred to as the square.
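To make the poset concrete, here is a minimal Python sketch (names follow the table above; the string 'F-1' stands for the least face F−1, and the edges W, X, Y, Z are ab, ad, bc, cd as noted). Only the covering pairs are stored; the full relation < is recovered as their transitive closure.

```python
# The abstract square as a poset.  cover maps each face to the set of
# faces immediately below it (the faces it covers).

cover = {
    'a': {'F-1'}, 'b': {'F-1'}, 'c': {'F-1'}, 'd': {'F-1'},
    'W': {'a', 'b'}, 'X': {'a', 'd'}, 'Y': {'b', 'c'}, 'Z': {'c', 'd'},
    'G': {'W', 'X', 'Y', 'Z'},
    'F-1': set(),
}
rank = {'F-1': -1, 'a': 0, 'b': 0, 'c': 0, 'd': 0,
        'W': 1, 'X': 1, 'Y': 1, 'Z': 1, 'G': 2}

def less(F, G):
    """F < G in the transitive closure of the covering relation."""
    return any(F == H or less(F, H) for H in cover[G])

print(less('F-1', 'G'))                  # True:  F-1 < G
print(less('a', 'W'))                    # True:  vertex a lies on edge W = ab
print(less('W', 'Y') or less('Y', 'W'))  # False: two edges are incomparable
```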
The Hasse diagram
Smaller posets, and polytopes in particular, are often best visualized in a Hasse diagram, as shown. By convention, faces of equal rank are placed on the same vertical level. Each "line" between faces, say F, G, indicates an ordering relation < such that F < G where F is below G in the diagram.
The Hasse diagram defines the unique poset and therefore fully captures the structure of the polytope. Isomorphic polytopes give rise to isomorphic Hasse diagrams, and vice versa. The same is not generally true for the graph representation of polytopes.
Rank
The rank of a face F is defined as (m − 2), where m is the maximum number of faces in any chain (F', F", ... , F) satisfying F' < F" < ... < F. F' is always the least face, F−1.
The rank of an abstract polytope P is the maximum rank n of any face. It is always the rank of the greatest face Fn.
The rank of a face or polytope usually corresponds to the dimension of its counterpart in traditional theory.
For some ranks, their face-types are named in the following table.
Rank −1: Least
Rank 0: Vertex
Rank 1: Edge
Rank 2: †
Rank 3: Cell
…
Rank n − 2: Subfacet or ridge[3]
Rank n − 1: Facet[3]
Rank n: Greatest
† Traditionally "face" has meant a rank 2 face or 2-face. In abstract theory the term "face" denotes a face of any rank.
Flags
In geometry, a flag is a maximal chain of faces, i.e. a (totally) ordered set Ψ of faces, each a subface of the next (if any), and such that Ψ is not a subset of any larger chain. Given any two distinct faces F, G in a flag, either F < G or F > G.
For example, {ø, a, ab, abc} is a flag in the triangle abc.
For a given polytope, all flags contain the same number of faces. Other posets do not, in general, satisfy this requirement.
Sections
Any subset P' of a poset P is a poset (with the same relation <, restricted to P').
In an abstract polytope, given any two faces F, H of P with F ≤ H, the set {G | F ≤ G ≤ H} is called a section of P, and denoted H/F. (In order theory, a section is called a closed interval of the poset and denoted [F, H].)
For example, in the prism abcxyz the section xyz/ø is the triangle
{ø, x, y, z, xy, xz, yz, xyz}.
A k-section is a section of rank k.
P is thus a section of itself.
This concept of section does not have the same meaning as in traditional geometry.
Facets
The facet for a given j-face F is the j-section F/∅, consisting of F together with all of its subfaces.
For example, in the triangle abc, the facet at ab is ab/∅ = {∅, a, b, ab}, which is a line segment.
The distinction between F and F/∅ is not usually significant and the two are often treated as identical.
Vertex figures
The vertex figure at a given vertex V is the (n−1)-section Fn/V, where Fn is the greatest face.
For example, in the triangle abc, the vertex figure at b is abc/b = {b, ab, bc, abc}, which is a line segment. The vertex figures of a cube are triangles.
Connectedness
A poset P is connected if P has rank ≤ 1, or, given any two proper faces F and G, there is a sequence of proper faces
H1, H2, ... ,Hk
such that F = H1, G = Hk, and each Hi, i < k, is incident with its successor.
The above condition ensures that a pair of disjoint triangles abc and xyz is not a (single) polytope.
A poset P is strongly connected if every section of P (including P itself) is connected.
With this additional requirement, two pyramids that share just a vertex are also excluded. However, two square pyramids, for example, can be "glued" at their square faces - giving an octahedron. The "common face" is not then a face of the octahedron.
Formal definition
An abstract polytope is a partially ordered set, whose elements we call faces, satisfying the 4 axioms:
1. It has just one least face and one greatest face.
2. All flags contain the same number of faces.
3. It is strongly connected.
4. If the ranks of two faces a > b differ by 2, then there are exactly 2 faces that lie strictly between a and b.
An n-polytope is a polytope of rank n. The abstract polytope associated with a real convex polytope is also referred to as its face lattice.[4]
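Continuing the Python sketch of the square above (reusing cover, rank and less), axioms 2 and 4 can be checked mechanically. This is only an illustration on one small example, not a general verifier.

```python
# Axiom 2: all flags of the square have the same number of faces.
def flags(top='G'):
    """All maximal chains from the least face up to the face `top`."""
    if top == 'F-1':
        return [['F-1']]
    return [chain + [top] for F in cover[top] for chain in flags(F)]

print({len(f) for f in flags()})   # {4}: each of the 8 flags has 4 faces

# Axiom 4 (the diamond condition): whenever rank(a) - rank(b) == 2 and
# b < a, exactly two faces lie strictly between a and b.
faces = list(rank)
pairs = [(a, b) for a in faces for b in faces
         if rank[a] - rank[b] == 2 and less(b, a)]
print(all(sum(1 for H in faces if less(b, H) and less(H, a)) == 2
          for a, b in pairs))      # True
```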
The simplest polytopes
Rank < 1
There is just one poset for each rank −1 and 0. These are, respectively, the null face and the point. These are not always considered to be valid abstract polytopes.
Rank 1: the line segment
There is only one polytope of rank 1, which is the line segment. It has a least face, just two 0-faces and a greatest face, for example {ø, a, b, ab}. It follows that the vertices a and b have rank 0, and that the greatest face ab, and therefore the poset, both have rank 1.
Rank 2: polygons
For each p, 3 ≤ p < $\infty $, we have (the abstract equivalent of) the traditional polygon with p vertices and p edges, or a p-gon. For p = 3, 4, 5, ... we have the triangle, square, pentagon, ....
For p = 2, we have the digon, and p = $\infty $ we get the apeirogon.
The digon
A digon is a polygon with just 2 edges. Unlike any other polygon, both edges have the same two vertices. For this reason, it is degenerate in the Euclidean plane.
Faces are sometimes described using "vertex notation" - e.g. {ø, a, b, c, ab, ac, bc, abc} for the triangle abc. This method has the advantage of implying the < relation.
With the digon this vertex notation cannot be used. It is necessary to give the faces individual symbols and specify the subface pairs F < G.
Thus, a digon is defined as a set {ø, a, b, E', E", G} with the relation < given by
{ø<a, ø<b, a<E', a<E", b<E', b<E", E'<G, E"<G}
where E' and E" are the two edges, and G the greatest face.
This need to identify each element of the polytope with a unique symbol applies to many other abstract polytopes and is therefore common practice.
A polytope can only be fully described using vertex notation if every face is incident with a unique set of vertices. A polytope having this property is said to be atomistic.
Examples of higher rank
The set of j-faces (−1 ≤ j ≤ n) of a traditional n-polytope form an abstract n-polytope.
The concept of an abstract polytope is more general and also includes:
• Apeirotopes or infinite polytopes, which include tessellations (tilings)
• Proper decompositions of unbounded manifolds such as the torus or real projective plane.
• Many other objects, such as the 11-cell and the 57-cell, that cannot be faithfully realized in Euclidean spaces.
Hosohedra and hosotopes
The digon is generalized by the hosohedron and higher dimensional hosotopes, which can all be realized as spherical polyhedra – they tessellate the sphere.
Projective polytopes
Four examples of non-traditional abstract polyhedra are the hemicube, hemi-octahedron, hemi-dodecahedron, and hemi-icosahedron. These are the projective counterparts of the Platonic solids, and can be realized as (globally) projective polyhedra – they tessellate the real projective plane.
The hemicube is another example of where vertex notation cannot be used to define a polytope - all the 2-faces and the 3-face have the same vertex set.
Duality
Every geometric polytope has a dual twin. Abstractly, the dual is the same polytope but with the ranking reversed in order: the Hasse diagram differs only in its annotations. In an n-polytope, each of the original k-faces maps to an (n − k − 1)-face in the dual. Thus, for example, the n-face maps to the (−1)-face. The dual of a dual is (isomorphic to) the original.
A polytope is self-dual if it is the same as, i.e. isomorphic to, its dual. Hence, the Hasse diagram of a self-dual polytope must be symmetrical about the horizontal axis half-way between the top and bottom. The square pyramid in the example above is self-dual.
The vertex figure at a vertex V is the dual of the facet to which V maps in the dual polytope.
Abstract regular polytopes
Formally, an abstract polytope is defined to be "regular" if its automorphism group acts transitively on the set of its flags. In particular, any two k-faces F, G of an n-polytope are "the same", i.e. there is an automorphism which maps F to G. When an abstract polytope is regular, its automorphism group is isomorphic to a quotient of a Coxeter group.
All polytopes of rank ≤ 2 are regular. The most famous regular polyhedra are the five Platonic solids. The hemicube is also regular.
Informally, for each rank k, this means that there is no way to distinguish any k-face from any other - the faces must be identical, and must have identical neighbors, and so forth. For example, a cube is regular because all the faces are squares, each square's vertices are attached to three squares, and each of these squares is attached to identical arrangements of other faces, edges and vertices, and so on.
This condition alone is sufficient to ensure that any regular abstract polytope has isomorphic regular (n−1)-faces and isomorphic regular vertex figures.
This is a weaker condition than regularity for traditional polytopes, in that it refers to the (combinatorial) automorphism group, not the (geometric) symmetry group. For example, any abstract polygon is regular, since angles, edge-lengths, edge curvature, skewness etc. don't exist for abstract polytopes.
There are several other weaker concepts, some not yet fully standardized, such as semi-regular, quasi-regular, uniform, chiral, and Archimedean that apply to polytopes that have some, but not all of their faces equivalent in each rank.
Realization
A set of points V in a Euclidean space equipped with a surjection from the vertex set of an abstract polytope P such that automorphisms of P induce isometric permutations of V is called a realization of an abstract polytope.[5][6] Two realizations are called congruent if the natural bijection between their sets of vertices is induced by an isometry of their ambient Euclidean spaces.[7][8]
If an abstract n-polytope is realized in n-dimensional space, such that the geometrical arrangement does not break any rules for traditional polytopes (such as curved faces, or ridges of zero size), then the realization is said to be faithful. In general, only a restricted set of abstract polytopes of rank n may be realized faithfully in any given n-space. The characterization of this effect is an outstanding problem.
For a regular abstract polytope, if the combinatorial automorphisms of the abstract polytope are realized by geometric symmetries then the geometric figure will be a regular polytope.
Moduli space
The group G of symmetries of a realization V of an abstract polytope P is generated by two reflections, the product of which translates each vertex of P to the next.[9][10] The product of the two reflections can be decomposed as a product of a non-zero translation, finitely many rotations, and possibly trivial reflection.[11][10]
Generally, the moduli space of realizations of an abstract polytope is a convex cone of infinite dimension.[12][13] The realization cone of the abstract polytope has uncountably infinite algebraic dimension and cannot be closed in the Euclidean topology.[11][14]
The amalgamation problem and universal polytopes
An important question in the theory of abstract polytopes is the amalgamation problem. This is a series of questions such as
For given abstract polytopes K and L, are there any polytopes P whose facets are K and whose vertex figures are L ?
If so, are they all finite ?
What finite ones are there ?
For example, if K is the square, and L is the triangle, the answers to these questions are
Yes, there are polytopes P with square faces, joined three per vertex (that is, there are polytopes of type {4,3}).
Yes, they are all finite, specifically,
There is the cube, with six square faces, twelve edges and eight vertices, and the hemi-cube, with three faces, six edges and four vertices.
It is known that if the answer to the first question is 'Yes' for some regular K and L, then there is a unique polytope whose facets are K and whose vertex figures are L, called the universal polytope with these facets and vertex figures, which covers all other such polytopes. That is, suppose P is the universal polytope with facets K and vertex figures L. Then any other polytope Q with these facets and vertex figures can be written Q=P/N, where
• N is a subgroup of the automorphism group of P, and
• P/N is the collection of orbits of elements of P under the action of N, with the partial order induced by that of P.
Q=P/N is called a quotient of P, and we say P covers Q.
Given this fact, the search for polytopes with particular facets and vertex figures usually goes as follows:
1. Attempt to find the applicable universal polytope
2. Attempt to classify its quotients.
These two problems are, in general, very difficult.
Returning to the example above, if K is the square, and L is the triangle, the universal polytope {K,L} is the cube (also written {4,3}). The hemicube is the quotient {4,3}/N, where N is a group of symmetries (automorphisms) of the cube with just two elements - the identity, and the symmetry that maps each corner (or edge or face) to its opposite.
If L is, instead, also a square, the universal polytope {K,L} (that is, {4,4}) is the tessellation of the Euclidean plane by squares. This tessellation has infinitely many quotients with square faces, four per vertex, some regular and some not. Except for the universal polytope itself, they all correspond to various ways to tessellate either a torus or an infinitely long cylinder with squares.
The 11-cell and the 57-cell
The 11-cell, discovered independently by H. S. M. Coxeter and Branko Grünbaum, is an abstract 4-polytope. Its facets are hemi-icosahedra. Since its facets are, topologically, projective planes instead of spheres, the 11-cell is not a tessellation of any manifold in the usual sense. Instead, the 11-cell is a locally projective polytope. It is self-dual and universal: it is the only polytope with hemi-icosahedral facets and hemi-dodecahedral vertex figures.
The 57-cell is also self-dual, with hemi-dodecahedral facets. It was discovered by H. S. M. Coxeter shortly after the discovery of the 11-cell. Like the 11-cell, it is also universal, being the only polytope with hemi-dodecahedral facets and hemi-icosahedral vertex figures. On the other hand, there are many other polytopes with hemi-dodecahedral facets and Schläfli type {5,3,5}. The universal polytope with hemi-dodecahedral facets and icosahedral (not hemi-icosahedral) vertex figures is finite, but very large, with 10,006,920 facets and half as many vertices.
Local topology
The amalgamation problem has, historically, been pursued according to local topology. That is, rather than restricting K and L to be particular polytopes, they are allowed to be any polytope with a given topology, that is, any polytope tessellating a given manifold. If K and L are spherical (that is, tessellations of a topological sphere), then P is called locally spherical and corresponds itself to a tessellation of some manifold. For example, if K and L are both squares (and so are topologically the same as circles), P will be a tessellation of the plane, torus or Klein bottle by squares. A tessellation of an n-dimensional manifold is actually a rank n + 1 polytope. This is in keeping with the common intuition that the Platonic solids are three dimensional, even though they can be regarded as tessellations of the two-dimensional surface of a ball.
In general, an abstract polytope is called locally X if its facets and vertex figures are, topologically, either spheres or X, but not both spheres. The 11-cell and 57-cell are examples of rank 4 (that is, four-dimensional) locally projective polytopes, since their facets and vertex figures are tessellations of real projective planes. There is a weakness in this terminology, however: it does not allow an easy way to describe a polytope whose facets are tori and whose vertex figures are projective planes, for example, and it is worse still if different facets have different topologies, or no well-defined topology at all. However, much progress has been made on the complete classification of the locally toroidal regular polytopes.[15]
Exchange maps
Let Ψ be a flag of an abstract n-polytope, and let −1 < i < n. From the definition of an abstract polytope, it can be proven that there is a unique flag differing from Ψ by a rank i element, and the same otherwise. If we call this flag Ψ(i), then this defines a collection of maps on the polytope's flags, say φi. These maps are called exchange maps, since they swap pairs of flags: (Ψφi)φi = Ψ always. Some other properties of the exchange maps:
• φi² is the identity map
• The φi generate a group. (The action of this group on the flags of the polytope is an example of what is called the flag action of the group on the polytope)
• If |i − j| > 1, φiφj = φjφi
• If α is an automorphism of the polytope, then αφi = φiα
• If the polytope is regular, the group generated by the φi is isomorphic to the automorphism group, otherwise, it is strictly larger.
The exchange maps and the flag action in particular can be used to prove that any abstract polytope is a quotient of some regular polytope.
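Here is a small self-contained Python sketch of the exchange maps for the triangle abc, representing each flag by its (vertex, edge) pair, since the least and greatest faces are common to every flag. The encoding is an assumption of the example, not notation from the sources.

```python
# Exchange maps phi_0 (swap vertex) and phi_1 (swap edge) on the
# flags of the triangle abc.
edges = {'ab': {'a', 'b'}, 'bc': {'b', 'c'}, 'ca': {'c', 'a'}}
tri_flags = [(v, e) for e, vs in edges.items() for v in vs]   # 6 flags

def phi(i, flag):
    v, e = flag
    if i == 0:   # the other vertex of the same edge
        return ((edges[e] - {v}).pop(), e)
    else:        # the other edge through the same vertex
        return (v, next(f for f, vs in edges.items() if v in vs and f != e))

# phi_i is an involution and moves every flag.
assert all(phi(i, phi(i, f)) == f for i in (0, 1) for f in tri_flags)
assert all(phi(i, f) != f for i in (0, 1) for f in tri_flags)
```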
Incidence matrices
A polytope can also be represented by tabulating its incidences.
The following incidence matrix is that of a triangle:
      ø  a  b  c  ab bc ca abc
ø     1  1  1  1  1  1  1  1
a     1  1  0  0  1  0  1  1
b     1  0  1  0  1  1  0  1
c     1  0  0  1  0  1  1  1
ab    1  1  1  0  1  0  0  1
bc    1  0  1  1  0  1  0  1
ca    1  1  0  1  0  0  1  1
abc   1  1  1  1  1  1  1  1
The table shows a 1 wherever a face is a subface of another, or vice versa (so the table is symmetric about the diagonal); in fact, the table contains redundant information: it would suffice to show only a 1 when the row face ≤ the column face.
Since both the body and the empty set are incident with all other elements, the first row and column as well as the last row and column are trivial and can conveniently be omitted.
Square pyramid
Further information is gained by counting each occurrence. This numerative usage enables a symmetry grouping, as in the Hasse diagram of the square pyramid: if vertices B, C, D, and E are considered symmetrically equivalent within the abstract polytope, then edges f, g, h, and j will be grouped together, and also edges k, l, m, and n, and finally also the triangles P, Q, R, and S. Thus the corresponding incidence matrix of this abstract polytope may be shown as:
          A  B,C,D,E  f,g,h,j  k,l,m,n  P,Q,R,S  T
A         1     *        4        0        4     0
B,C,D,E   *     4        1        2        2     1
f,g,h,j   1     1        4        *        2     0
k,l,m,n   0     2        *        4        1     1
P,Q,R,S   1     2        2        1        4     *
T         0     4        0        4        *     1
In this accumulated incidence matrix representation the diagonal entries represent the total counts of either element type.
Elements of different type but the same rank are clearly never incident, so the value is always 0; however, to help distinguish such relationships, an asterisk (*) is used instead of 0.
The sub-diagonal entries of each row represent the incidence counts of the relevant sub-elements, while the super-diagonal entries represent the respective element counts of the vertex-, edge- or whatever -figure.
Already this simple square pyramid shows that the symmetry-accumulated incidence matrices are no longer symmetric. But there is still a simple entity relationship (besides the generalised Euler formulae for the diagonal, sub-diagonal and super-diagonal entities of each row, at least whenever no holes or stars etc. are considered), as for any such incidence matrix $I=(I_{ij})$ the following holds:
$I_{ii}\cdot I_{ij}=I_{ji}\cdot I_{jj}\ \ (i<j).$
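A quick check of this identity on the square pyramid's matrix above; in the sketch, None stands for the asterisked same-rank cells, which the identity does not cover.

```python
# Verify I_ii * I_ij == I_ji * I_jj (i < j) on the accumulated
# incidence matrix of the square pyramid.
N = None
I = [
    [1, N, 4, 0, 4, 0],
    [N, 4, 1, 2, 2, 1],
    [1, 1, 4, N, 2, 0],
    [0, 2, N, 4, 1, 1],
    [1, 2, 2, 1, 4, N],
    [0, 4, 0, 4, N, 1],
]
for i in range(6):
    for j in range(i + 1, 6):
        if I[i][j] is not None:
            assert I[i][i] * I[i][j] == I[j][i] * I[j][j]
print("identity holds")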
History
In the 1960s Branko Grünbaum issued a call to the geometric community to consider generalizations of the concept of regular polytopes that he called polystromata. He developed a theory of polystromata, showing examples of new objects including the 11-cell.
The 11-cell is a self-dual 4-polytope whose facets are not icosahedra, but are "hemi-icosahedra" — that is, they are the shape one gets if one considers opposite faces of the icosahedra to be actually the same face (Grünbaum, 1977). A few years after Grünbaum's discovery of the 11-cell, H.S.M. Coxeter discovered a similar polytope, the 57-cell (Coxeter 1982, 1984), and then independently rediscovered the 11-cell.
With the earlier work by Branko Grünbaum, H. S. M. Coxeter and Jacques Tits having laid the groundwork, the basic theory of the combinatorial structures now known as abstract polytopes was first described by Egon Schulte in his 1980 PhD dissertation. In it he defined "regular incidence complexes" and "regular incidence polytopes". Subsequently, he and Peter McMullen developed the basics of the theory in a series of research articles that were later collected into a book. Numerous other researchers have since made their own contributions, and the early pioneers (including Grünbaum) have also accepted Schulte's definition as the "correct" one.
Since then, research in the theory of abstract polytopes has focused mostly on regular polytopes, that is, those whose automorphism groups act transitively on the set of flags of the polytope.
See also
• Eulerian poset
• Graded poset
• Regular polytope
Notes
1. McMullen & Schulte 2002, p. 31
2. McMullen & Schulte 2002
3. McMullen & Schulte 2002, p. 23
4. Kaibel, Volker; Schwartz, Alexander (2003). "On the Complexity of Polytope Isomorphism Problems". Graphs and Combinatorics. 19 (2): 215–230. arXiv:math/0106093. doi:10.1007/s00373-002-0503-y. S2CID 179936. Archived from the original on 2015-07-21.
5. McMullen & Schulte 2002, p. 121
6. McMullen 1994, p. 225.
7. McMullen & Schulte 2002, p. 126.
8. McMullen 1994, p. 229.
9. McMullen & Schulte 2002, pp. 140–141.
10. McMullen 1994, p. 231.
11. McMullen & Schulte 2002, p. 141.
12. McMullen & Schulte 2002, p. 127.
13. McMullen 1994, pp. 229–230.
14. McMullen 1994, p. 232.
15. McMullen & Schulte 2002.
References
• McMullen, Peter (1994), "Realizations of regular apeirotopes", Aequationes Mathematicae, 47 (2–3): 223–239, doi:10.1007/BF01832961, MR 1268033, S2CID 121616949
• McMullen, Peter; Schulte, Egon (December 2002), Abstract Regular Polytopes (1st ed.), Cambridge University Press, ISBN 0-521-81496-0
• Jaron's World: Shapes in Other Dimensions, Discover mag., Apr 2007
• Dr. Richard Klitzing, Incidence Matrices
• Schulte, E.; "Symmetry of polytopes and polyhedra", Handbook of discrete and computational geometry, edited by Goodman, J. E. and O'Rourke, J., 2nd Ed., Chapman & Hall, 2004.
Generalized inverse
In mathematics, and in particular, algebra, a generalized inverse (or, g-inverse) of an element x is an element y that has some properties of an inverse element but not necessarily all of them. The purpose of constructing a generalized inverse of a matrix is to obtain a matrix that can serve as an inverse in some sense for a wider class of matrices than invertible matrices. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix $A$.
"Pseudoinverse" redirects here. For the Moore–Penrose inverse, sometimes referred to as "the pseudoinverse", see Moore–Penrose inverse.
A matrix $A^{\mathrm {g} }\in \mathbb {R} ^{n\times m}$ is a generalized inverse of a matrix $A\in \mathbb {R} ^{m\times n}$ if $AA^{\mathrm {g} }A=A.$[1][2][3] A generalized inverse exists for an arbitrary matrix, and when a matrix has a regular inverse, this inverse is its unique generalized inverse.[1]
Motivation
Consider the linear system
$Ax=y$
where $A$ is an $n\times m$ matrix and $y\in {\mathcal {R}}(A),$ the column space of $A$. If $A$ is nonsingular (which implies $n=m$) then $x=A^{-1}y$ will be the solution of the system. Note that, if $A$ is nonsingular, then
$AA^{-1}A=A.$
Now suppose $A$ is rectangular ($n\neq m$), or square and singular. Then we need a right candidate $G$ of order $m\times n$ such that for all $y\in {\mathcal {R}}(A),$
$AGy=y.$[4]
That is, $x=Gy$ is a solution of the linear system $Ax=y$. Equivalently, we need a matrix $G$ of order $m\times n$ such that
$AGA=A.$
Hence we can define the generalized inverse as follows: Given an $m\times n$ matrix $A$, an $n\times m$ matrix $G$ is said to be a generalized inverse of $A$ if $AGA=A.$[1][2][3] The matrix $A^{-1}$ has been termed a regular inverse of $A$ by some authors.[5]
Types
Important types of generalized inverse include:
• One-sided inverse (right inverse or left inverse)
• Right inverse: If the matrix $A$ has dimensions $n\times m$ and ${\textrm {rank}}(A)=n$, then there exists an $m\times n$ matrix $A_{\mathrm {R} }^{-1}$ called the right inverse of $A$ such that $AA_{\mathrm {R} }^{-1}=I_{n}$, where $I_{n}$ is the $n\times n$ identity matrix.
• Left inverse: If the matrix $A$ has dimensions $n\times m$ and ${\textrm {rank}}(A)=m$, then there exists an $m\times n$ matrix $A_{\mathrm {L} }^{-1}$ called the left inverse of $A$ such that $A_{\mathrm {L} }^{-1}A=I_{m}$, where $I_{m}$ is the $m\times m$ identity matrix.[6]
• Bott–Duffin inverse
• Drazin inverse
• Moore–Penrose inverse
Some generalized inverses are defined and classified based on the Penrose conditions:
1. $AA^{\mathrm {g} }A=A$
2. $A^{\mathrm {g} }AA^{\mathrm {g} }=A^{\mathrm {g} }$
3. $(AA^{\mathrm {g} })^{*}=AA^{\mathrm {g} }$
4. $(A^{\mathrm {g} }A)^{*}=A^{\mathrm {g} }A,$
where ${}^{*}$ denotes conjugate transpose. If $A^{\mathrm {g} }$ satisfies the first condition, then it is a generalized inverse of $A$. If it satisfies the first two conditions, then it is a reflexive generalized inverse of $A$. If it satisfies all four conditions, then it is the pseudoinverse of $A$, which is denoted by $A^{+}$ and also known as the Moore–Penrose inverse, after the pioneering works by E. H. Moore and Roger Penrose.[2][7][8][9][10][11] It is convenient to define an $I$-inverse of $A$ as an inverse that satisfies the subset $I\subset \{1,2,3,4\}$ of the Penrose conditions listed above. Relations, such as $A^{(1,4)}AA^{(1,3)}=A^{+}$, can be established between these different classes of $I$-inverses.[1]
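As an illustration, here is a small numpy sketch that tests the four conditions for a candidate inverse; numpy.linalg.pinv returns the Moore–Penrose inverse, so it should pass all four. The helper name and tolerance are assumptions of the example.

```python
# Test which of the four Penrose conditions a candidate G satisfies.
import numpy as np

def penrose_conditions(A, G, tol=1e-10):
    close = lambda X, Y: np.allclose(X, Y, atol=tol)
    return (close(A @ G @ A, A),              # (1) generalized inverse
            close(G @ A @ G, G),              # (2) reflexive
            close((A @ G).conj().T, A @ G),   # (3)
            close((G @ A).conj().T, G @ A))   # (4)

A = np.array([[1., 2., 3.], [4., 5., 6.]])
print(penrose_conditions(A, np.linalg.pinv(A)))   # (True, True, True, True)
```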
When $A$ is non-singular, any generalized inverse satisfies $A^{\mathrm {g} }=A^{-1}$ and is therefore unique. For a singular $A$, some generalized inverses, such as the Drazin inverse and the Moore–Penrose inverse, are unique, while others are not necessarily uniquely defined.
Examples
Reflexive generalized inverse
Let
$A={\begin{bmatrix}1&2&3\\4&5&6\\7&8&9\end{bmatrix}},\quad G={\begin{bmatrix}-{\frac {5}{3}}&{\frac {2}{3}}&0\\[4pt]{\frac {4}{3}}&-{\frac {1}{3}}&0\\[4pt]0&0&0\end{bmatrix}}.$
Since $\det(A)=0$, $A$ is singular and has no regular inverse. However, $A$ and $G$ satisfy Penrose conditions (1) and (2), but not (3) or (4). Hence, $G$ is a reflexive generalized inverse of $A$.
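Reusing penrose_conditions from the sketch above confirms the claim: conditions (1) and (2) hold, while (3) and (4) fail.

```python
import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
G = np.array([[-5/3,  2/3, 0.],
              [ 4/3, -1/3, 0.],
              [ 0.,   0.,  0.]])
print(penrose_conditions(A, G))   # (True, True, False, False)
```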
One-sided inverse
Let
$A={\begin{bmatrix}1&2&3\\4&5&6\end{bmatrix}},\quad A_{\mathrm {R} }^{-1}={\begin{bmatrix}-{\frac {17}{18}}&{\frac {8}{18}}\\[4pt]-{\frac {2}{18}}&{\frac {2}{18}}\\[4pt]{\frac {13}{18}}&-{\frac {4}{18}}\end{bmatrix}}.$
Since $A$ is not square, $A$ has no regular inverse. However, $A_{\mathrm {R} }^{-1}$ is a right inverse of $A$. The matrix $A$ has no left inverse.
Inverse of other semigroups (or rings)
In any semigroup (or ring, since any ring is a semigroup under multiplication), the element b is a generalized inverse of an element a if and only if $a\cdot b\cdot a=a$.
The generalized inverses of the element 3 in the ring $\mathbb {Z} /12\mathbb {Z} $ are 3, 7, and 11, since in the ring $\mathbb {Z} /12\mathbb {Z} $:
$3\cdot 3\cdot 3=3$
$3\cdot 7\cdot 3=3$
$3\cdot 11\cdot 3=3$
The generalized inverses of the element 4 in the ring $\mathbb {Z} /12\mathbb {Z} $ are 1, 4, 7, and 10, since in the ring $\mathbb {Z} /12\mathbb {Z} $:
$4\cdot 1\cdot 4=4$
$4\cdot 4\cdot 4=4$
$4\cdot 7\cdot 4=4$
$4\cdot 10\cdot 4=4$
If an element a in a semigroup (or ring) has an inverse, the inverse must be the only generalized inverse of this element, like the elements 1, 5, 7, and 11 in the ring $\mathbb {Z} /12\mathbb {Z} $.
In the ring $\mathbb {Z} /12\mathbb {Z} $, any element is a generalized inverse of 0, however, 2 has no generalized inverse, since there is no b in $\mathbb {Z} /12\mathbb {Z} $ such that $2\cdot b\cdot 2=2$.
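These memberships are easy to confirm by brute force; a minimal Python sketch:

```python
# Generalized inverses in the ring Z/12Z, by exhaustive search.
n = 12
def gen_inverses(a):
    return [b for b in range(n) if (a * b * a) % n == a % n]

print(gen_inverses(3))   # [3, 7, 11]
print(gen_inverses(4))   # [1, 4, 7, 10]
print(gen_inverses(5))   # [5] -- 5 is invertible, so its inverse is the
                         #        only generalized inverse
print(gen_inverses(2))   # [] -- 2 has no generalized inverse
print(gen_inverses(0))   # all twelve elements
```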
Construction
The following characterizations are easy to verify:
• A right inverse of a non-square matrix $A$ is given by $A_{\mathrm {R} }^{-1}=A^{\intercal }\left(AA^{\intercal }\right)^{-1}$, provided $A$ has full row rank.[6]
• A left inverse of a non-square matrix $A$ is given by $A_{\mathrm {L} }^{-1}=\left(A^{\intercal }A\right)^{-1}A^{\intercal }$, provided $A$ has full column rank.[6]
• If $A=BC$ is a rank factorization, then $G=C_{\mathrm {R} }^{-1}B_{\mathrm {L} }^{-1}$ is a g-inverse of $A$, where $C_{\mathrm {R} }^{-1}$ is a right inverse of $C$ and $B_{\mathrm {L} }^{-1}$ is a left inverse of $B$.
• If $A=P{\begin{bmatrix}I_{r}&0\\0&0\end{bmatrix}}Q$ for any non-singular matrices $P$ and $Q$, then $G=Q^{-1}{\begin{bmatrix}I_{r}&U\\W&V\end{bmatrix}}P^{-1}$ is a generalized inverse of $A$ for arbitrary $U,V$ and $W$.
• Let $A$ be of rank $r$. Without loss of generality, let
$A={\begin{bmatrix}B&C\\D&E\end{bmatrix}},$
where $B_{r\times r}$ is the non-singular submatrix of $A$. Then,
$G={\begin{bmatrix}B^{-1}&0\\0&0\end{bmatrix}}$
is a generalized inverse of $A$ if and only if $E=DB^{-1}C$.
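The first of these constructions is easy to check numerically. A numpy sketch: applied to the matrix of the one-sided example above, the formula reproduces exactly the right inverse given there.

```python
# Right inverse A^T (A A^T)^{-1} of a full-row-rank matrix.
import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.]])
A_R = A.T @ np.linalg.inv(A @ A.T)

print(np.allclose(A @ A_R, np.eye(2)))   # True
print(A_R * 18)   # [[-17, 8], [-2, 2], [13, -4]] -- the example's matrix
```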
Uses
Any generalized inverse can be used to determine whether a system of linear equations has any solutions, and if so to give all of them. If any solutions exist for the n × m linear system
$Ax=b$,
with vector $x$ of unknowns and vector $b$ of constants, all solutions are given by
$x=A^{\mathrm {g} }b+\left[I-A^{\mathrm {g} }A\right]w$,
parametric on the arbitrary vector $w$, where $A^{\mathrm {g} }$ is any generalized inverse of $A$. Solutions exist if and only if $A^{\mathrm {g} }b$ is a solution, that is, if and only if $AA^{\mathrm {g} }b=b$. If A has full column rank, the bracketed expression in this equation is the zero matrix and so the solution is unique.[12]
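A short numpy sketch of this solution formula, using the pseudoinverse as the generalized inverse; the right-hand side b is chosen, as an assumption of the example, inside the column space of A so that the system is consistent.

```python
# All solutions of Ax = b via x = A^g b + (I - A^g A) w.
import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.]])
b = np.array([6., 15.])                  # consistent: b = A @ [1, 1, 1]
Ag = np.linalg.pinv(A)

print(np.allclose(A @ Ag @ b, b))        # True: consistency test passes

rng = np.random.default_rng(0)
w = rng.standard_normal(3)               # arbitrary parameter vector
x = Ag @ b + (np.eye(3) - Ag @ A) @ w
print(np.allclose(A @ x, b))             # True: every such x solves Ax = b
```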
Generalized inverses of matrices
The generalized inverses of matrices can be characterized as follows. Let $A\in \mathbb {R} ^{m\times n}$, and
$A=U{\begin{bmatrix}\Sigma _{1}&0\\0&0\end{bmatrix}}V^{\textsf {T}}$
be its singular-value decomposition. Then for any generalized inverse $A^{g}$, there exist[1] matrices $X$, $Y$, and $Z$ such that
$A^{g}=V{\begin{bmatrix}\Sigma _{1}^{-1}&X\\Y&Z\end{bmatrix}}U^{\textsf {T}}.$
Conversely, any choice of $X$, $Y$, and $Z$ giving a matrix of this form is a generalized inverse of $A$.[1] The $\{1,2\}$-inverses are exactly those for which $Z=Y\Sigma _{1}X$, the $\{1,3\}$-inverses are exactly those for which $X=0$, and the $\{1,4\}$-inverses are exactly those for which $Y=0$. In particular, the pseudoinverse is given by $X=Y=Z=0$:
$A^{+}=V{\begin{bmatrix}\Sigma _{1}^{-1}&0\\0&0\end{bmatrix}}U^{\textsf {T}}.$
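A numpy sketch of this characterization: filling the X, Y, Z blocks with arbitrary entries always produces a generalized inverse. The rank threshold and the random seed are assumptions of the example.

```python
# Build a generalized inverse from the SVD with arbitrary X, Y, Z blocks.
import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])   # rank 2
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))

rng = np.random.default_rng(1)
X = rng.standard_normal((r, 3 - r))        # arbitrary blocks
Y = rng.standard_normal((3 - r, r))
Z = rng.standard_normal((3 - r, 3 - r))

top = np.hstack([np.diag(1 / s[:r]), X])
bot = np.hstack([Y, Z])
Ag = Vt.T @ np.vstack([top, bot]) @ U.T

print(np.allclose(A @ Ag @ A, A))          # True: condition (1) holds
```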
Transformation consistency properties
In practical applications it is necessary to identify the class of matrix transformations that must be preserved by a generalized inverse. For example, the Moore–Penrose inverse, $A^{+},$ satisfies the following definition of consistency with respect to transformations involving unitary matrices U and V:
$(UAV)^{+}=V^{*}A^{+}U^{*}$.
The Drazin inverse, $A^{\mathrm {D} }$ satisfies the following definition of consistency with respect to similarity transformations involving a nonsingular matrix S:
$\left(SAS^{-1}\right)^{\mathrm {D} }=SA^{\mathrm {D} }S^{-1}$.
The unit-consistent (UC) inverse,[13] $A^{\mathrm {U} },$ satisfies the following definition of consistency with respect to transformations involving nonsingular diagonal matrices D and E:
$(DAE)^{\mathrm {U} }=E^{-1}A^{\mathrm {U} }D^{-1}$.
The fact that the Moore–Penrose inverse provides consistency with respect to rotations (which are orthonormal transformations) explains its widespread use in physics and other applications in which Euclidean distances must be preserved. The UC inverse, by contrast, is applicable when system behavior is expected to be invariant with respect to the choice of units on different state variables, e.g., miles versus kilometers.
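The unitary-consistency property of the Moore–Penrose inverse is easy to confirm numerically; here is a sketch with random orthogonal U and V obtained from QR factorizations (for real matrices, the conjugate transpose is just the transpose).

```python
# Check (U A V)^+ == V^T A^+ U^T for orthogonal U, V.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal U
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal V

lhs = np.linalg.pinv(U @ A @ V)
rhs = V.T @ np.linalg.pinv(A) @ U.T
print(np.allclose(lhs, rhs))                       # True
```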
See also
• Block matrix pseudoinverse
• Regular semigroup
Citations
1. Ben-Israel & Greville 2003, pp. 2, 7
2. Nakamura 1991, pp. 41–42
3. Rao & Mitra 1971, pp. vii, 20
4. Rao & Mitra 1971, p. 24
5. Rao & Mitra 1971, pp. 19–20
6. Rao & Mitra 1971, p. 19
7. Rao & Mitra 1971, pp. 20, 28, 50–51
8. Ben-Israel & Greville 2003, p. 7
9. Campbell & Meyer 1991, p. 10
10. James 1978, p. 114
11. Nakamura 1991, p. 42
12. James 1978, pp. 109–110
13. Uhlmann 2018
Sources
Textbook
• Ben-Israel, Adi; Greville, Thomas Nall Eden (2003). Generalized Inverses: Theory and Applications (2nd ed.). New York, NY: Springer. doi:10.1007/b97366. ISBN 978-0-387-00293-4.
• Campbell, Stephen L.; Meyer, Carl D. (1991). Generalized Inverses of Linear Transformations. Dover. ISBN 978-0-486-66693-8.
• Horn, Roger Alan; Johnson, Charles Royal (1985). Matrix Analysis. Cambridge University Press. ISBN 978-0-521-38632-6.
• Nakamura, Yoshihiko (1991). Advanced Robotics: Redundancy and Optimization. Addison-Wesley. ISBN 978-0201151985.
• Rao, C. Radhakrishna; Mitra, Sujit Kumar (1971). Generalized Inverse of Matrices and its Applications. New York: John Wiley & Sons. pp. 240. ISBN 978-0-471-70821-6.
Publication
• James, M. (June 1978). "The generalised inverse". The Mathematical Gazette. 62 (420): 109–114. doi:10.2307/3617665. JSTOR 3617665.
• Uhlmann, Jeffrey K. (2018). "A Generalized Matrix Inverse that is Consistent with Respect to Diagonal Transformations" (PDF). SIAM Journal on Matrix Analysis and Applications. 239 (2): 781–800. doi:10.1137/17M113890X.
• Zheng, Bing; Bapat, Ravindra (2004). "Generalized inverse A(2)T,S and a rank equation". Applied Mathematics and Computation. 155 (2): 407–415. doi:10.1016/S0096-3003(03)00786-0.
Regular isotopy
In the mathematical subject of knot theory, regular isotopy is the equivalence relation of link diagrams that is generated by using the 2nd and 3rd Reidemeister moves only. The notion of regular isotopy was introduced by Louis Kauffman (Kauffman 1990). It can be thought of as an isotopy of a ribbon pressed flat against the plane which keeps the ribbon flat. For diagrams in the plane this is a finer equivalence relation than ambient isotopy of framed links, since the 2nd and 3rd Reidemeister moves preserve the winding number of the diagram (Kauffman 1990, pp. 450ff.). However, for diagrams in the sphere (considered as the plane plus infinity), the two notions are equivalent, due to the extra freedom of passing a strand through infinity.
See also
• Ambient isotopy
• Knot polynomial
References
• Kauffman, Louis H. (1990). "An invariant of regular isotopy" (PDF). Transactions of the American Mathematical Society. 318 (2): 417–471. doi:10.1090/S0002-9947-1990-0958895-7. ISSN 0002-9947. Archived from the original (PDF) on 2019-10-06. Retrieved 2019-10-06.
Regular category
In category theory, a regular category is a category with finite limits and coequalizers of a pair of morphisms called kernel pairs, satisfying certain exactness conditions. In that way, regular categories recapture many properties of abelian categories, like the existence of images, without requiring additivity. At the same time, regular categories provide a foundation for the study of a fragment of first-order logic, known as regular logic.
Definition
A category C is called regular if it satisfies the following three properties:[1]
• C is finitely complete.
• If f : X → Y is a morphism in C, then the coequalizer of the two projections p0, p1 : X ×Y X → X from the pullback of f along itself exists. The pair (p0, p1) is called the kernel pair of f. Being a pullback, the kernel pair is unique up to a unique isomorphism.
• If f : X → Y is a regular epimorphism in C, then its pullback along any morphism is a regular epimorphism as well. A regular epimorphism is an epimorphism that appears as a coequalizer of some pair of morphisms.
Examples
Examples of regular categories include:
• Set, the category of sets and functions between the sets
• More generally, every elementary topos
• Grp, the category of groups and group homomorphisms
• The category of rings and ring homomorphisms
• More generally, the category of models of any variety
• Every bounded meet-semilattice, with morphisms given by the order relation
• Every abelian category
The following categories are not regular:
• Top, the category of topological spaces and continuous functions
• Cat, the category of small categories and functors
Epi-mono factorization
In a regular category, the regular-epimorphisms and the monomorphisms form a factorization system. Every morphism f:X→Y can be factorized into a regular epimorphism e:X→E followed by a monomorphism m:E→Y, so that f=me. The factorization is unique in the sense that if e':X→E' is another regular epimorphism and m':E'→Y is another monomorphism such that f=m'e', then there exists an isomorphism h:E→E' such that he=e' and m'h=m. The monomorphism m is called the image of f.
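In the regular category of finite sets this factorization can be computed directly: a function factors through its image as a surjection followed by an injection. A minimal Python sketch with an arbitrary example function (the sets and f are assumptions of the example):

```python
# Epi-mono factorization f = m . e of a function between finite sets.
X = [0, 1, 2, 3]
f = {0: 'a', 1: 'a', 2: 'b', 3: 'c'}    # f : X -> Y = {'a','b','c','d'}

kernel_pair = [(x1, x2) for x1 in X for x2 in X if f[x1] == f[x2]]
E = sorted(set(f.values()))             # image of f
e = {x: f[x] for x in X}                # regular epi  e : X ->> E
m = {y: y for y in E}                   # mono         m : E >-> Y

print(all(m[e[x]] == f[x] for x in X))  # True: f = m . e
# e coequalizes the kernel pair of f:
print(all(e[x1] == e[x2] for x1, x2 in kernel_pair))   # True
```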
Exact sequences and regular functors
In a regular category, a diagram of the form $R\rightrightarrows X\to Y$ is said to be an exact sequence if it is both a coequalizer and a kernel pair. The terminology is a generalization of exact sequences in homological algebra: in an abelian category, a diagram
$R\;{\overset {r}{\underset {s}{\rightrightarrows }}}\;X\xrightarrow {f} Y$
is exact in this sense if and only if $0\to R{\xrightarrow {(r,s)}}X\oplus X{\xrightarrow {(f,-f)}}Y\to 0$ is a short exact sequence in the usual sense.
A functor between regular categories is called regular, if it preserves finite limits and coequalizers of kernel pairs. A functor is regular if and only if it preserves finite limits and exact sequences. For this reason, regular functors are sometimes called exact functors. Functors that preserve finite limits are often said to be left exact.
Regular logic and regular categories
Regular logic is the fragment of first-order logic that can express statements of the form
$\forall x(\phi (x)\to \psi (x))$,
where $\phi $ and $\psi $ are regular formulae i.e. formulae built up from atomic formulae, the truth constant, binary meets (conjunction) and existential quantification. Such formulae can be interpreted in a regular category, and the interpretation is a model of a sequent $\forall x(\phi (x)\to \psi (x))$, if the interpretation of $\phi $ factors through the interpretation of $\psi $.[2] This gives for each theory (set of sequents) T and for each regular category C a category Mod(T,C) of models of T in C. This construction gives a functor Mod(T,-):RegCat→Cat from the category RegCat of small regular categories and regular functors to small categories. It is an important result that for each theory T there is a regular category R(T), such that for each regular category C there is an equivalence
$\mathbf {Mod} (T,C)\cong \mathbf {RegCat} (R(T),C)$,
which is natural in C. Here, R(T) is called the classifying category of the regular theory T. Up to equivalence any small regular category arises in this way as the classifying category of some regular theory.[2]
Exact (effective) categories
The theory of equivalence relations is a regular theory. An equivalence relation on an object $X$ of a regular category is a monomorphism into $X\times X$ that satisfies the interpretations of the conditions for reflexivity, symmetry and transitivity.
Every kernel pair $p_{0},p_{1}:R\rightarrow X$ defines an equivalence relation $R\rightarrow X\times X$. Conversely, an equivalence relation is said to be effective if it arises as a kernel pair.[3] An equivalence relation is effective if and only if it has a coequalizer and is the kernel pair of that coequalizer.
A regular category is said to be exact, or exact in the sense of Barr, or effective regular, if every equivalence relation is effective.[4] (Note that the term "exact category" is also used differently, for the exact categories in the sense of Quillen.)
Examples of exact categories
• The category of sets is exact in this sense, and so is any (elementary) topos. Every equivalence relation has a coequalizer, which is found by taking equivalence classes.
• Every abelian category is exact.
• Every category that is monadic over the category of sets is exact.
• The category of Stone spaces is regular, but not exact.
See also
• Allegory (category theory)
• Topos
• Exact completion
References
1. Pedicchio & Tholen 2004, p. 177
2. Butz, Carsten (1998). "Regular Categories and Regular Logic". BRICS Lectures Series LS-98-2.
3. Pedicchio & Tholen 2004, p. 169
4. Pedicchio & Tholen 2004, p. 179
• Barr, Michael; Grillet, Pierre A.; van Osdol, Donovan H. (2006) [1971]. Exact Categories and Categories of Sheaves. Lecture Notes in Mathematics. Vol. 236. Springer. ISBN 978-3-540-36999-8.
• Borceux, Francis (1994). Handbook of Categorical Algebra. Vol. 2. Cambridge University Press. ISBN 0-521-44179-X.
• Lack, Stephen (1999). "A note on the exact completion of a regular category, and its infinitary generalizations". Theory and Applications of Categories. 5 (3): 70–80.
• van Oosten, Jaap (1995). "Basic Category Theory" (PDF). University of Aarhus. BRICS Lectures Series LS-95-1.
• Pedicchio, Maria Cristina; Tholen, Walter, eds. (2004). Categorical foundations. Special topics in order, topology, algebra, and sheaf theory. Encyclopedia of Mathematics and Its Applications. Vol. 97. Cambridge: Cambridge University Press. ISBN 0-521-83414-7. Zbl 1034.18001.
Regular matroid
In mathematics, a regular matroid is a matroid that can be represented over all fields.
Definition
A matroid is defined to be a family of subsets of a finite set, satisfying certain axioms. The sets in the family are called "independent sets". One of the ways of constructing a matroid is to select a finite set of vectors in a vector space, and to define a subset of the vectors to be independent in the matroid when it is linearly independent in the vector space. Every family of sets constructed in this way is a matroid, but not every matroid can be constructed in this way, and the vector spaces over different fields lead to different sets of matroids that can be constructed from them.
A matroid $M$ is regular when, for every field $F$, $M$ can be represented by a system of vectors over $F$.[1][2]
Properties
If a matroid is regular, so is its dual matroid,[1] and so is every one of its minors.[3] Every direct sum of regular matroids remains regular.[4]
Every graphic matroid (and every co-graphic matroid) is regular.[5] Conversely, every regular matroid may be constructed by combining graphic matroids, co-graphic matroids, and a certain ten-element matroid that is neither graphic nor co-graphic, using an operation for combining matroids that generalizes the clique-sum operation on graphs.[6]
The number of bases in a regular matroid may be computed as the determinant of an associated matrix, generalizing Kirchhoff's matrix-tree theorem for graphic matroids.[7]
Characterizations
The uniform matroid $U{}_{4}^{2}$ (the four-point line) is not regular: it cannot be realized over the two-element finite field GF(2), so it is not a binary matroid, although it can be realized over all other fields. The matroid of the Fano plane (a rank-three matroid in which seven of the triples of points are dependent) and its dual are also not regular: they can be realized over GF(2), and over all fields of characteristic two, but not over any other fields than those. As Tutte (1958) showed, these three examples are fundamental to the theory of regular matroids: every non-regular matroid has at least one of these three as a minor. Thus, the regular matroids are exactly the matroids that do not have one of the three forbidden minors $U{}_{4}^{2}$, the Fano plane, or its dual.[8]
If a matroid is regular, it must clearly be realizable over the two fields GF(2) and GF(3). The converse is true: every matroid that is realizable over both of these two fields is regular. The result follows from a forbidden minor characterization of the matroids realizable over these fields, part of a family of results codified by Rota's conjecture.[9]
The regular matroids are the matroids that can be defined from a totally unimodular matrix, a matrix in which every square submatrix has determinant 0, 1, or −1. The vectors realizing the matroid may be taken as the rows of the matrix. For this reason, regular matroids are sometimes also called unimodular matroids.[10] The equivalence of regular matroids and unimodular matrices, and their characterization by forbidden minors, are deep results of W. T. Tutte, originally proved by him using the Tutte homotopy theorem.[8] Gerards (1989) later published an alternative and simpler proof of the characterization of unimodular matrices by forbidden minors.[11]
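The defining condition on determinants can be tested by brute force on small matrices; the following sketch (illustrative only, since it enumerates all square submatrices and is therefore exponential, unlike practical algorithms) checks it directly.

```python
# Brute-force test that a small matrix is totally unimodular: every
# square submatrix must have determinant 0, 1, or -1.
from itertools import combinations
import numpy as np

def is_totally_unimodular(A: np.ndarray) -> bool:
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

# A network-style incidence matrix passes; the second matrix fails
# because its 2x2 determinant is 2.
print(is_totally_unimodular(np.array([[1, -1, 0], [0, 1, -1]])))  # True
print(is_totally_unimodular(np.array([[1, 1], [-1, 1]])))         # False
```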
Algorithms
There is a polynomial time algorithm for testing whether a matroid is regular, given access to the matroid through an independence oracle.[12]
References
1. Fujishige, Satoru (2005), Submodular Functions and Optimization, Annals of Discrete Mathematics, Elsevier, p. 24, ISBN 9780444520869.
2. Oxley, James G. (2006), Matroid Theory, Oxford Graduate Texts in Mathematics, vol. 3, Oxford University Press, p. 209, ISBN 9780199202508.
3. Oxley (2006), p. 112.
4. Oxley (2006), p. 131.
5. Tutte, W. T. (1965), "Lectures on matroids", Journal of Research of the National Bureau of Standards, 69B: 1–47, doi:10.6028/jres.069b.001, MR 0179781.
6. Seymour, P. D. (1980), "Decomposition of regular matroids", Journal of Combinatorial Theory, Series B, 28 (3): 305–359, doi:10.1016/0095-8956(80)90075-1, hdl:10338.dmlcz/101946, MR 0579077.
7. Maurer, Stephen B. (1976), "Matrix generalizations of some theorems on trees, cycles and cocycles in graphs", SIAM Journal on Applied Mathematics, 30 (1): 143–148, doi:10.1137/0130017, MR 0392635.
8. Tutte, W. T. (1958), "A homotopy theorem for matroids. I, II", Transactions of the American Mathematical Society, 88 (1): 144–174, doi:10.2307/1993244, JSTOR 1993244, MR 0101526.
9. Seymour, P. D. (1979), "Matroid representation over GF(3)", Journal of Combinatorial Theory, Series B, 26 (2): 159–173, doi:10.1016/0095-8956(79)90055-8, MR 0532586.
10. Oxley (2006), p. 20.
11. Gerards, A. M. H. (1989), "A short proof of Tutte's characterization of totally unimodular matrices", Linear Algebra and Its Applications, 114/115: 207–212, doi:10.1016/0024-3795(89)90461-8.
12. Truemper, K. (1982), "On the efficiency of representability tests for matroids", European Journal of Combinatorics, 3 (3): 275–291, doi:10.1016/s0195-6698(82)80039-5, MR 0679212.
|
Wikipedia
|
Regular semigroup
In mathematics, a regular semigroup is a semigroup S in which every element is regular, i.e., for each element a in S there exists an element x in S such that axa = a.[1] Regular semigroups are one of the most-studied classes of semigroups, and their structure is particularly amenable to study via Green's relations.[2]
History
Regular semigroups were introduced by J. A. Green in his influential 1951 paper "On the structure of semigroups"; this was also the paper in which Green's relations were introduced. The concept of regularity in a semigroup was adapted from an analogous condition for rings, already considered by John von Neumann.[3] It was Green's study of regular semigroups which led him to define his celebrated relations. According to a footnote in Green 1951, the suggestion that the notion of regularity be applied to semigroups was first made by David Rees.
The term inversive semigroup (French: demi-groupe inversif) was historically used as a synonym in the papers of Gabriel Thierrin (a student of Paul Dubreil) in the 1950s,[4][5] and it is still used occasionally.[6]
The basics
There are two equivalent ways in which to define a regular semigroup S:
(1) for each a in S, there is an x in S, which is called a pseudoinverse,[7] with axa = a;
(2) every element a has at least one inverse b, in the sense that aba = a and bab = b.
To see the equivalence of these definitions, first suppose that S is defined by (2). Then b serves as the required x in (1). Conversely, if S is defined by (1), then xax is an inverse for a, since a(xax)a = axa(xa) = axa = a and (xax)a(xax) = x(axa)(xax) = xa(xax) = x(axa)x = xax.[8]
The set of inverses (in the above sense) of an element a in an arbitrary semigroup S is denoted by V(a).[9] Thus, another way of expressing definition (2) above is to say that in a regular semigroup, V(a) is nonempty, for every a in S. The product of any element a with any b in V(a) is always idempotent: abab = ab, since aba = a.[10]
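The equivalence argument above can be replayed concretely in the full transformation monoid on $\{0,1,2\}$, which is a regular semigroup; the sketch below (an illustration with an arbitrarily chosen element $a$, not taken from the cited sources) finds a pseudoinverse $x$ and verifies that $xax$ is an inverse of $a$.

```python
# Illustrative sketch: in the full transformation monoid on {0, 1, 2}
# (a regular semigroup), find a pseudoinverse x of a chosen map a with
# axa = a, and verify that b = xax is an inverse of a (aba = a, bab = b).
from itertools import product

DOM = (0, 1, 2)

def compose(f, g):
    """Product fg under the left-action convention: (fg)(i) = f(g(i))."""
    return tuple(f[g[i]] for i in DOM)

a = (0, 0, 1)                      # an arbitrary non-invertible map
x = next(t for t in product(DOM, repeat=3)
         if compose(compose(a, t), a) == a)   # definition (1): axa = a
b = compose(compose(x, a), x)                 # b = xax

assert compose(compose(a, b), a) == a         # aba = a
assert compose(compose(b, a), b) == b         # bab = b
print("pseudoinverse x =", x, "inverse b = xax =", b)
```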
Examples of regular semigroups
• Every group is a regular semigroup.
• Every band (idempotent semigroup) is regular in the sense of this article, though this is not what is meant by a regular band.
• The bicyclic semigroup is regular.
• Any full transformation semigroup is regular.
• A Rees matrix semigroup is regular.
• The homomorphic image of a regular semigroup is regular.[11]
Unique inverses and unique pseudoinverses
A regular semigroup in which idempotents commute (with idempotents) is an inverse semigroup, or equivalently, every element has a unique inverse. To see this, let S be a regular semigroup in which idempotents commute. Then every element of S has at least one inverse. Suppose that a in S has two inverses b and c, i.e.,
aba = a, bab = b, aca = a and cac = c. Also ab, ba, ac and ca are idempotents as above.
Then
b = bab = b(aca)b = bac(a)b = bac(aca)b = bac(ac)(ab) = bac(ab)(ac) = ba(ca)bac = ca(ba)bac = c(aba)bac = cabac = cac = c.
So, by commuting the pairs of idempotents ab & ac and ba & ca, the inverse of a is shown to be unique. Conversely, it can be shown that any inverse semigroup is a regular semigroup in which idempotents commute.[12]
The existence of a unique pseudoinverse implies the existence of a unique inverse, but the opposite is not true. For example, in the symmetric inverse semigroup, the empty transformation Ø does not have a unique pseudoinverse, because Ø = ØfØ for any transformation f. The inverse of Ø is unique however, because only one f satisfies the additional constraint that f = fØf, namely f = Ø. This remark holds more generally in any semigroup with zero. Furthermore, if every element has a unique pseudoinverse, then the semigroup is a group, and the unique pseudoinverse of an element coincides with the group inverse.
Green's relations
Recall that the principal ideals of a semigroup S are defined in terms of S1, the semigroup with identity adjoined; this is to ensure that an element a belongs to the principal right, left and two-sided ideals which it generates. In a regular semigroup S, however, an element a = axa automatically belongs to these ideals, without recourse to adjoining an identity. Green's relations can therefore be redefined for regular semigroups as follows:
$a\,{\mathcal {L}}\,b$ if, and only if, Sa = Sb;
$a\,{\mathcal {R}}\,b$ if, and only if, aS = bS;
$a\,{\mathcal {J}}\,b$ if, and only if, SaS = SbS.[13]
In a regular semigroup S, every ${\mathcal {L}}$- and ${\mathcal {R}}$-class contains at least one idempotent. If a is any element of S and a' is any inverse for a, then a is ${\mathcal {L}}$-related to a'a and ${\mathcal {R}}$-related to aa'.[14]
Theorem. Let S be a regular semigroup; let a and b be elements of S, and let V(x) denote the set of inverses of x in S. Then
• $a\,{\mathcal {L}}\,b$ iff there exist a' in V(a) and b' in V(b) such that a'a = b'b;
• $a\,{\mathcal {R}}\,b$ iff there exist a' in V(a) and b' in V(b) such that aa' = bb',
• $a\,{\mathcal {H}}\,b$ iff there exist a' in V(a) and b' in V(b) such that a'a = b'b and aa' = bb'.[15]
If S is an inverse semigroup, then the idempotent in each ${\mathcal {L}}$- and ${\mathcal {R}}$-class is unique.[12]
Special classes of regular semigroups
Some special classes of regular semigroups are:[16]
• Locally inverse semigroups: a regular semigroup S is locally inverse if eSe is an inverse semigroup, for each idempotent e.
• Orthodox semigroups: a regular semigroup S is orthodox if its subset of idempotents forms a subsemigroup.
• Generalised inverse semigroups: a regular semigroup S is called a generalised inverse semigroup if its idempotents form a normal band, i.e., xyzx = xzyx for all idempotents x, y, z.
The class of generalised inverse semigroups is the intersection of the class of locally inverse semigroups and the class of orthodox semigroups.[17]
All inverse semigroups are orthodox and locally inverse. The converse statements do not hold.
Generalizations
• eventually regular semigroup
• E-dense (aka E-inversive) semigroup
See also
• Biordered set
• Special classes of semigroups
• Nambooripad order
• Generalized inverse
References
1. Howie 1995 p. 54
2. Howie 2002.
3. von Neumann 1936.
4. Christopher Hollings (16 July 2014). Mathematics across the Iron Curtain: A History of the Algebraic Theory of Semigroups. American Mathematical Society. p. 181. ISBN 978-1-4704-1493-1.
5. "Publications". www.csd.uwo.ca. Archived from the original on 1999-11-04.
6. Jonathan S. Golan (1999). Power Algebras over Semirings: With Applications in Mathematics and Computer Science. Springer Science & Business Media. p. 104. ISBN 978-0-7923-5834-3.
7. Kilp, Knauer and Mikhalev, p. 33
8. Clifford & Preston 2010 Lemma 1.14.
9. Howie 1995 p. 52
10. Clifford & Preston 2010 p. 26
11. Howie 1995 Lemma 2.4.4
12. Howie 1995 Theorem 5.1.1
13. Howie 1995 p. 55
14. Clifford & Preston 2010 Lemma 1.13
15. Howie 1995 Proposition 2.4.1
16. Howie 1995 ch. 6, § 2.4
17. Howie 1995 p. 222
Sources
• Clifford, Alfred Hoblitzelle; Preston, Gordon Bamford (2010) [1967]. The algebraic theory of semigroups. Vol. 2. American Mathematical Society. ISBN 978-0-8218-0272-4.
• Howie, John Mackintosh (1995). Fundamentals of Semigroup Theory (1st ed.). Clarendon Press. ISBN 978-0-19-851194-6.
• M. Kilp, U. Knauer, A.V. Mikhalev, Monoids, Acts and Categories with Applications to Wreath Products and Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, 2000, ISBN 3-11-015248-7.
• J. A. Green (1951). "On the structure of semigroups". Annals of Mathematics. Second Series. 54 (1): 163–172. doi:10.2307/1969317. hdl:10338.dmlcz/100067. JSTOR 1969317.
• J. M. Howie, Semigroups, past, present and future, Proceedings of the International Conference on Algebra and Its Applications, 2002, 6–20.
• J. von Neumann (1936). "On regular rings". Proceedings of the National Academy of Sciences of the USA. 22 (12): 707–713. Bibcode:1936PNAS...22..707V. doi:10.1073/pnas.22.12.707. PMC 1076849. PMID 16577757.
|
Wikipedia
|
Prewellordering
In set theory, a prewellordering on a set $X$ is a preorder $\leq $ on $X$ (a transitive and reflexive relation on $X$) that is strongly connected (meaning that any two points are comparable) and well-founded in the sense that the induced relation $x<y$ defined by $x\leq y{\text{ and }}y\nleq x$ is a well-founded relation.
Prewellordering on a set
A prewellordering on a set $X$ is a homogeneous binary relation $\,\leq \,$ on $X$ that satisfies the following conditions:[1]
1. Reflexivity: $x\leq x$ for all $x\in X.$
2. Transitivity: if $x\leq y$ and $y\leq z$ then $x\leq z$ for all $x,y,z\in X.$
3. Total/Strongly connected: $x\leq y$ or $y\leq x$ for all $x,y\in X.$
4. Well-foundedness: for every non-empty subset $S\subseteq X,$ there exists some $m\in S$ such that $m\leq s$ for all $s\in S.$
• This condition is equivalent to the induced strict preorder $x<y$ defined by $x\leq y$ and $y\nleq x$ being a well-founded relation.
A homogeneous binary relation $\,\leq \,$ on $X$ is a prewellordering if and only if there exists a surjection $\pi :X\to Y$ into a well-ordered set $(Y,\lesssim )$ such that for all $x,y\in X,$ $ x\leq y$ if and only if $\pi (x)\lesssim \pi (y).$[1]
Examples
Given a set $A,$ the binary relation on the set $X:=\operatorname {Finite} (A)$ of all finite subsets of $A$ defined by $S\leq T$ if and only if $|S|\leq |T|$ (where $|\cdot |$ denotes the set's cardinality) is a prewellordering.[1]
Properties
If $\leq $ is a prewellordering on $X,$ then the relation $\sim $ defined by
$x\sim y{\text{ if and only if }}x\leq y\land y\leq x$
is an equivalence relation on $X,$ and $\leq $ induces a wellordering on the quotient $X/{\sim }.$ The order-type of this induced wellordering is an ordinal, referred to as the length of the prewellordering.
A norm on a set $X$ is a map from $X$ into the ordinals. Every norm induces a prewellordering; if $\phi :X\to Ord$ is a norm, the associated prewellordering is given by
$x\leq y{\text{ if and only if }}\phi (x)\leq \phi (y)$
Conversely, every prewellordering is induced by a unique regular norm (a norm $\phi :X\to Ord$ is regular if, for any $x\in X$ and any $\alpha <\phi (x),$ there is $y\in X$ such that $\phi (y)=\alpha $).
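As a small computational illustration of this correspondence (an informal sketch, not from the cited references), the cardinality norm from the example above induces a prewellordering on the finite subsets of $\{0,1,2\}$, whose quotient is well-ordered by the norm values:

```python
# The cardinality norm phi(S) = |S| induces a prewellordering on the
# finite subsets of {0, 1, 2}: reflexive, transitive, strongly connected,
# and every nonempty family has a member of least cardinality.
from itertools import combinations

A = (0, 1, 2)
subsets = [frozenset(c) for r in range(len(A) + 1)
           for c in combinations(A, r)]

leq = lambda S, T: len(S) <= len(T)   # the induced prewellordering

assert all(leq(S, S) for S in subsets)                                # reflexive
assert all(leq(S, T) or leq(T, S) for S in subsets for T in subsets)  # total

# The quotient by  S ~ T  iff  |S| == |T|  is well-ordered by the norm
# values; its order type (here 4) is the length of the prewellordering.
print(sorted({len(S) for S in subsets}))   # [0, 1, 2, 3]
```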
Prewellordering property
If ${\boldsymbol {\Gamma }}$ is a pointclass of subsets of some collection ${\mathcal {F}}$ of Polish spaces, ${\mathcal {F}}$ closed under Cartesian product, and if $\leq $ is a prewellordering of some subset $P$ of some element $X$ of ${\mathcal {F}},$ then $\leq $ is said to be a ${\boldsymbol {\Gamma }}$-prewellordering of $P$ if the relations $<^{*}$ and $\leq ^{*}$ are elements of ${\boldsymbol {\Gamma }},$ where for $x,y\in X,$
1. $x<^{*}y{\text{ if and only if }}x\in P\land (y\notin P\lor (x\leq y\land y\not \leq x))$
2. $x\leq ^{*}y{\text{ if and only if }}x\in P\land (y\notin P\lor x\leq y)$
${\boldsymbol {\Gamma }}$ is said to have the prewellordering property if every set in ${\boldsymbol {\Gamma }}$ admits a ${\boldsymbol {\Gamma }}$-prewellordering.
The prewellordering property is related to the stronger scale property; in practice, many pointclasses having the prewellordering property also have the scale property, which allows drawing stronger conclusions.
Examples
${\boldsymbol {\Pi }}_{1}^{1}$ and ${\boldsymbol {\Sigma }}_{2}^{1}$ both have the prewellordering property; this is provable in ZFC alone. Assuming sufficient large cardinals, for every $n\in \omega ,$ ${\boldsymbol {\Pi }}_{2n+1}^{1}$ and ${\boldsymbol {\Sigma }}_{2n+2}^{1}$ have the prewellordering property.
Reduction
If ${\boldsymbol {\Gamma }}$ is an adequate pointclass with the prewellordering property, then it also has the reduction property: For any space $X\in {\mathcal {F}}$ and any sets $A,B\subseteq X,$ $A$ and $B$ both in ${\boldsymbol {\Gamma }},$ the union $A\cup B$ may be partitioned into sets $A^{*},B^{*},$ both in ${\boldsymbol {\Gamma }},$ such that $A^{*}\subseteq A$ and $B^{*}\subseteq B.$
Separation
If ${\boldsymbol {\Gamma }}$ is an adequate pointclass whose dual pointclass has the prewellordering property, then ${\boldsymbol {\Gamma }}$ has the separation property: For any space $X\in {\mathcal {F}}$ and any sets $A,B\subseteq X,$ $A$ and $B$ disjoint sets both in ${\boldsymbol {\Gamma }},$ there is a set $C\subseteq X$ such that both $C$ and its complement $X\setminus C$ are in ${\boldsymbol {\Gamma }},$ with $A\subseteq C$ and $B\cap C=\varnothing .$
For example, ${\boldsymbol {\Pi }}_{1}^{1}$ has the prewellordering property, so ${\boldsymbol {\Sigma }}_{1}^{1}$ has the separation property. This means that if $A$ and $B$ are disjoint analytic subsets of some Polish space $X,$ then there is a Borel subset $C$ of $X$ such that $C$ includes $A$ and is disjoint from $B.$
See also
• Descriptive set theory – Subfield of mathematical logic
• Graded poset – partially ordered set equipped with a rank function, sometimes called a ranked poset; a graded poset is analogous to a prewellordering with a norm, replacing a map to the ordinals with a map to the natural numbers
• Scale property – a related notion in descriptive set theory
References
1. Moschovakis 2006, p. 106.
• Moschovakis, Yiannis N. (1980). Descriptive Set Theory. Amsterdam: North Holland. ISBN 978-0-08-096319-8. OCLC 499778252.
• Moschovakis, Yiannis N. (2006). Notes on set theory. New York: Springer. ISBN 978-0-387-31609-3. OCLC 209913560.
|
Wikipedia
|
Regular numerical predicate
In computer science and mathematics, more precisely in automata theory, model theory and formal language theory, a regular numerical predicate is a kind of relation over the natural numbers. A regular numerical predicate of arity $r$ can also be considered as a subset of $\mathbb {N} ^{r}$. One of the main interests of this class of predicates is that it can be defined in many different ways, using different logical formalisms. Furthermore, most of the definitions use only basic notions, and thus allow one to relate the foundations of various fields of fundamental computer science such as automata theory, syntactic semigroups, model theory and semigroup theory.
The class of regular numerical predicates is variously denoted ${\mathcal {C}}_{lca}$,[1]: 140 ${\mathcal {N}}_{\mathtt {thres,mod}}$[2] and REG.[3]
Definitions
The class of regular numerical predicates admits many equivalent definitions, which are given below. In all of them, we fix $r\in \mathbb {N} $ and a (numerical) predicate $P\subseteq \mathbb {N} ^{r}$ of arity $r$.
Automata with variables
The first definition encodes a predicate as a formal language; the predicate is said to be regular if that formal language is regular.[3]: 25
Let the alphabet $A$ be the set of subsets of $\{0,\dots ,r-1\}$. Given a vector of $r$ integers $\mathbf {n} =(n_{0},\dots ,n_{r-1})\in \mathbb {N} ^{r}$, it is represented by the word ${\overline {\mathbf {n} }}$ of length $\max(n_{0},\dots ,n_{r-1})+1$ whose $i$-th letter (counting from 0) is $\{j\mid n_{j}=i\}$. For example, the vector $(3,1,3)$ is represented by the word $\emptyset \{1\}\emptyset \{0,2\}$.
We then define ${\overline {P}}$ as $\{{\overline {\mathbf {n} }}\mid \mathbf {n} \in P\}$.
The numerical predicate $P$ is said to be regular if ${\overline {P}}$ is a regular language over the alphabet $A$. This is the reason for the use of the word "regular" to describe this kind of numerical predicate.
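A minimal sketch of this encoding (the helper name is hypothetical, introduced only for illustration):

```python
# First encoding: a vector of naturals becomes a word over the alphabet
# of subsets of {0, ..., r-1}; position i holds the set of coordinates
# whose value is exactly i.
def encode_subsets(n):
    r = len(n)
    return [frozenset(j for j in range(r) if n[j] == i)
            for i in range(max(n) + 1)]

print(encode_subsets((3, 1, 3)))
# [frozenset(), frozenset({1}), frozenset(), frozenset({0, 2})]
```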
Automata reading unary numbers
This second definition is similar to the previous one: predicates are encoded into languages in a different way, and the predicate is said to be regular if and only if the language is regular.[3]: 25
Our alphabet $A$ is the set of vectors of $r$ binary digits, that is, $\{0,1\}^{r}$. Before explaining how to encode a vector of numbers, we explain how to encode a single number.
Given a length $l$ and a number $n\leq l$, the unary representation of $n$ of length $l$ is the word $\mid {n}\mid _{l}$ over the binary alphabet $\{0,1\}$, beginning with a sequence of $n$ "1"s, followed by $l-n$ "0"s. For example, the unary representation of 1 of length 4 is $1000$.
Given a vector of $r$ integers $\mathbf {n} =(n_{0},\dots ,n_{r-1})\in \mathbb {N} ^{r}$, let $l=\max(n_{0},\dots ,n_{r-1})$. The vector $\mathbf {n} $ is represented by the word ${\overline {\mathbf {n} }}\in \left(\{0,1\}^{r}\right)^{*}$ whose projection onto its $i$-th component is $\mid {n_{i}}\mid _{l}$. For example, the representation of $(3,1,3)$ is ${\begin{array}{l|l|l}1&1&1\\1&0&0\\1&1&1\end{array}}$: a word whose letters are the vectors $(1,1,1)$, $(1,0,1)$ and $(1,0,1)$, and whose projections onto the three components are $111$, $100$ and $111$.
As in the previous definition, the numerical predicate $P$ is said to be regular if ${\overline {P}}$ is a regular language over the alphabet $A$.
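A companion sketch for the unary encoding (again with a hypothetical helper name), producing exactly the column letters described above:

```python
# Second encoding: each coordinate is written in unary, padded with 0s
# to the common length max(n); the word's letters are the columns of
# the resulting matrix.
def encode_unary(n):
    l = max(n)
    rows = [[1] * v + [0] * (l - v) for v in n]                # unary rows
    return [tuple(row[i] for row in rows) for i in range(l)]  # columns

print(encode_unary((3, 1, 3)))
# [(1, 1, 1), (1, 0, 1), (1, 0, 1)]
```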
$(\exists )MSO(+1)$
A predicate is regular if and only if it can be defined by a monadic second-order formula $\phi (x_{0},\dots ,x_{r-1})$, or equivalently by an existential monadic second-order formula, where the only atomic predicate is the successor relation $y+1=z$.[3]: 26
$FO(\leq ,\mod )$
A predicate is regular if and only if it can be defined by a first-order formula $\phi (x_{0},\dots ,x_{r-1})$, where the atomic predicates are:
• the order relation $y\leq z$,
• the predicate stating that a number is a multiple of a constant $m$, that is $y\equiv 0\mod m$.[3]: 26
Congruence arithmetic
The language of congruence arithmetic[1]: 140 is defined as the set of Boolean combinations of formulas whose atomic predicates are:
• the addition of a constant $x_{i}+c=x_{j}$, with $c$ an integral constant,
• the order relation $x_{i}\leq x_{j}$,
• the modular relations, with a fixed modular value. That is, predicates of the form $x_{i}\equiv c\mod m$ where $c$ and $m$ are fixed constants and $x_{i}$ is a variable.
A predicate is regular if and only if it can be defined in the language of congruence arithmetic. The equivalence with the previous definition is due to quantifier elimination.[4]
Using recursion and patterns
This definition requires a fixed parameter $m$. A predicate is said to be regular if it is $m$-regular for some $m\geq 2$. To introduce the definition of $m$-regularity, the trivial case $r=0$ is considered separately: when $r=0$, the predicate $P$ is either the constant true or the constant false, and both of those predicates are defined to be $m$-regular for every $m$. Let us now assume that $r\geq 1$. In order to state the definition of a regular predicate in this case, we need to introduce the notion of a section of a predicate.
The section $P^{x_{i}=c}$ of $P$ is the predicate of arity $r-1$ where the $i$-th component is fixed to $c$. Formally, it is defined as $\{(x_{0},\dots ,x_{i-1},x_{i+1},\dots ,x_{r-1})\mid P(x_{0},\dots ,x_{i-1},c,x_{i+1},\dots ,x_{r-1})\}$. For example, let us consider the sum predicate $S=\{(n_{0},n_{1},n_{2})\mid n_{0}+n_{1}=n_{2}\}$. Then $S^{x_{0}=c}=\{(n_{1},n_{2})\mid c+n_{1}=n_{2}\}$ is the predicate which adds the constant $c$, and $S^{x_{2}=c}=\{(n_{0},n_{1})\mid n_{0}+n_{1}=c\}$ is the predicate which states that the sum of its two elements is $c$.
The last equivalent definition of regular predicate can now be given. A predicate $P$ of arity $r\geq 1$ is $m$-regular if it satisfies the two following conditions:[5]
• all of its sections are $m$-regular,
• there exists a threshold $t\in \mathbb {N} $ such that, for every vector $(n_{0},\dots ,n_{r-1})\in \mathbb {N} ^{r}$ with each $n_{i}\geq t$, $P(n_{0},\dots ,n_{r-1})\iff P(n_{0}+m,\dots ,n_{r-1}+m)$.
The second property intuitively means that, when the numbers are big enough, their exact values do not matter. The properties that matter are the order relations between the numbers and their values modulo the period $m$.
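As a concrete illustration (an added check, not from the cited paper): for a unary predicate all sections are 0-ary and hence trivially regular, so $m$-regularity reduces to the periodicity condition. Divisibility by 3 satisfies it with $m=3$, $t=0$, while the set of perfect squares has no period at all:

```python
# Periodicity condition for m-regularity, checked on a finite prefix.
# P(x) = "x is divisible by 3" satisfies P(x) <=> P(x + m) with m = 3,
# t = 0; its sections are 0-ary, so P is 3-regular.
P = lambda x: x % 3 == 0
m, t = 3, 0
assert all(P(x) == P(x + m) for x in range(t, 1000))

# The non-regular predicate "x is a perfect square" violates the
# condition for every period m and threshold t (shown here for m = 3).
import math
Q = lambda x: math.isqrt(x) ** 2 == x
print(any(Q(x) != Q(x + m) for x in range(1000)))  # True
```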
Using recognizable semigroups
Given a subset $s\subseteq \{0,\dots ,r-1\}$, let ${\overline {s}}$ be the characteristic vector of $s$: that is, the vector in $\{0,1\}^{r}$ whose $i$-th component is 1 if $i\in s$, and 0 otherwise. Given an increasing sequence $\mathbf {s} =s_{0}\subsetneq \dots \subsetneq s_{p-1}$ of sets, let $P_{\mathbf {s} }=\{(n_{0},\dots ,n_{p-1})\in \mathbb {N} ^{p}\mid P(\sum n_{i}{\overline {s_{i}}})\}$.
The predicate $P$ is regular if and only if, for each increasing sequence of sets $\mathbf {s} $, $P_{\mathbf {s} }$ is a recognizable submonoid of $\mathbb {N} ^{p}$.[2]
Definition via regular languages
The predicate $P$ is regular if and only if every language that can be defined in first-order logic with atomic predicates for the letters together with the atomic predicate $P$ is regular. The same property holds for monadic second-order logic and for logic with modular quantifiers.[1]
Reducing arity
The following property allows one to reduce an arbitrary non-regular predicate to a simpler binary predicate that is also non-regular.[5]
Let us assume that $P$ is definable in Presburger arithmetic. The predicate $P$ is non-regular if and only if there exists a formula in $\mathbf {FO} (\leq ,P)$ which defines multiplication by a rational ${\frac {p}{q}}\not \in \{0,1\}$. More precisely, it allows one to define the non-regular predicate $\{(p\times n,q\times n)\mid n\in \mathbb {N} \}$ for some $p\not \in \{0,q\}$.
Properties
The class of regular numerical predicate satisfies many properties.
Satisfiability
As in the previous case, let us assume that $P$ is definable in Presburger arithmetic. The satisfiability of $\exists \mathbf {MSO} (+1,P)$ is decidable if and only if $P$ is regular.
This theorem follows from the previous property and from the fact that the satisfiability of $\exists \mathbf {MSO} (+1,\times {\frac {p}{q}})$ is undecidable when $p\neq 0$ and $p\neq q$.
Closure property
The class of regular predicates is closed under union, intersection, complement, taking a section, projection and Cartesian product. All of those properties follow directly from the definition of this class as the class of predicates definable in $\mathbf {FO} (\leq ,\mod )$.
Decidability
It is decidable whether a predicate defined in Presburger arithmetic is regular.[2]
Quantifier elimination
The logic $\mathbf {FO} (\leq ,+c,\mod )$ considered above admits quantifier elimination. More precisely, the quantifier-elimination algorithm of Cooper[6] introduces neither multiplication by constants nor sums of variables; therefore, when applied to an $\mathbf {FO} (\leq ,+c,\mod )$ formula, it returns a quantifier-free formula in $\mathbf {FO} (\leq ,+c,\mod )$.
References
1. Péladeau, Pierre (1992). "Formulas, regular languages and Boolean circuits". Theoretical Computer Science. 101: 133–142. doi:10.1016/0304-3975(92)90152-6.
2. Choffrut, Christian (January 2008). "Deciding whether a relation defined in Presburger logic can be defined in weaker logics". RAIRO - Theoretical Informatics and Applications. 42 (1): 121–135. doi:10.1051/ita:2007047.
3. Straubing, Howard (1994). Finite Automata, Formal Logic and Circuit Complexity. Birkhäuser. ISBN 978-1-4612-0289-9.
4. Smoryński, Craig A. (1991). Logical Number Theory I: An Introduction. Springer. p. 322. ISBN 978-3-642-75462-3.
5. Milchior, Arthur (January 2017). "Undecidability of satisfiability of expansions of FO [<] with a Semilinear Non Regular Predicate over words". The Nature of Computation: 161–170.
6. Cooper, D. C. (1972). "Theorem Proving in Arithmetic without Multiplication". Machine Intelligence. 7: 91–99.
|
Wikipedia
|
Octahedron
In geometry, an octahedron (plural: octahedra or octahedrons) is a polyhedron with eight faces. The term is most commonly used to refer to the regular octahedron, a Platonic solid composed of eight equilateral triangles, four of which meet at each vertex.
Regular octahedron
• Type: Platonic solid
• Elements: F = 8, E = 12, V = 6 (χ = 2)
• Faces by sides: 8{3}
• Conway notation: O, aT
• Schläfli symbols: {3,4}; r{3,3} or ${\begin{Bmatrix}3\\3\end{Bmatrix}}$; { } + { } + { } = 3{ }
• Face configuration: V4.4.4
• Wythoff symbol: 4 | 2 3
• Symmetry: Oh, BC3, [4,3], (*432)
• Rotation group: O, [4,3]+, (432)
• References: U05, C17, W2
• Properties: regular, convex, deltahedron, Hanner polytope
• Dihedral angle: 109.47122° = arccos(−1⁄3)
• Vertex figure: 3.3.3.3
• Dual polyhedron: cube
A regular octahedron is the dual polyhedron of a cube. It is a rectified tetrahedron. It is a square bipyramid in any of three orthogonal orientations. It is also a triangular antiprism in any of four orientations.
An octahedron is the three-dimensional case of the more general concept of a cross polytope.
A regular octahedron is a 3-ball in the Manhattan (ℓ1) metric.
Regular octahedron
Dimensions
If the edge length of a regular octahedron is a, the radius of a circumscribed sphere (one that touches the octahedron at all vertices) is
$r_{u}={\frac {\sqrt {2}}{2}}a\approx 0.707\cdot a$
and the radius of an inscribed sphere (tangent to each of the octahedron's faces) is
$r_{i}={\frac {\sqrt {6}}{6}}a\approx 0.408\cdot a$
while the midradius, which touches the middle of each edge, is
$r_{m}={\tfrac {1}{2}}a=0.5\cdot a$
Orthogonal projections
The octahedron has four special orthogonal projections, centered on an edge, vertex, face, and normal to a face. The second and third correspond to the B2 and A2 Coxeter planes.
The projective symmetries of these projections are [2] when centered on an edge, [2] when centered normal to a face, [4] when centered on a vertex, and [6] when centered on a face.
Spherical tiling
The octahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.
Cartesian coordinates
An octahedron with edge length √2 can be placed with its center at the origin and its vertices on the coordinate axes; the Cartesian coordinates of the vertices are then
( ±1, 0, 0 );
( 0, ±1, 0 );
( 0, 0, ±1 ).
In an x–y–z Cartesian coordinate system, the octahedron with center coordinates (a, b, c) and radius r is the set of all points (x, y, z) such that
$\left|x-a\right|+\left|y-b\right|+\left|z-c\right|=r.$
Area and volume
The surface area A and the volume V of a regular octahedron of edge length a are:
$A=2{\sqrt {3}}a^{2}\approx 3.464a^{2}$
$V={\frac {1}{3}}{\sqrt {2}}a^{3}\approx 0.471a^{3}$
Thus the volume is four times that of a regular tetrahedron with the same edge length, while the surface area is twice as large (because the octahedron has 8 faces where the tetrahedron has 4).
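A quick numerical check of these formulas (an illustrative sketch added here, not part of the original text):

```python
# Radii, surface area and volume of a regular octahedron with edge a,
# plus a check of the Manhattan-metric (L1 ball) vertex description.
import math

a = 2.0
r_u = math.sqrt(2) / 2 * a        # circumradius
r_i = math.sqrt(6) / 6 * a        # inradius
r_m = a / 2                       # midradius
A   = 2 * math.sqrt(3) * a**2     # surface area
V   = math.sqrt(2) / 3 * a**3     # volume
print(r_u, r_i, r_m, A, V)

# Vertices of the octahedron with edge length sqrt(2) satisfy
# |x| + |y| + |z| = 1, the unit ball of the Manhattan metric.
verts = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
assert all(abs(x) + abs(y) + abs(z) == 1 for x, y, z in verts)
```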
If an octahedron has been stretched so that it obeys the equation
$\left|{\frac {x}{x_{m}}}\right|+\left|{\frac {y}{y_{m}}}\right|+\left|{\frac {z}{z_{m}}}\right|=1,$
the formulas for the surface area and volume expand to become
$A=4\,x_{m}\,y_{m}\,z_{m}\times {\sqrt {{\frac {1}{x_{m}^{2}}}+{\frac {1}{y_{m}^{2}}}+{\frac {1}{z_{m}^{2}}}}},$
$V={\frac {4}{3}}\,x_{m}\,y_{m}\,z_{m}.$
Additionally the inertia tensor of the stretched octahedron is
$I={\begin{bmatrix}{\frac {1}{10}}m(y_{m}^{2}+z_{m}^{2})&0&0\\0&{\frac {1}{10}}m(x_{m}^{2}+z_{m}^{2})&0\\0&0&{\frac {1}{10}}m(x_{m}^{2}+y_{m}^{2})\end{bmatrix}}.$
These reduce to the equations for the regular octahedron when
$x_{m}=y_{m}=z_{m}=a\,{\frac {\sqrt {2}}{2}}.$
Geometric relations
Using the standard nomenclature for Johnson solids, an octahedron would be called a square bipyramid.
Dual
The octahedron is the dual polyhedron of the cube.
If an octahedron with edge length $a$ is inscribed in a cube, then the edge length of the cube is ${\sqrt {2}}a$.
Stellation
The interior of the compound of two dual tetrahedra is an octahedron, and this compound, called the stella octangula, is its first and only stellation. Correspondingly, a regular octahedron is the result of cutting off from a regular tetrahedron, four regular tetrahedra of half the linear size (i.e. rectifying the tetrahedron). The vertices of the octahedron lie at the midpoints of the edges of the tetrahedron, and in this sense it relates to the tetrahedron in the same way that the cuboctahedron and icosidodecahedron relate to the other Platonic solids.
Snub octahedron
One can also divide the edges of an octahedron in the ratio of the golden mean to define the vertices of an icosahedron. This is done by first placing vectors along the octahedron's edges such that each face is bounded by a cycle, then similarly partitioning each edge into the golden mean along the direction of its vector. There are five octahedra that define any given icosahedron in this fashion, and together they define a regular compound. An icosahedron produced this way is called a snub octahedron.
Tessellations
Octahedra and tetrahedra can be alternated to form a vertex, edge, and face-uniform tessellation of space. This and the regular tessellation of cubes are the only such uniform honeycombs in 3-dimensional space.
Characteristic orthoscheme
Like all regular convex polytopes, the octahedron can be dissected into an integral number of disjoint orthoschemes, all of the same shape characteristic of the polytope. A polytope's characteristic orthoscheme is a fundamental property because the polytope is generated by reflections in the facets of its orthoscheme. The orthoscheme occurs in two chiral forms which are mirror images of each other. The characteristic orthoscheme of a regular polyhedron is a quadrirectangular irregular tetrahedron.
The faces of the octahedron's characteristic tetrahedron lie in the octahedron's mirror planes of symmetry. The octahedron is unique among the Platonic solids in having an even number of faces meeting at each vertex. Consequently, it is the only member of that group to possess, among its mirror planes, some that do not pass through any of its faces. The octahedron's symmetry group is denoted B3. The octahedron and its dual polytope, the cube, have the same symmetry group but different characteristic tetrahedra.
The characteristic tetrahedron of the regular octahedron can be found by a canonical dissection[1] of the regular octahedron which subdivides it into 48 of these characteristic orthoschemes surrounding the octahedron's center. Three left-handed orthoschemes and three right-handed orthoschemes meet in each of the octahedron's eight faces, the six orthoschemes collectively forming a trirectangular tetrahedron: a triangular pyramid with the octahedron face as its equilateral base, and its cube-cornered apex at the center of the octahedron.[2]
Characteristics of the regular octahedron[3]
• edge 𝒍 $=2$: arc 90° $\left({\tfrac {\pi }{2}}\right)$, dihedral angle 109°28′ $(\pi -2{\text{𝟁}})$
• 𝟀 $={\sqrt {\tfrac {4}{3}}}\approx 1.155$: arc 54°44′8″ $\left({\tfrac {\pi }{2}}-{\text{𝜿}}\right)$, dihedral angle 90° $\left({\tfrac {\pi }{2}}\right)$
• 𝝓 $=1$: arc 45° $\left({\tfrac {\pi }{4}}\right)$, dihedral angle 60° $\left({\tfrac {\pi }{3}}\right)$
• 𝟁 $={\sqrt {\tfrac {1}{3}}}\approx 0.577$: arc 35°15′52″ $({\text{𝜿}})$, dihedral angle 45° $\left({\tfrac {\pi }{4}}\right)$
• characteristic radii: $_{0}R/l={\sqrt {2}}\approx 1.414$, $_{1}R/l=1$, $_{2}R/l={\sqrt {\tfrac {2}{3}}}\approx 0.816$
• ${\text{𝜿}}=$ 35°15′52″ $={\tfrac {{\text{arc sec }}3}{2}}$
If the octahedron has edge length 𝒍 = 2, its characteristic tetrahedron's six edges have lengths ${\sqrt {\tfrac {4}{3}}}$, $1$, ${\sqrt {\tfrac {1}{3}}}$ (the exterior right triangle face, the characteristic triangle 𝟀, 𝝓, 𝟁 of the octahedron), plus ${\sqrt {2}}$, $1$, ${\sqrt {\tfrac {2}{3}}}$ (edges that are the characteristic radii of the octahedron). The 3-edge path along orthogonal edges of the orthoscheme is $1$, ${\sqrt {\tfrac {1}{3}}}$, ${\sqrt {\tfrac {2}{3}}}$, first from an octahedron vertex to an octahedron edge center, then turning 90° to an octahedron face center, then turning 90° to the octahedron center. The orthoscheme has four dissimilar right triangle faces. The exterior face is a 90-60-30 triangle which is one-sixth of an octahedron face. The three faces interior to the octahedron are: a 45-90-45 triangle with edges $1$, ${\sqrt {2}}$, $1$, a right triangle with edges ${\sqrt {\tfrac {1}{3}}}$, $1$, ${\sqrt {\tfrac {2}{3}}}$, and a right triangle with edges ${\sqrt {\tfrac {4}{3}}}$, ${\sqrt {2}}$, ${\sqrt {\tfrac {2}{3}}}$.
Topology
The octahedron is 4-connected, meaning that it takes the removal of four vertices to disconnect the remaining vertices. It is one of only four 4-connected simplicial well-covered polyhedra, meaning that all of the maximal independent sets of its vertices have the same size. The other three polyhedra with this property are the pentagonal dipyramid, the snub disphenoid, and an irregular polyhedron with 12 vertices and 20 triangular faces.[4]
Nets
The regular octahedron has eleven arrangements of nets.
Faceting
The uniform tetrahemihexahedron is a tetrahedral symmetry faceting of the regular octahedron, sharing its edge and vertex arrangement. It has four of the triangular faces and three central squares.
Uniform colorings and symmetry
There are 3 uniform colorings of the octahedron, named by the triangular face colors going around each vertex: 1212, 1112, 1111.
The octahedron's symmetry group is Oh, of order 48, the three dimensional hyperoctahedral group. This group's subgroups include D3d (order 12), the symmetry group of a triangular antiprism; D4h (order 16), the symmetry group of a square bipyramid; and Td (order 24), the symmetry group of a rectified tetrahedron. These symmetries can be emphasized by different colorings of the faces.
• Octahedron: Schläfli symbol {3,4}; face coloring (1111); symmetry Oh, [4,3], (*432), order 48.
• Rectified tetrahedron (tetratetrahedron): r{3,3}; coloring (1212); symmetry Td, [3,3], (*332), order 24.
• Triangular antiprism: s{2,6} or sr{2,3}; coloring (1112); symmetry D3d, [2+,6], (2*3), order 12, with rotational subgroup D3, [2,3]+, (322), order 6.
• Square bipyramid: ft{2,4} = { } + {4}; coloring (1111); symmetry D4h, [2,4], (*422), order 16.
• Rhombic fusil: ftr{2,2} = { } + { } + { }; coloring (1111); symmetry D2h, [2,2], (*222), order 8.
Irregular octahedra
The following polyhedra are combinatorially equivalent to the regular octahedron. They all have six vertices, eight triangular faces, and twelve edges that correspond one-for-one with the features of a regular octahedron.
• Triangular antiprisms: Two faces are equilateral, lie on parallel planes, and have a common axis of symmetry. The other six triangles are isosceles.
• Tetragonal bipyramids, in which at least one of the equatorial quadrilaterals lies on a plane. The regular octahedron is a special case in which all three quadrilaterals are planar squares.
• Schönhardt polyhedron, a non-convex polyhedron that cannot be partitioned into tetrahedra without introducing new vertices.
• Bricard octahedron, a non-convex self-crossing flexible polyhedron
More generally, an octahedron can be any polyhedron with eight faces. The regular octahedron has 6 vertices and 12 edges, the minimum for an octahedron; irregular octahedra may have as many as 12 vertices and 18 edges.[5] There are 257 topologically distinct convex octahedra, excluding mirror images. More specifically there are 2, 11, 42, 74, 76, 38, 14 for octahedra with 6 to 12 vertices respectively.[6][7] (Two polyhedra are "topologically distinct" if they have intrinsically different arrangements of faces and vertices, such that it is impossible to distort one into the other simply by changing the lengths of edges or the angles between edges or faces.)
Some better known irregular octahedra include the following:
• Hexagonal prism: Two faces are parallel regular hexagons; six squares link corresponding pairs of hexagon edges.
• Heptagonal pyramid: One face is a heptagon (usually regular), and the remaining seven faces are triangles (usually isosceles). It is not possible for all triangular faces to be equilateral.
• Truncated tetrahedron: The four faces from the tetrahedron are truncated to become regular hexagons, and there are four more equilateral triangle faces where each tetrahedron vertex was truncated.
• Tetragonal trapezohedron: The eight faces are congruent kites.
• Gyrobifastigium: Two uniform triangular prisms glued over one of their square sides so that no triangle shares an edge with another triangle (Johnson solid 26).
• Octagonal hosohedron: degenerate in Euclidean space, but can be realized spherically.
Octahedra in the physical world
Octahedra in nature
• Natural crystals of diamond, alum or fluorite are commonly octahedral, as the space-filling tetrahedral-octahedral honeycomb.
• The plates of kamacite alloy in octahedrite meteorites are arranged paralleling the eight faces of an octahedron.
• Many metal ions coordinate six ligands in an octahedral or distorted octahedral configuration.
• Widmanstätten patterns in nickel-iron crystals
Octahedra in art and culture
• Especially in roleplaying games, this solid is known as a "d8", one of the more common polyhedral dice.
• If each edge of an octahedron is replaced by a one-ohm resistor, the resistance between opposite vertices is 1/2 ohm, and that between adjacent vertices 5/12 ohm (see the sketch after this list).[8]
• Six musical notes can be arranged on the vertices of an octahedron in such a way that each edge represents a consonant dyad and each face represents a consonant triad; see hexany.
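The resistance values quoted in the list above can be verified with the standard effective-resistance formula $R_{uv}=L_{uu}^{+}+L_{vv}^{+}-2L_{uv}^{+}$, where $L^{+}$ is the pseudoinverse of the graph Laplacian; this is a hedged sketch, and the vertex labelling is an arbitrary choice with vertices 0 and 1 opposite:

```python
# Effective resistance on the octahedron graph (each edge = 1 ohm),
# computed from the pseudoinverse of the graph Laplacian.
import numpy as np

edges = [(0,2),(0,3),(0,4),(0,5),
         (1,2),(1,3),(1,4),(1,5),
         (2,4),(4,3),(3,5),(5,2)]   # 0 and 1 are opposite vertices
L = np.zeros((6, 6))
for u, v in edges:
    L[u,u] += 1; L[v,v] += 1; L[u,v] -= 1; L[v,u] -= 1
Lp = np.linalg.pinv(L)

res = lambda u, v: Lp[u,u] + Lp[v,v] - 2*Lp[u,v]
print(round(res(0, 1), 6))  # 0.5      = 1/2  (opposite vertices)
print(round(res(0, 2), 6))  # 0.416667 = 5/12 (adjacent vertices)
```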
Tetrahedral octet truss
A space frame of alternating tetrahedra and half-octahedra derived from the Tetrahedral-octahedral honeycomb was invented by Buckminster Fuller in the 1950s. It is commonly regarded as the strongest building structure for resisting cantilever stresses.
Related polyhedra
A regular octahedron can be augmented into a tetrahedron by adding 4 tetrahedra on alternated faces. Adding tetrahedra to all 8 faces creates the stellated octahedron.
The octahedron is one of a family of uniform polyhedra related to the cube.
The uniform octahedral polyhedra with symmetry [4,3], (*432) are {4,3}, t{4,3}, r{4,3}, t{3,4}, {3,4}, rr{4,3}, tr{4,3} and the snub sr{4,3} (symmetry [4,3]+, (432)); lowering the symmetry gives the half forms h{4,3} = {3,3} and h2{4,3} = t{3,3} (under [1+,4,3] = [3,3], (*332)) and s{3,4} = s{31,1} (under [3+,4], (3*2)). Their duals have face configurations V43, V3.82, V(3.4)2, V4.62, V34, V3.43, V4.6.8, V34.4, V33, V3.62 and V35.
It is also one of the simplest examples of a hypersimplex, a polytope formed by certain intersections of a hypercube with a hyperplane.
The octahedron is topologically related as a part of sequence of regular polyhedra with Schläfli symbols {3,n}, continuing into the hyperbolic plane.
*n32 symmetry mutation of regular tilings {3,n}: spherical 3.3, 33, 34, 35; Euclidean 36; compact hyperbolic 37, 38; paracompact 3∞; noncompact hyperbolic 312i, 39i, 36i, 33i.
Tetratetrahedron
The regular octahedron can also be considered a rectified tetrahedron – and can be called a tetratetrahedron. This can be shown by a 2-color face model. With this coloring, the octahedron has tetrahedral symmetry.
Compare this truncation sequence between a tetrahedron and its dual:
In the family of uniform tetrahedral polyhedra with symmetry [3,3], (*332), it appears as r{3,3} among {3,3}, t{3,3}, r{3,3}, rr{3,3}, tr{3,3} and the snub sr{3,3} (symmetry [3,3]+, (332)); the corresponding dual face configurations are V3.3.3, V3.6.6, V3.3.3.3, V3.4.3.4, V4.6.6 and V3.3.3.3.3.
The above shapes may also be realized as slices orthogonal to the long diagonal of a tesseract. If this diagonal is oriented vertically with a height of 1, then the first five slices above occur at heights r, 3/8, 1/2, 5/8, and s, where r is any number in the range 0 < r ≤ 1/4, and s is any number in the range 3/4 ≤ s < 1.
The octahedron as a tetratetrahedron exists in a sequence of symmetries of quasiregular polyhedra and tilings with vertex configurations (3.n)2, progressing from tilings of the sphere to the Euclidean plane and into the hyperbolic plane. With orbifold notation symmetry of *n32 all of these tilings are Wythoff constructions within a fundamental domain of symmetry, with generator points at the right angle corner of the domain.[9][10]
*n32 orbifold symmetries of quasiregular tilings (3.n)2: spherical (3.3)2 (*332), (3.4)2 (*432), (3.5)2 (*532); Euclidean (3.6)2 (*632); hyperbolic (3.7)2 (*732), (3.8)2 (*832), ..., (3.∞)2 (*∞32).
Trigonal antiprism
As a trigonal antiprism, the octahedron is related to the hexagonal dihedral symmetry family.
In the family of uniform hexagonal dihedral spherical polyhedra with symmetry [6,2], (*622) — {6,2}, t{6,2}, r{6,2}, t{2,6}, {2,6}, rr{6,2}, tr{6,2}, sr{6,2} and s{2,6}, with dual face configurations V62, V122, V62, V4.4.6, V26, V4.4.6, V4.4.12, V3.3.3.6 and V3.3.3.3 — the octahedron appears as the snub s{2,6}. In the family of uniform n-gonal antiprisms it is the triangular antiprism, with vertex configuration 3.3.3.3 in the sequence 2.3.3.3, 3.3.3.3, 4.3.3.3, 5.3.3.3, ..., ∞.3.3.3.
Square bipyramid
"Regular" right (symmetric) n-gonal bipyramids:
Bipyramid name Digonal bipyramid Triangular bipyramid
(See: J12)
Square bipyramid
(See: O)
Pentagonal bipyramid
(See: J13)
Hexagonal bipyramid Heptagonal bipyramid Octagonal bipyramid Enneagonal bipyramid Decagonal bipyramid ... Apeirogonal bipyramid
Polyhedron image ...
Spherical tiling image Plane tiling image
Face config. V2.4.4V3.4.4V4.4.4V5.4.4V6.4.4V7.4.4V8.4.4V9.4.4V10.4.4...V∞.4.4
Coxeter diagram ...
Other related polyhedra
Truncation of two opposite vertices results in a square bifrustum.
The octahedron can be generated as the case of a 3D superellipsoid with all exponent values set to 1.
See also
• Octahedral number
• Centered octahedral number
• Spinning octahedron
• Stella octangula
• Triakis octahedron
• Hexakis octahedron
• Truncated octahedron
• Octahedral molecular geometry
• Octahedral symmetry
• Octahedral graph
• Octahedral sphere
References
1. Coxeter 1973, p. 130, §7.6 The symmetry group of the general regular polytope; "simplicial subdivision".
2. Coxeter 1973, pp. 70–71, Characteristic tetrahedra; Fig. 4.7A.
3. Coxeter 1973, pp. 292–293, Table I(i); "Octahedron, 𝛽3".
4. Finbow, Arthur S.; Hartnell, Bert L.; Nowakowski, Richard J.; Plummer, Michael D. (2010). "On well-covered triangulations. III". Discrete Applied Mathematics. 158 (8): 894–912. doi:10.1016/j.dam.2009.08.002. MR 2602814.
5. "Enumeration of Polyhedra". Archived from the original on 10 October 2011. Retrieved 2 May 2006.
6. "Counting polyhedra".
7. "Polyhedra with 8 Faces and 6-8 Vertices". Archived from the original on 17 November 2014. Retrieved 14 August 2016.
8. Klein, Douglas J. (2002). "Resistance-Distance Sum Rules" (PDF). Croatica Chemica Acta. 75 (2): 633–649. Archived from the original (PDF) on 10 June 2007. Retrieved 30 September 2006.
9. Coxeter Regular Polytopes, Third edition, (1973), Dover edition, ISBN 0-486-61480-8 (Chapter V: The Kaleidoscope, Section: 5.7 Wythoff's construction)
10. "Two Dimensional symmetry Mutations by Daniel Huson".
External links
• "Octahedron" . Encyclopædia Britannica. Vol. 19 (11th ed.). 1911.
• Weisstein, Eric W. "Octahedron". MathWorld.
• Klitzing, Richard. "3D convex uniform polyhedra x3o4o – oct".
• Editable printable net of an octahedron with interactive 3D view
• Paper model of the octahedron
• K.J.M. MacLean, A Geometric Analysis of the Five Platonic Solids and Other Semi-Regular Polyhedra
• The Uniform Polyhedra
• Virtual Reality Polyhedra – The Encyclopedia of Polyhedra
• Conway Notation for Polyhedra – Try: dP4
|
Wikipedia
|
Regular open set
A subset $S$ of a topological space $X$ is called a regular open set if it is equal to the interior of its closure; expressed symbolically, if $\operatorname {Int} ({\overline {S}})=S$ or, equivalently, if $\partial ({\overline {S}})=\partial S,$ where $\operatorname {Int} S,$ ${\overline {S}}$ and $\partial S$ denote, respectively, the interior, closure and boundary of $S.$[1]
A subset $S$ of $X$ is called a regular closed set if it is equal to the closure of its interior; expressed symbolically, if ${\overline {\operatorname {Int} S}}=S$ or, equivalently, if $\partial (\operatorname {Int} S)=\partial S.$[1]
Examples
If $\mathbb {R} $ has its usual Euclidean topology then the open set $S=(0,1)\cup (1,2)$ is not a regular open set, since $\operatorname {Int} ({\overline {S}})=(0,2)\neq S.$ Every open interval in $\mathbb {R} $ is a regular open set and every non-degenerate closed interval (that is, a closed interval containing at least two distinct points) is a regular closed set. A singleton $\{x\}$ is a closed subset of $\mathbb {R} $ but not a regular closed set because its interior is the empty set $\varnothing ,$ so that ${\overline {\operatorname {Int} \{x\}}}={\overline {\varnothing }}=\varnothing \neq \{x\}.$
Properties
A subset of $X$ is a regular open set if and only if its complement in $X$ is a regular closed set.[2] Every regular open set is an open set and every regular closed set is a closed set.
Each clopen subset of $X$ (which includes $\varnothing $ and $X$ itself) is simultaneously a regular open subset and regular closed subset.
The interior of a closed subset of $X$ is a regular open subset of $X$ and likewise, the closure of an open subset of $X$ is a regular closed subset of $X.$[2] The intersection (but not necessarily the union) of two regular open sets is a regular open set. Similarly, the union (but not necessarily the intersection) of two regular closed sets is a regular closed set.[2]
The collection of all regular open sets in $X$ forms a complete Boolean algebra; the join operation is given by $U\vee V=\operatorname {Int} ({\overline {U\cup V}}),$ the meet is $U\land V=U\cap V$ and the complement is $\neg U=\operatorname {Int} (X\setminus U).$
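These definitions are easy to experiment with in a finite topological space, where interiors and closures can be computed by brute force. The following is a minimal Python sketch; the three-point space and its topology are our own toy example, and it checks the condition $\operatorname {Int} ({\overline {S}})=S$ directly:

from itertools import chain

X = frozenset({0, 1, 2})
# A toy topology on X: the open sets, closed under unions and intersections.
opens = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1}), X]

def interior(S):
    # Union of all open sets contained in S.
    return frozenset(chain.from_iterable(U for U in opens if U <= S))

def closure(S):
    # Complement of the interior of the complement.
    return X - interior(X - S)

def is_regular_open(S):
    return interior(closure(S)) == S

print(is_regular_open(frozenset({0})))     # True
print(is_regular_open(frozenset({0, 1})))  # False: open, but not regular open

The second test mirrors the interval example above: {0, 1} is open, yet the interior of its closure is all of X.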
See also
• List of topologies – List of concrete topologies and topological spaces
• Regular space – topological space in which a point and a closed set are, if disjoint, separable by neighborhoods
• Semiregular space
• Separation axiom – Axioms in topology defining notions of "separation"
Notes
1. Steen & Seebach, p. 6
2. Willard, "3D, Regularly open and regularly closed sets", p. 29
References
• Lynn Arthur Steen and J. Arthur Seebach, Jr., Counterexamples in Topology. Springer-Verlag, New York, 1978. Reprinted by Dover Publications, New York, 1995. ISBN 0-486-68735-X (Dover edition).
• Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
|
Wikipedia
|
Regular cardinal
In set theory, a regular cardinal is a cardinal number that is equal to its own cofinality. More explicitly, this means that $\kappa $ is a regular cardinal if and only if every unbounded subset $C\subseteq \kappa $ has cardinality $\kappa $. Infinite well-ordered cardinals that are not regular are called singular cardinals. Finite cardinal numbers are typically not called regular or singular.
In the presence of the axiom of choice, any cardinal number can be well-ordered, and then the following are equivalent for a cardinal $\kappa $:
1. $\kappa $ is a regular cardinal.
2. If $\kappa =\sum _{i\in I}\lambda _{i}$ and $\lambda _{i}<\kappa $ for all $i$, then $|I|\geq \kappa $.
3. If $S=\bigcup _{i\in I}S_{i}$, and if $|I|<\kappa $ and $|S_{i}|<\kappa $ for all $i$, then $|S|<\kappa $.
4. The category $\operatorname {Set} _{<\kappa }$ of sets of cardinality less than $\kappa $ and all functions between them is closed under colimits of cardinality less than $\kappa $.
5. $\kappa $ is a regular ordinal (see below).
Crudely speaking, this means that a regular cardinal is one that cannot be broken down into a small number of smaller parts.
The situation is slightly more complicated in contexts where the axiom of choice might fail, as in that case not all cardinals are necessarily the cardinalities of well-ordered sets. In that case, the above equivalence holds for well-orderable cardinals only.
An infinite ordinal $\alpha $ is a regular ordinal if it is a limit ordinal that is not the limit of a set of smaller ordinals that as a set has order type less than $\alpha $. A regular ordinal is always an initial ordinal, though some initial ordinals are not regular, e.g., $\omega _{\omega }$ (see the example below).
Examples
The ordinals less than $\omega $ are finite. A finite sequence of finite ordinals always has a finite maximum, so $\omega $ cannot be the limit of any sequence of type less than $\omega $ whose elements are ordinals less than $\omega $, and is therefore a regular ordinal. $\aleph _{0}$ (aleph-null) is a regular cardinal because its initial ordinal, $\omega $, is regular. It can also be seen directly to be regular, as the cardinal sum of a finite number of finite cardinal numbers is itself finite.
$\omega +1$ is the next ordinal number greater than $\omega $. It is singular, since it is not a limit ordinal. $\omega +\omega $ is the next limit ordinal after $\omega $. It can be written as the limit of the sequence $\omega $, $\omega +1$, $\omega +2$, $\omega +3$, and so on. This sequence has order type $\omega $, so $\omega +\omega $ is the limit of a sequence of type less than $\omega +\omega $ whose elements are ordinals less than $\omega +\omega $; therefore it is singular.
$\aleph _{1}$ is the next cardinal number greater than $\aleph _{0}$, so the cardinals less than $\aleph _{1}$ are countable (finite or denumerable). Assuming the axiom of choice, the union of a countable set of countable sets is itself countable. So $\aleph _{1}$ cannot be written as the sum of a countable set of countable cardinal numbers, and is regular.
$\aleph _{\omega }$ is the next cardinal number after the sequence $\aleph _{0}$, $\aleph _{1}$, $\aleph _{2}$, $\aleph _{3}$, and so on. Its initial ordinal $\omega _{\omega }$ is the limit of the sequence $\omega $, $\omega _{1}$, $\omega _{2}$, $\omega _{3}$, and so on, which has order type $\omega $, so $\omega _{\omega }$ is singular, and so is $\aleph _{\omega }$. Assuming the axiom of choice, $\aleph _{\omega }$ is the first infinite cardinal that is singular (the first infinite ordinal that is singular is $\omega +1$, and the first infinite limit ordinal that is singular is $\omega +\omega $). Proving the existence of singular cardinals requires the axiom of replacement, and in fact the inability to prove the existence of $\aleph _{\omega }$ in Zermelo set theory is what led Fraenkel to postulate this axiom.[1]
Uncountable (weak) limit cardinals that are also regular are known as (weakly) inaccessible cardinals. They cannot be proved to exist within ZFC, though their existence is not known to be inconsistent with ZFC. Their existence is sometimes taken as an additional axiom. Inaccessible cardinals are necessarily fixed points of the aleph function, though not all fixed points are regular. For instance, the first fixed point is the limit of the $\omega $-sequence $\aleph _{0},\aleph _{\aleph _{0}},\aleph _{\aleph _{\aleph _{0}}},...$ and is therefore singular.
Properties
If the axiom of choice holds, then every successor cardinal is regular. Thus the regularity or singularity of most aleph numbers can be checked depending on whether the cardinal is a successor cardinal or a limit cardinal. Some cardinalities cannot be proven to be equal to any particular aleph, for instance the cardinality of the continuum, whose value in ZFC may be any uncountable cardinal of uncountable cofinality (see Easton's theorem). The continuum hypothesis postulates that the cardinality of the continuum is equal to $\aleph _{1}$, which is regular assuming choice.
Without the axiom of choice, there would be cardinal numbers that were not well-orderable. Moreover, the cardinal sum of an arbitrary collection could not be defined. Therefore, only the aleph numbers can meaningfully be called regular or singular cardinals. Furthermore, a successor aleph need not be regular. For instance, the union of a countable set of countable sets need not be countable. It is consistent with ZF that $\omega _{1}$ be the limit of a countable sequence of countable ordinals as well as the set of real numbers be a countable union of countable sets. Furthermore, it is consistent with ZF that every aleph bigger than $\aleph _{0}$ is singular (a result proved by Moti Gitik).
If $\kappa $ is a limit ordinal, $\kappa $ is regular iff the set of $\alpha <\kappa $ that are critical points of $\Sigma _{1}$-elementary embeddings $j$ with $j(\alpha )=\kappa $ is club in $\kappa $.[2]
See also
• Inaccessible cardinal
References
1. Maddy, Penelope (1988), "Believing the axioms. I", Journal of Symbolic Logic, 53 (2): 481–511, doi:10.2307/2274520, JSTOR 2274520, MR 0947855, Early hints of the Axiom of Replacement can be found in Cantor's letter to Dedekind [1899] and in Mirimanoff [1917]. Maddy cites two papers by Mirimanoff, "Les antinomies de Russell et de Burali-Forti et le problème fundamental de la théorie des ensembles" and "Remarques sur la théorie des ensembles et les antinomies Cantorienne", both in L'Enseignement Mathématique (1917).
2. T. Arai, "Bounds on provability in set theories" (2012, p.2). Accessed 4 August 2022.
• Herbert B. Enderton, Elements of Set Theory, ISBN 0-12-238440-7
• Kenneth Kunen, Set Theory, An Introduction to Independence Proofs, ISBN 0-444-85401-0
|
Wikipedia
|
Regular p-group
In mathematical finite group theory, the concept of regular p-group captures some of the more important properties of abelian p-groups, but is general enough to include most "small" p-groups. Regular p-groups were introduced by Philip Hall (1934).
Definition
A finite p-group G is said to be regular if any of the following equivalent conditions (Hall 1959, Ch. 12.4; Huppert 1967, Kap. III §10) are satisfied:
• For every a, b in G, there is a c in the derived subgroup H′ of the subgroup H of G generated by a and b, such that $a^{p}\cdot b^{p}=(ab)^{p}\cdot c^{p}$.
• For every a, b in G, there are elements $c_{i}$ in the derived subgroup of the subgroup generated by a and b, such that $a^{p}\cdot b^{p}=(ab)^{p}\cdot c_{1}^{p}\cdots c_{k}^{p}$.
• For every a, b in G and every positive integer n, there are elements $c_{i}$ in the derived subgroup of the subgroup generated by a and b such that $a^{q}\cdot b^{q}=(ab)^{q}\cdot c_{1}^{q}\cdots c_{k}^{q}$, where $q=p^{n}$.
Examples
Many familiar p-groups are regular:
• Every abelian p-group is regular.
• Every p-group of nilpotency class strictly less than p is regular. This follows from the Hall–Petresco identity.
• Every p-group of order at most $p^{p}$ is regular.
• Every finite group of exponent p is regular.
However, many familiar p-groups are not regular:
• Every nonabelian 2-group is irregular.
• The Sylow p-subgroup of the symmetric group on $p^{2}$ points is irregular and of order $p^{p+1}$.
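Irregularity can be verified by brute force from the first defining condition. The Python sketch below (the permutation encoding and all names are ours) checks the dihedral group of order 8, a nonabelian 2-group, and finds a pair a, b for which no c in the derived subgroup of ⟨a, b⟩ satisfies $a^{2}\cdot b^{2}=(ab)^{2}\cdot c^{2}$:

IDENTITY = (0, 1, 2, 3)

def mul(p, q):                        # compose permutations: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

def inv(p):
    q = [0] * 4
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def closure(gens):                    # subgroup generated by gens
    elems = {IDENTITY} | set(gens)
    while True:
        new = {mul(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

def derived(group):                   # subgroup generated by all commutators
    return closure({mul(mul(inv(a), inv(b)), mul(a, b))
                    for a in group for b in group})

def sq(p):
    return mul(p, p)

r, s = (1, 2, 3, 0), (0, 3, 2, 1)     # a rotation and a reflection generate D4
G = closure({r, s})
assert len(G) == 8

def pair_ok(a, b):                    # is there c in <a,b>' with a^2 b^2 = (ab)^2 c^2 ?
    Hp = derived(closure({a, b}))
    return any(mul(sq(a), sq(b)) == mul(sq(mul(a, b)), sq(c)) for c in Hp)

print(all(pair_ok(a, b) for a in G for b in G))   # False: D4 is irregular

For a = r and b = s the left side is the half-turn r², while every candidate right side is the identity, so the condition fails.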
Properties
A p-group is regular if and only if every subgroup generated by two elements is regular.
Every subgroup and quotient group of a regular group is regular, but the direct product of regular groups need not be regular.
A 2-group is regular if and only if it is abelian. A 3-group with two generators is regular if and only if its derived subgroup is cyclic. Every p-group of odd order with cyclic derived subgroup is regular.
The subgroup of a p-group G generated by the elements of order dividing $p^{k}$ is denoted $\Omega _{k}(G)$, and regular groups are well-behaved in that $\Omega _{k}(G)$ is precisely the set of elements of order dividing $p^{k}$. The subgroup generated by all $p^{k}$-th powers of elements in G is denoted $\mho _{k}(G)$. In a regular group, the index $[G:\mho _{k}(G)]$ is equal to the order of $\Omega _{k}(G)$. In fact, commutators and powers interact in particularly simple ways (Huppert 1967, Kap. III §10, Satz 10.8). For example, given normal subgroups M and N of a regular p-group G and nonnegative integers m and n, one has $[\mho _{m}(M),\mho _{n}(N)]=\mho _{m+n}([M,N])$.
• Philip Hall's criteria for regularity of a p-group G: G is regular if one of the following holds:
1. $[G:\mho _{1}(G)]<p^{p}$
2. $[G':\mho _{1}(G')]<p^{p-1}$
3. $|\Omega _{1}(G)|<p^{p-1}$
Generalizations
• Powerful p-group
• power closed p-group
References
• Hall, Marshall (1959), The theory of groups, Macmillan, MR 0103215
• Hall, Philip (1934), "A contribution to the theory of groups of prime-power order", Proceedings of the London Mathematical Society, 36: 29–95, doi:10.1112/plms/s2-36.1.29
• Huppert, B. (1967), Endliche Gruppen (in German), Berlin, New York: Springer-Verlag, pp. 90–93, ISBN 978-3-540-03825-2, MR 0224703, OCLC 527050
|
Wikipedia
|
Regular paperfolding sequence
In mathematics the regular paperfolding sequence, also known as the dragon curve sequence, is an infinite sequence of 0s and 1s. It is obtained from the repeating partial sequence
1, ?, 0, ?, 1, ?, 0, ?, 1, ?, 0, ?, ...
by filling in the question marks by another copy of the whole sequence. The first few terms of the resulting sequence are:
1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, ... (sequence A014577 in the OEIS)
If a strip of paper is folded repeatedly in half in the same direction, $i$ times, it will get $2^{i}-1$ folds, whose direction (left or right) is given by the pattern of 0's and 1's in the first $2^{i}-1$ terms of the regular paperfolding sequence. Opening out each fold to create a right-angled corner (or, equivalently, making a sequence of left and right turns through a regular grid, following the pattern of the paperfolding sequence) produces a sequence of polygonal chains that approaches the dragon curve fractal:[1]
Properties
The value of any given term $t_{n}$ in the regular paperfolding sequence, starting with $n=1$, can be found recursively as follows. Divide $n$ by two, as many times as possible, to get a factorization of the form $n=m\cdot 2^{k}$ where $m$ is an odd number. Then
$t_{n}={\begin{cases}1&{\text{if }}m\equiv 1\mod 4\\0&{\text{if }}m\equiv 3\mod 4\end{cases}}$
Thus, for instance, $t_{12}=t_{3}=0$: dividing 12 by two, twice, leaves the odd number 3. As another example, $t_{13}=1$ because 13 is congruent to 1 mod 4.
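The recursion translates directly into a few lines of Python (a minimal sketch; the function name is ours):

def t(n):
    # n-th term (n >= 1): strip factors of two, then test the odd part mod 4.
    while n % 2 == 0:
        n //= 2
    return 1 if n % 4 == 1 else 0

print([t(n) for n in range(1, 17)])
# [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1], matching the terms listed above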
The paperfolding word 1101100111001001..., which is created by concatenating the terms of the regular paperfolding sequence, is a fixed point of the morphism or string substitution rules
11 → 1101
01 → 1001
10 → 1100
00 → 1000
as follows:
11 → 1101 → 11011001 → 1101100111001001 → 11011001110010011101100011001001 ...
It can be seen from the morphism rules that the paperfolding word contains at most three consecutive 0s and at most three consecutive 1s.
The paperfolding sequence also satisfies the symmetry relation:
$t_{n}={\begin{cases}1&{\text{if }}n=2^{k}\\1-t_{2^{k}-n}&{\text{if }}2^{k-1}<n<2^{k}\end{cases}}$
which shows that the paperfolding word can be constructed as the limit of another iterated process as follows:
1
1 1 0
110 1 100
1101100 1 1100100
110110011100100 1 110110001100100
In each iteration of this process, a 1 is placed at the end of the previous iteration's string, then this string is repeated in reverse order, replacing 0 by 1 and vice versa.
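This fold-and-reverse step is equally short in code. A Python sketch reproducing the iterations shown above:

w = ""
for _ in range(4):
    # Append a 1, then the reversed complement of everything so far.
    w = w + "1" + "".join("1" if c == "0" else "0" for c in reversed(w))
print(w)   # 110110011100100: the fourth row above, with spaces removed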
Generating function
The generating function of the paperfolding sequence is given by
$G(t_{n};x)=\sum _{n=1}^{\infty }t_{n}x^{n}\,.$
From the construction of the paperfolding sequence, it can be seen that G satisfies the functional relation
$G(t_{n};x)=G(t_{n};x^{2})+\sum _{n=0}^{\infty }x^{4n+1}=G(t_{n};x^{2})+{\frac {x}{1-x^{4}}}\,.$
Paperfolding constant
Substituting x = 0.5 into the generating function gives a real number between 0 and 1 whose binary expansion is the paperfolding word
$G(t_{n};{\frac {1}{2}})=\sum _{n=1}^{\infty }{\frac {t_{n}}{2^{n}}}$
This number is known as the paperfolding constant[2] and has the value
$\sum _{k=0}^{\infty }{\frac {8^{2^{k}}}{2^{2^{k+2}}-1}}=0.85073618820186...$ (sequence A143347 in the OEIS)
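The two expressions can be checked against each other numerically; the truncation depths in this Python sketch are arbitrary but sufficient for double precision:

def t(n):
    while n % 2 == 0:
        n //= 2
    return 1 if n % 4 == 1 else 0

digit_series = sum(t(n) / 2.0**n for n in range(1, 64))   # sum of t_n / 2^n
closed_form = sum(8**(2**k) / (2**(2**(k + 2)) - 1) for k in range(6))
print(digit_series, closed_form)   # both print 0.85073618820186...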
General paperfolding sequence
The regular paperfolding sequence corresponds to folding a strip of paper consistently in the same direction. If we allow the direction of the fold to vary at each step we obtain a more general class of sequences, specified by a binary sequence $(f_{i})$ of folding instructions.
For a binary word w, let w‡ denote the reverse of the complement of w. Define an operator $F_{a}$ as
$F_{a}:w\mapsto waw^{\ddagger }\ $
and then define a sequence of words depending on the $(f_{i})$ by $w_{0}=\varepsilon $,
$w_{n}=F_{f_{1}}(F_{f_{2}}(\cdots F_{f_{n}}(\varepsilon )\cdots ))\ .$
The limit $w$ of the sequence $w_{n}$ is a paperfolding sequence. The regular paperfolding sequence corresponds to the folding sequence $f_{i}=1$ for all $i$.
If $n=m\cdot 2^{k}$ where $m$ is odd, then
$t_{n}={\begin{cases}f_{j}&{\text{if }}m\equiv 1\mod 4\\1-f_{j}&{\text{if }}m\equiv 3\mod 4\end{cases}}$
which may be used as a definition of a paperfolding sequence.[3]
Properties
• A paperfolding sequence is not ultimately periodic.[3]
• A paperfolding sequence is 2-automatic if and only if the folding sequence is ultimately periodic (1-automatic).
References
1. Weisstein, Eric W. "Dragon Curve". MathWorld.
2. Weisstein, Eric W. "Paper Folding Constant". MathWorld.
3. Everest, Graham; van der Poorten, Alf; Shparlinski, Igor; Ward, Thomas (2003). Recurrence sequences. Mathematical Surveys and Monographs. Vol. 104. Providence, RI: American Mathematical Society. p. 235. ISBN 0-8218-3387-1. Zbl 1033.11006.
• Allouche, Jean-Paul; Shallit, Jeffrey (2003). Automatic Sequences: Theory, Applications, Generalizations. Cambridge University Press. ISBN 978-0-521-82332-6. Zbl 1086.11015.
|
Wikipedia
|
Differentiable curve
Differential geometry of curves is the branch of geometry that deals with smooth curves in the plane and the Euclidean space by methods of differential and integral calculus.
This article is about curves in Euclidean space. For curves in an arbitrary topological space, see Curve.
Many specific curves have been thoroughly investigated using the synthetic approach. Differential geometry takes another path: curves are represented in a parametrized form, and their geometric properties and various quantities associated with them, such as the curvature and the arc length, are expressed via derivatives and integrals using vector calculus. One of the most important tools used to analyze a curve is the Frenet frame, a moving frame that provides a coordinate system at each point of the curve that is "best adapted" to the curve near that point.
The theory of curves is much simpler and narrower in scope than the theory of surfaces and its higher-dimensional generalizations because a regular curve in a Euclidean space has no intrinsic geometry. Any regular curve may be parametrized by the arc length (the natural parametrization). From the point of view of a theoretical point particle on the curve that does not know anything about the ambient space, all curves would appear the same. Different space curves are only distinguished by how they bend and twist. Quantitatively, this is measured by the differential-geometric invariants called the curvature and the torsion of a curve. The fundamental theorem of curves asserts that the knowledge of these invariants completely determines the curve.
Definitions
Main article: Curve
A parametric Cr-curve or a Cr-parametrization is a vector-valued function
$\gamma :I\to \mathbb {R} ^{n}$
that is r-times continuously differentiable (that is, the component functions of γ are continuously differentiable), where $n\in \mathbb {N} $, $r\in \mathbb {N} \cup \{\infty \}$, and I is a non-empty interval of real numbers. The image of the parametric curve is $\gamma [I]\subseteq \mathbb {R} ^{n}$. The parametric curve γ and its image γ[I] must be distinguished because a given subset of $\mathbb {R} ^{n}$ can be the image of many distinct parametric curves. The parameter t in γ(t) can be thought of as representing time, and γ the trajectory of a moving point in space. When I is a closed interval [a,b], γ(a) is called the starting point and γ(b) is the endpoint of γ. If the starting and the end points coincide (that is, γ(a) = γ(b)), then γ is a closed curve or a loop. To be a Cr-loop, the function γ must be r-times continuously differentiable and satisfy γ(k)(a) = γ(k)(b) for 0 ≤ k ≤ r.
The parametric curve is simple if
$\gamma |_{(a,b)}:(a,b)\to \mathbb {R} ^{n}$
is injective. It is analytic if each component function of γ is an analytic function, that is, it is of class Cω.
The curve γ is regular of order m (where m ≤ r) if, for every t ∈ I,
$\left\{\gamma '(t),\gamma ''(t),\ldots ,{\gamma ^{(m)}}(t)\right\}$
is a linearly independent subset of $\mathbb {R} ^{n}$. In particular, a parametric C1-curve γ is regular if and only if γ′(t) ≠ 0 for any t ∈ I.
Re-parametrization and equivalence relation
See also: Position vector and Vector-valued function
Given the image of a parametric curve, there are several different parametrizations of the parametric curve. Differential geometry aims to describe the properties of parametric curves that are invariant under certain reparametrizations. A suitable equivalence relation on the set of all parametric curves must be defined. The differential-geometric properties of a parametric curve (such as its length, its Frenet frame, and its generalized curvature) are invariant under reparametrization and therefore properties of the equivalence class itself. The equivalence classes are called Cr-curves and are central objects studied in the differential geometry of curves.
Two parametric Cr-curves, $\gamma _{1}:I_{1}\to \mathbb {R} ^{n}$ and $\gamma _{2}:I_{2}\to \mathbb {R} ^{n}$, are said to be equivalent if and only if there exists a bijective Cr-map φ : I1 → I2 such that
$\forall t\in I_{1}:\quad \varphi '(t)\neq 0$
and
$\forall t\in I_{1}:\quad \gamma _{2}{\bigl (}\varphi (t){\bigr )}=\gamma _{1}(t).$
γ2 is then said to be a re-parametrization of γ1.
Re-parametrization defines an equivalence relation on the set of all parametric Cr-curves. The equivalence class of this relation is called a Cr-curve.
An even finer equivalence relation of oriented parametric Cr-curves can be defined by requiring φ to satisfy φ′(t) > 0.
Equivalent parametric Cr-curves have the same image, and equivalent oriented parametric Cr-curves even traverse the image in the same direction.
Length and natural parametrization
Main article: Arc length
See also: Curve § Length of a curve
The length l of a parametric C1-curve $\gamma :[a,b]\to \mathbb {R} ^{n}$ is defined as
$l~{\stackrel {\text{def}}{=}}~\int _{a}^{b}\left\|\gamma '(t)\right\|\,\mathrm {d} {t}.$
The length of a parametric curve is invariant under reparametrization and is therefore a differential-geometric property of the parametric curve.
For each regular parametric Cr-curve $\gamma :[a,b]\to \mathbb {R} ^{n}$, where r ≥ 1, the function $s$ is defined as
$\forall t\in [a,b]:\quad s(t)~{\stackrel {\text{def}}{=}}~\int _{a}^{t}\left\|\gamma '(x)\right\|\,\mathrm {d} {x}.$
Writing ${\overline {\gamma }}(s)=\gamma (t(s))$, where $t(s)$ is the inverse function of $s(t)$, gives a re-parametrization ${\overline {\gamma }}$ of $\gamma $ that is called an arc-length parametrization, natural parametrization, or unit-speed parametrization. The parameter $s(t)$ is called the natural parameter of $\gamma $.
This parametrization is preferred because the natural parameter s(t) traverses the image of γ at unit speed, so that
$\forall t\in I:\quad \left\|{\overline {\gamma }}'{\bigl (}s(t){\bigr )}\right\|=1.$
In practice, it is often very difficult to calculate the natural parametrization of a parametric curve, but it is useful for theoretical arguments.
For a given parametric curve γ, the natural parametrization is unique up to a shift of parameter.
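In numerical work the natural parametrization is typically approximated rather than found in closed form. The following Python sketch (the spiral example and sample counts are our own choices) builds s(t) by cumulative quadrature, inverts it by interpolation, and confirms the unit-speed property:

import numpy as np

# Sample a planar spiral gamma(t) = (t cos t, t sin t) on [0, 2*pi].
t = np.linspace(0.0, 2 * np.pi, 2001)
gamma = np.column_stack((t * np.cos(t), t * np.sin(t)))

# Cumulative arc length s(t), integrating the speed by the trapezoid rule.
speed = np.linalg.norm(np.gradient(gamma, t, axis=0), axis=1)
s = np.concatenate(([0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))))

# Invert s(t) by interpolation and resample at equally spaced arc lengths.
s_uniform = np.linspace(0.0, s[-1], 2001)
t_of_s = np.interp(s_uniform, s, t)
resampled = np.column_stack((t_of_s * np.cos(t_of_s), t_of_s * np.sin(t_of_s)))

# Unit speed: consecutive resampled points are (nearly) equally spaced.
steps = np.linalg.norm(np.diff(resampled, axis=0), axis=1)
print(steps.std() / steps.mean())   # small relative spread, near zero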
The quantity
$E(\gamma )~{\stackrel {\text{def}}{=}}~{\frac {1}{2}}\int _{a}^{b}\left\|\gamma '(t)\right\|^{2}~\mathrm {d} {t}$
is sometimes called the energy or action of the curve; this name is justified because the geodesic equations are the Euler–Lagrange equations of motion for this action.
Frenet frame
Main article: Frenet–Serret formulas
A Frenet frame is a moving reference frame of n orthonormal vectors ei(t) which are used to describe a curve locally at each point γ(t). It is the main tool in the differential geometric treatment of curves because it is far easier and more natural to describe local properties (e.g. curvature, torsion) in terms of a local reference system than using a global one such as Euclidean coordinates.
Given a Cn + 1-curve γ in $\mathbb {R} ^{n}$ which is regular of order n, the Frenet frame for the curve is the set of orthonormal vectors
$\mathbf {e} _{1}(t),\ldots ,\mathbf {e} _{n}(t)$
called Frenet vectors. They are constructed from the derivatives of γ(t) using the Gram–Schmidt orthogonalization algorithm with
${\begin{aligned}\mathbf {e} _{1}(t)&={\frac {{\boldsymbol {\gamma }}'(t)}{\left\|{\boldsymbol {\gamma }}'(t)\right\|}}\\[8px]\mathbf {e} _{j}(t)&={\frac {{\overline {\mathbf {e} _{j}}}(t)}{\left\|{\overline {\mathbf {e} _{j}}}(t)\right\|}},\quad {\overline {\mathbf {e} _{j}}}(t)={\boldsymbol {\gamma }}^{(j)}(t)-\sum _{i=1}^{j-1}\left\langle {\boldsymbol {\gamma }}^{(j)}(t),\mathbf {e} _{i}(t)\right\rangle \,\mathbf {e} _{i}(t)\end{aligned}}$
The real-valued functions χi(t) are called generalized curvatures and are defined as
$\chi _{i}(t)={\frac {{\bigl \langle }\mathbf {e} _{i}'(t),\mathbf {e} _{i+1}(t){\bigr \rangle }}{\left\|{\boldsymbol {\gamma }}^{'}(t)\right\|}}$
The Frenet frame and the generalized curvatures are invariant under reparametrization and are therefore differential geometric properties of the curve. For curves in $\mathbb {R} ^{3}$, $\chi _{1}(t)$ is the curvature and $\chi _{2}(t)$ is the torsion.
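For a concrete curve, the Gram–Schmidt construction and the curvature formulas can be carried out symbolically. The following sketch, assuming the SymPy library and using a circular helix as the example, recovers the classical values κ = τ = 1/2 for radius 1 and pitch parameter 1:

import sympy as sp

t = sp.symbols('t', real=True)
gamma = sp.Matrix([sp.cos(t), sp.sin(t), t])   # a circular helix

def unit(v):
    return sp.simplify(v / sp.sqrt(v.dot(v)))

# Frenet frame: Gram-Schmidt applied to gamma', gamma'', gamma'''.
frame = []
for k in range(1, 4):
    e = gamma.diff(t, k)
    for f in frame:
        e = e - e.dot(f) * f      # subtract components along earlier vectors
    frame.append(unit(e))

speed = sp.sqrt(gamma.diff(t).dot(gamma.diff(t)))
kappa = sp.simplify(frame[0].diff(t).dot(frame[1]) / speed)   # chi_1
tau = sp.simplify(frame[1].diff(t).dot(frame[2]) / speed)     # chi_2
print(kappa, tau)   # 1/2 1/2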
Bertrand curve
A Bertrand curve is a regular curve in $\mathbb {R} ^{3}$ with the additional property that there is a second curve in $\mathbb {R} ^{3}$ such that the principal normal vectors to these two curves are identical at each corresponding point. In other words, if γ1(t) and γ2(t) are two curves in $\mathbb {R} ^{3}$ such that for any t, the two principal normals N1(t), N2(t) are equal, then γ1 and γ2 are Bertrand curves, and γ2 is called the Bertrand mate of γ1. We can write γ2(t) = γ1(t) + r N1(t) for some constant r.[1]
According to problem 25 in Kühnel's "Differential Geometry Curves – Surfaces – Manifolds", it is also true that two Bertrand curves that do not lie in the same two-dimensional plane are characterized by the existence of a linear relation a κ(t) + b τ(t) = 1 where κ(t) and τ(t) are the curvature and torsion of γ1(t) and a and b are real constants with a ≠ 0.[2] Furthermore, the product of torsions of a Bertrand pair of curves is constant.[3] If γ1 has more than one Bertrand mate then it has infinitely many. This only occurs when γ1 is a circular helix.[1]
Special Frenet vectors and generalized curvatures
Main article: Frenet–Serret formulas
The first three Frenet vectors and generalized curvatures can be visualized in three-dimensional space. They have additional names and more semantic information attached to them.
Tangent vector
If a curve γ represents the path of a particle, then the instantaneous velocity of the particle at a given point P is expressed by a vector, called the tangent vector to the curve at P. Mathematically, given a parametrized C1 curve γ = γ(t), for every value t = t0 of the parameter, the vector
$\gamma '(t_{0})=\left.{\frac {\mathrm {d} }{\mathrm {d} t}}{\boldsymbol {\gamma }}(t)\right|_{t=t_{0}}$
is the tangent vector at the point P = γ(t0). Generally speaking, the tangent vector may be zero. The tangent vector's magnitude
$\left\|{\boldsymbol {\gamma }}'(t_{0})\right\|$
is the speed at the time t0.
The first Frenet vector e1(t) is the unit tangent vector in the same direction, defined at each regular point of γ:
$\mathbf {e} _{1}(t)={\frac {{\boldsymbol {\gamma }}'(t)}{\left\|{\boldsymbol {\gamma }}'(t)\right\|}}.$
If t = s is the natural parameter, then the tangent vector has unit length. The formula simplifies:
$\mathbf {e} _{1}(s)={\boldsymbol {\gamma }}'(s)$.
The unit tangent vector determines the orientation of the curve, or the forward direction, corresponding to the increasing values of the parameter. The unit tangent vector taken as a curve traces the spherical image of the original curve.
Normal vector or curvature vector
A curve normal vector, sometimes called the curvature vector, indicates the deviance of the curve from being a straight line. It is defined as
${\overline {\mathbf {e} _{2}}}(t)={\boldsymbol {\gamma }}''(t)-{\bigl \langle }{\boldsymbol {\gamma }}''(t),\mathbf {e} _{1}(t){\bigr \rangle }\,\mathbf {e} _{1}(t).$
Its normalized form, the unit normal vector, is the second Frenet vector e2(t) and is defined as
$\mathbf {e} _{2}(t)={\frac {{\overline {\mathbf {e} _{2}}}(t)}{\left\|{\overline {\mathbf {e} _{2}}}(t)\right\|}}.$
The tangent and the normal vector at point t define the osculating plane at point t.
It can be shown that ${\overline {\mathbf {e} _{2}}}(t)\propto \mathbf {e} _{1}'(t)$. Therefore,
$\mathbf {e} _{2}(t)={\frac {\mathbf {e} _{1}'(t)}{\left\|\mathbf {e} _{1}'(t)\right\|}}.$
Curvature
Main article: Curvature of space curves
The first generalized curvature χ1(t) is called curvature and measures the deviance of γ from being a straight line relative to the osculating plane. It is defined as
$\kappa (t)=\chi _{1}(t)={\frac {{\bigl \langle }\mathbf {e} _{1}'(t),\mathbf {e} _{2}(t){\bigr \rangle }}{\left\|{\boldsymbol {\gamma }}'(t)\right\|}}$
and is called the curvature of γ at point t. It can be shown that
$\kappa (t)={\frac {\left\|\mathbf {e} _{1}'(t)\right\|}{\left\|{\boldsymbol {\gamma }}'(t)\right\|}}.$
The reciprocal of the curvature
${\frac {1}{\kappa (t)}}$
is called the radius of curvature.
A circle with radius r has a constant curvature of
$\kappa (t)={\frac {1}{r}}$
whereas a line has a curvature of 0.
Binormal vector
The unit binormal vector is the third Frenet vector e3(t). It is always orthogonal to the unit tangent and normal vectors at t. It is defined as
$\mathbf {e} _{3}(t)={\frac {{\overline {\mathbf {e} _{3}}}(t)}{\|{\overline {\mathbf {e} _{3}}}(t)\|}},\quad {\overline {\mathbf {e} _{3}}}(t)={\boldsymbol {\gamma }}'''(t)-{\bigr \langle }{\boldsymbol {\gamma }}'''(t),\mathbf {e} _{1}(t){\bigr \rangle }\,\mathbf {e} _{1}(t)-{\bigl \langle }{\boldsymbol {\gamma }}'''(t),\mathbf {e} _{2}(t){\bigr \rangle }\,\mathbf {e} _{2}(t)$
In 3-dimensional space, the equation simplifies to
$\mathbf {e} _{3}(t)=\mathbf {e} _{1}(t)\times \mathbf {e} _{2}(t)$
or to
$\mathbf {e} _{3}(t)=-\mathbf {e} _{1}(t)\times \mathbf {e} _{2}(t),$
That either sign may occur is illustrated by the examples of a right-handed helix and a left-handed helix.
Torsion
Main article: Torsion of a curve
The second generalized curvature χ2(t) is called torsion and measures the deviance of γ from being a plane curve. In other words, if the torsion is zero, the curve lies completely in the same osculating plane (there is only one osculating plane for every point t). It is defined as
$\tau (t)=\chi _{2}(t)={\frac {{\bigl \langle }\mathbf {e} _{2}'(t),\mathbf {e} _{3}(t){\bigr \rangle }}{\left\|{\boldsymbol {\gamma }}'(t)\right\|}}$
and is called the torsion of γ at point t.
Aberrancy
The third derivative may be used to define aberrancy, a metric of non-circularity of a curve.[4][5][6]
Main theorem of curve theory
Main article: Fundamental theorem of curves
Given n − 1 functions:
$\chi _{i}\in C^{n-i}([a,b],\mathbb {R} ),\quad \chi _{i}(t)>0,\quad 1\leq i\leq n-1$
then there exists a unique (up to transformations using the Euclidean group) Cn + 1-curve γ which is regular of order n and has the following properties:
${\begin{aligned}\|\gamma '(t)\|&=1&t\in [a,b]\\\chi _{i}(t)&={\frac {\langle \mathbf {e} _{i}'(t),\mathbf {e} _{i+1}(t)\rangle }{\|{\boldsymbol {\gamma }}'(t)\|}}\end{aligned}}$
where the set
$\mathbf {e} _{1}(t),\ldots ,\mathbf {e} _{n}(t)$
is the Frenet frame for the curve.
By additionally providing a start t0 in I, a starting point p0 in $\mathbb {R} ^{n}$ and an initial positive orthonormal Frenet frame {e1, ..., en − 1} with
${\begin{aligned}{\boldsymbol {\gamma }}(t_{0})&=\mathbf {p} _{0}\\\mathbf {e} _{i}(t_{0})&=\mathbf {e} _{i},\quad 1\leq i\leq n-1\end{aligned}}$
the Euclidean transformations are eliminated to obtain a unique curve γ.
Frenet–Serret formulas
Main article: Frenet–Serret formulas
The Frenet–Serret formulas are a set of ordinary differential equations of first order. The solution is the set of Frenet vectors describing the curve specified by the generalized curvature functions χi.
2 dimensions
${\begin{bmatrix}\mathbf {e} _{1}'(t)\\\mathbf {e} _{2}'(t)\\\end{bmatrix}}=\left\Vert \gamma '\left(t\right)\right\Vert {\begin{bmatrix}0&\kappa (t)\\-\kappa (t)&0\\\end{bmatrix}}{\begin{bmatrix}\mathbf {e} _{1}(t)\\\mathbf {e} _{2}(t)\\\end{bmatrix}}$
3 dimensions
${\begin{bmatrix}\mathbf {e} _{1}'(t)\\\mathbf {e} _{2}'(t)\\\mathbf {e} _{3}'(t)\\\end{bmatrix}}=\left\Vert \gamma '\left(t\right)\right\Vert {\begin{bmatrix}0&\kappa (t)&0\\-\kappa (t)&0&\tau (t)\\0&-\tau (t)&0\\\end{bmatrix}}{\begin{bmatrix}\mathbf {e} _{1}(t)\\\mathbf {e} _{2}(t)\\\mathbf {e} _{3}(t)\\\end{bmatrix}}$
n dimensions (general formula)
${\begin{bmatrix}\mathbf {e} _{1}'(t)\\\mathbf {e} _{2}'(t)\\\vdots \\\mathbf {e} _{n-1}'(t)\\\mathbf {e} _{n}'(t)\\\end{bmatrix}}=\left\Vert \gamma '\left(t\right)\right\Vert {\begin{bmatrix}0&\chi _{1}(t)&\cdots &0&0\\-\chi _{1}(t)&0&\cdots &0&0\\\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&\cdots &0&\chi _{n-1}(t)\\0&0&\cdots &-\chi _{n-1}(t)&0\\\end{bmatrix}}{\begin{bmatrix}\mathbf {e} _{1}(t)\\\mathbf {e} _{2}(t)\\\vdots \\\mathbf {e} _{n-1}(t)\\\mathbf {e} _{n}(t)\\\end{bmatrix}}$
See also
• List of curves topics
References
1. do Carmo, Manfredo P. (2016). Differential Geometry of Curves and Surfaces (revised & updated 2nd ed.). Mineola, NY: Dover Publications, Inc. pp. 27–28. ISBN 978-0-486-80699-0.
2. Kühnel, Wolfgang (2005). Differential Geometry: Curves, Surfaces, Manifolds. Providence: AMS. p. 53. ISBN 0-8218-3988-8.
3. Weisstein, Eric W. "Bertrand Curves". mathworld.wolfram.com.
4. Schot, Stephen (November 1978). "Aberrancy: Geometry of the Third Derivative". Mathematics Magazine. 5. 51 (5): 259–275. doi:10.2307/2690245. JSTOR 2690245.
5. Cameron Byerley; Russell a. Gordon (2007). "Measures of Aberrancy". Real Analysis Exchange. Michigan State University Press. 32 (1): 233. doi:10.14321/realanalexch.32.1.0233. ISSN 0147-1937.
6. Gordon, Russell A. (2004). "The aberrancy of plane curves". The Mathematical Gazette. Cambridge University Press (CUP). 89 (516): 424–436. doi:10.1017/s0025557200178271. ISSN 0025-5572. S2CID 118533002.
Further reading
• Kreyszig, Erwin (1991). Differential Geometry. New York: Dover Publications. ISBN 0-486-66721-9. Chapter II is a classical treatment of Theory of Curves in 3-dimensions.
|
Wikipedia
|
Parametric model
In statistics, a parametric model or parametric family or finite-dimensional model is a particular class of statistical models. Specifically, a parametric model is a family of probability distributions that has a finite number of parameters.
Definition
A statistical model is a collection of probability distributions on some sample space. We assume that the collection, 𝒫, is indexed by some set Θ. The set Θ is called the parameter set or, more commonly, the parameter space. For each θ ∈ Θ, let Fθ denote the corresponding member of the collection; so Fθ is a cumulative distribution function. Then a statistical model can be written as
${\mathcal {P}}={\big \{}F_{\theta }\ {\big |}\ \theta \in \Theta {\big \}}.$
The model is a parametric model if Θ ⊆ ℝk for some positive integer k.
When the model consists of absolutely continuous distributions, it is often specified in terms of corresponding probability density functions:
${\mathcal {P}}={\big \{}f_{\theta }\ {\big |}\ \theta \in \Theta {\big \}}.$
Examples
• The Poisson family of distributions is parametrized by a single number λ > 0:
${\mathcal {P}}={\Big \{}\ p_{\lambda }(j)={\tfrac {\lambda ^{j}}{j!}}e^{-\lambda },\ j=0,1,2,3,\dots \ {\Big |}\;\;\lambda >0\ {\Big \}},$
where pλ is the probability mass function. This family is an exponential family.
• The normal family is parametrized by θ = (μ, σ), where μ ∈ ℝ is a location parameter and σ > 0 is a scale parameter:
${\mathcal {P}}={\Big \{}\ f_{\theta }(x)={\tfrac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\tfrac {(x-\mu )^{2}}{2\sigma ^{2}}}\right)\ {\Big |}\;\;\mu \in \mathbb {R} ,\sigma >0\ {\Big \}}.$
This parametrized family is both an exponential family and a location-scale family.
• The Weibull translation model has a three-dimensional parameter θ = (λ, β, μ):
${\mathcal {P}}={\Big \{}\ f_{\theta }(x)={\tfrac {\beta }{\lambda }}\left({\tfrac {x-\mu }{\lambda }}\right)^{\beta -1}\!\exp \!{\big (}\!-\!{\big (}{\tfrac {x-\mu }{\lambda }}{\big )}^{\beta }{\big )}\,\mathbf {1} _{\{x>\mu \}}\ {\Big |}\;\;\lambda >0,\,\beta >0,\,\mu \in \mathbb {R} \ {\Big \}}.$
• The binomial model is parametrized by θ = (n, p), where n is a non-negative integer and p is a probability (i.e. 0 ≤ p ≤ 1):
${\mathcal {P}}={\Big \{}\ p_{\theta }(k)={\tfrac {n!}{k!(n-k)!}}\,p^{k}(1-p)^{n-k},\ k=0,1,2,\dots ,n\ {\Big |}\;\;n\in \mathbb {Z} _{\geq 0},\,0\leq p\leq 1{\Big \}}.$
This example illustrates the definition for a model with some discrete parameters.
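As a small computational illustration, the sketch below (Python with NumPy; the data are synthetic) estimates the two-dimensional parameter θ = (μ, σ) of the normal family by maximum likelihood, which for this family has a closed form:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=10_000)   # sample from F_theta

# Maximum-likelihood estimates for the normal location-scale family.
mu_hat = x.mean()
sigma_hat = np.sqrt(np.mean((x - mu_hat) ** 2))
print(mu_hat, sigma_hat)   # close to the true theta = (2.0, 1.5)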
General remarks
A parametric model is called identifiable if the mapping θ ↦ Pθ is invertible, i.e. there are no two different parameter values θ1 and θ2 such that Pθ1 = Pθ2.
Comparisons with other classes of models
Parametric models are contrasted with the semi-parametric, semi-nonparametric, and non-parametric models, all of which consist of an infinite set of "parameters" for description. The distinction between these four classes is as follows:
• in a "parametric" model all the parameters are in finite-dimensional parameter spaces;
• a model is "non-parametric" if all the parameters are in infinite-dimensional parameter spaces;
• a "semi-parametric" model contains finite-dimensional parameters of interest and infinite-dimensional nuisance parameters;
• a "semi-nonparametric" model has both finite-dimensional and infinite-dimensional unknown parameters of interest.
Some statisticians believe that the concepts "parametric", "non-parametric", and "semi-parametric" are ambiguous.[1] It can also be noted that the set of all probability measures has cardinality of continuum, and therefore it is possible to parametrize any model at all by a single number in the interval (0,1).[2] This difficulty can be avoided by considering only "smooth" parametric models.
See also
• Parametric family
• Parametric statistics
• Statistical model
• Statistical model specification
Notes
1. Le Cam & Yang 2000, §7.4
2. Bickel et al. 1998, p. 2
Bibliography
• Bickel, Peter J.; Doksum, Kjell A. (2001), Mathematical Statistics: Basic and selected topics, vol. 1 (Second (updated printing 2007) ed.), Prentice-Hall
• Bickel, Peter J.; Klaassen, Chris A. J.; Ritov, Ya’acov; Wellner, Jon A. (1998), Efficient and Adaptive Estimation for Semiparametric Models, Springer
• Davison, A. C. (2003), Statistical Models, Cambridge University Press
• Le Cam, Lucien; Yang, Grace Lo (2000), Asymptotics in Statistics: Some basic concepts (2nd ed.), Springer
• Lehmann, Erich L.; Casella, George (1998), Theory of Point Estimation (2nd ed.), Springer
• Liese, Friedrich; Miescke, Klaus-J. (2008), Statistical Decision Theory: Estimation, testing, and selection, Springer
• Pfanzagl, Johann; with the assistance of R. Hamböker (1994), Parametric Statistical Theory, Walter de Gruyter, MR 1291393
|
Wikipedia
|
Regular part
In mathematics, the regular part of a Laurent series consists of the series of terms with non-negative powers.[1] That is, if
$f(z)=\sum _{n=-\infty }^{\infty }a_{n}(z-c)^{n},$
then the regular part of this Laurent series is
$\sum _{n=0}^{\infty }a_{n}(z-c)^{n}.$
In contrast, the series of terms with negative powers is the principal part.[1]
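Separating the two parts is purely a matter of indexing the coefficients. A minimal Python sketch, using the example f(z) = e^z/z² about c = 0, whose Laurent coefficients are a_n = 1/(n+2)! for n ≥ −2:

from fractions import Fraction
from math import factorial

# Laurent coefficients a_n of exp(z)/z**2 at c = 0, for -2 <= n < 6.
coeffs = {n: Fraction(1, factorial(n + 2)) for n in range(-2, 6)}

regular_part = {n: a for n, a in coeffs.items() if n >= 0}     # 1/2, 1/6, 1/24, ...
principal_part = {n: a for n, a in coeffs.items() if n < 0}    # z^-2 and z^-1 terms
print(principal_part, regular_part)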
References
1. Jeffrey, Alan (2005), Complex Analysis and Applications (2nd ed.), CRC Press, p. 256, ISBN 9781584885535.
|
Wikipedia
|
Regular polytope
In mathematics, a regular polytope is a polytope whose symmetry group acts transitively on its flags, thus giving it the highest degree of symmetry. All its elements or j-faces (for all 0 ≤ j ≤ n, where n is the dimension of the polytope) — cells, faces and so on — are also transitive on the symmetries of the polytope, and are regular polytopes of dimension ≤ n.
Regular polytope examples
• A regular pentagon is a polygon, a two-dimensional polytope with 5 edges, represented by Schläfli symbol {5}.
• A regular dodecahedron is a polyhedron, a three-dimensional polytope, with 12 pentagonal faces, represented by Schläfli symbol {5,3}.
• A regular 120-cell is a polychoron, a four-dimensional polytope, with 120 dodecahedral cells, represented by Schläfli symbol {5,3,3} (typically depicted as a Schlegel diagram).
• A regular cubic honeycomb is a tessellation, an infinite three-dimensional polytope, represented by Schläfli symbol {4,3,4}.
• The 256 vertices and 1024 edges of an 8-cube can be shown in an orthogonal projection (Petrie polygon).
Regular polytopes are the generalized analog in any number of dimensions of regular polygons (for example, the square or the regular pentagon) and regular polyhedra (for example, the cube). The strong symmetry of the regular polytopes gives them an aesthetic quality that interests both non-mathematicians and mathematicians.
Classically, a regular polytope in n dimensions may be defined as having regular facets ([n–1]-faces) and regular vertex figures. These two conditions are sufficient to ensure that all faces are alike and all vertices are alike. Note, however, that this definition does not work for abstract polytopes.
A regular polytope can be represented by a Schläfli symbol of the form {a, b, c, ..., y, z}, with regular facets as {a, b, c, ..., y}, and regular vertex figures as {b, c, ..., y, z}.
Classification and description
Regular polytopes are classified primarily according to their dimensionality.
They can be further classified according to symmetry. For example, the cube and the regular octahedron share the same symmetry, as do the regular dodecahedron and icosahedron. Indeed, symmetry groups are sometimes named after regular polytopes, for example the tetrahedral and icosahedral symmetries.
Three special classes of regular polytope exist in every dimension:
• Regular simplex
• Measure polytope (Hypercube)
• Cross polytope (Orthoplex)
In two dimensions, there are infinitely many regular polygons. In three and four dimensions, there are several more regular polyhedra and 4-polytopes besides these three. In five dimensions and above, these are the only ones. See also the list of regular polytopes.
In one dimension, the line segment simultaneously serves as all three of these polytopes, and in two dimensions, the square can act as both the measure polytope and the cross polytope at the same time.
The idea of a polytope is sometimes generalised to include related kinds of geometrical object. Some of these have regular examples, as discussed in the section on historical discovery below.
Schläfli symbols
Main article: Schläfli symbol
A concise symbolic representation for regular polytopes was developed by Ludwig Schläfli in the 19th century, and a slightly modified form has become standard. The notation is best explained by adding one dimension at a time.
• A convex regular polygon having n sides is denoted by {n}. So an equilateral triangle is {3}, a square {4}, and so on indefinitely. A regular star polygon which winds m times around its centre is denoted by the fractional value {n/m}, where n and m are co-prime, so a regular pentagram is {5/2}.
• A regular polyhedron having faces {n} with p faces joining around a vertex is denoted by {n, p}. The nine regular polyhedra are {3, 3}, {3, 4}, {4, 3}, {3, 5}, {5, 3}, {3, 5/2}, {5/2, 3}, {5, 5/2} and {5/2, 5}. Here {p} is the vertex figure of the polyhedron.
• A regular 4-polytope having cells {n, p} with q cells joining around an edge is denoted by {n, p, q}. The vertex figure of the 4-polytope is a {p, q}.
• A regular 5-polytope is an {n, p, q, r}. And so on.
Duality of the regular polytopes
The dual of a regular polytope is also a regular polytope. The Schläfli symbol for the dual polytope is just the original symbol written backwards: {3, 3} is self-dual, {3, 4} is dual to {4, 3}, {4, 3, 3} to {3, 3, 4} and so on.
The vertex figure of a regular polytope is the dual of the dual polytope's facet. For example, the vertex figure of {3, 3, 4} is {3, 4}, the dual of which is {4, 3} — a cell of {4, 3, 3}.
The measure and cross polytopes in any dimension are dual to each other.
If the Schläfli symbol is palindromic, i.e. reads the same forwards and backwards, then the polytope is self-dual; both rules are illustrated in the sketch after the list below. The self-dual regular polytopes are:
• All regular polygons, {a}.
• All regular n-simplexes, {3,3,...,3}
• The regular 24-cell in 4 dimensions, {3,4,3}.
• The great 120-cell ({5,5/2,5}) and grand stellated 120-cell ({5/2,5,5/2}) in 4 dimensions.
• All regular n-dimensional cubic honeycombs, {4,3,...,3,4}. These may be treated as infinite polytopes.
• Hyperbolic tilings and honeycombs (tilings {p,p} with p > 4 in 2 dimensions; {4,4,4}, {5,3,5}, {3,5,3}, {6,3,6}, and {3,6,3} in 3 dimensions; {5,3,3,5} in 4 dimensions; and {3,3,4,3,3} in 5 dimensions).
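Duality as reversal of the symbol and self-duality as palindromicity are immediate to encode. A Python sketch (symbols as plain lists of integers; star entries such as 5/2 would need fractions and are omitted here):

def dual(symbol):
    # The dual polytope's Schlaefli symbol is the original written backwards.
    return list(reversed(symbol))

def is_self_dual(symbol):
    return symbol == dual(symbol)

print(dual([4, 3, 3]))          # [3, 3, 4]: the 16-cell, dual of the tesseract
print(is_self_dual([3, 4, 3]))  # True: the 24-cell is self-dual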
Regular simplices
Graphs of the 1-simplex to 4-simplex.
Line segment Triangle Tetrahedron Pentachoron
Main article: Simplex
Begin with a point A. Mark point B at a distance r from it, and join to form a line segment. Mark point C in a second, orthogonal, dimension at a distance r from both, and join to A and B to form an equilateral triangle. Mark point D in a third, orthogonal, dimension a distance r from all three, and join to form a regular tetrahedron. And so on for higher dimensions.
These are the regular simplices or simplexes. Their names are, in order of dimensionality:
0. Point
1. Line segment
2. Equilateral triangle (regular trigon)
3. Regular tetrahedron
4. Regular pentachoron or 4-simplex
5. Regular hexateron or 5-simplex
... An n-simplex has n+1 vertices.
Measure polytopes (hypercubes)
Graphs of the 2-cube to 4-cube.
Square Cube Tesseract
Main article: Hypercube
Begin with a point A. Extend a line to point B at distance r, and join to form a line segment. Extend a second line of length r, orthogonal to AB, from B to C, and likewise from A to D, to form a square ABCD. Extend lines of length r respectively from each corner, orthogonal to both AB and BC (i.e. upwards). Mark new points E,F,G,H to form the cube ABCDEFGH. And so on for higher dimensions.
These are the measure polytopes or hypercubes. Their names are, in order of dimensionality:
0. Point
1. Line segment
2. Square (regular tetragon)
3. Cube (regular hexahedron)
4. Tesseract (regular octachoron) or 4-cube
5. Penteract (regular decateron) or 5-cube
... An n-cube has $2^{n}$ vertices.
Cross polytopes (orthoplexes)
Graphs of the 2-orthoplex to 4-orthoplex.
Square Octahedron 16-cell
Main article: Orthoplex
Begin with a point O. Extend a line in opposite directions to points A and B a distance r from O and 2r apart. Draw a line COD of length 2r, centred on O and orthogonal to AB. Join the ends to form a square ACBD. Draw a line EOF of the same length and centred on O, orthogonal to AB and CD (i.e. upwards and downwards). Join the ends to the square to form a regular octahedron. And so on for higher dimensions.
These are the cross polytopes or orthoplexes. Their names are, in order of dimensionality:
0. Point
1. Line segment
2. Square (regular tetragon)
3. Regular octahedron
4. Regular hexadecachoron (16-cell) or 4-orthoplex
5. Regular triacontakaiditeron (Pentacross) or 5-orthoplex
... An n-orthoplex has 2n vertices.
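The three vertex counts (n + 1, 2^n, and 2n) follow directly from standard coordinates, as the short Python sketch below illustrates (the coordinate conventions are ours; the simplex is placed in R^(n+1)):

from itertools import product

def simplex_vertices(n):    # standard basis of R^(n+1): n + 1 vertices
    return [tuple(int(i == j) for j in range(n + 1)) for i in range(n + 1)]

def cube_vertices(n):       # {0, 1}^n: 2^n vertices
    return list(product((0, 1), repeat=n))

def orthoplex_vertices(n):  # +-e_i in R^n: 2n vertices
    return [tuple(s * (i == j) for j in range(n))
            for i in range(n) for s in (1, -1)]

for n in (2, 3, 4):
    print(n, len(simplex_vertices(n)), len(cube_vertices(n)),
          len(orthoplex_vertices(n)))
# prints: 2 3 4 4 / 3 4 8 6 / 4 5 16 8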
History of discovery
Convex polygons and polyhedra
The earliest surviving mathematical treatment of regular polygons and polyhedra comes to us from ancient Greek mathematicians. The five Platonic solids were known to them. Pythagoras knew of at least three of them and Theaetetus (c. 417 BC – 369 BC) described all five. Later, Euclid wrote a systematic study of mathematics, publishing it under the title Elements, which built up a logical theory of geometry and number theory. His work concluded with mathematical descriptions of the five Platonic solids.
Platonic solids
TetrahedronCubeOctahedronDodecahedronIcosahedron
Star polygons and polyhedra
Our understanding remained static for many centuries after Euclid. The subsequent history of the regular polytopes can be characterised by a gradual broadening of the basic concept, allowing more and more objects to be considered among their number. Thomas Bradwardine (Bradwardinus) was the first to record a serious study of star polygons. Various star polyhedra appear in Renaissance art, but it was not until Johannes Kepler studied the small stellated dodecahedron and the great stellated dodecahedron in 1619 that he realised these two were regular. Louis Poinsot discovered the great dodecahedron and great icosahedron in 1809, and Augustin Cauchy proved the list complete in 1812. These polyhedra are known collectively as the Kepler-Poinsot polyhedra.
Main article: Regular polyhedron § History
Kepler-Poinsot polyhedra
Small stellated
dodecahedron
Great stellated
dodecahedron
Great dodecahedronGreat icosahedron
Higher-dimensional polytopes
It was not until the 19th century that a Swiss mathematician, Ludwig Schläfli, examined and characterised the regular polytopes in higher dimensions. His efforts were first published in full in Schläfli (1901), six years after his death, although parts of it were published in Schläfli (1855) and Schläfli (1858). Between 1880 and 1900, Schläfli's results were rediscovered independently by at least nine other mathematicians — see Coxeter (1973, pp. 143–144) for more details. Schläfli called such a figure a "polyschem" (in English, "polyscheme" or "polyschema"). The term "polytope" was introduced by Reinhold Hoppe, one of Schläfli's rediscoverers, in 1882, and first used in English by Alicia Boole Stott some twenty years later. The term "polyhedroids" was also used in earlier literature (Hilbert, 1952).
Coxeter (1973) is probably the most comprehensive printed treatment of Schläfli's and similar results to date. Schläfli showed that there are six regular convex polytopes in 4 dimensions. Five of them can be seen as analogous to the Platonic solids: the 4-simplex (or pentachoron) to the tetrahedron, the hypercube (or tesseract) to the cube, the 4-orthoplex (or hexadecachoron or 16-cell) to the octahedron, the 120-cell to the dodecahedron, and the 600-cell to the icosahedron. The sixth, the 24-cell, can be seen as a transitional form between the hypercube and 16-cell, analogous to the way that the cuboctahedron and the rhombic dodecahedron are transitional forms between the cube and the octahedron.
In five and more dimensions, there are exactly three regular polytopes, which correspond to the tetrahedron, cube and octahedron: these are the regular simplices, measure polytopes and cross polytopes. Descriptions of these may be found in the list of regular polytopes. Also of interest are the star regular 4-polytopes, partially discovered by Schläfli.
By the end of the 19th century, mathematicians such as Arthur Cayley and Ludwig Schläfli had developed the theory of regular polytopes in four and higher dimensions, such as the tesseract and the 24-cell.
The latter are difficult (though not impossible) to visualise through a process of dimensional analogy, since they retain the familiar symmetry of their lower-dimensional analogues. The tesseract contains 8 cubical cells. It consists of two cubes in parallel hyperplanes with corresponding vertices cross-connected in such a way that the 8 cross-edges are equal in length and orthogonal to the 12+12 edges situated on each cube. The corresponding faces of the two cubes are connected to form the remaining 6 cubical faces of the tesseract. The 24-cell can be derived from the tesseract by joining the 8 vertices of each of its cubical faces to an additional vertex to form the four-dimensional analogue of a pyramid. Both figures, as well as other 4-dimensional figures, can be directly visualised and depicted using 4-dimensional stereographs.[1]
Harder still to imagine are the more modern abstract regular polytopes such as the 57-cell or the 11-cell. From the mathematical point of view, however, these objects have the same aesthetic qualities as their more familiar two and three-dimensional relatives.
At the start of the 20th century, the definition of a regular polytope was as follows.
• A regular polygon is a polygon whose edges are all equal and whose angles are all equal.
• A regular polyhedron is a polyhedron whose faces are all congruent regular polygons, and whose vertex figures are all congruent and regular.
• And so on, a regular n-polytope is an n-dimensional polytope whose (n − 1)-dimensional faces are all regular and congruent, and whose vertex figures are all regular and congruent.
This is a "recursive" definition. It defines regularity of higher dimensional figures in terms of regular figures of a lower dimension. There is an equivalent (non-recursive) definition, which states that a polytope is regular if it has a sufficient degree of symmetry.
• An n-polytope is regular if any set consisting of a vertex, an edge containing it, a 2-dimensional face containing the edge, and so on up to n−1 dimensions, can be mapped to any other such set by a symmetry of the polytope.
So for example, the cube is regular because if we choose a vertex of the cube, and one of the three edges it is on, and one of the two faces containing the edge, then this triplet, or flag, (vertex, edge, face) can be mapped to any other such flag by a suitable symmetry of the cube. Thus we can define a regular polytope very succinctly:
• A regular polytope is one whose symmetry group is transitive on its flags.
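For the cube this can be made concrete: enumerating flags combinatorially yields 48 of them, which is also the order of the cube's full symmetry group, as flag-transitivity (with trivial flag stabilisers) requires. A minimal sketch, with our own labels:

```python
import itertools

# Combinatorial cube: vertices are 0/1 triples; an edge is a pair of
# vertices differing in one coordinate; a face fixes one coordinate.
verts = list(itertools.product((0, 1), repeat=3))
edges = [frozenset((u, v)) for u, v in itertools.combinations(verts, 2)
         if sum(a != b for a, b in zip(u, v)) == 1]
faces = [frozenset(v for v in verts if v[i] == b)
         for i in range(3) for b in (0, 1)]

flags = [(v, e, f) for v in verts for e in edges if v in e
         for f in faces if e <= f]
print(len(flags))   # 48, the order of the cube's symmetry group
```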
In the 20th century, some important developments were made. The symmetry groups of the classical regular polytopes were generalised into what are now called Coxeter groups. Coxeter groups also include the symmetry groups of regular tessellations of space or of the plane. For example, the symmetry group of an infinite chessboard would be the Coxeter group [4,4].
Apeirotopes — infinite polytopes
Main article: Regular skew polyhedron
In the first part of the 20th century, Coxeter and Petrie discovered three infinite structures {4, 6}, {6, 4} and {6, 6}. They called them regular skew polyhedra, because they seemed to satisfy the definition of a regular polyhedron — all the vertices, edges and faces are alike, all the angles are the same, and the figure has no free edges. Nowadays, they are called infinite polyhedra or apeirohedra. The regular tilings of the plane {4, 4}, {3, 6} and {6, 3} can also be regarded as infinite polyhedra.
In the 1960s Branko Grünbaum issued a call to the geometric community to consider more abstract types of regular polytopes that he called polystromata. He developed the theory of polystromata, showing examples of new objects he called regular apeirotopes, that is, regular polytopes with infinitely many faces. A simple example of a skew apeirogon would be a zig-zag. It seems to satisfy the definition of a regular polygon — all the edges are the same length, all the angles are the same, and the figure has no loose ends (because they can never be reached). More importantly, perhaps, there are symmetries of the zig-zag that can map any pair of a vertex and attached edge to any other. Since then, other regular apeirogons and higher apeirotopes have continued to be discovered.
Regular complex polytopes
Main article: Complex polytope
A complex number has a real part and an imaginary part, the latter being a real multiple of the square root of minus one. A complex Hilbert space has its x, y, z, etc. coordinates as complex numbers. This effectively doubles the number of dimensions. A polytope constructed in such a unitary space is called a complex polytope.[2]
Abstract polytopes
Main article: Abstract polytope
Grünbaum also discovered the 11-cell, a four-dimensional self-dual object whose facets are not icosahedra, but are "hemi-icosahedra" — that is, they are the shape one gets if one considers opposite faces of the icosahedra to be actually the same face (Grünbaum 1976). The hemi-icosahedron has only 10 triangular faces, and 6 vertices, unlike the icosahedron, which has 20 and 12.
This concept may be easier for the reader to grasp if one considers the relationship of the cube and the hemicube. An ordinary cube has 8 corners, which could be labeled A to H, with A opposite H, B opposite G, and so on. In a hemicube, A and H would be treated as the same corner. So would B and G, and so on. The edge AB would become the same edge as GH, and the face ABEF would become the same face as CDGH. The new shape has only 3 faces, 6 edges and 4 corners.
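This identification can be verified mechanically. In the sketch below (labels and helper names ours), cube vertices are 0/1 triples, the antipode flips every coordinate, and a face is recorded by the quotient images of its four boundary edges, so that opposite faces collapse to the same face:

```python
import itertools

verts = list(itertools.product((0, 1), repeat=3))
antipode = lambda v: tuple(1 - c for c in v)
cls = lambda v: frozenset((v, antipode(v)))   # hemicube vertex class

def qedge(u, v):                              # quotient image of an edge
    return frozenset((cls(u), cls(v)))

def qface(i, b):                              # quotient image of a face
    fv = [v for v in verts if v[i] == b]
    return frozenset(qedge(u, v)
                     for u, v in itertools.combinations(fv, 2)
                     if sum(a != c for a, c in zip(u, v)) == 1)

vclasses = {cls(v) for v in verts}
qedges = {qedge(u, v) for u, v in itertools.combinations(verts, 2)
          if sum(a != b for a, b in zip(u, v)) == 1}
qfaces = {qface(i, b) for i in range(3) for b in (0, 1)}
print(len(vclasses), len(qedges), len(qfaces))   # 4 6 3
```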
The 11-cell cannot be formed with regular geometry in flat (Euclidean) hyperspace, but only in positively curved (elliptic) hyperspace.
A few years after Grünbaum's discovery of the 11-cell, H. S. M. Coxeter independently discovered the same shape. He had earlier discovered a similar polytope, the 57-cell (Coxeter 1982, 1984).
By 1994 Grünbaum was considering polytopes abstractly as combinatorial sets of points or vertices, and was unconcerned whether faces were planar. As he and others refined these ideas, such sets came to be called abstract polytopes. An abstract polytope is defined as a partially ordered set (poset), whose elements are the polytope's faces (vertices, edges, faces etc.) ordered by containment. Certain restrictions are imposed on the set that are similar to properties satisfied by the classical regular polytopes (including the Platonic solids). The restrictions, however, are loose enough that regular tessellations, hemicubes, and even objects as strange as the 11-cell or stranger, are all examples of regular polytopes.
A geometric polytope is understood to be a realization of the abstract polytope, such that there is a one-to-one mapping from the abstract elements to the geometric. Thus, any geometric polytope may be described by the appropriate abstract poset, though not all abstract polytopes have proper geometric realizations.
The theory has since been further developed, largely by McMullen & Schulte (2002), but other researchers have also made contributions.
Regularity of abstract polytopes
Regularity has a related, though different meaning for abstract polytopes, since angles and lengths of edges have no meaning.
The definition of regularity in terms of the transitivity of flags as given in the introduction applies to abstract polytopes.
Any classical regular polytope has an abstract equivalent which is regular, obtained by taking the set of faces. But non-regular classical polytopes can have regular abstract equivalents, since abstract polytopes don't care about angles and edge lengths, for example. And a regular abstract polytope may not be realisable as a classical polytope.
All polygons are regular in the abstract world, for example, whereas only those having equal angles and edges of equal length are regular in the classical world.
Vertex figure of abstract polytopes
The concept of vertex figure is also defined differently for an abstract polytope. The vertex figure of a given abstract n-polytope at a given vertex V is the set of all abstract faces which contain V, including V itself. More formally, it is the abstract section
Fn / V = {F | V ≤ F ≤ Fn}
where Fn is the maximal face, i.e. the notional n-face which contains all other faces. Note that each i-face, i ≥ 0 of the original polytope becomes an (i − 1)-face of the vertex figure.
Unlike the case for Euclidean polytopes, an abstract polytope with regular facets and vertex figures may or may not be regular itself – for example, the square pyramid, all of whose facets and vertex figures are regular abstract polygons.
The classical vertex figure will, however, be a realisation of the abstract one.
Constructions
Polygons
The traditional way to construct a regular polygon, or indeed any other figure on the plane, is by compass and straightedge. Constructing some regular polygons in this way is very simple (the easiest is perhaps the equilateral triangle), some are more complex, and some are impossible ("not constructible"). The simplest few regular polygons that are impossible to construct are the n-sided polygons with n equal to 7, 9, 11, 13, 14, 18, 19, 21,...
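By the Gauss–Wantzel theorem, the regular n-gon is constructible exactly when Euler's totient φ(n) is a power of two (equivalently, when n is a power of two times a product of distinct Fermat primes). A short sketch reproducing the list above:

```python
from math import gcd

def totient(n):
    # Euler's totient, by direct count (fine for small n).
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def constructible(n):
    # Gauss-Wantzel: constructible iff phi(n) is a power of two.
    t = totient(n)
    return t & (t - 1) == 0

print([n for n in range(3, 22) if not constructible(n)])
# [7, 9, 11, 13, 14, 18, 19, 21]
```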
Constructibility in this sense refers only to ideal constructions with ideal tools. Of course reasonably accurate approximations can be constructed by a range of methods; while theoretically possible constructions may be impractical.
Polyhedra
Euclid's Elements gave what amount to ruler-and-compass constructions for the five Platonic solids.[3] However, the merely practical question of how one might draw a straight line in space, even with a ruler, might lead one to question what exactly it means to "construct" a regular polyhedron. (One could ask the same question about the polygons, of course.)
The English word "construct" has the connotation of systematically building the thing constructed. The most common way presented to construct a regular polyhedron is via a fold-out net. To obtain a fold-out net of a polyhedron, one takes the surface of the polyhedron and cuts it along just enough edges so that the surface may be laid out flat. This gives a plan for the net of the unfolded polyhedron. Since the Platonic solids have only triangles, squares and pentagons for faces, and these are all constructible with a ruler and compass, there exist ruler-and-compass methods for drawing these fold-out nets. The same applies to star polyhedra, although here we must be careful to make the net for only the visible outer surface.
If this net is drawn on cardboard, or similar foldable material (for example, sheet metal), the net may be cut out, folded along the uncut edges, and joined along the appropriate cut edges, thus forming the polyhedron for which the net was designed. For a given polyhedron there may be many fold-out nets. For example, there are 11 for the cube, and 43,380 for the dodecahedron.[4]
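The underlying count is combinatorial: a net corresponds to cutting the surface along the complement of a spanning tree of the face-adjacency graph, which for the cube is the octahedron graph, and the matrix-tree theorem then counts the labeled unfoldings; the 11 distinct nets are the orbits of these under the cube's symmetries. A minimal sketch:

```python
import numpy as np

# Face-adjacency graph of the cube: 6 faces, each adjacent to every
# face except its opposite (faces 0-1, 2-3, 4-5 are opposite pairs).
adj = np.array([[1 if i != j and i // 2 != j // 2 else 0
                 for j in range(6)] for i in range(6)])
L = np.diag(adj.sum(axis=1)) - adj

# Kirchhoff's matrix-tree theorem: any cofactor of the Laplacian
# counts the spanning trees, i.e. the labeled unfoldings.
print(round(np.linalg.det(L[1:, 1:])))   # 384
```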
Numerous children's toys, generally aimed at the teen or pre-teen age bracket, allow experimentation with regular polygons and polyhedra. For example, klikko provides sets of plastic triangles, squares, pentagons and hexagons that can be joined edge-to-edge in a large number of different ways. A child playing with such a toy could re-discover the Platonic solids (or the Archimedean solids), especially if given a little guidance from a knowledgeable adult.
In theory, almost any material may be used to construct regular polyhedra.[5] They may be carved out of wood, modeled out of wire, formed from stained glass. The imagination is the limit.
Higher dimensions
In higher dimensions, it becomes harder to say what one means by "constructing" the objects. Clearly, in a 3-dimensional universe, it is impossible to build a physical model of an object having 4 or more dimensions. There are several approaches normally taken to overcome this matter.
The first approach, suitable for four dimensions, uses four-dimensional stereography.[1] Depth in a third dimension is represented with horizontal relative displacement, depth in a fourth dimension with vertical relative displacement between the left and right images of the stereograph.
The second approach is to embed the higher-dimensional objects in three-dimensional space, using methods analogous to the ways in which three-dimensional objects are drawn on the plane. For example, the fold out nets mentioned in the previous section have higher-dimensional equivalents.[6] One might even imagine building a model of this fold-out net, as one draws a polyhedron's fold-out net on a piece of paper. Sadly, we could never do the necessary folding of the 3-dimensional structure to obtain the 4-dimensional polytope because of the constraints of the physical universe. Another way to "draw" the higher-dimensional shapes in 3 dimensions is via some kind of projection, for example, the analogue of either orthographic or perspective projection. Coxeter's famous book on polytopes (Coxeter 1973) has some examples of such orthographic projections.[7] Note that projecting even 4-dimensional polychora directly into two dimensions produces rather confusing images. Easier to understand are 3-d models of the projections. Such models are occasionally found in science museums or mathematics departments of universities (such as that of the Université Libre de Bruxelles).
The intersection of a four (or higher) dimensional regular polytope with a three-dimensional hyperplane will be a polytope (not necessarily regular). If the hyperplane is moved through the shape, the three-dimensional slices can be combined, animated into a kind of four dimensional object, where the fourth dimension is taken to be time. In this way, we can see (if not fully grasp) the full four-dimensional structure of the four-dimensional regular polytopes, via such cutaway cross sections. This is analogous to the way a CAT scan reassembles two-dimensional images to form a 3-dimensional representation of the organs being scanned. The ideal would be an animated hologram of some sort, however, even a simple animation such as the one shown can already give some limited insight into the structure of the polytope.
Another way a three-dimensional viewer can comprehend the structure of a four-dimensional polytope is through being "immersed" in the object, perhaps via some form of virtual reality technology. To understand how this might work, imagine what one would see if space were filled with cubes. The viewer would be inside one of the cubes, and would be able to see cubes in front of, behind, above, below, to the left and right of himself. If one could travel in these directions, one could explore the array of cubes, and gain an understanding of its geometrical structure. An infinite array of cubes is not a polytope in the traditional sense. In fact, it is a tessellation of 3-dimensional (Euclidean) space. However, a 4-polytope can be considered a tessellation of a 3-dimensional non-Euclidean space, namely, a tessellation of the surface of a four-dimensional sphere (a 4-dimensional spherical tiling).
Locally, this space seems like the one we are familiar with, and therefore, a virtual-reality system could, in principle, be programmed to allow exploration of these "tessellations", that is, of the 4-dimensional regular polytopes. The mathematics department at UIUC has a number of pictures of what one would see if embedded in a tessellation of hyperbolic space with dodecahedra. Such a tessellation forms an example of an infinite abstract regular polytope.
Normally, for abstract regular polytopes, a mathematician considers that the object is "constructed" if the structure of its symmetry group is known. This is because of an important theorem in the study of abstract regular polytopes, providing a technique that allows the abstract regular polytope to be constructed from its symmetry group in a standard and straightforward manner.
Regular polytopes in nature
For examples of polygons in nature, see:
Main article: Polygon
Each of the Platonic solids occurs naturally in one form or another:
Main article: Regular polyhedron
See also
• List of regular polytopes
• Johnson solid
• Bartel Leendert van der Waerden
References
Notes
1. Brisson, David W. (2019) [1978]. "Visual Comprehension in n-Dimensions". In Brisson, David W. (ed.). Hypergraphics: Visualizing Complex Relationships In Arts, Science, And Technology. AAAS Selected Symposium. Vol. 24. Taylor & Francis. pp. 109–145. ISBN 978-0-429-70681-3.
2. Coxeter (1974)
3. See, for example, Euclid's Elements.
4. Some interesting fold-out nets of the cube, octahedron, dodecahedron and icosahedron can be found online.
5. Instructions for building origami models can also be found online.
6. Some of these may be viewed online.
7. Other examples may be found on the web.
Bibliography
• Coxeter, H.S.M. (1973) [1948]. Regular Polytopes (3rd ed.). Dover. ISBN 0-486-61480-8.
• — (1974). Regular Complex Polytopes. Cambridge University Press. ISBN 052120125X.
• — (1991). Regular Complex Polytopes (2nd ed.). Cambridge University Press. ISBN 978-0-521-39490-1.
• Cromwell, Peter R. (1999). Polyhedra. Cambridge University Press. ISBN 978-0-521-66405-9.
• Euclid (1956). Elements. Translated by Heath, T. L. Cambridge University Press.
• Grünbaum, B. (1976). Regularity of Graphs, Complexes and Designs. Problèmes Combinatoires et Théorie des Graphes, Colloquium Internationale CNRS, Orsay. Vol. 260. pp. 191–197.
• Grünbaum, B. (1993). "Polyhedra with hollow faces". In Bisztriczky, T.; et al. (eds.). POLYTOPES: abstract, convex, and computational. Mathematical and physical sciences, NATO Advanced Study Institute. Vol. 440. Kluwer Academic. pp. 43–70. ISBN 0792330161.
• McMullen, P.; Schulte, E. (2002). Abstract Regular Polytopes. Cambridge University Press.
• Sanford, V. (1930). A Short History Of Mathematics. The Riverside Press.
• Schläfli, L. (1855). "Réduction d'une intégrale multiple, qui comprend l'arc de cercle et l'aire du triangle sphérique comme cas particuliers". Journal de Mathématiques. 20: 359–394.
• Schläfli, L. (1858). "On the multiple integral ∫^n dx dy ... dz, whose limits are p_1 = a_1 x + b_1 y + ... + h_1 z > 0, p_2 > 0, ..., p_n > 0, and x^2 + y^2 + ... + z^2 < 1". Quarterly Journal of Pure and Applied Mathematics. 2: 269–301. Continued in 3 (1860), pp. 54–68, 97–108.
• Schläfli, L. (1901). "Theorie der vielfachen Kontinuität". Denkschriften der Schweizerischen Naturforschenden Gesellschaft. 38: 1–237.
• Smith, J. V. (1982). Geometrical and Structural Crystallography (2nd ed.). Wiley. ISBN 0471861685.
• Van der Waerden, B. L. (1954). Science Awakening. Translated by Dresden, Arnold. P Noordhoff.
• Sommerville, D. M. Y. (2020) [1930]. "X. The Regular Polytopes". Introduction to the Geometry of n Dimensions. Courier Dover. pp. 159–192. ISBN 978-0-486-84248-6.
External links
• The Atlas of Small Regular Polytopes - List of abstract regular polytopes.
Regular Polytopes (book)
Regular Polytopes is a geometry book on regular polytopes written by Harold Scott MacDonald Coxeter. It was originally published by Methuen in 1947 and by Pitman Publishing in 1948,[1][2][3][4][5][6][7][8] with a second edition published by Macmillan in 1963[9][10][11][12] and a third edition by Dover Publications in 1973.[13][14][15] The Basic Library List Committee of the Mathematical Association of America has recommended that it be included in undergraduate mathematics libraries.[15]
Cover of the Dover edition, 1973
Author: Harold Scott MacDonald Coxeter
Language: English
Subject: Geometry
Published: 1947, 1963, 1973
Publisher: Methuen, Pitman, Macmillan, Dover
Pages: 321
ISBN: 0-486-61480-8
OCLC: 798003
Overview
The main topics of the book are the Platonic solids (regular convex polyhedra), related polyhedra, and their higher-dimensional generalizations.[1][2] It has 14 chapters, along with multiple appendices,[3] providing a more complete treatment of the subject than any earlier work, and incorporating material from 18 of Coxeter's own previous papers.[1] It includes many figures (both photographs of models by Paul Donchian and drawings), tables of numerical values, and historical remarks on the subject.[1][2]
The first chapter discusses regular polygons, regular polyhedra, basic concepts of graph theory, and the Euler characteristic.[3] Using the Euler characteristic, Coxeter derives a Diophantine equation whose integer solutions describe and classify the regular polyhedra. The second chapter uses combinations of regular polyhedra and their duals to generate related polyhedra,[1] including the semiregular polyhedra, and discusses zonohedra and Petrie polygons.[3] Here and throughout the book, the shapes it discusses are identified and classified by their Schläfli symbols.[1]
Chapters 3 through 5 describe the symmetries of polyhedra, first as permutation groups[3] and later, in the most innovative part of the book,[1] as the Coxeter groups, groups generated by reflections and described by the angles between their reflection planes. This part of the book also describes the regular tessellations of the Euclidean plane and the sphere, and the regular honeycombs of Euclidean space. Chapter 6 discusses the star polyhedra including the Kepler–Poinsot polyhedra.[3]
The remaining chapters cover higher-dimensional generalizations of these topics, including two chapters on the enumeration and construction of the regular polytopes, two chapters on higher-dimensional Euler characteristics and background on quadratic forms, two chapters on higher-dimensional Coxeter groups, a chapter on cross-sections and projections of polytopes, and a chapter on star polytopes and polytope compounds.[3]
Later editions
The second edition was published in paperback;[9][11] it adds some more recent research of Robert Steinberg on Petrie polygons and the order of Coxeter groups,[9][12] appends a new definition of polytopes at the end of the book, and makes minor corrections throughout.[9] The photographic plates were also enlarged for this printing,[10][12] and some figures were redrawn.[12] The nomenclature of these editions was occasionally cumbersome,[2] and was modernized in the third edition. The third edition also included a new preface with added material on polyhedra in nature, as revealed by the electron microscope.[13][14]
Reception
The book only assumes a high-school understanding of algebra, geometry, and trigonometry,[2][3] but it is primarily aimed at professionals in this area,[2] and some steps in the book's reasoning which a professional could take for granted might be too much for less-advanced readers.[3] Nevertheless, reviewer J. C. P. Miller recommends it to "anyone interested in the subject, whether from recreational, educational, or other aspects",[4] and (despite complaining about the omission of regular skew polyhedra) reviewer H. E. Wolfe suggests more strongly that every mathematician should own a copy.[7] Geologist A. J. Frueh Jr., describing the book as a textbook rather than a monograph, suggests that the parts of the book on the symmetries of space would likely be of great interest to crystallographers; however, Frueh complains of the lack of rigor in its proofs and the lack of clarity in its descriptions.[6]
Already in its first edition the book was described as "long awaited",[3] and "what is, and what will probably be for many years, the only organized treatment of the subject".[7] In a review of the second edition, Michael Goldberg (who also reviewed the first edition)[1] called it "the most extensive and authoritative summary" of its area of mathematics.[10] By the time of Tricia Muldoon Brown's 2016 review, she described it as "occasionally out-of-date, although not frustratingly so", for instance in its discussion of the four color theorem, proved after its last update. However, she still evaluated it as "well-written and comprehensive".[15]
See also
• List of books about polyhedra
References
1. Goldberg, M., "Review of Regular Polytopes", Mathematical Reviews, MR 0027148
2. Allendoerfer, C.B. (1949), "Review of Regular Polytopes", Bulletin of the American Mathematical Society, 55 (7): 721–722, doi:10.1090/S0002-9904-1949-09258-3
3. Cundy, H. Martyn (February 1949), "Review of Regular Polytopes", The Mathematical Gazette, 33 (303): 47–49, doi:10.2307/3608432, JSTOR 3608432
4. Miller, J. C. P. (July 1949), "Review of Regular Polytopes", Science Progress, 37 (147): 563–564, JSTOR 43413146
5. Walsh, J. L. (August 1949), "Review of Regular Polytopes", Scientific American, 181 (2): 58–59, JSTOR 24967260
6. Frueh, Jr., A. J. (November 1950), "Review of Regular Polytopes", The Journal of Geology, 58 (6): 672, JSTOR 30071213{{citation}}: CS1 maint: multiple names: authors list (link)
7. Wolfe, H. E. (February 1951), "Review of Regular Polytopes", American Mathematical Monthly, 58 (2): 119–120, doi:10.2307/2308393, JSTOR 2308393
8. Tóth, L. Fejes, "Review of Regular Polytopes", zbMATH (in German), Zbl 0031.06502
9. Robinson, G. de B., "Review of Regular Polytopes", Mathematical Reviews, MR 0151873
10. Goldberg, Michael (January 1964), "Review of Regular Polytopes", Mathematics of Computation, 18 (85): 166, doi:10.2307/2003446, JSTOR 2003446
11. Primrose, E.J.F (October 1964), "Review of Regular Polytopes", The Mathematical Gazette, 48 (365): 344, doi:10.1017/s0025557200072995
12. Yff, P. (February 1965), "Review of Regular Polytopes", Canadian Mathematical Bulletin, 8 (1): 124, doi:10.1017/s0008439500024413
13. Peak, Philip (March 1975), "Review of Regular Polytopes", The Mathematics Teacher, 68 (3): 230, JSTOR 27960095
14. Wenninger, Magnus J. (Winter 1976), "Review of Regular Polytopes", Leonardo, 9 (1): 83, doi:10.2307/1573335, JSTOR 1573335
15. Brown, Tricia Muldoon (October 2016), "Review of Regular Polytopes", MAA Reviews, Mathematical Association of America
Regular representation
In mathematics, and in particular the theory of group representations, the regular representation of a group G is the linear representation afforded by the group action of G on itself by translation.
For regular irreducible representations of a finite group, see Gelfand–Graev representation.
One distinguishes the left regular representation λ given by left translation and the right regular representation ρ given by the inverse of right translation.
Finite groups
See also: Representation theory of finite groups § Left- and right-regular representation
For a finite group G, the left regular representation λ (over a field K) is a linear representation on the K-vector space V freely generated by the elements of G, i.e. the elements of G can be identified with a basis of V. Given g ∈ G, λg is the linear map determined by its action on the basis by left translation by g, i.e.
$\lambda _{g}:h\mapsto gh,{\text{ for all }}h\in G.$
For the right regular representation ρ, an inversion must occur in order to satisfy the axioms of a representation. Specifically, given g ∈ G, ρg is the linear map on V determined by its action on the basis by right translation by g−1, i.e.
$\rho _{g}:h\mapsto hg^{-1},{\text{ for all }}h\in G.\ $
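These definitions are easy to make concrete for a small non-abelian group. The sketch below (with our own helper names) builds λg and ρg as permutation matrices for G = S3 and checks that both are homomorphisms; dropping the inverse in ρ would break the second check:

```python
import itertools
import numpy as np

# S3 as permutation tuples; compose(p, q) means "p after q".
G = list(itertools.permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

def lam(g):                    # left regular representation: h -> g h
    M = np.zeros((6, 6))
    for h in G:
        M[idx[compose(g, h)], idx[h]] = 1
    return M

def rho(g):                    # right regular representation: h -> h g^(-1)
    M = np.zeros((6, 6))
    for h in G:
        M[idx[compose(h, inverse(g))], idx[h]] = 1
    return M

g, k = G[1], G[4]
print(np.array_equal(lam(g) @ lam(k), lam(compose(g, k))))   # True
print(np.array_equal(rho(g) @ rho(k), rho(compose(g, k))))   # True
```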
Alternatively, these representations can be defined on the K-vector space W of all functions G → K. It is in this form that the regular representation is generalized to topological groups such as Lie groups.
The specific definition in terms of W is as follows. Given a function f : G → K and an element g ∈ G,
$(\lambda _{g}f)(x)=f(\lambda _{g}^{-1}(x))=f({g}^{-1}x)$
and
$(\rho _{g}f)(x)=f(\rho _{g}^{-1}(x))=f(xg).$
Significance of the regular representation of a group
Every group G acts on itself by translations. If we consider this action as a permutation representation it is characterised as having a single orbit and stabilizer the identity subgroup {e} of G. The regular representation of G, for a given field K, is the linear representation made by taking this permutation representation as a set of basis vectors of a vector space over K. The significance is that while the permutation representation doesn't decompose – it is transitive – the regular representation in general breaks up into smaller representations. For example, if G is a finite group and K is the complex number field, the regular representation decomposes as a direct sum of irreducible representations, with each irreducible representation appearing in the decomposition with multiplicity its dimension. The number of these irreducibles is equal to the number of conjugacy classes of G.
The above fact can be explained by character theory. Recall that the character of the regular representation χ(g) is the number of basis elements of V fixed by g. Hence χ(g) is zero when g is not the identity, and equals |G| when g is the identity. Let V have the decomposition ⊕aiVi where the Vi are the irreducible representations of G and the ai are the corresponding multiplicities. By character theory, the multiplicity ai can be computed as
$a_{i}=\langle \chi ,\chi _{i}\rangle ={\frac {1}{|G|}}\sum {\overline {\chi (g)}}\chi _{i}(g)={\frac {1}{|G|}}\chi (1)\chi _{i}(1)=\operatorname {dim} V_{i},$ which means the multiplicity of each irreducible representation is its dimension.
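For G = S3 this computation takes one line per irreducible character. The sketch below hard-codes the standard character table of S3 (classes of sizes 1, 3, 2; all characters real, so no conjugation is needed) and recovers the multiplicities 1, 1, 2, equal to the dimensions:

```python
# Conjugacy classes of S3: identity, transpositions, 3-cycles.
sizes = [1, 3, 2]
chi_irr = {"trivial": [1, 1, 1], "sign": [1, -1, 1], "standard": [2, 0, -1]}
chi_reg = [6, 0, 0]   # |G| at the identity, 0 elsewhere

for name, chi in chi_irr.items():
    # a_i = <chi_reg, chi_i> = (1/|G|) * sum over classes
    a = sum(s * r * c for s, r, c in zip(sizes, chi_reg, chi)) / 6
    print(name, a)   # trivial 1.0, sign 1.0, standard 2.0
```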
The article on group rings articulates the regular representation for finite groups, as well as showing how the regular representation can be taken to be a module.
Module theory point of view
To put the construction more abstractly, the group ring K[G] is considered as a module over itself. (There is a choice here of left-action or right-action, but that is not of importance except for notation.) If G is finite and the characteristic of K doesn't divide |G|, this is a semisimple ring and we are looking at its left (right) ring ideals. This theory has been studied in great depth. It is known in particular that the direct sum decomposition of the regular representation contains a representative of every isomorphism class of irreducible linear representations of G over K. In this sense, the regular representation is comprehensive for representation theory. The modular case, when the characteristic of K does divide |G|, is harder mainly because with K[G] not semisimple, a representation can fail to be irreducible without splitting as a direct sum.
Structure for finite cyclic groups
For a cyclic group C generated by g of order n, the matrix form of an element of K[C] acting on K[C] by multiplication takes a distinctive form known as a circulant matrix, in which each row is a shift to the right of the one above (in cyclic order, i.e. with the right-most element appearing on the left), when referred to the natural basis
1, g, g2, ..., gn−1.
When the field K contains a primitive n-th root of unity, one can diagonalise the representation of C by writing down n linearly independent simultaneous eigenvectors for all the n×n circulants. In fact if ζ is any n-th root of unity, the element
1 + ζg + ζ2g2 + ... + ζn−1gn−1
is an eigenvector for the action of g by multiplication, with eigenvalue
ζ−1
and so also an eigenvector of all powers of g, and their linear combinations.
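This diagonalisation is easy to confirm numerically. Below, P is the matrix of multiplication by g on the natural basis 1, g, ..., g^(n−1) (a cyclic shift), and the stated element is checked to be an eigenvector with eigenvalue ζ^(−1):

```python
import numpy as np

n = 5
zeta = np.exp(2j * np.pi / n)        # a primitive n-th root of unity

# Multiplication by g sends g^k to g^(k+1): a cyclic shift matrix.
P = np.zeros((n, n), dtype=complex)
for k in range(n):
    P[(k + 1) % n, k] = 1

# The element 1 + zeta*g + zeta^2*g^2 + ... as a coordinate vector.
v = np.array([zeta**k for k in range(n)])
print(np.allclose(P @ v, zeta**-1 * v))   # True: eigenvalue zeta^(-1)
```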
This is the explicit form in this case of the abstract result that over an algebraically closed field K (such as the complex numbers) the regular representation of G is completely reducible, provided that the characteristic of K (if it is a prime number p) doesn't divide the order of G. That is called Maschke's theorem. In this case the condition on the characteristic is implied by the existence of a primitive n-th root of unity, which cannot happen in the case of prime characteristic p dividing n.
Circulant determinants were first encountered in nineteenth century mathematics, and the consequence of their diagonalisation drawn. Namely, the determinant of a circulant is the product of the n eigenvalues for the n eigenvectors described above. The basic work of Frobenius on group representations started with the motivation of finding analogous factorisations of the group determinants for any finite G; that is, the determinants of arbitrary matrices representing elements of K[G] acting by multiplication on the basis elements given by g in G. Unless G is abelian, the factorisation must contain non-linear factors corresponding to irreducible representations of G of degree > 1.
Topological group case
For a topological group G, the regular representation in the above sense should be replaced by a suitable space of functions on G, with G acting by translation. See Peter–Weyl theorem for the compact case. If G is a Lie group that is neither compact nor abelian, this is a difficult matter of harmonic analysis. The locally compact abelian case is part of the Pontryagin duality theory.
Normal bases in Galois theory
In Galois theory it is shown that for a field L, and a finite group G of automorphisms of L, the fixed field K of G has [L:K] = |G|. In fact we can say more: L viewed as a K[G]-module is the regular representation. This is the content of the normal basis theorem, a normal basis being an element x of L such that the g(x) for g in G are a vector space basis for L over K. Such x exist, and each one gives a K[G]-isomorphism from L to K[G]. From the point of view of algebraic number theory it is of interest to study normal integral bases, where we try to replace L and K by the rings of algebraic integers they contain. One can see already in the case of the Gaussian integers that such bases may not exist: a + bi and a − bi can never form a Z-module basis of Z[i] because 1 cannot be an integer combination. The reasons are studied in depth in Galois module theory.
More general algebras
The regular representation of a group ring is such that the left-hand and right-hand regular representations give isomorphic modules (and we often need not distinguish the cases). Given an algebra A over a field, it doesn't immediately make sense to ask about the relation between A as left-module over itself, and as right-module. In the group case, the mapping on basis elements g of K[G] defined by taking the inverse element gives an isomorphism of K[G] to its opposite ring. For A general, such a structure is called a Frobenius algebra. As the name implies, these were introduced by Frobenius in the nineteenth century. They have been shown to be related to topological quantum field theory in 1 + 1 dimensions by a particular instance of the cobordism hypothesis.
See also
• Fundamental representation
• Permutation representation
• Quasiregular representation
Regular local ring
In commutative algebra, a regular local ring is a Noetherian local ring having the property that the minimal number of generators of its maximal ideal is equal to its Krull dimension.[1] In symbols, let A be a Noetherian local ring with maximal ideal m, and suppose a1, ..., an is a minimal set of generators of m. Then by Krull's principal ideal theorem n ≥ dim A, and A is defined to be regular if n = dim A.
The appellation regular is justified by the geometric meaning. A point x on an algebraic variety X is nonsingular if and only if the local ring ${\mathcal {O}}_{X,x}$ of germs at x is regular. (See also: regular scheme.) Regular local rings are not related to von Neumann regular rings.[lower-alpha 1]
For Noetherian local rings, there is the following chain of inclusions:
Universally catenary rings ⊃ Cohen–Macaulay rings ⊃ Gorenstein rings ⊃ complete intersection rings ⊃ regular local rings
Characterizations
There are a number of useful definitions of a regular local ring, one of which is mentioned above. In particular, if $A$ is a Noetherian local ring with maximal ideal ${\mathfrak {m}}$, then the following are equivalent definitions:
• Let ${\mathfrak {m}}=(a_{1},\ldots ,a_{n})$ where $n$ is chosen as small as possible. Then $A$ is regular if
$\dim A=n\,$,
where the dimension is the Krull dimension. The minimal generating set $a_{1},\ldots ,a_{n}$ is then called a regular system of parameters.
• Let $k=A/{\mathfrak {m}}$ be the residue field of $A$. Then $A$ is regular if
$\dim _{k}{\mathfrak {m}}/{\mathfrak {m}}^{2}=\dim A\,$,
where the second dimension is the Krull dimension.
• Let ${\mbox{gl dim }}A:=\sup\{\operatorname {pd} M\mid M{\text{ is an }}A{\text{-module}}\}$ be the global dimension of $A$ (i.e., the supremum of the projective dimensions of all $A$-modules.) Then $A$ is regular if
${\mbox{gl dim }}A<\infty \,$,
in which case, ${\mbox{gl dim }}A=\dim A$.
The multiplicity one criterion states:[2] if the completion of a Noetherian local ring A is unmixed (in the sense that there is no embedded prime divisor of the zero ideal and for each minimal prime p, $\dim {\widehat {A}}/p=\dim {\widehat {A}}$) and if the multiplicity of A is one, then A is regular. (The converse is always true: the multiplicity of a regular local ring is one.) This criterion corresponds to a geometric intuition in algebraic geometry that a local ring of an intersection is regular if and only if the intersection is a transversal intersection.
In the positive characteristic case, there is the following important result due to Kunz: a Noetherian local ring $R$ of positive characteristic p is regular if and only if the Frobenius morphism $R\to R,r\mapsto r^{p}$ is flat and $R$ is reduced. No similar result is known in characteristic zero (simply because it is unclear what should replace the Frobenius).
Examples
1. Every field is a regular local ring. These have (Krull) dimension 0. In fact, the fields are exactly the regular local rings of dimension 0.
2. Any discrete valuation ring is a regular local ring of dimension 1 and the regular local rings of dimension 1 are exactly the discrete valuation rings. Specifically, if k is a field and X is an indeterminate, then the ring of formal power series k[[X]] is a regular local ring having (Krull) dimension 1.
3. If p is an ordinary prime number, the ring of p-adic integers is an example of a discrete valuation ring, and consequently a regular local ring, which does not contain a field.
4. More generally, if k is a field and X1, X2, ..., Xd are indeterminates, then the ring of formal power series k[[X1, X2, ..., Xd]] is a regular local ring having (Krull) dimension d.
5. If A is a regular local ring, then it follows that the formal power series ring A[[x]] is regular local.
6. If Z is the ring of integers and X is an indeterminate, the ring Z[X](2, X) (i.e. the ring Z[X] localized in the prime ideal (2, X) ) is an example of a 2-dimensional regular local ring which does not contain a field.
7. By the structure theorem of Irvin Cohen, a complete regular local ring of Krull dimension d that contains a field k is a power series ring in d variables over an extension field of k.
Non-examples
The ring $A=k[x]/(x^{2})$ is not a regular local ring since it is finite dimensional but does not have finite global dimension. For example, there is an infinite resolution
$\cdots {\xrightarrow {\cdot x}}{\frac {k[x]}{(x^{2})}}{\xrightarrow {\cdot x}}{\frac {k[x]}{(x^{2})}}\to k\to 0$
Using another one of the characterizations, $A$ has exactly one prime ideal ${\mathfrak {m}}={\frac {(x)}{(x^{2})}}$, so the ring has Krull dimension $0$, but ${\mathfrak {m}}^{2}$ is the zero ideal, so ${\mathfrak {m}}/{\mathfrak {m}}^{2}$ has dimension at least $1$ as a $k$-vector space. (In fact it is equal to $1$, since $x+{\mathfrak {m}}^{2}$ is a basis.)
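The failure of finite global dimension can also be seen with two lines of linear algebra: on the basis {1, x} of A, multiplication by x has a matrix whose kernel and image coincide (both spanned by x), so the resolution above must repeat forever. A sketch:

```python
import numpy as np

# Multiplication by x on A = k[x]/(x^2) in the basis {1, x}:
# 1 -> x and x -> x^2 = 0.
M = np.array([[0, 0],
              [1, 0]])

print((M @ M == 0).all())          # True: x^2 = 0, so im(.x) <= ker(.x)
print(np.linalg.matrix_rank(M))    # 1: dim im = dim ker = 1, so they agree
# At every stage the kernel of .x is nonzero and equals the previous
# image, so the free resolution of k never terminates.
```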
Basic properties
The Auslander–Buchsbaum theorem states that every regular local ring is a unique factorization domain.
Every localization, as well as the completion, of a regular local ring is regular.
If $(A,{\mathfrak {m}})$ is a complete regular local ring that contains a field, then
$A\cong k[[x_{1},\ldots ,x_{d}]]$,
where $k=A/{\mathfrak {m}}$ is the residue field, and $d=\dim A$, the Krull dimension.
See also: Serre's inequality on height and Serre's multiplicity conjectures.
Origin of basic notions
See also: smooth scheme
Regular local rings were originally defined by Wolfgang Krull in 1937,[3] but they first became prominent in the work of Oscar Zariski a few years later,[4][5] who showed that geometrically, a regular local ring corresponds to a smooth point on an algebraic variety. Let Y be an algebraic variety contained in affine n-space over a perfect field, and suppose that Y is the vanishing locus of the polynomials f1,...,fm. Y is nonsingular at P if Y satisfies a Jacobian condition: If M = (∂fi/∂xj) is the matrix of partial derivatives of the defining equations of the variety, then the rank of the matrix found by evaluating M at P is n − dim Y. Zariski proved that Y is nonsingular at P if and only if the local ring of Y at P is regular. (Zariski observed that this can fail over non-perfect fields.) This implies that smoothness is an intrinsic property of the variety, in other words it does not depend on where or how the variety is embedded in affine space. It also suggests that regular local rings should have good properties, but before the introduction of techniques from homological algebra very little was known in this direction. Once such techniques were introduced in the 1950s, Auslander and Buchsbaum proved that every regular local ring is a unique factorization domain.
Another property suggested by geometric intuition is that the localization of a regular local ring should again be regular. Again, this lay unsolved until the introduction of homological techniques. It was Jean-Pierre Serre who found a homological characterization of regular local rings: A local ring A is regular if and only if A has finite global dimension, i.e. if every A-module has a projective resolution of finite length. It is easy to show that the property of having finite global dimension is preserved under localization, and consequently that localizations of regular local rings at prime ideals are again regular.
This justifies the definition of regularity for non-local commutative rings given in the next section.
Regular ring
For the unrelated regular rings introduced by John von Neumann, see von Neumann regular ring.
In commutative algebra, a regular ring is a commutative Noetherian ring, such that the localization at every prime ideal is a regular local ring: that is, every such localization has the property that the minimal number of generators of its maximal ideal is equal to its Krull dimension.
The origin of the term regular ring lies in the fact that an affine variety is nonsingular (that is every point is regular) if and only if its ring of regular functions is regular.
For regular rings, Krull dimension agrees with global homological dimension.
Jean-Pierre Serre defined a regular ring as a commutative noetherian ring of finite global homological dimension. His definition is stronger than the definition above, which allows regular rings of infinite Krull dimension.
Examples of regular rings include fields (of dimension zero) and Dedekind domains. If A is regular then so is A[X], with dimension one greater than that of A.
In particular if k is a field, the ring of integers, or a principal ideal domain, then the polynomial ring $k[X_{1},\ldots ,X_{n}]$ is regular. In the case of a field, this is Hilbert's syzygy theorem.
Any localization of a regular ring is regular as well.
A regular ring is reduced[lower-alpha 2] but need not be an integral domain. For example, the product of two regular integral domains is regular, but not an integral domain.[6]
See also
• Geometrically regular ring
Notes
1. A local von Neumann regular ring is a division ring, so the two conditions are not very compatible.
2. since a ring is reduced if and only if its localizations at prime ideals are.
Citations
1. Atiyah & Macdonald 1969, p. 123, Theorem 11.22.
2. Herrmann, M., S. Ikeda, and U. Orbanz: Equimultiplicity and Blowing Up. An Algebraic Study with an Appendix by B. Moonen. Springer Verlag, Berlin Heidelberg New-York, 1988. Theorem 6.8.
3. Krull, Wolfgang (1937), "Beiträge zur Arithmetik kommutativer Integritätsbereiche III", Math. Z.: 745–766, doi:10.1007/BF01160110
4. Zariski, Oscar (1940), "Algebraic varieties over ground fields of characteristic 0", Amer. J. Math., 62: 187–221, doi:10.2307/2371447
5. Zariski, Oscar (1947), "The concept of a simple point of an abstract algebraic variety", Trans. Amer. Math. Soc., 62: 1–52, doi:10.1090/s0002-9947-1947-0021694-1
6. Is a regular ring a domain
References
• Atiyah, Michael F.; Macdonald, Ian G. (1969), Introduction to Commutative Algebra, Addison-Wesley, MR 0242802
• Kunz, Characterizations of regular local rings of characteristic p. Amer. J. Math. 91 (1969), 772–784.
• Tsit-Yuen Lam, Lectures on Modules and Rings, Springer-Verlag, 1999, ISBN 978-1-4612-0525-8. Chap.5.G.
• Jean-Pierre Serre, Local algebra, Springer-Verlag, 2000, ISBN 3-540-66641-9. Chap.IV.D.
• Regular rings at The Stacks Project
Regular scheme
In algebraic geometry, a regular scheme is a locally Noetherian scheme whose local rings are regular everywhere.[1][2] Every smooth scheme is regular, and every regular scheme of finite type over a perfect field is smooth.[3]
Not to be confused with regular embedding.
For an example of a regular scheme that is not smooth, see Geometrically regular ring#Examples.
See also
• Étale morphism
• Dimension of an algebraic variety
• Glossary of scheme theory
• Smooth completion
References
1. Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, Springer, p. 238, ISBN 9780387902449. Note that the cited definition that Hartshorne gives is slightly misleading. A locally Noetherian scheme is regular if all its local rings are regular, but it is not the case for schemes which are not locally Noetherian. See the cited Stacks Project page for more details.
2. "Section 28.9 (02IR): Regular schemes—The Stacks project". stacks.math.columbia.edu. Retrieved 2022-02-18.
3. Demazure, Michel (1980), Introduction to algebraic geometry and algebraic groups, North-Holland Mathematics Studies, vol. 39, North-Holland, Proposition 3.2, p. 168, ISBN 9780080871509.
Regular sequence
In commutative algebra, a regular sequence is a sequence of elements of a commutative ring which are as independent as possible, in a precise sense. This is the algebraic analogue of the geometric notion of a complete intersection.
For regular Cauchy sequence, see Cauchy sequence § In constructive mathematics. For a k-regular sequence of integers, see k-regular sequence.
Definitions
For a commutative ring R and an R-module M, an element r in R is called a non-zero-divisor on M if r m = 0 implies m = 0 for m in M. An M-regular sequence is a sequence
r1, ..., rd in R
such that ri is not a zero-divisor on M/(r1, ..., ri-1)M for i = 1, ..., d.[1] Some authors also require that M/(r1, ..., rd)M is not zero. Intuitively, to say that r1, ..., rd is an M-regular sequence means that these elements "cut M down" as much as possible, when we pass successively from M to M/(r1)M, to M/(r1, r2)M, and so on.
An R-regular sequence is called simply a regular sequence. That is, r1, ..., rd is a regular sequence if r1 is a non-zero-divisor in R, r2 is a non-zero-divisor in the ring R/(r1), and so on. In geometric language, if X is an affine scheme and r1, ..., rd is a regular sequence in the ring of regular functions on X, then we say that the closed subscheme {r1=0, ..., rd=0} ⊂ X is a complete intersection subscheme of X.
Being a regular sequence may depend on the order of the elements. For example, x, y(1-x), z(1-x) is a regular sequence in the polynomial ring C[x, y, z], while y(1-x), z(1-x), x is not a regular sequence. But if R is a Noetherian local ring and the elements ri are in the maximal ideal, or if R is a graded ring and the ri are homogeneous of positive degree, then any permutation of a regular sequence is a regular sequence.
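The failure of the permuted order can be checked with a computer algebra system. Since (y(1−x)) is a principal ideal, its single generator is a Gröbner basis, so polynomial reduction detects membership; the sketch below (variable names ours) shows that z(1−x) kills the nonzero class of y modulo y(1−x):

```python
from sympy import symbols, reduced, expand

x, y, z = symbols('x y z')
g1 = y * (1 - x)   # after the first step of the order y(1-x), z(1-x), x

# z*(1-x) * y = z * (y*(1-x)) lies in (g1): remainder 0 ...
_, r = reduced(expand(z * (1 - x) * y), [g1], x, y, z)
print(r)   # 0

# ... while y itself does not: its class mod (g1) is nonzero.
_, r = reduced(y, [g1], x, y, z)
print(r)   # y
# Hence z*(1-x) is a zero-divisor on C[x,y,z]/(y*(1-x)), and the
# permuted sequence is not regular.
```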
Let R be a Noetherian ring, I an ideal in R, and M a finitely generated R-module. The depth of I on M, written depthR(I, M) or just depth(I, M), is the supremum of the lengths of all M-regular sequences of elements of I. When R is a Noetherian local ring and M is a finitely generated R-module, the depth of M, written depthR(M) or just depth(M), means depthR(m, M); that is, it is the supremum of the lengths of all M-regular sequences in the maximal ideal m of R. In particular, the depth of a Noetherian local ring R means the depth of R as a R-module. That is, the depth of R is the maximum length of a regular sequence in the maximal ideal.
For a Noetherian local ring R, the depth of the zero module is ∞,[2] whereas the depth of a nonzero finitely generated R-module M is at most the Krull dimension of M (also called the dimension of the support of M).[3]
Examples
• Given an integral domain $R$ any nonzero $f\in R$ gives a regular sequence.
• For a prime number p, the local ring Z(p) is the subring of the rational numbers consisting of fractions whose denominator is not a multiple of p. The element p is a non-zero-divisor in Z(p), and the quotient ring of Z(p) by the ideal generated by p is the field Z/(p). Therefore p cannot be extended to a longer regular sequence in the maximal ideal (p), and in fact the local ring Z(p) has depth 1.
• For any field k, the elements x1, ..., xn in the polynomial ring A = k[x1, ..., xn] form a regular sequence. It follows that the localization R of A at the maximal ideal m = (x1, ..., xn) has depth at least n. In fact, R has depth equal to n; that is, there is no regular sequence in the maximal ideal of length greater than n.
• More generally, let R be a regular local ring with maximal ideal m. Then any elements r1, ..., rd of m which map to a basis for m/m2 as an R/m-vector space form a regular sequence.
An important case is when the depth of a local ring R is equal to its Krull dimension: R is then said to be Cohen-Macaulay. The three examples shown are all Cohen-Macaulay rings. Similarly, a finitely generated R-module M is said to be Cohen-Macaulay if its depth equals its dimension.
Non-examples
A simple non-example of a regular sequence is given by the sequence $(xy,x^{2})$ of elements in $\mathbb {C} [x,y]$ since
$\cdot x^{2}:{\frac {\mathbb {C} [x,y]}{(xy)}}\to {\frac {\mathbb {C} [x,y]}{(xy)}}$
has a non-trivial kernel given by the ideal $(y)\subset \mathbb {C} [x,y]/(xy)$. Similar examples can be found by looking at minimal generators for the ideals of reducible schemes with multiple components and taking a fattened subscheme of one component.
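The kernel element is visible by direct computation: x²·y = x·(xy), so y, which is nonzero modulo (xy), is annihilated by multiplication by x². A one-line check:

```python
from sympy import symbols, expand

x, y = symbols('x y')
# x**2 * y equals x * (x*y), an element of the ideal (x*y):
print(expand(x**2 * y - x * (x*y)))   # 0, so y lies in the kernel of .x**2
```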
Applications
• If r1, ..., rd is a regular sequence in a ring R, then the Koszul complex is an explicit free resolution of R/(r1, ..., rd) as an R-module, of the form:
$0\rightarrow R^{\binom {d}{d}}\rightarrow \cdots \rightarrow R^{\binom {d}{1}}\rightarrow R\rightarrow R/(r_{1},\ldots ,r_{d})\rightarrow 0$
In the special case where R is the polynomial ring k[r1, ..., rd], this gives a resolution of k as an R-module.
• If I is an ideal generated by a regular sequence in a ring R, then the associated graded ring
$\oplus _{j\geq 0}I^{j}/I^{j+1}$
is isomorphic to the polynomial ring (R/I)[x1, ..., xd]. In geometric terms, it follows that a local complete intersection subscheme Y of any scheme X has a normal bundle which is a vector bundle, even though Y may be singular.
See also
• Complete intersection ring
• Koszul complex
• Depth (ring theory)
• Cohen-Macaulay ring
Notes
1. N. Bourbaki. Algèbre. Chapitre 10. Algèbre Homologique. Springer-Verlag (2006). X.9.6.
2. A. Grothendieck. EGA IV, Part 1. Publications Mathématiques de l'IHÉS 20 (1964), 259 pp. 0.16.4.5.
3. N. Bourbaki. Algèbre Commutative. Chapitre 10. Springer-Verlag (2007). Th. X.4.2.
References
• Bourbaki, Nicolas (2006), Algèbre. Chapitre 10. Algèbre Homologique, Berlin, New York: Springer-Verlag, doi:10.1007/978-3-540-34493-3, ISBN 978-3-540-34492-6, MR 2327161
• Bourbaki, Nicolas (2007), Algèbre Commutative. Chapitre 10, Berlin, New York: Springer-Verlag, doi:10.1007/978-3-540-34395-0, ISBN 978-3-540-34394-3, MR 2333539
• Winfried Bruns; Jürgen Herzog, Cohen-Macaulay rings. Cambridge Studies in Advanced Mathematics, 39. Cambridge University Press, Cambridge, 1993. xii+403 pp. ISBN 0-521-41068-1
• David Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry. Springer Graduate Texts in Mathematics, no. 150. ISBN 0-387-94268-8
• Grothendieck, Alexander (1964), "Éléments de géometrie algébrique IV. Première partie", Publications Mathématiques de l'Institut des Hautes Études Scientifiques, 20: 1–259, MR 0173675
Regular singular point
In mathematics, in the theory of ordinary differential equations in the complex plane $\mathbb {C} $, the points of $\mathbb {C} $ are classified into ordinary points, at which the equation's coefficients are analytic functions, and singular points, at which some coefficient has a singularity. Then amongst singular points, an important distinction is made between a regular singular point, where the growth of solutions is bounded (in any small sector) by an algebraic function, and an irregular singular point, where the full solution set requires functions with higher growth rates. This distinction occurs, for example, between the hypergeometric equation, with three regular singular points, and the Bessel equation which is in a sense a limiting case, but where the analytic properties are substantially different.
Formal definitions
More precisely, consider an ordinary linear differential equation of n-th order
$f^{(n)}(z)+\sum _{i=0}^{n-1}p_{i}(z)f^{(i)}(z)=0$
with pi(z) meromorphic functions.
The equation should be studied on the Riemann sphere to include the point at infinity as a possible singular point. A Möbius transformation may be applied to move ∞ into the finite part of the complex plane if required; see the example for the Bessel differential equation below.
Then the Frobenius method based on the indicial equation may be applied to find possible solutions that are power series times complex powers (z − a)r near any given a in the complex plane where r need not be an integer; this function may exist, therefore, only thanks to a branch cut extending out from a, or on a Riemann surface of some punctured disc around a. This presents no difficulty for a an ordinary point (Lazarus Fuchs 1866). When a is a regular singular point, which by definition means that
$p_{n-i}(z)$
has a pole of order at most i at a, the Frobenius method also can be made to work and provide n independent solutions near a.
Otherwise the point a is an irregular singularity. In that case the monodromy group relating solutions by analytic continuation has less to say in general, and the solutions are harder to study, except in terms of their asymptotic expansions. The irregularity of an irregular singularity is measured by the Poincaré rank (Arscott 1995).
The regularity condition is a kind of Newton polygon condition, in the sense that the allowed poles are in a region, when plotted against i, bounded by a line at 45° to the axes.
An ordinary differential equation whose only singular points, including the point at infinity, are regular singular points is called a Fuchsian ordinary differential equation.
Examples for second order differential equations
In this case the equation above is reduced to:
$f''(x)+p_{1}(x)f'(x)+p_{0}(x)f(x)=0.$
One distinguishes the following cases:
• Point a is an ordinary point when functions p1(x) and p0(x) are analytic at x = a.
• Point a is a regular singular point if p1(x) has a pole of order at most 1 at x = a and p0(x) has a pole of order at most 2 at x = a.
• Otherwise point a is an irregular singular point.
We can check whether there is an irregular singular point at infinity by using the substitution $w=1/x$ and the relations:
${\frac {df}{dx}}=-w^{2}{\frac {df}{dw}}$
${\frac {d^{2}f}{dx^{2}}}=w^{4}{\frac {d^{2}f}{dw^{2}}}+2w^{3}{\frac {df}{dw}}$
We can thus transform the equation to an equation in w, and check what happens at w = 0. If $p_{1}(x)$ and $p_{0}(x)$ are quotients of polynomials, then there will be an irregular singular point at x = ∞ unless the denominator of $p_{1}(x)$ has degree at least one more than its numerator, and the denominator of $p_{0}(x)$ has degree at least two more than its numerator.
Listed below are several examples from ordinary differential equations from mathematical physics that have singular points and known solutions.
Bessel differential equation
This is an ordinary differential equation of second order. It is found in the solution to Laplace's equation in cylindrical coordinates:
$x^{2}{\frac {d^{2}f}{dx^{2}}}+x{\frac {df}{dx}}+(x^{2}-\alpha ^{2})f=0$
for an arbitrary real or complex number α (the order of the Bessel function). The most common and important special case is where α is an integer n.
Dividing this equation by x2 gives:
${\frac {d^{2}f}{dx^{2}}}+{\frac {1}{x}}{\frac {df}{dx}}+\left(1-{\frac {\alpha ^{2}}{x^{2}}}\right)f=0.$
In this case p1(x) = 1/x has a pole of first order at x = 0. When α ≠ 0, p0(x) = (1 − α2/x2) has a pole of second order at x = 0. Thus this equation has a regular singularity at 0.
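These pole-order computations are easy to mechanize. The following SymPy sketch (an ad hoc illustration; the helper pole_order is not a library function) confirms the orders at x = 0 and, anticipating the substitution carried out just below, at infinity:

import sympy as sp

x, w = sp.symbols('x w')
alpha = sp.symbols('alpha', positive=True)

def pole_order(f, var, at=0, kmax=8):
    # smallest k >= 0 such that (var - at)**k * f has a finite limit at `at`
    for k in range(kmax):
        if sp.limit((var - at)**k * f, var, at).is_finite:
            return k
    raise ValueError("pole order exceeds kmax")

# Bessel equation in the form f'' + p1 f' + p0 f = 0
p1, p0 = 1/x, 1 - alpha**2/x**2
print(pole_order(p1, x), pole_order(p0, x))   # 1, 2: regular singular point at 0

# coefficients after the substitution x = 1/w (derived in the text below)
q1, q0 = 1/w, 1/w**4 - alpha**2/w**2
print(pole_order(q1, w), pole_order(q0, w))   # 1, 4: irregular singularity at infinity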
To see what happens when x → ∞ one has to use a Möbius transformation, for example $x=1/w$. After performing the algebra:
${\frac {d^{2}f}{dw^{2}}}+{\frac {1}{w}}{\frac {df}{dw}}+\left[{\frac {1}{w^{4}}}-{\frac {\alpha ^{2}}{w^{2}}}\right]f=0$
Now at $w=0$,
$p_{1}(w)={\frac {1}{w}}$
has a pole of first order, but
$p_{0}(w)={\frac {1}{w^{4}}}-{\frac {\alpha ^{2}}{w^{2}}}$
has a pole of fourth order. Thus, this equation has an irregular singularity at $w=0$ corresponding to x at ∞.
Legendre differential equation
This is an ordinary differential equation of second order. It is found in the solution of Laplace's equation in spherical coordinates:
${\frac {d}{dx}}\left[(1-x^{2}){\frac {d}{dx}}f\right]+\ell (\ell +1)f=0.$
Expanding the derivative in the square bracket gives:
$\left(1-x^{2}\right){d^{2}f \over dx^{2}}-2x{df \over dx}+\ell (\ell +1)f=0.$
And dividing by (1 − x2):
${\frac {d^{2}f}{dx^{2}}}-{\frac {2x}{1-x^{2}}}{\frac {df}{dx}}+{\frac {\ell (\ell +1)}{1-x^{2}}}f=0.$
This differential equation has regular singular points at ±1 and ∞.
Hermite differential equation
One encounters this ordinary second order differential equation in solving the one-dimensional time independent Schrödinger equation
$E\psi =-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}+V(x)\psi $
for a harmonic oscillator. In this case the potential energy V(x) is:
$V(x)={\frac {1}{2}}m\omega ^{2}x^{2}.$
This leads to the following ordinary second order differential equation:
${\frac {d^{2}f}{dx^{2}}}-2x{\frac {df}{dx}}+\lambda f=0.$
This differential equation has an irregular singularity at ∞. When λ = 2n for a nonnegative integer n, it has polynomial solutions, the Hermite polynomials.
Hypergeometric equation
The equation may be defined as
$z(1-z){\frac {d^{2}f}{dz^{2}}}+\left[c-(a+b+1)z\right]{\frac {df}{dz}}-abf=0.$
Dividing both sides by z(1 − z) gives:
${\frac {d^{2}f}{dz^{2}}}+{\frac {c-(a+b+1)z}{z(1-z)}}{\frac {df}{dz}}-{\frac {ab}{z(1-z)}}f=0.$
This differential equation has regular singular points at 0, 1 and ∞. A solution is the hypergeometric function.
|
Wikipedia
|
Regular skew apeirohedron
In geometry, a regular skew apeirohedron is an infinite regular skew polyhedron, with either skew regular faces or skew regular vertex figures.
History
According to Coxeter, in 1926 John Flinders Petrie generalized the concept of regular skew polygons (nonplanar polygons) to finite regular skew polyhedra in 4 dimensions, and infinite regular skew apeirohedra in 3 dimensions (described here).
Coxeter identified three such forms, with planar faces and skew vertex figures, two of which are complements of each other. He named them with a modified Schläfli symbol {l,m|n}, meaning m l-gonal faces around each vertex, with the holes identified as n-gonal missing faces; their vertex figures are skew polygons, zig-zagging between two planes.
The regular skew polyhedra, represented by {l,m|n}, follow this equation:
• 2 sin(π/l) · sin(π/m) = cos(π/n)
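A quick numerical check (an illustration only) confirms that the three Euclidean solutions listed in the next section satisfy this identity:

import math

# verify 2 sin(pi/l) sin(pi/m) = cos(pi/n) for {4,6|4}, {6,4|4}, {6,6|3}
for l, m, n in [(4, 6, 4), (6, 4, 4), (6, 6, 3)]:
    lhs = 2 * math.sin(math.pi / l) * math.sin(math.pi / m)
    rhs = math.cos(math.pi / n)
    print((l, m, n), math.isclose(lhs, rhs))   # True in all three cases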
Regular skew apeirohedra of Euclidean 3-space
The three Euclidean solutions in 3-space are {4,6|4}, {6,4|4}, and {6,6|3}. John Conway named them mucube, muoctahedron, and mutetrahedron respectively for multiple cube, octahedron, and tetrahedron.[1]
1. Mucube: {4,6|4}: 6 squares about each vertex (related to the cubic honeycomb; constructed from cubic cells by removing two opposite faces from each and linking sets of six together around a faceless cube).
2. Muoctahedron: {6,4|4}: 4 hexagons about each vertex (related to the bitruncated cubic honeycomb; constructed from truncated octahedra with their square faces removed, linking pairs of holes together).
3. Mutetrahedron: {6,6|3}: 6 hexagons about each vertex (related to the quarter cubic honeycomb; constructed from truncated tetrahedron cells by removing the triangle faces and linking sets of four around a faceless tetrahedron).
Coxeter gives these regular skew apeirohedra {2q,2r|p} with extended chiral symmetry [[(p,q,p,r)]+] which he says is isomorphic to his abstract group (2q,2r|2,p). The related honeycomb has the extended symmetry [[(p,q,p,r)]].[2]
The three regular skew apeirohedra of Euclidean 3-space, with their Coxeter group symmetry and related honeycomb:
• Mucube {4,6|4}: symmetry [[4,3,4]] (chiral subgroup [[4,3,4]+]); related honeycomb t0,3{4,3,4}
• Muoctahedron {6,4|4}: symmetry [[4,3,4]] (chiral subgroup [[4,3,4]+]); related honeycomb 2t{4,3,4}
• Mutetrahedron {6,6|3}: symmetry [[3[4]]] (chiral subgroup [[3[4]]+]); related honeycomb q{4,3,4}
Regular skew apeirohedra in hyperbolic 3-space
In 1967, C. W. L. Garner identified 31 hyperbolic skew apeirohedra with regular skew polygon vertex figures, found in a similar search to the 3 above from Euclidean space.[3]
These represent 14 compact and 17 paracompact regular skew polyhedra in hyperbolic space, constructed from the symmetry of a subset of linear and cyclic Coxeter group graphs of the form [[(p,q,p,r)]]. These define regular skew polyhedra {2q,2r|p} and their duals {2r,2q|p}. For the special case of linear graph groups r = 2, this represents the Coxeter group [p,q,p], which generates the regular skew polyhedra {2q,4|p} and {4,2q|p}. All of these exist as a subset of faces of the convex uniform honeycombs in hyperbolic space.
The skew apeirohedron shares the same antiprism vertex figure with the honeycomb, but only the zig-zag edge faces of the vertex figure are realized, while the other faces make "holes".
The 14 compact regular skew apeirohedra, listed by Coxeter group, each with its related honeycomb:
• [3,5,3]: {10,4|3} (honeycomb 2t{3,5,3}) and its dual {4,10|3} (honeycomb t0,3{3,5,3})
• [5,3,5]: {6,4|5} (honeycomb 2t{5,3,5}) and its dual {4,6|5} (honeycomb t0,3{5,3,5})
• [(4,3,3,3)]: {8,6|3} (honeycomb ct{(4,3,3,3)}) and its dual {6,8|3} (honeycomb ct{(3,3,4,3)})
• [(5,3,3,3)]: {10,6|3} (honeycomb ct{(5,3,3,3)}) and its dual {6,10|3} (honeycomb ct{(3,3,5,3)})
• [(4,3,4,3)]: {8,8|3} (honeycomb ct{(4,3,4,3)}) and its dual {6,6|4} (honeycomb ct{(3,4,3,4)})
• [(5,3,4,3)]: {8,10|3} (honeycomb ct{(4,3,5,3)}) and its dual {10,8|3} (honeycomb ct{(5,3,4,3)})
• [(5,3,5,3)]: {10,10|3} (honeycomb ct{(5,3,5,3)}) and its dual {6,6|5} (honeycomb ct{(3,5,3,5)})
The 17 paracompact regular skew apeirohedra, listed by Coxeter group, each with its related honeycomb:
• [4,4,4]: {8,4|4} (honeycomb 2t{4,4,4}) and its dual {4,8|4} (honeycomb t0,3{4,4,4})
• [3,6,3]: {12,4|3} (honeycomb 2t{3,6,3}) and its dual {4,12|3} (honeycomb t0,3{3,6,3})
• [6,3,6]: {6,4|6} (honeycomb 2t{6,3,6}) and its dual {4,6|6} (honeycomb t0,3{6,3,6})
• [(4,4,4,3)]: {8,6|4} (honeycomb ct{(4,4,3,4)}) and its dual {6,8|4} (honeycomb ct{(3,4,4,4)})
• [(4,4,4,4)]: {8,8|4} (honeycomb q{4,4,4}), which is self-dual
• [(6,3,3,3)]: {12,6|3} (honeycomb ct{(6,3,3,3)}) and its dual {6,12|3} (honeycomb ct{(3,3,6,3)})
• [(6,3,4,3)]: {12,8|3} (honeycomb ct{(6,3,4,3)}) and its dual {8,12|3} (honeycomb ct{(4,3,6,3)})
• [(6,3,5,3)]: {12,10|3} (honeycomb ct{(6,3,5,3)}) and its dual {10,12|3} (honeycomb ct{(5,3,6,3)})
• [(6,3,6,3)]: {12,12|3} (honeycomb ct{(6,3,6,3)}) and its dual {6,6|6} (honeycomb ct{(3,6,3,6)})
See also
• Skew apeirohedron
• Regular skew polyhedron
• Tetrastix
References
1. The Symmetry of Things, 2008, Chapter 23 Objects with Primary Symmetry, Infinite Platonic Polyhedra, pp. 333–335
2. Coxeter, Regular and Semi-Regular Polytopes II, 2.34
3. Garner, C. W. L. Regular Skew Polyhedra in Hyperbolic Three-Space. Can. J. Math. 19, 1179–1186, 1967. Note: His paper says there are 32, but one is self-dual, leaving 31.
• Petrie–Coxeter Maps Revisited PDF, Isabel Hubard, Egon Schulte, Asia Ivic Weiss, 2005
• John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, ISBN 978-1-56881-220-5,
• Peter McMullen, Four-Dimensional Regular Polyhedra, Discrete & Computational Geometry September 2007, Volume 38, Issue 2, pp 355–387
• Coxeter, Regular Polytopes, Third edition, (1973), Dover edition, ISBN 0-486-61480-8
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 2) H.S.M. Coxeter, "The Regular Sponges, or Skew Polyhedra", Scripta Mathematica 6 (1939) 240–244.
• (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10]
• (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559–591]
• Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999, ISBN 0-486-40919-8 (Chapter 5: Regular Skew Polyhedra in three and four dimensions and their topological analogues, Proceedings of the London Mathematics Society, Ser. 2, Vol 43, 1937.)
• Coxeter, H. S. M. Regular Skew Polyhedra in Three and Four Dimensions. Proc. London Math. Soc. 43, 33–62, 1937.
|
Wikipedia
|
Submanifold
In mathematics, a submanifold of a manifold M is a subset S which itself has the structure of a manifold, and for which the inclusion map S → M satisfies certain properties. There are different types of submanifolds depending on exactly which properties are required. Different authors often have different definitions.
Formal definition
In the following we assume all manifolds are differentiable manifolds of class Cr for a fixed r ≥ 1, and all morphisms are differentiable of class Cr.
Immersed submanifolds
An immersed submanifold of a manifold M is the image S of an immersion map f : N → M; in general this image will not be a submanifold as a subset, and an immersion map need not even be injective (one-to-one) – it can have self-intersections.[1]
More narrowly, one can require that the map f : N → M be an injection (one-to-one), in which case we call it an injective immersion, and define an immersed submanifold to be the image subset S together with a topology and differential structure such that S is a manifold and the inclusion f is a diffeomorphism: this is just the topology on N, which in general will not agree with the subset topology; in general, the subset S is not a submanifold of M in the subset topology.
Given any injective immersion f : N → M the image of N in M can be uniquely given the structure of an immersed submanifold so that f : N → f(N) is a diffeomorphism. It follows that immersed submanifolds are precisely the images of injective immersions.
The submanifold topology on an immersed submanifold need not be the subspace topology inherited from M. In general, it will be finer than the subspace topology (i.e. have more open sets).
Immersed submanifolds occur in the theory of Lie groups where Lie subgroups are naturally immersed submanifolds. They also appear in the study of foliations where immersed submanifolds provide the right context to prove the Frobenius theorem.
Embedded submanifolds
An embedded submanifold (also called a regular submanifold), is an immersed submanifold for which the inclusion map is a topological embedding. That is, the submanifold topology on S is the same as the subspace topology.
Given any embedding f : N → M of a manifold N in M the image f(N) naturally has the structure of an embedded submanifold. That is, embedded submanifolds are precisely the images of embeddings.
There is an intrinsic definition of an embedded submanifold which is often useful. Let M be an n-dimensional manifold, and let k be an integer such that 0 ≤ k ≤ n. A k-dimensional embedded submanifold of M is a subset S ⊂ M such that for every point p ∈ S there exists a chart (U ⊂ M, φ : U → Rn) containing p such that φ(S ∩ U) is the intersection of a k-dimensional plane with φ(U). The pairs (S ∩ U, φ|S ∩ U) form an atlas for the differential structure on S.
Alexander's theorem and the Jordan–Schoenflies theorem are good examples of smooth embeddings.
Other variations
There are some other variations of submanifolds used in the literature. A neat submanifold is a manifold whose boundary agrees with the boundary of the entire manifold.[2] Sharpe (1997) defines a type of submanifold which lies somewhere between an embedded submanifold and an immersed submanifold.
Many authors define topological submanifolds also. These are the same as Cr submanifolds with r = 0.[3] An embedded topological submanifold is not necessarily regular in the sense of the existence of a local chart at each point extending the embedding. Counterexamples include wild arcs and wild knots.
Properties
Given any immersed submanifold S of M, the tangent space at a point p in S can naturally be thought of as a linear subspace of the tangent space at p in M. This follows from the fact that the inclusion map is an immersion and provides an injection
$i_{\ast }:T_{p}S\to T_{p}M.$
Suppose S is an immersed submanifold of M. If the inclusion map i : S → M is closed then S is actually an embedded submanifold of M. Conversely, if S is an embedded submanifold which is also a closed subset then the inclusion map is closed. The inclusion map i : S → M is closed if and only if it is a proper map (i.e. inverse images of compact sets are compact). If i is closed then S is called a closed embedded submanifold of M. Closed embedded submanifolds form the nicest class of submanifolds.
Submanifolds of real coordinate space
Smooth manifolds are sometimes defined as embedded submanifolds of real coordinate space Rn, for some n. This point of view is equivalent to the usual, abstract approach, because, by the Whitney embedding theorem, any second-countable smooth (abstract) m-manifold can be smoothly embedded in R2m.
Notes
1. Sharpe 1997, p. 26.
2. Kosinski 2007, p. 27.
3. Lang 1999, pp. 25–26. Choquet-Bruhat 1968, p. 11
References
• Choquet-Bruhat, Yvonne (1968). Géométrie différentielle et systèmes extérieurs. Paris: Dunod.
• Kosinski, Antoni Albert (2007) [1993]. Differential manifolds. Mineola, New York: Dover Publications. ISBN 978-0-486-46244-8.
• Lang, Serge (1999). Fundamentals of Differential Geometry. Graduate Texts in Mathematics. New York: Springer. ISBN 978-0-387-98593-0.
• Lee, John (2003). Introduction to Smooth Manifolds. Graduate Texts in Mathematics 218. New York: Springer. ISBN 0-387-95495-3.
• Sharpe, R. W. (1997). Differential Geometry: Cartan's Generalization of Klein's Erlangen Program. New York: Springer. ISBN 0-387-94732-9.
• Warner, Frank W. (1983). Foundations of Differentiable Manifolds and Lie Groups. New York: Springer. ISBN 0-387-90894-3.
Manifolds (Glossary)
Basic concepts
• Topological manifold
• Atlas
• Differentiable/Smooth manifold
• Differential structure
• Smooth atlas
• Submanifold
• Riemannian manifold
• Smooth map
• Submersion
• Pushforward
• Tangent space
• Differential form
• Vector field
Main results (list)
• Atiyah–Singer index
• Darboux's
• De Rham's
• Frobenius
• Generalized Stokes
• Hopf–Rinow
• Noether's
• Sard's
• Whitney embedding
Maps
• Curve
• Diffeomorphism
• Local
• Geodesic
• Exponential map
• in Lie theory
• Foliation
• Immersion
• Integral curve
• Lie derivative
• Section
• Submersion
Types of
manifolds
• Closed
• (Almost) Complex
• (Almost) Contact
• Fibered
• Finsler
• Flat
• G-structure
• Hadamard
• Hermitian
• Hyperbolic
• Kähler
• Kenmotsu
• Lie group
• Lie algebra
• Manifold with boundary
• Oriented
• Parallelizable
• Poisson
• Prime
• Quaternionic
• Hypercomplex
• (Pseudo−, Sub−) Riemannian
• Rizza
• (Almost) Symplectic
• Tame
Tensors
Vectors
• Distribution
• Lie bracket
• Pushforward
• Tangent space
• bundle
• Torsion
• Vector field
• Vector flow
Covectors
• Closed/Exact
• Covariant derivative
• Cotangent space
• bundle
• De Rham cohomology
• Differential form
• Vector-valued
• Exterior derivative
• Interior product
• Pullback
• Ricci curvature
• flow
• Riemann curvature tensor
• Tensor field
• density
• Volume form
• Wedge product
Bundles
• Adjoint
• Affine
• Associated
• Cotangent
• Dual
• Fiber
• (Co) Fibration
• Jet
• Lie algebra
• (Stable) Normal
• Principal
• Spinor
• Subbundle
• Tangent
• Tensor
• Vector
Connections
• Affine
• Cartan
• Ehresmann
• Form
• Generalized
• Koszul
• Levi-Civita
• Principal
• Vector
• Parallel transport
Related
• Classification of manifolds
• Gauge theory
• History
• Morse theory
• Moving frame
• Singularity theory
Generalizations
• Banach manifold
• Diffeology
• Diffiety
• Fréchet manifold
• K-theory
• Orbifold
• Secondary calculus
• over commutative algebras
• Sheaf
• Stratifold
• Supermanifold
• Stratified space
Authority control: National
• Israel
• United States
|
Wikipedia
|
Euclidean tilings by convex regular polygons
Euclidean plane tilings by convex regular polygons have been widely used since antiquity. The first systematic mathematical treatment was that of Kepler in his Harmonices Mundi (Latin: The Harmony of the World, 1619).
Example periodic tilings
A regular tiling has one type of regular face.
A semiregular or uniform tiling has one type of vertex, but two or more types of faces.
A k-uniform tiling has k types of vertices, and two or more types of regular faces.
A non-edge-to-edge tiling can have different-sized regular faces.
Notation of Euclidean tilings
Euclidean tilings are usually named after Cundy & Rollett's notation.[1] This notation represents (i) the number of vertices, (ii) the number of polygons around each vertex (arranged clockwise), and (iii) the number of sides of each of those polygons. For example, 36; 36; 34.6 denotes a tiling with 3 vertices belonging to 2 different vertex types, so it is classed as a '3-uniform (2-vertex types)' tiling. Broken down: 36; 36 (both of different transitivity class), or (36)2, indicates two vertices (the superscript 2), each surrounded by 6 equilateral triangles; the final vertex type, 34.6, has four contiguous equilateral triangles and a single regular hexagon.
However, this notation has two main problems, related to ambiguous conformation and uniqueness.[2] First, for k-uniform tilings the notation does not explain the relationships between the vertices, which makes it impossible to generate the tiling of the plane from the notation alone. Second, some distinct tessellations share the same nomenclature: they are very similar, differing only in the relative positions of their hexagons, so the nomenclature is not unique for each tessellation.
To solve those problems, GomJau-Hogg's notation[3] was introduced as a slightly modified version of the research and notation presented in 2012[2] on the generation and nomenclature of tessellations and double-layer grids. Antwerp v3.0,[4] a free online application, allows for the infinite generation of regular polygon tilings through a set of shape placement stages and iterative rotation and reflection operations, obtained directly from GomJau-Hogg's notation.
Regular tilings
Following Grünbaum and Shephard (section 1.3), a tiling is said to be regular if the symmetry group of the tiling acts transitively on the flags of the tiling, where a flag is a triple consisting of a mutually incident vertex, edge and tile of the tiling. This means that, for every pair of flags, there is a symmetry operation mapping the first flag to the second. This is equivalent to the tiling being an edge-to-edge tiling by congruent regular polygons. There must be six equilateral triangles, four squares or three regular hexagons at a vertex, yielding the three regular tessellations.
Regular tilings (3):
• C&R: 36; GJ-H: 3/m30/r(h2); t = 1, e = 1; symmetry p6m (*632)
• C&R: 63; GJ-H: 6/m30/r(h1); t = 1, e = 1; symmetry p6m (*632)
• C&R: 44; GJ-H: 4/m45/r(h1); t = 1, e = 1; symmetry p4m (*442)
C&R: Cundy & Rollet's notation; GJ-H: Notation of GomJau-Hogg
Archimedean, uniform or semiregular tilings
Further information: List of convex uniform tilings
Vertex-transitivity means that for every pair of vertices there is a symmetry operation mapping the first vertex to the second.[5]
If the requirement of flag-transitivity is relaxed to one of vertex-transitivity, while the condition that the tiling is edge-to-edge is kept, there are eight additional tilings possible, known as Archimedean, uniform or semiregular tilings. Note that there are two mirror-image (enantiomorphic or chiral) forms of the 34.6 (snub hexagonal) tiling, only one of which is listed in the following table. All other regular and semiregular tilings are achiral.
Uniform tilings (8):
• C&R: 3.122; GJ-H: 12-3/m30/r(h3); t = 2, e = 2; t{6,3}
• C&R: 3.4.6.4; GJ-H: 6-4-3/m30/r(c2); t = 3, e = 2; rr{3,6}
• C&R: 4.6.12; GJ-H: 12-6,4/m30/r(c2); t = 3, e = 3; tr{3,6}
• C&R: (3.6)2; GJ-H: 6-3-6/m30/r(v4); t = 2, e = 1; r{6,3}
• C&R: 4.82; GJ-H: 8-4/m90/r(h4); t = 2, e = 2; t{4,4}
• C&R: 32.4.3.4; GJ-H: 4-3-3,4/r90/r(h2); t = 2, e = 2; s{4,4}
• C&R: 33.42; GJ-H: 4-3/m90/r(h2); t = 2, e = 3; {3,6}:e
• C&R: 34.6; GJ-H: 6-3-3/r60/r(h5); t = 3, e = 3; sr{3,6}
C&R: Cundy & Rollet's notation; GJ-H: Notation of GomJau-Hogg
Grünbaum and Shephard distinguish between the two descriptions: Archimedean refers only to the local property that the arrangement of tiles around each vertex is the same, while uniform refers to the global property of vertex-transitivity. Though these yield the same set of tilings in the plane, in other spaces there are Archimedean tilings which are not uniform.
Plane-vertex tilings
There are 17 combinations of regular convex polygons that form 21 types of plane-vertex tilings.[6][7] The polygons in each meet at a point with no gap or overlap. Listing by their vertex figures, one combination has 6 polygons, three have 5 polygons, seven have 4 polygons, and ten have 3 polygons.[8]
As detailed in the sections above, three of them can make regular tilings (63, 44, 36), and eight more can make semiregular or archimedean tilings (3.12.12, 4.6.12, 4.8.8, (3.6)2, 3.4.6.4, 3.3.4.3.4, 3.3.3.4.4, 3.3.3.3.6). Four of them can exist in higher k-uniform tilings (3.3.4.12, 3.4.3.12, 3.3.6.6, 3.4.4.6), while six cannot be used to completely tile the plane by regular polygons with no gaps or overlaps: they only tessellate the plane entirely when irregular polygons are included (3.7.42, 3.8.24, 3.9.18, 3.10.15, 4.5.20, 5.5.10).[9] A short search recovering these counts is sketched after the table below.
The plane-vertex tilings, grouped by the number of polygons at the vertex:
• 6 polygons: 36
• 5 polygons: 3.3.4.3.4; 3.3.3.4.4; 3.3.3.3.6
• 4 polygons: 3.3.4.12; 3.4.3.12; 3.3.6.6; (3.6)2; 3.4.4.6; 3.4.6.4; 44
• 3 polygons: 3.7.42; 3.8.24; 3.9.18; 3.10.15; 3.12.12; 4.5.20; 4.6.12; 4.8.8; 5.5.10; 63
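The 17 combinations can be recovered by a short exhaustive search. The sketch below (an illustration; the function name species is arbitrary) finds all multisets of polygon sizes whose interior angles, $(n-2)\pi /n$ each, sum to a full turn of $2\pi $:

from fractions import Fraction

def species(k, target, smallest=3):
    # non-decreasing k-tuples of polygon sizes whose interior angles,
    # measured as fractions (n - 2)/n of a half-turn, sum to `target`
    if k == 0:
        if target == 0:
            yield ()
        return
    if target >= k:            # each fraction is < 1, so no solution remains
        return
    n = smallest
    while k * Fraction(n - 2, n) <= target:   # smallest angle cannot exceed the average
        for rest in species(k - 1, target - Fraction(n - 2, n), n):
            yield (n,) + rest
        n += 1

# a full turn corresponds to target = 2 in these units; at most 6 polygons fit
found = [s for k in (3, 4, 5, 6) for s in species(k, Fraction(2))]
print(len(found))   # 17 combinations (21 types, counting distinct cyclic orders)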
k-uniform tilings
Such periodic tilings may be classified by the number of orbits of vertices, edges and tiles. If there are k orbits of vertices, a tiling is known as k-uniform or k-isogonal; if there are t orbits of tiles, as t-isohedral; if there are e orbits of edges, as e-isotoxal.
k-uniform tilings with the same vertex figures can be further identified by their wallpaper group symmetry.
1-uniform tilings include 3 regular tilings, and 8 semiregular ones, with 2 or more types of regular polygon faces. There are 20 2-uniform tilings, 61 3-uniform tilings, 151 4-uniform tilings, 332 5-uniform tilings and 673 6-uniform tilings. Each can be grouped by the number m of distinct vertex figures, which are also called m-Archimedean tilings.[10]
Finally, if the number of types of vertices is the same as the uniformity (m = k below), then the tiling is said to be Krotenheerdt. In general, the uniformity is greater than or equal to the number of types of vertices (m ≥ k), as different types of vertices necessarily have different orbits, but not vice versa. Setting m = k, there are 11 such tilings for k = 1; 20 such tilings for k = 2; 39 such tilings for k = 3; 33 such tilings for k = 4; 15 such tilings for k = 5; 10 such tilings for k = 6; and 7 such tilings for k = 7.
Below is an example of a 3-uniform tiling, #57 of the 61: it can be colored by polygons (by sides: yellow triangles, red squares) or by orbits (its 4 isohedral positions, with 3 shaded colors of triangles).
k-uniform, m-Archimedean tiling counts:[11][12]
• k = 1: 11 tilings, all with m = 1 (total 11)
• k = 2: 20 tilings, all with m = 2 (total 20)
• k = 3: 22 with m = 2 and 39 with m = 3 (total 61)
• k = 4: 33 with m = 2, 85 with m = 3 and 33 with m = 4 (total 151)
• k = 5: 74 with m = 2, 149 with m = 3, 94 with m = 4 and 15 with m = 5 (total 332)
• k = 6: 100 with m = 2, 284 with m = 3, 187 with m = 4, 92 with m = 5 and 10 with m = 6 (total 673)
• k = 7: total 1472, of which 7 have m = 7 (counts for smaller m unknown)
• k = 8: total 2850, of which 20 have m = 8
• k = 9: total 5960, of which 8 have m = 9
• k = 10: total 11866, of which 27 have m = 10
• k = 11: total 24459, of which 1 has m = 11
• k = 12: total 49794, none with m = 12
• k = 13: total 103082, none with m = 13
• k ≥ 14: totals unknown, none with m = k
For every fixed m with 2 ≤ m ≤ 14 the total number of such tilings over all k is infinite, only the 11 uniform tilings have m = 1, and none have m ≥ 15.
2-uniform tilings
There are twenty (20) 2-uniform tilings of the Euclidean plane (also called 2-isogonal tilings or demiregular tilings).[5]: 62-67 [13][14] Vertex types are listed for each. If two tilings share the same two vertex types, they are given subscripts 1, 2.
2-uniform tilings (20), with GomJau-Hogg notation and (t, e) counts:
• [36; 32.4.3.4]: 3-4-3/m30/r(c3) (t = 3, e = 3)
• [3.4.6.4; 32.4.3.4]: 6-4-3,3/m30/r(h1) (t = 4, e = 4)
• [3.4.6.4; 33.42]: 6-4-3-3/m30/r(h5) (t = 4, e = 4)
• [3.4.6.4; 3.42.6]: 6-4-3,4-6/m30/r(c4) (t = 5, e = 5)
• [4.6.12; 3.4.6.4]: 12-4,6-3/m30/r(c3) (t = 4, e = 4)
• [36; 32.4.12]: 12-3,4-3/m30/r(c3) (t = 4, e = 4)
• [3.12.12; 3.4.3.12]: 12-0,3,3-0,4/m45/m(h1) (t = 3, e = 3)
• [36; 32.62]: 3-6/m30/r(c2) (t = 2, e = 3)
• [36; 34.6]1: 6-3,3-3/m30/r(h1) (t = 3, e = 3)
• [36; 34.6]2: 6-3-3,3-3/r60/r(h8) (t = 5, e = 7)
• [32.62; 34.6]: 6-3/m90/r(h1) (t = 2, e = 4)
• [3.6.3.6; 32.62]: 6-3,6/m90/r(h3) (t = 2, e = 3)
• [3.42.6; 3.6.3.6]2: 6-3,4-6-3,4-6,4/m90/r(c6) (t = 3, e = 4)
• [3.42.6; 3.6.3.6]1: 6-3,4/m90/r(h4) (t = 4, e = 4)
• [33.42; 32.4.3.4]1: 4-3,3-4,3/r90/m(h3) (t = 4, e = 5)
• [33.42; 32.4.3.4]2: 4-3,3,3-4,3/r(c2)/r(h13)/r(h45) (t = 3, e = 6)
• [44; 33.42]1: 4-3/m(h4)/m(h3)/r(h2) (t = 2, e = 4)
• [44; 33.42]2: 4-4-3-3/m90/r(h3) (t = 3, e = 5)
• [36; 33.42]1: 4-3,4-3,3/m90/r(h3) (t = 3, e = 4)
• [36; 33.42]2: 4-3-3-3/m90/r(h7)/r(h5) (t = 4, e = 5)
Higher k-uniform tilings
k-uniform tilings have been enumerated up to k = 6. There are 673 6-uniform tilings of the Euclidean plane. Brian Galebach's search reproduced Krotenheerdt's list of 10 6-uniform tilings with 6 distinct vertex types, and also found 92 with 5 vertex types, 187 with 4 vertex types, 284 with 3 vertex types, and 100 with 2 vertex types.
Fractalizing k-uniform tilings
There are many ways of generating new k-uniform tilings from old k-uniform tilings. For example, notice that the 2-uniform [3.12.12; 3.4.3.12] tiling has a square lattice, the 4(3-1)-uniform [343.12; (3.122)3] tiling has a snub square lattice, and the 5(3-1-1)-uniform [334.12; 343.12; (3.12.12)3] tiling has an elongated triangular lattice. These higher-order uniform tilings use the same lattice but possess greater complexity. The fractalizing basis for these tilings is as follows:[15]
The fractalizing rules replace each of four shapes: the triangle, the square, the hexagon, and the dissected dodecagon.
The side lengths are dilated by a factor of $2+{\sqrt {3}}$.
This can similarly be done with the truncated trihexagonal tiling as a basis, with corresponding dilation of $3+{\sqrt {3}}$.
The corresponding rules again act on the triangle, square, hexagon, and dissected dodecagon; worked examples fractalize the truncated hexagonal and truncated trihexagonal tilings.
Tilings that are not edge-to-edge
Convex regular polygons can also form plane tilings that are not edge-to-edge. Such tilings can be considered edge-to-edge as tilings by nonregular polygons, by treating adjacent collinear edges as single edges.
There are seven families of such isogonal tilings, each family having a real-valued parameter determining the overlap between sides of adjacent tiles or the ratio between the edge lengths of different tiles. Two of the families are generated from shifted rows of squares, in either progressive or zig-zag positions. Grünbaum and Shephard call these tilings uniform, although this contradicts Coxeter's definition of uniformity, which requires edge-to-edge regular polygons.[16] Such isogonal tilings are actually topologically identical to the uniform tilings, with different geometric proportions.
Periodic isogonal tilings by non-edge-to-edge convex regular polygons include: rows of squares with horizontal offsets; rows of triangles with horizontal offsets; a tiling by squares; three hexagons surrounding each triangle; six triangles surrounding every hexagon; and triangles of three sizes. The symmetry groups occurring among them are cmm (2*22), p2 (2222), p4m (*442), p6 (632) and p3 (333), and topologically they are the hexagonal, square, truncated square, truncated hexagonal, hexagonal and trihexagonal tilings.
See also
• Grid (spatial index)
• Uniform tilings in hyperbolic plane
• List of uniform tilings
• Wythoff symbol
• Tessellation
• Wallpaper group
• Regular polyhedron (the Platonic solids)
• Semiregular polyhedron (including the Archimedean solids)
• Hyperbolic geometry
• Penrose tiling
• Tiling with rectangles
• Lattice (group)
References
1. Cundy, H.M.; Rollett, A.P. (1981). Mathematical Models. Stradbroke (UK): Tarquin Publications.
2. Gomez-Jauregui, Valentin al.; Otero, Cesar; et al. (2012). "Generation and Nomenclature of Tessellations and Double-Layer Grids". Journal of Structural Engineering. 138 (7): 843–852. doi:10.1061/(ASCE)ST.1943-541X.0000532. hdl:10902/5869.
3. Gomez-Jauregui, Valentin; Hogg, Harrison; et al. (2021). "GomJau-Hogg's Notation for Automatic Generation of k-Uniform Tessellations with ANTWERP v3.0". Symmetry. 13 (12): 2376. Bibcode:2021Symm...13.2376G. doi:10.3390/sym13122376.
4. Hogg, Harrison; Gomez-Jauregui, Valentin. "Antwerp 3.0".
5. Critchlow, K. (1969). Order in Space: A Design Source Book. London: Thames and Hudson. pp. 60–61.
6. Dallas, Elmslie William (1855), The Elements of Plane Practical Geometry, Etc, John W. Parker & Son, p. 134
7. Tilings and Patterns, Figure 2.1.1, p.60
8. Tilings and Patterns, p.58-69
9. "Pentagon-Decagon Packing". American Mathematical Society. AMS. Retrieved 2022-03-07.
10. k-uniform tilings by regular polygons Archived 2015-06-30 at the Wayback Machine Nils Lenngren, 2009
11. "n-Uniform Tilings". probabilitysports.com. Retrieved 2019-06-21.
12. Sloane, N. J. A. (ed.). "Sequence A068599 (Number of n-uniform tilings.)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2023-01-07.
13. Tilings and Patterns, Grünbaum and Shephard 1986, pp. 65-67
14. "In Search of Demiregular Tilings" (PDF). Archived from the original (PDF) on 2016-05-07. Retrieved 2015-06-04.
15. Chavey, Darrah (2014). "Tilings by regular polygons III: Dodecagon-dense tilings". Symmetry-Culture and Science. 25 (3): 193–210. S2CID 33928615.
16. Tilings by regular polygons p.236
• Grünbaum, Branko; Shephard, Geoffrey C. (1977). "Tilings by regular polygons". Math. Mag. 50 (5): 227–247. doi:10.2307/2689529. JSTOR 2689529.
• Grünbaum, Branko; Shephard, G. C. (1978). "The ninety-one types of isogonal tilings in the plane". Trans. Am. Math. Soc. 252: 335–353. doi:10.1090/S0002-9947-1978-0496813-3. MR 0496813.
• Debroey, I.; Landuyt, F. (1981). "Equitransitive edge-to-edge tilings". Geometriae Dedicata. 11 (1): 47–60. doi:10.1007/BF00183189. S2CID 122636363.
• Grünbaum, Branko; Shephard, G. C. (1987). Tilings and Patterns. W. H. Freeman and Company. ISBN 0-7167-1193-1.
• Ren, Ding; Reay, John R. (1987). "The boundary characteristic and Pick's theorem in the Archimedean planar tilings". J. Comb. Theory A. 44 (1): 110–119. doi:10.1016/0097-3165(87)90063-X.
• Chavey, D. (1989). "Tilings by Regular Polygons—II: A Catalog of Tilings". Computers & Mathematics with Applications. 17: 147–165. doi:10.1016/0898-1221(89)90156-9.
• Order in Space: A design source book, Keith Critchlow, 1970 ISBN 978-0-670-52830-1
• Sommerville, Duncan MacLaren Young (1958). An Introduction to the Geometry of n Dimensions. Dover Publications. Chapter X: The Regular Polytopes
• Préa, P. (1997). "Distance sequences and percolation thresholds in Archimedean Tilings". Mathl. Comput. Modelling. 26 (8–10): 317–320. doi:10.1016/S0895-7177(97)00216-1.
• Kovic, Jurij (2011). "Symmetry-type graphs of Platonic and Archimedean solids". Math. Commun. 16 (2): 491–507.
• Pellicer, Daniel; Williams, Gordon (2012). "Minimal Covers of the Archimedean Tilings, Part 1". The Electronic Journal of Combinatorics. 19 (3): #P6. doi:10.37236/2512.
• Dale Seymour and Jill Britton, Introduction to Tessellations, 1989, ISBN 978-0866514613, pp. 50–57
External links
Euclidean and general tiling links:
• n-uniform tilings, Brian Galebach
• Dutch, Steve. "Uniform Tilings". Archived from the original on 2006-09-09. Retrieved 2006-09-09.
• Mitchell, K. "Semi-Regular Tilings". Retrieved 2006-09-09.
• Weisstein, Eric W. "Tessellation". MathWorld.
• Weisstein, Eric W. "Semiregular tessellation". MathWorld.
• Weisstein, Eric W. "Demiregular tessellation". MathWorld.
|
Wikipedia
|
Regular space
In topology and related fields of mathematics, a topological space X is called a regular space if every closed subset C of X and a point p not contained in C admit non-overlapping open neighborhoods.[1] Thus p and C can be separated by neighborhoods. This condition is known as Axiom T3. The term "T3 space" usually means "a regular Hausdorff space". These conditions are examples of separation axioms.
Definitions
A topological space X is a regular space if, given any closed set F and any point x that does not belong to F, there exists a neighbourhood U of x and a neighbourhood V of F that are disjoint. Concisely put, it must be possible to separate x and F with disjoint neighborhoods.
A T3 space or regular Hausdorff space is a topological space that is both regular and a Hausdorff space. (A Hausdorff space or T2 space is a topological space in which any two distinct points are separated by neighbourhoods.) It turns out that a space is T3 if and only if it is both regular and T0. (A T0 or Kolmogorov space is a topological space in which any two distinct points are topologically distinguishable, i.e., for every pair of distinct points, at least one of them has an open neighborhood not containing the other.) Indeed, if a space is Hausdorff then it is T0, and each T0 regular space is Hausdorff: given two distinct points, at least one of them misses the closure of the other one, so (by regularity) there exist disjoint neighborhoods separating one point from (the closure of) the other.
Although the definitions presented here for "regular" and "T3" are not uncommon, there is significant variation in the literature: some authors switch the definitions of "regular" and "T3" as they are used here, or use both terms interchangeably. This article uses the term "regular" freely, but will usually say "regular Hausdorff", which is unambiguous, instead of the less precise "T3". For more on this issue, see History of the separation axioms.
A locally regular space is a topological space where every point has an open neighbourhood that is regular. Every regular space is locally regular, but the converse is not true. A classical example of a locally regular space that is not regular is the bug-eyed line.
Relationships to other separation axioms
A regular space is necessarily also preregular, i.e., any two topologically distinguishable points can be separated by neighbourhoods. Since a Hausdorff space is the same as a preregular T0 space, a regular space which is also T0 must be Hausdorff (and thus T3). In fact, a regular Hausdorff space satisfies the slightly stronger condition T2½. (However, such a space need not be completely Hausdorff.) Thus, the definition of T3 may cite T0, T1, or T2½ instead of T2 (Hausdorffness); all are equivalent in the context of regular spaces.
Speaking more theoretically, the conditions of regularity and T3-ness are related by Kolmogorov quotients. A space is regular if and only if its Kolmogorov quotient is T3; and, as mentioned, a space is T3 if and only if it's both regular and T0. Thus a regular space encountered in practice can usually be assumed to be T3, by replacing the space with its Kolmogorov quotient.
There are many results for topological spaces that hold for both regular and Hausdorff spaces. Most of the time, these results hold for all preregular spaces; they were listed for regular and Hausdorff spaces separately because the idea of preregular spaces came later. On the other hand, those results that are truly about regularity generally don't also apply to nonregular Hausdorff spaces.
There are many situations where another condition of topological spaces (such as normality, pseudonormality, paracompactness, or local compactness) will imply regularity if some weaker separation axiom, such as preregularity, is satisfied.[2] Such conditions often come in two versions: a regular version and a Hausdorff version. Although Hausdorff spaces aren't generally regular, a Hausdorff space that is also (say) locally compact will be regular, because any Hausdorff space is preregular. Thus from a certain point of view, regularity is not really the issue here, and we could impose a weaker condition instead to get the same result. However, definitions are usually still phrased in terms of regularity, since this condition is more well known than any weaker one.
Most topological spaces studied in mathematical analysis are regular; in fact, they are usually completely regular, which is a stronger condition. Regular spaces should also be contrasted with normal spaces.
Examples and nonexamples
A zero-dimensional space with respect to the small inductive dimension has a base consisting of clopen sets. Every such space is regular.
As described above, any completely regular space is regular, and any T0 space that is not Hausdorff (and hence not preregular) cannot be regular. Most examples of regular and nonregular spaces studied in mathematics may be found in those two articles. On the other hand, spaces that are regular but not completely regular, or preregular but not regular, are usually constructed only to provide counterexamples to conjectures, showing the boundaries of possible theorems. Of course, one can easily find regular spaces that are not T0, and thus not Hausdorff, such as an indiscrete space, but these examples provide more insight on the T0 axiom than on regularity. An example of a regular space that is not completely regular is the Tychonoff corkscrew.
Most interesting spaces in mathematics that are regular also satisfy some stronger condition. Thus, regular spaces are usually studied to find properties and theorems, such as the ones below, that are actually applied to completely regular spaces, typically in analysis.
There exist Hausdorff spaces that are not regular. An example is the set R with the topology generated by sets of the form U ∖ C, where U is an open set in the usual sense and C is any countable subset of U.
Elementary properties
Suppose that X is a regular space. Then, given any point x and neighbourhood G of x, there is a closed neighbourhood E of x that is a subset of G. In fancier terms, the closed neighbourhoods of x form a local base at x. In fact, this property characterises regular spaces; if the closed neighbourhoods of each point in a topological space form a local base at that point, then the space must be regular.
Taking the interiors of these closed neighbourhoods, we see that the regular open sets form a base for the open sets of the regular space X. This property is actually weaker than regularity; a topological space whose regular open sets form a base is semiregular.
References
1. Munkres, James R. (2000). Topology (2nd ed.). Prentice Hall. ISBN 0-13-181629-2.
2. "general topology - Preregular and locally compact implies regular". Mathematics Stack Exchange.
|
Wikipedia
|
Regularity structure
Martin Hairer's theory of regularity structures provides a framework for studying a large class of subcritical parabolic stochastic partial differential equations arising from quantum field theory.[1] The framework covers the Kardar–Parisi–Zhang equation, the $\Phi _{3}^{4}$ equation and the parabolic Anderson model, all of which require renormalization in order to have a well-defined notion of solution.
Hairer won the 2021 Breakthrough Prize in mathematics for introducing regularity structures.[2]
Definition
A regularity structure is a triple ${\mathcal {T}}=(A,T,G)$ consisting of:
• a subset $A$ (index set) of $\mathbb {R} $ that is bounded from below and has no accumulation points;
• the model space: a graded vector space $T=\oplus _{\alpha \in A}T_{\alpha }$, where each $T_{\alpha }$ is a Banach space; and
• the structure group: a group $G$ of continuous linear operators $\Gamma \colon T\to T$ such that, for each $\alpha \in A$ and each $\tau \in T_{\alpha }$, we have $(\Gamma -1)\tau \in \oplus _{\beta <\alpha }T_{\beta }$.
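A guiding example (the canonical one from Hairer's paper) is the polynomial regularity structure: take $A=\mathbb {N} $, let $T_{n}$ be the space of homogeneous polynomials of degree $n$ in $d$ commuting indeterminates $X_{1},\dots ,X_{d}$, and let $G\cong \mathbb {R} ^{d}$ act by translations, $\Gamma _{h}P(X)=P(X+h)$. Since $P(X+h)-P(X)$ only contains monomials of degree strictly less than that of $P$, the condition $(\Gamma -1)\tau \in \oplus _{\beta <\alpha }T_{\beta }$ holds, and the abstract expansions defined next reduce to ordinary Taylor polynomials.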
A further key notion in the theory of regularity structures is that of a model for a regularity structure, which is a concrete way of associating to any $\tau \in T$ and $x_{0}\in \mathbb {R} ^{d}$ a "Taylor polynomial" based at $x_{0}$ and represented by $\tau $, subject to some consistency requirements. More precisely, a model for ${\mathcal {T}}=(A,T,G)$ on $\mathbb {R} ^{d}$, with $d\geq 1$ consists of two maps
$\Pi \colon \mathbb {R} ^{d}\to \mathrm {Lin} (T;{\mathcal {S}}'(\mathbb {R} ^{d}))$,
$\Gamma \colon \mathbb {R} ^{d}\times \mathbb {R} ^{d}\to G$.
Thus, $\Pi $ assigns to each point $x$ a linear map $\Pi _{x}$, which is a linear map from $T$ into the space of distributions on $\mathbb {R} ^{d}$; $\Gamma $ assigns to any two points $x$ and $y$ a bounded operator $\Gamma _{xy}$, which has the role of converting an expansion based at $y$ into one based at $x$. These maps $\Pi $ and $\Gamma $ are required to satisfy the algebraic conditions
$\Gamma _{xy}\Gamma _{yz}=\Gamma _{xz}$,
$\Pi _{x}\Gamma _{xy}=\Pi _{y}$,
and the analytic conditions that, given any $r>|\inf A|$, any compact set $K\subset \mathbb {R} ^{d}$, and any $\gamma >0$, there exists a constant $C>0$ such that the bounds
$|(\Pi _{x}\tau )(\varphi _{x}^{\lambda })|\leq C\lambda ^{\alpha }\|\tau \|_{T_{\alpha }}$,
$\|\Gamma _{xy}\tau \|_{T_{\beta }}\leq C|x-y|^{\alpha -\beta }\|\tau \|_{T_{\alpha }}$,
hold uniformly for all $r$-times continuously differentiable test functions $\varphi \colon \mathbb {R} ^{d}\to \mathbb {R} $ with unit ${\mathcal {C}}^{r}$ norm, supported in the unit ball about the origin in $\mathbb {R} ^{d}$, for all points $x,y\in K$, all $0<\lambda \leq 1$, and all $\tau \in T_{\alpha }$ with $\beta <\alpha \leq \gamma $. Here $\varphi _{x}^{\lambda }\colon \mathbb {R} ^{d}\to \mathbb {R} $ denotes the shifted and scaled version of $\varphi $ given by
$\varphi _{x}^{\lambda }(y)=\lambda ^{-d}\varphi \left({\frac {y-x}{\lambda }}\right)$.
References
1. Hairer, Martin (2014). "A theory of regularity structures". Inventiones Mathematicae. 198 (2): 269–504. arXiv:1303.5113. Bibcode:2014InMat.198..269H. doi:10.1007/s00222-014-0505-4. S2CID 119138901.
2. Sample, Ian (2020-09-10). "UK mathematician wins richest prize in academia". The Guardian. ISSN 0261-3077. Retrieved 2020-09-13.
|
Wikipedia
|
Regularity theory
Regularity is a property of elliptic partial differential equations such as Laplace's equation. Hilbert's nineteenth problem was concerned with this concept.[1]
References
1. Fernández-Real, Xavier; Ros-Oton, Xavier (2022-12-06). Regularity Theory for Elliptic PDE. arXiv:2301.01564. doi:10.4171/ZLAM/28. ISBN 978-3-98547-028-0. S2CID 254389061.
|
Wikipedia
|
Regularization by spectral filtering
Spectral regularization is any of a class of regularization techniques used in machine learning to control the impact of noise and prevent overfitting. Spectral regularization can be used in a broad range of applications, from deblurring images to classifying emails into a spam folder and a non-spam folder. For instance, in the email classification example, spectral regularization can be used to reduce the impact of noise and prevent overfitting when a machine learning system is being trained on a labeled set of emails to learn how to tell a spam and a non-spam email apart.
Spectral regularization algorithms rely on methods that were originally defined and studied in the theory of ill-posed inverse problems (for instance, see[1]) focusing on the inversion of a linear operator (or a matrix) that possibly has a bad condition number or an unbounded inverse. In this context, regularization amounts to substituting the original operator by a bounded operator called the "regularization operator" that has a condition number controlled by a regularization parameter,[2] a classical example being Tikhonov regularization. To ensure stability, this regularization parameter is tuned based on the level of noise.[2] The main idea behind spectral regularization is that each regularization operator can be described using spectral calculus as an appropriate filter on the eigenvalues of the operator that defines the problem, and the role of the filter is to "suppress the oscillatory behavior corresponding to small eigenvalues".[2] Therefore, each algorithm in the class of spectral regularization algorithms is defined by a suitable filter function (which needs to be derived for that particular algorithm). Three of the most commonly used regularization algorithms for which spectral filtering is well-studied are Tikhonov regularization, Landweber iteration, and truncated singular value decomposition (TSVD). As for choosing the regularization parameter, examples of candidate methods to compute this parameter include the discrepancy principle, generalized cross validation, and the L-curve criterion.[3]
It is of note that the notion of spectral filtering studied in the context of machine learning is closely connected to the literature on function approximation (in signal processing).
Notation
The training set is defined as $S=\{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\}$, where $X$ is the $n\times d$ input matrix and $Y=(y_{1},\dots ,y_{n})$ is the output vector. Where applicable, the kernel function is denoted by $k$, and the $n\times n$ kernel matrix is denoted by $K$ which has entries $K_{ij}=k(x_{i},x_{j})$ and ${\mathcal {H}}$ denotes the Reproducing Kernel Hilbert Space (RKHS) with kernel $k$. The regularization parameter is denoted by $\lambda $.
(Note: For $g\in G$ and $f\in F$, with $G$ and $F$ being Hilbert spaces, given a linear, continuous operator $L$, assume that $g=Lf$ holds. In this setting, the direct problem would be to solve for $g$ given $f$ and the inverse problem would be to solve for $f$ given $g$. If the solution exists, is unique and stable, the inverse problem (i.e. the problem of solving for $f$) is well-posed; otherwise, it is ill-posed.)
Relation to the theory of ill-posed inverse problems
The connection between the regularized least squares (RLS) estimation problem (Tikhonov regularization setting) and the theory of ill-posed inverse problems is an example of how spectral regularization algorithms are related to the theory of ill-posed inverse problems.
The RLS estimator solves
$\min _{f\in {\mathcal {H}}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}+\lambda \|f\|_{\mathcal {H}}^{2}$
and the RKHS allows for expressing this RLS estimator as $f_{S}^{\lambda }(X)=\sum _{i=1}^{n}c_{i}k(x,x_{i})$ where $(K+n\lambda I)c=Y$ with $c=(c_{1},\dots ,c_{n})$.[4] The penalization term is used for controlling smoothness and preventing overfitting. Since the solution of empirical risk minimization $\min _{f\in {\mathcal {H}}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}$ can be written as $f_{S}^{\lambda }(X)=\sum _{i=1}^{n}c_{i}k(x,x_{i})$ such that $Kc=Y$, adding the penalty function amounts to the following change in the system that needs to be solved:[5]
${\bigg \{}\min _{f\in {\mathcal {H}}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}\rightarrow \min _{f\in {\mathcal {H}}}{\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}+\lambda \|f\|_{\mathcal {H}}^{2}{\bigg \}}\equiv {\bigg \{}Kc=Y\rightarrow (K+n\lambda I)c=Y{\bigg \}}.$
In this learning setting, the kernel matrix can be decomposed as $K=Q\Sigma Q^{T}$, with
$\Sigma =\operatorname {diag} (\sigma _{1},\dots ,\sigma _{n}),~\sigma _{1}\geq \sigma _{2}\geq \cdots \geq \sigma _{n}\geq 0$
where $q_{1},\dots ,q_{n}$ are the corresponding eigenvectors (the columns of $Q$). Therefore, in the initial learning setting, the following holds:
$c=K^{-1}Y=Q\Sigma ^{-1}Q^{T}Y=\sum _{i=1}^{n}{\frac {1}{\sigma _{i}}}\langle q_{i},Y\rangle q_{i}.$
Thus, for small eigenvalues, even small perturbations in the data can lead to considerable changes in the solution. Hence, the problem is ill-conditioned, and solving this RLS problem amounts to stabilizing a possibly ill-conditioned matrix inversion problem, which is studied in the theory of ill-posed inverse problems; in both problems, a main concern is to deal with the issue of numerical stability.
Implementation of algorithms
Each algorithm in the class of spectral regularization algorithms is defined by a suitable filter function, denoted here by $G_{\lambda }(\cdot )$. If the kernel matrix is denoted by $K$, then $\lambda $ should control the magnitude of the smaller eigenvalues of $G_{\lambda }(K)$. In a filtering setup, the goal is to find estimators $f_{S}^{\lambda }(X):=\sum _{i=1}^{n}c_{i}k(x,x_{i})$ where $c=G_{\lambda }(K)Y$. To do so, a scalar filter function $G_{\lambda }(\sigma )$ is defined using the eigen-decomposition of the kernel matrix:
$G_{\lambda }(K)=QG_{\lambda }(\Sigma )Q^{T},$
which yields
$G_{\lambda }(K)Y~=~\sum _{i=1}^{n}G_{\lambda }(\sigma _{i})\langle q_{i},Y\rangle q_{i}.$
Typically, an appropriate filter function should have the following properties:[5]
1. As $\lambda $ goes to zero, $G_{\lambda }(\sigma )~\rightarrow ~1/\sigma $.
2. The magnitude of the (smaller) eigenvalues of $G_{\lambda }$ is controlled by $\lambda $.
While the above items give a rough characterization of the general properties of filter functions for all spectral regularization algorithms, the derivation of the filter function (and hence its exact form) varies depending on the specific regularization method that spectral filtering is applied to.
Filter function for Tikhonov regularization
In the Tikhonov regularization setting, the filter function for RLS is described below. As shown in [4], in this setting, $c=(K+n\lambda I)^{-1}Y$. Thus,
$c=(K+n\lambda I)^{-1}Y=Q(\Sigma +n\lambda I)^{-1}Q^{T}Y=\sum _{i=1}^{n}{\frac {1}{\sigma _{i}+n\lambda }}\langle q_{i},Y\rangle q_{i}.$
The undesired components are filtered out using regularization:
• If $\sigma _{i}\gg \lambda n$, then ${\frac {1}{\sigma _{i}+n\lambda }}\sim {\frac {1}{\sigma _{i}}}$.
• If $\sigma _{i}\ll \lambda n$, then ${\frac {1}{\sigma _{i}+n\lambda }}\sim {\frac {1}{\lambda n}}$.
The filter function for Tikhonov regularization is therefore defined as:[5]
$G_{\lambda }(\sigma )={\frac {1}{\sigma +n\lambda }}.$
Filter function for Landweber iteration
The idea behind the Landweber iteration is gradient descent:[5]
$c^{0}=0$
${\text{for }}i=1,\dots ,t-1$
$~~~~~c^{i}=c^{i-1}+\eta (Y-Kc^{i-1})$
$\mathrm {end} $
In this setting, if $n$ is larger than $K$'s largest eigenvalue, the above iteration converges with the step size $\eta =2/n$.[5] The above iteration is equivalent to minimizing ${\frac {1}{n}}\|Y-Kc\|_{2}^{2}$ (i.e. the empirical risk) via gradient descent; using induction, it can be proved that at the $t$-th iteration, the solution is given by[5]
$c=\eta \sum _{i=0}^{t-1}(I-\eta K)^{i}Y.$
Thus, the appropriate filter function is defined by:
$G_{\lambda }(\sigma )=\eta \sum _{i=0}^{t-1}(1-\eta \sigma )^{i}.$
It can be shown that this filter function corresponds to a truncated power expansion of $K^{-1}$;[5] to see this, note that the relation $\sum _{i\geq 0}x^{i}=1/(1-x)$ still holds if $x$ is replaced by a matrix; thus, if $K$ (the kernel matrix), or rather $I-\eta K$, is considered, the following holds:
$K^{-1}=\eta \sum _{i=0}^{\infty }(I-\eta K)^{i}\sim \eta \sum _{i=0}^{t-1}(I-\eta K)^{i}.$
In this setting, the number of iterations gives the regularization parameter; roughly speaking, $t\sim 1/\lambda $.[5] If $t$ is large, overfitting may be a concern. If $t$ is small, oversmoothing may be a concern. Thus, choosing an appropriate time for early stopping of the iterations provides a regularization effect.
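A direct transcription of the iteration into NumPy might look as follows (a sketch under the step-size assumption stated above; the names are ours):

```python
import numpy as np

def landweber(K, y, t, eta=None):
    """Run t gradient-descent steps on the empirical risk (1/n)||y - K c||^2.

    The iteration count t plays the role of the regularization parameter
    (roughly t ~ 1/lambda); stopping early is what regularizes.
    """
    n = K.shape[0]
    if eta is None:
        eta = 2.0 / n   # per the text, valid when n exceeds K's largest eigenvalue
    c = np.zeros(n)
    for _ in range(t):
        c = c + eta * (y - K @ c)
    return c
```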
Filter function for TSVD
In the TSVD setting, given the eigen-decomposition $K=Q\Sigma Q^{T}$ and using a prescribed threshold $\lambda n$, a regularized inverse can be formed for the kernel matrix by discarding all the eigenvalues that are smaller than this threshold.[5] Thus, the filter function for TSVD can be defined as
$G_{\lambda }(\sigma )=\left\{{\begin{array}{lcll}1/\sigma &,&{\text{if }}\sigma \geq \lambda n\\[0.05in]0&,&{\text{otherwise}}\\[0.05in]\end{array}}\right..$
It can be shown that TSVD is equivalent to the (unsupervised) projection of the data using (kernel) Principal Component Analysis (PCA), and that it is also equivalent to minimizing the empirical risk on the projected data (without regularization).[5] Note that the number of components kept for the projection is the only free parameter here.
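In code, the TSVD filter amounts to zeroing out the small part of the spectrum; a minimal sketch (ours), reusing the eigen-decomposition:

```python
import numpy as np

def tsvd_coefficients(K, y, lam):
    """Invert K only on eigenvalues at or above the threshold lam * n."""
    n = K.shape[0]
    sigma, Q = np.linalg.eigh(K)
    g = np.zeros_like(sigma)
    keep = sigma >= lam * n        # discard eigenvalues below the threshold
    g[keep] = 1.0 / sigma[keep]
    return Q @ (g * (Q.T @ y))
```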
References
1. H. W. Engl, M. Hanke, and A. Neubauer. Regularization of inverse problems. Kluwer, 1996.
2. L. Lo Gerfo, L. Rosasco, F. Odone, E. De Vito, and A. Verri. Spectral Algorithms for Supervised Learning, Neural Computation, 20(7), 2008.
3. P. C. Hansen, J. G. Nagy, and D. P. O'Leary. Deblurring Images: Matrices, Spectra, and Filtering, Fundamentals of Algorithms 3, SIAM, Philadelphia, 2006.
4. L. Rosasco. Lecture 6 of the Lecture Notes for 9.520: Statistical Learning Theory and Applications. Massachusetts Institute of Technology, Fall 2013. Available at https://www.mit.edu/~9.520/fall13/slides/class06/class06_RLSSVM.pdf
5. L. Rosasco. Lecture 7 of the Lecture Notes for 9.520: Statistical Learning Theory and Applications. Massachusetts Institute of Technology, Fall 2013. Available at https://www.mit.edu/~9.520/fall13/slides/class07/class07_spectral.pdf
|
Wikipedia
|
Regularization perspectives on support vector machines
Within mathematical analysis, Regularization perspectives on support-vector machines provide a way of interpreting support-vector machines (SVMs) in the context of other regularization-based machine-learning algorithms. SVM algorithms categorize binary data, with the goal of fitting the training-set data in a way that minimizes the average of the hinge loss and the L2 norm of the learned weights. This strategy avoids overfitting via Tikhonov regularization in the L2-norm sense, and also corresponds to minimizing the bias and variance of the estimator of the weights. Estimators with lower mean squared error predict better, or generalize better, when given unseen data.
Specifically, Tikhonov regularization algorithms produce a decision boundary that minimizes the average training-set error while constraining the decision boundary, via an L2-norm penalty on the weights, not to be excessively complicated or to overfit the training data. The training and test-set errors can be measured without bias and in a fair way using accuracy, precision, AUC-ROC, precision-recall, and other metrics.
Regularization perspectives on support-vector machines interpret SVM as a special case of Tikhonov regularization, specifically Tikhonov regularization with the hinge loss for a loss function. This provides a theoretical framework with which to analyze SVM algorithms and compare them to other algorithms with the same goals: to generalize without overfitting. SVM was first proposed in 1995 by Corinna Cortes and Vladimir Vapnik, and framed geometrically as a method for finding hyperplanes that can separate multidimensional data into two categories.[1] This traditional geometric interpretation of SVMs provides useful intuition about how SVMs work, but is difficult to relate to other machine-learning techniques for avoiding overfitting, like regularization, early stopping, sparsity and Bayesian inference. However, once it was discovered that SVM is also a special case of Tikhonov regularization, regularization perspectives on SVM provided the theory necessary to fit SVM within a broader class of algorithms.[2][3][4] This has enabled detailed comparisons between SVM and other forms of Tikhonov regularization, and theoretical grounding for why it is beneficial to use SVM's loss function, the hinge loss.[5]
Theoretical background
In the statistical learning theory framework, an algorithm is a strategy for choosing a function $f\colon \mathbf {X} \to \mathbf {Y} $ given a training set $S=\{(x_{1},y_{1}),\ldots ,(x_{n},y_{n})\}$ of inputs $x_{i}$ and their labels $y_{i}$ (the labels are usually $\pm 1$). Regularization strategies avoid overfitting by choosing a function that fits the data, but is not too complex. Specifically:
$f={\underset {f\in {\mathcal {H}}}{\operatorname {argmin} }}\left\{{\frac {1}{n}}\sum _{i=1}^{n}V(y_{i},f(x_{i}))+\lambda \|f\|_{\mathcal {H}}^{2}\right\},$
where ${\mathcal {H}}$ is a hypothesis space[6] of functions, $V\colon \mathbf {Y} \times \mathbf {Y} \to \mathbb {R} $ is the loss function, $\|\cdot \|_{\mathcal {H}}$ is a norm on the hypothesis space of functions, and $\lambda \in \mathbb {R} $ is the regularization parameter.[7]
When ${\mathcal {H}}$ is a reproducing kernel Hilbert space, there exists a kernel function $K\colon \mathbf {X} \times \mathbf {X} \to \mathbb {R} $ that can be written as an $n\times n$ symmetric positive-definite matrix $\mathbf {K} $. By the representer theorem,[8]
$f(x_{i})=\sum _{j=1}^{n}c_{j}\mathbf {K} _{ij},{\text{ and }}\|f\|_{\mathcal {H}}^{2}=\langle f,f\rangle _{\mathcal {H}}=\sum _{i=1}^{n}\sum _{j=1}^{n}c_{i}c_{j}K(x_{i},x_{j})=c^{T}\mathbf {K} c.$
Special properties of the hinge loss
The simplest and most intuitive loss function for categorization is the misclassification loss, or 0–1 loss, which is 0 if $f(x_{i})=y_{i}$ and 1 if $f(x_{i})\neq y_{i}$, i.e. the Heaviside step function on $-y_{i}f(x_{i})$. However, this loss function is not convex, which makes the regularization problem very difficult to minimize computationally. Therefore, we look for convex substitutes for the 0–1 loss. The hinge loss, $V{\big (}y_{i},f(x_{i}){\big )}={\big (}1-y_{i}f(x_{i}){\big )}_{+}$, where $(s)_{+}=\max(s,0)$, provides such a convex relaxation. In fact, the hinge loss is the tightest convex upper bound to the 0–1 misclassification loss function,[4] and with infinite data returns the Bayes-optimal solution:[5][9]
$f_{b}(x)={\begin{cases}1,&p(1\mid x)>p(-1\mid x),\\-1,&p(1\mid x)<p(-1\mid x).\end{cases}}$
Derivation
The Tikhonov regularization problem can be shown to be equivalent to traditional formulations of SVM by expressing it in terms of the hinge loss.[10] With the hinge loss
$V{\big (}y_{i},f(x_{i}){\big )}={\big (}1-y_{i}f(x_{i}){\big )}_{+},$
where $(s)_{+}=\max(s,0)$, the regularization problem becomes
$f={\underset {f\in {\mathcal {H}}}{\operatorname {argmin} }}\left\{{\frac {1}{n}}\sum _{i=1}^{n}{\big (}1-y_{i}f(x_{i}){\big )}_{+}+\lambda \|f\|_{\mathcal {H}}^{2}\right\}.$
Multiplying by $1/(2\lambda )$ yields
$f={\underset {f\in {\mathcal {H}}}{\operatorname {argmin} }}\left\{C\sum _{i=1}^{n}{\big (}1-y_{i}f(x_{i}){\big )}_{+}+{\frac {1}{2}}\|f\|_{\mathcal {H}}^{2}\right\}$
with $C=1/(2\lambda n)$, which is equivalent to the standard SVM minimization problem.
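Because the hinge loss is convex (though not differentiable at the kink), the regularization problem can be attacked directly with subgradient descent. The following NumPy sketch illustrates this for the linear case $f(x)=\langle w,x\rangle $; it is an illustration, not a production SVM solver, and all names are ours:

```python
import numpy as np

def hinge_tikhonov(X, y, lam, steps=2000, lr=0.1):
    """Subgradient descent on (1/n) sum_i (1 - y_i <w, x_i>)_+ + lam ||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, steps + 1):
        margins = y * (X @ w)
        active = margins < 1                # points contributing hinge loss
        # subgradient of the hinge term plus gradient of the ridge term
        g = -(X[active] * y[active, None]).sum(axis=0) / n + 2 * lam * w
        w -= (lr / np.sqrt(t)) * g          # diminishing step size
    return w
```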
Notes and references
1. Cortes, Corinna; Vladimir Vapnik (1995). "Support-Vector Networks". Machine Learning. 20 (3): 273–297. doi:10.1007/BF00994018.
2. Rosasco, Lorenzo. "Regularized Least-Squares and Support Vector Machines" (PDF).
3. Rifkin, Ryan (2002). Everything Old is New Again: A Fresh Look at Historical Approaches in Machine Learning (PDF). MIT (PhD thesis).
4. Lee, Yoonkyung; Wahba, Grace (2012). "Multicategory Support Vector Machines". Journal of the American Statistical Association. 99 (465): 67–81. doi:10.1198/016214504000000098.
5. Rosasco L., De Vito E., Caponnetto A., Piana M., Verri A. (May 2004). "Are Loss Functions All the Same". Neural Computation. 5. 16 (5): 1063–1076. CiteSeerX 10.1.1.109.6786. doi:10.1162/089976604773135104. PMID 15070510.{{cite journal}}: CS1 maint: uses authors parameter (link)
6. A hypothesis space is the set of functions used to model the data in a machine-learning problem. Each function corresponds to a hypothesis about the structure of the data. Typically the functions in a hypothesis space form a Hilbert space of functions with norm formed from the loss function.
7. For insight on choosing the parameter, see, e.g., Wahba, Grace; Yonghua Wang (1990). "When is the optimal regularization parameter insensitive to the choice of the loss function". Communications in Statistics – Theory and Methods. 19 (5): 1685–1700. doi:10.1080/03610929008830285.
8. See Scholkopf, Bernhard; Ralf Herbrich; Alex Smola (2001). "A Generalized Representer Theorem". Computational Learning Theory. pp. 416–426. CiteSeerX 10.1.1.42.8617. doi:10.1007/3-540-44581-1_27. ISBN 978-3-540-42343-0. {{cite book}}: |journal= ignored (help)
9. Lin, Yi (July 2002). "Support Vector Machines and the Bayes Rule in Classification" (PDF). Data Mining and Knowledge Discovery. 6 (3): 259–275. doi:10.1023/A:1015469627679.
10. For a detailed derivation, see Rifkin, Ryan (2002). Everything Old is New Again: A Fresh Look at Historical Approaches in Machine Learning (PDF). MIT (PhD thesis).
• Evgeniou, Theodoros; Massimiliano Pontil; Tomaso Poggio (2000). "Regularization Networks and Support Vector Machines" (PDF). Advances in Computational Mathematics. 13 (1): 1–50. doi:10.1023/A:1018946025316.
• Joachims, Thorsten. "SVMlight". Archived from the original on 2015-04-19. Retrieved 2012-05-18.
• Vapnik, Vladimir (1999). The Nature of Statistical Learning Theory. New York: Springer-Verlag. ISBN 978-0-387-98780-4.
|
Wikipedia
|
Beta function
In mathematics, the beta function, also called the Euler integral of the first kind, is a special function that is closely related to the gamma function and to binomial coefficients. It is defined by the integral
$\mathrm {B} (z_{1},z_{2})=\int _{0}^{1}t^{z_{1}-1}(1-t)^{z_{2}-1}\,dt$
for complex number inputs $z_{1},z_{2}$ such that $\Re (z_{1}),\Re (z_{2})>0$.
The beta function was studied by Leonhard Euler and Adrien-Marie Legendre and was given its name by Jacques Binet; its symbol Β is a Greek capital beta.
Properties
The beta function is symmetric, meaning that $\mathrm {B} (z_{1},z_{2})=\mathrm {B} (z_{2},z_{1})$ for all inputs $z_{1}$ and $z_{2}$.[1]
A key property of the beta function is its close relationship to the gamma function:[1]
$\mathrm {B} (z_{1},z_{2})={\frac {\Gamma (z_{1})\,\Gamma (z_{2})}{\Gamma (z_{1}+z_{2})}}.$
A proof is given below in § Relationship to the gamma function.
The beta function is also closely related to binomial coefficients. When m (or n, by symmetry) is a positive integer, it follows from the definition of the gamma function Γ that[1]
$\mathrm {B} (m,n)={\frac {(m-1)!\,(n-1)!}{(m+n-1)!}}={\frac {m+n}{mn}}{\Bigg /}{\binom {m+n}{m}}.$
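The factorial identity is easy to check numerically; a quick Python verification (a sketch, with arbitrarily chosen m and n):

```python
import math

def beta_int(m, n):
    """B(m, n) for positive integers, via the factorial formula."""
    return math.factorial(m - 1) * math.factorial(n - 1) / math.factorial(m + n - 1)

m, n = 4, 3
lhs = beta_int(m, n)                            # 1/60
rhs = (m + n) / (m * n) / math.comb(m + n, m)
assert abs(lhs - rhs) < 1e-12
```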
Relationship to the gamma function
A simple derivation of the relation $\mathrm {B} (z_{1},z_{2})={\frac {\Gamma (z_{1})\,\Gamma (z_{2})}{\Gamma (z_{1}+z_{2})}}$ can be found in Emil Artin's book The Gamma Function, pages 18–19.[2] To derive this relation, write the product of two factorials as
${\begin{aligned}\Gamma (z_{1})\Gamma (z_{2})&=\int _{u=0}^{\infty }\ e^{-u}u^{z_{1}-1}\,du\cdot \int _{v=0}^{\infty }\ e^{-v}v^{z_{2}-1}\,dv\\[6pt]&=\int _{v=0}^{\infty }\int _{u=0}^{\infty }\ e^{-u-v}u^{z_{1}-1}v^{z_{2}-1}\,du\,dv.\end{aligned}}$
Changing variables by u = st and v = s(1 − t), so that u + v = s and t = u/(u + v), the limits of integration become 0 to ∞ for s and 0 to 1 for t. This yields
${\begin{aligned}\Gamma (z_{1})\Gamma (z_{2})&=\int _{s=0}^{\infty }\int _{t=0}^{1}e^{-s}(st)^{z_{1}-1}(s(1-t))^{z_{2}-1}s\,dt\,ds\\[6pt]&=\int _{s=0}^{\infty }e^{-s}s^{z_{1}+z_{2}-1}\,ds\cdot \int _{t=0}^{1}t^{z_{1}-1}(1-t)^{z_{2}-1}\,dt\\&=\Gamma (z_{1}+z_{2})\cdot \mathrm {B} (z_{1},z_{2}).\end{aligned}}$
Dividing both sides by $\Gamma (z_{1}+z_{2})$ gives the desired result.
The stated identity may be seen as a particular case of the identity for the integral of a convolution. Taking
${\begin{aligned}f(u)&:=e^{-u}u^{z_{1}-1}1_{\mathbb {R} _{+}}\\g(u)&:=e^{-u}u^{z_{2}-1}1_{\mathbb {R} _{+}},\end{aligned}}$
one has:
$\Gamma (z_{1})\Gamma (z_{2})=\int _{\mathbb {R} }f(u)\,du\cdot \int _{\mathbb {R} }g(u)\,du=\int _{\mathbb {R} }(f*g)(u)\,du=\mathrm {B} (z_{1},z_{2})\,\Gamma (z_{1}+z_{2}).$
Derivatives
We have
${\frac {\partial }{\partial z_{1}}}\mathrm {B} (z_{1},z_{2})=\mathrm {B} (z_{1},z_{2})\left({\frac {\Gamma '(z_{1})}{\Gamma (z_{1})}}-{\frac {\Gamma '(z_{1}+z_{2})}{\Gamma (z_{1}+z_{2})}}\right)=\mathrm {B} (z_{1},z_{2}){\big (}\psi (z_{1})-\psi (z_{1}+z_{2}){\big )},$
${\frac {\partial }{\partial z_{m}}}\mathrm {B} (z_{1},z_{2},\dots ,z_{n})=\mathrm {B} (z_{1},z_{2},\dots ,z_{n})\left(\psi (z_{m})-\psi \left(\sum _{k=1}^{n}z_{k}\right)\right),\quad 1\leq m\leq n,$
where $\psi (z)$ denotes the polygamma function.
Approximation
Stirling's approximation gives the asymptotic formula
$\mathrm {B} (x,y)\sim {\sqrt {2\pi }}{\frac {x^{x-1/2}y^{y-1/2}}{({x+y})^{x+y-1/2}}}$
for large x and large y.
If on the other hand x is large and y is fixed, then
$\mathrm {B} (x,y)\sim \Gamma (y)\,x^{-y}.$
Other identities and formulas
The integral defining the beta function may be rewritten in a variety of ways, including the following:
${\begin{aligned}\mathrm {B} (z_{1},z_{2})&=2\int _{0}^{\pi /2}(\sin \theta )^{2z_{1}-1}(\cos \theta )^{2z_{2}-1}\,d\theta ,\\[6pt]&=\int _{0}^{\infty }{\frac {t^{z_{1}-1}}{(1+t)^{z_{1}+z_{2}}}}\,dt,\\[6pt]&=n\int _{0}^{1}t^{nz_{1}-1}(1-t^{n})^{z_{2}-1}\,dt,\\&=(1-a)^{z_{2}}\int _{0}^{1}{\frac {(1-t)^{z_{1}-1}t^{z_{2}-1}}{(1-at)^{z_{1}+z_{2}}}}dt\qquad {\text{for any }}a\in \mathbb {R} _{\leq 1},\end{aligned}}$
where in the second-to-last identity n is any positive real number. One may move from the first integral to the second one by substituting $t=\tan ^{2}(\theta )$.
The beta function can be written as an infinite sum[3]
$\mathrm {B} (x,y)=\sum _{n=0}^{\infty }{\frac {(1-x)_{n}}{(y+n)\,n!}}$
(where $(x)_{n}$ is the rising factorial)
and as an infinite product
$\mathrm {B} (x,y)={\frac {x+y}{xy}}\prod _{n=1}^{\infty }\left(1+{\dfrac {xy}{n(x+y+n)}}\right)^{-1}.$
The beta function satisfies several identities analogous to corresponding identities for binomial coefficients, including a version of Pascal's identity
$\mathrm {B} (x,y)=\mathrm {B} (x,y+1)+\mathrm {B} (x+1,y)$
and a simple recurrence on one coordinate:
$\mathrm {B} (x+1,y)=\mathrm {B} (x,y)\cdot {\dfrac {x}{x+y}},\quad \mathrm {B} (x,y+1)=\mathrm {B} (x,y)\cdot {\dfrac {y}{x+y}}.$[4]
The positive integer values of the beta function are also the partial derivatives of a 2D function: for all nonnegative integers $m$ and $n$,
$\mathrm {B} (m+1,n+1)={\frac {\partial ^{m+n}h}{\partial a^{m}\,\partial b^{n}}}(0,0),$
where
$h(a,b)={\frac {e^{a}-e^{b}}{a-b}}.$
The Pascal-like identity above implies that this function is a solution to the first-order partial differential equation
$h=h_{a}+h_{b}.$
For $x,y\geq 1$, the beta function may be written in terms of a convolution involving the truncated power function $t\mapsto t_{+}^{x}$:
$\mathrm {B} (x,y)\cdot \left(t\mapsto t_{+}^{x+y-1}\right)={\Big (}t\mapsto t_{+}^{x-1}{\Big )}*{\Big (}t\mapsto t_{+}^{y-1}{\Big )}$
Evaluations at particular points may simplify significantly; for example,
$\mathrm {B} (1,x)={\dfrac {1}{x}}$
and
$\mathrm {B} (x,1-x)={\dfrac {\pi }{\sin(\pi x)}},\qquad x\not \in \mathbb {Z} $[5]
By taking $x={\frac {1}{2}}$ in this last formula, it follows that $\Gamma (1/2)={\sqrt {\pi }}$. Generalizing this into a bivariate identity for a product of beta functions leads to:
$\mathrm {B} (x,y)\cdot \mathrm {B} (x+y,1-y)={\frac {\pi }{x\sin(\pi y)}}.$
Euler's integral for the beta function may be converted into an integral over the Pochhammer contour C as
$\left(1-e^{2\pi i\alpha }\right)\left(1-e^{2\pi i\beta }\right)\mathrm {B} (\alpha ,\beta )=\int _{C}t^{\alpha -1}(1-t)^{\beta -1}\,dt.$
This Pochhammer contour integral converges for all values of α and β and so gives the analytic continuation of the beta function.
Just as the gamma function for integers describes factorials, the beta function can define a binomial coefficient after adjusting indices:
${\binom {n}{k}}={\frac {1}{(n+1)\,\mathrm {B} (n-k+1,k+1)}}.$
Moreover, for integer n, Β can be factored to give a closed form interpolation function for continuous values of k:
${\binom {n}{k}}=(-1)^{n}\,n!\cdot {\frac {\sin(\pi k)}{\pi \displaystyle \prod _{i=0}^{n}(k-i)}}.$
Reciprocal beta function
The reciprocal beta function is the function of the form
$f(x,y)={\frac {1}{\mathrm {B} (x,y)}}$
Its integral representations are closely related to definite integrals of trigonometric functions with products of powers and multiple angles:[6]
$\int _{0}^{\pi }\sin ^{x-1}\theta \sin y\theta ~d\theta ={\frac {\pi \sin {\frac {y\pi }{2}}}{2^{x-1}x\mathrm {B} \left({\frac {x+y+1}{2}},{\frac {x-y+1}{2}}\right)}}$
$\int _{0}^{\pi }\sin ^{x-1}\theta \cos y\theta ~d\theta ={\frac {\pi \cos {\frac {y\pi }{2}}}{2^{x-1}x\mathrm {B} \left({\frac {x+y+1}{2}},{\frac {x-y+1}{2}}\right)}}$
$\int _{0}^{\pi }\cos ^{x-1}\theta \sin y\theta ~d\theta ={\frac {\pi \cos {\frac {y\pi }{2}}}{2^{x-1}x\mathrm {B} \left({\frac {x+y+1}{2}},{\frac {x-y+1}{2}}\right)}}$
$\int _{0}^{\frac {\pi }{2}}\cos ^{x-1}\theta \cos y\theta ~d\theta ={\frac {\pi }{2^{x}x\mathrm {B} \left({\frac {x+y+1}{2}},{\frac {x-y+1}{2}}\right)}}$
Incomplete beta function
The incomplete beta function, a generalization of the beta function, is defined as[7][8]
$\mathrm {B} (x;\,a,b)=\int _{0}^{x}t^{a-1}\,(1-t)^{b-1}\,dt.$
For x = 1, the incomplete beta function coincides with the complete beta function. The relationship between the two functions is like that between the gamma function and its generalization the incomplete gamma function. For positive integers a and b, the incomplete beta function is a polynomial in x of degree a + b − 1 with rational coefficients.
The regularized incomplete beta function (or regularized beta function for short) is defined in terms of the incomplete beta function and the complete beta function:
$I_{x}(a,b)={\frac {\mathrm {B} (x;\,a,b)}{\mathrm {B} (a,b)}}.$
The regularized incomplete beta function is the cumulative distribution function of the beta distribution, and is related to the cumulative distribution function $F(k;\,n,p)$ of a random variable X following a binomial distribution with probability of single success p and number of Bernoulli trials n:
$F(k;\,n,p)=\Pr \left(X\leq k\right)=I_{1-p}(n-k,k+1)=1-I_{p}(k+1,n-k).$
Properties
${\begin{aligned}I_{0}(a,b)&=0\\I_{1}(a,b)&=1\\I_{x}(a,1)&=x^{a}\\I_{x}(1,b)&=1-(1-x)^{b}\\I_{x}(a,b)&=1-I_{1-x}(b,a)\\I_{x}(a+1,b)&=I_{x}(a,b)-{\frac {x^{a}(1-x)^{b}}{a\mathrm {B} (a,b)}}\\I_{x}(a,b+1)&=I_{x}(a,b)+{\frac {x^{a}(1-x)^{b}}{b\mathrm {B} (a,b)}}\\\int B(x;a,b)\mathrm {d} x&=xB(x;a,b)-B(x;a+1,b)\\\mathrm {B} (x;a,b)&=(-1)^{a}\mathrm {B} \left({\frac {x}{x-1}};a,1-a-b\right)\end{aligned}}$
Continued fraction expansion
The continued fraction expansion
$\mathrm {B} (x;\,a,b)={\frac {x^{a}(1-x)^{b}}{a\left(1+{\frac {{d}_{1}}{1+}}{\frac {{d}_{2}}{1+}}{\frac {{d}_{3}}{1+}}{\frac {{d}_{4}}{1+}}\cdots \right)}}$
with odd and even coefficients respectively
${d}_{2m+1}=-{\frac {(a+m)(a+b+m)x}{(a+2m)(a+2m+1)}}$
${d}_{2m}={\frac {m(b-m)x}{(a+2m-1)(a+2m)}}$
converges rapidly when $x$ is not close to 1. The $4m$ and $4m+1$ convergents are less than $\mathrm {B} (x;\,a,b)$, while the $4m+2$ and $4m+3$ convergents are greater than $\mathrm {B} (x;\,a,b)$.
For $x>{\frac {a+1}{a+b+2}}$, the function may be evaluated more efficiently using $\mathrm {B} (x;\,a,b)=\mathrm {B} (a,b)-\mathrm {B} (1-x;\,b,a)$.[8]
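Putting the continued fraction and the symmetry relation together gives a compact evaluator for the regularized function $I_{x}(a,b)$. The sketch below follows the classic treatment in Numerical Recipes (modified Lentz's method); the function names and tolerances are ours and purely illustrative:

```python
import math

def _betacf(a, b, x, max_iter=200, eps=3e-14):
    """Continued fraction for the incomplete beta function (modified Lentz)."""
    tiny = 1e-300
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    if abs(d) < tiny:
        d = tiny
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))      # even coefficient d_2m
        d = 1.0 + aa * d
        if abs(d) < tiny: d = tiny
        c = 1.0 + aa / c
        if abs(c) < tiny: c = tiny
        d = 1.0 / d
        h *= d * c
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))  # odd d_{2m+1}
        d = 1.0 + aa * d
        if abs(d) < tiny: d = tiny
        c = 1.0 + aa / c
        if abs(c) < tiny: c = tiny
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            return h
    raise RuntimeError("continued fraction did not converge")

def reg_inc_beta(a, b, x):
    """Regularized incomplete beta I_x(a, b), using the symmetry
    I_x(a, b) = 1 - I_{1-x}(b, a) when x is large (as stated above)."""
    if x in (0.0, 1.0):
        return x
    front = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                     + a * math.log(x) + b * math.log1p(-x))
    if x < (a + 1.0) / (a + b + 2.0):
        return front * _betacf(a, b, x) / a
    return 1.0 - front * _betacf(b, a, 1.0 - x) / b
```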
Multivariate beta function
The beta function can be extended to a function with more than two arguments:
$\mathrm {B} (\alpha _{1},\alpha _{2},\ldots \alpha _{n})={\frac {\Gamma (\alpha _{1})\,\Gamma (\alpha _{2})\cdots \Gamma (\alpha _{n})}{\Gamma (\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n})}}.$
This multivariate beta function is used in the definition of the Dirichlet distribution. Its relationship to the beta function is analogous to the relationship between multinomial coefficients and binomial coefficients. For example, it satisfies a similar version of Pascal's identity:
$\mathrm {B} (\alpha _{1},\alpha _{2},\ldots \alpha _{n})=\mathrm {B} (\alpha _{1}+1,\alpha _{2},\ldots \alpha _{n})+\mathrm {B} (\alpha _{1},\alpha _{2}+1,\ldots \alpha _{n})+\cdots +\mathrm {B} (\alpha _{1},\alpha _{2},\ldots \alpha _{n}+1).$
Applications
The beta function is useful in computing and representing the scattering amplitude for Regge trajectories. Furthermore, it was the first known scattering amplitude in string theory, first conjectured by Gabriele Veneziano. It also occurs in the theory of the preferential attachment process, a type of stochastic urn process. The beta function is also important in statistics, e.g. for the Beta distribution and Beta prime distribution. As briefly alluded to previously, the beta function is closely tied with the gamma function and plays an important role in calculus.
Software implementation
Even if unavailable directly, the complete and incomplete beta function values can be calculated using functions commonly included in spreadsheet or computer algebra systems.
In Microsoft Excel, for example, the complete beta function can be computed with the GammaLn function (or special.gammaln in Python's SciPy package):
Value = Exp(GammaLn(a) + GammaLn(b) - GammaLn(a + b))
This result follows from the properties listed above.
The incomplete beta function cannot be directly computed using such relations and other methods must be used. In GNU Octave, it is computed using a continued fraction expansion.
The incomplete beta function is implemented in several common languages. For instance, betainc (incomplete beta function) in MATLAB and GNU Octave, pbeta (probability of beta distribution) in R, and special.betainc in SciPy compute the regularized incomplete beta function (which is, in fact, the cumulative beta distribution), so to get the actual incomplete beta function, one must multiply the result of betainc by the value of the corresponding complete beta function. In Mathematica, Beta[x, a, b] and BetaRegularized[x, a, b] give $\mathrm {B} (x;\,a,b)$ and $I_{x}(a,b)$, respectively.
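For example, in Python with SciPy (a short usage sketch; the numeric values are arbitrary):

```python
from scipy import special

a, b, x = 2.5, 1.5, 0.4
B  = special.beta(a, b)           # complete beta function B(a, b)
Ix = special.betainc(a, b, x)     # regularized incomplete beta I_x(a, b)
Bx = Ix * B                       # non-regularized incomplete beta B(x; a, b)
```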
See also
• Beta distribution and Beta prime distribution, two probability distributions related to the beta function
• Jacobi sum, the analogue of the beta function over finite fields.
• Nørlund–Rice integral
• Yule–Simon distribution
References
1. Davis, Philip J. (1972), "6. Gamma function and related functions", in Abramowitz, Milton; Stegun, Irene A. (eds.), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover Publications, p. 258, ISBN 978-0-486-61272-0. Specifically, see 6.2 Beta Function.
2. Artin, Emil, The Gamma Function (PDF), pp. 18–19, archived from the original (PDF) on 2016-11-12, retrieved 2016-11-11
3. Beta function : Series representations (Formula 06.18.06.0007)
4. Mäklin, Tommi (2022), Probabilistic Methods for High-Resolution Metagenomics (PDF), Series of publications A / Department of Computer Science, University of Helsinki, Helsinki: Unigrafia, p. 27, ISBN 978-951-51-8695-9, ISSN 2814-4031
5. "Euler's Reflection Formula - ProofWiki", proofwiki.org, retrieved 2020-09-02
6. Paris, R. B. (2010), "Beta Function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
7. Zelen, M.; Severo, N. C. (1972), "26. Probability functions", in Abramowitz, Milton; Stegun, Irene A. (eds.), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover Publications, pp. 944, ISBN 978-0-486-61272-0
8. Paris, R. B. (2010), "Incomplete beta functions", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
• Askey, R. A.; Roy, R. (2010), "Beta function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
• Press, W. H.; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 6.1 Gamma Function, Beta Function, Factorials", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
External links
• "Beta-function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Evaluation of beta function using Laplace transform at PlanetMath.
• Arbitrarily accurate values can be obtained from:
• The Wolfram functions site: Evaluate Beta Regularized incomplete beta
• danielsoper.com: Incomplete beta function calculator, Regularized incomplete beta function calculator
|
Wikipedia
|
Incomplete gamma function
In mathematics, the upper and lower incomplete gamma functions are types of special functions which arise as solutions to various mathematical problems such as certain integrals.
Their respective names stem from their integral definitions, which are defined similarly to the gamma function but with different or "incomplete" integral limits. The gamma function is defined as an integral from zero to infinity. This contrasts with the lower incomplete gamma function, which is defined as an integral from zero to a variable upper limit. Similarly, the upper incomplete gamma function is defined as an integral from a variable lower limit to infinity.
Definition
The upper incomplete gamma function is defined as:
$\Gamma (s,x)=\int _{x}^{\infty }t^{s-1}\,e^{-t}\,dt,$
whereas the lower incomplete gamma function is defined as:
$\gamma (s,x)=\int _{0}^{x}t^{s-1}\,e^{-t}\,dt.$
In both cases s is a complex parameter, such that the real part of s is positive.
Properties
By integration by parts we find the recurrence relations
$\Gamma (s+1,x)=s\Gamma (s,x)+x^{s}e^{-x}$
and
$\gamma (s+1,x)=s\gamma (s,x)-x^{s}e^{-x}.$
Since the ordinary gamma function is defined as
$\Gamma (s)=\int _{0}^{\infty }t^{s-1}\,e^{-t}\,dt$
we have
$\Gamma (s)=\Gamma (s,0)=\lim _{x\to \infty }\gamma (s,x)$
and
$\gamma (s,x)+\Gamma (s,x)=\Gamma (s).$
Continuation to complex values
The lower incomplete gamma and the upper incomplete gamma function, as defined above for real positive s and x, can be developed into holomorphic functions, with respect both to x and s, defined for almost all combinations of complex x and s.[1] Complex analysis shows how properties of the real incomplete gamma functions extend to their holomorphic counterparts.
Holomorphic extension
Repeated application of the recurrence relation for the lower incomplete gamma function leads to the power series expansion: [2]
$\gamma (s,x)=\sum _{k=0}^{\infty }{\frac {x^{s}e^{-x}x^{k}}{s(s+1)\cdots (s+k)}}=x^{s}\,\Gamma (s)\,e^{-x}\sum _{k=0}^{\infty }{\frac {x^{k}}{\Gamma (s+k+1)}}.$
Given the rapid growth in absolute value of Γ(z + k) when k → ∞, and the fact that the reciprocal of Γ(z) is an entire function, the coefficients in the rightmost sum are well-defined, and locally the sum converges uniformly for all complex s and x. By a theorem of Weierstraß,[3] the limiting function, sometimes denoted as $\gamma ^{*}$,[4]
$\gamma ^{*}(s,z):=e^{-z}\sum _{k=0}^{\infty }{\frac {z^{k}}{\Gamma (s+k+1)}}$
is entire with respect to both z (for fixed s) and s (for fixed z),[5] and, thus, holomorphic on C × C by Hartogs's theorem.[6] Hence, the following decomposition
$\gamma (s,z)=z^{s}\,\Gamma (s)\,\gamma ^{*}(s,z),$[7]
extends the real lower incomplete gamma function as a holomorphic function, both jointly and separately in z and s. It follows from the properties of $z^{s}$ and the Γ-function, that the first two factors capture the singularities of $\gamma (s,z)$ (at z = 0 or s a non-positive integer), whereas the last factor contributes to its zeros.
Multi-valuedness
The complex logarithm log z = log |z| + i arg z is determined up to a multiple of 2πi only, which renders it multi-valued. Functions involving the complex logarithm typically inherit this property. Among these are the complex power, and, since zs appears in its decomposition, the γ-function, too.
The indeterminacy of multi-valued functions introduces complications, since it must be stated how to select a value. Strategies to handle this are:
• (the most general way) replace the domain C of multi-valued functions by a suitable manifold in C × C called Riemann surface. While this removes multi-valuedness, one has to know the theory behind it;[8]
• restrict the domain such that a multi-valued function decomposes into separate single-valued branches, which can be handled individually.
The following set of rules can be used to interpret formulas in this section correctly. If not mentioned otherwise, the following is assumed:
Sectors
Sectors in C having their vertex at z = 0 often prove to be appropriate domains for complex expressions. A sector D consists of all complex z fulfilling z ≠ 0 and α − δ < arg z < α + δ with some α and 0 < δ ≤ π. Often, α can be arbitrarily chosen and is not specified then. If δ is not given, it is assumed to be π, and the sector is in fact the whole plane C, with the exception of a half-line originating at z = 0 and pointing into the direction of −α, usually serving as a branch cut. Note: In many applications and texts, α is silently taken to be 0, which centers the sector around the positive real axis.
Branches
In particular, a single-valued and holomorphic logarithm exists on any such sector D having its imaginary part bound to the range (α − δ, α + δ). Based on such a restricted logarithm, zs and the incomplete gamma functions in turn collapse to single-valued, holomorphic functions on D (or C×D), called branches of their multi-valued counterparts on D. Adding a multiple of 2π to α yields a different set of correlated branches on the same set D. However, in any given context here, α is assumed fixed and all branches involved are associated to it. If |α| < δ, the branches are called principal, because they equal their real analogues on the positive real axis. Note: In many applications and texts, formulas hold only for principal branches.
Relation between branches
The values of different branches of both the complex power function and the lower incomplete gamma function can be derived from each other by multiplication of $e^{2\pi iks}$,[9] for k a suitable integer.
Behavior near branch point
The decomposition above further shows that γ behaves near z = 0 asymptotically like:
$\gamma (s,z)\asymp z^{s}\,\Gamma (s)\,\gamma ^{*}(s,0)=z^{s}\,\Gamma (s)/\Gamma (s+1)=z^{s}/s.$
For positive real $x$, $y$ and $s$, $x^{y}/y\to 0$ as $(x,y)\to (0,s)$. This seems to justify setting γ(s, 0) = 0 for real s > 0. However, matters are somewhat different in the complex realm. Only if (a) the real part of s is positive, and (b) the values $u^{v}$ are taken from just a finite set of branches, are they guaranteed to converge to zero as $(u,v)\to (0,s)$, and then so does $\gamma (u,v)$. On a single branch of γ, (b) is naturally fulfilled, so there γ(s, 0) = 0 for s with positive real part is a continuous limit. Also note that such a continuation is by no means an analytic one.
Algebraic relations
All algebraic relations and differential equations observed by the real γ(s, z) hold for its holomorphic counterpart as well. This is a consequence of the identity theorem, which states that equations between holomorphic functions valid on a real interval hold everywhere. In particular, the recurrence relation [10] and $\partial \gamma (s,z)/\partial z=z^{s-1}e^{-z}$ [11] are preserved on corresponding branches.
Integral representation
The last relation tells us that, for fixed $s$, γ is a primitive or antiderivative of the holomorphic function $z^{s-1}e^{-z}$. Consequently, for any complex $u,v\neq 0$,
$\int _{u}^{v}t^{s-1}\,e^{-t}\,dt=\gamma (s,v)-\gamma (s,u)$
holds, as long as the path of integration is entirely contained in the domain of a branch of the integrand. If, additionally, the real part of s is positive, then the limit γ(s, u) → 0 for u → 0 applies, finally arriving at the complex integral definition of γ[12]
$\gamma (s,z)=\int _{0}^{z}t^{s-1}\,e^{-t}\,dt,\,\Re (s)>0.$
Any path of integration containing 0 only at its beginning, otherwise restricted to the domain of a branch of the integrand, is valid here, for example, the straight line connecting 0 and z.
Real values
Given the integral representation of a principal branch of γ, the following equation holds for all positive real s, x:[13]
$\Gamma (s)=\int _{0}^{\infty }t^{s-1}\,e^{-t}\,dt=\lim _{x\to \infty }\gamma (s,x)$
s complex
This result extends to complex s. Assume first 1 ≤ Re(s) ≤ 2 and 1 < a < b. Then
$|\gamma (s,b)-\gamma (s,a)|\leq \int _{a}^{b}|t^{s-1}|e^{-t}\,dt=\int _{a}^{b}t^{\Re s-1}e^{-t}\,dt\leq \int _{a}^{b}te^{-t}\,dt$
where[14]
$|z^{s}|=|z|^{\Re s}\,e^{-\Im s\arg z}$
has been used in the middle. Since the final integral becomes arbitrarily small if only $a$ is large enough, $\gamma (s,x)$ converges uniformly for $x\to \infty $ on the strip $1\leq \Re (s)\leq 2$ towards a holomorphic function,[15] which must be Γ(s) because of the identity theorem. Taking the limit in the recurrence relation $\gamma (s,x)=(s-1)\gamma (s-1,x)-x^{s-1}e^{-x}$ and noting that $\lim x^{n}e^{-x}=0$ for $x\to \infty $ and all $n$, shows that $\gamma (s,x)$ converges outside the strip, too, towards a function obeying the recurrence relation of the Γ-function. It follows
$\Gamma (s)=\lim _{x\to \infty }\gamma (s,x)$
for all complex s not a non-positive integer, x real and γ principal.
Sectorwise convergence
Now let u be from the sector |arg z| < δ < π/2 with some fixed δ (α = 0), γ be the principal branch on this sector, and look at
$\Gamma (s)-\gamma (s,u)=\Gamma (s)-\gamma (s,|u|)+\gamma (s,|u|)-\gamma (s,u).$
As shown above, the first difference can be made arbitrarily small if |u| is sufficiently large. The second difference allows for the following estimation:
$|\gamma (s,|u|)-\gamma (s,u)|\leq \int _{u}^{|u|}|z^{s-1}e^{-z}|\,dz=\int _{u}^{|u|}|z|^{\Re s-1}\,e^{-\Im s\,\arg z}\,e^{-\Re z}\,dz,$
where we made use of the integral representation of γ and the formula about |zs| above. If we integrate along the arc with radius R = |u| around 0 connecting u and |u|, then the last integral is
$\leq R\left|\arg u\right|R^{\Re s-1}\,e^{\Im s\,|\arg u|}\,e^{-R\cos \arg u}\leq \delta \,R^{\Re s}\,e^{\Im s\,\delta }\,e^{-R\cos \delta }=M\,(R\,\cos \delta )^{\Re s}\,e^{-R\cos \delta }$
where $M=\delta \,(\cos \delta )^{-\Re s}\,e^{\Im s\,\delta }$ is a constant independent of $u$ or $R$. Again referring to the behavior of $x^{n}e^{-x}$ for large $x$, we see that the last expression approaches 0 as $R$ increases towards ∞. In total we now have:
$\Gamma (s)=\lim _{|z|\to \infty }\gamma (s,z),\quad \left|\arg z\right|<\pi /2-\epsilon ,$
if s is not a non-positive integer, 0 < ε < π/2 is arbitrarily small but fixed, and γ denotes the principal branch on this domain.
Overview
$\gamma (s,z)$ is:
• entire in z for fixed, positive integer s;
• multi-valued holomorphic in z for fixed s not an integer, with a branch point at z = 0;
• on each branch meromorphic in s for fixed z ≠ 0, with simple poles at non-positive integers s.
Upper incomplete gamma function
As for the upper incomplete gamma function, a holomorphic extension, with respect to z or s, is given by[16]
$\Gamma (s,z)=\Gamma (s)-\gamma (s,z)$
at points (s, z), where the right hand side exists. Since $\gamma $ is multi-valued, the same holds for $\Gamma $, but a restriction to principal values only yields the single-valued principal branch of $\Gamma $.
When s is a non-positive integer in the above equation, neither part of the difference is defined, and a limiting process, here developed for s → 0, fills in the missing values. Complex analysis guarantees holomorphicity, because $\Gamma (s,z)$ proves to be bounded in a neighbourhood of that limit for a fixed z.
To determine the limit, the power series of $\gamma ^{*}$ at z = 0 is useful. When replacing $e^{-x}$ by its power series in the integral definition of $\gamma $, one obtains (assume x,s positive reals for now):
$\gamma (s,x)=\int _{0}^{x}t^{s-1}e^{-t}\,dt=\int _{0}^{x}\sum _{k=0}^{\infty }(-1)^{k}\,{\frac {t^{s+k-1}}{k!}}\,dt=\sum _{k=0}^{\infty }(-1)^{k}\,{\frac {x^{s+k}}{k!(s+k)}}=x^{s}\,\sum _{k=0}^{\infty }{\frac {(-x)^{k}}{k!(s+k)}}$
or[17]
$\gamma ^{*}(s,x)=\sum _{k=0}^{\infty }{\frac {(-x)^{k}}{k!\,\Gamma (s)(s+k)}}.$
which, as a series representation of the entire $\gamma ^{*}$ function, converges for all complex x (and all complex s not a non-positive integer).
With its restriction to real values lifted, the series allows the expansion:
$\gamma (s,z)-{\frac {1}{s}}=-{\frac {1}{s}}+z^{s}\,\sum _{k=0}^{\infty }{\frac {(-z)^{k}}{k!(s+k)}}={\frac {z^{s}-1}{s}}+z^{s}\,\sum _{k=1}^{\infty }{\frac {(-z)^{k}}{k!(s+k)}},\quad \Re (s)>-1,\,s\neq 0.$
When s → 0:[18]
${\frac {z^{s}-1}{s}}\to \ln(z),\quad \Gamma (s)-{\frac {1}{s}}={\frac {1}{s}}-\gamma +O(s)-{\frac {1}{s}}\to -\gamma ,$
($\gamma $ is the Euler–Mascheroni constant here), hence,
$\Gamma (0,z)=\lim _{s\to 0}\left(\Gamma (s)-{\tfrac {1}{s}}-(\gamma (s,z)-{\tfrac {1}{s}})\right)=-\gamma -\ln(z)-\sum _{k=1}^{\infty }{\frac {(-z)^{k}}{k\,(k!)}}$
is the limiting function to the upper incomplete gamma function as s → 0, also known as the exponential integral $E_{1}(z)$.[19]
By way of the recurrence relation, values of $\Gamma (-n,z)$ for positive integers n can be derived from this result,[20]
$\Gamma (-n,z)={\frac {1}{n!}}\left({\frac {e^{-z}}{z^{n}}}\sum _{k=0}^{n-1}(-1)^{k}(n-k-1)!\,z^{k}+(-1)^{n}\Gamma (0,z)\right)$
so the upper incomplete gamma function proves to exist and be holomorphic, with respect both to z and s, for all s and z ≠ 0.
$\Gamma (s,z)$ is:
• entire in z for fixed, positive integral s;
• multi-valued holomorphic in z for fixed s non zero and not a positive integer, with a branch point at z = 0;
• equal to $\Gamma (s)$ for s with positive real part and z = 0 (the limit when $(s_{i},z_{i})\to (s,0)$), but this is a continuous extension, not an analytic one (does not hold for real s < 0!);
• on each branch entire in s for fixed z ≠ 0.
Special values
• $\Gamma (s+1,1)={\frac {\lfloor es!\rfloor }{e}}$ if s is a positive integer,
• $\Gamma (s,x)=(s-1)!\,e^{-x}\sum _{k=0}^{s-1}{\frac {x^{k}}{k!}}$ if s is a positive integer,[21]
• $\Gamma (s,0)=\Gamma (s),\Re (s)>0$,
• $\Gamma (1,x)=e^{-x}$,
• $\gamma (1,x)=1-e^{-x}$,
• $\Gamma (0,x)=-\operatorname {Ei} (-x)$ for $x>0$,
• $\Gamma (s,x)=x^{s}\operatorname {E} _{1-s}(x)$,
• $\Gamma \left({\tfrac {1}{2}},x\right)={\sqrt {\pi }}\operatorname {erfc} \left({\sqrt {x}}\right)$,
• $\gamma \left({\tfrac {1}{2}},x\right)={\sqrt {\pi }}\operatorname {erf} \left({\sqrt {x}}\right)$.
Here, $\operatorname {Ei} $ is the exponential integral, $\operatorname {E} _{n}$ is the generalized exponential integral, $\operatorname {erf} $ is the error function, and $\operatorname {erfc} $ is the complementary error function, $\operatorname {erfc} (x)=1-\operatorname {erf} (x)$.
Asymptotic behavior
• ${\frac {\gamma (s,x)}{x^{s}}}\to {\frac {1}{s}}$ as $x\to 0$,
• ${\frac {\Gamma (s,x)}{x^{s}}}\to -{\frac {1}{s}}$ as $x\to 0$ and $\Re (s)<0$ (for real $s$, the error of $\Gamma (s,x)\sim -x^{s}/s$ is on the order of $O(x^{\min\{s+1,\,0\}})$ if $s\neq -1$ and $O(\ln(x))$ if $s=-1$),
• $\Gamma (s,x)\sim \Gamma (s)-\sum _{n=0}^{\infty }(-1)^{n}{\frac {x^{s+n}}{n!(s+n)}}$ as an asymptotic series where $x\to 0^{+}$ and $s\neq 0,-1,-2,\dots $.[22]
• $\Gamma (-N,x)\sim C_{N}+{\frac {(-1)^{N+1}}{N!}}\ln x-\sum _{n=0}^{\infty }(-1)^{n}{\frac {x^{n-N}}{n!(n-N)}}$ as an asymptotic series where $x\to 0^{+}$ and $N=1,2,\dots $, where $ C_{N}={\frac {(-1)^{N+1}}{N!}}\left(\gamma -\displaystyle \sum _{n=1}^{N}{\frac {1}{n}}\right)$, where $\gamma $ is the Euler-Mascheroni constant.[23]
• $\gamma (s,x)\to \Gamma (s)$ as $x\to \infty $,
• ${\frac {\Gamma (s,x)}{x^{s-1}e^{-x}}}\to 1$ as $x\to \infty $,
• $\Gamma (s,z)\sim z^{s-1}e^{-z}\sum _{k=0}{\frac {\Gamma (s)}{\Gamma (s-k)}}z^{-k}$ as an asymptotic series where $|z|\to \infty $ and $\left|\arg z\right|<{\tfrac {3}{2}}\pi $.[24]
Evaluation formulae
The lower gamma function can be evaluated using the power series expansion:[25]
$\gamma (s,z)=\sum _{k=0}^{\infty }{\frac {z^{s}e^{-z}z^{k}}{s(s+1)\dots (s+k)}}=z^{s}e^{-z}\sum _{k=0}^{\infty }{\dfrac {z^{k}}{s^{\overline {k+1}}}}$
where $s^{\overline {k+1}}$ is the Pochhammer symbol.
An alternative expansion is
$\gamma (s,z)=\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{k!}}{\frac {z^{s+k}}{s+k}}={\frac {z^{s}}{s}}M(s,s+1,-z),$
where M is Kummer's confluent hypergeometric function.
Connection with Kummer's confluent hypergeometric function
When the real part of z is positive,
$\gamma (s,z)=s^{-1}z^{s}e^{-z}M(1,s+1,z)$
where
$M(1,s+1,z)=1+{\frac {z}{(s+1)}}+{\frac {z^{2}}{(s+1)(s+2)}}+{\frac {z^{3}}{(s+1)(s+2)(s+3)}}+\cdots $
has an infinite radius of convergence.
Again with confluent hypergeometric functions and employing Kummer's identity,
${\begin{aligned}\Gamma (s,z)&=e^{-z}U(1-s,1-s,z)={\frac {z^{s}e^{-z}}{\Gamma (1-s)}}\int _{0}^{\infty }{\frac {e^{-u}}{u^{s}(z+u)}}du\\&=e^{-z}z^{s}U(1,1+s,z)=e^{-z}\int _{0}^{\infty }e^{-u}(z+u)^{s-1}du=e^{-z}z^{s}\int _{0}^{\infty }e^{-zu}(1+u)^{s-1}du.\end{aligned}}$
For the actual computation of numerical values, Gauss's continued fraction provides a useful expansion:
$\gamma (s,z)={\cfrac {z^{s}e^{-z}}{s-{\cfrac {sz}{s+1+{\cfrac {z}{s+2-{\cfrac {(s+1)z}{s+3+{\cfrac {2z}{s+4-{\cfrac {(s+2)z}{s+5+{\cfrac {3z}{s+6-\ddots }}}}}}}}}}}}}}.$
This continued fraction converges for all complex z, provided only that s is not a negative integer.
The upper gamma function has the continued fraction[26]
$\Gamma (s,z)={\cfrac {z^{s}e^{-z}}{z+{\cfrac {1-s}{1+{\cfrac {1}{z+{\cfrac {2-s}{1+{\cfrac {2}{z+{\cfrac {3-s}{1+\ddots }}}}}}}}}}}}$
and
$\Gamma (s,z)={\cfrac {z^{s}e^{-z}}{1+z-s+{\cfrac {s-1}{3+z-s+{\cfrac {2(s-2)}{5+z-s+{\cfrac {3(s-3)}{7+z-s+{\cfrac {4(s-4)}{9+z-s+\ddots }}}}}}}}}}$
Multiplication theorem
The following multiplication theorem holds true:
$\Gamma (s,z)={\frac {1}{t^{s}}}\sum _{i=0}^{\infty }{\frac {\left(1-{\frac {1}{t}}\right)^{i}}{i!}}\Gamma (s+i,tz)=\Gamma (s,tz)-(tz)^{s}e^{-tz}\sum _{i=1}^{\infty }{\frac {\left({\frac {1}{t}}-1\right)^{i}}{i}}L_{i-1}^{(s-i)}(tz).$
Software implementation
The incomplete gamma functions are available in various computer algebra systems.
Even if unavailable directly, however, incomplete gamma function values can be calculated using functions commonly included in spreadsheets (and computer algebra packages). In Excel, for example, these can be calculated using the gamma function combined with the gamma distribution function.
• The lower incomplete function: $\gamma (s,x)$ = EXP(GAMMALN(s))*GAMMA.DIST(x,s,1,TRUE).
• The upper incomplete function: $\Gamma (s,x)$ = EXP(GAMMALN(s))*(1-GAMMA.DIST(x,s,1,TRUE)).
These follow from the definition of the gamma distribution's cumulative distribution function.
In Python, although SciPy provides implementations of the (regularized) incomplete gamma functions under scipy.special, it does not support negative values for the first argument. One workaround in such cases is to use the function gammainc from the library mpmath.
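A short Python sketch of both routes (the numeric values are arbitrary; since scipy.special works with the regularized functions, the results are rescaled by Γ(s)):

```python
import math
from scipy import special
import mpmath

s, x = 2.5, 1.3
lower = special.gammainc(s, x) * math.gamma(s)    # gamma(s, x), requires s > 0
upper = special.gammaincc(s, x) * math.gamma(s)   # Gamma(s, x), requires s > 0

# mpmath.gammainc(z, a) integrates from a to infinity and accepts negative z:
upper_neg = mpmath.gammainc(-0.5, x)              # Gamma(-1/2, x)
```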
Regularized gamma functions and Poisson random variables
Two related functions are the regularized gamma functions:
$P(s,x)={\frac {\gamma (s,x)}{\Gamma (s)}},$
$Q(s,x)={\frac {\Gamma (s,x)}{\Gamma (s)}}=1-P(s,x).$
$P(s,x)$ is the cumulative distribution function for gamma random variables with shape parameter $s$ and scale parameter 1.
When $s$ is an integer, $Q(s,\lambda )$ is the cumulative distribution function for Poisson random variables: If $X$ is a $\mathrm {Poi} (\lambda )$ random variable then
$\Pr(X<s)=\sum _{i<s}e^{-\lambda }{\frac {\lambda ^{i}}{i!}}={\frac {\Gamma (s,\lambda )}{\Gamma (s)}}=Q(s,\lambda ).$
This formula can be derived by repeated integration by parts.
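This identity is easy to confirm numerically (a sketch with arbitrary parameters):

```python
from scipy import special, stats

s, lam = 4, 2.7                       # integer shape s, Poisson mean lam
q = special.gammaincc(s, lam)         # Q(s, lam) = Gamma(s, lam) / Gamma(s)
cdf = stats.poisson.cdf(s - 1, lam)   # Pr(X <= s - 1) = Pr(X < s)
assert abs(q - cdf) < 1e-10
```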
In the context of the stable count distribution, the $s$ parameter can be regarded as inverse of Lévy's stability parameter $\alpha $:
$Q(s,x)=\displaystyle \int _{0}^{\infty }e^{\left(-{x^{s}}/{\nu }\right)}\,{\mathfrak {N}}_{{1}/{s}}\left(\nu \right)\,d\nu ,\,\,(s>1)$
where ${\mathfrak {N}}_{\alpha }(\nu )$ is a standard stable count distribution of shape $\alpha =1/s<1$.
$P(s,x)$ and $Q(s,x)$ are implemented as gammainc[27] and gammaincc[28] in scipy.
Derivatives
Using the integral representation above, the derivative of the upper incomplete gamma function $\Gamma (s,x)$ with respect to x is
${\frac {\partial \Gamma (s,x)}{\partial x}}=-x^{s-1}e^{-x}$
The derivative with respect to its first argument $s$ is given by[29]
${\frac {\partial \Gamma (s,x)}{\partial s}}=\ln x\Gamma (s,x)+x\,T(3,s,x)$
and the second derivative by
${\frac {\partial ^{2}\Gamma (s,x)}{\partial s^{2}}}=\ln ^{2}x\Gamma (s,x)+2x[\ln x\,T(3,s,x)+T(4,s,x)]$
where the function $T(m,s,x)$ is a special case of the Meijer G-function
$T(m,s,x)=G_{m-1,\,m}^{\,m,\,0}\!\left(\left.{\begin{matrix}0,0,\dots ,0\\s-1,-1,\dots ,-1\end{matrix}}\;\right|\,x\right).$
This particular special case has internal closure properties of its own because it can be used to express all successive derivatives. In general,
${\frac {\partial ^{m}\Gamma (s,x)}{\partial s^{m}}}=\ln ^{m}x\Gamma (s,x)+mx\,\sum _{n=0}^{m-1}P_{n}^{m-1}\ln ^{m-n-1}x\,T(3+n,s,x)$
where $P_{j}^{n}$ is the permutation defined by the Pochhammer symbol:
$P_{j}^{n}={\binom {n}{j}}j!={\frac {n!}{(n-j)!}}.$
All such derivatives can be generated in succession from:
${\frac {\partial T(m,s,x)}{\partial s}}=\ln x~T(m,s,x)+(m-1)T(m+1,s,x)$
and
${\frac {\partial T(m,s,x)}{\partial x}}=-{\frac {1}{x}}[T(m-1,s,x)+T(m,s,x)]$
This function $T(m,s,x)$ can be computed from its series representation valid for $|z|<1$,
$T(m,s,z)=-{\frac {(-1)^{m-1}}{(m-2)!}}\left.{\frac {d^{m-2}}{dt^{m-2}}}\left[\Gamma (s-t)z^{t-1}\right]\right|_{t=0}+\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{s-1+n}}{n!(-s-n)^{m-1}}}$
with the understanding that s is not a negative integer or zero. In such a case, one must use a limit. Results for $|z|\geq 1$ can be obtained by analytic continuation. Some special cases of this function can be simplified. For example, $T(2,s,x)=\Gamma (s,x)/x$, $x\,T(3,1,x)=\mathrm {E} _{1}(x)$, where $\mathrm {E} _{1}(x)$ is the Exponential integral. These derivatives and the function $T(m,s,x)$ provide exact solutions to a number of integrals by repeated differentiation of the integral definition of the upper incomplete gamma function.[30][31] For example,
$\int _{x}^{\infty }{\frac {t^{s-1}\ln ^{m}t}{e^{t}}}dt={\frac {\partial ^{m}}{\partial s^{m}}}\int _{x}^{\infty }{\frac {t^{s-1}}{e^{t}}}dt={\frac {\partial ^{m}}{\partial s^{m}}}\Gamma (s,x)$
This formula can be further generalized to a large class of Laplace transforms and Mellin transforms. When combined with a computer algebra system, the exploitation of special functions provides a powerful method for solving definite integrals, in particular those encountered by practical engineering applications (see Symbolic integration for more details).
Indefinite and definite integrals
The following indefinite integrals are readily obtained using integration by parts (with the constant of integration omitted in both cases):
$\int x^{b-1}\gamma (s,x)dx={\frac {1}{b}}\left(x^{b}\gamma (s,x)-\gamma (s+b,x)\right),$
$\int x^{b-1}\Gamma (s,x)dx={\frac {1}{b}}\left(x^{b}\Gamma (s,x)-\Gamma (s+b,x)\right).$
The lower and the upper incomplete gamma function are connected via the Fourier transform:
$\int _{-\infty }^{\infty }{\frac {\gamma \left({\frac {s}{2}},z^{2}\pi \right)}{(z^{2}\pi )^{\frac {s}{2}}}}e^{-2\pi ikz}dz={\frac {\Gamma \left({\frac {1-s}{2}},k^{2}\pi \right)}{(k^{2}\pi )^{\frac {1-s}{2}}}}.$
This follows, for example, by suitable specialization of (Gradshteyn et al. 2015, §7.642).
Notes
1. DLMF, Incomplete Gamma functions, analytic continuation
2. http://dlmf.nist.gov/8.8.E7
3. "Archived copy" (PDF). Archived from the original (PDF) on 2011-05-16. Retrieved 2011-04-23.{{cite web}}: CS1 maint: archived copy as title (link) Theorem 3.9 on p.56
4. http://dlmf.nist.gov/8.7.E1
5. http://dlmf.nist.gov/8.2.ii
6. http://www.math.umn.edu/~garrett/m/complex/hartogs.pdf
7. http://dlmf.nist.gov/8.2.E6
8. http://math.berkeley.edu/~teleman/math/Riemann.pdf
9. http://dlmf.nist.gov/8.2.E8
10. http://dlmf.nist.gov/8.8.E1
11. http://dlmf.nist.gov/8.8.E12
12. http://dlmf.nist.gov/8.2.E1
13. http://dlmf.nist.gov/5.2.E1
14. http://dlmf.nist.gov/4.4.E15
15. "Archived copy" (PDF). Archived from the original (PDF) on 2011-05-16. Retrieved 2011-04-23.{{cite web}}: CS1 maint: archived copy as title (link) Theorem 3.9 on p.56
16. http://dlmf.nist.gov/8.2.E3
17. http://dlmf.nist.gov/8.7.E1
18. see last eq.
19. "DLMF: 8.4 Special Values".
20. "DLMF: 8.4 Special Values".
21. Weisstein, Eric W. "Incomplete Gamma Function". MathWorld. (equation 2)
22. Bender & Orszag (1978). Advanced Mathematical Methods for Scientists and Engineers. Springer.
23. Bender & Orszag (1978). Advanced Mathematical Methods for Scientists and Engineers. Springer.
24. DLMF, Incomplete Gamma functions, 8.11(i)
25. https://dlmf.nist.gov/8.11#ii
26. Abramowitz and Stegun p. 263, 6.5.31
27. gammainc
28. gammaincc
29. K.O. Geddes, M.L. Glasser, R.A. Moore and T.C. Scott, Evaluation of Classes of Definite Integrals Involving Elementary Functions via Differentiation of Special Functions, AAECC (Applicable Algebra in Engineering, Communication and Computing), vol. 1, (1990), pp. 149–165,
30. Milgram, M. S. (1985). "The generalized integro-exponential function". Math. Comp. 44 (170): 443–458. doi:10.1090/S0025-5718-1985-0777276-4. MR 0777276.
31. Mathar (2009). "Numerical Evaluation of the Oscillatory Integral over exp(i*pi*x)*x^(1/x) between 1 and infinity". arXiv:0912.3844 [math.CA]., App B
References
• Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 6.5". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253. "Incomplete Gamma function". §6.5.
• Allasia, Giampietro; Besenghi, Renata (1986). "Numerical calculation of incomplete gamma functions by the trapezoidal rule". Numer. Math. 50 (4): 419–428. doi:10.1007/BF01396662. S2CID 121964300.
• Amore, Paolo (2005). "Asymptotic and exact series representations for the incomplete Gamma function". Europhys. Lett. 71 (1): 1–7. arXiv:math-ph/0501019. Bibcode:2005EL.....71....1A. doi:10.1209/epl/i2005-10066-6. MR 2170316. S2CID 1921569.
• G. Arfken and H. Weber. Mathematical Methods for Physicists. Harcourt/Academic Press, 2000. (See Chapter 10.)
• DiDonato, Armido R.; Morris, Jr., Alfred H. (Dec 1986). "Computation of the incomplete gamma function ratios and their inverse". ACM Transactions on Mathematical Software. 12 (4): 377–393. doi:10.1145/22721.23109. S2CID 14351930.
Regularized canonical correlation analysis
Regularized canonical correlation analysis is a way of using ridge regression to solve the singularity problem in the cross-covariance matrices of canonical correlation analysis. By converting $\operatorname {cov} (X,X)$ and $\operatorname {cov} (Y,Y)$ into $\operatorname {cov} (X,X)+\lambda I_{X}$ and $\operatorname {cov} (Y,Y)+\lambda I_{Y}$, it ensures that these matrices have reliable inverses.
The idea probably dates back to Hrishikesh D. Vinod's publication in 1976 where he called it "Canonical ridge".[1][2] It has been suggested for use in the analysis of functional neuroimaging data as such data are often singular.[3] It is possible to compute the regularized canonical vectors in the lower-dimensional space.[4]
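As a concrete illustration, here is a minimal NumPy sketch of the idea; the function and variable names are illustrative choices of this sketch, not taken from any of the cited works. It ridge-regularizes the within-set covariance blocks before inverting them in the usual CCA eigenproblem:

```python
import numpy as np

def regularized_cca(X, Y, lam=0.1, n_components=2):
    """Sketch: add lam*I to cov(X,X) and cov(Y,Y), then solve the
    standard CCA eigenproblem with the regularized blocks."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + lam * np.eye(X.shape[1])  # cov(X,X) + lam*I_X
    Cyy = Y.T @ Y / n + lam * np.eye(Y.shape[1])  # cov(Y,Y) + lam*I_Y
    Cxy = X.T @ Y / n
    # Canonical correlations are the square roots of the eigenvalues of
    # Cxx^{-1} Cxy Cyy^{-1} Cyx; regularization keeps both inverses stable.
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[:n_components]
    return np.sqrt(np.clip(vals.real[order], 0, 1)), vecs.real[:, order]
```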
References
1. Hrishikesh D. Vinod (May 1976). "Canonical ridge and econometrics of joint production". Journal of Econometrics. 4 (2): 147–166. doi:10.1016/0304-4076(76)90010-5.
2. Kanti Mardia; et al. Multivariate Analysis.
3. Finn Årup Nielsen; Lars Kai Hansen; Stephen C. Strother (May 1998). "Canonical ridge analysis with ridge parameter optimization" (PDF). NeuroImage. 7 (4): S758. doi:10.1016/S1053-8119(18)31591-X. S2CID 54414890.
4. Finn Årup Nielsen (2001). Neuroinformatics in Functional Neuroimaging (PDF) (Thesis). Technical University of Denmark. Section 3.18.5
• Leurgans, S.E.; Moyeed, R.A.; Silverman, B.W. (1993). "Canonical correlation analysis when the data are curves". Journal of the Royal Statistical Society. Series B (Methodological). 55 (3): 725–740. JSTOR 2345883.
Regularized least squares
Regularized least squares (RLS) is a family of methods for solving the least-squares problem while using regularization to further constrain the resulting solution.
RLS is used for two main reasons. The first comes up when the number of variables in the linear system exceeds the number of observations. In such settings, the ordinary least-squares problem is ill-posed and is therefore impossible to fit because the associated optimization problem has infinitely many solutions. RLS allows the introduction of further constraints that uniquely determine the solution.
The second reason for using RLS arises when the learned model suffers from poor generalization. RLS can be used in such cases to improve the generalizability of the model by constraining it at training time. This constraint can either force the solution to be "sparse" in some way or to reflect other prior knowledge about the problem such as information about correlations between features. A Bayesian understanding of this can be reached by showing that RLS methods are often equivalent to priors on the solution to the least-squares problem.
General formulation
Consider a learning setting given by a probabilistic space $(X\times Y,\rho (X,Y))$, $Y\in R$. Let $S=\{x_{i},y_{i}\}_{i=1}^{n}$ denote a training set of $n$ pairs i.i.d. with respect to $\rho $. Let $V:Y\times R\rightarrow [0;\infty )$ be a loss function. Define $F$ as the space of functions for which the expected risk:
$\varepsilon (f)=\int V(y,f(x))\,d\rho (x,y)$
is well defined. The main goal is to minimize the expected risk:
$\inf _{f\in F}\varepsilon (f)$
Since the problem cannot be solved exactly, there is a need to specify how to measure the quality of a solution. A good learning algorithm should provide an estimator with a small risk.
As the joint distribution $\rho $ is typically unknown, the empirical risk is taken. For regularized least squares the square loss function is introduced:
$\varepsilon (f)={\frac {1}{n}}\sum _{i=1}^{n}V(y_{i},f(x_{i}))={\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}$
However, if the functions are from a relatively unconstrained space, such as the set of square-integrable functions on $X$, this approach may overfit the training data and lead to poor generalization. Thus, one should somehow constrain or penalize the complexity of the function $f$. In RLS, this is accomplished by choosing functions from a reproducing kernel Hilbert space (RKHS) ${\mathcal {H}}$, and adding a regularization term to the objective function, proportional to the norm of the function in ${\mathcal {H}}$:
$\inf _{f\in {\mathcal {H}}}\varepsilon (f)+\lambda R(f),\quad \lambda >0$
Kernel formulation
Definition of RKHS
A RKHS can be defined by a symmetric positive-definite kernel function $K(x,z)$ with the reproducing property:
$\langle K_{x},f\rangle _{\mathcal {H}}=f(x),$
where $K_{x}(z)=K(x,z)$. The RKHS for a kernel $K$ consists of the completion of the space of functions spanned by $\left\{K_{x}\mid x\in X\right\}$: $f(x)=\sum _{i=1}^{n}\alpha _{i}K_{x_{i}}(x),\,f\in {\mathcal {H}}$, where all $\alpha _{i}$ are real numbers. Some commonly used kernels include the linear kernel, inducing the space of linear functions:
$K(x,z)=x^{T}z,$
the polynomial kernel, inducing the space of polynomial functions of order $d$:
$K(x,z)=(x^{T}z+1)^{d},$
and the Gaussian kernel:
$K(x,z)=e^{-{\frac {\|x-z\|^{2}}{\sigma ^{2}}}}.$
Note that for an arbitrary loss function $V$, this approach defines a general class of algorithms named Tikhonov regularization. For instance, using the hinge loss leads to the support vector machine algorithm, and using the epsilon-insensitive loss leads to support vector regression.
Arbitrary kernel
The representer theorem guarantees that the solution can be written as:
$f(x)=\sum _{i=1}^{n}c_{i}K(x_{i},x)$ for some $c\in \mathbb {R} ^{n}$.
The minimization problem can be expressed as:
$\min _{c\in \mathbb {R} ^{n}}{\frac {1}{n}}\|Y-Kc\|_{\mathbb {R} ^{n}}^{2}+\lambda \|f\|_{H}^{2},$
where, with some abuse of notation, the $i,j$ entry of kernel matrix $K$ (as opposed to kernel function $K(\cdot ,\cdot )$) is $K(x_{i},x_{j})$.
For such a function,
${\begin{aligned}&\|f\|_{H}^{2}=\langle f,f\rangle _{H}=\left\langle \sum _{i=1}^{n}c_{i}K(x_{i},\cdot ),\sum _{j=1}^{n}c_{j}K(x_{j},\cdot )\right\rangle _{H}\\={}&\sum _{i=1}^{n}\sum _{j=1}^{n}c_{i}c_{j}\langle K(x_{i},\cdot ),K(x_{j},\cdot )\rangle _{H}=\sum _{i=1}^{n}\sum _{j=1}^{n}c_{i}c_{j}K(x_{i},x_{j})=c^{T}Kc,\end{aligned}}$
The following minimization problem can be obtained:
$\min _{c\in \mathbb {R} ^{n}}{\frac {1}{n}}\|Y-Kc\|_{\mathbb {R} ^{n}}^{2}+\lambda c^{T}Kc$.
As the sum of convex functions is convex, the solution is unique and its minimum can be found by setting the gradient w.r.t $c$ to $0$:
$-{\frac {1}{n}}K(Y-Kc)+\lambda Kc=0\Rightarrow K(K+\lambda nI)c=KY\Rightarrow c=(K+\lambda nI)^{-1}Y,$
where $c\in \mathbb {R} ^{n}.$
Complexity
The complexity of training is basically the cost of computing the kernel matrix plus the cost of solving the linear system which is roughly $O(n^{3})$. The computation of the kernel matrix for the linear or Gaussian kernel is $O(n^{2}D)$. The complexity of testing is $O(n)$.
Prediction
The prediction at a new test point $x_{*}$ is:
$f(x_{*})=\sum _{i=1}^{n}c_{i}K(x_{i},x_{*})=K(X,X_{*})^{T}c$
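The fit and prediction steps above translate directly into code. The following NumPy sketch uses the Gaussian kernel as one possible choice; all names are illustrative assumptions of this sketch:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # K(x, z) = exp(-||x - z||^2 / sigma^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / sigma**2)

def krls_fit(X, y, lam=1e-2, sigma=1.0):
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    # c = (K + lam*n*I)^{-1} Y, obtained by setting the gradient to zero
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def krls_predict(X_train, c, X_new, sigma=1.0):
    # f(x*) = sum_i c_i K(x_i, x*)
    return gaussian_kernel(X_new, X_train, sigma) @ c
```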
Linear kernel
For convenience a vector notation is introduced. Let $X$ be an $n\times d$ matrix, where the rows are input vectors, and $Y$ an $n\times 1$ vector where the entries are corresponding outputs. In terms of vectors, the kernel matrix can be written as $\operatorname {K} =\operatorname {X} \operatorname {X} ^{T}$. The learning function can be written as:
$f(x_{*})=\operatorname {K} _{x_{*}}c=x_{*}^{T}\operatorname {X} ^{T}c=x_{*}^{T}w$
Here we define $w=X^{T}c,w\in R^{d}$. The objective function can be rewritten as:
${\begin{aligned}&{\frac {1}{n}}\|Y-\operatorname {K} c\|_{\mathbb {R} ^{n}}^{2}+\lambda c^{T}\operatorname {K} c\\[4pt]={}&{\frac {1}{n}}\|y-\operatorname {X} \operatorname {X} ^{T}c\|_{\mathbb {R} ^{n}}^{2}+\lambda c^{T}\operatorname {X} \operatorname {X} ^{T}c={\frac {1}{n}}\|y-\operatorname {X} w\|_{\mathbb {R} ^{n}}^{2}+\lambda \|w\|_{\mathbb {R} ^{d}}^{2}\end{aligned}}$
The first term is the objective function from ordinary least squares (OLS) regression, corresponding to the residual sum of squares. The second term is a regularization term, not present in OLS, which penalizes large $w$ values. As a smooth finite-dimensional problem is considered, standard calculus tools can be applied. In order to minimize the objective function, the gradient is calculated with respect to $w$ and set to zero:
$\operatorname {X} ^{T}\operatorname {X} w-\operatorname {X} ^{T}y+\lambda nw=0$
$w=(\operatorname {X} ^{T}\operatorname {X} +\lambda n\operatorname {I} )^{-1}\operatorname {X} ^{T}y$
This solution closely resembles that of standard linear regression, with an extra term $\lambda \operatorname {I} $. If the assumptions of OLS regression hold, the solution $w=(\operatorname {X} ^{T}\operatorname {X} )^{-1}\operatorname {X} ^{T}y$, with $\lambda =0$, is an unbiased estimator, and is the minimum-variance linear unbiased estimator, according to the Gauss–Markov theorem. The term $\lambda n\operatorname {I} $ therefore leads to a biased solution; however, it also tends to reduce variance. This is easy to see, as the covariance matrix of the $w$-values is proportional to $(\operatorname {X} ^{T}\operatorname {X} +\lambda n\operatorname {I} )^{-1}$, and therefore large values of $\lambda $ will lead to lower variance. Therefore, manipulating $\lambda $ corresponds to trading-off bias and variance. For problems with high-variance $w$ estimates, such as cases with relatively small $n$ or with correlated regressors, the optimal prediction accuracy may be obtained by using a nonzero $\lambda $, and thus introducing some bias to reduce variance. Furthermore, it is not uncommon in machine learning to have cases where $n<d$, in which case $X^{T}X$ is rank-deficient, and a nonzero $\lambda $ is necessary to compute $(\operatorname {X} ^{T}\operatorname {X} +\lambda n\operatorname {I} )^{-1}$.
Complexity
The parameter $\lambda $ controls the invertibility of the matrix $X^{T}X+\lambda nI$. Several methods can be used to solve the above linear system, Cholesky decomposition being probably the method of choice, since the matrix $X^{T}X+\lambda nI$ is symmetric and positive definite. The complexity of this method is $O(nD^{2})$ for training and $O(D)$ for testing. The cost $O(nD^{2})$ is essentially that of computing $X^{T}X$, whereas the inverse computation (or rather the solution of the linear system) is roughly $O(D^{3})$.
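A sketch of this solver in Python, using the standard SciPy Cholesky routines and the $\lambda n$ scaling from the derivation above (the function name is an assumption of this sketch):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def ridge_fit(X, y, lam):
    """Solve w = (X^T X + lam*n*I)^{-1} X^T y with a Cholesky factorization,
    valid because X^T X + lam*n*I is symmetric positive definite for lam > 0."""
    n, d = X.shape
    A = X.T @ X + lam * n * np.eye(d)   # O(n d^2) to form
    factor = cho_factor(A)              # O(d^3) to factor
    return cho_solve(factor, X.T @ y)
```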
Feature maps and Mercer's theorem
In this section it will be shown how to extend RLS to any kind of reproducing kernel $K$. Instead of the linear kernel, a feature map $\Phi :X\rightarrow F$ is considered for some Hilbert space $F$, called the feature space. In this case the kernel is defined as
$K(x,x')=\langle \Phi (x),\Phi (x')\rangle _{F}.$
The matrix $X$ is now replaced by the new data matrix $\Phi $, where $\Phi _{ij}=\varphi _{j}(x_{i})$ is the $j$-th component of $\varphi (x_{i})$.
It means that for a given training set $K=\Phi \Phi ^{T}$. Thus, the objective function can be written as
$\min _{c\in \mathbb {R} ^{n}}\|Y-\Phi \Phi ^{T}c\|_{\mathbb {R} ^{n}}^{2}+\lambda c^{T}\Phi \Phi ^{T}c.$
This approach is known as the kernel trick. This technique can significantly simplify the computational operations. If $F$ is high dimensional, computing $\varphi (x_{i})$ may be rather intensive. If the explicit form of the kernel function is known, we just need to compute and store the $n\times n$ kernel matrix $\operatorname {K} $.
In fact, the Hilbert space $F$ need not be isomorphic to $\mathbb {R} ^{m}$, and can be infinite dimensional. This follows from Mercer's theorem, which states that a continuous, symmetric, positive definite kernel function can be expressed as
$K(x,z)=\sum _{i=1}^{\infty }\sigma _{i}e_{i}(x)e_{i}(z)$
where $e_{i}(x)$ form an orthonormal basis for $\ell ^{2}(X)$, and $\sigma _{i}\in \mathbb {R} $. If a feature map $\varphi (x)$ is defined with components $\varphi _{i}(x)={\sqrt {\sigma _{i}}}e_{i}(x)$, it follows that $K(x,z)=\langle \varphi (x),\varphi (z)\rangle $. This demonstrates that any kernel can be associated with a feature map, and that RLS generally consists of linear RLS performed in some possibly higher-dimensional feature space. While Mercer's theorem exhibits one feature map that can be associated with a kernel, in fact multiple feature maps can be associated with a given reproducing kernel. For instance, the map $\varphi (x)=K_{x}$ satisfies the property $K(x,z)=\langle \varphi (x),\varphi (z)\rangle $ for an arbitrary reproducing kernel.
Bayesian interpretation
Further information: Bayesian linear regression and Bayesian interpretation of kernel regularization
Least squares can be viewed as a likelihood maximization under an assumption of normally distributed residuals. This is because the exponent of the Gaussian distribution is quadratic in the data, and so is the least-squares objective function. In this framework, the regularization terms of RLS can be understood to be encoding priors on $w$. For instance, Tikhonov regularization corresponds to a normally distributed prior on $w$ that is centered at 0. To see this, first note that the OLS objective is proportional to the log-likelihood function when each sampled $y^{i}$ is normally distributed around $w^{T}\cdot x^{i}$. Then observe that a normal prior on $w$ centered at 0 has a log-probability of the form
$\log P(w)=q-\alpha \sum _{j=1}^{d}w_{j}^{2}$
where $q$ and $\alpha $ are constants that depend on the variance of the prior and are independent of $w$. Thus, minimizing the logarithm of the likelihood times the prior is equivalent to minimizing the sum of the OLS loss function and the ridge regression regularization term.
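To make this correspondence explicit, here is a short sketch of the computation under the stated Gaussian assumptions; the noise variance $\sigma ^{2}$ of the likelihood is introduced here for the sketch:
$-\log {\bigl (}P(y\mid w)\,P(w){\bigr )}={\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}(y^{i}-w^{T}\cdot x^{i})^{2}+\alpha \sum _{j=1}^{d}w_{j}^{2}+{\text{const}},$
so the maximum a posteriori estimate of $w$ minimizes an OLS loss plus a ridge penalty whose weight is proportional to $\alpha \sigma ^{2}$.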
This gives a more intuitive interpretation for why Tikhonov regularization leads to a unique solution to the least-squares problem: there are infinitely many vectors $w$ satisfying the constraints obtained from the data, but since we come to the problem with a prior belief that $w$ is normally distributed around the origin, we will end up choosing a solution with this constraint in mind.
Other regularization methods correspond to different priors. See the list below for more details.
Specific examples
Ridge regression (or Tikhonov regularization)
One particularly common choice for the penalty function $R$ is the squared $\ell _{2}$ norm, i.e.,
$R(w)=\sum _{j=1}^{d}w_{j}^{2}$
${\frac {1}{n}}\|Y-\operatorname {X} w\|_{2}^{2}+\lambda \sum _{j=1}^{d}|w_{j}|^{2}\rightarrow \min _{w\in \mathbb {R} ^{d}}$
The most common names for this are Tikhonov regularization and ridge regression. It admits a closed-form solution for $w$:
$w=(X^{T}X+\lambda I)^{-1}X^{T}Y$
The name ridge regression alludes to the fact that the $\lambda I$ term adds positive entries along the diagonal "ridge" of the sample covariance matrix $X^{T}X$.
When $\lambda =0$, i.e., in the case of ordinary least squares, the condition that $d>n$ causes the sample covariance matrix $X^{T}X$ to not have full rank and so it cannot be inverted to yield a unique solution. This is why there can be an infinitude of solutions to the ordinary least squares problem when $d>n$. However, when $\lambda >0$, i.e., when ridge regression is used, the addition of $\lambda I$ to the sample covariance matrix ensures that all of its eigenvalues will be strictly greater than 0. In other words, it becomes invertible, and the solution becomes unique.
Compared to ordinary least squares, ridge regression is not unbiased. It accepts bias to reduce variance and the mean square error.
Lasso regression
Main article: Lasso (statistics)
The least absolute shrinkage and selection operator (LASSO) method is another popular choice. In lasso regression, the lasso penalty function $R$ is the $\ell _{1}$ norm, i.e.
$R(w)=\sum _{j=1}^{d}\left|w_{j}\right|$
${\frac {1}{n}}\|Y-\operatorname {X} w\|_{2}^{2}+\lambda \sum _{j=1}^{d}|w_{j}|\rightarrow \min _{w\in \mathbb {R} ^{d}}$
Note that the lasso penalty function is convex but not strictly convex. Unlike Tikhonov regularization, this scheme does not have a convenient closed-form solution: instead, the solution is typically found using quadratic programming or more general convex optimization methods, as well as by specific algorithms such as the least-angle regression algorithm.
An important difference between lasso regression and Tikhonov regularization is that lasso regression forces more entries of $w$ to actually equal 0 than would otherwise. In contrast, while Tikhonov regularization forces entries of $w$ to be small, it does not force more of them to be 0 than would be otherwise. Thus, LASSO regularization is more appropriate than Tikhonov regularization in cases in which we expect the number of non-zero entries of $w$ to be small, and Tikhonov regularization is more appropriate when we expect that entries of $w$ will generally be small but not necessarily zero. Which of these regimes is more relevant depends on the specific data set at hand.
Besides the feature selection described above, LASSO has some limitations. Ridge regression provides better accuracy in the case $n>d$ for highly correlated variables.[1] In the other case, $n<d$, LASSO selects at most $n$ variables. Moreover, LASSO tends to select arbitrary variables from a group of highly correlated variables, so there is no grouping effect.
ℓ0 Penalization
${\frac {1}{n}}\|Y-\operatorname {X} w\|_{2}^{2}+\lambda \|w\|_{0}\rightarrow \min _{w\in \mathbb {R} ^{d}}$
The most extreme way to enforce sparsity is to say that the actual magnitude of the coefficients of $w$ does not matter; rather, the only thing that determines the complexity of $w$ is the number of non-zero entries. This corresponds to setting $R(w)$ to be the $\ell _{0}$ norm of $w$. This regularization function, while attractive for the sparsity that it guarantees, is very difficult to solve because doing so requires optimization of a function that is not even weakly convex. Lasso regression is the minimal possible relaxation of $\ell _{0}$ penalization that yields a weakly convex optimization problem.
Elastic net
Main article: Elastic net regularization
For any non-negative $\lambda _{1}$ and $\lambda _{2}$ the objective has the following form:
${\frac {1}{n}}\|Y-\operatorname {X} w\|_{2}^{2}+\lambda _{1}\sum _{j=1}^{d}|w_{j}|+\lambda _{2}\sum _{j=1}^{d}|w_{j}|^{2}\rightarrow \min _{w\in \mathbb {R} ^{d}}$
Let $\alpha ={\frac {\lambda _{1}}{\lambda _{1}+\lambda _{2}}}$, then the solution of the minimization problem is described as:
${\frac {1}{n}}\|Y-\operatorname {X} w\|_{2}^{2}\rightarrow \min _{w\in \mathbb {R} ^{d}}{\text{ s.t. }}(1-\alpha )\|w\|_{1}+\alpha \|w\|_{2}\leq t$ for some $t$.
Consider $(1-\alpha )\|w\|_{1}+\alpha \|w\|_{2}\leq t$ as an Elastic Net penalty function.
When $\alpha =1$, elastic net becomes ridge regression, whereas for $\alpha =0$ it becomes lasso. For all $\alpha \in (0,1]$ the elastic net penalty function does not have a first derivative at 0, and it is strictly convex for all $\alpha >0$, combining the properties of both lasso regression and ridge regression.
One of the main properties of the Elastic Net is that it can select groups of correlated variables. The difference between weight vectors of samples $x_{i}$ and $x_{j}$ is given by:
$|w_{i}^{*}(\lambda _{1},\lambda _{2})-w_{j}^{*}(\lambda _{1},\lambda _{2})|\leq {\frac {\sum _{i=1}^{n}|y_{i}|}{\lambda _{2}}}{\sqrt {2(1-\rho _{ij})}}$, where $\rho _{ij}=x_{i}^{T}x_{j}$.[2]
If $x_{i}$ and $x_{j}$ are highly correlated ( $\rho _{ij}\rightarrow 1$), the weight vectors are very close. In the case of negatively correlated samples ( $\rho _{ij}\rightarrow -1$) the samples $-x_{j}$ can be taken. To summarize, for highly correlated variables the weight vectors tend to be equal, up to a change of sign in the case of negatively correlated variables.
Partial list of RLS methods
The following is a list of possible choices of the regularization function $R(\cdot )$, along with the name for each one, the corresponding prior if there is a simple one, and ways for computing the solution to the resulting optimization problem.
| Name | Regularization function | Corresponding prior | Methods for solving |
| --- | --- | --- | --- |
| Tikhonov regularization | $\|w\|_{2}^{2}$ | Normal | Closed form |
| Lasso regression | $\|w\|_{1}$ | Laplace | Proximal gradient descent, least angle regression |
| $\ell _{0}$ penalization | $\|w\|_{0}$ | – | Forward selection, backward elimination, use of priors such as spike and slab |
| Elastic nets | $\beta \|w\|_{1}+(1-\beta )\|w\|_{2}^{2}$ | Normal and Laplace mixture | Proximal gradient descent |
| Total variation regularization | $\sum _{j=1}^{d-1}|w_{j+1}-w_{j}|$ | – | Split–Bregman method, among others |
See also
• Least squares
• Regularization in mathematics.
• Generalization error, one of the reasons regularization is used.
• Tikhonov regularization
• Lasso regression
• Elastic net regularization
• Least-angle regression
References
1. Tibshirani, Robert (1996). "Regression shrinkage and selection via the lasso" (PDF). Journal of the Royal Statistical Society, Series B. 58 (1): 267–288.
2. Zou, Hui; Hastie, Trevor (2005). "Regularization and Variable Selection via the Elastic Net" (PDF). Journal of the Royal Statistical Society, Series B. 67 (2): 301–320.
External links
• http://www.stanford.edu/~hastie/TALKS/enet_talk.pdf Regularization and Variable Selection via the Elastic Net (presentation)
• Regularized Least Squares and Support Vector Machines (presentation)
• Regularized Least Squares (presentation)
Benjamin–Bona–Mahony equation
The Benjamin–Bona–Mahony equation (BBM equation, also regularized long-wave equation; RLWE) is the partial differential equation
$u_{t}+u_{x}+uu_{x}-u_{xxt}=0.\,$
This equation was studied in Benjamin, Bona, and Mahony (1972) as an improvement of the Korteweg–de Vries equation (KdV equation) for modeling long surface gravity waves of small amplitude – propagating uni-directionally in 1+1 dimensions. They show the stability and uniqueness of solutions to the BBM equation. This contrasts with the KdV equation, which is unstable in its high wavenumber components. Further, while the KdV equation has an infinite number of integrals of motion, the BBM equation only has three.[2][3]
Earlier, in 1966, this equation had been introduced by Peregrine in the study of undular bores.[4]
A generalized n-dimensional version is given by[5][6]
$u_{t}-\nabla ^{2}u_{t}+\operatorname {div} \,\varphi (u)=0.\,$
where $\varphi $ is a sufficiently smooth function from $\mathbb {R} $ to $\mathbb {R} ^{n}$. Avrin & Goldstein (1985) proved global existence of a solution in all dimensions.
Solitary wave solution
The BBM equation possesses solitary wave solutions of the form:[3]
$u=3{\frac {c^{2}}{1-c^{2}}}\operatorname {sech} ^{2}{\frac {1}{2}}\left(cx-{\frac {ct}{1-c^{2}}}+\delta \right),$
where sech is the hyperbolic secant function and $\delta $ is a phase shift (by an initial horizontal displacement). For $|c|<1$, the solitary waves have a positive crest elevation and travel in the positive $x$-direction with velocity $1/(1-c^{2}).$ These solitary waves are not solitons, i.e. after interaction with other solitary waves, an oscillatory tail is generated and the solitary waves have changed.[1][3]
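One can check this solution numerically. The following Python sketch (step sizes and parameter values are arbitrary choices) evaluates the residual of the BBM equation on the solitary wave by finite differences:

```python
import numpy as np

# Solitary wave u = 3 c^2/(1-c^2) sech^2( (c x - c t/(1-c^2) + delta)/2 )
c, delta = 0.5, 0.0
u = lambda x, t: 3*c**2/(1 - c**2) / np.cosh(0.5*(c*x - c*t/(1 - c**2) + delta))**2

x = np.linspace(-20.0, 20.0, 2001)
h, k = x[1] - x[0], 1e-5

u_t   = (u(x, k) - u(x, -k)) / (2*k)          # central difference in t
u_x   = np.gradient(u(x, 0.0), h)             # central difference in x
u_xxt = np.gradient(np.gradient(u_t, h), h)   # mixed derivative u_xxt

residual = u_t + u_x + u(x, 0.0)*u_x - u_xxt
print(np.abs(residual).max())   # small: only finite-difference error remains
```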
Hamiltonian structure
The BBM equation has a Hamiltonian structure, as it can be written as:[7]
$u_{t}=-{\mathcal {D}}{\frac {\delta H}{\delta u}},\,$ with Hamiltonian $H=\int _{-\infty }^{+\infty }\left({\tfrac {1}{2}}u^{2}+{\tfrac {1}{6}}u^{3}\right)\,{\text{d}}x\,$ and operator ${\mathcal {D}}=\left(1-\partial _{x}^{2}\right)^{-1}\,\partial _{x}.$
Here $\delta H/\delta u$ is the variation of the Hamiltonian $H(u)$ with respect to $u(x),$ and $\partial _{x}$ denotes the partial differential operator with respect to $x.$
Conservation laws
The BBM equation possesses exactly three independent and non-trivial conservation laws.[3] First $u$ is replaced by $u=-v-1$ in the BBM equation, leading to the equivalent equation:
$v_{t}-v_{xxt}=v\,v_{x}.$
The three conservation laws then are:[3]
${\begin{aligned}v_{t}-\left(v_{xt}+{\tfrac {1}{2}}v^{2}\right)_{x}&=0,\\\left({\tfrac {1}{2}}v^{2}+{\tfrac {1}{2}}v_{x}^{2}\right)_{t}-\left(v\,v_{xt}+{\tfrac {1}{3}}v^{3}\right)_{x}&=0,\\\left({\tfrac {1}{3}}v^{3}\right)_{t}+\left(v_{t}^{2}-v_{xt}^{2}-v^{2}\,v_{xt}-{\tfrac {1}{4}}v^{4}\right)_{x}&=0.\end{aligned}}$
These can easily be expressed in terms of $u$ by using $v=-u-1.$
Linear dispersion
The linearized version of the BBM equation is:
$u_{t}+u_{x}-u_{xxt}=0.$
Periodic progressive wave solutions are of the form:
$u=a\,\mathrm {e} ^{i(kx-\omega t)},$
with $k$ the wavenumber and $\omega $ the angular frequency. The dispersion relation of the linearized BBM equation is[2]
$\omega _{\mathrm {BBM} }={\frac {k}{1+k^{2}}}.$
Similarly, for the linearized KdV equation $u_{t}+u_{x}+u_{xxx}=0$ the dispersion relation is:[2]
$\omega _{\mathrm {KdV} }=k-k^{3}.$
This becomes unbounded and negative for $k\to \infty ,$ and the same applies to the phase velocity $\omega _{\mathrm {KdV} }/k$ and group velocity $\mathrm {d} \omega _{\mathrm {KdV} }/\mathrm {d} k.$ Consequently, the KdV equation gives waves travelling in the negative $x$-direction for high wavenumbers (short wavelengths). This is in contrast with its purpose as an approximation for uni-directional waves propagating in the positive $x$-direction.[2]
The strong growth of frequency $\omega _{\mathrm {KdV} }$ and phase speed with wavenumber $k$ posed problems in the numerical solution of the KdV equation, while the BBM equation does not have these shortcomings.[2]
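A short numerical comparison of the two dispersion relations (sample wavenumbers chosen arbitrarily) makes the contrast concrete:

```python
import numpy as np

k = np.array([0.5, 1.0, 5.0, 50.0])
print(k / (1 + k**2))   # BBM frequency: stays bounded as k grows
print(1 - k**2)         # KdV phase speed w/k: turns negative for k > 1
```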
Notes
1. Bona, Pritchard & Scott (1980)
2. Benjamin, Bona, and Mahony (1972)
3. Olver (1979)
4. Peregrine (1966)
5. Goldstein & Wichnoski (1980)
6. Avrin & Goldstein (1985)
7. Olver, P.J. (1980), "On the Hamiltonian structure of evolution equations", Mathematical Proceedings of the Cambridge Philosophical Society, 88 (1): 71–88, Bibcode:1980MPCPS..88...71O, doi:10.1017/S0305004100057364, S2CID 10607644
References
• Avrin, J.; Goldstein, J.A. (1985), "Global existence for the Benjamin–Bona–Mahony equation in arbitrary dimensions", Nonlinear Analysis, 9 (8): 861–865, doi:10.1016/0362-546X(85)90023-9, MR 0799889
• Benjamin, T. B.; Bona, J. L.; Mahony, J. J. (1972), "Model Equations for Long Waves in Nonlinear Dispersive Systems", Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences, 272 (1220): 47–78, Bibcode:1972RSPTA.272...47B, doi:10.1098/rsta.1972.0032, ISSN 0962-8428, JSTOR 74079, S2CID 120673596
• Bona, J. L.; Pritchard, W. G.; Scott, L. R. (1980), "Solitary‐wave interaction", Physics of Fluids, 23 (3): 438–441, Bibcode:1980PhFl...23..438B, doi:10.1063/1.863011
• Goldstein, J.A.; Wichnoski, B.J. (1980), "On the Benjamin–Bona–Mahony equation in higher dimensions", Nonlinear Analysis, 4 (4): 665–675, doi:10.1016/0362-546X(80)90067-X
• Olver, P. J. (1979), "Euler operators and conservation laws of the BBM equation", Mathematical Proceedings of the Cambridge Philosophical Society, 85 (1): 143–160, Bibcode:1979MPCPS..85..143O, doi:10.1017/S0305004100055572, S2CID 10840014
• Peregrine, D.H. (1966), "Calculations of the development of an undular bore", Journal of Fluid Mechanics, 25 (2): 321–330, Bibcode:1966JFM....25..321P, doi:10.1017/S0022112066001678, S2CID 122299686
• Zwillinger, D. (1998), Handbook of differential equations (3rd ed.), Boston, MA: Academic Press, pp. 174 & 176, ISBN 978-0-12-784396-4, MR 0977062 (Warning: On p. 174 Zwillinger misstates the Benjamin–Bona–Mahony equation, confusing it with the similar KdV equation.)
Regularized meshless method
In numerical mathematics, the regularized meshless method (RMM), also known as the singular meshless method or desingularized meshless method, is a meshless boundary collocation method designed to solve certain partial differential equations whose fundamental solution is explicitly known. The RMM is a strong-form collocation method whose merits are that it is meshless, integration-free, easy to implement, and highly stable. Until now this method has been successfully applied to some typical problems, such as potential, acoustics, water wave, and inverse problems of bounded and unbounded domains.
Description
The RMM employs the double layer potentials from the potential theory as its basis/kernel functions. Like the method of fundamental solutions (MFS),[1][2] the numerical solution is approximated by a linear combination of double layer kernel functions with respect to different source points. Unlike the MFS, the collocation and source points of the RMM, however, are coincident and placed on the physical boundary without the need of a fictitious boundary in the MFS. Thus, the RMM overcomes the major bottleneck in the MFS applications to the real world problems.
Upon the coincidence of the collocation and source points, the double layer kernel functions will present various orders of singularity. Thus, a subtracting and adding-back regularizing technique [3] is introduced and, hence, removes or cancels such singularities.
History and recent development
These days the finite element method (FEM), finite difference method (FDM), finite volume method (FVM), and boundary element method (BEM) are the dominant numerical techniques in numerical modeling across many fields of engineering and science. Mesh generation is tedious, computationally costly, and often mathematically troublesome, and it becomes very challenging for high-dimensional, moving, or complex-shaped boundary problems.
The BEM has long been claimed to alleviate such drawbacks thanks to its boundary-only discretizations and its semi-analytical nature. Despite these merits, the BEM involves quite sophisticated mathematics and some tricky singular integrals. Moreover, surface meshing in a three-dimensional domain remains a nontrivial task. Over the past decades, considerable efforts have been devoted to alleviating or eliminating these difficulties, leading to the development of meshless/meshfree boundary collocation methods which require neither domain nor boundary meshing. Among these methods, the MFS is the most popular, with the merits of easy programming, mathematical simplicity, high accuracy, and fast convergence.
In the MFS, a fictitious boundary outside the problem domain is required in order to avoid the singularity of the fundamental solution. However, determining the optimal location of the fictitious boundary is a nontrivial task that remains under study. Considerable efforts have since been made to remove this long-perplexing issue. Recent advances include, for example, the boundary knot method (BKM),[4][5] the regularized meshless method (RMM),[3] the modified MFS (MMFS),[6] and the singular boundary method (SBM).[7]
The methodology of the RMM was first proposed by Young and his collaborators in 2005. The key idea is to introduce a subtracting and adding-back regularizing technique to remove the singularity of the double layer kernel function at the origin, so that the source points can be placed directly on the real boundary. Up to now, the RMM has successfully been applied to a variety of physical problems, such as potential,[3] exterior acoustics,[8] antiplane piezoelectricity,[9] acoustic eigenproblems with multiply-connected domains,[10] inverse problems,[11] Poisson's equation,[12] and water wave problems.[13] Furthermore, some improved formulations have been made aiming to further improve the feasibility and efficiency of this method, see, for example, the weighted RMM for irregular domain problems [14] and the analytical RMM for 2D Laplace problems.[15]
See also
• Radial basis function
• Boundary element method
• Method of fundamental solutions
• Boundary knot method
• Boundary particle method
• Singular boundary method
References
1. G. Fairweather, A. Karageorghis, The method of fundamental solutions for elliptic boundary value problems, Advances in Computational Mathematics. 9 (1998) 69–95.
2. M.A. Golberg, C.S. Chen, The theory of radial basis functions applied to the BEM for inhomogeneous partial differential equations, Boundary Elements Communications. 5 (1994) 57–61.
3. D.L. Young, K.H. Chen, C.W. Lee. Novel meshless method for solving the potential problems with arbitrary domains. Journal of Computational Physics 2005; 209(1): 290–321.
4. W. Chen and M. Tanaka, "A meshfree, exponential convergence, integration-free, and boundary-only RBF technique Archived 2016-03-04 at the Wayback Machine", Computers and Mathematics with Applications, 43, 379–391, 2002.
5. W. Chen and Y.C. Hon, "Numerical convergence of boundary knot method in the analysis of Helmholtz, modified Helmholtz, and convection-diffusion problems Archived 2015-06-20 at the Wayback Machine", Computer Methods in Applied Mechanics and Engineering, 192, 1859–1875, 2003.
6. B. Sarler, "Solution of potential flow problems by the modified method of fundamental solutions: Formulations with the single layer and the double layer fundamental solutions", Eng Anal Bound Elem 2009;33(12): 1374–82.
7. W. Chen, F.Z. Wang, "A method of fundamental solutions without fictitious boundary Archived 2015-06-06 at the Wayback Machine", Eng Anal Bound Elem 2010;34(5): 530–32.
8. D.L. Young, K.H. Chen, C.W. Lee. Singular meshless method using double layer potentials for exterior acoustics.Journal of the Acoustical Society of America 2006;119(1):96–107.
9. K.H. Chen, J.H. Kao, J.T. Chen. Regularized meshless method for antiplane piezoelectricity problems with multiple inclusions. Computers, Materials, & Continua 2009;9(3):253–79.
10. K.H. Chen, J.T. Chen, J.H. Kao. Regularized meshless method for solving acoustic eigenproblem with multiply-connected domain. Computer Modeling in Engineering & Sciences 2006;16(1):27–39.
11. K.H. Chen, J.H. Kao, J.T. Chen, K.L. Wu. Desingularized meshless method for solving Laplace equation with over-specified boundary conditions using regularization techniques. Computational Mechanics 2009;43:827–37
12. W. Chen, J. Lin, F.Z. Wang, "Regularized meshless method for nonhomogeneous problems Archived 2015-06-06 at the Wayback Machine", Eng. Anal. Bound. Elem. 35 (2011) 253–257.
13. K.H. Chen, M.C. Lu, H.M. Hsu, Regularized meshless method analysis of the problem of obliquely incident water wave, Eng. Anal. Bound. Elem. 35 (2011) 355–362.
14. R.C. Song, W. Chen,"An investigation on the regularized meshless method for irregular domain problems", CMES-Comput. Model. Eng. Sci. 42 (2009) 59–70.
15. W. Chen, R.C. Song, Analytical diagonal elements of regularized meshless method for regular domains of 2D Dirichlet Laplace problems, Eng. Anal. Bound. Elem. 34 (2010) 2–8.
Regularization (mathematics)
In mathematics, statistics, finance,[1] and computer science, particularly in machine learning and inverse problems, regularization is a process that converts the answer to a problem into a "simpler" one. It is often used to obtain results for ill-posed problems or to prevent overfitting.[2]
Although regularization procedures can be divided in many ways, the following delineation is particularly helpful:
• Explicit regularization is regularization whenever one explicitly adds a term to the optimization problem. These terms could be priors, penalties, or constraints. Explicit regularization is commonly employed with ill-posed optimization problems. The regularization term, or penalty, imposes a cost on the optimization function to make the optimal solution unique.
• Implicit regularization is all other forms of regularization. This includes, for example, early stopping, using a robust loss function, and discarding outliers. Implicit regularization is essentially ubiquitous in modern machine learning approaches, including stochastic gradient descent for training deep neural networks, and ensemble methods (such as random forests and gradient boosted trees).
In explicit regularization, independent of the problem or model, there is always a data term that corresponds to a likelihood of the measurement, and a regularization term that corresponds to a prior. By combining both using Bayesian statistics, one can compute a posterior that includes both information sources and therefore stabilizes the estimation process. By trading off both objectives, one chooses to be more faithful to the data or to enforce generalization (to prevent overfitting). There is a whole research branch dealing with all possible regularizations. In practice, one usually tries a specific regularization and then figures out the probability density that corresponds to that regularization to justify the choice. It can also be physically motivated by common sense or intuition.
In machine learning, the data term corresponds to the training data and the regularization is either the choice of the model or modifications to the algorithm. It is always intended to reduce the generalization error, i.e. the error score with the trained model on the evaluation set and not the training data.[3]
One of the earliest uses of regularization is Tikhonov regularization, related to the method of least squares.
Classification
Empirical learning of classifiers (from a finite data set) is always an underdetermined problem, because it attempts to infer a function of any $x$ given only examples $x_{1},x_{2},...x_{n}$.
A regularization term (or regularizer) $R(f)$ is added to a loss function:
$\min _{f}\sum _{i=1}^{n}V(f(x_{i}),y_{i})+\lambda R(f)$
where $V$ is an underlying loss function that describes the cost of predicting $f(x)$ when the label is $y$, such as the square loss or hinge loss; and $\lambda $ is a parameter which controls the importance of the regularization term. $R(f)$ is typically chosen to impose a penalty on the complexity of $f$. Concrete notions of complexity used include restrictions for smoothness and bounds on the vector space norm.[4]
A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution: among functions that fit the data comparably well, the simpler one may be preferred. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters.[5]
Regularization can serve multiple purposes, including learning simpler models, inducing models to be sparse and introducing group structure into the learning problem.
The same idea arose in many fields of science. A simple form of regularization applied to integral equations (Tikhonov regularization) is essentially a trade-off between fitting the data and reducing a norm of the solution. More recently, non-linear regularization methods, including total variation regularization, have become popular.
Generalization
Regularization can be motivated as a technique to improve the generalizability of a learned model.
The goal of this learning problem is to find a function that fits or predicts the outcome (label) that minimizes the expected error over all possible inputs and labels. The expected error of a function $f_{n}$ is:
$I[f_{n}]=\int _{X\times Y}V(f_{n}(x),y)\rho (x,y)\,dx\,dy$
where $X$ and $Y$ are the domains of input data $x$ and their labels $y$ respectively.
Typically in learning problems, only a subset of input data and labels are available, measured with some noise. Therefore, the expected error is unmeasurable, and the best surrogate available is the empirical error over the $N$ available samples:
$I_{S}[f_{n}]={\frac {1}{N}}\sum _{i=1}^{N}V(f_{n}({\hat {x}}_{i}),{\hat {y}}_{i})$
Without bounds on the complexity of the function space (formally, the reproducing kernel Hilbert space) available, a model will be learned that incurs zero loss on the surrogate empirical error. If measurements (e.g. of $x_{i}$) were made with noise, this model may suffer from overfitting and display poor expected error. Regularization introduces a penalty for exploring certain regions of the function space used to build the model, which can improve generalization.
Tikhonov regularization
These techniques are named for Andrey Nikolayevich Tikhonov, who applied regularization to integral equations and made important contributions in many other areas.
When learning a linear function $f$, characterized by an unknown vector $w$ such that $f(x)=w\cdot x$, one can add the $L_{2}$-norm of the vector $w$ to the loss expression in order to prefer solutions with smaller norms. Tikhonov regularization is one of the most common forms. It is also known as ridge regression. It is expressed as:
$\min _{w}\sum _{i=1}^{n}V({\hat {x}}_{i}\cdot w,{\hat {y}}_{i})+\lambda \|w\|_{2}^{2}$,
where $({\hat {x}}_{i},{\hat {y}}_{i}),\,1\leq i\leq n,$ would represent samples used for training.
In the case of a general function, the norm of the function in its reproducing kernel Hilbert space is:
$\min _{f}\sum _{i=1}^{n}V(f({\hat {x}}_{i}),{\hat {y}}_{i})+\lambda \|f\|_{\mathcal {H}}^{2}$
As the $L_{2}$ norm is differentiable, learning can be advanced by gradient descent.
Tikhonov-regularized least squares
The learning problem with the least squares loss function and Tikhonov regularization can be solved analytically. Written in matrix form, the optimal $w$ is the one for which the gradient of the loss function with respect to $w$ is 0.
$\min _{w}{\frac {1}{n}}({\hat {X}}w-Y)^{T}({\hat {X}}w-Y)+\lambda \|w\|_{2}^{2}$
$\nabla _{w}={\frac {2}{n}}{\hat {X}}^{T}({\hat {X}}w-Y)+2\lambda w$
$0={\hat {X}}^{T}({\hat {X}}w-Y)+n\lambda w$ (first-order condition)
$w=({\hat {X}}^{T}{\hat {X}}+\lambda nI)^{-1}({\hat {X}}^{T}Y)$
By construction of the optimization problem, other values of $w$ give larger values for the loss function. This can be verified by examining the second derivative $\nabla _{ww}$.
During training, this algorithm takes $O(d^{3}+nd^{2})$ time. The terms correspond to the matrix inversion and calculating $X^{T}X$, respectively. Testing takes $O(nd)$ time.
Early stopping
Early stopping can be viewed as regularization in time. Intuitively, a training procedure such as gradient descent tends to learn more and more complex functions with increasing iterations. By regularizing for time, model complexity can be controlled, improving generalization.
Early stopping is implemented using one data set for training, one statistically independent data set for validation and another for testing. The model is trained until performance on the validation set no longer improves and then applied to the test set.
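A sketch of this procedure for the least-squares risk in Python; the function name and the patience heuristic are illustrative assumptions of this sketch:

```python
import numpy as np

def early_stopped_least_squares(X, y, X_val, y_val,
                                gamma=0.1, max_iter=10_000, patience=20):
    """Gradient descent on the empirical squared error, stopped once the
    validation error stops improving: regularization in time."""
    n, d = X.shape
    w = np.zeros(d)
    best_w, best_err, stale = w.copy(), np.inf, 0
    for _ in range(max_iter):
        w -= (gamma / n) * X.T @ (X @ w - y)
        err = np.mean((X_val @ w - y_val) ** 2)
        if err < best_err:
            best_w, best_err, stale = w.copy(), err, 0
        else:
            stale += 1
            if stale > patience:
                break
    return best_w
```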
Theoretical motivation in least squares
Consider the finite approximation of Neumann series for an invertible matrix A where $\|I-A\|<1$:
$\sum _{i=0}^{T-1}(I-A)^{i}\approx A^{-1}$
This can be used to approximate the analytical solution of unregularized least squares, if γ is introduced to ensure the norm is less than one.
$w_{T}={\frac {\gamma }{n}}\sum _{i=0}^{T-1}(I-{\frac {\gamma }{n}}{\hat {X}}^{T}{\hat {X}})^{i}{\hat {X}}^{T}{\hat {Y}}$
The exact solution to the unregularized least squares learning problem minimizes the empirical error, but may fail. By limiting T, the only free parameter in the algorithm above, the problem is regularized for time, which may improve its generalization.
The algorithm above is equivalent to restricting the number of gradient descent iterations for the empirical risk
$I_{s}[w]={\frac {1}{2n}}\|{\hat {X}}w-{\hat {Y}}\|_{\mathbb {R} ^{n}}^{2}$
with the gradient descent update:
${\begin{aligned}w_{0}&=0\\w_{t+1}&=(I-{\frac {\gamma }{n}}{\hat {X}}^{T}{\hat {X}})w_{t}+{\frac {\gamma }{n}}{\hat {X}}^{T}{\hat {Y}}\end{aligned}}$
The base case is trivial. The inductive case is proved as follows:
${\begin{aligned}w_{T}&=(I-{\frac {\gamma }{n}}{\hat {X}}^{T}{\hat {X}}){\frac {\gamma }{n}}\sum _{i=0}^{T-2}(I-{\frac {\gamma }{n}}{\hat {X}}^{T}{\hat {X}})^{i}{\hat {X}}^{T}{\hat {Y}}+{\frac {\gamma }{n}}{\hat {X}}^{T}{\hat {Y}}\\&={\frac {\gamma }{n}}\sum _{i=1}^{T-1}(I-{\frac {\gamma }{n}}{\hat {X}}^{T}{\hat {X}})^{i}{\hat {X}}^{T}{\hat {Y}}+{\frac {\gamma }{n}}{\hat {X}}^{T}{\hat {Y}}\\&={\frac {\gamma }{n}}\sum _{i=0}^{T-1}(I-{\frac {\gamma }{n}}{\hat {X}}^{T}{\hat {X}})^{i}{\hat {X}}^{T}{\hat {Y}}\end{aligned}}$
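The equivalence is easy to verify numerically. This sketch (random data; $\gamma $ and $T$ chosen arbitrarily) compares the truncated Neumann series with $T$ gradient-descent steps:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T, gamma = 50, 5, 30, 0.1
X, y = rng.normal(size=(n, d)), rng.normal(size=n)

B = np.eye(d) - (gamma / n) * X.T @ X
# Truncated Neumann series: w_T = (gamma/n) * sum_{i<T} B^i X^T y
w_series = sum(np.linalg.matrix_power(B, i) for i in range(T)) @ ((gamma / n) * X.T @ y)

# T gradient-descent steps from w_0 = 0
w = np.zeros(d)
for _ in range(T):
    w = B @ w + (gamma / n) * X.T @ y

print(np.allclose(w, w_series))   # True
```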
Regularizers for sparsity
Assume that a dictionary $\phi _{j}$ with dimension $p$ is given such that a function in the function space can be expressed as:
$f(x)=\sum _{j=1}^{p}\phi _{j}(x)w_{j}$
Enforcing a sparsity constraint on $w$ can lead to simpler and more interpretable models. This is useful in many real-life applications such as computational biology. An example is developing a simple predictive test for a disease in order to minimize the cost of performing medical tests while maximizing predictive power.
A sensible sparsity constraint is the $L_{0}$ norm $\|w\|_{0}$, defined as the number of non-zero elements in $w$. Solving a $L_{0}$ regularized learning problem, however, has been demonstrated to be NP-hard.[6]
The $L_{1}$ norm (see also Norms) can be used to approximate the optimal $L_{0}$ norm via convex relaxation. It can be shown that the $L_{1}$ norm induces sparsity. In the case of least squares, this problem is known as LASSO in statistics and basis pursuit in signal processing.
$\min _{w\in \mathbb {R} ^{p}}{\frac {1}{n}}\|{\hat {X}}w-{\hat {Y}}\|^{2}+\lambda \|w\|_{1}$
$L_{1}$ regularization can occasionally produce non-unique solutions; for example, this occurs when the space of feasible solutions lies along a 45-degree line. This can be problematic for certain applications, and is overcome by combining $L_{1}$ with $L_{2}$ regularization in elastic net regularization, which takes the following form:
$\min _{w\in \mathbb {R} ^{p}}{\frac {1}{n}}\|{\hat {X}}w-{\hat {Y}}\|^{2}+\lambda (\alpha \|w\|_{1}+(1-\alpha )\|w\|_{2}^{2}),\alpha \in [0,1]$
Elastic net regularization tends to have a grouping effect, where correlated input features are assigned equal weights.
Elastic net regularization is commonly used in practice and is implemented in many machine learning libraries.
Proximal methods
Main article: Proximal gradient method
While the $L_{1}$ norm does not result in an NP-hard problem, it is convex but not differentiable, due to the kink at $x=0$. Subgradient methods which rely on the subderivative can be used to solve $L_{1}$ regularized learning problems. However, faster convergence can be achieved through proximal methods.
For a problem $\min _{w\in H}F(w)+R(w)$ such that $F$ is convex, continuous, differentiable, with Lipschitz continuous gradient (such as the least squares loss function), and $R$ is convex, continuous, and proper, the proximal method to solve the problem is as follows. First define the proximal operator
$\operatorname {prox} _{R}(v)=\operatorname {argmin} \limits _{w\in \mathbb {R} ^{D}}\{R(w)+{\frac {1}{2}}\|w-v\|^{2}\},$
and then iterate
$w_{k+1}=\operatorname {prox} \limits _{\gamma ,R}(w_{k}-\gamma \nabla F(w_{k}))$
The proximal method iteratively performs gradient descent and then projects the result back into the space permitted by $R$.
When $R$ is the $L_{1}$ regularizer, the proximal operator is equivalent to the soft-thresholding operator,
$S_{\lambda }(v)_{i}={\begin{cases}v_{i}-\lambda ,&{\text{if }}v_{i}>\lambda \\0,&{\text{if }}v_{i}\in [-\lambda ,\lambda ]\\v_{i}+\lambda ,&{\text{if }}v_{i}<-\lambda \end{cases}}$
This allows for efficient computation.
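Putting the pieces together, here is an ISTA-style sketch of proximal gradient descent for the $L_{1}$-regularized least-squares problem; the step size comes from the Lipschitz constant of the smooth part, and all names are illustrative:

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of lam*||.||_1, applied componentwise
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(X, y, lam, n_iter=500):
    """Proximal gradient descent for (1/n)||Xw - y||^2 + lam*||w||_1."""
    n, d = X.shape
    step = n / (2 * np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant of grad F
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = (2.0 / n) * X.T @ (X @ w - y)
        w = soft_threshold(w - step * grad, step * lam)
    return w
```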
Group sparsity without overlaps
Groups of features can be regularized by a sparsity constraint, which can be useful for expressing certain prior knowledge into an optimization problem.
In the case of a linear model with non-overlapping known groups, a regularizer can be defined:
$R(w)=\sum _{g=1}^{G}\|w_{g}\|_{2},$ where $\|w_{g}\|_{2}={\sqrt {\sum _{j=1}^{|G_{g}|}(w_{g}^{j})^{2}}}$
This can be viewed as inducing a regularizer over the $L_{2}$ norm over members of each group followed by an $L_{1}$ norm over groups.
This can be solved by the proximal method, where the proximal operator is a block-wise soft-thresholding function:
$\operatorname {prox} \limits _{\lambda ,R,g}(w_{g})={\begin{cases}(1-{\frac {\lambda }{\|w_{g}\|_{2}}})w_{g},&{\text{if }}\|w_{g}\|_{2}>\lambda \\0,&{\text{if }}\|w_{g}\|_{2}\leq \lambda \end{cases}}$
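A direct transcription of this block-wise operator as a sketch:

```python
import numpy as np

def block_soft_threshold(w_g, lam):
    """Proximal operator for one non-overlapping group: shrink the whole
    group toward zero, or zero it out if its norm is at most lam."""
    norm = np.linalg.norm(w_g)
    return np.zeros_like(w_g) if norm <= lam else (1 - lam / norm) * w_g
```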
Group sparsity with overlaps
The algorithm described for group sparsity without overlaps can be applied to the case where groups do overlap, in certain situations. This will likely result in some groups with all zero elements, and other groups with some non-zero and some zero elements.
If it is desired to preserve the group structure, a new regularizer can be defined:
$R(w)=\inf \left\{\sum _{g=1}^{G}\|w_{g}\|_{2}:w=\sum _{g=1}^{G}{\bar {w}}_{g}\right\}$
For each $w_{g}$, ${\bar {w}}_{g}$ is defined as the vector such that the restriction of ${\bar {w}}_{g}$ to the group $g$ equals $w_{g}$ and all other entries of ${\bar {w}}_{g}$ are zero. The regularizer finds the optimal decomposition of $w$ into parts. It can be viewed as duplicating all elements that exist in multiple groups. Learning problems with this regularizer can also be solved with the proximal method, with a complication: the proximal operator cannot be computed in closed form, but it can be solved iteratively, inducing an inner iteration within each proximal method iteration.
Regularizers for semi-supervised learning
When labels are more expensive to gather than input examples, semi-supervised learning can be useful. Regularizers have been designed to guide learning algorithms to learn models that respect the structure of unsupervised training samples. If a symmetric weight matrix $W$ is given, a regularizer can be defined:
$R(f)=\sum _{i,j}w_{ij}(f(x_{i})-f(x_{j}))^{2}$
If $W_{ij}$ encodes the result of some distance metric for points $x_{i}$ and $x_{j}$, it is desirable that $f(x_{i})\approx f(x_{j})$. This regularizer captures this intuition, and is equivalent to:
$R(f)={\bar {f}}^{T}L{\bar {f}}$ where $L=D-W$ is the Laplacian matrix of the graph induced by $W$.
The optimization problem $\min _{f\in \mathbb {R} ^{m}}R(f),m=u+l$ can be solved analytically if the constraint $f(x_{i})=y_{i}$ is applied for all supervised samples. The labeled part of the vector $f$ is therefore determined directly by the labels. The unlabeled part of $f$ is solved for by:
$\min _{f_{u}\in \mathbb {R} ^{u}}f^{T}Lf=\min _{f_{u}\in \mathbb {R} ^{u}}\{f_{u}^{T}L_{uu}f_{u}+f_{l}^{T}L_{lu}f_{u}+f_{u}^{T}L_{ul}f_{l}\}$
$\nabla _{f_{u}}=2L_{uu}f_{u}+2L_{ul}Y$
$f_{u}=-L_{uu}^{\dagger }(L_{ul}Y)$
The pseudo-inverse can be taken because $L_{ul}$ has the same range as $L_{uu}$.
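A sketch of the resulting procedure (sometimes called label propagation); the function name and the convention that the first $l$ points are the labeled ones are assumptions of this sketch:

```python
import numpy as np

def propagate_labels(W, y_l):
    """Solve for the unlabeled part f_u given a symmetric weight matrix W,
    with the first len(y_l) points clamped to their labels."""
    l = len(y_l)
    L = np.diag(W.sum(axis=1)) - W               # graph Laplacian L = D - W
    L_uu, L_ul = L[l:, l:], L[l:, :l]
    f_u = -np.linalg.pinv(L_uu) @ (L_ul @ y_l)   # from the gradient condition
    return np.concatenate([np.asarray(y_l, dtype=float), f_u])
```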
Regularizers for multitask learning
In the case of multitask learning, $T$ problems are considered simultaneously, each related in some way. The goal is to learn $T$ functions with predictive power, ideally borrowing strength from the relatedness of the tasks. This is equivalent to learning the matrix $W:T\times D$.
Sparse regularizer on columns
$R(W)=\|W\|_{2,1}=\sum _{i=1}^{D}\|w_{i}\|_{2},$ where $w_{i}$ denotes the $i$-th column of $W$.
This regularizer defines an L2 norm on each column and an L1 norm over all columns. It can be solved by proximal methods.
Nuclear norm regularization
$R(W)=\|\sigma (W)\|_{1}$ where $\sigma (W)$ is the vector of singular values of $W$.
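Both of these penalties are one-liners to evaluate; a small NumPy sketch (the 2×2 matrix is an arbitrary example of ours):

import numpy as np

W = np.array([[3.0, 0.0],
              [4.0, 1.0]])
# L2,1 norm: Euclidean norm of each column, then a sum (an L1 norm) over columns.
l21 = sum(np.linalg.norm(W[:, i]) for i in range(W.shape[1]))   # 5.0 + 1.0 = 6.0
# Nuclear norm: sum of the singular values of W.
nuclear = np.linalg.svd(W, compute_uv=False).sum()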
Mean-constrained regularization
$R(f_{1}\cdots f_{T})=\sum _{t=1}^{T}\|f_{t}-{\frac {1}{T}}\sum _{s=1}^{T}f_{s}\|_{H_{k}}^{2}$
This regularizer constrains the functions learned for each task to be similar to the overall average of the functions across all tasks. This is useful for expressing prior information that each task is expected to share with each other task. An example is predicting blood iron levels measured at different times of the day, where each task represents an individual.
Clustered mean-constrained regularization
$R(f_{1}\cdots f_{T})=\sum _{r=1}^{C}\sum _{t\in I(r)}\|f_{t}-{\frac {1}{|I(r)|}}\sum _{s\in I(r)}f_{s}\|_{H_{k}}^{2}$ where $I(r)$ is a cluster of tasks and $|I(r)|$ is its size.
This regularizer is similar to the mean-constrained regularizer, but instead enforces similarity between tasks within the same cluster. This can capture more complex prior information. This technique has been used to predict Netflix recommendations. A cluster would correspond to a group of people who share similar preferences.
Graph-based similarity
More generally than above, similarity between tasks can be defined by a function. The regularizer encourages the model to learn similar functions for similar tasks.
$R(f_{1}\cdots f_{T})=\sum _{t,s=1,t\neq s}^{T}\|f_{t}-f_{s}\|^{2}M_{ts}$ for a given symmetric similarity matrix $M$.
Other uses of regularization in statistics and machine learning
Bayesian learning methods make use of a prior probability that (usually) gives lower probability to more complex models. Well-known model selection techniques include the Akaike information criterion (AIC), minimum description length (MDL), and the Bayesian information criterion (BIC). Alternative methods of controlling overfitting not involving regularization include cross-validation.
Examples of applications of different methods of regularization to the linear model are:
Model | Fit measure | Entropy measure[4][7]
AIC/BIC | $\|Y-X\beta \|_{2}$ | $\|\beta \|_{0}$
Ridge regression[8] | $\|Y-X\beta \|_{2}$ | $\|\beta \|_{2}$
Lasso[9] | $\|Y-X\beta \|_{2}$ | $\|\beta \|_{1}$
Basis pursuit denoising | $\|Y-X\beta \|_{2}$ | $\lambda \|\beta \|_{1}$
Rudin–Osher–Fatemi model (TV) | $\|Y-X\beta \|_{2}$ | $\lambda \|\nabla \beta \|_{1}$
Potts model | $\|Y-X\beta \|_{2}$ | $\lambda \|\nabla \beta \|_{0}$
RLAD[10] | $\|Y-X\beta \|_{1}$ | $\|\beta \|_{1}$
Dantzig Selector[11] | $\|X^{\top }(Y-X\beta )\|_{\infty }$ | $\|\beta \|_{1}$
SLOPE[12] | $\|Y-X\beta \|_{2}$ | $\sum _{i=1}^{p}\lambda _{i}|\beta |_{(i)}$
See also
• Bayesian interpretation of regularization
• Bias–variance tradeoff
• Matrix regularization
• Regularization by spectral filtering
• Regularized least squares
• Lagrange multiplier
Notes
1. Kratsios, Anastasis (2020). "Deep Arbitrage-Free Learning in a Generalized HJM Framework via Arbitrage-Regularization". Risks. 8 (2): 40. doi:10.3390/risks8020040. Term structure models can be regularized to remove arbitrage opportunities.
2. Bühlmann, Peter; Van De Geer, Sara (2011). Statistics for High-Dimensional Data. Springer Series in Statistics. p. 9. doi:10.1007/978-3-642-20192-9. ISBN 978-3-642-20191-2. If p > n, the ordinary least squares estimator is not unique and will heavily overfit the data. Thus, a form of complexity regularization will be necessary.
3. "Deep Learning Book". www.deeplearningbook.org. Retrieved 2021-01-29.{{cite web}}: CS1 maint: url-status (link)
4. Bishop, Christopher M. (2007). Pattern recognition and machine learning (Corr. printing. ed.). New York: Springer. ISBN 978-0-387-31073-2.
5. For the connection between maximum a posteriori estimation and ridge regression, see Weinberger, Kilian (July 11, 2018). "Linear / Ridge Regression". CS4780 Machine Learning Lecture 13. Cornell.
6. Natarajan, B. (1995-04-01). "Sparse Approximate Solutions to Linear Systems". SIAM Journal on Computing. 24 (2): 227–234. doi:10.1137/S0097539792240406. ISSN 0097-5397. S2CID 2072045.
7. Duda, Richard O. (2004). Pattern classification + computer manual : hardcover set (2. ed.). New York [u.a.]: Wiley. ISBN 978-0-471-70350-1.
8. Arthur E. Hoerl; Robert W. Kennard (1970). "Ridge regression: Biased estimation for nonorthogonal problems". Technometrics. 12 (1): 55–67. doi:10.2307/1267351. JSTOR 1267351.
9. Tibshirani, Robert (1996). "Regression Shrinkage and Selection via the Lasso" (PostScript). Journal of the Royal Statistical Society, Series B. 58 (1): 267–288. MR 1379242. Retrieved 2009-03-19.
10. Li Wang, Michael D. Gordon & Ji Zhu (2006). "Regularized Least Absolute Deviations Regression and an Efficient Algorithm for Parameter Tuning". Sixth International Conference on Data Mining. pp. 690–700. doi:10.1109/ICDM.2006.134. ISBN 978-0-7695-2701-7.
11. Candes, Emmanuel; Tao, Terence (2007). "The Dantzig selector: Statistical estimation when p is much larger than n". Annals of Statistics. 35 (6): 2313–2351. arXiv:math/0506081. doi:10.1214/009053606000001523. MR 2382644. S2CID 88524200.
12. Małgorzata Bogdan, Ewout van den Berg, Weijie Su & Emmanuel J. Candes (2013). "Statistical estimation and testing via the ordered L1 norm". arXiv:1310.1969 [stat.ME].
|
Wikipedia
|
Regularly ordered
In mathematics, specifically in order theory and functional analysis, an ordered vector space $X$ is said to be regularly ordered and its order is called regular if $X$ is Archimedean ordered and the order dual of $X$ distinguishes points in $X$.[1] Being a regularly ordered vector space is an important property in the theory of topological vector lattices.
Examples
Every ordered locally convex space is regularly ordered.[2] The canonical orderings of subspaces, products, and direct sums of regularly ordered vector spaces are again regularly ordered.[2]
Properties
If $X$ is a regularly ordered vector lattice then the order topology on $X$ is the finest topology on $X$ making $X$ into a locally convex topological vector lattice.[3]
See also
• Vector lattice – Partially ordered vector space, ordered as a lattice
References
1. Schaefer & Wolff 1999, pp. 204–214.
2. Schaefer & Wolff 1999, pp. 222–225.
3. Schaefer & Wolff 1999, pp. 234–242.
Bibliography
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
|
Wikipedia
|
Slowly varying function
In real analysis, a branch of mathematics, a slowly varying function is a function of a real variable whose behaviour at infinity is in some sense similar to the behaviour of a function converging at infinity. Similarly, a regularly varying function is a function of a real variable whose behaviour at infinity is similar to the behaviour of a power law function (like a polynomial) near infinity. These classes of functions were both introduced by Jovan Karamata,[1][2] and have found several important applications, for example in probability theory.
Basic definitions
Definition 1. A measurable function L : (0, +∞) → (0, +∞) is called slowly varying (at infinity) if for all a > 0,
$\lim _{x\to \infty }{\frac {L(ax)}{L(x)}}=1.$
Definition 2. Let L : (0, +∞) → (0, +∞). Then L is a regularly varying function if and only if $\forall a>0,g_{L}(a)=\lim _{x\to \infty }{\frac {L(ax)}{L(x)}}\in \mathbb {R} ^{+}$. In particular, the limit must be finite and positive.
These definitions are due to Jovan Karamata.[1][2]
Note. The sum of two slowly varying functions is again a slowly varying function.
Basic properties
Regularly varying functions have some important properties:[1] a partial list of them is reported below. More extensive analyses of the properties characterizing regular variation are presented in the monograph by Bingham, Goldie & Teugels (1987).
Uniformity of the limiting behaviour
Theorem 1. The limit in definitions 1 and 2 is uniform if a is restricted to a compact interval.
Karamata's characterization theorem
Theorem 2. Every regularly varying function f : (0, +∞) → (0, +∞) is of the form
$f(x)=x^{\beta }L(x)$
where
• β is a real number,
• L is a slowly varying function.
Note. This implies that the function $g_{L}(a)$ in definition 2 necessarily has to be of the form
$g(a)=a^{\rho }$
where the real number ρ is called the index of regular variation.
Karamata representation theorem
Theorem 3. A function L is slowly varying if and only if there exists B > 0 such that for all x ≥ B the function can be written in the form
$L(x)=\exp \left(\eta (x)+\int _{B}^{x}{\frac {\varepsilon (t)}{t}}\,dt\right)$
where
• η(x) is a bounded measurable function of a real variable converging to a finite number as x goes to infinity
• ε(x) is a bounded measurable function of a real variable converging to zero as x goes to infinity.
Examples
• If L is a measurable function and has a limit
$\lim _{x\to \infty }L(x)=b\in (0,\infty ),$
then L is a slowly varying function.
• For any β ∈ R, the function $L(x)=(\log x)^{\beta }$ is slowly varying.
• The function $L(x)=x$ is not slowly varying, nor is $L(x)=x^{\beta }$ for any real β ≠ 0. However, these functions are regularly varying.
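For instance, the first example can be checked directly for β = 1, and the general case follows by raising the ratio to the power β:
$\lim _{x\to \infty }{\frac {\log(ax)}{\log x}}=\lim _{x\to \infty }{\frac {\log a+\log x}{\log x}}=1.$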
See also
• Analytic number theory
• Hardy–Littlewood tauberian theorem and its treatment by Karamata
Notes
1. See (Galambos & Seneta 1973)
2. See (Bingham, Goldie & Teugels 1987).
References
• Bingham, N.H. (2001) [1994], "Karamata theory", Encyclopedia of Mathematics, EMS Press
• Bingham, N. H.; Goldie, C. M.; Teugels, J. L. (1987), Regular Variation, Encyclopedia of Mathematics and its Applications, vol. 27, Cambridge: Cambridge University Press, ISBN 0-521-30787-2, MR 0898871, Zbl 0617.26001
• Galambos, J.; Seneta, E. (1973), "Regularly Varying Sequences", Proceedings of the American Mathematical Society, 41 (1): 110–116, doi:10.2307/2038824, ISSN 0002-9939, JSTOR 2038824.
|
Wikipedia
|
Regulated integral
In mathematics, the regulated integral is a definition of integration for regulated functions, which are defined to be uniform limits of step functions. The use of the regulated integral instead of the Riemann integral has been advocated by Nicolas Bourbaki and Jean Dieudonné.
Definition
Definition on step functions
Let [a, b] be a fixed closed, bounded interval in the real line R. A real-valued function φ : [a, b] → R is called a step function if there exists a finite partition
$\Pi =\{a=t_{0}<t_{1}<\cdots <t_{k}=b\}$
of [a, b] such that φ is constant on each open interval (ti, ti+1) of Π; suppose that this constant value is ci ∈ R. Then, define the integral of a step function φ to be
$\int _{a}^{b}\varphi (t)\,\mathrm {d} t:=\sum _{i=0}^{k-1}c_{i}|t_{i+1}-t_{i}|.$
It can be shown that this definition is independent of the choice of partition, in that if Π1 is another partition of [a, b] such that φ is constant on the open intervals of Π1, then the numerical value of the integral of φ is the same for Π1 as for Π.
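As a small numerical illustration of this definition (a sketch; the function name step_integral is ours):

def step_integral(t, c):
    # t: partition points a = t[0] < t[1] < ... < t[k] = b
    # c: value c[i] taken by the step function on the open interval (t[i], t[i+1])
    return sum(c[i] * (t[i + 1] - t[i]) for i in range(len(c)))

# Example: a step function equal to 2 on (0, 1) and 5 on (1, 3):
# step_integral([0.0, 1.0, 3.0], [2.0, 5.0]) == 2*1 + 5*2 == 12.0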
Extension to regulated functions
A function f : [a, b] → R is called a regulated function if it is the uniform limit of a sequence of step functions on [a, b]:
• there is a sequence of step functions (φn)n∈N such that || φn − f ||∞ → 0 as n → ∞; or, equivalently,
• for all ε > 0, there exists a step function φε such that || φε − f ||∞ < ε; or, equivalently,
• f lies in the closure of the space of step functions, where the closure is taken in the space of all bounded functions [a, b] → R and with respect to the supremum norm || ⋅ ||∞; or equivalently,
• for every t ∈ [a, b), the right-sided limit
$f(t+)=\lim _{s\downarrow t}f(s)$
exists, and, for every t ∈ (a, b], the left-sided limit
$f(t-)=\lim _{s\uparrow t}f(s)$
exists as well.
Define the integral of a regulated function f to be
$\int _{a}^{b}f(t)\,\mathrm {d} t:=\lim _{n\to \infty }\int _{a}^{b}\varphi _{n}(t)\,\mathrm {d} t,$
where (φn)n∈N is any sequence of step functions that converges uniformly to f.
One must check that this limit exists and is independent of the chosen sequence, but this is an immediate consequence of the continuous linear extension theorem of elementary functional analysis: a bounded linear operator T0 defined on a dense linear subspace E0 of a normed linear space E and taking values in a Banach space F extends uniquely to a bounded linear operator T : E → F with the same (finite) operator norm.
Properties of the regulated integral
• The integral is a linear operator: for any regulated functions f and g and constants α and β,
$\int _{a}^{b}\alpha f(t)+\beta g(t)\,\mathrm {d} t=\alpha \int _{a}^{b}f(t)\,\mathrm {d} t+\beta \int _{a}^{b}g(t)\,\mathrm {d} t.$
• The integral is also a bounded operator: every regulated function f is bounded, and if m ≤ f(t) ≤ M for all t ∈ [a, b], then
$m|b-a|\leq \int _{a}^{b}f(t)\,\mathrm {d} t\leq M|b-a|.$
In particular:
$\left|\int _{a}^{b}f(t)\,\mathrm {d} t\right|\leq \int _{a}^{b}|f(t)|\,\mathrm {d} t.$
• Since step functions are Riemann integrable and the value of the Riemann integral is compatible with uniform limits, every regulated function is Riemann integrable and the regulated integral coincides with the Riemann integral.
Extension to functions defined on the whole real line
It is possible to extend the definitions of step function and regulated function and the associated integrals to functions defined on the whole real line. However, care must be taken with certain technical points:
• the partition on whose open intervals a step function is required to be constant is allowed to be a countable set, but must be a discrete set, i.e. have no limit points;
• the requirement of uniform convergence must be loosened to the requirement of uniform convergence on compact sets, i.e. closed and bounded intervals;
• not every bounded function is integrable (e.g. the function with constant value 1). This leads to a notion of local integrability.
Extension to vector-valued functions
The above definitions go through mutatis mutandis in the case of functions taking values in a Banach space X.
See also
• Lebesgue integral
• Riemann integral
References
• Berberian, S.K. (1979). "Regulated Functions: Bourbaki's Alternative to the Riemann Integral". The American Mathematical Monthly. Mathematical Association of America. 86 (3): 208. doi:10.2307/2321526. JSTOR 2321526.
• Gordon, Russell A. (1994). The integrals of Lebesgue, Denjoy, Perron, and Henstock. Graduate Studies in Mathematics, 4. Providence, RI: American Mathematical Society. ISBN 0-8218-3805-9.
|
Wikipedia
|
Regulus (geometry)
In three-dimensional space, a regulus R is a set of skew lines, every point of which is on a transversal which intersects an element of R only once, and such that every point on a transversal lies on a line of R.
The set of transversals of R forms an opposite regulus S. In ℝ3 the union R ∪ S is the ruled surface of a hyperboloid of one sheet.
Three skew lines determine a regulus:
The locus of lines meeting three given skew lines is called a regulus. Gallucci's theorem shows that the lines meeting the generators of the regulus (including the original three lines) form another "associated" regulus, such that every generator of either regulus meets every generator of the other. The two reguli are the two systems of generators of a ruled quadric.[1]
According to Charlotte Scott, "The regulus supplies extremely simple proofs of the properties of a conic...the theorems of Chasles, Brianchon, and Pascal ..."[2]
In a finite geometry PG(3, q), a regulus has q + 1 lines.[3] For example, in 1954 William Edge described a pair of reguli of four lines each in PG(3,3).[4]
Robert J. T. Bell described how the regulus is generated by a moving straight line. First, the hyperboloid ${\frac {x^{2}}{a^{2}}}+{\frac {y^{2}}{b^{2}}}-{\frac {z^{2}}{c^{2}}}\ =\ 1$ is factored as
$\left({\frac {x}{a}}+{\frac {z}{c}}\right)\left({\frac {x}{a}}-{\frac {z}{c}}\right)\ =\ \left(1+{\frac {y}{b}}\right)\left(1-{\frac {y}{b}}\right).$
Then two systems of lines, parametrized by λ and μ satisfy this equation:
${\frac {x}{a}}+{\frac {z}{c}}\ =\ \lambda \left(1+{\frac {y}{b}}\right),\quad {\frac {x}{a}}-{\frac {z}{c}}\ =\ {\frac {1}{\lambda }}\left(1-{\frac {y}{b}}\right)$ and
${\frac {x}{a}}-{\frac {z}{c}}\ =\ \mu \left(1+{\frac {y}{b}}\right),\quad {\frac {x}{a}}+{\frac {z}{c}}\ =\ {\frac {1}{\mu }}\left(1-{\frac {y}{b}}\right).$
No member of the first set of lines is a member of the second. As λ or μ varies, the hyperboloid is generated. The two sets represent a regulus and its opposite. Using analytic geometry, Bell proves that no two generators in a set intersect, and that any two generators in opposite reguli do intersect and form the plane tangent to the hyperboloid at that point (page 155).[5]
See also
• Translation plane § Reguli and regular spreads
References
1. H. S. M. Coxeter (1969) Introduction to Geometry, page 259, John Wiley & Sons
2. Charlotte Angas Scott (1905) The elementary treatment of the conics by means of the regulus, Bulletin of the American Mathematical Society 12(1): 1–7
3. Albrecht Beutelspacher & Ute Rosenbaum (1998) Projective Geometry, page 72, Cambridge University Press ISBN 0-521-48277-1
4. W. L. Edge (1954) "Geometry of three dimensions over GF(3)", Proceedings of the Royal Society A 222: 262–86 doi:10.1098/rspa.1954.0068
5. Robert J. T. Bell (1910) An Elementary Treatise on Co-ordinate Geometry of Three Dimensions, page 148, via Internet Archive
• H. G. Forder (1950) Geometry, page 118, Hutchinson's University Library.
|
Wikipedia
|
Hash table
In computing, a hash table, also known as a hash map, is a data structure that implements an associative array or dictionary, an abstract data type that maps keys to values.[2] A hash table uses a hash function to compute an index, also called a hash code, into an array of buckets or slots, from which the desired value can be found. During lookup, the key is hashed and the resulting hash indicates where the corresponding value is stored.
Type: Unordered associative array
Invented: 1953
Time complexity in big O notation:
Algorithm | Average | Worst case
Space | Θ(n)[1] | O(n)
Search | Θ(1) | O(n)
Insert | Θ(1) | O(n)
Delete | Θ(1) | O(n)
Ideally, the hash function will assign each key to a unique bucket, but most hash table designs employ an imperfect hash function, which might cause hash collisions where the hash function generates the same index for more than one key. Such collisions are typically accommodated in some way.
In a well-dimensioned hash table, the average time complexity for each lookup is independent of the number of elements stored in the table. Many hash table designs also allow arbitrary insertions and deletions of key–value pairs, at amortized constant average cost per operation.[3][4][5]
Hashing is an example of a space-time tradeoff. If memory is infinite, the entire key can be used directly as an index to locate its value with a single memory access. On the other hand, if infinite time is available, values can be stored without regard for their keys, and a binary search or linear search can be used to retrieve the element.[6]: 458
In many situations, hash tables turn out to be on average more efficient than search trees or any other table lookup structure. For this reason, they are widely used in many kinds of computer software, particularly for associative arrays, database indexing, caches, and sets.
History
The idea of hashing arose independently in different places. In January 1953, Hans Peter Luhn wrote an internal IBM memorandum that used hashing with chaining. Open addressing was later proposed by A. D. Linh building on Luhn's paper.[7]: 15 Around the same time, Gene Amdahl, Elaine M. McGraw, Nathaniel Rochester, and Arthur Samuel of IBM Research implemented hashing for the IBM 701 assembler.[8]: 124 Open addressing with linear probing is credited to Amdahl, although Ershov independently had the same idea.[8]: 124–125 The term "open addressing" was coined by W. Wesley Peterson in his article discussing the problem of search in large files.[7]: 15
The first published work on hashing with chaining is credited to Arnold Dumey, who discussed the idea of using remainder modulo a prime as a hash function.[7]: 15 The word "hashing" was first published in an article by Robert Morris.[8]: 126 A theoretical analysis of linear probing was submitted originally by Konheim and Weiss.[7]: 15
Overview
An associative array stores a set of (key, value) pairs and allows insertion, deletion, and lookup (search), with the constraint of unique keys. In the hash table implementation of associative arrays, an array $A$ of length $m$ is partially filled with $n$ elements, where $m\geq n$. A value $x$ gets stored at an index location $A[h(x)]$, where $h$ is a hash function, and $h(x)<m$.[7]: 2 Under reasonable assumptions, hash tables have better time complexity bounds on search, delete, and insert operations in comparison to self-balancing binary search trees.[7]: 1
Hash tables are also commonly used to implement sets, by omitting the stored value for each key and merely tracking whether the key is present.[7]: 1
Load factor
A load factor $\alpha $ is a critical statistic of a hash table, and is defined as follows:[1]
${\text{load factor}}\ (\alpha )={\frac {n}{m}},$
where
• $n$ is the number of entries occupied in the hash table.
• $m$ is the number of buckets.
The performance of the hash table deteriorates in relation to the load factor $\alpha $.[7]: 2 Therefore, a hash table is resized or rehashed if the load factor $\alpha $ approaches 1.[9] A table is also resized if the load factor drops below $\alpha _{\max }/4$, where $\alpha _{\max }$ is the maximum acceptable load factor.[9] Acceptable figures of load factor $\alpha $ should range around 0.6 to 0.75.[10][11]: 110
Hash function
A hash function $h:U\rightarrow \{0,...,m-1\}$ maps the universe $U$ of keys to array indices or slots within the table, so that $h(x)\in \{0,...,m-1\}$ for each $x\in S$; the number of slots $m$ is typically much smaller than the size of the universe. The conventional implementations of hash functions are based on the integer universe assumption: all elements of the table stem from the universe $U=\{0,...,u-1\}$, where the bit length of $u$ is confined within the word size of a computer architecture.[7]: 2
A perfect hash function $h$ is defined as an injective function such that each element $x$ in $S$ maps to a unique value in $\{0,...,m-1\}$.[12][13] A perfect hash function can be created if all the keys are known ahead of time.[12]
Integer universe assumption
The schemes of hashing used in integer universe assumption include hashing by division, hashing by multiplication, universal hashing, dynamic perfect hashing, and static perfect hashing.[7]: 2 However, hashing by division is the commonly used scheme.[14]: 264 [11]: 110
Hashing by division
The scheme in hashing by division is as follows:[7]: 2
$h(x)=M{\bmod {m}}$
where $M$ is the hash digest of $x\in S$ and $m$ is the size of the table.
Hashing by multiplication
The scheme in hashing by multiplication is as follows:[7]: 2–3
$h(x)=\lfloor m{\bigl (}(MA){\bmod {1}}{\bigr )}\rfloor $
where $A$ is a real-valued constant and $m$ is the size of the table. An advantage of hashing by multiplication is that the value of $m$ is not critical.[7]: 2–3 Although any value of $A$ produces a hash function, Donald Knuth suggests using the golden ratio.[7]: 3
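Both schemes are straightforward to state in code; a sketch in Python, assuming the digest $M$ of the key has already been computed (the function names are ours):

import math

def hash_by_division(M, m):
    # Division scheme: the digest reduced modulo the table size.
    return M % m

def hash_by_multiplication(M, m, A=(math.sqrt(5) - 1) / 2):
    # Multiplication scheme: take the fractional part of M*A and scale by m.
    # The default constant A follows Knuth's golden-ratio suggestion.
    return math.floor(m * ((M * A) % 1.0))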
Choosing a hash function
Uniform distribution of the hash values is a fundamental requirement of a hash function. A non-uniform distribution increases the number of collisions and the cost of resolving them. Uniformity is sometimes difficult to ensure by design, but may be evaluated empirically using statistical tests, e.g., a Pearson's chi-squared test for discrete uniform distributions.[15][16]
The distribution needs to be uniform only for table sizes that occur in the application. In particular, if one uses dynamic resizing with exact doubling and halving of the table size, then the hash function needs to be uniform only when the size is a power of two. Here the index can be computed as some range of bits of the hash function. On the other hand, some hashing algorithms prefer to have the size be a prime number.[17]
For open addressing schemes, the hash function should also avoid clustering, the mapping of two or more keys to consecutive slots. Such clustering may cause the lookup cost to skyrocket, even if the load factor is low and collisions are infrequent. The popular multiplicative hash is claimed to have particularly poor clustering behavior.[17][4]
K-independent hashing offers a way to prove a certain hash function does not have bad keysets for a given type of hashtable. A number of K-independence results are known for collision resolution schemes such as linear probing and cuckoo hashing. Since K-independence can prove a hash function works, one can then focus on finding the fastest possible such hash function.[18]
Collision resolution
A search algorithm that uses hashing consists of two parts. The first part is computing a hash function which transforms the search key into an array index. The ideal case is such that no two search keys hash to the same array index. However, this is not always the case and cannot be guaranteed for data that is not known in advance.[19]: 515 Hence the second part of the algorithm is collision resolution. The two common methods for collision resolution are separate chaining and open addressing.[6]: 458
Separate chaining
In separate chaining, a linked list of key–value pairs is built for each array index. The collided items are chained together through a single linked list, which can be traversed to access the item with a unique search key.[6]: 464 Collision resolution through chaining with a linked list is a common method of implementation of hash tables. Let $T$ and $x$ be the hash table and the node respectively; the operations are as follows:[14]: 258
Chained-Hash-Insert(T, x)
    insert x at the head of linked list T[h(x.key)]
Chained-Hash-Search(T, k)
    search for an element with key k in linked list T[h(k)]
Chained-Hash-Delete(T, x)
    delete x from the linked list T[h(x.key)]
If the elements are comparable, either numerically or lexically, and are inserted so that the list maintains the total order, unsuccessful searches terminate faster.[19]: 520–521
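A minimal runnable counterpart of the pseudocode above (a sketch only, with no resizing; the class name is ours):

class ChainedHashTable:
    def __init__(self, m=8):
        self.buckets = [[] for _ in range(m)]      # one chain per slot

    def _slot(self, key):
        return hash(key) % len(self.buckets)

    def insert(self, key, value):
        self.buckets[self._slot(key)].insert(0, (key, value))   # at the head

    def search(self, key):
        for k, v in self.buckets[self._slot(key)]:
            if k == key:
                return v
        return None                                 # unsuccessful search

    def delete(self, key):
        b = self._slot(key)
        self.buckets[b] = [(k, v) for k, v in self.buckets[b] if k != key]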
Other data structures for separate chaining
If the keys are ordered, it could be efficient to use "self-organizing" concepts such as using a self-balancing binary search tree, through which the theoretical worst case could be brought down to $O(\log {n})$, although it introduces additional complexities.[19]: 521
In dynamic perfect hashing, two-level hash tables are used to reduce the look-up complexity to be a guaranteed $O(1)$ in the worst case. In this technique, the buckets of $k$ entries are organized as perfect hash tables with $k^{2}$ slots providing constant worst-case lookup time, and low amortized time for insertion.[20] A study shows array-based separate chaining to be 97% more performant when compared to the standard linked list method under heavy load.[21]: 99
Techniques such as using a fusion tree for each bucket also result in constant time for all operations with high probability.[22]
Caching and locality of reference
The linked list of separate chaining implementation may not be cache-conscious due to spatial locality—locality of reference—when the nodes of the linked list are scattered across memory, thus the list traversal during insert and search may entail CPU cache inefficiencies.[21]: 91
In cache-conscious variants, a dynamic array found to be more cache-friendly is used in the place where a linked list or self-balancing binary search trees is usually deployed for collision resolution through separate chaining, since the contiguous allocation pattern of the array could be exploited by hardware-cache prefetchers—such as translation lookaside buffer—resulting in reduced access time and memory consumption.[23][24][25]
Open addressing
Open addressing is another collision resolution technique in which every entry record is stored in the bucket array itself, and the hash resolution is performed through probing. When a new entry has to be inserted, the buckets are examined, starting with the hashed-to slot and proceeding in some probe sequence, until an unoccupied slot is found. When searching for an entry, the buckets are scanned in the same sequence, until either the target record is found, or an unused array slot is found, which indicates an unsuccessful search.[26]
Well-known probe sequences include:
• Linear probing, in which the interval between probes is fixed (usually 1).[27]
• Quadratic probing, in which the interval between probes is increased by adding the successive outputs of a quadratic polynomial to the value given by the original hash computation.[28]: 272
• Double hashing, in which the interval between probes is computed by a secondary hash function.[28]: 272–273
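A sketch of insertion and search under linear probing, the first of the probe sequences above (names ours; slots hold (key, value) pairs or None):

def probe_insert(table, key, value):
    m = len(table)
    i = hash(key) % m
    for _ in range(m):
        if table[i] is None or table[i][0] == key:
            table[i] = (key, value)
            return
        i = (i + 1) % m              # linear probing: move to the next slot
    raise RuntimeError("hash table is full")

def probe_search(table, key):
    m = len(table)
    i = hash(key) % m
    for _ in range(m):
        if table[i] is None:
            return None              # unused slot: unsuccessful search
        if table[i][0] == key:
            return table[i][1]
        i = (i + 1) % m
    return None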
The performance of open addressing may be slower compared to separate chaining since the probe sequence increases when the load factor $\alpha $ approaches 1.[9][21]: 93 The probing results in an infinite loop if the load factor reaches 1, in the case of a completely filled table.[6]: 471 The average cost of linear probing depends on the hash function's ability to distribute the elements uniformly throughout the table to avoid clustering, since formation of clusters would result in increased search time.[6]: 472
Caching and locality of reference
Since the slots are located in successive locations, linear probing could lead to better utilization of CPU cache due to locality of references resulting in reduced memory latency.[27]
Coalesced hashing
Coalesced hashing is a hybrid of both separate chaining and open addressing in which the buckets or nodes link within the table.[29]: 6–8 The algorithm is ideally suited for fixed memory allocation.[29]: 4 The collision in coalesced hashing is resolved by identifying the largest-indexed empty slot on the hash table, then the colliding value is inserted into that slot. The bucket is also linked to the inserted node's slot which contains its colliding hash address.[29]: 8
Cuckoo hashing
Cuckoo hashing is a form of open addressing collision resolution technique which guarantees $O(1)$ worst-case lookup complexity and constant amortized time for insertions. The collision is resolved through maintaining two hash tables, each having its own hashing function; on collision, the given item replaces the occupant of its slot, and the displaced element is reinserted into the other hash table. The process continues until every key has its own spot in the buckets of the tables; if the procedure enters an infinite loop—which is identified through maintaining a threshold loop counter—both hash tables get rehashed with newer hash functions and the procedure continues.[30]: 124–125
Hopscotch hashing
Hopscotch hashing is an open addressing based algorithm which combines the elements of cuckoo hashing, linear probing and chaining through the notion of a neighbourhood of buckets—the subsequent buckets around any given occupied bucket, also called a "virtual" bucket.[31]: 351–352 The algorithm is designed to deliver better performance when the load factor of the hash table grows beyond 90%; it also provides high throughput in concurrent settings, thus well suited for implementing a resizable concurrent hash table.[31]: 350 The neighbourhood characteristic of hopscotch hashing guarantees that the cost of finding the desired item in any bucket within the neighbourhood is very close to the cost of finding it in the bucket itself; the algorithm attempts to place an item into its neighbourhood—with a possible cost involved in displacing other items.[31]: 352
Each bucket within the hash table includes an additional "hop-information"—an H-bit bit array indicating the relative distance of the item which was originally hashed into the current virtual bucket within H-1 entries.[31]: 352 Let $k$ and $Bk$ be the key to be inserted and the bucket to which the key hashes, respectively; several cases are involved in the insertion procedure such that the neighbourhood property of the algorithm is preserved:[31]: 352–353 if $Bk$ is empty, the element is inserted and the leftmost bit of the bitmap is set to 1; if not empty, linear probing is used to find an empty slot in the table, and the bitmap of the bucket is updated followed by the insertion; if the empty slot is not within the range of the neighbourhood, i.e. H-1, subsequent swaps and hop-info bit array manipulations of each bucket are performed in accordance with its neighbourhood invariant properties.[31]: 353
Robin Hood hashing
Robin Hood hashing is an open addressing based collision resolution algorithm; the collisions are resolved through favouring the displacement of the element that is farthest—i.e. has the longest probe sequence length (PSL)—from its "home location", the bucket to which the item was hashed.[32]: 12 Although Robin Hood hashing does not change the theoretical search cost, it significantly affects the variance of the distribution of the items on the buckets,[33]: 2 i.e. it deals with cluster formation in the hash table.[34] Each node within a hash table that uses Robin Hood hashing should be augmented to store an extra PSL value.[35] Let $x$ be the key to be inserted, $x.psl$ be the (incremental) PSL length of $x$, $T$ be the hash table and $j$ be the index; the insertion procedure is as follows:[32]: 12–13 [36]: 5
• If $x.psl\ \leq \ T[j].psl$: the iteration goes into the next bucket without attempting an external probe.
• If $x.psl\ >\ T[j].psl$: insert the item $x$ into the bucket $j$; swap $x$ with $T[j]$—let it be $x'$; continue the probe from the $j+1$st bucket to insert $x'$; repeat the procedure until every element is inserted.
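A sketch of this displacement rule for a table of (key, psl) slots, with no deletion or resizing (names ours):

def robin_hood_insert(table, key):
    m = len(table)
    i = hash(key) % m
    psl = 0                              # probe sequence length of the item in hand
    for _ in range(m):
        if table[i] is None:
            table[i] = (key, psl)
            return
        if psl > table[i][1]:            # the resident is "richer" (smaller PSL):
            table[i], (key, psl) = (key, psl), table[i]   # displace it, keep going
        i = (i + 1) % m
        psl += 1
    raise RuntimeError("hash table is full")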
Dynamic resizing
Repeated insertions cause the number of entries in a hash table to grow, which consequently increases the load factor; to maintain the amortized $O(1)$ performance of the lookup and insertion operations, a hash table is dynamically resized and the items of the table are rehashed into the buckets of the new hash table,[9] since the items cannot simply be copied over: varying table sizes result in different hash values due to the modulo operation.[37] If a hash table becomes "too empty" after deleting some elements, resizing may be performed to avoid excessive memory usage.[38]
Resizing by moving all entries
Generally, a new hash table with a size double that of the original hash table gets allocated, and every item in the original hash table gets moved to the newly allocated one by computing the hash value of each item followed by the insertion operation. Rehashing is simple, but computationally expensive.[39]: 478–479
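In sketch form, reusing the probe_insert helper from the linear-probing example above (an assumption of ours):

def resize_all_at_once(table):
    new_table = [None] * (2 * len(table))        # allocate a table of double size
    for entry in table:
        if entry is not None:
            key, value = entry
            probe_insert(new_table, key, value)  # rehash: index depends on new size
    return new_table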
Alternatives to all-at-once rehashing
Some hash table implementations, notably in real-time systems, cannot pay the price of enlarging the hash table all at once, because it may interrupt time-critical operations. If one cannot avoid dynamic resizing, a solution is to perform the resizing gradually to avoid a storage blip—typically at 50% of the new table's size—during rehashing, and to avoid memory fragmentation that triggers heap compaction due to deallocation of the large memory blocks of the old hash table.[40]: 2–3 In such cases, the rehashing operation is done incrementally by extending the prior memory block allocated to the old hash table, so that the buckets of the hash table remain unaltered. A common approach for amortized rehashing involves maintaining two hash functions $h_{\text{old}}$ and $h_{\text{new}}$. The process of rehashing a bucket's items in accordance with the new hash function is termed cleaning, which is implemented through the command pattern by encapsulating the operations $\mathrm {Add} (\mathrm {key} )$, $\mathrm {Get} (\mathrm {key} )$ and $\mathrm {Delete} (\mathrm {key} )$ in a $\mathrm {Lookup} (\mathrm {key} ,{\text{command}})$ wrapper, such that each element in the bucket gets rehashed; the procedure is as follows:[40]: 3
• Clean $\mathrm {Table} [h_{\text{old}}(\mathrm {key} )]$ bucket.
• Clean $\mathrm {Table} [h_{\text{new}}(\mathrm {key} )]$ bucket.
• The command gets executed.
Linear hashing
Linear hashing is an implementation of the hash table which enables dynamic growths or shrinks of the table one bucket at a time.[41]
Performance
The performance of a hash table is dependent on the hash function's ability in generating quasi-random numbers ($\sigma $) for entries in the hash table where $K$, $n$ and $h(x)$ denotes the key, number of buckets and the hash function such that $\sigma \ =\ h(K)\ \%\ n$. If the hash function generates the same $\sigma $ for distinct keys ($K_{1}\neq K_{2},\ h(K_{1})\ =\ h(K_{2})$), this results in collision, which is dealt with in a variety of ways. The constant time complexity ($O(1)$) of the operation in a hash table is presupposed on the condition that the hash function doesn't generate colliding indices; thus, the performance of the hash table is directly proportional to the chosen hash function's ability to disperse the indices.[42]: 1 However, construction of such a hash function is practically infeasible, that being so, implementations depend on case-specific collision resolution techniques in achieving higher performance.[42]: 2
Applications
Associative arrays
Main article: Associative array
Hash tables are commonly used to implement many types of in-memory tables. They are used to implement associative arrays.[28]
Database indexing
Hash tables may also be used as disk-based data structures and database indices (such as in dbm) although B-trees are more popular in these applications.[43]
Caches
Hash tables can be used to implement caches, auxiliary data tables that are used to speed up the access to data that is primarily stored in slower media. In this application, hash collisions can be handled by discarding one of the two colliding entries—usually erasing the old item that is currently stored in the table and overwriting it with the new item, so every item in the table has a unique hash value.[44][45]
Sets
Hash tables can be used in the implementation of set data structure, which can store unique values without any particular order; set is typically used in testing the membership of a value in the collection, rather than element retrieval.[46]
Transposition table
A transposition table is a complex hash table which stores information about each section of the game tree that has been searched.[47]
Implementations
Many programming languages provide hash table functionality, either as built-in associative arrays or as standard library modules.
In JavaScript, an "object" is a mutable collection of key-value pairs (called "properties"), where each key is either a string or a guaranteed-unique "symbol"; any other value, when used as a key, is first coerced to a string. Aside from the seven "primitive" data types, every value in JavaScript is an object.[48] ECMAScript 2015 also added the Map data structure, which accepts arbitrary values as keys.[49]
C++11 includes unordered_map in its standard library for storing keys and values of arbitrary types.[50]
Go's built-in map implements a hash table in the form of a type.[51]
The Java programming language includes the HashSet, HashMap, LinkedHashSet, and LinkedHashMap generic collections.[52]
Python's built-in dict implements a hash table in the form of a type.[53]
Ruby's built-in Hash uses the open addressing model from Ruby 2.4 onwards.[54]
The Rust programming language includes HashMap and HashSet as part of the Rust standard library.[55]
.NET has HashSet.[56]
See also
• Bloom filter
• Consistent hashing
• Distributed hash table
• Extendible hashing
• Hash array mapped trie
• Lazy deletion
• Pearson hashing
• PhotoDNA
• Rabin–Karp string search algorithm
• Search data structure
• Stable hashing
References
1. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009). Introduction to Algorithms (3rd ed.). Massachusetts Institute of Technology. pp. 253–280. ISBN 978-0-262-03384-8.
2. Mehlhorn, Kurt; Sanders, Peter (2008), "4 Hash Tables and Associative Arrays", Algorithms and Data Structures: The Basic Toolbox (PDF), Springer, pp. 81–98
3. Leiserson, Charles E. (Fall 2005). "Lecture 13: Amortized Algorithms, Table Doubling, Potential Method". course MIT 6.046J/18.410J Introduction to Algorithms. Archived from the original on August 7, 2009.
4. Knuth, Donald (1998). The Art of Computer Programming. Vol. 3: Sorting and Searching (2nd ed.). Addison-Wesley. pp. 513–558. ISBN 978-0-201-89685-5.
5. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Chapter 11: Hash Tables". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 221–252. ISBN 978-0-262-53196-2.
6. Sedgewick, Robert; Wayne, Kevin (2011). Algorithms. Vol. 1 (4 ed.). Addison-Wesley Professional – via Princeton University, Department of Computer Science.
7. Mehta, Dinesh P.; Sahni, Sartaj (October 28, 2004). "9: Hash Tables". Handbook of Datastructures and Applications (1 ed.). Taylor & Francis. doi:10.1201/9781420035179. ISBN 978-1-58488-435-4.
8. Konheim, Alan G. (June 21, 2010). Hashing in Computer Science: Fifty Years of Slicing and Dicing. John Wiley & Sons, Inc. doi:10.1002/9780470630617. ISBN 9780470630617.
9. Mayers, Andrew (2008). "CS 312: Hash tables and amortized analysis". Cornell University, Department of Computer Science. Archived from the original on April 26, 2021. Retrieved October 26, 2021 – via cs.cornell.edu.
10. Maurer, W.D.; Lewis, T.G. (March 1, 1975). "Hash Table Methods". ACM Computing Surveys. Journal of the ACM. 1 (1): 14. doi:10.1145/356643.356645. S2CID 17874775.
11. Owolabi, Olumide (February 1, 2003). "Empirical studies of some hashing functions". Information and Software Technology. Department of Mathematics and Computer Science, University of Port Harcourt. 45 (2): 109–112. doi:10.1016/S0950-5849(02)00174-X – via ScienceDirect.
12. Lu, Yi; Prabhakar, Balaji; Bonomi, Flavio (2006). Perfect Hashing for Network Applications. 2006 IEEE International Symposium on Information Theory. pp. 2774–2778. doi:10.1109/ISIT.2006.261567. ISBN 1-4244-0505-X. S2CID 1494710.
13. Belazzougui, Djamal; Botelho, Fabiano C.; Dietzfelbinger, Martin (2009). "Hash, displace, and compress" (PDF). Algorithms—ESA 2009: 17th Annual European Symposium, Copenhagen, Denmark, September 7-9, 2009, Proceedings. Lecture Notes in Computer Science. Vol. 5757. Berlin: Springer. pp. 682–693. CiteSeerX 10.1.1.568.130. doi:10.1007/978-3-642-04128-0_61. MR 2557794.
14. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Chapter 11: Hash Tables". Introduction to Algorithms (2nd ed.). Massachusetts Institute of Technology. ISBN 978-0-262-53196-2.
15. Pearson, Karl (1900). "On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling". Philosophical Magazine. Series 5. 50 (302): 157–175. doi:10.1080/14786440009463897.
16. Plackett, Robin (1983). "Karl Pearson and the Chi-Squared Test". International Statistical Review. 51 (1): 59–72. doi:10.2307/1402731. JSTOR 1402731.
17. Wang, Thomas (March 1997). "Prime Double Hash Table". Archived from the original on September 3, 1999. Retrieved May 10, 2015.
18. Wegman, Mark N.; Carter, J. Lawrence (1981). "New hash functions and their use in authentication and set equality" (PDF). Journal of Computer and System Sciences. 22 (3): 265–279. doi:10.1016/0022-0000(81)90033-7. Conference version in FOCS'79. Retrieved February 9, 2011.
19. Donald E. Knuth (April 24, 1998). The Art of Computer Programming: Volume 3: Sorting and Searching. Addison-Wesley Professional. ISBN 978-0-201-89685-5.
20. Demaine, Erik; Lind, Jeff (Spring 2003). "Lecture 2" (PDF). 6.897: Advanced Data Structures. MIT Computer Science and Artificial Intelligence Laboratory. Archived (PDF) from the original on June 15, 2010. Retrieved June 30, 2008.
21. Askitis, Nikolas; Zobel, Justin (2005). Cache-Conscious Collision Resolution in String Hash Tables. International Symposium on String Processing and Information Retrieval. Springer Science+Business Media. pp. 91–102. doi:10.1007/11575832_1. ISBN 978-3-540-29740-6.
22. Willard, Dan E. (2000). "Examining computational geometry, van Emde Boas trees, and hashing from the perspective of the fusion tree". SIAM Journal on Computing. 29 (3): 1030–1049. doi:10.1137/S0097539797322425. MR 1740562..
23. Askitis, Nikolas; Sinha, Ranjan (2010). "Engineering scalable, cache and space efficient tries for strings". The VLDB Journal. 17 (5): 634. doi:10.1007/s00778-010-0183-9. ISSN 1066-8888. S2CID 432572.
24. Askitis, Nikolas; Zobel, Justin (October 2005). "Cache-conscious Collision Resolution in String Hash Tables". Proceedings of the 12th International Conference, String Processing and Information Retrieval (SPIRE 2005). Vol. 3772/2005. pp. 91–102. doi:10.1007/11575832_11. ISBN 978-3-540-29740-6.
25. Askitis, Nikolas (2009). "Fast and Compact Hash Tables for Integer Keys" (PDF). Proceedings of the 32nd Australasian Computer Science Conference (ACSC 2009). Vol. 91. pp. 113–122. ISBN 978-1-920682-72-9. Archived from the original (PDF) on February 16, 2011. Retrieved June 13, 2010.
26. Tenenbaum, Aaron M.; Langsam, Yedidyah; Augenstein, Moshe J. (1990). Data Structures Using C. Prentice Hall. pp. 456–461, p. 472. ISBN 978-0-13-199746-2.
27. Pagh, Rasmus; Rodler, Flemming Friche (2001). "Cuckoo Hashing". Algorithms — ESA 2001. Lecture Notes in Computer Science. Vol. 2161. pp. 121–133. CiteSeerX 10.1.1.25.4189. doi:10.1007/3-540-44676-1_10. ISBN 978-3-540-42493-2.
28. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), "11 Hash Tables", Introduction to Algorithms (2nd ed.), MIT Press and McGraw-Hill, pp. 221–252, ISBN 0-262-03293-7.
29. Vitter, Jeffery S.; Chen, Wen-Chin (1987). The design and analysis of coalesced hashing. New York, United States: Oxford University Press. ISBN 978-0-19-504182-8 – via Archive.org.
30. Pagh, Rasmus; Rodler, Flemming Friche (2001). "Cuckoo Hashing". Algorithms — ESA 2001. Lecture Notes in Computer Science. Vol. 2161. CiteSeerX 10.1.1.25.4189. doi:10.1007/3-540-44676-1_10. ISBN 978-3-540-42493-2.
31. Herlihy, Maurice; Shavit, Nir; Tzafrir, Moran (2008). Hopscotch Hashing. International Symposium on Distributed Computing. Distributed Computing. Vol. 5218. Berlin, Heidelberg: Springer Publishing. pp. 350–364. doi:10.1007/978-3-540-87779-0_24. ISBN 978-3-540-87778-3 – via Springer Link.
32. Celis, Pedro (1986). Robin Hood Hashing (PDF). Ontario, Canada: University of Waterloo, Dept. of Computer Science. ISBN 031529700X. OCLC 14083698. Archived (PDF) from the original on November 1, 2021. Retrieved November 2, 2021.
33. Poblete, P.V.; Viola, A. (August 14, 2018). "Analysis of Robin Hood and Other Hashing Algorithms Under the Random Probing Model, With and Without Deletions". Combinatorics, Probability and Computing. Cambridge University Press. 28 (4): 600–617. doi:10.1017/S0963548318000408. ISSN 1469-2163. S2CID 125374363. Retrieved November 1, 2021 – via Cambridge Core.
34. Clarkson, Michael (2014). "Lecture 13: Hash tables". Cornell University, Department of Computer Science. Archived from the original on October 7, 2021. Retrieved November 1, 2021 – via cs.cornell.edu.
35. Gries, David (2017). "JavaHyperText and Data Structure: Robin Hood Hashing" (PDF). Cornell University, Department of Computer Science. Archived (PDF) from the original on April 26, 2021. Retrieved November 2, 2021 – via cs.cornell.edu.
36. Celis, Pedro (March 28, 1988). External Robin Hood Hashing (PDF) (Technical report). Bloomington, Indiana: Indiana University, Department of Computer Science. 246. Archived (PDF) from the original on November 2, 2021. Retrieved November 2, 2021.
37. Goddard, Wayne (2021). "Chapter C5: Hash Tables" (PDF). Clemson University. pp. 15–16. Archived (PDF) from the original on November 9, 2021. Retrieved November 9, 2021 – via people.cs.clemson.edu.
38. Devadas, Srini; Demaine, Erik (February 25, 2011). "Intro to Algorithms: Resizing Hash Tables" (PDF). Massachusetts Institute of Technology, Department of Computer Science. Archived (PDF) from the original on May 7, 2021. Retrieved November 9, 2021 – via MIT OpenCourseWare.
39. Thareja, Reema (October 13, 2018). "Hashing and Collision". Data Structures Using C (2 ed.). Oxford University Press. ISBN 9780198099307.
40. Friedman, Scott; Krishnan, Anand; Leidefrost, Nicholas (March 18, 2003). "Hash Tables for Embedded and Real-time systems" (PDF). All Computer Science and Engineering Research. Washington University in St. Louis. doi:10.7936/K7WD3XXV. Archived (PDF) from the original on June 9, 2021. Retrieved November 9, 2021 – via Northwestern University, Department of Computer Science.
41. Litwin, Witold (1980). "Linear hashing: A new tool for file and table addressing" (PDF). Proc. 6th Conference on Very Large Databases. Carnegie Mellon University. pp. 212–223. Archived (PDF) from the original on May 6, 2021. Retrieved November 10, 2021 – via cs.cmu.edu.
42. Dijk, Tom Van (2010). "Analysing and Improving Hash Table Performance" (PDF). Netherlands: University of Twente. Archived (PDF) from the original on November 6, 2021. Retrieved December 31, 2021.
43. Lech Banachowski. "Indexes and external sorting". pl:Polsko-Japońska Akademia Technik Komputerowych. Archived from the original on March 26, 2022. Retrieved March 26, 2022.
44. Zhong, Liang; Zheng, Xueqian; Liu, Yong; Wang, Mengting; Cao, Yang (February 2020). "Cache hit ratio maximization in device-to-device communications overlaying cellular networks". China Communications. 17 (2): 232–238. doi:10.23919/jcc.2020.02.018. ISSN 1673-5447. S2CID 212649328.
45. Bottommley, James (January 1, 2004). "Understanding Caching". Linux Journal. Archived from the original on December 4, 2020. Retrieved April 16, 2022.
46. Jill Seaman (2014). "Set & Hash Tables" (PDF). Texas State University. Archived from the original on April 1, 2022. Retrieved March 26, 2022.
47. "Transposition Table - Chessprogramming wiki". chessprogramming.org. Archived from the original on February 14, 2021. Retrieved May 1, 2020.
48. "JavaScript data types and data structures - JavaScript | MDN". developer.mozilla.org. Retrieved July 24, 2022.
49. "Map - JavaScript | MDN". developer.mozilla.org. June 20, 2023. Retrieved July 15, 2023.
50. "Programming language C++ - Technical Specification" (PDF). International Organization for Standardization. pp. 812–813. Archived from the original (PDF) on January 21, 2022. Retrieved February 8, 2022.
51. "The Go Programming Language Specification". go.dev. Retrieved January 1, 2023.
52. "Lesson: Implementations (The Java™ Tutorials > Collections)". docs.oracle.com. Archived from the original on January 18, 2017. Retrieved April 27, 2018.
53. Zhang, Juan; Jia, Yunwei (2020). "Redis rehash optimization based on machine learning". Journal of Physics: Conference Series. 1453 (1): 3. Bibcode:2020JPhCS1453a2048Z. doi:10.1088/1742-6596/1453/1/012048. S2CID 215943738.
54. Jonan Scheffler (December 25, 2016). "Ruby 2.4 Released: Faster Hashes, Unified Integers and Better Rounding". heroku.com. Archived from the original on July 3, 2019. Retrieved July 3, 2019.
55. "doc.rust-lang.org". Archived from the original on December 8, 2022. Retrieved December 14, 2022. test
56. "HashSet Class (System.Collections.Generic)". learn.microsoft.com. Retrieved July 1, 2023.
Further reading
• Tamassia, Roberto; Goodrich, Michael T. (2006). "Chapter Nine: Maps and Dictionaries". Data structures and algorithms in Java : [updated for Java 5.0] (4th ed.). Hoboken, NJ: Wiley. pp. 369–418. ISBN 978-0-471-73884-8.
• McKenzie, B. J.; Harries, R.; Bell, T. (February 1990). "Selecting a hashing algorithm". Software: Practice and Experience. 20 (2): 209–224. doi:10.1002/spe.4380200207. hdl:10092/9691. S2CID 12854386.
External links
Wikimedia Commons has media related to Hash tables.
Wikibooks has a book on the topic of: Data Structures/Hash Tables
• NIST entry on hash tables
• Open Data Structures – Chapter 5 – Hash Tables, Pat Morin
• MIT's Introduction to Algorithms: Hashing 1 MIT OCW lecture Video
• MIT's Introduction to Algorithms: Hashing 2 MIT OCW lecture Video
Well-known data structures
Types
• Collection
• Container
Abstract
• Associative array
• Multimap
• Retrieval Data Structure
• List
• Stack
• Queue
• Double-ended queue
• Priority queue
• Double-ended priority queue
• Set
• Multiset
• Disjoint-set
Arrays
• Bit array
• Circular buffer
• Dynamic array
• Hash table
• Hashed array tree
• Sparse matrix
Linked
• Association list
• Linked list
• Skip list
• Unrolled linked list
• XOR linked list
Trees
• B-tree
• Binary search tree
• AA tree
• AVL tree
• Red–black tree
• Self-balancing tree
• Splay tree
• Heap
• Binary heap
• Binomial heap
• Fibonacci heap
• R-tree
• R* tree
• R+ tree
• Hilbert R-tree
• Trie
• Hash tree
Graphs
• Binary decision diagram
• Directed acyclic graph
• Directed acyclic word graph
• List of data structures
Authority control: National
• Germany
|
Wikipedia
|
Reider's theorem
In algebraic geometry, Reider's theorem gives conditions for a line bundle on a projective surface to be very ample.
Statement
Let D be a nef divisor on a smooth projective surface X. Denote by KX the canonical divisor of X.
• If D2 > 4, then the linear system |KX+D| has no base points unless there exists a nonzero effective divisor E such that
• $DE=0,E^{2}=-1$, or
• $DE=1,E^{2}=0$;
• If D2 > 8, then the linear system |KX+D| is very ample unless there exists a nonzero effective divisor E satisfying one of the following:
• $DE=0,E^{2}=-1$ or $-2$;
• $DE=1,E^{2}=0$ or $-1$;
• $DE=2,E^{2}=0$;
• $DE=3,D=3E,E^{2}=1$
Applications
Reider's theorem implies the surface case of the Fujita conjecture. Let L be an ample line bundle on a smooth projective surface X. If m > 2, then for D=mL we have
• D2 = m2 L2 ≥ m2 > 4;
• for any effective divisor E the ampleness of L implies D · E = m(L · E) ≥ m > 2.
Thus by the first part of Reider's theorem |KX+mL| is base-point-free. Similarly, for any m > 3 the linear system |KX+mL| is very ample.
References
• Reider, Igor (1988), "Vector bundles of rank 2 and linear systems on algebraic surfaces", Annals of Mathematics, Second Series, Annals of Mathematics, 127 (2): 309–316, doi:10.2307/2007055, ISSN 0003-486X, JSTOR 2007055, MR 0932299
|
Wikipedia
|
Rayleigh distribution
In probability theory and statistics, the Rayleigh distribution is a continuous probability distribution for nonnegative-valued random variables. Up to rescaling, it coincides with the chi distribution with two degrees of freedom. The distribution is named after Lord Rayleigh (/ˈreɪli/).[1]
Rayleigh
Probability density function
Cumulative distribution function
Parameters scale: $\sigma >0$
Support $x\in [0,\infty )$
PDF ${\frac {x}{\sigma ^{2}}}e^{-x^{2}/\left(2\sigma ^{2}\right)}$
CDF $1-e^{-x^{2}/\left(2\sigma ^{2}\right)}$
Quantile $Q(F;\sigma )=\sigma {\sqrt {-2\ln(1-F)}}$
Mean $\sigma {\sqrt {\frac {\pi }{2}}}$
Median $\sigma {\sqrt {2\ln(2)}}$
Mode $\sigma $
Variance ${\frac {4-\pi }{2}}\sigma ^{2}$
Skewness ${\frac {2{\sqrt {\pi }}(\pi -3)}{(4-\pi )^{3/2}}}$
Ex. kurtosis $-{\frac {6\pi ^{2}-24\pi +16}{(4-\pi )^{2}}}$
Entropy $1+\ln \left({\frac {\sigma }{\sqrt {2}}}\right)+{\frac {\gamma }{2}}$
MGF $1+\sigma te^{\sigma ^{2}t^{2}/2}{\sqrt {\frac {\pi }{2}}}\left(\operatorname {erf} \left({\frac {\sigma t}{\sqrt {2}}}\right)+1\right)$
CF $1-\sigma te^{-\sigma ^{2}t^{2}/2}{\sqrt {\frac {\pi }{2}}}\left(\operatorname {erfi} \left({\frac {\sigma t}{\sqrt {2}}}\right)-i\right)$
A Rayleigh distribution is often observed when the overall magnitude of a vector in the plane is related to its directional components. One example where the Rayleigh distribution naturally arises is when wind velocity is analyzed in two dimensions. Assuming that each component is uncorrelated, normally distributed with equal variance, and zero mean, then the overall wind speed (vector magnitude) will be characterized by a Rayleigh distribution. A second example of the distribution arises in the case of random complex numbers whose real and imaginary components are independently and identically distributed Gaussian with equal variance and zero mean. In that case, the absolute value of the complex number is Rayleigh-distributed.
Definition
The probability density function of the Rayleigh distribution is[2]
$f(x;\sigma )={\frac {x}{\sigma ^{2}}}e^{-x^{2}/(2\sigma ^{2})},\quad x\geq 0,$
where $\sigma $ is the scale parameter of the distribution. The cumulative distribution function is[2]
$F(x;\sigma )=1-e^{-x^{2}/(2\sigma ^{2})}$
for $x\in [0,\infty ).$
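As a quick consistency check, the density can be recovered by differentiating the cumulative distribution function, and it integrates to one. A minimal sympy sketch (the symbol names are illustrative):

import sympy as sp

x, sigma = sp.symbols('x sigma', positive=True)
F = 1 - sp.exp(-x**2 / (2 * sigma**2))     # the CDF above
f = sp.diff(F, x)                          # differentiate to get the PDF
print(sp.simplify(f))                      # x*exp(-x**2/(2*sigma**2))/sigma**2
print(sp.integrate(f, (x, 0, sp.oo)))      # 1, so the density is normalized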
Relation to random vector length
Consider the two-dimensional vector $Y=(U,V)$ which has components that are bivariate normally distributed, centered at zero, and independent. Then $U$ and $V$ have density functions
$f_{U}(x;\sigma )=f_{V}(x;\sigma )={\frac {e^{-x^{2}/(2\sigma ^{2})}}{\sqrt {2\pi \sigma ^{2}}}}.$
Let $X$ be the length of $Y$. That is, $X={\sqrt {U^{2}+V^{2}}}.$ Then $X$ has cumulative distribution function
$F_{X}(x;\sigma )=\iint _{D_{x}}f_{U}(u;\sigma )f_{V}(v;\sigma )\,dA,$
where $D_{x}$ is the disk
$D_{x}=\left\{(u,v):{\sqrt {u^{2}+v^{2}}}\leq x\right\}.$
Writing the double integral in polar coordinates, it becomes
$F_{X}(x;\sigma )={\frac {1}{2\pi \sigma ^{2}}}\int _{0}^{2\pi }\int _{0}^{x}re^{-r^{2}/(2\sigma ^{2})}\,dr\,d\theta ={\frac {1}{\sigma ^{2}}}\int _{0}^{x}re^{-r^{2}/(2\sigma ^{2})}\,dr.$
Finally, the probability density function for $X$ is the derivative of its cumulative distribution function, which by the fundamental theorem of calculus is
$f_{X}(x;\sigma )={\frac {d}{dx}}F_{X}(x;\sigma )={\frac {x}{\sigma ^{2}}}e^{-x^{2}/(2\sigma ^{2})},$
which is the Rayleigh distribution. It is straightforward to generalize to vectors of dimension other than 2. There are also generalizations when the components have unequal variance or correlations (Hoyt distribution), or when the vector Y follows a bivariate Student t-distribution (see also: Hotelling's T-squared distribution).[3]
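The two-dimensional case can be illustrated numerically by drawing the components as independent zero-mean Gaussians and comparing the empirical mean of the magnitudes with $\sigma {\sqrt {\pi /2}}$. A simulation sketch (sample size and seed are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
u = rng.normal(0.0, sigma, size=100_000)   # first component
v = rng.normal(0.0, sigma, size=100_000)   # second component
magnitude = np.hypot(u, v)                 # lengths of the random vectors

print(magnitude.mean())                    # close to sigma * sqrt(pi / 2)
print(sigma * np.sqrt(np.pi / 2))          # ≈ 2.5066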
Generalization to bivariate Student's t-distribution
Suppose $Y$ is a random vector with components $u,v$ that follows a multivariate t-distribution. If the components both have mean zero, equal variance, and are independent, the bivariate Student's t-distribution takes the form:
$f(u,v)={1 \over {2\pi \sigma ^{2}}}\left(1+{u^{2}+v^{2} \over {\nu \sigma ^{2}}}\right)^{-\nu /2-1}$
Let $R={\sqrt {U^{2}+V^{2}}}$ be the magnitude of $Y$. Then the cumulative distribution function (CDF) of the magnitude is:
$F(r)={1 \over {2\pi \sigma ^{2}}}\iint _{D_{r}}\left(1+{u^{2}+v^{2} \over {\nu \sigma ^{2}}}\right)^{-\nu /2-1}du\;dv$
where $D_{r}$ is the disk defined by:
$D_{r}=\left\{(u,v):{\sqrt {u^{2}+v^{2}}}\leq r\right\}$
Converting to polar coordinates leads to the CDF becoming:
${\begin{aligned}F(r)&={1 \over {2\pi \sigma ^{2}}}\int _{0}^{r}\int _{0}^{2\pi }\rho \left(1+{\rho ^{2} \over {\nu \sigma ^{2}}}\right)^{-\nu /2-1}d\theta \;d\rho \\&={1 \over {\sigma ^{2}}}\int _{0}^{r}\rho \left(1+{\rho ^{2} \over {\nu \sigma ^{2}}}\right)^{-\nu /2-1}d\rho \\&=1-\left(1+{r^{2} \over {\nu \sigma ^{2}}}\right)^{-\nu /2}\end{aligned}}$
Finally, the probability density function (PDF) of the magnitude may be derived:
$f(r)=F'(r)={r \over {\sigma ^{2}}}\left(1+{r^{2} \over {\nu \sigma ^{2}}}\right)^{-\nu /2-1}$
In the limit as $\nu \rightarrow \infty $, the Rayleigh distribution is recovered because:
$\lim _{\nu \rightarrow \infty }\left(1+{r^{2} \over {\nu \sigma ^{2}}}\right)^{-\nu /2-1}=e^{-r^{2}/(2\sigma ^{2})}$
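This limit can be confirmed symbolically, for instance with sympy (a small sketch using the symbols above):

import sympy as sp

r, sigma, nu = sp.symbols('r sigma nu', positive=True)
expr = (1 + r**2 / (nu * sigma**2))**(-nu/2 - 1)
print(sp.limit(expr, nu, sp.oo))    # exp(-r**2/(2*sigma**2))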
Properties
The raw moments are given by:
$\mu _{j}=\sigma ^{j}2^{j/2}\,\Gamma \left(1+{\frac {j}{2}}\right),$
where $\Gamma (z)$ is the gamma function.
The mean of a Rayleigh random variable is thus:
$\mu (X)=\sigma {\sqrt {\frac {\pi }{2}}}\ \approx 1.253\ \sigma .$
The standard deviation of a Rayleigh random variable is:
$\operatorname {std} (X)={\sqrt {\left(2-{\frac {\pi }{2}}\right)}}\sigma \approx 0.655\ \sigma $
The variance of a Rayleigh random variable is:
$\operatorname {var} (X)=\mu _{2}-\mu _{1}^{2}=\left(2-{\frac {\pi }{2}}\right)\sigma ^{2}\approx 0.429\ \sigma ^{2}$
The mode is $\sigma ,$ and the maximum pdf is
$f_{\max }=f(\sigma ;\sigma )={\frac {1}{\sigma }}e^{-1/2}\approx {\frac {0.606}{\sigma }}.$
The skewness is given by:
$\gamma _{1}={\frac {2{\sqrt {\pi }}(\pi -3)}{(4-\pi )^{3/2}}}\approx 0.631$
The excess kurtosis is given by:
$\gamma _{2}=-{\frac {6\pi ^{2}-24\pi +16}{(4-\pi )^{2}}}\approx 0.245$
The characteristic function is given by:
$\varphi (t)=1-\sigma te^{-{\frac {1}{2}}\sigma ^{2}t^{2}}{\sqrt {\frac {\pi }{2}}}\left[\operatorname {erfi} \left({\frac {\sigma t}{\sqrt {2}}}\right)-i\right]$
where $\operatorname {erfi} (z)$ is the imaginary error function. The moment generating function is given by
$M(t)=1+\sigma t\,e^{{\frac {1}{2}}\sigma ^{2}t^{2}}{\sqrt {\frac {\pi }{2}}}\left[\operatorname {erf} \left({\frac {\sigma t}{\sqrt {2}}}\right)+1\right]$
where $\operatorname {erf} (z)$ is the error function.
Differential entropy
The differential entropy is given by
$H=1+\ln \left({\frac {\sigma }{\sqrt {2}}}\right)+{\frac {\gamma }{2}}$
where $\gamma $ is the Euler–Mascheroni constant.
Parameter estimation
Given a sample of N independent and identically distributed Rayleigh random variables $x_{i}$ with parameter $\sigma $,
${\widehat {\sigma ^{2}}}={\frac {1}{2N}}\sum _{i=1}^{N}x_{i}^{2}$ is the maximum likelihood estimate of $\sigma ^{2}$, and it is unbiased.
${\widehat {\sigma }}\approx {\sqrt {{\frac {1}{2N}}\sum _{i=1}^{N}x_{i}^{2}}}$ is a biased estimator that can be corrected via the formula
$\sigma ={\widehat {\sigma }}{\frac {\Gamma (N){\sqrt {N}}}{\Gamma \left(N+{\frac {1}{2}}\right)}}={\widehat {\sigma }}{\frac {4^{N}N!(N-1)!{\sqrt {N}}}{(2N)!{\sqrt {\pi }}}}$[4]
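For instance, both estimators can be computed directly from a sample; the bias-correction factor is best evaluated with log-gamma for numerical stability. A sketch with simulated data standing in for real measurements (parameter values and seed are arbitrary):

import math
import numpy as np

rng = np.random.default_rng(1)
x = rng.rayleigh(scale=1.5, size=50)         # sample with true sigma = 1.5

N = len(x)
sigma2_hat = np.sum(x**2) / (2 * N)          # unbiased MLE of sigma^2
sigma_hat = math.sqrt(sigma2_hat)            # biased estimate of sigma

# correction factor Gamma(N) * sqrt(N) / Gamma(N + 1/2), computed via lgamma
corr = math.exp(math.lgamma(N) - math.lgamma(N + 0.5)) * math.sqrt(N)
print(sigma_hat * corr)                      # bias-corrected estimate of sigma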
Confidence intervals
To find the (1 − α) confidence interval, first find the bounds $[a,b]$ where:
$P\left(\chi _{2N}^{2}\leq a\right)=\alpha /2,\quad P\left(\chi _{2N}^{2}\leq b\right)=1-\alpha /2$
then the scale parameter will fall within the bounds
${\frac {{N}{\overline {x^{2}}}}{b}}\leq {\widehat {\sigma ^{2}}}\leq {\frac {{N}{\overline {x^{2}}}}{a}}$[5]
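A sketch of this interval using scipy's chi-squared quantiles (again with simulated data; the confidence level and sample size are arbitrary choices):

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
x = rng.rayleigh(scale=1.5, size=50)      # Rayleigh sample, true sigma = 1.5

N, alpha = len(x), 0.05
a = chi2.ppf(alpha / 2, df=2 * N)         # lower chi-squared quantile
b = chi2.ppf(1 - alpha / 2, df=2 * N)     # upper chi-squared quantile
mean_sq = np.mean(x**2)

print(N * mean_sq / b, N * mean_sq / a)   # 95% interval for sigma^2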
Generating random variates
Given a random variate U drawn from the uniform distribution on the interval (0, 1), the variate
$X=\sigma {\sqrt {-2\ln U}}\,$
has a Rayleigh distribution with parameter $\sigma $. This is obtained by applying the inverse transform sampling method.
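In code the transform is a one-liner (a sketch; any uniform generator works):

import numpy as np

rng = np.random.default_rng(3)
sigma = 1.0
u = rng.uniform(size=5)                  # U ~ Uniform(0, 1)
x = sigma * np.sqrt(-2 * np.log(u))      # X ~ Rayleigh(sigma)
print(x)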
Related distributions
• $R\sim \mathrm {Rayleigh} (\sigma )$ is Rayleigh distributed if $R={\sqrt {X^{2}+Y^{2}}}$, where $X\sim N(0,\sigma ^{2})$ and $Y\sim N(0,\sigma ^{2})$ are independent normal random variables.[6] This gives motivation to the use of the symbol $\sigma $ in the above parametrization of the Rayleigh density.
• The magnitude $|z|$ of a standard complex normally distributed variable z is Rayleigh distributed.
• The chi distribution with ν = 2 is equivalent to the Rayleigh distribution with σ = 1.
• If $R\sim \mathrm {Rayleigh} (1)$, then $R^{2}$ has a chi-squared distribution with $N=2$ degrees of freedom:
$[Q=R^{2}]\sim \chi ^{2}(N)\ .$
• If $R_{i}\sim \mathrm {Rayleigh} (\sigma )$ are independent, then $\sum _{i=1}^{N}R_{i}^{2}$ has a gamma distribution with parameters $N$ and ${\frac {1}{2\sigma ^{2}}}$:
$\left[Y=\sum _{i=1}^{N}R_{i}^{2}\right]\sim \Gamma \left(N,{\frac {1}{2\sigma ^{2}}}\right).$
• The Rice distribution is a noncentral generalization of the Rayleigh distribution: $\mathrm {Rayleigh} (\sigma )=\mathrm {Rice} (0,\sigma )$.
• The Weibull distribution with the shape parameter k = 2 yields a Rayleigh distribution. Then the Rayleigh distribution parameter $\sigma $ is related to the Weibull scale parameter according to $\lambda =\sigma {\sqrt {2}}.$
• The Maxwell–Boltzmann distribution describes the magnitude of a normal vector in three dimensions.
• If $X$ has an exponential distribution $X\sim \mathrm {Exponential} (\lambda )$, then $Y={\sqrt {X}}\sim \mathrm {Rayleigh} (1/{\sqrt {2\lambda }}).$
• The half-normal distribution is the univariate special case of the Rayleigh distribution.
Applications
An application of the estimation of σ can be found in magnetic resonance imaging (MRI). As MRI images are recorded as complex images but most often viewed as magnitude images, the background data is Rayleigh distributed. Hence, the above formula can be used to estimate the noise variance in an MRI image from background data.[7][8]
The Rayleigh distribution was also employed in the field of nutrition for linking dietary nutrient levels and human and animal responses. In this way, the parameter σ may be used to characterize the nutrient–response relationship.[9]
In the field of ballistics, the Rayleigh distribution is used for calculating the circular error probable—a measure of a weapon's precision.
In physical oceanography, the distribution of significant wave height approximately follows a Rayleigh distribution.[10]
See also
• Circular error probable
• Rayleigh fading
• Rayleigh mixture distribution
• Rice distribution
References
1. "The Wave Theory of Light", Encyclopedic Britannica 1888; "The Problem of the Random Walk", Nature 1905 vol.72 p.318
2. Papoulis, Athanasios; Pillai, S. (2001) Probability, Random Variables and Stochastic Processes. ISBN 0073660116, ISBN 9780073660110
3. Röver, C. (2011). "Student-t based filter for robust signal detection". Physical Review D. 84 (12): 122004. arXiv:1109.0442. Bibcode:2011PhRvD..84l2004R. doi:10.1103/physrevd.84.122004.
4. Siddiqui, M. M. (1964) "Statistical inference for Rayleigh distributions", The Journal of Research of the National Bureau of Standards, Sec. D: Radio Science, Vol. 68D, No. 9, p. 1007
5. Siddiqui, M. M. (1961) "Some Problems Connected With Rayleigh Distributions", The Journal of Research of the National Bureau of Standards; Sec. D: Radio Propagation, Vol. 66D, No. 2, p. 169
6. Hogema, Jeroen (2005) "Shot group statistics"
7. Sijbers, J.; den Dekker, A. J.; Raman, E.; Van Dyck, D. (1999). "Parameter estimation from magnitude MR images". International Journal of Imaging Systems and Technology. 10 (2): 109–114. CiteSeerX 10.1.1.18.1228. doi:10.1002/(sici)1098-1098(1999)10:2<109::aid-ima2>3.0.co;2-r.
8. den Dekker, A. J.; Sijbers, J. (2014). "Data distributions in magnetic resonance images: a review". Physica Medica. 30 (7): 725–741. doi:10.1016/j.ejmp.2014.05.002. PMID 25059432.
9. Ahmadi, Hamed (2017-11-21). "A mathematical function for the description of nutrient-response curve". PLOS ONE. 12 (11): e0187292. Bibcode:2017PLoSO..1287292A. doi:10.1371/journal.pone.0187292. ISSN 1932-6203. PMC 5697816. PMID 29161271.
10. "Rayleigh Probability Distribution Applied to Random Wave Heights" (PDF). United States Naval Academy.{{cite web}}: CS1 maint: url-status (link)
Probability distributions (list)
Discrete
univariate
with finite
support
• Benford
• Bernoulli
• beta-binomial
• binomial
• categorical
• hypergeometric
• negative
• Poisson binomial
• Rademacher
• soliton
• discrete uniform
• Zipf
• Zipf–Mandelbrot
with infinite
support
• beta negative binomial
• Borel
• Conway–Maxwell–Poisson
• discrete phase-type
• Delaporte
• extended negative binomial
• Flory–Schulz
• Gauss–Kuzmin
• geometric
• logarithmic
• mixed Poisson
• negative binomial
• Panjer
• parabolic fractal
• Poisson
• Skellam
• Yule–Simon
• zeta
Continuous
univariate
supported on a
bounded interval
• arcsine
• ARGUS
• Balding–Nichols
• Bates
• beta
• beta rectangular
• continuous Bernoulli
• Irwin–Hall
• Kumaraswamy
• logit-normal
• noncentral beta
• PERT
• raised cosine
• reciprocal
• triangular
• U-quadratic
• uniform
• Wigner semicircle
supported on a
semi-infinite
interval
• Benini
• Benktander 1st kind
• Benktander 2nd kind
• beta prime
• Burr
• chi
• chi-squared
• noncentral
• inverse
• scaled
• Dagum
• Davis
• Erlang
• hyper
• exponential
• hyperexponential
• hypoexponential
• logarithmic
• F
• noncentral
• folded normal
• Fréchet
• gamma
• generalized
• inverse
• gamma/Gompertz
• Gompertz
• shifted
• half-logistic
• half-normal
• Hotelling's T-squared
• inverse Gaussian
• generalized
• Kolmogorov
• Lévy
• log-Cauchy
• log-Laplace
• log-logistic
• log-normal
• log-t
• Lomax
• matrix-exponential
• Maxwell–Boltzmann
• Maxwell–Jüttner
• Mittag-Leffler
• Nakagami
• Pareto
• phase-type
• Poly-Weibull
• Rayleigh
• relativistic Breit–Wigner
• Rice
• truncated normal
• type-2 Gumbel
• Weibull
• discrete
• Wilks's lambda
supported
on the whole
real line
• Cauchy
• exponential power
• Fisher's z
• Kaniadakis κ-Gaussian
• Gaussian q
• generalized normal
• generalized hyperbolic
• geometric stable
• Gumbel
• Holtsmark
• hyperbolic secant
• Johnson's SU
• Landau
• Laplace
• asymmetric
• logistic
• noncentral t
• normal (Gaussian)
• normal-inverse Gaussian
• skew normal
• slash
• stable
• Student's t
• Tracy–Widom
• variance-gamma
• Voigt
with support
whose type varies
• generalized chi-squared
• generalized extreme value
• generalized Pareto
• Marchenko–Pastur
• Kaniadakis κ-exponential
• Kaniadakis κ-Gamma
• Kaniadakis κ-Weibull
• Kaniadakis κ-Logistic
• Kaniadakis κ-Erlang
• q-exponential
• q-Gaussian
• q-Weibull
• shifted log-logistic
• Tukey lambda
Mixed
univariate
continuous-
discrete
• Rectified Gaussian
Multivariate
(joint)
• Discrete:
• Ewens
• multinomial
• Dirichlet
• negative
• Continuous:
• Dirichlet
• generalized
• multivariate Laplace
• multivariate normal
• multivariate stable
• multivariate t
• normal-gamma
• inverse
• Matrix-valued:
• LKJ
• matrix normal
• matrix t
• matrix gamma
• inverse
• Wishart
• normal
• inverse
• normal-inverse
• complex
Directional
Univariate (circular) directional
Circular uniform
univariate von Mises
wrapped normal
wrapped Cauchy
wrapped exponential
wrapped asymmetric Laplace
wrapped Lévy
Bivariate (spherical)
Kent
Bivariate (toroidal)
bivariate von Mises
Multivariate
von Mises–Fisher
Bingham
Degenerate
and singular
Degenerate
Dirac delta function
Singular
Cantor
Families
• Circular
• compound Poisson
• elliptical
• exponential
• natural exponential
• location–scale
• maximum entropy
• mixture
• Pearson
• Tweedie
• wrapped
• Category
• Commons
|
Wikipedia
|
Reiko Sakamoto (mathematician)
Reiko Sakamoto (Japanese: 坂本 玲子, born 1939) is a Japanese mathematician affiliated with Nara Women's University.[1] Her teachers have included Sigeru Mizohata and Masaya Yamaguchi;[2] her students have included Yoshihiro Shibata.[3] She is known for her research on mixed boundary conditions for hyperbolic partial differential equations,[4] for which she won the 1974 Iyanaga Prize of the Mathematical Society of Japan,[5] and for her book on hyperbolic boundary value problems.[6]
References
1. "Sakamoto, Reiko", Catalog, German National Library, retrieved 2021-09-21
2. Hyperbolic Boundary Value Problems, Preface, pp. vii–viii, via Google Books, retrieved 2021-09-21
3. Amann, Herbert; Giga, Yoshikazu; Okamoto, Hisashi; Kozono, Hideo; Yamazaki, Masao (2016), "The Work of Yoshihiro Shibata", in Amann, Herbert; Giga, Yoshikazu; Kozono, Hideo; Okamoto, Hisashi; Yamazaki, Masao (eds.), Recent Developments of Mathematical Fluid Mechanics, Advances in Mathematical Fluid Mechanics, Springer Basel, pp. 1–12, doi:10.1007/978-3-0348-0939-9_1
4. Sakamoto, Reiko (1970), "Mixed problems for hyperbolic equations, I: Energy inequalities", Journal of Mathematics of Kyoto University, 10 (2): 349–373, doi:10.1215/kjm/1250523767. Sakamoto, Reiko (1970), "Mixed problems for hyperbolic equations, II: Existence theorems with zero initial datas and energy inequalities with initial datas", Journal of Mathematics of Kyoto University, 10 (3): 403–417, doi:10.1215/kjm/1250523726. Translated into Russian in Matematika, 16 (1): 62–80 and 81–99, 1972. Reviews: S. Cinquini, MR0283400 (in Italian); K. Graf Finck von Finckenstein, Zbl 0203.10001, Zbl 0206.40101.
5. "The Spring Prize of the Mathematical Society of Japan", MacTutor History of Mathematics Archive, St Andrews University, retrieved 2021-09-21
6. Sakamoto, Reiko (1978), Hyperbolic Boundary Value Problems, Tokyo: Iwanami Shoten, MR 0601778. Translated into English with corrections by Katsumi Miyahara, Cambridge University Press, 1982. Reviews: Hideo Soga (1982), MR0601778; G. F. D. Duff (1983), Bulletin of the American Mathematical Society, doi:10.1090/S0273-0979-1983-15218-7; Leonard Sarason (1984), SIAM Review, JSTOR 2031005; M. Tsuji, Zbl 0494.35001.
Authority control
International
• ISNI
• VIAF
National
• Germany
• Israel
• United States
Academics
• MathSciNet
• Scopus
• zbMATH
Other
• IdRef
|
Wikipedia
|
Reilly formula
In the mathematical field of Riemannian geometry, the Reilly formula is an important identity, discovered by Robert Reilly in 1977.[1] It says that, given a smooth Riemannian manifold-with-boundary (M, g) and a smooth function u on M, one has
$\int _{\partial M}\left(H{\Big (}{\frac {\partial u}{\partial \nu }}{\Big )}^{2}+2{\frac {\partial u}{\partial \nu }}\Delta ^{\partial M}u+h{\big (}\nabla ^{\partial M}u,\nabla ^{\partial M}u{\big )}\right)=\int _{M}{\Big (}(\Delta u)^{2}-|\nabla \nabla u|^{2}-\operatorname {Ric} (\nabla u,\nabla u){\Big )},$
in which h is the second fundamental form of the boundary of M, H is its mean curvature, and ν is its unit normal vector.[2][3] This is often used in combination with the observation
$|\nabla \nabla u|^{2}={\frac {1}{n}}(\Delta u)^{2}+{\Big |}\nabla \nabla u-{\frac {1}{n}}(\Delta u)g{\Big |}^{2}\geq {\frac {1}{n}}(\Delta u)^{2},$
with the consequence that
$\int _{\partial M}\left(H{\Big (}{\frac {\partial u}{\partial \nu }}{\Big )}^{2}+2{\frac {\partial u}{\partial \nu }}\Delta ^{\partial M}u+h{\big (}\nabla ^{\partial M}u,\nabla ^{\partial M}u{\big )}\right)\leq \int _{M}{\Big (}{\frac {n-1}{n}}(\Delta u)^{2}-\operatorname {Ric} (\nabla u,\nabla u){\Big )}.$
This is particularly useful because the solvability of the Dirichlet problem for the Laplacian allows convenient choices of u.[4][5] Applications include eigenvalue estimates in spectral geometry and the study of submanifolds of constant mean curvature.
References
1. Reilly 1977
2. Chow, Lu, and Ni, section A.5
3. Colding and Minicozzi, section 7.3
4. Li, section 8
5. Schoen and Yau, section III.8
• Bennett Chow, Peng Lu, and Lei Ni. Hamilton's Ricci flow. Graduate Studies in Mathematics, 77. American Mathematical Society, Providence, RI; Science Press Beijing, New York, 2006. xxxvi+608 pp. ISBN 978-0-8218-4231-7, 0-8218-4231-5
• Tobias Holck Colding and William P. Minicozzi II. A course in minimal surfaces. Graduate Studies in Mathematics, 121. American Mathematical Society, Providence, RI, 2011. xii+313 pp. ISBN 978-0-8218-5323-8. doi:10.1090/gsm/121
• Peter Li. Geometric analysis. Cambridge Studies in Advanced Mathematics, 134. Cambridge University Press, Cambridge, 2012. x+406 pp. ISBN 978-1-107-02064-1. doi:10.1017/CBO9781139105798
• Reilly, Robert (1977). "Applications of the Hessian operator in a Riemannian manifold". Indiana University Mathematics Journal. 26 (3): 459. doi:10.1512/iumj.1977.26.26036. ISSN 0022-2518.
• R. Schoen and S.-T. Yau. Lectures on differential geometry. Lecture notes prepared by Wei Yue Ding, Kung Ching Chang, Jia Qing Zhong and Yi Chao Xu. Translated from the Chinese by Ding and S.Y. Cheng. With a preface translated from the Chinese by Kaising Tso. Conference Proceedings and Lecture Notes in Geometry and Topology, I. International Press, Cambridge, MA, 1994. v+235 pp. ISBN 1-57146-012-8
External links
• In Memoriam Robert Cunningham Reilly
|
Wikipedia
|
Reinhardt cardinal
In set theory, a branch of mathematics, a Reinhardt cardinal is a kind of large cardinal. Reinhardt cardinals are considered under ZF (Zermelo–Fraenkel set theory without the Axiom of Choice), because they are inconsistent with ZFC (ZF with the Axiom of Choice). They were suggested (Reinhardt 1967, 1974) by American mathematician William Nelson Reinhardt (1939–1998).
Definition
A Reinhardt cardinal is the critical point of a non-trivial elementary embedding $j:V\to V$ of $V$ into itself.
This definition refers explicitly to the proper class $j$. In standard ZF, classes are of the form $\{x|\phi (x,a)\}$ for some set $a$ and formula $\phi $. But it was shown in Suzuki (1999) that no such class is an elementary embedding $j:V\to V$. So Reinhardt cardinals are inconsistent with this notion of class.
There are other formulations of Reinhardt cardinals which are not known to be inconsistent. One is to add a new function symbol $j$ to the language of ZF, together with axioms stating that $j$ is an elementary embedding of $V$, and Separation and Collection axioms for all formulas involving $j$. Another is to use a class theory such as NBG or KM, which admit classes which need not be definable in the sense above.
Kunen's inconsistency theorem
Kunen (1971) proved his inconsistency theorem, showing that the existence of an elementary embedding $j:V\to V$ contradicts NBG with the axiom of choice (and ZFC extended by $j$). His proof uses the axiom of choice, and it is still an open question as to whether such an embedding is consistent with NBG without the axiom of choice (or with ZF plus the extra symbol $j$ and its attendant axioms).
Kunen's theorem is not simply a consequence of Suzuki (1999), since it is proved in NBG and hence does not require the assumption that $j$ is a definable class. Also, if $0^{\#}$ exists, then there is an elementary embedding of a transitive model $M$ of ZFC (in fact Gödel's constructible universe $L$) into itself. But such embeddings are not classes of $M$.
Stronger axioms
There are some variations of Reinhardt cardinals, forming a hierarchy of hypotheses asserting the existence of elementary embeddings $V\to V$.
A super Reinhardt cardinal is $\kappa $ such that for every ordinal $\alpha $, there is an elementary embedding $j:V\to V$ with $j(\kappa )>\alpha $ and having critical point $\kappa $.[1]
J3: There is a nontrivial elementary embedding $j:V\to V$
J2: There is a nontrivial elementary embedding $j:V\to V$ and $\mathrm {DC} _{\lambda }$ holds, where $\lambda $ is the least fixed-point above the critical point.
J1: For every ordinal $\alpha $, there is an elementary embedding $j:V\to V$ with $j(\kappa )>\alpha $ and having critical point $\kappa $.
Each of J1 and J2 immediately implies J3. A cardinal $\kappa $ as in J1 is known as a super Reinhardt cardinal.
Berkeley cardinals are stronger large cardinals suggested by Woodin.
See also
• List of large cardinal properties
References
• Jensen, Ronald (1995), "Inner Models and Large Cardinals", The Bulletin of Symbolic Logic, The Bulletin of Symbolic Logic, Vol. 1, No. 4, 1 (4): 393–407, CiteSeerX 10.1.1.28.1790, doi:10.2307/421129, JSTOR 421129, S2CID 15714648
• Kanamori, Akihiro (2003), The Higher Infinite : Large Cardinals in Set Theory from Their Beginnings (2nd ed.), Springer, ISBN 3-540-00384-3
• Kunen, Kenneth (1971), "Elementary embeddings and infinitary combinatorics", Journal of Symbolic Logic, The Journal of Symbolic Logic, Vol. 36, No. 3, 36 (3): 407–413, doi:10.2307/2269948, JSTOR 2269948, MR 0311478, S2CID 38948969
• Reinhardt, W. N. (1967), Topics in the metamathematics of set theory, Doctoral dissertation, University of California, Berkeley
• Reinhardt, W. N. (1974), "Remarks on reflection principles, large cardinals, and elementary embeddings.", Axiomatic set theory, Proc. Sympos. Pure Math., vol. XIII, Part II, Providence, R. I.: Amer. Math. Soc., pp. 189–205, MR 0401475
• Suzuki, Akira (1999), "No elementary embedding from V into V is definable from parameters", Journal of Symbolic Logic, 64 (4): 1591–1594, doi:10.2307/2586799, JSTOR 2586799, MR 1780073, S2CID 40967369
Citations
1. J. Bagaria, P. Koellner, W. H. Woodin, Large Cardinals Beyond Choice (2019). Accessed 28 June 2023.
External links
• Koellner, Peter (2014), The Search for Deep Inconsistency (PDF)
|
Wikipedia
|
Reinhardt polygon
In geometry, a Reinhardt polygon is an equilateral polygon inscribed in a Reuleaux polygon. As in the regular polygons, each vertex of a Reinhardt polygon participates in at least one defining pair of the diameter of the polygon. Reinhardt polygons with $n$ sides exist, often with multiple forms, whenever $n$ is not a power of two. Among all polygons with $n$ sides, the Reinhardt polygons have the largest possible perimeter for their diameter, the largest possible width for their diameter, and the largest possible width for their perimeter. They are named after Karl Reinhardt, who studied them in 1922.[1][2]
Definition and construction
A Reuleaux polygon is a convex shape with circular-arc sides, each centered on a vertex of the shape and all having the same radius; an example is the Reuleaux triangle. These shapes are curves of constant width. Some Reuleaux polygons have side lengths that are irrational multiples of each other, but if a Reuleaux polygon has sides that can be partitioned into a system of arcs of equal length, then the polygon formed as the convex hull of the endpoints of these arcs is defined as a Reinhardt polygon. Necessarily, the vertices of the underlying Reuleaux polygon are also endpoints of arcs and vertices of the Reinhardt polygon, but the Reinhardt polygon may also have additional vertices, interior to the sides of the Reuleaux polygon.[3]
If $n$ is a power of two, then it is not possible to form a Reinhardt polygon with $n$ sides. If $n$ is an odd number, then the regular polygon with $n$ sides is a Reinhardt polygon. Any other natural number must have an odd divisor $d$, and a Reinhardt polygon with $n$ sides may be formed by subdividing each arc of a regular $d$-sided Reuleaux polygon into $n/d$ smaller arcs. Therefore, the possible numbers of sides of Reinhardt polygons are the polite numbers, numbers that are not powers of two. When $n$ is an odd prime number, or two times a prime number, there is only one shape of $n$-sided Reinhardt polygon, but all other values of $n$ have Reinhardt polygons with multiple shapes.[1]
Dimensions and optimality
The diameter pairs of a Reinhardt polygon form many isosceles triangles with the sides of the polygon, with apex angle $\pi /n$, from which the dimensions of the polygon may be calculated. If the side length of a Reinhardt polygon is 1, then its perimeter is just $n$. The diameter of the polygon (the longest distance between any two of its points) equals the side length of these isosceles triangles, ${\frac {1}{2\sin(\pi /(2n))}}$. The width of the polygon (the shortest distance between any two parallel supporting lines) equals the height of this triangle, ${\frac {1}{2\tan(\pi /(2n))}}$; a numeric check of these formulas follows the list below. These polygons are optimal in three ways:
• They have the largest possible perimeter among all $n$-sided polygons with their diameter, and the smallest possible diameter among all $n$-sided polygons with their perimeter.[1]
• They have the largest possible width among all $n$-sided polygons with their diameter, and the smallest possible diameter among all $n$-sided polygons with their width.[1]
• They have the largest possible width among all $n$-sided polygons with their perimeter, and the smallest possible perimeter among all $n$-sided polygons with their width.[1]
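As a quick sanity check of the dimension formulas above, the case $n=3$ is the equilateral triangle with unit sides, whose diameter is 1 and whose width is ${\sqrt {3}}/2\approx 0.866$. A short sketch:

import math

def reinhardt_dimensions(n, side=1.0):
    # perimeter, diameter and width of an n-sided Reinhardt polygon with unit sides
    perimeter = n * side
    diameter = side / (2 * math.sin(math.pi / (2 * n)))
    width = side / (2 * math.tan(math.pi / (2 * n)))
    return perimeter, diameter, width

print(reinhardt_dimensions(3))   # (3.0, 1.0, 0.866...), matching the equilateral triangle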
The relation between perimeter and diameter for these polygons was proven by Reinhardt,[4] and rediscovered independently multiple times.[5][6] The relation between diameter and width was proven by Bezdek and Fodor in 2000; their work also investigates the optimal polygons for this problem when the number of sides is a power of two (for which Reinhardt polygons do not exist).[7]
Symmetry and enumeration
The $n$-sided Reinhardt polygons formed from $d$-sided regular Reuleaux polygons are symmetric: they can be rotated by an angle of $2\pi /d$ to obtain the same polygon. The Reinhardt polygons that have this sort of rotational symmetry are called periodic, and Reinhardt polygons without rotational symmetry are called sporadic. If $n$ is a semiprime, or the product of a power of two with an odd prime power, then all $n$-sided Reinhardt polygons are periodic. In the remaining cases, when $n$ has two distinct odd prime factors and is not the product of these two factors, sporadic Reinhardt polygons also exist.[2]
For each $n$, there are only finitely many distinct $n$-sided Reinhardt polygons.[3] If $p$ is the smallest prime factor of $n$, then the number of distinct $n$-sided periodic Reinhardt polygons is
${\frac {p2^{n/p}}{4n}}{\bigl (}1+o(1){\bigr )},$
where the $o(1)$ term uses little O notation. However, the number of sporadic Reinhardt polygons is less well-understood, and for most values of $n$ the total number of Reinhardt polygons is dominated by the sporadic ones.[2]
The numbers of these polygons for small values of $n$ (counting two polygons as the same when they can be rotated or flipped to form each other) are:[1]
$n$:  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24
#:    1  0  1  1  1  0  2   1   1   2   1   1   5   0   1   5   1   2  10   1   1  12
See also
• Biggest little polygon, the polygons maximizing area for their diameter
References
1. Mossinghoff, Michael J. (2011), "Enumerating isodiametric and isoperimetric polygons", Journal of Combinatorial Theory, Series A, 118 (6): 1801–1815, doi:10.1016/j.jcta.2011.03.004, MR 2793611
2. Hare, Kevin G.; Mossinghoff, Michael J. (2019), "Most Reinhardt polygons are sporadic", Geometriae Dedicata, 198: 1–18, arXiv:1405.5233, doi:10.1007/s10711-018-0326-5, MR 3933447, S2CID 119629098
3. Datta, Basudeb (1997), "A discrete isoperimetric problem", Geometriae Dedicata, 64 (1): 55–68, doi:10.1023/A:1004997002327, MR 1432534, S2CID 118797507
4. Reinhardt, Karl (1922), "Extremale Polygone gegebenen Durchmessers", Jahresbericht der Deutschen Mathematiker-Vereinigung, 31: 251–270
5. Vincze, Stephen (1950), "On a geometrical extremum problem", Acta Universitatis Szegediensis, 12: 136–142, MR 0038087
6. Larman, D. G.; Tamvakis, N. K. (1984), "The decomposition of the $n$-sphere and the boundaries of plane convex domains", Convexity and graph theory (Jerusalem, 1981), North-Holland Math. Stud., vol. 87, Amsterdam: North-Holland, pp. 209–214, doi:10.1016/S0304-0208(08)72828-7, MR 0791034
7. Bezdek, A.; Fodor, F. (2000), "On convex polygons of maximal width", Archiv der Mathematik, 74 (1): 75–80, doi:10.1007/PL00000413, MR 1728365, S2CID 123299791
Polygons (List)
Triangles
• Acute
• Equilateral
• Ideal
• Isosceles
• Kepler
• Obtuse
• Right
Quadrilaterals
• Antiparallelogram
• Bicentric
• Crossed
• Cyclic
• Equidiagonal
• Ex-tangential
• Harmonic
• Isosceles trapezoid
• Kite
• Orthodiagonal
• Parallelogram
• Rectangle
• Right kite
• Right trapezoid
• Rhombus
• Square
• Tangential
• Tangential trapezoid
• Trapezoid
By number
of sides
1–10 sides
• Monogon (1)
• Digon (2)
• Triangle (3)
• Quadrilateral (4)
• Pentagon (5)
• Hexagon (6)
• Heptagon (7)
• Octagon (8)
• Nonagon (Enneagon, 9)
• Decagon (10)
11–20 sides
• Hendecagon (11)
• Dodecagon (12)
• Tridecagon (13)
• Tetradecagon (14)
• Pentadecagon (15)
• Hexadecagon (16)
• Heptadecagon (17)
• Octadecagon (18)
• Icosagon (20)
>20 sides
• Icositrigon (23)
• Icositetragon (24)
• Triacontagon (30)
• 257-gon
• Chiliagon (1000)
• Myriagon (10,000)
• 65537-gon
• Megagon (1,000,000)
• Apeirogon (∞)
Star polygons
• Pentagram
• Hexagram
• Heptagram
• Octagram
• Enneagram
• Decagram
• Hendecagram
• Dodecagram
Classes
• Concave
• Convex
• Cyclic
• Equiangular
• Equilateral
• Infinite skew
• Isogonal
• Isotoxal
• Magic
• Pseudotriangle
• Rectilinear
• Regular
• Reinhardt
• Simple
• Skew
• Star-shaped
• Tangential
• Weakly simple
|
Wikipedia
|
Reinhold Hoppe
Ernst Reinhold Eduard Hoppe (November 18, 1816 – May 7, 1900) was a German mathematician who worked as a professor at the University of Berlin.[1][2]
Education and career
Hoppe was a student of Johann August Grunert at the University of Greifswald,[3] graduating in 1842 and becoming an English and mathematics teacher. He completed his doctorate in 1850 in Halle and his habilitation in mathematics in 1853 in Berlin under Peter Gustav Lejeune Dirichlet. He also tried to obtain a habilitation in philosophy at the same time, but was denied until a later re-application in 1871. He worked at Berlin as a privatdozent, and then after 1870 as a professor, but with few students and little remuneration.[2]
When Grunert died in 1872, Hoppe took over the editorship of the mathematical journal founded by Grunert, the Archiv der Mathematik und Physik. Hoppe in turn continued as editor until his own death, in 1900.[3] In 1890, Hoppe was one of the 31 founding members of the German Mathematical Society.[4]
Contributions
Hoppe wrote over 250 scientific publications, including one of the first textbooks on differential geometry.[2]
His accomplishments in geometry include rediscovering the higher-dimensional regular polytopes (previously discovered by Ludwig Schläfli),[5] and coining the term "polytope".[6] In 1880 he published a closed-form expression for all triangles with consecutive integer sides and rational area, also known as almost-equilateral Heronian triangles.[7] He is sometimes credited with having proven Isaac Newton's conjecture on the kissing number problem, that at most twelve congruent balls can touch a central ball of the same radius, but his proof was incorrect, and a valid proof was not found until 1953.[8]
Hoppe published several works on a formula for the m-fold derivative of a composition of functions. The formula, now known as "Hoppe's formula", is a variation of Faà di Bruno's formula. Hoppe's publication of his formula in 1845 predates Faà di Bruno's in 1852, but is later than some other independent discoveries of equivalent formulas.[9]
In his work on special functions, Hoppe belonged to the Königsberg school of thought, led by Carl Jacobi.[10]
Awards and honors
He was elected to the Academy of Sciences Leopoldina in 1890.[1]
Books
• Theorie Der Independenten Darstellung Der Höhern Differentialquotienten (Leipzig: Joh. Ambr. Barth, 1845)
• Zulänglichkeit Des Empirismus In Der Philosophie (Berlin: Wilhelm Thome, 1852)
• Lehrbuch Der Differentialrechnung Und Reihentheorie Mit Strenger Begründung (Berlin: G. F. Otto Müller, 1865)
• Principien Der Flächentheorie (Leipzig: C. A. Koch, 1876)
• Tafeln Zur Dreissigstelligen Logarithmischen Rechnung (Leipzig: C. A. Koch, 1876)
• Lehrbuch Der Analytischen Geometrie (Leipzig: C. A. Koch, 1880)
References
1. Kieser, Dietrich Georg; Carus, Carl Gustav; Behn, Wilhelm Friedrich Georg; Knoblauch, Carl Hermann; Wangerin, Albert (1900), Leopoldina (in German), vol. 36, Halle, p. 132.
2. Biermann, Kurt-R. (1972), "Reinhold Hoppe", Neue Deutsche Biographie (in German), vol. 9, Berlin: Duncker & Humblot, pp. 614–615; (full text online)
3. Schreiber, Peter (1996), "Johann August Grunert and his Archiv der Mathematik und Physik as an integrative factor of everyone's mathematics in the middle of the nineteenth century", in Goldstein, Catherine; Gray, Jeremy; Ritter, Jim (eds.), Mathematical Europe: History, myth, identity, Paris: Ed. Maison des Sci. de l'Homme, pp. 431–444, MR 1770139. See in particular pp. 435–437.
4. Zielsetzung, German Mathematical Society, retrieved 2015-08-19.
5. Kolmogorov, Andrei N.; Yushkevich, Adolf-Andrei P. (2012), Mathematics of the 19th Century: Geometry, Analytic Function Theory, Birkhäuser, p. 81, ISBN 9783034891738.
6. Coxeter, H. S. M. (1973), Regular Polytopes, Dover, p. vi, ISBN 0-486-61480-8.
7. Gould, H. W. (February 1973), "A triangle with integral sides and area" (PDF), Fibonacci Quarterly, 11 (1): 27–39.
8. Zong, Chuanming (2008), "The kissing number, blocking number and covering number of a convex body", in Goodman, Jacob E.; Pach, János; Pollack, Richard (eds.), Surveys on Discrete and Computational Geometry: Twenty Years Later (AMS-IMS-SIAM Joint Summer Research Conference, June 18–22, 2006, Snowbird, Utah), Contemporary Mathematics, vol. 453, Providence, RI: American Mathematical Society, pp. 529–548, doi:10.1090/conm/453/08812, MR 2405694.
9. Johnson, Warren P. (2002), "The curious history of Faà di Bruno's formula" (PDF), American Mathematical Monthly, 109 (3): 217–234, doi:10.2307/2695352, JSTOR 2695352, MR 1903577.
10. Ernst, Thomas (2012), A Comprehensive Treatment of q-Calculus, Springer, p. 52, ISBN 9783034804318.
11. Despeaux, Sloan Evans (2002), "International mathematical contributions to British scientific journals, 1800–1900", in Parshall, Karen Hunger; Rice, Adrian C. (eds.), Mathematics unbound: the evolution of an international mathematical research community, 1800–1945 (Charlottesville, VA, 1999), History of Mathematics, vol. 23, Providence, RI: American Mathematical Society, pp. 61–87, MR 1907170. See in particular p. 71.
Authority control
International
• ISNI
• VIAF
National
• Germany
• Netherlands
Academics
• Leopoldina
• zbMATH
People
• Deutsche Biographie
|
Wikipedia
|
Reinhold Strassmann
Reinhold Strassmann (or Straßmann) (24 January 1893 in Berlin – late October 1944 in Auschwitz concentration camp) was a German mathematician who proved Strassmann's theorem. His Ph.D. advisor at University of Marburg was Kurt Hensel.
Reinhold Strassmann
Born1893
Berlin, German Empire
Died1944 (aged 50–51)
Auschwitz-Birkenau, German-occupied Poland
NationalityGerman
Alma materUniversity of Marburg
Known forStrassmann's theorem
Scientific career
FieldsMathematics
Doctoral advisorKurt Hensel
Born into a Jewish family,[1] Strassmann refused to leave Nazi Germany, and he was eventually detained and deported to Theresienstadt concentration camp in 1943. On October 23, 1944, he was deported from Theresienstadt to Auschwitz concentration camp, where he was murdered soon after.[2]
He was the son of the forensic pathologist Fritz Strassmann.
Selected publications
• Straßmann, Reinhold (1928), "Über den Wertevorrat von Potenzreihen im Gebiet der p-adischen Zahlen (On the codomain of power series in the area of p-adic numbers)", Journal für die reine und angewandte Mathematik (in German), 159: 13–28, doi:10.1515/crll.1928.159.13, ISSN 0075-4102, JFM 54.0162.06, S2CID 117410014
References
1. Burkhard Madea, History of Forensic Medicine, Lehmanns Media (2017), p. 148
2. Reinhold Strassmann's record Archived 2015-07-15 at the Wayback Machine in the Victims Database at holocaust.cz
• Reinhold Strassmann at the Mathematics Genealogy Project
• DMV short biographies
• Siegmund-Schultze, Reinhard (2009), Mathematicians fleeing from Nazi Germany, Princeton University Press, ISBN 978-0-691-14041-4, MR 2522825
• Strassmann, Wolfgang Paul (2008), The Strassmanns: science, politics, and migration in turbulent times, 1793-1993, Berghahn Books, ISBN 978-1-84545-416-6
Authority control
International
• ISNI
• VIAF
National
• Germany
Academics
• Mathematics Genealogy Project
• zbMATH
People
• Deutsche Biographie
|
Wikipedia
|
Reiss relation
In algebraic geometry, the Reiss relation, introduced by Reiss (1837), is a condition on the second-order elements of the points of a plane algebraic curve meeting a given line.
Statement
If C is a complex plane curve given by the zeros of a polynomial f(x,y) of two variables, and L is a line meeting C transversely and not meeting C at infinity, then
$\sum {\frac {f_{xx}f_{y}^{2}-2f_{xy}f_{x}f_{y}+f_{yy}f_{x}^{2}}{f_{y}^{3}}}=0$
where the sum is over the points of intersection of C and L, and fx, fxy and so on stand for partial derivatives of f (Griffiths & Harris 1994, p. 675). This can also be written as
$\sum {\frac {\kappa }{\sin(\theta )^{3}}}=0$
where κ is the curvature of the curve C and θ is the angle its tangent line makes with L, and the sum is again over the points of intersection of C and L (Griffiths & Harris 1994, p. 677).
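The identity can be checked symbolically on a small example. The following sympy sketch takes an arbitrary conic and the vertical line x = 0 (chosen so that fy is nonzero at the intersection points and the displayed partial-derivative form applies directly):

import sympy as sp

x, y = sp.symbols('x y')
f = (x - 2)**2 + y**2 - 5        # a conic C; the line L is x = 0
fx, fy = sp.diff(f, x), sp.diff(f, y)
fxx, fxy, fyy = sp.diff(fx, x), sp.diff(fx, y), sp.diff(fy, y)

term = (fxx * fy**2 - 2 * fxy * fx * fy + fyy * fx**2) / fy**3

# sum the expression over the two points where C meets the line x = 0
points = sp.solve([f, x], [x, y], dict=True)
print(sp.simplify(sum(term.subs(p) for p in points)))   # 0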
References
• Griffiths, Phillip; Harris, Joseph (1994), Principles of algebraic geometry, Wiley Classics Library, New York: John Wiley & Sons, ISBN 978-0-471-05059-9, MR 1288523
• Segre, Beniamino (1971), Some properties of differentiable varieties and transformations: with special reference to the analytic and algebraic cases, Ergebnisse der Mathematik und ihrer Grenzgebiete, vol. 13, Berlin, New York: Springer-Verlag, ISBN 978-3-540-05085-8, MR 0278222
• Akivis, M. A.; Goldberg, V. V.: Projective differential geometry of submanifolds. North-Holland Mathematical Library, 49. North-Holland Publishing Co., Amsterdam, 1993 (chapter 8).
|
Wikipedia
|
Rekha R. Thomas
Rekha Rachel Thomas is a mathematician and operations researcher. She works as a professor of mathematics at the University of Washington, and was the Robert R. and Elaine F. Phelps Professor there from 2008 until 2012. Her research interests include mathematical optimization and computational algebra.[1]
Thomas earned a PhD in operations research from Cornell University in 1994, supervised by Bernd Sturmfels; her dissertation concerned Gröbner bases and integer programming.[1][2] Prior to joining the University of Washington in 2000, she did postdoctoral studies at Yale University and the Zuse Institute Berlin, and held a faculty position at Texas A&M University beginning in 1995.[1][3]
Thomas is the author of the textbook Lectures in Geometric Combinatorics (Student Mathematical Library, 33, American Mathematical Society, 2006).[4] She was a plenary speaker at the 21st International Symposium on Mathematical Programming in 2012.[3]
In 2013 she became one of the inaugural fellows of the American Mathematical Society.[5]
References
1. "Rekha R. Thomas". University of Washington. Retrieved 2014-12-31.
2. Rekha Rachel Thomas at the Mathematics Genealogy Project
3. Speaker biography, ISMP 2012, retrieved 2014-12-31.
4. Reviews of Lectures in Geometric Combinatorics: Alexander Zvonkin (2007), MR2237292; Review, Miklós Bóna (April 26, 2007), MAA Reviews; Review, (June 1, 2011), European Mathematical Society.
5. List of Fellows of the American Mathematical Society, retrieved 2014-12-31.
External links
• Rekha R. Thomas publications indexed by Google Scholar
Authority control: Academics
• DBLP
• Google Scholar
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
|
Wikipedia
|
Push–relabel maximum flow algorithm
In mathematical optimization, the push–relabel algorithm (alternatively, preflow–push algorithm) is an algorithm for computing maximum flows in a flow network. The name "push–relabel" comes from the two basic operations used in the algorithm. Throughout its execution, the algorithm maintains a "preflow" and gradually converts it into a maximum flow by moving flow locally between neighboring nodes using push operations under the guidance of an admissible network maintained by relabel operations. In comparison, the Ford–Fulkerson algorithm performs global augmentations that send flow following paths from the source all the way to the sink.[1]
The push–relabel algorithm is considered one of the most efficient maximum flow algorithms. The generic algorithm has a strongly polynomial O(V 2E) time complexity, which is asymptotically more efficient than the O(VE 2) Edmonds–Karp algorithm.[2] Specific variants of the algorithms achieve even lower time complexities. The variant based on the highest label node selection rule has O(V 2√E) time complexity and is generally regarded as the benchmark for maximum flow algorithms.[3][4] Subcubic O(VElog(V 2/E)) time complexity can be achieved using dynamic trees, although in practice it is less efficient.[2]
The push–relabel algorithm has been extended to compute minimum cost flows.[5] The idea of distance labels has led to a more efficient augmenting path algorithm, which in turn can be incorporated back into the push–relabel algorithm to create a variant with even higher empirical performance.[4][6]
History
The concept of a preflow was originally designed by Alexander V. Karzanov and was published in 1974 in Soviet Mathematics Doklady 15. This pre-flow algorithm also used a push operation; however, it used distances in the auxiliary network to determine where to push the flow instead of a labeling system.[2][7]
The push-relabel algorithm was designed by Andrew V. Goldberg and Robert Tarjan. The algorithm was initially presented in November 1986 in STOC '86: Proceedings of the eighteenth annual ACM symposium on Theory of computing, and then officially in October 1988 as an article in the Journal of the ACM. Both papers detail a generic form of the algorithm terminating in O(V 2E) along with an O(V 3) sequential implementation, an O(VE log(V 2/E)) implementation using dynamic trees, and a parallel/distributed implementation.[2][8] As explained in [9], Goldberg and Tarjan introduced distance labels by incorporating them into the parallel maximum flow algorithm of Yossi Shiloach and Uzi Vishkin.[10]
Concepts
Definitions and notations
Main article: Flow network
Let:
• G = (V, E) be a network with capacity function c: V × V → $\mathbb {R} _{\infty }$,
• F = (G, c, s, t) a flow network, where s ∈ V and t ∈ V are chosen source and sink vertices respectively,
• f : V × V → $\mathbb {R} $ denote a pre-flow in F,
• xf : V → $\mathbb {R} $ denote the excess function with respect to the flow f, defined by xf (u) = Σv ∈ V f (v, u) − Σv ∈ V f (u, v),
• cf : V × V → $\mathbb {R} _{\infty }$ denote the residual capacity function with respect to the flow f, defined by cf (e) = c(e) − f (e),
• Ef ⊂ E being the edges where f < c,
and
• Gf (V, Ef ) denote the residual network of G with respect to the flow f.
The push–relabel algorithm uses a nonnegative integer valid labeling function which makes use of distance labels, or heights, on nodes to determine which arcs should be selected for the push operation. This labeling function is denoted by 𝓁 : V → $\mathbb {N} $. This function must satisfy the following conditions in order to be considered valid:
Valid labeling:
𝓁(u) ≤ 𝓁(v) + 1 for all (u, v) ∈ Ef
Source condition:
𝓁(s) = | V |
Sink conservation:
𝓁(t) = 0
In the algorithm, the label values of s and t are fixed. 𝓁(u) is a lower bound of the unweighted distance from u to t in Gf if t is reachable from u. If u has been disconnected from t, then 𝓁(u) − | V | is a lower bound of the unweighted distance from u to s. As a result, if a valid labeling function exists, there are no s-t paths in Gf because no such paths can be longer than | V | − 1.
An arc (u, v) ∈ Ef is called admissible if 𝓁(u) = 𝓁(v) + 1. The admissible network G̃f (V, Ẽf ) is composed of the set of arcs e ∈ Ef that are admissible. The admissible network is acyclic.
For a fixed flow f, a vertex v ∉ {s, t} is called active if it has positive excess with respect to f, i.e., xf (u) > 0.
Initialization
The algorithm starts by creating a residual graph, initializing the preflow values to zero and performing a set of saturating push operations on residual arcs exiting the source, (s, v) where v ∈ V \ {s}. Similarly, the labels are initialized such that the label at the source is the number of nodes in the graph, 𝓁(s) = | V |, and all other nodes are given a label of zero. Once the initialization is complete, the algorithm repeatedly performs either the push or relabel operations against active nodes until no applicable operation can be performed.
Push
The push operation applies on an admissible out-arc (u, v) of an active node u in Gf. It moves min{xf (u), cf (u,v)} units of flow from u to v.
push(u, v):
assert xf[u] > 0 and 𝓁[u] == 𝓁[v] + 1
Δ = min(xf[u], c[u][v] - f[u][v])
f[u][v] += Δ
f[v][u] -= Δ
xf[u] -= Δ
xf[v] += Δ
A push operation that causes f (u, v) to reach c(u, v) is called a saturating push since it uses up all the available capacity of the residual arc. Otherwise, all of the excess at the node is pushed across the residual arc. This is called an unsaturating or non-saturating push.
Relabel
The relabel operation applies on an active node u which is neither the source nor the sink without any admissible out-arcs in Gf. It modifies 𝓁(u) to be the minimum value such that an admissible out-arc is created. Note that this always increases 𝓁(u) and never creates a steep arc, which is an arc (u, v) such that cf (u, v) > 0, and 𝓁(u) > 𝓁(v) + 1.
relabel(u):
assert xf[u] > 0 and 𝓁[u] <= 𝓁[v] for all v such that cf[u][v] > 0
𝓁[u] = 1 + min(𝓁[v] for all v such that cf[u][v] > 0)
Effects of push and relabel
After a push or relabel operation, 𝓁 remains a valid labeling function with respect to f.
For a push operation on an admissible arc (u, v), it may add an arc (v, u) to Ef, where 𝓁(v) = 𝓁(u) − 1 ≤ 𝓁(u) + 1; it may also remove the arc (u, v) from Ef, where it effectively removes the constraint 𝓁(u) ≤ 𝓁(v) + 1.
To see that a relabel operation on node u preserves the validity of 𝓁(u), notice that this is trivially guaranteed by definition for the out-arcs of u in Gf. For the in-arcs of u in Gf, the increased 𝓁(u) can only satisfy the constraints less tightly, not violate them.
The generic push–relabel algorithm
The generic push–relabel algorithm is used as a proof of concept only and does not contain implementation details on how to select an active node for the push and relabel operations. This generic version of the algorithm will terminate in O(V 2E).
Since 𝓁(s) = | V |, 𝓁(t) = 0, and there are no paths longer than | V | − 1 in Gf, in order for 𝓁(s) to satisfy the valid labeling condition s must be disconnected from t. At initialisation, the algorithm fulfills this requirement by creating a pre-flow f that saturates all out-arcs of s, after which 𝓁(v) = 0 is trivially valid for all v ∈ V \ {s, t}. After initialisation, the algorithm repeatedly executes an applicable push or relabel operation until no such operations apply, at which point the pre-flow has been converted into a maximum flow.
generic-push-relabel(G, c, s, t):
create a pre-flow f that saturates all out-arcs of s
let 𝓁[s] = |V|
let 𝓁[v] = 0 for all v ∈ V \ {s}
while there is an applicable push or relabel operation do
execute the operation
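The pseudocode above translates directly into a compact runnable program. The following Python version is a minimal sketch of the generic algorithm, not an optimized variant: it assumes the network is given as a dense capacity matrix c (with c[u][v] = 0 where there is no arc) and discharges active nodes in an arbitrary order, so it matches the generic O(V 2E) analysis. The names push, relabel, excess and label mirror the pseudocode.

def max_flow(c, s, t):
    n = len(c)
    f = [[0] * n for _ in range(n)]   # preflow, stored antisymmetrically
    label = [0] * n                   # the labeling function 𝓁
    excess = [0] * n                  # the excess function xf

    def push(u, v):                   # push on the admissible arc (u, v)
        delta = min(excess[u], c[u][v] - f[u][v])
        f[u][v] += delta
        f[v][u] -= delta
        excess[u] -= delta
        excess[v] += delta

    def relabel(u):                   # raise 𝓁(u) just enough to create an admissible out-arc
        label[u] = 1 + min(label[v] for v in range(n) if c[u][v] - f[u][v] > 0)

    label[s] = n                      # 𝓁(s) = |V|; saturate all out-arcs of s
    for v in range(n):
        if c[s][v] > 0:
            f[s][v], f[v][s] = c[s][v], -c[s][v]
            excess[v] += c[s][v]
            excess[s] -= c[s][v]

    active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
    while active:                     # apply applicable operations until none remain
        u = active.pop()
        while excess[u] > 0:
            pushed = False
            for v in range(n):
                if excess[u] > 0 and c[u][v] - f[u][v] > 0 and label[u] == label[v] + 1:
                    if v not in (s, t) and excess[v] == 0:
                        active.append(v)          # v becomes active
                    push(u, v)
                    pushed = True
            if not pushed:
                relabel(u)            # no admissible out-arc, so u must be relabeled
    return sum(f[s][v] for v in range(n))

# Example: a four-node network whose maximum flow value is 5.
c = [[0, 3, 2, 0],
     [0, 0, 1, 3],
     [0, 0, 0, 2],
     [0, 0, 0, 0]]
print(max_flow(c, 0, 3))              # prints 5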
Correctness
The algorithm maintains the condition that 𝓁 is a valid labeling during its execution. This can be proven true by examining the effects of the push and relabel operations on the label function 𝓁. The relabel operation increases the label value by the associated minimum plus one which will always satisfy the 𝓁(u) ≤ 𝓁(v) + 1 constraint. The push operation can send flow from u to v if 𝓁(u) = 𝓁(v) + 1. This may add (v, u) to Gf and may delete (u, v) from Gf . The addition of (v, u) to Gf will not affect the valid labeling since 𝓁(v) = 𝓁(u) − 1. The deletion of (u, v) from Gf removes the corresponding constraint since the valid labeling property 𝓁(u) ≤ 𝓁(v) + 1 only applies to residual arcs in Gf .[8]
If a preflow f and a valid labeling 𝓁 for f exists then there is no augmenting path from s to t in the residual graph Gf . This can be proven by contradiction based on inequalities which arise in the labeling function when supposing that an augmenting path does exist. If the algorithm terminates, then all nodes in V \ {s, t} are not active. This means all v ∈ V \ {s, t} have no excess flow, and with no excess the preflow f obeys the flow conservation constraint and can be considered a normal flow. This flow is the maximum flow according to the max-flow min-cut theorem since there is no augmenting path from s to t.[8]
Therefore, the algorithm will return the maximum flow upon termination.
Time complexity
In order to bound the time complexity of the algorithm, we must analyze the number of push and relabel operations which occur within the main loop. The numbers of relabel, saturating push and nonsaturating push operations are analyzed separately.
In the algorithm, the relabel operation can be performed at most (2| V | − 1)(| V | − 2) < 2| V |² times. This is because the labeling 𝓁(u) value for any node u can never decrease, and the maximum label value is at most 2| V | − 1 for any node. This means the relabel operation could potentially be performed 2| V | − 1 times for each of the | V | − 2 nodes in V \ {s, t}. This results in a bound of O(V²) for the relabel operations.
Each saturating push on an admissible arc (u, v) removes the arc from Gf . For the arc to be reinserted into Gf for another saturating push, v must first be relabeled, followed by a push on the arc (v, u), then u must be relabeled. In the process, 𝓁(u) increases by at least two. Therefore, there are O(V) saturating pushes on (u, v), and the total number of saturating pushes is at most 2| V || E |. This results in a time bound of O(VE) for the saturating push operations.
Bounding the number of nonsaturating pushes can be achieved via a potential argument. We use the potential function Φ = Σ[u ∈ V ∧ xf (u) > 0] 𝓁(u) (i.e. Φ is the sum of the labels of all active nodes). It is clear that Φ is 0 initially and stays nonnegative throughout the execution of the algorithm. Both relabels and saturating pushes can increase Φ. However, the value of Φ must be equal to 0 at termination since there cannot be any remaining active nodes at the end of the algorithm's execution. This means that over the execution of the algorithm, the nonsaturating pushes must make up the difference of the relabel and saturating push operations in order for Φ to terminate with a value of 0. Over the whole run, relabel operations increase Φ by at most (2| V | − 1)(| V | − 2). A saturating push on (u, v) activates v if it was inactive before the push, increasing Φ by at most 2| V | − 1. Hence, the total contribution of all saturating push operations to Φ is at most (2| V | − 1)(2| V || E |). A nonsaturating push on (u, v) always deactivates u, but it can also activate v as in a saturating push. As a result, it decreases Φ by at least 𝓁(u) − 𝓁(v) = 1. Since relabels and saturating pushes increase Φ, the total number of nonsaturating pushes must make up the difference of (2| V | − 1)(| V | − 2) + (2| V | − 1)(2| V || E |) ≤ 4| V |²| E |. This results in a time bound of O(V²E) for the nonsaturating push operations.
In sum, the algorithm executes O(V²) relabels, O(VE) saturating pushes and O(V²E) nonsaturating pushes. Data structures can be designed to pick and execute an applicable operation in O(1) time. Therefore, the time complexity of the algorithm is O(V²E).[1][8]
Example
The following is a sample execution of the generic push-relabel algorithm, as defined above, on the following simple network flow graph diagram.
Initial flow network graph
Final maximum flow network graph
In the example, the h and e values denote the label 𝓁 and excess xf , respectively, of each node during the execution of the algorithm. Each residual graph in the example only contains the residual arcs with a capacity larger than zero, and may correspond to several iterations of the operation loop.
Initialise the residual graph by setting the preflow to values 0 and initialising the labeling.
Initial saturating push is performed across all preflow arcs out of the source, s.
Node a is relabeled in order to push its excess flow towards the sink, t.
The excess at a is then pushed to b then d in two subsequent saturating pushes; which still leaves a with some excess.
Once again, a is relabeled in order to push its excess along its last remaining positive residual (i.e. push the excess back to s).
The node a is then removed from the set of active nodes.
Relabel b and then push its excess to t and c.
Relabel c and then push its excess to d.
Relabel d and then push its excess to t.
This leaves the node b as the only remaining active node, but it cannot push its excess flow towards the sink.
Relabel b and then push its excess towards the source, s, via the node a.
Push the last bit of excess at a back to the source, s.
There are no remaining active nodes. The algorithm terminates and returns the maximum flow of the graph (as seen above).
Practical implementations
While the generic push–relabel algorithm has O(V²E) time complexity, efficient implementations achieve O(V³) or lower time complexity by enforcing appropriate rules in selecting applicable push and relabel operations. The empirical performance can be further improved by heuristics.
"Current-arc" data structure and discharge operation
The "current-arc" data structure is a mechanism for visiting the in- and out-neighbors of a node in the flow network in a static circular order. If a singly linked list of neighbors is created for a node, the data structure can be as simple as a pointer into the list that steps through the list and rewinds to the head when it runs off the end.
Based on the "current-arc" data structure, the discharge operation can be defined. A discharge operation applies on an active node and repeatedly pushes flow from the node until it becomes inactive, relabeling it as necessary to create admissible arcs in the process.
discharge(u):
while xf[u] > 0 do
if current-arc[u] has run off the end of neighbors[u] then
relabel(u)
rewind current-arc[u]
else
let (u, v) = current-arc[u]
if (u, v) is admissible then
push(u, v)
let current-arc[u] point to the next neighbor of u
Finding the next admissible edge to push on has $O(1)$ amortized complexity. The current-arc pointer only moves to the next neighbor when the edge to the current neighbor is saturated or non-admissible, and neither of these two properties can change until the active node $u$ is relabeled. Therefore, when the pointer runs off the end, there are no admissible unsaturated edges and we have to relabel the active node $u$; the $O(V)$ pointer moves are thus paid for by the $O(V)$ relabel operation.[8]
Active node selection rules
Definition of the discharge operation reduces the push–relabel algorithm to repeatedly selecting an active node to discharge. Depending on the selection rule, the algorithm exhibits different time complexities. For the sake of brevity, we ignore s and t when referring to the nodes in the following discussion.
FIFO selection rule
The FIFO push–relabel algorithm[2] organizes the active nodes into a queue. The initial active nodes can be inserted in arbitrary order. The algorithm always removes the node at the front of the queue for discharging. Whenever an inactive node becomes active, it is appended to the back of the queue.
The algorithm has O(V³) time complexity.
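A sketch of just this selection discipline follows, assuming a discharge routine that reports which nodes it newly activated; the helper names are illustrative, not taken from the references:

from collections import deque

def fifo_loop(initially_active, discharge):
    # discharge(u) is assumed to fully discharge u and return the
    # set of nodes that became active during that discharge
    queue = deque(initially_active)
    queued = set(initially_active)
    while queue:
        u = queue.popleft()
        queued.discard(u)
        for v in discharge(u):
            if v not in queued:   # newly active nodes go to the back
                queue.append(v)
                queued.add(v)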
Relabel-to-front selection rule
The relabel-to-front push–relabel algorithm[1] organizes all nodes into a linked list and maintains the invariant that the list is topologically sorted with respect to the admissible network. The algorithm scans the list from front to back and performs a discharge operation on the current node if it is active. If the node is relabeled, it is moved to the front of the list, and the scan is restarted from the front.
The algorithm also has O(V³) time complexity.
Highest label selection rule
The highest-label push–relabel algorithm[11] organizes all nodes into buckets indexed by their labels. The algorithm always selects an active node with the largest label to discharge.
The algorithm has O(V²√E) time complexity. If the lowest-label selection rule is used instead, the time complexity becomes O(V²E).[3]
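A bucket-based selection sketch, assuming buckets[k] holds the active nodes currently labeled k (at most 2|V| − 1 buckets); the names are illustrative:

def pop_highest_label(buckets, highest):
    # scan downward from the highest occupied bucket; returns
    # (node, new highest index), or (None, -1) if no active node remains
    while highest >= 0 and not buckets[highest]:
        highest -= 1
    if highest < 0:
        return None, -1
    return buckets[highest].pop(), highest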
Implementation techniques
Although in the description of the generic push–relabel algorithm above, 𝓁(u) is set to zero for each node u other than s and t at the beginning, it is preferable to perform a backward breadth-first search from t to compute exact labels.[2]
The algorithm is typically separated into two phases. Phase one computes a maximum pre-flow by discharging only active nodes whose labels are below n. Phase two converts the maximum preflow into a maximum flow by returning excess flow that cannot reach t to s. It can be shown that phase two has O(VE) time complexity regardless of the order of push and relabel operations and is therefore dominated by phase one. Alternatively, it can be implemented using flow decomposition.[9]
Heuristics are crucial to improving the empirical performance of the algorithm.[12] Two commonly used heuristics are the gap heuristic and the global relabeling heuristic.[2][13] The gap heuristic detects gaps in the labeling function. If there is a label 0 < 𝓁' < | V | for which there is no node u such that 𝓁(u) = 𝓁', then any node u with 𝓁' < 𝓁(u) < | V | has been disconnected from t and can be relabeled to (| V | + 1) immediately. The global relabeling heuristic periodically performs backward breadth-first search from t in Gf to compute the exact labels of the nodes. Both heuristics skip unhelpful relabel operations, which are a bottleneck of the algorithm and contribute to the ineffectiveness of dynamic trees.[4]
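A sketch of the gap heuristic, assuming a count of nodes per label value is maintained alongside the labels (the helper names are illustrative):

def apply_gap_heuristic(labels, count, gap, n, s, t):
    # count[k] is the number of nodes currently labeled k; count must
    # have at least n + 2 entries. If no node carries label `gap`
    # (0 < gap < n), every node labeled strictly between gap and n is
    # disconnected from t and can be lifted to n + 1 immediately.
    if 0 < gap < n and count[gap] == 0:
        for u in range(n):
            if u != s and u != t and gap < labels[u] < n:
                count[labels[u]] -= 1
                labels[u] = n + 1
                count[n + 1] += 1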
Sample implementations
C implementation
#include <stdlib.h>
#include <stdio.h>
#define NODES 6
#define MIN(X,Y) ((X) < (Y) ? (X) : (Y))
#define INFINITE 10000000
void push(const int * const * C, int ** F, int *excess, int u, int v) {
int send = MIN(excess[u], C[u][v] - F[u][v]);
F[u][v] += send;
F[v][u] -= send;
excess[u] -= send;
excess[v] += send;
}
void relabel(const int * const * C, const int * const * F, int *height, int u) {
    int v;
    int min_height = INFINITE;
    /* Set height[u] to one more than the lowest height reachable through
       a residual arc; discharge() only calls this when such an arc exists. */
    for (v = 0; v < NODES; v++) {
        if (C[u][v] - F[u][v] > 0) {
            min_height = MIN(min_height, height[v]);
        }
    }
    height[u] = min_height + 1;
}
void discharge(const int * const * C, int ** F, int *excess, int *height, int *seen, int u) {
while (excess[u] > 0) {
if (seen[u] < NODES) {
int v = seen[u];
if ((C[u][v] - F[u][v] > 0) && (height[u] > height[v])) {
push(C, F, excess, u, v);
} else {
seen[u] += 1;
}
} else {
relabel(C, F, height, u);
seen[u] = 0;
}
}
}
void moveToFront(int i, int *A) {
int temp = A[i];
int n;
for (n = i; n > 0; n--) {
A[n] = A[n-1];
}
A[0] = temp;
}
int pushRelabel(const int * const * C, int ** F, int source, int sink) {
int *excess, *height, *list, *seen, i, p;
excess = (int *) calloc(NODES, sizeof(int));
height = (int *) calloc(NODES, sizeof(int));
seen = (int *) calloc(NODES, sizeof(int));
list = (int *) calloc((NODES-2), sizeof(int));
for (i = 0, p = 0; i < NODES; i++){
if ((i != source) && (i != sink)) {
list[p] = i;
p++;
}
}
height[source] = NODES;
excess[source] = INFINITE;
for (i = 0; i < NODES; i++)
push(C, F, excess, source, i);
p = 0;
while (p < NODES - 2) {
int u = list[p];
int old_height = height[u];
discharge(C, F, excess, height, seen, u);
if (height[u] > old_height) {
moveToFront(p, list);
p = 0;
} else {
p += 1;
}
}
int maxflow = 0;
for (i = 0; i < NODES; i++)
maxflow += F[source][i];
free(list);
free(seen);
free(height);
free(excess);
return maxflow;
}
void printMatrix(const int * const * M) {
int i, j;
for (i = 0; i < NODES; i++) {
for (j = 0; j < NODES; j++)
printf("%d\t",M[i][j]);
printf("\n");
}
}
int main(void) {
int **flow, **capacities, i;
flow = (int **) calloc(NODES, sizeof(int*));
capacities = (int **) calloc(NODES, sizeof(int*));
for (i = 0; i < NODES; i++) {
flow[i] = (int *) calloc(NODES, sizeof(int));
capacities[i] = (int *) calloc(NODES, sizeof(int));
}
// Sample graph
capacities[0][1] = 2;
capacities[0][2] = 9;
capacities[1][2] = 1;
capacities[1][3] = 0;
capacities[1][4] = 0;
capacities[2][4] = 7;
capacities[3][5] = 7;
capacities[4][5] = 4;
printf("Capacity:\n");
printMatrix(capacities);
printf("Max Flow:\n%d\n", pushRelabel(capacities, flow, 0, 5));
printf("Flows:\n");
printMatrix(flow);
return 0;
}
Python implementation
def relabel_to_front(C, source: int, sink: int) -> int:
n = len(C) # C is the capacity matrix
F = [[0] * n for _ in range(n)]
# residual capacity from u to v is C[u][v] - F[u][v]
height = [0] * n # height of node
excess = [0] * n # flow into node minus flow from node
seen = [0] * n # neighbours seen since last relabel
# node "queue"
nodelist = [i for i in range(n) if i != source and i != sink]
def push(u, v):
send = min(excess[u], C[u][v] - F[u][v])
F[u][v] += send
F[v][u] -= send
excess[u] -= send
excess[v] += send
def relabel(u):
# Find smallest new height making a push possible,
# if such a push is possible at all.
        min_height = float('inf')
        for v in range(n):
if C[u][v] - F[u][v] > 0:
min_height = min(min_height, height[v])
height[u] = min_height + 1
def discharge(u):
while excess[u] > 0:
if seen[u] < n: # check next neighbour
v = seen[u]
if C[u][v] - F[u][v] > 0 and height[u] > height[v]:
push(u, v)
else:
seen[u] += 1
else: # we have checked all neighbours. must relabel
relabel(u)
seen[u] = 0
height[source] = n # longest path from source to sink is less than n long
    excess[source] = float('inf')  # send as much flow as possible to neighbours of source
for v in range(n):
push(source, v)
p = 0
while p < len(nodelist):
u = nodelist[p]
old_height = height[u]
discharge(u)
if height[u] > old_height:
nodelist.insert(0, nodelist.pop(p)) # move to front of list
p = 0 # start from front of list
else:
p += 1
return sum(F[source])
References
1. Cormen, T. H.; Leiserson, C. E.; Rivest, R. L.; Stein, C. (2001). "§26 Maximum flow". Introduction to Algorithms (2nd ed.). The MIT Press. pp. 643–698. ISBN 978-0262032933.
2. Goldberg, A V; Tarjan, R E (1986). "A new approach to the maximum flow problem". Proceedings of the eighteenth annual ACM symposium on Theory of computing – STOC '86. p. 136. doi:10.1145/12130.12144. ISBN 978-0897911931. S2CID 14492800.
3. Ahuja, Ravindra K.; Kodialam, Murali; Mishra, Ajay K.; Orlin, James B. (1997). "Computational investigations of maximum flow algorithms". European Journal of Operational Research. 97 (3): 509. CiteSeerX 10.1.1.297.2945. doi:10.1016/S0377-2217(96)00269-X.
4. Goldberg, Andrew V. (2008). "The Partial Augment–Relabel Algorithm for the Maximum Flow Problem". Algorithms – ESA 2008. Lecture Notes in Computer Science. Vol. 5193. pp. 466–477. CiteSeerX 10.1.1.150.5103. doi:10.1007/978-3-540-87744-8_39. ISBN 978-3-540-87743-1.
5. Goldberg, Andrew V (1997). "An Efficient Implementation of a Scaling Minimum-Cost Flow Algorithm". Journal of Algorithms. 22: 1–29. doi:10.1006/jagm.1995.0805.
6. Ahuja, Ravindra K.; Orlin, James B. (1991). "Distance-directed augmenting path algorithms for maximum flow and parametric maximum flow problems". Naval Research Logistics. 38 (3): 413. CiteSeerX 10.1.1.297.5698. doi:10.1002/1520-6750(199106)38:3<413::AID-NAV3220380310>3.0.CO;2-J.
7. Goldberg, Andrew V.; Tarjan, Robert E. (2014). "Efficient maximum flow algorithms". Communications of the ACM. 57 (8): 82. doi:10.1145/2628036. S2CID 17014879.
8. Goldberg, Andrew V.; Tarjan, Robert E. (1988). "A new approach to the maximum-flow problem". Journal of the ACM. 35 (4): 921. doi:10.1145/48014.61051. S2CID 52152408.
9. Ahuja, R. K.; Magnanti, T. L.; Orlin, J. B. (1993). Network Flows: Theory, Algorithms, and Applications (1st ed.). Prentice Hall. ISBN 978-0136175490.
10. Shiloach, Yossi; Vishkin, Uzi (1982). "An O(n2log n) parallel max-flow algorithm". Journal of Algorithms. 3 (2): 128–146. doi:10.1016/0196-6774(82)90013-X.
11. Cheriyan, J.; Maheshwari, S. N. (1988). "Analysis of preflow push algorithms for maximum network flow". Foundations of Software Technology and Theoretical Computer Science. Lecture Notes in Computer Science. Vol. 338. p. 30. doi:10.1007/3-540-50517-2_69. ISBN 978-3-540-50517-4.
12. Cherkassky, Boris V.; Goldberg, Andrew V. (1995). "On implementing push-relabel method for the maximum flow problem". Integer Programming and Combinatorial Optimization. Lecture Notes in Computer Science. Vol. 920. p. 157. CiteSeerX 10.1.1.150.3609. doi:10.1007/3-540-59408-6_49. ISBN 978-3-540-59408-6.
13. Derigs, U.; Meier, W. (1989). "Implementing Goldberg's max-flow-algorithm ? A computational investigation". Zeitschrift für Operations Research. 33 (6): 383. doi:10.1007/BF01415937. S2CID 39730584.
Related rates
In differential calculus, related rates problems involve finding a rate at which a quantity changes by relating that quantity to other quantities whose rates of change are known. The rate of change is usually with respect to time. Because science and engineering often relate quantities to each other, the methods of related rates have broad applications in these fields. Differentiation with respect to time or one of the other variables requires application of the chain rule,[1] since most problems involve several variables.
Fundamentally, if a function $F$ is defined such that $F=f(x)$, then the derivative of the function $F$ can be taken with respect to another variable. We assume $x$ is a function of $t$, i.e. $x=g(t)$. Then $F=f(g(t))$, so
$F'(t)=f'(g(t))\cdot g'(t)$
Written in Leibniz notation, this is:
${\frac {dF}{dt}}={\frac {df}{dx}}\cdot {\frac {dx}{dt}}.$
Thus, if it is known how $x$ changes with respect to $t$, then we can determine how $F$ changes with respect to $t$ and vice versa. We can extend this application of the chain rule with the sum, difference, product and quotient rules of calculus, etc.
For example, if $F(x)=G(y)+H(z)$ then
${\frac {dF}{dx}}\cdot {\frac {dx}{dt}}={\frac {dG}{dy}}\cdot {\frac {dy}{dt}}+{\frac {dH}{dz}}\cdot {\frac {dz}{dt}}.$
Procedure
The most common way to approach related rates problems is the following:[2]
1. Identify the known variables, including rates of change and the rate of change that is to be found. (Drawing a picture or representation of the problem can help to keep everything in order)
2. Construct an equation relating the quantities whose rates of change are known to the quantity whose rate of change is to be found.
3. Differentiate both sides of the equation with respect to time (or other rate of change). Often, the chain rule is employed at this step.
4. Substitute the known rates of change and the known quantities into the equation.
5. Solve for the wanted rate of change.
Errors in this procedure are often caused by plugging in the known values for the variables before (rather than after) finding the derivative with respect to time. Doing so yields an incorrect result: variables whose values are substituted before differentiation become constants, so differentiating the equation puts zeroes in place of all such variables.
Example
A 10-meter ladder is leaning against the wall of a building, and the base of the ladder is sliding away from the building at a rate of 3 meters per second. How fast is the top of the ladder sliding down the wall when the base of the ladder is 6 meters from the wall?
The distance between the base of the ladder and the wall, x, and the height of the ladder on the wall, y, represent the sides of a right triangle with the ladder as the hypotenuse, h. The objective is to find dy/dt, the rate of change of y with respect to time, t, when h, x and dx/dt, the rate of change of x, are known.
Step 1:
• $x=6$
• $h=10$
• ${\frac {dx}{dt}}=3$
• ${\frac {dh}{dt}}=0$
• ${\frac {dy}{dt}}={\text{?}}$
Step 2: From the Pythagorean theorem, the equation
$x^{2}+y^{2}=h^{2},$
describes the relationship between x, y and h, for a right triangle. Differentiating both sides of this equation with respect to time, t, yields
${\frac {d}{dt}}\left(x^{2}+y^{2}\right)={\frac {d}{dt}}\left(h^{2}\right)$
Step 3: Carrying out the differentiation and solving for the wanted rate of change, dy/dt, gives us
${\frac {d}{dt}}\left(x^{2}\right)+{\frac {d}{dt}}\left(y^{2}\right)={\frac {d}{dt}}\left(h^{2}\right)$
$(2x){\frac {dx}{dt}}+(2y){\frac {dy}{dt}}=(2h){\frac {dh}{dt}}$
$x{\frac {dx}{dt}}+y{\frac {dy}{dt}}=h{\frac {dh}{dt}}$
${\frac {dy}{dt}}={\frac {h{\frac {dh}{dt}}-x{\frac {dx}{dt}}}{y}}.$
Step 4 & 5: Using the variables from step 1 gives us:
${\frac {dy}{dt}}={\frac {h{\frac {dh}{dt}}-x{\frac {dx}{dt}}}{y}}.$
${\frac {dy}{dt}}={\frac {10\times 0-6\times 3}{y}}=-{\frac {18}{y}}.$
Solving for y using the Pythagorean Theorem gives:
$x^{2}+y^{2}=h^{2}$
$6^{2}+y^{2}=10^{2}$
$y=8$
Plugging y = 8 into the equation gives:
$-{\frac {18}{y}}=-{\frac {18}{8}}=-{\frac {9}{4}}$
Negative values are conventionally taken to represent the downward direction; thus, the top of the ladder is sliding down the wall at a rate of 9/4 meters per second.
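As a quick numeric check of steps 3 through 5, the derived formula can be re-evaluated in a few lines of Python (a sketch for verification only, not part of the original exposition):

import math

h, x, dx_dt, dh_dt = 10.0, 6.0, 3.0, 0.0
y = math.sqrt(h**2 - x**2)           # y = 8, from the Pythagorean theorem
dy_dt = (h * dh_dt - x * dx_dt) / y  # the formula derived in step 3
print(y, dy_dt)                      # 8.0 -2.25, i.e. dy/dt = -9/4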
Physics examples
Because one physical quantity often depends on another, which in turn depends on others such as time, related-rates methods have broad applications in physics. This section presents examples of related rates in kinematics and in electromagnetic induction.
Relative kinematics of two vehicles
For example, one can consider the kinematics problem where one vehicle is heading West toward an intersection at 80 miles per hour while another is heading North away from the intersection at 60 miles per hour. One can ask whether the vehicles are getting closer or further apart and at what rate at the moment when the North bound vehicle is 3 miles North of the intersection and the West bound vehicle is 4 miles East of the intersection.
Big idea: use the chain rule to compute the rate of change of the distance between the two vehicles.
Plan:
1. Choose a coordinate system.
2. Identify the variables.
3. Draw a picture.
4. Express c in terms of x and y via the Pythagorean theorem.
5. Express dc/dt using the chain rule in terms of dx/dt and dy/dt.
6. Substitute in x, y, dx/dt, dy/dt.
7. Simplify.
Choose coordinate system: Let the y-axis point North and the x-axis point East.
Identify variables: Define y(t) to be the distance of the vehicle heading North from the origin and x(t) to be the distance of the vehicle heading West from the origin.
Express c in terms of x and y via the Pythagorean theorem:
$c=\left(x^{2}+y^{2}\right)^{1/2}$
Express dc/dt using chain rule in terms of dx/dt and dy/dt:
${\frac {dc}{dt}}={\frac {d}{dt}}\left(x^{2}+y^{2}\right)^{1/2}$ (apply the derivative operator to the entire function)
$={\frac {1}{2}}\left(x^{2}+y^{2}\right)^{-1/2}{\frac {d}{dt}}\left(x^{2}+y^{2}\right)$ (the square root is the outside function; the sum of squares is the inside function)
$={\frac {1}{2}}\left(x^{2}+y^{2}\right)^{-1/2}\left[{\frac {d}{dt}}(x^{2})+{\frac {d}{dt}}(y^{2})\right]$ (distribute the differentiation operator)
$={\frac {1}{2}}\left(x^{2}+y^{2}\right)^{-1/2}\left[2x{\frac {dx}{dt}}+2y{\frac {dy}{dt}}\right]$ (apply the chain rule to x(t) and y(t))
$={\frac {x{\frac {dx}{dt}}+y{\frac {dy}{dt}}}{\sqrt {x^{2}+y^{2}}}}$ (simplify)
Substitute in x = 4 mi, y = 3 mi, dx/dt = −80 mi/hr, dy/dt = 60 mi/hr and simplify
${\begin{aligned}{\frac {dc}{dt}}&={\frac {4{\text{ mi}}\cdot (-80{\text{ mi}}/{\text{hr}})+3{\text{ mi}}\cdot (60{\text{ mi}}/{\text{hr}})}{\sqrt {(4{\text{ mi}})^{2}+(3{\text{ mi}})^{2}}}}\\&={\frac {-320{\text{ mi}}^{2}/{\text{hr}}+180{\text{ mi}}^{2}/{\text{hr}}}{5{\text{ mi}}}}\\&={\frac {-140{\text{ mi}}^{2}/{\text{hr}}}{5{\text{ mi}}}}\\&=-28{\text{ mi}}/{\text{hr}}\end{aligned}}$
Consequently, the two vehicles are getting closer together at a rate of 28 mi/hr.
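The same arithmetic can be checked with a few lines of Python (a sketch; the variable names mirror the derivation above):

import math

x, y = 4.0, 3.0               # miles east / north of the intersection
dx_dt, dy_dt = -80.0, 60.0    # mi/hr; the westbound car closes on the intersection
dc_dt = (x * dx_dt + y * dy_dt) / math.hypot(x, y)
print(dc_dt)                  # -28.0: the distance shrinks at 28 mi/hr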
Electromagnetic induction of conducting loop spinning in magnetic field
The magnetic flux through a loop of area A whose normal is at an angle θ to a magnetic field of strength B is
$\Phi _{B}=BA\cos(\theta ).$
Faraday's law of electromagnetic induction states that the induced electromotive force ${\mathcal {E}}$ is the negative rate of change of magnetic flux $\Phi _{B}$ through a conducting loop.
${\mathcal {E}}=-{\frac {d\Phi _{B}}{dt}},$
If the loop area A and magnetic field B are held constant, but the loop is rotated so that the angle θ is a known function of time, the rate of change of θ can be related to the rate of change of $\Phi _{B}$ (and therefore the electromotive force) by taking the time derivative of the flux relation
${\mathcal {E}}=-{\frac {d\Phi _{B}}{dt}}=BA\sin \theta {\frac {d\theta }{dt}}$
If for example, the loop is rotating at a constant angular velocity ω, so that θ = ωt, then
${\mathcal {E}}=\omega BA\sin \omega t$
References
1. "Related Rates". Whitman College. Retrieved 2013-10-27.
2. Kreider, Donald. "Related Rates". Dartmouth. Retrieved 2013-10-27.
Relation (mathematics)
In mathematics, a binary relation on a set may, or may not, hold between two given set members. For example, "is less than" is a relation on the set of natural numbers; it holds e.g. between 1 and 3 (denoted as 1<3), and likewise between 3 and 4 (denoted as 3<4), but neither between 3 and 1 nor between 4 and 4. As another example, "is sister of" is a relation on the set of all people; it holds e.g. between Marie Curie and Bronisława Dłuska, and likewise vice versa. Set members may not be in relation "to a certain degree": either they are in relation or they are not.
This article is about basic notions of (homogeneous binary) relations in mathematics. For a more in-depth treatment, see Binary relation. For relations between more than two elements, see Finitary relation.
Formally, a relation R over a set X can be seen as a set of ordered pairs (x, y) of members of X.[1] The relation R holds between x and y if (x, y) is a member of R. For example, the relation "is less than" on the natural numbers is an infinite set Rless of pairs of natural numbers that contains both (1,3) and (3,4), but neither (3,1) nor (4,4). The relation "is a nontrivial divisor of" on the set of one-digit natural numbers is sufficiently small to be shown here: Rdiv = { (2,4), (2,6), (2,8), (3,6), (3,9), (4,8) }; for example 2 is a nontrivial divisor of 8, but not vice versa, hence (2,8) ∈ Rdiv, but (8,2) ∉ Rdiv.
If R is a relation that holds for x and y one often writes xRy. For most common relations in mathematics, special symbols are introduced, like "<" for "is less than", and "|" for "is a nontrivial divisor of", and, most popular "=" for "is equal to". For example, "1<3", "1 is less than 3", and "(1,3) ∈ Rless" mean all the same; some authors also write "(1,3) ∈ (<)".
Various properties of relations are investigated. A relation R is reflexive if xRx holds for all x, and irreflexive if xRx holds for no x. It is symmetric if xRy always implies yRx, and asymmetric if xRy implies that yRx is impossible. It is transitive if xRy and yRz always implies xRz. For example, "is less than" is irreflexive, asymmetric, and transitive, but neither reflexive nor symmetric, "is sister of" is transitive, but neither reflexive (e.g. Pierre Curie is not a sister of himself), nor symmetric, nor asymmetric, while being irreflexive or not may be a matter of definition (is every woman a sister of herself?), "is ancestor of" is transitive, while "is parent of" is not. Mathematical theorems are known about combinations of relation properties, such as "A transitive relation is irreflexive if, and only if, it is asymmetric".
Of particular importance are relations that satisfy certain combinations of properties. A partial order is a relation that is reflexive, antisymmetric, and transitive,[2] an equivalence relation is a relation that is reflexive, symmetric, and transitive,[3] a function is a relation that is right-unique and left-total (see below).[4][5]
Since relations are sets, they can be manipulated using set operations, including union, intersection, and complementation, and satisfying the laws of an algebra of sets. Beyond that, operations like the converse of a relation and the composition of relations are available, satisfying the laws of a calculus of relations.[6][7][8]
The above concept of relation[note 1] has been generalized to admit relations between members of two different sets (heterogeneous relation, like "lies on" between the set of all points and that of all lines in geometry), relations between three or more sets (Finitary relation, like "person x lives in town y at time z"), and relations between classes[note 2] (like "is an element of" on the class of all sets, see Binary relation § Sets versus classes).
Definition
Given a set X, a relation R over X is a set of ordered pairs of elements from X, formally: R ⊆ {(x,y): x,y ∈ X}.[1][9]
The statement (x, y) ∈ R reads "x is R-related to y" and is written in infix notation as xRy.[6][7] The order of the elements is important; if x ≠ y then yRx can be true or false independently of xRy. For example, 3 divides 9, but 9 does not divide 3.
Representation of relations
        y: 1   2   3   4   6   12
x = 1          ✓   ✓   ✓   ✓   ✓
x = 2                  ✓   ✓   ✓
x = 3                      ✓   ✓
x = 4                          ✓
x = 6                          ✓
x = 12
Representation as boolean matrix (a ✓ in row x, column y means x Rdiv y)
A relation on a finite set may be represented as:
• Hasse diagram
• directed graph
• boolean matrix
• 2D-plot
For example, on the set of all divisors of 12, define the relation Rdiv by
x Rdiv y if x is a divisor of y and x≠y.
Formally, X = { 1, 2, 3, 4, 6, 12 } and Rdiv = { (1,2), (1,3), (1,4), (1,6), (1,12), (2,4), (2,6), (2,12), (3,6), (3,12), (4,12), (6,12) }. The representation of Rdiv as a boolean matrix is shown in the left table; the representation both as a Hasse diagram and as a directed graph is shown in the right picture.
The following are equivalent:
• x Rdiv y is true.
• (x,y) ∈ Rdiv.
• A path from x to y exists in the Hasse diagram representing Rdiv.
• An edge from x to y exists in the directed graph representing Rdiv.
• In the boolean matrix representing Rdiv, the element in row x, column y is marked ✓.
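Since Rdiv is finite, it can also be built and queried directly as a set of pairs; a small Python sketch following the definition above:

X = {1, 2, 3, 4, 6, 12}
Rdiv = {(x, y) for x in X for y in X if x != y and y % x == 0}
assert (3, 6) in Rdiv and (6, 3) not in Rdiv
print(sorted(Rdiv))  # exactly the twelve pairs listed above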
Properties of relations
Some important properties that a relation R over a set X may have are:
Reflexive
for all x ∈ X, xRx. For example, ≥ is a reflexive relation but > is not.
Irreflexive (or strict)
for all x ∈ X, not xRx. For example, > is an irreflexive relation, but ≥ is not.
The previous 2 alternatives are not exhaustive; e.g., the red binary relation y = x² given in the diagram below is neither irreflexive, nor reflexive, since it contains the pair (0, 0), but not (2, 2), respectively.
Symmetric
for all x, y ∈ X, if xRy then yRx. For example, "is a blood relative of" is a symmetric relation, because x is a blood relative of y if and only if y is a blood relative of x.
Antisymmetric
for all x, y ∈ X, if xRy and yRx then x = y. For example, ≥ is an antisymmetric relation; so is >, but vacuously (the condition in the definition is always false).[10]
Asymmetric
for all x, y ∈ X, if xRy then not yRx. A relation is asymmetric if and only if it is both antisymmetric and irreflexive.[11] For example, > is an asymmetric relation, but ≥ is not.
Again, the previous 3 alternatives are far from being exhaustive; as an example over the natural numbers, the relation xRy defined by x > 2 is neither symmetric (e.g. 5R1, but not 1R5) nor antisymmetric (e.g. 6R4, but also 4R6), let alone asymmetric.
Transitive
for all x, y, z ∈ X, if xRy and yRz then xRz. A transitive relation is irreflexive if and only if it is asymmetric.[12] For example, "is ancestor of" is a transitive relation, while "is parent of" is not.
Connected
for all x, y ∈ X, if x ≠ y then xRy or yRx. For example, on the natural numbers, < is connected, while "is a divisor of" is not (e.g. neither 5R7 nor 7R5).
Strongly connected
for all x, y ∈ X, xRy or yRx. For example, on the natural numbers, ≤ is strongly connected, but < is not. A relation is strongly connected if, and only if, it is connected and reflexive.
Uniqueness properties:
Injective[note 3] (also called left-unique)[13]
For all x, y, z ∈ X, if xRy and zRy then x = z. For example, the green and blue binary relations in the diagram are injective, but the red one is not (as it relates both −1 and 1 to 1), nor is the black one (as it relates both −1 and 1 to 0).
Functional[note 3] (also called right-unique,[13] right-definite[14] or univalent)[8]
For all x, y, z ∈ X, if xRy and xRz then y = z. Such a binary relation is called a partial function. For example, the red and green binary relations in the diagram are functional, but the blue one is not (as it relates 1 to both −1 and 1), nor is the black one (as it relates 0 to both −1 and 1).
Totality properties:
Serial[note 3] (also called total or left-total)
For all x ∈ X, there exists some y ∈ X such that xRy. Such a relation is called a multivalued function. For example, the red and green binary relations in the diagram are total, but the blue one is not (as it does not relate −1 to any real number), nor is the black one (as it does not relate 2 to any real number). As another example, > is a serial relation over the integers. But it is not a serial relation over the positive integers, because there is no y in the positive integers such that 1 > y.[15] However, < is a serial relation over the positive integers, the rational numbers and the real numbers. Every reflexive relation is serial: for a given x, choose y = x.
Surjective[note 3] (also called right-total[13] or onto)
For all y in X, there exists an x in X such that xRy. For example, the green and blue binary relations in the diagram are surjective, but the red one is not (as it does not relate any real number to −1), nor is the black one (as it does not relate any real number to 2).
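For finite relations, these properties can be tested mechanically; a brief Python sketch (the helper names are illustrative), using Rdiv from above:

def is_reflexive(R, X):   return all((x, x) in R for x in X)
def is_irreflexive(R, X): return all((x, x) not in R for x in X)
def is_symmetric(R):      return all((y, x) in R for (x, y) in R)
def is_antisymmetric(R):  return all(x == y for (x, y) in R if (y, x) in R)
def is_transitive(R):     return all((x, w) in R
                                     for (x, y) in R for (z, w) in R if y == z)

X = {1, 2, 3, 4, 6, 12}
Rdiv = {(x, y) for x in X for y in X if x != y and y % x == 0}
print(is_irreflexive(Rdiv, X), is_transitive(Rdiv), is_symmetric(Rdiv))
# True True False: Rdiv is irreflexive and transitive, hence a strict partial order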
Combinations of properties
Relations by property

                      Reflexivity  Symmetry  Transitivity  Connectedness  Example
Partial order         Refl         Antisym   Yes                          Subset
Strict partial order  Irrefl       Asym      Yes                          Strict subset
Total order           Refl         Antisym   Yes           Yes            Alphabetical order
Strict total order    Irrefl       Asym      Yes           Yes            Strict alphabetical order
Equivalence relation  Refl         Sym       Yes                          Equality
Relations that satisfy certain combinations of the above properties are particularly useful, and thus have received names by their own.
Equivalence relation
A relation that is reflexive, symmetric, and transitive. It is also a relation that is symmetric, transitive, and serial, since these properties imply reflexivity.
Orderings:
Partial order
A relation that is reflexive, antisymmetric, and transitive.
Strict partial order
A relation that is irreflexive, antisymmetric, and transitive.
Total order
A relation that is reflexive, antisymmetric, transitive and connected.[16]
Strict total order
A relation that is irreflexive, antisymmetric, transitive and connected.
Uniqueness properties:
One-to-one[note 3]
Injective and functional. For example, the green binary relation in the diagram is one-to-one, but the red, blue and black ones are not.
One-to-many[note 3]
Injective and not functional. For example, the blue binary relation in the diagram is one-to-many, but the red, green and black ones are not.
Many-to-one[note 3]
Functional and not injective. For example, the red binary relation in the diagram is many-to-one, but the green, blue and black ones are not.
Many-to-many[note 3]
Not injective nor functional. For example, the black binary relation in the diagram is many-to-many, but the red, green and blue ones are not.
Uniqueness and totality properties:
A function[note 3]
A binary relation that is functional and total. For example, the red and green binary relations in the diagram are functions, but the blue and black ones are not.
An injection[note 3]
A function that is injective. For example, the green binary relation in the diagram is an injection, but the red, blue and black ones are not.
A surjection[note 3]
A function that is surjective. For example, the green binary relation in the diagram is a surjection, but the red, blue and black ones are not.
A bijection[note 3]
A function that is injective and surjective. For example, the green binary relation in the diagram is a bijection, but the red, blue and black ones are not.
Operations on relations
Union[note 4]
If R and S are relations over X then R ∪ S = {(x, y) | xRy or xSy} is the union relation of R and S. The identity element of this operation is the empty relation. For example, ≤ is the union of < and =, and ≥ is the union of > and =.
Intersection[note 4]
If R and S are binary relations over X then R ∩ S = {(x, y) | xRy and xSy} is the intersection relation of R and S. The identity element of this operation is the universal relation. For example, "is a lower card of the same suit as" is the intersection of "is a lower card than" and "belongs to the same suit as".
Composition[note 4]
If R and S are binary relations over X then S ∘ R = {(x, z) | there exists y ∈ X such that xRy and ySz} (also denoted by R; S) is the composition relation of R and S. The identity element is the identity relation. The order of R and S in the notation S ∘ R, used here agrees with the standard notational order for composition of functions. For example, the composition "is mother of" ∘ "is parent of" yields "is maternal grandparent of", while the composition "is parent of" ∘ "is mother of" yields "is grandmother of". For the former case, if x is the parent of y and y is the mother of z, then x is the maternal grandparent of z.
Converse[note 4]
If R is a binary relation over sets X and Y then RT = {(y, x) | xRy} is the converse relation of R over Y and X. For example, = is the converse of itself, as is ≠, and < and > are each other's converse, as are ≤ and ≥. A binary relation is equal to its converse if and only if it is symmetric.
Complement[note 4]
If R is a binary relation over X then R = {(x, y) | x, y ∈ X and not xRy} (also denoted by R or ¬ R) is the complementary relation of R. For example, = and ≠ are each other's complement, as are ⊆ and ⊈, ⊇ and ⊉, and ∈ and ∉, and, for total orders, also < and ≥, and > and ≤. The complement of the converse relation RT is the converse of the complement: ${\overline {R^{\mathsf {T}}}}={\bar {R}}^{\mathsf {T}}.$
Restriction[note 4]
If R is a relation over X and S is a subset of X then R|S = {(x, y) | xRy and x, y ∈ S} is the restriction relation of R to S. The expression R|S = {(x, y) | xRy and x ∈ S} is the left-restriction relation of R to S; the expression R|S = {(x, y) | xRy and y ∈ S} is called the right-restriction relation of R to S. If a relation is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, strict weak order, total preorder (weak order), or an equivalence relation, then so too are its restrictions. However, the transitive closure of a restriction is a subset of the restriction of the transitive closure, i.e., in general not equal. For example, restricting the relation "x is parent of y" to females yields the relation "x is mother of the woman y"; its transitive closure does not relate a woman with her paternal grandmother. On the other hand, the transitive closure of "is parent of" is "is ancestor of"; its restriction to females does relate a woman with her paternal grandmother.
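For finite relations, all of these operations are one-liners over sets of pairs; a small Python sketch (the helper names are illustrative):

def union(R, S):        return R | S
def intersection(R, S): return R & S
def converse(R):        return {(y, x) for (x, y) in R}
def compose(S, R):      # S ∘ R = {(x, z) | there is y with xRy and ySz}
    return {(x, z) for (x, y) in R for (w, z) in S if y == w}
def complement(R, X):   return {(x, y) for x in X for y in X} - R
def restrict(R, S):     return {(x, y) for (x, y) in R if x in S and y in S}

X = {1, 2, 3, 4}
less = {(x, y) for x in X for y in X if x < y}
assert converse(less) == {(x, y) for x in X for y in X if x > y}
assert compose(less, less) == {(x, y) for x in X for y in X if y - x >= 2}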
A binary relation R over sets X and Y is said to be contained in a relation S over X and Y, written $R\subseteq S,$ if R is a subset of S, that is, for all $x\in X$ and $y\in Y,$ if xRy, then xSy. If R is contained in S and S is contained in R, then R and S are called equal, written R = S. If R is contained in S but S is not contained in R, then R is said to be smaller than S, written R ⊊ S. For example, on the rational numbers, the relation > is smaller than ≥, and equal to the composition > ∘ >.
Examples
• Order relations, including strict orders:
• Greater than
• Greater than or equal to
• Less than
• Less than or equal to
• Divides (evenly)
• Subset of
• Equivalence relations:
• Equality
• Parallel with (for affine spaces)
• Is in bijection with
• Isomorphic
• Tolerance relation, a reflexive and symmetric relation:
• Dependency relation, a finite tolerance relation
• Independency relation, the complement of some dependency relation
• Kinship relations
Generalizations
The above concept of relation has been generalized to admit relations between members of two different sets. Given sets X and Y, a heterogeneous relation R over X and Y is a subset of { (x,y): x∈X, y∈Y}.[1][17] When X = Y, the relation concept described above is obtained; it is often called homogeneous relation (or endorelation)[18][19] to distinguish it from its generalization. The above properties and operations that are marked "[note 3]" and "[note 4]", respectively, generalize to heterogeneous relations. An example of a heterogeneous relation is "ocean x borders continent y". The best-known examples are functions[note 5] with distinct domains and ranges, such as $\sqrt{\cdot}\colon \mathbb{N} \rightarrow \mathbb{R}_{+}.$
See also
• Incidence structure, a heterogeneous relation between set of points and lines
• Order theory, investigates properties of order relations
Notes
1. called "homogeneous binary relation (on sets)" when delineation from its generalizations is important
2. a generalization of sets
3. These properties also generalize to heterogeneous relations.
4. This operation also generalizes to heterogeneous relations.
5. that is, right-unique and left-total heterogeneous relations
References
1. Codd, Edgar Frank (June 1970). "A Relational Model of Data for Large Shared Data Banks" (PDF). Communications of the ACM. 13 (6): 377–387. doi:10.1145/362384.362685. S2CID 207549016. Retrieved 2020-04-29.
2. Paul R. Halmos (1968). Naive Set Theory. Princeton: Nostrand. Chapter 14
3. Halmos (1968), Chapter 7
4. "Relation definition – Math Insight". mathinsight.org. Retrieved 2019-12-11.
5. Halmos (1968), Chapter 8
6. Ernst Schröder (1895) Algebra und Logic der Relative, via Internet Archive
7. C. I. Lewis (1918) A Survey of Symbolic Logic , pages 269 to 279, via internet Archive
8. Gunther Schmidt, 2010. Relational Mathematics. Cambridge University Press, ISBN 978-0-521-76268-7, Chapt. 5
9. Enderton 1977, Ch 3. pg. 40
10. Smith, Douglas; Eggen, Maurice; St. Andre, Richard (2006), A Transition to Advanced Mathematics (6th ed.), Brooks/Cole, p. 160, ISBN 0-534-39900-2
11. Nievergelt, Yves (2002), Foundations of Logic and Mathematics: Applications to Computer Science and Cryptography, Springer-Verlag, p. 158.
12. Flaška, V.; Ježek, J.; Kepka, T.; Kortelainen, J. (2007). Transitive Closures of Binary Relations I (PDF). Prague: School of Mathematics – Physics Charles University. p. 1. Archived from the original (PDF) on 2013-11-02. Lemma 1.1 (iv). This source refers to asymmetric relations as "strictly antisymmetric".
13. Kilp, Knauer and Mikhalev: p. 3. The same four definitions appear in the following:
• Peter J. Pahl; Rudolf Damrath (2001). Mathematical Foundations of Computational Engineering: A Handbook. Springer Science & Business Media. p. 506. ISBN 978-3-540-67995-0.
• Eike Best (1996). Semantics of Sequential and Parallel Programs. Prentice Hall. pp. 19–21. ISBN 978-0-13-460643-9.
• Robert-Christoph Riemann (1999). Modelling of Concurrent Systems: Structural and Semantical Methods in the High Level Petri Net Calculus. Herbert Utz Verlag. pp. 21–22. ISBN 978-3-89675-629-9.
14. Mäs, Stephan (2007), "Reasoning on Spatial Semantic Integrity Constraints", Spatial Information Theory: 8th International Conference, COSIT 2007, Melbourne, Australia, September 19–23, 2007, Proceedings, Lecture Notes in Computer Science, vol. 4736, Springer, pp. 285–302, doi:10.1007/978-3-540-74788-8_18
15. Yao, Y.Y.; Wong, S.K.M. (1995). "Generalization of rough sets using relationships between attribute values" (PDF). Proceedings of the 2nd Annual Joint Conference on Information Sciences: 30–33..
16. Joseph G. Rosenstein, Linear orderings, Academic Press, 1982, ISBN 0-12-597680-1, p. 4
17. Enderton 1977, Ch 3. pg. 40
18. M. E. Müller (2012). Relational Knowledge Discovery. Cambridge University Press. p. 22. ISBN 978-0-521-19021-3.
19. Peter J. Pahl; Rudolf Damrath (2001). Mathematical Foundations of Computational Engineering: A Handbook. Springer Science & Business Media. p. 496. ISBN 978-3-540-67995-0.
Bibliography
• Codd, Edgar Frank (1990). The Relational Model for Database Management: Version 2 (PDF). Boston: Addison-Wesley. ISBN 978-0201141924.
• Enderton, Herbert (1977). Elements of Set Theory. Boston: Academic Press. ISBN 978-0-12-238440-0.
• Kilp, Mati; Knauer, Ulrich; Mikhalev, Alexander (2000). Monoids, Acts and Categories: with Applications to Wreath Products and Graphs. Berlin: De Gruyter. ISBN 978-3-11-015248-7.
• Peirce, Charles Sanders (1873). "Description of a Notation for the Logic of Relatives, Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic". Memoirs of the American Academy of Arts and Sciences. 9 (2): 317–178. Bibcode:1873MAAAS...9..317P. doi:10.2307/25058006. hdl:2027/hvd.32044019561034. JSTOR 25058006. Retrieved 2020-05-05.
• Schmidt, Gunther (2010). Relational Mathematics. Cambridge: Cambridge University Press. ISBN 978-0-521-76268-7.
Relation algebra
In mathematics and abstract algebra, a relation algebra is a residuated Boolean algebra expanded with an involution called converse, a unary operation. The motivating example of a relation algebra is the algebra $2^{X^{2}}$ of all binary relations on a set X, that is, subsets of the cartesian square $X^{2}$, with R•S interpreted as the usual composition of binary relations R and S, and with the converse of R as the converse relation.
For the concept related to databases, see Relational algebra.
Relation algebra emerged in the 19th-century work of Augustus De Morgan and Charles Peirce, which culminated in the algebraic logic of Ernst Schröder. The equational form of relation algebra treated here was developed by Alfred Tarski and his students, starting in the 1940s. Tarski and Givant (1987) applied relation algebra to a variable-free treatment of axiomatic set theory, with the implication that mathematics founded on set theory could itself be conducted without variables.
Definition
A relation algebra (L, ∧, ∨, −, 0, 1, •, I, ˘) is an algebraic structure equipped with the Boolean operations of conjunction x∧y, disjunction x∨y, and negation x−, the Boolean constants 0 and 1, the relational operations of composition x•y and converse x˘, and the relational constant I, such that these operations and constants satisfy certain equations constituting an axiomatization of a calculus of relations. Roughly, a relation algebra is to a system of binary relations on a set containing the empty (0), universal (1), and identity (I) relations and closed under these five operations as a group is to a system of permutations of a set containing the identity permutation and closed under composition and inverse. However, the first-order theory of relation algebras is not complete for such systems of binary relations.
Following Jónsson and Tsinakis (1993) it is convenient to define additional operations x ◁ y = x • y˘, and, dually, x ▷ y = x˘ • y. Jónsson and Tsinakis showed that I ◁ x = x ▷ I, and that both were equal to x˘. Hence a relation algebra can equally well be defined as an algebraic structure (L, ∧, ∨, −, 0, 1, •, I, ◁ , ▷). The advantage of this signature over the usual one is that a relation algebra can then be defined in full simply as a residuated Boolean algebra for which I ◁ x is an involution, that is, I ◁ (I ◁ x) = x. The latter condition can be thought of as the relational counterpart of the equation 1/(1/x) = x for ordinary arithmetic reciprocal, and some authors use reciprocal as a synonym for converse.
Since residuated Boolean algebras are axiomatized with finitely many identities, so are relation algebras. Hence the latter form a variety, the variety RA of relation algebras. Expanding the above definition as equations yields the following finite axiomatization.
Axioms
The axioms B1-B10 below are adapted from Givant (2006: 283), and were first set out by Tarski in 1948.[1]
L is a Boolean algebra under binary disjunction, ∨, and unary complementation ()−:
B1: A ∨ B = B ∨ A
B2: A ∨ (B ∨ C) = (A ∨ B) ∨ C
B3: (A− ∨ B)− ∨ (A− ∨ B−)− = A
This axiomatization of Boolean algebra is due to Huntington (1933). Note that the meet of the implied Boolean algebra is not the • operator (even though it distributes over ∨ like a meet does), nor is the 1 of the Boolean algebra the I constant.
L is a monoid under binary composition (•) and nullary identity I:
B4: A • (B • C) = (A • B) • C
B5: A • I = A
Unary converse ()˘ is an involution with respect to composition:
B6: A˘˘ = A
B7: (A • B)˘ = B˘ • A˘
Axiom B6 defines conversion as an involution, whereas B7 expresses the antidistributive property of conversion relative to composition.[2]
Converse and composition distribute over disjunction:
B8: (A ∨ B)˘ = A˘ ∨ B˘
B9: (A ∨ B) • C = (A • C) ∨ (B • C)
B10 is Tarski's equational form of the fact, discovered by Augustus De Morgan, that A • B ≤ C− $\leftrightarrow $ A˘ • C ≤ B− $\leftrightarrow $ C • B˘ ≤ A−.
B10: (A˘ • (A • B)−) ∨ B− = B−
These axioms are ZFC theorems; for the purely Boolean B1-B3, this fact is trivial. The corresponding theorem numbers in Chapter 3 of Suppes (1960), an exposition of ZFC, are: B4: 27, B5: 45, B6: 14, B7: 26, B8: 16, B9: 23.
Expressing properties of binary relations in RA
The following table shows how many of the usual properties of binary relations can be expressed as succinct RA equalities or inequalities. Below, an inequality of the form A ≤ B is shorthand for the Boolean equation A∨B = B.
The most complete set of results of this nature is Chapter C of Carnap (1958), where the notation is rather distant from that of this entry. Chapter 3.2 of Suppes (1960) contains fewer results, presented as ZFC theorems and using a notation that more resembles that of this entry. Neither Carnap nor Suppes formulated their results using the RA of this entry, or in an equational manner.
• Functional: R˘ • R ≤ I
• Left-total: I ≤ R • R˘ (R˘ is surjective)
• Function: functional and left-total
• Injective: R • R˘ ≤ I (R˘ is functional)
• Surjective: I ≤ R˘ • R (R˘ is left-total)
• Bijection: R˘ • R = R • R˘ = I (injective surjective function)
• Transitive: R • R ≤ R
• Reflexive: I ≤ R
• Coreflexive: R ≤ I
• Irreflexive: R ∧ I = 0
• Symmetric: R˘ = R
• Antisymmetric: R ∧ R˘ ≤ I
• Asymmetric: R ∧ R˘ = 0
• Strongly connected: R ∨ R˘ = 1
• Connected: I ∨ R ∨ R˘ = 1
• Idempotent: R • R = R
• Preorder: R is transitive and reflexive
• Equivalence: R is a symmetric preorder
• Partial order: R is an antisymmetric preorder
• Total order: R is strongly connected and a partial order
• Strict partial order: R is transitive and irreflexive
• Strict total order: R is connected and a strict partial order
• Dense: R ∧ I− ≤ (R ∧ I−) • (R ∧ I−)
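Since ≤ is just set inclusion under the standard interpretation, these characterizations translate directly into executable checks. A minimal Python sketch, where LE below is an illustrative choice (the usual order on {0, 1}):

```python
X = [0, 1]
UNIV = frozenset((x, y) for x in X for y in X)
ID = frozenset((x, x) for x in X)

def comp(r, s):   # relational composition r • s
    return frozenset((x, z) for (x, y) in r for (w, z) in s if y == w)

def conv(r):      # converse r˘
    return frozenset((y, x) for (x, y) in r)

LE = frozenset([(0, 0), (0, 1), (1, 1)])   # the usual order ≤ on {0, 1}

assert comp(LE, LE) <= LE                  # transitive:         R • R ≤ R
assert ID <= LE                            # reflexive:          I ≤ R
assert LE & conv(LE) <= ID                 # antisymmetric:      R ∧ R˘ ≤ I
assert LE | conv(LE) == UNIV               # strongly connected: R ∨ R˘ = 1
# hence LE is a total order, per the list above
```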
Expressive power
The metamathematics of RA are discussed at length in Tarski and Givant (1987), and more briefly in Givant (2006).
RA consists entirely of equations manipulated using nothing more than uniform replacement and the substitution of equals for equals. Both rules are wholly familiar from school mathematics and from abstract algebra generally. Hence RA proofs are carried out in a manner familiar to all mathematicians, unlike the case in mathematical logic generally.
RA can express any (and up to logical equivalence, exactly the) first-order logic (FOL) formulas containing no more than three variables. (A given variable can be quantified multiple times and hence quantifiers can be nested arbitrarily deeply by "reusing" variables.) Surprisingly, this fragment of FOL suffices to express Peano arithmetic and almost all axiomatic set theories ever proposed. Hence RA is, in effect, a way of algebraizing nearly all mathematics, while dispensing with FOL and its connectives, quantifiers, turnstiles, and modus ponens. Because RA can express Peano arithmetic and set theory, Gödel's incompleteness theorems apply to it; RA is incomplete, incompletable, and undecidable. (N.B. The Boolean algebra fragment of RA is complete and decidable.)
The representable relation algebras, forming the class RRA, are those relation algebras isomorphic to some relation algebra consisting of binary relations on some set, and closed under the intended interpretation of the RA operations. It is easily shown, e.g. using the method of pseudoelementary classes, that RRA is a quasivariety, that is, axiomatizable by a universal Horn theory. In 1950, Roger Lyndon proved the existence of equations holding in RRA that did not hold in RA. Hence the variety generated by RRA is a proper subvariety of the variety RA. In 1955, Alfred Tarski showed that RRA is itself a variety. In 1964, Donald Monk showed that RRA has no finite axiomatization, unlike RA which is finitely axiomatized by definition.
Q-relation algebras
An RA is a Q-relation algebra (QRA) if, in addition to B1-B10, there exist some A and B such that (Tarski and Givant 1987: §8.4):
Q0: A˘ • A ≤ I
Q1: B˘ • B ≤ I
Q2: A˘ • B = 1
Essentially these axioms imply that the universe has a (non-surjective) pairing relation whose projections are A and B. It is a theorem that every QRA is an RRA (proof by Maddux, see Tarski & Givant 1987: 8.4(iii)).
Every QRA is representable (Tarski and Givant 1987). That not every relation algebra is representable is a fundamental way RA differs from QRA and Boolean algebras, which, by Stone's representation theorem for Boolean algebras, are always representable as sets of subsets of some set, closed under union, intersection, and complement.
Examples
1. Any Boolean algebra can be turned into an RA by interpreting conjunction as composition (the monoid multiplication •), i.e. x • y is defined as x∧y. This interpretation requires that converse interpret identity (y˘ = y), and that both residuals y\x and x/y interpret the conditional y → x (i.e., ¬y ∨ x).
2. The motivating example of a relation algebra depends on the definition of a binary relation R on a set X as any subset $R \subseteq X^{2}$, where $X^{2}$ is the Cartesian square of X. The power set $2^{X^{2}}$ consisting of all binary relations on X is a Boolean algebra. While $2^{X^{2}}$ can be made a relation algebra by taking R • S = R ∧ S, as per example (1) above, the standard interpretation of • is instead x(R • S)z = ∃y : xRy ∧ ySz. That is, the ordered pair (x, z) belongs to the relation R • S just when there exists y in X such that (x, y) ∈ R and (y, z) ∈ S. This interpretation uniquely determines R\S as consisting of all pairs (y, z) such that for all x ∈ X, if xRy then xSz. Dually, S/R consists of all pairs (x, y) such that for all z ∈ X, if yRz then xSz. The translation y˘ = ¬(y\¬I) then establishes the converse R˘ of R as consisting of all pairs (y, x) such that (x, y) ∈ R.
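These definitions of the residuals and of converse can be checked against each other by brute force on a small universe. A Python sketch under the same set-based interpretation, for illustration only:

```python
from itertools import chain, combinations, product

X = [0, 1]
UNIV = frozenset(product(X, X))
ID = frozenset((x, x) for x in X)

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

RELS = [frozenset(r) for r in powerset(UNIV)]

def comp(r, s):
    return frozenset((x, z) for (x, y) in r for (w, z) in s if y == w)

def conv(r):
    return frozenset((y, x) for (x, y) in r)

def neg(r):
    return UNIV - r

def lres(r, s):   # R \ S = {(y, z) : for all x, xRy implies xSz}
    return frozenset((y, z) for y in X for z in X
                     if all((x, z) in s for x in X if (x, y) in r))

for r in RELS:
    # the translation y˘ = ¬(y \ ¬I) recovers the converse
    assert neg(lres(r, neg(ID))) == conv(r)
for r, s, t in product(RELS, repeat=3):
    # the residuation law: R • S ≤ T  iff  S ≤ R \ T
    assert (comp(r, s) <= t) == (s <= lres(r, t))
```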
3. An important generalization of the previous example is the power set $2^{E}$, where $E \subseteq X^{2}$ is any equivalence relation on the set X. This is a generalization because $X^{2}$ is itself an equivalence relation, namely the complete relation consisting of all pairs. While $2^{E}$ is not a subalgebra of $2^{X^{2}}$ when $E \neq X^{2}$ (since in that case it does not contain the relation $X^{2}$, the top element 1 being E instead of $X^{2}$), it is nevertheless turned into a relation algebra using the same definitions of the operations. Its importance resides in the definition of a representable relation algebra as any relation algebra isomorphic to a subalgebra of the relation algebra $2^{E}$ for some equivalence relation E on some set. The previous section says more about the relevant metamathematics.
4. Let G be a group. Then the power set $2^{G}$ is a relation algebra with the obvious Boolean algebra operations, composition given by the product of group subsets, the converse by the inverse subset ($A^{-1}=\{a^{-1}\!:a\in A\}$), and the identity by the singleton subset $\{e\}$. There is a relation algebra homomorphism embedding $2^{G}$ in $2^{G\times G}$ which sends each subset $A\subset G$ to the relation $R_{A}=\{(g,h)\in G\times G:h\in Ag\}$. The image of this homomorphism is the set of all right-invariant relations on G.
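A sanity check of this example in Python, using the cyclic group Z₃ as an illustrative choice (Z₃ is abelian, so the order of factors in the subset product is immaterial here):

```python
from itertools import chain, combinations

n = 3                        # Z_3 = {0, 1, 2} under addition mod 3
G = list(range(n))
e = 0

def mul(a, b): return (a + b) % n
def inv(a): return (-a) % n

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

SUBSETS = [frozenset(a) for a in powerset(G)]

def comp(A, B): return frozenset(mul(a, b) for a in A for b in B)  # subset product
def conv(A): return frozenset(inv(a) for a in A)                   # inverse subset
I = frozenset([e])                                                 # identity {e}

for A in SUBSETS:
    assert comp(A, I) == A and conv(conv(A)) == A                  # B5, B6
    for B in SUBSETS:
        assert conv(comp(A, B)) == comp(conv(B), conv(A))          # B7

def R(A):        # the embedding A ↦ R_A = {(g, h) : h ∈ Ag}
    return frozenset((g, mul(a, g)) for g in G for a in A)

def relcomp(r, s):
    return frozenset((x, z) for (x, y) in r for (w, z) in s if y == w)

for A in SUBSETS:
    for B in SUBSETS:
        # subset product corresponds to relational composition;
        # for the abelian Z_3 the order of composition does not matter
        assert R(comp(A, B)) == relcomp(R(A), R(B))
```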
5. If group sum or product interprets composition, group inverse interprets converse, group identity interprets I, and if R is a one-to-one correspondence, so that R˘ • R = R • R˘ = I,[3] then L is a group as well as a monoid. B4-B7 become well-known theorems of group theory, so that RA becomes a proper extension of group theory as well as of Boolean algebra.
Historical remarks
De Morgan founded RA in 1860, but C. S. Peirce took it much further and became fascinated with its philosophical power. The work of De Morgan and Peirce came to be known mainly in the extended and definitive form Ernst Schröder gave it in Vol. 3 of his Vorlesungen (1890–1905). Principia Mathematica drew strongly on Schröder's RA, but acknowledged him only as the inventor of the notation. In 1912, Alwin Korselt proved that a particular formula in which the quantifiers were nested four deep had no RA equivalent.[4] This fact led to a loss of interest in RA until Tarski (1941) began writing about it. His students have continued to develop RA down to the present day. Tarski returned to RA in the 1970s with the help of Steven Givant; this collaboration resulted in the monograph by Tarski and Givant (1987), the definitive reference for this subject. For more on the history of RA, see Maddux (1991, 2006).
Software
• RelMICS / Relational Methods in Computer Science maintained by Wolfram Kahl
• Carsten Sinz: ARA / An Automatic Theorem Prover for Relation Algebras
• Stef Joosten, Relation Algebra as programming language using the Ampersand compiler, Journal of Logical and Algebraic Methods in Programming, Volume 100, April 2018, Pages 113–129. (see also https://ampersandtarski.github.io/)
See also
• Algebraic logic
• Allegory (category theory)
• Binary relation
• Cartesian product
• Cartesian square
• Cylindric algebras
• Extension in logic
• Involution
• Logic of relatives
• Logical matrix
• Predicate functor logic
• Quantale
• Relation
• Relation construction
• Relational calculus
• Relational algebra
• Residuated Boolean algebra
• Spatial-temporal reasoning
• Theory of relations
• Triadic relation
Footnotes
1. Alfred Tarski (1948) "Abstract: Representation Problems for Relation Algebras," Bulletin of the AMS 54: 80.
2. Chris Brink; Wolfram Kahl; Gunther Schmidt (1997). Relational Methods in Computer Science. Springer. pp. 4 and 8. ISBN 978-3-211-82971-4.
3. Tarski, A. (1941), p. 87.
4. Korselt did not publish his finding. It was first published in Leopold Loewenheim (1915) "Über Möglichkeiten im Relativkalkül," Mathematische Annalen 76: 447–470. Translated as "On possibilities in the calculus of relatives" in Jean van Heijenoort, 1967. A Source Book in Mathematical Logic, 1879–1931. Harvard Univ. Press: 228–251.
References
• Carnap, Rudolf (1958). Introduction to Symbolic Logic and its Applications. Dover Publications.
• Givant, Steven (2006). "The calculus of relations as a foundation for mathematics". Journal of Automated Reasoning. 37 (4): 277–322. doi:10.1007/s10817-006-9062-x. S2CID 26324546.
• Halmos, P. R. (1960). Naive Set Theory. Van Nostrand.
• Henkin, Leon; Tarski, Alfred; Monk, J. D. (1971). Cylindric Algebras, Part 1. North Holland.
• Henkin, Leon; Tarski, Alfred; Monk, J. D. (1985). Cylindric Algebras, Part 2. North Holland.
• Hirsch, R.; Hodkinson, I. (2002). Relation Algebra by Games. Studies in Logic and the Foundations of Mathematics. Vol. 147. Elsevier Science.
• Jónsson, Bjarni; Tsinakis, Constantine (1993). "Relation algebras as residuated Boolean algebras". Algebra Universalis. 30 (4): 469–78. doi:10.1007/BF01195378. S2CID 120642402.
• Maddux, Roger (1991). "The Origin of Relation Algebras in the Development and Axiomatization of the Calculus of Relations" (PDF). Studia Logica. 50 (3–4): 421–455. CiteSeerX 10.1.1.146.5668. doi:10.1007/BF00370681. S2CID 12165812.
• Maddux, Roger (2006). Relation Algebras. Studies in Logic and the Foundations of Mathematics. Vol. 150. Elsevier Science. ISBN 9780444520135.
• Schein, Boris M. (1970) "Relation algebras and function semigroups", Semigroup Forum 1: 1–62
• Schmidt, Gunther (2010). Relational Mathematics. Cambridge University Press.
• Suppes, Patrick (1972) [1960]. "Chapter 3". Axiomatic Set Theory (Dover reprint ed.). Van Nostrand.
• Tarski, Alfred (1941). "On the calculus of relations". Journal of Symbolic Logic. 6 (3): 73–89. doi:10.2307/2268577. JSTOR 2268577. S2CID 11899579.
• Tarski, Alfred; Givant, Steven (1987). A Formalization of Set Theory without Variables. Providence RI: American Mathematical Society. ISBN 9780821810415.
External links
• Yohji AKAMA, Yasuo Kawahara, and Hitoshi Furusawa, "Constructing Allegory from Relation Algebra and Representation Theorems."
• Richard Bird, Oege de Moor, Paul Hoogendijk, "Generic Programming with Relations and Functors."
• R.P. de Freitas and Viana, "A Completeness Result for Relation Algebra with Binders."
• Peter Jipsen:
• Relation algebras
• "Foundations of Relations and Kleene Algebra."
• "Computer Aided Investigations of Relation Algebras."
• "A Gentzen System And Decidability For Residuated Lattices."
• Vaughan Pratt:
• "Origins of the Calculus of Binary Relations." A historical treatment.
• "The Second Calculus of Binary Relations."
• Priss, Uta:
• "An FCA interpretation of Relation Algebra."
• "Relation Algebra and FCA" Links to publications and software
• Kahl, Wolfram and Gunther Schmidt: Exploring (Finite) Relation Algebras Using Tools Written in Haskell, and Relation Algebra Tools with Haskell, from McMaster University.
Relation of degree zero
A relation of degree zero, 0-ary relation, or nullary relation is a relation with zero attributes. There are exactly two relations of degree zero. One has cardinality zero; that is, it contains no tuples at all. The other has cardinality one; that is, it contains the unique 0-tuple.[1]:56
The zero-degree relations represent true and false in relational algebra.[1]:57 Under the closed-world assumption, an n-ary relation is interpreted as the extension of some n-adic predicate: all and only those n-tuples whose values, substituted for corresponding free variables in the predicate, yield propositions that hold true, appear in the relation. A zero-degree relation is therefore interpreted as the extension of the 0-adic predicate P() → true. The zero-degree relation with cardinality zero therefore represents false because it contains no tuples that yield a true proposition, and the zero-degree relation with cardinality 1 represents true because it contains the unique 0-tuple that yields a true proposition.
The zero-degree relations are also significant as identities for certain operators in the relational algebra. The zero-degree relation of cardinality 1 is the identity with respect to join (⋈); that is, when it is joined with any other relation R, the result is R. Defining an identity with respect to join makes it possible to extend the binary join operator into an n-ary join operator.[1]:89
Since the relational Cartesian product is a special case of join, the zero-degree relation of cardinality 1 is also the identity with respect to the Cartesian product.[1]:89
A projection of a relation over no attributes yields one of the relations of degree zero. If the projected relation has cardinality 0, the projection will have cardinality 0; if the projected relation has positive cardinality, the result will have cardinality 1.
Hugh Darwen refers to the zero-degree relation with cardinality 0 as TABLE_DUM and the relation with cardinality 1 as TABLE_DEE, alluding to the characters Tweedledum and Tweedledee.[2]
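A small Python sketch makes the identity behavior concrete, modeling relations as lists of attribute-to-value dictionaries; all names here are illustrative, not a standard API:

```python
DUM = []            # degree 0, cardinality 0: no tuples at all ("false")
DEE = [dict()]      # degree 0, cardinality 1: just the empty tuple ("true")

def natural_join(r, s):
    # tuples merge when they agree on all shared attributes
    return [{**tr, **ts} for tr in r for ts in s
            if all(tr[k] == ts[k] for k in tr.keys() & ts.keys())]

def project(r, attrs):
    out = []
    for t in r:
        p = {k: t[k] for k in attrs}
        if p not in out:        # relations have set semantics: no duplicates
            out.append(p)
    return out

R = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]

assert natural_join(R, DEE) == R     # TABLE_DEE is the identity for ⋈
assert natural_join(R, DUM) == []    # joining with TABLE_DUM empties the result
assert project(R, []) == DEE         # projection over no attributes, R nonempty
assert project(DUM, []) == DUM       # ... and when the input relation is empty
```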
See also
• Empty set
• Identity element
• Relational algebra
References
1. Date, Christopher J. (May 2005). Database In Depth (1 ed.). Sebastopol, California: O'Reilly. ISBN 978-0-596-10012-4.
2. Darwen, Hugh (1992). "The Nullologist in Relationland". In Date, C. J.; Darwen, Hugh (eds.). Relational Database Writings 1989–1991. Reading, Massachusetts: Addison-Wesley.
Predicate variable
In mathematical logic, a predicate variable is a predicate letter which functions as a "placeholder" for a relation (between terms), but which has not been specifically assigned any particular relation (or meaning). Common symbols for denoting predicate variables include capital roman letters such as $P$, $Q$ and $R$, or lower case roman letters, e.g., $x$.[1] In first-order logic, they can be more properly called metalinguistic variables. In higher-order logic, predicate variables correspond to propositional variables which can stand for well-formed formulas of the same logic, and such variables can be quantified by means of (at least) second-order quantifiers.
Notation
Predicate variables should be distinguished from predicate constants, which could be represented either with a different (exclusive) set of predicate letters, or by their own symbols which really do have their own specific meaning in their domain of discourse: e.g. $=,\ \in ,\ \leq ,\ <,\ \subset ,...$.
If letters are used for both predicate constants and predicate variables, then there must be a way of distinguishing between them. One possibility is to use letters W, X, Y, Z to represent predicate variables and letters A, B, C,..., U, V to represent predicate constants. If these letters are not enough, then numerical subscripts can be appended after the letter in question (as in X1, X2, X3).
Another option is to use Greek lower-case letters to represent such metavariable predicates. Then, such letters could be used to represent entire well-formed formulae (wff) of the predicate calculus: any free variable terms of the wff could be incorporated as terms of the Greek-letter predicate. This is the first step towards creating a higher-order logic.
Usage
If the predicate variables are not defined as belonging to the vocabulary of the predicate calculus, then they are predicate metavariables, whereas the rest of the predicates are just called "predicate letters". The metavariables are thus understood to be used to code for axiom schemata and theorem schemata (derived from the axiom schemata).
Whether the "predicate letters" are constants or variables is a subtle point: they are not constants in the same sense that $=,\ \in ,\ \leq ,\ <,\ \subset ,$ are predicate constants, or that $1,\ 2,\ 3,\ {\sqrt {2}},\ \pi ,\ e\ $ are numerical constants.
If "predicate variables" are only allowed to be bound to predicate letters of zero arity (which have no arguments), where such letters represent propositions, then such variables are propositional variables, and any predicate logic which allows second-order quantifiers to be used to bind such propositional variables is a second-order predicate calculus, or second-order logic.
If predicate variables are also allowed to be bound to predicate letters which are unary or have higher arity, and when such letters represent propositional functions, such that the domain of the arguments is mapped to a range of different propositions, and when such variables can be bound by quantifiers to such sets of propositions, then the result is a higher-order predicate calculus, or higher-order logic.
See also
• Functional predicate
• Metavariable
• Propositional variable – Variable that can either be true or false
References
1. "Predicate variable - Encyclopedia of Mathematics". encyclopediaofmath.org. Retrieved 2020-08-20.
Bibliography
• Rudolf Carnap and William H. Meyer. Introduction to Symbolic Logic and Its Applications. Dover Publications (June 1, 1958). ISBN 0-486-60453-5
Signature (logic)
In logic, especially mathematical logic, a signature lists and describes the non-logical symbols of a formal language. In universal algebra, a signature lists the operations that characterize an algebraic structure. In model theory, signatures are used for both purposes. They are rarely made explicit in more philosophical treatments of logic.
Definition
Formally, a (single-sorted) signature can be defined as a 4-tuple $\sigma =\left(S_{\operatorname {func} },S_{\operatorname {rel} },S_{\operatorname {const} },\operatorname {ar} \right),$ where $S_{\operatorname {func} }$ and $S_{\operatorname {rel} }$ are disjoint sets not containing any other basic logical symbols, called respectively
• function symbols (examples: $+,\times ,0,1$),
• relation symbols or predicates (examples: $\,\leq ,\,\in $),
• constant symbols (examples: $0,1$),
and a function $\operatorname {ar} :S_{\operatorname {func} }\cup S_{\operatorname {rel} }\to \mathbb {N} $ which assigns a natural number called arity to every function or relation symbol. A function or relation symbol is called $n$-ary if its arity is $n.$ Some authors define a nullary ($0$-ary) function symbol as a constant symbol; otherwise, constant symbols are defined separately.
A signature with no function symbols is called a relational signature, and a signature with no relation symbols is called an algebraic signature.[1] A finite signature is a signature such that $S_{\operatorname {func} }$ and $S_{\operatorname {rel} }$ are finite. More generally, the cardinality of a signature $\sigma =\left(S_{\operatorname {func} },S_{\operatorname {rel} },S_{\operatorname {const} },\operatorname {ar} \right)$ is defined as $|\sigma |=\left|S_{\operatorname {func} }\right|+\left|S_{\operatorname {rel} }\right|+\left|S_{\operatorname {const} }\right|.$
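As a concrete illustration, a signature can be modeled directly as a data structure. The following Python sketch uses made-up names and is only one of many reasonable encodings; it records the three symbol sets and the arity map from the definition above:

```python
from dataclasses import dataclass

@dataclass
class Signature:
    func: dict     # function symbols -> arity
    rel: dict      # relation symbols -> arity
    const: set     # constant symbols (the arity function is undefined on these)

    def ar(self, symbol):
        if symbol in self.func:
            return self.func[symbol]
        if symbol in self.rel:
            return self.rel[symbol]
        raise KeyError(f"ar is not defined on {symbol!r}")

    def cardinality(self):
        return len(self.func) + len(self.rel) + len(self.const)

# the signature of ordered fields, as an example
sigma = Signature(func={"+": 2, "×": 2, "−": 1},
                  rel={"≤": 2},
                  const={"0", "1"})
assert sigma.ar("+") == 2
assert sigma.cardinality() == 6     # |σ| = 3 + 1 + 2
```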
The language of a signature is the set of all well formed sentences built from the symbols in that signature together with the symbols in the logical system.
Other conventions
In universal algebra the word type or similarity type is often used as a synonym for "signature". In model theory, a signature $\sigma $ is often called a vocabulary, or identified with the (first-order) language $L$ to which it provides the non-logical symbols. However, the cardinality of the language $L$ will always be infinite; if $\sigma $ is finite then $|L|$ will be $\aleph _{0}$.
As the formal definition is inconvenient for everyday use, the definition of a specific signature is often abbreviated in an informal way, as in:
"The standard signature for abelian groups is $\sigma =(+,-,0),$ where $-$ is a unary operator."
Sometimes an algebraic signature is regarded as just a list of arities, as in:
"The similarity type for abelian groups is $\sigma =(2,1,0).$"
Formally this would define the function symbols of the signature as something like $f_{0}$ (which is binary), $f_{1}$ (which is unary) and $f_{2}$ (which is nullary), but in reality the usual names are used even in connection with this convention.
In mathematical logic, very often symbols are not allowed to be nullary, so that constant symbols must be treated separately rather than as nullary function symbols. They form a set $S_{\operatorname {const} }$ disjoint from $S_{\operatorname {func} },$ on which the arity function $\operatorname {ar} $ is not defined. However, this only serves to complicate matters, especially in proofs by induction over the structure of a formula, where an additional case must be considered. Any nullary relation symbol, which is also not allowed under such a definition, can be emulated by a unary relation symbol together with a sentence expressing that its value is the same for all elements. This translation fails only for empty structures (which are often excluded by convention). If nullary symbols are allowed, then every formula of propositional logic is also a formula of first-order logic.
An example for an infinite signature uses $S_{\operatorname {func} }=\{+\}\cup \left\{f_{a}:a\in F\right\}$ and $S_{\operatorname {rel} }=\{=\}$ to formalize expressions and equations about a vector space over an infinite scalar field $F,$ where each $f_{a}$ denotes the unary operation of scalar multiplication by $a.$ This way, the signature and the logic can be kept single-sorted, with vectors being the only sort.[2]
Use of signatures in logic and algebra
In the context of first-order logic, the symbols in a signature are also known as the non-logical symbols, because together with the logical symbols they form the underlying alphabet over which two formal languages are inductively defined: The set of terms over the signature and the set of (well-formed) formulas over the signature.
In a structure, an interpretation ties the function and relation symbols to mathematical objects that justify their names: The interpretation of an $n$-ary function symbol $f$ in a structure $\mathbf {A} $ with domain $A$ is a function $f^{\mathbf {A} }:A^{n}\to A,$ and the interpretation of an $n$-ary relation symbol is a relation $R^{\mathbf {A} }\subseteq A^{n}.$ Here $A^{n}=A\times A\times \cdots \times A$ denotes the $n$-fold cartesian product of the domain $A$ with itself, and so $f$ is in fact an $n$-ary function, and $R$ an $n$-ary relation.
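A structure supplies exactly such an interpretation. The sketch below, continuing the illustrative Python encoding above, interprets the group signature (+, −, 0) over the domain Z₅ (an assumed example) and evaluates terms written as nested tuples:

```python
n = 5
domain = range(n)                        # the domain A of the structure

interp = {
    "+": lambda x, y: (x + y) % n,       # binary function symbol: A² -> A
    "−": lambda x: (-x) % n,             # unary function symbol:  A  -> A
    "0": 0,                              # constant symbol: an element of A
}

def eval_term(t):
    # terms are nested tuples such as ("+", ("−", 2), 2); bare ints are elements
    if isinstance(t, int):
        return t % n
    if isinstance(t, str):
        return interp[t]
    op, *args = t
    return interp[op](*map(eval_term, args))

assert eval_term(("+", ("−", 2), 2)) == interp["0"]   # (−2) + 2 = 0 in Z_5
```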
Many-sorted signatures
For many-sorted logic and for many-sorted structures, signatures must encode information about the sorts. The most straightforward way of doing this is via symbol types that play the role of generalized arities.[3]
Symbol types
Let $S$ be a set (of sorts) not containing the symbols $\times $ or $\to .$
The symbol types over $S$ are certain words over the alphabet $S\cup \{\times ,\to \}$: the relational symbol types $s_{1}\times \cdots \times s_{n},$ and the functional symbol types $s_{1}\times \cdots \times s_{n}\to s^{\prime },$ for non-negative integers $n$ and $s_{1},s_{2},\ldots ,s_{n},s^{\prime }\in S.$ (For $n=0,$ the expression $s_{1}\times \cdots \times s_{n}$ denotes the empty word.)
Signature
A (many-sorted) signature is a triple $(S,P,\operatorname {type} )$ consisting of
• a set $S$ of sorts,
• a set $P$ of symbols, and
• a map $\operatorname {type} $ which associates to every symbol in $P$ a symbol type over $S.$
See also
• Term algebra – Freely generated algebraic structure over a given signature
Notes
1. Mokadem, Riad; Litwin, Witold; Rigaux, Philippe; Schwarz, Thomas (September 2007). "Fast nGram-Based String Search Over Data Encoded Using Algebraic Signatures" (PDF). 33rd International Conference on Very Large Data Bases (VLDB). Retrieved 27 February 2019.
2. George Grätzer (1967). "IV. Universal Algebra". In James C. Abbot (ed.). Trends in Lattice Theory. Princeton/NJ: Van Nostrand. pp. 173–210. Here: p.173.
3. Many-Sorted Logic, the first chapter in Lecture notes on Decision Procedures, written by Calogero G. Zarba.
References
• Burris, Stanley N.; Sankappanavar, H.P. (1981). A Course in Universal Algebra. Springer. ISBN 3-540-90578-2. Free online edition.
• Hodges, Wilfrid (1997). A Shorter Model Theory. Cambridge University Press. ISBN 0-521-58713-1.
External links
• Stanford Encyclopedia of Philosophy: "Model theory"—by Wilfred Hodges.
• PlanetMath: Entry "Signature" describes the concept for the case when no sorts are introduced.
• Baillie, Jean, "An Introduction to the Algebraic Specification of Abstract Data Types."
Relational dependency network
Relational dependency networks (RDNs) are graphical models which extend dependency networks to account for relational data. Relational data is data organized into one or more tables, which are cross-related through standard fields. A relational database is a canonical example of a system that serves to maintain relational data. A relational dependency network can be used to characterize the knowledge contained in a database.
BoostSRL
• Developer(s): StARLinG Lab
• Initial release: December 29, 2016
• Stable release: 1.1.1 / August 1, 2019
• Repository: github.com/starling-lab/BoostSRL
• Written in: Java
• Platform: Linux, macOS, Windows
• Type: Machine learning, relational dependency network
• License: GPL 3.0
• Website: starling.utdallas.edu/software/boostsrl/
Introduction
Relational dependency networks (RDNs) aim to represent the joint probability distribution over the variables of a data set in the relational domain. They are based on dependency networks (DNs) and extend them to the relational setting. RDNs have efficient learning methods: the parameters, that is, the conditional probability distributions, can be estimated independently for each variable. Since this independent learning can introduce inconsistencies, RDNs, like DNs, use Gibbs sampling to recover the joint distribution.
Unlike dependency networks, RDNs need three graphs for a full representation.
• Data graph: The nodes of this graph represent objects from the data set, and edges represent the dependencies between these objects. Each object and edge receives a type, and each object has an attribute set.
• Model graph: A higher-order graph representing types. The nodes of this graph represent the attributes of a given type, and the edges represent dependencies between attributes. The dependencies may be between attributes of the same type or different types.
• Each node is associated with a probability distribution conditioned on its parent nodes. The model graph makes no assumptions about the data set, making it general enough to support different data represented by the data graph. Thus, it is possible to use a given data set to learn the model graph's structure and conditional probability distributions and then generate the inference graph from the model graph applied to a data graph representing another set of data.
• Inference graph: A graph generated from the data graph and model graph in a process known as 'roll out'. Inference graphs are generally larger than both data graphs and model graphs as every single attribute of any individual object is an instance on the inference graph whose characteristics correspond to the attribute retrieved from the model graph.
In other words, the data graph guides how the model graph will be rolled out to generate the inference graph.
RDN Learning
The learning methods of an RDN are similar to those employed by DNs; that is, all conditional probability distributions can be learned independently, one per variable. However, only conditional relational learners can be used during the parameter estimation process for RDNs. Therefore, the learners used by DNs, like decision trees or logistic regression, do not work for RDNs.
Neville, J., & Jensen, D. (2007) [1] conducted some experiments comparing RDNs when learning with Relational Bayesian Classifiers and RDNs when learning with Relational Probability Trees. Natarajan et al. (2012) [2] used a series of regression models to represent conditional distributions.
This learning method makes the RDN a model with an efficient learning time. However, this method also makes RDNs susceptible to some structural or numerical inconsistencies. If the conditional probability distribution estimation method uses feature selection, it is possible that a given variable finds a dependency between itself and another variable while the latter doesn't find this dependency. In this case, the RDN is structurally inconsistent. In addition, if the joint distribution doesn't sum to one owing to the approximations caused by the independent learning, then it is called a numerical inconsistency. Such inconsistencies can, however, be bypassed during the inference step.
RDN Inference
RDN inference begins with the creation of an inference graph through a process called roll out. In this process, the model graph is rolled out over the data graph to form the inference graph. Next, Gibbs sampling technique can be used to recover a conditional probability distribution.
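BoostSRL and similar systems implement this pipeline in full; the fragment below is only a minimal sketch of the Gibbs step itself, assuming the rolled-out inference graph is given as a neighbour map and each variable's learned conditional is supplied as a function (all names here are illustrative):

```python
import random

def gibbs_marginals(variables, neighbours, p_true, n_sweeps=2000, burn_in=200):
    """Estimate P(v = 1) for each binary variable by Gibbs sampling.

    p_true(v, context) must return P(v = 1 | values of v's neighbours),
    i.e. the conditional distribution learned for v's attribute type.
    """
    state = {v: random.choice([0, 1]) for v in variables}   # arbitrary start
    counts = {v: 0 for v in variables}
    for sweep in range(n_sweeps):
        for v in variables:                          # resample one variable at a time
            context = {u: state[u] for u in neighbours[v]}
            state[v] = 1 if random.random() < p_true(v, context) else 0
        if sweep >= burn_in:                         # discard early, unmixed sweeps
            for v in variables:
                counts[v] += state[v]
    return {v: counts[v] / (n_sweeps - burn_in) for v in variables}

# toy inference graph: two objects whose attributes depend on each other
neighbours = {"a": ["b"], "b": ["a"]}
p_true = lambda v, ctx: 0.9 if all(ctx.values()) else 0.3   # made-up conditional
print(gibbs_marginals(["a", "b"], neighbours, p_true))
```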
Applications
RDNs have been applied in many real-world domains. The main advantages of RDNs are their ability to use relationship information to improve the model's performance. Diagnosis, forecasting, automated vision, sensor fusion and manufacturing control are some examples of problems where RDNs were applied.
Implementations
Some suggestions of RDN implementations:
• BoostSRL:[3] A system specialized on gradient-based boosting approach learning for different types of Statistical Relational Learning models, including Relational Dependency Networks. For more details and notations, see Natarajan et al. (2011).[2]
References
1. Neville, Jennifer; Jensen, David (2007). "Relational Dependency Networks" (PDF). Journal of Machine Learning Research. 8: 653–692. Retrieved 9 February 2020.
2. Natarajan, Sriraam; Khot, Tushar; Kersting, Kristian; Gutmann, Bernd; Shavlik, Jude (10 May 2011). "Gradient-based boosting for statistical relational learning: The relational dependency network case" (PDF). Machine Learning. 86 (1): 25–56. doi:10.1007/s10994-011-5244-9. Retrieved 9 February 2020.
3. Lab, StARLinG. "BoostSRL Wiki". StARLinG. Retrieved 9 February 2020.
Fourier analysis
In mathematics, Fourier analysis (/ˈfʊrieɪ, -iər/)[1] is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.
The subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into oscillatory components is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis. For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term Fourier analysis often refers to the study of both operations.
The decomposition process itself is called a Fourier transformation. Its output, the Fourier transform, is often given a more specific name, which depends on the domain and other properties of the function being transformed. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Each transform used for analysis (see list of Fourier-related transforms) has a corresponding inverse transform that can be used for synthesis.
To use Fourier analysis, data must be equally spaced. Different approaches have been developed for analyzing unequally spaced data, notably the least-squares spectral analysis (LSSA) methods that use a least squares fit of sinusoids to data samples, similar to Fourier analysis.[2][3] Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems.[4]
Applications
Fourier analysis has many scientific applications – in physics, partial differential equations, number theory, combinatorics, signal processing, digital image processing, probability theory, statistics, forensics, option pricing, cryptography, numerical analysis, acoustics, oceanography, sonar, optics, diffraction, geometry, protein structure analysis, and other areas.
This wide applicability stems from many useful properties of the transforms:
• The transforms are linear operators and, with proper normalization, are unitary as well (a property known as Parseval's theorem or, more generally, as the Plancherel theorem, and most generally via Pontryagin duality).[5]
• The transforms are usually invertible.
• The exponential functions are eigenfunctions of differentiation, which means that this representation transforms linear differential equations with constant coefficients into ordinary algebraic ones.[6] Therefore, the behavior of a linear time-invariant system can be analyzed at each frequency independently.
• By the convolution theorem, Fourier transforms turn the complicated convolution operation into simple multiplication, which means that they provide an efficient way to compute convolution-based operations such as signal filtering, polynomial multiplication, and multiplying large numbers.[7] (A numerical check follows this list.)
• The discrete version of the Fourier transform (see below) can be evaluated quickly on computers using fast Fourier transform (FFT) algorithms.[8]
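The convolution-theorem bullet above can be checked numerically in a few lines. A sketch using NumPy, comparing direct convolution with pointwise multiplication of zero-padded FFTs:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(16)

direct = np.convolve(x, h)               # O(N^2) time-domain convolution
m = len(x) + len(h) - 1                  # full linear-convolution length
via_fft = np.fft.irfft(np.fft.rfft(x, m) * np.fft.rfft(h, m), m)

assert np.allclose(direct, via_fft)      # the convolution theorem in action
```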
In forensics, laboratory infrared spectrophotometers use Fourier transform analysis for measuring the wavelengths of light at which a material will absorb in the infrared spectrum. The FT method is used to decode the measured signals and record the wavelength data. And by using a computer, these Fourier calculations are rapidly carried out, so that in a matter of seconds, a computer-operated FT-IR instrument can produce an infrared absorption pattern comparable to that of a prism instrument.[9]
Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image.
In signal processing, the Fourier transform often takes a time series or a function of continuous time, and maps it into a frequency spectrum. That is, it takes a function from the time domain into the frequency domain; it is a decomposition of a function into sinusoids of different frequencies; in the case of a Fourier series or discrete Fourier transform, the sinusoids are harmonics of the fundamental frequency of the function being analyzed.
When $s(t)$ is a function of time representing a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function $S(f)$ at frequency $f$ represents the amplitude of a frequency component whose initial phase is given by the angle of $S(f)$ (polar coordinates).
Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyze spatial frequencies, and indeed for nearly any function domain. This justifies their use in such diverse branches as image processing, heat conduction, and automatic control.
When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate narrowband components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.[10]
Some examples include:
• Equalization of audio recordings with a series of bandpass filters;
• Digital radio reception without a superheterodyne circuit, as in a modern cell phone or radio scanner;
• Image processing to remove periodic or anisotropic artifacts such as jaggies from interlaced video, strip artifacts from strip aerial photography, or wave patterns from radio frequency interference in a digital camera;
• Cross correlation of similar images for co-alignment;
• X-ray crystallography to reconstruct a crystal structure from its diffraction pattern;
• Fourier-transform ion cyclotron resonance mass spectrometry to determine the mass of ions from the frequency of cyclotron motion in a magnetic field;
• Many other forms of spectroscopy, including infrared and nuclear magnetic resonance spectroscopies;
• Generation of sound spectrograms used to analyze sounds;
• Passive sonar used to classify targets based on machinery noise.
Variants of Fourier analysis
(Continuous) Fourier transform
Main article: Fourier transform
Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, and it produces a continuous function of frequency, known as a frequency distribution. One function is transformed into another, and the operation is reversible. When the domain of the input (initial) function is time (t), and the domain of the output (final) function is ordinary frequency, the transform of function s(t) at frequency f is given by the complex number:
$S(f)=\int _{-\infty }^{\infty }s(t)\cdot e^{-i2\pi ft}\,dt.$
Evaluating this quantity for all values of f produces the frequency-domain function. Then s(t) can be represented as a recombination of complex exponentials of all possible frequencies:
$s(t)=\int _{-\infty }^{\infty }S(f)\cdot e^{i2\pi ft}\,df,$
which is the inverse transform formula. The complex number, S(f), conveys both amplitude and phase of frequency f.
See Fourier transform for much more information, including:
• conventions for amplitude normalization and frequency scaling/units
• transform properties
• tabulated transforms of specific functions
• an extension/generalization for functions of multiple dimensions, such as images.
Fourier series
Main article: Fourier series
The Fourier transform of a periodic function, sP(t), with period P, becomes a Dirac comb function, modulated by a sequence of complex coefficients:
$S[k]={\frac {1}{P}}\int _{P}s_{P}(t)\cdot e^{-i2\pi {\frac {k}{P}}t}\,dt,\quad k\in \mathbb {Z} ,$ (where ∫P is the integral over any interval of length P).
The inverse transform, known as Fourier series, is a representation of sP(t) in terms of a summation of a potentially infinite number of harmonically related sinusoids or complex exponential functions, each with an amplitude and phase specified by one of the coefficients:
$s_{P}(t)\ \ =\ \ {\mathcal {F}}^{-1}\left\{\sum _{k=-\infty }^{+\infty }S[k]\,\delta \left(f-{\frac {k}{P}}\right)\right\}\ \ =\ \ \sum _{k=-\infty }^{\infty }S[k]\cdot e^{i2\pi {\frac {k}{P}}t}.$
Any sP(t) can be expressed as a periodic summation of another function, s(t):
$s_{P}(t)\,\triangleq \,\sum _{m=-\infty }^{\infty }s(t-mP),$
and the coefficients are proportional to samples of S(f) at discrete intervals of 1/P:
$S[k]={\frac {1}{P}}\cdot S\left({\frac {k}{P}}\right).$
Note that any s(t) whose transform has the same discrete sample values can be used in the periodic summation. A sufficient condition for recovering s(t) (and therefore S(f)) from just these samples (i.e. from the Fourier series) is that the non-zero portion of s(t) be confined to a known interval of duration P, which is the frequency domain dual of the Nyquist–Shannon sampling theorem.
See Fourier series for more information, including the historical development.
Discrete-time Fourier transform (DTFT)
Main article: Discrete-time Fourier transform
The DTFT is the mathematical dual of the time-domain Fourier series. Thus, a convergent periodic summation in the frequency domain can be represented by a Fourier series, whose coefficients are samples of a related continuous time function:
$S_{\frac {1}{T}}(f)\ \triangleq \ \underbrace {\sum _{k=-\infty }^{\infty }S\left(f-{\frac {k}{T}}\right)\equiv \overbrace {\sum _{n=-\infty }^{\infty }s[n]\cdot e^{-i2\pi fnT}} ^{\text{Fourier series (DTFT)}}} _{\text{Poisson summation formula}}={\mathcal {F}}\left\{\sum _{n=-\infty }^{\infty }s[n]\ \delta (t-nT)\right\},\,$
which is known as the DTFT. Thus the DTFT of the s[n] sequence is also the Fourier transform of the modulated Dirac comb function.
The Fourier series coefficients (and inverse transform), are defined by:
$s[n]\ \triangleq \ T\int _{\frac {1}{T}}S_{\frac {1}{T}}(f)\cdot e^{i2\pi fnT}\,df=T\underbrace {\int _{-\infty }^{\infty }S(f)\cdot e^{i2\pi fnT}\,df} _{\triangleq \,s(nT)}.$
Parameter T corresponds to the sampling interval, and this Fourier series can now be recognized as a form of the Poisson summation formula. Thus we have the important result that when a discrete data sequence, s[n], is proportional to samples of an underlying continuous function, s(t), one can observe a periodic summation of the continuous Fourier transform, S(f). Note that any s(t) with the same discrete sample values produces the same DTFT. But under certain idealized conditions one can theoretically recover S(f) and s(t) exactly. A sufficient condition for perfect recovery is that the non-zero portion of S(f) be confined to a known frequency interval of width 1/T. When that interval is [−1/2T, 1/2T], the applicable reconstruction formula is the Whittaker–Shannon interpolation formula. This is a cornerstone in the foundation of digital signal processing.
Another reason to be interested in S1/T(f) is that it often provides insight into the amount of aliasing caused by the sampling process.
Applications of the DTFT are not limited to sampled functions. See Discrete-time Fourier transform for more information on this and other topics, including:
• normalized frequency units
• windowing (finite-length sequences)
• transform properties
• tabulated transforms of specific functions
Discrete Fourier transform (DFT)
Main article: Discrete Fourier transform
Similar to a Fourier series, the DTFT of a periodic sequence, $s_{N}[n]$, with period $N$, becomes a Dirac comb function, modulated by a sequence of complex coefficients (see DTFT § Periodic data):
$S[k]=\sum _{n}s_{N}[n]\cdot e^{-i2\pi {\frac {k}{N}}n},\quad k\in \mathbb {Z} ,$ (where Σn is the sum over any sequence of length N).
The S[k] sequence is what is customarily known as the DFT of one cycle of sN. It is also N-periodic, so it is never necessary to compute more than N coefficients. The inverse transform, also known as a discrete Fourier series, is given by:
$s_{N}[n]={\frac {1}{N}}\sum _{k}S[k]\cdot e^{i2\pi {\frac {n}{N}}k},$ where Σk is the sum over any sequence of length N.
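Both formulas can be transcribed literally into code. The NumPy sketch below implements them directly and checks them against the library FFT; it is an O(N²) illustration of the definitions, not an efficient algorithm:

```python
import numpy as np

def dft(s):
    # S[k] = Σ_n s[n] · exp(-i 2π k n / N)
    N = len(s)
    n = np.arange(N)
    return np.array([np.sum(s * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

def idft(S):
    # s[n] = (1/N) Σ_k S[k] · exp(+i 2π k n / N)
    N = len(S)
    k = np.arange(N)
    return np.array([np.sum(S * np.exp(2j * np.pi * k * n / N)) / N for n in range(N)])

s = np.random.default_rng(1).standard_normal(8)
S = dft(s)
assert np.allclose(S, np.fft.fft(s))   # agrees with the library implementation
assert np.allclose(idft(S), s)         # the inverse transform recovers s
```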
When sN[n] is expressed as a periodic summation of another function:
$s_{N}[n]\,\triangleq \,\sum _{m=-\infty }^{\infty }s[n-mN],$ and $s[n]\,\triangleq \,s(nT),$
the coefficients are proportional to samples of S1/T(f) at discrete intervals of 1/P = 1/NT:
$S[k]={\frac {1}{T}}\cdot S_{\frac {1}{T}}\left({\frac {k}{P}}\right).$
Conversely, when one wants to compute an arbitrary number (N) of discrete samples of one cycle of a continuous DTFT, S1/T(f), it can be done by computing the relatively simple DFT of sN[n], as defined above. In most cases, N is chosen equal to the length of non-zero portion of s[n]. Increasing N, known as zero-padding or interpolation, results in more closely spaced samples of one cycle of S1/T(f). Decreasing N, causes overlap (adding) in the time-domain (analogous to aliasing), which corresponds to decimation in the frequency domain. (see Discrete-time Fourier transform § L=N×I) In most cases of practical interest, the s[n] sequence represents a longer sequence that was truncated by the application of a finite-length window function or FIR filter array.
The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers.
See Discrete Fourier transform for much more information, including:
• transform properties
• applications
• tabulated transforms of specific functions
Summary
For periodic functions, both the Fourier transform and the DTFT comprise only a discrete set of frequency components (Fourier series), and the transforms diverge at those frequencies. One common practice (not discussed above) is to handle that divergence via Dirac delta and Dirac comb functions. But the same spectral information can be discerned from just one cycle of the periodic function, since all the other cycles are identical. Similarly, finite-duration functions can be represented as a Fourier series, with no actual loss of information except that the periodicity of the inverse transform is a mere artifact.
It is common in practice for the duration of s(•) to be limited to the period, P or N. But these formulas do not require that condition.
s(t) transforms (continuous-time):
• Transform (continuous frequency): $S(f)\,\triangleq \,\int _{-\infty }^{\infty }s(t)\cdot e^{-i2\pi ft}\,dt$
• Transform (discrete frequencies): $\overbrace {{\frac {1}{P}}\cdot S\left({\frac {k}{P}}\right)} ^{S[k]}\,\triangleq \,{\frac {1}{P}}\int _{-\infty }^{\infty }s(t)\cdot e^{-i2\pi {\frac {k}{P}}t}\,dt\equiv {\frac {1}{P}}\int _{P}s_{P}(t)\cdot e^{-i2\pi {\frac {k}{P}}t}\,dt$
• Inverse (continuous frequency): $s(t)=\int _{-\infty }^{\infty }S(f)\cdot e^{i2\pi ft}\,df$
• Inverse (discrete frequencies): $\underbrace {s_{P}(t)=\sum _{k=-\infty }^{\infty }S[k]\cdot e^{i2\pi {\frac {k}{P}}t}} _{\text{Poisson summation formula (Fourier series)}}\,$
s(nT) transforms (discrete-time):
• Transform (continuous frequency): $\underbrace {{\frac {1}{T}}S_{\frac {1}{T}}(f)\,\triangleq \,\sum _{n=-\infty }^{\infty }s(nT)\cdot e^{-i2\pi fnT}} _{\text{Poisson summation formula (DTFT)}}$
• Transform (discrete frequencies): ${\begin{aligned}\overbrace {{\frac {1}{T}}S_{\frac {1}{T}}\left({\frac {k}{NT}}\right)} ^{S[k]}\,&\triangleq \,\sum _{n=-\infty }^{\infty }s(nT)\cdot e^{-i2\pi {\frac {kn}{N}}}\\&\equiv \underbrace {\sum _{n}s_{P}(nT)\cdot e^{-i2\pi {\frac {kn}{N}}}} _{\text{DFT}}\,\end{aligned}}$
• Inverse (continuous frequency): $s(nT)=T\int _{\frac {1}{T}}{\frac {1}{T}}S_{\frac {1}{T}}(f)\cdot e^{i2\pi fnT}\,df$ and $\sum _{n=-\infty }^{\infty }s(nT)\cdot \delta (t-nT)=\underbrace {\int _{-\infty }^{\infty }{\frac {1}{T}}\ S_{\frac {1}{T}}(f)\cdot e^{i2\pi ft}\,df} _{\text{inverse Fourier transform}}\,$
• Inverse (discrete frequencies): ${\begin{aligned}s_{P}(nT)&=\overbrace {{\frac {1}{N}}\sum _{k}S[k]\cdot e^{i2\pi {\frac {kn}{N}}}} ^{\text{inverse DFT}}\\&={\tfrac {1}{P}}\sum _{k}S_{\frac {1}{T}}\left({\frac {k}{P}}\right)\cdot e^{i2\pi {\frac {kn}{N}}}\end{aligned}}$
Symmetry properties
When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:[11]
${\begin{array}{rccccccccc}{\text{Time domain}}&s&=&s_{_{\text{RE}}}&+&s_{_{\text{RO}}}&+&is_{_{\text{IE}}}&+&\underbrace {i\ s_{_{\text{IO}}}} \\&{\Bigg \Updownarrow }{\mathcal {F}}&&{\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}\\{\text{Frequency domain}}&S&=&S_{\text{RE}}&+&\overbrace {\,i\ S_{\text{IO}}\,} &+&iS_{\text{IE}}&+&S_{\text{RO}}\end{array}}$
From this, various relationships are apparent, for example:
• The transform of a real-valued function (sRE + sRO) is the even symmetric function SRE + i SIO. Conversely, an even-symmetric transform implies a real-valued time-domain.
• The transform of an imaginary-valued function (i sIE + i sIO) is the odd symmetric function SRO + i SIE, and the converse is true.
• The transform of an even-symmetric function (sRE + i sIO) is the real-valued function SRE + SRO, and the converse is true.
• The transform of an odd-symmetric function (sRO + i sIE) is the imaginary-valued function i SIE + i SIO, and the converse is true.
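These four mappings can be checked numerically. The following sketch is an illustration for finite sequences, where the even and odd parts are taken circularly (x[−n] meaning x[(N−n) mod N]); the random length-16 signal is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(16) + 1j * rng.standard_normal(16)

def even(x):
    # Circular even part (x[n] + x[-n])/2; np.roll(x[::-1], 1)[n] = x[(N-n) mod N].
    return (x + np.roll(x[::-1], 1)) / 2

def odd(x):
    # Circular odd part (x[n] - x[-n])/2.
    return (x - np.roll(x[::-1], 1)) / 2

s_RE, s_RO = even(s.real), odd(s.real)
s_IE, s_IO = even(s.imag), odd(s.imag)
S = np.fft.fft(s)

# The four mappings from the table above, verified component by component:
assert np.allclose(np.fft.fft(s_RE), even(S.real))             # s_RE  -> S_RE
assert np.allclose(np.fft.fft(s_RO), 1j * odd(S.imag))         # s_RO  -> i S_IO
assert np.allclose(np.fft.fft(1j * s_IE), 1j * even(S.imag))   # i s_IE -> i S_IE
assert np.allclose(np.fft.fft(1j * s_IO), odd(S.real))         # i s_IO -> S_RO
```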
History
See also: Fourier series § Historical development
An early form of harmonic series dates back to ancient Babylonian mathematics, where they were used to compute ephemerides (tables of astronomical positions).[12][13][14][15]
The Classical Greek concepts of deferent and epicycle in the Ptolemaic system of astronomy were related to Fourier series (see Deferent and epicycle § Mathematical formalism).
In modern times, variants of the discrete Fourier transform were used by Alexis Clairaut in 1754 to compute an orbit,[16] which has been described as the first formula for the DFT,[17] and in 1759 by Joseph Louis Lagrange, in computing the coefficients of a trigonometric series for a vibrating string.[17] Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform); a true cosine+sine DFT was used by Gauss in 1805 for trigonometric interpolation of asteroid orbits.[18] Euler and Lagrange both discretized the vibrating string problem, using what would today be called samples.[17]
An early modern development toward Fourier analysis was the 1770 paper Réflexions sur la résolution algébrique des équations by Lagrange, which in the method of Lagrange resolvents used a complex Fourier decomposition to study the solution of a cubic:[19] Lagrange transformed the roots x1, x2, x3 into the resolvents:
${\begin{aligned}r_{1}&=x_{1}+x_{2}+x_{3}\\r_{2}&=x_{1}+\zeta x_{2}+\zeta ^{2}x_{3}\\r_{3}&=x_{1}+\zeta ^{2}x_{2}+\zeta x_{3}\end{aligned}}$
where ζ is a primitive cube root of unity; this transformation is precisely the DFT of order 3, as the sketch below checks numerically.
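In the following illustration, ζ is taken as $e^{-2\pi i/3}$ to match the usual DFT sign convention, and the root values are arbitrary.

```python
import numpy as np

zeta = np.exp(-2j * np.pi / 3)       # a primitive cube root of unity
x = np.array([1.0, 2.0, 3.0])        # illustrative "roots" x1, x2, x3

# The resolvent map (x1, x2, x3) -> (r1, r2, r3) written as a matrix:
W = np.array([[1, 1,       1      ],
              [1, zeta,    zeta**2],
              [1, zeta**2, zeta   ]])
r = W @ x

assert np.allclose(r, np.fft.fft(x))  # the resolvents are the order-3 DFT of x
```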
A number of authors, notably Jean le Rond d'Alembert and Carl Friedrich Gauss, used trigonometric series to study the heat equation,[20] but the breakthrough development was the 1807 paper Mémoire sur la propagation de la chaleur dans les corps solides by Joseph Fourier, whose crucial insight was to model all functions by trigonometric series, introducing the Fourier series.
Historians are divided as to how much to credit Lagrange and others for the development of Fourier theory: Daniel Bernoulli and Leonhard Euler had introduced trigonometric representations of functions, and Lagrange had given the Fourier series solution to the wave equation, so Fourier's contribution was mainly the bold claim that an arbitrary function could be represented by a Fourier series.[17]
The subsequent development of the field is known as harmonic analysis, and is also an early instance of representation theory.
The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 by Carl Friedrich Gauss when interpolating measurements of the orbit of the asteroids Juno and Pallas, although that particular FFT algorithm is more often attributed to its modern rediscoverers Cooley and Tukey.[18][16]
Time–frequency transforms
Further information: Time–frequency analysis
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information.
As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the Gabor transform or fractional Fourier transform (FRFT), or can use different functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.
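As an illustration (assuming SciPy is available), the following sketch computes a short-time Fourier transform of a chirp; the sample rate, window length, and test signal are arbitrary choices.

```python
import numpy as np
from scipy.signal import stft

fs = 1000                                  # sample rate in Hz (illustrative)
t = np.arange(fs) / fs                     # one second of samples
x = np.cos(2 * np.pi * 125 * t**2)         # chirp: instantaneous frequency 250*t Hz

f, tau, Zxx = stft(x, fs=fs, nperseg=128)  # 128-sample Hann windows
# |Zxx[i, j]| is the signal's energy near frequency f[i] at time tau[j].
# A longer window sharpens frequency resolution but blurs time, and vice
# versa: the uncertainty trade-off described above.
peak = f[np.abs(Zxx).argmax(axis=0)]       # dominant frequency in each frame
print(np.round(peak[:5]))                  # rises over time, tracking the chirp
```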
Fourier transforms on arbitrary locally compact abelian topological groups
The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. See also Pontryagin duality for the generalized underpinnings of the Fourier transform.
More specifically, Fourier analysis can be done on cosets,[21] even discrete cosets.
See also
• Conjugate Fourier series
• Generalized Fourier series
• Fourier–Bessel series
• Fourier-related transforms
• Laplace transform (LT)
• Two-sided Laplace transform
• Mellin transform
• Non-uniform discrete Fourier transform (NDFT)
• Quantum Fourier transform (QFT)
• Number-theoretic transform
• Basis vectors
• Bispectrum
• Characteristic function (probability theory)
• Orthogonal functions
• Schwartz space
• Spectral density
• Spectral density estimation
• Spectral music
• Walsh function
• Wavelet
Notes
1. $\int _{P}\left(\sum _{m=-\infty }^{\infty }s(t-mP)\right)\cdot e^{-i2\pi {\frac {k}{P}}t}\,dt=\underbrace {\int _{-\infty }^{\infty }s(t)\cdot e^{-i2\pi {\frac {k}{P}}t}\,dt} _{\triangleq \,S\left({\frac {k}{P}}\right)}$
2. We may also note that:
${\begin{aligned}\sum _{n=-\infty }^{+\infty }T\cdot s(nT)\delta (t-nT)&=\sum _{n=-\infty }^{+\infty }T\cdot s(t)\delta (t-nT)\\&=s(t)\cdot T\sum _{n=-\infty }^{+\infty }\delta (t-nT).\end{aligned}}$
Consequently, a common practice is to model "sampling" as a multiplication by the Dirac comb function, which of course is only "possible" in a purely mathematical sense.
3. Note that this definition intentionally differs from the DTFT section by a factor of T. This facilitates the "$s(nT)$ transforms" table. Alternatively, $s[n]$ can be defined as $T\cdot s(nT),$ in which case $S[k]=S_{\frac {1}{T}}\left({\frac {k}{P}}\right).$
4. $\sum _{n=0}^{N-1}\left(\sum _{m=-\infty }^{\infty }s([n-mN]T)\right)\cdot e^{-i2\pi {\frac {k}{N}}n}=\underbrace {\sum _{n=-\infty }^{\infty }s(nT)\cdot e^{-i2\pi {\frac {k}{N}}n}} _{\triangleq \,{\frac {1}{T}}S_{\frac {1}{T}}\left({\frac {k}{NT}}\right)}$
References
1. "Fourier". Dictionary.com Unabridged (Online). n.d.
2. Cafer Ibanoglu (2000). Variable Stars As Essential Astrophysical Tools. Springer. ISBN 0-7923-6084-2.
3. D. Scott Birney; David Oesper; Guillermo Gonzalez (2006). Observational Astronomy. Cambridge University Press. ISBN 0-521-85370-2.
4. Press (2007). Numerical Recipes (3rd ed.). Cambridge University Press. ISBN 978-0-521-88068-8.
5. Rudin, Walter (1990). Fourier Analysis on Groups. Wiley-Interscience. ISBN 978-0-471-52364-2.
6. Evans, L. (1998). Partial Differential Equations. American Mathematical Society. ISBN 978-3-540-76124-2.
7. Knuth, Donald E. (1997). The Art of Computer Programming Volume 2: Seminumerical Algorithms (3rd ed.). Addison-Wesley Professional. Section 4.3.3.C: Discrete Fourier transforms, pg.305. ISBN 978-0-201-89684-8.
8. Conte, S. D.; de Boor, Carl (1980). Elementary Numerical Analysis (Third ed.). New York: McGraw Hill, Inc. ISBN 978-0-07-066228-5.
9. Saferstein, Richard (2013). Criminalistics: An Introduction to Forensic Science.
10. Rabiner, Lawrence R.; Gold, Bernard (1975). Theory and Application of Digital Signal Processing. Englewood Cliffs, NJ. ISBN 9780139141010.
11. Proakis, John G.; Manolakis, Dimitri G. (1996), Digital Signal Processing: Principles, Algorithms and Applications (3 ed.), New Jersey: Prentice-Hall International, p. 291, ISBN 9780133942897
12. Prestini, Elena (2004). The Evolution of Applied Harmonic Analysis: Models of the Real World. Birkhäuser. p. 62. ISBN 978-0-8176-4125-2.
13. Rota, Gian-Carlo; Palombi, Fabrizio (1997). Indiscrete Thoughts. Birkhäuser. p. 11. ISBN 978-0-8176-3866-5.
14. Neugebauer, Otto (1969) [1957]. The Exact Sciences in Antiquity. pp. 1–191. ISBN 978-0-486-22332-2.
15. Brack-Bernsen, Lis; Brack, Matthias (2004). "Analyzing shell structure from Babylonian and modern times". International Journal of Modern Physics E. 13 (1): 247. arXiv:physics/0310126. Bibcode:2004IJMPE..13..247B. doi:10.1142/S0218301304002028. S2CID 15704235.
16. Terras, Audrey (1999). Fourier Analysis on Finite Groups and Applications. Cambridge University Press. pp. 30–32. ISBN 978-0-521-45718-7.
17. Briggs, William L.; Henson, Van Emden (1995). The DFT: An Owner's Manual for the Discrete Fourier Transform. SIAM. pp. 2–4. ISBN 978-0-89871-342-8.
18. Heideman, M.T.; Johnson, D. H.; Burrus, C. S. (1984). "Gauss and the history of the fast Fourier transform". IEEE ASSP Magazine. 1 (4): 14–21. doi:10.1109/MASSP.1984.1162257. S2CID 10032502.
19. Knapp, Anthony W. (2006). Basic Algebra. Springer. p. 501. ISBN 978-0-8176-3248-9.
20. Narasimhan, T.N. (February 1999). "Fourier's heat conduction equation: History, influence, and connections". Reviews of Geophysics. 37 (1): 151–172. Bibcode:1999RvGeo..37..151N. CiteSeerX 10.1.1.455.4798. doi:10.1029/1998RG900006. ISSN 1944-9208. OCLC 5156426043. S2CID 38786145.
21. Forrest, Brian (1998). "Fourier Analysis on Coset Spaces". Rocky Mountain Journal of Mathematics. 28. doi:10.1216/rmjm/1181071828.
Further reading
• Howell, Kenneth B. (2001). Principles of Fourier Analysis. CRC Press. ISBN 978-0-8493-8275-8.
• Kamen, E.W.; Heck, B.S. (2 March 2000). Fundamentals of Signals and Systems Using the Web and Matlab (2 ed.). Prentice Hall. ISBN 978-0-13-017293-8.
• Müller, Meinard (2015). The Fourier Transform in a Nutshell (PDF). Springer. In Fundamentals of Music Processing, Section 2.1, pp. 40–56. doi:10.1007/978-3-319-21945-5. ISBN 978-3-319-21944-8. S2CID 8691186. Archived (PDF) from the original on 8 April 2016.
• Polyanin, A. D.; Manzhirov, A. V. (1998). Handbook of Integral Equations. Boca Raton: CRC Press. ISBN 978-0-8493-2876-3.
• Smith, Steven W. (1999). The Scientist and Engineer's Guide to Digital Signal Processing (Second ed.). San Diego: California Technical Publishing. ISBN 978-0-9660176-3-2.
• Stein, E. M.; Weiss, G. (1971). Introduction to Fourier Analysis on Euclidean Spaces. Princeton University Press. ISBN 978-0-691-08078-9.
External links
• Tables of Integral Transforms at EqWorld: The World of Mathematical Equations.
• An Intuitive Explanation of Fourier Theory by Steven Lehar.
• Lectures on Image Processing: A collection of 18 lectures in pdf format from Vanderbilt University. Lecture 6 is on the 1- and 2-D Fourier Transform. Lectures 7–15 make use of it., by Alan Peters
• Moriarty, Philip; Bowley, Roger (2009). "Σ Summation (and Fourier Analysis)". Sixty Symbols. Brady Haran for the University of Nottingham.
• Introduction to Fourier analysis of time series at Medium
|
Wikipedia
|
Relationship between mathematics and physics
The relationship between mathematics and physics has been a subject of study of philosophers, mathematicians and physicists since Antiquity, and more recently also by historians and educators.[2] Generally considered a relationship of great intimacy,[3] mathematics has been described as "an essential tool for physics"[4] and physics has been described as "a rich source of inspiration and insight in mathematics".[5]
In his work Physics, one of the topics Aristotle treats is how the study carried out by mathematicians differs from that carried out by physicists.[6] Considerations of mathematics as the language of nature can be found in the ideas of the Pythagoreans, in the convictions that "Numbers rule the world" and "All is number";[7][8] two millennia later they were also expressed by Galileo Galilei: "The book of nature is written in the language of mathematics".[9][10]
Before giving a mathematical proof for the formula for the volume of a sphere, Archimedes used physical reasoning to discover the solution (imagining the balancing of bodies on a scale).[11] From the seventeenth century, many of the most important advances in mathematics appeared motivated by the study of physics, and this continued in the following centuries (although in the nineteenth century mathematics started to become increasingly independent from physics).[12][13] The creation and development of calculus were strongly linked to the needs of physics:[14] There was a need for a new mathematical language to deal with the new dynamics that had arisen from the work of scholars such as Galileo Galilei and Isaac Newton.[15] During this period there was little distinction between physics and mathematics;[16] as an example, Newton regarded geometry as a branch of mechanics.[17] As time progressed, the mathematics used in physics has become increasingly sophisticated, as in the case of superstring theory.[18]
Philosophical problems
Some of the problems considered in the philosophy of mathematics are the following:
• Explain the effectiveness of mathematics in the study of the physical world: "At this point an enigma presents itself which in all ages has agitated inquiring minds. How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?" —Albert Einstein, in Geometry and Experience (1921).[19]
• Clearly delineate mathematics and physics: for some results or discoveries, it is difficult to say to which area they belong, to mathematics or to physics.[20]
• What is the geometry of physical space?[21]
• What is the origin of the axioms of mathematics?[22]
• How does already existing mathematics influence the creation and development of physical theories?[23]
• Is arithmetic analytic or synthetic? (from Kant, see Analytic–synthetic distinction)[24]
• What is essentially different between doing a physical experiment to see the result and making a mathematical calculation to see the result? (from the Turing–Wittgenstein debate)[25]
• Do Gödel's incompleteness theorems imply that physical theories will always be incomplete? (from Stephen Hawking)[26][27]
• Is mathematics invented or discovered? (millennia-old question, raised among others by Mario Livio)[28]
Education
In recent times the two disciplines have most often been taught separately, despite all the interrelations between physics and mathematics.[29] This led some professional mathematicians who were also interested in mathematics education, such as Felix Klein, Richard Courant, Vladimir Arnold and Morris Kline, to strongly advocate teaching mathematics in a way more closely related to the physical sciences.[30][31]
See also
• Pure mathematics
• Applied mathematics
• Theoretical physics
• Mathematical physics
• Non-Euclidean geometry
• Fourier series
• Conic section
• Kepler's laws of planetary motion
• Saving the phenomena
• Positron#History
• The Unreasonable Effectiveness of Mathematics in the Natural Sciences
• Mathematical universe hypothesis
• Zeno's paradoxes
• Axiomatic system
• Mathematical model
• Hilbert's sixth problem
• Empiricism
• Logicism
• Formalism
• Mathematics of general relativity
• Bourbaki
• Experimental mathematics
• History of Maxwell's equations
• Philosophy of mathematics#Platonism
• History of astronomy
• Why Johnny Can't Add
References
1. Jed Z. Buchwald; Robert Fox (10 October 2013). The Oxford Handbook of the History of Physics. OUP Oxford. p. 128. ISBN 978-0-19-151019-9.
2. Uhden, Olaf; Karam, Ricardo; Pietrocola, Maurício; Pospiech, Gesche (20 October 2011). "Modelling Mathematical Reasoning in Physics Education". Science & Education. 21 (4): 485–506. Bibcode:2012Sc&Ed..21..485U. doi:10.1007/s11191-011-9396-6. S2CID 122869677.
3. Francis Bailly; Giuseppe Longo (2011). Mathematics and the Natural Sciences: The Physical Singularity of Life. World Scientific. p. 149. ISBN 978-1-84816-693-6.
4. Sanjay Moreshwar Wagh; Dilip Abasaheb Deshpande (27 September 2012). Essentials of Physics. PHI Learning Pvt. Ltd. p. 3. ISBN 978-81-203-4642-0.
5. Atiyah, Michael (1990). On the Work of Edward Witten (PDF). International Congress of Mathematicians. Japan. pp. 31–35. Archived from the original (PDF) on 2017-03-01.
6. Lear, Jonathan (1990). Aristotle: the desire to understand (Repr. ed.). Cambridge [u.a.]: Cambridge Univ. Press. p. 232. ISBN 9780521347624.
7. Gerard Assayag; Hans G. Feichtinger; José-Francisco Rodrigues (10 July 2002). Mathematics and Music: A Diderot Mathematical Forum. Springer. p. 216. ISBN 978-3-540-43727-7.
8. Al-Rasasi, Ibrahim (21 June 2004). "All is number" (PDF). King Fahd University of Petroleum and Minerals. Archived from the original (PDF) on 28 December 2014. Retrieved 13 June 2015.
9. Aharon Kantorovich (1 July 1993). Scientific Discovery: Logic and Tinkering. SUNY Press. p. 59. ISBN 978-0-7914-1478-1.
10. Kyle Forinash, William Rumsey, Chris Lang, Galileo's Mathematical Language of Nature Archived 2013-09-27 at the Wayback Machine.
11. Arthur Mazer (26 September 2011). The Ellipse: A Historical and Mathematical Journey. John Wiley & Sons. p. 5. Bibcode:2010ehmj.book.....M. ISBN 978-1-118-21143-4.
12. E. J. Post, A History of Physics as an Exercise in Philosophy, p. 76.
13. Arkady Plotnitsky, Niels Bohr and Complementarity: An Introduction, p. 177.
14. Roger G. Newton (1997). The Truth of Science: Physical Theories and Reality. Harvard University Press. pp. 125–126. ISBN 978-0-674-91092-8.
15. Eoin P. O'Neill (editor), What Did You Do Today, Professor?: Fifteen Illuminating Responses from Trinity College Dublin, p. 62.
16. Timothy Gowers; June Barrow-Green; Imre Leader (18 July 2010). The Princeton Companion to Mathematics. Princeton University Press. p. 7. ISBN 978-1-4008-3039-8.
17. David E. Rowe (2008). "Euclidean Geometry and Physical Space". The Mathematical Intelligencer. 28 (2): 51–59. doi:10.1007/BF02987157. S2CID 56161170.
18. "String theories". Particle Central. Four Peaks Technologies. Retrieved 13 June 2015.
19. Albert Einstein, Geometry and Experience.
20. Pierre Bergé, Des rythmes au chaos.
21. Gary Carl Hatfield (1990). The Natural and the Normative: Theories of Spatial Perception from Kant to Helmholtz. MIT Press. p. 223. ISBN 978-0-262-08086-6.
22. Gila Hanna; Hans Niels Jahnke; Helmut Pulte (4 December 2009). Explanation and Proof in Mathematics: Philosophical and Educational Perspectives. Springer Science & Business Media. pp. 29–30. ISBN 978-1-4419-0576-5.
23. "FQXi Community Trick or Truth: the Mysterious Connection Between Physics and Mathematics". Archived from the original on 14 December 2021. Retrieved 16 April 2015.
24. James Van Cleve Professor of Philosophy Brown University (16 July 1999). Problems from Kant. Oxford University Press, USA. p. 22. ISBN 978-0-19-534701-2.
25. Ludwig Wittgenstein; R. G. Bosanquet; Cora Diamond (15 October 1989). Wittgenstein's Lectures on the Foundations of Mathematics, Cambridge, 1939. University of Chicago Press. p. 96. ISBN 978-0-226-90426-9.
26. Pudlák, Pavel (2013). Logical Foundations of Mathematics and Computational Complexity: A Gentle Introduction. Springer Science & Business Media. p. 659. ISBN 978-3-319-00119-7.
27. "Stephen Hawking. "Godel and the End of the Universe"". Archived from the original on 2020-05-29. Retrieved 2015-06-12.
28. Mario Livio (August 2011). "Why math works?". Scientific American: 80–83.
29. Karam, Pospiech & Pietrocola (2010). "Mathematics in physics lessons: developing structural skills".
30. Stakhov, "Dirac’s Principle of Mathematical Beauty, Mathematics of Harmony".
31. Richard Lesh; Peter L. Galbraith; Christopher R. Haines; Andrew Hurford (2009). Modeling Students' Mathematical Modeling Competencies: ICTMA 13. Springer. p. 14. ISBN 978-1-4419-0561-1.
Further reading
• Arnold, V. I. (1999). "Mathematics and physics: mother and daughter or sisters?". Physics-Uspekhi. 42 (12): 1205–1217. Bibcode:1999PhyU...42.1205A. doi:10.1070/pu1999v042n12abeh000673. S2CID 250835608.
• Arnold, V. I. (1998). Translated by A. V. Goryunov. "On teaching mathematics". Russian Mathematical Surveys. 53 (1): 229–236. Bibcode:1998RuMaS..53..229A. doi:10.1070/RM1998v053n01ABEH000005. S2CID 250833432. Archived from the original on 28 April 2017. Retrieved 29 May 2014.
• Atiyah, M.; Dijkgraaf, R.; Hitchin, N. (1 February 2010). "Geometry and physics". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 368 (1914): 913–926. Bibcode:2010RSPTA.368..913A. doi:10.1098/rsta.2009.0227. PMC 3263806. PMID 20123740.
• Boniolo, Giovanni; Budinich, Paolo; Trobok, Majda, eds. (2005). The Role of Mathematics in Physical Sciences: Interdisciplinary and Philosophical Aspects. Dordrecht: Springer. ISBN 9781402031069.
• Colyvan, Mark (2001). "The Miracle of Applied Mathematics" (PDF). Synthese. 127 (3): 265–277. doi:10.1023/A:1010309227321. S2CID 40819230. Retrieved 30 May 2014.
• Dirac, Paul (1938–1939). "The Relation between Mathematics and Physics". Proceedings of the Royal Society of Edinburgh. 59 Part II: 122–129. Retrieved 30 March 2014.
• Feynman, Richard P. (1992). "The Relation of Mathematics to Physics". The Character of Physical Law (Reprint ed.). London: Penguin Books. pp. 35–58. ISBN 978-0140175059.
• Hardy, G. H. (2005). A Mathematician's Apology (PDF) (First electronic ed.). University of Alberta Mathematical Sciences Society. Archived from the original (PDF) on 9 October 2021. Retrieved 30 May 2014.
• Hitchin, Nigel (2007). "Interaction between mathematics and physics". ARBOR Ciencia, Pensamiento y Cultura. 725. Retrieved 31 May 2014.
• Harvey, Alex (2012). "The Reasonable Effectiveness of Mathematics in the Physical Sciences". General Relativity and Gravitation. 43 (2011): 3057–3064. arXiv:1212.5854. Bibcode:2011GReGr..43.3657H. doi:10.1007/s10714-011-1248-9. S2CID 121985996.
• Neumann, John von (1947). "The Mathematician". Works of the Mind. 1 (1): 180–196.
• Poincaré, Henri (1907). The Value of Science (PDF). Translated by George Bruce Halsted. New York: The Science Press.
• Schlager, Neil; Lauer, Josh, eds. (2000). "The Intimate Relation between Mathematics and Physics". Science and Its Times: Understanding the Social Significance of Scientific Discovery. Vol. 7: 1950 to Present. Gale Group. pp. 226–229. ISBN 978-0-7876-3939-6.
• Vafa, Cumrun (2000). "On the Future of Mathematics/Physics Interaction". Mathematics: Frontiers and Perspectives. USA: AMS. pp. 321–328. ISBN 978-0-8218-2070-4.
• Witten, Edward (1986). Physics and Geometry (PDF). Proceedings of the International Conference of Mathematicians. Berkeley, California. pp. 267–303. Archived from the original (PDF) on 2013-12-28. Retrieved 2014-05-27.
• Eugene Wigner (1960). "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". Communications on Pure and Applied Mathematics. 13 (1): 1–14. Bibcode:1960CPAM...13....1W. doi:10.1002/cpa.3160130102. S2CID 6112252. Archived from the original on 2011-02-28. Retrieved 2014-05-27.
External links
• Gregory W. Moore – Physical Mathematics and the Future (July 4, 2014)
• IOP Institute of Physics – Mathematical Physics: What is it and why do we need it? (September 2014)
|
Wikipedia
|
Relationship square
In statistics, the relationship square is a graphical representation for use in the factorial analysis of an individuals × variables table. This representation supplements the classical representations provided by principal component analysis (PCA) or multiple correspondence analysis (MCA), namely those of individuals, of quantitative variables (the correlation circle) and of the categories of qualitative variables (placed at the centroid of the individuals who possess them). It is especially important in factor analysis of mixed data (FAMD) and in multiple factor analysis (MFA).
Definition of relationship square in the MCA frame
The first interest of the relationship square is to represent the variables themselves, not their categories, which is all the more valuable as there are many variables. For this, we calculate, for each qualitative variable $j$ and each factor $F_{s}$, the squared correlation ratio between $F_{s}$ and the variable $j$, usually denoted $\eta ^{2}(j,F_{s})$. Here $F_{s}$, the factor of rank $s$, is the vector of coordinates of the individuals along the axis of rank $s$; in PCA, $F_{s}$ is called the principal component of rank $s$.
Thus, to each factorial plane we can associate a representation of the qualitative variables themselves. Since their coordinates lie between 0 and 1, the variables appear within the square whose vertices are the points (0,0), (0,1), (1,0) and (1,1).
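As an illustration, the squared correlation ratio can be computed directly from factor scores. The Python sketch below assumes the individuals' coordinates on a factor are already available (e.g., from an MCA); the numeric values are invented for illustration and echo the variable $q_{2}$ of Table 1 below.

```python
import numpy as np

def eta_squared(factor, labels):
    """Squared correlation ratio between a numeric factor (one score per
    individual) and a qualitative variable (one category label each):
    between-category variance divided by total variance."""
    grand_mean = factor.mean()
    groups = [factor[labels == c] for c in np.unique(labels)]
    between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / len(factor)
    return between / np.var(factor)

# Illustrative data: coordinates of six individuals on the first factor.
F1 = np.array([0.8, 0.6, 0.9, -0.7, -0.8, -0.8])
q2 = np.array(["d", "d", "d", "e", "e", "e"])
print(eta_squared(F1, q2))   # close to 1: q2 is strongly tied to factor 1
```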
Example in MCA
Six individuals $(i_{1},\ldots ,i_{6})$ are described by three variables $(q_{1},q_{2},q_{3})$ having respectively 3, 2 and 3 categories. For example, individual $i_{1}$ possesses category $a$ of $q_{1}$, $d$ of $q_{2}$ and $f$ of $q_{3}$.
Table 1. Minute data set for MCA.
 | $q_{1}$ | $q_{2}$ | $q_{3}$
$i_{1}$ | $q_{1}$-a | $q_{2}$-d | $q_{3}$-f
$i_{2}$ | $q_{1}$-b | $q_{2}$-d | $q_{3}$-f
$i_{3}$ | $q_{1}$-c | $q_{2}$-d | $q_{3}$-g
$i_{4}$ | $q_{1}$-a | $q_{2}$-e | $q_{3}$-g
$i_{5}$ | $q_{1}$-b | $q_{2}$-e | $q_{3}$-h
$i_{6}$ | $q_{1}$-c | $q_{2}$-e | $q_{3}$-h
Applied to these data, the MCA function included in the R package FactoMineR produces the classical graph shown in Figure 1.
The relationship square (Figure 2) makes the classical factorial plane easier to read. It indicates that:
• The first factor is related to all three variables, but especially $q_{3}$ (which has a very high coordinate along the first axis) and then $q_{2}$.
• The second factor is related only to $q_{1}$ and $q_{3}$ (and not to $q_{2}$, whose coordinate along axis 2 is 0), and to each in a strong and equal manner.
All this is visible in the classical graphic, but not so clearly. The role of the relationship square is first to assist in reading the conventional graphic, which is valuable when the variables are numerous and have many coordinates.
Extensions
This representation may be supplemented with those of quantitative variables, the coordinates of the latter being the square of correlation coefficients (and not of correlation ratios). Thus, the second advantage of the relationship square lies in the ability to represent simultaneously quantitative and qualitative variables.[1]
The relationship square can be constructed from any factorial analysis of an individuals × variables table. In particular, it is (or should be) used systematically:
• in multiple correspondences analysis (MCA);[2]
• in principal components analysis (PCA) when there are many supplementary variables;
• in factor analysis of mixed data (FAMD).
An extension of this graphic to groups of variables (how can a group of variables be represented by a single point?) is used in multiple factor analysis (MFA).
History
The idea of representing the qualitative variables themselves by a point (and not their categories) is due to Brigitte Escofier.[3] The graphic as it is used now was introduced by Brigitte Escofier and Jérôme Pagès in the framework of multiple factor analysis.[4]
Conclusion
In MCA, the relationship square provides a synthetic view of the connections between mixed variables, all the more valuable as there are many variables having many categories. This representation can be useful in any factorial analysis when there are numerous mixed variables, active and/or supplementary.
References
1. Several examples with two types of variables are in Pagès Jérôme (2014). Multiple Factor Analysis by Example Using R. Chapman & Hall/CRC The R Series London 272 p
2. Husson F., Lê S. & Pagès J. (2009). Exploratory Multivariate Analysis by Example Using R. Chapman & Hall/CRC The R Series, London. ISBN 978-2-7535-0938-2
3. Escofier Brigitte (1979). Une représentation des variables dans l'analyse des correspondances multiples. Revue de statistique appliquée. vol. XXVII, n°4, pp 37–47. http://archive.numdam.org/ARCHIVE/RSA/RSA_1979__27_4/RSA_1979__27_4_37_0/RSA_1979__27_4_37_0.pdf
4. Escofier B. & Pagès J. (1988, 1st ed.; 2008, 4th ed.) Analyses factorielles simples et multiples ; objectifs, méthodes et interprétation. Dunod, Paris, 318 p. ISBN 978-2-10-051932-3
External links
• FactoMineR: an R package devoted to exploratory data analysis.
|
Wikipedia
|
Algebraic extension
In mathematics, an algebraic extension is a field extension L/K such that every element of the larger field L is algebraic over the smaller field K; that is, every element of L is a root of a non-zero polynomial with coefficients in K.[1][2] A field extension that is not algebraic is said to be transcendental, and must contain transcendental elements, that is, elements that are not algebraic.[3][4]
The algebraic extensions of the field $\mathbb {Q} $ of the rational numbers are called algebraic number fields and are the main objects of study of algebraic number theory. Another example of a common algebraic extension is the extension $\mathbb {C} /\mathbb {R} $ of the real numbers by the complex numbers.
Some properties
All transcendental extensions are of infinite degree. This in turn implies that all finite extensions are algebraic.[5] The converse is not true however: there are infinite extensions which are algebraic.[6] For instance, the field of all algebraic numbers is an infinite algebraic extension of the rational numbers.[7]
Let E be an extension field of K, and a ∈ E. The smallest subfield of E that contains K and a is commonly denoted $K(a).$ If a is algebraic over K, then the elements of K(a) can be expressed as polynomials in a with coefficients in K; that is, K(a) is also the smallest ring containing K and a. In this case, $K(a)$ is a finite extension of K (it is a finite dimensional K-vector space), and all its elements are algebraic over K.[8] These properties do not hold if a is not algebraic. For example, $\mathbb {Q} (\pi )\neq \mathbb {Q} [\pi ],$ and they are both infinite dimensional vector spaces over $\mathbb {Q} .$[9]
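As an illustration (a sketch, assuming the sympy library is available), one can verify that an element such as ${\sqrt {2}}+{\sqrt {3}}$ is algebraic over $\mathbb {Q} $ by computing its minimal polynomial:

```python
from sympy import Symbol, sqrt, minimal_polynomial

x = Symbol('x')
a = sqrt(2) + sqrt(3)

# a is algebraic over Q: it is a root of a degree-4 rational polynomial,
# so Q(a) is a finite extension of Q (of degree 4), and every element of
# Q(a) is a polynomial in a with rational coefficients.
p = minimal_polynomial(a, x)
print(p)                        # x**4 - 10*x**2 + 1
print(p.subs(x, a).expand())    # 0, confirming a is a root
```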
An algebraically closed field F has no proper algebraic extensions, that is, no algebraic extensions E with F < E.[10] An example is the field of complex numbers. Every field has an algebraic extension which is algebraically closed (called its algebraic closure), but proving this in general requires some form of the axiom of choice.[11]
An extension L/K is algebraic if and only if every sub K-algebra of L is a field.
Properties
The following three properties hold:[12]
1. If E is an algebraic extension of F and F is an algebraic extension of K then E is an algebraic extension of K.
2. If E and F are algebraic extensions of K in a common overfield C, then the compositum EF is an algebraic extension of K.
3. If E is an algebraic extension of F and E > K > F then E is an algebraic extension of K.
These finitary results can be generalized using transfinite induction:
1. The union of any chain of algebraic extensions over a base field is itself an algebraic extension over the same base field.
This fact, together with Zorn's lemma (applied to an appropriately chosen poset), establishes the existence of algebraic closures.
Generalizations
Main article: Substructure (mathematics)
Model theory generalizes the notion of algebraic extension to arbitrary theories: an embedding of M into N is called an algebraic extension if for every x in N there is a formula p with parameters in M, such that p(x) is true and the set
$\left\{y\in N\mid p(y)\right\}$
is finite. It turns out that applying this definition to the theory of fields gives the usual definition of algebraic extension. The Galois group of N over M can again be defined as the group of automorphisms, and it turns out that most of the theory of Galois groups can be developed for the general case.
Relative algebraic closures
Given a field k and a field K containing k, one defines the relative algebraic closure of k in K to be the subfield of K consisting of all elements of K that are algebraic over k, that is all elements of K that are a root of some nonzero polynomial with coefficients in k. For example, the relative algebraic closure of $\mathbb {Q} $ in $\mathbb {R} $ is the field of real algebraic numbers.
See also
• Integral element
• Lüroth's theorem
• Galois extension
• Separable extension
• Normal extension
Notes
1. Fraleigh (2014), Definition 31.1, p. 283.
2. Malik, Mordeson, Sen (1997), Definition 21.1.23, p. 453.
3. Fraleigh (2014), Definition 29.6, p. 267.
4. Malik, Mordeson, Sen (1997), Theorem 21.1.8, p. 447.
5. See also Hazewinkel et al. (2004), p. 3.
6. Fraleigh (2014), Theorem 31.18, p. 288.
7. Fraleigh (2014), Corollary 31.13, p. 287.
8. Fraleigh (2014), Theorem 30.23, p. 280.
9. Fraleigh (2014), Example 29.8, p. 268.
10. Fraleigh (2014), Corollary 31.16, p. 287.
11. Fraleigh (2014), Theorem 31.22, p. 290.
12. Lang (2002) p.228
References
• Fraleigh, John B. (2014), A First Course in Abstract Algebra, Pearson, ISBN 978-1-292-02496-7
• Hazewinkel, Michiel; Gubareni, Nadiya; Gubareni, Nadezhda Mikhaĭlovna; Kirichenko, Vladimir V. (2004), Algebras, rings and modules, vol. 1, Springer, ISBN 1-4020-2690-0
• Lang, Serge (1993), "V.1:Algebraic Extensions", Algebra (Third ed.), Reading, Mass.: Addison-Wesley, pp. 223ff, ISBN 978-0-201-55540-0, Zbl 0848.13001
• Malik, D. B.; Mordeson, John N.; Sen, M. K. (1997), Fundamentals of Abstract Algebra, McGraw-Hill, ISBN 0-07-040035-0
• McCarthy, Paul J. (1991) [corrected reprint of 2nd edition, 1976], Algebraic extensions of fields, New York: Dover Publications, ISBN 0-486-66651-4, Zbl 0768.12001
• Roman, Steven (1995), Field Theory, GTM 158, Springer-Verlag, ISBN 9780387944081
• Rotman, Joseph J. (2002), Advanced Modern Algebra, Prentice Hall, ISBN 9780130878687
|
Wikipedia
|
Relative canonical model
In the mathematical field of algebraic geometry, the relative canonical model of a singular variety $X$ is a particular canonical variety that maps to $X$ and simplifies its structure.
Description
The precise definition is:
If $f:Y\to X$ is a resolution define the adjunction sequence to be the sequence of subsheaves $f_{*}\omega _{Y}^{\otimes n};$ if $\omega _{X}$ is invertible $f_{*}\omega _{Y}^{\otimes n}=I_{n}\omega _{X}^{\otimes n}$ where $I_{n}$ is the higher adjunction ideal. Problem. Is $\oplus _{n}f_{*}\omega _{Y}^{\otimes n}$ finitely generated? If this is true then $Proj\oplus _{n}f_{*}\omega _{Y}^{\otimes n}\to X$ is called the relative canonical model of $Y$, or the canonical blow-up of $X$.[1]
Some basic properties were as follows: the relative canonical model was independent of the choice of resolution. Some integer multiple $r$ of the canonical divisor of the relative canonical model was Cartier, and the number of exceptional components where this agrees with the same multiple of the canonical divisor of $Y$ is also independent of the choice of $Y$; when it equals the number of exceptional components of $Y$, the resolution was called crepant.[1] It was not known whether relative canonical models were Cohen–Macaulay.
Because the relative canonical model is independent of $Y$, most authors simplify the terminology, referring to it as the relative canonical model of $X$ rather than either the relative canonical model of $Y$ or the canonical blow-up of $X$. The class of varieties that are relative canonical models have canonical singularities. Since that time in the 1970s other mathematicians solved affirmatively the problem of whether they are Cohen–Macaulay. The minimal model program started by Shigefumi Mori proved that the sheaf in the definition always is finitely generated and therefore that relative canonical models always exist.
References
1. M. Reid, Canonical 3-folds (courtesy copy), proceedings of the Angiers 'Journees de Geometrie Algebrique' 1979
|
Wikipedia
|
Relative contact homology
In mathematics, in the area of symplectic topology, relative contact homology is an invariant of spaces together with a chosen subspace. Namely, it is associated to a contact manifold and one of its Legendrian submanifolds. It is a part of a more general invariant known as symplectic field theory, and is defined using pseudoholomorphic curves.
Legendrian knots
The simplest case yields invariants of Legendrian knots inside contact three-manifolds. The relative contact homology has been shown to be a strictly more powerful invariant than the "classical invariants", namely the Thurston–Bennequin number and the rotation number (within a class of smooth knots).
Yuri Chekanov developed a purely combinatorial version of relative contact homology for Legendrian knots, i.e. a combinatorially defined invariant that reproduces the results of relative contact homology.
Tamas Kalman developed a combinatorial invariant for loops of Legendrian knots, with which he detected differences between the fundamental groups of the space of smooth knots and of the space of Legendrian knots.
Higher-dimensional Legendrian submanifolds
In the work of Lenhard Ng, relative SFT is used to obtain invariants of smooth knots: a knot or link inside a topological three-manifold gives rise to a Legendrian torus inside a contact five-manifold, consisting of the unit conormal bundle to the knot inside the unit cotangent bundle of the ambient three-manifold. The relative SFT of this pair is a differential graded algebra; Ng derives a powerful knot invariant from a combinatorial version of the zeroth-degree part of the homology. It has the form of a finitely presented tensor algebra over a certain ring of multivariable Laurent polynomials with integer coefficients. This invariant assigns distinct values to (at least) all knots with at most ten crossings, and dominates the Alexander polynomial and the A-polynomial (and thus distinguishes the unknot).
See also
• Relative homology
References
• Lenhard Ng, Conormal bundles, contact homology, and knot invariants.
• Tobias Ekholm, John Etnyre, Michael G. Sullivan, Legendrian Submanifolds in $R^{2n+1}$ and Contact Homology.
• Yuri Chekanov, "Differential Algebra of Legendrian Links". Inventiones Mathematicae 150 (2002), pp. 441–483.
• Contact homology and one parameter families of Legendrian knots by Tamas Kalman
|
Wikipedia
|
Relative cycle
In algebraic geometry, a relative cycle is a type of algebraic cycle on a scheme. In particular, let $X$ be a scheme of finite type over a Noetherian scheme $S$, with structure morphism $X\rightarrow S$. Then a relative cycle is a cycle on $X$ which lies over the generic points of $S$, such that the cycle has a well-defined specialization to any fiber of the projection $X\rightarrow S$ (Voevodsky & Suslin 2000).
The notion was introduced by Andrei Suslin and Vladimir Voevodsky in 2000; the authors were motivated to overcome some of the deficiencies of sheaves with transfers.
References
• Cisinski, Denis-Charles; Déglise, Frédéric (2019). Triangulated Categories of Mixed Motives. Springer Monographs in Mathematics. arXiv:0912.2110. doi:10.1007/978-3-030-33242-6. ISBN 978-3-030-33241-9. S2CID 115163824.
• Voevodsky, Vladimir; Suslin, Andrei (2000). "Relative cycles and Chow sheaves". Cycles, Transfers and Motivic Homology Theories. Annals of Mathematics Studies, vol. 143. Princeton University Press. pp. 10–86. ISBN 9780691048147. OCLC 43895658.
• Appendix 1A of Mazza, Carlo; Voevodsky, Vladimir; Weibel, Charles (2006), Lecture notes on motivic cohomology, Clay Mathematics Monographs, vol. 2, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-3847-1, MR 2242284
|
Wikipedia
|
Relative dimension
In mathematics, specifically linear algebra and geometry, relative dimension is the dual notion to codimension.
In linear algebra, given a quotient map $V\to Q$, the difference dim V − dim Q is the relative dimension; this equals the dimension of the kernel.
In fiber bundles, the relative dimension of the map is the dimension of the fiber.
More abstractly, the codimension of a map is the dimension of the cokernel, while the relative dimension of a map is the dimension of the kernel.
These are dual in that the inclusion of a subspace $V\to W$ of codimension k dualizes to yield a quotient map $W^{*}\to V^{*}$ of relative dimension k, and conversely.
The additivity of codimension under intersection corresponds to the additivity of relative dimension in a fiber product. Just as codimension is mostly used for injective maps, relative dimension is mostly used for surjective maps.
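For the linear-algebra case, here is a minimal numpy sketch (with an arbitrary full-row-rank matrix standing in for a quotient map) showing, via rank–nullity, that the relative dimension coincides with the dimension of the kernel.

```python
import numpy as np
from scipy.linalg import null_space

# An illustrative surjective linear map V = R^5 -> Q = R^2 (full row rank).
A = np.array([[1., 0., 2., 0., 1.],
              [0., 1., 0., 3., 0.]])

dim_V = A.shape[1]                    # dim V = 5
dim_Q = np.linalg.matrix_rank(A)      # A is surjective, so dim Q = rank(A) = 2
relative_dim = dim_V - dim_Q          # relative dimension of the quotient map

# Rank-nullity: this difference is exactly the dimension of the kernel.
assert relative_dim == null_space(A).shape[1] == 3
```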
References
|
Wikipedia
|