Rate (mathematics)
In mathematics, a rate is the quotient of two quantities in different units of measurement, often represented as a fraction.[1] If the divisor (or fraction denominator) in the rate is equal to one expressed as a single unit, and if it is assumed that this quantity can be changed systematically (i.e., is an independent variable), then the dividend (the fraction numerator) of the rate expresses the corresponding rate of change in the other (dependent) variable.
One common type of rate is "per unit of time", such as speed, heart rate, and flux. In fact, often rate is a synonym of rhythm or frequency, a count per second (i.e., hertz); e.g., radio frequencies, heart rates, or sample rates. In describing the units of a rate, the word "per" is used to separate the units of the two measurements used to calculate the rate; for example, a heart rate is expressed as "beats per minute".
Rates that have a non-time divisor or denominator include exchange rates, literacy rates, and electric field (in volts per meter).
A rate defined using two numbers with the same units results in a dimensionless quantity, also known as a ratio, or simply as a rate (such as tax rates) or a count (such as the literacy rate). Dimensionless rates can be expressed as a percentage (for example, the global literacy rate in 1998 was 80%), fraction, or multiple.
Properties and examples
Further information: Ratio
Rates and ratios often vary with time, location, particular element (or subset) of a set of objects, etc. Thus they are often mathematical functions.
A rate (or ratio) may often be thought of as an output-input ratio or a benefit-cost ratio, considered in the broad sense. For example, miles per hour in transportation is the output (or benefit) in terms of miles of travel, which one gets from spending an hour (a cost in time) of traveling (at this velocity).
A set of sequential indices may be used to enumerate elements (or subsets) of a set of ratios under study. For example, in finance, one could define an index i by assigning consecutive integers to companies, to political subdivisions (such as states), to different investments, etc. The reason for using indices is so that a set of ratios (i = 0, …, N) can be used in an equation to calculate a function of the rates, such as an average of the set of ratios; for example, the average velocity found from a set of velocities v_i. Finding averages may involve using weighted averages and possibly using the harmonic mean.
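As a concrete illustration (a minimal Python sketch; the speed values are invented for the example), averaging a set of speeds requires the harmonic mean when equal distances are traveled at each speed, and the arithmetic mean when equal times are spent:

```python
def arithmetic_mean(speeds):
    # Overall speed when an equal *time* is spent at each speed.
    return sum(speeds) / len(speeds)

def harmonic_mean(speeds):
    # Overall speed when an equal *distance* is covered at each speed.
    return len(speeds) / sum(1 / v for v in speeds)

# Driving 60 km at 30 km/h (2 h) and 60 km at 60 km/h (1 h) covers
# 120 km in 3 h, i.e. 40 km/h overall: the harmonic mean, not the
# arithmetic mean (45 km/h), gives the true average speed here.
equal_distance_avg = harmonic_mean([30, 60])
equal_time_avg = arithmetic_mean([30, 60])
```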
A ratio r=a/b has both a numerator "a" and a denominator "b". The values of a and b may be real numbers or integers. The inverse of a ratio r is 1/r = b/a. A rate may be equivalently expressed as an inverse of its value if the ratio of its units is also inverse. For example, 5 miles (mi) per kilowatt-hour (kWh) corresponds to 1/5 kWh/mi (or 200 Wh/mi).
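The inversion in the example above can be checked with a line of arithmetic (Python, using the values from the text):

```python
# 5 mi/kWh inverted gives 1/5 kWh/mi; since 1 kWh = 1000 Wh,
# that is 200 Wh/mi.
mi_per_kwh = 5
wh_per_mi = 1000 / mi_per_kwh
```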
Rates are relevant to many aspects of everyday life. For example: How fast are you driving? The speed of the car (often expressed in miles per hour) is a rate. What interest does your savings account pay you? The amount of interest paid per year is a rate.
Rate of change
Consider the case where the numerator $f$ of a rate is a function $f(x)$ where $x$ happens to be the denominator of the rate $\delta f/\delta x$. A rate of change of $f$ with respect to $x$ (where $x$ is incremented by $h$) can be formally defined in two ways:[2]
${\begin{aligned}{\mbox{Average rate of change}}&={\frac {f(x+h)-f(x)}{h}}\\{\mbox{Instantaneous rate of change}}&=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}\end{aligned}}$
where f(x) is the function with respect to x over the interval from x to x+h. An instantaneous rate of change is equivalent to a derivative.
For example, the average speed of a car can be calculated using the total distance traveled between two points, divided by the travel time. In contrast, the instantaneous velocity can be determined by viewing a speedometer.
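The two definitions can be compared numerically; the following Python sketch uses a hypothetical position function (not from the source) and approximates the instantaneous rate by taking a small increment h:

```python
def average_rate(f, x, h):
    # Average rate of change of f over the interval [x, x + h].
    return (f(x + h) - f(x)) / h

def instantaneous_rate(f, x, h=1e-6):
    # Approximates the derivative: the limit of the average rate as h -> 0.
    return average_rate(f, x, h)

# Hypothetical position of a car, in miles, after t hours.
def position(t):
    return 30 * t + 5 * t ** 2

avg_speed = average_rate(position, 0, 2)      # (80 - 0) / 2 = 40 mph
inst_speed = instantaneous_rate(position, 0)  # close to the derivative, 30 mph
```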
Temporal rates
See also: Time derivative
In chemistry and physics:
• Speed, the rate of change of position, or the change of position per unit of time
• Acceleration, the rate of change in speed, or the change in speed per unit of time
• Power, the rate of doing work, or the amount of energy transferred per unit time
• Frequency, the number of occurrences of a repeating event per unit of time
• Angular frequency and rotation speed, the number of turns per unit of time
• Reaction rate, the speed at which chemical reactions occur
• Volumetric flow rate, the volume of fluid which passes through a given surface per unit of time; e.g., cubic meters per second
Counts-per-time rates
Main articles: Frequency and Aperiodic frequency
• Radioactive decay, the number of nuclei of a radioactive material that decay per unit of time, measured in becquerels (decays per second)
In computing:
• Bit rate, the number of bits that are conveyed or processed by a computer per unit of time
• Symbol rate, the number of symbol changes (signaling events) made to the transmission medium per second
• Sampling rate, the number of samples (signal measurements) per second
Miscellaneous definitions:
• Rate of reinforcement, number of reinforcements per unit of time, usually per minute
• Heart rate, usually measured in beats per minute
Economics/finance rates/ratios
• Exchange rate, how much one currency is worth in terms of the other
• Inflation rate, the ratio of the change in the general price level during a year to the starting price level
• Interest rate, the price a borrower pays for the use of the money they do not own (ratio of payment to amount borrowed)
• Price–earnings ratio, market price per share of stock divided by annual earnings per share
• Rate of return, the ratio of money gained or lost on an investment relative to the amount of money invested
• Tax rate, the tax amount divided by the taxable income
• Unemployment rate, the ratio of the number of people who are unemployed to the number in the labor force
• Wage rate, the amount paid for working a given amount of time (or doing a standard amount of accomplished work) (ratio of payment to time)
Other rates
• Birth rate, and mortality rate, the number of births or deaths scaled to the size of that population, per unit of time
• Literacy rate, the proportion of the population over age fifteen that can read and write
• Sex ratio or gender ratio, the ratio of males to females in a population
See also
• Derivative
• Gradient
• Hertz
• Slope
References
1. See Webster's New International Dictionary of the English Language, 2nd edition, Unabridged. Merriam Webster Co. 2016. p.2065 definition 3.
2. Adams, Robert A. (1995). Calculus: A Complete Course (3rd ed.). Addison-Wesley Publishers Ltd. p. 129. ISBN 0-201-82823-5.
Wikipedia
Rathjen's psi function
In mathematics, Rathjen's $\psi $ function is an ordinal collapsing function developed by Michael Rathjen. It collapses weakly Mahlo cardinals $M$ to generate large countable ordinals.[1] A weakly Mahlo cardinal is a cardinal $M$ such that the set of regular cardinals below $M$ is stationary in $M$ (i.e. all normal functions closed in $M$ are closed under some regular ordinal $<M$). Rathjen uses this to diagonalise over the weakly inaccessible hierarchy.
It admits an associated ordinal notation $T(M)$ whose limit (i.e. ordinal type) is $\psi _{\Omega }(\chi _{\varepsilon _{M}+1}(0))$, which is strictly greater than both $\vert KPM\vert $ and the limit of countable ordinals expressed by Rathjen's $\psi $. $\vert KPM\vert $, which is called the "Small Rathjen ordinal", is the proof-theoretic ordinal of ${\mathsf {KPM}}$, Kripke–Platek set theory augmented by the axiom schema "for any $\Delta _{0}$-formula $H(x,y)$ satisfying $\forall x\,\exists y\,(H(x,y))$, there exists an admissible set $z$ satisfying $\forall x\in z\,\exists y\,(H(x,y))$". It is equal to $\psi _{\Omega }(\psi _{\chi _{\varepsilon _{M}+1}(0)}(0))$ in Rathjen's $\psi $ function.[2]
Definition
Restrict $\pi $ and $\kappa $ to uncountable regular cardinals $<M$; for a function $f$ let $\operatorname {dom} (f)$ denote the domain of $f$; let $\operatorname {cl} _{M}(X)$ denote $X\cup \{\alpha <M:\alpha {\text{ is a limit point of }}X\}$, and let $\operatorname {enum} (X)$ denote the enumeration of $X$. Lastly, an ordinal $\alpha $ is said to be strongly critical if $\varphi _{\alpha }(0)=\alpha $.
For $\alpha \in \Gamma _{M+1}$ and $\beta \in M$:
${\begin{aligned}&\beta \cup \{0,M\}\subseteq B^{0}(\alpha ,\beta )\\[5pt]&\gamma =\gamma _{1}+\cdots +\gamma _{k}{\text{ and }}\gamma _{1},\ldots ,\gamma _{k}\in B^{n}(\alpha ,\beta )\rightarrow \gamma \in B^{n+1}(\alpha ,\beta )\\[5pt]&\gamma =\varphi _{\gamma _{0}}(\gamma _{1}){\text{ and }}\gamma _{0},\gamma _{1}\in B^{n}(\alpha ,\beta )\rightarrow \gamma \in B^{n+1}(\alpha ,\beta )\\[5pt]&\pi \in B^{n}(\alpha ,\beta ){\text{ and }}\gamma <\pi \rightarrow \gamma \in B^{n+1}(\alpha ,\beta )\\[5pt]&\delta ,\eta \in B^{n}(\alpha ,\beta )\land \delta <\alpha \land \eta \in \operatorname {dom} (\chi _{\delta })\rightarrow \chi _{\delta }(\eta )\in B^{n+1}(\alpha ,\beta )\\[5pt]&B(\alpha ,\beta )=\bigcup _{n<\omega }B^{n}(\alpha ,\beta )\\[5pt]&\chi _{\alpha }=\operatorname {enum} (\operatorname {cl} (\{\kappa :\kappa \notin B(\alpha ,\kappa )\land \alpha \in B(\alpha ,\kappa )\}))\end{aligned}}$
If $\kappa =\chi _{\alpha }(\beta +1)$ for some $(\alpha ,\beta )\in \Gamma _{M+1}\times M$, define $\kappa ^{-}:=\chi _{\alpha }(\beta )$ using the unique $(\alpha ,\beta )$. Otherwise if $\kappa =\chi _{\alpha }(0)$ for some $\alpha \in \Gamma _{M+1}$, then define $\kappa ^{-}:=\sup(\operatorname {SC} _{M}(\alpha )\cup \{0\})$ using the unique $\alpha $, where $\operatorname {SC} _{M}(\alpha )$ is a set of strongly critical ordinals $<M$ explicitly defined in the original source.
For $\alpha \in \Gamma _{M+1}$:
${\begin{aligned}&\kappa ^{-}\cup \{\kappa ^{-},M\}\subseteq C_{\kappa }^{0}(\alpha )\\[5pt]&\gamma =\gamma _{1}+\cdots +\gamma _{k}{\text{ and }}\gamma _{1},\ldots ,\gamma _{k}\in C_{\kappa }^{n}(\alpha )\rightarrow \gamma \in C_{\kappa }^{n+1}(\alpha )\\[5pt]&\gamma =\varphi _{\gamma _{0}}(\gamma _{1})\land \gamma _{0},\gamma _{1}\in C_{\kappa }^{n}(\alpha )\rightarrow \gamma \in C_{\kappa }^{n+1}(\alpha )\\[5pt]&\pi \in C_{\kappa }^{n}(\alpha )\cap \kappa \land \gamma <\pi \land \pi \in {\textrm {R}}\rightarrow \gamma \in C_{\kappa }^{n+1}(\alpha )\\[5pt]&\gamma =\chi _{\delta }(\eta )\land \delta ,\eta \in C_{\kappa }^{n}(\alpha )\rightarrow \gamma \in C_{\kappa }^{n+1}(\alpha )\\[5pt]&\gamma =\Phi _{\delta }(\eta )\land \delta ,\eta \in C_{\kappa }^{n}(\alpha )\land 0<\delta \land \delta ,\eta <M\rightarrow \gamma \in C_{\kappa }^{n+1}(\alpha )\\[5pt]&\beta <\alpha \land \pi ,\beta \in C_{\kappa }^{n}(\alpha )\land \beta \in C_{\pi }(\beta )\rightarrow \psi _{\pi }(\beta )\in C_{\kappa }^{n+1}(\alpha )\\[5pt]&C_{\kappa }(\alpha ):=\bigcup _{n<\omega }C_{\kappa }^{n}(\alpha )\end{aligned}}$
$\psi _{\kappa }(\alpha ):=\min(\{\xi :\xi \notin C_{\kappa }(\alpha )\}).$
Explanation
• Restrict $\pi $ to uncountable regular cardinals.
• $\operatorname {enum} (X)$ is a unique increasing function such that the range of $\operatorname {enum} (X)$ is exactly $X$.
• $\operatorname {cl} (X)$ is the closure of $X$, i.e. $X\cup \{\beta \in \operatorname {Lim} \mid \sup(X\cap \beta )=\beta \}$, where $\operatorname {Lim} $ denotes the class of non-zero limit ordinals.
• $B_{0}(\alpha ,\beta )=\beta \cup \{0,M\}$
• $B_{n+1}(\alpha ,\beta )=\{\gamma +\delta ,\varphi _{\gamma }(\delta ),\chi _{\mu }(\delta )|\gamma ,\delta ,\mu \in B_{n}(\alpha ,\beta )\land \mu <\alpha \}$
• $B(\alpha ,\beta )=\bigcup _{n<\omega }B_{n}(\alpha ,\beta )$
• $\chi _{\alpha }(\beta )=\operatorname {enum} (\operatorname {cl} (\{\pi :B(\alpha ,\pi )\cap M\subseteq \pi \land \alpha \in B(\alpha ,\pi )\}))=\operatorname {enum} (\{\beta \in \operatorname {Lim} \mid \sup(\{\pi :B(\alpha ,\pi )\cap M\subseteq \pi \land \alpha \in B(\alpha ,\pi )\}\cap \beta )=\beta \})$
• $C_{0}(\alpha ,\beta )=\beta \cup \{0,M\}$
• $C_{n+1}(\alpha ,\beta )=\{\gamma +\delta ,\varphi _{\gamma }(\delta ),\chi _{\mu }(\delta ),\psi _{\pi }(\mu )|\gamma ,\delta ,\mu ,\pi \in C_{n}(\alpha ,\beta )\land \mu <\alpha \}$
• $C(\alpha ,\beta )=\bigcup _{n<\omega }C_{n}(\alpha ,\beta )$
• $\psi _{\pi }(\alpha )=\min(\{\beta :C(\alpha ,\beta )\cap \pi \subseteq \beta \land \alpha \in C(\alpha ,\beta )\})$
Rathjen originally defined the $\psi $ function in a more complicated way in order to create an ordinal notation associated to it. Therefore, it is not certain whether the simplified OCF above yields an ordinal notation. The $\chi $ functions used in Rathjen's original OCF are also not easy to understand, and differ from the $\chi $ functions defined above.
Rathjen's $\psi $ and the simplification provided above are not the same OCF: the former is known to admit an ordinal notation, while the latter is not known to admit one. Rathjen's $\psi $ is also often confused with another of his OCFs which uses the same symbol $\psi $, but they are distinct notions. The former is a published OCF, while the latter is just a function symbol in an ordinal notation associated to an unpublished OCF.[3]
References
1. Rathjen, Michael (1990). "Ordinal Notation Based on a Weakly Mahlo Cardinal" (PDF). University of Leeds. Retrieved 2021-09-18.{{cite web}}: CS1 maint: url-status (link)
2. Rathjen, Michael (1994-01-01). "Collapsing functions based on recursively large ordinals: A well-ordering proof for KPM". Archive for Mathematical Logic. 33 (1): 35–55. doi:10.1007/BF01275469. ISSN 1432-0665. S2CID 35012853.
3. Rathjen, Michael (1989-09-04). "Proof-theoretic analysis of KPM" (PDF). University of Leeds. Retrieved 2021-09-18.{{cite web}}: CS1 maint: url-status (link)
Wikipedia
Rational root theorem
In algebra, the rational root theorem (or rational root test, rational zero theorem, rational zero test or p/q theorem) states a constraint on rational solutions of a polynomial equation
$a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{0}=0$
with integer coefficients $a_{i}\in \mathbb {Z} $ and $a_{0},a_{n}\neq 0$. Solutions of the equation are also called roots or zeros of the polynomial on the left side.
The theorem states that each rational solution x = p⁄q, written in lowest terms so that p and q are relatively prime, satisfies:
• p is an integer factor of the constant term a0, and
• q is an integer factor of the leading coefficient an.
The rational root theorem is a special case (for a single linear factor) of Gauss's lemma on the factorization of polynomials. The integral root theorem is the special case of the rational root theorem when the leading coefficient is an = 1.
Application
The theorem is used to find all rational roots of a polynomial, if any. It gives a finite number of possible fractions which can be checked to see if they are roots. If a rational root x = r is found, a linear polynomial (x – r) can be factored out of the polynomial using polynomial long division, resulting in a polynomial of lower degree whose roots are also roots of the original polynomial.
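The factoring-out step can be carried out with synthetic division; this Python sketch (the helper name is ours, not from the source) divides by (x − r) and returns the lower-degree quotient:

```python
from fractions import Fraction

def deflate(coeffs, r):
    # Synthetic division of a_n x^n + ... + a_0 (coefficients listed
    # leading-first) by (x - r). Returns (quotient coefficients,
    # remainder); the remainder equals P(r), so it is 0 exactly when
    # r is a root.
    carry = Fraction(0)
    table = []
    for c in coeffs:
        carry = carry * r + c
        table.append(carry)
    return table[:-1], table[-1]

# x^3 - 7x + 6 with root r = 1 factors as (x - 1)(x^2 + x - 6).
quotient, remainder = deflate([1, 0, -7, 6], Fraction(1))
```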
Cubic equation
The general cubic equation
$ax^{3}+bx^{2}+cx+d=0$
with integer coefficients has three solutions in the complex plane. If the rational root test finds no rational solutions, then the only way to express the solutions algebraically uses cube roots. But if the test finds a rational solution r, then factoring out (x – r) leaves a quadratic polynomial whose two roots, found with the quadratic formula, are the remaining two roots of the cubic, avoiding cube roots.
Proofs
Elementary proof
Let $P(x)\ =\ a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{1}x+a_{0}$ with $a_{0},\ldots ,a_{n}\in \mathbb {Z} .$
Suppose P(p/q) = 0 for some coprime p, q ∈ ℤ:
$P\left({\tfrac {p}{q}}\right)=a_{n}\left({\tfrac {p}{q}}\right)^{n}+a_{n-1}\left({\tfrac {p}{q}}\right)^{n-1}+\cdots +a_{1}\left({\tfrac {p}{q}}\right)+a_{0}=0.$
To clear denominators, multiply both sides by qn:
$a_{n}p^{n}+a_{n-1}p^{n-1}q+\cdots +a_{1}pq^{n-1}+a_{0}q^{n}=0.$
Shifting the a0 term to the right side and factoring out p on the left side produces:
$p\left(a_{n}p^{n-1}+a_{n-1}qp^{n-2}+\cdots +a_{1}q^{n-1}\right)=-a_{0}q^{n}.$
Thus, p divides a0qn. But p is coprime to q and therefore to qn, so by Euclid's lemma p must divide the remaining factor a0.
On the other hand, shifting the an term to the right side and factoring out q on the left side produces:
$q\left(a_{n-1}p^{n-1}+a_{n-2}qp^{n-2}+\cdots +a_{0}q^{n-1}\right)=-a_{n}p^{n}.$
Reasoning as before, it follows that q divides an.[1]
Proof using Gauss's lemma
Should there be a nontrivial factor dividing all the coefficients of the polynomial, then one can divide by the greatest common divisor of the coefficients so as to obtain a primitive polynomial in the sense of Gauss's lemma; this does not alter the set of rational roots and only strengthens the divisibility conditions. That lemma says that if the polynomial factors in Q[X], then it also factors in Z[X] as a product of primitive polynomials. Now any rational root p/q corresponds to a factor of degree 1 in Q[X] of the polynomial, and its primitive representative is then qx − p, assuming that p and q are coprime. But any multiple in Z[X] of qx − p has leading term divisible by q and constant term divisible by p, which proves the statement. This argument shows that more generally, any irreducible factor of P can be supposed to have integer coefficients, and leading and constant coefficients dividing the corresponding coefficients of P.
Examples
First
In the polynomial
$2x^{3}+x-1,$
any rational root fully reduced would have to have a numerator that divides evenly into 1 and a denominator that divides evenly into 2. Hence the only possible rational roots are ±1/2 and ±1; since none of these equates the polynomial to zero, it has no rational roots.
Second
In the polynomial
$x^{3}-7x+6$
the only possible rational roots would have a numerator that divides 6 and a denominator that divides 1, limiting the possibilities to ±1, ±2, ±3, and ±6. Of these, 1, 2, and –3 equate the polynomial to zero, and hence are its rational roots. (In fact these are its only roots since a cubic has only three roots; in general, a polynomial could have some rational and some irrational roots.)
Third
Every rational root of the polynomial
$3x^{3}-5x^{2}+5x-2$
must be among the numbers
$\pm {\tfrac {1,2}{1,3}}=\pm \left\{1,2,{\tfrac {1}{3}},{\tfrac {2}{3}}\right\}.$
These 8 root candidates x = r can be tested by evaluating P(r), for example using Horner's method. It turns out there is exactly one with P(r) = 0.
This process may be made more efficient: if P(r) ≠ 0, it can be used to shorten the list of remaining candidates.[2] For example, x = 1 does not work, as P(1) = 1. Substituting x = 1 + t yields a polynomial in t with constant term P(1) = 1, while the coefficient of t3 remains the same as the coefficient of x3. Applying the rational root theorem thus yields the possible roots $t=\pm {\tfrac {1}{1,3}}$, so that
$x=1+t=2,0,{\tfrac {4}{3}},{\tfrac {2}{3}}.$
True roots must occur on both lists, so the list of rational root candidates has shrunk to just x = 2 and x = 2/3.
If k ≥ 1 rational roots are found, Horner's method will also yield a polynomial of degree n − k whose roots, together with the rational roots, are exactly the roots of the original polynomial. If none of the candidates is a solution, there can be no rational solution.
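Putting the theorem to work, a brute-force candidate check (a Python sketch; exact arithmetic with fractions avoids rounding error) enumerates p/q with p dividing a0 and q dividing an, and evaluates each candidate by Horner's method:

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    # coeffs lists a_n, ..., a_0 (leading coefficient first), all
    # integers with a_n and a_0 nonzero. Candidates p/q have
    # p | a_0 and q | a_n; each is checked by Horner evaluation.
    a_n, a_0 = coeffs[0], coeffs[-1]
    roots = set()
    for p in divisors(a_0):
        for q in divisors(a_n):
            for cand in (Fraction(p, q), Fraction(-p, q)):
                value = Fraction(0)
                for c in coeffs:          # Horner's method
                    value = value * cand + c
                if value == 0:
                    roots.add(cand)
    return sorted(roots)

no_roots = rational_roots([2, 0, 1, -1])     # 2x^3 + x - 1: no rational roots
three_roots = rational_roots([1, 0, -7, 6])  # x^3 - 7x + 6: roots -3, 1, 2
one_root = rational_roots([3, -5, 5, -2])    # 3x^3 - 5x^2 + 5x - 2: root 2/3
```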
See also
• Fundamental theorem of algebra
• Integrally closed domain
• Descartes' rule of signs
• Gauss–Lucas theorem
• Properties of polynomial roots
• Content (algebra)
• Eisenstein's criterion
Notes
1. Arnold, D.; Arnold, G. (1993). Four unit mathematics. Edward Arnold. pp. 120–121. ISBN 0-340-54335-3.
2. King, Jeremy D. (November 2006). "Integer roots of polynomials". Mathematical Gazette. 90: 455–456.
References
• Charles D. Miller, Margaret L. Lial, David I. Schneider: Fundamentals of College Algebra. Scott & Foresman/Little & Brown Higher Education, 3rd edition 1990, ISBN 0-673-38638-4, pp. 216–221
• Phillip S. Jones, Jack D. Bedient: The historical roots of elementary mathematics. Dover Courier Publications 1998, ISBN 0-486-25563-8, pp. 116–117 (online copy, p. 116, at Google Books)
• Ron Larson: Calculus: An Applied Approach. Cengage Learning 2007, ISBN 978-0-618-95825-2, pp. 23–24 (online copy, p. 23, at Google Books)
External links
• Weisstein, Eric W. "Rational Zero Theorem". MathWorld.
• RationalRootTheorem at PlanetMath
• Another proof that nth roots of integers are irrational, except for perfect nth powers by Scott E. Brodie
• The Rational Roots Test at purplemath.com
Wikipedia
Rational arrival process
In queueing theory, a discipline within the mathematical theory of probability, a rational arrival process (RAP) is a mathematical model for the time between job arrivals to a system. It extends the concept of a Markov arrival process, allowing for dependent matrix-exponential distributed inter-arrival times.[1]
The processes were first characterised by Asmussen and Bladt[2] and are referred to as rational arrival processes because the inter-arrival times have a rational Laplace–Stieltjes transform.
Software
• Q-MAM a MATLAB toolbox which can solve queueing systems with RAP arrivals.[3]
References
1. Bladt, M.; Neuts, M. F. (2003). "Matrix‐Exponential Distributions: Calculus and Interpretations via Flows". Stochastic Models. 19: 113. doi:10.1081/STM-120018141.
2. Asmussen, S. R.; Bladt, M. (1999). "Point processes with finite-dimensional conditional probabilities". Stochastic Processes and their Applications. 82: 127. doi:10.1016/S0304-4149(99)00006-X.
3. Pérez, J. F.; Van Velthoven, J.; Van Houdt, B. (2008). "Q-MAM: A Tool for Solving Infinite Queues using Matrix-Analytic Methods". Proceedings of the 3rd International Conference on Performance Evaluation Methodologies and Tools (PDF). doi:10.4108/ICST.VALUETOOLS2008.4368. ISBN 978-963-9799-31-8.
Queueing theory
Single queueing nodes
• D/M/1 queue
• M/D/1 queue
• M/D/c queue
• M/M/1 queue
• Burke's theorem
• M/M/c queue
• M/M/∞ queue
• M/G/1 queue
• Pollaczek–Khinchine formula
• Matrix analytic method
• M/G/k queue
• G/M/1 queue
• G/G/1 queue
• Kingman's formula
• Lindley equation
• Fork–join queue
• Bulk queue
Arrival processes
• Poisson point process
• Markovian arrival process
• Rational arrival process
Queueing networks
• Jackson network
• Traffic equations
• Gordon–Newell theorem
• Mean value analysis
• Buzen's algorithm
• Kelly network
• G-network
• BCMP network
Service policies
• FIFO
• LIFO
• Processor sharing
• Round-robin
• Shortest job next
• Shortest remaining time
Key concepts
• Continuous-time Markov chain
• Kendall's notation
• Little's law
• Product-form solution
• Balance equation
• Quasireversibility
• Flow-equivalent server method
• Arrival theorem
• Decomposition method
• Beneš method
Limit theorems
• Fluid limit
• Mean-field theory
• Heavy traffic approximation
• Reflected Brownian motion
Extensions
• Fluid queue
• Layered queueing network
• Polling system
• Adversarial queueing network
• Loss network
• Retrial queue
Information systems
• Data buffer
• Erlang (unit)
• Erlang distribution
• Flow control (data)
• Message queue
• Network congestion
• Network scheduler
• Pipeline (software)
• Quality of service
• Scheduling (computing)
• Teletraffic engineering
Category
Wikipedia
Frobenius normal form
In linear algebra, the Frobenius normal form or rational canonical form of a square matrix A with entries in a field F is a canonical form for matrices obtained by conjugation by invertible matrices over F. The form reflects a minimal decomposition of the vector space into subspaces that are cyclic for A (i.e., spanned by some vector and its repeated images under A). Since only one normal form can be reached from a given matrix (whence the "canonical"), a matrix B is similar to A if and only if it has the same rational canonical form as A. Since this form can be found without any operations that might change when extending the field F (whence the "rational"), notably without factoring polynomials, this shows that whether two matrices are similar does not change upon field extensions. The form is named after German mathematician Ferdinand Georg Frobenius.
Some authors use the term rational canonical form for a somewhat different form that is more properly called the primary rational canonical form. Instead of decomposing into a minimum number of cyclic subspaces, the primary form decomposes into a maximum number of cyclic subspaces. It is also defined over F, but has somewhat different properties: finding the form requires factorization of polynomials, and as a consequence the primary rational canonical form may change when the same matrix is considered over an extension field of F. This article mainly deals with the form that does not require factorization, and explicitly mentions "primary" when the form using factorization is meant.
Motivation
When trying to find out whether two square matrices A and B are similar, one approach is to try, for each of them, to decompose the vector space as far as possible into a direct sum of stable subspaces, and compare the respective actions on these subspaces. For instance if both are diagonalizable, then one can take the decomposition into eigenspaces (for which the action is as simple as it can get, namely by a scalar), and then similarity can be decided by comparing eigenvalues and their multiplicities. While in practice this is often a quite insightful approach, there are various drawbacks this has as a general method. First, it requires finding all eigenvalues, say as roots of the characteristic polynomial, but it may not be possible to give an explicit expression for them. Second, a complete set of eigenvalues might exist only in an extension of the field one is working over, and then one does not get a proof of similarity over the original field. Finally A and B might not be diagonalizable even over this larger field, in which case one must instead use a decomposition into generalized eigenspaces, and possibly into Jordan blocks.
But obtaining such a fine decomposition is not necessary to just decide whether two matrices are similar. The rational canonical form is based on instead using a direct sum decomposition into stable subspaces that are as large as possible, while still allowing a very simple description of the action on each of them. These subspaces must be generated by a single nonzero vector v and all its images by repeated application of the linear operator associated to the matrix; such subspaces are called cyclic subspaces (by analogy with cyclic subgroups) and they are clearly stable under the linear operator. A basis of such a subspace is obtained by taking v and its successive images as long as they are linearly independent. The matrix of the linear operator with respect to such a basis is the companion matrix of a monic polynomial; this polynomial (the minimal polynomial of the operator restricted to the subspace, which notion is analogous to that of the order of a cyclic subgroup) determines the action of the operator on the cyclic subspace up to isomorphism, and is independent of the choice of the vector v generating the subspace.
A direct sum decomposition into cyclic subspaces always exists, and finding one does not require factoring polynomials. However it is possible that cyclic subspaces do allow a decomposition as direct sum of smaller cyclic subspaces (essentially by the Chinese remainder theorem). Therefore, just having for both matrices some decomposition of the space into cyclic subspaces, and knowing the corresponding minimal polynomials, is not in itself sufficient to decide their similarity. An additional condition is imposed to ensure that for similar matrices one gets decompositions into cyclic subspaces that exactly match: in the list of associated minimal polynomials each one must divide the next (and the constant polynomial 1 is forbidden to exclude trivial cyclic subspaces of dimension 0). The resulting list of polynomials are called the invariant factors of (the F[X]-module defined by) the matrix, and two matrices are similar if and only if they have identical lists of invariant factors. The rational canonical form of a matrix A is obtained by expressing it on a basis adapted to a decomposition into cyclic subspaces whose associated minimal polynomials are the invariant factors of A; two matrices are similar if and only if they have the same rational canonical form.
Example
Consider the following matrix A, over Q:
$\scriptstyle A={\begin{pmatrix}-1&3&-1&0&-2&0&0&-2\\-1&-1&1&1&-2&-1&0&-1\\-2&-6&4&3&-8&-4&-2&1\\-1&8&-3&-1&5&2&3&-3\\0&0&0&0&0&0&0&1\\0&0&0&0&-1&0&0&0\\1&0&0&0&2&0&0&0\\0&0&0&0&4&0&1&0\end{pmatrix}}.$
A has minimal polynomial $\mu =X^{6}-4X^{4}-2X^{3}+4X^{2}+4X+1$, so that the dimension of a subspace generated by the repeated images of a single vector is at most 6. The characteristic polynomial is $\chi =X^{8}-X^{7}-5X^{6}+2X^{5}+10X^{4}+2X^{3}-7X^{2}-5X-1$, which is a multiple of the minimal polynomial by a factor $X^{2}-X-1$. There always exist vectors such that the cyclic subspace that they generate has the same minimal polynomial as the operator has on the whole space; indeed most vectors will have this property, and in this case the first standard basis vector $e_{1}$ does so: the vectors $A^{k}(e_{1})$ for $k=0,1,\ldots ,5$ are linearly independent and span a cyclic subspace with minimal polynomial $\mu $. There exist complementary stable subspaces (of dimension 2) to this cyclic subspace, and the space generated by vectors $v=(3,4,8,0,-1,0,2,-1)^{\top }$ and $w=(5,4,5,9,-1,1,1,-2)^{\top }$ is an example. In fact one has $A\cdot v=w$, so the complementary subspace is a cyclic subspace generated by $v$; it has minimal polynomial $X^{2}-X-1$. Since $\mu $ is the minimal polynomial of the whole space, it is clear that $X^{2}-X-1$ must divide $\mu $ (and it is easily checked that it does), and we have found the invariant factors $X^{2}-X-1$ and $\mu =X^{6}-4X^{4}-2X^{3}+4X^{2}+4X+1$ of A. Then the rational canonical form of A is the block diagonal matrix with the corresponding companion matrices as diagonal blocks, namely
$\scriptstyle C={\begin{pmatrix}0&1&0&0&0&0&0&0\\1&1&0&0&0&0&0&0\\0&0&0&0&0&0&0&-1\\0&0&1&0&0&0&0&-4\\0&0&0&1&0&0&0&-4\\0&0&0&0&1&0&0&2\\0&0&0&0&0&1&0&4\\0&0&0&0&0&0&1&0\end{pmatrix}}.$
A basis on which this form is attained is formed by the vectors $v,w$ above, followed by $A^{k}(e_{1})$ for $k=0,1,\ldots ,5$; explicitly this means that for
$\scriptstyle P={\begin{pmatrix}3&5&1&-1&0&0&-4&0\\4&4&0&-1&-1&-2&-3&-5\\8&5&0&-2&-5&-2&-11&-6\\0&9&0&-1&3&-2&0&0\\-1&-1&0&0&0&1&-1&4\\0&1&0&0&0&0&-1&1\\2&1&0&1&-1&0&2&-6\\-1&-2&0&0&1&-1&4&-2\end{pmatrix}}$,
one has $A=PCP^{-1}.$
General case and theory
Fix a base field F and a finite-dimensional vector space V over F. Given a polynomial P ∈ F[X], there is associated to it a companion matrix CP whose characteristic polynomial and minimal polynomial are both equal to P.
Theorem: Let V be a finite-dimensional vector space over a field F, and A a square matrix over F. Then V (viewed as an F[X]-module with the action of X given by A) admits an F[X]-module isomorphism
V ≅ F[X]/f1 ⊕ … ⊕ F[X]/fk
where the fi ∈ F[X] may be taken to be monic polynomials of positive degree (so they are non-units in F[X]) that satisfy the relations
f1 | f2 | … | fk
where "a | b" is notation for "a divides b"; with these conditions the list of polynomials fi is unique.
Sketch of Proof: Apply the structure theorem for finitely generated modules over a principal ideal domain to V, viewing it as an F[X]-module. The structure theorem provides a decomposition into cyclic factors, each of which is a quotient of F[X] by a proper ideal; the zero ideal cannot be present since the resulting free module would be infinite-dimensional as an F-vector space, while V is finite-dimensional. For the polynomials fi one then takes the unique monic generators of the respective ideals, and since the structure theorem ensures containment of every ideal in the preceding ideal, one obtains the divisibility conditions for the fi. See [DF] for details.
Given an arbitrary square matrix, the elementary divisors used in the construction of the Jordan normal form need not exist over F[X] (the characteristic polynomial need not split into linear factors over F), so the invariant factors fi as given above must be used instead. The last of these factors fk is then the minimal polynomial, which all the invariant factors therefore divide, and the product of the invariant factors gives the characteristic polynomial. Note that this implies that the minimal polynomial divides the characteristic polynomial (which is essentially the Cayley-Hamilton theorem), and that every irreducible factor of the characteristic polynomial also divides the minimal polynomial (possibly with lower multiplicity).
For each invariant factor fi one takes its companion matrix Cfi, and the block diagonal matrix formed from these blocks yields the rational canonical form of A. When the minimal polynomial is identical to the characteristic polynomial (the case k = 1), the Frobenius normal form is the companion matrix of the characteristic polynomial. As the rational canonical form is uniquely determined by the unique invariant factors associated to A, and these invariant factors are independent of basis, it follows that two square matrices A and B are similar if and only if they have the same rational canonical form.
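Concretely, for the invariant factors of the worked example above, both the divisibility condition f1 | f2 and the characteristic-polynomial identity can be checked directly (a sympy sketch; helper names are ours):

```python
import sympy as sp

x = sp.symbols('x')

# invariant factors of the worked example; the theorem requires f1 | f2
f1 = x**2 - x - 1
f2 = x**6 - 4*x**4 - 2*x**3 + 4*x**2 + 4*x + 1
assert sp.rem(f2, f1, x) == 0          # divisibility f1 | f2

def companion(p):
    c = sp.Poly(p, x).all_coeffs()     # [1, c_{n-1}, ..., c_0]
    n = len(c) - 1
    M = sp.zeros(n, n)
    for i in range(1, n):
        M[i, i - 1] = 1
    for i in range(n):
        M[i, n - 1] = -c[n - i]
    return M

# rational canonical form: block diagonal of the companion matrices
F = sp.diag(companion(f1), companion(f2))
# the characteristic polynomial is the product of the invariant factors
assert sp.expand(F.charpoly(x).as_expr() - sp.expand(f1*f2)) == 0
```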
A rational normal form generalizing the Jordan normal form
The Frobenius normal form does not reflect any form of factorization of the characteristic polynomial, even if it does exist over the ground field F. This implies that it is invariant when F is replaced by a different field (as long as it contains the entries of the original matrix A). On the other hand, this makes the Frobenius normal form rather different from other normal forms that do depend on factoring the characteristic polynomial, notably the diagonal form (if A is diagonalizable) or more generally the Jordan normal form (if the characteristic polynomial splits into linear factors). For instance, the Frobenius normal form of a diagonal matrix with distinct diagonal entries is just the companion matrix of its characteristic polynomial.
There is another way to define a normal form that, like the Frobenius normal form, is always defined over the same field F as A, but that does reflect a possible factorization of the characteristic polynomial (or equivalently the minimal polynomial) into irreducible factors over F, and which reduces to the Jordan normal form when this factorization only contains linear factors (corresponding to eigenvalues). This form[1] is sometimes called the generalized Jordan normal form, or primary rational canonical form. It is based on the fact that the vector space can be canonically decomposed into a direct sum of stable subspaces corresponding to the distinct irreducible factors P of the characteristic polynomial (as stated by the lemme des noyaux, the kernel decomposition lemma[2]), where the characteristic polynomial of each summand is a power of the corresponding P. These summands can be further decomposed, non-canonically, as a direct sum of cyclic F[X]-modules (as is done for the Frobenius normal form above), where the characteristic polynomial of each summand is still a (generally smaller) power of P. The primary rational canonical form is a block diagonal matrix corresponding to such a decomposition into cyclic modules, with a particular form called a generalized Jordan block in the diagonal blocks, corresponding to a particular choice of a basis for the cyclic modules. This generalized Jordan block is itself a block matrix of the form
$\scriptstyle {\begin{pmatrix}C&0&\cdots &0\\U&C&\cdots &0\\\vdots &\ddots &\ddots &\vdots \\0&\cdots &U&C\end{pmatrix}}$
where C is the companion matrix of the irreducible polynomial P, and U is a matrix whose sole nonzero entry is a 1 in the upper right hand corner. For the case of a linear irreducible factor P = x − λ, these blocks reduce to the single entries C = (λ) and U = (1), and one finds a (transposed) Jordan block. In any generalized Jordan block, all entries immediately below the main diagonal are 1. A basis of the cyclic module giving rise to this form is obtained by choosing a generating vector v (one that is not annihilated by $P^{k-1}(A)$, where the minimal polynomial of the cyclic module is $P^{k}$), and taking as basis
$v,A(v),A^{2}(v),\ldots ,A^{d-1}(v),~P(A)(v),A(P(A)(v)),\ldots ,A^{d-1}(P(A)(v)),~P^{2}(A)(v),\ldots ,~P^{k-1}(A)(v),\ldots ,A^{d-1}(P^{k-1}(A)(v))$
where d = deg(P).
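The block shape can be sketched in code (sympy; the helper name and the example polynomials are ours, for illustration only). For a linear factor P = x − λ the construction degenerates to a transposed Jordan block, as stated above:

```python
import sympy as sp

def generalized_jordan_block(p_coeffs, k):
    """k-fold generalized Jordan block for a monic irreducible P, given
    by its coefficient list [1, c_{d-1}, ..., c_0]: companion blocks C
    of P on the diagonal, and U (a single 1 in the upper-right corner)
    on the blocks just below the diagonal."""
    d = len(p_coeffs) - 1              # d = deg(P)
    C = sp.zeros(d, d)
    for i in range(1, d):
        C[i, i - 1] = 1
    for i in range(d):
        C[i, d - 1] = -p_coeffs[d - i]
    J = sp.zeros(k * d, k * d)
    for b in range(k):
        J[b*d:(b+1)*d, b*d:(b+1)*d] = C
        if b > 0:
            U = sp.zeros(d, d)
            U[0, d - 1] = 1
            J[b*d:(b+1)*d, (b-1)*d:b*d] = U
    return J

# linear case P = x - 5, k = 3: a transposed 3x3 Jordan block
J = generalized_jordan_block([1, -5], 3)
assert J == sp.Matrix([[5, 0, 0], [1, 5, 0], [0, 1, 5]])
```

Note that every entry immediately below the main diagonal is 1, whether it comes from a companion block C or from a connecting block U.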
See also
• Smith normal form
References
• [DF] David S. Dummit and Richard M. Foote. Abstract Algebra. 2nd Edition, John Wiley & Sons. pp. 442, 446, 452-458. ISBN 0-471-36857-1.
1. Phani Bhushan Bhattacharya, Surender Kumar Jain, S. R. Nagpaul, Basic abstract algebra, Theorem 5.4, p.423
2. Xavier Gourdon, Les maths en tête, Mathématiques pour M', Algèbre, 1998, Ellipses, Th. 1 p. 173
External links
• Rational Canonical Form (Mathworld)
Algorithms
• An O(n3) Algorithm for Frobenius Normal Form
• An Algorithm for the Frobenius Normal Form (pdf)
• A rational canonical form Algorithm (pdf)
Comodule
In mathematics, a comodule or corepresentation is a concept dual to a module. The definition of a comodule over a coalgebra is formed by dualizing the definition of a module over an associative algebra.
Formal definition
Let K be a field, and C be a coalgebra over K. A (right) comodule over C is a K-vector space M together with a linear map
$\rho \colon M\to M\otimes C$
such that
1. $(\mathrm {id} \otimes \Delta )\circ \rho =(\rho \otimes \mathrm {id} )\circ \rho $
2. $(\mathrm {id} \otimes \varepsilon )\circ \rho =\mathrm {id} $,
where Δ is the comultiplication for C, and ε is the counit.
Note that in the second rule we have identified $M\otimes K$ with $M\,$.
Examples
• A coalgebra is a comodule over itself.
• If M is a finite-dimensional module over a finite-dimensional K-algebra A, then the set of linear functions from A to K forms a coalgebra, and the set of linear functions from M to K forms a comodule over that coalgebra.
• A graded vector space V can be made into a comodule. Let I be the index set for the graded vector space, and let $C_{I}$ be the vector space with basis $e_{i}$ for $i\in I$. We turn $C_{I}$ into a coalgebra and V into a $C_{I}$-comodule, as follows:
1. Let the comultiplication on $C_{I}$ be given by $\Delta (e_{i})=e_{i}\otimes e_{i}$.
2. Let the counit on $C_{I}$ be given by $\varepsilon (e_{i})=1\ $.
3. Let the map $\rho $ on V be given by $\rho (v)=\sum v_{i}\otimes e_{i}$, where $v_{i}$ is the i-th homogeneous piece of $v$.
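To make the axioms concrete, here is a small illustrative sketch (plain Python; the representation and names are ours) of the graded-vector-space example: a vector is stored as a dict of homogeneous pieces, a tensor as a dict keyed by index tuples, and the two comodule axioms become dictionary identities:

```python
# A vector v = sum_i v_i is stored as {i: coefficient of the piece v_i};
# an element v_i (x) e_j (x) e_k of M (x) C (x) C is keyed as (i, j, k).

def rho(v):
    # rho(v) = sum_i v_i (x) e_i
    return {(i, i): c for i, c in v.items() if c != 0}

def id_tensor_delta(t):
    # Delta(e_j) = e_j (x) e_j, applied to the rightmost tensor factor
    return {k + (k[-1],): c for k, c in t.items()}

def rho_tensor_id(t):
    # rho on the M factor: the piece v_{k[0]} is homogeneous of degree k[0]
    return {(k[0], k[0]) + k[1:]: c for k, c in t.items()}

def id_tensor_eps(t):
    # eps(e_j) = 1: drop the last tag, identifying M (x) K with M
    out = {}
    for k, c in t.items():
        out[k[:-1]] = out.get(k[:-1], 0) + c
    return out

v = {0: 2, 3: 5}                       # two homogeneous pieces
assert id_tensor_delta(rho(v)) == rho_tensor_id(rho(v))          # axiom 1
assert id_tensor_eps(rho(v)) == {(i,): c for i, c in v.items()}  # axiom 2
```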
In algebraic topology
One important result in algebraic topology is the fact that homology $H_{*}(X)$ over the dual Steenrod algebra ${\mathcal {A}}^{*}$ forms a comodule.[1] This comes from the fact that the Steenrod algebra ${\mathcal {A}}$ has a canonical action on the cohomology
$\mu :{\mathcal {A}}\otimes H^{*}(X)\to H^{*}(X)$
When we dualize to the dual Steenrod algebra, this gives a comodule structure
$\mu ^{*}:H_{*}(X)\to {\mathcal {A}}^{*}\otimes H_{*}(X)$
This result extends to other cohomology theories as well, such as complex cobordism, and is instrumental in computing its cohomology ring $\Omega _{U}^{*}(\{pt\})$.[2] The main reason for considering the comodule structure on homology instead of the module structure on cohomology is that the dual Steenrod algebra ${\mathcal {A}}^{*}$ is a commutative ring, and the setting of commutative algebra provides more tools for studying its structure.
Rational comodule
If M is a (right) comodule over the coalgebra C, then M is a (left) module over the dual algebra C∗, but the converse is not true in general: a module over C∗ is not necessarily a comodule over C. A rational comodule is a module over C∗ which becomes a comodule over C in the natural way.
Comodule morphisms
Let R be a ring, M, N, and C be R-modules, and
$\rho _{M}:M\rightarrow M\otimes C,\ \rho _{N}:N\rightarrow N\otimes C$
be right C-comodules. Then an R-linear map $f:M\rightarrow N$ is called a (right) comodule morphism, or (right) C-colinear, if
$\rho _{N}\circ f=(f\otimes 1)\circ \rho _{M}.$
This notion is dual to the notion of a linear map between vector spaces, or, more generally, of a homomorphism between R-modules.[3]
See also
• Divided power structure
References
1. Liulevicius, Arunas (1968). "Homology Comodules" (PDF). Transactions of the American Mathematical Society. 134 (2): 375–382. doi:10.2307/1994750. ISSN 0002-9947.
2. Mueller, Michael. "Calculating Cobordism Rings" (PDF). Archived (PDF) from the original on 2 Jan 2021.
3. Khaled AL-Takhman, Equivalences of Comodule Categories for Coalgebras over Rings, J. Pure Appl. Algebra,.V. 173, Issue: 3, September 7, 2002, pp. 245–271
• Gómez-Torrecillas, José (1998), "Coalgebras and comodules over a commutative ring", Revue Roumaine de Mathématiques Pures et Appliquées, 43: 591–603
• Montgomery, Susan (1993). Hopf algebras and their actions on rings. Regional Conference Series in Mathematics. Vol. 82. Providence, RI: American Mathematical Society. ISBN 0-8218-0738-2. Zbl 0793.16029.
• Sweedler, Moss (1969), Hopf Algebras, New York: W.A.Benjamin
Rational dependence
In mathematics, a collection of real numbers is rationally independent if none of them can be written as a linear combination of the other numbers in the collection with rational coefficients. A collection of numbers which is not rationally independent is called rationally dependent. For instance, consider the following example.
${\begin{matrix}{\mbox{independent}}\qquad \\\underbrace {\overbrace {3,\quad {\sqrt {8}}\quad } ,1+{\sqrt {2}}} \\{\mbox{dependent}}\\\end{matrix}}$
Indeed, if we let $x=3,y={\sqrt {8}}$, then $1+{\sqrt {2}}={\frac {1}{3}}x+{\frac {1}{2}}y$.
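This identity can be confirmed symbolically (a minimal sympy sketch):

```python
import sympy as sp

x, y = 3, sp.sqrt(8)
combo = sp.Rational(1, 3)*x + sp.Rational(1, 2)*y
# 1 + sqrt(2) is a rational linear combination of 3 and sqrt(8),
# so the three numbers are rationally dependent
assert sp.simplify(combo - (1 + sp.sqrt(2))) == 0
```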
Formal definition
The real numbers ω1, ω2, ... , ωn are said to be rationally dependent if there exist integers k1, k2, ... , kn, not all of which are zero, such that
$k_{1}\omega _{1}+k_{2}\omega _{2}+\cdots +k_{n}\omega _{n}=0.$
If such integers do not exist, then the numbers are said to be rationally independent. This condition can be reformulated as follows: ω1, ω2, ... , ωn are rationally independent if the only n-tuple of integers k1, k2, ... , kn such that
$k_{1}\omega _{1}+k_{2}\omega _{2}+\cdots +k_{n}\omega _{n}=0$
is the trivial solution in which every ki is zero.
The real numbers form a vector space over the rational numbers, and this is equivalent to the usual definition of linear independence in this vector space.
See also
• Baker's theorem
• Dehn invariant
• Gelfond–Schneider theorem
• Hamel basis
• Hodge conjecture
• Lindemann–Weierstrass theorem
• Linear flow on the torus
• Schanuel's conjecture
Bibliography
• Anatole Katok and Boris Hasselblatt (1996). Introduction to the modern theory of dynamical systems. Cambridge. ISBN 0-521-57557-5.
Elliptic rational functions
In mathematics, the elliptic rational functions are a sequence of rational functions with real coefficients. Elliptic rational functions are extensively used in the design of elliptic electronic filters. (These functions are sometimes called Chebyshev rational functions, not to be confused with certain other functions of the same name.)
Elliptic rational functions are identified by a positive integer order n and a parameter ξ ≥ 1 called the selectivity factor. An elliptic rational function of degree n in x with selectivity factor ξ is generally defined as:
$R_{n}(\xi ,x)\equiv \mathrm {cd} \left(n{\frac {K(1/L_{n}(\xi ))}{K(1/\xi )}}\,\mathrm {cd} ^{-1}(x,1/\xi ),1/L_{n}(\xi )\right)$
where
• cd(u,k) is the Jacobi elliptic cosine function.
• K() is a complete elliptic integral of the first kind.
• $L_{n}(\xi )=R_{n}(\xi ,\xi )$ is the discrimination factor, equal to the minimum value of the magnitude of $R_{n}(\xi ,x)$ for $|x|\geq \xi $.
For many cases, in particular for orders of the form n = 2a3b where a and b are integers, the elliptic rational functions can be expressed using algebraic functions alone. Elliptic rational functions are closely related to the Chebyshev polynomials: Just as the circular trigonometric functions are special cases of the Jacobi elliptic functions, so the Chebyshev polynomials are special cases of the elliptic rational functions.
Expression as a ratio of polynomials
For even orders, the elliptic rational functions may be expressed as a ratio of two polynomials, both of order n.
$R_{n}(\xi ,x)=r_{0}\,{\frac {\prod _{i=1}^{n}(x-x_{i})}{\prod _{i=1}^{n}(x-x_{pi})}}$ (for n even)
where $x_{i}$ are the zeroes and $x_{pi}$ are the poles, and $r_{0}$ is a normalizing constant chosen such that $R_{n}(\xi ,1)=1$. The above form would hold for odd orders as well, except that an odd-order function has a pole at x=∞ and a zero at x=0, so that the form must be modified to read:
$R_{n}(\xi ,x)=r_{0}\,x\,{\frac {\prod _{i=1}^{n-1}(x-x_{i})}{\prod _{i=1}^{n-1}(x-x_{pi})}}$ (for n odd)
Properties
The canonical properties
• $R_{n}^{2}(\xi ,x)\leq 1$ for $|x|\leq 1\,$
• $R_{n}^{2}(\xi ,x)=1$ at $|x|=1\,$
• $R_{n}^{2}(\xi ,-x)=R_{n}^{2}(\xi ,x)$
• $R_{n}^{2}(\xi ,x)>1$ for $x>1\,$
• The slope at x=1 is as large as possible
• The slope at x=1 is larger than the corresponding slope of the Chebyshev polynomial of the same order.
The only rational function satisfying the above properties is the elliptic rational function (Lutovac, Tošić & Evans 2001, § 13.2). The following properties are derived:
Normalization
The elliptic rational function is normalized to unity at x=1:
$R_{n}(\xi ,1)=1\,$
Nesting property
The nesting property is written:
$R_{m}(R_{n}(\xi ,\xi ),R_{n}(\xi ,x))=R_{m\cdot n}(\xi ,x)\,$
This is a very important property:
• If $R_{n}$ is known for all prime n, then the nesting property gives $R_{n}$ for all n. In particular, since $R_{2}$ and $R_{3}$ can be expressed in closed form without explicit use of the Jacobi elliptic functions, all $R_{n}$ for n of the form $n=2^{a}3^{b}$ can be so expressed.
• It follows that if the zeroes of $R_{n}$ for prime n are known, the zeros of all $R_{n}$ can be found. Using the inversion relationship (see below), the poles can also be found.
• The nesting property implies the nesting property of the discrimination factor:
$L_{m\cdot n}(\xi )=L_{m}(L_{n}(\xi ))$
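The nesting property can be checked numerically using only the closed form of $R_{2}$ (given under "Particular values" below); this is a plain-Python sketch with illustrative names:

```python
import math

def R2(xi, x):
    # closed form of R_2 (see "Particular values" below)
    t = math.sqrt(1 - 1/xi**2)
    return ((t + 1)*x**2 - 1) / ((t - 1)*x**2 + 1)

xi, x = 1.5, 0.3
# nesting with m = n = 2: R_4(xi, x) = R_2(R_2(xi, xi), R_2(xi, x))
nested = R2(R2(xi, xi), R2(xi, x))

# compare against the closed form of R_4
t = math.sqrt(1 - 1/xi**2)
st = math.sqrt(t)
num = (1 + t)*(1 + st)**2*x**4 - 2*(1 + t)*(1 + st)*x**2 + 1
den = (1 + t)*(1 - st)**2*x**4 - 2*(1 + t)*(1 - st)*x**2 + 1
assert abs(nested - num/den) < 1e-9
```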
Limiting values
The elliptic rational functions are related to the Chebyshev polynomials of the first kind $T_{n}(x)$ by:
$\lim _{\xi \rightarrow \,\infty }R_{n}(\xi ,x)=T_{n}(x)\,$
Symmetry
$R_{n}(\xi ,-x)=R_{n}(\xi ,x)\,$ for n even
$R_{n}(\xi ,-x)=-R_{n}(\xi ,x)\,$ for n odd
Equiripple
$R_{n}(\xi ,x)$ has equal ripple of $\pm 1$ in the interval $-1\leq x\leq 1$. By the inversion relationship (see below), it follows that $1/R_{n}(\xi ,x)$ has equiripple of $\pm 1/L_{n}(\xi )$ for $|x|\geq \xi $.
Inversion relationship
The following inversion relationship holds:
$R_{n}(\xi ,\xi /x)={\frac {R_{n}(\xi ,\xi )}{R_{n}(\xi ,x)}}\,$
This implies that poles and zeroes come in pairs such that
$x_{pi}x_{zi}=\xi \,$
Odd order functions will have a zero at x=0 and a corresponding pole at infinity.
Poles and Zeroes
The zeroes of the elliptic rational function of order n will be written $x_{ni}(\xi )$ or $x_{ni}$ when $\xi $ is implicitly known. The zeroes of the elliptic rational function will be the zeroes of the polynomial in the numerator of the function.
The following derivation of the zeroes of the elliptic rational function is analogous to that of determining the zeroes of the Chebyshev polynomials (Lutovac, Tošić & Evans 2001, § 12.6). Using the fact that for any z
$\mathrm {cd} \left((2m-1)K\left(1/z\right),{\frac {1}{z}}\right)=0\,$
the defining equation for the elliptic rational functions implies that
$n{\frac {K(1/L_{n})}{K(1/\xi )}}\mathrm {cd} ^{-1}(x_{m},1/\xi )=(2m-1)K(1/L_{n})$
so that the zeroes are given by
$x_{m}=\mathrm {cd} \left(K(1/\xi )\,{\frac {2m-1}{n}},{\frac {1}{\xi }}\right).$
Using the inversion relationship, the poles may then be calculated.
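Numerically, the zeroes can be evaluated with SciPy's Jacobi elliptic functions (a sketch; note that scipy's `ellipj` takes the parameter m = k²). For n = 2 the result matches the closed-form zeroes ±ξ√(1−t) listed under "Particular values" below:

```python
import numpy as np
from scipy.special import ellipj, ellipk

def cd(u, k):
    # Jacobi cd = cn/dn; scipy parametrizes by m = k**2
    _, cn, dn, _ = ellipj(u, k**2)
    return cn / dn

def zeros(n, xi):
    k = 1.0 / xi
    K = ellipk(k**2)
    return [cd(K*(2*m - 1)/n, k) for m in range(1, n + 1)]

xi = 1.3
t = np.sqrt(1 - 1/xi**2)
z = zeros(2, xi)
# closed-form zeroes of R_2 are x = ±xi*sqrt(1 - t)
assert np.allclose(sorted(abs(v) for v in z), [xi*np.sqrt(1 - t)]*2)
```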
From the nesting property, if the zeroes of $R_{m}$ and $R_{n}$ can be algebraically expressed (i.e. without the need for calculating the Jacobi elliptic functions) then the zeroes of $R_{m\cdot n}$ can be algebraically expressed. In particular, the zeroes of elliptic rational functions of order $2^{i}3^{j}$ may be algebraically expressed (Lutovac, Tošić & Evans 2001, § 12.9, 13.9). For example, we can find the zeroes of $R_{8}(\xi ,x)$ as follows: Define
$X_{n}\equiv R_{n}(\xi ,x)\qquad L_{n}\equiv R_{n}(\xi ,\xi )\qquad t_{n}\equiv {\sqrt {1-1/L_{n}^{2}}}.$
Then, from the nesting property and knowing that
$R_{2}(\xi ,x)={\frac {(t+1)x^{2}-1}{(t-1)x^{2}+1}}$
where $t\equiv {\sqrt {1-1/\xi ^{2}}}$ we have:
$L_{2}={\frac {1+t}{1-t}},\qquad L_{4}={\frac {1+t_{2}}{1-t_{2}}},\qquad L_{8}={\frac {1+t_{4}}{1-t_{4}}}$
$X_{2}={\frac {(t+1)x^{2}-1}{(t-1)x^{2}+1}},\qquad X_{4}={\frac {(t_{2}+1)X_{2}^{2}-1}{(t_{2}-1)X_{2}^{2}+1}},\qquad X_{8}={\frac {(t_{4}+1)X_{4}^{2}-1}{(t_{4}-1)X_{4}^{2}+1}}.$
These last three equations may be inverted:
$x={\frac {1}{\pm {\sqrt {1+t\,\left({\frac {1-X_{2}}{1+X_{2}}}\right)}}}},\qquad X_{2}={\frac {1}{\pm {\sqrt {1+t_{2}\,\left({\frac {1-X_{4}}{1+X_{4}}}\right)}}}},\qquad X_{4}={\frac {1}{\pm {\sqrt {1+t_{4}\,\left({\frac {1-X_{8}}{1+X_{8}}}\right)}}}}.\qquad $
To calculate the zeroes of $R_{8}(\xi ,x)$ we set $X_{8}=0$ in the third equation, calculate the two values of $X_{4}$, then use these values of $X_{4}$ in the second equation to calculate four values of $X_{2}$ and finally, use these values in the first equation to calculate the eight zeroes of $R_{8}(\xi ,x)$. (The $t_{n}$ are calculated by a similar recursion.) Again, using the inversion relationship, these zeroes can be used to calculate the poles.
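The recursion just described can be sketched in code (plain Python; the function names are ours). Inverting from X₈ = 0 down to x yields the eight zeroes, which can then be verified by running the nesting chain forward:

```python
import math

def _t(L):
    # t_n = sqrt(1 - 1/L_n^2)
    return math.sqrt(1 - 1/L**2)

def zeros_R8(xi):
    t = _t(xi)
    L2 = (1 + t)/(1 - t);   t2 = _t(L2)
    L4 = (1 + t2)/(1 - t2); t4 = _t(L4)

    def invert(tn, X):
        # x = ±1/sqrt(1 + t_n*(1 - X)/(1 + X))
        r = 1/math.sqrt(1 + tn*(1 - X)/(1 + X))
        return [r, -r]

    return sorted(x for X4 in invert(t4, 0.0)     # X8 = 0
                    for X2 in invert(t2, X4)
                    for x in invert(t, X2))

def R8(xi, x):
    # forward nesting chain X_2 -> X_4 -> X_8
    t = _t(xi)
    L2 = (1 + t)/(1 - t);   t2 = _t(L2)
    L4 = (1 + t2)/(1 - t2); t4 = _t(L4)
    for tn in (t, t2, t4):
        x = ((tn + 1)*x**2 - 1)/((tn - 1)*x**2 + 1)
    return x

zs = zeros_R8(1.2)
assert len(zs) == 8
assert all(abs(R8(1.2, z)) < 1e-8 for z in zs)
```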
Particular values
We may write the first few elliptic rational functions as:
$R_{1}(\xi ,x)=x\,$
$R_{2}(\xi ,x)={\frac {(t+1)x^{2}-1}{(t-1)x^{2}+1}}$
where
$t\equiv {\sqrt {1-{\frac {1}{\xi ^{2}}}}}$
$R_{3}(\xi ,x)=x\,{\frac {(1-x_{p}^{2})(x^{2}-x_{z}^{2})}{(1-x_{z}^{2})(x^{2}-x_{p}^{2})}}$
where
$G\equiv {\sqrt {4\xi ^{2}+(4\xi ^{2}(\xi ^{2}\!-\!1))^{2/3}}}$
$x_{p}^{2}\equiv {\frac {2\xi ^{2}{\sqrt {G}}}{{\sqrt {8\xi ^{2}(\xi ^{2}\!+\!1)+12G\xi ^{2}-G^{3}}}-{\sqrt {G^{3}}}}}$
$x_{z}^{2}=\xi ^{2}/x_{p}^{2}$
$R_{4}(\xi ,x)=R_{2}(R_{2}(\xi ,\xi ),R_{2}(\xi ,x))={\frac {(1+t)(1+{\sqrt {t}})^{2}x^{4}-2(1+t)(1+{\sqrt {t}})x^{2}+1}{(1+t)(1-{\sqrt {t}})^{2}x^{4}-2(1+t)(1-{\sqrt {t}})x^{2}+1}}$
$R_{6}(\xi ,x)=R_{3}(R_{2}(\xi ,\xi ),R_{2}(\xi ,x))\,$ etc.
See Lutovac, Tošić & Evans (2001, § 13) for further explicit expressions of order n=5 and $n=2^{i}\,3^{j}$.
The corresponding discrimination factors are:
$L_{1}(\xi )=\xi \,$
$L_{2}(\xi )={\frac {1+t}{1-t}}=\left(\xi +{\sqrt {\xi ^{2}-1}}\right)^{2}$
$L_{3}(\xi )=\xi ^{3}\left({\frac {1-x_{p}^{2}}{\xi ^{2}-x_{p}^{2}}}\right)^{2}$
$L_{4}(\xi )=\left({\sqrt {\xi }}+(\xi ^{2}-1)^{1/4}\right)^{4}\left(\xi +{\sqrt {\xi ^{2}-1}}\right)^{2}$
$L_{6}(\xi )=L_{3}(L_{2}(\xi ))\,$ etc.
The corresponding zeroes are $x_{nj}$ where n is the order and j is the number of the zero. There will be a total of n zeroes for each order.
$x_{11}=0\,$
$x_{21}=\xi {\sqrt {1-t}}\,$
$x_{22}=-x_{21}\,$
$x_{31}=x_{z}\,$
$x_{32}=0\,$
$x_{33}=-x_{31}\,$
$x_{41}=\xi {\sqrt {\left(1-{\sqrt {t}}\right)\left(1+t-{\sqrt {t(t+1)}}\right)}}\,$
$x_{42}=\xi {\sqrt {\left(1-{\sqrt {t}}\right)\left(1+t+{\sqrt {t(t+1)}}\right)}}\,$
$x_{43}=-x_{42}\,$
$x_{44}=-x_{41}\,$
From the inversion relationship, the corresponding poles $x_{p,ni}$ may be found by $x_{p,ni}=\xi /x_{ni}$.
References
• MathWorld
• Daniels, Richard W. (1974). Approximation Methods for Electronic Filter Design. New York: McGraw-Hill. ISBN 0-07-015308-6.
• Lutovac, Miroslav D.; Tošić, Dejan V.; Evans, Brian L. (2001). Filter Design for Signal Processing using MATLAB© and Mathematica©. New Jersey, USA: Prentice Hall. ISBN 0-201-36130-2.
Logical intuition
Logical intuition, or mathematical intuition or rational intuition, is a kind of instinctive foresight, know-how, and savviness often associated with the ability to perceive logical or mathematical truth—and the ability to solve mathematical challenges efficiently.[1] Humans apply logical intuition in proving mathematical theorems,[2] validating logical arguments,[3] developing algorithms and heuristics,[4] and in related contexts where mathematical challenges are involved.[5] The ability to recognize logical or mathematical truth and identify viable methods may vary from person to person, and may even be a result of knowledge and experience, which are subject to cultivation.[6] The ability may not be realizable in a computer program by means other than genetic programming or evolutionary programming.[7]
History
Plato and Aristotle considered intuition a means for perceiving ideas, significant enough that for Aristotle, intuition comprised the only means of knowing principles that are not subject to argument.[8]
Henri Poincaré distinguished logical intuition from other forms of intuition. In his book The Value of Science, he points out that:
...[T]here are many kinds of intuition. I have said how much the intuition of pure number, whence comes rigorous mathematical induction, differs from sensible intuition to which the imagination, properly so called, is the principal contributor.[9]
The passage goes on to assign two roles to logical intuition: to permit one to choose which route to follow in search of scientific truth, and to allow one to comprehend logical developments.[10]
Bertrand Russell, though critical of intuitive mysticism,[11] pointed out that the degree to which a truth is self-evident according to logical intuition can vary, from one situation to another, and stated that some self-evident truths are practically infallible:
When a certain number of logical principles have been admitted, the rest can be deduced from them; but the propositions deduced are often just as self-evident as those that were assumed without proof. All arithmetic, moreover, can be deduced from the general principles of logic, yet the simple propositions of arithmetic, such as 'two and two are four', are just as self-evident as the principles of logic.[12]
Kurt Gödel demonstrated, based on his incompleteness theorems, that intuitionistic propositional calculus cannot be finitely valued.[13] Gödel also likened logical intuition to sense perception, and considered the mathematical constructs that humans perceive to have an independent existence of their own.[14] Under this line of reasoning, the human mind's ability to sense such abstract constructs may not be finitely implementable.[15]
Discussion
Dissent regarding the value of intuition in a logical or mathematical context may often hinge on the breadth of the definition of intuition and the psychological underpinning of the word.[16][17] Dissent regarding the implications of logical intuition in the fields of artificial intelligence and cognitive computing may similarly hinge on definitions. However, the similarity between the potentially infinite nature of logical intuition posited by Gödel and the hard problem of consciousness posited by David Chalmers suggests that the realms of intuitive knowledge and experiential consciousness may both have aspects that are not reducible to classical physics concepts.[18]
See also
• Intuition
• Epistemology
• Philosophy of mind
• Philosophy of mathematics
• Cognition
• Numerical cognition
• Consciousness
• Hard problem of consciousness
• Panpsychism
• Transcendental idealism
• Intuitionism
• Intuitionistic logic
• Continuum hypothesis
• Logical truth
References
1. Parsons, Charles (1980). "X - Mathematical Intuition". Proceedings of the Aristotelian Society. 80 (New Series): 145–168. doi:10.1093/aristotelian/80.1.145. JSTOR 4544956.
2. Lipton, Richard (2010). "Mathematical Intuition—What Is It?".
3. Nakamura, Hiroko; Kawaguchi, Jun (2016). "People Like Logical Truth: Testing the Intuitive Detection of Logical Value in Basic Propositions". PLOS ONE. 11 (12): e0169166. doi:10.1371/journal.pone.0169166. PMC 5201307. PMID 28036402.
4. "Intuitive way to understand tree recursion". StackOverflow.com. 2014.
5. "Godel and the Nature of Mathematical Truth - A Talk with Rebecca Newberger Goldstein". Edge Foundation, Inc. 2005.
6. "Developing Your Intuition For Math". BetterExplained.com.
7. Rucker, Rudy. Infinity and the Mind. Princeton University Press., section 330 "Artificial Intelligence via Evolutionary Processes"
8. Piętka, Dariusz (2015). "The Concept of Intuition and Its Role in Plato and Aristotle". Organon. {{cite journal}}: Cite journal requires |journal= (help)
9. Poincaré, Henri (1905). "Intuition and Logic in Mathematics, from the book The Value of Science".
10. Poincaré, Henri (1905). The Value of Science.
11. Popova, Maria (2016). "A Largeness of Contemplation: Bertrand Russell on Intuition, the Intellect, and the Nature of Time". BrainPickings.org.
12. Russell, Bertrand (1912). Problems of Philosophy. Chapter XI "On Intuitive Knowledge"
13. Kennedy, Juliette (2015). Kurt Gödel. Stanford Encyclopedia of Philosophy.
14. Ravitch, Harold (1998). "On Gödel's Philosophy of Mathematics".
15. Solomon, Martin (1998). "On Kurt Gödel's Philosophy of Mathematics".
16. XiXiDu (2011). "Intuition and Mathematics".
17. Burton, Leone (2014). "Why is Intuition so Important to Mathematicians but Missing from Mathematics Education?" (PDF). Semantic Scholar. S2CID 56059874. Archived (PDF) from the original on 2019-10-21. Retrieved October 21, 2019.
18. Aas, Benjamin (2011). "Body-Gödel-Mind: The unsolvability of the hard problem of consciousness" (PDF).
2-bridge knot
In the mathematical field of knot theory, a 2-bridge knot is a knot which can be isotoped so that the natural height function given by the z-coordinate has only two maxima and two minima as critical points. Equivalently, these are the knots with bridge number 2, the smallest possible bridge number for a nontrivial knot.
Knots with bridge number 2 include the knots 31 (trefoil), 51, 63, and 71, among others.
Other names for 2-bridge knots are rational knots, 4-plats, and Viergeflechte (German for 'four braids'). 2-bridge links are defined similarly, but each component has a single minimum and maximum. 2-bridge knots were classified by Horst Schubert, using the fact that the 2-sheeted branched cover of the 3-sphere over the knot is a lens space.
Schubert normal form
The names rational knot and rational link were coined by John Conway who defined them as arising from numerator closures of rational tangles. This definition can be used to give a bijection between the set of 2-bridge links and the set of rational numbers; the rational number associated to a given link is called the Schubert normal form of the link (as this invariant was first defined by Schubert[1]), and is precisely the fraction associated to the rational tangle whose numerator closure gives the link.[2]: chapter 10
Further reading
• Louis H. Kauffman, Sofia Lambropoulou: On the classification of rational knots, L' Enseignement Mathématique, 49:357–410 (2003). preprint available at arxiv.org
• C. C. Adams, The Knot Book: An elementary introduction to the mathematical theory of knots. American Mathematical Society, Providence, RI, 2004. xiv+307 pp. ISBN 0-8218-3678-1
References
1. Schubert, Horst (1956). "Knoten mit zwei Brücken". Mathematische Zeitschrift. 65: 133–170. doi:10.1007/bf01473875.
2. Purcell, Jessica (2020). Hyperbolic knot theory. American Mathematical Society. ISBN 978-1-4704-5499-9.
External links
• Table and invariants of rational knots with up to 16 crossings
Regular language
In theoretical computer science and formal language theory, a regular language (also called a rational language)[1][2] is a formal language that can be defined by a regular expression, in the strict sense in theoretical computer science (as opposed to many modern regular expression engines, which are augmented with features that allow the recognition of non-regular languages).
"Kleene's theorem" redirects here. For his theorems for recursive functions, see Kleene's recursion theorem.
Alternatively, a regular language can be defined as a language recognized by a finite automaton. The equivalence of regular expressions and finite automata is known as Kleene's theorem[3] (after American mathematician Stephen Cole Kleene). In the Chomsky hierarchy, regular languages are the languages generated by Type-3 grammars.
Formal definition
The collection of regular languages over an alphabet Σ is defined recursively as follows:
• The empty language Ø is a regular language.
• For each a ∈ Σ (a belongs to Σ), the singleton language {a} is a regular language.
• If A is a regular language, A* (Kleene star) is a regular language. Due to this, the empty string language {ε} is also regular.
• If A and B are regular languages, then A ∪ B (union) and A • B (concatenation) are regular languages.
• No other languages over Σ are regular.
See regular expression for syntax and semantics of regular expressions.
Examples
All finite languages are regular; in particular the empty string language {ε} = Ø* is regular. Other typical examples include the language consisting of all strings over the alphabet {a, b} which contain an even number of a's, or the language consisting of all strings of the form: several a's followed by several b's.
A simple example of a language that is not regular is the set of strings {a^n b^n | n ≥ 0}.[4] Intuitively, it cannot be recognized with a finite automaton, since a finite automaton has finite memory and it cannot remember the exact number of a's. Techniques to prove this fact rigorously are given below.
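By contrast, the even-number-of-a's language above is recognized by a two-state automaton; the following sketch (names ours) simulates it directly:

```python
def accepts_even_as(s):
    """DFA with states {even, odd}; start and accept in 'even';
    reading 'a' flips the state, reading 'b' leaves it unchanged."""
    state = 0                      # 0 = even number of a's seen so far
    for ch in s:
        if ch == 'a':
            state = 1 - state
    return state == 0

assert accepts_even_as("")         # zero a's is even
assert accepts_even_as("abba")
assert not accepts_even_as("ab")
```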
Equivalent formalisms
A regular language satisfies the following equivalent properties:
1. it is the language of a regular expression (by the above definition)
2. it is the language accepted by a nondeterministic finite automaton (NFA)[note 1][note 2]
3. it is the language accepted by a deterministic finite automaton (DFA)[note 3][note 4]
4. it can be generated by a regular grammar[note 5][note 6]
5. it is the language accepted by an alternating finite automaton
6. it is the language accepted by a two-way finite automaton
7. it can be generated by a prefix grammar
8. it can be accepted by a read-only Turing machine
9. it can be defined in monadic second-order logic (Büchi–Elgot–Trakhtenbrot theorem)[5]
10. it is recognized by some finite syntactic monoid M, meaning it is the preimage {w ∈ Σ* | f(w) ∈ S} of a subset S of a finite monoid M under a monoid homomorphism f: Σ* → M from the free monoid on its alphabet[note 7]
11. the number of equivalence classes of its syntactic congruence is finite.[note 8][note 9] (This number equals the number of states of the minimal deterministic finite automaton accepting L.)
Properties 10. and 11. are purely algebraic approaches to define regular languages; a similar set of statements can be formulated for a monoid M ⊆ Σ*. In this case, equivalence over M leads to the concept of a recognizable language.
Some authors use one of the above properties, other than "1.", as an alternative definition of regular languages.
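Several of the implications among these formalisms are constructive; for instance, the powerset construction behind "2. ⇒ 3." builds a DFA whose states are sets of NFA states. A sketch, assuming (illustratively) that NFA transitions are given as a dictionary from (state, symbol) pairs to sets of states, with no ε-moves:

```python
# Powerset (subset) construction, "2. => 3.": a DFA whose states are sets of
# NFA states. The NFA encoding here is illustrative, not a fixed convention.
from collections import deque

def powerset_construction(alphabet, delta, start, accepting):
    start_set = frozenset([start])
    dfa_delta, seen, queue = {}, {start_set}, deque([start_set])
    while queue:
        S = queue.popleft()
        for a in alphabet:
            # union of NFA successors of all states in S under symbol a
            T = frozenset(q for s in S for q in delta.get((s, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    dfa_accepting = {S for S in seen if S & accepting}
    return dfa_delta, start_set, dfa_accepting

# NFA for strings over {a, b} ending in "ab"
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
dfa_delta, start, acc = powerset_construction("ab", delta, 0, {2})
S = start
for ch in "abab":
    S = dfa_delta[(S, ch)]
assert S in acc
```

Only the reachable subsets are constructed, so the resulting DFA is often much smaller than the full power set of NFA states.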
Some of the equivalences above, particularly those among the first four formalisms, are called Kleene's theorem in textbooks. Precisely which one (or which subset) is called such varies between authors. One textbook calls the equivalence of regular expressions and NFAs ("1." and "2." above) "Kleene's theorem".[6] Another textbook calls the equivalence of regular expressions and DFAs ("1." and "3." above) "Kleene's theorem".[7] Two other textbooks first prove the expressive equivalence of NFAs and DFAs ("2." and "3.") and then state "Kleene's theorem" as the equivalence between regular expressions and finite automata (the latter said to describe "recognizable languages").[2][8] A linguistically oriented text first equates regular grammars ("4." above) with DFAs and NFAs, calls the languages generated by (any of) these "regular", after which it introduces regular expressions which it terms to describe "rational languages", and finally states "Kleene's theorem" as the coincidence of regular and rational languages.[9] Other authors simply define "rational expression" and "regular expressions" as synonymous and do the same with "rational languages" and "regular languages".[1][2]
Apparently, the term "regular" originates from a 1951 technical report where Kleene introduced "regular events" and explicitly welcomed "any suggestions as to a more descriptive term".[10] Noam Chomsky, in his 1959 seminal article, used the term "regular" in a different meaning at first (referring to what is called "Chomsky normal form" today),[11] but noticed that his "finite state languages" were equivalent to Kleene's "regular events".[12]
Closure properties
The regular languages are closed under various operations, that is, if the languages K and L are regular, so is the result of the following operations:
• the set-theoretic Boolean operations: union K ∪ L, intersection K ∩ L, and complement of L, hence also relative complement K − L.[13]
• the regular operations: K ∪ L, concatenation $K\circ L$, and Kleene star L*.[14]
• the trio operations: string homomorphism, inverse string homomorphism, and intersection with regular languages. As a consequence they are closed under arbitrary finite state transductions, like quotient K / L with a regular language. Even more, regular languages are closed under quotients with arbitrary languages: If L is regular then L / K is regular for any K.
• the reverse (or mirror image) $L^{R}$.[15] Given a nondeterministic finite automaton to recognize L, an automaton for $L^{R}$ can be obtained by reversing all transitions and interchanging starting and finishing states. This may result in multiple starting states; ε-transitions can be used to join them.
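The Boolean closure properties can be realized by the product construction: run DFAs for K and L in lockstep on the same word and combine their acceptance conditions. A sketch, with each DFA encoded (illustratively) as a (delta, start, accepting) triple:

```python
# Product construction: run two complete DFAs in lockstep on the same word.
def product_accepts(dfa1, dfa2, word, mode="intersection"):
    (d1, q1, f1), (d2, q2, f2) = dfa1, dfa2
    for ch in word:
        q1, q2 = d1[(q1, ch)], d2[(q2, ch)]
    if mode == "intersection":
        return q1 in f1 and q2 in f2
    return q1 in f1 or q2 in f2  # mode == "union"

# even number of a's, and: last symbol is 'b'
d_even = ({(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}, 0, {0})
d_endb = ({(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}, 0, {1})
assert product_accepts(d_even, d_endb, "aab")      # two a's, ends in 'b'
assert not product_accepts(d_even, d_endb, "ab")   # odd number of a's
```

Complementation is even simpler for a complete DFA: swap accepting and non-accepting states.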
Decidability properties
Given two deterministic finite automata A and B, it is decidable whether they accept the same language.[16] As a consequence, using the above closure properties, the following problems are also decidable for arbitrarily given deterministic finite automata A and B, with accepted languages LA and LB, respectively:
• Containment: is LA ⊆ LB ?[note 10]
• Disjointness: is LA ∩ LB = {} ?
• Emptiness: is LA = {} ?
• Universality: is LA = Σ* ?
• Membership: given w ∈ Σ*, is w ∈ LB ?
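Equivalence of two DFAs, from which the listed problems follow via the closure constructions, can be decided by exploring the reachable pairs of states of the product automaton and checking that the two components always agree on acceptance. A sketch, with DFAs encoded (illustratively) as (delta, start, accepting) triples:

```python
# Deciding equivalence of two complete DFAs over a shared alphabet:
# breadth-first search over reachable state pairs; the languages differ
# iff some reachable pair disagrees on acceptance.
from collections import deque

def equivalent(dfa_a, dfa_b, alphabet):
    (da, sa, fa), (db, sb, fb) = dfa_a, dfa_b
    seen, queue = {(sa, sb)}, deque([(sa, sb)])
    while queue:
        p, q = queue.popleft()
        if (p in fa) != (q in fb):
            return False  # some word leads here, so it separates the languages
        for ch in alphabet:
            nxt = (da[(p, ch)], db[(q, ch)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# parity of a's tracked mod 2 versus mod 4: same language
da = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
db = {(q, 'a'): (q + 1) % 4 for q in range(4)}
db.update({(q, 'b'): q for q in range(4)})
assert equivalent((da, 0, {0}), (db, 0, {0, 2}), "ab")
assert not equivalent((da, 0, {0}), (da, 0, {1}), "ab")
```

Recording the symbols along the search path would additionally produce a witness word when the automata are not equivalent.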
For regular expressions, the universality problem is NP-complete already for a singleton alphabet.[17] For larger alphabets, that problem is PSPACE-complete.[18] If regular expressions are extended to allow also a squaring operator, with "$A^{2}$" denoting the same as "AA", still only regular languages can be described, but the universality problem has an exponential space lower bound,[19][20][21] and is in fact complete for exponential space with respect to polynomial-time reduction.[22]
For a fixed finite alphabet, the theory of the set of all languages — together with strings, membership of a string in a language, and for each character, a function to append the character to a string (and no other operations) — is decidable, and its minimal elementary substructure consists precisely of regular languages. For a binary alphabet, the theory is called S2S.[23]
Complexity results
In computational complexity theory, the complexity class of all regular languages is sometimes referred to as REGULAR or REG and equals DSPACE(O(1)), the decision problems that can be solved in constant space (the space used is independent of the input size). REGULAR ≠ AC0, since REGULAR (trivially) contains the parity problem of determining whether the number of 1 bits in the input is even or odd, and this problem is not in AC0.[24] On the other hand, REGULAR does not contain AC0, because the nonregular language of palindromes and the nonregular language $\{0^{n}1^{n}:n\in \mathbb {N} \}$ can both be recognized in AC0.[25]
If a language is not regular, it requires a machine with at least Ω(log log n) space to recognize (where n is the input size).[26] In other words, DSPACE(o(log log n)) equals the class of regular languages. In practice, most nonregular problems are solved by machines taking at least logarithmic space.
Location in the Chomsky hierarchy
To locate the regular languages in the Chomsky hierarchy, one notices that every regular language is context-free. The converse is not true: for example, the language consisting of all strings having the same number of a's as b's is context-free but not regular. To prove that a language is not regular, one often uses the Myhill–Nerode theorem and the pumping lemma. Other approaches include using the closure properties of regular languages[27] or quantifying Kolmogorov complexity.[28]
Important subclasses of regular languages include
• Finite languages, those containing only a finite number of words.[29] These are regular languages, as one can create a regular expression that is the union of every word in the language.
• Star-free languages, those that can be described by a regular expression constructed from the empty symbol, letters, concatenation and all boolean operators (see algebra of sets) including complementation but not the Kleene star: this class includes all finite languages.[30]
The number of words in a regular language
Let $s_{L}(n)$ denote the number of words of length $n$ in $L$. The ordinary generating function for L is the formal power series
$S_{L}(z)=\sum _{n\geq 0}s_{L}(n)z^{n}\ .$
The generating function of a language L is a rational function if L is regular.[31] Hence for every regular language $L$ the sequence $s_{L}(n)_{n\geq 0}$ is constant-recursive; that is, there exist an integer constant $n_{0}$, complex constants $\lambda _{1},\,\ldots ,\,\lambda _{k}$ and complex polynomials $p_{1}(x),\,\ldots ,\,p_{k}(x)$ such that for every $n\geq n_{0}$ the number $s_{L}(n)$ of words of length $n$ in $L$ is $s_{L}(n)=p_{1}(n)\lambda _{1}^{n}+\dotsb +p_{k}(n)\lambda _{k}^{n}$.[32][33][34][35]
Thus, non-regularity of certain languages $L'$ can be proved by counting the words of a given length in $L'$. Consider, for example, the Dyck language of strings of balanced parentheses. The number of words of length $2n$ in the Dyck language is equal to the Catalan number $C_{n}\sim {\frac {4^{n}}{n^{3/2}{\sqrt {\pi }}}}$, which is not of the form $p(n)\lambda ^{n}$, witnessing the non-regularity of the Dyck language. Care must be taken since some of the eigenvalues $\lambda _{i}$ could have the same magnitude. For example, the number of words of length $n$ in the language of all even binary words is not of the form $p(n)\lambda ^{n}$, but the number of words of even or odd length are of this form; the corresponding eigenvalues are $2,-2$. In general, for every regular language there exists a constant $d$ such that for all $a$, the number of words of length $dm+a$ is asymptotically $C_{a}m^{p_{a}}\lambda _{a}^{m}$.[36]
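The sequence $s_{L}(n)$ can be computed directly from a DFA by dynamic programming over states, which is equivalent to powering the transfer matrix and makes the constant-recursive behavior concrete. A sketch, using the even-number-of-a's language, where $s_{L}(0)=1$ and $s_{L}(n)=2^{n-1}$ for n ≥ 1 (encodings are illustrative):

```python
# Count words of a given length accepted by a complete DFA. The update step
# is multiplication by the DFA's transfer matrix, so s_L(n) satisfies a
# linear recurrence with constant coefficients.
def count_words(delta, start, accepting, alphabet, n):
    counts = {start: 1}  # number of length-k words reaching each state
    for _ in range(n):
        nxt = {}
        for q, c in counts.items():
            for ch in alphabet:
                r = delta[(q, ch)]
                nxt[r] = nxt.get(r, 0) + c
        counts = nxt
    return sum(c for q, c in counts.items() if q in accepting)

# even number of a's over {a, b}
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
assert [count_words(delta, 0, {0}, "ab", n) for n in range(5)] == [1, 1, 2, 4, 8]
```

Here the eigenvalues of the transfer matrix are 2 and 0, matching the closed form $s_{L}(n)=2^{n-1}$ for n ≥ 1; a sequence growing like the Catalan numbers can never arise this way.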
The zeta function of a language L is[31]
$\zeta _{L}(z)=\exp \left({\sum _{n\geq 0}s_{L}(n){\frac {z^{n}}{n}}}\right).$
The zeta function of a regular language is not in general rational, but that of an arbitrary cyclic language is.[37][38]
Generalizations
The notion of a regular language has been generalized to infinite words (see ω-automata) and to trees (see tree automaton).
Rational set generalizes the notion (of regular/rational language) to monoids that are not necessarily free. Likewise, the notion of a recognizable language (by a finite automaton) has as its namesake the notion of a recognizable set over a monoid that is not necessarily free. Howard Straubing notes in relation to these facts that "The term "regular language" is a bit unfortunate. Papers influenced by Eilenberg's monograph[39] often use either the term "recognizable language", which refers to the behavior of automata, or "rational language", which refers to important analogies between regular expressions and rational power series. (In fact, Eilenberg defines rational and recognizable subsets of arbitrary monoids; the two notions do not, in general, coincide.) This terminology, while better motivated, never really caught on, and "regular language" is used almost universally."[40]
Rational series is another generalization, this time in the context of a formal power series over a semiring. This approach gives rise to weighted rational expressions and weighted automata. In this algebraic context, the regular languages (corresponding to Boolean-weighted rational expressions) are usually called rational languages.[41][42] Also in this context, Kleene's theorem finds a generalization called the Kleene-Schützenberger theorem.
Notes
1. 1. ⇒ 2. by Thompson's construction algorithm
2. 2. ⇒ 1. by Kleene's algorithm or using Arden's lemma
3. 2. ⇒ 3. by the powerset construction
4. 3. ⇒ 2. since the former definition is stronger than the latter
5. 2. ⇒ 4. see Hopcroft, Ullman (1979), Theorem 9.2, p.219
6. 4. ⇒ 2. see Hopcroft, Ullman (1979), Theorem 9.1, p.218
7. 3. ⇔ 10. by the Myhill–Nerode theorem
8. u~v is defined as: uw∈L if and only if vw∈L for all w∈Σ*
9. 3. ⇔ 11. see the proof in the Syntactic monoid article, and see p.160 in Holcombe, W.M.L. (1982). Algebraic automata theory. Cambridge Studies in Advanced Mathematics. Vol. 1. Cambridge University Press. ISBN 0-521-60492-3. Zbl 0489.68046.
10. Check if LA ∩ LB = LA.
References
• Berstel, Jean; Reutenauer, Christophe (2011). Noncommutative rational series with applications. Encyclopedia of Mathematics and Its Applications. Vol. 137. Cambridge: Cambridge University Press. ISBN 978-0-521-19022-0. Zbl 1250.68007.
• Eilenberg, Samuel (1974). Automata, Languages, and Machines. Volume A. Pure and Applied Mathematics. Vol. 58. New York: Academic Press. Zbl 0317.94045.
• Salomaa, Arto (1981). Jewels of Formal Language Theory. Pitman Publishing. ISBN 0-273-08522-0. Zbl 0487.68064.
• Sipser, Michael (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Zbl 1169.68300. Chapter 1: Regular Languages, pp. 31–90. Subsection "Decidable Problems Concerning Regular Languages" of section 4.1: Decidable Languages, pp. 152–155.
• Philippe Flajolet and Robert Sedgewick, Analytic Combinatorics: Symbolic Combinatorics. Online book, 2002.
• John E. Hopcroft; Jeffrey D. Ullman (1979). Introduction to Automata Theory, Languages, and Computation. Addison-Wesley. ISBN 0-201-02988-X.
• Alfred V. Aho and John E. Hopcroft and Jeffrey D. Ullman (1974). The Design and Analysis of Computer Algorithms. Addison-Wesley. ISBN 9780201000290.
1. Ruslan Mitkov (2003). The Oxford Handbook of Computational Linguistics. Oxford University Press. p. 754. ISBN 978-0-19-927634-9.
2. Mark V. Lawson (2003). Finite Automata. CRC Press. pp. 98–103. ISBN 978-1-58488-255-8.
3. Sheng Yu (1997). "Regular languages". In Grzegorz Rozenberg; Arto Salomaa (eds.). Handbook of Formal Languages: Volume 1. Word, Language, Grammar. Springer. p. 41. ISBN 978-3-540-60420-4.
4. Eilenberg (1974), p. 16 (Example II, 2.8) and p. 25 (Example II, 5.2).
5. M. Weyer: Chapter 12 - Decidability of S1S and S2S, p. 219, Theorem 12.26. In: Erich Grädel, Wolfgang Thomas, Thomas Wilke (Eds.): Automata, Logics, and Infinite Games: A Guide to Current Research. Lecture Notes in Computer Science 2500, Springer 2002.
6. Robert Sedgewick; Kevin Daniel Wayne (2011). Algorithms. Addison-Wesley Professional. p. 794. ISBN 978-0-321-57351-3.
7. Jean-Paul Allouche; Jeffrey Shallit (2003). Automatic Sequences: Theory, Applications, Generalizations. Cambridge University Press. p. 129. ISBN 978-0-521-82332-6.
8. Kenneth Rosen (2011). Discrete Mathematics and Its Applications 7th edition. McGraw-Hill Science. pp. 873–880.
9. Horst Bunke; Alberto Sanfeliu (January 1990). Syntactic and Structural Pattern Recognition: Theory and Applications. World Scientific. p. 248. ISBN 978-9971-5-0566-0.
10. Stephen Cole Kleene (Dec 1951). Representation of Events in Nerve Nets and Finite Automata (PDF) (Research Memorandum). U.S. Air Force / RAND Corporation. Here: p.46
11. Noam Chomsky (1959). "On Certain Formal Properties of Grammars" (PDF). Information and Control. 2 (2): 137–167. doi:10.1016/S0019-9958(59)90362-6. Here: Definition 8, p.149
12. Chomsky 1959, Footnote 10, p.150
13. Salomaa (1981) p.28
14. Salomaa (1981) p.27
15. Hopcroft, Ullman (1979), Chapter 3, Exercise 3.4g, p. 72
16. Hopcroft, Ullman (1979), Theorem 3.8, p.64; see also Theorem 3.10, p.67
17. Aho, Hopcroft, Ullman (1974), Exercise 10.14, p.401
18. Aho, Hopcroft, Ullman (1974), Theorem 10.14, p399
19. Hopcroft, Ullman (1979), Theorem 13.15, p.351
20. A.R. Meyer & L.J. Stockmeyer (Oct 1972). The Equivalence Problem for Regular Expressions with Squaring Requires Exponential Space (PDF). pp. 125–129. {{cite book}}: |work= ignored (help)
21. L.J. Stockmeyer & A.R. Meyer (1973). Word Problems Requiring Exponential Time (PDF). pp. 1–9. {{cite book}}: |work= ignored (help)
22. Hopcroft, Ullman (1979), Corollary p.353
23. Weyer, Mark (2002). "Decidability of S1S and S2S". Automata, Logics, and Infinite Games. Lecture Notes in Computer Science. Vol. 2500. Springer. pp. 207–230. doi:10.1007/3-540-36387-4_12. ISBN 978-3-540-00388-5.
24. Furst, Merrick; Saxe, James B.; Sipser, Michael (1984). "Parity, circuits, and the polynomial-time hierarchy". Mathematical Systems Theory. 17 (1): 13–27. doi:10.1007/BF01744431. MR 0738749. S2CID 14677270.
25. Cook, Stephen; Nguyen, Phuong (2010). Logical foundations of proof complexity (1. publ. ed.). Ithaca, NY: Association for Symbolic Logic. p. 75. ISBN 978-0-521-51729-4.
26. J. Hartmanis, P. L. Lewis II, and R. E. Stearns. Hierarchies of memory-limited computations. Proceedings of the 6th Annual IEEE Symposium on Switching Circuit Theory and Logic Design, pp. 179–190. 1965.
27. "How to prove that a language is not regular?". cs.stackexchange.com. Retrieved 10 April 2018.
28. Hromkovič, Juraj (2004). Theoretical computer science: Introduction to Automata, Computability, Complexity, Algorithmics, Randomization, Communication, and Cryptography. Springer. pp. 76–77. ISBN 3-540-14015-8. OCLC 53007120.
29. A finite language shouldn't be confused with a (usually infinite) language generated by a finite automaton.
30. Volker Diekert; Paul Gastin (2008). "First-order definable languages" (PDF). In Jörg Flum; Erich Grädel; Thomas Wilke (eds.). Logic and automata: history and perspectives. Amsterdam University Press. ISBN 978-90-5356-576-6.
31. Honkala, Juha (1989). "A necessary condition for the rationality of the zeta function of a regular language". Theor. Comput. Sci. 66 (3): 341–347. doi:10.1016/0304-3975(89)90159-x. Zbl 0675.68034.
32. Flajolet & Sedgweick, section V.3.1, equation (13).
33. "Number of words in the regular language $(00)^*$". cs.stackexchange.com. Retrieved 10 April 2018.
34. "Proof of theorem for arbitrary DFAs".
35. "Number of words of a given length in a regular language". cs.stackexchange.com. Retrieved 10 April 2018.
36. Flajolet & Sedgewick (2002) Theorem V.3
37. Berstel, Jean; Reutenauer, Christophe (1990). "Zeta functions of formal languages". Trans. Am. Math. Soc. 321 (2): 533–546. CiteSeerX 10.1.1.309.3005. doi:10.1090/s0002-9947-1990-0998123-x. Zbl 0797.68092.
38. Berstel & Reutenauer (2011) p.222
39. Samuel Eilenberg. Automata, languages, and machines. Academic Press. in two volumes "A" (1974, ISBN 9780080873749) and "B" (1976, ISBN 9780080873756), the latter with two chapters by Bret Tilson.
40. Straubing, Howard (1994). Finite automata, formal logic, and circuit complexity. Progress in Theoretical Computer Science. Basel: Birkhäuser. p. 8. ISBN 3-7643-3719-2. Zbl 0816.68086.
41. Berstel & Reutenauer (2011) p.47
42. Sakarovitch, Jacques (2009). Elements of automata theory. Translated from the French by Reuben Thomas. Cambridge: Cambridge University Press. p. 86. ISBN 978-0-521-84425-3. Zbl 1188.68177.
Further reading
• Kleene, S.C.: Representation of events in nerve nets and finite automata. In: Shannon, C.E., McCarthy, J. (eds.) Automata Studies, pp. 3–41. Princeton University Press, Princeton (1956); it is a slightly modified version of his 1951 RAND Corporation report of the same title, RM704.
• Sakarovitch, J (1987). "Kleene's theorem revisited". Trends, Techniques, and Problems in Theoretical Computer Science. Lecture Notes in Computer Science. Vol. 1987. pp. 39–50. doi:10.1007/3540185356_29. ISBN 978-3-540-18535-2.
External links
• Complexity Zoo: Class REG
Automata theory: formal languages and formal grammars
Chomsky hierarchy | Grammars | Languages | Abstract machines
Type-0 | Unrestricted | Recursively enumerable | Turing machine
— | (no common name) | Decidable | Decider
Type-1 | Context-sensitive | Context-sensitive | Linear-bounded
— | Positive range concatenation | Positive range concatenation* | PTIME Turing Machine
— | Indexed | Indexed* | Nested stack
— | — | — | Thread automaton
— | Linear context-free rewriting systems | Linear context-free rewriting language | restricted Tree stack automaton
— | Tree-adjoining | Tree-adjoining | Embedded pushdown
Type-2 | Context-free | Context-free | Nondeterministic pushdown
— | Deterministic context-free | Deterministic context-free | Deterministic pushdown
— | Visibly pushdown | Visibly pushdown | Visibly pushdown
Type-3 | Regular | Regular | Finite
— | — | Star-free | Counter-free (with aperiodic finite monoid)
— | Non-recursive | Finite | Acyclic finite
Each category of languages, except those marked by a *, is a proper subset of the category directly above it. Any language in each category is generated by a grammar and by an automaton in the category in the same line.
Rational normal curve
In mathematics, the rational normal curve is a smooth, rational curve C of degree n in projective n-space Pn. It is a simple example of a projective variety; formally, it is the Veronese variety when the domain is the projective line. For n = 2 it is the plane conic $Z_{0}Z_{2}=Z_{1}^{2},$ and for n = 3 it is the twisted cubic. The term "normal" refers to projective normality, not normal schemes. The intersection of the rational normal curve with an affine space is called the moment curve.
Definition
The rational normal curve may be given parametrically as the image of the map
$\nu :\mathbf {P} ^{1}\to \mathbf {P} ^{n}$
which assigns to the homogeneous coordinates [S : T] the value
$\nu :[S:T]\mapsto \left[S^{n}:S^{n-1}T:S^{n-2}T^{2}:\cdots :T^{n}\right].$
In the affine coordinates of the chart x0 ≠ 0 the map is simply
$\nu :x\mapsto \left(x,x^{2},\ldots ,x^{n}\right).$
That is, the rational normal curve is the closure by a single point at infinity of the affine curve
$\left(x,x^{2},\ldots ,x^{n}\right).$
Equivalently, the rational normal curve may be understood to be a projective variety, defined as the common zero locus of the homogeneous polynomials
$F_{i,j}\left(X_{0},\ldots ,X_{n}\right)=X_{i}X_{j}-X_{i+1}X_{j-1}$
where $[X_{0}:\cdots :X_{n}]$ are the homogeneous coordinates on Pn. The full set of these polynomials is not needed; it is sufficient to pick n of these to specify the curve.
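The two descriptions agree because, on the image of ν, $X_{i}X_{j}=S^{2n-i-j}T^{i+j}=X_{i+1}X_{j-1}$, so every quadric $F_{i,j}$ vanishes identically. This can be checked with exact rational arithmetic; a sketch (names are illustrative):

```python
# Points of the parametric rational normal curve satisfy the quadrics
# F_{i,j} = X_i X_j - X_{i+1} X_{j-1}.
from fractions import Fraction

def nu(s, t, n):
    """Veronese-type map [s : t] -> [s^n : s^(n-1) t : ... : t^n]."""
    return [s ** (n - k) * t ** k for k in range(n + 1)]

n = 4
X = nu(Fraction(2), Fraction(3), n)
for i in range(n):
    for j in range(1, n + 1):
        assert X[i] * X[j] - X[i + 1] * X[j - 1] == 0
```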
Alternate parameterization
Let $[a_{i}:b_{i}]$ be n + 1 distinct points in P1. Then the polynomial
$G(S,T)=\prod _{i=0}^{n}\left(a_{i}S-b_{i}T\right)$
is a homogeneous polynomial of degree n + 1 with distinct roots. The polynomials
$H_{i}(S,T)={\frac {G(S,T)}{(a_{i}S-b_{i}T)}}$
are then a basis for the space of homogeneous polynomials of degree n. The map
$[S:T]\mapsto \left[H_{0}(S,T):H_{1}(S,T):\cdots :H_{n}(S,T)\right]$
or, equivalently, dividing by G(S, T)
$[S:T]\mapsto \left[{\frac {1}{(a_{0}S-b_{0}T)}}:\cdots :{\frac {1}{(a_{n}S-b_{n}T)}}\right]$
is a rational normal curve. That this is a rational normal curve may be understood by noting that the monomials
$S^{n},S^{n-1}T,S^{n-2}T^{2},\cdots ,T^{n},$
are just one possible basis for the space of degree n homogeneous polynomials. In fact, any basis will do. This is just an application of the statement that any two projective varieties are projectively equivalent if they are congruent modulo the projective linear group PGLn + 1(K) (with K the field over which the projective space is defined).
This rational curve sends the zeros of G to each of the coordinate points of Pn; that is, all but one of the Hi vanish for a zero of G. Conversely, any rational normal curve passing through the n + 1 coordinate points may be written parametrically in this way.
Properties
The rational normal curve has an assortment of nice properties:
• Any n + 1 points on C are linearly independent, and span Pn. This property distinguishes the rational normal curve from all other curves.
• Given n + 3 points in Pn in linear general position (that is, with no n + 1 lying in a hyperplane), there is a unique rational normal curve passing through them. The curve may be explicitly specified using the parametric representation, by arranging n + 1 of the points to lie on the coordinate axes, and then mapping the other two points to [S : T] = [0 : 1] and [S : T] = [1 : 0].
• The tangent and secant lines of a rational normal curve are pairwise disjoint, except at points of the curve itself. This is a property shared by sufficiently positive embeddings of any projective variety.
• There are
${\binom {n+2}{2}}-2n-1$
independent quadrics that generate the ideal of the curve.
• The curve is not a complete intersection, for n > 2. That is, it cannot be defined (as a subscheme of projective space) by only n − 1 equations, that being the codimension of the curve in $\mathbf {P} ^{n}$.
• The canonical mapping for a hyperelliptic curve has image a rational normal curve, and is 2-to-1.
• Every irreducible non-degenerate curve C ⊂ Pn of degree n is a rational normal curve.
See also
• Rational normal scroll
References
• Joe Harris, Algebraic Geometry, A First Course, (1992) Springer-Verlag, New York. ISBN 0-387-97716-3
Topics in algebraic curves
Rational curves
• Five points determine a conic
• Projective line
• Rational normal curve
• Riemann sphere
• Twisted cubic
Elliptic curves
Analytic theory
• Elliptic function
• Elliptic integral
• Fundamental pair of periods
• Modular form
Arithmetic theory
• Counting points on elliptic curves
• Division polynomials
• Hasse's theorem on elliptic curves
• Mazur's torsion theorem
• Modular elliptic curve
• Modularity theorem
• Mordell–Weil theorem
• Nagell–Lutz theorem
• Supersingular elliptic curve
• Schoof's algorithm
• Schoof–Elkies–Atkin algorithm
Applications
• Elliptic curve cryptography
• Elliptic curve primality
Higher genus
• De Franchis theorem
• Faltings's theorem
• Hurwitz's automorphisms theorem
• Hurwitz surface
• Hyperelliptic curve
Plane curves
• AF+BG theorem
• Bézout's theorem
• Bitangent
• Cayley–Bacharach theorem
• Conic section
• Cramer's paradox
• Cubic plane curve
• Fermat curve
• Genus–degree formula
• Hilbert's sixteenth problem
• Nagata's conjecture on curves
• Plücker formula
• Quartic plane curve
• Real plane curve
Riemann surfaces
• Belyi's theorem
• Bring's curve
• Bolza surface
• Compact Riemann surface
• Dessin d'enfant
• Differential of the first kind
• Klein quartic
• Riemann's existence theorem
• Riemann–Roch theorem
• Teichmüller space
• Torelli theorem
Constructions
• Dual curve
• Polar curve
• Smooth completion
Structure of curves
Divisors on curves
• Abel–Jacobi map
• Brill–Noether theory
• Clifford's theorem on special divisors
• Gonality of an algebraic curve
• Jacobian variety
• Riemann–Roch theorem
• Weierstrass point
• Weil reciprocity law
Moduli
• ELSV formula
• Gromov–Witten invariant
• Hodge bundle
• Moduli of algebraic curves
• Stable curve
Morphisms
• Hasse–Witt matrix
• Riemann–Hurwitz formula
• Prym variety
• Weber's theorem (Algebraic curves)
Singularities
• Acnode
• Crunode
• Cusp
• Delta invariant
• Tacnode
Vector bundles
• Birkhoff–Grothendieck theorem
• Stable vector bundle
• Vector bundles on algebraic curves
Rational point
In number theory and algebraic geometry, a rational point of an algebraic variety is a point whose coordinates belong to a given field. If the field is not mentioned, the field of rational numbers is generally understood. If the field is the field of real numbers, a rational point is more commonly called a real point.
Understanding rational points is a central goal of number theory and Diophantine geometry. For example, Fermat's Last Theorem may be restated as: for n > 2, the Fermat curve of equation $x^{n}+y^{n}=1$ has no other rational points than (1, 0), (0, 1), and, if n is even, (–1, 0) and (0, –1).
Definition
Given a field k, and an algebraically closed extension K of k, an affine variety X over k is the set of common zeros in Kn of a collection of polynomials with coefficients in k:
${\begin{aligned}&f_{1}(x_{1},\ldots ,x_{n})=0,\\&\qquad \quad \vdots \\&f_{r}(x_{1},\dots ,x_{n})=0.\end{aligned}}$
These common zeros are called the points of X.
A k-rational point (or k-point) of X is a point of X that belongs to kn, that is, a sequence $(a_{1},\dots ,a_{n})$ of n elements of k such that $f_{j}(a_{1},\dots ,a_{n})=0$ for all j. The set of k-rational points of X is often denoted X(k).
Sometimes, when the field k is understood, or when k is the field $\mathbb {Q} $ of rational numbers, one says "rational point" instead of "k-rational point".
For example, the rational points of the unit circle of equation
$x^{2}+y^{2}=1$
are the pairs of rational numbers
$\left({\frac {a}{c}},{\frac {b}{c}}\right),$
where (a, b, c) is a Pythagorean triple.
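These points can be enumerated by the standard parametrization by lines through (−1, 0): the line of rational slope t meets the circle again at $\left({\tfrac {1-t^{2}}{1+t^{2}}},{\tfrac {2t}{1+t^{2}}}\right)$, giving a Pythagorean triple for each rational t. A sketch with exact rational arithmetic (the helper name is illustrative):

```python
# Rational points on x^2 + y^2 = 1 from the slope parametrization.
from fractions import Fraction

def circle_point(t: Fraction):
    """Second intersection of the circle with the line of slope t through (-1, 0)."""
    d = 1 + t * t
    return (1 - t * t) / d, 2 * t / d

x, y = circle_point(Fraction(1, 2))
assert x * x + y * y == 1
assert (x, y) == (Fraction(3, 5), Fraction(4, 5))  # the (3, 4, 5) triple
```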
The concept also makes sense in more general settings. A projective variety X in projective space $\mathbb {P} ^{n}$ over a field k can be defined by a collection of homogeneous polynomial equations in variables $x_{0},\dots ,x_{n}.$ A k-point of $\mathbb {P} ^{n},$ written $[a_{0},\dots ,a_{n}],$ is given by a sequence of n + 1 elements of k, not all zero, with the understanding that multiplying all of $a_{0},\dots ,a_{n}$ by the same nonzero element of k gives the same point in projective space. Then a k-point of X means a k-point of $\mathbb {P} ^{n}$ at which the given polynomials vanish.
More generally, let X be a scheme over a field k. This means that a morphism of schemes f: X → Spec(k) is given. Then a k-point of X means a section of this morphism, that is, a morphism a: Spec(k) → X such that the composition fa is the identity on Spec(k). This agrees with the previous definitions when X is an affine or projective variety (viewed as a scheme over k).
When X is a variety over an algebraically closed field k, much of the structure of X is determined by its set X(k) of k-rational points. For a general field k, however, X(k) gives only partial information about X. In particular, for a variety X over a field k and any field extension E of k, X also determines the set X(E) of E-rational points of X, meaning the set of solutions of the equations defining X with values in E.
Example: Let X be the conic curve $x^{2}+y^{2}=-1$ in the affine plane A2 over the real numbers $\mathbb {R} .$ Then the set of real points $X(\mathbb {R} )$ is empty, because the square of any real number is nonnegative. On the other hand, in the terminology of algebraic geometry, the algebraic variety X over $\mathbb {R} $ is not empty, because the set of complex points $X(\mathbb {C} )$ is not empty.
More generally, for a scheme X over a commutative ring R and any commutative R-algebra S, the set X(S) of S-points of X means the set of morphisms Spec(S) → X over Spec(R). The scheme X is determined up to isomorphism by the functor S ↦ X(S); this is the philosophy of identifying a scheme with its functor of points. Another formulation is that the scheme X over R determines a scheme XS over S by base change, and the S-points of X (over R) can be identified with the S-points of XS (over S).
The theory of Diophantine equations traditionally meant the study of integral points, meaning solutions of polynomial equations in the integers $\mathbb {Z} $ rather than the rationals $\mathbb {Q} .$ For homogeneous polynomial equations such as $x^{3}+y^{3}=z^{3},$ the two problems are essentially equivalent, since every rational point can be scaled to become an integral point.
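The scaling step is elementary: multiplying all coordinates of a rational solution of a homogeneous equation by a common denominator gives an integral solution. A sketch (the helper name is illustrative; `math.lcm` requires Python 3.9+):

```python
# Clear denominators of a rational tuple; for a homogeneous equation this
# turns a rational solution into an integral one, since all terms scale by
# the same power of the multiplier.
from fractions import Fraction
from math import lcm

def to_integral(*coords):
    m = lcm(*(c.denominator for c in coords))
    return tuple(int(c * m) for c in coords)

assert to_integral(Fraction(1, 2), Fraction(1, 3), Fraction(5, 6)) == (3, 2, 5)
```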
Rational points on curves
Much of number theory can be viewed as the study of rational points of algebraic varieties, a convenient setting being smooth projective varieties. For smooth projective curves, the behavior of rational points depends strongly on the genus of the curve.
Genus 0
Every smooth projective curve X of genus zero over a field k is isomorphic to a conic (degree 2) curve in $\mathbb {P} ^{2}.$ If X has a k-rational point, then it is isomorphic to $\mathbb {P} ^{1}$ over k, and so its k-rational points are completely understood.[1] If k is the field $\mathbb {Q} $ of rational numbers (or more generally a number field), there is an algorithm to determine whether a given conic has a rational point, based on the Hasse principle: a conic over $\mathbb {Q} $ has a rational point if and only if it has a point over all completions of $\mathbb {Q} ,$ that is, over $\mathbb {R} $ and all p-adic fields $\mathbb {Q} _{p}.$
Genus 1
It is harder to determine whether a curve of genus 1 has a rational point. The Hasse principle fails in this case: for example, by Ernst Selmer, the cubic curve $3x^{3}+4y^{3}+5z^{3}=0$ in $\mathbb {P} ^{2}$ has a point over all completions of $\mathbb {Q} ,$ but no rational point.[2] The failure of the Hasse principle for curves of genus 1 is measured by the Tate–Shafarevich group.
If X is a curve of genus 1 with a k-rational point p0, then X is called an elliptic curve over k. In this case, X has the structure of a commutative algebraic group (with p0 as the zero element), and so the set X(k) of k-rational points is an abelian group. The Mordell–Weil theorem says that for an elliptic curve (or, more generally, an abelian variety) X over a number field k, the abelian group X(k) is finitely generated. Computer algebra programs can determine the Mordell–Weil group X(k) in many examples, but it is not known whether there is an algorithm that always succeeds in computing this group. That would follow from the conjecture that the Tate–Shafarevich group is finite, or from the related Birch–Swinnerton-Dyer conjecture.[3]
Genus at least 2
Faltings's theorem (formerly the Mordell conjecture) says that for any curve X of genus at least 2 over a number field k, the set X(k) is finite.[4]
Some of the great achievements of number theory amount to determining the rational points on particular curves. For example, Fermat's Last Theorem (proved by Richard Taylor and Andrew Wiles) is equivalent to the statement that for an integer n at least 3, the only rational points of the curve $x^{n}+y^{n}=z^{n}$ in $\mathbb {P} ^{2}$ over $\mathbb {Q} $ are the obvious ones: [0,1,1] and [1,0,1]; [0,1,−1] and [1,0,−1] for n even; and [1,−1,0] for n odd. The curve X (like any smooth curve of degree n in $\mathbb {P} ^{2}$) has genus ${\tfrac {(n-1)(n-2)}{2}}.$
It is not known whether there is an algorithm to find all the rational points on an arbitrary curve of genus at least 2 over a number field. There is an algorithm that works in some cases. Its termination in general would follow from the conjectures that the Tate–Shafarevich group of an abelian variety over a number field is finite and that the Brauer–Manin obstruction is the only obstruction to the Hasse principle, in the case of curves.[5]
Higher dimensions
Varieties with few rational points
In higher dimensions, one unifying goal is the Bombieri–Lang conjecture that, for any variety X of general type over a number field k, the set of k-rational points of X is not Zariski dense in X. (That is, the k-rational points are contained in a finite union of lower-dimensional subvarieties of X.) In dimension 1, this is exactly Faltings's theorem, since a curve is of general type if and only if it has genus at least 2. Lang also made finer conjectures relating finiteness of rational points to Kobayashi hyperbolicity.[6]
For example, the Bombieri–Lang conjecture predicts that a smooth hypersurface of degree d in projective space $\mathbb {P} ^{n}$ over a number field does not have Zariski dense rational points if d ≥ n + 2. Not much is known about that case. The strongest known result on the Bombieri–Lang conjecture is Faltings's theorem on subvarieties of abelian varieties (generalizing the case of curves). Namely, if X is a subvariety of an abelian variety A over a number field k, then all k-rational points of X are contained in a finite union of translates of abelian subvarieties contained in X.[7] (So if X contains no translated abelian subvarieties of positive dimension, then X(k) is finite.)
Varieties with many rational points
In the opposite direction, a variety X over a number field k is said to have potentially dense rational points if there is a finite extension field E of k such that the E-rational points of X are Zariski dense in X. Frédéric Campana conjectured that a variety is potentially dense if and only if it has no rational fibration over a positive-dimensional orbifold of general type.[8] A known case is that every cubic surface in $\mathbb {P} ^{3}$ over a number field k has potentially dense rational points, because (more strongly) it becomes rational over some finite extension of k (unless it is the cone over a plane cubic curve). Campana's conjecture would also imply that a K3 surface X (such as a smooth quartic surface in $\mathbb {P} ^{3}$) over a number field has potentially dense rational points. That is known only in special cases, for example if X has an elliptic fibration.[9]
One may ask when a variety has a rational point without extending the base field. In the case of a hypersurface X of degree d in $\mathbb {P} ^{n}$ over a number field, there are good results when d is much smaller than n, often based on the Hardy–Littlewood circle method. For example, the Hasse–Minkowski theorem says that the Hasse principle holds for quadric hypersurfaces over a number field (the case d = 2). Christopher Hooley proved the Hasse principle for smooth cubic hypersurfaces in $\mathbb {P} ^{n}$ over $\mathbb {Q} $ when n ≥ 8.[10] In higher dimensions, even more is true: every smooth cubic in $\mathbb {P} ^{n}$ over $\mathbb {Q} $ has a rational point when n ≥ 9, by Roger Heath-Brown.[11] More generally, Birch's theorem says that for any odd positive integer d, there is an integer N such that for all n ≥ N, every hypersurface of degree d in $\mathbb {P} ^{n}$ over $\mathbb {Q} $ has a rational point.
For hypersurfaces of smaller dimension (in terms of their degree), things can be more complicated. For example, the Hasse principle fails for the smooth cubic surface $5x^{3}+9y^{3}+10z^{3}+12w^{3}=0$ in $\mathbb {P} ^{3}$ over $\mathbb {Q} ,$ by Ian Cassels and Richard Guy.[12] Jean-Louis Colliot-Thélène has conjectured that the Brauer–Manin obstruction is the only obstruction to the Hasse principle for cubic surfaces. More generally, that should hold for every rationally connected variety over a number field.[13]
In some cases, it is known that X has "many" rational points whenever it has one. For example, extending work of Beniamino Segre and Yuri Manin, János Kollár showed: for a cubic hypersurface X of dimension at least 2 over a perfect field k with X not a cone, X is unirational over k if it has a k-rational point.[14] (In particular, for k infinite, unirationality implies that the set of k-rational points is Zariski dense in X.) The Manin conjecture is a more precise statement that would describe the asymptotics of the number of rational points of bounded height on a Fano variety.
Counting points over finite fields
Main article: Weil conjectures
A variety X over a finite field k has only finitely many k-rational points. The Weil conjectures, proved by André Weil in dimension 1 and by Pierre Deligne in any dimension, give strong estimates for the number of k-points in terms of the Betti numbers of X. For example, if X is a smooth projective curve of genus g over a field k of order q (a prime power), then
${\big |}|X(k)|-(q+1){\big |}\leq 2g{\sqrt {q}}.$
For a smooth hypersurface X of degree d in $\mathbb {P} ^{n}$ over a field k of order q, Deligne's theorem gives the bound:[15]
${\big |}|X(k)|-(q^{n-1}+\cdots +q+1){\big |}\leq {\bigg (}{\frac {(d-1)^{n+1}+(-1)^{n+1}(d-1)}{d}}{\bigg )}q^{(n-1)/2}.$
There are also significant results about when a projective variety over a finite field k has at least one k-rational point. For example, the Chevalley–Warning theorem implies that any hypersurface X of degree d in $\mathbb {P} ^{n}$ over a finite field k has a k-rational point if d ≤ n. For smooth X, this also follows from Hélène Esnault's theorem that every smooth projective rationally chain connected variety, for example every Fano variety, over a finite field k has a k-rational point.[16]
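The curve bound above can be tested directly; a naive point count on the genus-1 curve $y^{2}=x^{3}+x+1$ (an illustrative choice, smooth over $\mathbb {F} _{p}$ for $p\neq 2,31$) confirms the Weil estimate for small primes:

```python
def count_points(p):
    # Projective points of y^2 = x^3 + x + 1 over F_p:
    # affine solutions plus the single point at infinity.
    n = 1  # the point at infinity
    for x in range(p):
        rhs = (x**3 + x + 1) % p
        n += sum(1 for y in range(p) if (y * y) % p == rhs)
    return n

# Weil bound for a genus-1 curve: |N - (p + 1)| <= 2*sqrt(p).
for p in [5, 7, 11, 13, 101]:
    N = count_points(p)
    assert (N - (p + 1)) ** 2 <= 4 * p
    print(p, N)
```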
See also
• Arithmetic dynamics
• Birational geometry
• Functor represented by a scheme
Notes
1. Hindry & Silverman (2000), Theorem A.4.3.1.
2. Silverman (2009), Remark X.4.11.
3. Silverman (2009), Conjecture X.4.13.
4. Hindry & Silverman (2000), Theorem E.0.1.
5. Skorobogatov (2001), section 6.3.
6. Hindry & Silverman (2000), section F.5.2.
7. Hindry & Silverman (2000), Theorem F.1.1.1.
8. Campana (2004), Conjecture 9.20.
9. Hassett (2003), Theorem 6.4.
10. Hooley (1988), Theorem.
11. Heath-Brown (1983), Theorem.
12. Colliot-Thélène, Kanevsky & Sansuc (1987), section 7.
13. Colliot-Thélène (2015), section 6.1.
14. Kollár (2002), Theorem 1.1.
15. Katz (1980), section II.
16. Esnault (2003), Corollary 1.3.
References
• Campana, Frédéric (2004), "Orbifolds, special varieties and classification theory" (PDF), Annales de l'Institut Fourier, 54 (3): 499–630, doi:10.5802/aif.2027, MR 2097416
• Colliot-Thélène, Jean-Louis; Kanevsky, Dimitri; Sansuc, Jean-Jacques (1987), "Arithmétique des surfaces cubiques diagonales", Diophantine Approximation and Transcendence Theory, Lecture Notes in Mathematics, vol. 1290, Springer Nature, pp. 1–108, doi:10.1007/BFb0078705, ISBN 978-3-540-18597-0, MR 0927558
• Esnault, Hélène (2003), "Varieties over a finite field with trivial Chow group of 0-cycles have a rational point", Inventiones Mathematicae, 151 (1): 187–191, arXiv:math/0207022, Bibcode:2003InMat.151..187E, doi:10.1007/s00222-002-0261-8, MR 1943746
• Hassett, Brendan (2003), "Potential density of rational points on algebraic varieties", Higher Dimensional Varieties and Rational Points (Budapest, 2001), Bolyai Society Mathematical Studies, vol. 12, Springer Nature, pp. 223–282, doi:10.1007/978-3-662-05123-8_8, ISBN 978-3-642-05644-4, MR 2011748
• Heath-Brown, D. R. (1983), "Cubic forms in ten variables", Proceedings of the London Mathematical Society, 47 (2): 225–257, doi:10.1112/plms/s3-47.2.225, MR 0703978
• Hindry, Marc; Silverman, Joseph H. (2000), Diophantine Geometry: an Introduction, Springer Nature, ISBN 978-0-387-98981-5, MR 1745599
• Hooley, Christopher (1988), "On nonary cubic forms", Journal für die reine und angewandte Mathematik, 1988 (386): 32–98, doi:10.1515/crll.1988.386.32, MR 0936992
• Katz, N. M. (1980), "The work of Pierre Deligne" (PDF), Proceedings of the International Congress of Mathematicians (Helsinki, 1978), Helsinki: Academia Scientiarum Fennica, pp. 47–52, MR 0562594
• Kollár, János (2002), "Unirationality of cubic hypersurfaces", Journal of the Mathematical Institute of Jussieu, 1 (3): 467–476, arXiv:math/0005146, doi:10.1017/S1474748002000117, MR 1956057
• Poonen, Bjorn (2017), Rational Points on Varieties, American Mathematical Society, ISBN 978-1-4704-3773-2, MR 3729254
• Silverman, Joseph H. (2009) [1986], The Arithmetic of Elliptic Curves (2nd ed.), Springer Nature, ISBN 978-0-387-96203-0, MR 2514094
• Skorobogatov, Alexei (2001), Torsors and Rational Points, Cambridge University Press, ISBN 978-0-521-80237-6, MR 1845760
External links
• Colliot-Thélène, Jean-Louis (2015), Local-global principles for rational points and zero-cycles (PDF)
|
Wikipedia
|
Rational quadratic covariance function
In statistics, the rational quadratic covariance function is used in spatial statistics, geostatistics, machine learning, image analysis, and other fields where multivariate statistical analysis is conducted on metric spaces. It is commonly used to define the statistical covariance between measurements made at two points that are d units distant from each other. Since the covariance only depends on distances between points, it is stationary. If the distance is Euclidean distance, the rational quadratic covariance function is also isotropic.
The rational quadratic covariance between two points separated by d distance units is given by
$C(d)={\Bigg (}1+{\frac {d^{2}}{2\alpha k^{2}}}{\Bigg )}^{-\alpha }$
where α and k are non-negative parameters of the covariance.[1][2]
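As a sanity check (with illustrative parameter values): $C(0)=1$, $C$ decreases in $d$, and as $\alpha \to \infty $ the function approaches the squared-exponential covariance $\exp(-d^{2}/2k^{2})$:

```python
import math

def rational_quadratic(d, alpha, k):
    # C(d) = (1 + d^2 / (2 * alpha * k^2))^(-alpha)
    return (1.0 + d * d / (2.0 * alpha * k * k)) ** (-alpha)

print(rational_quadratic(0.0, 2.0, 1.0))  # 1.0 at zero distance
print(rational_quadratic(1.0, 2.0, 1.0))  # decays with distance
# Large alpha recovers the squared-exponential covariance:
print(abs(rational_quadratic(1.5, 1e6, 1.0) - math.exp(-1.5**2 / 2)))
```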
References
1. Williams, Christopher K. I.; Rasmussen, Carl Edward (2006). Gaussian Processes for Machine Learning. MIT Press. p. 86.
2. Kocijan, Juš (2015-11-22), "Control with GP Models", Modelling and Control of Dynamic Systems Using Gaussian Process Models, Advances in Industrial Control, Cham: Springer International Publishing, pp. 147–208, doi:10.1007/978-3-319-21021-6_4, ISBN 978-3-319-21020-9, retrieved 2022-11-25
Rational reciprocity law
In number theory, a rational reciprocity law is a reciprocity law involving residue symbols that are related by a factor of +1 or –1 rather than a general root of unity.
As an example, there are rational biquadratic and octic reciprocity laws. Define the symbol $(x|p)_{k}$ to be +1 if x is a k-th power modulo the prime p and −1 otherwise.
Let p and q be distinct primes congruent to 1 modulo 4, such that $(p|q)_{2}=(q|p)_{2}=+1$. Let $p=a^{2}+b^{2}$ and $q=A^{2}+B^{2}$ with aA odd. Then
$(p|q)_{4}(q|p)_{4}=(-1)^{(q-1)/4}(aB-bA|q)_{2}\ .$
If in addition p and q are congruent to 1 modulo 8, let $p=c^{2}+2d^{2}$ and $q=C^{2}+2D^{2}$. Then
$(p|q)_{8}=(q|p)_{8}=(aB-bA|q)_{4}(cD-dC|q)_{2}\ .$
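The biquadratic law can be checked by brute force for a sample pair, say $(p,q)=(13,17)$ (an illustrative choice: both are congruent to 1 modulo 4, and each is a square modulo the other):

```python
from math import isqrt

def power_residue_symbol(x, p, k):
    # (x|p)_k = +1 if x is a k-th power modulo the odd prime p, else -1.
    kth_powers = {pow(a, k, p) for a in range(1, p)}
    return 1 if x % p in kth_powers else -1

def two_squares(p):
    # Write p = a^2 + b^2 with a odd (possible since p ≡ 1 mod 4).
    for a in range(1, p, 2):
        b = isqrt(p - a * a)
        if a * a + b * b == p:
            return a, b
    raise ValueError("p is not a sum of two squares")

p, q = 13, 17
a, b = two_squares(p)    # 13 = 3^2 + 2^2
A, B = two_squares(q)    # 17 = 1^2 + 4^2
assert (a * A) % 2 == 1  # aA odd
assert power_residue_symbol(p, q, 2) == power_residue_symbol(q, p, 2) == 1

lhs = power_residue_symbol(p, q, 4) * power_residue_symbol(q, p, 4)
rhs = (-1) ** ((q - 1) // 4) * power_residue_symbol(a * B - b * A, q, 2)
assert lhs == rhs  # both sides equal -1 for this pair
```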
References
• Burde, K. (1969), "Ein rationales biquadratisches Reziprozitätsgesetz", J. Reine Angew. Math. (in German), 235: 175–184, Zbl 0169.36902
• Lehmer, Emma (1978), "Rational reciprocity laws", The American Mathematical Monthly, 85 (6): 467–472, doi:10.2307/2320065, ISSN 0002-9890, JSTOR 2320065, MR 0498352, Zbl 0383.10003
• Lemmermeyer, Franz (2000), Reciprocity laws. From Euler to Eisenstein, Springer Monographs in Mathematics, Berlin: Springer-Verlag, pp. 153–183, ISBN 3-540-66957-4, MR 1761696, Zbl 0949.11002
• Williams, Kenneth S. (1976), "A rational octic reciprocity law", Pacific Journal of Mathematics, 63 (2): 563–570, doi:10.2140/pjm.1976.63.563, ISSN 0030-8730, MR 0414467, Zbl 0311.10004
Rational reconstruction (mathematics)
In mathematics, rational reconstruction is a method that allows one to recover a rational number from its value modulo a sufficiently large integer.
Problem statement
In the rational reconstruction problem, one is given as input a value $n\equiv r/s{\pmod {m}}$. That is, $n$ is an integer with the property that $ns\equiv r{\pmod {m}}$. The rational number $r/s$ is unknown, and the goal of the problem is to recover it from the given information.
In order for the problem to be solvable, it is necessary to assume that the modulus $m$ is sufficiently large relative to $r$ and $s$. Typically, it is assumed that a range for the possible values of $r$ and $s$ is known: $|r|<N$ and $0<s<D$ for two numerical parameters $N$ and $D$. Whenever $m>2ND$ and a solution exists, the solution is unique and can be found efficiently.
Solution
Using a method from Paul S. Wang, it is possible to recover $r/s$ from $n$ and $m$ using the Euclidean algorithm, as follows.[1][2]
One puts $v=(m,0)$ and $w=(n,1)$. One then repeats the following steps until the first component of w becomes $\leq N$: put $q=\left\lfloor {\frac {v_{1}}{w_{1}}}\right\rfloor $ and z = v − qw; the new v and w are then obtained by putting v = w and w = z.
Then, with w such that $w_{1}\leq N$, one makes the second component positive by putting w = −w if $w_{2}<0$. If $w_{2}<D$ and $\gcd(w_{1},w_{2})=1$, then the fraction ${\frac {r}{s}}$ exists, with $r=w_{1}$ and $s=w_{2}$; otherwise, no such fraction exists.
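The loop above translates directly into a short routine (a sketch; the parameter names follow the problem statement):

```python
from math import gcd

def rational_reconstruction(n, m, N, D):
    # Recover r/s with |r| <= N, 0 < s < D from n ≡ r/s (mod m),
    # following the Euclidean-algorithm loop described above.
    v = (m, 0)
    w = (n % m, 1)
    while w[0] > N:
        q = v[0] // w[0]
        v, w = w, (v[0] - q * w[0], v[1] - q * w[1])
    if w[1] < 0:
        w = (-w[0], -w[1])
    if w[1] < D and gcd(abs(w[0]), w[1]) == 1:
        return w[0], w[1]  # (r, s)
    return None  # no such fraction exists

# Example: r/s = 2/3 encoded modulo m = 101 as n = 2 * 3^{-1} mod 101 = 68.
print(rational_reconstruction(68, 101, N=7, D=7))  # (2, 3)
```

Note that $m=101>2ND=98$, so the recovered fraction is unique.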
References
1. Wang, Paul S. (1981), "A p-adic algorithm for univariate partial fractions", Proceedings of the Fourth International Symposium on Symbolic and Algebraic Computation (SYMSAC '81), New York, NY, USA: Association for Computing Machinery, pp. 212–217, doi:10.1145/800206.806398, ISBN 0-89791-047-8, S2CID 10695567
2. Wang, Paul S.; Guy, M. J. T.; Davenport, J. H. (May 1982), "P-adic reconstruction of rational numbers", SIGSAM Bulletin, New York, NY, USA: Association for Computing Machinery, 16 (2): 2–3, CiteSeerX 10.1.1.395.6529, doi:10.1145/1089292.1089293, S2CID 44536107.
Rational representation
In mathematics, in the representation theory of algebraic groups, a linear representation of an algebraic group is said to be rational if, viewed as a map from the group to the general linear group, it is a rational map of algebraic varieties.
Further information: Group representation
Finite direct sums and products of rational representations are rational.
A rational $G$-module is a module that can be expressed as a sum (not necessarily direct) of rational representations.
References
• Bialynicki-Birula, A.; Hochschild, G.; Mostow, G. D. (1963). "Extensions of Representations of Algebraic Linear Groups". American Journal of Mathematics. Johns Hopkins University Press. 85 (1): 131–44. doi:10.2307/2373191. ISSN 1080-6377. JSTOR 2373191 – via JSTOR.
• Springer Online Reference Works: Rational Representation
Rational series
In mathematics and computer science, a rational series is a generalisation of the concept of formal power series over a ring to the case when the basic algebraic structure is no longer a ring but a semiring, and the indeterminates adjoined are not assumed to commute. They can be regarded as algebraic expressions of a formal language over a finite alphabet.
Definition
Let R be a semiring and A a finite alphabet.
A non-commutative polynomial over A is a finite formal sum of words over A. They form a semiring $R\langle A\rangle $.
A formal series is an R-valued function c on the free monoid A*, which may be written as
$\sum _{w\in A^{*}}c(w)w.$
The set of formal series is denoted $R\langle \langle A\rangle \rangle $ and becomes a semiring under the operations
$c+d:w\mapsto c(w)+d(w)$
$c\cdot d:w\mapsto \sum _{uv=w}c(u)\cdot d(v)$
A non-commutative polynomial thus corresponds to a function c on A* of finite support.
When R is a ring, this is the Magnus ring over R.[1]
If L is a language over A, regarded as a subset of A* we can form the characteristic series of L as the formal series
$\sum _{w\in L}w$
corresponding to the characteristic function of L.
In $R\langle \langle A\rangle \rangle $ one can define an operation of iteration expressed as
$S^{*}=\sum _{n\geq 0}S^{n}$
and formalised as
$c^{*}(w)=\sum _{u_{1}u_{2}\cdots u_{n}=w}c(u_{1})c(u_{2})\cdots c(u_{n}).$
The rational operations are the addition and multiplication of formal series, together with iteration. A rational series is a formal series obtained by rational operations from $R\langle A\rangle .$
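A minimal sketch of the addition and Cauchy product, over the semiring of natural numbers, with finitely supported series (non-commutative polynomials) represented as Python dicts from words to coefficients:

```python
from collections import defaultdict

def series_add(c, d):
    # Pointwise sum of two finitely supported series.
    out = defaultdict(int)
    for series in (c, d):
        for w, v in series.items():
            out[w] += v
    return dict(out)

def series_mul(c, d):
    # Cauchy product: the coefficient of w is the sum of c(u)*d(v)
    # over all factorizations w = uv.
    out = defaultdict(int)
    for u, cu in c.items():
        for v, dv in d.items():
            out[u + v] += cu * dv
    return dict(out)

ca, cb = {"a": 1}, {"b": 1}  # characteristic series of {a} and {b}
s = series_mul(series_add(ca, cb), series_add(ca, cb))
print(s)  # coefficient 1 on each of the four words aa, ab, ba, bb
assert series_mul(ca, cb) != series_mul(cb, ca)  # indeterminates do not commute
```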
See also
• Formal power series
• Rational language
• Rational set
• Hahn series (Malcev–Neumann series)
• Weighted automaton
References
1. Koch, Helmut (1997). Algebraic Number Theory. Encycl. Math. Sci. Vol. 62 (2nd printing of 1st ed.). Springer-Verlag. p. 167. ISBN 3-540-63003-1. Zbl 0819.11044.
• Berstel, Jean; Reutenauer, Christophe (2011). Noncommutative rational series with applications. Encyclopedia of Mathematics and Its Applications. Vol. 137. Cambridge: Cambridge University Press. ISBN 978-0-521-19022-0. Zbl 1250.68007.
Further reading
• Sakarovitch, Jacques (2009). Elements of automata theory. Translated from the French by Reuben Thomas. Cambridge: Cambridge University Press. Part IV (where they are called $\mathbb {K} $-rational series). ISBN 978-0-521-84425-3. Zbl 1188.68177.
• Droste, M., & Kuich, W. (2009). Semirings and Formal Power Series. Handbook of Weighted Automata, 3–28. doi:10.1007/978-3-642-01492-5_1
• Sakarovitch, J. Rational and Recognisable Power Series. Handbook of Weighted Automata, 105–174 (2009). doi:10.1007/978-3-642-01492-5_4
• W. Kuich. Semirings and formal power series: Their relevance to formal languages and automata theory. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 1, Chapter 9, pages 609–677. Springer, Berlin, 1997
Rational set
In computer science, more precisely in automata theory, a rational set of a monoid is an element of the minimal class of subsets of this monoid that contains all finite subsets and is closed under union, product and Kleene star. Rational sets are useful in automata theory, formal languages and algebra.
A rational set generalizes the notion of rational (regular) language (understood as defined by regular expressions) to monoids that are not necessarily free.
Definition
Let $(N,\cdot )$ be a monoid with identity element $e$. The set $\mathrm {RAT} (N)$ of rational subsets of $N$ is the smallest set that contains every finite set and is closed under
• union: if $A,B\in \mathrm {RAT} (N)$ then $A\cup B\in \mathrm {RAT} (N)$
• product: if $A,B\in \mathrm {RAT} (N)$ then $A\cdot B=\{a\cdot b\mid a\in A,b\in B\}\in \mathrm {RAT} (N)$
• Kleene star: if $A\in \mathrm {RAT} (N)$ then $A^{*}=\bigcup _{i=0}^{\infty }A^{i}\in \mathrm {RAT} (N)$ where $A^{0}=\{e\}$ is the singleton containing the identity element, and where $A^{n+1}=A^{n}\cdot A$.
This means that any rational subset of $N$ can be obtained by taking a finite number of finite subsets of $N$ and applying the union, product and Kleene star operations a finite number of times.
In general a rational subset of a monoid is not a submonoid.
Example
Let $A$ be an alphabet; the set $A^{*}$ of words over $A$ is a monoid. The rational subsets of $A^{*}$ are precisely the regular languages: indeed, the regular languages are exactly those defined by regular expressions.
The rational subsets of $\mathbb {N} $ are the ultimately periodic sets of integers. More generally, the rational subsets of $\mathbb {N} ^{k}$ are the semilinear sets.[1]
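As an illustration of the ultimately periodic description, the Kleene star of the finite set {3, 5} in the additive monoid $(\mathbb {N} ,+)$ (computed here up to a finite bound) misses 7 but contains every integer from 8 on:

```python
def star_additive(generators, bound):
    # Kleene star of a finite subset of (N, +), truncated at `bound`:
    # all finite sums of elements of `generators`, including the empty sum 0.
    reachable = {0}
    frontier = {0}
    while frontier:
        frontier = {x + g for x in frontier for g in generators
                    if x + g <= bound} - reachable
        reachable |= frontier
    return reachable

s = star_additive({3, 5}, bound=50)
print(sorted(x for x in s if x <= 12))  # [0, 3, 5, 6, 8, 9, 10, 11, 12]
assert 7 not in s
assert all(n in s for n in range(8, 51))  # ultimately periodic (period 1)
```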
Properties
McKnight's theorem states that if $N$ is finitely generated then its recognizable subsets are rational. This is not true in general, since the whole of $N$ is always recognizable but is not rational when $N$ is not finitely generated.
Rational sets are closed under morphism: given $N$ and $M$ two monoids and $\phi :N\rightarrow M$ a morphism, if $S\in \mathrm {RAT} (N)$ then $\phi (S)=\{\phi (x)\mid x\in S\}\in \mathrm {RAT} (M)$.
$\mathrm {RAT} (N)$ is not closed under complement as the following example shows.[2] Let $N=\{a\}^{*}\times \{b,c\}^{*}$, the sets $R=(a,b)^{*}(1,c)^{*}=\{(a^{n},b^{n}c^{m})\mid n,m\in \mathbb {N} \}$ and $S=(1,b)^{*}(a,c)^{*}=\{(a^{n},b^{m}c^{n})\mid n,m\in \mathbb {N} \}$ are rational but $R\cap S=\{(a^{n},b^{n}c^{n})\mid n\in \mathbb {N} \}$ is not because its projection to the second element $\{b^{n}c^{n}\mid n\in \mathbb {N} \}$ is not rational. Since rational sets are closed under union, closure under complement would imply closure under intersection; the example therefore shows that $\mathrm {RAT} (N)$ is closed under neither intersection nor complement.
The intersection of a rational subset and of a recognizable subset is rational.
For finite groups the following result of A. Anissimov and A. W. Seifert is well known: a subgroup H of a finitely generated group G is recognizable if and only if H has finite index in G. In contrast, H is rational if and only if H is finitely generated.[3]
Rational relations and rational functions
A binary relation between monoids M and N is a rational relation if the graph of the relation, regarded as a subset of M×N is a rational set in the product monoid. A function from M to N is a rational function if the graph of the function is a rational set.[4]
See also
• Rational series
• Recognizable set
• Rational monoid
References
• Diekert, Volker; Kufleitner, Manfred; Rosenberg, Gerhard; Hertrampf, Ulrich (2016). "Chapter 7: Automata". Discrete Algebraic Methods. Berlin/Boston: Walter de Gruyther GmbH. ISBN 978-3-11-041332-8.
• Jean-Éric Pin, Mathematical Foundations of Automata Theory, Chapter IV: Recognisable and rational sets
• Samuel Eilenberg and M. P. Schützenberger, Rational Sets in Commutative Monoids, Journal of Algebra, 1969.
1. Mathematical Foundations of Automata Theory
2. cf. Jean-Eric Pin, Mathematical Foundations of Automata Theory, p. 76, Example 1.3
3. John Meakin (2007). "Groups and semigroups: connections and contrasts". In C.M. Campbell; M.R. Quick; E.F. Robertson; G.C. Smith (eds.). Groups St Andrews 2005 Volume 2. Cambridge University Press. p. 376. ISBN 978-0-521-69470-4. preprint
4. Hoffmann, Michael; Kuske, Dietrich; Otto, Friedrich; Thomas, Richard M. (2002). "Some relatives of automatic and hyperbolic groups". In Gomes, Gracinda M. S. (ed.). Semigroups, algorithms, automata and languages. Proceedings of workshops held at the International Centre of Mathematics, CIM, Coimbra, Portugal, May, June and July 2001. Singapore: World Scientific. pp. 379–406. Zbl 1031.20047.
Further reading
• Sakarovitch, Jacques (2009). Elements of automata theory. Translated from the French by Reuben Thomas. Cambridge: Cambridge University Press. Part II: The power of algebra. ISBN 978-0-521-84425-3. Zbl 1188.68177.
Rational singularity
In mathematics, more particularly in the field of algebraic geometry, a scheme $X$ has rational singularities, if it is normal, of finite type over a field of characteristic zero, and there exists a proper birational map
$f\colon Y\rightarrow X$
from a regular scheme $Y$ such that the higher direct images of ${\mathcal {O}}_{Y}$ under $f$ are trivial. That is,
$R^{i}f_{*}{\mathcal {O}}_{Y}=0$ for $i>0$.
If there is one such resolution, then it follows that all resolutions share this property, since any two resolutions of singularities can be dominated by a third.
For surfaces, rational singularities were defined by (Artin 1966).
Formulations
Alternately, one can say that $X$ has rational singularities if and only if the natural map in the derived category
${\mathcal {O}}_{X}\rightarrow Rf_{*}{\mathcal {O}}_{Y}$
is a quasi-isomorphism. Notice that this includes the statement that ${\mathcal {O}}_{X}\simeq f_{*}{\mathcal {O}}_{Y}$ and hence the assumption that $X$ is normal.
There are related notions in positive and mixed characteristic: pseudo-rational and F-rational singularities.
Rational singularities are in particular Cohen-Macaulay, normal and Du Bois. They need not be Gorenstein or even Q-Gorenstein.
Log terminal singularities are rational.[1]
Examples
An example of a rational singularity is the singular point of the quadric cone
$x^{2}+y^{2}+z^{2}=0.$
Artin[2] showed that the rational double points of algebraic surfaces are the Du Val singularities.
See also
• Elliptic singularity
References
1. (Kollár & Mori 1998, Theorem 5.22.)
2. (Artin 1966)
• Artin, Michael (1966), "On isolated rational singularities of surfaces", American Journal of Mathematics, The Johns Hopkins University Press, 88 (1): 129–136, doi:10.2307/2373050, ISSN 0002-9327, JSTOR 2373050, MR 0199191
• Kollár, János; Mori, Shigefumi (1998), Birational geometry of algebraic varieties, Cambridge Tracts in Mathematics, vol. 134, Cambridge University Press, doi:10.1017/CBO9780511662560, ISBN 978-0-521-63277-5, MR 1658959
• Lipman, Joseph (1969), "Rational singularities, with applications to algebraic surfaces and unique factorization", Publications Mathématiques de l'IHÉS (36): 195–279, ISSN 1618-1913, MR 0276239
Ratner's theorems
In mathematics, Ratner's theorems are a group of major theorems in ergodic theory concerning unipotent flows on homogeneous spaces proved by Marina Ratner around 1990. The theorems grew out of Ratner's earlier work on horocycle flows. The study of the dynamics of unipotent flows played a decisive role in the proof of the Oppenheim conjecture by Grigory Margulis. Ratner's theorems have guided key advances in the understanding of the dynamics of unipotent flows. Their later generalizations provide ways to both sharpen the results and extend the theory to the setting of arbitrary semisimple algebraic groups over a local field.
Short description
The Ratner orbit closure theorem asserts that the closures of orbits of unipotent flows on the quotient of a Lie group by a lattice are nice, geometric subsets. The Ratner equidistribution theorem further asserts that each such orbit is equidistributed in its closure. The Ratner measure classification theorem is the weaker statement that every ergodic invariant probability measure is homogeneous, or algebraic: this turns out to be an important step towards proving the more general equidistribution property. There is no universal agreement on the names of these theorems: they are variously known as the "measure rigidity theorem", the "theorem on invariant measures" and its "topological version", and so on.
The formal statement of such a result is as follows. Let $G$ be a Lie group, ${\mathit {\Gamma }}$ a lattice in $G$, and $u^{t}$ a one-parameter subgroup of $G$ consisting of unipotent elements, with the associated flow $\phi _{t}$ on ${\mathit {\Gamma }}\setminus G$. Then the closure of every orbit $\left\{xu^{t}\right\}$ of $\phi _{t}$ is homogeneous. This means that there exists a connected, closed subgroup $S$ of $G$ such that the image of the orbit $\,xS\,$ for the action of $S$ by right translations on $G$ under the canonical projection to ${\mathit {\Gamma }}\setminus G$ is closed, has a finite $S$-invariant measure, and contains the closure of the $\phi _{t}$-orbit of $x$ as a dense subset.
Example: $SL_{2}(\mathbb {R} )$
The simplest case to which the statement above applies is $G=SL_{2}(\mathbb {R} )$. In this case it takes the following more explicit form; let $\Gamma $ be a lattice in $SL_{2}(\mathbb {R} )$ and $F\subset \Gamma \backslash G$ a closed subset which is invariant under all maps $\Gamma g\mapsto \Gamma (gu_{t})$ where $u_{t}={\begin{pmatrix}1&t\\0&1\end{pmatrix}}$. Then either there exists an $x\in \Gamma \backslash G$ such that $F=xU$ (where $U=\{u_{t},t\in \mathbb {R} \}$) or $F=\Gamma \backslash G$.
In geometric terms $\Gamma $ is a cofinite Fuchsian group, so the quotient $M=\Gamma \backslash \mathbb {H} ^{2}$ of the hyperbolic plane by $\Gamma $ is a hyperbolic orbifold of finite volume. The theorem above implies that every horocycle of $\mathbb {H} ^{2}$ has an image in $M$ which is either a closed curve (a horocycle around a cusp of $M$) or dense in $M$.
See also
• Equidistribution theorem
References
Expositions
• Morris, Dave Witte (2005). Ratner's Theorems on Unipotent Flows (PDF). Chicago Lectures in Mathematics. Chicago, IL: University of Chicago Press. ISBN 978-0-226-53984-3. MR 2158954.
• Einsiedler, Manfred (2009). "What is... measure rigidity?" (PDF). Notices of the AMS. 56 (5): 600–601.
Selected original articles
• Ratner, Marina (1990). "Strict measure rigidity for unipotent subgroups of solvable groups". Invent. Math. 101 (2): 449–482. doi:10.1007/BF01231511. MR 1062971.
• Ratner, Marina (1990). "On measure rigidity of unipotent subgroups of semisimple groups". Acta Math. 165 (1): 229–309. doi:10.1007/BF02391906. MR 1075042.
• Ratner, Marina (1991). "On Raghunathan's measure conjecture". Ann. of Math. 134 (3): 545–607. doi:10.2307/2944357. MR 1135878.
• Ratner, Marina (1991). "Raghunathan's topological conjecture and distributions of unipotent flows". Duke Math. J. 63 (1): 235–280. doi:10.1215/S0012-7094-91-06311-8. MR 1106945.
• Ratner, Marina (1993). "Raghunathan's conjectures for p-adic Lie groups". International Mathematics Research Notices (5): 141–146. doi:10.1155/S1073792893000145. MR 1219864.
• Ratner, Marina (1995). "Raghunathan's conjectures for cartesian products of real and p-adic Lie groups". Duke Math. J. 77 (2): 275–382. doi:10.1215/S0012-7094-95-07710-2. MR 1321062.
• Margulis, Grigory A.; Tomanov, Georges M. (1994). "Invariant measures for actions of unipotent groups over local fields on homogeneous spaces". Invent. Math. 116 (1): 347–392. doi:10.1007/BF01231565. MR 1253197.
Rauch comparison theorem
In Riemannian geometry, the Rauch comparison theorem, named after Harry Rauch, who proved it in 1951, is a fundamental result which relates the sectional curvature of a Riemannian manifold to the rate at which geodesics spread apart. Intuitively, it states that for positive curvature, geodesics tend to converge, while for negative curvature, geodesics tend to spread.
The statement of the theorem involves two Riemannian manifolds, and allows one to compare the infinitesimal rate at which geodesics spread apart in the two manifolds, provided that their curvatures can be compared. Most of the time, one of the two manifolds is a "comparison model", generally a manifold with constant curvature, and the second one is the manifold under study: a bound (either lower or upper) on its sectional curvature is then needed in order to apply the Rauch comparison theorem.
Statement
Let $M,{\widetilde {M}}$ be Riemannian manifolds, on which are drawn unit speed geodesic segments $\gamma :[0,T]\to M$ and ${\widetilde {\gamma }}:[0,T]\to {\widetilde {M}}$. Assume that ${\widetilde {\gamma }}(0)$ has no conjugate points along ${\widetilde {\gamma }}$, and let $J,{\widetilde {J}}$ be two normal Jacobi fields along $\gamma $ and ${\widetilde {\gamma }}$ such that:
• $J(0)=0$ and ${\widetilde {J}}(0)=0$
• $|D_{t}J(0)|=\left|{\widetilde {D}}_{t}{\widetilde {J}}(0)\right|$.
If the sectional curvature of every 2-plane $\Pi \subset T_{\gamma (t)}M$ containing ${\dot {\gamma }}(t)$ is less than or equal to the sectional curvature of every 2-plane ${\widetilde {\Pi }}\subset T_{{\tilde {\gamma }}(t)}{\widetilde {M}}$ containing ${\dot {\widetilde {\gamma }}}(t)$, then $|J(t)|\geq |{\widetilde {J}}(t)|$ for all $t\in [0,T]$.
Conditions of the theorem
The theorem is formulated using Jacobi fields to measure the variation in geodesics. As the tangential part of a Jacobi field is independent of the geometry of the manifold, the theorem focuses on normal Jacobi fields, i.e. Jacobi fields which are orthogonal to the speed vector ${\dot {\gamma }}(t)$ of the geodesic for all time $t$. Up to reparametrization, every variation of geodesics induces a normal Jacobi field.
Jacobi fields are requested to vanish at time $t=0$ because the theorem measures the infinitesimal divergence (or convergence) of a family of geodesics issued from the same point $\gamma (0)$, and such a family induces a Jacobi field vanishing at $t=0$.
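In the three model spaces of constant curvature 1, 0 and −1, a normal Jacobi field with $J(0)=0$ and $|D_{t}J(0)|=1$ has norm $\sin t$, $t$ and $\sinh t$ respectively, so the comparison reduces to the elementary inequalities $\sinh t\geq t\geq \sin t$. A quick numerical check (illustrative only, before the first conjugate point):

```python
import math

def jacobi_norm(curvature, t):
    # |J(t)| for a normal Jacobi field with J(0) = 0, |D_t J(0)| = 1
    # in the model space of the given constant sectional curvature.
    if curvature > 0:
        s = math.sqrt(curvature)
        return math.sin(s * t) / s
    if curvature < 0:
        s = math.sqrt(-curvature)
        return math.sinh(s * t) / s
    return t

# Lower curvature means faster spreading: for 0 < t < pi,
# |J| in H^2 (k = -1) >= |J| in R^2 (k = 0) >= |J| in S^2 (k = 1).
for t in [0.1, 0.5, 1.0, 2.0, 3.0]:
    assert jacobi_norm(-1, t) >= jacobi_norm(0, t) >= jacobi_norm(1, t)
```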
Analog theorems
Under very similar conditions, it is also possible to compare the Hessian of the distance function to a given point.[1] It is also possible to compare the Laplacian of this function (which is the trace of the Hessian), with some additional condition on one of the two manifolds: it is then enough to have an inequality on the Ricci curvature (which is the trace of the curvature tensor).[1]
See also
• Toponogov's theorem
References
1. Greene, Robert Everist; Wu, Hongxi (1979). Function theory on manifolds which possess a pole. Berlin: Springer-Verlag. ISBN 0-387-09108-4. OCLC 4593089.
• do Carmo, M.P. Riemannian Geometry, Birkhäuser, 1992.
• Lee, J. M., Riemannian Manifolds: An Introduction to Curvature, Springer, 1997.
|
Wikipedia
|
Ravenel's conjectures
In mathematics, the Ravenel conjectures are a set of mathematical conjectures in the field of stable homotopy theory posed by Douglas Ravenel at the end of a paper published in 1984.[1] The list had earlier circulated in preprint form.[2] The problems involved have largely been resolved, with all but the "telescope conjecture" being proved in later papers by others.[3][4] The telescope conjecture is now generally believed not to be true, though there are some conflicting claims concerning it in the published literature, and it has long been treated as an open problem.[2] Ravenel's conjectures exerted influence on the field through the founding of the approach of chromatic homotopy theory.
The first of the seven conjectures, the nilpotence conjecture, was proved in 1988 and is now known as the nilpotence theorem.
The telescope conjecture, which was #4 on the original list, remains of substantial interest because of its connection with the convergence of an Adams–Novikov spectral sequence. While opinion has been generally against the truth of the original statement, investigations of associated phenomena (for a triangulated category in general) have become a research area in its own right.[5][6]
On June 6, 2023, Robert Burklund, Jeremy Hahn, Ishan Levy, and Tomer Schlank announced a disproof of the telescope conjecture.[7] Their preprint is forthcoming.
References
1. Ravenel, Douglas C. (1984). "Localization with Respect to Certain Periodic Homology Theories" (PDF). American Journal of Mathematics. 106 (2): 351–414. doi:10.2307/2374308. JSTOR 2374308. MR 0737778.
2. Hopkins, Michael J. (2008). "The mathematical work of Douglas C. Ravenel" (PDF). Homology, Homotopy and Applications. 10 (3, Proceedings of a Conference in Honor of Douglas C. Ravenel and W. Stephen Wilson): 1–13. doi:10.4310/HHA.2008.v10.n3.a1.
3. Devinatz, Ethan S.; Hopkins, Michael J.; Smith, Jeffrey H. (1988). "Nilpotence and stable homotopy theory. I". Annals of Mathematics. Second Series. 128 (2): 207–241. doi:10.2307/1971440. ISSN 0003-486X. JSTOR 1971440. MR 0960945.
4. Hopkins, Michael J.; Smith, Jeffrey H. (1998). "Nilpotence and Stable Homotopy Theory II". Annals of Mathematics. Second Series. 148 (1): 1–49. CiteSeerX 10.1.1.568.9148. doi:10.2307/120991. JSTOR 120991.
5. Brüning, Kristian (2007). "Subcategories of Triangulated Categories and the Smashing Conjecture" (PDF). Dissertation zur Erlangung des akademischen Grades: 25. {{cite journal}}: Cite journal requires |journal= (help)
6. Jack, Hall; David, Rydh (2016-06-27). "The telescope conjecture for algebraic stacks". Journal of Topology. 10 (3): 776–794. arXiv:1606.08413. doi:10.1112/topo.12021. S2CID 119336098.
7. Hartnett, Kevin (2023-08-22). "An Old Conjecture Falls, Making Spheres a Lot More Complicated". Quanta Magazine. Retrieved 2023-08-22.
Raviart–Thomas basis functions
In applied mathematics, Raviart–Thomas basis functions are vector basis functions used in finite element and boundary element methods. They are regularly used as basis functions when working in electromagnetics. They are sometimes called Rao–Wilton–Glisson basis functions.[1]
The space $\mathrm {RT} _{q}$ spanned by the Raviart–Thomas basis functions of order $q$ is the smallest polynomial space such that the divergence maps $\mathrm {RT} _{q}$ onto $\mathrm {P} _{q}$, the space of piecewise polynomials of order $q$.[2]
Order 0 Raviart–Thomas Basis Functions in 2D
In two-dimensional space, the lowest-order Raviart–Thomas space, $\mathrm {RT} _{0}$, has degrees of freedom on the edges of the elements of the finite element mesh. The $n$th edge has an associated basis function defined by[3]
$\mathbf {f} _{n}(\mathbf {r} )=\left\{{\begin{array}{ll}{\frac {l_{n}}{2A_{n}^{+}}}(\mathbf {r} -\mathbf {p} _{+})\quad &\mathrm {if\ \mathbf {r} \in \ T_{+}} \\-{\frac {l_{n}}{2A_{n}^{-}}}(\mathbf {r} -\mathbf {p} _{-})\quad &\mathrm {if\ \mathbf {r} \in \ T_{-}} \\\mathbf {0} \quad &\mathrm {otherwise} \end{array}}\right.$
where $l_{n}$ is the length of the edge, $T_{+}$ and $T_{-}$ are the two triangles adjacent to the edge, $A_{n}^{+}$ and $A_{n}^{-}$ are the areas of the triangles and $\mathbf {p} _{+}$ and $\mathbf {p} _{-}$ are the opposite corners of the triangles.
Sometimes the basis functions are alternatively defined as
$\mathbf {f} _{n}(\mathbf {r} )=\left\{{\begin{array}{ll}{\frac {1}{2A_{n}^{+}}}(\mathbf {r} -\mathbf {p} _{+})\quad &\mathrm {if\ \mathbf {r} \in \ T_{+}} \\-{\frac {1}{2A_{n}^{-}}}(\mathbf {r} -\mathbf {p} _{-})\quad &\mathrm {if\ \mathbf {r} \in \ T_{-}} \\\mathbf {0} \quad &\mathrm {otherwise} \end{array}}\right.$
with the length factor not included.
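Two defining properties of these functions can be checked numerically: the divergence of $\mathbf {f} _{n}$ is constant on each triangle (equal to $\pm l_{n}/A_{n}^{\pm }$), and its component along the normal of the shared edge equals 1, which is what makes the normal flux continuous across elements. A small illustrative sketch in Python with NumPy (the particular triangle is an arbitrary choice for the example):

```python
import numpy as np

# shared edge from a to b; p_plus is the vertex of T+ opposite the edge
a, b = np.array([0.0, 0.0]), np.array([1.0, 0.0])
p_plus = np.array([0.0, 1.0])
l_n = np.linalg.norm(b - a)
e1, e2 = b - a, p_plus - a
A_plus = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])   # area of T+

def f_plus(r):
    """RT0 (RWG-style) edge basis function restricted to the triangle T+."""
    return l_n / (2.0 * A_plus) * (r - p_plus)

# divergence via central finite differences (exact here, since f is linear)
h = 1e-6
r0 = np.array([0.3, 0.3])                            # a point inside T+
div = ((f_plus(r0 + [h, 0]) - f_plus(r0 - [h, 0]))[0]
       + (f_plus(r0 + [0, h]) - f_plus(r0 - [0, h]))[1]) / (2 * h)
assert abs(div - l_n / A_plus) < 1e-6                # constant divergence l_n/A+

# unit normal component along the shared edge (outward normal of T+ is (0, -1))
for t in (0.2, 0.5, 0.8):
    r = (1 - t) * a + t * b
    assert abs(f_plus(r) @ np.array([0.0, -1.0]) - 1.0) < 1e-12
```

With the alternative normalization (length factor omitted), the edge flux is $1$ instead of $l_{n}$ and the divergence is $\pm 1/A_{n}^{\pm }$.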
References
1. Andriulli, Francesco P.; Cools; Bagci; Olyslager; Buffa; Christiansen; Michielssen (2008). "A Multiplicative Calderón Preconditioner for the Electric Field Integral Equation". IEEE Transactions on Antennas and Propagation. 56 (8): 2398–2412. Bibcode:2008ITAP...56.2398A. doi:10.1109/tap.2008.926788. hdl:1854/LU-677703. S2CID 38745490.
2. Logg, Anders; Mardal, Kent-Andre; Wells, Garth, eds. (2012). "Chapter 3. Common and unusual finite elements". Automated Solution of Differential Equations by the Finite Element Method. Lecture Notes in Computational Science and Engineering. Vol. 84. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 95–119. doi:10.1007/978-3-642-23099-8. ISBN 978-3-642-23098-1.
3. Bahriawati, C.; Carstensen, C. (2005). "Three MATLAB Implementations Of The Lowest-Order Raviart-Thomas MFEM With a Posteriori Error Control" (PDF). Computational Methods in Applied Mathematics. 5 (4): 331–361. doi:10.2478/cmam-2005-0016. S2CID 3897312. Retrieved 8 October 2015.
Moment (mathematics)
In mathematics, the moments of a function are certain quantitative measures related to the shape of the function's graph. If the function represents mass density, then the zeroth moment is the total mass, the first moment (normalized by total mass) is the center of mass, and the second moment is the moment of inertia. If the function is a probability distribution, then the first moment is the expected value, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis. The mathematical concept is closely related to the concept of moment in physics.
For a distribution of mass or probability on a bounded interval, the collection of all the moments (of all orders, from 0 to ∞) uniquely determines the distribution (Hausdorff moment problem). The same is not true on unbounded intervals (Hamburger moment problem).
In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the moments of random variables.[1]
Significance of the moments
The n-th raw moment (i.e., moment about zero) of a distribution is defined by[2]
$\mu '_{n}=\langle x^{n}\rangle $
where
$\langle f(x)\rangle ={\begin{cases}\sum f(x)P(x),&{\text{discrete distribution}}\\\int f(x)P(x)dx,&{\text{continuous distribution}}\end{cases}}$
The n-th moment of a real-valued continuous function f(x) of a real variable about a value c is the integral
$\mu _{n}=\int _{-\infty }^{\infty }(x-c)^{n}\,f(x)\,\mathrm {d} x.$
It is possible to define moments for random variables in a more general fashion than moments for real-valued functions — see moments in metric spaces. The moment of a function, without further explanation, usually refers to the above expression with c = 0.
For the second and higher moments, the central moments (moments about the mean, with c being the mean) are usually used rather than the moments about zero, because they provide clearer information about the distribution's shape.
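As a concrete check, raw and central moments of a density can be computed by discretizing the defining integral. The sketch below (Python with NumPy; the grid bounds and tolerances are arbitrary choices for the example) recovers the mean, the variance, and the fourth central moment $3\sigma ^{4}$ of a normal density:

```python
import numpy as np

mu, sigma = 2.0, 3.0
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
pdf = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def moment(n, c=0.0):
    """n-th moment of the density about the point c (trapezoidal rule)."""
    y = (x - c) ** n * pdf
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

assert abs(moment(0) - 1.0) < 1e-9                    # zeroth moment: total mass
assert abs(moment(1) - mu) < 1e-9                     # first raw moment: mean
assert abs(moment(2, c=mu) - sigma ** 2) < 1e-6       # second central: variance
assert abs(moment(4, c=mu) - 3 * sigma ** 4) < 1e-3   # normal: mu_4 = 3 sigma^4
```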
Other moments may also be defined. For example, the nth inverse moment about zero is $\operatorname {E} \left[X^{-n}\right]$ and the n-th logarithmic moment about zero is $\operatorname {E} \left[\ln ^{n}(X)\right].$
The n-th moment about zero of a probability density function f(x) is the expected value of Xn and is called a raw moment or crude moment.[3] The moments about its mean μ are called central moments; these describe the shape of the function, independently of translation.
If f is a probability density function, then the value of the integral above is called the n-th moment of the probability distribution. More generally, if F is a cumulative probability distribution function of any probability distribution, which may not have a density function, then the n-th moment of the probability distribution is given by the Riemann–Stieltjes integral
$\mu '_{n}=\operatorname {E} \left[X^{n}\right]=\int _{-\infty }^{\infty }x^{n}\,\mathrm {d} F(x)$
where X is a random variable that has this cumulative distribution F, and E is the expectation operator or mean. When
$\operatorname {E} \left[\left|X^{n}\right|\right]=\int _{-\infty }^{\infty }\left|x^{n}\right|\,\mathrm {d} F(x)=\infty $
the moment is said not to exist. If the n-th moment about any point exists, so does the (n − 1)-th moment (and thus, all lower-order moments) about every point.
The zeroth moment of any probability density function is 1, since the area under any probability density function must be equal to one.
Significance of moments (raw, central, normalised) and cumulants (raw, normalised), in connection with named properties of distributions

| Moment ordinal | Raw moment | Central moment | Standardized moment | Raw cumulant | Normalized cumulant |
|---|---|---|---|---|---|
| 1 | Mean | 0 | 0 | Mean | — |
| 2 | – | Variance | 1 | Variance | 1 |
| 3 | – | – | Skewness | – | Skewness |
| 4 | – | – | (Non-excess or historical) kurtosis | – | Excess kurtosis |
| 5 | – | – | Hyperskewness | – | – |
| 6 | – | – | Hypertailedness | – | – |
| 7+ | – | – | – | – | – |
Standardized moments
The normalised n-th central moment or standardised moment is the n-th central moment divided by σn; the normalised n-th central moment of the random variable X is
${\frac {\mu _{n}}{\sigma ^{n}}}={\frac {\operatorname {E} \left[(X-\mu )^{n}\right]}{\sigma ^{n}}}={\frac {\operatorname {E} \left[(X-\mu )^{n}\right]}{\operatorname {E} \left[(X-\mu )^{2}\right]^{\frac {n}{2}}}}.$
These normalised central moments are dimensionless quantities, which represent the distribution independently of any linear change of scale.
For an electric signal, the first moment is its DC level, and the second moment is proportional to its average power.[4][5]
Mean
Main article: Mean
The first raw moment is the mean, usually denoted $\mu \equiv \operatorname {E} [X].$
Variance
Main article: Variance
The second central moment is the variance. The positive square root of the variance is the standard deviation $\sigma \equiv \left(\operatorname {E} \left[(x-\mu )^{2}\right]\right)^{\frac {1}{2}}.$
Skewness
Main article: Skewness
The third central moment is the measure of the lopsidedness of the distribution; any symmetric distribution will have a third central moment, if defined, of zero. The normalised third central moment is called the skewness, often γ. A distribution that is skewed to the left (the tail of the distribution is longer on the left) will have a negative skewness. A distribution that is skewed to the right (the tail of the distribution is longer on the right), will have a positive skewness.
For distributions that are not too different from the normal distribution, the median will be somewhere near μ − γσ/6; the mode about μ − γσ/2.
Kurtosis
Main article: Kurtosis
The fourth central moment is a measure of the heaviness of the tail of the distribution. Since it is the expectation of a fourth power, the fourth central moment, where defined, is always nonnegative; and except for a point distribution, it is always strictly positive. The fourth central moment of a normal distribution is 3σ4.
The kurtosis κ is defined to be the standardized fourth central moment. (Equivalently, as in the next section, excess kurtosis is the fourth cumulant divided by the square of the second cumulant.)[6][7] If a distribution has heavy tails, the kurtosis will be high (sometimes called leptokurtic); conversely, light-tailed distributions (for example, bounded distributions such as the uniform) have low kurtosis (sometimes called platykurtic).
The kurtosis can be positive without limit, but κ must be greater than or equal to γ2 + 1; equality only holds for binary distributions. For unbounded skew distributions not too far from normal, κ tends to be somewhere in the area of γ2 and 2γ2.
The inequality can be proven by considering
$\operatorname {E} \left[\left(T^{2}-aT-1\right)^{2}\right]$
where T = (X − μ)/σ. This is the expectation of a square, so it is non-negative for all a; however it is also a quadratic polynomial in a. Its discriminant must be non-positive, which gives the required relationship.
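The bound κ ≥ γ² + 1 can be observed directly on data. A sketch in plain Python (stdlib only; the sample sizes are arbitrary) computes the sample skewness and kurtosis of a skewed sample and checks that equality holds for a two-point (binary) data set:

```python
import random

def std_moment(xs, n):
    """n-th standardized central moment of a sample (population convention)."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum((x - m) ** n for x in xs) / len(xs) / var ** (n / 2)

random.seed(0)
xs = [random.expovariate(1.0) for _ in range(100000)]   # a right-skewed sample
gamma, kappa = std_moment(xs, 3), std_moment(xs, 4)
assert kappa >= gamma ** 2 + 1          # holds for every distribution

ys = [0.0] * 3 + [1.0] * 7              # two-point ("binary") data
g, k = std_moment(ys, 3), std_moment(ys, 4)
assert abs(k - (g ** 2 + 1)) < 1e-9     # equality exactly in the binary case
```

For the exponential distribution the true values are γ = 2 and κ = 9, comfortably satisfying κ ≥ γ² + 1 = 5.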
Higher moments
High-order moments are moments beyond 4th-order moments.
As with variance, skewness, and kurtosis, these are higher-order statistics, involving non-linear combinations of the data, and can be used for description or estimation of further shape parameters. The higher the moment, the harder it is to estimate, in the sense that larger samples are required in order to obtain estimates of similar quality. This is due to the excess degrees of freedom consumed by the higher orders. Further, they can be subtle to interpret, often being most easily understood in terms of lower order moments – compare the higher-order derivatives of jerk and jounce in physics. For example, just as the 4th-order moment (kurtosis) can be interpreted as "relative importance of tails as compared to shoulders in contribution to dispersion" (for a given amount of dispersion, higher kurtosis corresponds to thicker tails, while lower kurtosis corresponds to broader shoulders), the 5th-order moment can be interpreted as measuring "relative importance of tails as compared to center (mode and shoulders) in contribution to skewness" (for a given amount of skewness, higher 5th moment corresponds to higher skewness in the tail portions and little skewness of mode, while lower 5th moment corresponds to more skewness in shoulders).
Mixed moments
Mixed moments are moments involving multiple variables.
The value $E[X^{k}]$ is called the moment of order $k$ (moments are also defined for non-integral $k$). The moments of the joint distribution of random variables $X_{1}...X_{n}$ are defined similarly. For any integers $k_{i}\geq 0$, the mathematical expectation $E[{X_{1}}^{k_{1}}\cdots {X_{n}}^{k_{n}}]$ is called a mixed moment of order $k$ (where $k=k_{1}+...+k_{n}$), and $E[(X_{1}-E[X_{1}])^{k_{1}}\cdots (X_{n}-E[X_{n}])^{k_{n}}]$ is called a central mixed moment of order $k$. The mixed moment $E[(X_{1}-E[X_{1}])(X_{2}-E[X_{2}])]$ is called the covariance and is one of the basic characteristics of dependency between random variables.
Some examples are covariance, coskewness and cokurtosis. While there is a unique covariance, there are multiple co-skewnesses and co-kurtoses.
Properties of moments
Transformation of center
Since
$(x-b)^{n}=(x-a+a-b)^{n}=\sum _{i=0}^{n}{n \choose i}(x-a)^{i}(a-b)^{n-i}$
where $ {\binom {n}{i}}$ is the binomial coefficient, it follows that the moments about b can be calculated from the moments about a by:
$E\left[(x-b)^{n}\right]=\sum _{i=0}^{n}{n \choose i}E\left[(x-a)^{i}\right](a-b)^{n-i}.$
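This change-of-center identity is easy to verify on a small data set, treating moments as plain averages. A minimal stdlib-Python check (the numbers are arbitrary):

```python
from math import comb

xs = [1.0, 2.0, 4.0, 8.0]          # a small data set; moments are plain averages

def mom(c, n):
    """n-th sample moment about the point c."""
    return sum((x - c) ** n for x in xs) / len(xs)

a, b, n = 1.5, -2.0, 4
lhs = mom(b, n)
rhs = sum(comb(n, i) * mom(a, i) * (a - b) ** (n - i) for i in range(n + 1))
assert abs(lhs - rhs) < 1e-9       # moments about b from moments about a
```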
The moment of a convolution of function
Main article: Convolution
The moment of a convolution $ h(t)=(f*g)(t)=\int _{-\infty }^{\infty }f(\tau )g(t-\tau )\,d\tau $ reads
$\mu _{n}[h]=\sum _{i=0}^{n}{n \choose i}\mu _{i}[f]\mu _{n-i}[g]$
where $\mu _{n}[\,\cdot \,]$ denotes the $n$-th moment of the function given in the brackets. This identity follows from the convolution theorem for moment generating functions, together with the general Leibniz rule for differentiating a product.
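The identity can be verified numerically by convolving two densities on a common grid; in the discretized setting it even holds exactly, since $(x_{i}+x_{j})^{n}$ expands by the binomial theorem. A sketch in Python with NumPy (the two Gaussian densities are arbitrary choices for the example):

```python
import numpy as np
from math import comb

h = 0.01
x = np.arange(-10.0, 10.0, h)

def gauss(m, s):
    return np.exp(-(x - m) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

f, g = gauss(1.0, 0.5), gauss(-0.5, 0.8)
conv = np.convolve(f, g) * h                  # density of the convolution f*g
z = 2 * x[0] + h * np.arange(len(conv))       # grid on which conv lives

def mom(grid, dens, n):
    """n-th moment of a gridded density (Riemann sum)."""
    return float(np.sum(grid ** n * dens) * h)

for n in range(5):
    direct = mom(z, conv, n)
    via_identity = sum(comb(n, i) * mom(x, f, i) * mom(x, g, n - i)
                       for i in range(n + 1))
    assert abs(direct - via_identity) < 1e-8
```

Since the convolution of two densities is the density of the sum of independent random variables, this is the same statement as the binomial expansion of $E[(X+Y)^{n}]$ under independence.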
Cumulants
Main article: Cumulant
The first raw moment and the second and third unnormalized central moments are additive in the sense that if X and Y are independent random variables then
${\begin{aligned}m_{1}(X+Y)&=m_{1}(X)+m_{1}(Y)\\\operatorname {Var} (X+Y)&=\operatorname {Var} (X)+\operatorname {Var} (Y)\\\mu _{3}(X+Y)&=\mu _{3}(X)+\mu _{3}(Y)\end{aligned}}$
(These can also hold for variables that satisfy weaker conditions than independence. The first always holds; if the second holds, the variables are called uncorrelated).
In fact, these are the first three cumulants and all cumulants share this additivity property.
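Additivity of the variance and of the third central moment for independent variables is a polynomial identity in the raw moments, so it can be checked by substituting arbitrary numbers. A stdlib-Python sketch (the random values merely stand in for the raw moments $E[X^{i}]$, $E[Y^{i}]$):

```python
from math import comb
import random

random.seed(1)
# arbitrary numeric stand-ins for the raw moments E[X^i] and E[Y^i];
# the additivity statements are polynomial identities, so random values suffice
ax = {0: 1.0, 1: random.random(), 2: random.random(), 3: random.random()}
by = {0: 1.0, 1: random.random(), 2: random.random(), 3: random.random()}

def E_sum(n):
    """E[(X+Y)^n] expanded via independence: E[X^i Y^j] = E[X^i] E[Y^j]."""
    return sum(comb(n, i) * ax[i] * by[n - i] for i in range(n + 1))

m = E_sum(1)
var_sum = E_sum(2) - m ** 2
mu3_sum = E_sum(3) - 3 * m * E_sum(2) + 2 * m ** 3

var_x, var_y = ax[2] - ax[1] ** 2, by[2] - by[1] ** 2
mu3_x = ax[3] - 3 * ax[1] * ax[2] + 2 * ax[1] ** 3
mu3_y = by[3] - 3 * by[1] * by[2] + 2 * by[1] ** 3

assert abs(var_sum - (var_x + var_y)) < 1e-12    # Var(X+Y) = Var(X) + Var(Y)
assert abs(mu3_sum - (mu3_x + mu3_y)) < 1e-12    # mu_3(X+Y) = mu_3(X) + mu_3(Y)
```

The analogous check fails for the fourth central moment, which is why one passes to cumulants beyond order three.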
Sample moments
For all k, the k-th raw moment of a population can be estimated using the k-th raw sample moment
${\frac {1}{n}}\sum _{i=1}^{n}X_{i}^{k}$
applied to a sample X1, ..., Xn drawn from the population.
It can be shown that the expected value of the raw sample moment is equal to the k-th raw moment of the population, if that moment exists, for any sample size n. It is thus an unbiased estimator. This contrasts with the situation for central moments, whose computation uses up a degree of freedom by using the sample mean. So for example an unbiased estimate of the population variance (the second central moment) is given by
${\frac {1}{n-1}}\sum _{i=1}^{n}\left(X_{i}-{\bar {X}}\right)^{2}$
in which the previous denominator n has been replaced by the degrees of freedom n − 1, and in which ${\bar {X}}$ refers to the sample mean. This estimate of the population moment is greater than the unadjusted observed sample moment by a factor of ${\tfrac {n}{n-1}},$ and it is referred to as the "adjusted sample variance" or sometimes simply the "sample variance".
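The bias correction can be seen in simulation: averaging both estimators over many small samples, the $1/n$ version undershoots the true variance by the factor $(n-1)/n$, while the $1/(n-1)$ version does not. An illustrative stdlib-Python sketch (sample counts and tolerances are arbitrary):

```python
import random

random.seed(42)
sigma2 = 4.0                        # true population variance (gauss with sd 2)
n, trials = 5, 100000
biased_sum = unbiased_sum = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 2.0) for _ in range(n)]
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    biased_sum += ss / n            # raw second central sample moment
    unbiased_sum += ss / (n - 1)    # Bessel-corrected sample variance

biased = biased_sum / trials
unbiased = unbiased_sum / trials
# E[biased] = (n-1)/n * sigma2 = 3.2, while E[unbiased] = sigma2 = 4.0
assert abs(unbiased - sigma2) < 0.05
assert abs(biased - sigma2 * (n - 1) / n) < 0.05
```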
Problem of moments
Main article: Moment problem
Problems of determining a probability distribution from its sequence of moments are called problems of moments. Such problems were first discussed by P.L. Chebyshev (1874)[8] in connection with research on limit theorems. In order that the probability distribution of a random variable $X$ be uniquely defined by its moments $\alpha _{k}=EX^{k}$ it is sufficient, for example, that Carleman's condition be satisfied:
$\sum _{k=1}^{\infty }{\frac {1}{\alpha _{2k}^{1/2k}}}=\infty $
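For the standard normal distribution the even raw moments are $\alpha _{2k}=(2k-1)!!$, and the Carleman terms $\alpha _{2k}^{-1/2k}$ decay only like ${\sqrt {e/2k}}$, so the series diverges and the moments determine the distribution. A stdlib-Python sketch of the growing partial sums (logarithms are used to avoid overflowing floats):

```python
import math

def carleman_term(k):
    """alpha_{2k}^(-1/(2k)) for the standard normal, with alpha_{2k} = (2k-1)!!."""
    double_fact = math.prod(range(1, 2 * k, 2))      # (2k-1)!! as an exact integer
    return math.exp(-math.log(double_fact) / (2 * k))

def partial_sum(K):
    return sum(carleman_term(k) for k in range(1, K + 1))

# terms behave like sqrt(e/(2k)), so the partial sums keep growing without bound
assert partial_sum(400) - partial_sum(100) > 1.0
assert carleman_term(100) > 0.1                      # slow decay, roughly sqrt(e/200)
```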
A similar result even holds for moments of random vectors. The problem of moments seeks characterizations of sequences $\{\mu '_{n}:n=1,2,3,\dots \}$ that are sequences of moments of some function f. Suppose $\mu _{1},\mu _{2},\dots $ is a sequence of distributions, all moments $\alpha _{k}(n)$ of which are finite, and for each integer $k\geq 1$ let
$\alpha _{k}(n)\rightarrow \alpha _{k},\quad n\rightarrow \infty ,$
where $\alpha _{k}$ is finite. Then there is a subsequence of $\mu _{1},\mu _{2},\dots $ that weakly converges to a distribution function $\mu $ having $\alpha _{k}$ as its moments. If the moments determine $\mu $ uniquely, then the sequence itself weakly converges to $\mu $.
Partial moments
Partial moments are sometimes referred to as "one-sided moments." The n-th order lower and upper partial moments with respect to a reference point r may be expressed as
$\mu _{n}^{-}(r)=\int _{-\infty }^{r}(r-x)^{n}\,f(x)\,\mathrm {d} x,$
$\mu _{n}^{+}(r)=\int _{r}^{\infty }(x-r)^{n}\,f(x)\,\mathrm {d} x.$
If the integral does not converge, the partial moment does not exist.
Partial moments are normalized by being raised to the power 1/n. The upside potential ratio may be expressed as a ratio of a first-order upper partial moment to a normalized second-order lower partial moment. They have been used in the definition of some financial metrics, such as the Sortino ratio, as they focus purely on upside or downside.
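For a standard normal density and reference point r = 0, symmetry makes the lower and upper partial moments equal, and the two second-order partial moments sum to the variance. A numerical sketch in Python with NumPy (grid and tolerances are arbitrary choices):

```python
import numpy as np

h = 1e-4
x = np.arange(-10, 10, h)
pdf = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # standard normal density

def lower_pm(n, r):
    """n-th order lower partial moment about the reference point r."""
    mask = x <= r
    return float(np.sum((r - x[mask]) ** n * pdf[mask]) * h)

def upper_pm(n, r):
    """n-th order upper partial moment about the reference point r."""
    mask = x >= r
    return float(np.sum((x[mask] - r) ** n * pdf[mask]) * h)

assert abs(lower_pm(2, 0.0) - 0.5) < 1e-4                  # half the variance
assert abs(lower_pm(1, 0.0) - 1 / np.sqrt(2 * np.pi)) < 1e-4
assert abs(lower_pm(2, 0.0) + upper_pm(2, 0.0) - 1.0) < 1e-4
```

In the Sortino-ratio setting, `lower_pm(2, r)` with r the target return is the downside semivariance whose square root appears in the denominator.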
Central moments in metric spaces
Let (M, d) be a metric space, and let B(M) be the Borel σ-algebra on M, the σ-algebra generated by the d-open subsets of M. (For technical reasons, it is also convenient to assume that M is a separable space with respect to the metric d.) Let 1 ≤ p ≤ ∞.
The p-th central moment of a measure μ on the measurable space (M, B(M)) about a given point x0 ∈ M is defined to be
$\int _{M}d\left(x,x_{0}\right)^{p}\,\mathrm {d} \mu (x).$
μ is said to have finite p-th central moment if the p-th central moment of μ about x0 is finite for some x0 ∈ M.
This terminology for measures carries over to random variables in the usual way: if (Ω, Σ, P) is a probability space and X : Ω → M is a random variable, then the p-th central moment of X about x0 ∈ M is defined to be
$\int _{M}d\left(x,x_{0}\right)^{p}\,\mathrm {d} \left(X_{*}\left(\mathbf {P} \right)\right)(x)=\int _{\Omega }d\left(X(\omega ),x_{0}\right)^{p}\,\mathrm {d} \mathbf {P} (\omega )=\operatorname {\mathbf {E} } [d(X,x_{0})^{p}],$
and X has finite p-th central moment if the p-th central moment of X about x0 is finite for some x0 ∈ M.
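When the measure is a uniform empirical measure on finitely many points, the p-th central moment reduces to the average of the p-th powers of the distances to x0. A small stdlib-Python sketch in the metric space (R², Euclidean distance); the four sample points are an arbitrary example:

```python
import math

# uniform empirical measure on the corners of the unit square in (R^2, d_Euclidean)
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def central_moment(p, x0):
    """p-th central moment of the uniform empirical measure about the point x0."""
    return sum(math.dist(x, x0) ** p for x in points) / len(points)

center = (0.5, 0.5)
# each corner lies at distance sqrt(1/2) from the center of the square
assert abs(central_moment(2, center) - 0.5) < 1e-12
assert abs(central_moment(1, center) - math.sqrt(0.5)) < 1e-12
```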
See also
• Energy (signal processing)
• Factorial moment
• Generalised mean
• Image moment
• L-moment
• Method of moments (probability theory)
• Method of moments (statistics)
• Moment-generating function
• Moment measure
• Second moment method
• Standardised moment
• Stieltjes moment problem
• Taylor expansions for the moments of functions of random variables
References
• Text was copied from Moment at the Encyclopedia of Mathematics, which is released under a Creative Commons Attribution-Share Alike 3.0 (Unported) (CC-BY-SA 3.0) license and the GNU Free Documentation License.
1. George Mackey (July 1980). "Harmonic analysis as the exploitation of symmetry – a historical survey". Bulletin of the American Mathematical Society. New Series. 3 (1): 549.
2. Papoulis, A. (1984). Probability, Random Variables, and Stochastic Processes, 2nd ed. New York: McGraw Hill. pp. 145–149.
3. "Raw Moment -- from Wolfram MathWorld". Archived from the original on 2009-05-28. Retrieved 2009-06-24. Raw Moments at Math-world
4. Clive Maxfield; John Bird; Tim Williams; Walt Kester; Dan Bensky (2011). Electrical Engineering: Know It All. Newnes. p. 884. ISBN 978-0-08-094966-6.
5. Ha H. Nguyen; Ed Shwedyk (2009). A First Course in Digital Communications. Cambridge University Press. p. 87. ISBN 978-0-521-87613-1.
6. Casella, George; Berger, Roger L. (2002). Statistical Inference (2 ed.). Pacific Grove: Duxbury. ISBN 0-534-24312-6.
7. Ballanda, Kevin P.; MacGillivray, H. L. (1988). "Kurtosis: A Critical Review". The American Statistician. American Statistical Association. 42 (2): 111–119. doi:10.2307/2684482. JSTOR 2684482.
8. Feller, W. (1957-1971). An introduction to probability theory and its applications. New York: John Wiley & Sons. 419 p.
Further reading
• Spanos, Aris (1999). Probability Theory and Statistical Inference. New York: Cambridge University Press. pp. 109–130. ISBN 0-521-42408-9.
• Walker, Helen M. (1929). Studies in the history of statistical method, with special reference to certain educational problems. Baltimore, Williams & Wilkins Co. p. 71.
External links
• "Moment", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Moments at Mathworld
Rawson W. Rawson
Sir Rawson William Rawson, KCMG, CB (8 September 1812 – 20 November 1899) was a British government official and statistician.[1] During his tenure as a public servant in Canada he contributed to the Report on the affairs of the Indians in Canada, a foundational document in the establishment of the Canadian Indian residential school system.
Sir Rawson William Rawson
KCMG CB
Governor of Barbados and the Windward Islands
In office
1868–1875
Preceded by: Sir James Walker
Succeeded by: Sanford Freeling (acting)
Governor of the Bahamas
In office
1864–1869
Preceded by: Charles John Bayley
Succeeded by: Sir James Walker
Colonial Secretary for the Cape Colony
In office
9 May 1854 – 21 July 1864
Governors: Sir George Grey,
Sir Philip Edmond Wodehouse
Succeeded by: Richard Southey
Personal details
Born: 8 September 1812, London, United Kingdom
Died: 20 November 1899 (aged 87), London, United Kingdom
Spouse: Mary-Anne Ward (m. 1849)
Children: 8, including Herbert Rawson and William Rawson
Parents
• Sir William Rawson (father)
• Jane Rawson (mother)
Education: Eton College
Occupation: British government official and statistician
Early life and Board of Trade
Rawson Rawson was born in 1812, the son of the noted oculist Sir William Adams Rawson (1783–1827) and Jane Eliza Rawson (died 1844), daughter of Colonel George Rawson of Belmont House, County Wicklow, MP for Armagh, and his wife Mary Bowes Benson. His father, a son of Henry Adams, a native of Morwenstow in Cornwall, had originally borne the surname Adams, but changed his name to Rawson in 1825 to commemorate his wife's father, and also gave it as a first name to his son.
Rawson was educated at Eton and entered the Board of Trade at the age of seventeen. He served as private secretary to three successive vice-presidents of the Board, Charles Poulett Thomson, Alexander Baring and William Ewart Gladstone.
Colonial service (1842–1875)
In 1842, having served Gladstone for one year he was appointed Civil Secretary to the then Governor-General of Canada Charles Bagot. The same year, he was appointed by Bagot – along with John Davidson and William Hepburn – as commissioner for a report regarding government policies and expenditures related to Indigenous peoples in Canada East and Canada West. Completed in 1844, the final report, titled the Report on the affairs of the Indians in Canada, included a call for the introduction of industrial schools to address the noted failure of day schools to effectively keep Indigenous children from the influence of their parents. The report is regarded as a foundational document in the rationale for establishing the Canadian Indian residential school system.[2][3]: 12–17 In 1846, following his work on the report, Rawson was appointed treasurer and paymaster-general to Mauritius.[4]
In 1854 he became colonial secretary of the Cape of Good Hope,[5] which had just formed its first locally elected parliament. Soon after accepting this post he was awarded a CB, and he attained considerable local fame for his elaborate dress of lace collars, cuffs and buttons. Whilst in the Cape, he was deeply involved in the study of ferns and other plants, in the establishment of the South African Museum, and in the details of parliamentary procedure. However, his abilities as a financier were repeatedly questioned, as the Cape government became severely indebted and eventually entered a recession. The parliamentary writer Richard William Murray records that in both Mauritius and the Cape Colony, Rawson left the state "as nearly bankrupt as it is possible for a British dependency to be". Rawson was also notable for being among the government officials who supported the early movement for "responsible government" in the Cape, and therefore supported the handing over of power to a locally elected executive to replace imperial officials like himself. He left the post on 21 July 1864, succeeded by Sir Richard Southey.[6][7][8]
His next post was the governorship of the Bahamas in July 1864,[9] and he was subsequently promoted to the governorship of the Windward Islands and received a KCMG. He retired from public office in 1875. He was elected to the American Philosophical Society the year before, in 1874.[10]
Statistical Society and later life
He was president of the Statistical Society (now the Royal Statistical Society) from 1884 to 1886, an organisation of which he was a staunch supporter. He had originally joined the Society in March 1835, and briefly held the post of editor of the Society's Journal, from 1837 to 1842.
On his retirement from public office, he was re-elected to the Society's Council in 1876 and remained a member until his death. It was largely due to Rawson's efforts that the Society received its charter of incorporation in 1887. He was also the founding president of the International Statistical Institute.
Family
In 1849 Rawson married Sophia Mary-Anne Ward, daughter of the Reverend Henry Ward, vicar of Killinchy, County Down, and sister of the New Zealand-based politician Crosbie Ward. They had eight children, including Herbert Rawson (1852–1924) and William Rawson (1854–1932).
References
1. 'RAWSON, Sir Rawson William', Who Was Who, A & C Black, an imprint of Bloomsbury Publishing plc, 1920–2008; online edn, Oxford University Press, Dec 2007 accessed 23 July 2013
2. Leslie, John (1982). "The Bagot Commission: Developing a Corporate Memory for the Indian Department" (PDF). Historical Papers. 17 (1): 31–52. doi:10.7202/030883ar. Retrieved 7 September 2017.
3. Milloy, John S. (1999). A National Crime: The Canadian Government and the Residential School System 1879–1986. University of Manitoba Press. ISBN 978-0-88755-646-3.
4. Obituary, The Times, 22 November 1899, p.6. Available online at The Times Digital Archive (subscription required). Retrieved 31 July 2020.
5. The London Gazette. Issue 21530 (1854) pp. 785
6. JL. McCracken: The Cape Parliament. Clarendon Press: Oxford. 1967.
7. "S2A3 Biographical Database of Southern African Science".
8. R. Kilpin: The Old Cape House. p.72.
9. The London Gazette. Issue 22912 (1864), pp. 5371
10. "APS Member History". search.amphilsoc.org. Retrieved 5 May 2021.
11. International Plant Names Index. Rawson.
• Obituary in Journal of the Royal Statistical Society, LXII (1899), 677–679.
Dijen K. Ray-Chaudhuri
Dwijendra Kumar Ray-Chaudhuri (born November 1, 1933) is a professor emeritus at Ohio State University. He and his student R. M. Wilson together solved Kirkman's schoolgirl problem in 1968,[1] which contributed to developments in design theory.
D. K. Ray-Chaudhuri
Born: November 1, 1933
Alma mater: Rajabazar Science College (University of Calcutta); University of North Carolina at Chapel Hill
Known for: BCH code; Kirkman's schoolgirl problem
Awards: Euler Medal (1999)
Fields: Combinatorics
Institutions: Ohio State University
Doctoral advisor: Raj Chandra Bose
He received his M.Sc. (1956) in mathematics from Rajabazar Science College, University of Calcutta, and his Ph.D. in combinatorics (1959) from the University of North Carolina at Chapel Hill. He served as a consultant at Cornell Medicine and Sloan Kettering, as professor and chairman of the Department of Mathematics at Ohio State University, and as a visiting professor at the University of Göttingen and the University of Erlangen in Germany, the University of London, and the Tata Institute of Fundamental Research in Mumbai.
He is best known for his work in design theory and the theory of error-correcting codes, in which the class of BCH codes is partly named after him and his Ph.D. advisor Bose.[2] Ray-Chaudhuri is the recipient of the Euler Medal by the Institute of Combinatorics and its Applications for his career contributions to combinatorics. In 2000, a festschrift appeared on the occasion of his 65th birthday.[3] In 2012 he became a fellow of the American Mathematical Society.[4]
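The resolvable designs at the heart of Kirkman's schoolgirl problem can be checked mechanically. As an illustrative sketch (not drawn from Ray-Chaudhuri and Wilson's paper), the following Python builds the smallest Kirkman-style triple system, KTS(9), from the lines of the affine plane AG(2, 3) and verifies the two defining conditions: each "day" partitions the 9 points into triples, and every pair of points shares a triple exactly once.

```python
from itertools import combinations

# Points are pairs (x, y) over Z_3; the 12 lines of the affine plane AG(2, 3)
# form a Kirkman triple system KTS(9): 4 parallel classes ("days"), each
# partitioning the 9 points into triples, with every pair together exactly once.
points = [(x, y) for x in range(3) for y in range(3)]

days = []
# For each slope m, the lines y = m*x + c (c in Z_3) form one parallel class.
for m in range(3):
    days.append([frozenset((x, (m * x + c) % 3) for x in range(3)) for c in range(3)])
# The vertical lines x = c form the fourth parallel class.
days.append([frozenset((c, y) for y in range(3)) for c in range(3)])

# Each day is a partition of all 9 points into triples.
for day in days:
    assert sorted(p for triple in day for p in triple) == sorted(points)

# Every pair of points shares a triple on exactly one day.
pair_counts = {pair: 0 for pair in combinations(points, 2)}
for day in days:
    for triple in day:
        for pair in combinations(sorted(triple), 2):
            pair_counts[pair] += 1
assert all(count == 1 for count in pair_counts.values())
print("KTS(9): 4 days, every pair meets exactly once")
```

The original schoolgirl problem is the 15-point case, KTS(15); the same two checks apply there with 7 days of 5 triples each.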
Honors, Awards, and Fellowships
• Senior U.S. Scientist Award of the Humboldt Foundation of Germany
• Distinguished Senior Research Award from Ohio State University
• President for Forum in New Delhi
• Foundation Fellow of the ICA
• Euler Medal of ICA.[5]
• Fellow of the American Mathematical Society.[6]
Selected publications
• R. C. Bose and D. K. Ray-Chaudhuri: On a class of error correcting binary group codes. Information and Control 3(1): 68–79 (March 1960).
• C. T. Abraham, S. P. Ghosh and D. K. Ray-Chaudhuri: File organization schemes based on finite geometries. Information and Control, 1968.
• D. K. Ray-Chaudhuri and R. M. Wilson: Solution of Kirkman's schoolgirl problem. Proceedings of Symposia in Pure Mathematics, 1971.
• D. K. Ray-Chaudhuri and R. M. Wilson: On t-designs. Osaka Journal of Mathematics, 1975.
References
1. "DijenCV" (PDF). people.math.osu.edu. Retrieved 2020-03-03.
2. Mathematics Genealogy Project
3. Codes and Designs: Proceedings of a Conference Honoring Professor Dijen K. Ray-Chaudhuri on the Occasion of His 65th Birthday (The Ohio State University, May 18–21, 2000). Editors: K.T. Arasu and Ákos Seress. Berlin, New York: Walter de Gruyter, 2002. ISBN 978-3-11-017396-3. doi:10.1515/9783110198119
4. List of Fellows of the American Mathematical Society, retrieved 2013-06-09.
5. "DijenCV" (PDF). OSU. Retrieved 2020-03-03.
6. "Dijen K. Ray-Chaudhuri". WIKIDATA. Retrieved 2020-03-03.
End (graph theory)
In the mathematics of infinite graphs, an end of a graph represents, intuitively, a direction in which the graph extends to infinity. Ends may be formalized mathematically as equivalence classes of infinite paths, as havens describing strategies for pursuit–evasion games on the graph, or (in the case of locally finite graphs) as topological ends of topological spaces associated with the graph.
Ends of graphs may be used (via Cayley graphs) to define ends of finitely generated groups. Finitely generated infinite groups have one, two, or infinitely many ends, and the Stallings theorem about ends of groups provides a decomposition for groups with more than one end.
Definition and characterization
Ends of graphs were defined by Rudolf Halin (1964) in terms of equivalence classes of infinite paths.[1] A ray in an infinite graph is a semi-infinite simple path; that is, it is an infinite sequence of vertices $v_{0},v_{1},v_{2},\dots $ in which each vertex appears at most once in the sequence and each two consecutive vertices in the sequence are the two endpoints of an edge in the graph. According to Halin's definition, two rays $r_{0}$ and $r_{1}$ are equivalent if there is a ray $r_{2}$ (which may equal one of the two given rays) that contains infinitely many of the vertices in each of $r_{0}$ and $r_{1}$. This is an equivalence relation: each ray is equivalent to itself, the definition is symmetric with regard to the ordering of the two rays, and it can be shown to be transitive. Therefore, it partitions the set of all rays into equivalence classes, and Halin defined an end as one of these equivalence classes.[2]
An alternative definition of the same equivalence relation has also been used: two rays $r_{0}$ and $r_{1}$ are equivalent if there is no finite set $X$ of vertices that separates infinitely many vertices of $r_{0}$ from infinitely many vertices of $r_{1}$.[3] This is equivalent to Halin's definition: if the ray $r_{2}$ from Halin's definition exists, then any separator must contain infinitely many points of $r_{2}$ and therefore cannot be finite, and conversely if $r_{2}$ does not exist then a path that alternates as many times as possible between $r_{0}$ and $r_{1}$ must form the desired finite separator.
Ends also have a more concrete characterization in terms of havens, functions that describe evasion strategies for pursuit–evasion games on a graph $G$.[4] In the game in question, a robber is trying to evade a set of policemen by moving from vertex to vertex along the edges of $G$. The police have helicopters and therefore do not need to follow the edges; however, the robber can see the police coming and can choose where to move next before the helicopters land. A haven is a function $\beta $ that maps each set $X$ of police locations to one of the connected components of the subgraph formed by deleting $X$; a robber can evade the police by moving in each round of the game to a vertex within this component. Havens must satisfy a consistency property (corresponding to the requirement that the robber cannot move through vertices on which police have already landed): if $X$ is a subset of $Y$, and both $X$ and $Y$ are valid sets of locations for the given set of police, then $\beta (X)$ must be a superset of $\beta (Y)$. A haven has order $k$ if the collection of police locations for which it provides an escape strategy includes all subsets of fewer than $k$ vertices in the graph; in particular, it has order $\aleph _{0}$ (the smallest aleph number) if it maps every finite subset $X$ of vertices to a component of $G\setminus X$. Every ray in $G$ corresponds to a haven of order $\aleph _{0}$, namely, the function $\beta $ that maps every finite set $X$ to the unique component of $G\setminus X$ that contains infinitely many vertices of the ray. Conversely, every haven of order $\aleph _{0}$ can be defined in this way by a ray.[5] Two rays are equivalent if and only if they define the same haven, so the ends of a graph are in one-to-one correspondence with its havens of order $\aleph _{0}$.[4]
Examples
If the infinite graph $G$ is itself a ray, then it has infinitely many ray subgraphs, one starting from each vertex of $G$. However, all of these rays are equivalent to each other, so $G$ only has one end.
If $G$ is a forest (that is, a graph with no finite cycles), then the intersection of any two rays is either a path or a ray; two rays are equivalent if and only if their intersection is a ray. If a base vertex is chosen in each connected component of $G$, then each end of $G$ contains a unique ray starting from one of the base vertices, so the ends may be placed in one-to-one correspondence with these canonical rays. Every countable graph $G$ has a spanning forest with the same set of ends as $G$.[6] However, there exist uncountably infinite graphs with only one end in which every spanning tree has infinitely many ends.[7]
If $G$ is an infinite grid graph, then it has many rays, and arbitrarily large sets of vertex-disjoint rays. However, it has only one end. This may be seen most easily using the characterization of ends in terms of havens: the removal of any finite set of vertices leaves exactly one infinite connected component, so there is only one haven (the one that maps each finite set to the unique infinite connected component).
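The haven argument for the grid can be probed on a finite truncation. The sketch below (pure Python, an illustration rather than a proof) deletes a small vertex set from an $n\times n$ patch of the grid and checks that exactly one component is large enough to be the shadow of an infinite component; here the deleted set is chosen to trap one vertex in a finite fragment.

```python
from collections import deque

# Finite n x n truncation of the infinite grid graph Z^2 (4-adjacency).
n = 30
vertices = {(x, y) for x in range(n) for y in range(n)}
# Deleting the four grid-neighbours of (15, 15) traps it in a tiny component.
deleted = {(14, 15), (16, 15), (15, 14), (15, 16)}
remaining = vertices - deleted

def components(verts):
    """Connected components of the induced subgraph on `verts` (4-adjacency)."""
    unseen, comps = set(verts), []
    while unseen:
        start = unseen.pop()
        comp, queue = {start}, deque([start])
        while queue:
            x, y = queue.popleft()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in unseen:
                    unseen.remove(nb)
                    comp.add(nb)
                    queue.append(nb)
        comps.append(comp)
    return comps

comps = components(remaining)
large = [c for c in comps if len(c) > len(deleted)]
assert len(large) == 1          # only one component can be infinite in the full grid
assert {(15, 15)} in comps      # the trapped vertex is a finite fragment
```

In the infinite grid, only the large component continues to infinity, so every finite deletion selects the same end.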
Relation to topological ends
In point-set topology, there is a concept of an end that is similar to, but not quite the same as, the concept of an end in graph theory, dating back much earlier to Freudenthal (1931). If a topological space can be covered by a nested sequence of compact sets $\kappa _{0}\subset \kappa _{1}\subset \kappa _{2}\dots $, then an end of the space is a sequence of components $U_{0}\supset U_{1}\supset U_{2}\dots $ of the complements of the compact sets. This definition does not depend on the choice of the compact sets: the ends defined by one such choice may be placed in one-to-one correspondence with the ends defined by any other choice.
An infinite graph $G$ may be made into a topological space in two different but related ways:
• Replacing each vertex of the graph by a point and each edge of the graph by an open unit interval produces a Hausdorff space from the graph in which a set $S$ is defined to be open whenever each intersection of $S$ with an edge of the graph is an open subset of the unit interval.
• Replacing each vertex of the graph by a point and each edge of the graph by a point produces a non-Hausdorff space in which the open sets are the sets $S$ with the property that, if a vertex $v$ of $G$ belongs to $S$, then so does every edge having $v$ as one of its endpoints.
In either case, every finite subgraph of $G$ corresponds to a compact subspace of the topological space, and every compact subspace corresponds to a finite subgraph together with, in the Hausdorff case, finitely many compact proper subsets of edges. Thus, a graph may be covered by a nested sequence of compact sets if and only if it is locally finite, having a finite number of edges at every vertex.
If a graph $G$ is connected and locally finite, then it has a compact cover in which the set $\kappa _{i}$ is the set of vertices at distance at most $i$ from some arbitrarily chosen starting vertex. In this case any haven $\beta $ defines an end of the topological space in which $U_{i}=\beta (\kappa _{i})$. And conversely, if $U_{0}\supset U_{1}\supset U_{2}\dots $ is an end of the topological space defined from $G$, it defines a haven in which $\beta (X)$ is the component containing $U_{i}$, where $i$ is any number large enough that $\kappa _{i}$ contains $X$. Thus, for connected and locally finite graphs, the topological ends are in one-to-one correspondence with the graph-theoretic ends.[8]
For graphs that may not be locally finite, it is still possible to define a topological space from the graph and its ends. This space can be represented as a metric space if and only if the graph has a normal spanning tree, a rooted spanning tree such that each graph edge connects an ancestor-descendant pair. If a normal spanning tree exists, it has the same set of ends as the given graph: each end of the graph must contain exactly one infinite path in the tree.[9]
Special kinds of ends
Free ends
An end $E$ of a graph $G$ is defined to be a free end if there is a finite set $X$ of vertices with the property that $X$ separates $E$ from all other ends of the graph. (That is, in terms of havens, $\beta _{E}(X)$ is disjoint from $\beta _{D}(X)$ for every other end $D$.) In a graph with finitely many ends, every end must be free. Halin (1964) proves that, if $G$ has infinitely many ends, then either there exists an end that is not free, or there exists an infinite family of rays that share a common starting vertex and are otherwise disjoint from each other.
Thick ends
A thick end of a graph $G$ is an end that contains infinitely many pairwise-disjoint rays. Halin's grid theorem characterizes the graphs that contain thick ends: they are exactly the graphs that have a subdivision of the hexagonal tiling as a subgraph.[10]
Special kinds of graphs
Symmetric and almost-symmetric graphs
Mohar (1991) defines a connected locally finite graph to be "almost symmetric" if there exist a vertex $v$ and a number $D$ such that, for every other vertex $w$, there is an automorphism of the graph for which the image of $v$ is within distance $D$ of $w$; equivalently, a connected locally finite graph is almost symmetric if its automorphism group has finitely many orbits. As he shows, for every connected locally finite almost-symmetric graph, the number of ends is either at most two or uncountable; if it is uncountable, the ends have the topology of a Cantor set. Additionally, Mohar shows that the number of ends controls the Cheeger constant
$h=\inf \left\{{\frac {|\partial V|}{|V|}}\right\},$
where $V$ ranges over all finite nonempty sets of vertices of the graph and where $\partial V$ denotes the set of edges with one endpoint in $V$. For almost-symmetric graphs with uncountably many ends, $h>0$; however, for almost-symmetric graphs with only two ends, $h=0$.
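As a worked illustration of the two regimes (an elementary sketch, not Mohar's argument): in the doubly infinite path, taking $V$ to be an interval of $n$ consecutive vertices gives $|\partial V|=2$, so the ratio $2/n$ tends to 0; in the 3-regular tree, an $n$-vertex set spans at most $n-1$ edges, so at least $3n-2(n-1)=n+2$ edges leave it, keeping the ratio above 1.

```python
def path_ratio(n):
    """|dV| / |V| for V an interval of n consecutive vertices of the doubly
    infinite path: exactly one boundary edge leaves each endpoint of V."""
    return 2 / n

def tree_ratio_lower_bound(n):
    """Lower bound on |dV| / |V| for any n-vertex set V in the 3-regular tree:
    V spans at most n - 1 edges, so at least 3n - 2(n - 1) = n + 2 of the
    3n edge-endpoints inside V belong to edges leaving V."""
    return (n + 2) / n

# Two ends: the infimum over growing intervals is 0, so h = 0.
assert [path_ratio(n) for n in (10, 100, 1000)] == [0.2, 0.02, 0.002]
# Uncountably many ends (3-regular tree): the ratio stays above 1, so h > 0.
assert all(tree_ratio_lower_bound(n) > 1 for n in (1, 10, 100))
```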
Cayley graphs
Every group and a set of generators for the group determine a Cayley graph, a graph whose vertices are the group elements and whose edges are the pairs of elements $(x,gx)$ where $g$ is one of the generators. In the case of a finitely generated group, the ends of the group are defined to be the ends of the Cayley graph for the finite set of generators; this definition is invariant under the choice of generators, in the sense that if two different finite sets of generators are chosen, the ends of the two Cayley graphs are in one-to-one correspondence with each other.
For instance, every free group has a Cayley graph (for its free generators) that is a tree. The free group on one generator has a doubly infinite path as its Cayley graph, with two ends. Every other free group has infinitely many ends.
Every finitely generated infinite group has either 1, 2, or infinitely many ends, and the Stallings theorem about ends of groups provides a decomposition of groups with more than one end.[11] In particular:
1. A finitely generated infinite group has 2 ends if and only if it has a cyclic subgroup of finite index.
2. A finitely generated infinite group has infinitely many ends if and only if it is either a nontrivial free product with amalgamation over a finite subgroup or an HNN extension over a finite subgroup.
3. All other finitely generated infinite groups have exactly one end.
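The "infinitely many ends" case can be made concrete on a finite truncation of the Cayley graph of the free group $F_2$, which is a 4-regular tree. The sketch below deletes the ball of radius $r$ around the identity and counts the resulting components; because the graph is a tree, the count in a deep enough truncation equals the count in the infinite graph, namely $4\cdot 3^{r}$, which grows without bound.

```python
GENS = "aAbB"
inverse = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduced_words(max_len):
    """All freely reduced words over {a, a^-1, b, b^-1} up to length max_len."""
    words, frontier = [""], [""]
    for _ in range(max_len):
        frontier = [w + g for w in frontier for g in GENS
                    if not w or inverse[w[-1]] != g]
        words.extend(frontier)
    return words

def components_outside_ball(max_len, r):
    """Number of components of the Cayley graph of F_2 (truncated at word
    length max_len) after deleting the ball of radius r around the identity."""
    verts = {w for w in reduced_words(max_len) if len(w) > r}
    seen, count = set(), 0
    for start in verts:
        if start in seen:
            continue
        count += 1
        seen.add(start)
        stack = [start]
        while stack:
            w = stack.pop()
            # Neighbours of w in the tree: append a non-cancelling generator,
            # or drop the last letter (the edge back toward the identity).
            nbrs = [w + g for g in GENS if inverse[w[-1]] != g] + [w[:-1]]
            for v in nbrs:
                if v in verts and v not in seen:
                    seen.add(v)
                    stack.append(v)
    return count

# Deleting the radius-r ball leaves 4 * 3^r components, one per boundary word:
# the component count is unbounded, so F_2 has infinitely many ends.
assert [components_outside_ball(6, r) for r in (0, 1, 2)] == [4, 12, 36]
```

By contrast, the same computation on the Cayley graph of $\mathbb{Z}$ (a doubly infinite path) always yields exactly 2 components, matching its 2 ends.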
Notes
1. However, as Krön & Möller (2008) point out, ends of graphs were already considered by Freudenthal (1945).
2. Halin (1964).
3. E.g., this is the form of the equivalence relation used by Diestel & Kühn (2003).
4. The haven nomenclature, and the fact that two rays define the same haven if and only if they are equivalent, is due to Robertson, Seymour & Thomas (1991). Diestel & Kühn (2003) proved that every haven comes from an end, completing the bijection between ends and havens, using a different nomenclature in which they called havens "directions".
5. The proof by Diestel & Kühn (2003) that every haven can be defined by a ray is nontrivial and involves two cases. If the set
$S=\bigcap _{X}\left(\beta (X)\cup X\right)$
(where $X$ ranges over all finite sets of vertices) is infinite, then there exists a ray that passes through infinitely many vertices of $S$, which necessarily determines $\beta $. On the other hand, if $S$ is finite, then Diestel & Kühn (2003) show that in this case there exists a sequence of finite sets $X_{i}$ that separate the end from all points whose distance from an arbitrarily chosen starting point in $G\setminus S$ is $i$. In this case, the haven is defined by any ray that is followed by a robber using the haven to escape police who land at set $X_{i}$ in round $i$ of the pursuit–evasion game.
6. More precisely, in the original formulation of this result by Halin (1964) in which ends are defined as equivalence classes of rays, every equivalence class of rays of $G$ contains a unique nonempty equivalence class of rays of the spanning forest. In terms of havens, there is a one-to-one correspondence of havens of order $\aleph _{0}$ between $G$ and its spanning tree $T$ for which $\beta _{T}(X)\subset \beta _{G}(X)$ for every finite set $X$ and every corresponding pair of havens $\beta _{T}$ and $\beta _{G}$.
7. Seymour & Thomas (1991); Thomassen (1992); Diestel (1992).
8. Diestel & Kühn (2003).
9. Diestel (2006).
10. Halin (1965); Diestel (2004).
11. Stallings (1968, 1971).
References
• Diestel, Reinhard (1992), "The end structure of a graph: recent results and open problems", Discrete Mathematics, 100 (1–3): 313–327, doi:10.1016/0012-365X(92)90650-5, MR 1172358
• Diestel, Reinhard (2004), "A short proof of Halin's grid theorem", Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 74: 237–242, doi:10.1007/BF02941538, MR 2112834
• Diestel, Reinhard (2006), "End spaces and spanning trees", Journal of Combinatorial Theory, Series B, 96 (6): 846–854, doi:10.1016/j.jctb.2006.02.010, MR 2274079
• Diestel, Reinhard; Kühn, Daniela (2003), "Graph-theoretical versus topological ends of graphs", Journal of Combinatorial Theory, Series B, 87 (1): 197–206, doi:10.1016/S0095-8956(02)00034-5, MR 1967888
• Freudenthal, Hans (1931), "Über die Enden topologischer Räume und Gruppen", Mathematische Zeitschrift, 33: 692–713, doi:10.1007/BF01174375
• Freudenthal, Hans (1945), "Über die Enden diskreter Räume und Gruppen", Commentarii Mathematici Helvetici, 17: 1–38, doi:10.1007/bf02566233, MR 0012214
• Halin, Rudolf (1964), "Über unendliche Wege in Graphen", Mathematische Annalen, 157 (2): 125–137, doi:10.1007/bf01362670, hdl:10338.dmlcz/102294, MR 0170340
• Halin, Rudolf (1965), "Über die Maximalzahl fremder unendlicher Wege in Graphen", Mathematische Nachrichten, 30 (1–2): 63–85, doi:10.1002/mana.19650300106, MR 0190031
• Krön, Bernhard; Möller, Rögnvaldur G. (2008), "Metric ends, fibers and automorphisms of graphs" (PDF), Mathematische Nachrichten, 281 (1): 62–74, doi:10.1002/mana.200510587, MR 2376468
• Mohar, Bojan (1991), "Some relations between analytic and geometric properties of infinite graphs" (PDF), Discrete Mathematics, 95 (1–3): 193–219, doi:10.1016/0012-365X(91)90337-2, MR 1141939
• Robertson, Neil; Seymour, Paul; Thomas, Robin (1991), "Excluding infinite minors", Discrete Mathematics, 95 (1–3): 303–319, doi:10.1016/0012-365X(91)90343-Z, MR 1141945
• Seymour, Paul; Thomas, Robin (1991), "An end-faithful spanning tree counterexample", Proceedings of the American Mathematical Society, 113 (4): 1163–1171, doi:10.2307/2048796, JSTOR 2048796, MR 1045600
• Stallings, John R. (1968), "On torsion-free groups with infinitely many ends", Annals of Mathematics, Second Series, 88 (2): 312–334, doi:10.2307/1970577, JSTOR 1970577, MR 0228573
• Stallings, John R. (1971), Group theory and three-dimensional manifolds: A James K. Whittemore Lecture in Mathematics given at Yale University, 1969, Yale Mathematical Monographs, vol. 4, New Haven, Conn.: Yale University Press, MR 0415622
• Thomassen, Carsten (1992), "Infinite connected graphs with no end-preserving spanning trees", Journal of Combinatorial Theory, Series B, 54 (2): 322–324, doi:10.1016/0095-8956(92)90059-7, hdl:10338.dmlcz/127625, MR 1152455
Projective Hilbert space
In mathematics and the foundations of quantum mechanics, the projective Hilbert space $P(H)$ of a complex Hilbert space $H$ is the set of equivalence classes of non-zero vectors $v$ in $H$, for the relation $\sim $ on $H$ given by
$w\sim v$ if and only if $v=\lambda w$ for some non-zero complex number $\lambda $.
The equivalence classes of $v$ for the relation $\sim $ are also called rays or projective rays.
This is the usual construction of projectivization, applied to a complex Hilbert space.
Overview
The physical significance of the projective Hilbert space is that in quantum theory, the wave functions $\psi $ and $\lambda \psi $ represent the same physical state, for any $\lambda \neq 0$. It is conventional to choose a $\psi $ from the ray so that it has unit norm, $\langle \psi |\psi \rangle =1$, in which case it is called a normalized wavefunction. The unit norm constraint does not completely determine $\psi $ within the ray, since $\psi $ could be multiplied by any $\lambda $ with absolute value 1 (the U(1) action) and retain its normalization. Such a $\lambda $ can be written as $\lambda =e^{i\phi }$ with $\phi $ called the global phase.
Rays that differ by such a $\lambda $ correspond to the same state (cf. quantum state (algebraic definition), given a C*-algebra of observables and a representation on $H$). No measurement can recover the phase of a ray; it is not observable. One says that $U(1)$ is a gauge group of the first kind.
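The unobservability of the global phase can be checked directly: every expectation value $\langle \psi |A|\psi \rangle $ is unchanged under $\psi \mapsto e^{i\phi }\psi $. A minimal pure-Python sketch, using an arbitrarily chosen qubit state and phase (both are illustrative, not drawn from the text):

```python
import cmath

# A normalized qubit state and the same ray member with a global phase attached.
psi = [3 / 5, 4j / 5]                  # |3/5|^2 + |4/5|^2 = 1
phase = cmath.exp(1j * 0.7)            # arbitrary lambda with |lambda| = 1
psi_phase = [phase * c for c in psi]

def expectation(state, observable):
    """<psi| A |psi> for a 2x2 observable given as nested lists."""
    a_psi = [sum(observable[i][j] * state[j] for j in range(2)) for i in range(2)]
    return sum(state[i].conjugate() * a_psi[i] for i in range(2))

pauli_z = [[1, 0], [0, -1]]
pauli_x = [[0, 1], [1, 0]]

for observable in (pauli_z, pauli_x):
    e1 = expectation(psi, observable)
    e2 = expectation(psi_phase, observable)
    # The phase cancels against its conjugate: no measurement separates the two.
    assert abs(e1 - e2) < 1e-12
```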
If $H$ is an irreducible representation of the algebra of observables, then the rays induce pure states. Convex linear combinations of rays naturally give rise to density matrices, which (still in the case of an irreducible representation) correspond to mixed states.
The same construction can be applied also to real Hilbert spaces.
In the case $H$ is finite-dimensional, that is, $H=H_{n}$, the set of projective rays may be treated just as any other projective space; it is a homogeneous space for a unitary group $\mathrm {U} (n)$ or orthogonal group $\mathrm {O} (n)$, in the complex and real cases respectively. For the finite-dimensional complex Hilbert space, one writes
$P(H_{n})=\mathbb {C} P^{n-1}$
so that, for example, the projectivization of two-dimensional complex Hilbert space (the space describing one qubit) is the complex projective line $\mathbb {C} P^{1}$. This is known as the Bloch sphere. See Hopf fibration for details of the projectivization construction in this case.
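The identification of $\mathbb {C} P^{1}$ with the Bloch sphere can be made concrete: mapping a normalized qubit state to its Pauli expectation values lands on the unit sphere and is constant on rays. A small illustrative sketch (the particular state and phase below are arbitrary choices):

```python
import cmath
import math

def bloch_vector(alpha, beta):
    """Map a normalized qubit state alpha|0> + beta|1> to its Bloch vector,
    the point of CP^1 ~ S^2 representing the ray through the state."""
    x = 2 * (alpha.conjugate() * beta).real
    y = 2 * (alpha.conjugate() * beta).imag
    z = abs(alpha) ** 2 - abs(beta) ** 2
    return (x, y, z)

alpha, beta = 3 / 5, 4j / 5
v = bloch_vector(alpha, beta)
# Pure states land exactly on the unit sphere.
assert abs(math.sqrt(sum(c * c for c in v)) - 1) < 1e-12

# A global phase moves the vector in H but not the point on the sphere.
phase = cmath.exp(1j * 1.3)
w = bloch_vector(phase * alpha, phase * beta)
assert all(abs(a - b) < 1e-12 for a, b in zip(v, w))
```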
Complex projective Hilbert space may be given a natural metric, the Fubini–Study metric, derived from the Hilbert space's norm.
Product
The Cartesian product of projective Hilbert spaces is not a projective space. The Segre mapping is an embedding of the Cartesian product of two projective spaces into the projective space associated to the tensor product of the two Hilbert spaces, given by $P(H)\times P(H')\to P(H\otimes H'),([x],[y])\mapsto [x\otimes y]$. In quantum theory, it describes how to make states of the composite system from states of its constituents. It is only an embedding, not a surjection; most of the tensor product space does not lie in its image, and the states outside the image are the entangled states.
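For two qubits, membership in the image of the Segre embedding is easy to test: arranging the four amplitudes $c_{ij}$ of $\sum c_{ij}|ij\rangle $ into a $2\times 2$ matrix, the product states are exactly those whose coefficient matrix has rank 1, i.e. zero determinant. An illustrative sketch (the helper name `segre_det` is ours, not standard):

```python
import math

def segre_det(state):
    """Determinant of the 2x2 coefficient matrix M[i][j] = c_ij of a two-qubit
    state (c00, c01, c10, c11). The state factors as a tensor product -- i.e.
    lies in the image of the Segre map -- exactly when this determinant is 0."""
    c00, c01, c10, c11 = state
    return c00 * c11 - c01 * c10

# Product state |+> (x) |0> = (1/sqrt2, 0, 1/sqrt2, 0): inside the Segre image.
product = (1 / math.sqrt(2), 0, 1 / math.sqrt(2), 0)
assert abs(segre_det(product)) < 1e-12

# Bell state (|00> + |11>)/sqrt2: determinant 1/2 != 0, so no factorization
# exists -- the state is entangled and lies outside the Segre image.
bell = (1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2))
assert abs(segre_det(bell) - 0.5) < 1e-12
```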
See also
• Projective space, for the concept in general
• Complex projective space
• Projective representation
References
Ashtekar, Abhay; Schilling, Troy A. (1997). "Geometrical Formulation of Quantum Mechanics". arXiv:gr-qc/9706069.
|
Wikipedia
|
Ray Kunze
Ray Alden Kunze (March 7, 1928 – May 21, 2014) was an American mathematician who chaired the mathematics departments at the University of California, Irvine and the University of Georgia.[1][2] His mathematical research concerned the representation theory of groups and noncommutative harmonic analysis.[2]
Kunze was born in Des Moines, Iowa and grew up near Milwaukee, Wisconsin.[1][2] He began his undergraduate studies at Denison University but transferred to the University of Chicago after two years, and earned bachelor's and master's degrees in mathematics.[2] After working as a military mathematical analyst,[1][2] he returned to the University of Chicago, and earned his Ph.D. in 1957 with a dissertation on Fourier transformations supervised by Irving Segal.[2][3] As well as his positions at UCI and Georgia, he also worked at the Institute for Advanced Study, Massachusetts Institute of Technology, Brandeis University, and Washington University in St. Louis.[1][2] He has over 50 academic descendants, many of them through his students Paul Sally at Brandeis and Edward N. Wilson at Washington University.[3]
With his advisor Irving Segal, Kunze was the author of the textbook Integrals and Operators (McGraw-Hill, 1968; 2nd ed., Grundlehren der Mathematischen Wissenschaften 228, Springer, 1978).[4] With Kenneth M. Hoffman he was the author of Linear Algebra (Prentice-Hall, 1961; 2nd ed., Pearson, 1971).[2][5]
In 1994, a special session on representation theory and harmonic analysis was held in honor of Kunze as part of the 889th meeting of the American Mathematical Society, and the papers from the session were published as a festschrift.[6] In 2012, Kunze was recognized as one of the inaugural fellows of the American Mathematical Society.[7]
References
1. "Ray Alden Kunze", Paid Obituaries, Los Angeles Times, July 12, 2014
2. Dedication to Representation theory and harmonic analysis, pp. ix–x.
3. Ray Kunze at the Mathematics Genealogy Project.
4. Reviews of Integrals and Operators by S. K. Berberian, 1st ed., MR0217244, and 2nd ed., MR0486380.
5. Review of Linear algebra by Leon Mirsky, MR0125849; for the 2nd ed., see MR0276251.
6. Ton-That, Tuong; Gross, Kenneth I.; Richards, Donald St. P.; Sally, Paul J., Jr. (1995), Representation theory and harmonic analysis: Papers from the conference held in honor of Ray A. Kunze at the AMS Special Session, Cincinnati, Ohio, January 12–14, 1994, Contemporary Mathematics, vol. 191, American Mathematical Society, Providence, RI, doi:10.1090/conm/191, ISBN 0-8218-0310-7, MR 1365528{{citation}}: CS1 maint: multiple names: authors list (link).
7. List of Fellows of the American Mathematical Society, retrieved 2015-01-26.
Rayleigh quotient
In mathematics, the Rayleigh quotient[1] (/ˈreɪ.li/) for a given complex Hermitian matrix $M$ and nonzero vector $x$ is defined as:[2][3]
$R(M,x)={x^{*}Mx \over x^{*}x}.$
For real matrices and vectors, the condition of being Hermitian reduces to that of being symmetric, and the conjugate transpose $x^{*}$ to the usual transpose $x'$. Note that $R(M,cx)=R(M,x)$ for any non-zero scalar $c$. Recall that a Hermitian (or real symmetric) matrix is diagonalizable with only real eigenvalues. It can be shown that, for a given matrix, the Rayleigh quotient reaches its minimum value $\lambda _{\min }$ (the smallest eigenvalue of $M$) when $x$ is $v_{\min }$ (the corresponding eigenvector).[4] Similarly, $R(M,x)\leq \lambda _{\max }$ and $R(M,v_{\max })=\lambda _{\max }$.
The Rayleigh quotient is used in the min-max theorem to get exact values of all eigenvalues. It is also used in eigenvalue algorithms (such as Rayleigh quotient iteration) to obtain an eigenvalue approximation from an eigenvector approximation.
The range of the Rayleigh quotient (for any matrix, not necessarily Hermitian) is called the numerical range and contains its spectrum. When the matrix is Hermitian, the numerical radius is equal to the spectral norm. Still in functional analysis, $\lambda _{\max }$ is known as the spectral radius. In the context of $C^{\star }$-algebras or algebraic quantum mechanics, the function that associates to $M$ the Rayleigh–Ritz quotient $R(M,x)$, for a fixed $x$ and with $M$ varying through the algebra, is referred to as a vector state of the algebra.
In quantum mechanics, the Rayleigh quotient gives the expectation value of the observable corresponding to the operator $M$ for a system whose state is given by $x$.
If we fix the complex matrix $M$, then the resulting Rayleigh quotient map (considered as a function of $x$) completely determines $M$ via the polarization identity; indeed, this remains true even if we allow $M$ to be non-Hermitian. (However, if we restrict the field of scalars to the real numbers, then the Rayleigh quotient only determines the symmetric part of $M$.)
Bounds for Hermitian M
As stated in the introduction, for any vector x, one has $R(M,x)\in \left[\lambda _{\min },\lambda _{\max }\right]$, where $\lambda _{\min },\lambda _{\max }$ are respectively the smallest and largest eigenvalues of $M$. This is immediate after observing that the Rayleigh quotient is a weighted average of eigenvalues of M:
$R(M,x)={x^{*}Mx \over x^{*}x}={\frac {\sum _{i=1}^{n}\lambda _{i}y_{i}^{2}}{\sum _{i=1}^{n}y_{i}^{2}}}$
where $(\lambda _{i},v_{i})$ is the $i$-th eigenpair after orthonormalization and $y_{i}=v_{i}^{*}x$ is the $i$th coordinate of x in the eigenbasis. It is then easy to verify that the bounds are attained at the corresponding eigenvectors $v_{\min },v_{\max }$.
The fact that the quotient is a weighted average of the eigenvalues can be used to identify the second, the third, ... largest eigenvalues. Let $\lambda _{\max }=\lambda _{1}\geq \lambda _{2}\geq \cdots \geq \lambda _{n}=\lambda _{\min }$ be the eigenvalues in decreasing order. If $x$ is constrained to be orthogonal to $v_{1}$, in which case $y_{1}=v_{1}^{*}x=0$, then $R(M,x)$ has maximum value $\lambda _{2}$, which is achieved when $x=v_{2}$.
Special case of covariance matrices
An empirical covariance matrix $M$ can be represented as the product $A'A$ of the data matrix $A$ pre-multiplied by its transpose $A'$. Being a positive semi-definite matrix, $M$ has non-negative eigenvalues, and orthogonal (or orthogonalisable) eigenvectors, which can be demonstrated as follows.
Firstly, that the eigenvalues $\lambda _{i}$ are non-negative:
${\begin{aligned}&Mv_{i}=A'Av_{i}=\lambda _{i}v_{i}\\\Rightarrow {}&v_{i}'A'Av_{i}=v_{i}'\lambda _{i}v_{i}\\\Rightarrow {}&\left\|Av_{i}\right\|^{2}=\lambda _{i}\left\|v_{i}\right\|^{2}\\\Rightarrow {}&\lambda _{i}={\frac {\left\|Av_{i}\right\|^{2}}{\left\|v_{i}\right\|^{2}}}\geq 0.\end{aligned}}$
Secondly, that the eigenvectors $v_{i}$ are orthogonal to one another:
${\begin{aligned}&Mv_{i}=\lambda _{i}v_{i}\\\Rightarrow {}&v_{j}'Mv_{i}=v_{j}'\lambda _{i}v_{i}\\\Rightarrow {}&\left(Mv_{j}\right)'v_{i}=\lambda _{i}v_{j}'v_{i}\\\Rightarrow {}&\lambda _{j}v_{j}'v_{i}=\lambda _{i}v_{j}'v_{i}\\\Rightarrow {}&\left(\lambda _{j}-\lambda _{i}\right)v_{j}'v_{i}=0\\\Rightarrow {}&v_{j}'v_{i}=0\end{aligned}}$
if the eigenvalues are different – in the case of multiplicity, the basis can be orthogonalized.
To now establish that the Rayleigh quotient is maximized by the eigenvector with the largest eigenvalue, consider decomposing an arbitrary vector $x$ on the basis of the eigenvectors $v_{i}$:
$x=\sum _{i=1}^{n}\alpha _{i}v_{i},$
where
$\alpha _{i}={\frac {x'v_{i}}{v_{i}'v_{i}}}={\frac {\langle x,v_{i}\rangle }{\left\|v_{i}\right\|^{2}}}$
is the coordinate of $x$ orthogonally projected onto $v_{i}$. Therefore, we have:
${\begin{aligned}R(M,x)&={\frac {x'A'Ax}{x'x}}\\&={\frac {{\Bigl (}\sum _{j=1}^{n}\alpha _{j}v_{j}{\Bigr )}'\left(A'A\right){\Bigl (}\sum _{i=1}^{n}\alpha _{i}v_{i}{\Bigr )}}{{\Bigl (}\sum _{j=1}^{n}\alpha _{j}v_{j}{\Bigr )}'{\Bigl (}\sum _{i=1}^{n}\alpha _{i}v_{i}{\Bigr )}}}\\&={\frac {{\Bigl (}\sum _{j=1}^{n}\alpha _{j}v_{j}{\Bigr )}'{\Bigl (}\sum _{i=1}^{n}\alpha _{i}(A'A)v_{i}{\Bigr )}}{{\Bigl (}\sum _{i=1}^{n}\alpha _{i}^{2}{v_{i}}'{v_{i}}{\Bigr )}}}\\&={\frac {{\Bigl (}\sum _{j=1}^{n}\alpha _{j}v_{j}{\Bigr )}'{\Bigl (}\sum _{i=1}^{n}\alpha _{i}\lambda _{i}v_{i}{\Bigr )}}{{\Bigl (}\sum _{i=1}^{n}\alpha _{i}^{2}\|{v_{i}}\|^{2}{\Bigr )}}}\end{aligned}}$
which, by orthonormality of the eigenvectors, becomes:
${\begin{aligned}R(M,x)&={\frac {\sum _{i=1}^{n}\alpha _{i}^{2}\lambda _{i}}{\sum _{i=1}^{n}\alpha _{i}^{2}}}\\&=\sum _{i=1}^{n}\lambda _{i}{\frac {(x'v_{i})^{2}}{(x'x)(v_{i}'v_{i})^{2}}}\\&=\sum _{i=1}^{n}\lambda _{i}{\frac {(x'v_{i})^{2}}{(x'x)}}\end{aligned}}$
The last representation establishes that the Rayleigh quotient is the sum of the squared cosines of the angles formed by the vector $x$ and each eigenvector $v_{i}$, weighted by corresponding eigenvalues.
If a vector $x$ maximizes $R(M,x)$, then any non-zero scalar multiple $kx$ also maximizes $R$, so the problem can be reduced to the Lagrange problem of maximizing $ \sum _{i=1}^{n}\alpha _{i}^{2}\lambda _{i}$ under the constraint that $ \sum _{i=1}^{n}\alpha _{i}^{2}=1$.
Define: $\beta _{i}=\alpha _{i}^{2}$. This then becomes a linear program, which always attains its maximum at one of the corners of the domain. A maximum point will have $\alpha _{1}=\pm 1$ and $\alpha _{i}=0$ for all $i>1$ (when the eigenvalues are ordered by decreasing magnitude).
Thus, the Rayleigh quotient is maximized by the eigenvector with the largest eigenvalue.
Formulation using Lagrange multipliers
Alternatively, this result can be arrived at by the method of Lagrange multipliers. The first part is to show that the quotient is constant under scaling $x\to cx$, where $c$ is a scalar:
$R(M,cx)={\frac {(cx)^{*}Mcx}{(cx)^{*}cx}}={\frac {c^{*}c}{c^{*}c}}{\frac {x^{*}Mx}{x^{*}x}}=R(M,x).$
Because of this invariance, it is sufficient to study the special case $\|x\|^{2}=x^{T}x=1$. The problem is then to find the critical points of the function
$R(M,x)=x^{\mathsf {T}}Mx,$
subject to the constraint $\|x\|^{2}=x^{T}x=1.$ In other words, it is to find the critical points of
${\mathcal {L}}(x)=x^{\mathsf {T}}Mx-\lambda \left(x^{\mathsf {T}}x-1\right),$
where $\lambda $ is a Lagrange multiplier. The stationary points of ${\mathcal {L}}(x)$ occur at
${\begin{aligned}&{\frac {d{\mathcal {L}}(x)}{dx}}=0\\\Rightarrow {}&2x^{\mathsf {T}}M-2\lambda x^{\mathsf {T}}=0\\\Rightarrow {}&2Mx-2\lambda x=0{\text{ (taking the transpose of both sides and noting that M is Hermitian)}}\\\Rightarrow {}&Mx=\lambda x\end{aligned}}$
and
$\therefore R(M,x)={\frac {x^{\mathsf {T}}Mx}{x^{\mathsf {T}}x}}=\lambda {\frac {x^{\mathsf {T}}x}{x^{\mathsf {T}}x}}=\lambda .$
Therefore, the eigenvectors $x_{1},\ldots ,x_{n}$ of $M$ are the critical points of the Rayleigh quotient and their corresponding eigenvalues $\lambda _{1},\ldots ,\lambda _{n}$ are the stationary values of ${\mathcal {L}}$. This property is the basis for principal components analysis and canonical correlation.
Use in Sturm–Liouville theory
Sturm–Liouville theory concerns the action of the linear operator
$L(y)={\frac {1}{w(x)}}\left(-{\frac {d}{dx}}\left[p(x){\frac {dy}{dx}}\right]+q(x)y\right)$
on the inner product space defined by
$\langle {y_{1},y_{2}}\rangle =\int _{a}^{b}w(x)y_{1}(x)y_{2}(x)\,dx$
of functions satisfying some specified boundary conditions at a and b. In this case the Rayleigh quotient is
${\frac {\langle {y,Ly}\rangle }{\langle {y,y}\rangle }}={\frac {\int _{a}^{b}y(x)\left(-{\frac {d}{dx}}\left[p(x){\frac {dy}{dx}}\right]+q(x)y(x)\right)dx}{\int _{a}^{b}{w(x)y(x)^{2}}dx}}.$
This is sometimes presented in an equivalent form, obtained by separating the integral in the numerator and using integration by parts:
${\begin{aligned}{\frac {\langle {y,Ly}\rangle }{\langle {y,y}\rangle }}&={\frac {\left\{\int _{a}^{b}y(x)\left(-{\frac {d}{dx}}\left[p(x)y'(x)\right]\right)dx\right\}+\left\{\int _{a}^{b}{q(x)y(x)^{2}}\,dx\right\}}{\int _{a}^{b}{w(x)y(x)^{2}}\,dx}}\\&={\frac {\left\{\left.-y(x)\left[p(x)y'(x)\right]\right|_{a}^{b}\right\}+\left\{\int _{a}^{b}y'(x)\left[p(x)y'(x)\right]\,dx\right\}+\left\{\int _{a}^{b}{q(x)y(x)^{2}}\,dx\right\}}{\int _{a}^{b}w(x)y(x)^{2}\,dx}}\\&={\frac {\left\{\left.-p(x)y(x)y'(x)\right|_{a}^{b}\right\}+\left\{\int _{a}^{b}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx\right\}}{\int _{a}^{b}{w(x)y(x)^{2}}\,dx}}.\end{aligned}}$
Generalizations
1. For a given pair (A, B) of matrices, and a given non-zero vector x, the generalized Rayleigh quotient is defined as:
$R(A,B;x):={\frac {x^{*}Ax}{x^{*}Bx}}.$
The generalized Rayleigh quotient can be reduced to the Rayleigh quotient $R(D,C^{*}x)$ through the transformation $D=C^{-1}A{C^{*}}^{-1}$, where $CC^{*}$ is the Cholesky decomposition of the Hermitian positive-definite matrix $B$.
2. For a given pair (x, y) of non-zero vectors, and a given Hermitian matrix H, the generalized Rayleigh quotient can be defined as:
$R(H;x,y):={\frac {y^{*}Hx}{\sqrt {y^{*}y\cdot x^{*}x}}}$
which coincides with R(H,x) when x = y. In quantum mechanics, this quantity is called a "matrix element" or sometimes a "transition amplitude".
See also
• Field of values
• Min-max theorem
• Rayleigh's quotient in vibrations analysis
• Dirichlet eigenvalue
References
1. Also known as the Rayleigh–Ritz ratio; named after Walther Ritz and Lord Rayleigh.
2. Horn, R. A.; Johnson, C. R. (1985). Matrix Analysis. Cambridge University Press. pp. 176–180. ISBN 0-521-30586-1.
3. Parlett, B. N. (1998). The Symmetric Eigenvalue Problem. Classics in Applied Mathematics. SIAM. ISBN 0-89871-402-8.
4. Costin, Rodica D. (2013). "Midterm notes" (PDF). Mathematics 5102 Linear Mathematics in Infinite Dimensions, lecture notes. The Ohio State University.
Further reading
• Shi Yu, Léon-Charles Tranchevent, Bart Moor, Yves Moreau, Kernel-based Data Fusion for Machine Learning: Methods and Applications in Bioinformatics and Text Mining, Ch. 2, Springer, 2011.
Rayleigh quotient iteration
Rayleigh quotient iteration is an eigenvalue algorithm which extends the idea of the inverse iteration by using the Rayleigh quotient to obtain increasingly accurate eigenvalue estimates.
Rayleigh quotient iteration is an iterative method; that is, it delivers a sequence of approximate solutions that converges to a true solution in the limit. Convergence is very rapid: in practice, no more than a few iterations are needed to obtain a reasonable approximation. For Hermitian or symmetric matrices, the algorithm converges cubically, given an initial vector that is sufficiently close to an eigenvector of the matrix being analyzed.
Algorithm
The algorithm is very similar to inverse iteration, but replaces the estimated eigenvalue at the end of each iteration with the Rayleigh quotient. Begin by choosing some value $\mu _{0}$ as an initial eigenvalue guess for the Hermitian matrix $A$; an initial vector $b_{0}$ must also be supplied as an initial eigenvector guess.
Calculate the next approximation of the eigenvector $b_{i+1}$ by
$b_{i+1}={\frac {(A-\mu _{i}I)^{-1}b_{i}}{\|(A-\mu _{i}I)^{-1}b_{i}\|}},$
where $I$ is the identity matrix, and set the next approximation of the eigenvalue to the Rayleigh quotient of the current iterate:
$\mu _{i+1}={\frac {b_{i+1}^{*}Ab_{i+1}}{b_{i+1}^{*}b_{i+1}}}.$
To compute more than one eigenvalue, the algorithm can be combined with a deflation technique.
Note that for very small problems it is beneficial to replace the matrix inverse with the adjugate, which will yield the same iteration because it is equal to the inverse up to an irrelevant scale (the inverse of the determinant, specifically). The adjugate is easier to compute explicitly than the inverse (though the inverse is easier to apply to a vector for problems that aren't small), and is more numerically sound because it remains well defined as the eigenvalue converges.
Example
Consider the matrix
$A=\left[{\begin{matrix}1&2&3\\1&2&1\\3&2&1\\\end{matrix}}\right]$
for which the exact eigenvalues are $\lambda _{1}=3+{\sqrt {5}}$, $\lambda _{2}=3-{\sqrt {5}}$ and $\lambda _{3}=-2$, with corresponding eigenvectors
$v_{1}=\left[{\begin{matrix}1\\\varphi -1\\1\\\end{matrix}}\right]$, $v_{2}=\left[{\begin{matrix}1\\-\varphi \\1\\\end{matrix}}\right]$ and $v_{3}=\left[{\begin{matrix}1\\0\\1\\\end{matrix}}\right]$.
(where $\textstyle \varphi ={\frac {1+{\sqrt {5}}}{2}}$ is the golden ratio).
The largest eigenvalue is $\lambda _{1}\approx 5.2361$ and corresponds to any eigenvector proportional to $v_{1}\approx \left[{\begin{matrix}1\\0.6180\\1\\\end{matrix}}\right].$
We begin with initial eigenvector and eigenvalue guesses of
$b_{0}=\left[{\begin{matrix}1\\1\\1\\\end{matrix}}\right],~\mu _{0}=200$.
Then, the first iteration yields
$b_{1}\approx \left[{\begin{matrix}-0.57927\\-0.57348\\-0.57927\\\end{matrix}}\right],~\mu _{1}\approx 5.3355$
the second iteration,
$b_{2}\approx \left[{\begin{matrix}0.64676\\0.40422\\0.64676\\\end{matrix}}\right],~\mu _{2}\approx 5.2418$
and the third,
$b_{3}\approx \left[{\begin{matrix}-0.64793\\-0.40045\\-0.64793\\\end{matrix}}\right],~\mu _{3}\approx 5.2361$
from which the cubic convergence is evident.
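The iterates above can be reproduced with a short script (an illustrative Python sketch; the linear solve is a hand-rolled Gaussian elimination standing in for a library call):

```python
import math

def solve(M, b):
    """Solve M z = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]   # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n + 1):
                A[r][c] -= f * A[k][c]
    z = [0.0] * n
    for k in range(n - 1, -1, -1):
        z[k] = (A[k][n] - sum(A[k][c] * z[c] for c in range(k + 1, n))) / A[k][k]
    return z

def rayleigh_iterate(A, b, mu, steps):
    n = len(b)
    for _ in range(steps):
        shifted = [[A[i][j] - (mu if i == j else 0.0) for j in range(n)]
                   for i in range(n)]
        y = solve(shifted, b)                       # inverse-iteration step
        norm = math.sqrt(sum(v * v for v in y))
        b = [v / norm for v in y]                   # normalized eigenvector estimate
        Ab = [sum(A[i][j] * b[j] for j in range(n)) for i in range(n)]
        mu = sum(bi * abi for bi, abi in zip(b, Ab))  # Rayleigh quotient update
    return b, mu

A = [[1.0, 2.0, 3.0], [1.0, 2.0, 1.0], [3.0, 2.0, 1.0]]
_, mu = rayleigh_iterate(A, [1.0, 1.0, 1.0], 200.0, 3)
print(mu)  # approaches 3 + sqrt(5) = 5.2360679...
```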
Octave implementation
The following is a simple implementation of the algorithm in Octave.
function x = rayleigh(A, epsilon, mu, x)
  x = x / norm(x);
  % the backslash operator in Octave solves a linear system
  y = (A - mu * eye(rows(A))) \ x;
  lambda = y' * x;
  % since y is approximately (A - mu*I)^(-1) x, lambda approximates 1/(lambda_true - mu);
  % no trailing semicolons on mu and err, so Octave prints the progress
  mu = mu + 1 / lambda
  err = norm(y - lambda * x) / norm(y)
  while err > epsilon
    x = y / norm(y);
    y = (A - mu * eye(rows(A))) \ x;
    lambda = y' * x;
    mu = mu + 1 / lambda
    err = norm(y - lambda * x) / norm(y)
  end
  x = y / norm(y);  % return the most recent eigenvector estimate
end
See also
• Power iteration
• Inverse iteration
References
• Lloyd N. Trefethen and David Bau, III, Numerical Linear Algebra, Society for Industrial and Applied Mathematics, 1997. ISBN 0-89871-361-7.
• Rainer Kress, "Numerical Analysis", Springer, 1991. ISBN 0-387-98408-9
Rayleigh's quotient in vibrations analysis
Rayleigh's quotient provides a quick method to estimate the natural frequency of a multi-degree-of-freedom vibration system in which the mass and stiffness matrices are known.
The eigenvalue problem for a general system of the form
$M\,{\ddot {\textbf {q}}}(t)+C\,{\dot {\textbf {q}}}(t)+K\,{\textbf {q}}(t)={\textbf {Q}}(t)$
in absence of damping and external forces reduces to
$M\,{\ddot {\textbf {q}}}(t)+K\,{\textbf {q}}(t)=0$
The previous equation can also be written as follows:
$K\,{\textbf {u}}=\lambda \,M\,{\textbf {u}}$
where $\lambda =\omega ^{2}$, in which $\omega $ represents the natural frequency, M and K are the real positive symmetric mass and stiffness matrices respectively.
For an n-degree-of-freedom system the equation has n solutions $\lambda _{m}$, ${\textbf {u}}_{m}$ that satisfy the equation
$K\,{\textbf {u}}_{m}=\lambda _{m}\,M\,{\textbf {u}}_{m}$
By multiplying both sides of the equation by ${\textbf {u}}_{m}^{T}$ and dividing by the scalar ${\textbf {u}}_{m}^{T}\,M\,{\textbf {u}}_{m}$, it is possible to express the eigenvalue problem as follows:
$\lambda _{m}=\omega _{m}^{2}={\frac {{\textbf {u}}_{m}^{T}\,K\,{\textbf {u}}_{m}}{{\textbf {u}}_{m}^{T}\,M\,{\textbf {u}}_{m}}}$
for m = 1, 2, 3, ..., n.
In the previous equation, the numerator is proportional to the potential energy while the denominator is a measure of the kinetic energy. Moreover, the equation allows us to calculate the natural frequency only if the eigenvector ${\textbf {u}}_{m}$ is known. If the modal vectors are not known, we can repeat the foregoing process with $\lambda =\omega ^{2}$ and ${\textbf {u}}$ taking the place of $\lambda _{m}=\omega _{m}^{2}$ and ${\textbf {u}}_{m}$, respectively. By doing so we obtain the scalar $R({\textbf {u}})$, known as Rayleigh's quotient:[1]
$R({\textbf {u}})=\lambda =\omega ^{2}={\frac {{\textbf {u}}^{T}\,K\,{\textbf {u}}}{{\textbf {u}}^{T}\,M\,{\textbf {u}}}}$
Therefore, Rayleigh's quotient is a scalar whose value depends on the vector ${\textbf {u}}$, and it can be calculated with good approximation for any arbitrary vector ${\textbf {u}}$ as long as that vector lies reasonably close to a modal vector ${\textbf {u}}_{i}$, i = 1, 2, 3, ..., n.
Since a vector ${\textbf {u}}$ that differs from the modal vector ${\textbf {u}}_{m}$ by a small quantity of first order yields a Rayleigh quotient that differs from the exact eigenvalue only by a quantity of second order, the estimate is insensitive to the choice of trial vector; this is what makes the method so useful. A good way to estimate the lowest modal vector $({u}_{1})$, which generally works well for most structures (even though it is not guaranteed), is to take $({u}_{1})$ equal to the static displacement produced by an applied force that has the same relative distribution as the diagonal mass matrix terms. The latter is elucidated by the following 3-DOF example.
Example – 3DOF
As an example, we can consider a 3-degree-of-freedom system in which the mass and the stiffness matrices of them are known as follows:
$M={\begin{bmatrix}1&0&0\\0&1&0\\0&0&3\end{bmatrix}}\;,\quad K={\begin{bmatrix}3&-1&0\\-1&3&-2\\0&-2&2\end{bmatrix}}$
To get an estimation of the lowest natural frequency we choose a trial vector of static displacement obtained by loading the system with a force proportional to the masses:
${\textbf {F}}=k{\begin{bmatrix}m_{1}\\m_{2}\\m_{3}\end{bmatrix}}=1{\begin{bmatrix}1\\1\\3\end{bmatrix}}$
Thus, the trial vector will become
${\textbf {u}}=K^{-1}{\textbf {F}}={\begin{bmatrix}2.5\\6.5\\8\end{bmatrix}}$
which allows us to calculate Rayleigh's quotient:
$R={\frac {{\textbf {u}}^{T}\,K\,{\textbf {u}}}{{\textbf {u}}^{T}\,M\,{\textbf {u}}}}=\cdots =0.137214$
Thus, the lowest natural frequency, calculated by means of Rayleigh's quotient, is:
$w_{\text{Ray}}=0.370424$
Using a computational tool, it is quick to check how much this estimate differs from the true value. In this case, using MATLAB, the lowest natural frequency is found to be $w_{\text{real}}=0.369308$, so Rayleigh's approximation carries an error of only $0.302315\%$, a remarkable result.
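The computation elided by the dots above can be checked in a few lines (an illustrative Python sketch verifying the quoted numbers):

```python
K = [[3.0, -1.0, 0.0], [-1.0, 3.0, -2.0], [0.0, -2.0, 2.0]]   # stiffness
M = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 3.0]]       # mass
F = [1.0, 1.0, 3.0]          # force proportional to the masses
u = [2.5, 6.5, 8.0]          # trial vector u = K^{-1} F from the text

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# check that u really solves K u = F
print(matvec(K, u))  # [1.0, 1.0, 3.0]

R = dot(u, matvec(K, u)) / dot(u, matvec(M, u))   # Rayleigh's quotient
print(R)             # 0.137214...
print(R ** 0.5)      # 0.370424..., the estimated lowest natural frequency
```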
The example shows that Rayleigh's quotient can produce an accurate estimate of the lowest natural frequency. Using the static displacement vector as a trial vector is valid because the static displacement vector tends to resemble the lowest vibration mode.
References
1. Meirovitch, Leonard (2003). Fundamentals of Vibration. McGraw-Hill Education. p. 806. ISBN 9780071219839.
Raymond Flood (mathematician)
Raymond Flood is Emeritus Fellow and a member of the Continuing Education Department at Kellogg College, Oxford,[1] and has been a Professor of Geometry at Gresham College.[2]
Raymond Flood
Raymond Flood at Gresham College, 2012
Nationality: British
Scientific career
Fields: Mathematics; History of mathematics; Computational mathematics; Theoretical physics
Institutions: University of Oxford; Kellogg College; Gresham College; British Society for the History of Mathematics; Committee for the History of Science, Medicine and Technology
Education
Flood earned a Bachelor of Science degree at Queen's University Belfast and a master's degree at Linacre College, Oxford. He obtained his PhD from University College, Dublin.[3] Flood obtained his doctorate through part-time study, as he already had a family and a job.[4]
Career
In 1990, Flood was made a Founding Fellow of Kellogg College, Oxford, formerly Rewley House. Kellogg College was created to look after the interests of mature and part-time students, and Flood primarily teaches students who are mature or who study part-time.[4] He has held numerous positions at the College and the University of Oxford, including Curator of the University Libraries and University Lecturer.[1]
Flood has dedicated much of his academic career to promoting mathematics and computing to adult audiences. He was President of the British Society for the History of Mathematics from 2006 until 2009,[5] and has also been a Research Associate in the School of Theoretical Physics, Dublin Institute for Advanced Studies.[1] Of Gresham College, Flood has said: "Gresham College comes from a long tradition of liberal adult education. Allowing people from a variety of backgrounds... to get access to current thinking on the major issues of the day. Gresham College ethos is very similar to my own ethos."[6]
In August 2012, Flood was appointed Gresham Professor of Geometry at Gresham College for a period of three years, replacing John D. Barrow.[7] During his term at the College he delivered series of free public lectures on Shaping Modern Mathematics,[8] Applying Modern Mathematics,[9] and Great Mathematicians, Great Mathematics.[10]
Other research work and publications
Aside from his academic work, Flood is active in communicating mathematics and its history to non-specialist audiences. He has appeared on BBC Radio 4's In Our Time[11] and has lectured on transatlantic voyages with RMS Queen Mary 2.[2]
Flood has produced and co-produced many publications and books on Mathematics. Some of the most recent books with which he has been involved are James Clerk Maxwell: Perspectives on his Life and Work (Oxford University Press, 2014),[12] The Great Mathematicians (Arcturus, 2011), which celebrates the achievements of the great mathematicians in their historical context, and Mathematics in Victorian Britain (Oxford University Press, 2011), which assembles in a single source, research on the history of mathematicians in Victorian Britain that would otherwise be out of reach of the general reader.[13]
References
1. Raymond G Flood, Kellogg College (accessed 9 June 2014)
2. Professor Raymond Flood, Gresham College (accessed 9 June 2014)
3. Raymond Flood Appointed Professor of Geometry at Gresham College (accessed 9 June 2014)
4. Video Interview short with Professor Raymond Flood (accessed 22 January 2015)
5. 'The British Society for the History of Mathematics' by Robin Wilson and Raymond Flood in Newsletter of the European Mathematics Society, March 2013, Issue 87
6. Professor Raymond Flood and Gresham College (accessed 22 January 2015)
7. Gresham Professor of Geometry on www.gresham.ac.uk
8. Shaping Mathematics series page on www.gresham.ac.uk
9. Applying Modern Mathematics series page on www.gresham.ac.uk
10. Great Mathematicians, Great Mathematics series page on www.gresham.ac.uk
11. The Laws of Motion and Negative Numbers on the BBC In Our Time website (accessed 10 January 2015)
12. Jame Clerk Maxwell: Perspectives on his Life and Work listing on the OUP website
13. Mathematics in Victorian Britain listing on the OUP website
External links
• Raymond G Flood, Kellogg College
• Professor Raymond Flood, Gresham College
• Raymond Flood's past Gresham College mathematics lectures
Raymond Keiller Butchart
Raymond Keiller Butchart FRSE (1888–1930) was a Scottish mathematician who died young. He served for two years as Professor of Mathematics at Raffles College in Singapore, and lost a leg in the First World War.
Life
He was born in Dundee in Scotland on 4 May 1888, the only son of Margaret and Robert K Butchart. His father was a manager in a local jute spinning mill.
He attended Morgan Academy and the High School of Dundee before receiving a place at the University of St Andrews, where he earned a bachelor's degree in mathematics in 1913. During this time he studied at University College, Dundee (now the University of Dundee), which was then a college of the University of St Andrews.[1] After graduating he worked as a student assistant in the Mathematics department of University College, Dundee until December 1914. He then gave up a position at Wilson College in Bombay to serve his country instead,[2] receiving a commission as a lieutenant in the 14th battalion of the Royal Scots on 24 December 1914.[3]
After training at Stobs in the Scottish Borders, he was appointed brigade signals officer. He left for France and Flanders in the summer of 1915 and rose to the rank of captain. He was seriously wounded and lost a leg,[4] and was not discharged from the army until 1920. He had been elected a fellow of the Royal Society of Edinburgh in February 1915 (shortly before being sent to France); his proposers included D'Arcy Wentworth Thompson.[5]
In July 1921 the University of St Andrews awarded him a PhD and gave him the new title of lecturer in mathematics.
From 1928 to 1930 he was professor of mathematics at Raffles College in Singapore and apparently very much enjoyed the climate there. He left Singapore with his wife on 24 March 1930, for their first return trip to Scotland.
He died of malaria, which developed soon after he boarded ship, in the Indian Ocean. He was buried at sea, 65 miles south-east of Colombo, the same day, 30 March 1930.[6]
Family
He married Jean Ainslie Broome in 1921.
Publications
• The Dissipation of Energy in Simple and Multiple Wires (1921)[7]
References
1. University of St Andrews Students of University College Dundee and Medical School 1897-1947. Dundee: University College, Dundee. c. 1949.
2. "RSE Obituary". st-and.ac.uk. Retrieved 26 June 2015.
3. "List" (PDF). The London Gazette. 29 December 1914. Retrieved 26 January 2017.
4. S, J. E. A. (January 1932). "Proceedings of the Royal Society of Edinburgh – Raymond Keiller Butchart, B.Sc., Ph.D.. – Cambridge Journals Online". Proceedings of the Royal Society of Edinburgh. 51: 200–201. doi:10.1017/S0370164600023154.
5. "Former Fellows of The Royal Society of Edinburgh - 1783 – 2002" (PDF). The Royal Society of Edinburgh. July 2006. Retrieved 25 January 2017.
6. "Butchart biography". st-and.ac.uk. Retrieved 26 June 2015.
7. Butchart, Raymond Keiller (1921). "The Dissipation of Energy in Simple and Compound Wires". google.co.uk. Retrieved 26 June 2015.
Raymond Redheffer
Raymond Moos Redheffer (April 17, 1921 – May 13, 2005)[1] was an American mathematician. He was the creator of one of the first electronic games, a machine that played Nim.[2]
Early life
He earned his PhD in 1948 from the Massachusetts Institute of Technology under Norman Levinson.
Career
He taught as a Peirce Fellow at Harvard from 1948 to 1950. His teaching skills were acknowledged 6 decades later by one of his students.[3] He taught for 55 years at the University of California, Los Angeles,[4] writing more than 200 research papers and three textbooks.[1]
Notable and unusual is the physically motivated discussion of the functions of vector calculus in his book with Sokolnikoff. He is known for the Redheffer matrix, the Redheffer star product, and for (with Charles Eames) his 1966 timeline of mathematics entitled Men of Modern Mathematics that was printed and distributed by IBM. He collaborated with Eames on a series of short films about mathematics,[1] and may have invented a version of Nim with electronic components.
Recognition
• UCLA Distinguished Teaching Award (1969).
Books
• Sokolnikoff, Ivan Stephen; Redheffer, Raymond M. (1966). Mathematics of Physics and Modern Engineering. McGraw-Hill. ISBN 978-0-07-059625-2.
• Levinson, Norman; Redheffer, Raymond M. (1970), Complex Variables, Holden-Day, ISBN 978-0-07-037492-8.
• Redheffer, Raymond M. (1991), Differential equations : theory and applications, Jones & Bartlett Publishers, ISBN 0-86720-200-9.
• Redheffer, Raymond M. (1992), Introduction to Differential Equations, Jones & Bartlett Publishers, ISBN 978-0-86720-289-2.
References
1. Gamelin, Theodore W. (2005), In Memoriam: Raymond Redheffer, University of California Senate.
2. A História dos Games - A origem (1942-1961)
3. Seligman, Stephen J. (2009) Precepts for Freshmen, The Harvard Crimson September 2
4. Raymond Redheffer at the Mathematics Genealogy Project
Raymond Louis Wilder
Raymond Louis Wilder (3 November 1896 in Palmer, Massachusetts – 7 July 1982 in Santa Barbara, California) was an American mathematician, who specialized in topology and gradually acquired philosophical and anthropological interests.
Life
Wilder's father was a printer. Raymond was musically inclined. He played cornet in the family orchestra, which performed at dances and fairs, and accompanied silent films on the piano.
He entered Brown University in 1914, intending to become an actuary. During World War I, he served in the U.S. Navy as an ensign. Brown awarded him his first degree in 1920, and a master's degree in actuarial mathematics in 1921. That year, he married Una Maude Greene; they had four children, through whom they have many descendants.
Wilder chose to do his Ph.D. at the University of Texas at Austin, the most fateful decision of his life. At Texas, Wilder discovered pure mathematics and topology, thanks to the remarkable influence of Robert Lee Moore, the founder of topology in the US and the inventor of the Moore method for teaching mathematical proof. Moore was initially unimpressed by the young actuary, but Wilder went on to solve a difficult open problem that Moore had posed to his class. Moore suggested Wilder write up the solution for his Ph.D. thesis, which he did in 1923, titling it Concerning Continuous Curves. Wilder thus became the first of Moore's many doctoral students at the University of Texas.
After a year as an instructor at Texas, Wilder was appointed assistant professor at the Ohio State University in 1924. That university required that its academic employees sign a loyalty oath, which Wilder was very reluctant to sign because doing so was inconsistent with his lifelong progressive political and moral views.
In 1926, Wilder joined the faculty of the University of Michigan at Ann Arbor, where he supervised 26 Ph.D. students and became a research professor in 1947. During the 1930s, he helped settle European refugee mathematicians in the United States. Mathematicians who rubbed shoulders with Wilder at Michigan and who later proved prominent included Samuel Eilenberg, the cofounder of category theory, and the topologist Norman Steenrod. After his 1967 retirement from Michigan at the rather advanced age of 71, Wilder became a research associate and occasional lecturer at the University of California at Santa Barbara.
Wilder was vice president of the American Mathematical Society in 1950–1951, its president in 1955–1956, and the Society's Josiah Willard Gibbs Lecturer in 1969. He was president of the Mathematical Association of America in 1965–1966, and it awarded him its Distinguished Service Medal in 1973.[1] He was elected to the United States National Academy of Sciences in 1963. Brown University (1958) and the University of Michigan (1980) awarded him honorary doctorates. The mathematics department at the University of California annually bestows an award in Wilder's name on one or more graduating seniors.
The historical, philosophical, and anthropological writings of Wilder's later years suggest a warm, colorful personality. Raymond (2003) attests to this having been the case. For instance:
"[Wilder] was a devoted student of southwestern Native American culture. One day he told me that after retiring he would like to be a bartender in a rural area of Arizona or New Mexico, because he found the stories of the folk he met in bars there so fascinating."
The topologist
Wilder's thesis set out a new approach to the Schönflies programme, which aimed to study positional invariants of sets in the plane or 2-sphere. A positional invariant of a set A with respect to a set B is a property shared by all homeomorphic images of A contained in B. The best known example of such a positional invariant is embodied in the Jordan curve theorem: A simple closed curve in the 2-sphere has precisely two complementary domains and is the boundary of each of them. A converse to the Jordan curve theorem, proved by Schönflies, states that a subset of the 2-sphere is a simple closed curve if it:
• Has two complementary domains;
• Is the boundary of each of these domains;
• Is accessible from each of these domains.
In his "A converse of the Jordan-Brouwer separation theorem in three dimensions" (1930), Wilder showed that a subset of Euclidean 3-space whose complementary domains satisfied certain homology conditions was a 2-sphere.
Around 1930, Wilder moved from set-theoretic topology to algebraic topology, calling in 1932 for the unification of the two areas. He then began an extensive investigation of the theory of manifolds, e.g., his "Generalized closed manifolds in n-space" (1934), in effect extending the Schönflies programme to higher dimensions. This work culminated in his Topology of Manifolds (1949), twice reprinted, whose last three chapters discuss his contributions to the theory of positional topological invariants.
The philosopher
During the 1940s, Wilder met and befriended the University of Michigan anthropologist Leslie White, whose professional curiosity included mathematics as a human activity (White 1947). This encounter proved fateful, and Wilder's research interests underwent a major change, towards the foundations of mathematics. This change was foreshadowed by his 1944 article "The nature of mathematical proof," and heralded by his address to the 1950 International Congress of Mathematicians, titled "The cultural basis of mathematics," which posed the questions:
• "How does culture (in its broadest sense) determine a mathematical structure, such as a logic?"
• "How does culture influence the successive stages of the discovery of a mathematical structure?"
In 1952, he wrote up his course on foundations and the philosophy of mathematics into a widely cited text, Introduction to the foundations of mathematics.
Wilder's Evolution of mathematical concepts. An elementary study (1969) proposed that "we study mathematics as a human artifact, as a natural phenomenon subject to empirical observation and scientific analysis, and, in particular, as a cultural phenomenon understandable in anthropological terms." In this book, Wilder wrote:
"The major difference between mathematics and the other sciences, natural and social, is that whereas the latter are directly restricted in their purview by environmental phenomena of a physical or social nature, mathematics is subject only indirectly to such limitations. ... Plato conceived of an ideal universe in which resided perfect models ... the only reality mathematical concepts have is as cultural elements or artifacts."
Wilder's last book, Mathematics as a cultural system (1981), contained yet more thinking in this anthropological and evolutionary vein.
Wilder's eclectic and humanist perspective on mathematics appears to have had little influence on subsequent mathematical research. It has, however, had some influence on the teaching of mathematics and on the history and philosophy of mathematics. In particular, Wilder can be seen as a precursor to the work of Howard Eves, Evert Willem Beth, and Davis and Hersh (1981). Wilder's call for mathematics to be scrutinized by the methods of social science anticipates some aspects of Where Mathematics Comes From, by George Lakoff and Rafael Nunez. For an introduction to the limited anthropological research on mathematics, see the last chapter of Hersh (1997).
Bibliography
Books by Wilder:
• 1949. Topology of Manifolds.[2]
• 1965 (1952). Introduction to the foundations of mathematics.[3]
• 1969. Evolution of mathematical concepts. An elementary study.
• 1981. Mathematics as a cultural system. (ISBN 0-08-025796-8)
Biographical:
• Raymond, F., 2003, " Raymond Louis Wilder" in Biographical Memoirs National Academy of Sciences 82: 336–51.
Related work cited in this entry:
• Philip J. Davis and Reuben Hersh, 1981. The Mathematical Experience.
• Reuben Hersh, 1997. What Is Mathematics, Really? Oxford Univ. Press.
• Leslie White, 1947, "The Locus of Mathematical Reality: An Anthropological Footnote," Philosophy of Science 14: 289–303. Reprinted in Reuben Hersh, ed., 2006. 18 Unconventional Essays on the Nature of Mathematics. Springer: 304–19.
References
1. MAA presidents: Raymond Louis Wilder
2. Eilenberg, Samuel (1950). "Review: Topology of manifolds, by R. L. Wilder". Bull. Amer. Math. Soc. 56 (1, Part 1): 75–77. doi:10.1090/s0002-9904-1950-09349-5.
3. Frink, Orrin (1953). "Review: Introduction to the foundations of mathematics, by R. L. Wilder". Bull. Amer. Math. Soc. 59 (6): 580–582. doi:10.1090/s0002-9904-1953-09770-1.
External links
• J J O'Connor and E F Robertson, MacTutor: Raymond Louis Wilder. The source for this entry.
• Wilder papers at the University of Texas.
Presidents of the American Mathematical Society
1888–1900
• John Howard Van Amringe (1888–1890)
• Emory McClintock (1891–1894)
• George William Hill (1895–1896)
• Simon Newcomb (1897–1898)
• Robert Simpson Woodward (1899–1900)
1901–1924
• E. H. Moore (1901–1902)
• Thomas Fiske (1903–1904)
• William Fogg Osgood (1905–1906)
• Henry Seely White (1907–1908)
• Maxime Bôcher (1909–1910)
• Henry Burchard Fine (1911–1912)
• Edward Burr Van Vleck (1913–1914)
• Ernest William Brown (1915–1916)
• Leonard Eugene Dickson (1917–1918)
• Frank Morley (1919–1920)
• Gilbert Ames Bliss (1921–1922)
• Oswald Veblen (1923–1924)
1925–1950
• George David Birkhoff (1925–1926)
• Virgil Snyder (1927–1928)
• Earle Raymond Hedrick (1929–1930)
• Luther P. Eisenhart (1931–1932)
• Arthur Byron Coble (1933–1934)
• Solomon Lefschetz (1935–1936)
• Robert Lee Moore (1937–1938)
• Griffith C. Evans (1939–1940)
• Marston Morse (1941–1942)
• Marshall H. Stone (1943–1944)
• Theophil Henry Hildebrandt (1945–1946)
• Einar Hille (1947–1948)
• Joseph L. Walsh (1949–1950)
1951–1974
• John von Neumann (1951–1952)
• Gordon Thomas Whyburn (1953–1954)
• Raymond Louis Wilder (1955–1956)
• Richard Brauer (1957–1958)
• Edward J. McShane (1959–1960)
• Deane Montgomery (1961–1962)
• Joseph L. Doob (1963–1964)
• Abraham Adrian Albert (1965–1966)
• Charles B. Morrey Jr. (1967–1968)
• Oscar Zariski (1969–1970)
• Nathan Jacobson (1971–1972)
• Saunders Mac Lane (1973–1974)
1975–2000
• Lipman Bers (1975–1976)
• R. H. Bing (1977–1978)
• Peter Lax (1979–1980)
• Andrew M. Gleason (1981–1982)
• Julia Robinson (1983–1984)
• Irving Kaplansky (1985–1986)
• George Mostow (1987–1988)
• William Browder (1989–1990)
• Michael Artin (1991–1992)
• Ronald Graham (1993–1994)
• Cathleen Synge Morawetz (1995–1996)
• Arthur Jaffe (1997–1998)
• Felix Browder (1999–2000)
2001–2024
• Hyman Bass (2001–2002)
• David Eisenbud (2003–2004)
• James Arthur (2005–2006)
• James Glimm (2007–2008)
• George Andrews (2009–2010)
• Eric Friedlander (2011–2012)
• David Vogan (2013–2014)
• Robert Bryant (2015–2016)
• Ken Ribet (2017–2018)
• Jill Pipher (2019–2020)
• Ruth Charney (2021–2022)
• Bryna Kra (2023–2024)
Raymundo Favila
Raymundo Acosta Favila was a Filipino mathematician. He received his Ph.D. from the University of California, Berkeley in 1939 under the supervision of Pauline Sperry,[1][2] and spent his career at the University of the Philippines in Manila.[3] He was elected Academician of the National Academy of Science and Technology in 1979. One of those who initiated mathematics in the Philippines, he contributed extensively to the development of mathematics and of mathematics education in the country, making fundamental studies of topics such as stratifiable congruences and geometric inequalities. He also co-authored textbooks in algebra and trigonometry.
Dissertation: On the Projective Differential Geometry of Certain Systems of Linear Homogeneous Partial Differential Equations of the First Order, with Special Applications
References
1. Raymundo Favila at the Mathematics Genealogy Project
2. "A Filipino in Mathematics" Natural and Applied Science Bulletin, University of the Philippines College of Liberal Arts, University of the Philippines College of Arts and Sciences, 1973
3. Oscar M. Alfonso, "University of the Philippines: The First 75 Years (1908-1983)" p.490
Raynaud's isogeny theorem
In mathematics, Raynaud's isogeny theorem, proved by Raynaud (1985), relates the Faltings heights of two isogenous elliptic curves.
References
• Raynaud, Michel (1985). "Hauteurs et isogénies" [Heights and isogenies]. In Szpiro, Lucien (ed.). Séminaire sur les pinceaux arithmétiques: la conjecture de Mordell [Seminar on arithmetic pencils: the Mordell conjecture]. pp. 199–234. ISSN 0303-1179. MR 0801923. Zbl 1182.14049. {{cite book}}: |journal= ignored (help)
Raynaud surface
In mathematics, a Raynaud surface is a particular kind of algebraic surface introduced in William E. Lang (1979) and named for Michel Raynaud (1978). To be precise, a Raynaud surface is a quasi-elliptic surface over an algebraic curve of genus g greater than 1, such that all fibers are irreducible and the fibration has a section. The Kodaira vanishing theorem fails for such surfaces; in other words, the theorem, valid in algebraic geometry over the complex numbers, admits these surfaces as counterexamples, and they can exist only in characteristic p > 0.
Generalized Raynaud surfaces were introduced in (Lang 1983), and give examples of surfaces of general type with global vector fields.
References
• Lang, William E. (1979), "Quasi-elliptic surfaces in characteristic three", Annales Scientifiques de l'École Normale Supérieure, Série 4, 12 (4): 473–500, ISSN 0012-9593, MR 0565468
• Lang, William E. (1983), "Examples of surfaces of general type with vector fields", Arithmetic and geometry, Vol. II, Progress in Mathematics, vol. 36, Boston, MA: Birkhäuser Boston, pp. 167–173, MR 0717611
• Raynaud, Michel (1978), "Contre-exemple au "vanishing theorem" en caractéristique $p>0$", C. P. Ramanujam—a tribute, Tata Inst. Fund. Res. Studies in Math., vol. 8, Berlin, New York: Springer-Verlag, pp. 273–278, MR 0541027
Rayo's number
Rayo's number is a large number named after Mexican philosophy professor Agustín Rayo which has been claimed to be the largest named number.[1][2] It was originally defined in a "big number duel" at MIT on 26 January 2007.[3][4]
Definition
The definition of Rayo's number is a variation on the definition:[5]
The smallest number bigger than any finite number named by an expression in the language of first-order set theory with a googol symbols or less.
Specifically, an initial version of the definition, which was later clarified, read "The smallest number bigger than any number that can be named by an expression in the language of first-order set-theory with less than a googol (10100) symbols."[4]
The formal definition of the number uses the following second-order formula, where [φ] is a Gödel-coded formula and s is a variable assignment:[5]
For all R {
  {for any (coded) formula [ψ] and any variable assignment t,
    (R([ψ],t) ↔
      (([ψ] = "xi ∈ xj" ∧ t(xi) ∈ t(xj)) ∨
      ([ψ] = "xi = xj" ∧ t(xi) = t(xj)) ∨
      ([ψ] = "(~θ)" ∧ ~R([θ],t)) ∨
      ([ψ] = "(θ∧ξ)" ∧ R([θ],t) ∧ R([ξ],t)) ∨
      ([ψ] = "∃xi (θ)" ∧ for some xi-variant t' of t, R([θ],t'))
      ))} →
  R([φ],s)}
Given this formula, Rayo's number is defined as:[5]
The smallest number bigger than every finite number m with the following property: there is a formula φ(x1) in the language of first-order set-theory (as presented in the definition of Sat) with less than a googol symbols and x1 as its only free variable such that: (a) there is a variable assignment s assigning m to x1 such that Sat([φ(x1)],s), and (b) for any variable assignment t, if Sat([φ(x1)],t), then t assigns m to x1.
Explanation
Intuitively, Rayo's number is defined in a formal language, such that:
• "xi∈xj" and "xi=xj" are atomic formulas.
• If θ is a formula, then "(~θ)" is a formula (the negation of θ).
• If θ and ξ are formulas, then "(θ∧ξ)" is a formula (the conjunction of θ and ξ).
• If θ is a formula, then "∃xi(θ)" is a formula (existential quantification).
Note that parentheses may not be omitted; for instance, one must write "∃xi((~θ))" instead of "∃xi(~θ)".
It is possible to express the missing logical connectives in this language. For instance:
• Disjunction: "(θ∨ξ)" as "(~((~θ)∧(~ξ)))".
• Implication: "(θ⇒ξ)" as "(~(θ∧(~ξ)))".
• Biconditional: "(θ⇔ξ)" as "((~(θ∧ξ))∧(~((~θ)∧(~ξ))))".
• Universal quantification: "∀xi(θ)" as "(~∃xi((~θ)))".
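These expansions can be generated mechanically. A minimal sketch (the helper names are ours, not part of the formal definition) that rewrites each derived connective into the primitive language:

```python
def neg(p):
    """(~θ)"""
    return f"(~{p})"

def conj(p, q):
    """(θ∧ξ)"""
    return f"({p}∧{q})"

def disj(p, q):
    """Disjunction via De Morgan."""
    return neg(conj(neg(p), neg(q)))

def implies(p, q):
    """Implication as a negated conjunction."""
    return neg(conj(p, neg(q)))

def iff(p, q):
    """Biconditional built from conjunction and negation."""
    return conj(neg(conj(p, q)), neg(conj(neg(p), neg(q))))

def forall(i, p):
    """Universal quantifier as a negated existential."""
    return neg(f"∃x{i}({neg(p)})")

disj("θ", "ξ")   # '(~((~θ)∧(~ξ)))'
forall(1, "θ")   # '(~∃x1((~θ)))'
```

Note that the expansions automatically produce the doubled parentheses the language requires, e.g. "((~θ))" inside the universal quantifier.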
The definition concerns formulas in this language that have only one free variable, specifically x1. If a formula of length n is satisfied if and only if x1 equals the finite von Neumann ordinal k, we say the formula is a "Rayo string" for k, and that k is "Rayo-nameable" in n symbols. Then, Rayo(n) is defined as the smallest number greater than all numbers Rayo-nameable in at most n symbols.
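The satisfaction relation can be made concrete with a toy evaluator. The sketch below is illustrative only: formulas are abstract syntax trees rather than Gödel-coded strings, and the existential quantifier ranges over a small finite universe of von Neumann ordinals instead of over all sets, which is what makes it runnable:

```python
def ordinal(k):
    """k-th von Neumann ordinal as a nested frozenset: 0 = {}, k+1 = k ∪ {k}."""
    s = frozenset()
    for _ in range(k):
        s = s | frozenset([s])
    return s

def sat(phi, env, universe):
    """Evaluate a formula tuple against an assignment env (variable index -> set)."""
    op = phi[0]
    if op == 'in':       # ('in', i, j): xi ∈ xj
        return env[phi[1]] in env[phi[2]]
    if op == 'eq':       # ('eq', i, j): xi = xj
        return env[phi[1]] == env[phi[2]]
    if op == 'not':      # ('not', p): (~p)
        return not sat(phi[1], env, universe)
    if op == 'and':      # ('and', p, q): (p∧q)
        return sat(phi[1], env, universe) and sat(phi[2], env, universe)
    if op == 'ex':       # ('ex', i, p): ∃xi(p)
        return any(sat(phi[2], {**env, phi[1]: v}, universe) for v in universe)
    raise ValueError(f"unknown operator {op!r}")

# The Rayo string for 0, "(~∃x2(x2∈x1))", as a syntax tree:
is_zero = ('not', ('ex', 2, ('in', 2, 1)))
universe = [ordinal(k) for k in range(5)]
hits = [k for k in range(5) if sat(is_zero, {1: ordinal(k)}, universe)]
# hits == [0]: the formula is satisfied exactly when x1 is the ordinal 0
```

In the actual definition the quantifier ranges over the entire set-theoretic universe, which is why Rayo's function eventually dominates every computable function.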
Examples
To Rayo-name 0, which is the empty set, one can write "(¬∃x2(x2∈x1))", which has 10 symbols. It can be shown that this is the optimal Rayo string for 0. Similarly, (∃x2(x2∈x1)∧(¬∃x2((x2∈x1∧∃x3(x3∈x2))))), which has 30 symbols, is the optimal string for 1. Therefore, Rayo(n)=0 for 0≤n<10, and Rayo(n)=1 for 10≤n<30.
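These symbol counts can be verified mechanically, treating each variable xi, parenthesis, connective, quantifier, and the signs ∈ and = as one symbol apiece (the counter below is our own illustrative sketch):

```python
import re

def rayo_symbols(formula):
    """Count the symbols of a formula in Rayo's first-order language."""
    return len(re.findall(r"x\d+|[()∈=∧¬~∃]", formula))

rayo_symbols("(¬∃x2(x2∈x1))")                              # 10, the string for 0
rayo_symbols("(∃x2(x2∈x1)∧(¬∃x2((x2∈x1∧∃x3(x3∈x2)))))")    # 30, the string for 1
```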
Additionally, it can be shown that Rayo(34 + 20n) > n and Rayo(260 + 20n) > n².
References
1. "CH. Rayo's Number". The Math Factor Podcast. Retrieved 24 March 2014.
2. Kerr, Josh (7 December 2013). "Name the biggest number contest". Archived from the original on 20 March 2016. Retrieved 27 March 2014.
3. Elga, Adam. "Large Number Championship" (PDF). Archived from the original (PDF) on 14 July 2019. Retrieved 24 March 2014.
4. Manzari, Mandana; Nick Semenkovich (31 January 2007). "Profs Duke It Out in Big Number Duel". The Tech. Retrieved 24 March 2014.
5. Rayo, Agustín. "Big Number Duel". Retrieved 24 March 2014.
Large numbers
Examples in numerical order
• Thousand
• Ten thousand
• Hundred thousand
• Million
• Ten million
• Hundred million
• Billion
• Trillion
• Quadrillion
• Quintillion
• Sextillion
• Septillion
• Octillion
• Nonillion
• Decillion
• Eddington number
• Googol
• Shannon number
• Googolplex
• Skewes's number
• Moser's number
• Graham's number
• TREE(3)
• SSCG(3)
• BH(3)
• Rayo's number
• Transfinite numbers
Expression methods
Notations
• Scientific notation
• Knuth's up-arrow notation
• Conway chained arrow notation
• Steinhaus–Moser notation
Operators
• Hyperoperation
• Tetration
• Pentation
• Ackermann function
• Grzegorczyk hierarchy
• Fast-growing hierarchy
Related articles (alphabetical order)
• Busy beaver
• Extended real number line
• Indefinite and fictitious numbers
• Infinitesimal
• Largest known prime number
• List of numbers
• Long and short scales
• Number systems
• Number names
• Orders of magnitude
• Power of two
• Power of three
• Power of 10
• Sagan Unit
• Names
• History
Alexander Razborov
Aleksandr Aleksandrovich Razborov (Russian: Алекса́ндр Алекса́ндрович Разбо́ров; born February 16, 1963), sometimes known as Sasha Razborov, is a Soviet and Russian mathematician and computational theorist. He is Andrew McLeish Distinguished Service Professor at the University of Chicago.
Alexander Razborov
Born: February 16, 1963, Belovo, Russian SFSR, Soviet Union
Nationality: American, Russian
Alma mater: Moscow State University
Known for: group theory, logic in computer science, theoretical computer science
Awards:
• Nevanlinna Prize (1990)
• Gödel Prize (2007)
• Gödel Lecture (2010)
• David P. Robbins Prize (2013)
Scientific career
Fields: Mathematics
Institutions: University of Chicago, Steklov Mathematical Institute, Toyota Technological Institute at Chicago
Doctoral advisor: Sergei Adian
Research
In his best known work, joint with Steven Rudich, he introduced the notion of natural proofs, a class of strategies used to prove fundamental lower bounds in computational complexity. In particular, Razborov and Rudich showed that, under the assumption that certain kinds of one-way functions exist, such proofs cannot give a resolution of the P = NP problem, so new techniques will be required in order to solve this question.
Awards
• Nevanlinna Prize (1990) for introducing the "approximation method" in proving Boolean circuit lower bounds of some essential algorithmic problems,[1]
• Erdős Lecturer, Hebrew University of Jerusalem, 1998.
• Corresponding member of the Russian Academy of Sciences (2000)[2][3]
• Gödel Prize (2007, with Steven Rudich) for the paper "Natural Proofs."[4][5]
• David P. Robbins Prize for the paper "On the minimal density of triangles in graphs" (Combinatorics, Probability and Computing 17 (2008), no. 4, 603–618), and for introducing a new powerful method, flag algebras, to solve problems in extremal combinatorics
• Gödel Lecturer (2010) with the lecture titled Complexity of Propositional Proofs.[6]
• Andrew MacLeish Distinguished Service Professor (2008) in the Department of Computer Science, University of Chicago.
• Fellow of the American Academy of Arts and Sciences (AAAS) (2020).[7]
Bibliography
• Razborov, A. A. (1985). "Lower bounds for the monotone complexity of some Boolean functions" (PDF). Soviet Mathematics - Doklady. 31: 354–357.
• Razborov, A. A. (June 1985). "Lower bounds on monotone complexity of the logical permanent". Mathematical Notes of the Academy of Sciences of the USSR. 37 (6): 485–493. doi:10.1007/BF01157687. S2CID 120875831.
• Разборов, Александр Александрович (1987). О системах уравнений в свободной группе [On systems of equations in a free group] (PDF) (in Russian). Moscow State University. (PhD thesis, 32.56 MB)
• Razborov, A. A. (April 1987). "Lower bounds on the size of bounded depth circuits over a complete basis with logical addition". Mathematical Notes of the Academy of Sciences of the USSR. 41 (4): 333–338. doi:10.1007/BF01137685. S2CID 121744639.
• Razborov, Alexander A. (May 1989). "On the method of approximations". Proceedings of the 21st Annual ACM Symposium on the Theory of Computing (STOC '89). Seattle, Washington, United States. pp. 167–176. doi:10.1145/73007.73023. ISBN 0897913078.
• Razborov, A. A. (December 1990). "Lower bounds of the complexity of symmetric boolean functions of contact-rectifier circuits". Mathematical Notes of the Academy of Sciences of the USSR. 48 (6): 1226–1234. doi:10.1007/BF01240265. S2CID 120703863.
• Razborov, Alexander A.; Rudich, Steven (May 1994). "Natural proofs". Proceedings of the 26th Annual ACM Symposium on the Theory of Computing (STOC '94). Montreal, Quebec, Canada. pp. 204–213. doi:10.1145/195058.195134. ISBN 0897916638.
• Razborov, Alexander A. (December 1998). "Lower Bounds for the Polynomial Calculus" (PostScript). Computational Complexity. 7 (4): 291–324. CiteSeerX 10.1.1.19.2441. doi:10.1007/s000370050013. S2CID 8130114.
• Razborov, Alexander A. (January 2003). "Propositional proof complexity" (PostScript). Journal of the ACM. 50 (1): 80–82. doi:10.1145/602382.602406. S2CID 17351318. (Survey paper for JACM's 50th anniversary)
See also
• Avi Wigderson
• Circuit complexity
• Free group
• Natural proofs
• One-way function
• Pseudorandom function family
• Resolution (logic)
Notes
1. "International Mathematical Union: Rolf Nevanlinna Prize Winners". Archived from the original on 2007-12-17.
2. "Russian Academy of Sciences: Razborov Aleksandr Aleksandrovich: General info: History".
3. "Russian Genealogy Agencies Tree: R" (in Russian). Archived from the original on 2007-12-21. Retrieved 2008-01-15.
4. "ACM-SIGACT Awards and Prizes: 2007 Gödel Prize".
5. "EATCS: Gödel Prize - 2007". Archived from the original on 2007-12-01.
6. "Gödel Lecturers – Association for Symbolic Logic". Archived from the original on 2021-11-08. Retrieved 2021-11-10.
7. "AAAS Fellows Elected" (PDF). Notices of the American Mathematical Society.
External links
• Alexander Razborov at the Mathematics Genealogy Project.
• Alexander Razborov's Home Page.
• All-Russian Mathematical Portal: Persons: Razborov Alexander Alexandrovich.
• Biography sketch in the Toyota Technological Institute at Chicago.
• Curricula Vitae at the Department of Computer Science, University of Chicago.
• DBLP: Alexander A. Razborov.
• Alexander Razborov's results at International Mathematical Olympiad
• MathSciNet: "Items authored by Razborov, A. A."
• The Work of A.A. Razborov – an article by László Lovász in the Proceedings of the International Congress of Mathematicians, Kyoto, Japan, 1990.
Gödel Prize laureates
1990s
• Babai / Goldwasser / Micali / Moran / Rackoff (1993)
• Håstad (1994)
• Immerman / Szelepcsényi (1995)
• Jerrum / Sinclair (1996)
• Halpern / Moses (1997)
• Toda (1998)
• Shor (1999)
2000s
• Vardi / Wolper (2000)
• Arora / Feige / Goldwasser / Lund / Lovász / Motwani / Safra / Sudan / Szegedy (2001)
• Sénizergues (2002)
• Freund / Schapire (2003)
• Herlihy / Saks / Shavit / Zaharoglou (2004)
• Alon / Matias / Szegedy (2005)
• Agrawal / Kayal / Saxena (2006)
• Razborov / Rudich (2007)
• Teng / Spielman (2008)
• Reingold / Vadhan / Wigderson (2009)
2010s
• Arora / Mitchell (2010)
• Håstad (2011)
• Koutsoupias / Papadimitriou / Roughgarden / É. Tardos / Nisan / Ronen (2012)
• Boneh / Franklin / Joux (2013)
• Fagin / Lotem / Naor (2014)
• Spielman / Teng (2015)
• Brookes / O'Hearn (2016)
• Dwork / McSherry / Nissim / Smith (2017)
• Regev (2018)
• Dinur (2019)
2020s
• Moser / G. Tardos (2020)
• Bulatov / Cai / Chen / Dyer / Richerby (2021)
• Brakerski / Gentry / Vaikuntanathan (2022)
Nevanlinna Prize winners
• Tarjan (1982)
• Valiant (1986)
• Razborov (1990)
• Wigderson (1994)
• Shor (1998)
• Sudan (2002)
• Kleinberg (2006)
• Spielman (2010)
• Khot (2014)
• Daskalakis (2018)
Andrea Razmadze
Andrea Mikhailovich Razmadze (sometimes spelled Andria/Andrei Razmadze, 12 August 1889 – 2 October 1929[1]) was a Georgian mathematician, and one of the founders of Tbilisi State University, whose Mathematics Institute was renamed in his honor in 1944.[2] The department's scientific journal, published continuously since 1937, was also renamed as the Proceedings of A. Razmadze Mathematical Institute in his honor.
Andrea Mikhailovich Razmadze, Sc.D. (ანდრია რაზმაძე)
Born: 12 August 1889,[1] Chkhenishi, Tiflis Governorate, Russian Empire (now Samtredia)
Died: 2 October 1929 (aged 40),[1] Tiflis, Georgian SSR, Transcaucasian SFSR, Soviet Union
Nationality: Georgian
Education: Moscow University
Known for: Calculus of variations
Scientific career
Fields: Mathematics
Institutions: Tbilisi University
Biography
Andrea Razmadze was the son of Mikhail Gavrilovich Razmadze, a railway worker, and Nino Georgievna Nodia.[3] He graduated from Kutaisi nonclassical secondary school in 1906 (where Public School #41 has been renamed for him[4]), then studied at Moscow University, earning a Diploma in 1910 and a Masters in 1917 while teaching at local classical and secondary schools.[5]

At the invitation of the university, he briefly stayed at Moscow University to teach mathematics in 1917,[6] but soon left to become one of the founders of Tbilisi University.[7] Though he died just 11 years later, during his time there he greatly expanded Georgian mathematical terminology by publishing three textbooks in that language[3] and insisting that all courses be taught in Georgian, an approach that attracted the renowned mathematician Nikoloz Muskhelishvili to the school.[8] He also founded the Georgian Mathematical Union on 21 February 1923 and was its first president; the institution lapsed on his death, but was reorganized in 1962 and continues to the present.[9]

He is most famous for his work in the calculus of variations, where he discovered an efficient method for finding the extrema of integral functions, and a comprehensive theory for finding the extrema of discontinuous ("angular") functions that can be represented by a finite number of curves.[3] He presented this last result at the 1924 International Congress of Mathematicians in Toronto,[10] for which he was awarded a Sc.D. by the Sorbonne.[5] He also delivered lectures in Jacques Hadamard's famous seminar series in Paris, along with such notables as Paul Lévy, Laurent Schwartz, and Nobel laureates Louis de Broglie and Max Born.[11]
External links
• Razmadze's biography on MacTutor.
• The website of the Georgian Mathematical Union.
References
1. "Dedication page to volume 63" (PDF). Memoirs on Differential Equations and Mathematical Physics. Tbilisi: Razmadze Mathematical Institute. 63: 1. 2014. ISSN 1512-0015.
2. "About". Andrea Razmadze Mathematical Institute. Retrieved 20 July 2016.
3. Youschkevitch, A. P. "Razmadze, Andrei Mikhailovich". www.encyclopedia.com. Retrieved 20 July 2016.
4. Kutaisi Regional Selection Conference. Georgia: European Youth Parliament. 2015. p. 8. Retrieved 20 July 2016.
5. "A. Razmadze. Curriculum Vitae". A. Razmadze Mathematical Institute. Retrieved 20 July 2016.
6. Russian Mathematical Surveys. London Mathematical Society. 1966. pp. 87–88.
7. Mikaberidze, Alexander. Historical Dictionary of Georgia. Rowman & Littlefield. pp. 550–551. ISBN 9781442241466.
8. Maugin, Gerard A. Continuum Mechanics Through the Twentieth Century: A Concise Historical Perspective. Springer Science & Business Media. p. 189. ISBN 9789400763531.
9. "GMU - About Us". www.rmi.ge. Retrieved 20 July 2016.
10. Razmadze, Andrea M. (1925). "Sur les solutions discontinues dans le calcul des variations" (PDF). Mathematische Annalen. 94: 1–52. doi:10.1007/bf01208643.
11. Mazʹja, Vladimir G.; Shaposhnikova, T. O. (1999). Jacques Hadamard: A Universal Mathematician. American Mathematical Soc. p. 172. ISBN 9780821819234.
Founders of the Tbilisi State University
Ivane Javakhishvili
• Kote Abkhazi
• Grigol Gvelesiani
• Ekvtime Takaishvili
• Giorgi Akhvlediani
• Shalva Nutsubidze
• Dimitri Uznadze
• Grigol Tsereteli
• Akaki Shanidze
• Andria Razmadze
• Ioseb Kipshidze
• Korneli Kekelidze
• Petre Melikishvili
Raúl Rojas
Raúl Rojas González (born 1955, in Mexico City) is an emeritus professor of Computer Science and Mathematics at the Free University of Berlin, and a renowned specialist in artificial neural networks. The FU-Fighters, football-playing robots he helped build, were world champions in 2004 and 2005. He is now leading an autonomous car project called Spirit of Berlin.
He and his team were awarded the Wolfgang von Kempelen Prize for his work on Konrad Zuse and the history of computers. Although most of his current research and teaching revolves around artificial intelligence and its applications, he holds academic degrees in mathematics and economics.
In 2009 the Mexican government created the Raúl Rojas González Prize for scientific achievement by Mexican citizens. The first recipient of the prize was Luis Rafael Herrera Estrella, for contributions to plant biotechnology.
He ran for president at the Free University of Berlin in 2010.
History
Rojas was born on June 25, 1955, in Mexico City to an engineer and a teacher. He attended university at the National Polytechnic Institute in Mexico City, where he majored in Mathematics and Physics. He moved to Germany in 1982 as a doctoral student in economics under the guidance of the political economist Elmar Altvater. The resulting dissertation was published under the title "Die Armut der Nationen – Handbuch zur Schuldenkrise von Argentinien bis Zaire" (The poverty of nations – Handbook of debt crisis from Argentina to Zaire). He became a full professor at University of Halle-Wittenberg in 1994, and later moved to the Free University of Berlin, where he remains today in the Informatics department. His wife, Margarita Esponda Argüero, is a professor in the same department.
Prizes
• 2001: Gründerpreis Multimedia of the German Ministry of Finance and Technology
• 2002: European Academic Software Award
• 2004 and 2005: World champions in robot football with the FU-Fighters
• 2005: Wolfgang von Kempelen Prize for the History of Informatics
• 2009: Received the Heberto Castillo gold medal for contributions to science from the Mexico City government
• 2015: Was named Professor of the year by the Association of German Universities
• 2015: Received the National Prize of Sciences and Arts by the Mexican Government in the category of Technology and Design
Works
• Rojas, Raúl (1996). Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-61068-4. ISBN 978-3-540-60505-8. S2CID 40019197. Available as a free e-book
• Rojas, Raúl, ed. (1998). Die Rechenmaschinen von Konrad Zuse. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-71944-8. ISBN 978-3-642-71945-5.
• Rojas, Raul; Hashagen, Ulf (26 July 2002). The First Computers. MIT Press. ISBN 978-0-262-68137-7.
• Rojas, Raúl (2001). Encyclopedia of Computers and Computer History. Routledge. ISBN 978-1-57958-235-7.
• "RoboCup 2002: Robot Soccer World Cup VI". Lecture Notes in Computer Science. Vol. 2752. Berlin, Heidelberg: Springer Berlin Heidelberg. 2003. doi:10.1007/b11927. ISBN 978-3-540-40666-2. ISSN 0302-9743. S2CID 6657080.
External links
• Homepage of Raúl Rojas at the Free University of Berlin
• Curriculum vitae of Raúl Rojas
• FU-Fighters football robots
• Autonomous car project
ba space
In mathematics, the ba space $ba(\Sigma )$ of an algebra of sets $\Sigma $ is the Banach space consisting of all bounded and finitely additive signed measures on $\Sigma $. The norm is defined as the variation, that is $\|\nu \|=|\nu |(X).$[1]
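For intuition, the variation norm can be computed concretely on a finite atomic algebra, where the supremum over partitions is attained by the partition into singletons. The following sketch uses a hypothetical three-atom example (the values are illustrative, not from the source) and checks the supremum by brute force over all partitions:

```python
# Hypothetical finite example: a finitely additive signed measure on the
# power set of X = {"a", "b", "c"}, determined by its values on atoms.
atom_values = {"a": 2.0, "b": -3.0, "c": 0.5}

def nu(A):
    """The signed measure of a subset A (additive over atoms)."""
    return sum(atom_values[x] for x in A)

def partitions(elems):
    """All partitions of a list of elements into nonempty blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for p in partitions(rest):
        yield [[first]] + p                      # first in its own block
        for i in range(len(p)):                  # or joined to an existing block
            yield p[:i] + [[first] + p[i]] + p[i + 1:]

def variation_norm():
    """|nu|(X): the supremum of sum |nu(A_i)| over finite partitions of X.
    On a finite atomic algebra it is attained by the singleton partition."""
    return max(sum(abs(nu(block)) for block in p)
               for p in partitions(list(atom_values)))

print(variation_norm())  # 5.5 = |2.0| + |-3.0| + |0.5|
```

The singleton partition dominates because splitting a block can only increase the sum of absolute values.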
If Σ is a sigma-algebra, then the space $ca(\Sigma )$ is defined as the subset of $ba(\Sigma )$ consisting of countably additive measures.[2] The notation ba is a mnemonic for bounded additive and ca is short for countably additive.
If X is a topological space, and Σ is the sigma-algebra of Borel sets in X, then $rca(X)$ is the subspace of $ca(\Sigma )$ consisting of all regular Borel measures on X.[3]
Properties
All three spaces are complete (they are Banach spaces) with respect to the same norm defined by the total variation, and thus $ca(\Sigma )$ is a closed subspace of $ba(\Sigma )$, and $rca(X)$ is a closed subspace of $ca(\Sigma )$ for Σ the sigma-algebra of Borel sets on X. The space of simple functions on $\Sigma $ is dense in $ba(\Sigma )$.
The ba space of the power set of the natural numbers, ba(2N), is often denoted as simply $ba$ and is isomorphic to the dual space of the ℓ∞ space.
Dual of B(Σ)
Let B(Σ) be the space of bounded Σ-measurable functions, equipped with the uniform norm. Then ba(Σ) = B(Σ)* is the continuous dual space of B(Σ). This is due to Hildebrandt[4] and Fichtenholtz & Kantorovich.[5] This is a kind of Riesz representation theorem which allows for a measure to be represented as a linear functional on measurable functions. In particular, this isomorphism allows one to define the integral with respect to a finitely additive measure (note that the usual Lebesgue integral requires countable additivity). This is due to Dunford & Schwartz,[6] and is often used to define the integral with respect to vector measures,[7] and especially vector-valued Radon measures.
The topological duality ba(Σ) = B(Σ)* is easy to see. There is an obvious algebraic duality between the vector space of all finitely additive measures σ on Σ and the vector space of simple functions: a linear form ζ on the simple functions corresponds to the measure given by $\sigma (A)=\zeta \left(1_{A}\right)$. It is easy to check that the linear form induced by σ is continuous in the sup-norm if σ is bounded, and the result follows since a linear form on the dense subspace of simple functions extends to an element of B(Σ)* if it is continuous in the sup-norm.
Dual of L∞(μ)
If Σ is a sigma-algebra and μ is a sigma-additive positive measure on Σ then the Lp space L∞(μ) endowed with the essential supremum norm is by definition the quotient space of B(Σ) by the closed subspace of bounded μ-null functions:
$N_{\mu }:=\{f\in B(\Sigma ):f=0\ \mu {\text{-almost everywhere}}\}.$
The dual Banach space L∞(μ)* is thus isomorphic to
$N_{\mu }^{\perp }=\{\sigma \in ba(\Sigma ):\mu (A)=0\Rightarrow \sigma (A)=0{\text{ for any }}A\in \Sigma \},$
i.e. the space of finitely additive signed measures on Σ that are absolutely continuous with respect to μ (μ-a.c. for short).
When the measure space is furthermore sigma-finite then L∞(μ) is in turn dual to L1(μ), which by the Radon–Nikodym theorem is identified with the set of all countably additive μ-a.c. measures. In other words, the inclusion in the bidual
$L^{1}(\mu )\subset L^{1}(\mu )^{**}=L^{\infty }(\mu )^{*}$
is isomorphic to the inclusion of the space of countably additive μ-a.c. bounded measures inside the space of all finitely additive μ-a.c. bounded measures.
References
• Dunford, N.; Schwartz, J.T. (1958). Linear operators, Part I. Wiley-Interscience.
1. Dunford & Schwartz 1958, IV.2.15.
2. Dunford & Schwartz 1958, IV.2.16.
3. Dunford & Schwartz 1958, IV.2.17.
4. Hildebrandt, T.H. (1934). "On bounded functional operations". Transactions of the American Mathematical Society. 36 (4): 868–875. doi:10.2307/1989829. JSTOR 1989829.
5. Fichtenholz, G.; Kantorovich, L.V. (1934). "Sur les opérations linéaires dans l'espace des fonctions bornées". Studia Mathematica. 5: 69–98. doi:10.4064/sm-5-1-69-98.
6. Dunford & Schwartz 1958.
7. Diestel, J.; Uhl, J.J. (1977). Vector measures. Mathematical Surveys. Vol. 15. American Mathematical Society. Chapter I.
Further reading
• Diestel, Joseph (1984). Sequences and series in Banach spaces. Springer-Verlag. ISBN 0-387-90859-5. OCLC 9556781.
• Yosida, K.; Hewitt, E. (1952). "Finitely additive measures". Transactions of the American Mathematical Society. 72 (1): 46–66. doi:10.2307/1990654. JSTOR 1990654.
• Kantorovitch, Leonid V.; Akilov, Gleb P. (1982). Functional Analysis. Pergamon. doi:10.1016/C2013-0-03044-7. ISBN 978-0-08-023036-8.
Concave polygon
A simple polygon that is not convex is called concave,[1] non-convex[2] or reentrant.[3] A concave polygon will always have at least one reflex interior angle—that is, an angle with a measure that is between 180 degrees and 360 degrees exclusive.[4]
Some lines containing interior points of a concave polygon intersect its boundary at more than two points.[4] Some diagonals of a concave polygon lie partly or wholly outside the polygon.[4] Some sidelines of a concave polygon fail to divide the plane into two half-planes one of which entirely contains the polygon. None of these three statements holds for a convex polygon.
As with any simple polygon, the sum of the internal angles of a concave polygon is π×(n − 2) radians, equivalently 180×(n − 2) degrees (°), where n is the number of sides.
It is always possible to partition a concave polygon into a set of convex polygons. A polynomial-time algorithm for finding a decomposition into as few convex polygons as possible is described by Chazelle & Dobkin (1985).[5]
A triangle can never be concave, but there exist concave polygons with n sides for any n > 3. An example of a concave quadrilateral is the dart.
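A simple numerical test for concavity checks the signs of the cross products of consecutive edge vectors: a simple polygon is concave exactly when the signs are mixed, since a sign change marks a reflex angle. A minimal sketch (the dart's vertex coordinates are an illustrative choice):

```python
def is_concave(poly):
    """A simple polygon (vertices in order) is concave iff the cross
    products of consecutive edge vectors do not all share one sign,
    i.e. some interior angle is reflex."""
    n = len(poly)
    signs = set()
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        cx, cy = poly[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) == 2

dart = [(0, 0), (2, 1), (4, 0), (2, 4)]    # concave quadrilateral (reflex at (2, 1))
square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # convex
print(is_concave(dart))    # True
print(is_concave(square))  # False
```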
At least one interior angle does not contain all other vertices in its edges and interior.
The convex hull of the concave polygon's vertices, and that of its edges, contains points that are exterior to the polygon.
Notes
1. McConnell, Jeffrey J. (2006), Computer Graphics: Theory Into Practice, p. 130, ISBN 0-7637-2250-2.
2. Leff, Lawrence (2008), Let's Review: Geometry, Hauppauge, NY: Barron's Educational Series, p. 66, ISBN 978-0-7641-4069-3
3. Mason, J.I. (1946), "On the angles of a polygon", The Mathematical Gazette, The Mathematical Association, 30 (291): 237–238, doi:10.2307/3611229, JSTOR 3611229.
4. "Definition and properties of concave polygons with interactive animation".
5. Chazelle, Bernard; Dobkin, David P. (1985), "Optimal convex decompositions", in Toussaint, G.T. (ed.), Computational Geometry (PDF), Elsevier, pp. 63–133.
External links
• Weisstein, Eric W. "Concave polygon". MathWorld.
Reach (mathematics)
In mathematics, the reach of a subset of Euclidean space measures how far from the set a point may lie while still having a unique nearest point in the set.
Definition
Let X be a subset of Rn. Then the reach of X is defined as
${\text{reach}}(X):=\sup\{r\in \mathbb {R} :\forall x\in \mathbb {R} ^{n}\setminus X{\text{ with }}{\rm {dist}}(x,X)<r{\text{ there exists a unique closest point }}y\in X{\text{ such that }}{\rm {dist}}(x,y)={\rm {dist}}(x,X)\}.$
Examples
Shapes that have reach infinity include
• a single point,
• a straight line,
• a full square, and
• any convex set.
The graph of f(x) = |x| has reach zero.
A circle of radius r has reach r.
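The circle example can be checked numerically: a point within distance r of the circle has a unique nearest point, while the centre, at distance exactly r, is equidistant from every point of the circle. A brute-force sketch (the sampling density and tolerance are arbitrary choices, not from the source):

```python
import math

def nearest_points_on_circle(x, y, r=1.0, samples=10000):
    """Brute force: sample the circle of radius r centred at the origin
    and return the sample points at (approximately) minimal distance
    from (x, y)."""
    best = None
    argmins = []
    for k in range(samples):
        t = 2 * math.pi * k / samples
        px, py = r * math.cos(t), r * math.sin(t)
        d = math.hypot(px - x, py - y)
        if best is None or d < best - 1e-9:
            best, argmins = d, [(px, py)]
        elif abs(d - best) <= 1e-9:
            argmins.append((px, py))
    return argmins

# a point at distance 0.5 < reach from the circle: unique nearest point
print(len(nearest_points_on_circle(0.5, 0.0)))  # 1
# the centre is at distance exactly r = reach: every circle point is nearest
print(len(nearest_points_on_circle(0.0, 0.0)))  # 10000
```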
References
• Federer, Herbert (1969), Geometric measure theory, Die Grundlehren der mathematischen Wissenschaften, vol. 153, New York: Springer-Verlag New York Inc., pp. xiv+676, ISBN 978-3-540-60656-7, MR 0257325, Zbl 0176.00801
Reachability
In graph theory, reachability refers to the ability to get from one vertex to another within a graph. A vertex $s$ can reach a vertex $t$ (and $t$ is reachable from $s$) if there exists a sequence of adjacent vertices (i.e. a walk) which starts with $s$ and ends with $t$.
In an undirected graph, reachability between all pairs of vertices can be determined by identifying the connected components of the graph. Any pair of vertices in such a graph can reach each other if and only if they belong to the same connected component; therefore, in such a graph, reachability is symmetric ($s$ reaches $t$ iff $t$ reaches $s$). The connected components of an undirected graph can be identified in linear time. The remainder of this article focuses on the more difficult problem of determining pairwise reachability in a directed graph (which, incidentally, need not be symmetric).
Definition
For a directed graph $G=(V,E)$, with vertex set $V$ and edge set $E$, the reachability relation of $G$ is the transitive closure of $E$, which is to say the set of all ordered pairs $(s,t)$ of vertices in $V$ for which there exists a sequence of vertices $v_{0}=s,v_{1},v_{2},...,v_{k}=t$ such that the edge $(v_{i-1},v_{i})$ is in $E$ for all $1\leq i\leq k$.[1]
If $G$ is acyclic, then its reachability relation is a partial order; any partial order may be defined in this way, for instance as the reachability relation of its transitive reduction.[2] A noteworthy consequence of this is that since partial orders are anti-symmetric, if $s$ can reach $t$, then we know that $t$ cannot reach $s$. Intuitively, if we could travel from $s$ to $t$ and back to $s$, then $G$ would contain a cycle, contradicting that it is acyclic. If $G$ is directed but not acyclic (i.e. it contains at least one cycle), then its reachability relation will correspond to a preorder instead of a partial order.[3]
Algorithms
Algorithms for determining reachability fall into two classes: those that require preprocessing and those that do not.
If only one (or a few) queries will be made, it may be more efficient to forgo the use of more complex data structures and compute the reachability of the desired pair directly. This can be accomplished in linear time using algorithms such as breadth-first search or iterative deepening depth-first search.[4]
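Such a single query might be answered with an ordinary breadth-first search; a minimal sketch (the adjacency-map graph is a hypothetical example):

```python
from collections import deque

def reachable(graph, s, t):
    """Breadth-first search: answers one reachability query in time
    linear in the number of vertices and edges."""
    seen = {s}
    queue = deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return True
        for w in graph.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

# directed graph as an adjacency map (illustrative)
g = {"a": ["b"], "b": ["c"], "c": [], "d": ["a"]}
print(reachable(g, "a", "c"))  # True
print(reachable(g, "c", "a"))  # False: the edges are directed
```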
If many queries will be made, then a more sophisticated method may be used; the exact choice of method depends on the nature of the graph being analysed. In exchange for preprocessing time and some extra storage space, we can create a data structure which can then answer reachability queries on any pair of vertices in as low as $O(1)$ time. Three different algorithms and data structures for three different, increasingly specialized situations are outlined below.
Floyd–Warshall Algorithm
The Floyd–Warshall algorithm[5] can be used to compute the transitive closure of any directed graph, which gives rise to the reachability relation as in the definition, above.
The algorithm requires $O(|V|^{3})$ time and $O(|V|^{2})$ space in the worst case. This algorithm computes more than reachability: it also yields the shortest-path distance between every pair of vertices. For graphs containing negative cycles, shortest paths may be undefined, but reachability between pairs can still be noted.
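When only reachability bits are needed, the algorithm specializes to a boolean transitive closure; a sketch (here, by convention, every vertex reaches itself via the length-zero walk):

```python
def transitive_closure(n, edges):
    """Boolean Floyd-Warshall: reach[s][t] is True iff t is reachable
    from s.  O(n^3) time, O(n^2) space.  Every vertex is taken to
    reach itself (the length-zero walk)."""
    reach = [[s == t for t in range(n)] for s in range(n)]
    for s, t in edges:
        reach[s][t] = True
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

r = transitive_closure(4, [(0, 1), (1, 2), (3, 0)])
print(r[3][2])  # True: 3 -> 0 -> 1 -> 2
print(r[2][0])  # False
```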
Thorup's Algorithm
For planar digraphs, a much faster method is available, as described by Mikkel Thorup in 2004.[6] This method can answer reachability queries on a planar graph in $O(1)$ time after spending $O(n\log {n})$ preprocessing time to create a data structure of $O(n\log {n})$ size. This algorithm can also supply approximate shortest path distances, as well as route information.
The overall approach is to associate with each vertex a relatively small set of so-called separator paths such that any path from a vertex $v$ to any other vertex $w$ must go through at least one of the separators associated with $v$ or $w$. An outline of the reachability related sections follows.
Given a graph $G$, the algorithm begins by organizing the vertices into layers starting from an arbitrary vertex $v_{0}$. The layers are built in alternating steps by first considering all vertices reachable from the previous step (starting with just $v_{0}$) and then all vertices which reach to the previous step until all vertices have been assigned to a layer. By construction of the layers, every vertex appears in at most two layers, and every directed path, or dipath, in $G$ is contained within two adjacent layers $L_{i}$ and $L_{i+1}$. Let $k$ be the last layer created, that is, the lowest value for $k$ such that $\bigcup _{i=0}^{k}L_{i}=V$.
The graph is then re-expressed as a series of digraphs $G_{0},G_{1},\ldots ,G_{k-1}$ where each $G_{i}=r_{i}\cup L_{i}\cup L_{i+1}$ and where $r_{i}$ is the contraction of all previous levels $L_{0}\ldots L_{i-1}$ into a single vertex. Because every dipath appears in at most two consecutive layers, and because each $G_{i}$ is formed by two consecutive layers, every dipath in $G$ appears in its entirety in at least one $G_{i}$ (and in no more than two consecutive such graphs).
For each $G_{i}$, three separators are identified which, when removed, break the graph into three components which each contain at most $1/2$ the vertices of the original. As $G_{i}$ is built from two layers of opposed dipaths, each separator may consist of up to 2 dipaths, for a total of up to 6 dipaths over all of the separators. Let $S$ be this set of dipaths. The proof that such separators can always be found is related to the Planar Separator Theorem of Lipton and Tarjan, and these separators can be located in linear time.
For each $Q\in S$, the directed nature of $Q$ provides for a natural indexing of its vertices from the start to the end of the path. For each vertex $v$ in $G_{i}$, we locate the first vertex in $Q$ reachable by $v$, and the last vertex in $Q$ that reaches to $v$. That is, we are looking at how early into $Q$ we can get from $v$, and how far we can stay in $Q$ and still get back to $v$. This information is stored with each $v$. Then for any pair of vertices $u$ and $w$, $u$ can reach $w$ via $Q$ if $u$ connects to $Q$ earlier than $w$ connects from $Q$.
Every vertex is labelled as above for each step of the recursion which builds $G_{0}\ldots ,G_{k}$. As this recursion has logarithmic depth, a total of $O(\log {n})$ extra information is stored per vertex. From this point, a logarithmic time query for reachability is as simple as looking over each pair of labels for a common, suitable $Q$. The original paper then works to tune the query time down to $O(1)$.
In summarizing the analysis of this method, first consider that the layering approach partitions the vertices so that each vertex is considered only $O(1)$ times. The separator phase of the algorithm breaks the graph into components which are at most $1/2$ the size of the original graph, resulting in a logarithmic recursion depth. At each level of the recursion, only linear work is needed to identify the separators as well as the connections possible between vertices. The overall result is $O(n\log n)$ preprocessing time with only $O(\log {n})$ additional information stored for each vertex.
Kameda's Algorithm
An even faster method for pre-processing, due to T. Kameda in 1975,[7] can be used if the graph is planar, acyclic, and also exhibits the following additional properties: all 0-indegree and all 0-outdegree vertices appear on the same face (often assumed to be the outer face), and it is possible to partition the boundary of that face into two parts such that all 0-indegree vertices appear on one part, and all 0-outdegree vertices appear on the other (i.e. the two types of vertices do not alternate).
If $G$ exhibits these properties, then we can preprocess the graph in only $O(n)$ time, and store only $O(\log {n})$ extra bits per vertex, answering reachability queries for any pair of vertices in $O(1)$ time with a simple comparison.
Preprocessing performs the following steps. We add a new vertex $s$ which has an edge to each 0-indegree vertex, and another new vertex $t$ with edges from each 0-outdegree vertex. Note that the properties of $G$ allow us to do so while maintaining planarity, that is, there will still be no edge crossings after these additions. For each vertex we store the list of adjacencies (out-edges) in order of the planarity of the graph (for example, clockwise with respect to the graph's embedding). We then initialize a counter $i=n+1$ and begin a Depth-First Traversal from $s$. During this traversal, the adjacency list of each vertex is visited from left-to-right as needed. As vertices are popped from the traversal's stack, they are labelled with the value $i$, and $i$ is then decremented. Note that $t$ is always labelled with the value $n+1$ and $s$ is always labelled with $0$. The depth-first traversal is then repeated, but this time the adjacency list of each vertex is visited from right-to-left.
When completed, $s$ and $t$, and their incident edges, are removed. Each remaining vertex stores a 2-dimensional label with values from $1$ to $n$. Given two vertices $u$ and $v$, and their labels $L(u)=(a_{1},a_{2})$ and $L(v)=(b_{1},b_{2})$, we say that $L(u)<L(v)$ if and only if $a_{1}\leq b_{1}$, $a_{2}\leq b_{2}$, and there exists at least one component $a_{1}$ or $a_{2}$ which is strictly less than $b_{1}$ or $b_{2}$, respectively.
The main result of this method then states that $v$ is reachable from $u$ if and only if $L(u)<L(v)$, which is easily calculated in $O(1)$ time.
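The final comparison is a simple componentwise dominance test on the two-dimensional labels; a sketch with hypothetical label values:

```python
def label_less(lu, lv):
    """Kameda's O(1) comparison: L(u) < L(v) iff each component of L(u)
    is <= the corresponding component of L(v) and at least one is
    strictly smaller.  Then v is reachable from u iff L(u) < L(v)."""
    (a1, a2), (b1, b2) = lu, lv
    return a1 <= b1 and a2 <= b2 and (a1 < b1 or a2 < b2)

print(label_less((1, 3), (2, 5)))  # True: componentwise <=, strict somewhere
print(label_less((2, 3), (2, 3)))  # False: no strictly smaller component
print(label_less((4, 1), (2, 5)))  # False: first components violate <=
```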
Related problems
A related problem is to solve reachability queries with some number $k$ of vertex failures. For example: "Can vertex $u$ still reach vertex $v$ even though vertices $s_{1},s_{2},...,s_{k}$ have failed and can no longer be used?" A similar problem may consider edge failures rather than vertex failures, or a mix of the two. The breadth-first search technique works just as well on such queries, but constructing an efficient oracle is more challenging.[8][9]
Another problem related to reachability queries is in quickly recalculating changes to reachability relationships when some portion of the graph is changed. For example, this is a relevant concern to garbage collection which needs to balance the reclamation of memory (so that it may be reallocated) with the performance concerns of the running application.
See also
• Gammoid
• st-connectivity
References
1. Skiena, Steven S. (2011), "15.5 Transitive Closure and Reduction", The Algorithm Design Manual (2nd ed.), Springer, pp. 495–497, ISBN 9781848000698.
2. Cohn, Paul Moritz (2003), Basic Algebra: Groups, Rings, and Fields, Springer, p. 17, ISBN 9781852335878.
3. Schmidt, Gunther (2010), Relational Mathematics, Encyclopedia of Mathematics and Its Applications, vol. 132, Cambridge University Press, p. 77, ISBN 9780521762687.
4. Gersting, Judith L. (2006), Mathematical Structures for Computer Science (6th ed.), Macmillan, p. 519, ISBN 9780716768647.
5. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), "Transitive closure of a directed graph", Introduction to Algorithms (2nd ed.), MIT Press and McGraw-Hill, pp. 632–634, ISBN 0-262-03293-7.
6. Thorup, Mikkel (2004), "Compact oracles for reachability and approximate distances in planar digraphs", Journal of the ACM, 51 (6): 993–1024, doi:10.1145/1039488.1039493, MR 2145261, S2CID 18864647.
7. Kameda, T (1975), "On the vector representation of the reachability in planar directed graphs", Information Processing Letters, 3 (3): 75–77, doi:10.1016/0020-0190(75)90019-8.
8. Demetrescu, Camil; Thorup, Mikkel; Chowdhury, Rezaul Alam; Ramachandran, Vijaya (2008), "Oracles for distances avoiding a failed node or link", SIAM Journal on Computing, 37 (5): 1299–1318, CiteSeerX 10.1.1.329.5435, doi:10.1137/S0097539705429847, MR 2386269.
9. Halftermeyer, Pierre, Connectivity in Networks and Compact Labeling Schemes for Emergency Planning, Universite de Bordeaux.
Best response
In game theory, the best response is the strategy (or strategies) which produces the most favorable outcome for a player, taking other players' strategies as given (Fudenberg & Tirole 1991, p. 29; Gibbons 1992, pp. 33–49). The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response (or one of the best responses) to the other players' strategies (Nash 1950).
Correspondence
Reaction correspondences, also known as best response correspondences, are used in the proof of the existence of mixed strategy Nash equilibria (Fudenberg & Tirole 1991, Section 1.3.B; Osborne & Rubinstein 1994, Section 2.2). Reaction correspondences are not "reaction functions" since functions must only have one value per argument, and many reaction correspondences will be undefined, i.e., a vertical line, for some opponent strategy choice. One constructs a correspondence $b(\cdot )$, for each player from the set of opponent strategy profiles into the set of the player's strategies. So, for any given set of opponent's strategies $\sigma _{-i}$, $b_{i}(\sigma _{-i})$ represents player $i$ 's best responses to $\sigma _{-i}$.
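For a symmetric 2x2 game, $b_{i}(\sigma _{-i})$ can be computed directly by comparing expected payoffs. A minimal sketch, using a hypothetical stag hunt payoff matrix (the numbers are illustrative) and returning an interval of optimal mixing probabilities to capture the many-valued points of the correspondence:

```python
def best_response(payoffs, p):
    """Best-response correspondence of a player in a symmetric 2x2 game
    against an opponent who plays strategy 0 with probability p.
    payoffs[i][j] is the player's payoff for playing i against j.
    Returns the interval (lo, hi) of optimal probabilities of playing
    strategy 0: a single point, except at indifference, where the
    whole interval [0, 1] is a best response."""
    u0 = p * payoffs[0][0] + (1 - p) * payoffs[0][1]
    u1 = p * payoffs[1][0] + (1 - p) * payoffs[1][1]
    if u0 > u1:
        return (1.0, 1.0)
    if u1 > u0:
        return (0.0, 0.0)
    return (0.0, 1.0)

# hypothetical stag hunt payoffs: strategy 0 = Stag, 1 = Hare
stag_hunt = [[2, 0], [1, 1]]
print(best_response(stag_hunt, 0.9))  # (1.0, 1.0): play Stag
print(best_response(stag_hunt, 0.1))  # (0.0, 0.0): play Hare
print(best_response(stag_hunt, 0.5))  # (0.0, 1.0): indifferent
```

The indifference point p = 0.5 is where the mixed Nash equilibrium of this game lies.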
Response correspondences for all 2x2 normal form games can be drawn with a line for each player in a unit square strategy space. Figures 1 to 3 graph the best response correspondences for the stag hunt game. The dotted line in Figure 1 shows the optimal probability that player Y plays 'Stag' (in the y-axis), as a function of the probability that player X plays Stag (shown in the x-axis). In Figure 2 the dotted line shows the optimal probability that player X plays 'Stag' (shown in the x-axis), as a function of the probability that player Y plays Stag (shown in the y-axis). Note that Figure 2 plots the independent and response variables in the opposite axes to those normally used, so that it may be superimposed onto the previous graph, to show the Nash equilibria at the points where the two player's best responses agree in Figure 3.
There are three distinctive reaction correspondence shapes, one for each of the three types of symmetric 2x2 games: coordination games, discoordination games and games with dominated strategies (the trivial fourth case in which payoffs are always equal for both moves is not really a game theoretical problem). Any payoff symmetric 2x2 game will take one of these three forms.
Coordination games
Games in which players score highest when both players choose the same strategy, such as the stag hunt and battle of the sexes, are called coordination games. These games have reaction correspondences of the same shape as Figure 3, where there is one Nash equilibrium in the bottom left corner, another in the top right, and a mixing Nash somewhere along the diagonal between the other two.
Anti-coordination games
Games such as the game of chicken and hawk-dove game in which players score highest when they choose opposite strategies, i.e., discoordinate, are called anti-coordination games. They have reaction correspondences (Figure 4) that cross in the opposite direction to coordination games, with three Nash equilibria, one in each of the top left and bottom right corners, where one player chooses one strategy, the other player chooses the opposite strategy. The third Nash equilibrium is a mixed strategy which lies along the diagonal from the bottom left to top right corners. If the players do not know which one of them is which, then the mixed Nash is an evolutionarily stable strategy (ESS), as play is confined to the bottom left to top right diagonal line. Otherwise an uncorrelated asymmetry is said to exist, and the corner Nash equilibria are ESSes.
Games with dominated strategies
Games with dominated strategies have reaction correspondences which only cross at one point, which will be in either the bottom left, or top right corner in payoff symmetric 2x2 games. For instance, in the single-play prisoner's dilemma, the "Cooperate" move is not optimal for any probability of opponent Cooperation. Figure 5 shows the reaction correspondence for such a game, where the dimensions are "Probability play Cooperate", the Nash equilibrium is in the lower left corner where neither player plays Cooperate. If the dimensions were defined as "Probability play Defect", then both players' best response curves would be 1 for all opponent strategy probabilities and the reaction correspondences would cross (and form a Nash equilibrium) at the top right corner.
Other (payoff asymmetric) games
A wider range of reaction correspondences shapes is possible in 2x2 games with payoff asymmetries. For each player there are five possible best response shapes, shown in Figure 6. From left to right these are: dominated strategy (always play 2), dominated strategy (always play 1), rising (play strategy 2 if probability that the other player plays 2 is above threshold), falling (play strategy 1 if probability that the other player plays 2 is above threshold), and indifferent (both strategies play equally well under all conditions).
While there are only four possible types of payoff symmetric 2x2 games (of which one is trivial), the five different best response curves per player allow for a larger number of payoff asymmetric game types. Many of these are not truly different from each other. The dimensions may be redefined (exchange names of strategies 1 and 2) to produce symmetrical games which are logically identical.
Matching pennies
One well-known game with payoff asymmetries is the matching pennies game. In this game one player, the row player — graphed on the y dimension — wins if the players coordinate (both choose heads or both choose tails) while the other player, the column player — shown in the x-axis — wins if the players discoordinate. Player Y's reaction correspondence is that of a coordination game, while that of player X is a discoordination game. The only Nash equilibrium is the combination of mixed strategies where both players independently choose heads and tails with probability 0.5 each.
Dynamics
In evolutionary game theory, best response dynamics represents a class of strategy updating rules, where players' strategies in the next round are determined by their best responses to some subset of the population. Some examples include:
• In a large population model, players choose their next action probabilistically based on which strategies are best responses to the population as a whole.
• In a spatial model, players choose (in the next round) the action that is the best response to all of their neighbors (Ellison 1993).
Importantly, in these models players only choose the best response on the next round that would give them the highest payoff on the next round. Players do not consider the effect that choosing a strategy on the next round would have on future play in the game. This constraint results in the dynamical rule often being called myopic best response.
In the theory of potential games, best response dynamics refers to a way of finding a Nash equilibrium by computing the best response for every player:
Theorem: In any finite potential game, best response dynamics always converge to a Nash equilibrium. (Nisan et al. 2007, Section 19.3.2)
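The convergence stated in the theorem can be illustrated on a small potential game. The sketch below (assumptions: a symmetric 2x2 coordination game, which is a potential game, with players updating in alternation) shows myopic best response reaching a pure Nash equilibrium:

```python
# Myopic best-response dynamics in a 2-player coordination game (a potential game):
# each round, one player switches to a best response against the other's current action.
payoff = [[2, 0],
          [0, 1]]  # payoff[a][other]: players prefer matching actions

def best_response(other_action):
    """Action maximizing the payoff against the other player's current action."""
    return max((0, 1), key=lambda a: payoff[a][other_action])

state = [0, 1]  # start uncoordinated
for t in range(10):
    i = t % 2                      # players update in turn
    state[i] = best_response(state[1 - i])

print(state)  # [1, 1], a pure Nash equilibrium
```

After one update the players coordinate and no further deviation is profitable, matching the theorem's guarantee for finite potential games.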
Smoothed
Instead of best response correspondences, some models use smoothed best response functions. These functions are similar to the best response correspondence, except that the function does not "jump" from one pure strategy to another. The difference is illustrated in Figure 8, where black represents the best response correspondence and the other colors each represent different smoothed best response functions. In standard best response correspondences, even the slightest benefit to one action will result in the individual playing that action with probability 1. In smoothed best response, as the difference between the two actions decreases, the individual's play approaches 50:50.
There are many functions that represent smoothed best response functions. The functions illustrated here are several variations on the following function:
${\frac {e^{E(1)/\gamma }}{e^{E(1)/\gamma }+e^{E(2)/\gamma }}}$
where $E(x)$ represents the expected payoff of action $x$, and $\gamma $ is a parameter that determines the degree to which the function deviates from the true best response (a larger $\gamma $ implies that the player is more likely to make 'mistakes').
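The formula above is the logit choice rule, and it is straightforward to compute. A minimal sketch (function name and the max-subtraction for numerical stability are ours):

```python
import math

def smoothed_best_response(e1, e2, gamma):
    """Logit probability of playing action 1, given expected payoffs E(1) and E(2)."""
    m = max(e1, e2)               # subtract the max before exponentiating, for stability
    w1 = math.exp((e1 - m) / gamma)
    w2 = math.exp((e2 - m) / gamma)
    return w1 / (w1 + w2)

# As gamma -> 0 the rule approaches the exact best response;
# as gamma grows, play approaches 50:50 even for unequal payoffs.
for gamma in (0.01, 0.5, 5.0):
    print(gamma, smoothed_best_response(1.0, 0.8, gamma))
```

With a payoff gap of 0.2, gamma = 0.01 gives the better action probability near 1, while gamma = 5 gives a probability near 0.5, illustrating the 'mistakes' interpretation of large gamma.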
There are several advantages to using smoothed best response, both theoretical and empirical. First, it is consistent with psychological experiments; when individuals are roughly indifferent between two actions they appear to choose more or less at random. Second, the play of individuals is uniquely determined in all cases, since the smoothed best response is single-valued — a function rather than a correspondence. Finally, using smoothed best response with some learning rules (as in Fictitious play) can result in players learning to play mixed strategy Nash equilibria (Fudenberg & Levine 1998).
See also
• Solved game
References
• Ellison, G. (1993), "Learning, Local Interaction, and Coordination" (PDF), Econometrica, 61 (5): 1047–1071, doi:10.2307/2951493, JSTOR 2951493
• Fudenberg, D.; Levine, David K. (1998), The Theory of Learning in Games, Cambridge MA: MIT Press
• Fudenberg, Drew; Tirole, Jean (1991). Game theory. Cambridge, Massachusetts: MIT Press. ISBN 9780262061414. Book preview.
• Gibbons, R. (1992), A primer in game theory, Harvester-Wheatsheaf, S2CID 10248389
• Nash, John F. (1950), "Equilibrium points in n-person games", Proceedings of the National Academy of Sciences of the United States of America, 36 (1): 48–49, Bibcode:1950PNAS...36...48N, doi:10.1073/pnas.36.1.48, PMC 1063129, PMID 16588946
• Osborne, M.J.; Rubinstein, Ariel (1994), A course in game theory, Cambridge MA: MIT Press
• Young, H.P. (2005), Strategic Learning and Its Limits, Oxford University Press
• Nisan, N.; Roughgarden, T.; Tardos, É.; Vazirani, V.V. (2007), Algorithmic Game Theory (PDF), New York: Cambridge University Press
|
Wikipedia
|
Read's conjecture
Read's conjecture is a conjecture, first made by Ronald Read, about the unimodality of the coefficients of chromatic polynomials in the context of graph theory.[1][2] In 1974, S. G. Hoggar tightened this to the conjecture that the coefficients must be strongly log-concave. Hoggar's version of the conjecture is called the Read–Hoggar conjecture.[3][4]
The Read–Hoggar conjecture had been unresolved for more than 40 years before June Huh proved it in 2009, during his PhD studies, using methods from algebraic geometry.[1][5][6][7]
References
1. Baker, Matthew (January 2018). "Hodge theory in combinatorics". Bulletin of the American Mathematical Society. 55 (1): 57–80. doi:10.1090/bull/1599. ISSN 0273-0979. S2CID 51813455.
2. R. C. Read, An introduction to chromatic polynomials, J. Combinatorial Theory 4 (1968), 52–71. MR0224505 (37:104)
3. Hoggar, S. G (1974-06-01). "Chromatic polynomials and logarithmic concavity". Journal of Combinatorial Theory. Series B. 16 (3): 248–254. doi:10.1016/0095-8956(74)90071-9. ISSN 0095-8956.
4. Huh, June. "Hard Lefschetz theorem and Hodge-Riemann relations for combinatorial geometries" (PDF).
5. "He Dropped Out to Become a Poet. Now He's Won a Fields Medal". Quanta Magazine. 5 July 2022. Retrieved 5 July 2022.
6. Kalai, Gil (July 2022). "The Work of June Huh" (PDF). Proceedings of the International Congress of Mathematicians 2022: 1–16., pp. 2–4.
7. Huh, June (2012). "Milnor numbers of projective hypersurfaces and the chromatic polynomial of graphs". Journal of the American Mathematical Society. 25: 907–927. arXiv:1008.4749. doi:10.1090/S0894-0347-2012-00731-0.
Read-once function
In mathematics, a read-once function is a special type of Boolean function that can be described by a Boolean expression in which each variable appears only once.
More precisely, the expression is required to use only the operations of logical conjunction, logical disjunction, and negation. By applying De Morgan's laws, such an expression can be transformed into one in which negation is used only on individual variables (still with each variable appearing only once). By replacing each negated variable with a new positive variable representing its negation, such a function can be transformed into an equivalent positive read-once Boolean function, represented by a read-once expression without negations.[1]
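The De Morgan transformation mentioned above — pushing negations down to individual variables — can be sketched directly on expression trees. The tuple encoding and function name below are ours, for illustration only:

```python
# Push negations to the leaves with De Morgan's laws (negation normal form).
# Expressions are nested tuples: ("var", name), ("not", e), ("and", e1, e2), ("or", e1, e2).

def to_nnf(expr, negate=False):
    op = expr[0]
    if op == "var":
        return ("not", expr) if negate else expr
    if op == "not":
        return to_nnf(expr[1], not negate)   # double negations cancel
    if negate:
        op = "or" if op == "and" else "and"  # De Morgan: negation swaps and/or
    return (op, to_nnf(expr[1], negate), to_nnf(expr[2], negate))

e = ("not", ("and", ("var", "a"), ("not", ("var", "b"))))  # NOT(a AND NOT b)
print(to_nnf(e))  # ('or', ('not', ('var', 'a')), ('var', 'b'))
```

Replacing each remaining `("not", ("var", x))` leaf by a fresh positive variable then yields the equivalent positive read-once function described in the text.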
Examples
For example, for three variables a, b, and c, the expressions
$a\wedge b\wedge c$
$a\wedge (b\vee c)$
$(a\wedge b)\vee c$, and
$a\vee b\vee c$
are all read-once (as are the other functions obtained by permuting the variables in these expressions). However, the Boolean median operation, given by the expression
$(a\vee b)\wedge (a\vee c)\wedge (b\vee c)$
is not read-once: this formula has more than one copy of each variable, and there is no equivalent formula that uses each variable only once.[2]
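For three variables the claim that no read-once formula computes the median can be verified by brute force: enumerate the truth tables of all positive read-once formulas using each of a, b, c once (up to commutativity every such formula has the shape `(x op1 y) op2 z`), and check that the median's table is not among them. A small sketch (helper names are ours):

```python
from itertools import permutations, product

def positive_read_once_tables():
    """Truth tables (tuples over all assignments to (a, b, c)) of every
    positive read-once function using each of the three variables once."""
    assigns = list(product([0, 1], repeat=3))
    ops = [lambda x, y: x & y, lambda x, y: x | y]
    tables = set()
    for x, y, z in permutations(range(3)):       # leaf labelings
        for op1 in ops:
            for op2 in ops:                      # (v[x] op1 v[y]) op2 v[z]
                tables.add(tuple(op2(op1(v[x], v[y]), v[z]) for v in assigns))
    return tables

tables = positive_read_once_tables()
assigns = list(product([0, 1], repeat=3))
median = tuple(1 if a + b + c >= 2 else 0 for (a, b, c) in assigns)
print(median in tables)   # False: the median is not read-once
example = tuple(v[0] & (v[1] | v[2]) for v in assigns)
print(example in tables)  # True: a AND (b OR c) is read-once
```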
Characterization
The disjunctive normal form of a (positive) read-once function is not generally itself read-once. Nevertheless, it carries important information about the function. In particular, if one forms a co-occurrence graph in which the vertices represent variables, and edges connect pairs of variables that both occur in the same prime implicant of the disjunctive normal form, then the co-occurrence graph of a read-once function is necessarily a cograph. More precisely, a positive Boolean function is read-once if and only if its co-occurrence graph is a cograph, and in addition every maximal clique of the co-occurrence graph forms one of the conjunctions (prime implicants) of the disjunctive normal form.[3] That is, when interpreted as a function on sets of vertices of its co-occurrence graph, a read-once function is true for sets of vertices that contain a maximal clique, and false otherwise. For instance, the median function has the same co-occurrence graph as the conjunction of three variables, a triangle graph, but the three-vertex complete subgraph of this graph (the whole graph) forms a prime implicant only for the conjunction and not for the median.[4] Two variables of a positive read-once expression are adjacent in the co-occurrence graph if and only if their lowest common ancestor in the expression is a conjunction,[5] so the expression tree can be interpreted as a cotree for the corresponding cograph.[6]
Another alternative characterization of positive read-once functions combines their disjunctive and conjunctive normal form. A positive function of a given system of variables, that uses all of its variables, is read-once if and only if every prime implicant of the disjunctive normal form and every clause of the conjunctive normal form have exactly one variable in common.[7]
Recognition
It is possible to recognize read-once functions from their disjunctive normal form expressions in polynomial time.[8] It is also possible to find a read-once expression for a positive read-once function, given access to the function only through a "black box" that allows its evaluation at any truth assignment, using only a quadratic number of function evaluations.[9]
Notes
1. Golumbic & Gurvich (2011), p. 519.
2. Golumbic & Gurvich (2011), p. 520.
3. Golumbic & Gurvich (2011), Theorem 10.1, p. 521; Golumbic, Mintz & Rotics (2006).
4. Golumbic & Gurvich (2011), Examples f2 and f3, p. 521.
5. Golumbic & Gurvich (2011), Lemma 10.1, p. 529.
6. Golumbic & Gurvich (2011), Remark 10.4, pp. 540–541.
7. Gurvič (1977); Mundici (1989); Karchmer et al. (1993).
8. Golumbic & Gurvich (2011), Theorem 10.8, p. 541; Golumbic, Mintz & Rotics (2006); Golumbic, Mintz & Rotics (2008).
9. Golumbic & Gurvich (2011), Theorem 10.9, p. 548; Angluin, Hellerstein & Karpinski (1993).
References
• Angluin, Dana; Hellerstein, Lisa; Karpinski, Marek (1993), "Learning read-once formulas with queries", Journal of the ACM, 40 (1): 185–210, CiteSeerX 10.1.1.7.5033, doi:10.1145/138027.138061, MR 1202143, S2CID 6671840.
• Golumbic, Martin C.; Gurvich, Vladimir (2011), "Read-once functions" (PDF), in Crama, Yves; Hammer, Peter L. (eds.), Boolean functions, Encyclopedia of Mathematics and its Applications, vol. 142, Cambridge University Press, Cambridge, pp. 519–560, doi:10.1017/CBO9780511852008, ISBN 978-0-521-84751-3, MR 2742439.
• Golumbic, Martin Charles; Mintz, Aviad; Rotics, Udi (2006), "Factoring and recognition of read-once functions using cographs and normality and the readability of functions associated with partial k-trees", Discrete Applied Mathematics, 154 (10): 1465–1477, doi:10.1016/j.dam.2005.09.016, MR 2222833.
• Golumbic, Martin Charles; Mintz, Aviad; Rotics, Udi (2008), "An improvement on the complexity of factoring read-once Boolean functions", Discrete Applied Mathematics, 156 (10): 1633–1636, doi:10.1016/j.dam.2008.02.011, MR 2432929.
• Gurvič, V. A. (1977), "Repetition-free Boolean functions", Uspekhi Matematicheskikh Nauk, 32 (1(193)): 183–184, MR 0441560.
• Karchmer, M.; Linial, N.; Newman, I.; Saks, M.; Wigderson, A. (1993), "Combinatorial characterization of read-once formulae", Discrete Mathematics, 114 (1–3): 275–282, doi:10.1016/0012-365X(93)90372-Z, MR 1217758.
• Mundici, Daniele (1989), "Functions computed by monotone Boolean formulas with no repeated variables", Theoretical Computer Science, 66 (1): 113–114, doi:10.1016/0304-3975(89)90150-3, MR 1018849.
Analytic manifold
In mathematics, an analytic manifold, also known as a $C^{\omega }$ manifold, is a differentiable manifold with analytic transition maps.[1] The term usually refers to real analytic manifolds, although complex manifolds are also analytic.[2] In algebraic geometry, analytic spaces are a generalization of analytic manifolds such that singularities are permitted.
For $U\subseteq \mathbb {R} ^{n}$, the space of analytic functions, $C^{\omega }(U)$, consists of infinitely differentiable functions $f:U\to \mathbb {R} $, such that the Taylor series
$T_{f}(\mathbf {x} )=\sum _{|\alpha |\geq 0}{\frac {D^{\alpha }f(\mathbf {x_{0}} )}{\alpha !}}(\mathbf {x} -\mathbf {x_{0}} )^{\alpha }$
converges to $f(\mathbf {x} )$ in a neighborhood of $\mathbf {x_{0}} $, for all $\mathbf {x_{0}} \in U$. The requirement that the transition maps be analytic is significantly more restrictive than that they be infinitely differentiable; the analytic manifolds are a proper subset of the smooth, i.e. $C^{\infty }$, manifolds.[1] There are many similarities between the theory of analytic and smooth manifolds, but a critical difference is that analytic manifolds do not admit analytic partitions of unity, whereas smooth partitions of unity are an essential tool in the study of smooth manifolds.[3] A fuller description of the definitions and general theory can be found at differentiable manifolds, for the real case, and at complex manifolds, for the complex case.
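A standard textbook example (not specific to this article) of the gap between $C^{\infty }$ and $C^{\omega }$ is the function

```latex
f(x) =
\begin{cases}
e^{-1/x^{2}}, & x \neq 0,\\
0, & x = 0,
\end{cases}
\qquad\text{with}\qquad
f^{(k)}(0) = 0 \ \text{for all } k \geq 0 .
```

Its Taylor series at $0$ is identically zero, so it does not converge to $f$ on any neighborhood of $0$; hence $f$ is infinitely differentiable but not analytic, witnessing that the analytic functions form a proper subset of the smooth ones.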
See also
• Complex manifold
• Analytic variety
• Algebraic geometry § Analytic geometry
References
1. Varadarajan, V. S. (1984), Varadarajan, V. S. (ed.), "Differentiable and Analytic Manifolds", Lie Groups, Lie Algebras, and Their Representations, Graduate Texts in Mathematics, Springer, vol. 102, pp. 1–40, doi:10.1007/978-1-4612-1126-6_1, ISBN 978-1-4612-1126-6
2. Vaughn, Michael T. (2008), Introduction to Mathematical Physics, John Wiley & Sons, p. 98, ISBN 9783527618866.
3. Tu, Loring W. (2011). An Introduction to Manifolds. Universitext. New York, NY: Springer New York. doi:10.1007/978-1-4419-7400-6. ISBN 978-1-4419-7399-3.
Real-root isolation
In mathematics, and more specifically in numerical analysis and computer algebra, real-root isolation of a polynomial consists of producing disjoint intervals of the real line that each contain exactly one real root of the polynomial and that, together, contain all the real roots of the polynomial.
Real-root isolation is useful because usual root-finding algorithms for computing the real roots of a polynomial may produce some real roots but cannot generally certify having found all real roots. In particular, if such an algorithm does not find any root, one does not know whether this is because there is no real root. Some algorithms compute all complex roots, but, as there are generally far fewer real roots than complex roots, most of their computation time is spent computing non-real roots (on average, a polynomial of degree n has n complex roots and only log n real roots; see Geometrical properties of polynomial roots § Real roots). Moreover, it may be difficult to distinguish the real roots from the non-real roots with small imaginary part (see the example of Wilkinson's polynomial in the next section).
The first complete real-root isolation algorithm results from Sturm's theorem (1829). However, when real-root-isolation algorithms began to be implemented on computers it appeared that algorithms derived from Sturm's theorem are less efficient than those derived from Descartes' rule of signs (1637).
Since the beginning of the 20th century there has been active research on improving the algorithms derived from Descartes' rule of signs, producing very efficient implementations and analyzing their computational complexity. The best implementations can routinely isolate real roots of polynomials of degree more than 1,000.[1][2]
Specifications and general strategy
For finding real roots of a polynomial, the common strategy is to divide the real line (or an interval of it where roots are searched) into disjoint intervals until each interval contains at most one root. Such a procedure is called root isolation, and a resulting interval that contains exactly one root is an isolating interval for this root.
Wilkinson's polynomial shows that a very small modification of one coefficient of a polynomial may change dramatically not only the value of the roots, but also their nature (real or complex). Also, even with a good approximation, when one evaluates a polynomial at an approximate root, one may get a result that is far from zero. For example, if a polynomial of degree 20 (the degree of Wilkinson's polynomial) has a root close to 10, the derivative of the polynomial at the root may be of the order of $10^{20};$ this implies that an error of $10^{-10}$ on the value of the root may produce a value of the polynomial at the approximate root that is of the order of $10^{10}.$ It follows that, except maybe for very low degrees, a root-isolation procedure cannot give reliable results without using exact arithmetic. Therefore, if one wants to isolate roots of a polynomial with floating-point coefficients, it is often better to convert them to rational numbers, and then take the primitive part of the resulting polynomial, to obtain a polynomial with integer coefficients.
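The magnification effect can be demonstrated with exact rational arithmetic. The sketch below evaluates Wilkinson's polynomial near its largest root, x = 20, where the magnification factor is p′(20) = 19! ≈ 1.2×10¹⁷ (smaller than the 10²⁰ order cited above for a generic degree-20 polynomial, but qualitatively the same phenomenon):

```python
from fractions import Fraction

# Exact evaluation of Wilkinson's polynomial p(x) = (x-1)(x-2)...(x-20)
# near the root x = 20, using rational arithmetic throughout.
def p(x):
    v = Fraction(1)
    for i in range(1, 21):
        v *= x - i
    return v

eps = Fraction(1, 10 ** 10)
print(p(Fraction(20)))     # 0: x = 20 is an exact root
print(float(p(20 + eps)))  # about 1.2e7: the 1e-10 error is magnified by ~19!
```

An error of only 10⁻¹⁰ in the root thus moves the polynomial's value to the order of 10⁷, illustrating why exact arithmetic is needed for reliable isolation.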
For this reason, although the methods that are described below work theoretically with real numbers, they are generally used in practice with polynomials with integer coefficients and intervals with rational endpoints. Also, the polynomials are always supposed to be square-free. There are two reasons for this. Firstly, Yun's algorithm for computing the square-free factorization is less costly than twice the cost of the computation of the greatest common divisor of the polynomial and its derivative. As this may produce factors of lower degrees, it is generally advantageous to apply root-isolation algorithms only to polynomials without multiple roots, even when the algorithm does not require this. The second reason for considering only square-free polynomials is that the fastest root-isolation algorithms do not work in the case of multiple roots.
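As a sketch of the square-free preprocessing (using the simple gcd-based computation p / gcd(p, p′) rather than Yun's faster algorithm; all helper names are ours), with coefficients listed lowest degree first:

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Quotient and remainder of a / b; coefficient lists, lowest degree first."""
    a = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b):
        k = len(a) - len(b)
        c = a[-1] / b[-1]
        q[k] = c
        for i, bc in enumerate(b):
            a[i + k] -= c * bc
        a.pop()
        while a and a[-1] == 0:   # strip leading zeros of the remainder
            a.pop()
    return q, a

def poly_gcd(a, b):
    while b:
        a, b = b, poly_divmod(a, b)[1]
    return [c / a[-1] for c in a]  # normalize to a monic gcd

def square_free_part(p):
    dp = [i * Fraction(c) for i, c in enumerate(p)][1:]  # derivative
    g = poly_gcd([Fraction(c) for c in p], dp)
    return poly_divmod(p, g)[0]

# p(x) = (x - 1)^2 (x + 2) = x^3 - 3x + 2
print(square_free_part([2, -3, 0, 1]))  # x^2 + x - 2, i.e. [-2, 1, 1]
```

The double root at 1 is reduced to a simple root, so the result has the same real roots as p but is square-free, as the isolation algorithms require.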
For root isolation, one requires a procedure for counting the real roots of a polynomial in an interval without having to compute them, or at least a procedure for deciding whether an interval contains zero, one or more roots. With such a decision procedure, one may work with a working list of intervals that may contain real roots. At the beginning, the list contains a single interval containing all roots of interest, generally the whole real line or its positive part. Then each interval of the list is divided into two smaller intervals. If one of the new intervals does not contain any root, it is removed from the list. If it contains one root, it is put in an output list of isolating intervals. Otherwise, it is kept in the working list for further divisions, and the process continues until all roots are eventually isolated.
Sturm's theorem
The first complete root-isolation procedure results from Sturm's theorem (1829), which expresses the number of real roots in an interval in terms of the number of sign variations of the values of a sequence of polynomials, called Sturm's sequence, at the ends of the interval. Sturm's sequence is the sequence of remainders that occur in a variant of the Euclidean algorithm applied to the polynomial and its derivative. When implemented on computers, it appeared that root isolation with Sturm's theorem is less efficient than the other methods that are described below.[3] Consequently, Sturm's theorem is rarely used for effective computations, although it remains useful for theoretical purposes.
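The Sturm-sequence count can be sketched in a few lines of exact rational arithmetic (an illustrative implementation for square-free input, with our own helper names; coefficients listed lowest degree first):

```python
from fractions import Fraction

def poly_mod(a, b):
    """Remainder of polynomial a modulo b; coefficients lowest degree first."""
    a = [Fraction(c) for c in a]
    while len(a) >= len(b):
        c = a[-1] / b[-1]
        k = len(a) - len(b)
        for i, bc in enumerate(b):
            a[i + k] -= c * bc
        a.pop()
        while a and a[-1] == 0:
            a.pop()
    return a

def polyval(p, x):
    return sum(c * x ** i for i, c in enumerate(p))

def sign_variations(values):
    signs = [v for v in values if v != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)

def sturm_count(p, a, b):
    """Number of distinct real roots of square-free p in the interval (a, b]."""
    chain = [[Fraction(c) for c in p]]
    chain.append([Fraction(i * c) for i, c in enumerate(p)][1:])  # derivative
    while chain[-1]:
        chain.append([-c for c in poly_mod(chain[-2], chain[-1])])  # negated remainders
    chain.pop()  # drop the final zero remainder
    va = sign_variations([polyval(q, Fraction(a)) for q in chain])
    vb = sign_variations([polyval(q, Fraction(b)) for q in chain])
    return va - vb

# p(x) = x^3 - 7x + 6 = (x - 1)(x - 2)(x + 3)
print(sturm_count([6, -7, 0, 1], 0, 3))   # 2 (roots 1 and 2)
print(sturm_count([6, -7, 0, 1], -4, 0))  # 1 (root -3)
```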
Descartes' rule of signs and its generalizations
Descartes' rule of signs asserts that the difference between the number of sign variations in the sequence of the coefficients of a polynomial and the number of its positive real roots is a nonnegative even integer. It follows that if this number of sign variations is zero, then the polynomial does not have any positive real root, and if this number is one, then the polynomial has a unique positive real root, which is a simple root. Unfortunately the converse is not true, that is, a polynomial which has either no positive real root or has a single positive simple root may have a number of sign variations greater than 1.
This has been generalized by Budan's theorem (1807) into a similar result for the real roots in a half-open interval (a, b]: If f(x) is a polynomial, and v is the difference between the numbers of sign variations of the sequences of the coefficients of f(x + a) and f(x + b), then v minus the number of real roots in the interval, counted with their multiplicities, is a nonnegative even integer. This is a generalization of Descartes' rule of signs, because, for b sufficiently large, there is no sign variation in the coefficients of f(x + b), and all real roots are smaller than b.
Budan's theorem may provide a real-root-isolation algorithm for a square-free polynomial (a polynomial without multiple roots): from the coefficients of the polynomial, one may compute an upper bound M of the absolute values of the roots and a lower bound m on the absolute values of the differences of two roots (see Properties of polynomial roots). Then, if one divides the interval [–M, M] into intervals of length less than m, then every real root is contained in some interval, and no interval contains two roots. The isolating intervals are thus the intervals for which Budan's theorem asserts an odd number of roots.
However, this algorithm is very inefficient, as one cannot use a coarser partition of the interval [–M, M]: if Budan's theorem gives a result larger than 1 for an interval of larger size, there is no way to ensure that it does not contain several roots.
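The counting step used above — sign variations of the shifted polynomials f(x + a) and f(x + b) — can be sketched as follows (an illustrative implementation with our own helper names; coefficients listed lowest degree first):

```python
from math import comb

def taylor_shift(coeffs, a):
    """Coefficients (lowest degree first) of f(x + a), given those of f."""
    n = len(coeffs)
    out = [0] * n
    for k, c in enumerate(coeffs):
        for j in range(k + 1):                 # expand c * (x + a)^k binomially
            out[j] += c * comb(k, j) * a ** (k - j)
    return out

def sign_variations(coeffs):
    signs = [c for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)

def budan_count(coeffs, a, b):
    """Budan's bound on the number of roots in (a, b]: var(f(x+a)) - var(f(x+b))."""
    return sign_variations(taylor_shift(coeffs, a)) - sign_variations(taylor_shift(coeffs, b))

# f(x) = x^3 - 7x + 6 = (x - 1)(x - 2)(x + 3)
print(budan_count([6, -7, 0, 1], 0, 3))  # 2: roots 1 and 2 lie in (0, 3]
```

Here f(x + 3) = x³ + 9x² + 20x + 12 has no sign variation while f itself has two, so Budan's theorem reports exactly the two positive roots.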
Vincent's and related theorems
Vincent's theorem (1834)[4] provides a method for real-root isolation, which is at the basis of the most efficient real-root-isolation algorithms. It concerns the positive real roots of a square-free polynomial (that is a polynomial without multiple roots). If $a_{1},a_{2},\ldots ,$ is a sequence of positive real numbers, let
$c_{k}=a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{a_{3}+{\cfrac {1}{\ddots +{\cfrac {1}{a_{k}}}}}}}}}$
be the kth convergent of the continued fraction
$a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{a_{3}+{\cfrac {1}{\ddots }}}}}}.$
Vincent's theorem — Let $p_{0}(x)$ be a square-free polynomial of degree n, and $a_{1},a_{2},\ldots ,$ be a sequence of real numbers. For i = 1, 2,..., consider the polynomial
$p_{i}(x)=x^{n}p_{i-1}\left(a_{i}+1/x\right).$
Then, there is an integer k such that either $p_{k}(0)=0,$ or the sequence of the coefficients of $p_{k}$ has at most one sign variation.
In the first case, the convergent ck is a positive root of $p_{0}.$ Otherwise, this number of sign variations (either 0 or 1) is the number of real roots of $p_{0}$ in the interval defined by $c_{k-1}$ and $c_{k}.$
For proving his theorem, Vincent proved a result that is useful on its own:[4]
Vincent's auxiliary theorem — If p(x) is a square-free polynomial of degree n, and a, b, c, d are nonnegative real numbers such that $\left|{\frac {a}{c}}-{\frac {b}{d}}\right|$ is small enough (but not 0), then there is at most one sign variation in the coefficients of the polynomial
$q(x)=(cx+d)^{n}p\left({\frac {ax+b}{cx+d}}\right),$
and this number of sign variations is the number of real roots of p(x) in the open interval defined by ${\frac {a}{c}}$ and ${\frac {b}{d}}.$
For working with real numbers, one may always choose c = d = 1, but, as effective computations are done with rational numbers, it is generally convenient to suppose that a, b, c, d are integers.
The "small enough" condition has been quantified independently by Nikola Obreshkov,[5] and Alexander Ostrowski:[6]
Theorem (Obreschkoff–Ostrowski) — The conclusion of Vincent's auxiliary result holds if the polynomial p(x) has at most one root α + iβ such that
$\left(\alpha -{\frac {a}{c}}\right)\left(\alpha -{\frac {b}{d}}\right)+\beta ^{2}\leq {\frac {1}{\sqrt {3}}}\left|\beta \left({\frac {a}{c}}-{\frac {b}{d}}\right)\right|.$
In particular the conclusion holds if
$\left|{\frac {a}{c}}-{\frac {b}{d}}\right|<{\frac {\operatorname {sep} (p)}{2{\sqrt {3}}}},$
where sep(p) is the minimal distance between two roots of p.
For polynomials with integer coefficients, the minimum distance sep(p) may be lower bounded in terms of the degree of the polynomial and the maximal absolute value of its coefficients; see Properties of polynomial roots § Root separation. This allows the analysis of the worst-case complexity of algorithms based on Vincent's theorems. However, the Obreschkoff–Ostrowski theorem shows that the number of iterations of these algorithms depends on the distances between roots in the neighborhood of the working interval; therefore, the number of iterations may vary dramatically for different roots of the same polynomial.
James V. Uspensky gave a bound on the length of the continued fraction (the integer k needed in Vincent's theorem for getting zero or one sign variation):[1][7]
Theorem (Uspensky) — Let p(x) be a polynomial of degree n, and sep(p) be the minimal distance between two roots of p. Let
$\varepsilon =\left(n+{\frac {1}{n}}\right)^{\frac {1}{n-1}}-1.$
Then the integer k, whose existence is asserted in Vincent's theorem, is not greater than the smallest integer h such that
$F_{h-1}\operatorname {sep} (p)>2\quad {\text{and}}\quad F_{h-1}F_{h}\operatorname {sep} (p)>{\frac {1}{\varepsilon }},$
where $F_{h}$ is the hth Fibonacci number.
Continued fraction method
The use of continued fractions for real-root isolation was introduced by Vincent, although he credited Joseph-Louis Lagrange for this idea, without providing a reference.[4] To make an algorithm from Vincent's theorem, one must provide a criterion for choosing the $a_{i}$ that occur in his theorem. Vincent himself provided some choice (see below). Some other choices are possible, and the efficiency of the algorithm may depend dramatically on these choices. Below is presented an algorithm in which these choices result from an auxiliary function that will be discussed later.
To run this algorithm, one must work with a list of intervals represented by a specific data structure. The algorithm works by choosing an interval, removing it from the list, adding zero, one or two smaller intervals to the list, and possibly outputting an isolating interval.
For isolating the real roots of a polynomial p(x) of degree n, each interval is represented by a pair $(A(x),M(x)),$ where A(x) is a polynomial of degree n and $M(x)={\frac {px+r}{qx+s}}$ is a Möbius transformation with integer coefficients. One has
$A(x)=(qx+s)^{n}p(M(x)),$
and the interval represented by this data structure is the interval that has $M(\infty )={\frac {p}{q}}$ and $M(0)={\frac {r}{s}}$ as end points. The Möbius transformation maps the roots of p in this interval to the roots of A in (0, +∞).
The algorithm works with a list of intervals that, at the beginning, contains the two intervals $(A(x)=p(x),M(x)=x)$ and $(A(x)=p(-x),M(x)=-x),$ corresponding to the partition of the reals into the positive and the negative ones (one may suppose that zero is not a root, as, if it were, it suffices to apply the algorithm to p(x)/x). Then, for each interval (A(x), M(x)) in the list, the algorithm removes it from the list; if the number of sign variations of the coefficients of A is zero, there is no root in the interval, and one passes to the next interval. If the number of sign variations is one, the interval defined by $M(0)$ and $M(\infty )$ is an isolating interval. Otherwise, one chooses a positive real number b for dividing the interval (0, +∞) into (0, b) and (b, +∞), and, for each subinterval, one composes M with a Möbius transformation that maps the interval onto (0, +∞), for getting two new intervals to be added to the list. In pseudocode, this gives the following, where var(A) denotes the number of sign variations of the coefficients of the polynomial A.
function continued fraction is
input: P(x), a square-free polynomial,
output: a list of pairs of rational numbers defining isolating intervals
/* Initialization */
L := [(P(x), x), (P(–x), –x)] /* two starting intervals */
Isol := [ ]
/* Computation */
while L ≠ [ ] do
Choose (A(x), M(x)) in L, and remove it from L
v := var(A)
if v = 0 then exit /* no root in the interval */
if v = 1 then /* isolating interval found */
add (M(0), M(∞)) to Isol
exit
b := some positive integer
B(x) := A(x + b)
w := v – var(B)
if B(0) = 0 then /* rational root found */
add (M(b), M(b)) to Isol
B(x) := B(x)/x
add (B(x), M(b + x)) to L /* roots in (M(b), M(+∞)) */
if w = 0 then exit /* Budan's theorem */
if w = 1 then /* Budan's theorem again */
add (M(0), M(b)) to Isol
if w > 1 then
add (A(b/(1 + x)), M(b/(1 + x))) to L /* roots in (M(0), M(b)) */
The different variants of the algorithm depend essentially on the choice of b. In Vincent's papers and in Uspensky's book, one has always b = 1, with the difference that Uspensky did not use Budan's theorem for avoiding further bisections of the interval associated with (0, b).
The drawback of always choosing b = 1 is that one has to do many successive changes of variable of the form x → 1 + x. These may be replaced by a single change of variable x → n + x, but, nevertheless, one has to do the intermediate changes of variables for applying Budan's theorem.
A way for improving the efficiency of the algorithm is to take for b a lower bound of the positive real roots, computed from the coefficients of the polynomial (see Properties of polynomial roots for such bounds).[8][1]
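One simple such bound is the classical Cauchy bound, used here purely as an illustration (the sharper bounds referenced above are preferable in practice; helper names are ours, and a nonzero constant term is assumed for the lower bound):

```python
def cauchy_upper_bound(coeffs):
    """Cauchy bound 1 + max|a_i|/|a_n|: upper bound on the absolute values of
    all (real or complex) roots; coefficients lowest degree first."""
    *rest, lead = coeffs
    return 1 + max(abs(c) for c in rest) / abs(lead)

def positive_root_lower_bound(coeffs):
    """Lower bound on the positive roots of p, as the reciprocal of an upper
    bound for the reciprocal polynomial x^n p(1/x) (constant term nonzero)."""
    return 1 / cauchy_upper_bound(coeffs[::-1])

# p(x) = x^2 - 2: roots are +/- sqrt(2)
print(cauchy_upper_bound([-2, 0, 1]))         # 3.0 >= sqrt(2)
print(positive_root_lower_bound([-2, 0, 1]))  # about 0.667 <= sqrt(2)
```

Taking b equal to such a lower bound lets the continued-fraction algorithm skip the portion (0, b) of the interval, which is guaranteed root-free.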
Bisection method
The bisection method consists roughly of starting from an interval containing all real roots of a polynomial and dividing it recursively into two parts until eventually getting intervals that contain either zero or one root. The starting interval may be of the form (-B, B), where B is an upper bound on the absolute values of the roots, such as those that are given in Properties of polynomial roots § Bounds on (complex) polynomial roots. For technical reasons (simpler changes of variable, simpler complexity analysis, possibility of taking advantage of the binary arithmetic of computers), the algorithms are generally presented as starting with the interval [0, 1]. There is no loss of generality, as the changes of variables x = By and x = –By move respectively the positive and the negative roots into the interval [0, 1]. (The single change of variable x = 2By – B may also be used.)
The method requires an algorithm for testing whether an interval contains zero, one, or possibly several roots, and, for guaranteeing termination, this testing algorithm must exclude the possibility of returning "possibly several roots" infinitely many times. Sturm's theorem and Vincent's auxiliary theorem provide such convenient tests. As the use of Descartes' rule of signs and Vincent's auxiliary theorem is much more computationally efficient than the use of Sturm's theorem, only the former is described in this section.
The bisection method based on Descartes' rule of signs and Vincent's auxiliary theorem was introduced in 1976 by Akritas and Collins under the name of modified Uspensky algorithm,[3] and has been referred to as the Uspensky algorithm, the Vincent–Akritas–Collins algorithm, or the Descartes method, although Descartes, Vincent and Uspensky never described it.
The method works as follows. For searching the roots in some interval, one first changes the variable so as to map the interval onto [0, 1], giving a new polynomial q(x). For searching the roots of q in [0, 1], one maps the interval [0, 1] onto [0, +∞) by the change of variable $x\to {\frac {1}{x+1}},$ giving a polynomial r(x). Descartes' rule of signs applied to the polynomial r gives indications on the number of real roots of q in the interval [0, 1], and thus on the number of roots of the initial polynomial in the interval that has been mapped onto [0, 1]. If there is no sign variation in the sequence of the coefficients of r, then there is no real root in the considered intervals. If there is one sign variation, then one has an isolation interval. Otherwise, one splits the interval [0, 1] into [0, 1/2] and [1/2, 1] and maps them onto [0, 1] by the changes of variable x = y/2 and x = (y + 1)/2. Vincent's auxiliary theorem ensures the termination of this procedure.
Except for the initialization, all these changes of variable consist of the composition of at most two very simple changes of variable: the scaling by two x → x/2, the translation x → x + 1, and the inversion x → 1/x, the latter amounting simply to reversing the order of the coefficients of the polynomial. As most of the computing time is devoted to changes of variable, mapping every interval onto [0, 1] is fundamental for ensuring good efficiency.
Pseudocode
The following notation is used in the pseudocode that follows.
• p(x) is the polynomial for which the real roots in the interval [0, 1] have to be isolated
• var(q(x)) denotes the number of sign variations in the sequence of the coefficients of the polynomial q
• The elements of working list have the form (c, k, q(x)), where
• c and k are two nonnegative integers such that c < 2k, which represent the interval $\left[{\frac {c}{2^{k}}},{\frac {c+1}{2^{k}}}\right],$
• $q(x)=2^{kn}p\left({\frac {x+c}{2^{k}}}\right),$ where n is the degree of p (the polynomial q may be computed directly from p, c and k, but it is less costly to compute it incrementally, as it will be done in the algorithm; if p has integer coefficients, the same is true for q)
function bisection is
input: p(x), a square-free polynomial, such that p(0) p(1) ≠ 0,
for which the roots in the interval [0, 1] are searched
output: a list of triples (c, k, h),
representing isolating intervals of the form $\left[{\frac {c}{2^{k}}},{\frac {c+h}{2^{k}}}\right]$
/* Initialization */
L := [(0, 0, p(x))] /* a single element in the working list L */
Isol := [ ]
n := degree(p)
/* Computation */
while L ≠ [ ] do
Choose (c, k, q(x)) in L, and remove it from L
if q(0) = 0 then
q(x) := q(x)/x
n := n – 1 /* A rational root found */
add (c, k, 0) to Isol
v := $\operatorname {var} ((x+1)^{n}q(1/(x+1)))$
if v = 1 then /* An isolating interval found */
add (c, k, 1) to Isol
if v > 1 then /* Bisecting */
add (2c, k + 1, $2^{n}q(x/2)$) to L
add (2c + 1, k + 1, $2^{n}q((x+1)/2)$) to L
end
This procedure is essentially the one described by Collins and Akritas.[3] The running time depends mainly on the number of intervals that have to be considered and on the changes of variable. There are ways of improving the efficiency, which have been an active subject of research since the publication of the algorithm, and especially since the beginning of the 21st century.
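As an illustration, the pseudocode above can be sketched in Python, with polynomials represented as lists of integer coefficients, constant term first. The helper names (`var`, `shift1`, `isolate01`) are ours, not part of the published algorithm; the translation x → x + 1 is done by a classical Taylor-shift loop, and the inversion by reversing the coefficient list.

```python
def var(coeffs):
    """Sign variations in a coefficient sequence (zero coefficients skipped)."""
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def shift1(q):
    """Coefficients of q(x + 1), by repeated synthetic (Horner) division."""
    q = list(q)
    for i in range(len(q) - 1):
        for j in range(len(q) - 2, i - 1, -1):
            q[j] += q[j + 1]
    return q

def isolate01(p):
    """Isolate the roots of a square-free p (with p(0)p(1) != 0) lying in (0, 1).
    Returns triples (c, k, h) encoding the intervals (c/2^k, (c+h)/2^k)."""
    isol, work = [], [(0, 0, list(p))]
    while work:
        c, k, q = work.pop()
        if q[0] == 0:                      # rational root at c/2^k
            q = q[1:]                      # divide q by x
            isol.append((c, k, 0))
        # var((x+1)^n q(1/(x+1))): reverse the coefficients, then shift by 1
        v = var(shift1(q[::-1]))
        if v == 1:                         # exactly one sign variation: isolating interval
            isol.append((c, k, 1))
        elif v > 1:                        # bisect
            n = len(q) - 1
            half = [qi * 2 ** (n - i) for i, qi in enumerate(q)]  # 2^n q(x/2)
            work.append((2 * c, k + 1, half))
            work.append((2 * c + 1, k + 1, shift1(half)))         # 2^n q((x+1)/2)
    return isol

# 16x^2 - 16x + 3 has roots 1/4 and 3/4
print(sorted(isolate01([3, -16, 16])))     # → [(0, 1, 1), (1, 1, 1)]
```

All intermediate polynomials keep integer coefficients, as noted above; only scalings by powers of two, additions, and list reversals are needed.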
Recent improvements
Various ways of improving the Akritas–Collins bisection algorithm have been proposed. They include a method for avoiding the storage of a long list of polynomials without losing the simplicity of the changes of variable,[9] the use of approximate arithmetic (floating point and interval arithmetic) when it suffices for getting the right value for the number of sign variations,[9] the use of Newton's method when possible,[9] the use of fast polynomial arithmetic,[10] shortcuts for long chains of bisections in the case of clusters of close roots,[10] and bisections into unequal parts for limiting instability problems in polynomial evaluation.[10]
All these improvements lead to an algorithm for isolating all the real roots of a polynomial with integer coefficients, which has the complexity (using soft O notation, Õ, for omitting logarithmic factors)
${\tilde {O}}(n^{2}(k+t)),$
where n is the degree of the polynomial, k is the number of nonzero terms, and t is the maximum number of digits of the coefficients.[10]
The implementation of this algorithm appears to be more efficient than any other implemented method for computing the real roots of a polynomial, even in the case of polynomials having very close roots (the case which was previously the most difficult for the bisection method).[2]
References
1. Tsigaridas & Emiris 2006
2. Kobel, Rouillier & Sagraloff 2016
3. Collins & Akritas 1976
4. Vincent 1834
5. Obreschkoff 1963
6. Ostrowski 1950
7. Uspensky 1948
8. Akritas & Strzeboński 2005
9. Rouillier & Zimmerman 2004
10. Sagraloff & Mehlhorn 2016
Sources
• Alesina, Alberto; Massimo Galuzzi (1998). "A new proof of Vincent's theorem". L'Enseignement Mathématique. 44 (3–4): 219–256. Archived from the original on 2014-07-14. Retrieved 2018-12-16.
• Akritas, Alkiviadis G. (1986). There's no "Uspensky's Method". Proceedings of the fifth ACM Symposium on Symbolic and Algebraic Computation (SYMSAC '86). Waterloo, Ontario, Canada. pp. 88–90.
• Akritas, Alkiviadis G.; Strzeboński, A. W.; Vigklas, P. S. (2008). "Improving the performance of the continued fractions method using new bounds of positive roots" (PDF). Nonlinear Analysis: Modelling and Control. 13 (3): 265–279. doi:10.15388/NA.2008.13.3.14557.
• Akritas, Alkiviadis G.; Strzeboński, Adam W. (2005). "A Comparative Study of Two Real Root Isolation Methods" (PDF). Nonlinear Analysis: Modelling and Control. 10 (4): 297–304. doi:10.15388/NA.2005.10.4.15110.
• Collins, George E.; Akritas, Alkiviadis G. (1976). Polynomial Real Root Isolation Using Descartes' Rule of Signs. SYMSAC '76, Proceedings of the third ACM symposium on Symbolic and algebraic computation. Yorktown Heights, NY, USA: ACM. pp. 272–275. doi:10.1145/800205.806346.
• Kobel, Alexander; Rouillier, Fabrice; Sagraloff, Michael (2016). "Computing real roots of real polynomials ... and now for real!". ISSAC '16, Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation. Waterloo, Canada. arXiv:1605.00410. doi:10.1145/2930889.2930937.
• Obreschkoff, Nikola (1963). Verteilung und Berechnung der Nullstellen reeller Polynome (in German). Berlin: VEB Deutscher Verlag der Wissenschaften. p. 81.
• Ostrowski, A. M. (1950). "Note on Vincent's theorem". Annals of Mathematics. Second Series. 52 (3): 702–707. doi:10.2307/1969443. JSTOR 1969443.
• Rouillier, F.; Zimmerman, P. (2004). "Efficient isolation of polynomial's real roots". Journal of Computational and Applied Mathematics. 162 (1): 33–50. Bibcode:2004JCoAM.162...33R. doi:10.1016/j.cam.2003.08.015.
• Sagraloff, M.; Mehlhorn, K. (2016). "Computing real roots of real polynomials". Journal of Symbolic Computation. 73: 46–86. arXiv:1308.4088. doi:10.1016/j.jsc.2015.03.004.
• Tsigaridas, Elias P.; Emiris, Ioannis Z. (2006). "Univariate Polynomial Real Root Isolation: Continued Fractions Revisited". In Azar, Yossi; Erlebach, Thomas (eds.). Algorithms - ESA 2006, 14th Annual European Symposium, Zurich, Switzerland, September 11-13, 2006, Proceedings. Lecture Notes in Computer Science. Vol. 4168. Springer. pp. 817–828. arXiv:cs/0604066. doi:10.1007/11841036_72.
• Uspensky, James Victor (1948). Theory of Equations. New York: McGraw–Hill Book Company.
• Vincent, Alexandre Joseph Hidulphe (1834). "Mémoire sur la résolution des équations numériques". Mémoires de la Société Royale des Sciences, de L' Agriculture et des Arts, de Lille (in French): 1–34.
• Vincent, Alexandre Joseph Hidulphe (1836). "Note sur la résolution des équations numériques" (PDF). Journal de Mathématiques Pures et Appliquées. 1: 341–372.
• Vincent, Alexandre Joseph Hidulphe (1838). "Addition à une précédente note relative à la résolution des équations numériques" (PDF). Journal de Mathématiques Pures et Appliquées. 3: 235–243. Archived from the original (PDF) on 2013-10-29. Retrieved 2018-12-16.
|
Wikipedia
|
Measurable cardinal
In mathematics, a measurable cardinal is a certain kind of large cardinal number. In order to define the concept, one introduces a two-valued measure on a cardinal κ, or more generally on any set. For a cardinal κ, it can be described as a subdivision of all of its subsets into large and small sets such that κ itself is large, ∅ and all singletons {α}, α ∈ κ are small, complements of small sets are large and vice versa, and the intersection of fewer than κ large sets is again large.[1]
It turns out that uncountable cardinals endowed with a two-valued measure are large cardinals whose existence cannot be proved from ZFC.[2]
The concept of a measurable cardinal was introduced by Stanislaw Ulam in 1930.[3]
Definition
Formally, a measurable cardinal is an uncountable cardinal number κ such that there exists a κ-additive, non-trivial, 0-1-valued measure on the power set of κ. (Here the term κ-additive means that, for any sequence Aα, α<λ of cardinality λ < κ, Aα being pairwise disjoint sets of ordinals less than κ, the measure of the union of the Aα equals the sum of the measures of the individual Aα.)
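Written out, the κ-additivity condition for such a measure μ reads:

```latex
\mu\Bigl(\bigcup_{\alpha<\lambda} A_\alpha\Bigr)
  \;=\; \sum_{\alpha<\lambda} \mu(A_\alpha)
  \qquad\text{for all } \lambda<\kappa
  \text{ and pairwise disjoint } A_\alpha\subseteq\kappa .
```

Since μ takes only the values 0 and 1, this says that a union of fewer than κ null sets is null; equivalently, at most one of the sets $A_\alpha$ can have measure 1.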
Equivalently, κ is measurable if and only if it is the critical point of a non-trivial elementary embedding of the universe V into a transitive class M. This equivalence is due to Jerome Keisler and Dana Scott, and uses the ultrapower construction from model theory. Since V is a proper class, a technical problem that is not usually present when considering ultrapowers needs to be addressed, by what is now called Scott's trick.
Equivalently, κ is a measurable cardinal if and only if it is an uncountable cardinal with a κ-complete, non-principal ultrafilter. Here, κ-completeness means that the intersection of any fewer than κ sets in the ultrafilter is also in the ultrafilter.
Properties
Although it follows from ZFC that every measurable cardinal is inaccessible (and is ineffable, Ramsey, etc.), it is consistent with ZF that a measurable cardinal can be a successor cardinal. It follows from ZF + axiom of determinacy that ω1 is measurable,[4] and that every subset of ω1 contains or is disjoint from a closed and unbounded subset.
Ulam showed that the smallest cardinal κ that admits a non-trivial countably-additive two-valued measure must in fact admit a κ-additive measure. (If there were some collection of fewer than κ measure-0 subsets whose union was κ, then the induced measure on this collection would be a counterexample to the minimality of κ.) From there, one can prove (with the Axiom of Choice) that the least such cardinal must be inaccessible.
It is trivial to note that if κ admits a non-trivial κ-additive measure, then κ must be regular. (By non-triviality and κ-additivity, any subset of cardinality less than κ must have measure 0, and then by κ-additivity again, the entire set cannot be a union of fewer than κ sets of cardinality less than κ.) Finally, if λ < κ, then it cannot be the case that κ ≤ 2^λ. If this were the case, then we could identify κ with some collection of 0-1 sequences of length λ. For each position in the sequence, either the subset of sequences with 1 in that position or the subset with 0 in that position would have to have measure 1. The intersection of these λ-many measure-1 subsets would thus also have to have measure 1, but it would contain exactly one sequence, which would contradict the non-triviality of the measure. Thus, assuming the Axiom of Choice, we can infer that κ is a strong limit cardinal, which completes the proof of its inaccessibility.
If κ is measurable and p∈Vκ and M (the ultrapower of V) satisfies ψ(κ,p), then the set of α < κ such that V satisfies ψ(α,p) is stationary in κ (actually a set of measure 1). In particular if ψ is a Π1 formula and V satisfies ψ(κ,p), then M satisfies it and thus V satisfies ψ(α,p) for a stationary set of α < κ. This property can be used to show that κ is a limit of most types of large cardinals that are weaker than measurable. Notice that the ultrafilter or measure witnessing that κ is measurable cannot be in M since the smallest such measurable cardinal would have to have another such below it, which is impossible.
If one starts with an elementary embedding j1 of V into M1 with critical point κ, then one can define an ultrafilter U on κ as { S⊆κ : κ∈j1(S) }. Then taking an ultrapower of V over U we can get another elementary embedding j2 of V into M2. However, it is important to remember that j2 ≠ j1. Thus other types of large cardinals such as strong cardinals may also be measurable, but not using the same embedding. It can be shown that a strong cardinal κ is measurable and also has κ-many measurable cardinals below it.
Every measurable cardinal κ is a 0-huge cardinal because κM⊆M, that is, every function from κ to M is in M. Consequently, Vκ+1⊆M.
Implications of existence
If a measurable cardinal exists, every ${\boldsymbol {\Sigma }}_{2}^{1}$ (with respect to the Borel hierarchy) set of reals has a Lebesgue measure.[4] In particular, any non-measurable set of reals must not be ${\boldsymbol {\Sigma }}_{2}^{1}$.
Real-valued measurable
A cardinal κ is called real-valued measurable if there is a κ-additive probability measure on the power set of κ that vanishes on singletons. Real-valued measurable cardinals were introduced by Stefan Banach (1930). Banach & Kuratowski (1929) showed that the continuum hypothesis implies that ${\mathfrak {c}}$ is not real-valued measurable. Stanislaw Ulam (1930) showed (see below for parts of Ulam's proof) that real-valued measurable cardinals are weakly inaccessible (they are in fact weakly Mahlo). All measurable cardinals are real-valued measurable, and a real-valued measurable cardinal κ is measurable if and only if κ is greater than ${\mathfrak {c}}$. Thus a cardinal is measurable if and only if it is real-valued measurable and strongly inaccessible. A real-valued measurable cardinal less than or equal to ${\mathfrak {c}}$ exists if and only if there is a countably additive extension of the Lebesgue measure to all sets of real numbers, if and only if there is an atomless probability measure on the power set of some non-empty set.
Solovay (1971) showed that the existence of measurable cardinals in ZFC, of real-valued measurable cardinals in ZFC, and of measurable cardinals in ZF are equiconsistent.
Weak inaccessibility of real-valued measurable cardinals
Say that a cardinal number α is an Ulam number if[5][nb 1]

whenever

1. μ is an outer measure on a set X,
2. $\mu (X)<\infty ,$
3. $\mu (\{x\})=0$ for $x\in X,$
4. all $A\subset X$ are μ-measurable,

then

$\operatorname {card} X\leq \alpha \Rightarrow \mu (X)=0.$
Equivalently, a cardinal number α is an Ulam number if
whenever
1. ν is an outer measure on a set Y, and F a disjoint family of subsets of Y,
2. $\nu \left(\bigcup F\right)<\infty ,$
3. $\nu (A)=0$ for $A\in F,$
4. $\bigcup G$ is ν-measurable for every $G\subset F$
then
$\operatorname {card} F\leq \alpha \Rightarrow \nu \left(\bigcup F\right)=0.$
The smallest infinite cardinal $\aleph _{0}$ is an Ulam number. The class of Ulam numbers is closed under the cardinal successor operation.[6] To see this, suppose an infinite cardinal β has an immediate predecessor α that is an Ulam number, and let μ satisfy properties (1)–(4) with $X=\beta $. In the von Neumann model of ordinals and cardinals, choose injective functions
$f_{x}:x\rightarrow \alpha ,\quad \forall x\in \beta ,$
and define the sets
$U(b,a)=\{x\in \beta :f_{x}(b)=a\},\quad a\in \alpha ,b\in \beta .$
Since the $f_{x}$ are one-to-one, the sets
$\left\{U(b,a),b\in \beta \right\}{\text{(}}a{\text{ fixed)}},$
$\left\{U(b,a),a\in \alpha \right\}{\text{(}}b{\text{ fixed)}}$
are disjoint. By property (2) of μ, the set
$\left\{b\in \beta :\mu (U(b,a))>0\right\}$
is countable, and hence
$\operatorname {card} \left\{(b,a)\in \beta \times \alpha |\mu (U(b,a))>0\right\}\leq \aleph _{0}\cdot \alpha =\alpha .$
Thus there is a $b_{0}$ such that
$\mu (U(b_{0},a))=0\quad \forall a\in \alpha $
implying, since α is an Ulam number and using the second definition (with $\nu =\mu $ and conditions (1)–(4) fulfilled),
$\mu \left(\bigcup _{a\in \alpha }U(b_{0},a)\right)=0.$
If $b_{0}<x<\beta ,$ then $f_{x}(b_{0})=a_{x}\Rightarrow x\in U(b_{0},a_{x}).$ Thus
$\beta =b_{0}\cup \{b_{0}\}\cup \bigcup _{a\in \alpha }U(b_{0},a).$
By property (2), $\mu \{b_{0}\}=0,$ and since $\operatorname {card} b_{0}\leq \alpha $, by (4), (2) and (3), $\mu (b_{0})=0.$ It follows that $\mu (\beta )=0.$ The conclusion is that β is an Ulam number.
There is a similar proof[7] that the supremum of a set S of Ulam numbers with $\operatorname {card} S$ an Ulam number is again an Ulam number. Together with the previous result, this implies that a cardinal that is not an Ulam number is weakly inaccessible.
See also
• Normal measure
• Mitchell order
• List of large cardinal properties
Notes
1. The notion in the article Ulam number is different.
Citations
1. Maddy 1988
2. Jech 2002 harvnb error: no target: CITEREFJech2002 (help)
3. Ulam 1930
4. T. Jech, "The Brave New World of Determinacy". Bulletin of the American Mathematical Society, vol. 5, no. 3, November 1981, pp. 339–349.
5. Federer 1996, Section 2.1.6
6. Federer 1996, Second part of theorem in section 2.1.6.
7. Federer 1996, First part of theorem in section 2.1.6.
References
• Banach, Stefan (1930), "Über additive Maßfunktionen in abstrakten Mengen", Fundamenta Mathematicae, 15: 97–101, doi:10.4064/fm-15-1-97-101, ISSN 0016-2736.
• Banach, Stefan; Kuratowski, Kazimierz (1929), "Sur une généralisation du probleme de la mesure", Fundamenta Mathematicae, 14: 127–131, doi:10.4064/fm-14-1-127-131, ISSN 0016-2736.
• Drake, F. R. (1974), Set Theory: An Introduction to Large Cardinals (Studies in Logic and the Foundations of Mathematics; V. 76), Elsevier Science Ltd, ISBN 978-0-7204-2279-5.
• Federer, H. (1996) [1969], Geometric Measure Theory, Classics in Mathematics (1st ed reprint ed.), Berlin, Heidelberg, New York: Springer Verlag, ISBN 978-3540606567.
• Jech, Thomas (2002), Set theory, third millennium edition (revised and expanded), Springer, ISBN 3-540-44085-2.
• Kanamori, Akihiro (2003), The Higher Infinite : Large Cardinals in Set Theory from Their Beginnings (2nd ed.), Springer, ISBN 3-540-00384-3.
• Maddy, Penelope (1988), "Believing the Axioms. II", The Journal of Symbolic Logic, 53 (3): 736–764, doi:10.2307/2274569, JSTOR 2274569, S2CID 16544090. A copy of parts I and II of this article with corrections is available at the author's web page.
• Solovay, Robert M. (1971), "Real-valued measurable cardinals", Axiomatic set theory (Proc. Sympos. Pure Math., Vol. XIII, Part I, Univ. California, Los Angeles, Calif., 1967), Providence, R.I.: Amer. Math. Soc., pp. 397–428, MR 0290961.
• Ulam, Stanislaw (1930), "Zur Masstheorie in der allgemeinen Mengenlehre", Fundamenta Mathematicae, 16: 140–150, doi:10.4064/fm-16-1-140-150, ISSN 0016-2736.
Stochastic process
In probability theory and related fields, a stochastic (/stəˈkæstɪk/) or random process is a mathematical object usually defined as a sequence of random variables, where the index of the sequence has the interpretation of time. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, and the movement of a gas molecule.[1][4][5] Stochastic processes have applications in many disciplines such as biology,[6] chemistry,[7] ecology,[8] neuroscience,[9] physics,[10] image processing, signal processing,[11] control theory,[12] information theory,[13] computer science,[14] and telecommunications.[15] Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance.[16][17][18]
Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such stochastic processes include the Wiener process or Brownian motion process,[lower-alpha 1] used by Louis Bachelier to study price changes on the Paris Bourse,[21] and the Poisson process, used by A. K. Erlang to study the number of phone calls occurring in a certain period of time.[22] These two stochastic processes are considered the most important and central in the theory of stochastic processes,[1][4][23] and were discovered repeatedly and independently, both before and after Bachelier and Erlang, in different settings and countries.[21][24]
The term random function is also used to refer to a stochastic or random process,[25][26] because a stochastic process can also be interpreted as a random element in a function space.[27][28] The terms stochastic process and random process are used interchangeably, often with no specific mathematical space for the set that indexes the random variables.[27][29] But often these two terms are used when the random variables are indexed by the integers or an interval of the real line.[5][29] If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, then the collection of random variables is usually called a random field instead.[5][30] The values of a stochastic process are not always numbers and can be vectors or other mathematical objects.[5][28]
Based on their mathematical properties, stochastic processes can be grouped into various categories, which include random walks,[31] martingales,[32] Markov processes,[33] Lévy processes,[34] Gaussian processes,[35] random fields,[36] renewal processes, and branching processes.[37] The study of stochastic processes uses mathematical knowledge and techniques from probability, calculus, linear algebra, set theory, and topology[38][39][40] as well as branches of mathematical analysis such as real analysis, measure theory, Fourier analysis, and functional analysis.[41][42][43] The theory of stochastic processes is considered to be an important contribution to mathematics[44] and it continues to be an active topic of research for both theoretical reasons and applications.[45][46][47]
Introduction
A stochastic or random process can be defined as a collection of random variables that is indexed by some mathematical set, meaning that each random variable of the stochastic process is uniquely associated with an element in the set.[4][5] The set used to index the random variables is called the index set. Historically, the index set was some subset of the real line, such as the natural numbers, giving the index set the interpretation of time.[1] Each random variable in the collection takes values from the same mathematical space known as the state space. This state space can be, for example, the integers, the real line or $n$-dimensional Euclidean space.[1][5] An increment is the amount that a stochastic process changes between two index values, often interpreted as two points in time.[48][49] A stochastic process can have many outcomes, due to its randomness, and a single outcome of a stochastic process is called, among other names, a sample function or realization.[28][50]
Classifications
A stochastic process can be classified in different ways, for example, by its state space, its index set, or the dependence among the random variables. One common way of classification is by the cardinality of the index set and the state space.[51][52][53]
When interpreted as time, if the index set of a stochastic process has a finite or countable number of elements, such as a finite set of numbers, the set of integers, or the natural numbers, then the stochastic process is said to be in discrete time.[54][55] If the index set is some interval of the real line, then time is said to be continuous. The two types of stochastic processes are respectively referred to as discrete-time and continuous-time stochastic processes.[48][56][57] Discrete-time stochastic processes are considered easier to study because continuous-time processes require more advanced mathematical techniques and knowledge, particularly due to the index set being uncountable.[58][59] If the index set is the integers, or some subset of them, then the stochastic process can also be called a random sequence.[55]
If the state space is the integers or natural numbers, then the stochastic process is called a discrete or integer-valued stochastic process. If the state space is the real line, then the stochastic process is referred to as a real-valued stochastic process or a process with continuous state space. If the state space is $n$-dimensional Euclidean space, then the stochastic process is called a $n$-dimensional vector process or $n$-vector process.[51][52]
Etymology
The word stochastic in English was originally used as an adjective with the definition "pertaining to conjecturing", and stemming from a Greek word meaning "to aim at a mark, guess", and the Oxford English Dictionary gives the year 1662 as its earliest occurrence.[60] In his work on probability Ars Conjectandi, originally published in Latin in 1713, Jakob Bernoulli used the phrase "Ars Conjectandi sive Stochastice", which has been translated to "the art of conjecturing or stochastics".[61] This phrase was used, with reference to Bernoulli, by Ladislaus Bortkiewicz[62] who in 1917 wrote in German the word stochastik with a sense meaning random. The term stochastic process first appeared in English in a 1934 paper by Joseph Doob.[60] For the term and a specific mathematical definition, Doob cited another 1934 paper, where the term stochastischer Prozeß was used in German by Aleksandr Khinchin,[63][64] though the German term had been used earlier, for example, by Andrei Kolmogorov in 1931.[65]
According to the Oxford English Dictionary, early occurrences of the word random in English with its current meaning, which relates to chance or luck, date back to the 16th century, while earlier recorded usages started in the 14th century as a noun meaning "impetuosity, great speed, force, or violence (in riding, running, striking, etc.)". The word itself comes from a Middle French word meaning "speed, haste", and it is probably derived from a French verb meaning "to run" or "to gallop". The first written appearance of the term random process pre-dates stochastic process, which the Oxford English Dictionary also gives as a synonym, and was used in an article by Francis Edgeworth published in 1888.[66]
Terminology
The definition of a stochastic process varies,[67] but a stochastic process is traditionally defined as a collection of random variables indexed by some set.[68][69] The terms random process and stochastic process are considered synonyms and are used interchangeably, without the index set being precisely specified.[27][29][30][70][71][72] Both "collection",[28][70] or "family" are used[4][73] while instead of "index set", sometimes the terms "parameter set"[28] or "parameter space"[30] are used.
The term random function is also used to refer to a stochastic or random process,[5][74][75] though sometimes it is only used when the stochastic process takes real values.[28][73] This term is also used when the index sets are mathematical spaces other than the real line,[5][76] while the terms stochastic process and random process are usually used when the index set is interpreted as time,[5][76][77] and other terms are used such as random field when the index set is $n$-dimensional Euclidean space $\mathbb {R} ^{n}$ or a manifold.[5][28][30]
Notation
A stochastic process can be denoted, among other ways, by $\{X(t)\}_{t\in T}$,[56] $\{X_{t}\}_{t\in T}$,[69] $\{X_{t}\}$,[78] $\{X(t)\}$, or simply as $X$ or $X(t)$, although $X(t)$ is regarded as an abuse of function notation.[79] For example, $X(t)$ or $X_{t}$ are used to refer to the random variable with the index $t$, and not the entire stochastic process.[78] If the index set is $T=[0,\infty )$, then one can write, for example, $(X_{t},t\geq 0)$ to denote the stochastic process.[29]
Examples
Bernoulli process
Main article: Bernoulli process
One of the simplest stochastic processes is the Bernoulli process,[80] which is a sequence of independent and identically distributed (iid) random variables, where each random variable takes either the value one or zero, say one with probability $p$ and zero with probability $1-p$. This process can be linked to repeatedly flipping a coin, where the probability of obtaining a head is $p$ and its value is one, while the value of a tail is zero.[81] In other words, a Bernoulli process is a sequence of iid Bernoulli random variables,[82] where each coin flip is an example of a Bernoulli trial.[83]
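As a concrete illustration, the first n variables of a Bernoulli process can be simulated by repeated independent draws. This is a minimal sketch; the function name and the seeding convention are ours.

```python
import random

def bernoulli_process(p, n, seed=0):
    """First n variables of a Bernoulli process: iid with P(X_k = 1) = p."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [1 if rng.random() < p else 0 for _ in range(n)]

flips = bernoulli_process(0.5, 10)  # e.g. ten fair coin flips (1 = head)
```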
Random walk
Main article: Random walk
Random walks are stochastic processes that are usually defined as sums of iid random variables or random vectors in Euclidean space, so they are processes that change in discrete time.[84][85][86][87][88] But some also use the term to refer to processes that change in continuous time,[89] particularly the Wiener process used in finance, which has led to some confusion, resulting in criticism.[90] There are various other types of random walks, defined so that their state spaces can be other mathematical objects, such as lattices and groups; in general they are highly studied and have many applications in different disciplines.[89][91]
A classic example of a random walk is known as the simple random walk, which is a stochastic process in discrete time with the integers as the state space, and is based on a Bernoulli process, where each Bernoulli variable takes either the value positive one or negative one. In other words, the simple random walk takes place on the integers, and its value increases by one with probability, say, $p$, or decreases by one with probability $1-p$, so the index set of this random walk is the natural numbers, while its state space is the integers. If $p=0.5$, this random walk is called a symmetric random walk.[92][93]
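A simple random walk is then just the sequence of partial sums of such ±1 steps. A short sketch (the helper name is ours):

```python
import random

def simple_random_walk(p, n, seed=0):
    """Partial sums S_0 = 0, S_k = S_{k-1} ± 1, where +1 has probability p."""
    rng = random.Random(seed)
    path = [0]
    for _ in range(n):
        path.append(path[-1] + (1 if rng.random() < p else -1))
    return path

walk = simple_random_walk(0.5, 100)  # a symmetric random walk, starting at 0
```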
Wiener process
Main article: Wiener process
The Wiener process is a stochastic process with stationary and independent increments that are normally distributed based on the size of the increments.[2][94] The Wiener process is named after Norbert Wiener, who proved its mathematical existence, but the process is also called the Brownian motion process or just Brownian motion due to its historical connection as a model for Brownian movement in liquids.[95][96][97]
Playing a central role in the theory of probability, the Wiener process is often considered the most important and studied stochastic process, with connections to other stochastic processes.[1][2][3][98][99][100][101] Its index set and state space are the non-negative numbers and real numbers, respectively, so it has both a continuous index set and state space.[102] But the process can be defined more generally so its state space can be $n$-dimensional Euclidean space.[91][99][103] If the mean of any increment is zero, then the resulting Wiener or Brownian motion process is said to have zero drift. If the mean of the increment for any two points in time is equal to the time difference multiplied by some constant $\mu $, which is a real number, then the resulting stochastic process is said to have drift $\mu $.[104][105][106]
Almost surely, a sample path of a Wiener process is continuous everywhere but nowhere differentiable. It can be considered as a continuous version of the simple random walk.[49][105] The process arises as the mathematical limit of other stochastic processes such as certain random walks rescaled,[107][108] which is the subject of Donsker's theorem or invariance principle, also known as the functional central limit theorem.[109][110][111]
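In the spirit of this limiting relationship, a Wiener path with drift can be approximated on a finite grid by summing independent Gaussian increments whose variance equals the step size. This is a discretized sketch, not the continuous process itself, and the names are illustrative.

```python
import math
import random

def wiener_path(t_max, n_steps, mu=0.0, seed=None):
    """Discrete approximation of a Wiener process with drift mu on [0, t_max]:
    independent Gaussian increments with mean mu * dt and variance dt."""
    rng = random.Random(seed)
    dt = t_max / n_steps
    w, path = 0.0, [0.0]
    for _ in range(n_steps):
        w += mu * dt + rng.gauss(0.0, math.sqrt(dt))
        path.append(w)
    return path

path = wiener_path(t_max=1.0, n_steps=10_000, seed=7)
# The empirical variance of the increments should be close to dt = 1e-4.
increments = [path[i + 1] - path[i] for i in range(10_000)]
```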
The Wiener process is a member of some important families of stochastic processes, including Markov processes, Lévy processes and Gaussian processes.[2][49] The process also has many applications and is the main stochastic process used in stochastic calculus.[112][113] It plays a central role in quantitative finance,[114][115] where it is used, for example, in the Black–Scholes–Merton model.[116] The process is also used in different fields, including the majority of natural sciences as well as some branches of social sciences, as a mathematical model for various random phenomena.[3][117][118]
Poisson process
Main article: Poisson process
The Poisson process is a stochastic process that has different forms and definitions.[119][120] It can be defined as a counting process, which is a stochastic process that represents the random number of points or events up to some time. The number of points of the process that are located in the interval from zero to some given time is a Poisson random variable that depends on that time and some parameter. This process has the natural numbers as its state space and the non-negative numbers as its index set. This process is also called the Poisson counting process, since it can be interpreted as an example of a counting process.[119]
If a Poisson process is defined with a single positive constant, then the process is called a homogeneous Poisson process.[119][121] The homogeneous Poisson process is a member of important classes of stochastic processes such as Markov processes and Lévy processes.[49]
The homogeneous Poisson process can be defined and generalized in different ways. It can be defined such that its index set is the real line, and this stochastic process is also called the stationary Poisson process.[122][123] If the parameter constant of the Poisson process is replaced with some non-negative integrable function of $t$, the resulting process is called an inhomogeneous or nonhomogeneous Poisson process, where the average density of points of the process is no longer constant.[124] Serving as a fundamental process in queueing theory, the Poisson process is an important process for mathematical models, where it finds applications for models of events randomly occurring in certain time windows.[125][126]
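A homogeneous Poisson process can be simulated from its iid exponential interarrival times, whose rate is the process's constant parameter; this is an assumed sketch with illustrative names.

```python
import random

def poisson_arrivals(rate, t_max, seed=None):
    """Arrival times of a homogeneous Poisson process with the given rate on [0, t_max],
    built from iid exponential interarrival times with mean 1 / rate."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > t_max:
            return times
        times.append(t)

arrivals = poisson_arrivals(rate=5.0, t_max=1_000.0, seed=3)
# The number of points in [0, t_max] is Poisson with mean rate * t_max = 5000.
```

For an inhomogeneous process, the constant rate is replaced by an integrable function of time; one common simulation approach is to thin the points of a homogeneous process accordingly.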
Defined on the real line, the Poisson process can be interpreted as a stochastic process,[49][127] among other random objects.[128][129] But it can also be defined on the $n$-dimensional Euclidean space or other mathematical spaces,[130] where it is often interpreted as a random set or a random counting measure, instead of a stochastic process.[128][129] In this setting, the Poisson process, also called the Poisson point process, is one of the most important objects in probability theory, both for applications and theoretical reasons.[22][131] But it has been remarked that the Poisson process does not receive as much attention as it should, partly due to it often being considered just on the real line, and not on other mathematical spaces.[131][132]
Definitions
Stochastic process
A stochastic process is defined as a collection of random variables defined on a common probability space $(\Omega ,{\mathcal {F}},P)$, where $\Omega $ is a sample space, ${\mathcal {F}}$ is a $\sigma $-algebra, and $P$ is a probability measure; and the random variables, indexed by some set $T$, all take values in the same mathematical space $S$, which must be measurable with respect to some $\sigma $-algebra $\Sigma $.[28]
In other words, for a given probability space $(\Omega ,{\mathcal {F}},P)$ and a measurable space $(S,\Sigma )$, a stochastic process is a collection of $S$-valued random variables, which can be written as:[80]
$\{X(t):t\in T\}.$
Historically, in many problems from the natural sciences a point $t\in T$ had the meaning of time, so $X(t)$ is a random variable representing a value observed at time $t$.[133] A stochastic process can also be written as $\{X(t,\omega ):t\in T\}$ to reflect that it is actually a function of two variables, $t\in T$ and $\omega \in \Omega $.[28][134]
There are other ways to consider a stochastic process, with the above definition being considered the traditional one.[68][69] For example, a stochastic process can be interpreted or defined as an $S^{T}$-valued random variable, where $S^{T}$ is the space of all the possible functions from the set $T$ into the space $S$.[27][68] However, this alternative definition as a "function-valued random variable" in general requires additional regularity assumptions to be well-defined.[135]
Index set
The set $T$ is called the index set[4][51] or parameter set[28][136] of the stochastic process. Often this set is some subset of the real line, such as the natural numbers or an interval, giving the set $T$ the interpretation of time.[1] In addition to these sets, the index set $T$ can be another set with a total order or a more general set,[1][54] such as the Cartesian plane $R^{2}$ or $n$-dimensional Euclidean space, where an element $t\in T$ can represent a point in space.[48][137] That said, many results and theorems are only possible for stochastic processes with a totally ordered index set.[138]
State space
The mathematical space $S$ of a stochastic process is called its state space. This mathematical space can be defined using integers, real lines, $n$-dimensional Euclidean spaces, complex planes, or more abstract mathematical spaces. The state space is defined using elements that reflect the different values that the stochastic process can take.[1][5][28][51][56]
Sample function
A sample function is a single outcome of a stochastic process, so it is formed by taking a single possible value of each random variable of the stochastic process.[28][139] More precisely, if $\{X(t,\omega ):t\in T\}$ is a stochastic process, then for any point $\omega \in \Omega $, the mapping
$X(\cdot ,\omega ):T\rightarrow S,$
is called a sample function, a realization, or, particularly when $T$ is interpreted as time, a sample path of the stochastic process $\{X(t,\omega ):t\in T\}$.[50] This means that for a fixed $\omega \in \Omega $, there exists a sample function that maps the index set $T$ to the state space $S$.[28] Other names for a sample function of a stochastic process include trajectory, path function[140] or path.[141]
Increment
An increment of a stochastic process is the difference between two random variables of the same stochastic process. For a stochastic process with an index set that can be interpreted as time, an increment is how much the stochastic process changes over a certain time period. For example, if $\{X(t):t\in T\}$ is a stochastic process with state space $S$ and index set $T=[0,\infty )$, then for any two non-negative numbers $t_{1}\in [0,\infty )$ and $t_{2}\in [0,\infty )$ such that $t_{1}\leq t_{2}$, the difference $X_{t_{2}}-X_{t_{1}}$ is an $S$-valued random variable known as an increment.[48][49] When interested in the increments, often the state space $S$ is the real line or the natural numbers, but it can be $n$-dimensional Euclidean space or more abstract spaces such as Banach spaces.[49]
Law
For a stochastic process $X\colon \Omega \rightarrow S^{T}$ defined on the probability space $(\Omega ,{\mathcal {F}},P)$, the law of stochastic process $X$ is defined as the image measure:
$\mu =P\circ X^{-1},$
where $P$ is a probability measure, the symbol $\circ $ denotes function composition and $X^{-1}$ is the pre-image of the measurable function or, equivalently, the $S^{T}$-valued random variable $X$, where $S^{T}$ is the space of all the possible $S$-valued functions of $t\in T$, so the law of a stochastic process is a probability measure.[27][68][142][143]
For a measurable subset $B$ of $S^{T}$, the pre-image of $X$ gives
$X^{-1}(B)=\{\omega \in \Omega :X(\omega )\in B\},$
so the law of $X$ can be written as:[28]
$\mu (B)=P(\{\omega \in \Omega :X(\omega )\in B\}).$
The law of a stochastic process or a random variable is also called the probability law, probability distribution, or the distribution.[133][142][144][145][146]
Finite-dimensional probability distributions
Main article: Finite-dimensional distribution
For a stochastic process $X$ with law $\mu $, its finite-dimensional distribution for $t_{1},\dots ,t_{n}\in T$ is defined as:
$\mu _{t_{1},\dots ,t_{n}}=P\circ (X({t_{1}}),\dots ,X({t_{n}}))^{-1}.$
This measure $\mu _{t_{1},\dots ,t_{n}}$ is the joint distribution of the random vector $(X({t_{1}}),\dots ,X({t_{n}}))$; it can be viewed as a "projection" of the law $\mu $ onto a finite subset of $T$.[27][147]
For any measurable subset $C$ of the $n$-fold Cartesian power $S^{n}=S\times \dots \times S$, the finite-dimensional distributions of a stochastic process $X$ can be written as:[28]
$\mu _{t_{1},\dots ,t_{n}}(C)=P{\Big (}{\big \{}\omega \in \Omega :{\big (}X_{t_{1}}(\omega ),\dots ,X_{t_{n}}(\omega ){\big )}\in C{\big \}}{\Big )}.$
The finite-dimensional distributions of a stochastic process satisfy two mathematical conditions known as consistency conditions.[57]
Stationarity
Stationarity is a mathematical property that a stochastic process has when all the random variables of that stochastic process are identically distributed. In other words, if $X$ is a stationary stochastic process, then for any $t\in T$ the random variable $X_{t}$ has the same distribution, which means that for any set of $n$ index set values $t_{1},\dots ,t_{n}$, the corresponding $n$ random variables
$X_{t_{1}},\dots ,X_{t_{n}},$
all have the same probability distribution. The index set of a stationary stochastic process is usually interpreted as time, so it can be the integers or the real line.[148][149] But the concept of stationarity also exists for point processes and random fields, where the index set is not interpreted as time.[148][150][151]
When the index set $T$ can be interpreted as time, a stochastic process is said to be stationary if its finite-dimensional distributions are invariant under translations of time. This type of stochastic process can be used to describe a physical system that is in steady state, but still experiences random fluctuations.[148] The intuition behind stationarity is that as time passes the distribution of the stationary stochastic process remains the same.[152] A sequence of random variables forms a stationary stochastic process only if the random variables are identically distributed.[148]
A stochastic process with the above definition of stationarity is sometimes said to be strictly stationary, but there are other forms of stationarity. One example is wide-sense stationarity: a discrete-time or continuous-time stochastic process $X$ is said to be stationary in the wide sense if it has a finite second moment for all $t\in T$ and the covariance of the two random variables $X_{t}$ and $X_{t+h}$ depends only on the number $h$ for all $t\in T$.[152][153] Khinchin introduced this related concept of stationarity in the wide sense, which has other names including covariance stationarity or stationarity in the broad sense.[153][154]
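A textbook example of a wide-sense stationary process is a cosine with a uniformly random phase, whose covariance $\operatorname {Cov} (X_{t},X_{t+h})=\cos(\omega h)/2$ depends only on the lag $h$. The Monte Carlo check below (all names illustrative) estimates the covariance at two different times sharing the same lag.

```python
import math
import random

# X_t = cos(omega * t + Phi) with Phi uniform on [0, 2*pi) is wide-sense
# stationary: E[X_t] = 0 and Cov(X_t, X_{t+h}) = cos(omega * h) / 2.
rng = random.Random(0)
omega, n_paths = 1.0, 200_000
phases = [rng.uniform(0.0, 2 * math.pi) for _ in range(n_paths)]

def empirical_cov(t1, t2):
    """Monte Carlo estimate of Cov(X_{t1}, X_{t2}); the mean is zero, so no centering."""
    total = sum(math.cos(omega * t1 + phi) * math.cos(omega * t2 + phi) for phi in phases)
    return total / n_paths

# Same lag h = 1 at two different starting times: the estimates agree.
c_a = empirical_cov(0.0, 1.0)
c_b = empirical_cov(5.0, 6.0)
```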
Filtration
A filtration is an increasing sequence of sigma-algebras defined in relation to some probability space and an index set that has some total order relation, such as in the case of the index set being some subset of the real numbers. More formally, if a stochastic process has an index set with a total order, then a filtration $\{{\mathcal {F}}_{t}\}_{t\in T}$, on a probability space $(\Omega ,{\mathcal {F}},P)$ is a family of sigma-algebras such that ${\mathcal {F}}_{s}\subseteq {\mathcal {F}}_{t}\subseteq {\mathcal {F}}$ for all $s\leq t$, where $t,s\in T$ and $\leq $ denotes the total order of the index set $T$.[51] With the concept of a filtration, it is possible to study the amount of information contained in a stochastic process $X_{t}$ at $t\in T$, which can be interpreted as time $t$.[51][155] The intuition behind a filtration ${\mathcal {F}}_{t}$ is that as time $t$ passes, more and more information on $X_{t}$ is known or available, which is captured in ${\mathcal {F}}_{t}$, resulting in finer and finer partitions of $\Omega $.[156][157]
Modification
A modification of a stochastic process is another stochastic process, which is closely related to the original stochastic process. More precisely, a stochastic process $X$ that has the same index set $T$, state space $S$, and probability space $(\Omega ,{\cal {F}},P)$ as another stochastic process $Y$ is said to be a modification of $Y$ if for all $t\in T$ the following
$P(X_{t}=Y_{t})=1,$
holds. Two stochastic processes that are modifications of each other have the same finite-dimensional law[158] and they are said to be stochastically equivalent or equivalent.[159]
Instead of modification, the term version is also used;[150][160][161][162] however, some authors use the term version when two stochastic processes have the same finite-dimensional distributions but may be defined on different probability spaces, so two processes that are modifications of each other are also versions of each other in the latter sense, but not conversely.[163][142]
If a continuous-time real-valued stochastic process meets certain moment conditions on its increments, then the Kolmogorov continuity theorem says that there exists a modification of this process that has continuous sample paths with probability one, so the stochastic process has a continuous modification or version.[161][162][164] The theorem can also be generalized to random fields so the index set is $n$-dimensional Euclidean space[165] as well as to stochastic processes with metric spaces as their state spaces.[166]
Indistinguishable
Two stochastic processes $X$ and $Y$ defined on the same probability space $(\Omega ,{\mathcal {F}},P)$ with the same index set $T$ and state space $S$ are said to be indistinguishable if the following
$P(X_{t}=Y_{t}{\text{ for all }}t\in T)=1,$
holds.[142][158] If two processes $X$ and $Y$ are modifications of each other and are almost surely continuous, then $X$ and $Y$ are indistinguishable.[167]
Separability
Separability is a property of a stochastic process based on its index set in relation to the probability measure. The property is assumed so that functionals of stochastic processes or random fields with uncountable index sets can form random variables. For a stochastic process to be separable, in addition to other conditions, its index set must be a separable space,[lower-alpha 2] which means that the index set has a dense countable subset.[150][168]
More precisely, a real-valued continuous-time stochastic process $X$ with a probability space $(\Omega ,{\cal {F}},P)$ is separable if its index set $T$ has a dense countable subset $U\subset T$ and there is a set $\Omega _{0}\subset \Omega $ of probability zero, so $P(\Omega _{0})=0$, such that for every open set $G\subset T$ and every closed set $F\subset \textstyle R=(-\infty ,\infty )$, the two events $\{X_{t}\in F{\text{ for all }}t\in G\cap U\}$ and $\{X_{t}\in F{\text{ for all }}t\in G\}$ differ from each other at most on a subset of $\Omega _{0}$.[169][170][171] The definition of separability[lower-alpha 3] can also be stated for other index sets and state spaces,[174] such as in the case of random fields, where the index set as well as the state space can be $n$-dimensional Euclidean space.[30][150]
The concept of separability of a stochastic process was introduced by Joseph Doob.[168] The underlying idea of separability is to make a countable set of points of the index set determine the properties of the stochastic process.[172] Any stochastic process with a countable index set already meets the separability conditions, so discrete-time stochastic processes are always separable.[175] A theorem by Doob, sometimes known as Doob's separability theorem, says that any real-valued continuous-time stochastic process has a separable modification.[168][170][176] Versions of this theorem also exist for more general stochastic processes with index sets and state spaces other than the real line.[136]
Independence
Two stochastic processes $X$ and $Y$ defined on the same probability space $(\Omega ,{\mathcal {F}},P)$ with the same index set $T$ are said to be independent if for all $n\in \mathbb {N} $ and for every choice of epochs $t_{1},\ldots ,t_{n}\in T$, the random vectors $\left(X(t_{1}),\ldots ,X(t_{n})\right)$ and $\left(Y(t_{1}),\ldots ,Y(t_{n})\right)$ are independent.[177]: p. 515
Uncorrelatedness
Two stochastic processes $\left\{X_{t}\right\}$ and $\left\{Y_{t}\right\}$ are called uncorrelated if their cross-covariance $\operatorname {K} _{\mathbf {X} \mathbf {Y} }(t_{1},t_{2})=\operatorname {E} \left[\left(X(t_{1})-\mu _{X}(t_{1})\right)\left(Y(t_{2})-\mu _{Y}(t_{2})\right)\right]$ is zero for all times.[178]: p. 142 Formally:
$\left\{X_{t}\right\},\left\{Y_{t}\right\}{\text{ uncorrelated}}\quad \iff \quad \operatorname {K} _{\mathbf {X} \mathbf {Y} }(t_{1},t_{2})=0\quad \forall t_{1},t_{2}$.
Independence implies uncorrelatedness
If two stochastic processes $X$ and $Y$ are independent, then they are also uncorrelated.[178]: p. 151
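This implication is easy to check numerically: sampling two independent processes at fixed times and estimating their cross-covariance gives a value near zero. The particular distributions below are arbitrary illustrative choices.

```python
import random

rng = random.Random(1)
n_paths = 100_000

# Sample X(t1) and Y(t2) from two independent sources: X Gaussian, Y uniform.
samples = [(rng.gauss(0.0, 1.0), rng.uniform(-1.0, 1.0)) for _ in range(n_paths)]
mean_x = sum(x for x, _ in samples) / n_paths
mean_y = sum(y for _, y in samples) / n_paths
# Independence implies the cross-covariance is zero up to Monte Carlo error.
cross_cov = sum((x - mean_x) * (y - mean_y) for x, y in samples) / n_paths
```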
Orthogonality
Two stochastic processes $\left\{X_{t}\right\}$ and $\left\{Y_{t}\right\}$ are called orthogonal if their cross-correlation $\operatorname {R} _{\mathbf {X} \mathbf {Y} }(t_{1},t_{2})=\operatorname {E} [X(t_{1}){\overline {Y(t_{2})}}]$ is zero for all times.[178]: p. 142 Formally:
$\left\{X_{t}\right\},\left\{Y_{t}\right\}{\text{ orthogonal}}\quad \iff \quad \operatorname {R} _{\mathbf {X} \mathbf {Y} }(t_{1},t_{2})=0\quad \forall t_{1},t_{2}$.
Skorokhod space
Main article: Skorokhod space
A Skorokhod space, also written as Skorohod space, is a mathematical space of all the functions that are right-continuous with left limits, defined on some interval of the real line such as $[0,1]$ or $[0,\infty )$, and take values on the real line or on some metric space.[179][180][181] Such functions are known as càdlàg or cadlag functions, based on the acronym of the French phrase continue à droite, limite à gauche.[179][182] A Skorokhod function space, introduced by Anatoliy Skorokhod,[181] is often denoted with the letter $D$,[179][180][181][182] so the function space is also referred to as space $D$.[179][183][184] The notation of this function space can also include the interval on which all the càdlàg functions are defined, so, for example, $D[0,1]$ denotes the space of càdlàg functions defined on the unit interval $[0,1]$.[182][184][185]
Skorokhod function spaces are frequently used in the theory of stochastic processes because it is often assumed that the sample functions of continuous-time stochastic processes belong to a Skorokhod space.[181][183] Such spaces contain continuous functions, which correspond to sample functions of the Wiener process. But the space also has functions with discontinuities, which means that the sample functions of stochastic processes with jumps, such as the Poisson process (on the real line), are also members of this space.[184][186]
Regularity
In the context of mathematical construction of stochastic processes, the term regularity is used when discussing and assuming certain conditions for a stochastic process to resolve possible construction issues.[187][188] For example, to study stochastic processes with uncountable index sets, it is assumed that the stochastic process adheres to some type of regularity condition such as the sample functions being continuous.[189][190]
Further examples
Markov processes and chains
Main article: Markov chain
Markov processes are stochastic processes, traditionally in discrete or continuous time, that have the Markov property, which means the next value of the Markov process depends on the current value, but it is conditionally independent of the previous values of the stochastic process. In other words, the behavior of the process in the future is stochastically independent of its behavior in the past, given the current state of the process.[191][192]
The Brownian motion process and the Poisson process (in one dimension) are both examples of Markov processes[193] in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time.[194][195]
A Markov chain is a type of Markov process that has either discrete state space or discrete index set (often representing time), but the precise definition of a Markov chain varies.[196] For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time),[197][198][199][200] but it has also been common to define a Markov chain as having discrete time with either a countable or a continuous state space (thus regardless of the state space).[196] It has been argued that the first definition of a Markov chain, where it has discrete time, now tends to be used, despite the second definition having been used by researchers like Joseph Doob and Kai Lai Chung.[201]
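A Markov chain with a countable state space is determined by its transition probabilities. The two-state sketch below (the matrix entries are illustrative) simulates the chain and recovers its stationary distribution, which for this particular matrix is $(5/6,1/6)$.

```python
import random

# Transition matrix of a two-state Markov chain; each row sums to one.
# Solving pi = pi P gives the stationary distribution (5/6, 1/6).
P = [[0.9, 0.1],
     [0.5, 0.5]]

def long_run_frequencies(P, n_steps, seed=None):
    """Simulate the chain from state 0 and return the fraction of time in each state."""
    rng = random.Random(seed)
    state, visits = 0, [0] * len(P)
    for _ in range(n_steps):
        visits[state] += 1
        # The next state depends only on the current state (the Markov property).
        state = 0 if rng.random() < P[state][0] else 1
    return [v / n_steps for v in visits]

freqs = long_run_frequencies(P, n_steps=200_000, seed=9)
```

By the ergodic theorem for Markov chains, these long-run frequencies converge to the stationary distribution.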
Markov processes form an important class of stochastic processes and have applications in many areas.[39][202] For example, they are the basis for a general stochastic simulation method known as Markov chain Monte Carlo, which is used for simulating random objects with specific probability distributions, and has found application in Bayesian statistics.[203][204]
The concept of the Markov property was originally for stochastic processes in continuous and discrete time, but the property has been adapted for other index sets such as $n$-dimensional Euclidean space, which results in collections of random variables known as Markov random fields.[205][206][207]
Martingale
Main article: Martingale (probability theory)
A martingale is a discrete-time or continuous-time stochastic process with the property that, at every instant, given the current value and all the past values of the process, the conditional expectation of every future value is equal to the current value. In discrete time, if this property holds for the next value, then it holds for all future values. The exact mathematical definition of a martingale requires two other conditions coupled with the mathematical concept of a filtration, which is related to the intuition of increasing available information as time passes. Martingales are usually defined to be real-valued,[208][209][155] but they can also be complex-valued[210] or even more general.[211]
A symmetric random walk and a Wiener process (with zero drift) are both examples of martingales, respectively, in discrete and continuous time.[208][209] For a sequence of independent and identically distributed random variables $X_{1},X_{2},X_{3},\dots $ with zero mean, the stochastic process formed from the successive partial sums $X_{1},X_{1}+X_{2},X_{1}+X_{2}+X_{3},\dots $ is a discrete-time martingale.[212] In this aspect, discrete-time martingales generalize the idea of partial sums of independent random variables.[213]
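This partial-sum construction can be checked by simulation: a martingale has constant expectation, so the empirical mean of $M_{n}$ stays near $M_{0}=0$ at every step. The sketch below uses $\pm 1$ steps and illustrative names.

```python
import random

rng = random.Random(5)
n_paths, n_steps = 100_000, 20

# M_n = X_1 + ... + X_n with iid zero-mean steps X_i = +/-1 is a
# discrete-time martingale: E[M_{n+1} | M_1, ..., M_n] = M_n.
final_values = []
for _ in range(n_paths):
    m = 0
    for _ in range(n_steps):
        m += 1 if rng.random() < 0.5 else -1
    final_values.append(m)

# Constant expectation: E[M_n] = M_0 = 0 for every n, including n = 20.
mean_final = sum(final_values) / n_paths
```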
Martingales can also be created from stochastic processes by applying some suitable transformations, which is the case for the homogeneous Poisson process (on the real line) resulting in a martingale called the compensated Poisson process.[209] Martingales can also be built from other martingales.[212] For example, there are martingales based on the Wiener process, which is itself a martingale, forming continuous-time martingales.[208][214]
Martingales mathematically formalize the idea of a fair game,[215] and they were originally developed to show that it is not possible to win a fair game.[216] But now they are used in many areas of probability, which is one of the main reasons for studying them.[155][216][217] Many problems in probability have been solved by finding a martingale in the problem and studying it.[218] Martingales will converge, given some conditions on their moments, so they are often used to derive convergence results, due largely to martingale convergence theorems.[213][219][220]
Martingales have many applications in statistics, but it has been remarked that their use and application are not as widespread as they could be in the field of statistics, particularly statistical inference.[221] They have found applications in areas in probability theory such as queueing theory and Palm calculus[222] and other fields such as economics[223] and finance.[17]
Lévy process
Main article: Lévy process
Lévy processes are types of stochastic processes that can be considered as generalizations of random walks in continuous time.[49][224] These processes have many applications in fields such as finance, fluid mechanics, physics and biology.[225][226] The main defining characteristics of these processes are their stationarity and independence properties, so they were known as processes with stationary and independent increments. In other words, a stochastic process $X$ is a Lévy process if for $n$ non-negative numbers, $0\leq t_{1}\leq \dots \leq t_{n}$, the corresponding $n-1$ increments
$X_{t_{2}}-X_{t_{1}},\dots ,X_{t_{n}}-X_{t_{n-1}},$
are all independent of each other, and the distribution of each increment only depends on the difference in time.[49]
A Lévy process can be defined such that its state space is some abstract mathematical space, such as a Banach space, but the processes are often defined so that they take values in Euclidean space. The index set is the non-negative numbers, so $T=[0,\infty )$, which gives the interpretation of time. Important stochastic processes such as the Wiener process, the homogeneous Poisson process (in one dimension), and subordinators are all Lévy processes.[49][224]
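A simple Lévy-process sketch combines the two examples just named: a Brownian component plus Poisson-driven unit jumps, giving a jump-diffusion with stationary, independent increments. The discretization and all parameter choices below are illustrative.

```python
import math
import random

def jump_diffusion_path(t_max, n_steps, sigma=1.0, jump_rate=2.0, seed=None):
    """Discretized Lévy path: Gaussian increments of variance sigma^2 * dt plus
    unit jumps arriving at the given Poisson rate, approximated per small step."""
    rng = random.Random(seed)
    dt = t_max / n_steps
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        gaussian_part = rng.gauss(0.0, sigma * math.sqrt(dt))
        # The number of jumps in a step is Poisson(jump_rate * dt); for small dt
        # this is well approximated by a single Bernoulli trial.
        jump_part = 1.0 if rng.random() < jump_rate * dt else 0.0
        x += gaussian_part + jump_part
        path.append(x)
    return path

path = jump_diffusion_path(t_max=1.0, n_steps=5_000, seed=11)
```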
Random field
Main article: Random field
A random field is a collection of random variables indexed by an $n$-dimensional Euclidean space or some manifold. In general, a random field can be considered an example of a stochastic or random process, where the index set is not necessarily a subset of the real line.[30] But there is a convention that an indexed collection of random variables is called a random field when the index has two or more dimensions.[5][28][227] If the specific definition of a stochastic process requires the index set to be a subset of the real line, then the random field can be considered as a generalization of stochastic process.[228]
Point process
Main article: Point process
A point process is a collection of points randomly located on some mathematical space such as the real line, $n$-dimensional Euclidean space, or more abstract spaces. Sometimes the term point process is not preferred, as historically the word process denoted an evolution of some system in time, so a point process is also called a random point field.[229] There are different interpretations of a point process, such as a random counting measure or a random set.[230][231] Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process,[232][233] though it has been remarked that the difference between point processes and stochastic processes is not clear.[233]
Other authors consider a point process as a stochastic process, where the process is indexed by sets of the underlying space[lower-alpha 4] on which it is defined, such as the real line or $n$-dimensional Euclidean space.[236][237] Other stochastic processes such as renewal and counting processes are studied in the theory of point processes.[238][233]
History
Early probability theory
Probability theory has its origins in games of chance, which have a long history, with some games being played thousands of years ago,[239][240] but very little analysis on them was done in terms of probability.[239][241] The year 1654 is often considered the birth of probability theory when French mathematicians Pierre Fermat and Blaise Pascal had a written correspondence on probability, motivated by a gambling problem.[239][242][243] But there was earlier mathematical work done on the probability of gambling games such as Liber de Ludo Aleae by Gerolamo Cardano, written in the 16th century but posthumously published later in 1663.[239][244]
After Cardano, Jakob Bernoulli[lower-alpha 5] wrote Ars Conjectandi, which is considered a significant event in the history of probability theory.[239] Bernoulli's book was published, also posthumously, in 1713 and inspired many mathematicians to study probability.[239][246][247] But despite some renowned mathematicians contributing to probability theory, such as Pierre-Simon Laplace, Abraham de Moivre, Carl Gauss, Siméon Poisson and Pafnuty Chebyshev,[248][249] most of the mathematical community[lower-alpha 6] did not consider probability theory to be part of mathematics until the 20th century.[248][250][251][252]
Statistical mechanics
In the physical sciences, scientists developed the discipline of statistical mechanics in the 19th century, where physical systems, such as containers filled with gases, can be regarded or treated mathematically as collections of many moving particles. Although there were attempts to incorporate randomness into statistical physics by some scientists, such as Rudolf Clausius, most of the work had little or no randomness.[253][254] This changed in 1859 when James Clerk Maxwell contributed significantly to the field, more specifically, to the kinetic theory of gases, by presenting work where he assumed the gas particles move in random directions at random velocities.[255][256] The kinetic theory of gases and statistical physics continued to be developed in the second half of the 19th century, with work done chiefly by Clausius, Ludwig Boltzmann and Josiah Gibbs, which would later have an influence on Albert Einstein's mathematical model for Brownian movement.[257]
Measure theory and probability theory
At the International Congress of Mathematicians in Paris in 1900, David Hilbert presented a list of mathematical problems, where his sixth problem asked for a mathematical treatment of physics and probability involving axioms.[249] Around the start of the 20th century, mathematicians developed measure theory, a branch of mathematics for studying integrals of mathematical functions, where two of the founders were French mathematicians, Henri Lebesgue and Émile Borel. In 1925 another French mathematician Paul Lévy published the first probability book that used ideas from measure theory.[249]
In the 1920s, fundamental contributions to probability theory were made in the Soviet Union by mathematicians such as Sergei Bernstein, Aleksandr Khinchin,[lower-alpha 7] and Andrei Kolmogorov.[252] Kolmogorov published in 1929 his first attempt at presenting a mathematical foundation, based on measure theory, for probability theory.[258] In the early 1930s Khinchin and Kolmogorov set up probability seminars, which were attended by researchers such as Eugene Slutsky and Nikolai Smirnov,[259] and Khinchin gave the first mathematical definition of a stochastic process as a set of random variables indexed by the real line.[63][260][lower-alpha 8]
Birth of modern probability theory
In 1933 Andrei Kolmogorov published, in German, his book on the foundations of probability theory titled Grundbegriffe der Wahrscheinlichkeitsrechnung,[lower-alpha 9] where Kolmogorov used measure theory to develop an axiomatic framework for probability theory. The publication of this book is now widely considered to be the birth of modern probability theory, when the theories of probability and stochastic processes became parts of mathematics.[249][252]
After the publication of Kolmogorov's book, further fundamental work on probability theory and stochastic processes was done by Khinchin and Kolmogorov as well as other mathematicians such as Joseph Doob, William Feller, Maurice Fréchet, Paul Lévy, Wolfgang Doeblin, and Harald Cramér.[249][252] Decades later Cramér referred to the 1930s as the "heroic period of mathematical probability theory".[252] World War II greatly interrupted the development of probability theory, causing, for example, the migration of Feller from Sweden to the United States of America[252] and the death of Doeblin, considered now a pioneer in stochastic processes.[262]
Stochastic processes after World War II
After World War II the study of probability theory and stochastic processes gained more attention from mathematicians, with significant contributions made in many areas of probability and mathematics as well as the creation of new areas.[252][265] Starting in the 1940s, Kiyosi Itô published papers developing the field of stochastic calculus, which involves stochastic integrals and stochastic differential equations based on the Wiener or Brownian motion process.[266]
Also starting in the 1940s, connections were made between stochastic processes, particularly martingales, and the mathematical field of potential theory, with early ideas by Shizuo Kakutani and then later work by Joseph Doob.[265] Further work, considered pioneering, was done by Gilbert Hunt in the 1950s, connecting Markov processes and potential theory, which had a significant effect on the theory of Lévy processes and led to more interest in studying Markov processes with methods developed by Itô.[21][267][268]
In 1953 Doob published his book Stochastic processes, which had a strong influence on the theory of stochastic processes and stressed the importance of measure theory in probability.[265][264] Doob also chiefly developed the theory of martingales, with later substantial contributions by Paul-André Meyer. Earlier work had been carried out by Sergei Bernstein, Paul Lévy and Jean Ville, the latter adopting the term martingale for the stochastic process.[269][270] Methods from the theory of martingales became popular for solving various probability problems. Techniques and theory were developed to study Markov processes and then applied to martingales. Conversely, methods from the theory of martingales were established to treat Markov processes.[265]
Other fields of probability were developed and used to study stochastic processes, with one main approach being the theory of large deviations.[265] The theory has many applications in statistical physics, among other fields, and has core ideas going back to at least the 1930s. Later in the 1960s and 1970s fundamental work was done by Alexander Wentzell in the Soviet Union and Monroe D. Donsker and Srinivasa Varadhan in the United States of America,[271] which would later result in Varadhan winning the 2007 Abel Prize.[272] In the 1990s and 2000s the theories of Schramm–Loewner evolution[273] and rough paths[142] were introduced and developed to study stochastic processes and other mathematical objects in probability theory, which respectively resulted in Fields Medals being awarded to Wendelin Werner[274] in 2008 and to Martin Hairer in 2014.[275]
The theory of stochastic processes still continues to be a focus of research, with yearly international conferences on the topic of stochastic processes.[45][225]
Discoveries of specific stochastic processes
Although Khinchin gave mathematical definitions of stochastic processes in the 1930s,[63][260] specific stochastic processes had already been discovered in different settings, such as the Brownian motion process and the Poisson process.[21][24] Some families of stochastic processes such as point processes or renewal processes have long and complex histories, stretching back centuries.[276]
Bernoulli process
The Bernoulli process, which can serve as a mathematical model for flipping a biased coin, is possibly the first stochastic process to have been studied.[81] The process is a sequence of independent Bernoulli trials,[82] which are named after Jakob Bernoulli, who used them to study games of chance, including probability problems proposed and studied earlier by Christiaan Huygens.[277] Bernoulli's work, including the Bernoulli process, was published in his book Ars Conjectandi in 1713.[278]
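As an illustrative sketch (not from the source), a Bernoulli process with a hypothetical bias p can be simulated as a sequence of independent trials:

```python
import random

def bernoulli_process(p, n, seed=0):
    """Simulate n independent Bernoulli(p) trials: 1 = success, 0 = failure."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

# A biased coin that lands heads with probability 0.3 (illustrative value).
trials = bernoulli_process(p=0.3, n=10000)
print(sum(trials) / len(trials))  # empirical success frequency, close to 0.3
```

By the law of large numbers, the empirical frequency of successes approaches p as the number of trials grows.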
Random walks
In 1905 Karl Pearson coined the term random walk while posing a problem describing a random walk on the plane, which was motivated by an application in biology, but such problems involving random walks had already been studied in other fields. Certain gambling problems that were studied centuries earlier can be considered as problems involving random walks.[89][278] For example, the problem known as the Gambler's ruin is based on a simple random walk,[195][279] and is an example of a random walk with absorbing barriers.[242][280] Pascal, Fermat and Huygens all gave numerical solutions to this problem without detailing their methods,[281] and then more detailed solutions were presented by Jakob Bernoulli and Abraham de Moivre.[282]
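The Gambler's ruin problem admits a short numerical sketch. The code below (an illustration with hypothetical stakes, not the historical method) solves the first-step recurrence for the ruin probability of a simple random walk with absorbing barriers at 0 and at the goal:

```python
def ruin_probability(start, goal, p, sweeps=5000):
    """Probability of hitting 0 before `goal`, starting with `start` units,
    when each round is won (+1 unit) with probability p and lost (-1) otherwise.
    Solved by iterating the first-step recurrence u_k = (1-p)u_{k-1} + p u_{k+1}."""
    u = [0.0] * (goal + 1)
    u[0] = 1.0  # absorbed at 0: ruin is certain
    for _ in range(sweeps):
        for k in range(1, goal):
            u[k] = (1 - p) * u[k - 1] + p * u[k + 1]
    return u[start]

# Fair game: the classical answer from 3 units aiming for 10 is 1 - 3/10 = 0.7.
print(ruin_probability(3, 10, 0.5))
```

For a fair game the ruin probability from i out of N units is 1 - i/N; for a biased game with r = (1-p)/p it is (r^i - r^N)/(1 - r^N), and the iterative solution matches both closed forms.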
For random walks in $n$-dimensional integer lattices, George Pólya published, in 1919 and 1921, work where he studied the probability of a symmetric random walk returning to a previous position in the lattice. Pólya showed that a symmetric random walk, which has an equal probability to advance in any direction in the lattice, will return to a previous position in the lattice an infinite number of times with probability one in one and two dimensions, but with probability zero in three or higher dimensions.[283][284]
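Pólya's dichotomy can be glimpsed by Monte Carlo simulation. The sketch below (illustrative parameters, not Pólya's method) estimates how often a symmetric lattice walk returns to the origin within a fixed step budget, contrasting one and three dimensions:

```python
import random

def return_frequency(dim, steps=1000, walks=1000, seed=1):
    """Fraction of symmetric lattice walks that revisit the origin within `steps` steps."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(walks):
        pos = [0] * dim
        for _ in range(steps):
            axis = rng.randrange(dim)          # pick a coordinate direction
            pos[axis] += rng.choice((-1, 1))   # step one unit along it
            if all(c == 0 for c in pos):
                returned += 1
                break
    return returned / walks

# In 1D the return probability is 1; in 3D it is roughly 0.34 (Polya's constant),
# so the 1D frequency is near 1 while the 3D frequency stays well below it.
print(return_frequency(1), return_frequency(3))
```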
Wiener process
The Wiener process or Brownian motion process has its origins in different fields including statistics, finance and physics.[21] In 1880, Danish astronomer Thorvald Thiele wrote a paper on the method of least squares, where he used the process to study the errors of a model in time-series analysis.[285][286][287] The work is now considered as an early discovery of the statistical method known as Kalman filtering, but the work was largely overlooked. It is thought that the ideas in Thiele's paper were too advanced to have been understood by the broader mathematical and statistical community at the time.[287]
The French mathematician Louis Bachelier used a Wiener process in his 1900 thesis[288][289] in order to model price changes on the Paris Bourse, a stock exchange,[290] without knowing the work of Thiele.[21] It has been speculated that Bachelier drew ideas from the random walk model of Jules Regnault, but Bachelier did not cite him,[291] and Bachelier's thesis is now considered pioneering in the field of financial mathematics.[290][291]
It is commonly thought that Bachelier's work gained little attention and was forgotten for decades until it was rediscovered in the 1950s by Leonard Savage, and then became more popular after Bachelier's thesis was translated into English in 1964. But the work was never forgotten in the mathematical community, as Bachelier published a book in 1912 detailing his ideas,[291] which was cited by mathematicians including Doob, Feller[291] and Kolmogorov.[21] The book continued to be cited, but starting in the 1960s the original thesis by Bachelier began to be cited more than his book when economists started citing Bachelier's work.[291]
In 1905 Albert Einstein published a paper where he studied the physical observation of Brownian motion or movement to explain the seemingly random movements of particles in liquids by using ideas from the kinetic theory of gases. Einstein derived a differential equation, known as a diffusion equation, for describing the probability of finding a particle in a certain region of space. Shortly after Einstein's first paper on Brownian movement, Marian Smoluchowski published work where he cited Einstein, but wrote that he had independently derived the equivalent results by using a different method.[292]
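In one spatial dimension, the diffusion equation Einstein derived can be written in the standard form (assuming a constant diffusion coefficient $D$):

$$\frac{\partial \rho(x,t)}{\partial t} = D\,\frac{\partial^{2} \rho(x,t)}{\partial x^{2}},$$

where $\rho(x,t)$ is the probability density for finding the particle at position $x$ at time $t$.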
Einstein's work, as well as experimental results obtained by Jean Perrin, later inspired Norbert Wiener in the 1920s[293] to use a type of measure theory, developed by Percy Daniell, and Fourier analysis to prove the existence of the Wiener process as a mathematical object.[21]
Poisson process
The Poisson process is named after Siméon Poisson, due to its definition involving the Poisson distribution, but Poisson never studied the process.[22][294] There are a number of claims for early uses or discoveries of the Poisson process.[22][24] At the beginning of the 20th century the Poisson process would arise independently in different situations.[22][24] In Sweden in 1903, Filip Lundberg published a thesis containing work, now considered fundamental and pioneering, in which he proposed to model insurance claims with a homogeneous Poisson process.[295][296]
Another discovery occurred in Denmark in 1909 when A.K. Erlang derived the Poisson distribution when developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang was not at the time aware of Poisson's earlier work and assumed that the numbers of phone calls arriving in each interval of time were independent of each other. He then found the limiting case, effectively recasting the Poisson distribution as a limit of the binomial distribution.[22]
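The binomial-to-Poisson limit that Erlang relied on can be checked numerically. The following sketch (with a hypothetical rate of 4 calls per interval) compares the two probability mass functions as the interval is split into ever more sub-intervals:

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """Probability of k successes in n independent trials of probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Probability of k events under a Poisson distribution with mean lam."""
    return exp(-lam) * lam**k / factorial(k)

lam = 4.0  # hypothetical expected number of calls per interval
for n in (10, 100, 1000):
    # Split the interval into n sub-intervals, each with call probability lam/n.
    max_err = max(abs(binom_pmf(k, n, lam / n) - poisson_pmf(k, lam))
                  for k in range(11))
    print(n, max_err)  # the gap shrinks as n grows
```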
In 1910 Ernest Rutherford and Hans Geiger published experimental results on counting alpha particles. Motivated by their work, Harry Bateman studied the counting problem and derived Poisson probabilities as a solution to a family of differential equations, resulting in the independent discovery of the Poisson process.[22] After this time there were many studies and applications of the Poisson process, but its early history is complicated, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and various physical scientists.[22]
Markov processes
Markov processes and Markov chains are named after Andrey Markov who studied Markov chains in the early 20th century.[297] Markov was interested in studying an extension of independent random sequences.[297] In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption,[298][299][300][301] which had been commonly regarded as a requirement for such mathematical laws to hold.[301] Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains.[298][299]
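Markov's convergence result can be illustrated numerically (with a hypothetical two-state chain, not Markov's original example): iterating the transition matrix drives any starting distribution to the same fixed vector.

```python
def step(dist, P):
    """One step of a Markov chain: multiply a distribution row-vector by P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# A hypothetical two-state transition matrix (each row sums to 1).
P = [[0.9, 0.1],
     [0.5, 0.5]]

dist = [1.0, 0.0]  # start in state 0 with certainty
for _ in range(100):
    dist = step(dist, P)
print(dist)  # converges to the stationary vector (5/6, 1/6)
```

Any other starting distribution converges to the same vector $\pi = (5/6, 1/6)$, which satisfies $\pi P = \pi$.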
In 1912 Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov.[299][300] After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé.[302] Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.[299][303]
Andrei Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes.[252][258] Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement.[258][304] He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes.[258][305] Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement.[306] The differential equations are now called the Kolmogorov equations[307] or the Kolmogorov–Chapman equations.[308] Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s.[252]
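For a Markov process with a transition density $p$, the Chapman–Kolmogorov equation can be written in a standard form (for times $s < u < t$):

$$p(s,x;t,y) = \int p(s,x;u,z)\,p(u,z;t,y)\,dz,$$

which expresses that a transition from state $x$ at time $s$ to state $y$ at time $t$ can be decomposed over all intermediate states $z$ at time $u$.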
Lévy processes
Lévy processes such as the Wiener process and the Poisson process (on the real line) are named after Paul Lévy who started studying them in the 1930s,[225] but they have connections to infinitely divisible distributions going back to the 1920s.[224] In a 1932 paper Kolmogorov derived a characteristic function for random variables associated with Lévy processes. This result was later derived under more general conditions by Lévy in 1934, and then Khinchin independently gave an alternative form for this characteristic function in 1937.[252][309] In addition to Lévy, Khinchin and Kolmogorov, early fundamental contributions to the theory of Lévy processes were made by Bruno de Finetti and Kiyosi Itô.[224]
Mathematical construction
In mathematics, constructions of mathematical objects are needed, which is also the case for stochastic processes, to prove that they exist mathematically.[57] There are two main approaches for constructing a stochastic process. One approach involves considering a measurable space of functions, defining a suitable measurable mapping from a probability space to this measurable space of functions, and then deriving the corresponding finite-dimensional distributions.[310]
Another approach involves defining a collection of random variables to have specific finite-dimensional distributions, and then using Kolmogorov's existence theorem[lower-alpha 10] to prove a corresponding stochastic process exists.[57][310] This theorem, which is an existence theorem for measures on infinite product spaces,[314] says that if any finite-dimensional distributions satisfy two conditions, known as consistency conditions, then there exists a stochastic process with those finite-dimensional distributions.[57]
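In a standard formulation, for finite-dimensional distributions $\mu_{t_1,\dots,t_n}$ on a state space $S$, the two consistency conditions read:

$$\mu_{t_{\pi(1)},\dots,t_{\pi(n)}}(A_{\pi(1)}\times\cdots\times A_{\pi(n)}) = \mu_{t_1,\dots,t_n}(A_1\times\cdots\times A_n)$$

for every permutation $\pi$ of $\{1,\dots,n\}$, and

$$\mu_{t_1,\dots,t_{n-1}}(A_1\times\cdots\times A_{n-1}) = \mu_{t_1,\dots,t_n}(A_1\times\cdots\times A_{n-1}\times S),$$

so that permuting the time indices permutes the coordinates accordingly, and removing a time index corresponds to integrating out the matching coordinate.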
Construction issues
When constructing continuous-time stochastic processes certain mathematical difficulties arise, due to the uncountable index sets, which do not occur with discrete-time processes.[58][59] One problem is that it is possible to have more than one stochastic process with the same finite-dimensional distributions. For example, both the left-continuous modification and the right-continuous modification of a Poisson process have the same finite-dimensional distributions.[315] This means that the distribution of the stochastic process does not necessarily specify uniquely the properties of the sample functions of the stochastic process.[310][316]
Another problem is that functionals of continuous-time process that rely upon an uncountable number of points of the index set may not be measurable, so the probabilities of certain events may not be well-defined.[168] For example, the supremum of a stochastic process or random field is not necessarily a well-defined random variable.[30][59] For a continuous-time stochastic process $X$, other characteristics that depend on an uncountable number of points of the index set $T$ include:[168]
• a sample function of a stochastic process $X$ is a continuous function of $t\in T$;
• a sample function of a stochastic process $X$ is a bounded function of $t\in T$; and
• a sample function of a stochastic process $X$ is an increasing function of $t\in T$.
To overcome these two difficulties, different assumptions and approaches are possible.[69]
Resolving construction issues
One approach for avoiding mathematical construction issues of stochastic processes, proposed by Joseph Doob, is to assume that the stochastic process is separable.[317] Separability ensures that infinite-dimensional distributions determine the properties of sample functions by requiring that sample functions are essentially determined by their values on a dense countable set of points in the index set.[318] Furthermore, if a stochastic process is separable, then functionals of an uncountable number of points of the index set are measurable and their probabilities can be studied.[168][318]
Another approach is possible, originally developed by Anatoliy Skorokhod and Andrei Kolmogorov,[319] for a continuous-time stochastic process with any metric space as its state space. For the construction of such a stochastic process, it is assumed that the sample functions of the stochastic process belong to some suitable function space, usually the Skorokhod space consisting of all right-continuous functions with left limits. This approach is now used more often than the separability assumption,[69][263] and a stochastic process constructed this way is automatically separable.[320]
Although less used, the separability assumption is considered more general because every stochastic process has a separable version.[263] It is also used when it is not possible to construct a stochastic process in a Skorokhod space.[173] For example, separability is assumed when constructing and studying random fields, where the collection of random variables is now indexed by sets other than the real line such as $n$-dimensional Euclidean space.[30][321]
See also
• List of stochastic processes topics
• Covariance function
• Deterministic system
• Dynamics of Markovian particles
• Entropy rate (for a stochastic process)
• Ergodic process
• Gillespie algorithm
• Interacting particle system
• Law (stochastic processes)
• Markov chain
• Stochastic cellular automaton
• Random field
• Randomness
• Stationary process
• Statistical model
• Stochastic calculus
• Stochastic control
• Stochastic parrot
• Stochastic processes and boundary value problems
Notes
1. The term Brownian motion can refer to the physical process, also known as Brownian movement, and the stochastic process, a mathematical object, but to avoid ambiguity this article uses the terms Brownian motion process or Wiener process for the latter in a style similar to, for example, Gikhman and Skorokhod[19] or Rosenblatt.[20]
2. The term "separable" appears twice here with two different meanings, where the first meaning is from probability and the second from topology and analysis. For a stochastic process to be separable (in a probabilistic sense), its index set must be a separable space (in a topological or analytic sense), in addition to other conditions.[136]
3. The definition of separability for a continuous-time real-valued stochastic process can be stated in other ways.[172][173]
4. In the context of point processes, the term "state space" can mean the space on which the point process is defined such as the real line,[234][235] which corresponds to the index set in stochastic process terminology.
5. Also known as James or Jacques Bernoulli.[245]
6. It has been remarked that a notable exception was the St Petersburg School in Russia, where mathematicians led by Chebyshev studied probability theory.[250]
7. The name Khinchin is also written in (or transliterated into) English as Khintchine.[63]
8. Doob, when citing Khinchin, uses the term 'chance variable', which used to be an alternative term for 'random variable'.[261]
9. Later translated into English and published in 1950 as Foundations of the Theory of Probability[249]
10. The theorem has other names including Kolmogorov's consistency theorem,[311] Kolmogorov's extension theorem[312] or the Daniell–Kolmogorov theorem.[313]
References
1. Joseph L. Doob (1990). Stochastic processes. Wiley. pp. 46, 47.
2. L. C. G. Rogers; David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press. p. 1. ISBN 978-1-107-71749-7.
3. J. Michael Steele (2012). Stochastic Calculus and Financial Applications. Springer Science & Business Media. p. 29. ISBN 978-1-4684-9305-4.
4. Emanuel Parzen (2015). Stochastic Processes. Courier Dover Publications. pp. 7, 8. ISBN 978-0-486-79688-8.
5. Iosif Ilyich Gikhman; Anatoly Vladimirovich Skorokhod (1969). Introduction to the Theory of Random Processes. Courier Corporation. p. 1. ISBN 978-0-486-69387-3.
6. Bressloff, Paul C. (2014). Stochastic Processes in Cell Biology. Springer. ISBN 978-3-319-08488-6.
7. Van Kampen, N. G. (2011). Stochastic Processes in Physics and Chemistry. Elsevier. ISBN 978-0-08-047536-3.
8. Lande, Russell; Engen, Steinar; Sæther, Bernt-Erik (2003). Stochastic Population Dynamics in Ecology and Conservation. Oxford University Press. ISBN 978-0-19-852525-7.
9. Laing, Carlo; Lord, Gabriel J. (2010). Stochastic Methods in Neuroscience. Oxford University Press. ISBN 978-0-19-923507-0.
10. Paul, Wolfgang; Baschnagel, Jörg (2013). Stochastic Processes: From Physics to Finance. Springer Science+Business Media. ISBN 978-3-319-00327-6.
11. Dougherty, Edward R. (1999). Random processes for image and signal processing. SPIE Optical Engineering Press. ISBN 978-0-8194-2513-3.
12. Bertsekas, Dimitri P. (1996). Stochastic Optimal Control: The Discrete-Time Case. Athena Scientific. ISBN 1-886529-03-5.
13. Thomas M. Cover; Joy A. Thomas (2012). Elements of Information Theory. John Wiley & Sons. p. 71. ISBN 978-1-118-58577-1.
14. Baron, Michael (2015). Probability and Statistics for Computer Scientists (2nd ed.). CRC Press. p. 131. ISBN 978-1-4987-6060-7.
15. Baccelli, François; Blaszczyszyn, Bartlomiej (2009). Stochastic Geometry and Wireless Networks. Now Publishers Inc. ISBN 978-1-60198-264-3.
16. Steele, J. Michael (2001). Stochastic Calculus and Financial Applications. Springer Science+Business Media. ISBN 978-0-387-95016-7.
17. Musiela, Marek; Rutkowski, Marek (2006). Martingale Methods in Financial Modelling. Springer Science+Business Media. ISBN 978-3-540-26653-2.
18. Shreve, Steven E. (2004). Stochastic Calculus for Finance II: Continuous-Time Models. Springer Science+Business Media. ISBN 978-0-387-40101-0.
19. Iosif Ilyich Gikhman; Anatoly Vladimirovich Skorokhod (1969). Introduction to the Theory of Random Processes. Courier Corporation. ISBN 978-0-486-69387-3.
20. Murray Rosenblatt (1962). Random Processes. Oxford University Press.
21. Jarrow, Robert; Protter, Philip (2004). "A short history of stochastic integration and mathematical finance: the early years, 1880–1970". A Festschrift for Herman Rubin. Institute of Mathematical Statistics Lecture Notes - Monograph Series. pp. 75–80. CiteSeerX 10.1.1.114.632. doi:10.1214/lnms/1196285381. ISBN 978-0-940600-61-4. ISSN 0749-2170.
22. Stirzaker, David (2000). "Advice to Hedgehogs, or, Constants Can Vary". The Mathematical Gazette. 84 (500): 197–210. doi:10.2307/3621649. ISSN 0025-5572. JSTOR 3621649. S2CID 125163415.
23. Donald L. Snyder; Michael I. Miller (2012). Random Point Processes in Time and Space. Springer Science & Business Media. p. 32. ISBN 978-1-4612-3166-0.
24. Guttorp, Peter; Thorarinsdottir, Thordis L. (2012). "What Happened to Discrete Chaos, the Quenouille Process, and the Sharp Markov Property? Some History of Stochastic Point Processes". International Statistical Review. 80 (2): 253–268. doi:10.1111/j.1751-5823.2012.00181.x. ISSN 0306-7734. S2CID 80836.
25. Gusak, Dmytro; Kukush, Alexander; Kulik, Alexey; Mishura, Yuliya; Pilipenko, Andrey (2010). Theory of Stochastic Processes: With Applications to Financial Mathematics and Risk Theory. Springer Science & Business Media. p. 21. ISBN 978-0-387-87862-1.
26. Valeriy Skorokhod (2005). Basic Principles and Applications of Probability Theory. Springer Science & Business Media. p. 42. ISBN 978-3-540-26312-8.
27. Olav Kallenberg (2002). Foundations of Modern Probability. Springer Science & Business Media. pp. 24–25. ISBN 978-0-387-95313-7.
28. John Lamperti (1977). Stochastic processes: a survey of the mathematical theory. Springer-Verlag. pp. 1–2. ISBN 978-3-540-90275-1.
29. Loïc Chaumont; Marc Yor (2012). Exercises in Probability: A Guided Tour from Measure Theory to Random Processes, Via Conditioning. Cambridge University Press. p. 175. ISBN 978-1-107-60655-5.
30. Robert J. Adler; Jonathan E. Taylor (2009). Random Fields and Geometry. Springer Science & Business Media. pp. 7–8. ISBN 978-0-387-48116-6.
31. Gregory F. Lawler; Vlada Limic (2010). Random Walk: A Modern Introduction. Cambridge University Press. ISBN 978-1-139-48876-1.
32. David Williams (1991). Probability with Martingales. Cambridge University Press. ISBN 978-0-521-40605-5.
33. L. C. G. Rogers; David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press. ISBN 978-1-107-71749-7.
34. David Applebaum (2004). Lévy Processes and Stochastic Calculus. Cambridge University Press. ISBN 978-0-521-83263-2.
35. Mikhail Lifshits (2012). Lectures on Gaussian Processes. Springer Science & Business Media. ISBN 978-3-642-24939-6.
36. Robert J. Adler (2010). The Geometry of Random Fields. SIAM. ISBN 978-0-89871-693-1.
37. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. ISBN 978-0-08-057041-9.
38. Bruce Hajek (2015). Random Processes for Engineers. Cambridge University Press. ISBN 978-1-316-24124-0.
39. G. Latouche; V. Ramaswami (1999). Introduction to Matrix Analytic Methods in Stochastic Modeling. SIAM. ISBN 978-0-89871-425-8.
40. D.J. Daley; David Vere-Jones (2007). An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure. Springer Science & Business Media. ISBN 978-0-387-21337-8.
41. Patrick Billingsley (2008). Probability and Measure. Wiley India Pvt. Limited. ISBN 978-81-265-1771-8.
42. Pierre Brémaud (2014). Fourier Analysis and Stochastic Processes. Springer. ISBN 978-3-319-09590-5.
43. Adam Bobrowski (2005). Functional Analysis for Probability and Stochastic Processes: An Introduction. Cambridge University Press. ISBN 978-0-521-83166-6.
44. Applebaum, David (2004). "Lévy processes: From probability to finance and quantum groups". Notices of the AMS. 51 (11): 1336–1347.
45. Jochen Blath; Peter Imkeller; Sylvie Roelly (2011). Surveys in Stochastic Processes. European Mathematical Society. ISBN 978-3-03719-072-2.
46. Michel Talagrand (2014). Upper and Lower Bounds for Stochastic Processes: Modern Methods and Classical Problems. Springer Science & Business Media. pp. 4–. ISBN 978-3-642-54075-2.
47. Paul C. Bressloff (2014). Stochastic Processes in Cell Biology. Springer. pp. vii–ix. ISBN 978-3-319-08488-6.
48. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. p. 27. ISBN 978-0-08-057041-9.
49. Applebaum, David (2004). "Lévy processes: From probability to finance and quantum groups". Notices of the AMS. 51 (11): 1337.
50. L. C. G. Rogers; David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press. pp. 121–124. ISBN 978-1-107-71749-7.
51. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. pp. 294, 295. ISBN 978-1-118-59320-2.
52. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. p. 26. ISBN 978-0-08-057041-9.
53. Donald L. Snyder; Michael I. Miller (2012). Random Point Processes in Time and Space. Springer Science & Business Media. pp. 24, 25. ISBN 978-1-4612-3166-0.
54. Patrick Billingsley (2008). Probability and Measure. Wiley India Pvt. Limited. p. 482. ISBN 978-81-265-1771-8.
55. Alexander A. Borovkov (2013). Probability Theory. Springer Science & Business Media. p. 527. ISBN 978-1-4471-5201-9.
56. Pierre Brémaud (2014). Fourier Analysis and Stochastic Processes. Springer. p. 120. ISBN 978-3-319-09590-5.
57. Jeffrey S Rosenthal (2006). A First Look at Rigorous Probability Theory. World Scientific Publishing Co Inc. pp. 177–178. ISBN 978-981-310-165-4.
58. Peter E. Kloeden; Eckhard Platen (2013). Numerical Solution of Stochastic Differential Equations. Springer Science & Business Media. p. 63. ISBN 978-3-662-12616-5.
59. Davar Khoshnevisan (2006). Multiparameter Processes: An Introduction to Random Fields. Springer Science & Business Media. pp. 153–155. ISBN 978-0-387-21631-7.
60. "Stochastic". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.)
61. O. B. Sheĭnin (2006). Theory of probability and statistics as exemplified in short dictums. NG Verlag. p. 5. ISBN 978-3-938417-40-9.
62. Oscar Sheynin; Heinrich Strecker (2011). Alexandr A. Chuprov: Life, Work, Correspondence. V&R unipress GmbH. p. 136. ISBN 978-3-89971-812-6.
63. Doob, Joseph (1934). "Stochastic Processes and Statistics". Proceedings of the National Academy of Sciences of the United States of America. 20 (6): 376–379. Bibcode:1934PNAS...20..376D. doi:10.1073/pnas.20.6.376. PMC 1076423. PMID 16587907.
64. Khintchine, A. (1934). "Korrelationstheorie der stationeren stochastischen Prozesse". Mathematische Annalen. 109 (1): 604–615. doi:10.1007/BF01449156. ISSN 0025-5831. S2CID 122842868.
65. Kolmogoroff, A. (1931). "Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung". Mathematische Annalen. 104 (1): 1. doi:10.1007/BF01457949. ISSN 0025-5831. S2CID 119439925.
66. "Random". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.)
67. Bert E. Fristedt; Lawrence F. Gray (2013). A Modern Approach to Probability Theory. Springer Science & Business Media. p. 580. ISBN 978-1-4899-2837-5.
68. L. C. G. Rogers; David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press. pp. 121, 122. ISBN 978-1-107-71749-7.
69. Søren Asmussen (2003). Applied Probability and Queues. Springer Science & Business Media. p. 408. ISBN 978-0-387-00211-8.
70. David Stirzaker (2005). Stochastic Processes and Models. Oxford University Press. p. 45. ISBN 978-0-19-856814-8.
71. Murray Rosenblatt (1962). Random Processes. Oxford University Press. p. 91.
72. John A. Gubner (2006). Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press. p. 383. ISBN 978-1-139-45717-0.
73. Kiyosi Itō (2006). Essentials of Stochastic Processes. American Mathematical Soc. p. 13. ISBN 978-0-8218-3898-3.
74. M. Loève (1978). Probability Theory II. Springer Science & Business Media. p. 163. ISBN 978-0-387-90262-3.
75. Pierre Brémaud (2014). Fourier Analysis and Stochastic Processes. Springer. p. 133. ISBN 978-3-319-09590-5.
76. Gusak et al. (2010), p. 1
77. Richard F. Bass (2011). Stochastic Processes. Cambridge University Press. p. 1. ISBN 978-1-139-50147-7.
78. John Lamperti (1977). Stochastic processes: a survey of the mathematical theory. Springer-Verlag. p. 3. ISBN 978-3-540-90275-1.
79. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. p. 55. ISBN 978-1-86094-555-7.
80. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 293. ISBN 978-1-118-59320-2.
81. Florescu, Ionut (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 301. ISBN 978-1-118-59320-2.
82. Bertsekas, Dimitri P.; Tsitsiklis, John N. (2002). Introduction to Probability. Athena Scientific. p. 273. ISBN 978-1-886529-40-3.
83. Ibe, Oliver C. (2013). Elements of Random Walk and Diffusion Processes. John Wiley & Sons. p. 11. ISBN 978-1-118-61793-9.
84. Achim Klenke (2013). Probability Theory: A Comprehensive Course. Springer. p. 347. ISBN 978-1-4471-5362-7.
85. Gregory F. Lawler; Vlada Limic (2010). Random Walk: A Modern Introduction. Cambridge University Press. p. 1. ISBN 978-1-139-48876-1.
86. Olav Kallenberg (2002). Foundations of Modern Probability. Springer Science & Business Media. p. 136. ISBN 978-0-387-95313-7.
87. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 383. ISBN 978-1-118-59320-2.
88. Rick Durrett (2010). Probability: Theory and Examples. Cambridge University Press. p. 277. ISBN 978-1-139-49113-6.
89. Weiss, George H. (2006). "Random Walks". Encyclopedia of Statistical Sciences. p. 1. doi:10.1002/0471667196.ess2180.pub2. ISBN 978-0471667193.
90. Aris Spanos (1999). Probability Theory and Statistical Inference: Econometric Modeling with Observational Data. Cambridge University Press. p. 454. ISBN 978-0-521-42408-0.
91. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. p. 81. ISBN 978-1-86094-555-7.
92. Allan Gut (2012). Probability: A Graduate Course. Springer Science & Business Media. p. 88. ISBN 978-1-4614-4708-5.
93. Geoffrey Grimmett; David Stirzaker (2001). Probability and Random Processes. OUP Oxford. p. 71. ISBN 978-0-19-857222-0.
94. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. p. 56. ISBN 978-1-86094-555-7.
95. Brush, Stephen G. (1968). "A history of random processes". Archive for History of Exact Sciences. 5 (1): 1–2. doi:10.1007/BF00328110. ISSN 0003-9519. S2CID 117623580.
96. Applebaum, David (2004). "Lévy processes: From probability to finance and quantum groups". Notices of the AMS. 51 (11): 1338.
97. Iosif Ilyich Gikhman; Anatoly Vladimirovich Skorokhod (1969). Introduction to the Theory of Random Processes. Courier Corporation. p. 21. ISBN 978-0-486-69387-3.
98. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 471. ISBN 978-1-118-59320-2.
99. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. pp. 21, 22. ISBN 978-0-08-057041-9.
100. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. p. VIII. ISBN 978-1-4612-0949-2.
101. Daniel Revuz; Marc Yor (2013). Continuous Martingales and Brownian Motion. Springer Science & Business Media. p. IX. ISBN 978-3-662-06400-9.
102. Jeffrey S Rosenthal (2006). A First Look at Rigorous Probability Theory. World Scientific Publishing Co Inc. p. 186. ISBN 978-981-310-165-4.
103. Donald L. Snyder; Michael I. Miller (2012). Random Point Processes in Time and Space. Springer Science & Business Media. p. 33. ISBN 978-1-4612-3166-0.
104. J. Michael Steele (2012). Stochastic Calculus and Financial Applications. Springer Science & Business Media. p. 118. ISBN 978-1-4684-9305-4.
105. Peter Mörters; Yuval Peres (2010). Brownian Motion. Cambridge University Press. pp. 1, 3. ISBN 978-1-139-48657-6.
106. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. p. 78. ISBN 978-1-4612-0949-2.
107. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. p. 61. ISBN 978-1-4612-0949-2.
108. Steven E. Shreve (2004). Stochastic Calculus for Finance II: Continuous-Time Models. Springer Science & Business Media. p. 93. ISBN 978-0-387-40101-0.
109. Olav Kallenberg (2002). Foundations of Modern Probability. Springer Science & Business Media. pp. 225, 260. ISBN 978-0-387-95313-7.
110. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. p. 70. ISBN 978-1-4612-0949-2.
111. Peter Mörters; Yuval Peres (2010). Brownian Motion. Cambridge University Press. p. 131. ISBN 978-1-139-48657-6.
112. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. ISBN 978-1-86094-555-7.
113. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. ISBN 978-1-4612-0949-2.
114. Applebaum, David (2004). "Lévy processes: From probability to finance and quantum groups". Notices of the AMS. 51 (11): 1341.
115. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. p. 340. ISBN 978-0-08-057041-9.
116. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. p. 124. ISBN 978-1-86094-555-7.
117. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. p. 47. ISBN 978-1-4612-0949-2.
118. Ubbo F. Wiersema (2008). Brownian Motion Calculus. John Wiley & Sons. p. 2. ISBN 978-0-470-02171-2.
119. Henk C. Tijms (2003). A First Course in Stochastic Models. Wiley. pp. 1, 2. ISBN 978-0-471-49881-0.
120. D.J. Daley; D. Vere-Jones (2006). An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Springer Science & Business Media. pp. 19–36. ISBN 978-0-387-21564-8.
121. Mark A. Pinsky; Samuel Karlin (2011). An Introduction to Stochastic Modeling. Academic Press. p. 241. ISBN 978-0-12-381416-6.
122. J. F. C. Kingman (1992). Poisson Processes. Clarendon Press. p. 38. ISBN 978-0-19-159124-2.
123. D.J. Daley; D. Vere-Jones (2006). An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Springer Science & Business Media. p. 19. ISBN 978-0-387-21564-8.
124. J. F. C. Kingman (1992). Poisson Processes. Clarendon Press. p. 22. ISBN 978-0-19-159124-2.
125. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. pp. 118, 119. ISBN 978-0-08-057041-9.
126. Leonard Kleinrock (1976). Queueing Systems: Theory. Wiley. p. 61. ISBN 978-0-471-49110-1.
127. Murray Rosenblatt (1962). Random Processes. Oxford University Press. p. 94.
128. Martin Haenggi (2013). Stochastic Geometry for Wireless Networks. Cambridge University Press. pp. 10, 18. ISBN 978-1-107-01469-5.
129. Sung Nok Chiu; Dietrich Stoyan; Wilfrid S. Kendall; Joseph Mecke (2013). Stochastic Geometry and Its Applications. John Wiley & Sons. pp. 41, 108. ISBN 978-1-118-65825-3.
130. J. F. C. Kingman (1992). Poisson Processes. Clarendon Press. p. 11. ISBN 978-0-19-159124-2.
131. Roy L. Streit (2010). Poisson Point Processes: Imaging, Tracking, and Sensing. Springer Science & Business Media. p. 1. ISBN 978-1-4419-6923-1.
132. J. F. C. Kingman (1992). Poisson Processes. Clarendon Press. p. v. ISBN 978-0-19-159124-2.
133. Alexander A. Borovkov (2013). Probability Theory. Springer Science & Business Media. p. 528. ISBN 978-1-4471-5201-9.
134. Georg Lindgren; Holger Rootzen; Maria Sandsten (2013). Stationary Stochastic Processes for Scientists and Engineers. CRC Press. p. 11. ISBN 978-1-4665-8618-5.
135. Aumann, Robert (December 1961). "Borel structures for function spaces". Illinois Journal of Mathematics. 5 (4). doi:10.1215/ijm/1255631584. S2CID 117171116.
136. Valeriy Skorokhod (2005). Basic Principles and Applications of Probability Theory. Springer Science & Business Media. pp. 93, 94. ISBN 978-3-540-26312-8.
137. Donald L. Snyder; Michael I. Miller (2012). Random Point Processes in Time and Space. Springer Science & Business Media. p. 25. ISBN 978-1-4612-3166-0.
138. Valeriy Skorokhod (2005). Basic Principles and Applications of Probability Theory. Springer Science & Business Media. p. 104. ISBN 978-3-540-26312-8.
139. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 296. ISBN 978-1-118-59320-2.
140. Patrick Billingsley (2008). Probability and Measure. Wiley India Pvt. Limited. p. 493. ISBN 978-81-265-1771-8.
141. Bernt Øksendal (2003). Stochastic Differential Equations: An Introduction with Applications. Springer Science & Business Media. p. 10. ISBN 978-3-540-04758-2.
142. Peter K. Friz; Nicolas B. Victoir (2010). Multidimensional Stochastic Processes as Rough Paths: Theory and Applications. Cambridge University Press. p. 571. ISBN 978-1-139-48721-4.
143. Sidney I. Resnick (2013). Adventures in Stochastic Processes. Springer Science & Business Media. pp. 40–41. ISBN 978-1-4612-0387-2.
144. Ward Whitt (2006). Stochastic-Process Limits: An Introduction to Stochastic-Process Limits and Their Application to Queues. Springer Science & Business Media. p. 23. ISBN 978-0-387-21748-2.
145. David Applebaum (2004). Lévy Processes and Stochastic Calculus. Cambridge University Press. p. 4. ISBN 978-0-521-83263-2.
146. Daniel Revuz; Marc Yor (2013). Continuous Martingales and Brownian Motion. Springer Science & Business Media. p. 10. ISBN 978-3-662-06400-9.
147. L. C. G. Rogers; David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press. p. 123. ISBN 978-1-107-71749-7.
148. John Lamperti (1977). Stochastic processes: a survey of the mathematical theory. Springer-Verlag. pp. 6, 7. ISBN 978-3-540-90275-1.
149. Iosif I. Gikhman; Anatoly Vladimirovich Skorokhod (1969). Introduction to the Theory of Random Processes. Courier Corporation. p. 4. ISBN 978-0-486-69387-3.
150. Robert J. Adler (2010). The Geometry of Random Fields. SIAM. pp. 14, 15. ISBN 978-0-89871-693-1.
151. Sung Nok Chiu; Dietrich Stoyan; Wilfrid S. Kendall; Joseph Mecke (2013). Stochastic Geometry and Its Applications. John Wiley & Sons. p. 112. ISBN 978-1-118-65825-3.
152. Joseph L. Doob (1990). Stochastic processes. Wiley. pp. 94–96.
153. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. pp. 298, 299. ISBN 978-1-118-59320-2.
154. Iosif Ilyich Gikhman; Anatoly Vladimirovich Skorokhod (1969). Introduction to the Theory of Random Processes. Courier Corporation. p. 8. ISBN 978-0-486-69387-3.
155. David Williams (1991). Probability with Martingales. Cambridge University Press. pp. 93, 94. ISBN 978-0-521-40605-5.
156. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. pp. 22–23. ISBN 978-1-86094-555-7.
157. Peter Mörters; Yuval Peres (2010). Brownian Motion. Cambridge University Press. p. 37. ISBN 978-1-139-48657-6.
158. L. C. G. Rogers; David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Cambridge University Press. p. 130. ISBN 978-1-107-71749-7.
159. Alexander A. Borovkov (2013). Probability Theory. Springer Science & Business Media. p. 530. ISBN 978-1-4471-5201-9.
160. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. p. 48. ISBN 978-1-86094-555-7.
161. Bernt Øksendal (2003). Stochastic Differential Equations: An Introduction with Applications. Springer Science & Business Media. p. 14. ISBN 978-3-540-04758-2.
162. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 472. ISBN 978-1-118-59320-2.
163. Daniel Revuz; Marc Yor (2013). Continuous Martingales and Brownian Motion. Springer Science & Business Media. pp. 18–19. ISBN 978-3-662-06400-9.
164. David Applebaum (2004). Lévy Processes and Stochastic Calculus. Cambridge University Press. p. 20. ISBN 978-0-521-83263-2.
165. Hiroshi Kunita (1997). Stochastic Flows and Stochastic Differential Equations. Cambridge University Press. p. 31. ISBN 978-0-521-59925-2.
166. Olav Kallenberg (2002). Foundations of Modern Probability. Springer Science & Business Media. p. 35. ISBN 978-0-387-95313-7.
167. Monique Jeanblanc; Marc Yor; Marc Chesney (2009). Mathematical Methods for Financial Markets. Springer Science & Business Media. p. 11. ISBN 978-1-85233-376-8.
168. Kiyosi Itō (2006). Essentials of Stochastic Processes. American Mathematical Soc. pp. 32–33. ISBN 978-0-8218-3898-3.
169. Iosif Ilyich Gikhman; Anatoly Vladimirovich Skorokhod (1969). Introduction to the Theory of Random Processes. Courier Corporation. p. 150. ISBN 978-0-486-69387-3.
170. Petar Todorovic (2012). An Introduction to Stochastic Processes and Their Applications. Springer Science & Business Media. pp. 19–20. ISBN 978-1-4613-9742-7.
171. Ilya Molchanov (2005). Theory of Random Sets. Springer Science & Business Media. p. 340. ISBN 978-1-85233-892-3.
172. Patrick Billingsley (2008). Probability and Measure. Wiley India Pvt. Limited. pp. 526–527. ISBN 978-81-265-1771-8.
173. Alexander A. Borovkov (2013). Probability Theory. Springer Science & Business Media. p. 535. ISBN 978-1-4471-5201-9.
174. Gusak et al. (2010), p. 22
175. Joseph L. Doob (1990). Stochastic processes. Wiley. p. 56.
176. Davar Khoshnevisan (2006). Multiparameter Processes: An Introduction to Random Fields. Springer Science & Business Media. p. 155. ISBN 978-0-387-21631-7.
177. Lapidoth, Amos (2009). A Foundation in Digital Communication. Cambridge University Press.
178. Kun Il Park (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
179. Ward Whitt (2006). Stochastic-Process Limits: An Introduction to Stochastic-Process Limits and Their Application to Queues. Springer Science & Business Media. pp. 78–79. ISBN 978-0-387-21748-2.
180. Gusak et al. (2010), p. 24
181. Vladimir I. Bogachev (2007). Measure Theory (Volume 2). Springer Science & Business Media. p. 53. ISBN 978-3-540-34514-5.
182. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. p. 4. ISBN 978-1-86094-555-7.
183. Søren Asmussen (2003). Applied Probability and Queues. Springer Science & Business Media. p. 420. ISBN 978-0-387-00211-8.
184. Patrick Billingsley (2013). Convergence of Probability Measures. John Wiley & Sons. p. 121. ISBN 978-1-118-62596-5.
185. Richard F. Bass (2011). Stochastic Processes. Cambridge University Press. p. 34. ISBN 978-1-139-50147-7.
186. Nicholas H. Bingham; Rüdiger Kiesel (2013). Risk-Neutral Valuation: Pricing and Hedging of Financial Derivatives. Springer Science & Business Media. p. 154. ISBN 978-1-4471-3856-3.
187. Alexander A. Borovkov (2013). Probability Theory. Springer Science & Business Media. p. 532. ISBN 978-1-4471-5201-9.
188. Davar Khoshnevisan (2006). Multiparameter Processes: An Introduction to Random Fields. Springer Science & Business Media. pp. 148–165. ISBN 978-0-387-21631-7.
189. Petar Todorovic (2012). An Introduction to Stochastic Processes and Their Applications. Springer Science & Business Media. p. 22. ISBN 978-1-4613-9742-7.
190. Ward Whitt (2006). Stochastic-Process Limits: An Introduction to Stochastic-Process Limits and Their Application to Queues. Springer Science & Business Media. p. 79. ISBN 978-0-387-21748-2.
191. Richard Serfozo (2009). Basics of Applied Stochastic Processes. Springer Science & Business Media. p. 2. ISBN 978-3-540-89332-5.
192. Y.A. Rozanov (2012). Markov Random Fields. Springer Science & Business Media. p. 58. ISBN 978-1-4613-8190-7.
193. Sheldon M. Ross (1996). Stochastic processes. Wiley. pp. 235, 358. ISBN 978-0-471-12062-9.
194. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. pp. 373, 374. ISBN 978-1-118-59320-2.
195. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. p. 49. ISBN 978-0-08-057041-9.
196. Søren Asmussen (2003). Applied Probability and Queues. Springer Science & Business Media. p. 7. ISBN 978-0-387-00211-8.
197. Emanuel Parzen (2015). Stochastic Processes. Courier Dover Publications. p. 188. ISBN 978-0-486-79688-8.
198. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. pp. 29, 30. ISBN 978-0-08-057041-9.
199. John Lamperti (1977). Stochastic processes: a survey of the mathematical theory. Springer-Verlag. pp. 106–121. ISBN 978-3-540-90275-1.
200. Sheldon M. Ross (1996). Stochastic processes. Wiley. pp. 174, 231. ISBN 978-0-471-12062-9.
201. Sean Meyn; Richard L. Tweedie (2009). Markov Chains and Stochastic Stability. Cambridge University Press. p. 19. ISBN 978-0-521-73182-9.
202. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. p. 47. ISBN 978-0-08-057041-9.
203. Reuven Y. Rubinstein; Dirk P. Kroese (2011). Simulation and the Monte Carlo Method. John Wiley & Sons. p. 225. ISBN 978-1-118-21052-9.
204. Dani Gamerman; Hedibert F. Lopes (2006). Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference, Second Edition. CRC Press. ISBN 978-1-58488-587-0.
205. Y.A. Rozanov (2012). Markov Random Fields. Springer Science & Business Media. p. 61. ISBN 978-1-4613-8190-7.
206. Donald L. Snyder; Michael I. Miller (2012). Random Point Processes in Time and Space. Springer Science & Business Media. p. 27. ISBN 978-1-4612-3166-0.
207. Pierre Bremaud (2013). Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer Science & Business Media. p. 253. ISBN 978-1-4757-3124-8.
208. Fima C. Klebaner (2005). Introduction to Stochastic Calculus with Applications. Imperial College Press. p. 65. ISBN 978-1-86094-555-7.
209. Ioannis Karatzas; Steven Shreve (1991). Brownian Motion and Stochastic Calculus. Springer. p. 11. ISBN 978-1-4612-0949-2.
210. Joseph L. Doob (1990). Stochastic processes. Wiley. pp. 292, 293.
211. Gilles Pisier (2016). Martingales in Banach Spaces. Cambridge University Press. ISBN 978-1-316-67946-3.
212. J. Michael Steele (2012). Stochastic Calculus and Financial Applications. Springer Science & Business Media. pp. 12, 13. ISBN 978-1-4684-9305-4.
213. P. Hall; C. C. Heyde (2014). Martingale Limit Theory and Its Application. Elsevier Science. p. 2. ISBN 978-1-4832-6322-9.
214. J. Michael Steele (2012). Stochastic Calculus and Financial Applications. Springer Science & Business Media. p. 115. ISBN 978-1-4684-9305-4.
215. Sheldon M. Ross (1996). Stochastic processes. Wiley. p. 295. ISBN 978-0-471-12062-9.
216. J. Michael Steele (2012). Stochastic Calculus and Financial Applications. Springer Science & Business Media. p. 11. ISBN 978-1-4684-9305-4.
217. Olav Kallenberg (2002). Foundations of Modern Probability. Springer Science & Business Media. p. 96. ISBN 978-0-387-95313-7.
218. J. Michael Steele (2012). Stochastic Calculus and Financial Applications. Springer Science & Business Media. p. 371. ISBN 978-1-4684-9305-4.
219. J. Michael Steele (2012). Stochastic Calculus and Financial Applications. Springer Science & Business Media. p. 22. ISBN 978-1-4684-9305-4.
220. Geoffrey Grimmett; David Stirzaker (2001). Probability and Random Processes. OUP Oxford. p. 336. ISBN 978-0-19-857222-0.
221. Glasserman, Paul; Kou, Steven (2006). "A Conversation with Chris Heyde". Statistical Science. 21 (2): 292, 293. arXiv:math/0609294. Bibcode:2006math......9294G. doi:10.1214/088342306000000088. ISSN 0883-4237. S2CID 62552177.
222. Francois Baccelli; Pierre Bremaud (2013). Elements of Queueing Theory: Palm Martingale Calculus and Stochastic Recurrences. Springer Science & Business Media. ISBN 978-3-662-11657-9.
223. P. Hall; C. C. Heyde (2014). Martingale Limit Theory and Its Application. Elsevier Science. p. x. ISBN 978-1-4832-6322-9.
224. Jean Bertoin (1998). Lévy Processes. Cambridge University Press. p. viii. ISBN 978-0-521-64632-1.
225. Applebaum, David (2004). "Lévy processes: From probability to finance and quantum groups". Notices of the AMS. 51 (11): 1336.
226. David Applebaum (2004). Lévy Processes and Stochastic Calculus. Cambridge University Press. p. 69. ISBN 978-0-521-83263-2.
227. Leonid Koralov; Yakov G. Sinai (2007). Theory of Probability and Random Processes. Springer Science & Business Media. p. 171. ISBN 978-3-540-68829-7.
228. David Applebaum (2004). Lévy Processes and Stochastic Calculus. Cambridge University Press. p. 19. ISBN 978-0-521-83263-2.
229. Sung Nok Chiu; Dietrich Stoyan; Wilfrid S. Kendall; Joseph Mecke (2013). Stochastic Geometry and Its Applications. John Wiley & Sons. p. 109. ISBN 978-1-118-65825-3.
230. Sung Nok Chiu; Dietrich Stoyan; Wilfrid S. Kendall; Joseph Mecke (2013). Stochastic Geometry and Its Applications. John Wiley & Sons. p. 108. ISBN 978-1-118-65825-3.
231. Martin Haenggi (2013). Stochastic Geometry for Wireless Networks. Cambridge University Press. p. 10. ISBN 978-1-107-01469-5.
232. D.J. Daley; D. Vere-Jones (2006). An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Springer Science & Business Media. p. 194. ISBN 978-0-387-21564-8.
233. Cox, D. R.; Isham, Valerie (1980). Point Processes. CRC Press. p. 3. ISBN 978-0-412-21910-8.
234. J. F. C. Kingman (1992). Poisson Processes. Clarendon Press. p. 8. ISBN 978-0-19-159124-2.
235. Jesper Moller; Rasmus Plenge Waagepetersen (2003). Statistical Inference and Simulation for Spatial Point Processes. CRC Press. p. 7. ISBN 978-0-203-49693-0.
236. Samuel Karlin; Howard E. Taylor (2012). A First Course in Stochastic Processes. Academic Press. p. 31. ISBN 978-0-08-057041-9.
237. Volker Schmidt (2014). Stochastic Geometry, Spatial Statistics and Random Fields: Models and Algorithms. Springer. p. 99. ISBN 978-3-319-10064-7.
238. D.J. Daley; D. Vere-Jones (2006). An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Springer Science & Business Media. ISBN 978-0-387-21564-8.
239. Gagniuc, Paul A. (2017). Markov Chains: From Theory to Implementation and Experimentation. US: John Wiley & Sons. pp. 1–2. ISBN 978-1-119-38755-8.
240. David, F. N. (1955). "Studies in the History of Probability and Statistics I. Dicing and Gaming (A Note on the History of Probability)". Biometrika. 42 (1/2): 1–15. doi:10.2307/2333419. ISSN 0006-3444. JSTOR 2333419.
241. L. E. Maistrov (2014). Probability Theory: A Historical Sketch. Elsevier Science. p. 1. ISBN 978-1-4832-1863-2.
242. Seneta, E. (2006). "Probability, History of". Encyclopedia of Statistical Sciences. p. 1. doi:10.1002/0471667196.ess2065.pub2. ISBN 978-0471667193.
243. John Tabak (2014). Probability and Statistics: The Science of Uncertainty. Infobase Publishing. pp. 24–26. ISBN 978-0-8160-6873-9.
244. Bellhouse, David (2005). "Decoding Cardano's Liber de Ludo Aleae". Historia Mathematica. 32 (2): 180–202. doi:10.1016/j.hm.2004.04.001. ISSN 0315-0860.
245. Anders Hald (2005). A History of Probability and Statistics and Their Applications before 1750. John Wiley & Sons. p. 221. ISBN 978-0-471-72517-6.
246. L. E. Maistrov (2014). Probability Theory: A Historical Sketch. Elsevier Science. p. 56. ISBN 978-1-4832-1863-2.
247. John Tabak (2014). Probability and Statistics: The Science of Uncertainty. Infobase Publishing. p. 37. ISBN 978-0-8160-6873-9.
248. Chung, Kai Lai (1998). "Probability and Doob". The American Mathematical Monthly. 105 (1): 28–35. doi:10.2307/2589523. ISSN 0002-9890. JSTOR 2589523.
249. Bingham, N. (2000). "Studies in the history of probability and statistics XLVI. Measure into probability: from Lebesgue to Kolmogorov". Biometrika. 87 (1): 145–156. doi:10.1093/biomet/87.1.145. ISSN 0006-3444.
250. Benzi, Margherita; Benzi, Michele; Seneta, Eugene (2007). "Francesco Paolo Cantelli. b. 20 December 1875 d. 21 July 1966". International Statistical Review. 75 (2): 128. doi:10.1111/j.1751-5823.2007.00009.x. ISSN 0306-7734. S2CID 118011380.
251. Doob, Joseph L. (1996). "The Development of Rigor in Mathematical Probability (1900-1950)". The American Mathematical Monthly. 103 (7): 586–595. doi:10.2307/2974673. ISSN 0002-9890. JSTOR 2974673.
252. Cramer, Harald (1976). "Half a Century with Probability Theory: Some Personal Recollections". The Annals of Probability. 4 (4): 509–546. doi:10.1214/aop/1176996025. ISSN 0091-1798.
253. Truesdell, C. (1975). "Early kinetic theories of gases". Archive for History of Exact Sciences. 15 (1): 22–23. doi:10.1007/BF00327232. ISSN 0003-9519. S2CID 189764116.
254. Brush, Stephen G. (1967). "Foundations of statistical mechanics 1845–1915". Archive for History of Exact Sciences. 4 (3): 150–151. doi:10.1007/BF00412958. ISSN 0003-9519. S2CID 120059181.
255. Truesdell, C. (1975). "Early kinetic theories of gases". Archive for History of Exact Sciences. 15 (1): 31–32. doi:10.1007/BF00327232. ISSN 0003-9519. S2CID 189764116.
256. Brush, S.G. (1958). "The development of the kinetic theory of gases IV. Maxwell". Annals of Science. 14 (4): 243–255. doi:10.1080/00033795800200147. ISSN 0003-3790.
257. Brush, Stephen G. (1968). "A history of random processes". Archive for History of Exact Sciences. 5 (1): 15–16. doi:10.1007/BF00328110. ISSN 0003-9519. S2CID 117623580.
258. Kendall, D. G.; Batchelor, G. K.; Bingham, N. H.; Hayman, W. K.; Hyland, J. M. E.; Lorentz, G. G.; Moffatt, H. K.; Parry, W.; Razborov, A. A.; Robinson, C. A.; Whittle, P. (1990). "Andrei Nikolaevich Kolmogorov (1903–1987)". Bulletin of the London Mathematical Society. 22 (1): 33. doi:10.1112/blms/22.1.31. ISSN 0024-6093.
259. Vere-Jones, David (2006). "Khinchin, Aleksandr Yakovlevich". Encyclopedia of Statistical Sciences. p. 1. doi:10.1002/0471667196.ess6027.pub2. ISBN 978-0471667193.
260. Vere-Jones, David (2006). "Khinchin, Aleksandr Yakovlevich". Encyclopedia of Statistical Sciences. p. 4. doi:10.1002/0471667196.ess6027.pub2. ISBN 978-0471667193.
261. Snell, J. Laurie (2005). "Obituary: Joseph Leonard Doob". Journal of Applied Probability. 42 (1): 251. doi:10.1239/jap/1110381384. ISSN 0021-9002.
262. Lindvall, Torgny (1991). "W. Doeblin, 1915-1940". The Annals of Probability. 19 (3): 929–934. doi:10.1214/aop/1176990329. ISSN 0091-1798.
263. Getoor, Ronald (2009). "J. L. Doob: Foundations of stochastic processes and probabilistic potential theory". The Annals of Probability. 37 (5): 1655. arXiv:0909.4213. Bibcode:2009arXiv0909.4213G. doi:10.1214/09-AOP465. ISSN 0091-1798. S2CID 17288507.
264. Bingham, N. H. (2005). "Doob: a half-century on". Journal of Applied Probability. 42 (1): 257–266. doi:10.1239/jap/1110381385. ISSN 0021-9002.
265. Meyer, Paul-André (2009). "Stochastic Processes from 1950 to the Present". Electronic Journal for History of Probability and Statistics. 5 (1): 1–42.
266. "Kiyosi Itô receives Kyoto Prize". Notices of the AMS. 45 (8): 981–982. 1998.
267. Jean Bertoin (1998). Lévy Processes. Cambridge University Press. pp. viii, ix. ISBN 978-0-521-64632-1.
268. J. Michael Steele (2012). Stochastic Calculus and Financial Applications. Springer Science & Business Media. p. 176. ISBN 978-1-4684-9305-4.
269. P. Hall; C. C. Heyde (2014). Martingale Limit Theory and Its Application. Elsevier Science. pp. 1, 2. ISBN 978-1-4832-6322-9.
270. Dynkin, E. B. (1989). "Kolmogorov and the Theory of Markov Processes". The Annals of Probability. 17 (3): 822–832. doi:10.1214/aop/1176991248. ISSN 0091-1798.
271. Ellis, Richard S. (1995). "An overview of the theory of large deviations and applications to statistical mechanics". Scandinavian Actuarial Journal. 1995 (1): 98. doi:10.1080/03461238.1995.10413952. ISSN 0346-1238.
272. Raussen, Martin; Skau, Christian (2008). "Interview with Srinivasa Varadhan". Notices of the AMS. 55 (2): 238–246.
273. Malte Henkel; Dragi Karevski (2012). Conformal Invariance: an Introduction to Loops, Interfaces and Stochastic Loewner Evolution. Springer Science & Business Media. p. 113. ISBN 978-3-642-27933-1.
274. "2006 Fields Medals Awarded". Notices of the AMS. 53 (9): 1041–1044. 2015.
275. Quastel, Jeremy (2015). "The Work of the 2014 Fields Medalists". Notices of the AMS. 62 (11): 1341–1344.
276. D.J. Daley; D. Vere-Jones (2006). An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Springer Science & Business Media. pp. 1–4. ISBN 978-0-387-21564-8.
277. Anders Hald (2005). A History of Probability and Statistics and Their Applications before 1750. John Wiley & Sons. p. 226. ISBN 978-0-471-72517-6.
278. Joel Louis Lebowitz (1984). Nonequilibrium phenomena II: from stochastics to hydrodynamics. North-Holland Pub. pp. 8–10. ISBN 978-0-444-86806-0.
279. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 374. ISBN 978-1-118-59320-2.
280. Oliver C. Ibe (2013). Elements of Random Walk and Diffusion Processes. John Wiley & Sons. p. 5. ISBN 978-1-118-61793-9.
281. Anders Hald (2005). A History of Probability and Statistics and Their Applications before 1750. John Wiley & Sons. p. 63. ISBN 978-0-471-72517-6.
282. Anders Hald (2005). A History of Probability and Statistics and Their Applications before 1750. John Wiley & Sons. p. 202. ISBN 978-0-471-72517-6.
283. Ionut Florescu (2014). Probability and Stochastic Processes. John Wiley & Sons. p. 385. ISBN 978-1-118-59320-2.
284. Barry D. Hughes (1995). Random Walks and Random Environments: Random walks. Clarendon Press. p. 111. ISBN 978-0-19-853788-5.
285. Thiele, Thorwald N. (1880). "Om Anvendelse af mindste Kvadraters Methode i nogle Tilfælde, hvor en Komplikation af visse Slags uensartede tilfældige Fejlkilder giver Fejlene en "systematisk" Karakter". Kongelige Danske Videnskabernes Selskabs Skrifter. Series 5 (12): 381–408.
286. Hald, Anders (1981). "T. N. Thiele's Contributions to Statistics". International Statistical Review / Revue Internationale de Statistique. 49 (1): 1–20. doi:10.2307/1403034. ISSN 0306-7734. JSTOR 1403034.
287. Lauritzen, Steffen L. (1981). "Time Series Analysis in 1880: A Discussion of Contributions Made by T.N. Thiele". International Statistical Review / Revue Internationale de Statistique. 49 (3): 319–320. doi:10.2307/1402616. ISSN 0306-7734. JSTOR 1402616.
288. Bachelier, Louis (1900). "Théorie de la spéculation" (PDF). Ann. Sci. Éc. Norm. Supér. Serie 3, 17: 21–89. doi:10.24033/asens.476. Archived (PDF) from the original on 2011-06-05.
289. Bachelier, Louis (1900). "The Theory of Speculation". Ann. Sci. Éc. Norm. Supér. Serie 3, 17: 21–89 (Engl. translation by David R. May, 2011). doi:10.24033/asens.476.
290. Courtault, Jean-Michel; Kabanov, Yuri; Bru, Bernard; Crepel, Pierre; Lebon, Isabelle; Le Marchand, Arnaud (2000). "Louis Bachelier on the Centenary of Theorie de la Speculation" (PDF). Mathematical Finance. 10 (3): 339–353. doi:10.1111/1467-9965.00098. ISSN 0960-1627. S2CID 14422885. Archived (PDF) from the original on 2018-07-21.
Further reading
Articles
• Applebaum, David (2004). "Lévy processes: From probability to finance and quantum groups". Notices of the AMS. 51 (11): 1336–1347.
• Cramer, Harald (1976). "Half a Century with Probability Theory: Some Personal Recollections". The Annals of Probability. 4 (4): 509–546. doi:10.1214/aop/1176996025. ISSN 0091-1798.
• Guttorp, Peter; Thorarinsdottir, Thordis L. (2012). "What Happened to Discrete Chaos, the Quenouille Process, and the Sharp Markov Property? Some History of Stochastic Point Processes". International Statistical Review. 80 (2): 253–268. doi:10.1111/j.1751-5823.2012.00181.x. ISSN 0306-7734. S2CID 80836.
• Jarrow, Robert; Protter, Philip (2004). "A short history of stochastic integration and mathematical finance: the early years, 1880–1970". A Festschrift for Herman Rubin. Institute of Mathematical Statistics Lecture Notes - Monograph Series. pp. 75–91. doi:10.1214/lnms/1196285381. ISBN 978-0-940600-61-4. ISSN 0749-2170.
• Meyer, Paul-André (2009). "Stochastic Processes from 1950 to the Present". Electronic Journal for History of Probability and Statistics. 5 (1): 1–42.
Books
• Robert J. Adler (2010). The Geometry of Random Fields. SIAM. ISBN 978-0-89871-693-1.
• Robert J. Adler; Jonathan E. Taylor (2009). Random Fields and Geometry. Springer Science & Business Media. ISBN 978-0-387-48116-6.
• Pierre Brémaud (2013). Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer Science & Business Media. ISBN 978-1-4757-3124-8.
• Joseph L. Doob (1990). Stochastic processes. Wiley.
• Anders Hald (2005). A History of Probability and Statistics and Their Applications before 1750. John Wiley & Sons. ISBN 978-0-471-72517-6.
• Crispin Gardiner (2010). Stochastic Methods. Springer. ISBN 978-3-540-70712-7.
• Iosif I. Gikhman; Anatoly Vladimirovich Skorokhod (1996). Introduction to the Theory of Random Processes. Courier Corporation. ISBN 978-0-486-69387-3.
• Emanuel Parzen (2015). Stochastic Processes. Courier Dover Publications. ISBN 978-0-486-79688-8.
• Murray Rosenblatt (1962). Random Processes. Oxford University Press.
External links
• Media related to Stochastic processes at Wikimedia Commons
Stochastic processes
Discrete time
• Bernoulli process
• Branching process
• Chinese restaurant process
• Galton–Watson process
• Independent and identically distributed random variables
• Markov chain
• Moran process
• Random walk
• Loop-erased
• Self-avoiding
• Biased
• Maximal entropy
Continuous time
• Additive process
• Bessel process
• Birth–death process
• pure birth
• Brownian motion
• Bridge
• Excursion
• Fractional
• Geometric
• Meander
• Cauchy process
• Contact process
• Continuous-time random walk
• Cox process
• Diffusion process
• Empirical process
• Feller process
• Fleming–Viot process
• Gamma process
• Geometric process
• Hawkes process
• Hunt process
• Interacting particle systems
• Itô diffusion
• Itô process
• Jump diffusion
• Jump process
• Lévy process
• Local time
• Markov additive process
• McKean–Vlasov process
• Ornstein–Uhlenbeck process
• Poisson process
• Compound
• Non-homogeneous
• Schramm–Loewner evolution
• Semimartingale
• Sigma-martingale
• Stable process
• Superprocess
• Telegraph process
• Variance gamma process
• Wiener process
• Wiener sausage
Both
• Branching process
• Galves–Löcherbach model
• Gaussian process
• Hidden Markov model (HMM)
• Markov process
• Martingale
• Differences
• Local
• Sub-
• Super-
• Random dynamical system
• Regenerative process
• Renewal process
• Stochastic chains with memory of variable length
• White noise
Fields and other
• Dirichlet process
• Gaussian random field
• Gibbs measure
• Hopfield model
• Ising model
• Potts model
• Boolean network
• Markov random field
• Percolation
• Pitman–Yor process
• Point process
• Cox
• Poisson
• Random field
• Random graph
Time series models
• Autoregressive conditional heteroskedasticity (ARCH) model
• Autoregressive integrated moving average (ARIMA) model
• Autoregressive (AR) model
• Autoregressive–moving-average (ARMA) model
• Generalized autoregressive conditional heteroskedasticity (GARCH) model
• Moving-average (MA) model
Financial models
• Binomial options pricing model
• Black–Derman–Toy
• Black–Karasinski
• Black–Scholes
• Chan–Karolyi–Longstaff–Sanders (CKLS)
• Chen
• Constant elasticity of variance (CEV)
• Cox–Ingersoll–Ross (CIR)
• Garman–Kohlhagen
• Heath–Jarrow–Morton (HJM)
• Heston
• Ho–Lee
• Hull–White
• LIBOR market
• Rendleman–Bartter
• SABR volatility
• Vašíček
• Wilkie
Actuarial models
• Bühlmann
• Cramér–Lundberg
• Risk process
• Sparre Andersen
Queueing models
• Bulk
• Fluid
• Generalized queueing network
• M/G/1
• M/M/1
• M/M/c
Properties
• Càdlàg paths
• Continuous
• Continuous paths
• Ergodic
• Exchangeable
• Feller-continuous
• Gauss–Markov
• Markov
• Mixing
• Piecewise-deterministic
• Predictable
• Progressively measurable
• Self-similar
• Stationary
• Time-reversible
Limit theorems
• Central limit theorem
• Donsker's theorem
• Doob's martingale convergence theorems
• Ergodic theorem
• Fisher–Tippett–Gnedenko theorem
• Large deviation principle
• Law of large numbers (weak/strong)
• Law of the iterated logarithm
• Maximal ergodic theorem
• Sanov's theorem
• Zero–one laws (Blumenthal, Borel–Cantelli, Engelbert–Schmidt, Hewitt–Savage, Kolmogorov, Lévy)
Inequalities
• Burkholder–Davis–Gundy
• Doob's martingale
• Doob's upcrossing
• Kunita–Watanabe
• Marcinkiewicz–Zygmund
Tools
• Cameron–Martin formula
• Convergence of random variables
• Doléans-Dade exponential
• Doob decomposition theorem
• Doob–Meyer decomposition theorem
• Doob's optional stopping theorem
• Dynkin's formula
• Feynman–Kac formula
• Filtration
• Girsanov theorem
• Infinitesimal generator
• Itô integral
• Itô's lemma
• Karhunen–Loève theorem
• Kolmogorov continuity theorem
• Kolmogorov extension theorem
• Lévy–Prokhorov metric
• Malliavin calculus
• Martingale representation theorem
• Optional stopping theorem
• Prokhorov's theorem
• Quadratic variation
• Reflection principle
• Skorokhod integral
• Skorokhod's representation theorem
• Skorokhod space
• Snell envelope
• Stochastic differential equation
• Tanaka
• Stopping time
• Stratonovich integral
• Uniform integrability
• Usual hypotheses
• Wiener space
• Classical
• Abstract
Disciplines
• Actuarial mathematics
• Control theory
• Econometrics
• Ergodic theory
• Extreme value theory (EVT)
• Large deviations theory
• Mathematical finance
• Mathematical statistics
• Probability theory
• Queueing theory
• Renewal theory
• Ruin theory
• Signal processing
• Statistics
• Stochastic analysis
• Time series analysis
• Machine learning
• List of topics
• Category
Industrial and applied mathematics
Computational
• Algorithms
• design
• analysis
• Automata theory
• Coding theory
• Computational geometry
• Constraint programming
• Computational logic
• Cryptography
• Information theory
Discrete
• Computer algebra
• Computational number theory
• Combinatorics
• Graph theory
• Discrete geometry
Analysis
• Approximation theory
• Clifford analysis
• Clifford algebra
• Differential equations
• Ordinary differential equations
• Partial differential equations
• Stochastic differential equations
• Differential geometry
• Differential forms
• Gauge theory
• Geometric analysis
• Dynamical systems
• Chaos theory
• Control theory
• Functional analysis
• Operator algebra
• Operator theory
• Harmonic analysis
• Fourier analysis
• Multilinear algebra
• Exterior
• Geometric
• Tensor
• Vector
• Multivariable calculus
• Exterior
• Geometric
• Tensor
• Vector
• Numerical analysis
• Numerical linear algebra
• Numerical methods for ordinary differential equations
• Numerical methods for partial differential equations
• Validated numerics
• Variational calculus
Probability theory
• Distributions (random variables)
• Stochastic processes / analysis
• Path integral
• Stochastic variational calculus
Mathematical
physics
• Analytical mechanics
• Lagrangian
• Hamiltonian
• Field theory
• Classical
• Conformal
• Effective
• Gauge
• Quantum
• Statistical
• Topological
• Perturbation theory
• in quantum mechanics
• Potential theory
• String theory
• Bosonic
• Topological
• Supersymmetry
• Supersymmetric quantum mechanics
• Supersymmetric theory of stochastic dynamics
Algebraic structures
• Algebra of physical space
• Feynman integral
• Poisson algebra
• Quantum group
• Renormalization group
• Representation theory
• Spacetime algebra
• Superalgebra
• Supersymmetry algebra
Decision sciences
• Game theory
• Operations research
• Optimization
• Social choice theory
• Statistics
• Mathematical economics
• Mathematical finance
Other applications
• Biology
• Chemistry
• Psychology
• Sociology
• "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"
Related
• Mathematics
• Mathematical software
Organizations
• Society for Industrial and Applied Mathematics
• Japan Society for Industrial and Applied Mathematics
• Société de Mathématiques Appliquées et Industrielles
• International Council for Industrial and Applied Mathematics
• European Community on Computational Methods in Applied Sciences
• Category
• Mathematics portal / outline / topics list
|
Wikipedia
|
Mathematical problem
A mathematical problem is a problem that can be represented, analyzed, and possibly solved, with the methods of mathematics. This can be a real-world problem, such as computing the orbits of the planets in the solar system, or a problem of a more abstract nature, such as Hilbert's problems. It can also be a problem referring to the nature of mathematics itself, such as Russell's Paradox.
Real-world problems
Informal "real-world" mathematical problems are questions related to a concrete setting, such as "Adam has five apples and gives John three. How many does he have left?". Such questions are usually more difficult to solve than regular mathematical exercises like "5 − 3", even if one knows the mathematics required to solve the problem. Known as word problems, they are used in mathematics education to teach students to connect real-world situations to the abstract language of mathematics.
In general, to use mathematics for solving a real-world problem, the first step is to construct a mathematical model of the problem. This involves abstraction from the details of the problem, and the modeller has to be careful not to lose essential aspects in translating the original problem into a mathematical one. After the problem has been solved in the world of mathematics, the solution must be translated back into the context of the original problem.
Abstract problems
Abstract mathematical problems arise in all fields of mathematics. While mathematicians usually study them for their own sake, by doing so, results may be obtained that find application outside the realm of mathematics. Theoretical physics has historically been a rich source of inspiration.
Some abstract problems have been rigorously proved to be unsolvable, such as squaring the circle and trisecting the angle using only the compass and straightedge constructions of classical geometry, and solving the general quintic equation algebraically. Also provably unsolvable are so-called undecidable problems, such as the halting problem for Turing machines.
Some well-known difficult abstract problems that have been solved relatively recently are the four-colour theorem, Fermat's Last Theorem, and the Poincaré conjecture.
Computers do not need to have a sense of the motivations of mathematicians in order to do what they do.[1] Formal definitions and computer-checkable deductions are absolutely central to mathematical science.
See also: Logical positivism and falsificationism
Degradation of problems to exercises
Mathematics educators who use problem solving for assessment face an issue phrased by Alan H. Schoenfeld:
How can one compare test scores from year to year, when very different problems are used? (If similar problems are used year after year, teachers and students will learn what they are, students will practice them: problems become exercises, and the test no longer assesses problem solving).[2]
The same issue was faced by Sylvestre Lacroix almost two centuries earlier:
... it is necessary to vary the questions, since students might communicate with each other: though they may fail the exam, they might pass a later one. Thus the distribution of questions, the variety of topics, or of the answers, risks losing the opportunity to compare, with precision, the candidates one to another.[3]
Such degradation of problems into exercises has been characteristic of mathematics throughout history. For example, describing the preparations for the Cambridge Mathematical Tripos in the 19th century, Andrew Warwick wrote:
... many families of the then standard problems had originally taxed the abilities of the greatest mathematicians of the 18th century.[4]
See also
• List of unsolved problems in mathematics
• Problem solving
• Mathematical game
References
1. (Newby & Newby 2008), "The second test is, that although such machines might execute many things with equal or perhaps greater perfection than any of us, they would, without doubt, fail in certain others from which it could be discovered that they did not act from knowledge, but solely from the disposition of their organs: for while reason is an universal instrument that is alike available on every occasion, these organs, on the contrary, need a particular arrangement for each particular action; whence it must be morally impossible that there should exist in any machine a diversity of organs sufficient to enable it to act in all the occurrences of life, in the way in which our reason enable us to act." translated from
(Descartes 1637), p. 57, "Et le second est que, bien qu'elles fissent plusieurs choses aussy bien, ou peutestre mieux qu'aucun de nous, elles manqueroient infalliblement en quelques autres, par lesquelles on découuriroit quelles n'agiroient pas par connoissance, mais seulement par la disposition de leurs organes. Car, au lieu que la raison est un instrument universel, qui peut seruir en toutes sortes de rencontres, ces organes ont besoin de quelque particuliere disposition pour chaque action particuliere; d'où vient qu'il est moralement impossible qu'il y en ait assez de diuers en une machine, pour la faire agir en toutes les occurrences de la vie, de mesme façon que nostre raison nous fait agir."
2. Alan H. Schoenfeld (editor) (2007) Assessing mathematical proficiency, preface pages x,xi, Mathematical Sciences Research Institute, Cambridge University Press ISBN 978-0-521-87492-2
3. S. F. Lacroix (1816) Essais sur l'enseignement en général, et sur celui des mathématiques en particulier, page 201
4. Andrew Warwick (2003) Masters of Theory: Cambridge and the Rise of Mathematical Physics, page 145, University of Chicago Press ISBN 0-226-87375-7
• Newby, Ilana; Newby, Greg (2008-07-01). "Discourse on the Method of rightly conducting the reason, and seeking truth in the sciences by Rene Descartes". Project Gutenberg. Retrieved 2019-02-13., translated from
• René, Descartes (1637). Discours de la méthode pour bien conduire sa raison et chercher la vérité dans les sciences, plus la dioptrique, les météores et la géométrie qui sont des essais de cette méthode.
Wikimedia Commons has media related to Mathematical problems.
Fatou conjecture
In mathematics, the Fatou conjecture, named after Pierre Fatou, states that a quadratic family of maps from the complex plane to itself is hyperbolic for an open dense set of parameters.
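For the quadratic family f_c(z) = z² + c, hyperbolicity of a parameter c amounts to the orbit of the critical point 0 being attracted to an attracting cycle (possibly the fixed point at infinity). The numerical sketch below is purely illustrative, not a proof technique; the iteration budgets, tolerances, and the helper name `critical_orbit_behavior` are choices made for this example, not part of the conjecture.

```python
def critical_orbit_behavior(c, burn_in=500, max_period=8, tol=1e-6, escape_radius=4.0):
    """Follow the critical orbit 0, c, c**2 + c, ... of f_c(z) = z**2 + c.

    Returns the period of a detected attracting cycle, the string "escapes"
    (attraction to the fixed point at infinity, also a hyperbolic regime),
    or "undecided" if neither is observed within the iteration budget.
    """
    z = complex(0.0)
    for _ in range(burn_in):
        z = z * z + c
        if abs(z) > escape_radius:
            return "escapes"
    # Sample a window of the settled orbit and look for near-periodicity.
    window = []
    for _ in range(2 * max_period):
        window.append(z)
        z = z * z + c
    for p in range(1, max_period + 1):
        if all(abs(window[i + p] - window[i]) < tol for i in range(max_period)):
            return p
    return "undecided"
```

For instance, c = 0 has the attracting fixed point 0, c = −1 has an attracting cycle of period 2, and c = 1 lies outside the Mandelbrot set, so the critical orbit escapes. Parameters on the boundary (e.g. the parabolic c = 1/4) are exactly where such a crude test is unreliable, which reflects why the conjecture concerns an open dense set rather than all parameters.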
References
• Świątek, Grzegorz; Graczyk, Jacek (1998), The real Fatou conjecture, Annals of Mathematics Studies, vol. 144, Princeton University Press, ISBN 978-0-691-00257-6, MR 1657075
Real algebraic geometry
In mathematics, real algebraic geometry is the sub-branch of algebraic geometry studying real algebraic sets, i.e. real-number solutions to algebraic equations with real-number coefficients, and mappings between them (in particular real polynomial mappings).
Semialgebraic geometry is the study of semialgebraic sets, i.e. real-number solutions to algebraic inequalities with real-number coefficients, and mappings between them. The most natural mappings between semialgebraic sets are semialgebraic mappings, i.e., mappings whose graphs are semialgebraic sets.
Terminology
Nowadays the words 'semialgebraic geometry' and 'real algebraic geometry' are used as synonyms, because real algebraic sets cannot be studied seriously without the use of semialgebraic sets. For example, a projection of a real algebraic set along a coordinate axis need not be a real algebraic set, but it is always a semialgebraic set: this is the Tarski–Seidenberg theorem.[1][2] Related fields are o-minimal theory and real analytic geometry.
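A standard concrete instance of the Tarski–Seidenberg phenomenon: the hyperbola xy = 1 is a real algebraic set, but its projection to the x-axis is cut out by an inequality rather than by equations:

```latex
\pi : \mathbb{R}^2 \to \mathbb{R}, \quad (x, y) \mapsto x,
\qquad
\pi\bigl(\{(x, y) \in \mathbb{R}^2 : xy = 1\}\bigr)
  = \{x \in \mathbb{R} : x \neq 0\}
  = \{x \in \mathbb{R} : x^2 > 0\}.
```

A real algebraic subset of the line is either finite or the whole line, so the punctured line cannot be algebraic, yet it is semialgebraic.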
Examples: Real plane curves are examples of real algebraic sets and polyhedra are examples of semialgebraic sets. Real algebraic functions and Nash functions are examples of semialgebraic mappings. Piecewise polynomial mappings (see the Pierce–Birkhoff conjecture) are also semialgebraic mappings.
Computational real algebraic geometry is concerned with the algorithmic aspects of real algebraic (and semialgebraic) geometry. The main algorithm is cylindrical algebraic decomposition. It is used to cut semialgebraic sets into nice pieces and to compute their projections.
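A small self-contained illustration of real root counting in this algorithmic spirit is Sturm's theorem (see the timeline below): the number of distinct real roots of a squarefree polynomial in the interval (a, b] equals the drop in the number of sign changes of its Sturm chain between a and b. The sketch below uses floating-point long division with an ad hoc tolerance, so it is a toy rather than a robust implementation; real systems work with exact arithmetic.

```python
# Polynomials are lists of coefficients, highest degree first:
# x^2 - 2 -> [1, 0, -2].
EPS = 1e-9

def poly_derivative(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])] or [0.0]

def poly_remainder(a, b):
    """Remainder of polynomial a modulo b, by long division."""
    a = list(a)
    while len(a) >= len(b):
        if abs(a[0]) < EPS:      # strip a vanished leading coefficient
            a.pop(0)
            continue
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]
        a.pop(0)                 # leading term has been cancelled
    return a or [0.0]

def sturm_chain(p):
    """p, p', then successive negated remainders, down to a constant."""
    chain = [list(p), poly_derivative(p)]
    while len(chain[-1]) > 1:
        chain.append([-c for c in poly_remainder(chain[-2], chain[-1])])
    return chain

def eval_poly(p, x):
    v = 0.0
    for c in p:                  # Horner's rule
        v = v * x + c
    return v

def sign_changes(chain, x):
    signs = [s for s in (eval_poly(p, x) for p in chain) if abs(s) > EPS]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

def count_real_roots(p, a, b):
    """Number of distinct real roots of squarefree p in (a, b]."""
    chain = sturm_chain(p)
    return sign_changes(chain, a) - sign_changes(chain, b)
```

For example, x² − 2 has two real roots in (−2, 2], x³ − x has three, and x² + 1 has none; `count_real_roots` recovers each count from sign changes alone, without locating the roots.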
Real algebra is the part of algebra which is relevant to real algebraic (and semialgebraic) geometry. It is mostly concerned with the study of ordered fields and ordered rings (in particular real closed fields) and their applications to the study of positive polynomials and sums-of-squares of polynomials. (See Hilbert's 17th problem and Krivine's Positivstellensatz.) The relation of real algebra to real algebraic geometry is similar to the relation of commutative algebra to complex algebraic geometry. Related fields are the theory of moment problems, convex optimization, the theory of quadratic forms, valuation theory and model theory.
Timeline of real algebra and real algebraic geometry
• 1826 Fourier's algorithm for systems of linear inequalities.[3] Rediscovered by Lloyd Dines in 1919[4] and Theodore Motzkin in 1936.[5]
• 1835 Sturm's theorem on real root counting[6]
• 1856 Hermite's theorem on real root counting.[7]
• 1876 Harnack's curve theorem.[8] (This bound on the number of components was later extended to all Betti numbers of all real algebraic sets[9][10][11] and all semialgebraic sets.[12])
• 1888 Hilbert's theorem on ternary quartics.[13]
• 1900 Hilbert's problems (especially the 16th and the 17th problem)
• 1902 Farkas' lemma[14] (Can be reformulated as linear positivstellensatz.)
• 1914 Annibale Comessatti showed that not every real algebraic surface is birational to RP2[15]
• 1916 Fejér's conjecture about nonnegative trigonometric polynomials.[16] (Solved by Frigyes Riesz.[17])
• 1927 Emil Artin's solution of Hilbert's 17th problem[18]
• 1927 Krull–Baer Theorem[19][20] (connection between orderings and valuations)
• 1928 Pólya's Theorem on positive polynomials on a simplex[21]
• 1929 B. L. van der Waerden sketches a proof that real algebraic and semialgebraic sets are triangulable,[22] but the necessary tools had not been developed to make the argument rigorous.
• 1931 Alfred Tarski's real quantifier elimination.[23] Improved and popularized by Abraham Seidenberg in 1954.[24] (Both use Sturm's theorem.)
• 1936 Herbert Seifert proved that every closed smooth submanifold of $\mathbb {R} ^{n}$ with trivial normal bundle can be isotoped to a component of a nonsingular real algebraic subset of $\mathbb {R} ^{n}$ which is a complete intersection[25] (from the conclusion of this theorem the word "component" cannot be removed[26]).
• 1940 Marshall Stone's representation theorem for partially ordered rings.[27] Improved by Richard Kadison in 1951[28] and Donald Dubois in 1967[29] (Kadison–Dubois representation theorem). Further improved by Mihai Putinar in 1993[30] and Jacobi in 2001[31] (Putinar–Jacobi representation theorem).
• 1952 John Nash proved that every closed smooth manifold is diffeomorphic to a nonsingular component of a real algebraic set.[32]
• 1956 Pierce–Birkhoff conjecture formulated.[33] (Solved in dimensions ≤ 2.[34])
• 1964 Krivine's Nullstellensatz and Positivstellensatz.[35] Rediscovered and popularized by Stengle in 1974.[36] (Krivine uses real quantifier elimination while Stengle uses Lang's homomorphism theorem.[37])
• 1964 Lojasiewicz triangulated semi-analytic sets[38]
• 1964 Heisuke Hironaka proved the resolution of singularity theorem[39]
• 1964 Hassler Whitney proved that every analytic variety admits a stratification satisfying the Whitney conditions.[40]
• 1967 Theodore Motzkin finds a positive polynomial which is not a sum of squares of polynomials.[41]
• 1972 Vladimir Rokhlin proved Gudkov's conjecture.[42]
• 1973 Alberto Tognoli proved that every closed smooth manifold is diffeomorphic to a nonsingular real algebraic set.[43]
• 1975 George E. Collins discovers the cylindrical algebraic decomposition algorithm, which improves Tarski's real quantifier elimination and allows it to be implemented on a computer.[44]
• 1973 Jean-Louis Verdier proved that every subanalytic set admits a stratification with condition (w).[45]
• 1979 Michel Coste and Marie-Françoise Roy discover the real spectrum of a commutative ring.[46]
• 1980 Oleg Viro introduced the "patchworking" technique and used it to classify real algebraic curves of low degree.[47] Later Ilya Itenberg and Viro used it to produce counterexamples to the Ragsdale conjecture,[48][49] and Grigory Mikhalkin applied it to tropical geometry for curve counting.[50]
• 1980 Selman Akbulut and Henry C. King gave a topological characterization of real algebraic sets with isolated singularities, and topologically characterized nonsingular real algebraic sets (not necessarily compact)[51]
• 1980 Akbulut and King proved that every knot in $S^{n}$ is the link of a real algebraic set with isolated singularity in $\mathbb {R} ^{n+1}$[52]
• 1981 Akbulut and King proved that every compact PL manifold is PL homeomorphic to a real algebraic set.[53][54][55]
• 1983 Akbulut and King introduced "Topological Resolution Towers" as topological models of real algebraic sets, from this they obtained new topological invariants of real algebraic sets, and topologically characterized all 3-dimensional algebraic sets.[56] These invariants later generalized by Michel Coste and Krzysztof Kurdyka[57] as well as Clint McCrory and Adam Parusiński.[58]
• 1984 Ludwig Bröcker's theorem on minimal generation of basic open semialgebraic sets[59] (improved and extended to basic closed semialgebraic sets by Scheiderer.[60])
• 1984 Benedetti and Dedo proved that not every closed smooth manifold is diffeomorphic to a totally algebraic nonsingular real algebraic set (totally algebraic means all its Z/2Z-homology cycles are represented by real algebraic subsets).[61]
• 1991 Akbulut and King proved that every closed smooth manifold is homeomorphic to a totally algebraic real algebraic set.[62]
• 1991 Schmüdgen's solution of the multidimensional moment problem for compact semialgebraic sets and related strict positivstellensatz.[63] Algebraic proof found by Wörmann.[64] Implies Reznick's version of Artin's theorem with uniform denominators.[65]
• 1992 Akbulut and King proved ambient versions of the Nash-Tognoli theorem: Every closed smooth submanifold of Rn is isotopic to the nonsingular points (component) of a real algebraic subset of Rn, and they extended this result to immersed submanifolds of Rn.[66][67]
• 1992 Benedetti and Marin proved that every compact closed smooth 3-manifold M can be obtained from $S^{3}$ by a sequence of blow ups and downs along smooth centers, and that M is homeomorphic to a possibly singular affine real algebraic rational threefold[68]
• 1997 Bierstone and Milman proved a canonical resolution of singularities theorem[69]
• 1997 Mikhalkin proved that every closed smooth n-manifold can be obtained from $S^{n}$ by a sequence of topological blow ups and downs[70]
• 1998 János Kollár showed that not every closed 3-manifold is a projective real 3-fold which is birational to RP3[71]
• 2000 Scheiderer's local-global principle and related non-strict extension of Schmüdgen's positivstellensatz in dimensions ≤ 2.[72][73][74]
• 2000 János Kollár proved that every closed smooth 3–manifold is the real part of a compact complex manifold which can be obtained from $\mathbb {CP} ^{3}$ by a sequence of real blow ups and blow downs.[75]
• 2003 Welschinger introduces an invariant for counting real rational curves[76]
• 2005 Akbulut and King showed that not every nonsingular real algebraic subset of RPn is smoothly isotopic to the real part of a nonsingular complex algebraic subset of CPn[77][78]
References
• S. Akbulut and H.C. King, Topology of real algebraic sets, MSRI Pub, 25. Springer-Verlag, New York (1992) ISBN 0-387-97744-9
• Bochnak, Jacek; Coste, Michel; Roy, Marie-Françoise. Real Algebraic Geometry. Translated from the 1987 French original. Revised by the authors. Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], 36. Springer-Verlag, Berlin, 1998. x+430 pp. ISBN 3-540-64663-9
• Basu, Saugata; Pollack, Richard; Roy, Marie-Françoise Algorithms in real algebraic geometry. Second edition. Algorithms and Computation in Mathematics, 10. Springer-Verlag, Berlin, 2006. x+662 pp. ISBN 978-3-540-33098-1; 3-540-33098-4
• Marshall, Murray Positive polynomials and sums of squares. Mathematical Surveys and Monographs, 146. American Mathematical Society, Providence, RI, 2008. xii+187 pp. ISBN 978-0-8218-4402-1; 0-8218-4402-4
Notes
1. van den Dries, L. (1998). Tame topology and o-minimal structures. London Mathematical Society Lecture Note Series. Vol. 248. Cambridge University Press. p. 31. Zbl 0953.03045.
2. Khovanskii, A. G. (1991). Fewnomials. Translations of Mathematical Monographs. Vol. 88. Translated from the Russian by Smilka Zdravkovska. Providence, RI: American Mathematical Society. ISBN 0-8218-4547-0. Zbl 0728.12002.
3. Joseph B. J. Fourier, Solution d'une question particulière du calcul des inégalités. Bull. sci. Soc. Philom. Paris 99–100. Œuvres 2, 315–319.
4. Dines, Lloyd L. (1919). "Systems of linear inequalities". Annals of Mathematics. (2). 20 (3): 191–199. doi:10.2307/1967869. JSTOR 1967869.
5. Theodore Motzkin, Beiträge zur Theorie der linearen Ungleichungen. IV+ 76 S. Diss., Basel (1936).
6. Jacques Charles François Sturm, Mémoires divers présentés par des savants étrangers 6, pp. 273–318 (1835).
7. Charles Hermite, Sur le Nombre des Racines d’une Équation Algébrique Comprise Entre des Limites Données, Journal für die reine und angewandte Mathematik, vol. 52, pp. 39–51 (1856).
8. C. G. A. Harnack Über Vieltheiligkeit der ebenen algebraischen Curven, Mathematische Annalen 10 (1876), 189–199
9. I. G. Petrovskiĭ and O. A. Oleĭnik, On the topology of real algebraic surfaces, Izvestiya Akad. Nauk SSSR. Ser. Mat. 13 (1949), 389–402
10. John Milnor, On the Betti numbers of real varieties, Proceedings of the American Mathematical Society 15 (1964), 275–280.
11. René Thom, Sur l'homologie des variétés algébriques réelles, in: S. S. Cairns (ed.), Differential and Combinatorial Topology, pp. 255–265, Princeton University Press, Princeton, NJ, 1965.
12. Basu, Saugata (1999). "On bounding the Betti numbers and computing the Euler characteristic of semi-algebraic sets". Discrete & Computational Geometry. 22 (1): 1–18. doi:10.1007/PL00009443. hdl:2027.42/42421. S2CID 7023328.
13. Hilbert, David (1888). "Über die Darstellung definiter Formen als Summe von Formenquadraten". Mathematische Annalen. 32 (3): 342–350. doi:10.1007/BF01443605. S2CID 177804714.
14. Farkas, Julius. "Über die Theorie der Einfachen Ungleichungen". Journal für die Reine und Angewandte Mathematik. 124: 1–27.
15. Comessatti, Annibale (1914). "Sulla connessione delle superfizie razionali reali". Annali di Matematica Pura ed Applicata. 23 (3): 215–283. doi:10.1007/BF02419577. S2CID 121297483.
16. Lipót Fejér, Über trigonometrische Polynome, J. Reine Angew. Math. 146 (1916), 53–82.
17. Frigyes Riesz and Béla Szőkefalvi-Nagy, Functional Analysis, Frederick Ungar Publ. Co., New York, 1955.
18. Artin, Emil (1927). "Über die Zerlegung definiter Funktionen in Quadrate". Abh. Math. Sem. Univ. Hamburg. 5: 85–99. doi:10.1007/BF02952512. S2CID 122881707.
19. Krull, Wolfgang (1932). "Allgemeine Bewertungstheorie". Journal für die reine und angewandte Mathematik. 1932 (167): 160–196. doi:10.1515/crll.1932.167.160. S2CID 199547002.
20. Baer, Reinhold (1927), "Über nicht-archimedisch geordnete Körper", Sitzungsberichte der Heidelberger Akademie der Wissenschaften. Mathematisch-Naturwissenschaftliche Klasse, 8: 3–13
21. George Pólya, Über positive Darstellung von Polynomen Vierteljschr, Naturforsch. Ges. Zürich 73 (1928) 141–145, in: R.P. Boas (Ed.), Collected Papers Vol. 2, MIT Press, Cambridge, MA, 1974, pp. 309–313
22. B. L. van der Waerden, Topologische Begründung des Kalküls der abzählenden Geometrie. Math. Ann. 102, 337–362 (1929).
23. Alfred Tarski, A decision method for elementary algebra and geometry, Rand. Corp.. 1948; UC Press, Berkeley, 1951, Announced in : Ann. Soc. Pol. Math. 9 (1930, published 1931) 206–7; and in Fund. Math. 17 (1931) 210–239.
24. Abraham Seidenberg, A new decision method for elementary algebra, Annals of Mathematics 60 (1954), 365–374.
25. Herbert Seifert, Algebraische approximation von Mannigfaltigkeiten, Mathematische Zeitschrift 41 (1936), 1–17
26. Selman Akbulut and Henry C. King, Submanifolds and homology of nonsingular real algebraic varieties, American Journal of Mathematics, vol. 107, no. 1 (Feb., 1985) p.72
27. Stone, Marshall (1940). "A general theory of spectra. I." Proceedings of the National Academy of Sciences of the United States of America. 26 (4): 280–283. doi:10.1073/pnas.26.4.280. PMC 1078172. PMID 16588355.
28. Kadison, Richard V. (1951), "A representation theory for commutative topological algebra", Memoirs of the American Mathematical Society, 7: 39 pp, MR 0044040
29. Dubois, Donald W. (1967). "A note on David Harrison's theory of preprimes". Pacific Journal of Mathematics. 21: 15–19. doi:10.2140/pjm.1967.21.15. MR 0209200. S2CID 120262803.
30. Mihai Putinar, Positive polynomials on compact semi-algebraic sets. Indiana University Mathematics Journal 42 (1993), no. 3, 969–984.
31. T. Jacobi, A representation theorem for certain partially ordered commutative rings. Mathematische Zeitschrift 237 (2001), no. 2, 259–273.
32. Nash, John (1952). "Real algebraic manifolds". Annals of Mathematics. 56 (3): 405–421. doi:10.2307/1969649. JSTOR 1969649.
33. Birkhoff, Garrett; Pierce, Richard Scott (1956). "Lattice ordered rings". Anais da Academia Brasileira de Ciências. 28: 41–69.
34. Mahé, Louis (1984). "On the Pierce–Birkhoff conjecture". Rocky Mountain Journal of Mathematics. 14 (4): 983–985. doi:10.1216/RMJ-1984-14-4-983. MR 0773148.
35. Krivine, J.-L. (1964). "Anneaux préordonnés" (PDF). Journal d'Analyse Mathématique. 12: 307–326. doi:10.1007/BF02807438.
36. G. Stengle, A nullstellensatz and a positivstellensatz in semialgebraic geometry. Math. Ann. 207 (1974), 87–97.
37. S. Lang, Algebra. Addison–Wesley Publishing Co., Inc., Reading, Mass. 1965 xvii+508 pp.
38. S. Lojasiewicz, Triangulation of semi-analytic sets, Ann. Scu. Norm. di Pisa, 18 (1964), 449–474.
39. Heisuke Hironaka, Resolution of singularities of an algebraic variety over a field of characteristic zero. I, Annals of Mathematics (2) 79 (1): (1964) 109–203, and part II, pp. 205–326.
40. Hassler Whitney, Local properties of analytic varieties, Differential and combinatorial topology (ed. S. Cairns), Princeton Univ. Press, Princeton N.J. (1965), 205–244.
41. Theodore S. Motzkin, The arithmetic-geometric inequality. 1967 Inequalities (Proc. Sympos. Wright-Patterson Air Force Base, Ohio, 1965) pp. 205–224 MR0223521.
42. "Proof of Gudkov's hypothesis". V. A. Rokhlin. Functional Analysis and Its Applications, volume 6, pp. 136–138 (1972)
43. Alberto Tognoli, Su una congettura di Nash, Annali della Scuola Normale Superiore di Pisa 27, 167–185 (1973).
44. George E. Collins, "Quantifier elimination for real closed fields by cylindrical algebraic decomposition", Lect. Notes Comput. Sci. 33, 134–183, 1975 MR0403962.
45. Jean-Louis Verdier, Stratifications de Whitney et théorème de Bertini-Sard, Inventiones Mathematicae 36, 295–312 (1976).
46. Marie-Françoise Coste-Roy, Michel Coste, Topologies for real algebraic geometry. Topos theoretic methods in geometry, pp. 37–100, Various Publ. Ser., 30, Aarhus Univ., Aarhus, 1979.
47. Oleg Ya. Viro, Gluing of plane real algebraic curves and constructions of curves of degrees 6 and 7. In Topology (Leningrad, 1982), volume 1060 of Lecture Notes in Mathematics, pages 187–200. Springer, Berlin, 1984
48. Viro, Oleg Ya. (1980). "Кривые степени 7, кривые степени 8 и гипотеза Рэгсдейл" [Curves of degree 7, curves of degree 8 and the hypothesis of Ragsdale]. Doklady Akademii Nauk SSSR. 254 (6): 1306–1309. Translated in "Curves of degree 7, curves of degree 8 and Ragsdale's conjecture". Soviet Mathematics - Doklady. 22: 566–570. 1980. Zbl 0422.14032.
49. Itenberg, Ilia; Mikhalkin, Grigory; Shustin, Eugenii (2007). Tropical algebraic geometry. Oberwolfach Seminars. Vol. 35. Basel: Birkhäuser. pp. 34–35. ISBN 978-3-7643-8309-1. Zbl 1162.14300.
50. Mikhalkin, Grigory (2005). "Enumerative tropical algebraic geometry in $\mathbb {R} ^{2}$". Journal of the American Mathematical Society. 18: 313–377. doi:10.1090/S0894-0347-05-00477-7.
51. Selman Akbulut and Henry C. King, The topology of real algebraic sets with isolated singularities, Annals of Mathematics 113 (1981), 425–446.
52. Selman Akbulut and Henry C. King, All knots are algebraic, Commentarii Mathematici Helvetici 56, Fasc. 3 (1981), 339–351.
53. S. Akbulut and H.C. King, Real algebraic structures on topological spaces, Publications Mathématiques de l'IHÉS 53 (1981), 79–162.
54. S. Akbulut and L. Taylor, A topological resolution theorem, Publications Mathématiques de l'IHÉS 53 (1981), 163–196.
55. S. Akbulut and H.C. King, The topology of real algebraic sets, L'Enseignement Mathématique 29 (1983), 221–261.
56. Selman Akbulut and Henry C. King, Topology of real algebraic sets, MSRI Pub, 25. Springer-Verlag, New York (1992) ISBN 0-387-97744-9
57. Coste, Michel; Kurdyka, Krzysztof (1992). "On the link of a stratum in a real algebraic set". Topology. 31 (2): 323–336. doi:10.1016/0040-9383(92)90025-d. MR 1167174.
58. McCrory, Clint; Parusiński, Adam (2007), "Algebraically constructible functions: real algebra and topology", Arc spaces and additive invariants in real algebraic and analytic geometry, Panoramas et Synthèses, vol. 24, Paris: Société mathématique de France, pp. 69–85, arXiv:math/0202086, MR 2409689
59. Bröcker, Ludwig (1984). "Minimale erzeugung von Positivbereichen". Geometriae Dedicata (in German). 16 (3): 335–350. doi:10.1007/bf00147875. MR 0765338. S2CID 117475206.
60. C. Scheiderer, Stability index of real varieties. Inventiones Mathematicae 97 (1989), no. 3, 467–483.
61. R. Benedetti and M. Dedo, Counterexamples to representing homology classes by real algebraic subvarieties up to homeomorphism, Compositio Mathematica, 53, (1984), 143–151.
62. S. Akbulut and H.C. King, All compact manifolds are homeomorphic to totally algebraic real algebraic sets, Comment. Math. Helv. 66 (1991) 139–149.
63. K. Schmüdgen, The K-moment problem for compact semi-algebraic sets. Math. Ann. 289 (1991), no. 2, 203–206.
64. T. Wörmann Strikt Positive Polynome in der Semialgebraischen Geometrie, Univ. Dortmund 1998.
65. B. Reznick, Uniform denominators in Hilbert's seventeenth problem. Math. Z. 220 (1995), no. 1, 75–97.
66. S. Akbulut and H.C. King On approximating submanifolds by algebraic sets and a solution to the Nash conjecture, Inventiones Mathematicae 107 (1992), 87–98
67. S. Akbulut and H.C. King, Algebraicity of Immersions, Topology, vol. 31, no. 4, (1992), 701–712.
68. R. Benedetti and A. Marin , Déchirures de variétés de dimension trois ...., Comment. Math. Helv. 67 (1992), 514–545.
69. E. Bierstone and P.D. Milman , Canonical desingularization in characteristic zero by blowing up the maximum strata of a local invariant, Inventiones Mathematicae 128 (2) (1997) 207–302.
70. G. Mikhalkin, Blow up equivalence of smooth closed manifolds, Topology, 36 (1997) 287–299
71. János Kollár, The Nash conjecture for algebraic threefolds, ERA of AMS 4 (1998) 63–73
72. C. Scheiderer, Sums of squares of regular functions on real algebraic varieties. Transactions of the American Mathematical Society 352 (2000), no. 3, 1039–1069.
73. C. Scheiderer, Sums of squares on real algebraic curves, Mathematische Zeitschrift 245 (2003), no. 4, 725–760.
74. C. Scheiderer, Sums of squares on real algebraic surfaces. Manuscripta Mathematica 119 (2006), no. 4, 395–410.
75. János Kollár, The Nash conjecture for nonprojective threefolds, arXiv:math/0009108v1
76. J.-Y. Welschinger, Invariants of real rational symplectic 4-manifolds and lower bounds in real enumerative geometry, Inventiones Mathematicae 162 (2005), no. 1, 195–234. Zbl 1082.14052
77. S. Akbulut and H.C. King, Transcendental submanifolds of RPn Comment. Math. Helv., 80, (2005), 427–432
78. S. Akbulut, Real algebraic structures, Proceedings of GGT, (2005) 49–58, arXiv:math/0601105v3.
Real computation
In computability theory, the theory of real computation deals with hypothetical computing machines using infinite-precision real numbers. They are given this name because they operate on the set of real numbers. Within this theory, it is possible to prove interesting statements such as "The complement of the Mandelbrot set is only partially decidable."
These hypothetical computing machines can be viewed as idealised analog computers which operate on real numbers, whereas digital computers are limited to computable numbers. They may be further subdivided into differential and algebraic models (digital computers, in this context, should be thought of as topological, at least insofar as their operation on computable reals is concerned[1]). Depending on the model chosen, real computers may be able to solve problems that are intractable on digital computers, or vice versa. For example, Hava Siegelmann's neural nets can have noncomputable real weights, making them able to compute nonrecursive languages. Conversely, Claude Shannon's idealized analog computer can only solve algebraic differential equations, while a digital computer can solve some transcendental equations as well. However, this comparison is not entirely fair, since computations in Shannon's idealized analog computer are carried out instantaneously, i.e., in real time; Shannon's model can be adapted to cope with this problem.[2]
A canonical model of computation over the reals is the Blum–Shub–Smale (BSS) machine.
If real computation were physically realizable, one could use it to solve NP-complete problems, and even #P-complete problems, in polynomial time. However, the existence of unlimited-precision real numbers in the physical universe is prohibited by the holographic principle and the Bekenstein bound.[3]
See also
• Hypercomputation, for other such powerful machines.
References
1. Klaus Weihrauch (1995). A Simple Introduction to Computable Analysis.
2. O. Bournez; M. L. Campagnolo; D. S. Graça & E. Hainry (Jun 2007). "Polynomial differential equations compute all real computable functions on computable compact intervals". Journal of Complexity. 23 (3): 317–335. doi:10.1016/j.jco.2006.12.005.
3. Scott Aaronson, NP-complete Problems and Physical Reality, ACM SIGACT News, Vol. 36, No. 1. (March 2005), pp. 30–52.
Further reading
• Lenore Blum, Felipe Cucker, Michael Shub, and Stephen Smale (1998). Complexity and Real Computation. ISBN 0-387-98281-7.{{cite book}}: CS1 maint: multiple names: authors list (link)
• Campagnolo, Manuel Lameiras (July 2001). Computational complexity of real valued recursive functions and analog circuits. Universidade Técnica de Lisboa, Instituto Superior Técnico.
• Natschläger, Thomas, Wolfgang Maass, Henry Markram. The "Liquid Computer" A Novel Strategy for Real-Time Computing on Time Series (PDF).{{cite book}}: CS1 maint: multiple names: authors list (link)
• Siegelmann, Hava (December 1998). Neural Networks and Analog Computation: Beyond the Turing Limit. ISBN 0-8176-3949-7.
• Siegelmann, Hava & Sontag, Eduardo D. On The Computational Power Of Neural Nets.
Real coordinate space
In mathematics, the real coordinate space of dimension n, denoted Rn or $\mathbb {R} ^{n}$, is the set of the n-tuples of real numbers, that is the set of all sequences of n real numbers. Special cases are called the real line R1 and the real coordinate plane R2. With component-wise addition and scalar multiplication, it is a real vector space, and its elements are called coordinate vectors.
The coordinates over any basis of the elements of a real vector space form a real coordinate space of the same dimension as that of the vector space. Similarly, the Cartesian coordinates of the points of a Euclidean space of dimension n form a real coordinate space of dimension n.
These one-to-one correspondences between vectors, points and coordinate vectors explain the names coordinate space and coordinate vector. They allow the use of geometric terms and methods for studying real coordinate spaces and, conversely, the use of methods of calculus in geometry. This approach to geometry was introduced by René Descartes in the 17th century. It is widely used, as it allows locating points in Euclidean spaces and computing with them.
Definition and structures
For any natural number n, the set Rn consists of all n-tuples of real numbers (R). It is called the "n-dimensional real space" or the "real n-space".
An element of Rn is thus an n-tuple, and is written
$(x_{1},x_{2},\ldots ,x_{n})$
where each xi is a real number. So, in multivariable calculus, the domain of a function of several real variables and the codomain of a real vector valued function are subsets of Rn for some n.
The real n-space has several further properties, notably:
• With componentwise addition and scalar multiplication, it is a real vector space. Every n-dimensional real vector space is isomorphic to it.
• With the dot product (sum of the term by term product of the components), it is an inner product space. Every n-dimensional real inner product space is isomorphic to it.
• Like every inner product space, it is a topological space, and a topological vector space.
• It is a Euclidean space and a real affine space, and every Euclidean or affine space is isomorphic to it.
• It is an analytic manifold, and can be considered as the prototype of all manifolds, as, by definition, a manifold is, near each point, isomorphic to an open subset of Rn.
• It is an algebraic variety, and every real algebraic variety is a subset of Rn.
These properties and structures of Rn make it fundamental in almost all areas of mathematics and their application domains, such as statistics, probability theory, and many parts of physics.
The domain of a function of several variables
Main articles: Multivariable calculus and Real multivariable function
Any function f(x1, x2, ..., xn) of n real variables can be considered as a function on Rn (that is, with Rn as its domain). The use of the real n-space, instead of several variables considered separately, can simplify notation and suggest reasonable definitions. Consider, for n = 2, a function composition of the following form:
$F(t)=f(g_{1}(t),g_{2}(t)),$
where functions g1 and g2 are continuous. If
• ∀x1 ∈ R : f(x1, ·) is continuous (in x2)
• ∀x2 ∈ R : f(·, x2) is continuous (in x1)
then F is not necessarily continuous. Continuity of f is a stronger condition: continuity in the natural R2 topology (discussed below), also called multivariable continuity, is sufficient for continuity of the composition F.
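This distinction can be checked numerically. Below is a minimal Python sketch (the function name is our own) of the standard counterexample f(x1, x2) = 2·x1·x2/(x1² + x2²), extended by f(0, 0) = 0, which is continuous in each variable separately but not jointly continuous:

```python
def f(x1, x2):
    # Standard counterexample: separately continuous at (0, 0), not jointly continuous.
    if x1 == 0.0 and x2 == 0.0:
        return 0.0
    return 2 * x1 * x2 / (x1 ** 2 + x2 ** 2)

# Along each coordinate axis f is identically 0, so f(x1, .) and f(., x2) are continuous.
print(f(0.0, 1e-9), f(1e-9, 0.0))  # 0.0 0.0

# With the continuous paths g1(t) = g2(t) = t, F(t) = f(t, t) equals 1 for all t != 0,
# while F(0) = f(0, 0) = 0, so the composition F is discontinuous at t = 0.
print(f(1e-9, 1e-9))  # 1.0
```

Approaching the origin along the diagonal thus gives the value 1, however close to (0, 0) one gets, while the value at the origin is 0.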
Vector space
The coordinate space Rn forms an n-dimensional vector space over the field of real numbers with the addition of the structure of linearity, and is often still denoted Rn. The operations on Rn as a vector space are typically defined by
$\mathbf {x} +\mathbf {y} =(x_{1}+y_{1},x_{2}+y_{2},\ldots ,x_{n}+y_{n})$
$\alpha \mathbf {x} =(\alpha x_{1},\alpha x_{2},\ldots ,\alpha x_{n}).$
The zero vector is given by
$\mathbf {0} =(0,0,\ldots ,0)$
and the additive inverse of the vector x is given by
$-\mathbf {x} =(-x_{1},-x_{2},\ldots ,-x_{n}).$
This structure is important because any n-dimensional real vector space is isomorphic to the vector space Rn.
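As a concrete illustration, the vector-space operations above can be sketched in a few lines of Python (tuple-based helpers of our own, not a library API):

```python
def vadd(x, y):
    # Componentwise addition in R^n.
    return tuple(xi + yi for xi, yi in zip(x, y))

def smul(a, x):
    # Scalar multiplication.
    return tuple(a * xi for xi in x)

def vneg(x):
    # Additive inverse: -x = (-1) * x.
    return smul(-1, x)

x = (1.0, 2.0, 3.0)
y = (4.0, 5.0, 6.0)
zero = (0.0,) * 3

print(vadd(x, y))                # (5.0, 7.0, 9.0)
print(smul(2.0, x))              # (2.0, 4.0, 6.0)
print(vadd(x, vneg(x)) == zero)  # True: x + (-x) is the zero vector
```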
Matrix notation
Main article: Matrix (mathematics)
In standard matrix notation, each element of Rn is typically written as a column vector
$\mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}}$
and sometimes as a row vector:
$\mathbf {x} ={\begin{bmatrix}x_{1}&x_{2}&\cdots &x_{n}\end{bmatrix}}.$
The coordinate space Rn may then be interpreted as the space of all n × 1 column vectors, or all 1 × n row vectors with the ordinary matrix operations of addition and scalar multiplication.
Linear transformations from Rn to Rm may then be written as m × n matrices which act on the elements of Rn via left multiplication (when the elements of Rn are column vectors) and on elements of Rm via right multiplication (when they are row vectors). The formula for left multiplication, a special case of matrix multiplication, is:
$(A{\mathbf {x} })_{k}=\sum _{l=1}^{n}A_{kl}x_{l}$
Any linear transformation is a continuous function (see below). Also, a matrix defines an open map from Rn to Rm if and only if the rank of the matrix equals m.
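The left-multiplication formula above translates directly into code. The following Python sketch (a helper of our own) computes (Ax)_k = Σ_l A_{kl} x_l for an m × n matrix acting on a column vector in Rn:

```python
def matvec(A, x):
    # (A x)_k = sum over l of A[k][l] * x[l]; A is m x n, x is in R^n.
    return tuple(sum(a_kl * x_l for a_kl, x_l in zip(row, x)) for row in A)

A = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]        # a 3 x 2 matrix: a linear map R^2 -> R^3
x = (1.0, 1.0)

print(matvec(A, x))     # (3.0, 7.0, 11.0)
```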
Standard basis
Main article: Standard basis
The coordinate space Rn comes with a standard basis:
${\begin{aligned}\mathbf {e} _{1}&=(1,0,\ldots ,0)\\\mathbf {e} _{2}&=(0,1,\ldots ,0)\\&{}\;\;\vdots \\\mathbf {e} _{n}&=(0,0,\ldots ,1)\end{aligned}}$
To see that this is a basis, note that an arbitrary vector in Rn can be written uniquely in the form
$\mathbf {x} =\sum _{i=1}^{n}x_{i}\mathbf {e} _{i}.$
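The unique decomposition over the standard basis can be checked numerically; below is a small Python sketch (the helper name is our own, not from any library):

```python
def std_basis(i, n):
    # e_i: 1.0 in position i, 0.0 elsewhere.
    return tuple(1.0 if j == i else 0.0 for j in range(n))

x = (2.0, -1.0, 0.0, 5.0)
n = len(x)

# Reassemble x as the linear combination sum over i of x_i * e_i.
recon = tuple(sum(x[i] * std_basis(i, n)[j] for i in range(n)) for j in range(n))
print(recon == x)  # True
```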
Geometric properties and uses
Orientation
The fact that real numbers, unlike many other fields, constitute an ordered field yields an orientation structure on Rn. Any full-rank linear map of Rn to itself either preserves or reverses orientation of the space depending on the sign of the determinant of its matrix. If one permutes coordinates (or, in other words, elements of the basis), the resulting orientation will depend on the parity of the permutation.
Diffeomorphisms of Rn or domains in it, by virtue of their nonvanishing Jacobian determinant, are also classified as orientation-preserving or orientation-reversing. This has important consequences for the theory of differential forms, whose applications include electrodynamics.
Another manifestation of this structure is that the point reflection in Rn has different properties depending on the evenness of n. For even n it preserves orientation, while for odd n it reverses orientation (see also improper rotation).
Affine space
Further information: Affine space
Rn understood as an affine space is the same space, where Rn as a vector space acts by translations. Conversely, a vector has to be understood as a "difference between two points", usually illustrated by a directed line segment connecting two points. The distinction is that there is no canonical choice of where the origin should go in an affine n-space, because it can be translated anywhere.
Convexity
Further information: Convex analysis
In a real vector space, such as Rn, one can define a convex cone, which contains all non-negative linear combinations of its vectors. The corresponding concept in an affine space is a convex set, which allows only convex combinations (non-negative linear combinations that sum to 1).
In the language of universal algebra, a vector space is an algebra over the universal vector space R∞ of finite sequences of coefficients, corresponding to finite sums of vectors, while an affine space is an algebra over the universal affine hyperplane in this space (of finite sequences summing to 1), a cone is an algebra over the universal orthant (of finite sequences of nonnegative numbers), and a convex set is an algebra over the universal simplex (of finite sequences of nonnegative numbers summing to 1). This geometrizes the axioms in terms of "sums with (possible) restrictions on the coordinates".
Another concept from convex analysis is a convex function from Rn to real numbers, which is defined through an inequality between its value on a convex combination of points and sum of values in those points with the same coefficients.
Euclidean space
Main articles: Euclidean space and Cartesian coordinate system
The dot product
$\mathbf {x} \cdot \mathbf {y} =\sum _{i=1}^{n}x_{i}y_{i}=x_{1}y_{1}+x_{2}y_{2}+\cdots +x_{n}y_{n}$
defines the norm |x| = √x ⋅ x on the vector space Rn. If every vector has its Euclidean norm, then for any pair of points the distance
$d(\mathbf {x} ,\mathbf {y} )=\|\mathbf {x} -\mathbf {y} \|={\sqrt {\sum _{i=1}^{n}(x_{i}-y_{i})^{2}}}$
is defined, providing a metric space structure on Rn in addition to its affine structure.
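A minimal Python sketch of the dot product and the Euclidean distance it induces (helper names are ours):

```python
import math

def dot(x, y):
    # x . y = sum of componentwise products.
    return sum(xi * yi for xi, yi in zip(x, y))

def dist(x, y):
    # Euclidean distance: the norm of x - y.
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

x = (3.0, 4.0)
y = (0.0, 0.0)
print(dot(x, x))   # 25.0, so |x| = sqrt(x . x) = 5.0
print(dist(x, y))  # 5.0
```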
As for vector space structure, the dot product and Euclidean distance usually are assumed to exist in Rn without special explanations. However, the real n-space and a Euclidean n-space are distinct objects, strictly speaking. Any Euclidean n-space has a coordinate system where the dot product and Euclidean distance have the form shown above, called Cartesian. But there are many Cartesian coordinate systems on a Euclidean space.
Conversely, the above formula for the Euclidean metric defines the standard Euclidean structure on Rn, but it is not the only possible one. Actually, any positive-definite quadratic form q defines its own "distance" √q(x − y), but it is not very different from the Euclidean one in the sense that
$\exists C_{1}>0,\ \exists C_{2}>0,\ \forall \mathbf {x} ,\mathbf {y} \in \mathbb {R} ^{n}:C_{1}d(\mathbf {x} ,\mathbf {y} )\leq {\sqrt {q(\mathbf {x} -\mathbf {y} )}}\leq C_{2}d(\mathbf {x} ,\mathbf {y} ).$
Such a change of the metric preserves some of its properties, for example the property of being a complete metric space. It also implies that any full-rank linear transformation of Rn, or its affine transformation, does not magnify distances by more than the fixed factor C2, and does not shrink distances by more than the fixed factor 1/C1.
The aforementioned equivalence of metric functions remains valid if √q(x − y) is replaced with M(x − y), where M is any convex positive homogeneous function of degree 1, i.e. a vector norm (see Minkowski distance for useful examples). Because any "natural" metric on Rn is not especially different from the Euclidean metric, Rn is not always distinguished from a Euclidean n-space even in professional mathematical works.
In algebraic and differential geometry
Although the definition of a manifold does not require that its model space should be Rn, this choice is the most common, and almost exclusive one in differential geometry.
On the other hand, Whitney embedding theorems state that any real differentiable m-dimensional manifold can be embedded into R2m.
Other appearances
Other structures considered on Rn include the one of a pseudo-Euclidean space, symplectic structure (even n), and contact structure (odd n). All these structures, although can be defined in a coordinate-free manner, admit standard (and reasonably simple) forms in coordinates.
Rn is also a real vector subspace of Cn which is invariant to complex conjugation; see also complexification.
Polytopes in Rn
See also: Linear programming and Convex polytope
There are three families of polytopes which have simple representations in Rn spaces, for any n, and can be used to visualize any affine coordinate system in a real n-space. Vertices of a hypercube have coordinates (x1, x2, ..., xn) where each xk takes on one of only two values, typically 0 or 1. However, any two numbers can be chosen instead of 0 and 1, for example −1 and 1. An n-hypercube can be thought of as the Cartesian product of n identical intervals (such as the unit interval [0,1]) on the real line. As an n-dimensional subset it can be described with a system of 2n inequalities:
${\begin{matrix}0\leq x_{1}\leq 1\\\vdots \\0\leq x_{n}\leq 1\end{matrix}}$
for [0,1], and
${\begin{matrix}|x_{1}|\leq 1\\\vdots \\|x_{n}|\leq 1\end{matrix}}$
for [−1,1].
Each vertex of the cross-polytope has, for some k, the xk coordinate equal to ±1 and all other coordinates equal to 0 (such that it is the kth standard basis vector up to sign). This is the dual polytope of the hypercube. As an n-dimensional subset it can be described with a single inequality which uses the absolute value operation:
$\sum _{k=1}^{n}|x_{k}|\leq 1\,,$
but this can be expressed with a system of 2n linear inequalities as well.
The third polytope with simply enumerable coordinates is the standard simplex, whose vertices are n standard basis vectors and the origin (0, 0, ..., 0). As an n-dimensional subset it is described with a system of n + 1 linear inequalities:
${\begin{matrix}0\leq x_{1}\\\vdots \\0\leq x_{n}\\\sum \limits _{k=1}^{n}x_{k}\leq 1\end{matrix}}$
Replacement of all "≤" with "<" gives interiors of these polytopes.
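Membership in each of the three polytopes reduces to checking the inequalities above; a small Python sketch (function names are ours):

```python
def in_hypercube(x):
    # 0 <= x_k <= 1 for every coordinate (the [0,1] version).
    return all(0.0 <= xk <= 1.0 for xk in x)

def in_cross_polytope(x):
    # Single inequality: sum of |x_k| <= 1.
    return sum(abs(xk) for xk in x) <= 1.0

def in_standard_simplex(x):
    # x_k >= 0 for all k, and sum of x_k <= 1.
    return all(xk >= 0.0 for xk in x) and sum(x) <= 1.0

p = (0.2, 0.3, 0.1)
print(in_hypercube(p), in_cross_polytope(p), in_standard_simplex(p))  # True True True
print(in_cross_polytope((0.8, 0.5, 0.0)))  # False: 0.8 + 0.5 = 1.3 > 1
```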
Topological properties
The topological structure of Rn (called standard topology, Euclidean topology, or usual topology) can be obtained not only from Cartesian product. It is also identical to the natural topology induced by Euclidean metric discussed above: a set is open in the Euclidean topology if and only if it contains an open ball around each of its points. Also, Rn is a linear topological space (see continuity of linear maps above), and there is only one possible (non-trivial) topology compatible with its linear structure. As there are many open linear maps from Rn to itself which are not isometries, there can be many Euclidean structures on Rn which correspond to the same topology. Actually, it does not depend much even on the linear structure: there are many non-linear diffeomorphisms (and other homeomorphisms) of Rn onto itself, or its parts such as a Euclidean open ball or the interior of a hypercube).
Rn has the topological dimension n. An important result on the topology of Rn, that is far from superficial, is Brouwer's invariance of domain. Any subset of Rn (with its subspace topology) that is homeomorphic to another open subset of Rn is itself open. An immediate consequence of this is that Rm is not homeomorphic to Rn if m ≠ n – an intuitively "obvious" result which is nonetheless difficult to prove.
Despite the difference in topological dimension, and contrary to a naïve perception, it is possible to map a lesser-dimensional real space continuously and surjectively onto Rn. A continuous (although not smooth) space-filling curve (an image of R1) is possible.
Examples
Empty column vector,
the only element of R0
n ≤ 1
Cases of 0 ≤ n ≤ 1 do not offer anything new: R1 is the real line, whereas R0 (the space containing the empty column vector) is a singleton, understood as a zero vector space. However, it is useful to include these as trivial cases of theories that describe different n.
n = 2
Further information: Two-dimensional space
See also: SL2(R)
The case of (x,y) where x and y are real numbers has been developed as the Cartesian plane P. Further structure has been attached with Euclidean vectors representing directed line segments in P. The plane has also been developed as the field extension $\mathbb {C} $ by appending roots of X2 + 1 = 0 to the real field $\mathbb {R} .$ The root i acts on P as a quarter turn with counterclockwise orientation. This root generates the group { i, –1, –i, +1} ≅ ℤ/4ℤ. When (x,y) is written x + y i it is a complex number.
Another group action by ℤ/2ℤ, where the actor has been expressed as j, uses the line y=x for the involution of flipping the plane (x,y) ↦ (y,x), an exchange of coordinates. In this case points of P are written x + y j and called split-complex numbers. These numbers, with the coordinate-wise addition and multiplication according to jj=+1, form a ring that is not a field.
Another ring structure on P uses a nilpotent e to write x + y e for (x,y). The action of e on P reduces the plane to a line: It can be decomposed into the projection into the x-coordinate, then quarter-turning the result to the y-axis: e (x + y e) = x e since e2 = 0. A number x + y e is a dual number. The dual numbers form a ring, but, since e has no multiplicative inverse, it does not generate a group so the action is not a group action.
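The rule e² = 0 is easy to model directly. The following Python sketch (a minimal class of our own) implements dual-number addition and multiplication; the e-coefficient of a product such as (x + e)² = x² + 2xe is what makes dual numbers useful for forward-mode automatic differentiation:

```python
class Dual:
    """A dual number x + y*e with e*e = 0."""

    def __init__(self, x, y=0.0):
        self.x, self.y = x, y

    def __add__(self, other):
        return Dual(self.x + other.x, self.y + other.y)

    def __mul__(self, other):
        # (a + b e)(c + d e) = ac + (ad + bc) e, since the e^2 term vanishes.
        return Dual(self.x * other.x, self.x * other.y + self.y * other.x)

e = Dual(0.0, 1.0)
ee = e * e
print(ee.x, ee.y)   # 0.0 0.0 : e is nilpotent

# (3 + e)^2 = 9 + 6e: the e-coefficient is the derivative of x^2 at x = 3.
w = Dual(3.0, 1.0) * Dual(3.0, 1.0)
print(w.x, w.y)     # 9.0 6.0
```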
Excluding (0,0) from P makes [x : y] projective coordinates which describe the real projective line, a one-dimensional space. Since the origin is excluded, at least one of the ratios x/y and y/x exists. Then [x : y] = [x/y : 1] or [x : y] = [1 : y/x]. The projective line P1(R) is a topological manifold covered by two coordinate charts, [z : 1] → z or [1 : z] → z, which form an atlas. For points covered by both charts the transition function is multiplicative inversion on an open neighborhood of the point, which provides a homeomorphism as required in a manifold. One application of the real projective line is found in Cayley–Klein metric geometry.
n = 3
Main article: Three-dimensional space
n = 4
Further information: Four-dimensional space
R4 can be imagined using the fact that the 16 points (x1, x2, x3, x4), where each xk is either 0 or 1, are vertices of a tesseract, the 4-hypercube (see above).
The first major use of R4 is a spacetime model: three spatial coordinates plus one temporal. This is usually associated with the theory of relativity, although four dimensions have been used for such models since Galileo. The choice of theory leads to a different structure, though: in Galilean relativity the t coordinate is privileged, but in Einsteinian relativity it is not. Special relativity is set in Minkowski space. General relativity uses curved spaces, which may be thought of as R4 with a curved metric for most practical purposes. None of these structures provides a (positive-definite) metric on R4.
Euclidean R4 also attracts the attention of mathematicians, for example due to its relation to the quaternions, which form a 4-dimensional real algebra. See rotations in 4-dimensional Euclidean space for some information.
In differential geometry, n = 4 is the only case where Rn admits a non-standard differential structure: see exotic R4.
Norms on Rn
One could define many norms on the vector space Rn. Some common examples are
• the p-norm, defined by $ \|\mathbf {x} \|_{p}:={\sqrt[{p}]{\sum _{i=1}^{n}|x_{i}|^{p}}}$ for all $\mathbf {x} \in \mathbb {R} ^{n}$, where $p\geq 1$ is a real number. The case $p=2$ is very important, because it is exactly the Euclidean norm.
• the $\infty $-norm or maximum norm, defined by $\|\mathbf {x} \|_{\infty }:=\max\{|x_{1}|,\dots ,|x_{n}|\}$ for all $\mathbf {x} \in \mathbb {R} ^{n}$. This is the limit of all the p-norms: $ \|\mathbf {x} \|_{\infty }=\lim _{p\to \infty }{\sqrt[{p}]{\sum _{i=1}^{n}|x_{i}|^{p}}}$.
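Both norms, and the limiting behaviour of the p-norms, can be observed numerically; a small Python sketch (helper names are ours):

```python
def p_norm(x, p):
    # ||x||_p = (sum of |x_i|^p)^(1/p)
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def max_norm(x):
    # ||x||_inf = max of |x_i|
    return max(abs(xi) for xi in x)

x = (3.0, -4.0, 1.0)
print(p_norm(x, 1))      # 8.0
print(p_norm(x, 2))      # 5.0990195... (the Euclidean norm, sqrt(26))
for p in (4, 16, 64, 256):
    print(p_norm(x, p))  # decreases toward the maximum norm
print(max_norm(x))       # 4.0
```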
A surprising and helpful result is that all norms defined on Rn are equivalent. This means that for two arbitrary norms $\|\cdot \|$ and $\|\cdot \|'$ on Rn one can always find positive real numbers $\alpha ,\beta >0$, such that
$\alpha \cdot \|\mathbf {x} \|\leq \|\mathbf {x} \|'\leq \beta \cdot \|\mathbf {x} \|$
for all $\mathbf {x} \in \mathbb {R} ^{n}$.
This defines an equivalence relation on the set of all norms on Rn. With this result one can check that a sequence of vectors in Rn converges with respect to $\|\cdot \|$ if and only if it converges with respect to $\|\cdot \|'$.
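For the concrete pair $\|\cdot \|_{\infty }$ and $\|\cdot \|_{2}$ the equivalence constants are explicit: one may take $\alpha =1$ and $\beta ={\sqrt {n}}$. The following Python sketch (our own helpers) checks this for a sample vector:

```python
import math

def norm2(x):
    # Euclidean norm.
    return math.sqrt(sum(xi * xi for xi in x))

def norm_inf(x):
    # Maximum norm.
    return max(abs(xi) for xi in x)

# ||x||_inf <= ||x||_2 <= sqrt(n) * ||x||_inf for every x in R^n.
x = (1.0, -2.0, 3.0, -4.0)
n = len(x)
print(norm_inf(x))  # 4.0
print(norm2(x))     # sqrt(30) ~ 5.477
print(norm_inf(x) <= norm2(x) <= math.sqrt(n) * norm_inf(x))  # True
```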
Here is a sketch of what a proof of this result may look like:
Because of the equivalence relation it is enough to show that every norm on Rn is equivalent to the Euclidean norm $\|\cdot \|_{2}$. Let $\|\cdot \|$ be an arbitrary norm on Rn. The proof is divided into two steps:
• We show that there exists a $\beta >0$, such that $\|\mathbf {x} \|\leq \beta \cdot \|\mathbf {x} \|_{2}$ for all $\mathbf {x} \in \mathbb {R} ^{n}$. In this step we use the fact that every $\mathbf {x} =(x_{1},\dots ,x_{n})\in \mathbb {R} ^{n}$ can be represented as a linear combination of the standard basis: $ \mathbf {x} =\sum _{i=1}^{n}e_{i}\cdot x_{i}$. Then with the Cauchy–Schwarz inequality
$\|\mathbf {x} \|=\left\|\sum _{i=1}^{n}e_{i}\cdot x_{i}\right\|\leq \sum _{i=1}^{n}\|e_{i}\|\cdot |x_{i}|\leq {\sqrt {\sum _{i=1}^{n}\|e_{i}\|^{2}}}\cdot {\sqrt {\sum _{i=1}^{n}|x_{i}|^{2}}}=\beta \cdot \|\mathbf {x} \|_{2},$
where $ \beta :={\sqrt {\sum _{i=1}^{n}\|e_{i}\|^{2}}}$.
• Now we have to find an $\alpha >0$, such that $\alpha \cdot \|\mathbf {x} \|_{2}\leq \|\mathbf {x} \|$ for all $\mathbf {x} \in \mathbb {R} ^{n}$. Assume there is no such $\alpha $. Then there exists for every $k\in \mathbb {N} $ a $\mathbf {x} _{k}\in \mathbb {R} ^{n}$, such that $\|\mathbf {x} _{k}\|_{2}>k\cdot \|\mathbf {x} _{k}\|$. Define a second sequence $({\tilde {\mathbf {x} }}_{k})_{k\in \mathbb {N} }$ by $ {\tilde {\mathbf {x} }}_{k}:={\frac {\mathbf {x} _{k}}{\|\mathbf {x} _{k}\|_{2}}}$. This sequence is bounded because $\|{\tilde {\mathbf {x} }}_{k}\|_{2}=1$. So by the Bolzano–Weierstrass theorem there exists a convergent subsequence $({\tilde {\mathbf {x} }}_{k_{j}})_{j\in \mathbb {N} }$ with limit $\mathbf {a} \in \mathbb {R} ^{n}$. Now we show that $\|\mathbf {a} \|_{2}=1$ but $\mathbf {a} =\mathbf {0} $, which is a contradiction. It is
$\|\mathbf {a} \|\leq \left\|\mathbf {a} -{\tilde {\mathbf {x} }}_{k_{j}}\right\|+\left\|{\tilde {\mathbf {x} }}_{k_{j}}\right\|\leq \beta \cdot \left\|\mathbf {a} -{\tilde {\mathbf {x} }}_{k_{j}}\right\|_{2}+{\frac {\|\mathbf {x} _{k_{j}}\|}{\|\mathbf {x} _{k_{j}}\|_{2}}}\ {\overset {j\to \infty }{\longrightarrow }}\ 0,$
because $\|\mathbf {a} -{\tilde {\mathbf {x} }}_{k_{j}}\|_{2}\to 0$ and $0\leq {\frac {\|\mathbf {x} _{k_{j}}\|}{\|\mathbf {x} _{k_{j}}\|_{2}}}<{\frac {1}{k_{j}}}$, so ${\frac {\|\mathbf {x} _{k_{j}}\|}{\|\mathbf {x} _{k_{j}}\|_{2}}}\to 0$. This implies $\|\mathbf {a} \|=0$, so $\mathbf {a} =\mathbf {0} $. On the other hand $\|\mathbf {a} \|_{2}=1$, because $\|\mathbf {a} \|_{2}=\left\|\lim _{j\to \infty }{\tilde {\mathbf {x} }}_{k_{j}}\right\|_{2}=\lim _{j\to \infty }\left\|{\tilde {\mathbf {x} }}_{k_{j}}\right\|_{2}=1$. These two statements cannot both hold, so the assumption was false and such an $\alpha >0$ exists.
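The constant from the first step is easy to probe numerically. The following sketch (plain Python; the helper names norm1 and norm2 are illustrative) checks the bound $\|\mathbf{x}\|\leq \beta \|\mathbf{x}\|_2$ for the taxicab norm, where $\beta ={\sqrt {\sum _{i}\|e_{i}\|^{2}}}={\sqrt {n}}$:

```python
import math
import random

def norm1(x):
    # The taxicab norm ||x||_1 = sum |x_i|, standing in for an arbitrary norm on R^n.
    return sum(abs(t) for t in x)

def norm2(x):
    # The Euclidean norm ||x||_2.
    return math.sqrt(sum(t * t for t in x))

n = 5
# beta = sqrt(sum ||e_i||^2) from the Cauchy-Schwarz step; each standard
# basis vector has ||e_i||_1 = 1, so here beta = sqrt(n).
beta = math.sqrt(sum(norm1([1.0 if i == j else 0.0 for j in range(n)]) ** 2
                     for i in range(n)))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    assert norm1(x) <= beta * norm2(x) + 1e-9
```

The random sampling is of course no proof, but it illustrates that the Cauchy–Schwarz bound is not tight for most vectors.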
See also
• Exponential object, for theoretical explanation of the superscript notation
• Geometric space
• Real projective space
|
Wikipedia
|
Complex dimension
In mathematics, complex dimension usually refers to the dimension of a complex manifold or a complex algebraic variety.[1] These are spaces in which the local neighborhoods of points (or of non-singular points in the case of a variety) are modeled on a Cartesian product of the form $\mathbb {C} ^{d}$ for some $d$, and the complex dimension is the exponent $d$ in this product. Because $\mathbb {C} $ can in turn be modeled by $\mathbb {R} ^{2}$, a space with complex dimension $d$ will have real dimension $2d$.[2] That is, a smooth manifold of complex dimension $d$ has real dimension $2d$; and a complex algebraic variety of complex dimension $d$, away from any singular point, will also be a smooth manifold of real dimension $2d$.
However, for a real algebraic variety (that is, a variety defined by equations with real coefficients), dimension commonly refers to its complex dimension, and its real dimension refers to the maximum of the dimensions of the manifolds contained in the set of its real points. The real dimension is not greater than the dimension, and equals it if the variety is irreducible and has real points that are nonsingular. For example, the equation $x^{2}+y^{2}+z^{2}=0$ defines a variety of (complex) dimension 2 (a surface), but of real dimension 0: it has only one real point, (0, 0, 0), which is singular.[3]
The same considerations apply to codimension. For example a smooth complex hypersurface in complex projective space of dimension n will be a manifold of dimension 2(n − 1). A complex hyperplane does not separate a complex projective space into two components, because it has real codimension 2.
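The doubling of dimension can be made concrete: a complex vector of length d occupies the same data as 2d real numbers. A small sketch, assuming NumPy is available (the variable names are illustrative):

```python
import numpy as np

d = 3
# A point of C^d.
z = np.array([1 + 2j, -0.5j, 3.0], dtype=np.complex128)

# Viewing the same buffer as float64 pairs (Re, Im) models C^d by R^(2d):
r = z.view(np.float64)
assert r.shape == (2 * d,)            # real dimension is 2d
assert r[0] == 1.0 and r[1] == 2.0    # first coordinate split into (Re, Im)
```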
References
1. Cavagnaro, Catherine; Haight, William T. II (2001), Dictionary of Classical and Theoretical Mathematics, CRC Press, p. 22, ISBN 9781584880509.
2. Marsden, Jerrold E.; Ratiu, Tudor S. (1999), Introduction to Mechanics and Symmetry: A Basic Exposition of Classical Mechanical Systems, Texts in Applied Mathematics, vol. 17, Springer, p. 152, ISBN 9780387986432.
3. Bates, Daniel J.; Hauenstein, Jonathan D.; Sommese, Andrew J.; Wampler, Charles W. (2013), Numerically Solving Polynomial Systems with Bertini, Software, Environments, and Tools, vol. 25, SIAM, p. 225, ISBN 9781611972702.
Frobenius theorem (real division algebras)
In mathematics, more specifically in abstract algebra, the Frobenius theorem, proved by Ferdinand Georg Frobenius in 1877, characterizes the finite-dimensional associative division algebras over the real numbers. According to the theorem, every such algebra is isomorphic to one of the following:
• R (the real numbers)
• C (the complex numbers)
• H (the quaternions).
These algebras have real dimension 1, 2, and 4, respectively. Of these three algebras, R and C are commutative, but H is not.
Proof
The main ingredients for the following proof are the Cayley–Hamilton theorem and the fundamental theorem of algebra.
Introducing some notation
• Let D be the division algebra in question.
• Let n be the dimension of D.
• We identify the real multiples of 1 with R.
• When we write a ≤ 0 for an element a of D, we imply that a is contained in R.
• We can consider D as a finite-dimensional R-vector space. Any element d of D defines an endomorphism of D by left-multiplication; we identify d with that endomorphism. Therefore, we can speak about the trace of d, and its characteristic and minimal polynomials.
• For any z in C define the following real quadratic polynomial:
$Q(z;x)=x^{2}-2\operatorname {Re} (z)x+|z|^{2}=(x-z)(x-{\overline {z}})\in \mathbf {R} [x].$
Note that if z ∈ C ∖ R then Q(z; x) is irreducible over R.
The claim
The key to the argument is the following
Claim. The set V of all elements a of D such that a2 ≤ 0 is a vector subspace of D of dimension n − 1. Moreover, D = R ⊕ V as R-vector spaces, which implies that V generates D as an algebra.
Proof of Claim: Pick a in D with characteristic polynomial p(x). By the fundamental theorem of algebra, we can write
$p(x)=(x-t_{1})\cdots (x-t_{r})(x-z_{1})(x-{\overline {z_{1}}})\cdots (x-z_{s})(x-{\overline {z_{s}}}),\qquad t_{i}\in \mathbf {R} ,\quad z_{j}\in \mathbf {C} \backslash \mathbf {R} .$
We can rewrite p(x) in terms of the polynomials Q(z; x):
$p(x)=(x-t_{1})\cdots (x-t_{r})Q(z_{1};x)\cdots Q(z_{s};x).$
Since zj ∈ C\R, the polynomials Q(zj; x) are all irreducible over R. By the Cayley–Hamilton theorem, p(a) = 0 and because D is a division algebra, it follows that either a − ti = 0 for some i or that Q(zj; a) = 0 for some j. The first case implies that a is real. In the second case, it follows that Q(zj; x) is the minimal polynomial of a. Because p(x) has the same complex roots as the minimal polynomial and because it is real it follows that
$p(x)=Q(z_{j};x)^{k}=\left(x^{2}-2\operatorname {Re} (z_{j})x+|z_{j}|^{2}\right)^{k}$
Since p(x) is the characteristic polynomial of a, the coefficient of $x^{2k-1}$ in p(x) is tr(a) up to a sign. Therefore, reading from the above equation: tr(a) = 0 if and only if Re(zj) = 0, in other words tr(a) = 0 if and only if a2 = −|zj|2 < 0.
So V is the subset of all a with tr(a) = 0. In particular, it is a vector subspace. The rank–nullity theorem then implies that V has dimension n − 1, since it is the kernel of $\operatorname {tr} :D\to \mathbf {R} $. Since R and V are disjoint (i.e. they satisfy $\mathbf {R} \cap V=\{0\}$) and their dimensions sum to n, we have that D = R ⊕ V.
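As an illustration (not part of the proof), one can take D = H, realize each quaternion as the 4×4 real matrix of left multiplication, and observe the trace criterion concretely. A sketch, assuming NumPy; the helper names qmul and left_mult are illustrative:

```python
import numpy as np

def qmul(a, b):
    # Quaternion product in the basis (1, i, j, k).
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 - a1*b3 + a2*b0 + a3*b1,
        a0*b3 + a1*b2 - a2*b1 + a3*b0,
    ])

def left_mult(a):
    # 4x4 real matrix of x -> a*x, i.e. a viewed as an endomorphism of D = H.
    return np.column_stack([qmul(a, e) for e in np.eye(4)])

a = np.array([0.0, 1.0, -2.0, 0.5])       # a pure quaternion, so tr = 0
assert abs(np.trace(left_mult(a))) < 1e-12
sq = qmul(a, a)                           # a^2 is a negative real number
assert np.allclose(sq[1:], 0) and sq[0] < 0
```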
The finish
For a, b in V define B(a, b) = (−ab − ba)/2. Because of the identity (a + b)2 − a2 − b2 = ab + ba, it follows that B(a, b) is real. Furthermore, since a2 ≤ 0, we have B(a, a) = −a2 > 0 for a ≠ 0 (here a2 = 0 would force a = 0, because D is a division algebra). Thus B is a positive definite symmetric bilinear form, in other words, an inner product on V.
Let W be a subspace of V that generates D as an algebra and which is minimal with respect to this property. Let e1, ..., en be an orthonormal basis of W with respect to B. Then orthonormality implies that:
$e_{i}^{2}=-1,\quad e_{i}e_{j}=-e_{j}e_{i}.$
If n = 0, then D is isomorphic to R.
If n = 1, then D is generated by 1 and e1 subject to the relation $e_{1}^{2}=-1$. Hence it is isomorphic to C.
If n = 2, then the relations above show that D is generated by 1, e1, e2 subject to the relations
$e_{1}^{2}=e_{2}^{2}=-1,\quad e_{1}e_{2}=-e_{2}e_{1},\quad (e_{1}e_{2})(e_{1}e_{2})=-1.$
These are precisely the relations for H.
If n > 2, then D cannot be a division algebra. Assume that n > 2. Let u = e1e2en. It is easy to see that u2 = 1 (this only works if n > 2). If D were a division algebra, 0 = u2 − 1 = (u − 1)(u + 1) implies u = ±1, which in turn means: en = ∓e1e2 and so e1, ..., en−1 generate D. This contradicts the minimality of W.
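The relations derived in the case n = 2 can be checked concretely in a matrix model of H. A sketch, assuming NumPy; the particular 2×2 complex matrices chosen for e1 and e2 are one of several possible models:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
e1 = np.array([[1j, 0], [0, -1j]])                # plays the role of i
e2 = np.array([[0, 1], [-1, 0]], dtype=complex)   # plays the role of j

assert np.allclose(e1 @ e1, -I2)                  # e1^2 = -1
assert np.allclose(e2 @ e2, -I2)                  # e2^2 = -1
assert np.allclose(e1 @ e2, -(e2 @ e1))           # e1 e2 = -e2 e1
assert np.allclose((e1 @ e2) @ (e1 @ e2), -I2)    # (e1 e2)^2 = -1
```

The product e1 e2 then plays the role of k, and real linear combinations of I2, e1, e2, e1 e2 reproduce quaternion arithmetic.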
Remarks and related results
• The fact that D is generated by e1, ..., en subject to the above relations means that D is the Clifford algebra of Rn. The last step shows that the only real Clifford algebras which are division algebras are Cℓ0, Cℓ1 and Cℓ2.
• As a consequence, the only commutative division algebras are R and C. Also note that H is not a C-algebra. If it were, then the center of H has to contain C, but the center of H is R. Therefore, the only finite-dimensional division algebra over C is C itself.
• This theorem is closely related to Hurwitz's theorem, which states that the only real normed division algebras are R, C, H, and the (non-associative) algebra O.
• Pontryagin variant. If D is a connected, locally compact division ring, then D = R, C, or H.
References
• Ray E. Artz (2009) Scalar Algebras and Quaternions, Theorem 7.1 "Frobenius Classification", page 26.
• Ferdinand Georg Frobenius (1878) "Über lineare Substitutionen und bilineare Formen", Journal für die reine und angewandte Mathematik 84:1–63 (Crelle's Journal). Reprinted in Gesammelte Abhandlungen Band I, pp. 343–405.
• Yuri Bahturin (1993) Basic Structures of Modern Algebra, Kluwer Acad. Pub. pp. 30–32. ISBN 0-7923-2459-5.
• Leonard Dickson (1914) Linear Algebras, Cambridge University Press. See §11 "Algebra of real quaternions; its unique place among algebras", pages 10 to 12.
• R.S. Palais (1968) "The Classification of Real Division Algebras" American Mathematical Monthly 75:366–8.
• Lev Semenovich Pontryagin, Topological Groups, page 159, 1966.
Tensor product of fields
In mathematics, the tensor product of two fields is their tensor product as algebras over a common subfield. If no subfield is explicitly specified, the two fields must have the same characteristic and the common subfield is their prime subfield.
For other uses, see Tensor product (disambiguation).
Not to be confused with Tensor field.
The tensor product of two fields is sometimes a field, and often a direct product of fields; in some cases, it can contain non-zero nilpotent elements.
The tensor product of two fields expresses in a single structure the different ways to embed the two fields in a common extension field.
Compositum of fields
First, one defines the notion of the compositum of fields. This construction occurs frequently in field theory. The idea behind the compositum is to make the smallest field containing two other fields. In order to formally define the compositum, one must first specify a tower of fields. Let k be a field and L and K be two extensions of k. The compositum, denoted K.L, is defined to be $K.L=k(K\cup L)$ where the right-hand side denotes the extension generated by K and L. This assumes some field containing both K and L. Either one starts in a situation where an ambient field is easy to identify (for example if K and L are both subfields of the complex numbers), or one proves a result that allows one to place both K and L (as isomorphic copies) in some large enough field.
In many cases one can identify K.L as a vector space tensor product, taken over the field N that is the intersection of K and L. For example, if one adjoins √2 to the rational field $\mathbb {Q} $ to get K, and √3 to get L, it is true that the field M obtained as K.L inside the complex numbers $\mathbb {C} $ is (up to isomorphism)
$K\otimes _{\mathbb {Q} }L$
as a vector space over $\mathbb {Q} $. (This type of result can be verified, in general, by using the ramification theory of algebraic number theory.)
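This degree count can be verified with a computer algebra system. A sketch assuming SymPy is available: √2 + √3 is a primitive element of K.L, and its minimal polynomial over $\mathbb {Q} $ has degree 4 = [K : Q] · [L : Q], matching the dimension of the tensor product:

```python
from sympy import sqrt, minimal_polynomial, degree
from sympy.abc import x

# sqrt(2) + sqrt(3) generates the compositum K.L = Q(sqrt2, sqrt3).
p = minimal_polynomial(sqrt(2) + sqrt(3), x)
print(p)   # x**4 - 10*x**2 + 1

# [K.L : Q] = 4 = [K : Q] * [L : Q], matching dim_Q (K tensor_Q L) = 2 * 2.
assert degree(p, x) == 4
```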
Subfields K and L of M are linearly disjoint (over a subfield N) when in this way the natural N-linear map of
$K\otimes _{N}L$
to K.L is injective.[1] Naturally enough this isn't always the case, for example when K = L. When the degrees are finite, injectivity is equivalent here to bijectivity. Hence, when K and L are linearly disjoint finite-degree extension fields over N, $K.L\cong K\otimes _{N}L$, as with the aforementioned extensions of the rationals.
A significant case in the theory of cyclotomic fields is that for the nth roots of unity, for n a composite number, the subfields generated by the pk th roots of unity for prime powers dividing n are linearly disjoint for distinct p.[2]
The tensor product as ring
To get a general theory, one needs to consider a ring structure on $K\otimes _{N}L$. One can define the product $(a\otimes b)(c\otimes d)$ to be $ac\otimes bd$ (see Tensor product of algebras). This formula is multilinear over N in each variable; and so defines a ring structure on the tensor product, making $K\otimes _{N}L$ into a commutative N-algebra, called the tensor product of fields.
Analysis of the ring structure
The structure of the ring can be analysed by considering all ways of embedding both K and L in some field extension of N. The construction here assumes the common subfield N; but does not assume a priori that K and L are subfields of some field M (thus getting round the caveats about constructing a compositum field). Whenever one embeds K and L in such a field M, say using embeddings α of K and β of L, there results a ring homomorphism γ from $K\otimes _{N}L$ into M defined by:
$\gamma (a\otimes b)=\alpha (a)\,\beta (b).$
The kernel of γ will be a prime ideal of the tensor product; and conversely any prime ideal of the tensor product will give a homomorphism of N-algebras to an integral domain (inside a field of fractions) and so provides embeddings of K and L in some field as extensions of (a copy of) N.
In this way one can analyse the structure of $K\otimes _{N}L$: there may in principle be a non-zero nilradical (intersection of all prime ideals) – and after taking the quotient by that one can speak of the product of all embeddings of K and L in various M, over N.
In case K and L are finite extensions of N, the situation is particularly simple since the tensor product is of finite dimension as an N-algebra (and thus an Artinian ring). One can then say that if R is the radical, one has $(K\otimes _{N}L)/R$ as a direct product of finitely many fields. Each such field is a representative of an equivalence class of (essentially distinct) field embeddings for K and L in some extension M.
Examples
For example, if K is generated over $\mathbb {Q} $ by the cube root of 2, then $K\otimes _{\mathbb {Q} }K$ is the product of (a copy of) K, and a splitting field of
X 3 − 2,
of degree 6 over $\mathbb {Q} $. One can prove this by calculating the dimension of the tensor product over $\mathbb {Q} $ as 9, and observing that the splitting field does contain two (indeed three) copies of K, and is the compositum of two of them. That incidentally shows that R = {0} in this case.
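The factorization underlying this decomposition can be computed directly. A sketch assuming SymPy: over K = Q(∛2), the polynomial X3 − 2 splits off one linear factor, and the remaining quadratic is irreducible over K, corresponding to the factors K and the degree-6 splitting field:

```python
from sympy import Rational, factor, expand
from sympy.abc import x

alpha = 2**Rational(1, 3)   # the real cube root of 2, generating K over Q

# Over K = Q(alpha), X^3 - 2 = (linear) * (irreducible quadratic),
# so K tensor_Q K = K x M with [M : Q] = 6.
f = factor(x**3 - 2, extension=alpha)
print(f)

assert f.is_Mul                        # it really did factor over K
assert expand(f - (x**3 - 2)) == 0     # and the factors multiply back
```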
An example leading to a non-zero nilpotent: let
P(X) = X p − T
with K the field of rational functions in the indeterminate T over the finite field with p elements (see Separable polynomial: the point here is that P is not separable). If L is the field extension K(T 1/p) (the splitting field of P) then L/K is an example of a purely inseparable field extension. In $L\otimes _{K}L$ the element
$T^{1/p}\otimes 1-1\otimes T^{1/p}$
is nilpotent: by taking its pth power one gets 0 by using K-linearity.
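The computation behind this nilpotency is the "freshman's dream" (u − v)p = up − vp in characteristic p. A sketch (plain Python plus SymPy), modelling $L\otimes _{K}L$ by two commuting symbols u, v subject to up = vp = T:

```python
from math import comb
from sympy import symbols, expand

p = 5
u, v = symbols('u v')

# All middle binomial coefficients C(p, k), 0 < k < p, are divisible by p,
# so (u - v)^p = u^p - v^p in characteristic p.
assert all(comb(p, k) % p == 0 for k in range(1, p))

e = expand((u - v)**p)
# Reduce the integer coefficients mod p: only the u^p and v^p terms survive.
survivors = {m for m, c in e.as_coefficients_dict().items() if c % p != 0}
assert survivors == {u**p, v**p}
```

Substituting up = vp = T then kills the surviving terms, which is exactly the statement that the pth power of the displayed element is 0.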
Classical theory of real and complex embeddings
In algebraic number theory, tensor products of fields are (implicitly, often) a basic tool. If K is an extension of $\mathbb {Q} $ of finite degree n, $K\otimes _{\mathbb {Q} }\mathbb {R} $ is always a product of fields isomorphic to $\mathbb {R} $ or $\mathbb {C} $. The totally real number fields are those for which only real fields occur: in general there are r1 real and r2 complex fields, with r1 + 2r2 = n as one sees by counting dimensions. The field factors are in 1–1 correspondence with the real embeddings, and pairs of complex conjugate embeddings, described in the classical literature.
This idea applies also to $K\otimes _{\mathbb {Q} }\mathbb {Q} _{p},$ where $\mathbb {Q} _{p}$ is the field of p-adic numbers. This is a product of finite extensions of $\mathbb {Q} _{p}$, in 1–1 correspondence with the completions of K for extensions of the p-adic metric on $\mathbb {Q} $.
Consequences for Galois theory
This gives a general picture, and indeed a way of developing Galois theory (along lines exploited in Grothendieck's Galois theory). It can be shown that for separable extensions the radical is always {0}; therefore the Galois theory case is the semisimple one, of products of fields alone.
See also
• Extension of scalars—tensor product of a field extension and a vector space over that field
Notes
1. "Linearly-disjoint extensions", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
2. "Cyclotomic field", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
References
• "Compositum of field extensions", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Kempf, George R. (2012) [1995]. "9.2 Decomposition of Tensor Products of Fields". Algebraic Structures. Springer. pp. 85–87. ISBN 978-3-322-80278-1.
• Milne, J.S. (18 March 2017). Algebraic Number Theory (PDF) (v3.07). p. 17.
• Stein, William (2004). "A Brief Introduction to Classical and Adelic Algebraic Number Theory" (PDF). pp. 140–2.
• Zariski, Oscar; Samuel, Pierre (1975) [1958]. Commutative algebra I. Graduate Texts in Mathematics. Vol. 28. Springer-Verlag. ISBN 978-0-387-90089-6. MR 0090581.
External links
• MathOverflow thread on the definition of linear disjointness
Real number
In mathematics, a real number is a number that can be used to measure a continuous one-dimensional quantity such as a distance, duration or temperature. Here, continuous means that pairs of values can have arbitrarily small differences.[lower-alpha 1] Every real number can be almost uniquely represented by an infinite decimal expansion.[lower-alpha 2][1]
For the real numbers used in descriptive set theory, see Baire space (set theory).
The real numbers are fundamental in calculus (and more generally in all mathematics), in particular by their role in the classical definitions of limits, continuity and derivatives.[lower-alpha 3]
The set of real numbers is denoted R or $\mathbb {R} $[2] and is sometimes called "the reals".[3] The adjective real, used in the 17th century by René Descartes, distinguishes real numbers from imaginary numbers such as the square roots of −1.[4]
The real numbers include the rational numbers, such as the integer −5 and the fraction 4 / 3. The rest of the real numbers are called irrational numbers. Some irrational numbers (as well as all the rationals) are a root of a polynomial with integer coefficients, such as the square root √2 = 1.414...; these are called algebraic numbers. There are also real numbers which are not, such as π = 3.1415...; these are called transcendental numbers.[4]
Real numbers can be thought of as all points on a line called the number line or real line, where the points corresponding to integers (..., −2, −1, 0, 1, 2, ...) are equally spaced.
Conversely, analytic geometry is the association of points on lines (especially axis lines) to real numbers such that geometric displacements are proportional to differences between corresponding numbers.
The informal descriptions above of the real numbers are not sufficient for ensuring the correctness of proofs of theorems involving real numbers. The realization that a better definition was needed, and the elaboration of such a definition, was a major development of 19th-century mathematics and is the foundation of real analysis, the study of real functions and real-valued sequences. A current axiomatic definition is that real numbers form the unique (up to an isomorphism) Dedekind-complete ordered field.[lower-alpha 4] Other common definitions of real numbers include equivalence classes of Cauchy sequences (of rational numbers), Dedekind cuts, and infinite decimal representations. All these definitions satisfy the axiomatic definition and are thus equivalent.
Characterizing properties
Real numbers are completely characterized by their fundamental properties that can be summarized by saying that they form an ordered field that is Dedekind complete. Here, "completely characterized" means that there is a unique isomorphism between any two Dedekind complete ordered fields, and thus that their elements have exactly the same properties. This implies that one can manipulate real numbers and compute with them, without knowing how they can be defined; this is what mathematicians and physicists did during several centuries before the first formal definitions were provided in the second half of the 19th century. See Construction of the real numbers for details about these formal definitions and the proof of their equivalence.
Arithmetic
The real numbers form an ordered field. Intuitively, this means that methods and rules of elementary arithmetic apply to them. More precisely, there are two binary operations, addition and multiplication, and a total order that have the following properties.
• The addition of two real numbers a and b produces a real number denoted $a+b,$ which is the sum of a and b.
• The multiplication of two real numbers a and b produces a real number denoted $ab,$ $a\cdot b$ or $a\times b,$ which is the product of a and b.
• Addition and multiplication are both commutative, which means that $a+b=b+a$ and $ab=ba$ for all real numbers a and b.
• Addition and multiplication are both associative, which means that $(a+b)+c=a+(b+c)$ and $(ab)c=a(bc)$ for all real numbers a, b and c, and that parentheses may be omitted in both cases.
• Multiplication is distributive over addition, which means that $a(b+c)=ab+ac$ for all real numbers a, b and c.
• There is a real number called zero and denoted 0 which is an additive identity, which means that $a+0=a$ for every real number a.
• There is a real number denoted 1 which is a multiplicative identity, which means that $a\times 1=a$ for every real number a.
• Every real number a has an additive inverse denoted $-a.$ This means that $a+(-a)=0$ for every real number a.
• Every nonzero real number a has a multiplicative inverse denoted $a^{-1}$ or ${\frac {1}{a}}.$ This means that $aa^{-1}=1$ for every nonzero real number a.
• The total order is denoted $a<b.$ Being a total order means two properties: given two real numbers a and b, exactly one of $a<b,$ $a=b$ or $b<a$ is true; and if $a<b$ and $b<c,$ then one has also $a<c.$
• The order is compatible with addition and multiplication, which means that $a<b$ implies $a+c<b+c$ for every real number c, and $0<ab$ is implied by $0<a$ and $0<b.$
Many other properties can be deduced from the above ones. In particular:
• $0\cdot a=0$ for every real number a
• $0<1$
• $0<a^{2}$ for every nonzero real number a
Auxiliary operations
Several other operations are commonly used, which can be deduced from the above ones.
• Subtraction: the subtraction of two real numbers a and b results in the sum of a and the additive inverse −b of b; that is,
$a-b=a+(-b).$
• Division: the division of a real number a by a nonzero real number b is denoted $ {\frac {a}{b}},$ or $a/b$ and defined as the multiplication of a with the multiplicative inverse of b; that is,
${\frac {a}{b}}=ab^{-1}.$
• Absolute value: the absolute value of a real number a, denoted $|a|,$ measures its distance from zero, and is defined as
$|a|=\max(a,-a).$
Auxiliary order relations
The total order that is considered above is denoted $a<b$ and read as "a is less than b". Three other order relations are also commonly used:
• Greater than: $a>b,$ read as "a is greater than b", is defined as $a>b$ if and only if $b<a.$
• Less than or equal to: $a\leq b,$ read as "a is less than or equal to b" or "a is not greater than b", is defined as $(a<b){\text{ or }}(a=b),$ or equivalently as ${\text{not }}(b<a).$
• Greater than or equal to: $a\geq b,$ read as "a is greater than or equal to b" or "a is not less than b", is defined as $(b<a){\text{ or }}(a=b),$ or equivalently as ${\text{not }}(a<b).$
Integers and fractions as real numbers
The real numbers 0 and 1 are commonly identified with the natural numbers 0 and 1. This allows identifying any natural number n with the sum of n real numbers equal to 1.
This identification can be pursued by identifying a negative integer $-n$ (where $n$ is a natural number) with the additive inverse $-n$ of the real number identified with $n.$ Similarly a rational number $p/q$ (where p and q are integers and $q\neq 0$) is identified with the division of the real numbers identified with p and q.
These identifications make the set $\mathbb {Q} $ of the rational numbers an ordered subfield of the real numbers $\mathbb {R} .$ The Dedekind completeness described below implies that some real numbers, such as ${\sqrt {2}},$ are not rational numbers; they are called irrational numbers.
The above identifications make sense, since natural numbers, integers and real numbers are generally not defined by their individual nature, but by defining properties (axioms). So, the identification of natural numbers with some real numbers is justified by the fact that Peano axioms are satisfied by these real numbers, with the addition with 1 taken as the successor function.
Formally, one has an injective homomorphism of ordered monoids from the natural numbers $\mathbb {N} $ to the integers $\mathbb {Z} ,$ an injective homomorphism of ordered rings from $\mathbb {Z} $ to the rational numbers $\mathbb {Q} ,$ and an injective homomorphism of ordered fields from $\mathbb {Q} $ to the real numbers $\mathbb {R} .$ The identifications consist of not distinguishing the source and the image of each injective homomorphism, and thus of writing
$\mathbb {N} \subset \mathbb {Z} \subset \mathbb {Q} \subset \mathbb {R} .$
These identifications are formally abuses of notation, and are generally harmless. It is only in very specific situations that one must avoid them and replace them by using the above homomorphisms explicitly. This is the case in constructive mathematics and computer programming. In the latter case, these homomorphisms are interpreted as type conversions that can often be done automatically by the compiler.
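In a programming language the chain of identifications looks like ordinary numeric conversions. A Python sketch, with Fraction standing in for $\mathbb {Q} $ and float as a mere approximation of $\mathbb {R} $ (so the last step is not an exact homomorphism):

```python
from fractions import Fraction

n = 7                      # a natural number
z = -n                     # an integer
q = Fraction(z, 3)         # a rational, -7/3
r = float(q)               # an (approximate) real

# Arithmetic commutes with these conversions, as for ring homomorphisms:
assert Fraction(n) + Fraction(2) == Fraction(n + 2)
# Python inserts the conversions automatically in mixed expressions,
# much like the compiler-driven type conversions mentioned above:
assert n + Fraction(1, 3) == Fraction(22, 3)
```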
Dedekind completeness
Previous properties do not distinguish real numbers from rational numbers. This distinction is provided by Dedekind completeness, which states that every set of real numbers with an upper bound admits a least upper bound. This means the following. A set of real numbers $S$ is bounded above if there is a real number $u$ such that $s\leq u$ for all $s\in S$; such a $u$ is called an upper bound of $S.$ So, Dedekind completeness means that, if S is bounded above, it has a least upper bound, that is, an upper bound that is less than or equal to every other upper bound.
Dedekind completeness implies other sorts of completeness (see below), but also has some important consequences.
• Archimedean property: for every real number x, there is an integer n such that $x<n$ (take $n=u+1,$ where $u$ is the least upper bound of the set of integers not greater than x).
• Equivalently, if x is a positive real number, there is a positive integer n such that $0<{\frac {1}{n}}<x$.
• Every positive real number x has a positive square root, that is, there exists a positive real number $r$ such that $r^{2}=x.$
• Every univariate polynomial of odd degree with real coefficients has at least one real root (if the leading coefficient is positive, take the least upper bound of real numbers for which the value of the polynomial is negative).
The last two properties are summarized by saying that the real numbers form a real closed field. This implies the real version of the fundamental theorem of algebra, namely that every polynomial with real coefficients can be factored into polynomials with real coefficients of degree at most two.
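The least-upper-bound argument for odd-degree polynomials translates into a bisection procedure. A sketch (the function name odd_poly_root is illustrative, and bisection serves here as a constructive stand-in for taking the supremum of {x : p(x) < 0}):

```python
def odd_poly_root(coeffs, lo=-1.0, hi=1.0, iters=200):
    """Locate a real root of an odd-degree polynomial with positive
    leading coefficient, by bisection on a sign-changing bracket."""
    def p(x):
        acc = 0.0
        for c in coeffs:          # coeffs from highest degree to constant
            acc = acc * x + c
        return acc
    # Widen the bracket until the sign change guaranteed by odd degree appears.
    while p(lo) >= 0:
        lo *= 2
    while p(hi) <= 0:
        hi *= 2
    for _ in range(iters):
        mid = (lo + hi) / 2
        if p(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r = odd_poly_root([1, 0, 0, -2])   # x^3 - 2
assert abs(r**3 - 2) < 1e-9
```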
Decimal representation
A key property of real numbers is their decimal representation. A decimal representation consists of a nonnegative integer k and an infinite sequence of decimal digits (nonnegative integers less than 10)
$b_{k},b_{k-1},\ldots ,b_{0},a_{1},a_{2},\ldots ,$
that is written
$b_{k}b_{k-1}\cdots b_{0}.a_{1}a_{2}\cdots .$
(Commonly, one supposes, without loss of generality, that either $k=0$ or $b_{k}\neq 0.$) For example, for $3.14159\cdots ,$ one has $k=0,$ $b_{0}=3,$ $a_{1}=1,$ $a_{2}=4,$ etc.
Such a decimal representation specifies a unique nonnegative real number as the least upper bound of the decimal fractions that are obtained by truncating the sequence. More precisely, given a positive integer n, the truncation of the sequence at the place n is the finite sequence $b_{k},b_{k-1},\ldots ,b_{0},a_{1},a_{2},\ldots ,a_{n},$ which defines the decimal number
${\begin{aligned}D_{n}&=b_{k}\cdots b_{0}.a_{1}\cdots a_{n}\\&=b_{k}10^{k}+b_{k-1}10^{k-1}+\cdots +b_{0}+{\frac {a_{1}}{10}}+\cdots +{\frac {a_{n}}{10^{n}}}\\&=\sum _{i=0}^{k}b_{i}10^{i}+\sum _{j=1}^{n}{\frac {a_{j}}{10^{j}}}\end{aligned}}$
The real number defined by the sequence is the least upper bound of the $D_{n},$ which exists by Dedekind completeness.
Conversely, given a nonnegative real number a, one can define a decimal representation of a by induction, as follows. Define $b_{k}\cdots b_{0}$ as decimal representation of the largest integer $D_{0}$ such that $D_{0}\leq a$ (this integer exists because of the Archimedean property). Then, supposing by induction that the decimal fraction $D_{i}$ has been defined for $i<n,$ one defines $a_{n}$ as the largest digit such that $D_{n-1}+a_{n}/10^{n}\leq a,$ and one sets $D_{n}=D_{n-1}+a_{n}/10^{n}.$
One can use the defining properties of the real numbers to show that a is the least upper bound of the $D_{n}.$ So, the resulting sequence of digits is called a decimal representation of a.
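The greedy construction of the digits $a_{n}$ can be carried out exactly with rational arithmetic whenever one can decide $x\leq a$. A sketch for a = √2, using the predicate $x^{2}\leq 2$ (the function name is illustrative):

```python
from fractions import Fraction

def digits_of_sqrt2(n):
    """First n decimal digits of sqrt(2) by the greedy construction:
    a_k is the largest digit with (D_{k-1} + a_k/10^k)^2 <= 2."""
    D = Fraction(1)          # D_0 = 1, the largest integer with D^2 <= 2
    out = []
    for k in range(1, n + 1):
        step = Fraction(1, 10**k)
        d = max(d for d in range(10) if (D + d * step) ** 2 <= 2)
        D += d * step
        out.append(d)
    return out

assert digits_of_sqrt2(5) == [4, 1, 4, 2, 1]   # sqrt(2) = 1.41421...
```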
Another decimal representation can be obtained by replacing $\leq a$ with $<a$ in the preceding construction. These two representations are identical, unless a is a decimal fraction of the form $ {\frac {m}{10^{h}}}.$ In this case, in the first decimal representation, all $a_{n}$ are zero for $n>h,$ and, in the second representation, all $a_{n}$ with $n>h$ are 9 (see 0.999... for details).
In summary, there is a bijection between the real numbers and the decimal representations that do not end with infinitely many trailing 9s.
The preceding considerations apply directly for every numeral base $B\geq 2,$ simply by replacing 10 with $B$ and 9 with $B-1.$
Topological completeness
Main article: Completeness of the real numbers
A main reason for using real numbers is so that many sequences have limits. More formally, the reals are complete (in the sense of metric spaces or uniform spaces, which is a different sense than the Dedekind completeness of the order in the previous section):
A sequence (xn) of real numbers is called a Cauchy sequence if for any ε > 0 there exists an integer N (possibly depending on ε) such that the distance |xn − xm| is less than ε for all n and m that are both greater than N. This definition, originally provided by Cauchy, formalizes the fact that the xn eventually come and remain arbitrarily close to each other.
A sequence (xn) converges to the limit x if its elements eventually come and remain arbitrarily close to x, that is, if for any ε > 0 there exists an integer N (possibly depending on ε) such that the distance |xn − x| is less than ε for n greater than N.
Every convergent sequence is a Cauchy sequence, and for the real numbers the converse is also true: this means that the topological space of the real numbers is complete.
The set of rational numbers is not complete. For example, the sequence (1; 1.4; 1.41; 1.414; 1.4142; 1.41421; ...), where each term adds a digit of the decimal expansion of the positive square root of 2, is Cauchy but it does not converge to a rational number (in the real numbers, in contrast, it converges to the positive square root of 2).
The completeness property of the reals is the basis on which calculus and, more generally, mathematical analysis are built. In particular, verifying that a sequence is a Cauchy sequence allows proving that it has a limit, without computing it, and even without knowing it.
For example, the standard series of the exponential function
$e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}$
converges to a real number for every x, because the sums
$\sum _{n=N}^{M}{\frac {x^{n}}{n!}}$
can be made arbitrarily small (independently of M) by choosing N sufficiently large. This proves that the sequence of partial sums is Cauchy, and thus converges, showing that $e^{x}$ is well defined for every x.
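A numerical sketch of this tail estimate (the helper name is hypothetical): once N is large, adding further terms moves the partial sums only negligibly, so they stabilize at exp(x).

```python
from math import exp, factorial

def exp_partial(x, n_terms):
    """Partial sum sum_{n=0}^{n_terms-1} x**n / n! of the exponential series."""
    return sum(x ** n / factorial(n) for n in range(n_terms))

x = 3.0
# The tail sum_{n=20}^{29} x**n/n! is already tiny, independently of how far
# the upper limit is pushed, which is exactly the Cauchy criterion.
tail = abs(exp_partial(x, 30) - exp_partial(x, 20))
print(tail)                               # well below 1e-8
print(abs(exp_partial(x, 30) - exp(x)))   # the limit agrees with exp(x)
```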
"The complete ordered field"
The real numbers are often described as "the complete ordered field", a phrase that can be interpreted in several ways.
First, an order can be lattice-complete. It is easy to see that no ordered field can be lattice-complete, because it can have no largest element (given any element z, z + 1 is larger).
Additionally, an order can be Dedekind-complete, see § Axiomatic approach. The uniqueness result at the end of that section justifies using the word "the" in the phrase "complete ordered field" when this is the sense of "complete" that is meant. This sense of completeness is most closely related to the construction of the reals from Dedekind cuts, since that construction starts from an ordered field (the rationals) and then forms the Dedekind-completion of it in a standard way.
These two notions of completeness ignore the field structure. However, an ordered group (in this case, the additive group of the field) defines a uniform structure, and uniform structures have a notion of completeness; the description in § Completeness is a special case. (We refer to the notion of completeness in uniform spaces rather than the related and better known notion for metric spaces, since the definition of metric space relies on already having a characterization of the real numbers.) It is not true that $\mathbb {R} $ is the only uniformly complete ordered field, but it is the only uniformly complete Archimedean field, and indeed one often hears the phrase "complete Archimedean field" instead of "complete ordered field". Every uniformly complete Archimedean field must also be Dedekind-complete (and vice versa), justifying using "the" in the phrase "the complete Archimedean field". This sense of completeness is most closely related to the construction of the reals from Cauchy sequences (the construction carried out in full in this article), since it starts with an Archimedean field (the rationals) and forms the uniform completion of it in a standard way.
But the original use of the phrase "complete Archimedean field" was by David Hilbert, who meant still something else by it. He meant that the real numbers form the largest Archimedean field in the sense that every other Archimedean field is a subfield of $\mathbb {R} $. Thus $\mathbb {R} $ is "complete" in the sense that nothing further can be added to it without making it no longer an Archimedean field. This sense of completeness is most closely related to the construction of the reals from surreal numbers, since that construction starts with a proper class that contains every ordered field (the surreals) and then selects from it the largest Archimedean subfield.
Cardinality
The set of all real numbers is uncountable, in the sense that while both the set of all natural numbers {1, 2, 3, 4, ...} and the set of all real numbers are infinite sets, there can be no one-to-one function from the real numbers to the natural numbers. The cardinality of the set of all real numbers is denoted by ${\mathfrak {c}}$ and called the cardinality of the continuum. It is strictly greater than the cardinality of the set of all natural numbers (denoted $\aleph _{0}$ and called 'aleph-naught'), and equals the cardinality of the power set of the set of the natural numbers.
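Cantor's diagonal argument behind this fact can be sketched on any finite prefix of a purported enumeration: build a digit sequence that differs from the k-th listed sequence in its k-th digit (names hypothetical; digits are kept in {1, 2} to sidestep the 0.999... dual-representation issue).

```python
def diagonal_escape(rows):
    """Given digit sequences rows[0..k-1], return a sequence whose k-th digit
    differs from rows[k][k]; it therefore appears nowhere in the list."""
    return [1 if row[k] != 1 else 2 for k, row in enumerate(rows)]

rows = [[0, 0, 0], [3, 1, 4], [9, 9, 9]]
d = diagonal_escape(rows)
print(d)   # [1, 2, 1] — differs from rows[k] at position k, for every k
assert all(d[k] != rows[k][k] for k in range(len(rows)))
```

Since this works for every enumeration, no list indexed by the natural numbers can exhaust the real numbers.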
The statement that there is no subset of the reals with cardinality strictly greater than $\aleph _{0}$ and strictly smaller than ${\mathfrak {c}}$ is known as the continuum hypothesis (CH). It is neither provable nor refutable using the axioms of Zermelo–Fraenkel set theory including the axiom of choice (ZFC)—the standard foundation of modern mathematics. In fact, some models of ZFC satisfy CH, while others violate it.[5]
Other properties
See also: Real line
As a topological space, the real numbers are separable. This is because the set of rationals, which is countable, is dense in the real numbers. The irrational numbers are also dense in the real numbers; however, they are uncountable and have the same cardinality as the reals.
The real numbers form a metric space: the distance between x and y is defined as the absolute value |x − y|. By virtue of being a totally ordered set, they also carry an order topology; the topology arising from the metric and the one arising from the order are identical, but yield different presentations for the topology—in the order topology as ordered intervals, in the metric topology as epsilon-balls. The Dedekind cuts construction uses the order topology presentation, while the Cauchy sequences construction uses the metric topology presentation. The reals form a contractible (hence connected and simply connected), separable and complete metric space of Hausdorff dimension 1. The real numbers are locally compact but not compact. There are various properties that uniquely specify them; for instance, all unbounded, connected, and separable order topologies are necessarily homeomorphic to the reals.
Every nonnegative real number has a square root in $\mathbb {R} $, although no negative number does. This shows that the order on $\mathbb {R} $ is determined by its algebraic structure. Also, every polynomial of odd degree admits at least one real root: these two properties make $\mathbb {R} $ the premier example of a real closed field. Proving this is the first half of one proof of the fundamental theorem of algebra.
The reals carry a canonical measure, the Lebesgue measure, which is the Haar measure on their structure as a topological group normalized such that the unit interval [0;1] has measure 1. There exist sets of real numbers that are not Lebesgue measurable, e.g. Vitali sets.
The supremum axiom of the reals refers to subsets of the reals and is therefore a second-order logical statement. It is not possible to characterize the reals with first-order logic alone: the Löwenheim–Skolem theorem implies that there exists a countable dense subset of the real numbers satisfying exactly the same sentences in first-order logic as the real numbers themselves. The set of hyperreal numbers satisfies the same first order sentences as $\mathbb {R} $. Ordered fields that satisfy the same first-order sentences as $\mathbb {R} $ are called nonstandard models of $\mathbb {R} $. This is what makes nonstandard analysis work; by proving a first-order statement in some nonstandard model (which may be easier than proving it in $\mathbb {R} $), we know that the same statement must also be true of $\mathbb {R} $.
The field $\mathbb {R} $ of real numbers is an extension field of the field $\mathbb {Q} $ of rational numbers, and $\mathbb {R} $ can therefore be seen as a vector space over $\mathbb {Q} $. Zermelo–Fraenkel set theory with the axiom of choice guarantees the existence of a basis of this vector space: there exists a set B of real numbers such that every real number can be written uniquely as a finite linear combination of elements of this set, using rational coefficients only, and such that no element of B is a rational linear combination of the others. However, this existence theorem is purely theoretical, as such a base has never been explicitly described.
The well-ordering theorem implies that the real numbers can be well-ordered if the axiom of choice is assumed: there exists a total order on $\mathbb {R} $ with the property that every nonempty subset of $\mathbb {R} $ has a least element in this ordering. (The standard ordering ≤ of the real numbers is not a well-ordering since e.g. an open interval does not contain a least element in this ordering.) Again, the existence of such a well-ordering is purely theoretical, as it has not been explicitly described. If V=L is assumed in addition to the axioms of ZF, a well ordering of the real numbers can be shown to be explicitly definable by a formula.[6]
A real number may be either computable or uncomputable; either algorithmically random or not; and either arithmetically random or not.
History
Simple fractions were used by the Egyptians around 1000 BC; the Vedic "Shulba Sutras" ("The rules of chords") in c. 600 BC include what may be the first "use" of irrational numbers. The concept of irrationality was implicitly accepted by early Indian mathematicians such as Manava (c. 750–690 BC), who was aware that the square roots of certain numbers, such as 2 and 61, could not be exactly determined.[7] Around 500 BC, the Greek mathematicians led by Pythagoras also realized that the square root of 2 is irrational.
The Middle Ages brought about the acceptance of zero, negative numbers, integers, and fractional numbers, first by Indian and Chinese mathematicians, and then by Arabic mathematicians, who were also the first to treat irrational numbers as algebraic objects (the latter being made possible by the development of algebra).[8] Arabic mathematicians merged the concepts of "number" and "magnitude" into a more general idea of real numbers.[9] The Egyptian mathematician Abū Kāmil Shujā ibn Aslam (c. 850–930) was the first to accept irrational numbers as solutions to quadratic equations, or as coefficients in an equation (often in the form of square roots, cube roots and fourth roots).[10] In Europe, such numbers, not commensurable with the numerical unit, were called irrational or surd ("deaf").
In the 16th century, Simon Stevin created the basis for modern decimal notation, and insisted that there is no difference between rational and irrational numbers in this regard.
In the 17th century, Descartes introduced the term "real" to describe roots of a polynomial, distinguishing them from "imaginary" ones.
In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Lambert (1761) gave a flawed proof that π cannot be rational; Legendre (1794) completed the proof[11] and showed that π is not the square root of a rational number.[12] Liouville (1840) showed that neither e nor e2 can be a root of an integer quadratic equation, and then established the existence of transcendental numbers; Cantor (1873) extended and greatly simplified this proof.[13] Hermite (1873) proved that e is transcendental, and Lindemann (1882) showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass (1885), Hilbert (1893), Hurwitz,[14] and Gordan.[15]
The developers of calculus used real numbers without having defined them rigorously. The first rigorous definition was published by Cantor in 1871. In 1874, he showed that the set of all real numbers is uncountably infinite, but the set of all algebraic numbers is countably infinite. Cantor's first uncountability proof was different from his famous diagonal argument published in 1891.
Formal definitions
Main article: Construction of the real numbers
The real number system $(\mathbb {R} ;{}+{};{}\cdot {};{}<{})$ can be defined axiomatically up to an isomorphism, which is described hereinafter. There are also many ways to construct "the" real number system, and a popular approach involves starting from natural numbers, then defining rational numbers algebraically, and finally defining real numbers as equivalence classes of their Cauchy sequences or as Dedekind cuts, which are certain subsets of rational numbers.[16] Another approach is to start from some rigorous axiomatization of Euclidean geometry (say of Hilbert or of Tarski), and then define the real number system geometrically. All these constructions of the real numbers have been shown to be equivalent, in the sense that the resulting number systems are isomorphic.
Axiomatic approach
Let $\mathbb {R} $ denote the set of all real numbers. Then:
• The set $\mathbb {R} $ is a field, meaning that addition and multiplication are defined and have the usual properties.
• The field $\mathbb {R} $ is ordered, meaning that there is a total order ≥ such that for all real numbers x, y and z:
• if x ≥ y, then x + z ≥ y + z;
• if x ≥ 0 and y ≥ 0, then xy ≥ 0.
• The order is Dedekind-complete, meaning that every nonempty subset S of $\mathbb {R} $ with an upper bound in $\mathbb {R} $ has a least upper bound (a.k.a., supremum) in $\mathbb {R} $.
The last property is what differentiates the real numbers from the rational numbers (and from other more exotic ordered fields). For example, $\{x\in \mathbb {Q} :x^{2}<2\}$ has a rational upper bound (e.g., 1.42), but no least rational upper bound, because ${\sqrt {2}}$ is not rational.
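A small sketch of this gap: bisecting with exact rationals traps the least upper bound $\sqrt{2}$ ever more tightly, yet every rational bound produced stays strictly on one side of it.

```python
from fractions import Fraction

# Bisection for sup {x in Q : x**2 < 2}. Invariant: lo**2 < 2 < hi**2, so hi
# is always a rational upper bound, and lo witnesses that no smaller rational
# upper bound below hi - (hi - lo) exists. The interval halves each step, but
# its endpoints never hit the supremum, which is irrational.
lo, hi = Fraction(1), Fraction(2)
for _ in range(30):
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid
    else:
        hi = mid          # mid**2 == 2 is impossible for a rational mid

print(float(lo), float(hi))   # both approach 1.41421356...
assert lo * lo < 2 < hi * hi
```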
These properties imply the Archimedean property (which is not implied by other definitions of completeness), which states that the set of integers has no upper bound in the reals. Indeed, if this were false, the integers would have a least upper bound N; then N − 1 would not be an upper bound, so there would be an integer n with n > N − 1, hence n + 1 > N, contradicting the fact that N is an upper bound.
The real numbers are uniquely specified by the above properties. More precisely, given any two Dedekind-complete ordered fields $\mathbb {R} _{1}$ and $\mathbb {R} _{2}$, there exists a unique field isomorphism from $\mathbb {R} _{1}$ to $\mathbb {R} _{2}$. This uniqueness allows us to think of them as essentially the same mathematical object.
For another axiomatization of $\mathbb {R} $, see Tarski's axiomatization of the reals.
Construction from the rational numbers
The real numbers can be constructed as a completion of the rational numbers, in such a way that a sequence defined by a decimal or binary expansion like (3; 3.1; 3.14; 3.141; 3.1415; ...) converges to a unique real number—in this case π. For details and other constructions of real numbers, see construction of the real numbers.
Applications and connections
Physics
In the physical sciences, most physical constants such as the universal gravitational constant, and physical variables, such as position, mass, speed, and electric charge, are modeled using real numbers. In fact, the fundamental physical theories such as classical mechanics, electromagnetism, quantum mechanics, general relativity and the standard model are described using mathematical structures, typically smooth manifolds or Hilbert spaces, that are based on the real numbers, although actual measurements of physical quantities are of finite accuracy and precision.
Physicists have occasionally suggested that a more fundamental theory would replace the real numbers with quantities that do not form a continuum, but such proposals remain speculative.[17]
Logic
The real numbers are most often formalized using the Zermelo–Fraenkel axiomatization of set theory, but some mathematicians study the real numbers with other logical foundations of mathematics. In particular, the real numbers are also studied in reverse mathematics and in constructive mathematics.[18]
The hyperreal numbers as developed by Edwin Hewitt, Abraham Robinson and others extend the set of the real numbers by introducing infinitesimal and infinite numbers, allowing for building infinitesimal calculus in a way closer to the original intuitions of Leibniz, Euler, Cauchy and others.
Edward Nelson's internal set theory enriches the Zermelo–Fraenkel set theory syntactically by introducing a unary predicate "standard". In this approach, infinitesimals are (non-"standard") elements of the set of the real numbers (rather than being elements of an extension thereof, as in Robinson's theory).
The continuum hypothesis posits that the cardinality of the set of the real numbers is $\aleph _{1}$; i.e. the smallest infinite cardinal number after $\aleph _{0}$, the cardinality of the integers. Paul Cohen proved in 1963 that it is an axiom independent of the other axioms of set theory; that is: one may choose either the continuum hypothesis or its negation as an axiom of set theory, without contradiction.
Computation
Electronic calculators and computers cannot operate on arbitrary real numbers, because finite computers cannot directly store infinitely many digits or other infinite representations. Nor do they usually even operate on arbitrary definable real numbers, which are inconvenient to manipulate.
Instead, computers typically work with finite-precision approximations called floating-point numbers, a representation similar to scientific notation. The achievable precision is limited by the data storage space allocated for each number, whether as fixed-point, floating-point, or arbitrary-precision numbers, or some other representation. Most scientific computation uses binary floating-point arithmetic, often a 64-bit representation with around 16 decimal digits of precision. Real numbers satisfy the usual rules of arithmetic, but floating-point numbers do not. The field of numerical analysis studies the stability and accuracy of numerical algorithms implemented with approximate arithmetic.
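A few standard examples of real-number identities failing in 64-bit binary floating point (Python's float):

```python
import sys

# Identities that hold exactly for real numbers but fail for floats:
print(0.1 + 0.2 == 0.3)                           # False
print(0.1 + 0.2)                                  # 0.30000000000000004
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))     # False: addition is not associative

# How many decimal digits the format reliably preserves:
print(sys.float_info.dig)                         # 15
```

Such discrepancies are exactly what numerical analysis studies: the errors are tiny per operation, but algorithms must be designed so they do not accumulate catastrophically.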
Alternately, computer algebra systems can operate on irrational quantities exactly by manipulating symbolic formulas for them (such as $ {\sqrt {2}},$ $ \arctan 5,$ or $ \int _{0}^{1}x^{x}\,dx$) rather than their rational or decimal approximation.[19] But exact and symbolic arithmetic also have limitations: for instance, they are computationally more expensive; it is not in general possible to determine whether two symbolic expressions are equal (the constant problem); and arithmetic operations can cause exponential explosion in the size of representation of a single number (for instance, squaring a rational number roughly doubles the number of digits in its numerator and denominator, and squaring a polynomial roughly doubles its number of terms), overwhelming finite computer storage.[20]
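The digit-growth effect for exact rationals can be observed directly (a sketch; the starting value is illustrative):

```python
from fractions import Fraction

# Squaring an exact rational roughly doubles the number of digits in its
# numerator and denominator, so repeated squaring grows near-exponentially.
q = Fraction(10, 7)
sizes = []
for _ in range(6):
    sizes.append(len(str(q.numerator)) + len(str(q.denominator)))
    q = q * q
print(sizes)   # [3, 5, 9, 16, 31, 61]
```

Six squarings already need about 20 times the storage of the original number, which is why purely symbolic pipelines can overwhelm memory on inputs that floating point handles trivially.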
A real number is called computable if there exists an algorithm that yields its digits. Because there are only countably many algorithms,[21] but an uncountable number of reals, almost all real numbers fail to be computable. Moreover, the equality of two computable numbers is an undecidable problem. Some constructivists accept the existence of only those reals that are computable. The set of definable numbers is broader, but still only countable.
Set theory
In set theory, specifically descriptive set theory, the Baire space is used as a surrogate for the real numbers since the latter have some topological properties (connectedness) that are a technical inconvenience. Elements of Baire space are referred to as "reals".
Vocabulary and notation
The set of all real numbers is denoted $\mathbb {R} $ (blackboard bold) or R (upright bold). As it is naturally endowed with the structure of a field, the expression field of real numbers is frequently used when its algebraic properties are under consideration.
The sets of positive real numbers and negative real numbers are often noted $\mathbb {R} ^{+}$ and $\mathbb {R} ^{-}$,[22] respectively; $\mathbb {R} _{+}$ and $\mathbb {R} _{-}$ are also used.[23] The non-negative real numbers can be noted $\mathbb {R} _{\geq 0}$ but one often sees this set noted $\mathbb {R} ^{+}\cup \{0\}.$[22] In French mathematics, the positive real numbers and negative real numbers commonly include zero, and these sets are noted respectively $\mathbb {R} _{+}$ and $\mathbb {R} _{-}.$[23] In this understanding, the respective sets without zero are called strictly positive real numbers and strictly negative real numbers, and are noted $\mathbb {R} _{+}^{*}$ and $\mathbb {R} _{-}^{*}.$[23]
The notation $\mathbb {R} ^{n}$ refers to the set of the n-tuples of elements of $\mathbb {R} $ (real coordinate space), which can be identified to the Cartesian product of n copies of $\mathbb {R} .$ It is an n-dimensional vector space over the field of the real numbers, often called the coordinate space of dimension n; this space may be identified to the n-dimensional Euclidean space as soon as a Cartesian coordinate system has been chosen in the latter. In this identification, a point of the Euclidean space is identified with the tuple of its Cartesian coordinates.
In mathematics, real is used as an adjective, meaning that the underlying field is the field of the real numbers (or the real field). For example, real matrix, real polynomial and real Lie algebra. The word is also used as a noun, meaning a real number (as in "the set of all reals").
Generalizations and extensions
The real numbers can be generalized and extended in several different directions:
• The complex numbers contain solutions to all polynomial equations and hence are an algebraically closed field unlike the real numbers. However, the complex numbers are not an ordered field.
• The affinely extended real number system adds two elements +∞ and −∞. It is a compact space. It is no longer a field, or even an additive group, but it still has a total order; moreover, it is a complete lattice.
• The real projective line adds only one value ∞. It is also a compact space. Again, it is no longer a field, or even an additive group. However, it allows division of a nonzero element by zero. It has cyclic order described by a separation relation.
• The long real line pastes together ℵ1* + ℵ1 copies of the real line plus a single point (here ℵ1* denotes the reversed ordering of ℵ1) to create an ordered set that is "locally" identical to the real numbers, but somehow longer; for instance, there is an order-preserving embedding of ℵ1 in the long real line but not in the real numbers. The long real line is the largest ordered set that is complete and locally Archimedean. As with the previous two examples, this set is no longer a field or additive group.
• Ordered fields extending the reals are the hyperreal numbers and the surreal numbers; both of them contain infinitesimal and infinitely large numbers and are therefore non-Archimedean ordered fields.
• Self-adjoint operators on a Hilbert space (for example, self-adjoint square complex matrices) generalize the reals in many respects: they can be ordered (though not totally ordered), they are complete, all their eigenvalues are real and they form a real associative algebra. Positive-definite operators correspond to the positive reals and normal operators correspond to the complex numbers.
See also
• Completeness of the real numbers
• Continued fraction
• Definable real numbers
• Positive real numbers
• Real analysis
Notes
1. This is not sufficient for distinguishing the real numbers from the rational numbers; a property of completeness is also required.
2. The terminating rational numbers may have two decimal expansions (see 0.999...); the other real numbers have exactly one decimal expansion.
3. Limits and continuity can be defined in general topology without reference to real numbers, but these generalizations are relatively recent, and used only in very specific cases.
4. More precisely, given two complete totally ordered fields, there is a unique isomorphism between them. This implies that the identity is the unique field automorphism of the reals that is compatible with the ordering.
References
Citations
1. "Real number". Oxford Reference. 2011-08-03.
2. Weisstein, Eric W. "Real Number". mathworld.wolfram.com. Retrieved 2020-08-11.
3. "real". Oxford English Dictionary (3rd ed.). 2008. 'real', n.2, B.4. Mathematics. A real number. Usually in plural
4. "Real number". Encyclopedia Britannica.
5. Koellner, Peter (2013). "The Continuum Hypothesis". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy. Stanford University.
6. Moschovakis, Yiannis N. (1980), "5. The Constructible Universe", Descriptive Set Theory, North-Holland, pp. 274–285, ISBN 978-0-444-85305-9
7. T. K. Puttaswamy, "The Accomplishments of Ancient Indian Mathematicians", pp. 410–11. In: Selin, Helaine; D'Ambrosio, Ubiratan, eds. (2000), Mathematics Across Cultures: The History of Non-western Mathematics, Springer, ISBN 978-1-4020-0260-1.
8. O'Connor, John J.; Robertson, Edmund F. (1999), "Arabic mathematics: forgotten brilliance?", MacTutor History of Mathematics Archive, University of St Andrews
9. Matvievskaya, Galina (1987), "The Theory of Quadratic Irrationals in Medieval Oriental Mathematics", Annals of the New York Academy of Sciences, 500 (1): 253–77 [254], Bibcode:1987NYASA.500..253M, doi:10.1111/j.1749-6632.1987.tb37206.x, S2CID 121416910
10. Jacques Sesiano, "Islamic mathematics", p. 148, in Selin, Helaine; D'Ambrosio, Ubiratan (2000), Mathematics Across Cultures: The History of Non-western Mathematics, Springer, ISBN 978-1-4020-0260-1
11. Beckmann, Petr (1971). A History of π (PI). St. Martin's Press. p. 170. ISBN 9780312381851.
12. Arndt, Jörg; Haenel, Christoph (2001), Pi Unleashed, Springer, p. 192, ISBN 978-3-540-66572-4, retrieved 2015-11-15.
13. Dunham, William (2015), The Calculus Gallery: Masterpieces from Newton to Lebesgue, Princeton University Press, p. 127, ISBN 978-1-4008-6679-3, retrieved 2015-02-17, Cantor found a remarkable shortcut to reach Liouville's conclusion with a fraction of the work
14. Hurwitz, Adolf (1893). "Beweis der Transcendenz der Zahl e". Mathematische Annalen (43): 134–35.
15. Gordan, Paul (1893). "Transcendenz von e und π". Mathematische Annalen. 43 (2–3): 222–224. doi:10.1007/bf01443647. S2CID 123203471.
16. "Lecture #1" (PDF). 18.095 Lecture Series in Mathematics. 2015-01-05.
17. Wheeler, John Archibald (1986). "Hermann Weyl and the Unity of Knowledge: In the linkage of four mysteries—the "how come" of existence, time, the mathematical continuum, and the discontinuous yes-or-no of quantum physics—may lie the key to deep new insight". American Scientist. 74 (4): 366–75. Bibcode:1986AmSci..74..366W. JSTOR 27854250.
Bengtsson, Ingemar (2017). "The Number Behind the Simplest SIC-POVM". Foundations of Physics. 47 (8): 1031–41. arXiv:1611.09087. Bibcode:2017FoPh...47.1031B. doi:10.1007/s10701-017-0078-3. S2CID 118954904.
18. Bishop, Errett; Bridges, Douglas (1985), Constructive analysis, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 279, Berlin, New York: Springer-Verlag, ISBN 978-3-540-15066-4, chapter 2.
19. Cohen, Joel S. (2002), Computer algebra and symbolic computation: elementary algorithms, vol. 1, A K Peters, p. 32, ISBN 978-1-56881-158-1
20. Trefethen, Lloyd N. (2007). "Computing numerically with functions instead of numbers" (PDF). Mathematics in Computer Science. 1 (1): 9–19. doi:10.1007/s11786-007-0001-y.
21. Hein, James L. (2010), "14.1.1", Discrete Structures, Logic, and Computability (3 ed.), Sudbury, MA: Jones and Bartlett Publishers, ISBN 978-0763772062, retrieved 2015-11-15
22. Schumacher, Carol (1996). Chapter Zero: Fundamental Notions of Abstract Mathematics. Addison-Wesley. pp. 114–115. ISBN 9780201826531.
23. École Normale Supérieure of Paris, "Nombres réels" ("Real numbers") Archived 2014-05-08 at the Wayback Machine, p. 6
Sources
• Bos, Henk J.M. (2001). Redefining Geometrical Exactness: Descartes' Transformation of the Early Modern Concept of Construction. Sources and Studies in the History of Mathematics and Physical Sciences. Springer. doi:10.1007/978-1-4613-0087-8. ISBN 978-1-4612-6521-4.
• Bottazzini, Umberto (1986). The Higher Calculus: A History of Real and Complex Analysis from Euler to Weierstrass. Springer. ISBN 9780387963020.
• Cantor, Georg (1874). "Über eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen" [On a property of the collection of all real algebraic numbers]. Crelle's Journal (in German). 77: 258–62.
• Dieudonné, Jean (1960). Foundations of Modern Analysis. Academic Press.
• Feferman, Solomon (1964). The Number Systems: Foundations of Algebra and Analysis. Addison-Wesley.
• Howie, John M. (2001). Real Analysis. Springer Undergraduate Mathematics Series. Springer. doi:10.1007/978-1-4471-0341-7. ISBN 978-1-85233-314-0.
• Katz, Robert (1964). Axiomatic Analysis. Heath.
• Krantz, David H.; Luce, R. Duncan; Suppes, Patrick; Tversky, Amos (1971). Foundations of Measurement, Vol. 1. Academic Press. ISBN 9780124254015. Vol. 2, 1989. Vol. 3, 1990.
• Mac Lane, Saunders (1986). "4. Real Numbers". Mathematics: Form and Function. Springer. ISBN 9780387962177.
• Landau, Edmund (1966). Foundations of Analysis (3rd ed.). Chelsea. ISBN 9780828400794. Translated from the German Grundlagen der Analysis, 1930.
• Stevenson, Frederick W. (2000). Exploring the Real Numbers. Prentice Hall. ISBN 9780130402615.
• Stillwell, John (2013). The Real Numbers: An Introduction to Set Theory and Analysis. Undergraduate Texts in Mathematics. Springer. doi:10.1007/978-3-319-01577-4. ISBN 978-3-319-01576-7.
External links
• "Real number", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Number line
In elementary mathematics, a number line is a picture of a graduated straight line that serves as visual representation of the real numbers. Every point of a number line is assumed to correspond to a real number, and every real number to a point.[1]
The integers are often shown as specially-marked points evenly spaced on the line. Although the image only shows the integers from –3 to 3, the line includes all real numbers, continuing forever in each direction, and also numbers that are between the integers. It is often used as an aid in teaching simple addition and subtraction, especially involving negative numbers.
In advanced mathematics, the number line can be called the real line or real number line, formally defined as the set R of all real numbers. It is viewed as a geometric space, namely the real coordinate space of dimension one, or the Euclidean space of dimension one – the Euclidean line. It can also be thought of as a vector space (or affine space), a metric space, a topological space, a measure space, or a linear continuum.
Just like the set of real numbers, the real line is usually denoted by the symbol R (or alternatively, $\mathbb {R} $, the letter “R” in blackboard bold). However, it is sometimes denoted R1 or E1 in order to emphasize its role as the first real space or first Euclidean space.
History
The first mention of the number line used for operation purposes is found in John Wallis's Treatise of algebra.[2] In his treatise, Wallis describes addition and subtraction on a number line in terms of moving forward and backward, under the metaphor of a person walking.
An earlier depiction, without mention of operations, is found in John Napier's A description of the admirable table of logarithmes, which shows values 1 through 12 lined up from left to right.[3]
Contrary to popular belief, René Descartes's original La Géométrie does not feature a number line as we use it today, though it does use a coordinate system. In particular, Descartes's work does not contain specific numbers mapped onto lines, only abstract quantities.[4]
Drawing the number line
A number line is usually represented as being horizontal, but in a Cartesian coordinate plane the vertical axis (y-axis) is also a number line.[5] According to one convention, positive numbers always lie on the right side of zero, negative numbers always lie on the left side of zero, and arrowheads on both ends of the line are meant to suggest that the line continues indefinitely in the positive and negative directions. Another convention uses only one arrowhead which indicates the direction in which numbers grow.[5] The line continues indefinitely in the positive and negative directions according to the rules of geometry which define a line without endpoints as an infinite line, a line with one endpoint as a ray, and a line with two endpoints as a line segment.
Comparing numbers
If a particular number is farther to the right on the number line than is another number, then the first number is greater than the second (equivalently, the second is less than the first). The distance between them is the magnitude of their difference—that is, it measures the first number minus the second one, or equivalently the absolute value of the second number minus the first one. Taking this difference is the process of subtraction.
Thus, for example, the length of a line segment between 0 and some other number represents the magnitude of the latter number.
Two numbers can be added by "picking up" the length from 0 to one of the numbers, and putting it down again with the end that was 0 placed on top of the other number.
Two numbers can be multiplied as in this example: To multiply 5 × 3, note that this is the same as 5 + 5 + 5, so pick up the length from 0 to 5 and place it to the right of 5, and then pick up that length again and place it to the right of the previous result. This gives a result that is 3 combined lengths of 5 each; since the process ends at 15, we find that 5 × 3 = 15.
Division can be performed as in the following example: To divide 6 by 2—that is, to find out how many times 2 goes into 6—note that the length from 0 to 2 lies at the beginning of the length from 0 to 6; pick up the former length and put it down again to the right of its original position, with the end formerly at 0 now placed at 2, and then move the length to the right of its latest position again. This puts the right end of the length 2 at the right end of the length from 0 to 6. Since three lengths of 2 filled the length 6, 2 goes into 6 three times (that is, 6 ÷ 2 = 3).
• The ordering on the number line: greater elements lie in the direction of the arrow.
• The difference 3 − 2 = 3 + (−2) on the real number line.
• The addition 1 + 2 on the real number line.
• The absolute difference.
• The multiplication 2 × 1.5.
• The division 3 ÷ 2 on the real number line.
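The repeated-length procedures described above translate directly into arithmetic on signed lengths. The following Python sketch (function names are ours, chosen for illustration) models multiplication as laying a segment down repeatedly and division as counting how many copies of a segment fill a length:

```python
# Model number-line arithmetic as laying segments end to end.
# A "segment" is identified with its signed length.

def add_on_line(a, b):
    # Pick up the length 0..b and put its start down at a.
    return a + b

def multiply_on_line(a, n):
    # Lay the length 0..a down n times, end to end (n a positive integer).
    position = 0
    for _ in range(n):
        position = add_on_line(position, a)
    return position

def divide_on_line(total, step):
    # Count how many copies of `step` fill the length 0..total.
    count, position = 0, 0
    while position < total:
        position = add_on_line(position, step)
        count += 1
    return count

print(multiply_on_line(5, 3))  # 15, matching 5 × 3 = 15
print(divide_on_line(6, 2))    # 3, matching 6 ÷ 2 = 3
```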
Portions of the number line
The section of the number line between two numbers is called an interval. If the section includes both numbers it is said to be a closed interval, while if it excludes both numbers it is called an open interval. If it includes one of the numbers but not the other one, it is called a half-open interval.
All the points extending forever in one direction from a particular point are together known as a ray. If the ray includes the particular point, it is a closed ray; otherwise it is an open ray.
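The three kinds of interval differ only in whether each endpoint is included, which can be written down directly as membership tests. A minimal sketch in Python (assuming a < b; the function names are ours):

```python
# Membership tests for the kinds of interval between a and b,
# written directly from the definitions (a < b assumed).

def in_closed(x, a, b):    return a <= x <= b   # includes both endpoints
def in_open(x, a, b):      return a <  x <  b   # excludes both endpoints
def in_half_open(x, a, b): return a <= x <  b   # includes a, excludes b

print(in_closed(0, 0, 1))     # True
print(in_open(0, 0, 1))       # False
print(in_half_open(0, 0, 1))  # True
```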
Extensions of the concept
Logarithmic scale
Main article: Logarithmic scale
On the number line, the distance between two points is the unit length if and only if the difference of the represented numbers equals 1. Other choices are possible.
One of the most common choices is the logarithmic scale, which represents the positive numbers on a line in such a way that the distance between two points is the unit length whenever the ratio of the represented numbers has a fixed value, typically 10. On such a logarithmic scale, the origin represents 1; one inch to the right one has 10, one inch to the right of 10 one has 10×10 = 100, then 10×100 = 1000 = 10³, then 10×1000 = 10,000 = 10⁴, etc. Similarly, one inch to the left of 1 one has 1/10 = 10⁻¹, then 1/100 = 10⁻², etc.
This approach is useful when one wants to represent, on the same figure, values with very different orders of magnitude. For example, a logarithmic scale is required to represent simultaneously the sizes of the very different bodies that exist in the Universe: typically a photon, an electron, an atom, a molecule, a human, the Earth, the Solar System, a galaxy, and the visible Universe.
Logarithmic scales are used in slide rules for multiplying or dividing numbers by adding or subtracting lengths on logarithmic scales.
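The slide-rule principle is that a number's position on a base-10 logarithmic scale is its logarithm, so adding lengths adds logarithms and therefore multiplies the numbers. A small illustrative sketch (the function name is ours):

```python
import math

# Position of a positive number on a base-10 logarithmic scale,
# with 1 at the origin and one unit of length per factor of 10.
def log_position(x):
    return math.log10(x)

# A slide rule multiplies by adding lengths on two such scales:
product_position = log_position(2) + log_position(50)
print(10 ** product_position)  # ≈ 100.0, since 2 × 50 = 100

# One unit right of 1 sits 10, two units right sits 100, one unit left sits 1/10:
print(log_position(10), log_position(100), log_position(0.1))
```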
Combining number lines
A line drawn through the origin at right angles to the real number line can be used to represent the imaginary numbers. This line, called the imaginary line, extends the number line to a complex number plane, with points representing complex numbers.
Alternatively, one real number line can be drawn horizontally to denote possible values of one real number, commonly called x, and another real number line can be drawn vertically to denote possible values of another real number, commonly called y. Together these lines form what is known as a Cartesian coordinate system, and any point in the plane represents the value of a pair of real numbers. Further, the Cartesian coordinate system can itself be extended by visualizing a third number line "coming out of the screen (or page)", measuring a third variable called z. Positive numbers are closer to the viewer's eyes than the screen is, while negative numbers are "behind the screen"; larger numbers are farther from the screen. Then any point in the three-dimensional space that we live in represents the values of a trio of real numbers.
Advanced concepts
As a linear continuum
The real line is a linear continuum under the standard < ordering. Specifically, the real line is linearly ordered by <, and this ordering is dense and has the least-upper-bound property.
In addition to the above properties, the real line has no maximum or minimum element. It also has a countable dense subset, namely the set of rational numbers. It is a theorem that any linear continuum with a countable dense subset and no maximum or minimum element is order-isomorphic to the real line.
The real line also satisfies the countable chain condition: every collection of mutually disjoint, nonempty open intervals in R is countable. In order theory, the famous Suslin problem asks whether every linear continuum satisfying the countable chain condition that has no maximum or minimum element is necessarily order-isomorphic to R. This statement has been shown to be independent of the standard axiomatic system of set theory known as ZFC.
As a metric space
The real line forms a metric space, with the distance function given by absolute difference:
$d(x,y)=|x-y|.$
The metric tensor is clearly the 1-dimensional Euclidean metric. Since the n-dimensional Euclidean metric can be represented in matrix form as the n-by-n identity matrix, the metric on the real line is simply the 1-by-1 identity matrix, i.e. 1.
If p ∈ R and ε > 0, then the ε-ball in R centered at p is simply the open interval (p − ε, p + ε).
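The metric and its ε-balls can be checked directly from the definitions. A minimal sketch in Python (names are ours):

```python
# The standard metric on the real line and the ε-ball it defines.
def d(x, y):
    return abs(x - y)

def in_ball(x, p, eps):
    # x lies in the ε-ball around p iff x is in the open interval (p − ε, p + ε)
    return d(x, p) < eps

print(d(3, -4))              # 7
print(in_ball(1.9, 2, 0.5))  # True: (1.5, 2.5) contains 1.9
print(in_ball(2.5, 2, 0.5))  # False: the interval is open
```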
This real line has several important properties as a metric space:
• The real line is a complete metric space, in the sense that any Cauchy sequence of points converges.
• The real line is path-connected and is one of the simplest examples of a geodesic metric space.
• The Hausdorff dimension of the real line is equal to one.
As a topological space
The real line carries a standard topology, which can be introduced in two different, equivalent ways. First, since the real numbers are totally ordered, they carry an order topology. Second, the real numbers inherit a metric topology from the metric defined above. The order topology and metric topology on R are the same. As a topological space, the real line is homeomorphic to the open interval (0, 1).
The real line is trivially a topological manifold of dimension 1. Up to homeomorphism, it is one of only two different connected 1-manifolds without boundary, the other being the circle. It also has a standard differentiable structure on it, making it a differentiable manifold. (Up to diffeomorphism, there is only one differentiable structure that the topological space supports.)
The real line is a locally compact space and a paracompact space, as well as second-countable and normal. It is also path-connected, and is therefore connected as well, though it can be disconnected by removing any one point. The real line is also contractible, and as such all of its homotopy groups and reduced homology groups are zero.
As a locally compact space, the real line can be compactified in several different ways. The one-point compactification of R is a circle (namely, the real projective line), and the extra point can be thought of as an unsigned infinity. Alternatively, the real line has two ends, and the resulting end compactification is the extended real line [−∞, +∞]. There is also the Stone–Čech compactification of the real line, which involves adding an infinite number of additional points.
In some contexts, it is helpful to place other topologies on the set of real numbers, such as the lower limit topology or the Zariski topology. For the real numbers, the latter is the same as the finite complement topology.
As a vector space
The real line is a vector space over the field R of real numbers (that is, over itself) of dimension 1. It has the usual multiplication as an inner product, making it a Euclidean vector space. The norm defined by this inner product is simply the absolute value.
As a measure space
The real line carries a canonical measure, namely the Lebesgue measure. This measure can be defined as the completion of a Borel measure defined on R, where the measure of any interval is the length of the interval.
Lebesgue measure on the real line is one of the simplest examples of a Haar measure on a locally compact group.
In real algebras
When A is a unital real algebra, the set of products of real numbers with 1 is a real line within the algebra. For example, in the complex plane z = x + iy, the subspace {z : y = 0} is a real line. Similarly, the algebra of quaternions
q = w + x i + y j + z k
has a real line in the subspace {q : x = y = z = 0 }.
When the real algebra is a direct sum $A=R\oplus V,$ then a conjugation on A is introduced by the mapping $v\to -v$ of subspace V. In this way the real line consists of the fixed points of the conjugation.
For each dimension n, the square matrices form a ring that has a real line, in the form of the real multiples of the identity matrix in the ring.
See also
• Cantor–Dedekind axiom
• Imaginary line (mathematics)
• Line (geometry)
• Projectively extended real line
• Chronology
• Cuisenaire rods
• Extended real number line
• Hyperreal number line
• Number form (neurological phenomenon)
References
1. Stewart, James B.; Redlin, Lothar; Watson, Saleem (2008). College Algebra (5th ed.). Brooks Cole. pp. 13–19. ISBN 978-0-495-56521-5.
2. Wallis, John (1685). Treatise of Algebra. p. 265. http://lhldigital.lindahall.org/cdm/ref/collection/math/id/11231
3. Napier, John (1616). A Description of the Admirable Table of Logarithmes. https://www.math.ru.nl/werkgroepen/gmfw/bronnen/napier1.html
4. Núñez, Rafael (2017). "How Much Mathematics Is 'Hardwired', If Any at All". Minnesota Symposia on Child Psychology: Culture and Developmental Systems, Volume 38. p. 98. http://www.cogsci.ucsd.edu/~nunez/COGS152_Readings/Nunez_ch3_MN.pdf
5. "Introduction to the x,y-plane". Purplemath. Archived 2015-11-09 at the Wayback Machine. Retrieved 2015-11-13.
Further reading
• Munkres, James (1999). Topology (2nd ed.). Prentice Hall. ISBN 0-13-181629-2.
• Rudin, Walter (1966). Real and Complex Analysis. McGraw-Hill. ISBN 0-07-100276-6.
• Media related to Number lines at Wikimedia Commons
Real plane curve
In mathematics, a real plane curve is usually a real algebraic curve defined in the real projective plane.
Ovals
Since the field of real numbers is not algebraically closed, the geometry of even a plane curve C in the real projective plane is subtler than in the complex case. Assuming no singular points, the real points of C form a number of ovals, in other words submanifolds that are topologically circles. The real projective plane has a fundamental group that is a cyclic group with two elements. Such an oval may represent either group element; in other words we may or may not be able to contract it down in the plane. Taking out the line at infinity L, any oval that stays in the finite part of the affine plane will be contractible, and so represent the identity element of the fundamental group; the other type of oval must therefore intersect L.
There is still the question of how the various ovals are nested. This was the topic of Hilbert's sixteenth problem. See Harnack's curve theorem for a classical result.
See also
• Real algebraic geometry
• Ragsdale conjecture
References
• "Plane real algebraic curve", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Real point
In geometry, a real point is a point in the complex projective plane with homogeneous coordinates (x,y,z) for which there exists a nonzero complex number λ such that λx, λy, and λz are all real numbers.
This definition can be widened to a complex projective space of arbitrary finite dimension as follows:
$(u_{1},u_{2},\ldots ,u_{n})$
are the homogeneous coordinates of a real point if there exists a nonzero complex number λ such that the coordinates of
$(\lambda u_{1},\lambda u_{2},\ldots ,\lambda u_{n})$
are all real.
A point which is not real is called an imaginary point.[1]
Context
Geometries that are specializations of real projective geometry, such as Euclidean geometry, elliptic geometry or conformal geometry may be complexified, thus embedding the points of the geometry in a complex projective space, but retaining the identity of the original real space as special. Lines, planes etc. are expanded to the lines, etc. of the complex projective space. As with the inclusion of points at infinity and complexification of real polynomials, this allows some theorems to be stated more simply without exceptions and for a more regular algebraic analysis of the geometry.
Viewed in terms of homogeneous coordinates, a real vector space of homogeneous coordinates of the original geometry is complexified. A point of the original geometric space is defined by an equivalence class of homogeneous vectors of the form λu, where λ is a nonzero complex value and u is a real vector. A point of this form (which hence belongs to the original real space) is called a real point, whereas a point that has been added through the complexification, and thus does not have this form, is called an imaginary point.
Real subspace
A subspace of a projective space is real if it is spanned by real points. Every imaginary point belongs to exactly one real line, the line through the point and its complex conjugate.[1]
References
1. Pottmann, Helmut; Wallner, Johannes (2009), Computational Line Geometry, Mathematics and visualization, Springer, pp. 54–55, ISBN 9783642040184.
Real projective line
In geometry, a real projective line is a projective line over the real numbers. It is an extension of the usual concept of a line that has been historically introduced to solve a problem set by visual perspective: two parallel lines do not intersect but seem to intersect "at infinity". For solving this problem, points at infinity have been introduced, in such a way that in a real projective plane, two distinct projective lines meet in exactly one point. The set of these points at infinity, the "horizon" of the visual perspective in the plane, is a real projective line. It is the set of directions emanating from an observer situated at any point, with opposite directions identified.
An example of a real projective line is the projectively extended real line, which is often called the projective line.
Formally, a real projective line P(R) is defined as the set of all one-dimensional linear subspaces of a two-dimensional vector space over the reals. The automorphisms of a real projective line are called projective transformations, homographies, or linear fractional transformations. They form the projective linear group PGL(2, R). Each element of PGL(2, R) can be defined by a nonsingular 2×2 real matrix, and two matrices define the same element of PGL(2, R) if one is the product of the other and a nonzero real number.
Topologically, real projective lines are homeomorphic to circles. The complex analog of a real projective line is a complex projective line, also called a Riemann sphere.
Definition
The points of the real projective line are usually defined as equivalence classes of an equivalence relation. The starting point is a real vector space of dimension 2, V. Define on V ∖ 0 the binary relation v ~ w to hold when there exists a nonzero real number t such that v = tw. The definition of a vector space implies almost immediately that this is an equivalence relation. The equivalence classes are the vector lines from which the zero vector has been removed. The real projective line P(V) is the set of all equivalence classes. Each equivalence class is considered as a single point, or, in other words, a point is defined as being an equivalence class.
If one chooses a basis of V, this amounts (by identifying a vector with its coordinate vector) to identifying V with the direct product R × R = R2, and the equivalence relation becomes (x, y) ~ (w, z) if there exists a nonzero real number t such that (x, y) = (tw, tz). In this case, the projective line P(R2) is preferably denoted P1(R) or $\mathbb {R} \mathbb {P} ^{1}$. The equivalence class of the pair (x, y) is traditionally denoted [x: y], the colon in the notation recalling that, if y ≠ 0, the ratio x : y is the same for all elements of the equivalence class. If a point P is the equivalence class [x: y], one says that (x, y) is a pair of projective coordinates of P.[1]
As P(V) is defined through an equivalence relation, the canonical projection from V to P(V) defines a topology (the quotient topology) and a differential structure on the projective line. However, the fact that equivalence classes are not finite induces some difficulties for defining the differential structure. These are solved by considering V as a Euclidean vector space. The circle of unit vectors is, in the case of R2, the set of the vectors whose coordinates satisfy x2 + y2 = 1. This circle intersects each equivalence class in exactly two opposite points. Therefore, the projective line may be considered as the quotient space of the circle by the equivalence relation such that v ~ w if and only if either v = w or v = −w.
See also: projectivization
Charts
The projective line is a manifold. This can be seen from the above construction through an equivalence relation, but is easier to understand by providing an atlas consisting of two charts
• Chart #1: $y\neq 0,\quad [x:y]\mapsto {\frac {x}{y}}$
• Chart #2: $x\neq 0,\quad [x:y]\mapsto {\frac {y}{x}}$
The equivalence relation ensures that all representatives of an equivalence class are sent to the same real number by a chart.
Either x or y may be zero, but not both, so both charts are needed to cover the projective line. The transition map between these two charts is the multiplicative inverse. As it is a differentiable function, and even an analytic function (away from zero), the real projective line is both a differentiable manifold and an analytic manifold.
The inverse function of chart #1 is the map
$x\mapsto [x:1].$
It defines an embedding of the real line into the projective line; the complement of its image is the point [1: 0]. The pair consisting of this embedding and the projective line is called the projectively extended real line. Identifying the real line with its image under this embedding, one sees that the projective line may be considered as the union of the real line and the single point [1: 0], called the point at infinity of the projectively extended real line and denoted ∞. This embedding allows us to identify the point [x: y] either with the real number x/y if y ≠ 0, or with ∞ in the other case.
The same construction may be done with the other chart. In this case, the point at infinity is [0: 1]. This shows that the notion of point at infinity is not intrinsic to the real projective line, but is relative to the choice of an embedding of the real line into the projective line.
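The equivalence classes, the two charts, and their inverse-taking transition map can be sketched concretely. In the following illustrative Python code (function names are ours), a point of P1(R) is represented by any pair (x, y) ≠ (0, 0); two pairs represent the same point exactly when the determinant xz − yw vanishes:

```python
# A point of P1(R) is an equivalence class [x : y]; two pairs represent
# the same point iff one is a nonzero real multiple of the other.

def same_point(p, q, tol=1e-12):
    (x, y), (w, z) = p, q
    # (x, y) ~ (w, z)  iff  the determinant x*z - y*w vanishes
    return abs(x * z - y * w) < tol

def chart1(p):  # defined where y != 0: [x : y] -> x / y
    x, y = p
    return x / y

def chart2(p):  # defined where x != 0: [x : y] -> y / x
    x, y = p
    return y / x

print(same_point((2, 4), (1, 2)))  # True: both are [1 : 2]
print(same_point((1, 0), (1, 2)))  # False
print(chart1((3, 2)))              # 1.5
print(chart2((3, 2)))              # 0.666..., the reciprocal: the transition map
```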
Structure
The real projective line is a complete projective range that is found in the real projective plane and in the complex projective line. Its structure is thus inherited from these superstructures. Primary among these structures is the relation of projective harmonic conjugates among the points of the projective range.
The real projective line has a cyclic order that extends the usual order of the real numbers.
Automorphisms
The projective linear group and its action
Matrix-vector multiplication defines a left action of GL2(R) on the space R2 of column vectors: explicitly,
${\begin{pmatrix}a&b\\c&d\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}ax+by\\cx+dy\end{pmatrix}}.$
Since each matrix in GL2(R) fixes the zero vector and maps proportional vectors to proportional vectors, there is an induced action of GL2(R) on P1(R): explicitly,[2]
${\begin{pmatrix}a&b\\c&d\end{pmatrix}}[x:y]=[ax+by:cx+dy].$
(Here and below, the notation $[x:y]$ for homogeneous coordinates denotes the equivalence class of the column matrix $\textstyle {\begin{pmatrix}x\\y\end{pmatrix}};$ it must not be confused with the row matrix $[x\;y].$)
The elements of GL2(R) that act trivially on P1(R) are the nonzero scalar multiples of the identity matrix; these form a subgroup denoted R×. The projective linear group is defined to be the quotient group PGL2(R) = GL2(R)/R×. By the above, there is an induced faithful action of PGL2(R) on P1(R). For this reason, the group PGL2(R) may also be called the group of linear automorphisms of P1(R).
Linear fractional transformations
Using the identification R ∪ ∞ → P1(R) sending x to [x:1] and ∞ to [1:0], one obtains a corresponding action of PGL2(R) on R ∪ ∞, which is by linear fractional transformations: explicitly, since
${\begin{pmatrix}a&b\\c&d\end{pmatrix}}[x:1]=[ax+b:cx+d]\quad \mathrm {and} \quad {\begin{pmatrix}a&b\\c&d\end{pmatrix}}[1:0]=[a:c],$
the class of ${\begin{pmatrix}a&b\\c&d\end{pmatrix}}$ in PGL2(R) acts as $x\mapsto {\frac {ax+b}{cx+d}}$[3][4][5] and $\infty \mapsto {\frac {a}{c}}$,[6] with the understanding that each fraction with denominator 0 should be interpreted as ∞.[7]
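Working in homogeneous coordinates makes the "fraction with denominator 0 means ∞" convention automatic: one applies the matrix to [x : y] and only converts back to an extended real number at the end. An illustrative sketch (names are ours):

```python
# Action of a matrix [[a, b], [c, d]] (det != 0) on P1(R), computed in
# homogeneous coordinates [x : y]; the point at infinity is [1 : 0].

def act(m, p):
    (a, b), (c, d) = m
    x, y = p
    return (a * x + b * y, c * x + d * y)

def to_extended_real(p):
    x, y = p
    return float('inf') if y == 0 else x / y

m = ((1, 2), (3, 4))
print(act(m, (5, 1)))                    # (7, 19): the point (1*5+2)/(3*5+4) = 7/19
print(to_extended_real(act(m, (1, 0))))  # a/c = 1/3: the image of infinity
```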
Properties
• Given two ordered triples of distinct points in P1(R), there exists a unique element of PGL2(R) mapping the first triple to the second; that is, the action is sharply 3-transitive. For example, the linear fractional transformation mapping (0, 1, ∞) to (−1, 0, 1) is the Cayley transform $x\mapsto {\frac {x-1}{x+1}}$.
• The stabilizer in PGL2(R) of the point ∞ is the affine group of the real line, consisting of the transformations $x\mapsto ax+b$ for all a ∈ R× and b ∈ R.
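The Cayley transform example can be verified directly. In homogeneous coordinates it is the class of the matrix with rows (1, −1) and (1, 1), so no special case is needed for ∞ (an illustrative sketch; the function name is ours):

```python
# The Cayley transform x -> (x - 1)/(x + 1), applied in homogeneous
# coordinates so that infinity = [1 : 0] is handled uniformly.
def cayley(p):
    x, y = p  # p represents [x : y], i.e. the extended real x/y
    return (x - y, x + y)

for p in [(0, 1), (1, 1), (1, 0)]:            # the triple (0, 1, infinity)
    x, y = cayley(p)
    print(float('inf') if y == 0 else x / y)  # -1.0, then 0.0, then 1.0
```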
See also
• Real projective plane
• Complex projective plane
• Wheel theory
Notes
1. The argument used to construct P1(R) can also be used with any field K and any dimension to construct the projective space Pn(K).
2. Miyake, Modular forms, Springer, 2006, §1.1. This reference and some of the others below work with P1(C) instead of P1(R), but the principle is the same.
3. Lang, Elliptic functions, Springer, 1987, 3.§1.
4. Serre, A course in arithmetic, Springer, 1973, VII.1.1.
5. Stillwell, Mathematics and its history, Springer, 2010, §8.6
6. Lang, Complex analysis, Springer, 1999, VII, §5.
7. Koblitz, Introduction to elliptic curves and modular forms, Springer, 1993, III.§1.
References
• Juan Carlos Alvarez Paiva (2000) The Real Projective Line, course content from New York University.
• Santiago Cañez (2014) Notes on Projective Geometry from Northwestern University.
Real projective plane
In mathematics, the real projective plane is an example of a compact non-orientable two-dimensional manifold; in other words, a one-sided surface. It cannot be embedded in standard three-dimensional space without intersecting itself. It has basic applications to geometry, since the common construction of the real projective plane is as the space of lines in $\mathbb {R} ^{3}$ passing through the origin.
The fundamental polygon of the projective plane.
The Möbius strip, which has a single edge, can be closed into a projective plane by gluing opposite open edges together.
In comparison, the Klein bottle is a Möbius strip closed into a cylinder.
The plane is also often described topologically, in terms of a construction based on the Möbius strip: if one could glue the (single) edge of the Möbius strip to itself in the correct direction, one would obtain the projective plane. (This cannot be done in three-dimensional space without the surface intersecting itself.) Equivalently, gluing a disk along the boundary of the Möbius strip gives the projective plane. Topologically, it has Euler characteristic 1, hence a demigenus (non-orientable genus, Euler genus) of 1.
Since the Möbius strip, in turn, can be constructed from a square by gluing two of its sides together with a half-twist, the real projective plane can thus be represented as a unit square (that is, [0, 1] × [0, 1]) with its sides identified by the following equivalence relations:
(0, y) ~ (1, 1 − y) for 0 ≤ y ≤ 1
and
(x, 0) ~ (1 − x, 1) for 0 ≤ x ≤ 1,
as in the leftmost diagram shown here.
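The two edge identifications can be tested mechanically. A minimal sketch in Python (the function name is ours), which checks whether two points of the square are glued by one of the relations above:

```python
# Edge identifications of the unit square that build the projective plane:
# (0, y) ~ (1, 1 - y) and (x, 0) ~ (1 - x, 1).

def identified(p, q, tol=1e-12):
    (x1, y1), (x2, y2) = p, q
    if p == q:
        return True
    vertical = {x1, x2} == {0.0, 1.0} and abs(y1 - (1 - y2)) < tol
    horizontal = {y1, y2} == {0.0, 1.0} and abs(x1 - (1 - x2)) < tol
    return vertical or horizontal

print(identified((0.0, 0.25), (1.0, 0.75)))  # True: left edge glued to right with a flip
print(identified((0.3, 0.0), (0.7, 1.0)))    # True: bottom glued to top with a flip
print(identified((0.0, 0.25), (1.0, 0.25)))  # False: no straight (unflipped) gluing
```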
Examples
Projective geometry is not necessarily concerned with curvature, and the real projective plane may be twisted up and placed in the Euclidean plane or 3-space in many different ways.[1] Some of the more important examples are described below.
The projective plane cannot be embedded (that is without intersection) in three-dimensional Euclidean space. The proof that the projective plane does not embed in three-dimensional Euclidean space goes like this: Assuming that it does embed, it would bound a compact region in three-dimensional Euclidean space by the generalized Jordan curve theorem. The outward-pointing unit normal vector field would then give an orientation of the boundary manifold, but the boundary manifold would be the projective plane, which is not orientable. This is a contradiction, and so our assumption that it does embed must have been false.
The projective sphere
Consider a sphere, and let the great circles of the sphere be "lines", and let pairs of antipodal points be "points". It is easy to check that this system obeys the axioms required of a projective plane:
• any pair of distinct great circles meet at a pair of antipodal points; and
• any two distinct pairs of antipodal points lie on a single great circle.
If we identify each point on the sphere with its antipodal point, then we get a representation of the real projective plane in which the "points" of the projective plane really are points. This means that the projective plane is the quotient space of the sphere obtained by partitioning the sphere into equivalence classes under the equivalence relation ~, where x ~ y if y = x or y = −x. This quotient space of the sphere is homeomorphic with the collection of all lines passing through the origin in R3.
The quotient map from the sphere onto the real projective plane is in fact a two-sheeted (i.e. two-to-one) covering map. It follows that the fundamental group of the real projective plane is the cyclic group of order 2; i.e., integers modulo 2. One can take the loop AB from the figure above to be the generator.
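The antipodal identification x ~ −x can be made computational by choosing a canonical representative for each equivalence class, for instance by fixing the sign of the first nonzero coordinate. A sketch under that convention (the function name is ours):

```python
# A point of the projective plane as a pair of antipodal points on the sphere;
# choose a canonical representative by making the first nonzero coordinate positive.

def canonical(p):
    for coord in p:
        if coord != 0:
            sign = 1 if coord > 0 else -1
            return tuple(sign * c for c in p)
    raise ValueError("the zero vector has no projective class")

# Antipodal points get the same representative, so they name the same point:
print(canonical((0.0, -0.6, 0.8)) == canonical((0.0, 0.6, -0.8)))  # True
```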
The projective hemisphere
Because the sphere covers the real projective plane twice, the plane may be represented as a closed hemisphere around whose rim opposite points are similarly identified.[2]
Boy's surface – an immersion
The projective plane can be immersed (local neighbourhoods of the source space do not have self-intersections) in 3-space. Boy's surface is an example of an immersion.
Polyhedral examples must have at least nine faces.[3]
Roman surface
Steiner's Roman surface is a more degenerate map of the projective plane into 3-space, containing a cross-cap.
A polyhedral representation is the tetrahemihexahedron,[4] which has the same general form as Steiner's Roman Surface, shown here.
Hemi polyhedra
Looking in the opposite direction, certain abstract regular polytopes – hemi-cube, hemi-dodecahedron, and hemi-icosahedron – can be constructed as regular figures in the projective plane; see also projective polyhedra.
Planar projections
Various planar (flat) projections or mappings of the projective plane have been described. In 1874 Klein described the mapping:[1]
$k(x,y)=\left(1+x^{2}+y^{2}\right)^{\frac {1}{2}}{\binom {x}{y}}$
Central projection of the projective hemisphere onto a plane yields the usual infinite projective plane, described below.
Cross-capped disk
A closed surface is obtained by gluing a disk to a cross-cap. This surface can be represented parametrically by the following equations:
${\begin{aligned}X(u,v)&=r\,(1+\cos v)\,\cos u,\\Y(u,v)&=r\,(1+\cos v)\,\sin u,\\Z(u,v)&=-\operatorname {tanh} \left(u-\pi \right)\,r\,\sin v,\end{aligned}}$
where both u and v range from 0 to 2π.
These equations are similar to those of a torus. Figure 1 shows a closed cross-capped disk.
Figure 1. Two views of a cross-capped disk.
A cross-capped disk has a plane of symmetry which passes through its line segment of double points. In Figure 1 the cross-capped disk is seen from above its plane of symmetry z = 0, but it would look the same if seen from below.
A cross-capped disk can be sliced open along its plane of symmetry, while making sure not to cut along any of its double points. The result is shown in Figure 2.
Figure 2. Two views of a cross-capped disk which has been sliced open.
Once this exception is made, it will be seen that the sliced cross-capped disk is homeomorphic to a self-intersecting disk, as shown in Figure 3.
Figure 3. Two alternative views of a self-intersecting disk.
The self-intersecting disk is homeomorphic to an ordinary disk. The parametric equations of the self-intersecting disk are:
${\begin{aligned}X(u,v)&=r\,v\,\cos 2u,\\Y(u,v)&=r\,v\,\sin 2u,\\Z(u,v)&=r\,v\,\cos u,\end{aligned}}$
where u ranges from 0 to 2π and v ranges from 0 to 1.
Projecting the self-intersecting disk onto the plane of symmetry (z = 0 in the parametrization given earlier), which passes only through the double points, yields an ordinary disk which repeats itself (doubles up on itself).
The plane z = 0 cuts the self-intersecting disk into a pair of disks which are mirror reflections of each other. The disks have centers at the origin.
Now consider the rims of the disks (with v = 1). The points on the rim of the self-intersecting disk come in pairs which are reflections of each other with respect to the plane z = 0.
A cross-capped disk is formed by identifying these pairs of points, making them equivalent to each other. This means that a point with parameters (u, 1) and coordinates $(r\,\cos 2u,r\,\sin 2u,r\,\cos u)$ is identified with the point (u + π, 1) whose coordinates are $(r\,\cos 2u,r\,\sin 2u,-r\,\cos u)$. But this means that pairs of opposite points on the rim of the (equivalent) ordinary disk are identified with each other; this is how a real projective plane is formed out of a disk. Therefore, the surface shown in Figure 1 (cross-cap with disk) is topologically equivalent to the real projective plane RP2.
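The identification of rim points claimed above can be verified directly from the parametric equations (a small NumPy sketch, assumed tooling):

```python
import numpy as np

r = 1.0

def rim_point(u):
    # a point on the rim (v = 1) of the self-intersecting disk
    return np.array([r * np.cos(2 * u), r * np.sin(2 * u), r * np.cos(u)])

u = 0.7
p, q = rim_point(u), rim_point(u + np.pi)

# the pair (u, 1) and (u + pi, 1) agree in x and y and differ in the sign of z,
# so they are mirror images with respect to the plane z = 0
assert np.allclose(p[:2], q[:2])
assert np.isclose(p[2], -q[2])
```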
Homogeneous coordinates
Main article: Homogeneous coordinates
The points in the plane can be represented by homogeneous coordinates. A point has homogeneous coordinates [x : y : z], where the coordinates [x : y : z] and [tx : ty : tz] are considered to represent the same point, for all nonzero values of t. The points with coordinates [x : y : 1] are the usual real plane, called the finite part of the projective plane, and points with coordinates [x : y : 0], called points at infinity or ideal points, constitute a line called the line at infinity. (The homogeneous coordinates [0 : 0 : 0] do not represent any point.)
The lines in the plane can also be represented by homogeneous coordinates. A projective line corresponding to the plane ax + by + cz = 0 in R3 has the homogeneous coordinates (a : b : c). Thus, these coordinates have the equivalence relation (a : b : c) = (da : db : dc) for all nonzero values of d. Hence a different equation of the same line dax + dby + dcz = 0 gives the same homogeneous coordinates. A point [x : y : z] lies on a line (a : b : c) if ax + by + cz = 0. Therefore, lines with coordinates (a : b : c) where a, b are not both 0 correspond to the lines in the usual real plane, because they contain points that are not at infinity. The line with coordinates (0 : 0 : 1) is the line at infinity, since the only points on it are those with z = 0.
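The equivalence of scaled coordinate triples and the incidence condition ax + by + cz = 0 are easy to express computationally (an illustrative Python/NumPy sketch; the helper names are ours, not standard):

```python
import numpy as np

def same_point(p, q):
    # [x:y:z] and [tx:ty:tz] represent the same projective point (t != 0),
    # i.e. the triples are proportional, i.e. their cross product vanishes
    return np.allclose(np.cross(p, q), 0)

def incident(point, line):
    # point [x:y:z] lies on line (a:b:c) iff ax + by + cz = 0
    return np.isclose(np.dot(point, line), 0)

p = np.array([2.0, 3.0, 1.0])       # the finite point (2, 3)
assert same_point(p, 5 * p)

line_at_infinity = np.array([0.0, 0.0, 1.0])
ideal = np.array([1.0, 4.0, 0.0])   # a point at infinity
assert incident(ideal, line_at_infinity)
assert not incident(p, line_at_infinity)
```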
Points, lines, and planes
A line in P2 can be represented by the equation ax + by + cz = 0. If we treat a, b, and c as the column vector ℓ and x, y, z as the column vector x then the equation above can be written in matrix form as:
xTℓ = 0 or ℓTx = 0.
Using vector notation we may instead write x ⋅ ℓ = 0 or ℓ ⋅ x = 0.
The equation k(xTℓ) = 0 (where k is a non-zero scalar) sweeps out a plane that goes through zero in R3 and k(x) sweeps out a line, again going through zero. The plane and line are linear subspaces in R3, which always go through zero.
Ideal points
In P2 the equation of a line is ax + by + cz = 0 and this equation can represent a line on any plane parallel to the x, y plane by multiplying the equation by k.
If z = 1 we have a normalized homogeneous coordinate; all points with z = 1 form a plane. Suppose we are looking at that plane from a position further out along the z axis, looking back towards the origin, and two parallel lines are drawn on the plane. From where we stand we can see only so much of the plane, represented as the area outlined in red in the diagram. If we walk away from the plane along the z axis (still looking back towards the origin), we can see more of it. The points in our field of view have moved; this movement is reflected by dividing the homogeneous coordinates by a constant. In the adjacent image we have divided by 2, so the z value becomes 0.5. If we walk far enough away, what we are looking at becomes a point in the distance, and as we walk away we see more and more of the parallel lines. The lines meet at an ideal point: a line through the origin in the plane z = 0. Lines through the origin in the plane z = 0 are the ideal points, and the plane z = 0 as a whole is the line at infinity.
The homogeneous triple (0, 0, 0) is where all the real points go when the plane is viewed from an infinite distance; a line in the plane z = 0 is where parallel lines intersect.
Duality
In the equation xTℓ = 0 there are two column vectors. You can keep either constant and vary the other. If we keep the point x constant and vary the coefficients ℓ, we create new lines that go through the point. If we keep the coefficients constant and vary the points that satisfy the equation, we create a line. We regard x as a point because the axes we are using are x, y, and z. If we instead plotted the coefficients on axes labeled a, b, and c, points would become lines and lines would become points. If something is proved with the data plotted on the axes labeled x, y, and z, the same argument can be used for the data plotted on the axes labeled a, b, and c. That is duality.
Lines joining points and intersection of lines (using duality)
The equation xTℓ = 0 calculates the inner product of two column vectors. The inner product of two vectors is zero if the vectors are orthogonal. In P2, the line between the points x1 and x2 may be represented as a column vector ℓ that satisfies the equations x1Tℓ = 0 and x2Tℓ = 0, or in other words a column vector ℓ that is orthogonal to x1 and x2. The cross product will find such a vector: the line joining two points has homogeneous coordinates given by the equation x1 × x2. The intersection of two lines may be found in the same way, using duality, as the cross product of the vectors representing the lines, ℓ1 × ℓ2.
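Both constructions reduce to a single cross product (a Python/NumPy sketch of the join and meet operations just described; the variable names are ours):

```python
import numpy as np

# join: the line through two points; meet: the intersection point of two lines
join = meet = np.cross

p1 = np.array([0.0, 0.0, 1.0])    # the origin of the finite plane
p2 = np.array([1.0, 1.0, 1.0])    # the finite point (1, 1)
l = join(p1, p2)
assert np.isclose(p1 @ l, 0) and np.isclose(p2 @ l, 0)   # both points lie on l

# two parallel finite lines meet at a point at infinity (z = 0)
l1 = np.array([1.0, -1.0, 0.0])   # x - y = 0
l2 = np.array([1.0, -1.0, 1.0])   # x - y + 1 = 0
x = meet(l1, l2)
assert np.isclose(x[2], 0)        # their common point is ideal
```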
Embedding into 4-dimensional space
The projective plane embeds into 4-dimensional Euclidean space. The real projective plane P2(R) is the quotient of the two-sphere
S2 = {(x, y, z) ∈ R3 : x2 + y2 + z2 = 1}
by the antipodal relation (x, y, z) ~ (−x, −y, −z). Consider the function R3 → R4 given by (x, y, z) ↦ (xy, xz, y2 − z2, 2yz). This map restricts to a map whose domain is S2 and, since each component is a homogeneous polynomial of even degree, it takes the same values in R4 on each of any two antipodal points on S2. This yields a map P2(R) → R4. Moreover, this map is an embedding. Notice that this embedding admits a projection into R3 which is the Roman surface.
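That each component of the map has even degree, so that antipodal points share the same image, can be checked numerically (a NumPy sketch; an illustration, not a proof):

```python
import numpy as np

def f(p):
    # the map (x, y, z) -> (xy, xz, y^2 - z^2, 2yz); every component is
    # a homogeneous polynomial of even degree
    x, y, z = p
    return np.array([x * y, x * z, y * y - z * z, 2 * y * z])

rng = np.random.default_rng(0)
p = rng.normal(size=3)
p /= np.linalg.norm(p)        # a point on the sphere S^2

# antipodal points map to the same value, so f descends to P^2(R) -> R^4
assert np.allclose(f(p), f(-p))
```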
Higher non-orientable surfaces
By gluing together projective planes successively we get non-orientable surfaces of higher demigenus. The gluing process consists of cutting out a little disk from each surface and identifying (gluing) their boundary circles. Gluing two projective planes creates the Klein bottle.
The article on the fundamental polygon describes the higher non-orientable surfaces.
See also
• Real projective space
• Projective space
• Pu's inequality for real projective plane
• Smooth projective plane
References
1. Apéry, F.; Models of the real projective plane, Vieweg (1987)
2. Weeks, J.; The shape of space, CRC (2002), p 59
3. Brehm, U.; "How to build minimal polyhedral models of the Boy surface", The mathematical intelligencer 12, No. 4 (1990), pp 51-56.
4. (Richter)
• Coxeter, H.S.M. (1955), The Real Projective Plane, 2nd ed. Cambridge: At the University Press.
• Reinhold Baer, Linear Algebra and Projective Geometry, Dover, 2005 (ISBN 0-486-44565-8 )
• Richter, David A., Two Models of the Real Projective Plane, retrieved 2010-04-15
External links
• Weisstein, Eric W. "Real Projective Plane". MathWorld.
• Line field coloring using Werner Boy's real projective plane immersion
• The real projective plane on YouTube
|
Wikipedia
|
Real radical
In algebra, the real radical of an ideal I in a polynomial ring with real coefficients is the largest ideal containing I with the same vanishing locus. It plays a similar role in real algebraic geometry that the radical of an ideal plays in algebraic geometry over an algebraically closed field. More specifically, the Nullstellensatz says that when I is an ideal in a polynomial ring with coefficients coming from an algebraically closed field, the radical of I is the set of polynomials vanishing on the vanishing locus of I. In real algebraic geometry, the Nullstellensatz fails as the real numbers are not algebraically closed. However, one can recover a similar theorem, the real Nullstellensatz, by using the real radical in place of the (ordinary) radical.
Definition
The real radical of an ideal I in a polynomial ring $\mathbb {R} [x_{1},\dots ,x_{n}]$ over the real numbers, denoted by ${\sqrt[{\mathbb {R} }]{I}}$, is defined as
${\sqrt[{\mathbb {R} }]{I}}={\Big \{}f\in \mathbb {R} [x_{1},\dots ,x_{n}]\left|\,-f^{2m}=\textstyle {\sum _{i}}h_{i}^{2}+g\right.{\text{ where }}\ m\in \mathbb {Z} _{+},\,h_{i}\in \mathbb {R} [x_{1},\dots ,x_{n}],\,{\text{and }}g\in I{\Big \}}.$
The Positivstellensatz then implies that ${\sqrt[{\mathbb {R} }]{I}}$ is the set of all polynomials that vanish on the real variety[Note 1] defined by the vanishing of $I$.
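For example, take $I=(x^{2}+y^{2})$ in $\mathbb {R} [x,y]$; its real vanishing locus is the single point $(0,0)$, and both $x$ and $y$ lie in the real radical, since (taking $m=1$, $h_{1}=y$, and $g=-(x^{2}+y^{2})\in I$):

```latex
-x^{2} = y^{2} + \bigl(-(x^{2}+y^{2})\bigr),
\qquad\text{and symmetrically}\qquad
-y^{2} = x^{2} + \bigl(-(x^{2}+y^{2})\bigr).
```

Hence ${\sqrt[{\mathbb {R} }]{I}}=(x,y)$, the ideal of all polynomials vanishing at the origin, whereas the ordinary radical of $I$ is $I$ itself, because $x^{2}+y^{2}$ is irreducible over $\mathbb {R} $ and so $I$ is prime.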
References
• Marshall, Murray Positive polynomials and sums of squares. Mathematical Surveys and Monographs, 146. American Mathematical Society, Providence, RI, 2008. xii+187 pp. ISBN 978-0-8218-4402-1; 0-8218-4402-4
Notes
1. that is, the set of the points with real coordinates of a variety defined by polynomials with real coefficients
Real rank (C*-algebras)
In mathematics, the real rank of a C*-algebra is a noncommutative analogue of Lebesgue covering dimension. The notion was first introduced by Lawrence G. Brown and Gert K. Pedersen.[1]
Definition
The real rank of a unital C*-algebra A is the smallest non-negative integer n, denoted RR(A), such that for every (n + 1)-tuple (x0, x1, ... ,xn) of self-adjoint elements of A and every ε > 0, there exists an (n + 1)-tuple (y0, y1, ... ,yn) of self-adjoint elements of A such that $\sum _{i=0}^{n}y_{i}^{2}$ is invertible and $\lVert \sum _{i=0}^{n}(x_{i}-y_{i})^{2}\rVert <\varepsilon $. If no such integer exists, then the real rank of A is infinite. The real rank of a non-unital C*-algebra is defined to be the real rank of its unitalization.
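For the n = 0 case of the definition, a single self-adjoint element must be approximable by invertible self-adjoint elements. The NumPy sketch below illustrates this in the matrix algebra M2(C), which has real rank zero, by nudging small eigenvalues away from 0; the function name and approach are illustrative, not part of the general theory:

```python
import numpy as np

def nearby_invertible_selfadjoint(x, eps):
    # push eigenvalues of magnitude < eps/2 out to +/- eps/2, giving an
    # invertible self-adjoint y with ||x - y|| <= eps/2 < eps
    w, u = np.linalg.eigh(x)
    small = np.abs(w) < eps / 2
    w = np.where(small, np.where(w >= 0, eps / 2, -eps / 2), w)
    return u @ np.diag(w) @ u.conj().T

x = np.array([[1.0, 1.0], [1.0, 1.0]])   # self-adjoint, singular (eigenvalues 0 and 2)
y = nearby_invertible_selfadjoint(x, 1e-3)

assert np.allclose(y, y.conj().T)                          # still self-adjoint
assert np.abs(np.linalg.eigvalsh(y)).min() >= 4e-4         # invertible
assert np.linalg.norm(x - y, 2) < 1e-3                     # within eps of x
```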
Comparisons with dimension
If X is a locally compact Hausdorff space, then RR(C0(X)) = dim(X), where dim is the Lebesgue covering dimension of X. As a result, real rank is considered a noncommutative generalization of dimension, but real rank can be rather different when compared to dimension. For example, most noncommutative tori have real rank zero, despite being a noncommutative version of the two-dimensional torus. For locally compact Hausdorff spaces, being zero-dimensional is equivalent to being totally disconnected. The analogous relationship fails for C*-algebras; while AF-algebras have real rank zero, the converse is false. Formulas that hold for dimension may not generalize for real rank. For example, Brown and Pedersen conjectured that RR(A ⊗ B) ≤ RR(A) + RR(B), since it is true that dim(X × Y) ≤ dim(X) + dim(Y). They proved a special case: if A is AF and B has real rank zero, then A ⊗ B has real rank zero. But in general their conjecture is false: there are C*-algebras A and B with real rank zero such that A ⊗ B has real rank greater than zero.[2]
Real rank zero
C*-algebras with real rank zero are of particular interest. By definition, a unital C*-algebra has real rank zero if and only if the invertible self-adjoint elements of A are dense in the self-adjoint elements of A. This condition is equivalent to the previously studied conditions:
• (FS) The self-adjoint elements of A with finite spectrum are dense in the self-adjoint elements of A.
• (HP) Every hereditary C*-subalgebra of A has an approximate identity consisting of projections.
This equivalence can be used to give many examples of C*-algebras with real rank zero including AW*-algebras, Bunce–Deddens algebras,[3] and von Neumann algebras. More broadly, simple unital purely infinite C*-algebras have real rank zero including the Cuntz algebras and Cuntz–Krieger algebras. Since simple graph C*-algebras are either AF or purely infinite, every simple graph C*-algebra has real rank zero.
Having real rank zero is a property closed under taking direct limits, hereditary C*-subalgebras, and strong Morita equivalence. In particular, if A has real rank zero, then Mn(A), the algebra of n × n matrices over A, has real rank zero for any integer n ≥ 1.
References
1. Brown, Lawrence G; Pedersen, Gert K (July 1991). "C*-algebras of real rank zero". Journal of Functional Analysis. 99 (1): 131–149. doi:10.1016/0022-1236(91)90056-B. Zbl 0776.46026.
2. Kodaka, Kazunori; Osaka, Hiroyuki (July 1995). "Real Rank of Tensor Products of C*-algebras". Proceedings of the American Mathematical Society. 123 (7): 2213–2215. doi:10.1090/S0002-9939-1995-1264820-4. Zbl 0835.46053.
3. Blackadar, Bruce; Kumjian, Alexander (March 1985). "Skew Products of Relations and the Structure of Simple C*-Algebras". Mathematische Zeitschrift. 189 (1): 55–63. doi:10.1007/BF01246943. Zbl 0613.46049.
Real representation
In the mathematical field of representation theory a real representation is usually a representation on a real vector space U, but it can also mean a representation on a complex vector space V with an invariant real structure, i.e., an antilinear equivariant map
$j\colon V\to V$
which satisfies
$j^{2}=+1.$
The two viewpoints are equivalent because if U is a real vector space acted on by a group G (say), then V = U⊗C is a representation on a complex vector space with an antilinear equivariant map given by complex conjugation. Conversely, if V is such a complex representation, then U can be recovered as the fixed point set of j (the eigenspace with eigenvalue 1).
In physics, where representations are often viewed concretely in terms of matrices, a real representation is one in which the entries of the matrices representing the group elements are real numbers. These matrices can act either on real or complex column vectors.
A real representation on a complex vector space is isomorphic to its complex conjugate representation, but the converse is not true: a representation which is isomorphic to its complex conjugate but which is not real is called a pseudoreal representation. An irreducible pseudoreal representation V is necessarily a quaternionic representation: it admits an invariant quaternionic structure, i.e., an antilinear equivariant map
$j\colon V\to V$
which satisfies
$j^{2}=-1.$
A direct sum of real and quaternionic representations is neither real nor quaternionic in general.
A representation on a complex vector space can also be isomorphic to the dual representation of its complex conjugate. This happens precisely when the representation admits a nondegenerate invariant sesquilinear form, e.g. a hermitian form. Such representations are sometimes said to be complex or (pseudo-)hermitian.
Frobenius-Schur indicator
Main article: Frobenius-Schur indicator
A criterion (for compact groups G) for reality of irreducible representations in terms of character theory is based on the Frobenius-Schur indicator defined by
$\int _{g\in G}\chi (g^{2})\,d\mu $
where χ is the character of the representation and μ is the Haar measure with μ(G) = 1. For a finite group, this is given by
${1 \over |G|}\sum _{g\in G}\chi (g^{2}).$
The indicator may take the values 1, 0 or −1. If the indicator is 1, then the representation is real. If the indicator is zero, the representation is complex (hermitian),[1] and if the indicator is −1, the representation is quaternionic.
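As a small illustrative computation (a Python sketch; the character of the two-dimensional standard representation of S3 is χ(σ) = fix(σ) − 1, where fix counts fixed points):

```python
from itertools import permutations

G = list(permutations(range(3)))          # the symmetric group S3

def square(p):
    # the permutation sigma^2 (composition of p with itself)
    return tuple(p[p[i]] for i in range(len(p)))

def chi(p):
    # character of the 2-dimensional standard representation of S3
    return sum(p[i] == i for i in range(len(p))) - 1

indicator = sum(chi(square(p)) for p in G) / len(G)
assert indicator == 1   # indicator 1: the standard representation of S3 is real
```

This agrees with the statement below that all representations of the symmetric groups are real.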
Examples
All representations of the symmetric groups are real (and in fact rational), since we can build a complete set of irreducible representations using Young tableaux.
All representations of the rotation groups on odd-dimensional spaces are real, since they all appear as subrepresentations of tensor products of copies of the fundamental representation, which is real.
Further examples of real representations are the spinor representations of the spin groups in 8k−1, 8k, and 8k+1 dimensions for k = 1, 2, 3 ... This periodicity modulo 8 is known in mathematics not only in the theory of Clifford algebras, but also in algebraic topology, in KO-theory; see spin representation.
Notes
1. Any complex representation V of a compact group has an invariant hermitian form, so the significance of zero indicator is that there is no invariant nondegenerate complex bilinear form on V.
References
• Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103..
• Serre, Jean-Pierre (1977), Linear Representations of Finite Groups, Springer-Verlag, ISBN 978-0-387-90190-9.
Real structure
In mathematics, a real structure on a complex vector space is a way to decompose the complex vector space into the direct sum of two real vector spaces. The prototype of such a structure is the field of complex numbers itself, considered as a complex vector space over itself and with the conjugation map $\sigma :{\mathbb {C} }\to {\mathbb {C} }\,$, with $\sigma (z)={\bar {z}}$, giving the "canonical" real structure on ${\mathbb {C} }\,$, that is ${\mathbb {C} }={\mathbb {R} }\oplus i{\mathbb {R} }\,$.
The conjugation map is antilinear: $\sigma (\lambda z)={\bar {\lambda }}\sigma (z)\,$ and $\sigma (z_{1}+z_{2})=\sigma (z_{1})+\sigma (z_{2})\,$.
Vector space
A real structure on a complex vector space V is an antilinear involution $\sigma :V\to V$. A real structure defines a real subspace $V_{\mathbb {R} }\subset V$, its fixed locus, and the natural map
$V_{\mathbb {R} }\otimes _{\mathbb {R} }{\mathbb {C} }\to V$
is an isomorphism. Conversely any vector space that is the complexification of a real vector space has a natural real structure.
One first notes that every complex space V has a realification obtained by taking the same vectors as in the original set and restricting the scalars to be real. If $t\in V\,$ and $t\neq 0$ then the vectors $t\,$ and $it\,$ are linearly independent in the realification of V. Hence:
$\dim _{\mathbb {R} }V=2\dim _{\mathbb {C} }V$
Naturally, one would wish to represent V as the direct sum of two real vector spaces, the "real and imaginary parts of V". There is no canonical way of doing this: such a splitting is an additional real structure in V. It may be introduced as follows.[1] Let $\sigma :V\to V\,$ be an antilinear map such that $\sigma \circ \sigma =id_{V}\,$, that is an antilinear involution of the complex space V. Any vector $v\in V\,$ can be written ${v=v^{+}+v^{-}}\,$, where $v^{+}={1 \over {2}}(v+\sigma v)$ and $v^{-}={1 \over {2}}(v-\sigma v)\,$.
Therefore, one gets a direct sum of vector spaces $V=V^{+}\oplus V^{-}\,$ where:
$V^{+}=\{v\in V|\sigma v=v\}$ and $V^{-}=\{v\in V|\sigma v=-v\}\,$.
Both sets $V^{+}\,$ and $V^{-}\,$ are real vector spaces. The linear map $K:V^{+}\to V^{-}\,$, where $K(t)=it\,$, is an isomorphism of real vector spaces, whence:
$\dim _{\mathbb {R} }V^{+}=\dim _{\mathbb {R} }V^{-}=\dim _{\mathbb {C} }V\,$.
The first factor $V^{+}\,$ is also denoted by $V_{\mathbb {R} }\,$ and is left invariant by $\sigma \,$, that is $\sigma (V_{\mathbb {R} })\subset V_{\mathbb {R} }\,$. The second factor $V^{-}\,$ is usually denoted by $iV_{\mathbb {R} }\,$. The direct sum $V=V^{+}\oplus V^{-}\,$ reads now as:
$V=V_{\mathbb {R} }\oplus iV_{\mathbb {R} }\,$,
i.e. as the direct sum of the "real" $V_{\mathbb {R} }\,$ and "imaginary" $iV_{\mathbb {R} }\,$ parts of V. This construction strongly depends on the choice of an antilinear involution of the complex vector space V. The complexification of the real vector space $V_{\mathbb {R} }\,$, i.e., $V^{\mathbb {C} }=V_{\mathbb {R} }\otimes _{\mathbb {R} }\mathbb {C} \,$ admits a natural real structure and hence is canonically isomorphic to the direct sum of two copies of $V_{\mathbb {R} }\,$:
$V_{\mathbb {R} }\otimes _{\mathbb {R} }\mathbb {C} =V_{\mathbb {R} }\oplus iV_{\mathbb {R} }\,$.
This gives a natural linear isomorphism $V_{\mathbb {R} }\otimes _{\mathbb {R} }\mathbb {C} \to V\,$ between complex vector spaces with a given real structure.
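The decomposition above works for any antilinear involution, not just coordinatewise conjugation. A NumPy sketch with a non-standard involution on C² (swap coordinates, then conjugate; this choice of σ is an illustrative assumption):

```python
import numpy as np

def sigma(v):
    # an antilinear involution on C^2: swap the coordinates and conjugate
    return np.conj(v[::-1])

v = np.array([1.0 + 2.0j, 3.0 - 1.0j])
assert np.allclose(sigma(sigma(v)), v)                       # involution
assert np.allclose(sigma(2j * v), np.conj(2j) * sigma(v))    # antilinearity

v_plus = (v + sigma(v)) / 2      # in V^+ : sigma(v_plus) = v_plus
v_minus = (v - sigma(v)) / 2     # in V^- : sigma(v_minus) = -v_minus
assert np.allclose(sigma(v_plus), v_plus)
assert np.allclose(sigma(v_minus), -v_minus)
assert np.allclose(v_plus + v_minus, v)                      # V = V^+ (+) V^-
```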
A real structure on a complex vector space V, that is an antilinear involution $\sigma :V\to V\,$, may be equivalently described in terms of the linear map ${\hat {\sigma }}:V\to {\bar {V}}\,$ from the vector space $V\,$ to the complex conjugate vector space ${\bar {V}}\,$ defined by
$v\mapsto {\hat {\sigma }}(v):={\overline {\sigma (v)}}\,$.[2]
Algebraic variety
For an algebraic variety defined over a subfield of the real numbers, the real structure is the complex conjugation acting on the points of the variety in complex projective or affine space. Its fixed locus is the space of real points of the variety (which may be empty).
Scheme
For a scheme defined over a subfield of the real numbers, complex conjugation is in a natural way a member of the Galois group of the algebraic closure of the base field. The real structure is the Galois action of this conjugation on the extension of the scheme over the algebraic closure of the base field. The real points are the points whose residue field is fixed (which may be empty).
Reality structure
In mathematics, a reality structure on a complex vector space V is a decomposition of V into two real subspaces, called the real and imaginary parts of V:
$V=V_{\mathbb {R} }\oplus iV_{\mathbb {R} }.$
Here VR is a real subspace of V, i.e. a subspace of V considered as a vector space over the real numbers. If V has complex dimension n (real dimension 2n), then VR must have real dimension n.
The standard reality structure on the vector space $\mathbb {C} ^{n}$ is the decomposition
$\mathbb {C} ^{n}=\mathbb {R} ^{n}\oplus i\,\mathbb {R} ^{n}.$
In the presence of a reality structure, every vector in V has a real part and an imaginary part, each of which is a vector in VR:
$v=\operatorname {Re} \{v\}+i\,\operatorname {Im} \{v\}$
In this case, the complex conjugate of a vector v is defined as follows:
${\overline {v}}=\operatorname {Re} \{v\}-i\,\operatorname {Im} \{v\}$
This map $v\mapsto {\overline {v}}$ is an antilinear involution, i.e.
${\overline {\overline {v}}}=v,\quad {\overline {v+w}}={\overline {v}}+{\overline {w}},\quad {\text{and}}\quad {\overline {\alpha v}}={\overline {\alpha }}\,{\overline {v}}.$
Conversely, given an antilinear involution $v\mapsto c(v)$ on a complex vector space V, it is possible to define a reality structure on V as follows. Let
$\operatorname {Re} \{v\}={\frac {1}{2}}\left(v+c(v)\right),$
and define
$V_{\mathbb {R} }=\left\{\operatorname {Re} \{v\}\mid v\in V\right\}.$
Then
$V=V_{\mathbb {R} }\oplus iV_{\mathbb {R} }.$
This is actually the decomposition of V into the eigenspaces of the real linear operator c. The eigenvalues of c are +1 and −1, with eigenspaces VR and iVR, respectively. Typically, the operator c itself, rather than the eigenspace decomposition it entails, is referred to as the reality structure on V.
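For the standard reality structure on C^n, the whole construction is one line of NumPy (an illustrative sketch):

```python
import numpy as np

v = np.array([1.0 + 2.0j, -3.0 + 0.5j])
c = np.conj                       # the standard antilinear involution on C^n

re = (v + c(v)) / 2               # Re{v}, a vector in R^n
im = (v - c(v)) / 2j              # Im{v}, also a vector in R^n
assert np.allclose(re.imag, 0) and np.allclose(im.imag, 0)
assert np.allclose(re + 1j * im, v)            # v = Re{v} + i Im{v}
assert np.allclose(c(v), re - 1j * im)         # the complex conjugate of v
```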
See also
• Antilinear map
• Canonical complex conjugation map
• Complex conjugate
• Complex conjugate vector space
• Complexification
• Linear complex structure
• Linear map
• Sesquilinear form
• Spinor calculus
Notes
1. Budinich, P. and Trautman, A. The Spinorial Chessboard. Springer-Verlag, 1988, p. 29.
2. Budinich, P. and Trautman, A. The Spinorial Chessboard. Springer-Verlag, 1988, p. 29.
References
• Horn and Johnson, Matrix Analysis, Cambridge University Press, 1985. ISBN 0-521-38632-2. (antilinear maps are discussed in section 4.6).
• Budinich, P. and Trautman, A. The Spinorial Chessboard. Springer-Verlag, 1988. ISBN 0-387-19078-3. (antilinear maps are discussed in section 3.3).
• Penrose, Roger; Rindler, Wolfgang (1986), Spinors and space-time. Vol. 2, Cambridge Monographs on Mathematical Physics, Cambridge University Press, ISBN 978-0-521-25267-6, MR 0838301
Real tree
In mathematics, real trees (also called $\mathbb {R} $-trees) are a class of metric spaces generalising simplicial trees. They arise naturally in many mathematical contexts, in particular geometric group theory and probability theory. They are also the simplest examples of Gromov hyperbolic spaces.
Definition and examples
Formal definition
A metric space $X$ is a real tree if it is a geodesic space where every triangle is a tripod. That is, for every three points $x,y,\rho \in X$ there exists a point $c=x\wedge y$ such that the geodesic segments $[\rho ,x],[\rho ,y]$ intersect in the segment $[\rho ,c]$ and also $c\in [x,y]$. This definition is equivalent to $X$ being a "zero-hyperbolic space" in the sense of Gromov (all triangles are "zero-thin"). Real trees can also be characterised by a topological property. A metric space $X$ is a real tree if for any pair of points $x,y\in X$ all topological embeddings $\sigma $ of the segment $[0,1]$ into $X$ such that $\sigma (0)=x,\,\sigma (1)=y$ have the same image (which is then a geodesic segment from $x$ to $y$).
Simple examples
• If $X$ is a connected graph with the combinatorial metric then it is a real tree if and only if it is a tree (i.e. it has no cycles). Such a tree is often called a simplicial tree. They are characterised by the following topological property: a real tree $T$ is simplicial if and only if the set of singular points of $X$ (points whose complement in $X$ has three or more connected components) is closed and discrete in $X$.
• The $\mathbb {R} $-tree obtained in the following way is nonsimplicial. Start with the interval [0, 2] and glue, for each positive integer n, an interval of length 1/n to the point 1 − 1/n in the original interval. The set of singular points is discrete, but fails to be closed since 1 is an ordinary point in this $\mathbb {R} $-tree. Gluing an interval to 1 would result in a closed set of singular points at the expense of discreteness.
• The Paris metric makes the plane into a real tree. It is defined as follows: one fixes an origin $P$, and if two points are on the same ray from $P$, their distance is defined as the Euclidean distance. Otherwise, their distance is defined to be the sum of the Euclidean distances of these two points to the origin $P$.
• The plane under the Paris metric is an example of a hedgehog space, a collection of line segments joined at a common endpoint. Any such space is a real tree.
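The Paris metric is straightforward to implement, and one can spot-check the triangle inequality on random points (a Python sketch; the tolerance and function name are ours):

```python
import math, random

def paris(p, q):
    # Paris metric with origin at (0, 0): Euclidean distance along a common
    # ray from the origin, otherwise travel through the origin
    cross = p[0] * q[1] - p[1] * q[0]
    dot = p[0] * q[0] + p[1] * q[1]
    if abs(cross) < 1e-12 and dot >= 0:     # p and q lie on the same ray
        return math.dist(p, q)
    return math.hypot(*p) + math.hypot(*q)

random.seed(1)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(12)]
for x in pts:
    for y in pts:
        for z in pts:
            assert paris(x, z) <= paris(x, y) + paris(y, z) + 1e-9
```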
Characterizations
Here are equivalent characterizations of real trees which can be used as definitions:
1) (similar to trees as graphs) A real tree is a geodesic metric space which contains no subset homeomorphic to a circle.[1]
2) A real tree is a connected metric space $(X,d)$ which has the four points condition[2] (see figure):
For all $x,y,z,t\in X,$ $d(x,y)+d(z,t)\leq \max[d(x,z)+d(y,t)\,;\,d(x,t)+d(y,z)]$.
3) A real tree is a connected 0-hyperbolic metric space[3] (see figure). Formally:
For all $x,y,z,t\in X,$ $(x,y)_{t}\geq \min[(x,z)_{t}\,;\,(y,z)_{t}]$.
4) (similar to the characterization of Galton-Watson trees by the contour process). Consider a positive excursion of a function. In other words, let $e$ be a continuous real-valued function and $[a,b]$ an interval such that $e(a)=e(b)=0$ and $e(t)>0$ for $t\in ]a,b[$.
For $x,y\in [a,b]$, $x\leq y$, define a pseudometric and an equivalence relation with:
$d_{e}(x,y):=e(x)+e(y)-2\min(e(z)\,;z\in [x,y]),$
$x\sim _{e}y\Leftrightarrow d_{e}(x,y)=0.$
Then, the quotient space $([a,b]/\sim _{e}\,,\,d_{e})$ is a real tree.[3] Intuitively, the local minima of the excursion e are the parents of the local maxima. Another visual way to construct the real tree from an excursion is to "put glue" under the curve of e, and "bend" this curve, identifying the glued points (see animation).
Examples
Real trees often appear, in various situations, as limits of more classical metric spaces.
Brownian trees
A Brownian tree[4] is a stochastic process whose value is a (non-simplicial) real tree almost surely. Brownian trees arise as limits of various random processes on finite trees.[5]
Ultralimits of metric spaces
Any ultralimit of a sequence $(X_{i})$ of $\delta _{i}$-hyperbolic spaces with $\delta _{i}\to 0$ is a real tree. In particular, the asymptotic cone of any hyperbolic space is a real tree.
Limit of group actions
Let $G$ be a group. For a sequence of based $G$-spaces $(X_{i},*_{i},\rho _{i})$ there is a notion of convergence to a based $G$-space $(X_{\infty },x_{\infty },\rho _{\infty })$ due to M. Bestvina and F. Paulin. When the spaces are hyperbolic and the actions are unbounded the limit (if it exists) is a real tree.[6]
A simple example is obtained by taking $G=\pi _{1}(S)$ where $S$ is a compact surface, and $X_{i}$ the universal cover of $S$ with the metric $i\rho $ (where $\rho $ is a fixed hyperbolic metric on $S$).
This is useful to produce actions of hyperbolic groups on real trees. Such actions are analyzed using the so-called Rips machine. A case of particular interest is the study of degeneration of groups acting properly discontinuously on a real hyperbolic space (this predates Rips', Bestvina's and Paulin's work and is due to J. Morgan and P. Shalen[7]).
Algebraic groups
If $F$ is a field with an ultrametric valuation then the Bruhat–Tits building of $\mathrm {SL} _{2}(F)$ is a real tree. It is simplicial if and only if the valuation is discrete.
Generalisations
$\Lambda $-trees
If $\Lambda $ is a totally ordered abelian group there is a natural notion of a distance with values in $\Lambda $ (classical metric spaces correspond to $\Lambda =\mathbb {R} $). There is a notion of $\Lambda $-tree[8] which recovers simplicial trees when $\Lambda =\mathbb {Z} $ and real trees when $\Lambda =\mathbb {R} $. The structure of finitely presented groups acting freely on $\Lambda $-trees was described.[9] In particular, such a group acts freely on some $\mathbb {R} ^{n}$-tree.
Real buildings
The axioms for a building can be generalized to give a definition of a real building. These arise for example as asymptotic cones of higher-rank symmetric spaces or as Bruhat-Tits buildings of higher-rank groups over valued fields.
See also
• Dendroid (topology)
• Tree-graded space
References
1. Chiswell, Ian (2001). Introduction to [lambda]-trees. Singapore: World Scientific. ISBN 978-981-281-053-3. OCLC 268962256.
2. Peter Buneman, A Note on the Metric Properties of Trees, Journal of combinatorial theory, B (17), p. 48-50, 1974.
3. Evans, Steven N. (2005). Probability and Real Trees. École d'Eté de Probabilités de Saint-Flour XXXV.
4. Aldous, D. (1991), "The continuum random tree I", Annals of Probability, 19: 1–28, doi:10.1214/aop/1176990534
5. Aldous, D. (1991), "The continuum random tree III", Annals of Probability, 21: 248–289
6. Bestvina, Mladen (2002), "$\mathbb {R} $-trees in topology, geometry and group theory", Handbook of Geometric Topology, Elsevier, pp. 55–91, ISBN 9780080532851
7. Shalen, Peter B. (1987), "Dendrology of groups: an introduction", in Gersten, S. M. (ed.), Essays in Group Theory, Math. Sci. Res. Inst. Publ., vol. 8, Springer-Verlag, pp. 265–319, ISBN 978-0-387-96618-2, MR 0919830
8. Chiswell, Ian (2001), Introduction to Λ-trees, River Edge, NJ: World Scientific Publishing Co. Inc., ISBN 981-02-4386-3, MR 1851337
9. Kharlampovich, O.; Myasnikov, A.; Serbin, D. (2013), "Actions, length functions and non-archimedean words", IJAC, 23 (2).
|
Wikipedia
|
Real-valued function
In mathematics, a real-valued function is a function whose values are real numbers. In other words, it is a function that assigns a real number to each member of its domain.
Real-valued functions of a real variable (commonly called real functions) and real-valued functions of several real variables are the main object of study of calculus and, more generally, real analysis. In particular, many function spaces consist of real-valued functions.
Algebraic structure
Let ${\mathcal {F}}(X,{\mathbb {R} })$ be the set of all functions from a set X to real numbers $\mathbb {R} $. Because $\mathbb {R} $ is a field, ${\mathcal {F}}(X,{\mathbb {R} })$ may be turned into a vector space and a commutative algebra over the reals with the following operations:
• $f+g:x\mapsto f(x)+g(x)$ – vector addition
• $\mathbf {0} :x\mapsto 0$ – additive identity
• $cf:x\mapsto cf(x),\quad c\in \mathbb {R} $ – scalar multiplication
• $fg:x\mapsto f(x)g(x)$ – pointwise multiplication
These operations extend to partial functions from X to $\mathbb {R} ,$ with the restriction that the partial functions f + g and f g are defined only if the domains of f and g have a nonempty intersection; in this case, their domain is the intersection of the domains of f and g.
Also, since $\mathbb {R} $ is an ordered set, there is a partial order
• $\ f\leq g\quad \iff \quad \forall x:f(x)\leq g(x),$
on ${\mathcal {F}}(X,{\mathbb {R} }),$ which makes ${\mathcal {F}}(X,{\mathbb {R} })$ a partially ordered ring.
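These pointwise operations can be illustrated directly with callables; a minimal Python sketch (the helper names f_add, f_scale, f_mul, and leq_on are ad hoc, and the partial order is only sampled at finitely many points rather than verified for all x):

```python
def f_add(f, g):
    """Pointwise sum (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def f_scale(c, f):
    """Scalar multiple (c f)(x) = c * f(x)."""
    return lambda x: c * f(x)

def f_mul(f, g):
    """Pointwise product (f g)(x) = f(x) * g(x)."""
    return lambda x: f(x) * g(x)

zero = lambda x: 0  # additive identity of the vector space F(X, R)

def leq_on(f, g, points):
    """Spot-check the partial order f <= g on a finite sample of points."""
    return all(f(x) <= g(x) for x in points)
```

For example, with f(x) = x and g(x) = x², f_add(f, g)(2) evaluates to 6 and leq_on(f, g, [1, 2, 3]) holds, since x ≤ x² for x ≥ 1.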
Measurable
See also: Borel function
The σ-algebra of Borel sets is an important structure on the real numbers. If X is equipped with a σ-algebra and a function f is such that the preimage f −1(B) of every Borel set B belongs to that σ-algebra, then f is said to be measurable. Measurable functions also form a vector space and an algebra as explained above in § Algebraic structure.
Moreover, a set (family) of real-valued functions on X can define a σ-algebra on X, namely the one generated by all preimages of all Borel sets (or of intervals only; the choice does not matter). This is how σ-algebras arise in (Kolmogorov's) probability theory, where the real-valued functions on the sample space Ω are the real-valued random variables.
Continuous
Real numbers form a topological space and a complete metric space. Continuous real-valued functions (which implies that X is a topological space) are important in the theories of topological spaces and of metric spaces. The extreme value theorem states that every continuous real-valued function on a compact space attains its global maximum and minimum.
The concept of metric space itself is defined with a real-valued function of two variables, the metric, which is continuous. The space of continuous functions on a compact Hausdorff space has a particular importance. Convergent sequences also can be considered as real-valued continuous functions on a special topological space.
Continuous functions also form a vector space and an algebra as explained above in § Algebraic structure, and are a subclass of measurable functions because any topological space has the σ-algebra generated by open (or closed) sets.
Smooth
Main article: Smooth function
Real numbers are used as the codomain to define smooth functions. The domain of a real smooth function can be the real coordinate space (which yields a real multivariable function), a topological vector space,[1] an open subset of one of these, or a smooth manifold.
Spaces of smooth functions also are vector spaces and algebras as explained above in § Algebraic structure and are subspaces of the space of continuous functions.
Appearances in measure theory
A measure on a set is a non-negative real-valued functional on a σ-algebra of subsets.[2] Lp spaces on sets with a measure are defined from the aforementioned real-valued measurable functions, although they are actually quotient spaces. More precisely, whereas a function satisfying an appropriate summability condition defines an element of an Lp space, in the opposite direction, for any f ∈ Lp(X) and x ∈ X that is not an atom, the value f(x) is undefined. Still, real-valued Lp spaces retain some of the structure described above in § Algebraic structure. Each Lp space is a vector space and has a partial order, and there exists a pointwise multiplication of "functions" which changes p, namely
$\cdot :L^{1/\alpha }\times L^{1/\beta }\to L^{1/(\alpha +\beta )},\quad 0\leq \alpha ,\beta \leq 1,\quad \alpha +\beta \leq 1.$
For example, pointwise product of two L2 functions belongs to L1.
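For a finite set with counting measure, this inclusion is just the Cauchy–Schwarz (more generally, Hölder) inequality applied to the finitely many values; a small Python sketch with α = β = 1/2 (the helper names norm_p and pointwise_product are ad hoc):

```python
def norm_p(values, p):
    """L^p norm of a "function" given by its finitely many values
    (counting measure on a finite set)."""
    return sum(abs(v) ** p for v in values) ** (1.0 / p)

def pointwise_product(f, g):
    """Pointwise product of two such functions."""
    return [a * b for a, b in zip(f, g)]

f = [1.0, -2.0, 3.0]
g = [0.5, 1.0, -1.0]

# Hölder with alpha = beta = 1/2: ||f g||_1 <= ||f||_2 * ||g||_2,
# so the product of two L^2 "functions" lies in L^1.
assert norm_p(pointwise_product(f, g), 1) <= norm_p(f, 2) * norm_p(g, 2)
```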
Other appearances
Other contexts where real-valued functions and their special properties are used include monotonic functions (on ordered sets), convex functions (on vector and affine spaces), harmonic and subharmonic functions (on Riemannian manifolds), analytic functions (usually of one or more real variables), algebraic functions (on real algebraic varieties), and polynomials (of one or more real variables).
See also
• Real analysis
• Partial differential equations, a major user of real-valued functions
• Norm (mathematics)
• Scalar (mathematics)
Footnotes
1. Different definitions of derivative exist in general, but for finite dimensions they result in equivalent definitions of classes of smooth functions.
2. Actually, a measure may have values in [0, +∞]: see extended real number line.
References
• Apostol, Tom M. (1974). Mathematical Analysis (2nd ed.). Addison–Wesley. ISBN 978-0-201-00288-1.
• Gerald Folland, Real Analysis: Modern Techniques and Their Applications, Second Edition, John Wiley & Sons, Inc., 1999, ISBN 0-471-31716-0.
• Rudin, Walter (1976). Principles of Mathematical Analysis (3rd ed.). New York: McGraw-Hill. ISBN 978-0-07-054235-8.
External links
Weisstein, Eric W. "Real Function". MathWorld.
Real analysis
In mathematics, the branch of real analysis studies the behavior of real numbers, sequences and series of real numbers, and real functions.[1] Some particular properties of real-valued sequences and functions that real analysis studies include convergence, limits, continuity, smoothness, differentiability and integrability.
Real analysis is distinguished from complex analysis, which deals with the study of complex numbers and their functions.
Scope
Construction of the real numbers
Main article: Construction of the real numbers
The theorems of real analysis rely on the properties of the real number system, which must be established. The real number system consists of an uncountable set ($\mathbb {R} $), together with two binary operations denoted + and ⋅, and an order denoted <. The operations make the real numbers a field, and, along with the order, an ordered field. The real number system is the unique complete ordered field, in the sense that any other complete ordered field is isomorphic to it. Intuitively, completeness means that there are no 'gaps' in the real numbers. This property distinguishes the real numbers from other ordered fields (e.g., the rational numbers $\mathbb {Q} $) and is critical to the proof of several key properties of functions of the real numbers. The completeness of the reals is often conveniently expressed as the least upper bound property (see below).
Order properties of the real numbers
The real numbers have various lattice-theoretic properties that are absent in the complex numbers. Also, the real numbers form an ordered field, in which sums and products of positive numbers are also positive. Moreover, the ordering of the real numbers is total, and the real numbers have the least upper bound property:
Every nonempty subset of $\mathbb {R} $ that has an upper bound has a least upper bound that is also a real number.
These order-theoretic properties lead to a number of fundamental results in real analysis, such as the monotone convergence theorem, the intermediate value theorem and the mean value theorem.
However, while the results in real analysis are stated for real numbers, many of these results can be generalized to other mathematical objects. In particular, many ideas in functional analysis and operator theory generalize properties of the real numbers – such generalizations include the theories of Riesz spaces and positive operators. Mathematicians also consider real and imaginary parts of complex sequences, or pointwise evaluation of operator sequences.
Topological properties of the real numbers
Many of the theorems of real analysis are consequences of the topological properties of the real number line. The order properties of the real numbers described above are closely related to these topological properties. As a topological space, the real numbers has a standard topology, which is the order topology induced by order $<$. Alternatively, by defining the metric or distance function $d:\mathbb {R} \times \mathbb {R} \to \mathbb {R} _{\geq 0}$ using the absolute value function as $d(x,y)=|x-y|$, the real numbers become the prototypical example of a metric space. The topology induced by metric $d$ turns out to be identical to the standard topology induced by order $<$. Theorems like the intermediate value theorem that are essentially topological in nature can often be proved in the more general setting of metric or topological spaces rather than in $\mathbb {R} $ only. Often, such proofs tend to be shorter or simpler compared to classical proofs that apply direct methods.
Sequences
Main article: Sequence
A sequence is a function whose domain is a countable, totally ordered set.[2] The domain is usually taken to be the natural numbers,[3] although it is occasionally convenient to also consider bidirectional sequences indexed by the set of all integers, including negative indices.
Of interest in real analysis, a real-valued sequence, here indexed by the natural numbers, is a map $a:\mathbb {N} \to \mathbb {R} :n\mapsto a_{n}$. Each $a(n)=a_{n}$ is referred to as a term (or, less commonly, an element) of the sequence. A sequence is rarely denoted explicitly as a function; instead, by convention, it is almost always notated as if it were an ordered ∞-tuple, with individual terms or a general term enclosed in parentheses:[4]
$(a_{n})=(a_{n})_{n\in \mathbb {N} }=(a_{1},a_{2},a_{3},\dots ).$
A sequence that tends to a limit (i.e., $ \lim _{n\to \infty }a_{n}$ exists) is said to be convergent; otherwise it is divergent. (See the section on limits and convergence for details.) A real-valued sequence $(a_{n})$ is bounded if there exists $M\in \mathbb {R} $ such that $|a_{n}|<M$ for all $n\in \mathbb {N} $. A real-valued sequence $(a_{n})$ is monotonically increasing or decreasing if
$a_{1}\leq a_{2}\leq a_{3}\leq \cdots $
or
$a_{1}\geq a_{2}\geq a_{3}\geq \cdots $
holds, respectively. If either holds, the sequence is said to be monotonic. The monotonicity is strict if the chained inequalities still hold with $\leq $ or $\geq $ replaced by < or >.
Given a sequence $(a_{n})$, another sequence $(b_{k})$ is a subsequence of $(a_{n})$ if $b_{k}=a_{n_{k}}$ for all positive integers $k$ and $(n_{k})$ is a strictly increasing sequence of natural numbers.
Limits and convergence
Main article: Limit (mathematics)
Roughly speaking, a limit is the value that a function or a sequence "approaches" as the input or index approaches some value.[5] (This value can include the symbols $\pm \infty $ when addressing the behavior of a function or sequence as the variable increases or decreases without bound.) The idea of a limit is fundamental to calculus (and mathematical analysis in general) and its formal definition is used in turn to define notions like continuity, derivatives, and integrals. (In fact, the study of limiting behavior has been used as a characteristic that distinguishes calculus and mathematical analysis from other branches of mathematics.)
The concept of limit was informally introduced for functions by Newton and Leibniz at the end of the 17th century, for building infinitesimal calculus. For sequences, the concept was introduced by Cauchy and made rigorous at the end of the 19th century by Bolzano and Weierstrass, who gave the modern ε-δ definition, which follows.
Definition. Let $f$ be a real-valued function defined on $E\subset \mathbb {R} $. We say that $f(x)$ tends to $L$ as $x$ approaches $x_{0}$, or that the limit of $f(x)$ as $x$ approaches $x_{0}$ is $L$ if, for any $\varepsilon >0$, there exists $\delta >0$ such that for all $x\in E$, $0<|x-x_{0}|<\delta $ implies that $|f(x)-L|<\varepsilon $. We write this symbolically as
$f(x)\to L\ \ {\text{as}}\ \ x\to x_{0},$
or as
$\lim _{x\to x_{0}}f(x)=L.$
Intuitively, this definition can be thought of in the following way: We say that $f(x)\to L$ as $x\to x_{0}$, when, given any positive number $\varepsilon $, no matter how small, we can always find a $\delta $, such that we can guarantee that $f(x)$ and $L$ are less than $\varepsilon $ apart, as long as $x$ (in the domain of $f$) is a real number that is less than $\delta $ away from $x_{0}$ but distinct from $x_{0}$. The purpose of the last stipulation, which corresponds to the condition $0<|x-x_{0}|$ in the definition, is to ensure that $ \lim _{x\to x_{0}}f(x)=L$ does not imply anything about the value of $f(x_{0})$ itself. Actually, $x_{0}$ does not even need to be in the domain of $f$ in order for $ \lim _{x\to x_{0}}f(x)$ to exist.
In a slightly different but related context, the concept of a limit applies to the behavior of a sequence $(a_{n})$ when $n$ becomes large.
Definition. Let $(a_{n})$ be a real-valued sequence. We say that $(a_{n})$ converges to $a$ if, for any $\varepsilon >0$, there exists a natural number $N$ such that $n\geq N$ implies that $|a-a_{n}|<\varepsilon $. We write this symbolically as
$a_{n}\to a\ \ {\text{as}}\ \ n\to \infty ,$
or as
$\lim _{n\to \infty }a_{n}=a;$
if $(a_{n})$ fails to converge, we say that $(a_{n})$ diverges.
Generalizing to a real-valued function of a real variable, a slight modification of this definition (replacement of sequence $(a_{n})$ and term $a_{n}$ by function $f$ and value $f(x)$ and natural numbers $N$ and $n$ by real numbers $M$ and $x$, respectively) yields the definition of the limit of $f(x)$ as $x$ increases without bound, notated $ \lim _{x\to \infty }f(x)$. Reversing the inequality $x\geq M$ to $x\leq M$ gives the corresponding definition of the limit of $f(x)$ as $x$ decreases without bound, $ \lim _{x\to -\infty }f(x)$.
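The quantifier structure of the ε–N definition can be made concrete for a specific sequence; a Python sketch for aₙ = 1/n → 0, where N = ⌊1/ε⌋ + 1 witnesses the definition (only a finite range of indices is checked, so this illustrates rather than proves convergence):

```python
import math

def N_for(eps):
    """An N such that n >= N implies |1/n - 0| < eps, for a_n = 1/n."""
    return math.floor(1.0 / eps) + 1

# Spot-check the definition on a finite range for several eps values.
for eps in (0.1, 0.01, 0.003):
    N = N_for(eps)
    assert all(abs(1.0 / n - 0.0) < eps for n in range(N, N + 1000))
```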
Sometimes, it is useful to conclude that a sequence converges, even though the value to which it converges is unknown or irrelevant. In these cases, the concept of a Cauchy sequence is useful.
Definition. Let $(a_{n})$ be a real-valued sequence. We say that $(a_{n})$ is a Cauchy sequence if, for any $\varepsilon >0$, there exists a natural number $N$ such that $m,n\geq N$ implies that $|a_{m}-a_{n}|<\varepsilon $.
It can be shown that a real-valued sequence is Cauchy if and only if it is convergent. This property of the real numbers is expressed by saying that the real numbers endowed with the standard metric, $(\mathbb {R} ,|\cdot |)$, is a complete metric space. In a general metric space, however, a Cauchy sequence need not converge.
In addition, for real-valued sequences that are monotonic, it can be shown that the sequence is bounded if and only if it is convergent.
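Both facts can be illustrated with the partial sums sₙ = Σ 1/k², which are monotonically increasing and bounded above by 2, hence convergent; the Cauchy criterion certifies this without ever naming the limit (π²/6). A Python sketch, checking the Cauchy condition only over a finite horizon (the definition itself quantifies over all m, n ≥ N, so this is an illustration, not a proof):

```python
def s(n):
    """n-th partial sum of the series sum_{k >= 1} 1/k^2."""
    return sum(1.0 / (k * k) for k in range(1, n + 1))

def is_cauchy_up_to(a, N, eps, horizon):
    """Empirically test |a(m) - a(n)| < eps for all N <= m, n <= horizon."""
    vals = [a(n) for n in range(N, horizon + 1)]
    return max(vals) - min(vals) < eps

# The tail beyond N = 1000 varies by less than 10^-3 up to the horizon,
# while the early partial sums clearly do not.
assert is_cauchy_up_to(s, 1000, 1e-3, 2000)
assert not is_cauchy_up_to(s, 1, 1e-3, 100)
```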
Uniform and pointwise convergence for sequences of functions
Main article: Uniform convergence
In addition to sequences of numbers, one may also speak of sequences of functions on $E\subset \mathbb {R} $, that is, infinite, ordered families of functions $f_{n}:E\to \mathbb {R} $, denoted $(f_{n})_{n=1}^{\infty }$, and their convergence properties. However, in the case of sequences of functions, there are two kinds of convergence, known as pointwise convergence and uniform convergence, that need to be distinguished.
Roughly speaking, pointwise convergence of functions $f_{n}$ to a limiting function $f:E\to \mathbb {R} $, denoted $f_{n}\rightarrow f$, simply means that given any $x\in E$, $f_{n}(x)\to f(x)$ as $n\to \infty $. In contrast, uniform convergence is a stronger type of convergence, in the sense that a uniformly convergent sequence of functions also converges pointwise, but not conversely. Uniform convergence requires members of the family of functions, $f_{n}$, to fall within some error $\varepsilon >0$ of $f$ for every value of $x\in E$, whenever $n\geq N$, for some integer $N$. For a family of functions to uniformly converge, sometimes denoted $f_{n}\rightrightarrows f$, such a value of $N$ must exist for any $\varepsilon >0$ given, no matter how small. Intuitively, we can visualize this situation by imagining that, for a large enough $N$, the functions $f_{N},f_{N+1},f_{N+2},\ldots $ are all confined within a 'tube' of width $2\varepsilon $ about $f$ (that is, between $f-\varepsilon $ and $f+\varepsilon $) for every value in their domain $E$.
The distinction between pointwise and uniform convergence is important when exchanging the order of two limiting operations (e.g., taking a limit, a derivative, or integral) is desired: in order for the exchange to be well-behaved, many theorems of real analysis call for uniform convergence. For example, a sequence of continuous functions (see below) is guaranteed to converge to a continuous limiting function if the convergence is uniform, while the limiting function may not be continuous if convergence is only pointwise. Karl Weierstrass is generally credited for clearly defining the concept of uniform convergence and fully investigating its implications.
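The standard example fₙ(x) = xⁿ on [0, 1) converges pointwise to the zero function but not uniformly, since supₓ|fₙ(x)| = 1 for every n. A Python sketch on a finite grid (an illustration only; the grid and helper names are ad hoc):

```python
def f(n, x):
    """The n-th function of the sequence, f_n(x) = x^n."""
    return x ** n

grid = [i / 1000.0 for i in range(1000)]   # sample points of [0, 1)

def sup_error(n):
    """Largest deviation of f_n from the zero limit over the grid."""
    return max(f(n, x) for x in grid)

# Pointwise: at any fixed x bounded away from 1, f_n(x) is already tiny.
assert f(50, 0.5) < 1e-10
# Not uniform: the sup error over the grid stays close to 1 for every n,
# so no single N can confine all f_n within an eps-tube around 0.
assert sup_error(50) > 0.9 and sup_error(500) > 0.6
```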
Compactness
Main article: Compactness
Compactness is a concept from general topology that plays an important role in many of the theorems of real analysis. The property of compactness is a generalization of the notion of a set being closed and bounded. (In the context of real analysis, these notions are equivalent: a set in Euclidean space is compact if and only if it is closed and bounded.) Briefly, a closed set contains all of its boundary points, while a set is bounded if there exists a real number such that the distance between any two points of the set is less than that number. In $\mathbb {R} $, sets that are closed and bounded, and therefore compact, include the empty set, any finite number of points, closed intervals, and their finite unions. However, this list is not exhaustive; for instance, the set $\{1/n:n\in \mathbb {N} \}\cup \{0\}$ is a compact set; the Cantor ternary set ${\mathcal {C}}\subset [0,1]$ is another example of a compact set. On the other hand, the set $\{1/n:n\in \mathbb {N} \}$ is not compact because it is bounded but not closed, as the boundary point 0 is not a member of the set. The set $[0,\infty )$ is also not compact because it is closed but not bounded.
For subsets of the real numbers, there are several equivalent definitions of compactness.
Definition. A set $E\subset \mathbb {R} $ is compact if it is closed and bounded.
This definition also holds for Euclidean space of any finite dimension, $\mathbb {R} ^{n}$, but it is not valid for metric spaces in general. The equivalence of the definition with the definition of compactness based on subcovers, given later in this section, is known as the Heine-Borel theorem.
A more general definition that applies to all metric spaces uses the notion of a subsequence (see above).
Definition. A set $E$ in a metric space is compact if every sequence in $E$ has a convergent subsequence.
This particular property is known as subsequential compactness. In $\mathbb {R} $, a set is subsequentially compact if and only if it is closed and bounded, making this definition equivalent to the one given above. Subsequential compactness is equivalent to the definition of compactness based on subcovers for metric spaces, but not for topological spaces in general.
The most general definition of compactness relies on the notion of open covers and subcovers, which is applicable to topological spaces (and thus to metric spaces and $\mathbb {R} $ as special cases). In brief, a collection of open sets $U_{\alpha }$ is said to be an open cover of set $X$ if the union of these sets is a superset of $X$. This open cover is said to have a finite subcover if a finite subcollection of the $U_{\alpha }$ could be found that also covers $X$.
Definition. A set $X$ in a topological space is compact if every open cover of $X$ has a finite subcover.
Compact sets are well-behaved with respect to properties like convergence and continuity. For instance, any Cauchy sequence in a compact metric space is convergent. As another example, the image of a compact metric space under a continuous map is also compact.
Continuity
Main article: Continuous function
A function from the set of real numbers to the real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve with no "holes" or "jumps".
There are several ways to make this intuition mathematically rigorous. Several definitions of varying levels of generality can be given. In cases where two or more definitions are applicable, they are readily shown to be equivalent to one another, so the most convenient definition can be used to determine whether a given function is continuous or not. In the first definition given below, $f:I\to \mathbb {R} $ is a function defined on a non-degenerate interval $I$ of the set of real numbers as its domain. Some possibilities include $I=\mathbb {R} $, the whole set of real numbers, an open interval $I=(a,b)=\{x\in \mathbb {R} \mid a<x<b\},$ or a closed interval $I=[a,b]=\{x\in \mathbb {R} \mid a\leq x\leq b\}.$ Here, $a$ and $b$ are distinct real numbers, and we exclude the case of $I$ being empty or consisting of only one point, in particular.
Definition. If $I\subset \mathbb {R} $ is a non-degenerate interval, we say that $f:I\to \mathbb {R} $ is continuous at $p\in I$ if $ \lim _{x\to p}f(x)=f(p)$. We say that $f$ is a continuous map if $f$ is continuous at every $p\in I$.
In contrast to the requirements for $f$ to have a limit at a point $p$, which do not constrain the behavior of $f$ at $p$ itself, the following two conditions, in addition to the existence of $ \lim _{x\to p}f(x)$, must also hold in order for $f$ to be continuous at $p$: (i) $f$ must be defined at $p$, i.e., $p$ is in the domain of $f$; and (ii) $f(x)\to f(p)$ as $x\to p$. The definition above actually applies to any domain $E$ that does not contain an isolated point, or equivalently, $E$ where every $p\in E$ is a limit point of $E$. A more general definition applying to $f:X\to \mathbb {R} $ with a general domain $X\subset \mathbb {R} $ is the following:
Definition. If $X$ is an arbitrary subset of $\mathbb {R} $, we say that $f:X\to \mathbb {R} $ is continuous at $p\in X$ if, for any $\varepsilon >0$, there exists $\delta >0$ such that for all $x\in X$, $|x-p|<\delta $ implies that $|f(x)-f(p)|<\varepsilon $. We say that $f$ is a continuous map if $f$ is continuous at every $p\in X$.
A consequence of this definition is that $f$ is trivially continuous at any isolated point $p\in X$. This somewhat unintuitive treatment of isolated points is necessary to ensure that our definition of continuity for functions on the real line is consistent with the most general definition of continuity for maps between topological spaces (which includes metric spaces and $\mathbb {R} $ in particular as special cases). This definition, which extends beyond the scope of our discussion of real analysis, is given below for completeness.
Definition. If $X$ and $Y$ are topological spaces, we say that $f:X\to Y$ is continuous at $p\in X$ if $f^{-1}(V)$ is a neighborhood of $p$ in $X$ for every neighborhood $V$ of $f(p)$ in $Y$. We say that $f$ is a continuous map if $f^{-1}(U)$ is open in $X$ for every $U$ open in $Y$.
(Here, $f^{-1}(S)$ refers to the preimage of $S\subset Y$ under $f$.)
Uniform continuity
Main article: Uniform continuity
Definition. If $X$ is a subset of the real numbers, we say a function $f:X\to \mathbb {R} $ is uniformly continuous on $X$ if, for any $\varepsilon >0$, there exists a $\delta >0$ such that for all $x,y\in X$, $|x-y|<\delta $ implies that $|f(x)-f(y)|<\varepsilon $.
Explicitly, when a function is uniformly continuous on $X$, the choice of $\delta $ needed to fulfill the definition must work for all of $X$ for a given $\varepsilon $. In contrast, when a function is continuous at every point $p\in X$ (or said to be continuous on $X$), the choice of $\delta $ may depend on both $\varepsilon $ and $p$. In contrast to simple continuity, uniform continuity is a property of a function that only makes sense with a specified domain; to speak of uniform continuity at a single point $p$ is meaningless.
On a compact set, it is easily shown that all continuous functions are uniformly continuous. If $E$ is a bounded noncompact subset of $\mathbb {R} $, then there exists $f:E\to \mathbb {R} $ that is continuous but not uniformly continuous. As a simple example, consider $f:(0,1)\to \mathbb {R} $ defined by $f(x)=1/x$. By choosing points close to 0, we can always make $|f(x)-f(y)|>\varepsilon $ for any single choice of $\delta >0$, for a given $\varepsilon >0$.
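This failure can be exhibited constructively: for every δ the pair x = min(δ, 1/2)/2, y = x/2 lies in (0, 1) within δ of each other, yet |f(x) − f(y)| = 1/x ≥ 4. A Python sketch (the helper name witness is ad hoc):

```python
def f(x):
    return 1.0 / x

def witness(delta):
    """Points of (0, 1) within delta of each other whose images under
    f differ by at least 4, for any delta > 0."""
    x = min(delta, 0.5) / 2.0
    return x, x / 2.0

for delta in (0.5, 0.1, 0.001):
    x, y = witness(delta)
    assert 0 < y < x < 1 and abs(x - y) < delta
    assert abs(f(x) - f(y)) >= 4.0   # so no single delta works for eps = 1
```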
Absolute continuity
Main article: Absolute continuity
Definition. Let $I\subset \mathbb {R} $ be an interval on the real line. A function $f:I\to \mathbb {R} $ is said to be absolutely continuous on $I$ if for every positive number $\varepsilon $, there is a positive number $\delta $ such that whenever a finite sequence of pairwise disjoint sub-intervals $(x_{1},y_{1}),(x_{2},y_{2}),\ldots ,(x_{n},y_{n})$ of $I$ satisfies[6]
$\sum _{k=1}^{n}(y_{k}-x_{k})<\delta $
then
$\sum _{k=1}^{n}|f(y_{k})-f(x_{k})|<\varepsilon .$
Absolutely continuous functions are continuous: consider the case n = 1 in this definition. The collection of all absolutely continuous functions on I is denoted AC(I). Absolute continuity is a fundamental concept in the Lebesgue theory of integration, allowing the formulation of a generalized version of the fundamental theorem of calculus that applies to the Lebesgue integral.
Differentiation
Main articles: Derivative and Differential calculus
The notion of the derivative of a function or differentiability originates from the concept of approximating a function near a given point using the "best" linear approximation. This approximation, if it exists, is unique and is given by the line that is tangent to the function at the given point $a$, and the slope of the line is the derivative of the function at $a$.
A function $f:\mathbb {R} \to \mathbb {R} $ is differentiable at $a$ if the limit
$f'(a)=\lim _{h\to 0}{\frac {f(a+h)-f(a)}{h}}$
exists. This limit is known as the derivative of $f$ at $a$, and the function $f'$, possibly defined on only a subset of $\mathbb {R} $, is the derivative (or derivative function) of $f$. If the derivative exists everywhere, the function is said to be differentiable.
As a simple consequence of the definition, $f$ is continuous at $a$ if it is differentiable there. Differentiability is therefore a stronger regularity condition (condition describing the "smoothness" of a function) than continuity, and it is possible for a function to be continuous on the entire real line but not differentiable anywhere (see Weierstrass's nowhere differentiable continuous function). It is possible to discuss the existence of higher-order derivatives as well, by finding the derivative of a derivative function, and so on.
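The limit defining f′(a) can be watched numerically; a Python sketch with f(x) = x² at a = 3, where the difference quotient equals 2a + h exactly, so the error is h itself:

```python
def diff_quotient(f, a, h):
    """Slope of the secant line through (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

square = lambda x: x * x

# As h shrinks, the secant slopes approach f'(3) = 6.
errors = [abs(diff_quotient(square, 3.0, 10.0 ** -k) - 6.0)
          for k in range(1, 6)]
assert errors == sorted(errors, reverse=True)   # errors shrink with h
assert errors[-1] < 1e-4
```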
One can classify functions by their differentiability class. The class $C^{0}$ (sometimes $C^{0}([a,b])$ to indicate the interval of applicability) consists of all continuous functions. The class $C^{1}$ consists of all differentiable functions whose derivative is continuous; such functions are called continuously differentiable. Thus, a $C^{1}$ function is exactly a function whose derivative exists and is of class $C^{0}$. In general, the classes $C^{k}$ can be defined recursively by declaring $C^{0}$ to be the set of all continuous functions and declaring $C^{k}$ for any positive integer $k$ to be the set of all differentiable functions whose derivative is in $C^{k-1}$. In particular, $C^{k}$ is contained in $C^{k-1}$ for every $k$, and there are examples to show that this containment is strict. Class $C^{\infty }$ is the intersection of the sets $C^{k}$ as $k$ varies over the non-negative integers, and the members of this class are known as the smooth functions. Class $C^{\omega }$ consists of all analytic functions, and is strictly contained in $C^{\infty }$ (see bump function for a smooth function that is not analytic).
Series
Main article: Series (mathematics)
A series formalizes the imprecise notion of taking the sum of an endless sequence of numbers. The idea that taking the sum of an "infinite" number of terms can lead to a finite result was counterintuitive to the ancient Greeks and led to the formulation of a number of paradoxes by Zeno and other philosophers. The modern notion of assigning a value to a series avoids dealing with the ill-defined notion of adding an "infinite" number of terms. Instead, the finite sum of the first $n$ terms of the sequence, known as a partial sum, is considered, and the concept of a limit is applied to the sequence of partial sums as $n$ grows without bound. The series is assigned the value of this limit, if it exists.
Given an (infinite) sequence $(a_{n})$, we can define an associated series as the formal mathematical object $ a_{1}+a_{2}+a_{3}+\cdots =\sum _{n=1}^{\infty }a_{n}$, sometimes simply written as $ \sum a_{n}$. The partial sums of a series $ \sum a_{n}$ are the numbers $ s_{n}=\sum _{j=1}^{n}a_{j}$. A series $ \sum a_{n}$ is said to be convergent if the sequence consisting of its partial sums, $(s_{n})$, is convergent; otherwise it is divergent. The sum of a convergent series is defined as the number $ s=\lim _{n\to \infty }s_{n}$.
The word "sum" is used here in a metaphorical sense as a shorthand for taking the limit of a sequence of partial sums and should not be interpreted as simply "adding" an infinite number of terms. For instance, in contrast to the behavior of finite sums, rearranging the terms of an infinite series may result in convergence to a different number (see the article on the Riemann rearrangement theorem for further discussion).
An example of a convergent series is a geometric series which forms the basis of one of Zeno's famous paradoxes:
$\sum _{n=1}^{\infty }{\frac {1}{2^{n}}}={\frac {1}{2}}+{\frac {1}{4}}+{\frac {1}{8}}+\cdots =1.$
In contrast, the harmonic series has been known since the Middle Ages to be a divergent series:
$\sum _{n=1}^{\infty }{\frac {1}{n}}=1+{\frac {1}{2}}+{\frac {1}{3}}+\cdots =\infty .$
(Here, "$=\infty $" is merely a notational convention to indicate that the partial sums of the series grow without bound.)
A series $ \sum a_{n}$ is said to converge absolutely if $ \sum |a_{n}|$ is convergent. A convergent series $ \sum a_{n}$ for which $ \sum |a_{n}|$ diverges is said to converge non-absolutely.[7] It is easily shown that absolute convergence of a series implies its convergence. On the other hand, an example of a series that converges non-absolutely is
$\sum _{n=1}^{\infty }{\frac {(-1)^{n-1}}{n}}=1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+\cdots =\ln 2.$
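The contrast between the divergent harmonic series and the non-absolutely convergent alternating series can be checked numerically. This is a sketch with illustrative helper names, not a proof:

```python
import math

def harmonic(n):
    """Partial sum of the (divergent) harmonic series 1 + 1/2 + ... + 1/n."""
    return sum(1 / k for k in range(1, n + 1))

def alt_harmonic(n):
    """Partial sum of the alternating series 1 - 1/2 + 1/3 - ..."""
    return sum((-1) ** (k - 1) / k for k in range(1, n + 1))

# Harmonic partial sums grow without bound (roughly like ln n),
# while the alternating partial sums settle near ln 2 ≈ 0.6931.
print(harmonic(10**5))
print(alt_harmonic(10**5), math.log(2))
```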
Taylor series
Main article: Taylor series
The Taylor series of a real or complex-valued function ƒ(x) that is infinitely differentiable at a real or complex number a is the power series
$f(a)+{\frac {f'(a)}{1!}}(x-a)+{\frac {f''(a)}{2!}}(x-a)^{2}+{\frac {f^{(3)}(a)}{3!}}(x-a)^{3}+\cdots ,$
which can be written in the more compact sigma notation as
$\sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}\,(x-a)^{n}$
where n! denotes the factorial of n and ƒ (n)(a) denotes the nth derivative of ƒ evaluated at the point a. The derivative of order zero of ƒ is defined to be ƒ itself, and (x − a)0 and 0! are both defined to be 1. In the case that a = 0, the series is also called a Maclaurin series.
A Taylor series of f about point a may diverge, converge at only the point a, converge for all x such that $|x-a|<R$ (the largest such R for which convergence is guaranteed is called the radius of convergence), or converge on the entire real line. Even a converging Taylor series may converge to a value different from the value of the function at that point. If the Taylor series at a point has a nonzero radius of convergence, and sums to the function in the disc of convergence, then the function is analytic. The analytic functions have many fundamental properties. In particular, an analytic function of a real variable extends naturally to a function of a complex variable. It is in this way that the exponential function, the logarithm, the trigonometric functions and their inverses are extended to functions of a complex variable.
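A concrete instance of this behavior is the Maclaurin series of the exponential function, which has infinite radius of convergence. The following sketch (illustrative names, no external libraries) compares Taylor polynomials of increasing degree against `math.exp`:

```python
import math

def taylor_exp(x, N):
    """Partial sum of the Maclaurin series of e^x up to degree N."""
    return sum(x**n / math.factorial(n) for n in range(N + 1))

x = 1.0
for N in (2, 5, 10):
    print(N, taylor_exp(x, N), math.exp(x))
# For fixed x, the error shrinks factorially as the degree N grows,
# reflecting the remainder term in Taylor's theorem.
```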
Fourier series
Main article: Fourier series
A Fourier series decomposes a periodic function or periodic signal into the sum of a (possibly infinite) set of simple oscillating functions, namely sines and cosines (or complex exponentials). The study of Fourier series belongs to the branch of mathematical analysis known as Fourier analysis.
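As an illustration, the square wave sign(sin x) has the well-known Fourier series $(4/\pi)\sum_{k\ge 1}\sin((2k-1)x)/(2k-1)$, containing only odd harmonics. The sketch below (illustrative names, assuming this standard expansion) evaluates its partial sums:

```python
import math

def square_wave_partial(x, N):
    """Fourier partial sum (N odd harmonics) of the square wave sign(sin x)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * x) / (2 * k - 1) for k in range(1, N + 1)
    )

# With more terms the partial sum approaches 1 on (0, pi),
# apart from the Gibbs overshoot near the jump discontinuities.
print(square_wave_partial(math.pi / 2, 50))
```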
Integration
Integration is a formalization of the problem of finding the area bound by a curve and the related problems of determining the length of a curve or volume enclosed by a surface. The basic strategy to solving problems of this type was known to the ancient Greeks and Chinese, and was known as the method of exhaustion. Generally speaking, the desired area is bounded from above and below, respectively, by increasingly accurate circumscribing and inscribing polygonal approximations whose exact areas can be computed. By considering approximations consisting of a larger and larger ("infinite") number of smaller and smaller ("infinitesimal") pieces, the area bound by the curve can be deduced, as the upper and lower bounds defined by the approximations converge around a common value.
The spirit of this basic strategy can easily be seen in the definition of the Riemann integral, in which the integral is said to exist if upper and lower Riemann (or Darboux) sums converge to a common value as thinner and thinner rectangular slices ("refinements") are considered. Though the machinery used to define it is much more elaborate compared to the Riemann integral, the Lebesgue integral was defined with similar basic ideas in mind. Compared to the Riemann integral, the more sophisticated Lebesgue integral allows area (or length, volume, etc.; termed a "measure" in general) to be defined and computed for much more complicated and irregular subsets of Euclidean space, although there still exist "non-measurable" subsets for which an area cannot be assigned.
Riemann integration
Main article: Riemann integral
The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. Let $[a,b]$ be a closed interval of the real line; then a tagged partition ${\cal {P}}$ of $[a,b]$ is a finite sequence
$a=x_{0}\leq t_{1}\leq x_{1}\leq t_{2}\leq x_{2}\leq \cdots \leq x_{n-1}\leq t_{n}\leq x_{n}=b.\,\!$
This partitions the interval $[a,b]$ into $n$ sub-intervals $[x_{i-1},x_{i}]$ indexed by $i=1,\ldots ,n$, each of which is "tagged" with a distinguished point $t_{i}\in [x_{i-1},x_{i}]$. For a function $f$ bounded on $[a,b]$, we define the Riemann sum of $f$ with respect to tagged partition ${\cal {P}}$ as
$\sum _{i=1}^{n}f(t_{i})\Delta _{i},$
where $\Delta _{i}=x_{i}-x_{i-1}$ is the width of sub-interval $i$. Thus, each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the sub-interval width. The mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, $ \|\Delta _{i}\|=\max _{i=1,\ldots ,n}\Delta _{i}$. We say that the Riemann integral of $f$ on $[a,b]$ is $S$ if for any $\varepsilon >0$ there exists $\delta >0$ such that, for any tagged partition ${\cal {P}}$ with mesh $\|\Delta _{i}\|<\delta $, we have
$\left|S-\sum _{i=1}^{n}f(t_{i})\Delta _{i}\right|<\varepsilon .$
This is sometimes denoted $ {\mathcal {R}}\int _{a}^{b}f=S$. When the chosen tags give the maximum (respectively, minimum) value of each interval, the Riemann sum is known as the upper (respectively, lower) Darboux sum. A function is Darboux integrable if the upper and lower Darboux sums can be made arbitrarily close to each other for a sufficiently small mesh. Although this definition gives the Darboux integral the appearance of being a special case of the Riemann integral, they are, in fact, equivalent, in the sense that a function is Darboux integrable if and only if it is Riemann integrable, and the values of the integrals are equal. In fact, calculus and real analysis textbooks often conflate the two, introducing the definition of the Darboux integral as that of the Riemann integral, because the definition of the former is slightly easier to apply.
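The squeezing of upper and lower Darboux sums can be seen numerically for $f(x)=x^2$ on $[0,1]$, whose integral is $1/3$. Since this $f$ is increasing, the left and right endpoints of each sub-interval serve as the minimizing and maximizing tags. A sketch (uniform partitions only, illustrative names):

```python
def riemann_sum(f, a, b, n, tag):
    """Riemann sum over a uniform partition of [a, b] into n sub-intervals.
    `tag` picks the distinguished point t_i in [x_{i-1}, x_i]."""
    dx = (b - a) / n
    return sum(f(tag(a + i * dx, a + (i + 1) * dx)) * dx for i in range(n))

f = lambda x: x * x
left = lambda lo, hi: lo    # lower Darboux sum for an increasing f
right = lambda lo, hi: hi   # upper Darboux sum for an increasing f

for n in (10, 100, 1000):
    print(n, riemann_sum(f, 0, 1, n, left), riemann_sum(f, 0, 1, n, right))
# Both sums squeeze toward the integral 1/3 as the mesh 1/n shrinks.
```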
The fundamental theorem of calculus asserts that integration and differentiation are inverse operations in a certain sense.
Lebesgue integration and measure
Main article: Lebesgue integral
Lebesgue integration is a mathematical construction that extends the integral to a larger class of functions; it also extends the domains on which these functions can be defined. The concept of a measure, an abstraction of length, area, or volume, is central to the Lebesgue integral and to probability theory.
Distributions
Main article: Distribution (mathematics)
Distributions (or generalized functions) are objects that generalize functions. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative.
Relation to complex analysis
Real analysis is an area of analysis that studies concepts such as sequences and their limits, continuity, differentiation, integration and sequences of functions. By definition, real analysis focuses on the real numbers, often including positive and negative infinity to form the extended real line. Real analysis is closely related to complex analysis, which studies broadly the same properties of complex numbers. In complex analysis, it is natural to define differentiation via holomorphic functions, which have a number of useful properties, such as repeated differentiability, expressibility as power series, and satisfying the Cauchy integral formula.
In real analysis, it is usually more natural to consider differentiable, smooth, or harmonic functions, which are more widely applicable, but may lack some more powerful properties of holomorphic functions. However, results such as the fundamental theorem of algebra are simpler when expressed in terms of complex numbers.
Techniques from the theory of analytic functions of a complex variable are often used in real analysis – such as evaluation of real integrals by residue calculus.
Important results
Important results include the Bolzano–Weierstrass and Heine–Borel theorems, the intermediate value theorem and mean value theorem, Taylor's theorem, the fundamental theorem of calculus, the Arzelà–Ascoli theorem, the Stone–Weierstrass theorem, Fatou's lemma, and the monotone convergence and dominated convergence theorems.
Generalizations and related areas of mathematics
Various ideas from real analysis can be generalized from the real line to broader or more abstract contexts. These generalizations link real analysis to other disciplines and subdisciplines. For instance, generalization of ideas like continuous functions and compactness from real analysis to metric spaces and topological spaces connects real analysis to the field of general topology, while generalization of finite-dimensional Euclidean spaces to infinite-dimensional analogs led to the concepts of Banach spaces and Hilbert spaces and, more generally, to functional analysis. Georg Cantor's investigation of sets and sequences of real numbers, mappings between them, and the foundational issues of real analysis gave birth to naive set theory. The study of issues of convergence for sequences of functions eventually gave rise to Fourier analysis as a subdiscipline of mathematical analysis. Investigation of the consequences of generalizing differentiability from functions of a real variable to ones of a complex variable gave rise to the concept of holomorphic functions and the inception of complex analysis as another distinct subdiscipline of analysis. On the other hand, the generalization of integration from the Riemann sense to that of Lebesgue led to the formulation of the concept of abstract measure spaces, a fundamental concept in measure theory. Finally, the generalization of integration from the real line to curves and surfaces in higher dimensional space brought about the study of vector calculus, whose further generalization and formalization played an important role in the evolution of the concepts of differential forms and smooth (differentiable) manifolds in differential geometry and other closely related areas of geometry and topology.
See also
• List of real analysis topics
• Time-scale calculus – a unification of real analysis with calculus of finite differences
• Real multivariable function
• Real coordinate space
• Complex analysis
References
1. Tao, Terence (2003). "Lecture notes for MATH 131AH" (PDF). Course Website for MATH 131AH, Department of Mathematics, UCLA.
2. "Sequences intro". khanacademy.org.
3. Gaughan, Edward (2009). "1.1 Sequences and Convergence". Introduction to Analysis. AMS (2009). ISBN 978-0-8218-4787-9.
4. Some authors (e.g., Rudin 1976) use braces instead and write $\{a_{n}\}$. However, this notation conflicts with the usual notation for a set, which, in contrast to a sequence, disregards the order and the multiplicity of its elements.
5. Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 978-0-495-01166-8.
6. Royden 1988, Sect. 5.4, page 108; Nielsen 1997, Definition 15.6 on page 251; Athreya & Lahiri 2006, Definitions 4.4.1, 4.4.2 on pages 128,129. The interval I is assumed to be bounded and closed in the former two books but not the latter book.
7. The term unconditional convergence refers to series whose sum does not depend on the order of the terms (i.e., any rearrangement gives the same sum). Convergence is termed conditional otherwise. For series in $\mathbb {R} ^{n}$, it can be shown that absolute convergence and unconditional convergence are equivalent. Hence, the term "conditional convergence" is often used to mean non-absolute convergence. However, in the general setting of Banach spaces, the terms do not coincide, and there are unconditionally convergent series that do not converge absolutely.
Sources
• Athreya, Krishna B.; Lahiri, Soumendra N. (2006), Measure theory and probability theory, Springer, ISBN 0-387-32903-X
• Nielsen, Ole A. (1997), An introduction to integration and measure theory, Wiley-Interscience, ISBN 0-471-59518-7
• Royden, H.L. (1988), Real Analysis (third ed.), Collier Macmillan, ISBN 0-02-404151-3
Bibliography
• Abbott, Stephen (2001). Understanding Analysis. Undergraduate Texts in Mathematics. New York: Springer-Verlag. ISBN 0-387-95060-5.
• Aliprantis, Charalambos D.; Burkinshaw, Owen (1998). Principles of real analysis (3rd ed.). Academic. ISBN 0-12-050257-7.
• Bartle, Robert G.; Sherbert, Donald R. (2011). Introduction to Real Analysis (4th ed.). New York: John Wiley and Sons. ISBN 978-0-471-43331-6.
• Bressoud, David (2007). A Radical Approach to Real Analysis. MAA. ISBN 978-0-88385-747-2.
• Browder, Andrew (1996). Mathematical Analysis: An Introduction. Undergraduate Texts in Mathematics. New York: Springer-Verlag. ISBN 0-387-94614-4.
• Carothers, Neal L. (2000). Real Analysis. Cambridge: Cambridge University Press. ISBN 978-0521497565.
• Dangello, Frank; Seyfried, Michael (1999). Introductory Real Analysis. Brooks Cole. ISBN 978-0-395-95933-6.
• Kolmogorov, A. N.; Fomin, S. V. (1975). Introductory Real Analysis. Translated by Richard A. Silverman. Dover Publications. ISBN 0486612260. Retrieved 2 April 2013.
• Rudin, Walter (1976). Principles of Mathematical Analysis. Walter Rudin Student Series in Advanced Mathematics (3rd ed.). New York: McGraw–Hill. ISBN 978-0-07-054235-8.
• Rudin, Walter (1987). Real and Complex Analysis (3rd ed.). New York: McGraw-Hill. ISBN 978-0-07-054234-1.
• Spivak, Michael (1994). Calculus (3rd ed.). Houston, Texas: Publish or Perish, Inc. ISBN 091409890X.
External links
• How We Got From There to Here: A Story of Real Analysis by Robert Rogers and Eugene Boman
• A First Course in Analysis by Donald Yau
• Analysis WebNotes by John Lindsay Orr
• Interactive Real Analysis by Bert G. Wachsmuth
• A First Analysis Course by John O'Connor
• Mathematical Analysis I by Elias Zakon
• Mathematical Analysis II by Elias Zakon
• Trench, William F. (2003). Introduction to Real Analysis (PDF). Prentice Hall. ISBN 978-0-13-045786-8.
• Earliest Known Uses of Some of the Words of Mathematics: Calculus & Analysis
• Basic Analysis: Introduction to Real Analysis by Jiri Lebl
• Topics in Real and Functional Analysis by Gerald Teschl, University of Vienna.
Major topics in mathematical analysis
• Calculus: Integration
• Differentiation
• Differential equations
• ordinary
• partial
• stochastic
• Fundamental theorem of calculus
• Calculus of variations
• Vector calculus
• Tensor calculus
• Matrix calculus
• Lists of integrals
• Table of derivatives
• Real analysis
• Complex analysis
• Hypercomplex analysis (quaternionic analysis)
• Functional analysis
• Fourier analysis
• Least-squares spectral analysis
• Harmonic analysis
• P-adic analysis (P-adic numbers)
• Measure theory
• Representation theory
• Functions
• Continuous function
• Special functions
• Limit
• Series
• Infinity
Mathematics portal
Function of a real variable
In mathematical analysis, and applications in geometry, applied mathematics, engineering, and natural sciences, a function of a real variable is a function whose domain is the real numbers $\mathbb {R} $, or a subset of $\mathbb {R} $ that contains an interval of positive length. Most real functions that are considered and studied are differentiable in some interval. The most widely considered such functions are the real functions, which are the real-valued functions of a real variable, that is, the functions of a real variable whose codomain is the set of real numbers.
Nevertheless, the codomain of a function of a real variable may be any set. However, it is often assumed to have a structure of $\mathbb {R} $-vector space over the reals. That is, the codomain may be a Euclidean space, a coordinate vector, the set of matrices of real numbers of a given size, or an $\mathbb {R} $-algebra, such as the complex numbers or the quaternions. The $\mathbb {R} $-vector space structure of the codomain induces an $\mathbb {R} $-vector space structure on the functions. If the codomain has a structure of $\mathbb {R} $-algebra, the same is true for the functions.
The image of a function of a real variable is a curve in the codomain. In this context, a function that defines a curve is called a parametric equation of the curve.
When the codomain of a function of a real variable is a finite-dimensional vector space, the function may be viewed as a sequence of real functions. This is often used in applications.
Real function
A real function is a function from a subset of $\mathbb {R} $ to $\mathbb {R} ,$ where $\mathbb {R} $ denotes as usual the set of real numbers. That is, the domain of a real function is a subset of $\mathbb {R} $, and its codomain is $\mathbb {R} .$ It is generally assumed that the domain contains an interval of positive length.
Basic examples
For many commonly used real functions, the domain is the whole set of real numbers, and the function is continuous and differentiable at every point of the domain. One says that these functions are defined, continuous and differentiable everywhere. This is the case of:
• All polynomial functions, including constant functions and linear functions
• Sine and cosine functions
• Exponential function
Some functions are defined everywhere, but not continuous at some points. For example
• The Heaviside step function is defined everywhere, but not continuous at zero.
Some functions are defined and continuous everywhere, but not everywhere differentiable. For example
• The absolute value is defined and continuous everywhere, and is differentiable everywhere except at zero.
• The cube root is defined and continuous everywhere, and is differentiable everywhere except at zero.
Many common functions are not defined everywhere, but are continuous and differentiable everywhere where they are defined. For example:
• A rational function is a quotient of two polynomial functions, and is not defined at the zeros of the denominator.
• The tangent function is not defined for ${\frac {\pi }{2}}+k\pi ,$ where k is any integer.
• The logarithm function is defined only for positive values of the variable.
Some functions are continuous in their whole domain, and not differentiable at some points. This is the case of:
• The square root is defined only for nonnegative values of the variable, and not differentiable at 0 (it is differentiable for all positive values of the variable).
General definition
A real-valued function of a real variable is a function that takes as input a real number, commonly represented by the variable x, for producing another real number, the value of the function, commonly denoted f(x). For simplicity, in this article a real-valued function of a real variable will be simply called a function. To avoid any ambiguity, the other types of functions that may occur will be explicitly specified.
Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the value of the variable is taken in a subset X of ℝ, the domain of the function, which is always supposed to contain an interval of positive length. In other words, a real-valued function of a real variable is a function
$f:X\to \mathbb {R} $
such that its domain X is a subset of ℝ that contains an interval of positive length.
A simple example of a function in one variable could be:
$f:X\to \mathbb {R} $
$X=\{x\in \mathbb {R} \,:\,x\geq 0\}$
$f(x)={\sqrt {x}}$
which is the square root of x.
Image
Main article: Image (mathematics)
The image of a function $f(x)$ is the set of all values of f when the variable x runs over the whole domain of f. For a continuous (see below for a definition) real-valued function with a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function.
The preimage of a given real number y is the set of the solutions of the equation y = f(x).
Domain
The domain of a function of a real variable is a subset of ℝ that is sometimes explicitly defined. In fact, if one restricts the domain X of a function f to a subset Y ⊂ X, one gets formally a different function, the restriction of f to Y, which is denoted f|Y. In practice, it is often not harmful to identify f and f|Y, and to omit the subscript |Y.
Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation. This means that it is often unnecessary to explicitly define the domain of a function of a real variable.
Algebraic structure
The arithmetic operations may be applied to the functions in the following way:
• For every real number r, the constant function $x\mapsto r$ is everywhere defined.
• For every real number r and every function f, the function $rf:x\mapsto rf(x)$ has the same domain as f (or is everywhere defined if r = 0).
• If f and g are two functions of respective domains X and Y such that X∩Y contains an open subset of ℝ, then $f+g:x\mapsto f(x)+g(x)$ and $f\,g:x\mapsto f(x)\,g(x)$ are functions that have a domain containing X∩Y.
It follows that the functions that are everywhere defined and the functions that are defined in some neighbourhood of a given point both form commutative algebras over the reals (ℝ-algebras).
One may similarly define $1/f:x\mapsto 1/f(x),$ which is a function only if the set of points x in the domain of f such that f(x) ≠ 0 contains an open subset of ℝ. This constraint implies that the above two algebras are not fields.
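These pointwise operations can be sketched as higher-order functions. The following Python snippet is illustrative only (the helper names `add`, `mul`, and `reciprocal` are chosen here, and partiality is modeled simply by the error raised when evaluating outside the natural domain):

```python
def add(f, g):
    """Pointwise sum: defined wherever both f and g are defined."""
    return lambda x: f(x) + g(x)

def mul(f, g):
    """Pointwise product."""
    return lambda x: f(x) * g(x)

def reciprocal(f):
    """1/f, defined only where f(x) != 0 (ZeroDivisionError elsewhere)."""
    return lambda x: 1 / f(x)

f = lambda x: x + 1
g = lambda x: x * x
h = add(f, g)        # h(x) = x^2 + x + 1
print(h(2))          # 7
print(mul(f, g)(3))  # 36
```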
Continuity and limit
Until the second half of the 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for functions of one or several real variables a rather long time before the formal definition of a topological space and of a continuous map between topological spaces. As continuous functions of a real variable are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological spaces.
For defining the continuity, it is useful to consider the distance function of ℝ, which is an everywhere defined function of 2 real variables: $d(x,y)=|x-y|$
A function f is continuous at a point $a$ that is interior to its domain if, for every positive real number ε, there is a positive real number φ such that $|f(x)-f(a)|<\varepsilon $ for all $x$ such that $d(x,a)<\varphi .$ In other words, φ may be chosen small enough that the image under f of the interval of radius φ centered at $a$ is contained in the interval of length 2ε centered at $f(a).$ A function is continuous if it is continuous at every point of its domain.
The limit of a real-valued function of a real variable is defined as follows.[1] Let a be a point in the topological closure of the domain X of the function f. The function f has a limit L when x tends toward a, denoted
$L=\lim _{x\to a}f(x),$
if the following condition is satisfied: For every positive real number ε > 0, there is a positive real number δ > 0 such that
$|f(x)-L|<\varepsilon $
for all x in the domain such that
$d(x,a)<\delta .$
If the limit exists, it is unique. If a is in the interior of the domain, the limit exists if and only if the function is continuous at a. In this case, we have
$f(a)=\lim _{x\to a}f(x).$
When a is in the boundary of the domain of f, and if f has a limit at a, the latter formula allows one to "extend by continuity" the domain of f to a.
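A classic instance of extension by continuity is sin(x)/x, which is undefined at 0 but has limit 1 there. The sketch below (illustrative `sinc` name) assigns the limit value at the boundary point and checks the approach numerically:

```python
import math

def sinc(x):
    """sin(x)/x extended by continuity: the limit at 0 is assigned as the value."""
    return 1.0 if x == 0 else math.sin(x) / x

# Approaching 0 from either side, the values tend to the assigned value 1.
for x in (0.1, 0.01, 0.001, -0.001):
    print(x, sinc(x))
```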
Calculus
One can collect a number of functions each of a real variable, say
$y_{1}=f_{1}(x)\,,\quad y_{2}=f_{2}(x)\,,\ldots ,y_{n}=f_{n}(x)$
into a vector parametrized by x:
$\mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})=[f_{1}(x),f_{2}(x),\ldots ,f_{n}(x)]$
The derivative of the vector y is the vector of derivatives of fi(x) for i = 1, 2, ..., n:
${\frac {d\mathbf {y} }{dx}}=\left({\frac {dy_{1}}{dx}},{\frac {dy_{2}}{dx}},\ldots ,{\frac {dy_{n}}{dx}}\right)$
One can also perform line integrals along a space curve parametrized by x, with position vector r = r(x), by integrating with respect to the variable x:
$\int _{a}^{b}\mathbf {y} (x)\cdot d\mathbf {r} =\int _{a}^{b}\mathbf {y} (x)\cdot {\frac {d\mathbf {r} (x)}{dx}}dx$
where · is the dot product, and x = a and x = b are the start and endpoints of the curve.
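Both the componentwise derivative and the line integral above can be approximated numerically. The following sketch (central differences and a midpoint Riemann sum; all names are illustrative) integrates the tangent field of the unit circle around one full turn, which should recover the circumference 2π:

```python
import math

def r(t):
    # A sample plane curve: the unit circle, parametrized by t.
    return (math.cos(t), math.sin(t))

def deriv(curve, t, h=1e-6):
    """Componentwise central-difference approximation of d(curve)/dt."""
    p, q = curve(t - h), curve(t + h)
    return tuple((qi - pi) / (2 * h) for pi, qi in zip(p, q))

def line_integral(y, curve, a, b, n=10_000):
    """Approximate the line integral of y along the curve as the
    midpoint Riemann sum of y(t) . r'(t) dt."""
    dt = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * dt
        total += sum(yi * di for yi, di in zip(y(t), deriv(curve, t))) * dt
    return total

# Integrating the tangent field y = r'(t) around the circle gives its
# length, since r'(t) . r'(t) = 1 for the unit-speed parametrization.
y = lambda t: deriv(r, t)
print(line_integral(y, r, 0, 2 * math.pi))
```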
Theorems
With the definitions of integration and derivatives, key theorems can be formulated, including the fundamental theorem of calculus, integration by parts, and Taylor's theorem. Evaluating a mixture of integrals and derivatives can be done by using the theorem on differentiation under the integral sign.
Implicit functions
A real-valued implicit function of a real variable is not written in the form "y = f(x)". Instead, the mapping is from the space ℝ2 to the zero element in ℝ (just the ordinary zero 0):
$\phi :\mathbb {R} ^{2}\to \{0\}$
and
$\phi (x,y)=0$
is an equation in the variables. Implicit functions are a more general way to represent functions, since if:
$y=f(x)$
then we can always define:
$\phi (x,y)=y-f(x)=0$
but the converse is not always possible, i.e. not all implicit functions have the form of this equation.
One-dimensional space curves in ℝn
Formulation
Given the functions r1 = r1(t), r2 = r2(t), ..., rn = rn(t) all of a common variable t, so that:
${\begin{aligned}r_{1}:\mathbb {R} \rightarrow \mathbb {R} &\quad r_{2}:\mathbb {R} \rightarrow \mathbb {R} &\cdots &\quad r_{n}:\mathbb {R} \rightarrow \mathbb {R} \\r_{1}=r_{1}(t)&\quad r_{2}=r_{2}(t)&\cdots &\quad r_{n}=r_{n}(t)\\\end{aligned}}$
or taken together:
$\mathbf {r} :\mathbb {R} \rightarrow \mathbb {R} ^{n}\,,\quad \mathbf {r} =\mathbf {r} (t)$
then the parametrized n-tuple,
$\mathbf {r} (t)=[r_{1}(t),r_{2}(t),\ldots ,r_{n}(t)]$
describes a one-dimensional space curve.
Tangent line to curve
At a point r(t = c) = a = (a1, a2, ..., an) for some constant t = c, the equations of the one-dimensional tangent line to the curve at that point are given in terms of the ordinary derivatives of r1(t), r2(t), ..., rn(t), and r with respect to t:
${\frac {r_{1}(t)-a_{1}}{dr_{1}(t)/dt}}={\frac {r_{2}(t)-a_{2}}{dr_{2}(t)/dt}}=\cdots ={\frac {r_{n}(t)-a_{n}}{dr_{n}(t)/dt}}$
Normal plane to curve
The equation of the n-dimensional hyperplane normal to the tangent line at r = a is:
$(p_{1}-a_{1}){\frac {dr_{1}(t)}{dt}}+(p_{2}-a_{2}){\frac {dr_{2}(t)}{dt}}+\cdots +(p_{n}-a_{n}){\frac {dr_{n}(t)}{dt}}=0$
or in terms of the dot product:
$(\mathbf {p} -\mathbf {a} )\cdot {\frac {d\mathbf {r} (t)}{dt}}=0$
where p = (p1, p2, ..., pn) are points in the plane, not on the space curve.
Relation to kinematics
The physical and geometric interpretation of dr(t)/dt is the "velocity" of a point-like particle moving along the path r(t), treating r as the spatial position vector coordinates parametrized by time t, and is a vector tangent to the space curve for all t in the instantaneous direction of motion. At t = c, the space curve has a tangent vector dr(t)/dt|t = c, and the hyperplane normal to the space curve at t = c is also normal to the tangent at t = c. Any vector in this plane (p − a) must be normal to dr(t)/dt|t = c.
Similarly, d2r(t)/dt2 is the "acceleration" of the particle, and is a vector normal to the curve directed along the radius of curvature.
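For uniform circular motion these facts can be checked numerically: the acceleration is orthogonal to the velocity and points back along the radius. A sketch using central differences (the parameters `radius` and `omega` are chosen arbitrarily for illustration):

```python
import math

def position(t, radius=1.0, omega=2.0):
    """Uniform circular motion: r(t) = radius * (cos wt, sin wt)."""
    return (radius * math.cos(omega * t), radius * math.sin(omega * t))

def velocity(t, h=1e-6):
    p, q = position(t - h), position(t + h)
    return tuple((qi - pi) / (2 * h) for pi, qi in zip(p, q))

def acceleration(t, h=1e-4):
    v1, v2 = velocity(t - h), velocity(t + h)
    return tuple((b - a) / (2 * h) for a, b in zip(v1, v2))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

t = 0.3
v, a = velocity(t), acceleration(t)
# The dot product v . a vanishes (up to numerical error): the centripetal
# acceleration is normal to the instantaneous direction of motion.
print(dot(v, a))
```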
Matrix valued functions
A matrix can also be a function of a single variable. For example, the rotation matrix in 2d:
$R(\theta )={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}$
is a matrix-valued function of the rotation angle θ about the origin. Similarly, in special relativity, the Lorentz transformation matrix for a pure boost (without rotations):
$\Lambda (\beta )={\begin{bmatrix}{\frac {1}{\sqrt {1-\beta ^{2}}}}&-{\frac {\beta }{\sqrt {1-\beta ^{2}}}}&0&0\\-{\frac {\beta }{\sqrt {1-\beta ^{2}}}}&{\frac {1}{\sqrt {1-\beta ^{2}}}}&0&0\\0&0&1&0\\0&0&0&1\\\end{bmatrix}}$
is a function of the boost parameter β = v/c, in which v is the relative velocity between the frames of reference (a continuous variable), and c is the speed of light, a constant.
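A matrix-valued function of this kind can be evaluated and composed like any other function. The sketch below (illustrative names, plain lists instead of a matrix library) verifies the addition law for rotation angles, R(a)R(b) = R(a+b):

```python
import math

def rotation(theta):
    """2x2 rotation matrix as a function of the angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Composing rotations adds the angles: R(a) R(b) = R(a + b).
A = matmul(rotation(0.3), rotation(0.4))
B = rotation(0.7)
print(all(abs(A[i][j] - B[i][j]) < 1e-12 for i in range(2) for j in range(2)))
```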
Banach and Hilbert spaces and quantum mechanics
Generalizing the previous section, the output of a function of a real variable can also lie in a Banach space or a Hilbert space. In these spaces, addition, scalar multiplication, and limits are all defined, so notions such as derivative and integral still apply. This occurs especially often in quantum mechanics, where one takes the derivative of a ket or an operator. This occurs, for instance, in the general time-dependent Schrödinger equation:
$i\hbar {\frac {\partial }{\partial t}}\Psi ={\hat {H}}\Psi $
where one takes the derivative of a wave function, which can be an element of several different Hilbert spaces.
Complex-valued function of a real variable
A complex-valued function of a real variable may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values.
If f(x) is such a complex valued function, it may be decomposed as
f(x) = g(x) + ih(x),
where g and h are real-valued functions. In other words, the study of the complex valued functions reduces easily to the study of the pairs of real valued functions.
Cardinality of sets of functions of a real variable
The cardinality of the set of real-valued functions of a real variable, $\mathbb {R} ^{\mathbb {R} }=\{f:\mathbb {R} \to \mathbb {R} \}$, is $\beth _{2}=2^{\mathfrak {c}}$, which is strictly larger than the cardinality of the continuum (i.e., set of all real numbers). This fact is easily verified by cardinal arithmetic:
$\mathrm {card} (\mathbb {R} ^{\mathbb {R} })=\mathrm {card} (\mathbb {R} )^{\mathrm {card} (\mathbb {R} )}={\mathfrak {c}}^{\mathfrak {c}}=(2^{\aleph _{0}})^{\mathfrak {c}}=2^{\aleph _{0}\cdot {\mathfrak {c}}}=2^{\mathfrak {c}}.$
Furthermore, if $X$ is a set such that $2\leq \mathrm {card} (X)\leq {\mathfrak {c}}$, then the cardinality of the set $X^{\mathbb {R} }=\{f:\mathbb {R} \to X\}$ is also $2^{\mathfrak {c}}$, since
$2^{\mathfrak {c}}=\mathrm {card} (2^{\mathbb {R} })\leq \mathrm {card} (X^{\mathbb {R} })\leq \mathrm {card} (\mathbb {R} ^{\mathbb {R} })=2^{\mathfrak {c}}.$
However, the set of continuous functions $C^{0}(\mathbb {R} )=\{f:\mathbb {R} \to \mathbb {R} :f\ \mathrm {continuous} \}$ has a strictly smaller cardinality, the cardinality of the continuum, ${\mathfrak {c}}$. This follows from the fact that a continuous function is completely determined by its value on a dense subset of its domain.[2] Thus, the cardinality of the set of continuous real-valued functions on the reals is no greater than the cardinality of the set of real-valued functions of a rational variable. By cardinal arithmetic:
$\mathrm {card} (C^{0}(\mathbb {R} ))\leq \mathrm {card} (\mathbb {R} ^{\mathbb {Q} })=(2^{\aleph _{0}})^{\aleph _{0}}=2^{\aleph _{0}\cdot \aleph _{0}}=2^{\aleph _{0}}={\mathfrak {c}}.$
On the other hand, since there is a clear bijection between $\mathbb {R} $ and the set of constant functions $\{f:\mathbb {R} \to \mathbb {R} :f(x)\equiv x_{0}\}$, which forms a subset of $C^{0}(\mathbb {R} )$, $\mathrm {card} (C^{0}(\mathbb {R} ))\geq {\mathfrak {c}}$ must also hold. Hence, $\mathrm {card} (C^{0}(\mathbb {R} ))={\mathfrak {c}}$.
See also
• Real analysis
• Function of several real variables
• Complex analysis
• Function of several complex variables
References
1. R. Courant. Differential and Integral Calculus. Vol. 2. Wiley Classics Library. pp. 46–47. ISBN 0-471-60840-8.
2. Rudin, W. (1976). Principles of Mathematical Analysis. New York: McGraw-Hill. pp. 98–99. ISBN 0-07-054235-X.
• F. Ayres, E. Mendelson (2009). Calculus. Schaum's outline series (5th ed.). McGraw Hill. ISBN 978-0-07-150861-2.
• R. Wrede, M. R. Spiegel (2010). Advanced calculus. Schaum's outline series (3rd ed.). McGraw Hill. ISBN 978-0-07-162366-7.
• N. Bourbaki (2004). Functions of a Real Variable: Elementary Theory. Springer. ISBN 354-065-340-6.
External links
• Multivariable Calculus
• L. A. Talman (2007) Differentiability for Multivariable Functions
|
Wikipedia
|
Realcompact space
In mathematics, in the field of topology, a topological space is said to be realcompact if it is completely regular Hausdorff and it contains every point of its Stone–Čech compactification which is real (meaning that the quotient field at that point of the ring of real functions is the reals). Realcompact spaces have also been called Q-spaces, saturated spaces, functionally complete spaces, real-complete spaces, replete spaces and Hewitt–Nachbin spaces (named after Edwin Hewitt and Leopoldo Nachbin). Realcompact spaces were introduced by Hewitt (1948).
Properties
• A space is realcompact if and only if it can be embedded homeomorphically as a closed subset in some (not necessarily finite) Cartesian power of the reals, with the product topology. Moreover, a (Hausdorff) space is realcompact if and only if it has the uniform topology and is complete for the uniform structure generated by the continuous real-valued functions (Gillman, Jerison, p. 226).
• For example, Lindelöf spaces are realcompact; in particular, all subsets of $\mathbb {R} ^{n}$ are realcompact.
• The (Hewitt) realcompactification υX of a topological space X consists of the real points of its Stone–Čech compactification βX. A topological space X is realcompact if and only if it coincides with its Hewitt realcompactification.
• Write C(X) for the ring of continuous real-valued functions on a topological space X. If Y is a realcompact space, then ring homomorphisms from C(Y) to C(X) correspond to continuous maps from X to Y. In particular, the category of realcompact spaces is dual to the category of rings of the form C(X).
• A Hausdorff space X is compact if and only if it is realcompact and pseudocompact (see Engelking, p. 153).
See also
• Compact space
• Paracompact space
• Normal space
• Pseudocompact space
• Tychonoff space
References
• Gillman, Leonard; Jerison, Meyer, "Rings of continuous functions". Reprint of the 1960 edition. Graduate Texts in Mathematics, No. 43. Springer-Verlag, New York-Heidelberg, 1976. xiii+300 pp.
• Hewitt, Edwin (1948), "Rings of real-valued continuous functions. I", Transactions of the American Mathematical Society, 64 (1): 45–99, doi:10.2307/1990558, ISSN 0002-9947, JSTOR 1990558, MR 0026239.
• Engelking, Ryszard (1968). Outline of General Topology. translated from Polish. Amsterdam: North-Holland Publ. Co.
• Willard, Stephen (1970), General Topology, Reading, Mass.: Addison-Wesley.
Realizability
In mathematical logic, realizability is a collection of methods in proof theory used to study constructive proofs and extract additional information from them.[1] Formulas from a formal theory are "realized" by objects, known as "realizers", in a way that knowledge of the realizer gives knowledge about the truth of the formula. There are many variations of realizability; exactly which class of formulas is studied and which objects are realizers differ from one variation to another.
Realizability can be seen as a formalization of the BHK interpretation of intuitionistic logic; in realizability the notion of "proof" (which is left undefined in the BHK interpretation) is replaced with a formal notion of "realizer". Most variants of realizability begin with a theorem that any statement that is provable in the formal system being studied is realizable. The realizer, however, usually gives more information about the formula than a formal proof would directly provide.
Beyond giving insight into intuitionistic provability, realizability can be applied to prove the disjunction and existence properties for intuitionistic theories and to extract programs from proofs, as in proof mining. It is also related to topos theory via realizability topoi.
Example: Kleene's 1945-realizability
Kleene's original version of realizability uses natural numbers as realizers for formulas in Heyting arithmetic. A few pieces of notation are required: first, an ordered pair (n,m) is treated as a single number using a fixed primitive recursive pairing function; second, for each natural number n, φn is the computable function with index n. The following clauses are used to define a relation "n realizes A" between natural numbers n and formulas A in the language of Heyting arithmetic, known as Kleene's 1945-realizability relation:[2]
• Any number n realizes an atomic formula s=t if and only if s=t is true. Thus every number realizes a true equation, and no number realizes a false equation.
• A pair (n,m) realizes a formula A∧B if and only if n realizes A and m realizes B. Thus a realizer for a conjunction is a pair of realizers for the conjuncts.
• A pair (n,m) realizes a formula A∨B if and only if the following hold: n is 0 or 1; and if n is 0 then m realizes A; and if n is 1 then m realizes B. Thus a realizer for a disjunction explicitly picks one of the disjuncts (with n) and provides a realizer for it (with m).
• A number n realizes a formula A→B if and only if, for every m that realizes A, φn(m) realizes B. Thus a realizer for an implication corresponds to a computable function that takes any realizer for the hypothesis and produces a realizer for the conclusion.
• A pair (n,m) realizes a formula (∃ x)A(x) if and only if m is a realizer for A(n). Thus a realizer for an existential formula produces an explicit witness for the quantifier along with a realizer for the formula instantiated with that witness.
• A number n realizes a formula (∀ x)A(x) if and only if, for all m, φn(m) is defined and realizes A(m). Thus a realizer for a universal statement is a computable function that produces, for each m, a realizer for the formula instantiated with m.
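The clauses above can be sketched as a toy checker. Two deliberate simplifications that are not part of Kleene's definition: realizers for universal formulas are Python callables rather than indices of partial computable functions, and the universal quantifier ranges over a finite domain so that checking terminates; implication is omitted because checking it would quantify over all realizers of the hypothesis.

```python
# Toy checker for Kleene-style 1945-realizability on a small fragment.
DOMAIN = range(10)  # finite stand-in for the natural numbers

def realizes(r, formula):
    kind = formula[0]
    if kind == "eq":        # ("eq", s, t): any number realizes a true equation
        _, s, t = formula
        return s == t
    if kind == "and":       # r = (n, m): realizers for both conjuncts
        n, m = r
        return realizes(n, formula[1]) and realizes(m, formula[2])
    if kind == "or":        # r = (tag, m): tag in {0, 1} picks the disjunct
        tag, m = r
        return realizes(m, formula[1] if tag == 0 else formula[2])
    if kind == "exists":    # r = (witness, m): explicit witness plus realizer
        witness, m = r
        return realizes(m, formula[1](witness))
    if kind == "forall":    # r: callable giving a realizer for each instance
        return all(realizes(r(x), formula[1](x)) for x in DOMAIN)
    raise ValueError(f"unknown connective: {kind}")

# (exists x) x + 3 = 5 is realized by the witness 2 (with a dummy inner realizer):
exists_formula = ("exists", lambda x: ("eq", x + 3, 5))
print(realizes((2, 0), exists_formula))   # True
print(realizes((4, 0), exists_formula))   # False

# (forall x)(x is even or x is odd): the realizer picks a disjunct for each x.
parity_formula = ("forall", lambda x: ("or", ("eq", x % 2, 0), ("eq", x % 2, 1)))
parity_realizer = lambda x: (x % 2, 0)
print(realizes(parity_realizer, parity_formula))  # True
```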
With this definition, the following theorem is obtained:[3]
Let A be a sentence of Heyting arithmetic (HA). If HA proves A then there is an n such that n realizes A.
On the other hand, there are classical theorems (even propositional formula schemas) that are realized but which are not provable in HA, a fact first established by Rose.[4] So realizability does not exactly mirror intuitionistic reasoning.
Further analysis of the method can be used to prove that HA has the "disjunction and existence properties":[5]
• If HA proves a sentence (∃ x)A(x), then there is an n such that HA proves A(n)
• If HA proves a sentence A∨B, then HA proves A or HA proves B.
More such properties are obtained involving Harrop formulas.
Later developments
Kreisel introduced modified realizability, which uses typed lambda calculus as the language of realizers. Modified realizability is one way to show that Markov's principle is not derivable in intuitionistic logic. By contrast, modified realizability does constructively justify the principle of independence of premise:
$(A\rightarrow \exists x\;P(x))\rightarrow \exists x\;(A\rightarrow P(x))$.
Relative realizability[6] is an intuitionistic analysis of recursive or recursively enumerable elements of data structures that are not necessarily computable, such as computable operations on all real numbers, where real numbers can only be approximated on digital computer systems.
Applications
Realizability is one of the methods used in proof mining to extract concrete "programs" from seemingly non-constructive mathematical proofs. Program extraction using realizability is implemented in some proof assistants such as Coq.
See also
• Curry–Howard correspondence
• Dialectica interpretation
• Harrop formula
Notes
1. van Oosten 2000
2. A. Ščedrov, "Intuitionistic Set Theory" (pp. 263–264). From Harvey Friedman's Research on the Foundations of Mathematics (1985), Studies in Logic and the Foundations of Mathematics vol. 117.
3. van Oosten 2000, p. 7
4. Rose 1953
5. van Oosten 2000, p. 6
6. Birkedal 2000
References
• Birkedal, Lars; Jaap van Oosten (2000). Relative and modified relative realizability.
• Kreisel G. (1959). "Interpretation of Analysis by Means of Constructive Functionals of Finite Types", in: Constructivity in Mathematics, edited by A. Heyting, North-Holland, pp. 101–128.
• Kleene, S. C. (1945). "On the interpretation of intuitionistic number theory". Journal of Symbolic Logic. 10 (4): 109–124. doi:10.2307/2269016. JSTOR 2269016.
• Kleene, S. C. (1973). "Realizability: a retrospective survey" from Mathias, A. R. D.; Hartley Rogers (1973). Cambridge Summer School in Mathematical Logic : held in Cambridge/England, August 1–21, 1971. Berlin: Springer. ISBN 3-540-05569-X., pp. 95–112.
• van Oosten, Jaap (2000). Realizability: An Historical Essay.
• Rose, G. F. (1953). "Propositional calculus and realizability". Transactions of the American Mathematical Society. 75 (1): 1–19. doi:10.2307/1990776. JSTOR 1990776.
External links
• Realizability Collection of links to recent papers on realizability and related topics.
K-epsilon turbulence model
The K-epsilon (k-ε) turbulence model is the most common model used in computational fluid dynamics (CFD) to simulate mean flow characteristics for turbulent flow conditions. It is a two-equation model that gives a general description of turbulence by means of two transport equations (partial differential equations, PDEs). The original impetus for the K-epsilon model was to improve the mixing-length model, as well as to find an alternative to algebraically prescribing turbulent length scales in moderate to high complexity flows.[1]
• The first transported variable is the turbulent kinetic energy (k).
• The second transported variable is the rate of dissipation of turbulent kinetic energy (ε).
Principle
Unlike earlier turbulence models, the k-ε model focuses on the mechanisms that affect the turbulent kinetic energy. The mixing-length model lacks this kind of generality.[2] The underlying assumption of this model is that the turbulent viscosity is isotropic; in other words, the ratio between the Reynolds stress and the mean rate of deformation is the same in all directions.
Standard k-ε turbulence model
The exact k-ε equations contain many unknown and unmeasurable terms. As a more practical approach, the standard k-ε turbulence model (Launder and Spalding, 1974[3]) is used; it is based on our best understanding of the relevant processes, minimizing the unknowns and presenting a set of equations that can be applied to a large number of turbulent applications.
For turbulent kinetic energy k[4]
${\frac {\partial (\rho k)}{\partial t}}+{\frac {\partial (\rho ku_{i})}{\partial x_{i}}}={\frac {\partial }{\partial x_{j}}}\left[{\frac {\mu _{t}}{\sigma _{k}}}{\frac {\partial k}{\partial x_{j}}}\right]+2{\mu _{t}}{E_{ij}}{E_{ij}}-\rho \varepsilon $
For dissipation $\varepsilon $[4]
${\frac {\partial (\rho \varepsilon )}{\partial t}}+{\frac {\partial (\rho \varepsilon u_{i})}{\partial x_{i}}}={\frac {\partial }{\partial x_{j}}}\left[{\frac {\mu _{t}}{\sigma _{\varepsilon }}}{\frac {\partial \varepsilon }{\partial x_{j}}}\right]+C_{1\varepsilon }{\frac {\varepsilon }{k}}2{\mu _{t}}{E_{ij}}{E_{ij}}-C_{2\varepsilon }\rho {\frac {\varepsilon ^{2}}{k}}$
In words: rate of change of k or ε in time + transport of k or ε by advection = transport of k or ε by diffusion + rate of production of k or ε − rate of destruction of k or ε
where
$u_{i}$ represents velocity component in corresponding direction
$E_{ij}$ represents component of rate of deformation
$\mu _{t}$ represents eddy viscosity
$\mu _{t}=\rho C_{\mu }{\frac {k^{2}}{\varepsilon }}$
The equations also consist of some adjustable constants $\sigma _{k}$, $\sigma _{\varepsilon }$ , $C_{1\varepsilon }$ and $C_{2\varepsilon }$. The values of these constants have been arrived at by numerous iterations of data fitting for a wide range of turbulent flows. These are as follows:[2]
$C_{\mu }=0.09$ $\sigma _{k}=1.00$ $\sigma _{\varepsilon }=1.30$ $C_{1\varepsilon }=1.44$ $C_{2\varepsilon }=1.92$
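As a minimal numeric illustration of the closure above, the eddy viscosity follows directly from $\mu _{t}=\rho C_{\mu }k^{2}/\varepsilon $ with the standard constant $C_{\mu }=0.09$; the values of ρ, k and ε below are arbitrary, not taken from any particular flow.

```python
C_MU = 0.09  # standard model constant

def eddy_viscosity(rho, k, eps):
    """Eddy viscosity mu_t [Pa s] from density rho [kg/m^3],
    turbulent kinetic energy k [m^2/s^2], dissipation rate eps [m^2/s^3]."""
    return rho * C_MU * k**2 / eps

# Illustrative values: air density, modest turbulence levels.
mu_t = eddy_viscosity(rho=1.225, k=0.5, eps=2.0)
print(mu_t)  # = 1.225 * 0.09 * 0.25 / 2.0
```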
Applications
The k-ε model has been tailored specifically for planar shear layers[5] and recirculating flows.[6] This model is the most widely used and validated turbulence model, with applications ranging from industrial to environmental flows, which explains its popularity. It is usually useful for free-shear-layer flows with relatively small pressure gradients, as well as for confined flows where the Reynolds shear stresses are most important.[7] It is also the simplest turbulence model for which only initial and/or boundary conditions need to be supplied.
However, it is more expensive in terms of memory than the mixing-length model, as it requires two extra PDEs. This model would be an inappropriate choice for problems such as inlets and compressors, as its accuracy has been shown experimentally to be reduced for flows containing large adverse pressure gradients. The k-ε model also performs poorly in a variety of important cases such as unconfined flows,[8] curved boundary layers, rotating flows and flows in non-circular ducts.[9]
Other models
Realizable k-ε Model: An immediate benefit of the realizable k-ɛ model is that it provides improved predictions for the spreading rate of both planar and round jets. It also exhibits superior performance for flows involving rotation, boundary layers under strong adverse pressure gradients, separation, and recirculation. In virtually every measure of comparison, Realizable k-ɛ demonstrates a superior ability to capture the mean flow of the complex structures.
k-ω Model: used when there are wall effects present within the case.
Reynolds stress equation model: For complex turbulent flows, Reynolds stress models are able to provide better predictions.[10] Such flows include turbulent flows with high degrees of anisotropy, significant streamline curvature, flow separation, zones of recirculation and influence of mean rotation effects.
References
1. K-epsilon models
2. Henk Kaarle Versteeg, Weeratunge Malalasekera (2007). An Introduction to Computational Fluid Dynamics: The Finite Volume Method. Pearson Education Limited. ISBN 9780131274983.
3. Launder, B.E.; Spalding, D.B. (March 1974). "The numerical computation of turbulent flows". Computer Methods in Applied Mechanics and Engineering. 3 (2): 269–289. Bibcode:1974CMAME...3..269L. doi:10.1016/0045-7825(74)90029-2.
4. Versteeg, Henk Kaarle; Malalasekera, Weeratunge (2007). An introduction to Computational Fluid Dynamics: The Finite Volume Method. Pearson Education.
5. usage of k-e to model shear layers
6. usage of k-e approach for modelling recirculating flows
7. The Turbulence Model Can Make a Big Difference in Your Results
8. P Bradshaw (1987), "Turbulent Secondary Flows", Annual Review of Fluid Mechanics, 19 (1): 53–74, Bibcode:1987AnRFM..19...53B, doi:10.1146/annurev.fl.19.010187.000413
9. Larsson, I. A. S.; Lindmark, E. M.; Lundström, T. S.; Nathan, G. J. (2011), "Secondary Flow in Semi Circular Ducts" (PDF), Journal of Fluids Engineering, 133 (10): 101206–101214, doi:10.1115/1.4004991, hdl:2263/42958
10. Pope, Stephen. "Turbulent Flows". Cambridge University Press, 2000.
Notes
• 'An Introduction to Computational Fluid Dynamics: The Finite Volume Method (2nd Edition)' , H. Versteeg, W. Malalasekera; Pearson Education Limited; 2007; ISBN 0131274988
• 'Turbulence Modeling for CFD' 2nd Ed. , Wilcox C. D. ; DCW Industries ; 1998 ; ISBN 0963605100
• 'An introduction to turbulence and its measurement' , Bradshaw, P. ; Pergamon Press ; 1971 ; ISBN 0080166210
Rearrangement inequality
In mathematics, the rearrangement inequality[1] states that
$x_{1}y_{n}+\cdots +x_{n}y_{1}\leq x_{1}y_{\sigma (1)}+\cdots +x_{n}y_{\sigma (n)}\leq x_{1}y_{1}+\cdots +x_{n}y_{n}$
(1)
for every choice of real numbers
$x_{1}\leq \cdots \leq x_{n}\quad {\text{ and }}\quad y_{1}\leq \cdots \leq y_{n}$
and every permutation $y_{\sigma (1)},\ldots ,y_{\sigma (n)}$ of $y_{1},\ldots ,y_{n}.$
If the numbers $x_{1},\ldots ,x_{n}$ are different, meaning that $x_{1}<\cdots <x_{n},$ then:
• The upper bound in (1) is attained only for permutations $\sigma $ of $1,\ldots ,n$ which keep the order of $y_{1},\ldots ,y_{n},$ that is,
$y_{\sigma (1)}\leq \cdots \leq y_{\sigma (n)},$
or equivalently
$(y_{1},\ldots ,y_{n})=(y_{\sigma (1)},\ldots ,y_{\sigma (n)}).$
(Such a $\sigma $ may permute the indices of equal $y$-values; in the extreme case $y_{1}=\cdots =y_{n}$ every permutation keeps the order of $y_{1},\ldots ,y_{n}.$) If $y_{1}<\cdots <y_{n},$ then the identity, meaning $\sigma (i)=i$ for all $i=1,\ldots ,n,$ is the only permutation keeping the order.
• Correspondingly, the lower bound in (1) is attained only for permutations $\sigma $ which reverse the order of $y_{1},\ldots ,y_{n},$ meaning that
$y_{\sigma (1)}\geq \cdots \geq y_{\sigma (n)}.$
If $y_{1}<\cdots <y_{n},$ then $\sigma (i)=n-i+1$ for all $i=1,\ldots ,n,$ is the only permutation doing this.
Note that the rearrangement inequality (1) makes no assumptions on the signs of the real numbers.
Applications
Many important inequalities can be proved by the rearrangement inequality, such as the arithmetic mean – geometric mean inequality, the Cauchy–Schwarz inequality, and Chebyshev's sum inequality.
Here is a particular consequence for all real numbers $x_{1}\leq \cdots \leq x_{n}$: By applying (1) with $y_{i}:=x_{i}$ for all $i=1,\ldots ,n,$ it follows that
$x_{1}x_{n}+\cdots +x_{n}x_{1}\leq x_{1}x_{\sigma (1)}+\cdots +x_{n}x_{\sigma (n)}\leq x_{1}^{2}+\cdots +x_{n}^{2}$
for every permutation $\sigma $ of $1,\ldots ,n.$
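A quick brute-force check over all pairings makes the inequality concrete (the sample values are arbitrary; note they need not be nonnegative):

```python
from itertools import permutations

# Among all pairings of the x's with the y's, the same-order pairing gives
# the largest sum and the reversed pairing the smallest, as the
# rearrangement inequality predicts.
x = [-2, 1, 3, 7]   # already sorted; signs are unrestricted
y = [-5, 0, 4, 6]

sums = [sum(xi * yp for xi, yp in zip(x, perm)) for perm in permutations(y)]
print(max(sums) == sum(xi * yi for xi, yi in zip(x, y)))            # True
print(min(sums) == sum(xi * yi for xi, yi in zip(x, reversed(y))))  # True
```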
Intuition
The rearrangement inequality is actually very intuitive. Imagine there is a heap of $10 bills, a heap of $20 bills and one more heap of $100 bills. You are allowed to take 7 bills from a heap of your choice and then the heap disappears. In the second round you are allowed to take 5 bills from another heap and the heap disappears. In the last round you may take 3 bills from the last heap. In what order do you want to choose the heaps to maximize your profit? Obviously, the best you can do is to gain $7\cdot 100+5\cdot 20+3\cdot 10$ dollars. This is exactly what the upper bound of the rearrangement inequality (1) says for the sequences $3<5<7$ and $10<20<100.$ It is also an application of a greedy algorithm.
Geometric interpretation
Assume that $0<x_{1}<\cdots <x_{n}$ and $0<y_{1}<\cdots <y_{n}.$ Consider a rectangle of width $x_{1}+\cdots +x_{n}$ and height $y_{1}+\cdots +y_{n},$ subdivided into $n$ columns of widths $x_{1},\ldots ,x_{n}$ and the same number of rows of heights $y_{1},\ldots ,y_{n},$ so there are $\textstyle n^{2}$ small rectangles. You are supposed to take $n$ of these, one from each column and one from each row. The rearrangement inequality (1) says that you optimize the total area of your selection by taking the rectangles on the diagonal or the antidiagonal.
Proof
The lower bound and the corresponding discussion of equality follow by applying the results for the upper bound to
$-y_{n}\leq \cdots \leq -y_{1}.$
Therefore, it suffices to prove the upper bound in (1) and discuss when equality holds. Since there are only finitely many permutations of $1,\ldots ,n,$ there exists at least one $\sigma $ for which the middle term in (1)
$x_{1}y_{\sigma (1)}+\cdots +x_{n}y_{\sigma (n)}$
is maximal. In case there are several permutations with this property, let σ denote one with the highest number of integers $i$ from $\{1,\ldots ,n\}$ satisfying $y_{i}=y_{\sigma (i)}.$
We now prove by contradiction that $\sigma $ has to keep the order of $y_{1},\ldots ,y_{n}$ (then we are done with the upper bound in (1), because the identity has that property). Assume that there exists a $j\in \{1,\ldots ,n-1\}$ such that $y_{i}=y_{\sigma (i)}$ for all $i\in \{1,\ldots ,j-1\}$ and $y_{j}\neq y_{\sigma (j)}.$ Hence $y_{j}<y_{\sigma (j)},$ and there has to exist a $k\in \{j+1,\ldots ,n\}$ with $y_{j}=y_{\sigma (k)}$ to fill the gap. Therefore,
$x_{j}\leq x_{k}\qquad {\text{and}}\qquad y_{\sigma (k)}<y_{\sigma (j)},$
(2)
which implies that
$0\leq (x_{k}-x_{j})(y_{\sigma (j)}-y_{\sigma (k)}).$
(3)
Expanding this product and rearranging gives
$x_{j}y_{\sigma (j)}+x_{k}y_{\sigma (k)}\leq x_{j}y_{\sigma (k)}+x_{k}y_{\sigma (j)}\,,$
(4)
which is equivalent to (3). Hence the permutation
$\tau (i):={\begin{cases}\sigma (i)&{\text{for }}i\in \{1,\ldots ,n\}\setminus \{j,k\},\\\sigma (k)&{\text{for }}i=j,\\\sigma (j)&{\text{for }}i=k,\end{cases}}$
which arises from $\sigma $ by exchanging the values $\sigma (j)$ and $\sigma (k),$ has at least one additional point which keeps the order compared to $\sigma ,$ namely at $j$ satisfying $y_{j}=y_{\tau (j)},$ and also attains the maximum in (1) due to (4). This contradicts the choice of $\sigma .$
If $x_{1}<\cdots <x_{n},$ then we have strict inequalities in (2), (3), and (4), hence the maximum can only be attained by permutations keeping the order of $y_{1}\leq \cdots \leq y_{n},$ and every other permutation $\sigma $ cannot be optimal.
Proof by mathematical induction
As above, it suffices to treat the upper bound in (1). For a proof by mathematical induction, we start with $n=2.$ Observe that
$x_{1}\leq x_{2}\quad {\text{ and }}\quad y_{1}\leq y_{2}$
implies that
$0\leq (x_{2}-x_{1})(y_{2}-y_{1}),$
(5)
which is equivalent to
$x_{1}y_{2}+x_{2}y_{1}\leq x_{1}y_{1}+x_{2}y_{2},$
(6)
hence the upper bound in (1) is true for $n=2.$ If $x_{1}<x_{2},$ then we get strict inequality in (5) and (6) if and only if $y_{1}<y_{2}.$ Hence only the identity, which is the only permutation here keeping the order of $y_{1}<y_{2},$ gives the maximum.
As induction hypothesis assume that the upper bound in the rearrangement inequality (1) is true for $n-1$ with $n\geq 3$ and that in the case $x_{1}<\cdots <x_{n-1}$ there is equality only when the permutation $\sigma $ of $1,\ldots ,n-1$ keeps the order of $y_{1},\ldots ,y_{n-1}.$
Consider now $x_{1}\leq \cdots \leq x_{n}$ and $y_{1}\leq \cdots \leq y_{n}.$ Take a $\sigma $ from the finite number of permutations of $1,\ldots ,n$ such that the rearrangement in the middle of (1) gives the maximal result. There are two cases:
• If $\sigma (n)=n,$ then $y_{n}=y_{\sigma (n)}$ and, using the induction hypothesis, the upper bound in (1) is true with equality and $\sigma $ keeps the order of $y_{1},\ldots ,y_{n-1},y_{n}$ in the case $x_{1}<\cdots <x_{n}.$
• If $k:=\sigma (n)<n,$ then there is a $j\in \{1,\dots ,n-1\}$ with $\sigma (j)=n.$ Define the permutation
$\tau (i)={\begin{cases}\sigma (i)&{\text{for }}i\in \{1,\ldots ,n\}\setminus \{j,n\},\\k&{\text{for }}i=j,\\n&{\text{for }}i=n,\end{cases}}$
which arises from $\sigma $ by exchanging the values of $j$ and $n.$ There are now two subcases:
1. If $x_{k}=x_{n}$ or $y_{k}=y_{n},$ then this exchange of values of $\sigma $ has no effect on the middle term in (1) because $\tau $ gives the same sum, and we can proceed by applying the first case to $\tau .$ Note that in the case $x_{1}<\cdots <x_{n},$ the permutation $\tau $ keeps the order of $y_{1},\ldots ,y_{n}$ if and only if $\sigma $ does.
2. If $x_{k}<x_{n}$ and $y_{k}<y_{n},$ then $0<(x_{n}-x_{k})(y_{n}-y_{k}),$ which is equivalent to $x_{k}y_{n}+x_{n}y_{k}<x_{k}y_{k}+x_{n}y_{n}$ and shows that $\sigma $ is not optimal, hence this case cannot happen due to the choice of $\sigma .$
Generalizations
Three or more sequences
A straightforward generalization takes into account more sequences. Assume we have finite ordered sequences of nonnegative real numbers
$0\leq x_{1}\leq \cdots \leq x_{n}\quad {\text{and}}\quad 0\leq y_{1}\leq \cdots \leq y_{n}\quad {\text{and}}\quad 0\leq z_{1}\leq \cdots \leq z_{n}$
and a permutation $y_{\sigma (1)},\ldots ,y_{\sigma (n)}$ of $y_{1},\dots ,y_{n}$ and another permutation $z_{\tau (1)},\dots ,z_{\tau (n)}$ of $z_{1},\dots ,z_{n}.$ Then
$x_{1}y_{\sigma (1)}z_{\tau (1)}+\cdots +x_{n}y_{\sigma (n)}z_{\tau (n)}\leq x_{1}y_{1}z_{1}+\cdots +x_{n}y_{n}z_{n}.$
Note that, unlike the standard rearrangement inequality (1), this statement requires the numbers to be nonnegative. A similar statement is true for any number of sequences with all numbers nonnegative.
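This generalization can also be sanity-checked by brute force over both permutations (sample values are arbitrary nonnegative numbers):

```python
from itertools import permutations

# Over all pairs of permutations (sigma, tau), the sum of products
# x_i * y_sigma(i) * z_tau(i) is maximized by the identity pairings.
# Nonnegativity of the entries is essential for this version.
x, y, z = [1, 2, 4], [0, 3, 5], [2, 2, 6]

best = max(
    sum(xi * yp * zp for xi, yp, zp in zip(x, sy, sz))
    for sy in permutations(y)
    for sz in permutations(z)
)
print(best == sum(xi * yi * zi for xi, yi, zi in zip(x, y, z)))  # True
```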
Functions instead of factors
Another generalization of the rearrangement inequality states that for all real numbers $x_{1}\leq \cdots \leq x_{n}$ and every choice of continuously differentiable functions $f_{i}:[x_{1},x_{n}]\to \mathbb {R} $ for $i=1,2,\ldots ,n$ such that their derivatives $f'_{1},\ldots ,f'_{n}$ satisfy
$f'_{1}(x)\leq f'_{2}(x)\leq \cdots \leq f'_{n}(x)\quad {\text{ for all }}x\in [x_{1},x_{n}],$
the inequality
$\sum _{i=1}^{n}f_{n-i+1}(x_{i})\leq \sum _{i=1}^{n}f_{\sigma (i)}(x_{i})\leq \sum _{i=1}^{n}f_{i}(x_{i})$
holds for every permutation $f_{\sigma (1)},\ldots ,f_{\sigma (n)}$ of $f_{1},\ldots ,f_{n}.$[2] Taking real numbers $y_{1}\leq \cdots \leq y_{n}$ and the linear functions $f_{i}(x):=xy_{i}$ for real $x$ and $i=1,\ldots ,n,$ the standard rearrangement inequality (1) is recovered.
See also
• Hardy–Littlewood inequality
• Chebyshev's sum inequality
References
1. Hardy, G.H.; Littlewood, J.E.; Pólya, G. (1952), Inequalities, Cambridge Mathematical Library (2. ed.), Cambridge: Cambridge University Press, ISBN 0-521-05206-8, MR 0046395, Zbl 0047.05302, Section 10.2, Theorem 368
2. Holstermann, Jan (2017), "A Generalization of the Rearrangement Inequality" (PDF), Mathematical Reflections, no. 5 (2017)
Rebecca Goldin
Rebecca Freja Goldin is an American mathematician who works as a professor of mathematical sciences at George Mason University[1] and director of the Statistical Assessment Service, a nonprofit organization associated with GMU that aims to improve the use of statistics in journalism.[2] Her mathematical research concerns symplectic geometry, including work on Hamiltonian actions and symplectic quotients.[3]
Rebecca Goldin
NationalityAmerican
Alma materPh.D., Massachusetts Institute of Technology, 1999
Known forWork on Hamiltonian actions and symplectic quotients
AwardsRuth I. Michler Memorial Prize
AWM/MAA Falconer Lecturer 2008
Scientific career
FieldsMathematics
InstitutionsGeorge Mason University
ThesisThe Cohomology of Weight Varieties
Doctoral advisorVictor Guillemin
Websitemath.gmu.edu/~rgoldin/
Education and career
After graduating with honors in mathematics from Harvard University,[4] Goldin studied in France for a year with Bernard Teissier at the École Normale Supérieure,[5] pursuing research on toric varieties.[3] She completed her Ph.D. in 1999 at the Massachusetts Institute of Technology under the supervision of Victor Guillemin.[6]
After postdoctoral research at the University of Maryland, she joined the GMU faculty in 2001.[4][5]
Recognition
She was the inaugural winner of the Ruth I. Michler Memorial Prize of the Association for Women in Mathematics (AWM), in 2007.[3][5] She was also the 2008 AWM/MAA Falconer Lecturer, speaking on "The Use and Abuse of Statistics in the Media".[7]
She was included in the 2019 class of fellows of the American Mathematical Society "for contributions to differential geometry and service to the mathematical community, particularly in support of promoting mathematical and statistical thinking to a wide audience".[8]
References
1. GMU Math Faculty, retrieved 2016-06-30.
2. about STATS.org, retrieved 2016-06-30.
3. Rebecca Goldin selected as first recipient of Ruth I. Michler Memorial Prize (PDF), Association for Women in Mathematics, March 28, 2007.
4. An Interview with Rebecca Goldin, Mathematical Association of America, October 31, 2008, retrieved 2016-06-30.
5. "Mathematics Professor Goldin Receives First Michler Prize", The Mason Gazette, March 30, 2007.
6. Rebecca Goldin at the Mathematics Genealogy Project
7. Rebecca Goldin named 2008 Falconer Lecturer (PDF), Association for Women in Mathematics, retrieved 2016-07-06.
8. 2019 Class of the Fellows of the AMS, American Mathematical Society, retrieved 2018-11-07
External links
• Home page
Rebecca A. Herb
Rebecca A. Herb (born 1948) is an American mathematician, a professor emerita at the University of Maryland.[1] Her research involves abstract algebra and Lie groups.
In 2012, Herb became one of the inaugural fellows of the American Mathematical Society.[2] In 2013, she was one of ten recipients of the first Service Awards of the Association for Women in Mathematics “for her service as AWM Treasurer (2004–2012), and her help during AWM’s transition from its headquarters at the University of Maryland to the management company STAT.”[3][4]
Herb earned her Ph.D. in 1974 from the University of Washington under the supervision of Garth William Warner, Jr.[5] From 2004 until 2012 (when she was succeeded by Ellen Kirkman) Herb was treasurer of the Association for Women in Mathematics.[6]
Herb was an American Mathematical Society (AMS) Council member at large.[7]
References
1. Faculty profile, Univ. of Maryland, retrieved 2014-12-31.
2. List of Fellows of the American Mathematical Society, retrieved 2014-12-31.
3. Kehoe, Elaine (May 2013), "AWM Awards Given in San Diego" (PDF), Mathematical People, Notices of the American Mathematical Society, 60 (5): 616–617, doi:10.1090/noti985.
4. "AWM Service Award 2013". Association for Women in Mathematics (AWM). Retrieved 2020-01-17.
5. Rebecca A. Herb at the Mathematics Genealogy Project
6. AWM History, retrieved 2014-12-31.
7. "AMS Committees". American Mathematical Society. Retrieved 2023-03-27.
Peter Gustav Lejeune Dirichlet
Johann Peter Gustav Lejeune Dirichlet (German: [ləˈʒœn diʁiˈkleː];[1] 13 February 1805 – 5 May 1859) was a German mathematician who made contributions to number theory (including creating the field of analytic number theory), and to the theory of Fourier series and other topics in mathematical analysis; he is credited with being one of the first mathematicians to give the modern formal definition of a function.
Peter Gustav Lejeune Dirichlet
Born: Johann Peter Gustav Lejeune Dirichlet, 13 February 1805, Düren, French Empire
Died: 5 May 1859 (aged 54), Göttingen, Kingdom of Hanover
Nationality: German
Known for: See full list
Awards: PhD (Hon), University of Bonn (1827); Pour le Mérite (1855)
Scientific career
Fields: Mathematics
Institutions: University of Breslau; University of Berlin; University of Göttingen
Thesis: Partial Results on Fermat's Last Theorem, Exponent 5 (1827)
Academic advisors: Siméon Poisson, Joseph Fourier, Carl Gauss
Doctoral students: Gotthold Eisenstein, Leopold Kronecker, Rudolf Lipschitz, Carl Wilhelm Borchardt
Other notable students: Moritz Cantor, Elwin Bruno Christoffel, Richard Dedekind, Alfred Enneper, Eduard Heine, Bernhard Riemann, Ludwig Schläfli, Ludwig von Seidel, Wilhelm Weber, Julius Weingarten
Although his surname is Lejeune Dirichlet, he is commonly referred to by his mononym Dirichlet, in particular for results named after him.
Biography
Early life (1805–1822)
Gustav Lejeune Dirichlet was born on 13 February 1805 in Düren, a town on the left bank of the Rhine which at the time was part of the First French Empire, reverting to Prussia after the Congress of Vienna in 1815. His father Johann Arnold Lejeune Dirichlet was the postmaster, merchant, and city councilor. His paternal grandfather had come to Düren from Richelette (or more likely Richelle), a small community 5 km (3 miles) northeast of Liège in Belgium, from which his surname "Lejeune Dirichlet" ("le jeune de Richelette", French for "the youth from Richelette") was derived.[2]
Although his family was not wealthy and he was the youngest of seven children, his parents supported his education. They enrolled him in an elementary school and then private school in hope that he would later become a merchant. The young Dirichlet, who showed a strong interest in mathematics before age 12, persuaded his parents to allow him to continue his studies. In 1817 they sent him to the Gymnasium Bonn under the care of Peter Joseph Elvenich, a student his family knew. In 1820, Dirichlet moved to the Jesuit Gymnasium in Cologne, where his lessons with Georg Ohm helped widen his knowledge in mathematics. He left the gymnasium a year later with only a certificate, as his inability to speak fluent Latin prevented him from earning the Abitur.[2]
Studies in Paris (1822–1826)
Dirichlet again persuaded his parents to provide further financial support for his studies in mathematics, against their wish for a career in law. As Germany provided little opportunity to study higher mathematics at the time, with only Gauss at the University of Göttingen who was nominally a professor of astronomy and anyway disliked teaching, Dirichlet decided to go to Paris in May 1822. There he attended classes at the Collège de France and at the University of Paris, learning mathematics from Hachette among others, while undertaking private study of Gauss's Disquisitiones Arithmeticae, a book he kept close for his entire life. In 1823 he was recommended to General Maximilien Foy, who hired him as a private tutor to teach his children German, the wage finally allowing Dirichlet to become independent from his parents' financial support.[3]
His first original research, comprising part of a proof of Fermat's Last Theorem for the case n = 5, brought him immediate fame, being the first advance in the theorem since Fermat's own proof of the case n = 4 and Euler's proof for n = 3. Adrien-Marie Legendre, one of the referees, soon completed the proof for this case; Dirichlet completed his own proof a short time after Legendre, and a few years later produced a full proof for the case n = 14.[4] In June 1825 he was accepted to lecture on his partial proof for the case n = 5 at the French Academy of Sciences, an exceptional feat for a 20-year-old student with no degree.[2] His lecture at the Academy had also put Dirichlet in close contact with Fourier and Poisson, who raised his interest in theoretical physics, especially Fourier's analytic theory of heat.
Back to Prussia, Breslau (1825–1828)
As General Foy died in November 1825 and he could not find any paying position in France, Dirichlet had to return to Prussia. Fourier and Poisson introduced him to Alexander von Humboldt, who had been called to join the court of King Friedrich Wilhelm III. Humboldt, planning to make Berlin a center of science and research, immediately offered his help to Dirichlet, sending letters in his favour to the Prussian government and to the Prussian Academy of Sciences. Humboldt also secured a recommendation letter from Gauss, who upon reading his memoir on Fermat's theorem wrote with an unusual amount of praise that "Dirichlet showed excellent talent".[5] With the support of Humboldt and Gauss, Dirichlet was offered a teaching position at the University of Breslau. However, as he had not passed a doctoral dissertation, he submitted his memoir on the Fermat theorem as a thesis to the University of Bonn. Again his lack of fluency in Latin rendered him unable to hold the required public disputation of his thesis; after much discussion, the university decided to bypass the problem by awarding him an honorary doctorate in February 1827. Also, the Minister of Education granted him a dispensation for the Latin disputation required for the Habilitation. Dirichlet earned the Habilitation and lectured in the 1827–28 year as a Privatdozent at Breslau.[2]
While in Breslau, Dirichlet continued his number-theoretic research, publishing important contributions to the biquadratic reciprocity law, which at the time was a focal point of Gauss's research. Alexander von Humboldt took advantage of these new results, which had also drawn enthusiastic praise from Friedrich Bessel, to arrange for him the desired transfer to Berlin. Given Dirichlet's young age (he was 23 years old at the time), Humboldt was able to get him only a trial position at the Prussian Military Academy in Berlin while he remained nominally employed by the University of Breslau. The probationary arrangement was extended for three years, until the position became permanent in 1831.
Marriage to Rebecka Mendelssohn
After Dirichlet's move to Berlin, Humboldt introduced him to the great salons held by the banker Abraham Mendelssohn Bartholdy and his family. Their house was a weekly gathering point for Berlin artists and scientists, including Abraham's children Felix and Fanny Mendelssohn, both outstanding musicians, and the painter Wilhelm Hensel (Fanny's husband). Dirichlet showed great interest in Abraham's daughter Rebecka, whom he married in 1832.
Rebecka Henriette Lejeune Dirichlet (née Rebecka Mendelssohn; 11 April 1811 – 1 December 1858) was a granddaughter of Moses Mendelssohn and the youngest sister of Felix Mendelssohn and Fanny Mendelssohn.[6][7] Rebecka was born in Hamburg.[8] In 1816 her parents arranged for her to be baptised at which point she took the names Rebecka Henriette Mendelssohn Bartholdy.[9] She became a part of the notable salon of her parents, Abraham Mendelssohn and his wife Lea, having social contacts with the important musicians, artists and scientists in a highly creative period of German intellectual life. In 1829 she sang a small role in the premiere, given at the Mendelssohn house, of Felix's Singspiel Die Heimkehr aus der Fremde. She later wrote:
My older brother and sister stole my reputation as an artist. In any other family I would have been highly regarded as a musician and perhaps been leader of a group. Next to Felix and Fanny, I could not aspire to any recognition.[10]
In 1832 she married Dirichlet, who was introduced to the Mendelssohn family by Alexander von Humboldt.[11] In 1833 their first son, Walter, was born. She died in Göttingen in 1858.
Berlin (1826–1855)
As soon as he came to Berlin, Dirichlet applied to lecture at the University of Berlin, and the Education Minister approved the transfer and in 1831 assigned him to the faculty of philosophy. The faculty required him to undertake a renewed habilitation qualification, and although Dirichlet wrote a Habilitationsschrift as needed, he postponed giving the mandatory lecture in Latin for another 20 years, until 1851. As he had not completed this formal requirement, he remained attached to the faculty with less than full rights, including restricted emoluments, forcing him to keep in parallel his teaching position at the Military School. In 1832 Dirichlet became a member of the Prussian Academy of Sciences, the youngest member at only 27 years old.[2]
Dirichlet had a good reputation with students for the clarity of his explanations and enjoyed teaching, especially as his University lectures tended to be on the more advanced topics in which he was doing research: number theory (he was the first German professor to give lectures on number theory), analysis and mathematical physics. He advised the doctoral theses of several important German mathematicians, as Gotthold Eisenstein, Leopold Kronecker, Rudolf Lipschitz and Carl Wilhelm Borchardt, while being influential in the mathematical formation of many other scientists, including Elwin Bruno Christoffel, Wilhelm Weber, Eduard Heine, Ludwig von Seidel and Julius Weingarten. At the Military Academy, Dirichlet managed to introduce differential and integral calculus in the curriculum, raising the level of scientific education there. However, he gradually started feeling that his double teaching load, at the Military academy and at the university, was limiting the time available for his research.[2]
While in Berlin, Dirichlet kept in contact with other mathematicians. In 1829, during a trip, he met Carl Jacobi, at the time professor of mathematics at Königsberg University. Over the years they kept meeting and corresponding on research matters, in time becoming close friends. In 1839, during a visit to Paris, Dirichlet met Joseph Liouville, the two mathematicians becoming friends, keeping in contact and even visiting each other with the families a few years later. In 1839, Jacobi sent Dirichlet a paper by Ernst Kummer, at the time a schoolteacher. Realizing Kummer's potential, they helped him get elected in the Berlin Academy and, in 1842, obtained for him a full professor position at the University of Breslau. In 1840 Kummer married Ottilie Mendelssohn, a cousin of Rebecka's.
In 1843, when Jacobi fell ill, Dirichlet traveled to Königsberg to help him, then obtained for him the assistance of King Friedrich Wilhelm IV's personal physician. When the physician recommended that Jacobi spend some time in Italy, Dirichlet joined him on the trip together with his family. They were accompanied to Italy by Ludwig Schläfli, who came as a translator; as he was strongly interested in mathematics, both Dirichlet and Jacobi lectured to him during the trip, and he later became an important mathematician himself.[2] The Dirichlet family extended their stay in Italy to 1845, their daughter Flora being born there. In 1844, Jacobi moved to Berlin as a royal pensioner, their friendship becoming even closer. In 1846, when the Heidelberg University tried to recruit Dirichlet, Jacobi provided von Humboldt the needed support to obtain a doubling of Dirichlet's pay at the university in order to keep him in Berlin; however, even then he was not paid a full professor wage and could not leave the Military Academy.[12]
Holding liberal views, Dirichlet and his family supported the 1848 revolution; he even guarded with a rifle the palace of the Prince of Prussia. After the revolution failed, the Military Academy closed temporarily, causing him a large loss of income. When it reopened, the environment became more hostile to him, as officers he was teaching were expected to be loyal to the constituted government. Some of the press who had not sided with the revolution pointed him out, as well as Jacobi and other liberal professors, as "the red contingent of the staff".[2]
In 1849 Dirichlet participated, together with his friend Jacobi, in the jubilee of Gauss's doctorate.
Göttingen (1855–1859)
Despite Dirichlet's expertise and the honours he received, and even though, by 1851, he had finally completed all formal requirements for a full professor, the issue of raising his pay at the university still dragged on and he was still unable to leave the Military Academy. In 1855, upon Gauss's death, the University of Göttingen decided to call Dirichlet as his successor. Given the difficulties faced in Berlin, he decided to accept the offer and immediately moved to Göttingen with his family. Kummer was called to assume his position as a professor of mathematics in Berlin.[3]
Dirichlet enjoyed his time in Göttingen, as the lighter teaching load allowed him more time for research and he came into close contact with the new generation of researchers, especially Richard Dedekind and Bernhard Riemann. After moving to Göttingen he was able to obtain a small annual stipend for Riemann to retain him in the teaching staff there. Dedekind, Riemann, Moritz Cantor and Alfred Enneper, although they had all already earned their PhDs, attended Dirichlet's classes to study with him. Dedekind, who felt that there were gaps in his mathematics education, considered that the occasion to study with Dirichlet made him "a new human being".[2] He later edited and published Dirichlet's lectures and other results in number theory under the title Vorlesungen über Zahlentheorie (Lectures on Number Theory).
In the summer of 1858, during a trip to Montreux, Dirichlet suffered a heart attack. On 5 May 1859, he died in Göttingen, several months after the death of his wife Rebecka.[3] Dirichlet's brain is preserved in the department of physiology at the University of Göttingen, along with the brain of Gauss. The Academy in Berlin honored him with a formal memorial speech presented by Kummer in 1860, and later ordered the publication of his collected works edited by Kronecker and Lazarus Fuchs.
Mathematics research
Further information: List of things named after Peter Gustav Lejeune Dirichlet
Number theory
Number theory was Dirichlet's main research interest,[13] a field in which he found several deep results and in proving them introduced some fundamental tools, many of which were later named after him. In 1837, he published Dirichlet's theorem on arithmetic progressions, using mathematical analysis concepts to tackle an algebraic problem and thus creating the branch of analytic number theory. In proving the theorem, he introduced the Dirichlet characters and L-functions.[13][14] In the same article, he noted the difference between the absolute and conditional convergence of series and its impact on what was later called the Riemann series theorem. In 1841, he generalized his arithmetic progressions theorem from integers to the ring of Gaussian integers $\mathbb {Z} [i]$.[2]
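Dirichlet's theorem guarantees infinitely many primes in every arithmetic progression a, a + q, a + 2q, … with gcd(a, q) = 1, and the primes in fact distribute evenly among the admissible residue classes. The following short Python sketch (an illustration, not part of the original article) tallies primes below a bound by residue class modulo 10:

```python
from math import gcd

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [p for p, flag in enumerate(is_prime) if flag]

def prime_counts_mod(q, limit):
    """Count primes <= limit in each residue class a mod q with gcd(a, q) = 1."""
    counts = {a: 0 for a in range(q) if gcd(a, q) == 1}
    for p in primes_up_to(limit):
        if gcd(p, q) == 1:
            counts[p % q] += 1
    return counts

# The four admissible classes 1, 3, 7, 9 mod 10 each receive
# roughly a quarter of the primes below 100,000.
counts = prime_counts_mod(10, 100_000)
```

Running this shows the four counts are nearly equal, the quantitative refinement of Dirichlet's theorem later proved as the prime number theorem for arithmetic progressions.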
In a couple of papers in 1838 and 1839, he proved the first class number formula, for quadratic forms (later refined by his student Kronecker). The formula, which Jacobi called a result "touching the utmost of human acumen", opened the way for similar results regarding more general number fields.[2] Based on his research of the structure of the unit group of quadratic fields, he proved the Dirichlet unit theorem, a fundamental result in algebraic number theory.[14]
He first used the pigeonhole principle, a basic counting argument, in the proof of a theorem in diophantine approximation, later named Dirichlet's approximation theorem after him. He published important contributions to Fermat's Last Theorem, for which he proved the cases n = 5 and n = 14, and to the biquadratic reciprocity law.[2] The Dirichlet divisor problem, for which he found the first results, is still an unsolved problem in number theory despite later contributions by other mathematicians.
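Dirichlet's approximation theorem states that for any real α and positive integer N there exist integers p, q with 1 ≤ q ≤ N and |α − p/q| < 1/(qN). A brief Python search (an illustration, not part of the original article) finds such a pair:

```python
from math import pi

def dirichlet_approx(alpha, N):
    """Return (p, q) with 1 <= q <= N and |alpha - p/q| < 1/(q*N).

    Dirichlet proved such a pair always exists via the pigeonhole
    principle; here we simply search for the smallest such q."""
    for q in range(1, N + 1):
        p = round(alpha * q)  # best numerator for this denominator
        if abs(alpha - p / q) < 1 / (q * N):
            return p, q
    raise AssertionError("contradicts Dirichlet's approximation theorem")

# With alpha = pi and N = 200 the search recovers the classical
# approximation 355/113.
p, q = dirichlet_approx(pi, 200)
```

Since 1/(qN) ≤ 1/q², every pair produced this way also satisfies the familiar bound |α − p/q| < 1/q².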
Analysis
Inspired by the work of his mentor in Paris, Dirichlet published in 1829 a famous memoir giving the conditions under which the Fourier series of a function converges.[15] Before Dirichlet's solution, not only Fourier, but also Poisson and Cauchy had tried unsuccessfully to find a rigorous proof of convergence. The memoir pointed out Cauchy's mistake and introduced Dirichlet's test for the convergence of series. It also introduced the Dirichlet function as an example of a function that is not integrable (the definite integral was still a developing topic at the time) and, in the proof of the theorem for the Fourier series, introduced the Dirichlet kernel and the Dirichlet integral.[16]
Dirichlet also studied the first boundary value problem, for the Laplace equation, proving the uniqueness of the solution; this type of problem in the theory of partial differential equations was later named the Dirichlet problem after him. A function satisfying a partial differential equation subject to the Dirichlet boundary conditions must have fixed values on the boundary.[13] In the proof he notably used the principle that the solution is the function that minimizes the so-called Dirichlet energy. Riemann later named this approach the Dirichlet principle, although he knew it had also been used by Gauss and by Lord Kelvin.[2]
Introduction of the modern concept of function
While trying to gauge the range of functions for which convergence of the Fourier series can be shown, Dirichlet defined a function by the property that "to any x there corresponds a single finite y", but then restricted his attention to piecewise continuous functions. On this basis, he is credited with introducing the modern concept of a function, as opposed to the older vague understanding of a function as an analytic formula.[2] Imre Lakatos cites Hermann Hankel as the early origin of this attribution, but disputes the claim, saying that "there is ample evidence that he had no idea of this concept [...] for instance, when he discusses piecewise continuous functions, he says that at points of discontinuity the function has two values".[17]
Other fields
Dirichlet also worked in mathematical physics, lecturing and publishing research in potential theory (including the Dirichlet problem and Dirichlet principle mentioned above), the theory of heat and hydrodynamics.[13] He improved on Lagrange's work on conservative systems by showing that the condition for equilibrium is that the potential energy is minimal.[18]
Dirichlet also lectured on probability theory and least squares, introducing some original methods and results, in particular for limit theorems and an improvement of Laplace's method of approximation related to the central limit theorem.[19] The Dirichlet distribution and the Dirichlet process, based on the Dirichlet integral, are named after him.
Honors
Dirichlet was elected as a member of several academies:[20]
• Prussian Academy of Sciences (1832)
• Saint Petersburg Academy of Sciences (1833) – corresponding member
• Göttingen Academy of Sciences (1846)
• French Academy of Sciences (1854) – foreign member
• Royal Swedish Academy of Sciences (1854)
• Royal Belgian Academy of Sciences (1855)
• Royal Society (1855) – foreign member
In 1855 Dirichlet was awarded the civil class medal of the Pour le Mérite order at Alexander von Humboldt's recommendation. The Dirichlet crater on the Moon and the 11665 Dirichlet asteroid are named after him.
Selected publications
• Lejeune Dirichlet, J.P.G. (1889). L. Kronecker (ed.). Werke. Vol. 1. Berlin: Reimer.
• Lejeune Dirichlet, J.P.G. (1897). L. Kronecker, L. Fuchs (ed.). Werke. Vol. 2. Berlin: Reimer.
• Lejeune Dirichlet, J.P.G.; Richard Dedekind (1863). Vorlesungen über Zahlentheorie. F. Vieweg und sohn.
References
1. Dudenredaktion (2015). Duden – Das Aussprachewörterbuch: Betonung und Aussprache von über 132.000 Wörtern und Namen [Duden – The Pronouncing Dictionary: accent and pronunciation of more than 132.000 words and names]. Duden - Deutsche Sprache in 12 Bänden (in German). Vol. 6. 312. ISBN 978-3-411-91151-6.
2. Elstrodt, Jürgen (2007). "The Life and Work of Gustav Lejeune Dirichlet (1805–1859)" (PDF). Clay Mathematics Proceedings. Retrieved 25 December 2007.
3. James, Ioan Mackenzie (2003). Remarkable Mathematicians: From Euler to von Neumann. Cambridge University Press. pp. 103–109. ISBN 978-0-521-52094-2.
4. Krantz, Steven (2011). The Proof is in the Pudding: The Changing Nature of Mathematical Proof. Springer. pp. 55–58. ISBN 978-0-387-48908-7.
5. Goldstein, Cathérine; Catherine Goldstein; Norbert Schappacher; Joachim Schwermer (2007). The shaping of arithmetic: after C.F. Gauss's Disquisitiones Arithmeticae. Springer. pp. 204–208. ISBN 978-3-540-20441-1.
6. Mercer-Taylor, Peter (2000). The Life of Mendelssohn. Cambridge University Press. ISBN 978-0-521-63972-9.
7. Todd, R. Larry (2003). Mendelssohn: A Life in Music. Oxford University Press. ISBN 978-0-19-511043-2.
8. Todd 2003, 28.
9. Todd 2003, 33.
10. cited in Mercer-Taylor 2000, 66
11. Todd 2003, 192.
12. Calinger, Ronald (1996). Vita mathematica: historical research and integration with teaching. Cambridge University Press. pp. 156–159. ISBN 978-0-88385-097-8.
13. Gowers, Timothy; June Barrow-Green; Imre Leader (2008). The Princeton companion to mathematics. Princeton University Press. pp. 764–765. ISBN 978-0-691-11880-2.
14. Kanemitsu, Shigeru; Chaohua Jia (2002). Number theoretic methods: future trends. Springer. pp. 271–274. ISBN 978-1-4020-1080-4.
15. Lejeune Dirichlet (1829). "Sur la convergence des séries trigonométriques qui servent à représenter une fonction arbitraire entre des limites données" [On the convergence of trigonometric series that serve to represent an arbitrary function between given limits]. Journal für die reine und angewandte Mathematik. 4: 157–169.
16. Bressoud, David M. (2007). A radical approach to real analysis. MAA. pp. 218–227. ISBN 978-0-88385-747-2.
17. Lakatos, Imre (1976). Proofs and refutations: the logic of mathematical discovery. Cambridge University Press. pp. 151–152. ISBN 978-0-521-29038-8.
18. Leine, Remco; Nathan van de Wouw (2008). Stability and convergence of mechanical systems with unilateral constraints. Springer. p. 6. ISBN 978-3-540-76974-3.
19. Fischer, Hans (February 1994). "Dirichlet's contributions to mathematical probability theory". Historia Mathematica. Elsevier. 21 (1): 39–63. doi:10.1006/hmat.1994.1007.
20. "Obituary notices of deceased fellows". Proceedings of the Royal Society of London. Taylor and Francis. 10: xxxviii–xxxix. 1860. doi:10.1098/rspl.1859.0002. S2CID 186209363.
External links
• Media related to Peter Gustav Lejeune Dirichlet at Wikimedia Commons
• O'Connor, John J.; Robertson, Edmund F., "Peter Gustav Lejeune Dirichlet", MacTutor History of Mathematics Archive, University of St Andrews
• Elstrodt, Jürgen (2007). "The Life and Work of Gustav Lejeune Dirichlet (1805–1859)" (PDF). Clay Mathematics Proceedings. Retrieved 13 June 2010.
• Peter Gustav Lejeune Dirichlet at the Mathematics Genealogy Project.
• Johann Peter Gustav Lejeune Dirichlet – Œuvres complètes – Gallica-Math
Rebecca Waldecker
Rebecca Anne Hedwig Waldecker (born 1979) is a German mathematician specializing in group theory. She is professor for algebra at Martin Luther University of Halle-Wittenberg.[1]
Education and career
Waldecker is originally from Aachen.[1] She earned her doctorate (Dr. rer. nat.) at the University of Kiel in 2007, under the supervision of Helmut Bender,[2] and in 2014 completed her habilitation at Martin Luther University of Halle-Wittenberg.[1]
After postdoctoral research as a Leverhulme Fellow at the University of Birmingham, Waldecker joined Martin Luther University of Halle-Wittenberg as a junior professor in 2009. She became professor for algebra in 2015.[1]
Books
Waldecker is the author of the book Isolated Involutions in Finite Groups (Memoirs of the American Mathematical Society, 2013), developed from her doctoral dissertation.[3]
With Lasse Rempe-Gillen, she is the coauthor of Primzahltests für Einsteiger: Zahlentheorie, Algorithmik, Kryptographie (Vieweg+Teubner, 2009; 2nd ed., Springer, 2016), a book on primality tests that was translated into English as Primality Testing for Beginners (Student Mathematical Library 70, American Mathematical Society, 2014).[4]
For the second edition (Mathematik Kompakt, Springer, 2019), she became a coauthor of Gernot Stroth's 2012 textbook Elementare Algebra und Zahlentheorie.
References
1. Lebenslauf, retrieved 2020-08-16
2. Rebecca Waldecker at the Mathematics Genealogy Project
3. Reviews of Isolated Involutions in Finite Groups:
• George Glauberman, MR3136255
• Anatoli Kondrat’ev, Zbl 1298.20018
4. Reviews of Primzahltests für Einsteiger and Primality Testing for Beginners:
• Samuel S. Wagstaff Jr., MR3154407
• Wilfried Meidl, Zbl 1195.11003
• Chapman, Robin (July 2014), "Review" (PDF), London Mathematical Society Newsletter, no. 438, p. 49
• Green, Frederic (June 2016), ACM SIGACT News, 47 (2): 6–9, doi:10.1145/2951860.2951863, S2CID 26146309{{citation}}: CS1 maint: untitled periodical (link)
• Ehme, Jeffrey (November 2016), "A joy to review: Two books about primes and factoring", Cryptologia, 41 (1): 97–100, doi:10.1080/01611194.2016.1236625, S2CID 36760384
External links
• Home page
Rebecca Walo Omana
Rebecca Walo Omana (born 15 July 1951) is a Congolese mathematician, professor, and reverend sister. Omana became the first female mathematics professor in the Democratic Republic of the Congo in 1982.[1] She is the director of the mathematics and informatics doctoral program at the University of Kinshasa and is a vice-president of the African Women in Mathematics Association.[2] Her mathematical interests lie in differential equations, nonlinear analysis, and modeling.
Rebecca Walo Omana
Born: 15 July 1951, Democratic Republic of the Congo
Nationality: Congolese
Alma mater: Université catholique de Louvain
Known for: First female mathematics professor in the Democratic Republic of the Congo
Scientific career
Fields: Mathematics
Institutions: University of Kinshasa
Doctoral advisor: Jean Mawhin
Biography
Omana was born in the Democratic Republic of the Congo, on 15 July 1951. She was passionate about mathematics during high school.[1] She made her religious profession to the Catholic Soeurs de St Francois d'Assise at the age of 18, and made her sacred vows in 1978.[3][2]
Omana earned a Bachelor of Science in mathematics from the Université du Québec à Montréal in 1979. She earned her Master of Science in 1982 from the Université Laval. At both institutions, she was the only African woman in the department.[1] Of this period Omana says:
I had to make a double effort to be better and to remove the negative prejudices in the heads of my colleagues and my professors in order to be accepted. But in view of my results, I was not only accepted but invited by groups of colleagues to join research work. [1]
In 1982, Omana began working as a lecturer and became the first female mathematics professor in the Democratic Republic of the Congo.[1]
Omana earned her Diplôme d'études approfondies in 1985 and her Ph.D in 1990 from the Université catholique de Louvain where she worked with advisor Jean Mawhin.[1][4] She was the first Congolese woman to earn a doctorate there.[2]
At the founding of the quarterly multidisciplinary journal la revue Notre Dame de la Sagesse (RENODAS), Omana was listed as the director.[5] She has supervised numerous doctoral students.[4] She hopes that some of her doctoral students will join her among the small number of female professors in the Democratic Republic of the Congo.[1] Omana heads the mathematics doctoral program at the University of Kinshasa.[1] Since 2010,[2] she has served as the rector for the Université Notre-Dame de Tshumbe (UNITSHU), a Catholic public university which was founded in 2010 in Tshumbe, Democratic Republic of the Congo.[6]
Mathematical works
Omana has published two books.[2] Her work on ordinary differential equations has had applications in fields like epidemiology and law.
Personal life
Omana's parents are not academics, but some siblings hold master's degrees.[1] Her teachers and father influenced her decision to become a mathematician.[1]
She has said "mathematics is fantastic; as its name is female, it is a domain that should belong to us women".[1][lower-alpha 1]
See also
• Timeline of women in mathematics
• Grace Alele-Williams
• Marie Françoise Ouedraogo
Notes
1. Referring to the French word mathématiques, which has the feminine gender
References
1. "Rebecca Walo OMANA". African Women in Mathematics Association. Retrieved 2021-01-15.
2. OKONDJO, Pierre Claude. "LE PROFIL DU RECTEUR DE L'UNIVERSITÉ NOTRE-DAME DE TSHUMBE". DIOCÈSE DE TSHUMBE. Retrieved 2021-01-16.
3. Marie, diocese-tshumbe-ste. "SOEURS DE ST FRANCOIS D´ASSISE". Diocèse de Tshumbe Sainte Marie. Retrieved 2021-01-18.
4. "Rébecca Walo Omana - The Mathematics Genealogy Project". www.mathgenealogy.org. Retrieved 2022-02-11.
5. "La Revue Notre-Dame de la Sagesse". UNIVERSITÉ NOTRE-DAME DE TSHUMBE. Retrieved 2021-01-16.
6. "Présentation de l'Université Notre Dame de Tshumbe". UNIVERSITÉ NOTRE-DAME DE TSHUMBE. Retrieved 2021-01-18.
Recamán's sequence
In mathematics and computer science, Recamán's sequence[1][2] is a well-known sequence defined by a recurrence relation. Because each element is related to the previous elements in a straightforward way, the sequence is often defined using recursion.
It is named after its inventor, Bernardo Recamán Santos, a Colombian mathematician.
Definition
Recamán's sequence $a_{0},a_{1},a_{2}\dots $ is defined as:
$a_{n}={\begin{cases}0&&{\text{if }}n=0\\a_{n-1}-n&&{\text{if }}a_{n-1}-n>0{\text{ and is not already in the sequence}}\\a_{n-1}+n&&{\text{otherwise}}\end{cases}}$
The first terms of the sequence are:
0, 1, 3, 6, 2, 7, 13, 20, 12, 21, 11, 22, 10, 23, 9, 24, 8, 25, 43, 62, 42, 63, 41, 18, 42, 17, 43, 16, 44, 15, 45, 14, 46, 79, 113, 78, 114, 77, 39, 78, 38, 79, 37, 80, 36, 81, 35, 82, 34, 83, 33, 84, 32, 85, 31, 86, 30, 87, 29, 88, 28, 89, 27, 90, 26, 91, 157, 224, 156, 225, 155, ...
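The recurrence translates directly into code; a minimal Python sketch (an illustration, not part of the original article):

```python
def recaman(n_terms):
    """First n_terms of Recamán's sequence (OEIS A005132)."""
    seq = [0]
    seen = {0}
    for n in range(1, n_terms):
        candidate = seq[-1] - n
        # step backwards only if the result is positive and not yet used
        if candidate <= 0 or candidate in seen:
            candidate = seq[-1] + n
        seq.append(candidate)
        seen.add(candidate)
    return seq

# recaman(10) reproduces the first terms listed above:
# [0, 1, 3, 6, 2, 7, 13, 20, 12, 21]
```

The set `seen` makes the "not already in the sequence" test run in constant time per term.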
On-Line Encyclopedia of Integer Sequences (OEIS)
Recamán's sequence was named after its inventor, Colombian mathematician Bernardo Recamán Santos, by Neil Sloane, creator of the On-Line Encyclopedia of Integer Sequences (OEIS). The OEIS entry for this sequence is A005132.
Although Neil Sloane has collected more than 325,000 sequences since 1964, Recamán's sequence was among those highlighted in his paper My favorite integer sequences.[4] He has also stated that, of all the sequences in the OEIS, it is his favorite to listen to.[1]
Visual representation
The most common visualization of Recamán's sequence is simply a plot of its values, such as in the figure at right.
On January 14, 2018, the Numberphile YouTube channel published a video titled The Slightly Spooky Recamán Sequence,[3] showing a visualization that uses alternating semicircles, as shown in the figure at the top of this page.
Sound representation
Values of the sequence can be associated with musical notes; the running of the sequence can then be heard as a musical tune.[6]
Properties
The sequence satisfies:[1]
$a_{n}\geq 0$
$|a_{n}-a_{n-1}|=n$
This is not a permutation of the integers: the first repeated term is $42=a_{24}=a_{20}$.[7] Another one is $43=a_{18}=a_{26}$.
Conjecture
Neil Sloane has conjectured that every number eventually appears,[8][9][10] but this has not been proved. Even though $10^{230}$ terms have been calculated (as of 2018), the number 852,655 had not yet appeared on the list.[1]
Uses
Besides its mathematical and aesthetic properties, Recamán's sequence can be used to secure 2D images by steganography.[11]
Alternate sequence
The sequence above is Recamán's best-known invention, but he defined another, lesser-known sequence:
$a_{1}=1$
$a_{n+1}={\begin{cases}a_{n}/n&&{\text{if }}n{\text{ divides }}a_{n}\\na_{n}&&{\text{otherwise}}\end{cases}}$
This OEIS entry is A008336.
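The recurrence is easy to run; a minimal Python sketch (function name ours) reproduces the first terms 1, 1, 2, 6, 24, 120, 20, 140, …:

```python
def recaman_a008336(n):
    """First n terms of Recamán's other sequence (OEIS A008336), 1-indexed."""
    seq = [1]                      # a_1 = 1
    for k in range(1, n):
        a = seq[-1]
        # a_{k+1} = a_k / k if k divides a_k, else k * a_k
        seq.append(a // k if a % k == 0 else a * k)
    return seq

print(recaman_a008336(8))  # [1, 1, 2, 6, 24, 120, 20, 140]
```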
References
1. "A005132 - Oeis".
2. "Recamán's Sequence".
3. The Slightly Spooky Recamán Sequence, Numberphile video.
4. N. J. A. Sloane, Sequences and their Applications (Proceedings of SETA '98), C. Ding, T. Helleseth and H. Niederreiter (editors), Springer-Verlag, London, 1999, pp. 103–130.
5. Ugalde, Laurence R. "Recamán sequence in Fōrmulæ programming language". Fōrmulæ. Retrieved July 26, 2021.
6. "The On-Line Encyclopedia of Integer Sequences® (OEIS®)".
7. Math less traveled
8. "A057167 - Oeis".
9. "A064227 - Oeis".
10. "A064228 - Oeis".
11. S. Farrag and W. Alexan, "Secure 2D Image Steganography Using Recamán's Sequence," 2019 International Conference on Advanced Communication Technologies and Networking (CommNet), Rabat, Morocco, 2019, pp. 1-6. doi: 10.1109/COMMNET.2019.8742368
External links
• OEIS sequence A005132 (Recamán's sequence)
• The Slightly Spooky Recamán Sequence. (June 14, 2018) Numberphile on YouTube
• The Recamán's sequence at Rosetta Code
Recession cone
In mathematics, especially convex analysis, the recession cone of a set $A$ is a cone containing all vectors such that $A$ recedes in that direction. That is, the set extends outward in all the directions given by the recession cone.[1]
Mathematical definition
Given a nonempty set $A\subset X$ for some vector space $X$, then the recession cone $\operatorname {recc} (A)$ is given by
$\operatorname {recc} (A)=\{y\in X:\forall x\in A,\forall \lambda \geq 0:x+\lambda y\in A\}.$[2]
If $A$ is additionally a convex set then the recession cone can equivalently be defined by
$\operatorname {recc} (A)=\{y\in X:\forall x\in A:x+y\in A\}.$[3]
If $A$ is a nonempty closed convex set then the recession cone can equivalently be defined as
$\operatorname {recc} (A)=\bigcap _{t>0}t(A-a)$ for any choice of $a\in A.$[3]
Properties
• If $A$ is a nonempty set then $0\in \operatorname {recc} (A)$.
• If $A$ is a nonempty convex set then $\operatorname {recc} (A)$ is a convex cone.[3]
• If $A$ is a nonempty closed convex subset of a finite-dimensional Hausdorff space (e.g. $\mathbb {R} ^{d}$), then $\operatorname {recc} (A)=\{0\}$ if and only if $A$ is bounded.[1][3]
• If $A$ is a nonempty set then $A+\operatorname {recc} (A)=A$ where the sum denotes Minkowski addition.
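The definition can be probed numerically. The sketch below (the example set and all helper names are our illustration) samples points of the closed convex set A = {(x, y) : y ≥ x²} and tests whether moving along a candidate direction stays inside A; only vertical directions survive, matching $\operatorname {recc} (A)=\{0\}\times [0,\infty )$:

```python
def in_A(p):
    # membership test for A = {(x, y) : y >= x**2}, a closed convex set
    x, y = p
    return y >= x * x

def seems_recession_dir(d, samples, lambdas):
    # numerically probe the definition of recc(A): x + lam*d must stay in A
    return all(in_A((x + lam * d[0], y + lam * d[1]))
               for (x, y) in samples for lam in lambdas)

samples = [(float(x), float(x * x)) for x in range(-3, 4)]  # boundary points of A
lambdas = [0.0, 0.5, 1.0, 10.0, 1000.0]
print(seems_recession_dir((0.0, 1.0), samples, lambdas))  # True: vertical ray stays in A
print(seems_recession_dir((1.0, 5.0), samples, lambdas))  # False: eventually exits A
```

This is only a finite spot check, not a proof of membership in the recession cone.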
Relation to asymptotic cone
The asymptotic cone for $C\subseteq X$ is defined by
$C_{\infty }=\{x\in X:\exists (t_{i})_{i\in I}\subset (0,\infty ),\exists (x_{i})_{i\in I}\subset C:t_{i}\to 0,t_{i}x_{i}\to x\}.$[4][5]
By the definition it can easily be shown that $\operatorname {recc} (C)\subseteq C_{\infty }.$[4]
In a finite-dimensional space, it can be shown that $C_{\infty }=\operatorname {recc} (C)$ if $C$ is nonempty, closed and convex.[5] In infinite-dimensional spaces, the relation between asymptotic cones and recession cones is more complicated, with conditions for their equivalence summarized in [6].
Sum of closed sets
• Dieudonné's theorem: Let $A,B\subset X$ be nonempty closed convex subsets of a locally convex space $X$. If either $A$ or $B$ is locally compact and $\operatorname {recc} (A)\cap \operatorname {recc} (B)$ is a linear subspace, then $A-B$ is closed.[7][3]
• Let $A,B\subset \mathbb {R} ^{d}$ be nonempty closed convex sets such that $-y\not \in \operatorname {recc} (B)$ for every $y\in \operatorname {recc} (A)\backslash \{0\}$; then $A+B$ is closed.[1][4]
See also
• Barrier cone
References
1. Rockafellar, R. Tyrrell (1997) [1970]. Convex Analysis. Princeton, NJ: Princeton University Press. pp. 60–76. ISBN 978-0-691-01586-6.
2. Borwein, Jonathan; Lewis, Adrian (2006). Convex Analysis and Nonlinear Optimization: Theory and Examples (2 ed.). Springer. ISBN 978-0-387-29570-1.
3. Zălinescu, Constantin (2002). Convex analysis in general vector spaces. River Edge, NJ: World Scientific Publishing Co., Inc. pp. 6–7. ISBN 981-238-067-1. MR 1921556.
4. Kim C. Border. "Sums of sets, etc" (PDF). Retrieved March 7, 2012.
5. Alfred Auslender; M. Teboulle (2003). Asymptotic cones and functions in optimization and variational inequalities. Springer. pp. 25–80. ISBN 978-0-387-95520-9.
6. Zălinescu, Constantin (1993). "Recession cones and asymptotically compact sets". Journal of Optimization Theory and Applications. Springer Netherlands. 77 (1): 209–220. doi:10.1007/bf00940787. ISSN 0022-3239. S2CID 122403313.
7. J. Dieudonné (1966). "Sur la séparation des ensembles convexes". Math. Ann.. 163: 1–3. doi:10.1007/BF02052480. S2CID 119742919.
Multiplicative inverse
In mathematics, a multiplicative inverse or reciprocal for a number x, denoted by 1/x or x−1, is a number which when multiplied by x yields the multiplicative identity, 1. The multiplicative inverse of a fraction a/b is b/a. For the multiplicative inverse of a real number, divide 1 by the number. For example, the reciprocal of 5 is one fifth (1/5 or 0.2), and the reciprocal of 0.25 is 1 divided by 0.25, or 4. The reciprocal function, the function f(x) that maps x to 1/x, is one of the simplest examples of a function which is its own inverse (an involution).
"Reciprocal (mathematics)" redirects here. Not to be confused with reciprocation (geometry).
Multiplying by a number is the same as dividing by its reciprocal and vice versa. For example, multiplication by 4/5 (or 0.8) will give the same result as division by 5/4 (or 1.25). Therefore, multiplication by a number followed by multiplication by its reciprocal yields the original number (since the product of the number and its reciprocal is 1).
The term reciprocal was in common use at least as far back as the third edition of Encyclopædia Britannica (1797) to describe two numbers whose product is 1; geometrical quantities in inverse proportion are described as reciprocall in a 1570 translation of Euclid's Elements.[1]
In the phrase multiplicative inverse, the qualifier multiplicative is often omitted and then tacitly understood (in contrast to the additive inverse). Multiplicative inverses can be defined over many mathematical domains as well as numbers. In these cases it can happen that ab ≠ ba; then "inverse" typically implies that an element is both a left and right inverse.
The notation f −1 is sometimes also used for the inverse function of the function f, which is for most functions not equal to the multiplicative inverse. For example, the multiplicative inverse 1/(sin x) = (sin x)−1 is the cosecant of x, and not the inverse sine of x denoted by sin−1 x or arcsin x. The terminology difference reciprocal versus inverse is not sufficient to make this distinction, since many authors prefer the opposite naming convention, probably for historical reasons (for example in French, the inverse function is preferably called the bijection réciproque).
Examples and counterexamples
In the real numbers, zero does not have a reciprocal (division by zero is undefined) because no real number multiplied by 0 produces 1 (the product of any number with zero is zero). With the exception of zero, reciprocals of every real number are real, reciprocals of every rational number are rational, and reciprocals of every complex number are complex. The property that every element other than zero has a multiplicative inverse is part of the definition of a field, of which these are all examples. On the other hand, no integer other than 1 and −1 has an integer reciprocal, and so the integers are not a field.
In modular arithmetic, the modular multiplicative inverse of a is also defined: it is the number x such that ax ≡ 1 (mod n). This multiplicative inverse exists if and only if a and n are coprime. For example, the inverse of 3 modulo 11 is 4 because 4 ⋅ 3 ≡ 1 (mod 11). The extended Euclidean algorithm may be used to compute it.
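The extended Euclidean algorithm can be sketched in a few lines; this hedged Python version (names and error handling ours) returns the inverse in the range 0..n−1:

```python
def mod_inverse(a, n):
    """Modular multiplicative inverse of a mod n via the extended
    Euclidean algorithm; raises ValueError when gcd(a, n) != 1."""
    old_r, r = a % n, n   # remainders; invariant: old_s * a ≡ old_r (mod n)
    old_s, s = 1, 0       # Bézout coefficients for a
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a and n are not coprime")
    return old_s % n

print(mod_inverse(3, 11))  # 4, since 4 * 3 = 12 ≡ 1 (mod 11)
```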
The sedenions are an algebra in which every nonzero element has a multiplicative inverse, but which nonetheless has divisors of zero, that is, nonzero elements x, y such that xy = 0.
A square matrix has an inverse if and only if its determinant has an inverse in the coefficient ring. The linear map that has the matrix A−1 with respect to some base is then the inverse function of the map having A as matrix in the same base. Thus, the two distinct notions of the inverse of a function are strongly related in this case, but they still do not coincide, since the multiplicative inverse of Ax would be (Ax)−1, not A−1x.
These two notions of an inverse function do sometimes coincide, for example for the function $f(x)=x^{i}=e^{i\ln(x)}$ where $\ln $ is the principal branch of the complex logarithm and $e^{-\pi }<|x|<e^{\pi }$:
$((1/f)\circ f)(x)=(1/f)(f(x))=1/(f(f(x)))=1/e^{i\ln(e^{i\ln(x)})}=1/e^{ii\ln(x)}=1/e^{-\ln(x)}=x$.
The trigonometric functions are related by the reciprocal identity: the cotangent is the reciprocal of the tangent; the secant is the reciprocal of the cosine; the cosecant is the reciprocal of the sine.
A ring in which every nonzero element has a multiplicative inverse is a division ring; likewise an algebra in which this holds is a division algebra.
Complex numbers
As mentioned above, the reciprocal of every nonzero complex number z = a + bi is complex. It can be found by multiplying both top and bottom of 1/z by its complex conjugate ${\bar {z}}=a-bi$ and using the property that $z{\bar {z}}=\|z\|^{2}$, the absolute value of z squared, which is the real number a2 + b2:
${\frac {1}{z}}={\frac {\bar {z}}{z{\bar {z}}}}={\frac {\bar {z}}{\|z\|^{2}}}={\frac {a-bi}{a^{2}+b^{2}}}={\frac {a}{a^{2}+b^{2}}}-{\frac {b}{a^{2}+b^{2}}}i.$
The intuition is that
${\frac {\bar {z}}{\|z\|}}$
gives us the complex conjugate with a magnitude reduced to a value of $1$, so dividing again by $\|z\|$ ensures that the magnitude is now equal to the reciprocal of the original magnitude as well, hence:
${\frac {1}{z}}={\frac {\bar {z}}{\|z\|^{2}}}$
In particular, if ||z||=1 (z has unit magnitude), then $1/z={\bar {z}}$. Consequently, the imaginary units, ±i, have additive inverse equal to multiplicative inverse, and are the only complex numbers with this property. For example, additive and multiplicative inverses of i are −(i) = −i and 1/i = −i, respectively.
For a complex number in polar form z = r(cos φ + i sin φ), the reciprocal simply takes the reciprocal of the magnitude and the negative of the angle:
${\frac {1}{z}}={\frac {1}{r}}\left(\cos(-\varphi )+i\sin(-\varphi )\right).$
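Both formulas are easy to check numerically; a short sketch using Python's cmath (helper names ours):

```python
import cmath

def reciprocal_rect(z):
    # 1/z = conj(z) / |z|^2, written out componentwise
    a, b = z.real, z.imag
    d = a * a + b * b
    return complex(a / d, -b / d)

def reciprocal_polar(z):
    # reciprocal of the magnitude, negative of the angle
    r, phi = cmath.polar(z)
    return cmath.rect(1.0 / r, -phi)

z = 3 + 4j
print(reciprocal_rect(z))                       # (0.12-0.16j)
print(abs(reciprocal_polar(z) - 1 / z) < 1e-12) # True
```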
Calculus
In real calculus, the derivative of 1/x = x−1 is given by the power rule with the power −1:
${\frac {d}{dx}}x^{-1}=(-1)x^{(-1)-1}=-x^{-2}=-{\frac {1}{x^{2}}}.$
The power rule for integrals (Cavalieri's quadrature formula) cannot be used to compute the integral of 1/x, because doing so would result in division by 0:
$\int {\frac {dx}{x}}={\frac {x^{0}}{0}}+C$
Instead the integral is given by:
$\int _{1}^{a}{\frac {dx}{x}}=\ln a,$
$\int {\frac {dx}{x}}=\ln x+C.$
where ln is the natural logarithm. To show this, note that $ {\frac {d}{dy}}e^{y}=e^{y}$, so if $x=e^{y}$ and $y=\ln x$, we have:[2]
${\begin{aligned}&{\frac {dx}{dy}}=x\quad \Rightarrow \quad {\frac {dx}{x}}=dy\\[10mu]&\quad \Rightarrow \quad \int {\frac {dx}{x}}=\int dy=y+C=\ln x+C.\end{aligned}}$
Algorithms
The reciprocal may be computed by hand with the use of long division.
Computing the reciprocal is important in many division algorithms, since the quotient a/b can be computed by first computing 1/b and then multiplying it by a. Noting that $f(x)=1/x-b$ has a zero at x = 1/b, Newton's method can find that zero, starting with a guess $x_{0}$ and iterating using the rule:
$x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {1/x_{n}-b}{-1/x_{n}^{2}}}=2x_{n}-bx_{n}^{2}=x_{n}(2-bx_{n}).$
This continues until the desired precision is reached. For example, suppose we wish to compute 1/17 ≈ 0.0588 with 3 digits of precision. Taking x0 = 0.1, the following sequence is produced:
x1 = 0.1(2 − 17 × 0.1) = 0.03
x2 = 0.03(2 − 17 × 0.03) = 0.0447
x3 = 0.0447(2 − 17 × 0.0447) ≈ 0.0554
x4 = 0.0554(2 − 17 × 0.0554) ≈ 0.0586
x5 = 0.0586(2 − 17 × 0.0586) ≈ 0.0588
A typical initial guess can be found by rounding b to a nearby power of 2, then using bit shifts to compute its reciprocal.
In constructive mathematics, for a real number x to have a reciprocal, it is not sufficient that x ≠ 0. There must instead be given a rational number r such that 0 < r < |x|. In terms of the approximation algorithm described above, this is needed to prove that the change in y will eventually become arbitrarily small.
This iteration can also be generalized to a wider sort of inverses; for example, matrix inverses.
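The iteration $x_{n+1}=x_{n}(2-bx_{n})$ is only a few lines of code; this sketch (function name ours) reproduces the 1/17 example above:

```python
def reciprocal_newton(b, x0, steps):
    """Approximate 1/b by Newton's iteration x <- x * (2 - b*x)."""
    x = x0
    for _ in range(steps):
        x = x * (2 - b * x)
    return x

# five iterations from x0 = 0.1, as in the worked example
print(round(reciprocal_newton(17, 0.1, 5), 4))  # 0.0588
```

Convergence is quadratic: the relative error roughly squares at each step, so a few more iterations reach machine precision.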
Reciprocals of irrational numbers
Every real or complex number excluding zero has a reciprocal, and reciprocals of certain irrational numbers can have important special properties. Examples include the reciprocal of e (≈ 0.367879) and the golden ratio's reciprocal (≈ 0.618034). The first reciprocal is special because no other positive number can produce a lower number when put to the power of itself; $f(1/e)$ is the global minimum of $f(x)=x^{x}$. The second number is the only positive number that is equal to its reciprocal plus one: $\varphi =1/\varphi +1$. Its additive inverse is the only negative number that is equal to its reciprocal minus one: $-\varphi =-1/\varphi -1$.
The function $ f(n)=n+{\sqrt {(n^{2}+1)}},n\in \mathbb {N} ,n>0$ gives an infinite number of irrational numbers that differ with their reciprocal by an integer. For example, $f(2)$ is the irrational $2+{\sqrt {5}}$. Its reciprocal $1/(2+{\sqrt {5}})$ is $-2+{\sqrt {5}}$, exactly $4$ less. Such irrational numbers share an evident property: they have the same fractional part as their reciprocal, since these numbers differ by an integer.
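This property is easy to spot-check numerically (the loop range and names are our illustration):

```python
import math

# f(n) = n + sqrt(n^2 + 1) differs from its reciprocal by the integer 2n,
# so the two share the same fractional part
for n in range(1, 5):
    x = n + math.sqrt(n * n + 1)
    r = 1.0 / x                      # equals -n + sqrt(n^2 + 1)
    print(n, round(x - r), abs((x % 1) - (r % 1)) < 1e-9)  # prints n, 2n, True
```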
Further remarks
If the multiplication is associative, an element x with a multiplicative inverse cannot be a zero divisor (x is a zero divisor if xy = 0 for some nonzero y). To see this, it is sufficient to multiply the equation xy = 0 by the inverse of x (on the left), and then simplify using associativity. In the absence of associativity, the sedenions provide a counterexample.
The converse does not hold: an element which is not a zero divisor is not guaranteed to have a multiplicative inverse. Within Z, all integers except −1, 0, 1 provide examples; they are not zero divisors nor do they have inverses in Z. If the ring or algebra is finite, however, then all elements a which are not zero divisors do have a (left and right) inverse. For, first observe that the map f(x) = ax must be injective: f(x) = f(y) implies x = y:
${\begin{aligned}ax&=ay&\quad \Rightarrow &\quad ax-ay=0\\&&\quad \Rightarrow &\quad a(x-y)=0\\&&\quad \Rightarrow &\quad x-y=0\\&&\quad \Rightarrow &\quad x=y.\end{aligned}}$
Distinct elements map to distinct elements, so the image consists of the same finite number of elements, and the map is necessarily surjective. Specifically, ƒ (namely multiplication by a) must map some element x to 1, ax = 1, so that x is an inverse for a.
Applications
The expansion of the reciprocal 1/q in any base can also act as a source of pseudo-random numbers,[3] if q is a "suitable" safe prime, a prime of the form 2p + 1 where p is also a prime. A sequence of pseudo-random numbers of length q − 1 will be produced by the expansion.
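A long-division sketch makes the idea concrete. Below, q = 23 is a safe prime (23 = 2·11 + 1 with 11 prime), and the decimal expansion of 1/23 repeats with the maximal period q − 1 = 22; this particular q and the check are our illustration, not a statement from the cited paper:

```python
def reciprocal_digits(q, base, count):
    """First `count` digits of the base-`base` expansion of 1/q,
    computed the long-division way."""
    r, digits = 1, []
    for _ in range(count):
        r *= base
        digits.append(r // q)
        r %= q
    return digits

# 1/23 = 0.0434782608695652173913..., repeating with period 22
print(reciprocal_digits(23, 10, 22))
```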
See also
• Division (mathematics)
• Exponential decay
• Fraction
• Group (mathematics)
• Hyperbola
• Inverse distribution
• List of sums of reciprocals
• Repeating decimal
• 6-sphere coordinates
• Unit fractions – reciprocals of integers
• Zeros and poles
Notes
1. "In equall Parallelipipedons the bases are reciprokall to their altitudes". OED "Reciprocal" §3a. Sir Henry Billingsley translation of Elements XI, 34.
2. Anthony, Dr. "Proof that INT(1/x)dx = lnx". Ask Dr. Math. Drexel University. Retrieved 22 March 2013.
3. Mitchell, Douglas W., "A nonlinear random number generator with known, long cycle length," Cryptologia 17, January 1993, 55–62.
References
• Maximally Periodic Reciprocals, Matthews R.A.J. Bulletin of the Institute of Mathematics and its Applications vol 28 pp 147–148 1992
Reciprocal Fibonacci constant
The reciprocal Fibonacci constant, or ψ, is defined as the sum of the reciprocals of the Fibonacci numbers:
$\psi =\sum _{k=1}^{\infty }{\frac {1}{F_{k}}}={\frac {1}{1}}+{\frac {1}{1}}+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{5}}+{\frac {1}{8}}+{\frac {1}{13}}+{\frac {1}{21}}+\cdots .$
The ratio of successive terms in this sum tends to the reciprocal of the golden ratio. Since this is less than 1, the ratio test shows that the sum converges.
The value of ψ is known to be approximately
$\psi =3.359885666243177553172011302918927179688905133732\dots $ (sequence A079586 in the OEIS).
Gosper describes an algorithm for fast numerical approximation of its value. The reciprocal Fibonacci series itself provides O(k) digits of accuracy for k terms of expansion, while Gosper's accelerated series provides O(k 2) digits.[1] ψ is known to be irrational; this property was conjectured by Paul Erdős, Ronald Graham, and Leonard Carlitz, and proved in 1989 by Richard André-Jeannin.[2]
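A partial sum over exact fractions illustrates the O(k)-digit convergence of the plain series (function name ours):

```python
from fractions import Fraction

def psi_partial(terms):
    """Exact partial sum of reciprocals of the Fibonacci numbers F_1, F_2, ..."""
    a, b = 1, 1                  # F_1, F_2
    s = Fraction(0)
    for _ in range(terms):
        s += Fraction(1, a)
        a, b = b, a + b
    return s

# 90 terms already pin down far more than 10 decimal places
print(round(float(psi_partial(90)), 10))  # 3.3598856662
```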
The continued fraction representation of the constant is:
$\psi =[3;2,1,3,1,1,13,2,3,3,2,1,1,6,3,2,4,362,2,4,8,6,30,50,1,6,3,3,2,7,2,3,1,3,2,\dots ]\!\,$ (sequence A079587 in the OEIS).
See also
• List of sums of reciprocals
References
1. Gosper, William R. (1974), Acceleration of Series, Artificial Intelligence Memo #304, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, p. 66, hdl:1721.1/6088.
2. André-Jeannin, Richard (1989), "Irrationalité de la somme des inverses de certaines suites récurrentes", Comptes Rendus de l'Académie des Sciences, Série I, 308 (19): 539–541, MR 0999451
External links
• Weisstein, Eric W. "Reciprocal Fibonacci Constant". MathWorld.
Dual basis
In linear algebra, given a vector space $V$ with a basis $B$ of vectors indexed by an index set $I$ (the cardinality of $I$ is the dimension of $V$), the dual set of $B$ is a set $B^{*}$ of vectors in the dual space $V^{*}$ with the same index set I such that $B$ and $B^{*}$ form a biorthogonal system. The dual set is always linearly independent but does not necessarily span $V^{*}$. If it does span $V^{*}$, then $B^{*}$ is called the dual basis or reciprocal basis for the basis $B$.
Denoting the indexed vector sets as $B=\{v_{i}\}_{i\in I}$ and $B^{*}=\{v^{i}\}_{i\in I}$, biorthogonality means that each pairing of a dual vector with a basis vector equals 1 when the indices match and 0 otherwise. Symbolically, evaluating a dual vector in $V^{*}$ on a vector in the original space $V$:
$v^{i}\cdot v_{j}=\delta _{j}^{i}={\begin{cases}1&{\text{if }}i=j\\0&{\text{if }}i\neq j{\text{,}}\end{cases}}$
where $\delta _{j}^{i}$ is the Kronecker delta symbol.
Introduction
To perform operations with a vector, we must have a straightforward method of calculating its components. In a Cartesian frame the necessary operation is the dot product of the vector and the base vector.[1] For example,
$\mathbf {x} =x^{1}\mathbf {i} _{1}+x^{2}\mathbf {i} _{2}+x^{3}\mathbf {i} _{3}$
where the $\mathbf {i} _{k}$ are the basis vectors of a Cartesian frame. The components of $\mathbf {x} $ can be found by
$x^{k}=\mathbf {x} \cdot \mathbf {i} _{k}.$
However, in a non-Cartesian frame, we do not necessarily have $\mathbf {e} _{i}\cdot \mathbf {e} _{j}=0$ for all $i\neq j$. Nevertheless, it is always possible to find a vector $\mathbf {e} ^{i}$ such that
$x^{i}=\mathbf {x} \cdot \mathbf {e} ^{i}\qquad (i=1,2,3).$
The equality holds when $\mathbf {e} ^{i}$ is the dual base of $\mathbf {e} _{i}$. Notice the difference in position of the index $i$.
In a Cartesian frame, we have $\mathbf {e} ^{k}=\mathbf {e} _{k}=\mathbf {i} _{k}.$
Existence and uniqueness
The dual set always exists and gives an injection from V into V∗, namely the mapping that sends $v_{i}$ to $v^{i}$. This says, in particular, that the dual space has dimension greater or equal to that of V.
However, the dual set of an infinite-dimensional V does not span its dual space V∗. For example, consider the map w in V∗ from V into the underlying scalars F given by w(vi) = 1 for all i. This map is clearly nonzero on all vi. If w were a finite linear combination of the dual basis vectors vi, say $ w=\sum _{i\in K}\alpha _{i}v^{i}$ for a finite subset K of I, then for any j not in K, $ w(v_{j})=\left(\sum _{i\in K}\alpha _{i}v^{i}\right)\left(v_{j}\right)=0$, contradicting the definition of w. So, this w does not lie in the span of the dual set.
The dual of an infinite-dimensional space has greater dimensionality (this being a greater infinite cardinality) than the original space has, and thus these cannot have a basis with the same indexing set. However, a dual set of vectors exists, which defines a subspace of the dual isomorphic to the original space. Further, for topological vector spaces, a continuous dual space can be defined, in which case a dual basis may exist.
Finite-dimensional vector spaces
In the case of finite-dimensional vector spaces, the dual set is always a dual basis and it is unique. These bases are denoted by $B=\{e_{1},\dots ,e_{n}\}$ and $B^{*}=\{e^{1},\dots ,e^{n}\}$. If one denotes the evaluation of a covector on a vector as a pairing, the biorthogonality condition becomes:
$\left\langle e^{i},e_{j}\right\rangle =\delta _{j}^{i}.$
The association of a dual basis with a basis gives a map from the space of bases of V to the space of bases of V∗, and this is also an isomorphism. For topological fields such as the real numbers, the space of duals is a topological space, and this gives a homeomorphism between the Stiefel manifolds of bases of these spaces.
A categorical and algebraic construction of the dual space
Another way to introduce the dual space of a vector space (module) is by introducing it in a categorical sense. To do this, let $A$ be a module defined over the ring $R$ (that is, $A$ is an object in the category $R{\text{-}}\mathbf {Mod} $). Then we define the dual space of $A$, denoted $A^{\ast }$, to be ${\text{Hom}}_{R}(A,R)$, the module formed of all $R$-linear module homomorphisms from $A$ into $R$. Note then that we may define a dual to the dual, referred to as the double dual of $A$, written as $A^{\ast \ast }$, and defined as ${\text{Hom}}_{R}(A^{\ast },R)$.
To formally construct a basis for the dual space, we shall now restrict our view to the case where $F$ is a finite-dimensional free (left) $R$-module, where $R$ is a ring with unity. Then, we assume that the set $X$ is a basis for $F$. From here, we define the Kronecker delta function $\delta _{xy}$ over the basis $X$ by $\delta _{xy}=1$ if $x=y$ and $\delta _{xy}=0$ if $x\neq y$. Then the set $S=\lbrace f_{x}:F\to R\;|\;f_{x}(y)=\delta _{xy}\rbrace $ describes a linearly independent set with each $f_{x}\in {\text{Hom}}_{R}(F,R)$. Since $F$ is finite-dimensional, the basis $X$ is of finite cardinality. Then the set $S$ is a basis of $F^{\ast }$, and $F^{\ast }$ is a free (right) $R$-module.
Examples
For example, the standard basis vectors of $\mathbb {R} ^{2}$ (the Cartesian plane) are
$\left\{\mathbf {e} _{1},\mathbf {e} _{2}\right\}=\left\{{\begin{pmatrix}1\\0\end{pmatrix}},{\begin{pmatrix}0\\1\end{pmatrix}}\right\}$
and the standard basis vectors of its dual space $(\mathbb {R} ^{2})^{*}$ are
$\left\{\mathbf {e} ^{1},\mathbf {e} ^{2}\right\}=\left\{{\begin{pmatrix}1&0\end{pmatrix}},{\begin{pmatrix}0&1\end{pmatrix}}\right\}{\text{.}}$
In 3-dimensional Euclidean space, for a given basis $\{\mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3}\}$, the biorthogonal (dual) basis $\{\mathbf {e} ^{1},\mathbf {e} ^{2},\mathbf {e} ^{3}\}$ can be found by formulas below:
$\mathbf {e} ^{1}=\left({\frac {\mathbf {e} _{2}\times \mathbf {e} _{3}}{V}}\right)^{\mathsf {T}},\ \mathbf {e} ^{2}=\left({\frac {\mathbf {e} _{3}\times \mathbf {e} _{1}}{V}}\right)^{\mathsf {T}},\ \mathbf {e} ^{3}=\left({\frac {\mathbf {e} _{1}\times \mathbf {e} _{2}}{V}}\right)^{\mathsf {T}}.$
where T denotes the transpose and
$V\,=\,\left(\mathbf {e} _{1};\mathbf {e} _{2};\mathbf {e} _{3}\right)\,=\,\mathbf {e} _{1}\cdot (\mathbf {e} _{2}\times \mathbf {e} _{3})\,=\,\mathbf {e} _{2}\cdot (\mathbf {e} _{3}\times \mathbf {e} _{1})\,=\,\mathbf {e} _{3}\cdot (\mathbf {e} _{1}\times \mathbf {e} _{2})$
is the volume of the parallelepiped formed by the basis vectors $\mathbf {e} _{1},\,\mathbf {e} _{2}$ and $\mathbf {e} _{3}.$
In general the dual basis of a basis in a finite-dimensional vector space can be readily computed as follows: given the basis $f_{1},\ldots ,f_{n}$ and corresponding dual basis $f^{1},\ldots ,f^{n}$ we can build matrices
${\begin{aligned}F&={\begin{bmatrix}f_{1}&\cdots &f_{n}\end{bmatrix}}\\G&={\begin{bmatrix}f^{1}&\cdots &f^{n}\end{bmatrix}}\end{aligned}}$
Then the defining property of the dual basis states that
$G^{\mathsf {T}}F=I$
Hence the matrix for the dual basis $G$ can be computed as
$G=\left(F^{-1}\right)^{\mathsf {T}}$
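For a 2×2 case the recipe $G=(F^{-1})^{\mathsf {T}}$ can be written out by hand; a minimal sketch (function names ours):

```python
def dual_basis_2d(f1, f2):
    """Dual basis of {f1, f2} in R^2: the columns of G = (F^{-1})^T,
    where F is the matrix with columns f1 and f2."""
    a, c = f1
    b, d = f2
    det = a * d - b * c              # assumed nonzero (f1, f2 form a basis)
    e1 = (d / det, -b / det)         # pairs to 1 with f1, to 0 with f2
    e2 = (-c / det, a / det)         # pairs to 0 with f1, to 1 with f2
    return e1, e2

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

f1, f2 = (1.0, 1.0), (1.0, -1.0)
d1, d2 = dual_basis_2d(f1, f2)
# the biorthogonality condition <e^i, e_j> = delta^i_j
print(dot(d1, f1), dot(d1, f2), dot(d2, f1), dot(d2, f2))  # 1.0 0.0 0.0 1.0
```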
See also
• Reciprocal lattice
• Miller index
• Zone axis
Notes
1. Lebedev, Cloud & Eremeyev 2010, p. 12.
References
• Lebedev, Leonid P.; Cloud, Michael J.; Eremeyev, Victor A. (2010). Tensor Analysis With Applications to Mechanics. World Scientific. ISBN 978-981431312-4.
• "Finding the Dual Basis". Stack Exchange. May 27, 2012.
Squeeze mapping
In linear algebra, a squeeze mapping, also called a squeeze transformation, is a type of linear map that preserves Euclidean area of regions in the Cartesian plane, but is not a rotation or shear mapping.
For a fixed positive real number a, the mapping
$(x,y)\mapsto (ax,y/a)$
is the squeeze mapping with parameter a. Since
$\{(u,v)\,:\,uv=\mathrm {constant} \}$
is a hyperbola, if u = ax and v = y/a, then uv = xy and the points of the image of the squeeze mapping are on the same hyperbola as (x,y) is. For this reason it is natural to think of the squeeze mapping as a hyperbolic rotation, as did Émile Borel in 1914,[1] by analogy with circular rotations, which preserve circles.
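A two-line sketch confirms the invariance of xy (and hence that each point stays on its hyperbola), as well as the one-parameter composition law discussed below:

```python
def squeeze(a, p):
    """Squeeze mapping with parameter a > 0: (x, y) -> (a*x, y/a)."""
    x, y = p
    return (a * x, y / a)

# xy is invariant; the Jacobian determinant is a * (1/a) = 1, so area
# is preserved as well
x, y = 2.0, 3.0
u, v = squeeze(5.0, (x, y))
print(abs(u * v - x * y) < 1e-12)  # True

# composing squeezes with parameters r and s gives the squeeze with r*s
p1 = squeeze(3.0, squeeze(2.0, (x, y)))
p2 = squeeze(6.0, (x, y))
print(p1 == p2)  # True
```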
Logarithm and hyperbolic angle
The squeeze mapping sets the stage for development of the concept of logarithms. The problem of finding the area bounded by a hyperbola (such as xy = 1) is one of quadrature. The solution, found by Grégoire de Saint-Vincent and Alphonse Antonio de Sarasa in 1647, required the natural logarithm function, a new concept. Some insight into logarithms comes through hyperbolic sectors that are permuted by squeeze mappings while preserving their area. The area of a hyperbolic sector is taken as a measure of a hyperbolic angle associated with the sector. The hyperbolic angle concept is quite independent of the ordinary circular angle, but shares a property of invariance with it: whereas circular angle is invariant under rotation, hyperbolic angle is invariant under squeeze mapping. Both circular and hyperbolic angle generate invariant measures but with respect to different transformation groups. The hyperbolic functions, which take hyperbolic angle as argument, perform the role that circular functions play with the circular angle argument.[2]
Group theory
In 1688, long before abstract group theory, the squeeze mapping was described by Euclid Speidell in the terms of the day: "From a Square and an infinite company of Oblongs on a Superficies, each Equal to that square, how a curve is begotten which shall have the same properties or affections of any Hyperbola inscribed within a Right Angled Cone."[3]
If r and s are positive real numbers, the composition of their squeeze mappings is the squeeze mapping of their product. Therefore, the collection of squeeze mappings forms a one-parameter group isomorphic to the multiplicative group of positive real numbers. An additive view of this group arises from consideration of hyperbolic sectors and their hyperbolic angles.
From the point of view of the classical groups, the group of squeeze mappings is SO+(1,1), the identity component of the indefinite orthogonal group of 2×2 real matrices preserving the quadratic form u2 − v2. This is equivalent to preserving the form xy via the change of basis
$x=u+v,\quad y=u-v\,,$
and corresponds geometrically to preserving hyperbolae. The perspective of the group of squeeze mappings as hyperbolic rotation is analogous to interpreting the group SO(2) (the connected component of the definite orthogonal group) preserving quadratic form x2 + y2 as being circular rotations.
Note that the "SO+" notation corresponds to the fact that the reflections
$u\mapsto -u,\quad v\mapsto -v$
are not allowed, though they preserve the form (in terms of x and y these are x ↦ y, y ↦ x and x ↦ −x, y ↦ −y); the additional "+" in the hyperbolic case (as compared with the circular case) is necessary to specify the identity component because the group O(1,1) has 4 connected components, while the group O(2) has 2 components: SO(1,1) has 2 components, while SO(2) only has 1. The fact that the squeeze transforms preserve area and orientation corresponds to the inclusion of subgroups SO ⊂ SL – in this case SO(1,1) ⊂ SL(2) – of the subgroup of hyperbolic rotations in the special linear group of transforms preserving area and orientation (a volume form). In the language of Möbius transformations, the squeeze transformations are the hyperbolic elements in the classification of elements.
A geometric transformation is called conformal when it preserves angles. Hyperbolic angle is defined using area under y = 1/x. Since squeeze mappings preserve areas of transformed regions such as hyperbolic sectors, the angle measure of sectors is preserved. Thus squeeze mappings are conformal in the sense of preserving hyperbolic angle.
Applications
Here some applications are summarized with historic references.
Relativistic spacetime
Spacetime geometry is conventionally developed as follows: Select (0,0) for a "here and now" in a spacetime. Light radiant left and right through this central event tracks two lines in the spacetime, lines that can be used to give coordinates to events away from (0,0). Trajectories of lesser velocity track closer to the original timeline (0,t). Any such velocity can be viewed as a zero velocity under a squeeze mapping called a Lorentz boost. This insight follows from a study of split-complex number multiplications and the diagonal basis which corresponds to the pair of light lines. Formally, a squeeze preserves the hyperbolic metric expressed in the form xy in a different coordinate system. This application in the theory of relativity was noted in 1912 by Wilson and Lewis,[4] by Werner Greub,[5] and by Louis Kauffman.[6] Furthermore, the squeeze mapping form of Lorentz transformations was used by Gustav Herglotz (1909/10)[7] while discussing Born rigidity, and was popularized by Wolfgang Rindler in his textbook on relativity, who used it in his demonstration of their characteristic property.[8]
The term squeeze transformation was used in this context in an article connecting the Lorentz group with Jones calculus in optics.[9]
Corner flow
In fluid dynamics one of the fundamental motions of an incompressible flow involves bifurcation of a flow running up against an immovable wall. Representing the wall by the axis y = 0 and taking the parameter r = exp(t) where t is time, then the squeeze mapping with parameter r applied to an initial fluid state produces a flow with bifurcation left and right of the axis x = 0. The same model gives fluid convergence when time is run backward. Indeed, the area of any hyperbolic sector is invariant under squeezing.
For another approach to a flow with hyperbolic streamlines, see Potential flow § Power laws with n = 2.
In 1989 Ottino[10] described the "linear isochoric two-dimensional flow" as
$v_{1}=Gx_{2}\quad v_{2}=KGx_{1}$
where K lies in the interval [−1, 1]. The streamlines follow the curves
$x_{2}^{2}-Kx_{1}^{2}=\mathrm {constant} $
so negative K corresponds to an ellipse and positive K to a hyperbola, with the rectangular case of the squeeze mapping corresponding to K = 1.
Stocker and Hosoi[11] described their approach to corner flow as follows:
we suggest an alternative formulation to account for the corner-like geometry, based on the use of hyperbolic coordinates, which allows substantial analytical progress towards determination of the flow in a Plateau border and attached liquid threads. We consider a region of flow forming an angle of π/2 and delimited on the left and bottom by symmetry planes.
Stocker and Hosoi then recall Moffatt's[12] consideration of "flow in a corner between rigid boundaries, induced by an arbitrary disturbance at a large distance." According to Stocker and Hosoi,
For a free fluid in a square corner, Moffatt's (antisymmetric) stream function ... [indicates] that hyperbolic coordinates are indeed the natural choice to describe these flows.
Bridge to transcendentals
The area-preserving property of squeeze mapping has an application in setting the foundation of the transcendental functions natural logarithm and its inverse the exponential function:
Definition: Sector(a,b) is the hyperbolic sector obtained with central rays to (a, 1/a) and (b, 1/b).
Lemma: If bc = ad, then there is a squeeze mapping that moves the sector(a,b) to sector(c,d).
Proof: Take parameter r = c/a so that (u,v) = (rx, y/r) takes (a, 1/a) to (c, 1/c) and (b, 1/b) to (d, 1/d).
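The lemma is easy to check with concrete numbers (a minimal sketch; the values a = 2, b = 3, c = 4, d = 6 are our own example satisfying bc = ad):

```python
def squeeze(x, y, r):
    """The squeeze mapping (x, y) -> (r*x, y/r)."""
    return r * x, y / r

# If b*c = a*d, the squeeze with r = c/a carries sector(a, b) onto sector(c, d).
a, b, c, d = 2.0, 3.0, 4.0, 6.0          # b*c = 12 = a*d
r = c / a
u1, v1 = squeeze(a, 1 / a, r)
u2, v2 = squeeze(b, 1 / b, r)
assert abs(u1 - c) < 1e-12 and abs(v1 - 1 / c) < 1e-12
assert abs(u2 - d) < 1e-12 and abs(v2 - 1 / d) < 1e-12
```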
Theorem (Gregoire de Saint-Vincent 1647) If bc = ad, then the quadrature of the hyperbola xy = 1 against the asymptote has equal areas between a and b compared to between c and d.
Proof: An argument adding and subtracting triangles of area 1⁄2, one triangle being {(0,0), (0,1), (1,1)}, shows the hyperbolic sector area is equal to the area along the asymptote. The theorem then follows from the lemma.
Theorem (Alphonse Antonio de Sarasa 1649) As area measured against the asymptote increases in arithmetic progression, the projections upon the asymptote increase in geometric sequence. Thus the areas form logarithms of the asymptote index.
For instance, for a standard position angle which runs from (1, 1) to (x, 1/x), one may ask "When is the hyperbolic angle equal to one?" The answer is the transcendental number x = e.
A squeeze with r = e moves the unit angle to one between (e, 1/e) and (e², 1/e²) which subtends a sector also of area one. The geometric progression
e, e², e³, ..., eⁿ, ...
corresponds to the asymptotic index achieved with each sum of areas
1,2,3, ..., n,...
which is a prototypical arithmetic progression A + nd where A = 0 and d = 1.
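The correspondence between area and logarithm can be confirmed by direct quadrature of 1/x (an illustrative sketch using a simple midpoint rule; the function name is ours):

```python
import math

def area_under_hyperbola(a, b, n=100000):
    """Midpoint-rule quadrature of 1/x on [a, b]."""
    h = (b - a) / n
    return sum(1.0 / (a + (i + 0.5) * h) for i in range(n)) * h

# The area from 1 to e is the unit hyperbolic angle, i.e. ln(e) = 1 ...
assert abs(area_under_hyperbola(1.0, math.e) - 1.0) < 1e-6
# ... and the squeeze with r = e shifts the sector without changing its area.
assert abs(area_under_hyperbola(math.e, math.e**2) - 1.0) < 1e-6
```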
Lie transform
Following Pierre Ossian Bonnet's (1867) investigations on surfaces of constant curvature, Sophus Lie (1879) found a way to derive new pseudospherical surfaces from a known one. Such surfaces satisfy the Sine-Gordon equation:
${\frac {d^{2}\Theta }{ds\ d\sigma }}=K\sin \Theta ,$
where $(s,\sigma )$ are asymptotic coordinates of two principal tangent curves and $\Theta $ their respective angle. Lie showed that if $\Theta =f(s,\sigma )$ is a solution to the Sine-Gordon equation, then the following squeeze mapping (now known as Lie transform[13]) indicates other solutions of that equation:[14]
$\Theta =f\left(ms,\ {\frac {\sigma }{m}}\right).$
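This can be checked numerically with the well-known kink solution Θ = 4 arctan(exp(s + σ)) of the equation with K = 1 (an illustrative sketch under that assumption; the finite-difference helper is ours). The Lie-transformed copy 4 arctan(exp(ms + σ/m)) satisfies the same equation for every m:

```python
import math

def kink(s, sigma, m=1.0):
    """Lie transform of the sine-Gordon kink: 4*atan(exp(m*s + sigma/m))."""
    return 4.0 * math.atan(math.exp(m * s + sigma / m))

def mixed_partial(f, s, sigma, h=1e-5):
    """Central-difference estimate of the mixed partial d2f/(ds dsigma)."""
    return (f(s + h, sigma + h) - f(s + h, sigma - h)
            - f(s - h, sigma + h) + f(s - h, sigma - h)) / (4.0 * h * h)

# With K = 1, the kink solves theta_{s sigma} = sin(theta); by Lie's observation,
# so does every squeeze-transformed copy f(m*s, sigma/m).
for m in (1.0, 0.5, 3.0):
    f = lambda s, sigma, m=m: kink(s, sigma, m)
    for s, sigma in ((0.0, 0.0), (0.3, -0.2), (-1.0, 0.5)):
        assert abs(mixed_partial(f, s, sigma) - math.sin(f(s, sigma))) < 1e-4
```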
Lie (1883) noticed its relation to two other transformations of pseudospherical surfaces:[15] The Bäcklund transform (introduced by Albert Victor Bäcklund in 1883) can be seen as the combination of a Lie transform with a Bianchi transform (introduced by Luigi Bianchi in 1879). Such transformations of pseudospherical surfaces were discussed in detail in the lectures on differential geometry by Gaston Darboux (1894),[16] Luigi Bianchi (1894),[17] or Luther Pfahler Eisenhart (1909).[18]
It is known that the Lie transforms (or squeeze mappings) correspond to Lorentz boosts in terms of light-cone coordinates, as pointed out by Terng and Uhlenbeck (2000):[13]
Sophus Lie observed that the SGE [sine-Gordon equation] is invariant under Lorentz transformations. In asymptotic coordinates, which correspond to light cone coordinates, a Lorentz transformation is $(x,t)\mapsto \left({\tfrac {1}{\lambda }}x,\lambda t\right)$.
This can be represented as follows:
${\begin{matrix}-c^{2}t^{2}+x^{2}=-c^{2}t^{\prime 2}+x^{\prime 2}\\\hline {\begin{aligned}ct'&=ct\gamma -x\beta \gamma &&=ct\cosh \eta -x\sinh \eta \\x'&=-ct\beta \gamma +x\gamma &&=-ct\sinh \eta +x\cosh \eta \end{aligned}}\\\hline u=ct+x,\ v=ct-x,\ k={\sqrt {\tfrac {1+\beta }{1-\beta }}}=e^{\eta }\\u'={\frac {u}{k}},\ v'=kv\\\hline u'v'=uv\end{matrix}}$
where k corresponds to the Doppler factor in Bondi k-calculus and η is the rapidity.
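The derivation above can be replayed numerically (a minimal sketch in units with c = 1; the function name is ours): apply a standard boost, then verify that in light-cone coordinates it acts as the squeeze u' = u/k, v' = kv and preserves the product uv.

```python
import math

def boost(ct, x, eta):
    """Standard Lorentz boost with rapidity eta (units with c = 1)."""
    return (ct * math.cosh(eta) - x * math.sinh(eta),
            -ct * math.sinh(eta) + x * math.cosh(eta))

ct, x, eta = 2.0, 0.7, 0.9
ct2, x2 = boost(ct, x, eta)

# Interval invariance: -c^2 t^2 + x^2 is preserved.
assert abs((-ct2**2 + x2**2) - (-ct**2 + x**2)) < 1e-12

# In light-cone coordinates u = ct + x, v = ct - x the boost is the squeeze
# u' = u/k, v' = k*v with k = exp(eta), so the product u*v is preserved.
u, v, k = ct + x, ct - x, math.exp(eta)
u2, v2 = ct2 + x2, ct2 - x2
assert abs(u2 - u / k) < 1e-12
assert abs(v2 - k * v) < 1e-12
assert abs(u2 * v2 - u * v) < 1e-12
```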
See also
• Indefinite orthogonal group
• Isochoric process
References
1. Émile Borel (1914) Introduction Geometrique à quelques Théories Physiques, page 29, Gauthier-Villars, link from Cornell University Historical Math Monographs
2. Mellen W. Haskell (1895) On the introduction of the notion of hyperbolic functions Bulletin of the American Mathematical Society 1(6):155–9, particularly equation 12, page 159
3. Euclid Speidell (1688) Logarithmotechnia: the making of numbers called logarithms from Google Books
4. Edwin Bidwell Wilson & Gilbert N. Lewis (1912) "The space-time manifold of relativity. The non-Euclidean geometry of mechanics and electromagnetics", Proceedings of the American Academy of Arts and Sciences 48:387–507, footnote p. 401
5. W. H. Greub (1967) Linear Algebra, Springer-Verlag. See pages 272 to 274
6. Louis Kauffman (1985) "Transformations in Special Relativity", International Journal of Theoretical Physics 24:223–36
7. Herglotz, Gustav (1910) [1909], "Über den vom Standpunkt des Relativitätsprinzips aus als starr zu bezeichnenden Körper" [Wikisource translation: On bodies that are to be designated as "rigid" from the standpoint of the relativity principle], Annalen der Physik, 336 (2): 408, Bibcode:1910AnP...336..393H, doi:10.1002/andp.19103360208
8. Wolfgang Rindler, Essential Relativity, equation 29.5 on page 45 of the 1969 edition, or equation 2.17 on page 37 of the 1977 edition, or equation 2.16 on page 52 of the 2001 edition
9. Daesoo Han, Young Suh Kim & Marilyn E. Noz (1997) "Jones-matrix formalism as a representation of the Lorentz group", Journal of the Optical Society of America A14(9):2290–8
10. J. M. Ottino (1989) The Kinematics of Mixing: stretching, chaos, transport, page 29, Cambridge University Press
11. Roman Stocker & A.E. Hosoi (2004) "Corner flow in free liquid films", Journal of Engineering Mathematics 50:267–88
12. H.K. Moffatt (1964) "Viscous and resistive eddies near a sharp corner", Journal of Fluid Mechanics 18:1–18
13. Terng, C. L., & Uhlenbeck, K. (2000). "Geometry of solitons" (PDF). Notices of the AMS. 47 (1): 17–25.
14. Lie, S. (1881) [1879]. "Selbstanzeige: Über Flächen, deren Krümmungsradien durch eine Relation verknüpft sind". Fortschritte der Mathematik. 11: 529–531. Reprinted in Lie's collected papers, Vol. 3, pp. 392–393.
15. Lie, S. (1884) [1883]. "Untersuchungen über Differentialgleichungen IV". Christ. Forh. Reprinted in Lie's collected papers, Vol. 3, pp. 556–560.
16. Darboux, G. (1894). Leçons sur la théorie générale des surfaces. Troisième partie. Paris: Gauthier-Villars. pp. 381–382.
17. Bianchi, L. (1894). Lezioni di geometria differenziale. Pisa: Enrico Spoerri. pp. 433–434.
18. Eisenhart, L. P. (1909). A treatise on the differential geometry of curves and surfaces. Boston: Ginn and Company. pp. 289–290.
• HSM Coxeter & SL Greitzer (1967) Geometry Revisited, Chapter 4 Transformations, A genealogy of transformation.
• P. S. Modenov and A. S. Parkhomenko (1965) Geometric Transformations, volume one. See pages 104 to 106.
• Walter, Scott (1999). "The non-Euclidean style of Minkowskian relativity" (PDF). In J. Gray (ed.). The Symbolic Universe: Geometry and Physics. Oxford University Press. pp. 91–127.(see page 9 of e-link)
|
Wikipedia
|
Inverse distribution
In probability theory and statistics, an inverse distribution is the distribution of the reciprocal of a random variable. Inverse distributions arise in particular in the Bayesian context of prior distributions and posterior distributions for scale parameters. In the algebra of random variables, inverse distributions are special cases of the class of ratio distributions, in which the numerator random variable has a degenerate distribution.
Relation to original distribution
In general, given the probability distribution of a random variable X with strictly positive support, it is possible to find the distribution of the reciprocal, Y = 1 / X. If the distribution of X is continuous with density function f(x) and cumulative distribution function F(x), then the cumulative distribution function, G(y), of the reciprocal is found by noting that
$G(y)=\Pr(Y\leq y)=\Pr \left(X\geq {\frac {1}{y}}\right)=1-\Pr \left(X<{\frac {1}{y}}\right)=1-F\left({\frac {1}{y}}\right).$
Then the density function of Y is found as the derivative of the cumulative distribution function:
$g(y)={\frac {1}{y^{2}}}f\left({\frac {1}{y}}\right).$
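This change-of-variables formula can be checked against a numerical derivative of G (an illustrative sketch using X ~ Exp(1) as the example distribution, our choice):

```python
import math

# Example: X ~ Exp(1), so F(x) = 1 - exp(-x) and f(x) = exp(-x).
F = lambda x: 1.0 - math.exp(-x)
f = lambda x: math.exp(-x)

G = lambda y: 1.0 - F(1.0 / y)            # CDF of Y = 1/X
g = lambda y: f(1.0 / y) / y**2           # claimed density of Y

# g should be the derivative of G: check by central differences.
h = 1e-6
for y in (0.5, 1.0, 2.0, 5.0):
    numeric = (G(y + h) - G(y - h)) / (2 * h)
    assert abs(numeric - g(y)) < 1e-6
```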
Examples
Reciprocal distribution
The reciprocal distribution has a density function of the form[1]
$f(x)\propto x^{-1}\quad {\text{ for }}0<a<x<b,$
where $\propto \!\,$ means "is proportional to". It follows that the inverse distribution in this case is of the form
$g(y)\propto y^{-1}\quad {\text{ for }}0\leq b^{-1}<y<a^{-1},$
which is again a reciprocal distribution.
Inverse uniform distribution
Parameters $0<a<b,\quad a,b\in \mathbb {R} $
Support $[b^{-1},a^{-1}]$
PDF $y^{-2}{\frac {1}{b-a}}$
CDF ${\frac {b-y^{-1}}{b-a}}$
Mean ${\frac {\ln(b)-\ln(a)}{b-a}}$
Median ${\frac {2}{a+b}}$
Variance ${\frac {1}{a\cdot b}}-\left({\frac {\ln(b)-\ln(a)}{b-a}}\right)^{2}$
If the original random variable X is uniformly distributed on the interval (a,b), where a>0, then the reciprocal variable Y = 1 / X has the reciprocal distribution which takes values in the range (b⁻¹, a⁻¹), and the probability density function in this range is
$g(y)=y^{-2}{\frac {1}{b-a}},$
and is zero elsewhere.
The cumulative distribution function of the reciprocal, within the same range, is
$G(y)={\frac {b-y^{-1}}{b-a}}.$
For example, if X is uniformly distributed on the interval (0,1), then Y = 1 / X has density $g(y)=y^{-2}$ and cumulative distribution function $G(y)={1-y^{-1}}$ when $y>1.$
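The stated CDF is indeed the integral of the stated density, which is easy to confirm by quadrature (a minimal sketch with the illustrative choice a = 2, b = 5, so Y ranges over (0.2, 0.5)):

```python
# X ~ U(a, b) with a = 2, b = 5; Y = 1/X has density g and CDF G below.
a, b = 2.0, 5.0
g = lambda y: 1.0 / (y * y * (b - a))
G = lambda y: (b - 1.0 / y) / (b - a)

def integrate(f, lo, hi, n=200000):
    """Midpoint-rule quadrature of f on [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# G(y) should equal the integral of g from 1/b up to y.
for y in (0.25, 0.3, 0.45):
    assert abs(integrate(g, 1.0 / b, y) - G(y)) < 1e-8
```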
Inverse t distribution
Let X be a t distributed random variate with k degrees of freedom. Then its density function is
$f(x)={\frac {1}{\sqrt {k\pi }}}{\frac {\Gamma \left({\frac {k+1}{2}}\right)}{\Gamma \left({\frac {k}{2}}\right)}}{\frac {1}{\left(1+{\frac {x^{2}}{k}}\right)^{\frac {1+k}{2}}}}.$
The density of Y = 1 / X is
$g(y)={\frac {1}{\sqrt {k\pi }}}{\frac {\Gamma \left({\frac {k+1}{2}}\right)}{\Gamma \left({\frac {k}{2}}\right)}}{\frac {1}{y^{2}\left(1+{\frac {1}{y^{2}k}}\right)^{\frac {1+k}{2}}}}.$
With k = 1, the distributions of X and 1 / X are identical (X then has a standard Cauchy distribution). If k > 1 then the distribution of 1 / X is bimodal.
Reciprocal normal distribution
If variable X follows a normal distribution ${\mathcal {N}}(\mu ,\sigma ^{2})$, then the inverse or reciprocal Y=1/X follows a reciprocal normal distribution:[2]
$f(y)={\frac {1}{{\sqrt {2\pi }}\sigma y^{2}}}e^{-{\frac {1}{2}}\left({\frac {1/y-\mu }{\sigma }}\right)^{2}}.$
If variable X follows a standard normal distribution ${\mathcal {N}}(0,1)$, then Y=1/X follows a reciprocal standard normal distribution, heavy-tailed and bimodal,[2] with modes at $\pm {\tfrac {1}{\sqrt {2}}}$ and density
$f(y)={\frac {e^{-{\frac {1}{2y^{2}}}}}{{\sqrt {2\pi }}y^{2}}}$
and the first and higher-order moments do not exist.[2] For such inverse distributions and for ratio distributions, there can still be defined probabilities for intervals, which can be computed either by Monte Carlo simulation or, in some cases, by using the Geary–Hinkley transformation.[3]
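The claimed modes at ±1/√2 and the change-of-variables origin of the density can both be checked directly (an illustrative sketch; function names are ours):

```python
import math

def f(y):
    """Density of Y = 1/X for X ~ N(0, 1)."""
    return math.exp(-1.0 / (2.0 * y * y)) / (math.sqrt(2.0 * math.pi) * y * y)

phi = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

mode = 1.0 / math.sqrt(2.0)
eps = 1e-4
# f has local maxima at y = +/- 1/sqrt(2) ...
assert f(mode) > f(mode + eps) and f(mode) > f(mode - eps)
assert f(-mode) > f(-mode + eps) and f(-mode) > f(-mode - eps)
# ... and agrees with the general formula phi(1/y)/y^2.
for y in (0.3, 0.7, 2.0, -1.5):
    assert abs(f(y) - phi(1.0 / y) / y**2) < 1e-12
```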
However, in the more general case of a shifted reciprocal function $1/(p-B)$, for $B=N(\mu ,\sigma )$ following a general normal distribution, then mean and variance statistics do exist in a principal value sense, if the difference between the pole $p$ and the mean $\mu $ is real-valued. The mean of this transformed random variable (reciprocal shifted normal distribution) is then indeed the scaled Dawson's function:[4]
${\frac {\sqrt {2}}{\sigma }}F\left({\frac {p-\mu }{{\sqrt {2}}\sigma }}\right)$.
In contrast, if the shift $p-\mu $ is purely complex, the mean exists and is a scaled Faddeeva function, whose exact expression depends on the sign of the imaginary part, $\operatorname {Im} (p-\mu )$. In both cases, the variance is a simple function of the mean.[5] Therefore, the variance has to be considered in a principal value sense if $p-\mu $ is real, while it exists if the imaginary part of $p-\mu $ is non-zero. Note that these means and variances are exact, as they do not rely on linearisation of the ratio. The exact covariance of two ratios with a pair of different poles $p_{1}$ and $p_{2}$ is similarly available.[6] The case of the inverse of a complex normal variable $B$, shifted or not, exhibits different characteristics.[4]
Inverse exponential distribution
If $X$ is an exponentially distributed random variable with rate parameter $\lambda $, then $Y=1/X$ has the following cumulative distribution function: $F_{Y}(y)=e^{-\lambda /y}$for $y>0$. Note that the expected value of this random variable does not exist. The reciprocal exponential distribution finds use in the analysis of fading wireless communication systems.
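This CDF is easily confirmed by Monte Carlo (an illustrative sketch sampling X by inversion, with λ = 2 as an arbitrary example):

```python
import math, random

random.seed(0)
lam = 2.0
n = 200000
# Sample X ~ Exp(lam) by inversion and form Y = 1/X.
ys = [1.0 / (-math.log(1.0 - random.random()) / lam) for _ in range(n)]

# Empirical CDF of Y against F_Y(y) = exp(-lam / y).
for y in (0.5, 1.0, 3.0):
    empirical = sum(1 for v in ys if v <= y) / n
    assert abs(empirical - math.exp(-lam / y)) < 0.01
```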
Inverse Cauchy distribution
If X is a Cauchy distributed (μ, σ) random variable, then 1 / X is a Cauchy ( μ / C, σ / C ) random variable where C = μ2 + σ2.
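The claim follows from the change-of-variables formula, and the resulting density identity can be verified pointwise (a minimal sketch with the illustrative parameters μ = 1.5, σ = 0.7):

```python
import math

def cauchy_pdf(x, loc, scale):
    """Density of a Cauchy(loc, scale) random variable."""
    return scale / (math.pi * ((x - loc)**2 + scale**2))

mu, sigma = 1.5, 0.7
C = mu**2 + sigma**2

# Density of 1/X is f_X(1/y)/y^2; it should equal the Cauchy(mu/C, sigma/C) density.
for y in (-2.0, -0.3, 0.4, 1.0, 3.0):
    lhs = cauchy_pdf(1.0 / y, mu, sigma) / y**2
    rhs = cauchy_pdf(y, mu / C, sigma / C)
    assert abs(lhs - rhs) < 1e-12
```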
Inverse F distribution
If X is an F(ν1, ν2 ) distributed random variable then 1 / X is an F(ν2, ν1 ) random variable.
Reciprocal of binomial distribution
No closed form for this distribution is known. An asymptotic approximation for the mean is known.[7]
$E[(1+X)^{a}]=O((np)^{-a})+o(n^{-a})$
where E[] is the expectation operator, X is a random variable, O() and o() are the big and little o order functions, n is the sample size, p is the probability of success and a is a variable that may be positive or negative, integer or fractional.
Reciprocal of triangular distribution
For a triangular distribution with lower limit a, upper limit b and mode c, where a < b and a ≤ c ≤ b, the mean of the reciprocal is given by
$\mu ={\frac {2\left({\frac {a\,\mathrm {ln} \left({\frac {a}{c}}\right)}{a-c}}+{\frac {b\,\mathrm {ln} \left({\frac {c}{b}}\right)}{b-c}}\right)}{a-b}}$
and the variance by
$\sigma ^{2}={\frac {2\left({\frac {\mathrm {ln} \left({\frac {c}{a}}\right)}{a-c}}+{\frac {\mathrm {ln} \left({\frac {b}{c}}\right)}{b-c}}\right)}{a-b}}-\mu ^{2}$.
Both moments of the reciprocal are only defined when the triangle does not cross zero, i.e. when a, b, and c are either all positive or all negative.
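The closed form for the mean can be checked against direct numerical integration of (1/x) times the triangular density (an illustrative sketch with the example triangle a = 1, c = 2, b = 3, our choice):

```python
import math

a, b, c = 1.0, 3.0, 2.0     # all-positive triangle, so E[1/X] exists

def tri_pdf(x):
    """Density of the triangular distribution with limits a, b and mode c."""
    if a <= x <= c:
        return 2 * (x - a) / ((b - a) * (c - a))
    if c < x <= b:
        return 2 * (b - x) / ((b - a) * (b - c))
    return 0.0

def integrate(f, lo, hi, n=200000):
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# Closed form for the mean of 1/X from the text above.
mu = 2 * (a * math.log(a / c) / (a - c) + b * math.log(c / b) / (b - c)) / (a - b)
numeric = integrate(lambda x: tri_pdf(x) / x, a, b)
assert abs(mu - numeric) < 1e-6
```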
Other inverse distributions
Other inverse distributions include
inverse-chi-squared distribution
inverse-gamma distribution
inverse-Wishart distribution
inverse matrix gamma distribution
Applications
Inverse distributions are widely used as prior distributions in Bayesian inference for scale parameters.
See also
• Harmonic mean
• Ratio distribution
• Self-reciprocal distributions
References
1. Hamming R. W. (1970) "On the distribution of numbers", The Bell System Technical Journal 49(8) 1609–1625
2. Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1994). Continuous Univariate Distributions, Volume 1. Wiley. p. 171. ISBN 0-471-58495-9.
3. Hayya, Jack; Armstrong, Donald; Gressis, Nicolas (July 1975). "A Note on the Ratio of Two Normally Distributed Variables". Management Science. 21 (11): 1338–1341. doi:10.1287/mnsc.21.11.1338. JSTOR 2629897.
4. Lecomte, Christophe (May 2013). "Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems". Journal of Sound and Vibration. 332 (11): 2750–2776. doi:10.1016/j.jsv.2012.12.009.
5. Lecomte, Christophe (May 2013). "Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems". Journal of Sound and Vibration. 332 (11). Section (4.1.1). doi:10.1016/j.jsv.2012.12.009.
6. Lecomte, Christophe (May 2013). "Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems". Journal of Sound and Vibration. 332 (11). Eq.(39)-(40). doi:10.1016/j.jsv.2012.12.009.
7. Cribari-Neto F, Lopes Garcia N, Vasconcellos KLP (2000) A note on inverse moments of binomial variates. Brazilian Review of Econometrics 20 (2)
Correlation (projective geometry)
In projective geometry, a correlation is a transformation of a d-dimensional projective space that maps subspaces of dimension k to subspaces of dimension d − k − 1, reversing inclusion and preserving incidence. Correlations are also called reciprocities or reciprocal transformations.
In two dimensions
In the real projective plane, points and lines are dual to each other. As expressed by Coxeter,
A correlation is a point-to-line and a line-to-point transformation that preserves the relation of incidence in accordance with the principle of duality. Thus it transforms ranges into pencils, pencils into ranges, quadrangles into quadrilaterals, and so on.[1]
Given a line m and P a point not on m, an elementary correlation is obtained as follows: for every Q on m form the line PQ. The inverse correlation starts with the pencil on P: for any line q in this pencil take the point m ∩ q. The composition of two correlations that share the same pencil is a perspectivity.
In three dimensions
In a 3-dimensional projective space a correlation maps a point to a plane. As stated in one textbook:[2]
If κ is such a correlation, every point P is transformed by it into a plane π′ = κP, and conversely, every point P arises from a unique plane π′ by the inverse transformation κ−1.
Three-dimensional correlations also transform lines into lines, so they may be considered to be collineations of the two spaces.
In higher dimensions
In general n-dimensional projective space, a correlation takes a point to a hyperplane. This context was described by Paul Yale:
A correlation of the projective space P(V) is an inclusion-reversing permutation of the proper subspaces of P(V).[3]
He proves a theorem stating that a correlation φ interchanges joins and intersections, and for any projective subspace W of P(V), the dimension of the image of W under φ is (n − 1) − dim W, where n is the dimension of the vector space V used to produce the projective space P(V).
Existence of correlations
Correlations can exist only if the space is self-dual. For dimensions 3 and higher, self-duality is easy to test: A coordinatizing skewfield exists and self-duality fails if and only if the skewfield is not isomorphic to its opposite.
Special types of correlations
Polarity
If a correlation φ is an involution (that is, two applications of the correlation equal the identity: φ²(P) = P for all points P) then it is called a polarity. Polarities of projective spaces lead to polar spaces, which are defined by taking the collection of all subspaces that are contained in their image under the polarity.
Natural correlation
There is a natural correlation induced between a projective space P(V) and its dual P(V∗) by the natural pairing ⟨⋅,⋅⟩ between the underlying vector spaces V and its dual V∗, where every subspace W of V∗ is mapped to its orthogonal complement W⊥ in V, defined as W⊥ = {v ∈ V | ⟨w, v⟩ = 0, ∀w ∈ W}.[4]
Composing this natural correlation with an isomorphism of projective spaces induced by a semilinear map produces a correlation of P(V) to itself. In this way, every nondegenerate semilinear map V → V∗ induces a correlation of a projective space to itself.
References
1. H. S. M. Coxeter (1974) Projective Geometry, second edition, page 57, University of Toronto Press ISBN 0-8020-2104-2
2. J. G. Semple and G. T. Kneebone (1952) Algebraic Projective Geometry, p 360, Clarendon Press
3. Paul B. Yale (1968, 1988, 2004) Geometry and Symmetry, chapter 6.9 Correlations and semi-bilinear forms, Dover Publications ISBN 0-486-43835-X
4. Irving Kaplansky (1974) [1969], Linear Algebra and Geometry (2nd ed.), p. 104
• Robert J. Bumcroft (1969), Modern Projective Geometry, Holt, Rinehart, and Winston, Chapter 4.5 Correlations p. 90
• Robert A. Rosenbaum (1963), Introduction to Projective Geometry and Modern Algebra, Addison-Wesley, p. 198
Counting
Counting is the process of determining the number of elements of a finite set of objects; that is, determining the size of a set. The traditional way of counting consists of continually increasing a (mental or spoken) counter by a unit for every element of the set, in some order, while marking (or displacing) those elements to avoid visiting the same element more than once, until no unmarked elements are left; if the counter was set to one after the first object, the value after visiting the final object gives the desired number of elements. The related term enumeration refers to uniquely identifying the elements of a finite (combinatorial) set or infinite set by assigning a number to each element.
Counting sometimes involves numbers other than one; for example, when counting money, counting out change, "counting by twos" (2, 4, 6, 8, 10, 12, ...), or "counting by fives" (5, 10, 15, 20, 25, ...).
There is archaeological evidence suggesting that humans have been counting for at least 50,000 years.[1] Counting was primarily used by ancient cultures to keep track of social and economic data such as the number of group members, prey animals, property, or debts (that is, accountancy). Notched bones were also found in the Border Caves in South Africa, which may suggest that the concept of counting was known to humans as far back as 44,000 BCE.[2] The development of counting led to the development of mathematical notation, numeral systems, and writing.
Forms of counting
Further information: Prehistoric numerals and Numerical digit
Counting can occur in a variety of forms.
Counting can be verbal; that is, speaking every number out loud (or mentally) to keep track of progress. This is often used to count objects that are present already, instead of counting a variety of things over time.
Counting can also be in the form of tally marks, making a mark for each number and then counting all of the marks when done tallying. This is useful when counting objects over time, such as the number of times something occurs during the course of a day. Tallying is base 1 counting; normal counting is done in base 10. Computers count in base 2 (0s and 1s), which is closely related to Boolean algebra.
Counting can also be in the form of finger counting, especially when counting small numbers. This is often used by children to facilitate counting and simple mathematical operations. Finger-counting uses unary notation (one finger = one unit), and is thus limited to counting to 10 (unless one starts in with the toes). Older finger counting used the four fingers and the three bones in each finger (phalanges) to count to the number twelve.[3] Other hand-gesture systems are also in use, for example the Chinese system by which one can count to 10 using only gestures of one hand. By using finger binary (base 2 counting), it is possible to keep a finger count up to 1023 = 2¹⁰ − 1.
Various devices can also be used to facilitate counting, such as hand tally counters and abacuses.
Inclusive counting
Inclusive counting is usually encountered when dealing with time in Roman calendars and the Romance languages.[4] In the ancient Roman calendar, the nones (meaning "nine") is 8 days before the ides; more generally, dates are specified as inclusively counted days up to the next named day.[4] In the Christian liturgical calendar, Quinquagesima (meaning 50) is 49 days before Easter Sunday. When counting "inclusively", the Sunday (the start day) will be day 1 and therefore the following Sunday will be the eighth day. For example, the French phrase for "fortnight" is quinzaine (15 [days]), and similar words are present in Greek (δεκαπενθήμερο, dekapenthímero), Spanish (quincena) and Portuguese (quinzena). In contrast, the English word "fortnight" itself derives from "a fourteen-night", as the archaic "sennight" does from "a seven-night"; the English words are not examples of inclusive counting. In exclusive counting languages such as English, when counting eight days "from Sunday", Monday will be day 1, Tuesday day 2, and the following Monday will be the eighth day. For many years it was a standard practice in English law for the phrase "from a date" to mean "beginning on the day after that date": this practice is now deprecated because of the high risk of misunderstanding.[5]
Similar counting is involved in East Asian age reckoning, in which newborns are considered to be 1 at birth.
Musical terminology also uses inclusive counting of intervals between notes of the standard scale: going up one note is a second interval, going up two notes is a third interval, etc., and going up seven notes is an octave.
Education and development
Main article: Pre-math skills
Learning to count is an important educational/developmental milestone in most cultures of the world. Learning to count is a child's very first step into mathematics, and constitutes the most fundamental idea of that discipline. However, some cultures in Amazonia and the Australian Outback do not count,[6][7] and their languages do not have number words.
Many children at just 2 years of age have some skill in reciting the count list (that is, saying "one, two, three, ..."). They can also answer questions of ordinality for small numbers, for example, "What comes after three?". They can even be skilled at pointing to each object in a set and reciting the words one after another. This leads many parents and educators to the conclusion that the child knows how to use counting to determine the size of a set.[8] Research suggests that it takes about a year after learning these skills for a child to understand what they mean and why the procedures are performed.[9][10] In the meantime, children learn how to name cardinalities that they can subitize.
Counting in mathematics
Main article: Combinatorics
See also: Countable set
In mathematics, the essence of counting a set and finding a result n is that it establishes a one-to-one correspondence (or bijection) of the set with the subset of positive integers {1, 2, ..., n}. A fundamental fact, which can be proved by mathematical induction, is that no bijection can exist between {1, 2, ..., n} and {1, 2, ..., m} unless n = m; this fact (together with the fact that two bijections can be composed to give another bijection) ensures that counting the same set in different ways can never result in different numbers (unless an error is made). This is the fundamental mathematical theorem that gives counting its purpose; however you count a (finite) set, the answer is the same. In a broader context, the theorem is an example of a theorem in the mathematical field of (finite) combinatorics—hence (finite) combinatorics is sometimes referred to as "the mathematics of counting."
Many sets that arise in mathematics do not allow a bijection to be established with {1, 2, ..., n} for any natural number n; these are called infinite sets, while those sets for which such a bijection does exist (for some n) are called finite sets. Infinite sets cannot be counted in the usual sense; for one thing, the mathematical theorems which underlie this usual sense for finite sets are false for infinite sets. Furthermore, different definitions of the concepts in terms of which these theorems are stated, while equivalent for finite sets, are inequivalent in the context of infinite sets.
The notion of counting may be extended to them in the sense of establishing (the existence of) a bijection with some well-understood set. For instance, if a set can be brought into bijection with the set of all natural numbers, then it is called "countably infinite." This kind of counting differs in a fundamental way from counting of finite sets, in that adding new elements to a set does not necessarily increase its size, because the possibility of a bijection with the original set is not excluded. For instance, the set of all integers (including negative numbers) can be brought into bijection with the set of natural numbers, and even seemingly much larger sets like that of all finite sequences of rational numbers are still (only) countably infinite. Nevertheless, there are sets, such as the set of real numbers, that can be shown to be "too large" to admit a bijection with the natural numbers, and these sets are called "uncountable." Sets for which there exists a bijection between them are said to have the same cardinality, and in the most general sense counting a set can be taken to mean determining its cardinality. Beyond the cardinalities given by each of the natural numbers, there is an infinite hierarchy of infinite cardinalities, although only very few such cardinalities occur in ordinary mathematics (that is, outside set theory that explicitly studies possible cardinalities).
Counting, mostly of finite sets, has various applications in mathematics. One important principle is that if two sets X and Y have the same finite number of elements, and a function f: X → Y is known to be injective, then it is also surjective, and vice versa. A related fact is known as the pigeonhole principle, which states that if two sets X and Y have finite numbers of elements n and m with n > m, then any map f: X → Y is not injective (so there exist two distinct elements of X that f sends to the same element of Y); this follows from the former principle, since if f were injective, then so would its restriction to a strict subset S of X with m elements, which restriction would then be surjective, contradicting the fact that for x in X outside S, f(x) cannot be in the image of the restriction. Similar counting arguments can prove the existence of certain objects without explicitly providing an example. In the case of infinite sets this can even apply in situations where it is impossible to give an example.
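For small sets the principle can be verified exhaustively (an illustrative sketch, enumerating every function between two three-element sets as a tuple of images):

```python
from itertools import product

X = [0, 1, 2]
Y = ["a", "b", "c"]

# Enumerate all |Y|^|X| = 27 functions f: X -> Y as tuples of images.
for images in product(Y, repeat=len(X)):
    injective = len(set(images)) == len(X)   # no two elements share an image
    surjective = set(images) == set(Y)       # every element of Y is hit
    # For finite sets of equal size, injectivity and surjectivity coincide.
    assert injective == surjective
```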
The domain of enumerative combinatorics deals with computing the number of elements of finite sets, without actually counting them; the latter usually being impossible because infinite families of finite sets are considered at once, such as the set of permutations of {1, 2, ..., n} for any natural number n.
See also
• Calculation
• Card reading (bridge)
• Cardinal number
• Combinatorics
• Count data
• Counting (music)
• Counting problem (complexity)
• Developmental psychology
• Elementary arithmetic
• Finger counting
• History of mathematics
• Jeton
• Level of measurement
• List of numbers
• Mathematical quantity
• Ordinal number
• Particle number
• Subitizing and counting
• Tally mark
• Unary numeral system
• Yan tan tethera (Counting sheep in Britain)
References
1. An Introduction to the History of Mathematics (6th Edition) by Howard Eves (1990) p.9
2. "Early Human Counting Tools". Math Timeline. Retrieved 2018-04-26.
3. Macey, Samuel L. (1989). The Dynamics of Progress: Time, Method, and Measure. Atlanta, Georgia: University of Georgia Press. p. 92. ISBN 978-0-8203-3796-8.
4. Evans, James (1998). "4". The History and Practice of Ancient Astronomy. Oxford University Press. p. 164. ISBN 019987445X.
5. "Drafting bills for Parliament". gov.uk. Office of the Parliamentary Counsel. 18 June 2020. See heading 8.
6. Butterworth, B., Reeve, R., Reynolds, F., & Lloyd, D. (2008). Numerical thought with and without words: Evidence from indigenous Australian children. Proceedings of the National Academy of Sciences, 105(35), 13179–13184.
7. Gordon, P. (2004). Numerical cognition without words: Evidence from Amazonia. Science, 306, 496–499.
8. Fuson, K.C. (1988). Children's counting and concepts of number. New York: Springer–Verlag.
9. Le Corre, M., & Carey, S. (2007). One, two, three, four, nothing more: An investigation of the conceptual sources of the verbal counting principles. Cognition, 105, 395–438.
10. Le Corre, M., Van de Walle, G., Brannon, E. M., Carey, S. (2006). Re-visiting the competence/performance debate in the acquisition of the counting principles. Cognitive Psychology, 52(2), 130–169.
|
Wikipedia
|
Recursively enumerable language
In mathematics, logic and computer science, a formal language is called recursively enumerable (also recognizable, partially decidable, semidecidable, Turing-acceptable or Turing-recognizable) if it is a recursively enumerable subset of the set of all possible words over the alphabet of the language, i.e., if there exists a Turing machine which will enumerate all valid strings of the language.
Recursively enumerable languages are known as type-0 languages in the Chomsky hierarchy of formal languages. All regular, context-free, context-sensitive and recursive languages are recursively enumerable.
The class of all recursively enumerable languages is called RE.
Definitions
There are three equivalent definitions of a recursively enumerable language:
1. A recursively enumerable language is a recursively enumerable subset of the set of all possible words over the alphabet of the language.
2. A recursively enumerable language is a formal language for which there exists a Turing machine (or other computable function) that will enumerate all valid strings of the language. If the language is infinite, the enumerating algorithm can be chosen to avoid repetitions, since we can test whether the string produced for number n has already been produced for some number less than n; if it has, we use the output for input n+1 instead (recursively), again testing whether it is new.
3. A recursively enumerable language is a formal language for which there exists a Turing machine (or other computable function) that will halt and accept when presented with any string in the language as input but may either halt and reject or loop forever when presented with a string not in the language. Contrast this to recursive languages, which require that the Turing machine halts in all cases.
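The equivalence of definitions 2 and 3 can be sketched in code: an enumerator for L yields a recognizer that halts and accepts on members of L but may run forever otherwise. The generator `even_length_strings` below is a toy stand-in for a Turing-machine enumerator:

```python
# Sketch of why definition 2 implies definition 3: given an enumerator
# for L, a recognizer accepts w by comparing w against each enumerated
# string in turn. It halts and accepts on w in L but may loop forever on
# w not in L, exactly the behavior definition 3 allows.

def recognizer(w, enumerator):
    for s in enumerator:        # may run forever if w is not in L
        if s == w:
            return True         # halt and accept

def even_length_strings():      # toy enumerable language over {'a'}
    n = 0
    while True:
        yield 'a' * n
        n += 2

assert recognizer('aaaa', even_length_strings()) is True
```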
All regular, context-free, context-sensitive and recursive languages are recursively enumerable.
Post's theorem shows that RE, together with the class co-RE of complements of recursively enumerable languages, corresponds to the first level of the arithmetical hierarchy.
Example
The set of halting Turing machines is recursively enumerable but not recursive: one can simulate a given machine on its input and accept if the simulation halts, which shows the set is recursively enumerable; but it is not recursive, because the halting problem is undecidable.
Some other recursively enumerable languages that are not recursive include:
• Post correspondence problem
• Mortality (computability theory)
• Entscheidungsproblem
Closure properties
Recursively enumerable languages (REL) are closed under the following operations. That is, if L and P are two recursively enumerable languages, then the following languages are recursively enumerable as well:
• the Kleene star $L^{*}$ of L
• the concatenation $L\circ P$ of L and P
• the union $L\cup P$
• the intersection $L\cap P$.
Recursively enumerable languages are not closed under set difference or complementation. The set difference $L-P$ is recursively enumerable if $P$ is recursive. If $L$ is recursively enumerable, then the complement of $L$ is recursively enumerable if and only if $L$ is also recursive.
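Closure under union can be sketched by dovetailing: simulate recognizers for L and P one step at a time each, accepting as soon as either accepts. The generator convention below (yield None per step, True on acceptance) is a toy model, not a real Turing machine:

```python
# Sketch of closure under union: run semideciders for L and P in
# lockstep (dovetailing) and accept as soon as either accepts.

def union_recognizer(run_l, run_p):
    for a, b in zip(run_l, run_p):   # one step of each machine per turn
        if a or b:
            return True              # halt and accept

def accepts_after(k):                # semidecider that accepts on step k
    for _ in range(k):
        yield None
    yield True

def never_accepts():                 # semidecider that runs forever
    while True:
        yield None

assert union_recognizer(accepts_after(3), never_accepts()) is True
assert union_recognizer(never_accepts(), accepts_after(7)) is True
```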
See also
• Computably enumerable set
• Recursion
References
• Sipser, M. (1996), Introduction to the Theory of Computation, PWS Publishing Co.
• Kozen, D.C. (1997), Automata and Computability, Springer.
External links
• Complexity Zoo: Class RE
• Lecture slides
Prime factor exponent notation
In his 1557 work The Whetstone of Witte, British mathematician Robert Recorde proposed an exponent notation based on prime factorisation, which remained in use up until the eighteenth century and acquired the name Arabic exponent notation. The principle of Arabic exponents was quite similar to that of Egyptian fractions: large exponents were broken down into smaller prime numbers. Squares and cubes were called by those names; prime exponents from five onwards were called sursolids.
Although the terms used for defining exponents differed between authors and times, the general system was the primary exponent notation until René Descartes devised the Cartesian exponent notation, which is still used today.
This is a list of Recorde's terms.
| Cartesian index | Arabic index | Recordian symbol | Explanation |
| 1 | Simple | | |
| 2 | Square (compound form is zenzic) | z | |
| 3 | Cubic | & | |
| 4 | Zenzizenzic (biquadratic) | zz | square of squares |
| 5 | First sursolid | sz | first prime exponent greater than three |
| 6 | Zenzicubic | z& | square of cubes |
| 7 | Second sursolid | Bsz | second prime exponent greater than three |
| 8 | Zenzizenzizenzic (quadratoquadratoquadratum) | zzz | square of squared squares |
| 9 | Cubicubic | && | cube of cubes |
| 10 | Square of first sursolid | zsz | square of five |
| 11 | Third sursolid | csz | third prime number greater than three |
| 12 | Zenzizenzicubic | zz& | square of square of cubes |
| 13 | Fourth sursolid | dsz | |
| 14 | Square of second sursolid | zbsz | square of seven |
| 15 | Cube of first sursolid | &sz | cube of five |
| 16 | Zenzizenzizenzizenzic | zzzz | "square of squares, squaredly squared" |
| 17 | Fifth sursolid | esz | |
| 18 | Zenzicubicubic | z&& | |
| 19 | Sixth sursolid | fsz | |
| 20 | Zenzizenzic of first sursolid | zzsz | |
| 21 | Cube of second sursolid | &bsz | |
| 22 | Square of third sursolid | zcsz | |
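The compositional principle behind the table (the symbol for an exponent concatenates the symbols of its prime factors) can be sketched as follows; the ascending-factor order and the lowercasing of embedded sursolid symbols (e.g. bsz rather than Bsz inside a compound) are assumptions made for illustration:

```python
# Sketch of the compositional rule behind Recorde's table: the symbol
# of an exponent concatenates the symbols of its prime factors in
# ascending order. Capitalization of embedded sursolids is ignored.
PRIME_SYMBOL = {2: 'z', 3: '&', 5: 'sz', 7: 'bsz', 11: 'csz', 13: 'dsz',
                17: 'esz', 19: 'fsz'}

def recordian(n):
    out, p = '', 2
    while n > 1:
        while n % p == 0:
            out += PRIME_SYMBOL[p]
            n //= p
        p += 1
    return out

assert recordian(12) == 'zz&'    # zenzizenzicubic: 12 = 2 * 2 * 3
assert recordian(20) == 'zzsz'   # zenzizenzic of first sursolid
assert recordian(15) == '&sz'    # cube of first sursolid
```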
By comparison, here is a table of prime factorizations:

1–20
1 = unit, 2 = 2, 3 = 3, 4 = 2², 5 = 5, 6 = 2·3, 7 = 7, 8 = 2³, 9 = 3², 10 = 2·5,
11 = 11, 12 = 2²·3, 13 = 13, 14 = 2·7, 15 = 3·5, 16 = 2⁴, 17 = 17, 18 = 2·3², 19 = 19, 20 = 2²·5

21–40
21 = 3·7, 22 = 2·11, 23 = 23, 24 = 2³·3, 25 = 5², 26 = 2·13, 27 = 3³, 28 = 2²·7, 29 = 29, 30 = 2·3·5,
31 = 31, 32 = 2⁵, 33 = 3·11, 34 = 2·17, 35 = 5·7, 36 = 2²·3², 37 = 37, 38 = 2·19, 39 = 3·13, 40 = 2³·5

41–60
41 = 41, 42 = 2·3·7, 43 = 43, 44 = 2²·11, 45 = 3²·5, 46 = 2·23, 47 = 47, 48 = 2⁴·3, 49 = 7², 50 = 2·5²,
51 = 3·17, 52 = 2²·13, 53 = 53, 54 = 2·3³, 55 = 5·11, 56 = 2³·7, 57 = 3·19, 58 = 2·29, 59 = 59, 60 = 2²·3·5

61–80
61 = 61, 62 = 2·31, 63 = 3²·7, 64 = 2⁶, 65 = 5·13, 66 = 2·3·11, 67 = 67, 68 = 2²·17, 69 = 3·23, 70 = 2·5·7,
71 = 71, 72 = 2³·3², 73 = 73, 74 = 2·37, 75 = 3·5², 76 = 2²·19, 77 = 7·11, 78 = 2·3·13, 79 = 79, 80 = 2⁴·5

81–100
81 = 3⁴, 82 = 2·41, 83 = 83, 84 = 2²·3·7, 85 = 5·17, 86 = 2·43, 87 = 3·29, 88 = 2³·11, 89 = 89, 90 = 2·3²·5,
91 = 7·13, 92 = 2²·23, 93 = 3·31, 94 = 2·47, 95 = 5·19, 96 = 2⁵·3, 97 = 97, 98 = 2·7², 99 = 3²·11, 100 = 2²·5²
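Such a table can be reproduced with a minimal trial-division factorizer (an illustrative sketch):

```python
# A minimal trial-division factorizer reproducing entries of the table
# above; returns a dict mapping each prime factor to its exponent.
def factorize(n):
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:                    # remaining factor is prime
        factors[n] = factors.get(n, 0) + 1
    return factors

assert factorize(100) == {2: 2, 5: 2}   # 100 = 2^2 * 5^2
assert factorize(97) == {97: 1}         # 97 is prime
```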
See also
• Surd
External links (references)
• Mathematical dictionary, Chas Hutton, pg 224
Recreational mathematics
Recreational mathematics is mathematics carried out for recreation (entertainment) rather than as a strictly research- and application-based professional activity or as a part of a student's formal education. Although it is not necessarily limited to being an endeavor for amateurs, many topics in this field require no knowledge of advanced mathematics. Recreational mathematics involves mathematical puzzles and games, often appealing to children and untrained adults and inspiring their further study of the subject.[1]
The Mathematical Association of America (MAA) includes recreational mathematics as one of its seventeen Special Interest Groups, commenting:
Recreational mathematics is not easily defined because it is more than mathematics done as a diversion or playing games that involve mathematics. Recreational mathematics is inspired by deep ideas that are hidden in puzzles, games, and other forms of play. The aim of the SIGMAA on Recreational Mathematics (SIGMAA-Rec) is to bring together enthusiasts and researchers in the myriad of topics that fall under recreational math. We will share results and ideas from our work, show that real, deep mathematics is there awaiting those who look, and welcome those who wish to become involved in this branch of mathematics.[2]
Mathematical competitions (such as those sponsored by mathematical associations) are also categorized under recreational mathematics.
Topics
Some of the more well-known topics in recreational mathematics are Rubik's Cubes, magic squares, fractals, logic puzzles and mathematical chess problems, but this area of mathematics includes the aesthetics and culture of mathematics, peculiar or amusing stories and coincidences about mathematics, and the personal lives of mathematicians.
Mathematical games
Mathematical games are multiplayer games whose rules, strategies, and outcomes can be studied and explained using mathematics. The players of the game may not need to use explicit mathematics in order to play mathematical games. For example, Mancala is studied in the mathematical field of combinatorial game theory, but no mathematics is necessary in order to play it.
Mathematical puzzles
Mathematical puzzles require mathematics in order to solve them. They have specific rules, as do multiplayer games, but mathematical puzzles do not usually involve competition between two or more players. Instead, in order to solve such a puzzle, the solver must find a solution that satisfies the given conditions.
Logic puzzles and classical ciphers are common examples of mathematical puzzles. Cellular automata and fractals are also considered mathematical puzzles, even though the solver only interacts with them by providing a set of initial conditions.
As they often include or require game-like features or thinking, mathematical puzzles are sometimes also called mathematical games.
Mathemagics
Magic tricks based on mathematical principles can produce self-working but surprising effects. For instance, a mathemagician might use the combinatorial properties of a deck of playing cards to guess a volunteer's selected card, or Hamming codes to identify whether a volunteer is lying.[3]
Other activities
Other curiosities and pastimes of non-trivial mathematical interest include:
• patterns in juggling
• the sometimes profound algorithmic and geometrical characteristics of origami
• patterns and process in creating string figures such as Cat's cradles, etc.
• fractal-generating software
Online blogs, podcasts, and YouTube channels
There are many blogs and audio or video series devoted to recreational mathematics. Among the notable are the following:
• Cut-the-knot by Alexander Bogomolny
• Futility Closet by Greg Ross
• Numberphile by Brady Haran
• Mathologer by Burkard Polster
• 3Blue1Brown by Grant Sanderson
• The videos of Vi Hart
• Stand-Up Maths by Matt Parker
Publications
• The journal Eureka published by the mathematical society of the University of Cambridge is one of the oldest publications in recreational mathematics. It has been published 60 times since 1939 and authors have included many famous mathematicians and scientists such as Martin Gardner, John Conway, Roger Penrose, Ian Stewart, Timothy Gowers, Stephen Hawking and Paul Dirac.
• The Journal of Recreational Mathematics was the largest publication on this topic from its founding in 1968 until 2014 when it ceased publication.
• Mathematical Games (1956 to 1981) was the title of a long-running Scientific American column on recreational mathematics by Martin Gardner. He inspired several generations of mathematicians and scientists through his interest in mathematical recreations. "Mathematical Games" was succeeded by 25 "Metamagical Themas" columns (1981-1983), a similarly distinguished, but shorter-running, column by Douglas Hofstadter, then by 78 "Mathematical Recreations" and "Computer Recreations" columns (1984 to 1991) by A. K. Dewdney, then by 96 "Mathematical Recreations" columns (1991 to 2001) by Ian Stewart, and most recently "Puzzling Adventures" by Dennis Shasha.
• The Recreational Mathematics Magazine, published by the Ludus Association, is electronic and semiannual, and focuses on results that provide amusing, witty but nonetheless original and scientifically profound mathematical nuggets. Issues are published at the exact moments of the equinoxes.
People
Prominent practitioners and advocates of recreational mathematics have included professional and amateur mathematicians:
| Full name | Born | Died | Nationality | Description |
| Lewis Carroll (Charles Dodgson) | 1832 | 1898 | English | Mathematician, puzzlist and Anglican deacon best known as the author of Alice in Wonderland and Through the Looking-Glass. |
| Sam Loyd | 1841 | 1911 | American | Chess problem composer and author, described as "America's greatest puzzlist" by Martin Gardner.[4] |
| Henry Dudeney | 1857 | 1930 | English | Civil servant described as England's "greatest puzzlist".[5] |
| Yakov Perelman | 1882 | 1942 | Russian | Author of many popular science and mathematics books, including Mathematics Can Be Fun. |
| D. R. Kaprekar | 1905 | 1986 | Indian | Discovered several results in number theory, described several classes of natural numbers including the Kaprekar, harshad and self numbers, and discovered Kaprekar's constant. |
| Martin Gardner | 1914 | 2010 | American | Popular mathematics and science writer; author of Mathematical Games, a long-running Scientific American column. |
| Raymond Smullyan | 1919 | 2017 | American | Logician; author of many logic puzzle books including To Mock a Mockingbird. |
| Joseph Madachy | 1927 | 2014 | American | Long-time editor of Journal of Recreational Mathematics, author of Mathematics on Vacation. |
| Solomon W. Golomb | 1932 | 2016 | American | Mathematician and engineer, best known as the inventor of polyominoes. |
| John Horton Conway | 1937 | 2020 | English | Mathematician and inventor of Conway's Game of Life, co-author of Winning Ways, an analysis of many mathematical games. |
| Lee Sallows | 1944 | | English | Invented geomagic squares, golygons, and self-enumerating sentences. |
See also
• List of recreational number theory topics
References
1. Kulkarni, D. Enjoying Math: Learning Problem Solving With KenKen Puzzles Archived 2013-08-01 at the Wayback Machine, a textbook for teaching with KenKen Puzzles.
2. Special Interest Groups of the MAA Mathematical Association of America
3. Teixeira, Ricardo (2020). Mathemagics: A Magical Journey through Advanced Mathematics. USA: World Scientific. ISBN 9789811214509.
4. Loyd, Sam (1959). Mathematical Puzzles of Sam Loyd (selected and edited by Martin Gardner), Dover Publications Inc., p. xi, ISBN 0-486-20498-7
5. Newing, Angela (1994), "Henry Ernest Dudeney: Britain's Greatest Puzzlist", in Guy, Richard K.; Woodrow, Robert E. (eds.), The Lighter Side of Mathematics: Proceedings of the Eugène Strens Memorial Conference on Recreational Mathematics and Its History, Cambridge University Press, pp. 294–301, ISBN 9780883855164.
Further reading
• W. W. Rouse Ball and H.S.M. Coxeter (1987). Mathematical Recreations and Essays, Thirteenth Edition, Dover. ISBN 0-486-25357-0.
• Henry E. Dudeney (1967). 536 Puzzles and Curious Problems. Charles Scribner's sons. ISBN 0-684-71755-7.
• Sam Loyd (1959). The Mathematical Puzzles of Sam Loyd, 2 vols., selected and edited by Martin Gardner. Dover. OCLC 5720955.
• Raymond M. Smullyan (1991). The Lady or the Tiger? And Other Logic Puzzles. Oxford University Press. ISBN 0-19-286136-0.
External links
• Recreational Mathematics from MathWorld at Wolfram Research
• The Unreasonable Utility of Recreational Mathematics by David Singmaster
Cuboid
In geometry, a cuboid is a hexahedron, a six-faced solid. Its faces are quadrilaterals. Cuboid means "like a cube". A cuboid is like a cube in the sense that by adjusting the lengths of the edges or the angles between faces a cuboid can be transformed into a cube. In mathematical language a cuboid is a convex polyhedron whose polyhedral graph is the same as that of a cube.
A special case of a cuboid is a rectangular cuboid, with six rectangles as faces. Its adjacent faces meet at right angles. A special case of a rectangular cuboid is a cube, with six square faces meeting at right angles.[1][2]
General cuboids
By Euler's formula the numbers of faces F, of vertices V, and of edges E of any convex polyhedron are related by the formula F + V = E + 2. In the case of a cuboid this gives 6 + 8 = 12 + 2; that is, like a cube, a cuboid has six faces, eight vertices, and twelve edges. Along with the rectangular cuboids, any parallelepiped is a cuboid of this type, as is a square frustum (the shape formed by truncation of the apex of a square pyramid).
Quadrilaterally-faced hexahedra (cuboids) have 6 faces, 12 edges, and 8 vertices:

| Shape | Faces | Symmetry group, order |
| Cube | squares | Oh, [4,3], (*432), order 48 |
| Rectangular cuboid | three pairs of rectangles | D2h, [2,2], (*222), order 8 |
| Trigonal trapezohedron | congruent rhombi | D3d, [2+,6], (2*3), order 12 |
| Trigonal trapezohedron | congruent quadrilaterals | D3, [2,3]+, (223), order 6 |
| Quadrilateral frustum | apex-truncated square pyramid | C4v, [4], (*44), order 8 |
| Parallelepiped | three pairs of parallelograms | Ci, [2+,2+], (×), order 2 |
| Rhombohedron | three pairs of rhombi | Ci, [2+,2+], (×), order 2 |
Rectangular cuboid
Type: prism, plesiohedron
Faces: 6 rectangles
Edges: 12
Vertices: 8
Symmetry group: D2h, [2,2], (*222), order 8
Schläfli symbol: { } × { } × { }
Dual polyhedron: rectangular fusil
Properties: convex, zonohedron, isogonal
In a rectangular cuboid, all angles are right angles, and opposite faces of a cuboid are equal. By definition this makes it a right rectangular prism, and the terms rectangular parallelepiped or orthogonal parallelepiped are also used to designate this polyhedron. The terms rectangular prism and oblong prism, however, are ambiguous, since they do not specify all angles.
A square cuboid, square box, or right square prism (also ambiguously called a square prism) is a special case of a cuboid in which at least two faces are squares. It has Schläfli symbol {4} × { }, and its symmetry is doubled from [2,2] to [4,2], order 16.
A cube is a special case of a square cuboid in which all six faces are squares. It has the Schläfli symbol {4,3}, and its symmetry is raised from [2,2], to [4,3], order 48.
If the dimensions of a rectangular cuboid are a, b and c, then its volume is abc and its surface area is 2(ab + ac + bc).
The length of the space diagonal is
$d={\sqrt {a^{2}+b^{2}+c^{2}}}.$
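The volume, surface area, and space-diagonal formulas can be collected in a short helper (a sketch; the function name is ours):

```python
from math import sqrt

# Basic rectangular-cuboid quantities from edge lengths a, b, c:
# volume abc, surface area 2(ab + ac + bc), space diagonal sqrt(a^2+b^2+c^2).
def cuboid(a, b, c):
    volume = a * b * c
    surface = 2 * (a * b + a * c + b * c)
    diagonal = sqrt(a * a + b * b + c * c)
    return volume, surface, diagonal

v, s, d = cuboid(3, 4, 12)
assert (v, s, d) == (144, 192, 13.0)   # 3^2 + 4^2 + 12^2 = 169 = 13^2
```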
Cuboid shapes are often used for boxes, cupboards, rooms, buildings, containers, cabinets, books, sturdy computer chassis, printing devices, electronic calling touchscreen devices, washing and drying machines, etc. Cuboids are among those solids that can tessellate three-dimensional space. The shape is fairly versatile in being able to contain multiple smaller cuboids, e.g. sugar cubes in a box, boxes in a cupboard, cupboards in a room, and rooms in a building.
A cuboid with integer edges as well as integer face diagonals is called an Euler brick, for example, with sides 44, 117 and 240. A perfect cuboid is an Euler brick whose space diagonal is also an integer. It is currently unknown whether a perfect cuboid actually exists.
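The Euler-brick property of (44, 117, 240) is easy to verify with integer arithmetic (a sketch):

```python
from math import isqrt

# Verify that (44, 117, 240) is an Euler brick: all three face diagonals
# are integers, while the space diagonal is not (so it is not a perfect
# cuboid).
def is_square(n):
    return isqrt(n) ** 2 == n

a, b, c = 44, 117, 240
assert all(is_square(x * x + y * y) for x, y in [(a, b), (a, c), (b, c)])
assert not is_square(a * a + b * b + c * c)   # space diagonal not integral
```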
Nets
The number of different nets for a simple cube is 11. However, this number increases significantly to (at least) 54 for a rectangular cuboid of three different lengths.[3]
See also
• Hyperrectangle
• Trapezohedron
• Lists of shapes
References
1. Robertson, Stewart Alexander (1984). Polytopes and Symmetry. Cambridge University Press. p. 75. ISBN 9780521277396.
2. Dupuis, Nathan Fellowes (1893). Elements of Synthetic Solid Geometry. Macmillan. p. 53. Retrieved December 1, 2018.
3. Steward, Don (May 24, 2013). "nets of a cuboid". Retrieved December 1, 2018.
External links
• Weisstein, Eric W. "Cuboid". MathWorld.
• Rectangular prism and cuboid Paper models and pictures
Continuous uniform distribution
In probability theory and statistics, the continuous uniform distributions or rectangular distributions are a family of symmetric probability distributions. Such a distribution describes an experiment where there is an arbitrary outcome that lies between certain bounds.[1] The bounds are defined by the parameters, $a$ and $b,$ which are the minimum and maximum values. The interval can either be closed (i.e. $[a,b]$) or open (i.e. $(a,b)$).[2] Therefore, the distribution is often abbreviated $U(a,b),$ where $U$ stands for uniform distribution.[1] The difference between the bounds defines the interval length; all intervals of the same length on the distribution's support are equally probable. It is the maximum entropy probability distribution for a random variable $X$ under no constraint other than that it is contained in the distribution's support.[3]
Continuous uniform distribution with parameters $a$ and $b$
(plots of the probability density function, drawn using the maximum convention, and of the cumulative distribution function omitted)
Notation ${\mathcal {U}}_{[a,b]}$
Parameters $-\infty <a<b<\infty $
Support $[a,b]$
PDF ${\begin{cases}{\frac {1}{b-a}}&{\text{for }}x\in [a,b]\\0&{\text{otherwise}}\end{cases}}$
CDF ${\begin{cases}0&{\text{for }}x<a\\{\frac {x-a}{b-a}}&{\text{for }}x\in [a,b]\\1&{\text{for }}x>b\end{cases}}$
Mean ${\tfrac {1}{2}}(a+b)$
Median ${\tfrac {1}{2}}(a+b)$
Mode ${\text{any value in }}(a,b)$
Variance ${\tfrac {1}{12}}(b-a)^{2}$
MAD ${\tfrac {1}{4}}(b-a)$
Skewness $0$
Ex. kurtosis $-{\tfrac {6}{5}}$
Entropy $\ln(b-a)$
MGF ${\begin{cases}{\frac {\mathrm {e} ^{tb}-\mathrm {e} ^{ta}}{t(b-a)}}&{\text{for }}t\neq 0\\1&{\text{for }}t=0\end{cases}}$
CF ${\begin{cases}{\frac {\mathrm {e} ^{\mathrm {i} tb}-\mathrm {e} ^{\mathrm {i} ta}}{\mathrm {i} t(b-a)}}&{\text{for }}t\neq 0\\1&{\text{for }}t=0\end{cases}}$
Definitions
Probability density function
The probability density function of the continuous uniform distribution is:
$f(x)={\begin{cases}{\frac {1}{b-a}}&{\text{for }}a\leq x\leq b,\\[8pt]0&{\text{for }}x<a\ {\text{ or }}\ x>b.\end{cases}}$
The values of $f(x)$ at the two boundaries $a$ and $b$ are usually unimportant, because they do not alter the value of $ \int _{c}^{d}f(x)dx$ over any interval $[c,d],$ nor of $ \int _{a}^{b}xf(x)dx,$ nor of any higher moment. Sometimes they are chosen to be zero, and sometimes chosen to be ${\tfrac {1}{b-a}}.$ The latter is appropriate in the context of estimation by the method of maximum likelihood. In the context of Fourier analysis, one may take the value of $f(a)$ or $f(b)$ to be ${\tfrac {1}{2(b-a)}},$ because then the inverse transform of many integral transforms of this uniform function will yield back the function itself, rather than a function which is equal "almost everywhere", i.e. except on a set of points with zero measure. Also, it is consistent with the sign function, which has no such ambiguity.
Any probability density function integrates to $1,$ so the probability density function of the continuous uniform distribution is graphically portrayed as a rectangle where $b-a$ is the base length and ${\tfrac {1}{b-a}}$ is the height. As the base length increases, the height (the density at any particular value within the distribution boundaries) decreases.[4]
In terms of mean $\mu $ and variance $\sigma ^{2},$ the probability density function of the continuous uniform distribution is:
$f(x)={\begin{cases}{\frac {1}{2\sigma {\sqrt {3}}}}&{\text{for }}-\sigma {\sqrt {3}}\leq x-\mu \leq \sigma {\sqrt {3}},\\0&{\text{otherwise}}.\end{cases}}$
Cumulative distribution function
The cumulative distribution function of the continuous uniform distribution is:
$F(x)={\begin{cases}0&{\text{for }}x<a,\\[8pt]{\frac {x-a}{b-a}}&{\text{for }}a\leq x\leq b,\\[8pt]1&{\text{for }}x>b.\end{cases}}$
Its inverse is:
$F^{-1}(p)=a+p(b-a)\quad {\text{ for }}0<p<1.$
In terms of mean $\mu $ and variance $\sigma ^{2},$ the cumulative distribution function of the continuous uniform distribution is:
$F(x)={\begin{cases}0&{\text{for }}x-\mu <-\sigma {\sqrt {3}},\\{\frac {1}{2}}\left({\frac {x-\mu }{\sigma {\sqrt {3}}}}+1\right)&{\text{for }}-\sigma {\sqrt {3}}\leq x-\mu <\sigma {\sqrt {3}},\\1&{\text{for }}x-\mu \geq \sigma {\sqrt {3}};\end{cases}}$
its inverse is:
$F^{-1}(p)=\sigma {\sqrt {3}}(2p-1)+\mu \quad {\text{ for }}0\leq p\leq 1.$
Example 1. Using the continuous uniform distribution function
For a random variable $X\sim U(0,23),$ find $P(2<X<18):$
$P(2<X<18)=(18-2)\cdot {\frac {1}{23-0}}={\frac {16}{23}}.$
In a graphical representation of the continuous uniform distribution function $[f(x){\text{ vs }}x],$ the area under the curve within the specified bounds, displaying the probability, is a rectangle. For the specific example above, the base would be $16,$ and the height would be ${\tfrac {1}{23}}.$[5]
Example 2. Using the continuous uniform distribution function (conditional)
For a random variable $X\sim U(0,23),$ find $P(X>12\ |\ X>8):$
$P(X>12\ |\ X>8)=(23-12)\cdot {\frac {1}{23-8}}={\frac {11}{15}}.$
The example above is a conditional probability case for the continuous uniform distribution: given that $X>8$ is true, what is the probability that $X>12?$ Conditional probability changes the sample space, so a new interval length $b-a'$ has to be calculated, where $b=23$ and $a'=8.$[5] The graphical representation would still follow Example 1, where the area under the curve within the specified bounds displays the probability; the base of the rectangle would be $11,$ and the height would be ${\tfrac {1}{15}}.$[5]
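Both examples can be reproduced directly from the density (a sketch; `uniform_prob` is a hypothetical helper):

```python
from fractions import Fraction

# Reproduce Examples 1 and 2 for X ~ U(0, 23) directly from the density.
def uniform_prob(lo, hi, a, b):
    # P(lo < X < hi) for X ~ U(a, b), assuming [lo, hi] lies inside [a, b]
    return Fraction(hi - lo, b - a)

assert uniform_prob(2, 18, 0, 23) == Fraction(16, 23)
# Conditional case: given X > 8, X is uniform on (8, 23)
assert uniform_prob(12, 23, 8, 23) == Fraction(11, 15)
```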
Moment-generating function
The moment-generating function of the continuous uniform distribution is:[6]
$M_{X}=\mathrm {E} (\mathrm {e} ^{tX})=\int _{a}^{b}\mathrm {e} ^{tx}{\frac {dx}{b-a}}={\frac {\mathrm {e} ^{tb}-\mathrm {e} ^{ta}}{t(b-a)}},$[7]
from which we may calculate the raw moments $m_{k}:$
$m_{1}={\frac {a+b}{2}},$
$m_{2}={\frac {a^{2}+ab+b^{2}}{3}},$
$m_{k}={\frac {\sum _{i=0}^{k}a^{i}b^{k-i}}{k+1}}.$
For a random variable following the continuous uniform distribution, the expected value is $m_{1}={\tfrac {a+b}{2}},$ and the variance is $m_{2}-m_{1}^{2}={\tfrac {(b-a)^{2}}{12}}.$
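The raw-moment formula can be checked against direct integration of $x^k/(b-a)$ (a sketch using exact rational arithmetic; the helper names are ours):

```python
from fractions import Fraction

# Check the raw-moment formula m_k = (sum_i a^i b^(k-i)) / (k+1) against
# direct integration: int_a^b x^k dx / (b-a) = (b^(k+1)-a^(k+1)) / ((k+1)(b-a)).
def m_raw(k, a, b):
    return Fraction(sum(a**i * b**(k - i) for i in range(k + 1)), k + 1)

def m_exact(k, a, b):
    return Fraction(b**(k + 1) - a**(k + 1), (k + 1) * (b - a))

a, b = 2, 7
for k in range(1, 6):
    assert m_raw(k, a, b) == m_exact(k, a, b)
# Variance from the first two raw moments equals (b-a)^2 / 12:
assert m_raw(2, a, b) - m_raw(1, a, b)**2 == Fraction((b - a)**2, 12)
```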
For the special case $a=-b,$ the probability density function of the continuous uniform distribution is:
$f(x)={\begin{cases}{\frac {1}{2b}}&{\text{for }}-b\leq x\leq b,\\[8pt]0&{\text{otherwise}};\end{cases}}$
the moment-generating function reduces to the simple form:
$M_{X}={\frac {\sinh bt}{bt}}.$
Cumulant-generating function
For $n\geq 2,$ the $n$-th cumulant of the continuous uniform distribution on the interval $[-{\tfrac {1}{2}},{\tfrac {1}{2}}]$ is ${\tfrac {B_{n}}{n}},$ where $B_{n}$ is the $n$-th Bernoulli number.[8]
Standard uniform distribution
The continuous uniform distribution with parameters $a=0$ and $b=1,$ i.e. $U(0,1),$ is called the standard uniform distribution.
One interesting property of the standard uniform distribution is that if $u_{1}$ has a standard uniform distribution, then so does $1-u_{1}.$ This property can be used for generating antithetic variates, among other things. A separate property underlies the inversion method: the continuous standard uniform distribution can be used to generate random numbers for any other continuous distribution.[4] If $u_{1}$ is a random number drawn from the standard uniform distribution $U(0,1),$ then $x=F^{-1}(u_{1})$ generates a random number $x$ from any continuous distribution with the specified cumulative distribution function $F.$[4]
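The symmetry property can be sketched with a small antithetic-variates experiment; the choice of estimating the integral of $x^2$ over $[0,1]$ (true value 1/3) is an arbitrary illustration:

```python
import random

# Since U ~ U(0,1) implies 1 - U ~ U(0,1), averaging an estimator over
# the antithetic pair (U, 1 - U) preserves the mean and here reduces the
# variance of a Monte Carlo estimate of the integral of x^2 on [0, 1].
random.seed(0)
n = 100_000
plain, antithetic = [], []
for _ in range(n):
    u = random.random()
    plain.append(u * u)
    antithetic.append((u * u + (1 - u) ** 2) / 2)

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

assert abs(mean(antithetic) - 1 / 3) < 0.01   # still estimates 1/3
assert variance(antithetic) < variance(plain) # variance reduction
```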
Relationship to other functions
As long as the same conventions are followed at the transition points, the probability density function of the continuous uniform distribution may also be expressed in terms of the Heaviside step function as:
$f(x)={\frac {\operatorname {H} (x-a)-\operatorname {H} (x-b)}{b-a}},$
or in terms of the rectangle function as:
$f(x)={\frac {1}{b-a}}\ \operatorname {rect} \left({\frac {x-{\frac {a+b}{2}}}{b-a}}\right).$
There is no ambiguity at the transition point of the sign function. Using the half-maximum convention at the transition points, the continuous uniform distribution may be expressed in terms of the sign function as:
$f(x)={\frac {\operatorname {sgn} {(x-a)}-\operatorname {sgn} {(x-b)}}{2(b-a)}}.$
Properties
Moments
The mean (first raw moment) of the continuous uniform distribution is:
$E(X)=\int _{a}^{b}x{\frac {dx}{b-a}}={\frac {b^{2}-a^{2}}{2(b-a)}}.$
The second raw moment of this distribution is:
$E(X^{2})=\int _{a}^{b}x^{2}{\frac {dx}{b-a}}={\frac {b^{3}-a^{3}}{3(b-a)}}.$
In general, the $n$-th raw moment of this distribution is:
$E(X^{n})=\int _{a}^{b}x^{n}{\frac {dx}{b-a}}={\frac {b^{n+1}-a^{n+1}}{(n+1)(b-a)}}.$
The variance (second central moment) of this distribution is:
$V(X)=E\left({\big (}X-E(X){\big )}^{2}\right)=\int _{a}^{b}\left(x-{\frac {a+b}{2}}\right)^{2}{\frac {dx}{b-a}}={\frac {(b-a)^{2}}{12}}.$
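The closed-form mean and variance can be checked against the general raw-moment formula; a small sketch with illustrative endpoints:

```python
# n-th raw moment of U(a, b): E(X^n) = (b^(n+1) - a^(n+1)) / ((n+1)(b - a)).
def raw_moment(a, b, n):
    return (b ** (n + 1) - a ** (n + 1)) / ((n + 1) * (b - a))

a, b = 2.0, 6.0                              # illustrative endpoints
mean = raw_moment(a, b, 1)                   # equals (a + b) / 2
variance = raw_moment(a, b, 2) - mean ** 2   # equals (b - a)^2 / 12
```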
Order statistics
Let $X_{1},...,X_{n}$ be an i.i.d. sample from $U(0,1),$ and let $X_{(k)}$ be the $k$-th order statistic from this sample.
$X_{(k)}$ has a beta distribution, with parameters $k$ and $n-k+1.$
The expected value is:
$\operatorname {E} (X_{(k)})={k \over n+1}.$
This fact is useful when making Q–Q plots.
The variance is:
$\operatorname {V} (X_{(k)})={k(n-k+1) \over (n+1)^{2}(n+2)}.$
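Both order-statistic formulas can be verified by simulation; a sketch with illustrative sample size and rank:

```python
import random

random.seed(1)

n, k = 5, 2                           # illustrative sample size and rank
trials = 200_000
total = total_sq = 0.0
for _ in range(trials):
    sample = sorted(random.random() for _ in range(n))
    x_k = sample[k - 1]               # k-th order statistic (1-indexed)
    total += x_k
    total_sq += x_k * x_k

emp_mean = total / trials
emp_var = total_sq / trials - emp_mean ** 2
th_mean = k / (n + 1)                                  # = 1/3 here
th_var = k * (n - k + 1) / ((n + 1) ** 2 * (n + 2))    # = 8/252 here
```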
Uniformity
The probability that a continuously uniformly distributed random variable falls within any interval of fixed length is independent of the location of the interval itself (but it is dependent on the interval size $(\ell )$), so long as the interval is contained in the distribution's support.
Indeed, if $X\sim U(a,b)$ and if $[x,x+\ell ]$ is a subinterval of $[a,b]$ with fixed $\ell >0,$ then:
$P{\big (}X\in [x,x+\ell ]{\big )}=\int _{x}^{x+\ell }{\frac {dy}{b-a}}={\frac {\ell }{b-a}},$
which is independent of $x.$ This fact motivates the distribution's name.
Generalization to Borel sets
This distribution can be generalized to more complicated sets than intervals. Let $S$ be a Borel set of positive, finite Lebesgue measure $\lambda (S),$ i.e. $0<\lambda (S)<+\infty .$ The uniform distribution on $S$ can be specified by defining the probability density function to be zero outside $S$ and constantly equal to ${\tfrac {1}{\lambda (S)}}$ on $S.$
Related distributions
• If X has a standard uniform distribution, then by the inverse transform sampling method, Y = −ln(X)/λ has an exponential distribution with (rate) parameter λ.
• If X has a standard uniform distribution, then Y = X^n has a beta distribution with parameters (1/n, 1). In particular, the standard uniform distribution is a special case of the beta distribution, with parameters (1, 1).
• The Irwin–Hall distribution is the sum of n i.i.d. U(0,1) distributions.
• The sum of two independent, equally distributed, uniform distributions yields a symmetric triangular distribution.
• The distance between two i.i.d. uniform random variables also has a triangular distribution, although not symmetric.
Statistical inference
Minimum-variance unbiased estimator
Main article: German tank problem
Given a uniform distribution on $[0,b]$ with unknown $b,$ the minimum-variance unbiased estimator (UMVUE) for the maximum is:
${\hat {b}}_{\text{UMVU}}={\frac {k+1}{k}}m=m+{\frac {m}{k}},$
where $m$ is the sample maximum and $k$ is the sample size, sampling without replacement (though this distinction almost surely makes no difference for a continuous distribution). This follows for the same reasons as estimation for the discrete distribution, and can be seen as a very simple case of maximum spacing estimation. This problem is commonly known as the German tank problem, due to application of maximum estimation to estimates of German tank production during World War II.
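A minimal sketch of this estimator (the observations below are hypothetical):

```python
# UMVU estimate of the maximum b of U(0, b): scale the sample maximum m
# up by (k + 1) / k, where k is the sample size.
def umvu_max(sample):
    m = max(sample)
    k = len(sample)
    return (k + 1) / k * m

observations = [2.1, 7.4, 3.3, 5.9]   # hypothetical serial-number-style data
estimate = umvu_max(observations)      # 7.4 * 5 / 4 = 9.25
```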
Maximum likelihood estimator
The maximum likelihood estimator is:
${\hat {b}}_{ML}=m,$
where $m$ is the sample maximum, also denoted as $m=X_{(n)},$ the maximum order statistic of the sample.
Method of moment estimator
The method of moments estimator is:
${\hat {b}}_{MM}=2{\bar {X}},$
where ${\bar {X}}$ is the sample mean.
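The maximum likelihood, method-of-moments, and UMVU estimators can be compared on simulated data; a sketch assuming an illustrative true maximum of 10:

```python
import random

random.seed(2)

b_true = 10.0                          # illustrative true maximum
sample = [random.uniform(0, b_true) for _ in range(1000)]
n = len(sample)

b_ml = max(sample)                     # maximum likelihood: never exceeds b_true
b_mm = 2 * sum(sample) / n             # method of moments: twice the sample mean
b_umvu = (n + 1) / n * b_ml            # UMVU correction of the ML estimate
```

The ML estimate is biased low (it is always at most the true maximum), which the UMVU factor $(n+1)/n$ corrects on average.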
Estimation of midpoint
The midpoint of the distribution, ${\tfrac {a+b}{2}},$ is both the mean and the median of the uniform distribution. Although both the sample mean and the sample median are unbiased estimators of the midpoint, neither is as efficient as the sample mid-range, i.e. the arithmetic mean of the sample maximum and the sample minimum, which is the UMVU estimator of the midpoint (and also the maximum likelihood estimate).
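The efficiency claim can be illustrated by simulation: for uniform samples, the sample mid-range has a markedly smaller variance than the sample mean (the sample and trial sizes below are illustrative):

```python
import random
import statistics

random.seed(3)

def estimates(n=50):
    # One sample from U(0, 1), whose midpoint is 0.5; return both estimators.
    s = [random.random() for _ in range(n)]
    return sum(s) / n, (min(s) + max(s)) / 2

means, midranges = zip(*(estimates() for _ in range(20_000)))

var_mean = statistics.pvariance(means)          # approx 1 / (12 n)
var_midrange = statistics.pvariance(midranges)  # approx 1 / (2 (n+1)(n+2))
```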
For the maximum
Let $X_{1},X_{2},X_{3},...,X_{n}$ be a sample from $U_{[0,L]},$ where $L$ is the maximum value in the population. Then $X_{(n)}=\max(X_{1},X_{2},X_{3},...,X_{n})$ has the Lebesgue–Borel density $f={\frac {d\Pr _{X_{(n)}}}{d\lambda }}:$[9]
$f(t)=n{\frac {1}{L}}\left({\frac {t}{L}}\right)^{n-1}\!=n{\frac {t^{n-1}}{L^{n}}}1\!\!1_{[0,L]}(t),$ where $1\!\!1_{[0,L]}$ is the indicator function of $[0,L].$
The confidence interval given before is mathematically incorrect, as
$\Pr {\big (}[{\hat {\theta }},{\hat {\theta }}+\varepsilon ]\ni \theta {\big )}\geq 1-\alpha $
cannot be solved for $\varepsilon $ without knowledge of $\theta $. However, one can solve
$\Pr {\big (}[{\hat {\theta }},{\hat {\theta }}(1+\varepsilon )]\ni \theta {\big )}\geq 1-\alpha $ for $\varepsilon \geq \alpha ^{-1/n}-1$ for any unknown but valid $\theta ;$
one then chooses the smallest $\varepsilon $ possible satisfying the condition above. Note that the interval length depends upon the random variable ${\hat {\theta }}.$
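The coverage of this interval can be checked empirically; a sketch using the smallest admissible $\varepsilon$, at which the coverage holds with equality (the values of n, α, and θ below are illustrative):

```python
import random

random.seed(4)

def smallest_eps(alpha, n):
    # Smallest eps with Pr(theta in [m, m*(1+eps)]) >= 1 - alpha, where m
    # is the maximum of n draws from U(0, theta); at this eps the coverage
    # equals 1 - alpha exactly.
    return alpha ** (-1.0 / n) - 1.0

n, alpha, theta = 20, 0.05, 7.0        # illustrative values
eps = smallest_eps(alpha, n)

trials = 20_000
covered = 0
for _ in range(trials):
    m = max(random.uniform(0, theta) for _ in range(n))
    if m <= theta <= m * (1 + eps):
        covered += 1
coverage = covered / trials            # should be close to 1 - alpha
```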
Occurrence and applications
Probabilities under the uniform distribution are simple to calculate because of the simplicity of the density's form.[2] The distribution therefore appears in a variety of applications, including hypothesis testing, random sampling, and finance, as shown below. Furthermore, experiments of physical origin often follow a uniform distribution (e.g. emission of radioactive particles).[1] In any application, however, the underlying assumption is that the probability of falling in an interval of fixed length is constant.[2]
Economics example for uniform distribution
In economics, demand and replenishment often do not follow the expected normal distribution, so other distribution models, such as the Bernoulli process, are used to better predict probabilities and trends.[10] According to Wanke (2008), however, when investigating lead time for inventory management at the beginning of the life cycle, when a completely new product is being analyzed, the uniform distribution proves more useful.[10] In this situation, other distributions may not be viable, since there is no existing data on the new product and no demand history, and hence no appropriate or known distribution.[10] The uniform distribution is suitable here because, although the lead time (the random variable, related to demand) for the new product is unknown, the results are likely to fall within a plausible range between two values.[10] From the uniform distribution model, other lead-time-related quantities, such as the cycle service level and the shortage per cycle, can then be calculated. The uniform distribution was also chosen for the simplicity of its calculations.[10]
Sampling from an arbitrary distribution
Main article: Inverse transform sampling
The uniform distribution is useful for sampling from arbitrary distributions. A general method is the inverse transform sampling method, which uses the cumulative distribution function (CDF) of the target random variable. This method is very useful in theoretical work. Since simulations using this method require inverting the CDF of the target variable, alternative methods have been devised for the cases where the CDF is not known in closed form. One such method is rejection sampling.
The normal distribution is an important example where the inverse transform method is not efficient. However, there is an exact method, the Box–Muller transformation, which uses the inverse transform to convert two independent uniform random variables into two independent normally distributed random variables.
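A minimal sketch of the Box–Muller transformation:

```python
import math
import random

random.seed(5)

def box_muller(u1, u2):
    # Map two independent U(0,1) variates to two independent standard
    # normal variates.
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

normals = []
for _ in range(50_000):
    # 1 - random.random() lies in (0, 1], so the logarithm is defined.
    z1, z2 = box_muller(1.0 - random.random(), random.random())
    normals.extend((z1, z2))

mean = sum(normals) / len(normals)
var = sum(z * z for z in normals) / len(normals) - mean ** 2
```

The resulting sample mean and variance should approach 0 and 1, the moments of the standard normal distribution.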
Quantization error
In analog-to-digital conversion, a quantization error occurs. This error is either due to rounding or truncation. When the original signal is much larger than one least significant bit (LSB), the quantization error is not significantly correlated with the signal, and has an approximately uniform distribution. The RMS error therefore follows from the variance of this distribution.
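This can be sketched numerically: rounding a signal to a step Δ produces errors that are approximately uniform on [−Δ/2, Δ/2], so the RMS error is close to Δ/√12 (the step size below is illustrative):

```python
import math
import random

random.seed(6)

delta = 0.01                           # illustrative quantization step (1 LSB)
errors = []
for _ in range(100_000):
    x = random.uniform(-1.0, 1.0)      # signal much larger than one LSB
    quantized = round(x / delta) * delta
    errors.append(quantized - x)

rms = math.sqrt(sum(e * e for e in errors) / len(errors))
predicted = delta / math.sqrt(12)      # RMS of U(-delta/2, delta/2)
```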
Random variate generation
There are many applications in which it is useful to run simulation experiments. Many programming languages come with implementations to generate pseudo-random numbers which are effectively distributed according to the standard uniform distribution.
These uniformly distributed numbers are, in turn, often used as the basis for non-uniform random variate generation.
If $u$ is a value sampled from the standard uniform distribution, then the value $a+(b-a)u$ follows the uniform distribution parameterized by $a$ and $b,$ as described above.
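A one-line sketch of this rescaling (the endpoints below are illustrative):

```python
import random

random.seed(7)

def uniform_ab(a, b):
    # Rescale a standard uniform draw to the interval [a, b).
    return a + (b - a) * random.random()

a, b = -3.0, 5.0                            # illustrative endpoints
samples = [uniform_ab(a, b) for _ in range(100_000)]
sample_mean = sum(samples) / len(samples)   # should approach (a + b) / 2 = 1
```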
History
While the historical origins of the uniform distribution are inconclusive, it is speculated that the term "uniform" arose from the concept of equiprobability in dice games (note that dice games have a discrete, not continuous, uniform sample space). Equiprobability was mentioned in Gerolamo Cardano's Liber de Ludo Aleae, a 16th-century manual on advanced probability calculus in relation to dice.[11]
See also
• Discrete uniform distribution
• Beta distribution
• Box–Muller transform
• Probability plot
• Q–Q plot
• Rectangular function
• Irwin–Hall distribution — In the degenerate case where n = 1, the Irwin–Hall distribution generates a uniform distribution between 0 and 1.
• Bates distribution — Similar to the Irwin–Hall distribution, but rescaled for n. Like the Irwin–Hall distribution, in the degenerate case where n = 1, the Bates distribution generates a uniform distribution between 0 and 1.
References
1. Dekking, Michel (2005). A modern introduction to probability and statistics : understanding why and how. London, UK: Springer. pp. 60–61. ISBN 978-1-85233-896-1.
2. Walpole, Ronald; et al. (2012). Probability & Statistics for Engineers and Scientists. Boston, USA: Prentice Hall. pp. 171–172. ISBN 978-0-321-62911-1.
3. Park, Sung Y.; Bera, Anil K. (2009). "Maximum entropy autoregressive conditional heteroskedasticity model". Journal of Econometrics. 150 (2): 219–230. CiteSeerX 10.1.1.511.9750. doi:10.1016/j.jeconom.2008.12.014.
4. "Uniform Distribution (Continuous)". MathWorks. 2019. Retrieved November 22, 2019.
5. Illowsky, Barbara; et al. (2013). Introductory Statistics. Rice University, Houston, Texas, USA: OpenStax College. pp. 296–304. ISBN 978-1-938168-20-8.
6. Casella & Berger 2001, p. 626
7. https://www.stat.washington.edu/~nehemyl/files/UW_MATH-STAT395_moment-functions.pdf
8. https://galton.uchicago.edu/~wichura/Stat304/Handouts/L18.cumulants.pdf
9. Nechval KN, Nechval NA, Vasermanis EK, Makeev VY (2002). "Constructing shortest-length confidence intervals". Transport and Telecommunication. 3 (1): 95–103.
10. Wanke, Peter (2008). "The uniform distribution as a first practical approach to new product inventory management". International Journal of Production Economics. 114 (2): 811–819. doi:10.1016/j.ijpe.2008.04.004 – via Research Gate.
11. Bellhouse, David (May 2005). "Decoding Cardano's Liber de Ludo". Historia Mathematica. 32: 180–202. doi:10.1016/j.hm.2004.04.001.
Further reading
• Casella, George; Roger L. Berger (2001), Statistical Inference (2nd ed.), ISBN 978-0-534-24312-8, LCCN 2001025794
External links
• Online calculator of Uniform distribution (continuous)