Ramanujan prime
In mathematics, a Ramanujan prime is a prime number that satisfies a result proven by Srinivasa Ramanujan relating to the prime-counting function.
Not to be confused with Hardy–Ramanujan number.
Origins and definition
In 1919, Ramanujan published a new proof of Bertrand's postulate which, as he notes, was first proved by Chebyshev.[1] At the end of the two-page published paper, Ramanujan derived a generalized result:
$\pi (x)-\pi \left({\frac {x}{2}}\right)\geq 1,2,3,4,5,\ldots {\text{ for all }}x\geq 2,11,17,29,41,\ldots {\text{ respectively}}$ OEIS: A104272
where $\pi (x)$ is the prime-counting function, equal to the number of primes less than or equal to x.
The converse of this result is the definition of Ramanujan primes:
The nth Ramanujan prime is the least integer Rn for which $\pi (x)-\pi (x/2)\geq n,$ for all x ≥ Rn.[2] In other words: Ramanujan primes are the least integers Rn for which there are at least n primes between x/2 and x for all x ≥ Rn.
The first five Ramanujan primes are thus 2, 11, 17, 29, and 41.
Note that the integer Rn is necessarily a prime number: $\pi (x)-\pi (x/2)$, and hence $\pi (x)$, must increase at x = Rn, which can only happen if Rn itself is a new prime. Since $\pi (x)-\pi (x/2)$ can increase by at most 1,
$\pi (R_{n})-\pi \left({\frac {R_{n}}{2}}\right)=n.$
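The definition lends itself to direct computation. Below is a minimal sketch in Python (the helper name and the hard-coded search limit of 400 are assumptions; the limit must comfortably exceed the largest Ramanujan prime sought). Since $\pi (x)-\pi (x/2)$ changes only at integers, it suffices to scan integers downward and take Rn as one more than the largest x where the count drops below n.
```python
def ramanujan_primes(count, limit=400):
    """First `count` Ramanujan primes; `limit` must well exceed the last one."""
    # Sieve of Eratosthenes up to `limit`.
    is_prime = [False, False] + [True] * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    # pi[m] = number of primes <= m.
    pi = [0] * (limit + 1)
    for m in range(1, limit + 1):
        pi[m] = pi[m - 1] + is_prime[m]
    # f(m) = pi(m) - pi(m/2); R_n = 1 + the largest m with f(m) < n.
    f = [pi[m] - pi[m // 2] for m in range(limit + 1)]
    result = []
    for n in range(1, count + 1):
        m = limit
        while f[m] >= n:
            m -= 1
        result.append(m + 1)
    return result

print(ramanujan_primes(5))  # [2, 11, 17, 29, 41]
```
The output matches the list above.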
Bounds and an asymptotic formula
For all $n\geq 1$, the bounds
$2n\ln 2n<R_{n}<4n\ln 4n$
hold. If $n>1$, then also
$p_{2n}<R_{n}<p_{3n}$
where pn is the nth prime number.
As n tends to infinity, Rn is asymptotic to the 2nth prime, i.e.,
Rn ~ p2n (n → ∞).
All these results were proved by Sondow (2009),[3] except for the upper bound Rn < p3n which was conjectured by him and proved by Laishram (2010).[4] The bound was improved by Sondow, Nicholson, and Noe (2011)[5] to
$R_{n}\leq {\frac {41}{47}}\ p_{3n}$
which is the optimal form of Rn ≤ c·p3n since it is an equality for n = 5.
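These bounds are easy to spot-check numerically. A sketch assuming SymPy (its prime and primepi functions; the scan limit of 2000 is an assumption large enough for small n):
```python
from math import log
from sympy import prime, primepi

def ramanujan_prime(n, limit=2000):
    # R_n is one more than the largest m with primepi(m) - primepi(m//2) < n;
    # scan downward from a limit assumed to lie well past R_n.
    m = limit
    while primepi(m) - primepi(m // 2) >= n:
        m -= 1
    return m + 1

for n in range(2, 8):
    R = ramanujan_prime(n)
    assert 2 * n * log(2 * n) < R < 4 * n * log(4 * n)  # Sondow's bounds
    assert prime(2 * n) < R <= 41 * prime(3 * n) / 47   # p_2n < R_n <= (41/47) p_3n
    print(n, R)
```
At n = 5 the upper inequality is an equality (R_5 = 41, p_15 = 47), as stated above.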
References
1. Ramanujan, S. (1919), "A proof of Bertrand's postulate", Journal of the Indian Mathematical Society, 11: 181–182
2. Jonathan Sondow. "Ramanujan Prime". MathWorld.
3. Sondow, J. (2009), "Ramanujan primes and Bertrand's postulate", Amer. Math. Monthly, 116 (7): 630–635, arXiv:0907.5232, doi:10.4169/193009709x458609
4. Laishram, S. (2010), "On a conjecture on Ramanujan primes" (PDF), International Journal of Number Theory, 6 (8): 1869–1873, CiteSeerX 10.1.1.639.4934, doi:10.1142/s1793042110003848.
5. Sondow, J.; Nicholson, J.; Noe, T.D. (2011), "Ramanujan primes: bounds, runs, twins, and gaps" (PDF), Journal of Integer Sequences, 14: 11.6.2, arXiv:1105.2249, Bibcode:2011arXiv1105.2249S
Ramanujan–Nagell equation
In mathematics, in the field of number theory, the Ramanujan–Nagell equation is an equation between a square number and a number that is seven less than a power of two. It is an example of an exponential Diophantine equation, an equation to be solved in integers where one of the variables appears as an exponent.
The equation is named after Srinivasa Ramanujan, who conjectured that it has only five integer solutions, and after Trygve Nagell, who proved the conjecture. It implies the non-existence of perfect binary codes with minimum Hamming distance 5 or 6.
Equation and solution
The equation is
$2^{n}-7=x^{2}\,$
and solutions in natural numbers n and x exist just when n = 3, 4, 5, 7 and 15 (sequence A060728 in the OEIS).
This was conjectured in 1913 by the Indian mathematician Srinivasa Ramanujan, proposed independently in 1943 by the Norwegian mathematician Wilhelm Ljunggren, and proved in 1948 by the Norwegian mathematician Trygve Nagell. The values of n correspond to the values of x as follows:
x = 1, 3, 5, 11 and 181 (sequence A038198 in the OEIS).[1]
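Because squares and powers of two are cheap to test, the solution set is easy to verify by brute force up to any finite bound (a plain-Python sketch; the cutoff n < 10000 is an arbitrary assumption, and Nagell's theorem guarantees nothing lies beyond n = 15):
```python
from math import isqrt

solutions = []
for n in range(3, 10_000):   # n = 1, 2 give negative values, so start at 3
    m = 2 ** n - 7
    x = isqrt(m)
    if x * x == m:           # is 2^n - 7 a perfect square?
        solutions.append((n, x))

print(solutions)  # [(3, 1), (4, 3), (5, 5), (7, 11), (15, 181)]
# The corresponding triangular Mersenne (Ramanujan–Nagell) numbers,
# anticipating the next section:
print([(x * x - 1) // 8 for _, x in solutions])  # [0, 1, 3, 15, 4095]
```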
Triangular Mersenne numbers
The problem of finding all numbers of the form 2b − 1 (Mersenne numbers) which are triangular is equivalent:
${\begin{aligned}&\ 2^{b}-1={\frac {y(y+1)}{2}}\\[2pt]\Longleftrightarrow &\ 8(2^{b}-1)=4y(y+1)\\\Longleftrightarrow &\ 2^{b+3}-8=4y^{2}+4y\\\Longleftrightarrow &\ 2^{b+3}-7=4y^{2}+4y+1\\\Longleftrightarrow &\ 2^{b+3}-7=(2y+1)^{2}\end{aligned}}$
The values of b are just those of n − 3, and the corresponding triangular Mersenne numbers (also known as Ramanujan–Nagell numbers) are:
${\frac {y(y+1)}{2}}={\frac {(x-1)(x+1)}{8}}$
for x = 1, 3, 5, 11 and 181, giving 0, 1, 3, 15, 4095 and no more (sequence A076046 in the OEIS).
Equations of Ramanujan–Nagell type
An equation of the form
$x^{2}+D=AB^{n}$
for fixed D, A, B and variable x, n is said to be of Ramanujan–Nagell type. A result of Siegel[2] implies that the number of solutions in each case is finite.[3] By representing $n=3m+r$ with $r\in \{0,1,2\}$ and $B^{n}=B^{r}y^{3}$ with $y=B^{m}$, the equation of Ramanujan–Nagell type is reduced to three Mordell curves (indexed by $r$), each of which has a finite number of integer solutions:
$r=0:\qquad (Ax)^{2}=(Ay)^{3}-A^{2}D$,
$r=1:\qquad (ABx)^{2}=(ABy)^{3}-A^{2}B^{2}D$,
$r=2:\qquad (AB^{2}x)^{2}=(AB^{2}y)^{3}-A^{2}B^{4}D$.
The equation with $A=1,\ B=2$ has at most two solutions, except in the case $D=7$ corresponding to the Ramanujan–Nagell equation. There are infinitely many values of D for which there are two solutions, including $D=2^{m}-1$.[1]
Equations of Lebesgue–Nagell type
An equation of the form
$x^{2}+D=Ay^{n}$
for fixed D, A and variable x, y, n is said to be of Lebesgue–Nagell type. This is named after Victor-Amédée Lebesgue, who proved that the equation
$x^{2}+1=y^{n}$
has no nontrivial solutions.[4]
Results of Shorey and Tijdeman[5] imply that the number of solutions in each case is finite.[6] Bugeaud, Mignotte and Siksek[7] solved equations of this type with A = 1 and 1 ≤ D ≤ 100. In particular, the following generalization of the Ramanujan–Nagell equation:
$y^{n}-7=x^{2}\,$
has positive integer solutions only when x = 1, 3, 5, 11, or 181.
See also
• Pillai's conjecture
• Scientific equations named after people
Notes
1. Saradha & Srinivasan 2008, p. 208.
2. Siegel 1929.
3. Saradha & Srinivasan 2008, p. 207.
4. Lebesgue 1850.
5. Shorey & Tijdeman 1986.
6. Saradha & Srinivasan 2008, p. 211.
7. Bugeaud, Mignotte & Siksek 2006.
References
• Bugeaud, Y.; Mignotte, M.; Siksek, S. (2006). "Classical and modular approaches to exponential Diophantine equations II. The Lebesgue–Nagell equation". Compositio Mathematica. 142: 31–62. arXiv:math/0405220. doi:10.1112/S0010437X05001739. S2CID 18534268.
• Lebesgue (1850). "Sur l'impossibilité, en nombres entiers, de l'équation xm = y2 + 1". Nouv. Ann. Math. Série 1. 9: 178–181.
• Ljunggren, W. (1943). "Oppgave nr 2". Norsk Mat. Tidsskr. 25: 29.
• Nagell, T. (1948). "Løsning till oppgave nr 2". Norsk Mat. Tidsskr. 30: 62–64.
• Nagell, T. (1961). "The Diophantine equation x2 + 7 = 2n". Ark. Mat. 30 (2–3): 185–187. Bibcode:1961ArM.....4..185N. doi:10.1007/BF02592006.
• Ramanujan, S. (1913). "Question 464". J. Indian Math. Soc. 5: 130.
• Saradha, N.; Srinivasan, Anitha (2008). "Generalized Lebesgue–Ramanujan–Nagell equations". In Saradha, N. (ed.). Diophantine Equations. Narosa. pp. 207–223. ISBN 978-81-7319-898-4.
• Shorey, T. N.; Tijdeman, R. (1986). Exponential Diophantine equations. Cambridge Tracts in Mathematics. Vol. 87. Cambridge University Press. pp. 137–138. ISBN 0-521-26826-5. Zbl 0606.10011.
• Siegel, C. L. (1929). "Über einige Anwendungen Diophantischer Approximationen". Abh. Preuss. Akad. Wiss. Phys. Math. Kl. 1: 41–69.
External links
• "Values of X corresponding to N in the Ramanujan–Nagell Equation". Wolfram MathWorld. Retrieved 2012-05-08.
• Can N² + N + 2 Be A Power Of 2?, Math Forum discussion
Ramanujan summation
Ramanujan summation is a technique invented by the mathematician Srinivasa Ramanujan for assigning a value to divergent infinite series. Although the Ramanujan summation of a divergent series is not a sum in the traditional sense, it has properties that make it mathematically useful in the study of divergent infinite series, for which conventional summation is undefined.
Further information: 1 + 2 + 3 + 4 + ⋯
Summation
Although a divergent series has no sum in the conventional sense, Ramanujan summation works with its partial sums. If we take the Euler–Maclaurin summation formula together with the correction rule using Bernoulli numbers, we see that:
${\begin{aligned}{\frac {1}{2}}f(0)+f(1)+\cdots +f(n-1)+{\frac {1}{2}}f(n)&={\frac {f(0)+f(n)}{2}}+\sum _{k=1}^{n-1}f(k)=\sum _{k=0}^{n}f(k)-{\frac {f(0)+f(n)}{2}}\\&=\int _{0}^{n}f(x)\,dx+\sum _{k=1}^{p}{\frac {B_{2k}}{(2k)!}}\left[f^{(2k-1)}(n)-f^{(2k-1)}(0)\right]+R_{p}\end{aligned}}$
Ramanujan[1] wrote this for the case where p tends to infinity, changing the limits of the integral and of the corresponding summation:
$\sum _{k=a}^{x}f(k)=C+\int _{a}^{x}f(t)\,dt+{\frac {1}{2}}f(x)+\sum _{k=1}^{\infty }{\frac {B_{2k}}{(2k)!}}f^{(2k-1)}(x)$
where C is a constant specific to the series and its analytic continuation, and the limits on the integral were not specified by Ramanujan, but presumably they were as given above. Comparing both formulae and assuming that $R_{p}$ tends to 0 as x tends to infinity, we see that, in the general case, for functions f(x) with no divergence at x = 0:
$C(a)=\int _{0}^{a}f(t)\,dt-{\frac {1}{2}}f(0)-\sum _{k=1}^{\infty }{\frac {B_{2k}}{(2k)!}}f^{(2k-1)}(0)$
where Ramanujan assumed $a=0.$ By taking $a=\infty $ we normally recover the usual summation for convergent series. For functions f(x) with no divergence at x = 1, we obtain:
$C(a)=\int _{1}^{a}f(t)\,dt+{\frac {1}{2}}f(1)-\sum _{k=1}^{\infty }{\frac {B_{2k}}{(2k)!}}f^{(2k-1)}(1)$
C(0) was then proposed as the sum of the divergent series; it serves as a bridge between summation and integration.
The most common application of Ramanujan summation is to the Riemann zeta function ζ(s): the Ramanujan summation of $\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}$ has the same value as ζ(s) for all values of $s$, even those for which the series diverges, which is equivalent to performing analytic continuation or, alternatively, applying smoothed sums.[2]
The convergent version of summation for functions with appropriate growth condition is then:
$f(1)+f(2)+f(3)+\cdots =-{\frac {f(0)}{2}}+i\int _{0}^{\infty }{\frac {f(it)-f(-it)}{e^{2\pi t}-1}}\,dt$
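For instance (a numeric sketch assuming SciPy for the quadrature), taking f(x) = x in this formula gives f(it) − f(−it) = 2it, so the right-hand side becomes $-2\int _{0}^{\infty }{\frac {t}{e^{2\pi t}-1}}\,dt=-2\cdot {\frac {1}{24}}=-{\frac {1}{12}}$, reproducing the value assigned below to 1 + 2 + 3 + ⋯:
```python
import math
from scipy.integrate import quad

# f(x) = x: the boundary term -f(0)/2 vanishes, and
# i*(f(it) - f(-it))/(e^{2*pi*t} - 1) = -2t/(e^{2*pi*t} - 1).
integrand = lambda t: t / math.expm1(2 * math.pi * t) if t > 0 else 1 / (2 * math.pi)
integral, _ = quad(integrand, 0, math.inf)
print(-2 * integral)  # -0.0833333... = -1/12
```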
Ramanujan summation of divergent series
In the following text, $({\mathfrak {R}})$ indicates "Ramanujan summation". This formula originally appeared in one of Ramanujan's notebooks, without any notation to indicate that it exemplified a novel method of summation.
For example, the $({\mathfrak {R}})$ of 1 − 1 + 1 − ⋯ is:
$1-1+1-\cdots ={\frac {1}{2}}\quad ({\mathfrak {R}}).$
Ramanujan had calculated "sums" of known divergent series. It is important to mention that the Ramanujan sums are not the sums of the series in the usual sense,[3][4] i.e. the partial sums do not converge to this value, which is denoted by the symbol $({\mathfrak {R}}).$ In particular, the $({\mathfrak {R}})$ sum of 1 + 2 + 3 + 4 + ⋯ was calculated as:
$1+2+3+\cdots =-{\frac {1}{12}}\quad ({\mathfrak {R}})$
Extending to positive even powers, this gave:
$1+2^{2k}+3^{2k}+\cdots =0\quad ({\mathfrak {R}})$
and for odd powers the approach suggested a relation with the Bernoulli numbers:
$1+2^{2k-1}+3^{2k-1}+\cdots =-{\frac {B_{2k}}{2k}}\quad ({\mathfrak {R}})$
It has been proposed to use C(1) rather than C(0) as the result of Ramanujan's summation, since then one can guarantee that each series $\textstyle \sum _{k=1}^{\infty }f(k)$ admits one and only one Ramanujan summation, defined as the value at 1 of the unique solution of the difference equation $R(x)-R(x+1)=f(x)$ that satisfies the condition $\textstyle \int _{1}^{2}R(t)\,dt=0$.[5]
This definition of Ramanujan's summation (denoted $\textstyle \sum _{n\geq 1}^{\mathfrak {R}}f(n)$) does not coincide with the earlier defined Ramanujan summation, C(0), nor with the summation of convergent series, but it has interesting properties, such as: if R(x) tends to a finite limit when x → 1, then the series $\textstyle \sum _{n\geq 1}^{\mathfrak {R}}f(n)$ is convergent, and we have
$\sum _{n\geq 1}^{\mathfrak {R}}f(n)=\lim _{N\to \infty }\left[\sum _{n=1}^{N}f(n)-\int _{1}^{N}f(t)\,dt\right]$
In particular we have:
$\sum _{n\geq 1}^{\mathfrak {R}}{\frac {1}{n}}=\gamma $
where γ is the Euler–Mascheroni constant.
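A quick numerical illustration of this last identity via the limit formula above (plain Python; N = 10⁶ is an arbitrary truncation): the partial harmonic sums minus $\int _{1}^{N}dt/t=\log N$ approach γ.
```python
import math

N = 10 ** 6
harmonic = sum(1.0 / n for n in range(1, N + 1))  # partial sums of f(n) = 1/n
print(harmonic - math.log(N))  # 0.5772161..., approaching gamma
print(0.5772156649)            # Euler–Mascheroni constant, for comparison
```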
Extension to integrals
Ramanujan resummation can be extended to integrals; for example, using the Euler–Maclaurin summation formula, one can write
${\begin{aligned}\int _{a}^{\infty }x^{m-s}\,dx&={\frac {m-s}{2}}\int _{a}^{\infty }x^{m-1-s}\,dx+\zeta (s-m)-\sum _{i=1}^{a}\left[i^{m-s}+a^{m-s}\right]\\&\qquad -\sum _{r=1}^{\infty }{\frac {B_{2r}\theta (m-s+1)}{(2r)!\Gamma (m-2r+2-s)}}(m-2r+1-s)\int _{a}^{\infty }x^{m-2r-s}\,dx\end{aligned}}$
which is the natural extension to integrals of the Zeta regularization algorithm.
This recurrence equation is finite, since for $m-2r<-1$,
$\int _{a}^{\infty }dx\,x^{m-2r}=-{\frac {a^{m-2r+1}}{m-2r+1}}.$
Note that this involves (see zeta function regularization)
$I(n,\Lambda )=\int _{0}^{\Lambda }dx\,x^{n}$.
With $\Lambda \to \infty $, the application of this Ramanujan resummation leads to finite results in the renormalization of quantum field theories.
See also
• Borel summation
• Cesàro summation
• Divergent series
• Ramanujan's sum
• Abel–Plana formula
References
1. Bruce C. Berndt, Ramanujan's Notebooks, Part I, Chapter 6: "Ramanujan's Theory of Divergent Series", Springer-Verlag (1985), pp. 133–149.
2. Tao, Terence (10 April 2010). "The Euler–Maclaurin formula, Bernoulli numbers, the zeta function, and real-variable analytic continuation". https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/
3. "The Euler–Maclaurin formula, Bernoulli numbers, the zeta function, and real-variable analytic continuation". Retrieved 20 January 2014.
4. "Infinite series are weird". Retrieved 20 January 2014.
5. Éric Delabaere, Ramanujan's Summation, Algorithms Seminar 2001–2002, F. Chyzak (ed.), INRIA, (2003), pp. 83–88.
Ramanujan theta function
In mathematics, particularly q-analog theory, the Ramanujan theta function generalizes the form of the Jacobi theta functions, while capturing their general properties. In particular, the Jacobi triple product takes on a particularly elegant form when written in terms of the Ramanujan theta. The function is named after mathematician Srinivasa Ramanujan.
Not to be confused with the mock theta functions discovered by Ramanujan.
Definition
The Ramanujan theta function is defined as
$f(a,b)=\sum _{n=-\infty }^{\infty }a^{\frac {n(n+1)}{2}}\;b^{\frac {n(n-1)}{2}}$
for |ab| < 1. The Jacobi triple product identity then takes the form
$f(a,b)=(-a;ab)_{\infty }\;(-b;ab)_{\infty }\;(ab;ab)_{\infty }.$
Here, the expression $(a;q)_{n}$ denotes the q-Pochhammer symbol. Identities that follow from this include
$\varphi (q)=f(q,q)=\sum _{n=-\infty }^{\infty }q^{n^{2}}={\left(-q;q^{2}\right)_{\infty }^{2}\left(q^{2};q^{2}\right)_{\infty }}$
and
$\psi (q)=f\left(q,q^{3}\right)=\sum _{n=0}^{\infty }q^{\frac {n(n+1)}{2}}={\left(q^{2};q^{2}\right)_{\infty }}{(-q;q)_{\infty }}$
and
$f(-q)=f\left(-q,-q^{2}\right)=\sum _{n=-\infty }^{\infty }(-1)^{n}q^{\frac {n(3n-1)}{2}}=(q;q)_{\infty }$
The last is the Euler function, which is closely related to the Dedekind eta function. The Jacobi theta function may be written in terms of the Ramanujan theta function as:
$\vartheta (w,q)=f\left(qw^{2},qw^{-2}\right)$
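A numerical spot-check of the first identity above (plain Python; q = 0.1 and the truncation depths are arbitrary assumptions):
```python
q = 0.1

# Left side: the theta series sum_n q^(n^2), truncated symmetrically.
series = sum(q ** (n * n) for n in range(-50, 51))

def qpochhammer(a, base, terms=200):
    # (a; base)_infinity, truncated to `terms` factors.
    prod = 1.0
    for k in range(terms):
        prod *= 1 - a * base ** k
    return prod

# Right side: (-q; q^2)^2 (q^2; q^2), from the identity for phi(q).
product = qpochhammer(-q, q * q) ** 2 * qpochhammer(q * q, q * q)
print(series, product)  # both 1.2002002...
```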
Integral representations
We have the following integral representation for the full two-parameter form of Ramanujan's theta function:[1]
$f(a,b)=1+\int _{0}^{\infty }{\frac {2ae^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {1-a{\sqrt {ab}}\cosh \left({\sqrt {\log ab}}\,t\right)}{a^{3}b-2a{\sqrt {ab}}\cosh \left({\sqrt {\log ab}}\,t\right)+1}}\right]dt+\int _{0}^{\infty }{\frac {2be^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {1-b{\sqrt {ab}}\cosh \left({\sqrt {\log ab}}\,t\right)}{ab^{3}-2b{\sqrt {ab}}\cosh \left({\sqrt {\log ab}}\,t\right)+1}}\right]dt$
The special cases of Ramanujan's theta functions given by φ(q) := f(q, q) OEIS: A000122 and ψ(q) := f(q, q3) OEIS: A010054 [2] also have the following integral representations:[1]
${\begin{aligned}\varphi (q)&=1+\int _{0}^{\infty }{\frac {e^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {4q\left(1-q^{2}\cosh \left({\sqrt {2\log q}}\,t\right)\right)}{q^{4}-2q^{2}\cosh \left({\sqrt {2\log q}}\,t\right)+1}}\right]dt\\[6pt]\psi (q)&=\int _{0}^{\infty }{\frac {2e^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {1-{\sqrt {q}}\cosh \left({\sqrt {\log q}}\,t\right)}{q-2{\sqrt {q}}\cosh \left({\sqrt {\log q}}\,t\right)+1}}\right]dt\end{aligned}}$
This leads to several special case integrals for constants defined by these functions when q := e−kπ (cf. theta function explicit values). In particular, we have that [1]
${\begin{aligned}\varphi \left(e^{-k\pi }\right)&=1+\int _{0}^{\infty }{\frac {e^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {4e^{k\pi }\left(e^{2k\pi }-\cos \left({\sqrt {2\pi k}}\,t\right)\right)}{e^{4k\pi }-2e^{2k\pi }\cos \left({\sqrt {2\pi k}}\,t\right)+1}}\right]dt\\[6pt]{\frac {\pi ^{\frac {1}{4}}}{\Gamma \left({\frac {3}{4}}\right)}}&=1+\int _{0}^{\infty }{\frac {e^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {4e^{\pi }\left(e^{2\pi }-\cos \left({\sqrt {2\pi }}\,t\right)\right)}{e^{4\pi }-2e^{2\pi }\cos \left({\sqrt {2\pi }}\,t\right)+1}}\right]dt\\[6pt]{\frac {\pi ^{\frac {1}{4}}}{\Gamma \left({\frac {3}{4}}\right)}}\cdot {\frac {\sqrt {2+{\sqrt {2}}}}{2}}&=1+\int _{0}^{\infty }{\frac {e^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {4e^{2\pi }\left(e^{4\pi }-\cos \left(2{\sqrt {\pi }}\,t\right)\right)}{e^{8\pi }-2e^{4\pi }\cos \left(2{\sqrt {\pi }}\,t\right)+1}}\right]dt\\[6pt]{\frac {\pi ^{\frac {1}{4}}}{\Gamma \left({\frac {3}{4}}\right)}}\cdot {\frac {\sqrt {1+{\sqrt {3}}}}{2^{\frac {1}{4}}3^{\frac {3}{8}}}}&=1+\int _{0}^{\infty }{\frac {e^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {4e^{3\pi }\left(e^{6\pi }-\cos \left({\sqrt {6\pi }}\,t\right)\right)}{e^{12\pi }-2e^{6\pi }\cos \left({\sqrt {6\pi }}\,t\right)+1}}\right]dt\\[6pt]{\frac {\pi ^{\frac {1}{4}}}{\Gamma \left({\frac {3}{4}}\right)}}\cdot {\frac {\sqrt {5+2{\sqrt {5}}}}{5^{\frac {3}{4}}}}&=1+\int _{0}^{\infty }{\frac {e^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {4e^{5\pi }\left(e^{10\pi }-\cos \left({\sqrt {10\pi }}\,t\right)\right)}{e^{20\pi }-2e^{10\pi }\cos \left({\sqrt {10\pi }}\,t\right)+1}}\right]dt\end{aligned}}$
and that
${\begin{aligned}\psi \left(e^{-k\pi }\right)&=\int _{0}^{\infty }{\frac {e^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {\cos \left({\sqrt {k\pi }}\,t\right)-e^{\frac {k\pi }{2}}}{\cos \left({\sqrt {k\pi }}\,t\right)-\cosh {\frac {k\pi }{2}}}}\right]dt\\[6pt]{\frac {\pi ^{\frac {1}{4}}}{\Gamma \left({\frac {3}{4}}\right)}}\cdot {\frac {e^{\frac {\pi }{8}}}{2^{\frac {5}{8}}}}&=\int _{0}^{\infty }{\frac {e^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {\cos \left({\sqrt {\pi }}\,t\right)-e^{\frac {\pi }{2}}}{\cos \left({\sqrt {\pi }}\,t\right)-\cosh {\frac {\pi }{2}}}}\right]dt\\[6pt]{\frac {\pi ^{\frac {1}{4}}}{\Gamma \left({\frac {3}{4}}\right)}}\cdot {\frac {e^{\frac {\pi }{4}}}{2^{\frac {5}{4}}}}&=\int _{0}^{\infty }{\frac {e^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {\cos \left({\sqrt {2\pi }}\,t\right)-e^{\pi }}{\cos \left({\sqrt {2\pi }}\,t\right)-\cosh \pi }}\right]dt\\[6pt]{\frac {\pi ^{\frac {1}{4}}}{\Gamma \left({\frac {3}{4}}\right)}}\cdot {\frac {{\sqrt[{4}]{1+{\sqrt {2}}}}\,e^{\frac {\pi }{16}}}{2^{\frac {7}{16}}}}&=\int _{0}^{\infty }{\frac {e^{-{\frac {1}{2}}t^{2}}}{\sqrt {2\pi }}}\left[{\frac {\cos \left({\sqrt {\frac {\pi }{2}}}\,t\right)-e^{\frac {\pi }{4}}}{\cos \left({\sqrt {\frac {\pi }{2}}}\,t\right)-\cosh {\frac {\pi }{4}}}}\right]dt\end{aligned}}$
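The constants on the left of these displays can also be checked independently of the integrals; for example, φ(e^{−π}) = π^{1/4}/Γ(3/4) follows from summing the rapidly convergent theta series directly (plain Python; ±20 terms already exhaust double precision):
```python
import math

q = math.exp(-math.pi)
phi = sum(q ** (n * n) for n in range(-20, 21))  # phi(q) = sum_n q^(n^2)
print(phi)                                       # 1.0864348...
print(math.pi ** 0.25 / math.gamma(0.75))        # pi^(1/4) / Gamma(3/4)
```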
Application in string theory
The Ramanujan theta function is used to determine the critical dimensions in bosonic string theory, superstring theory and M-theory.
References
1. Schmidt, M. D. (2017). "Square series generating function transformations" (PDF). Journal of Inequalities and Special Functions. 8 (2). arXiv:1609.02803.
2. Weisstein, Eric W. "Ramanujan Theta Functions". MathWorld. Retrieved 29 April 2018.
• Bailey, W. N. (1935). Generalized Hypergeometric Series. Cambridge Tracts in Mathematics and Mathematical Physics. Vol. 32. Cambridge: Cambridge University Press.
• Gasper, George; Rahman, Mizan (2004). Basic Hypergeometric Series. Encyclopedia of Mathematics and Its Applications. Vol. 96 (2nd ed.). Cambridge: Cambridge University Press. ISBN 0-521-83357-4.
• "Ramanujan function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Kaku, Michio (1994). Hyperspace: A Scientific Odyssey Through Parallel Universes, Time Warps, and the Tenth Dimension. Oxford: Oxford University Press. ISBN 0-19-286189-1.
• Weisstein, Eric W. "Ramanujan Theta Functions". MathWorld.
Ramanujam vanishing theorem
In algebraic geometry, the Ramanujam vanishing theorem is an extension of the Kodaira vanishing theorem due to Ramanujam (1972), that in particular gives conditions for the vanishing of first cohomology groups of coherent sheaves on a surface. The Kawamata–Viehweg vanishing theorem generalizes it.
See also
• Mumford vanishing theorem
References
• Kawamata, Yujiro (1982), "A generalization of Kodaira-Ramanujam's vanishing theorem", Mathematische Annalen, 261 (1): 43–46, doi:10.1007/BF01456407, ISSN 0025-5831, MR 0675204, S2CID 120101105
• Ramanujam, C. P. (1972), "Remarks on the Kodaira vanishing theorem", J. Indian Math. Soc., New Series, 36: 41–51, MR 0330164
Ramanujan–Sato series
In mathematics, a Ramanujan–Sato series[1][2] generalizes Ramanujan’s pi formulas such as,
${\frac {1}{\pi }}={\frac {2{\sqrt {2}}}{99^{2}}}\sum _{k=0}^{\infty }{\frac {(4k)!}{k!^{4}}}{\frac {26390k+1103}{396^{4k}}}$
to the form
${\frac {1}{\pi }}=\sum _{k=0}^{\infty }s(k){\frac {Ak+B}{C^{k}}}$
by using other well-defined sequences of integers $s(k)$ obeying a certain recurrence relation, sequences which may be expressed in terms of binomial coefficients ${\tbinom {n}{k}}$, and $A,B,C$ employing modular forms of higher levels.
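As a numerical sanity check of the opening series (a sketch assuming the mpmath library; truncation at four terms is arbitrary, each term contributing roughly eight digits):
```python
from math import factorial
from mpmath import mp, mpf, sqrt, pi

mp.dps = 30
s = mpf(0)
for k in range(4):
    s += (mpf(factorial(4 * k)) / factorial(k) ** 4
          * (26390 * k + 1103) / mpf(396) ** (4 * k))
print(2 * sqrt(2) / 99 ** 2 * s)  # 0.318309886183790671537767526745...
print(1 / pi)                     # agrees to working precision
```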
Ramanujan made the enigmatic remark that there were "corresponding theories", but it was only recently that H. H. Chan and S. Cooper found a general approach that used the underlying modular congruence subgroup $\Gamma _{0}(n)$,[3] while G. Almkvist has experimentally found numerous other examples also with a general method using differential operators.[4]
Levels 1–4A were given by Ramanujan (1914),[5] level 5 by H. H. Chan and S. Cooper (2012),[3] 6A by Chan, Tanigawa, Yang, and Zudilin,[6] 6B by Sato (2002),[7] 6C by H. Chan, S. Chan, and Z. Liu (2004),[1] 6D by H. Chan and H. Verrill (2009),[8] level 7 by S. Cooper (2012),[9] part of level 8 by Almkvist and Guillera (2012),[2] part of level 10 by Y. Yang, and the rest by H. H. Chan and S. Cooper.
The notation jn(τ) is derived from Zagier[10] and Tn refers to the relevant McKay–Thompson series.
Level 1
Examples for levels 1–4 were given by Ramanujan in his 1914 paper.[5] Throughout this article, $q=e^{2\pi i\tau }$. Let,
${\begin{aligned}j(\tau )&=\left({\frac {E_{4}(\tau )}{\eta ^{8}(\tau )}}\right)^{3}={\frac {1}{q}}+744+196884q+21493760q^{2}+\cdots \\j^{*}(\tau )&=432\,{\frac {{\sqrt {j(\tau )}}+{\sqrt {j(\tau )-1728}}}{{\sqrt {j(\tau )}}-{\sqrt {j(\tau )-1728}}}}={\frac {1}{q}}-120+10260q-901120q^{2}+\cdots \end{aligned}}$
with the j-function j(τ), Eisenstein series E4, and Dedekind eta function η(τ). The first expansion is the McKay–Thompson series of class 1A (OEIS: A007240) with a(0) = 744. Note that, as first noticed by J. McKay, the coefficient of the linear term of j(τ) almost equals 196883, which is the degree of the smallest nontrivial irreducible representation of the Monster group. Similar phenomena will be observed in the other levels. Define
$s_{1A}(k)={\binom {2k}{k}}{\binom {3k}{k}}{\binom {6k}{3k}}=1,120,83160,81681600,\ldots $ (OEIS: A001421)
$s_{1B}(k)=\sum _{j=0}^{k}{\binom {2j}{j}}{\binom {3j}{j}}{\binom {6j}{3j}}{\binom {k+j}{k-j}}(-432)^{k-j}=1,-312,114264,-44196288,\ldots $
Then the two modular functions and sequences are related by
$\sum _{k=0}^{\infty }s_{1A}(k)\,{\frac {1}{(j(\tau ))^{k+{\frac {1}{2}}}}}=\pm \sum _{k=0}^{\infty }s_{1B}(k)\,{\frac {1}{(j^{*}(\tau ))^{k+{\frac {1}{2}}}}}$
if the series converges and the sign chosen appropriately, though squaring both sides easily removes the ambiguity. Analogous relationships exist for the higher levels.
Examples:
${\frac {1}{\pi }}=12\,{\boldsymbol {i}}\,\sum _{k=0}^{\infty }s_{1A}(k)\,{\frac {163\cdot 3344418k+13591409}{\left(-640320^{3}\right)^{k+{\frac {1}{2}}}}},\quad j\left({\frac {1+{\sqrt {-163}}}{2}}\right)=-640320^{3}=-262537412640768000$
${\frac {1}{\pi }}=24\,{\boldsymbol {i}}\,\sum _{k=0}^{\infty }s_{1B}(k)\,{\frac {-3669+320{\sqrt {645}}\,\left(k+{\frac {1}{2}}\right)}{\left({-432}\,U_{645}^{3}\right)^{k+{\frac {1}{2}}}}},\quad j^{*}\left({\frac {1+{\sqrt {-43}}}{2}}\right)=-432\,U_{645}^{3}=-432\left({\frac {127+5{\sqrt {645}}}{2}}\right)^{3}$
where $645=43\times 15,$ and $U_{n}$ is a fundamental unit. The first belongs to a family of formulas which were rigorously proven by the Chudnovsky brothers in 1989[11] and later used to calculate 10 trillion digits of π in 2011.[12] The second formula, and the ones for higher levels, were established by H. H. Chan and S. Cooper in 2012.[3]
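Unwinding the factor $\boldsymbol{i}$ and the negative base, the first example above is the real series used by the Chudnovsky algorithm, ${\frac {1}{\pi }}=12\sum _{k=0}^{\infty }(-1)^{k}{\frac {(6k)!}{(3k)!\,k!^{3}}}\,{\frac {13591409+545140134k}{640320^{3k+{\frac {3}{2}}}}}$, with $s_{1A}(k)=(6k)!/((3k)!\,k!^{3})$ and $545140134=163\cdot 3344418$. A short verification sketch (assuming mpmath; four terms, roughly 14 digits each):
```python
from math import factorial
from mpmath import mp, mpf, pi

mp.dps = 50
s = mpf(0)
for k in range(4):
    s += (mpf((-1) ** k) * factorial(6 * k)
          / (factorial(3 * k) * factorial(k) ** 3)
          * (13591409 + 545140134 * k) / mpf(640320) ** (3 * k))
inv_pi = 12 * s / mpf(640320) ** mpf(1.5)
print(1 / inv_pi)  # matches pi to working precision
print(+pi)
```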
Level 2
Using Zagier's notation[10] for the modular function of level 2,
${\begin{aligned}j_{2A}(\tau )&=\left(\left({\frac {\eta (\tau )}{\eta (2\tau )}}\right)^{12}+2^{6}\left({\frac {\eta (2\tau )}{\eta (\tau )}}\right)^{12}\right)^{2}={\frac {1}{q}}+104+4372q+96256q^{2}+1240002q^{3}+\cdots \\j_{2B}(\tau )&=\left({\frac {\eta (\tau )}{\eta (2\tau )}}\right)^{24}={\frac {1}{q}}-24+276q-2048q^{2}+11202q^{3}-\cdots \end{aligned}}$
Note that the coefficient of the linear term of j2A(τ) is one more than 4371, which is the smallest degree greater than 1 of the irreducible representations of the Baby Monster group. Define,
$s_{2A}(k)={\binom {2k}{k}}{\binom {2k}{k}}{\binom {4k}{2k}}=1,24,2520,369600,63063000,\ldots $ (OEIS: A008977)
$s_{2B}(k)=\sum _{j=0}^{k}{\binom {2j}{j}}{\binom {2j}{j}}{\binom {4j}{2j}}{\binom {k+j}{k-j}}(-64)^{k-j}=1,-40,2008,-109120,6173656,\ldots $
Then,
$\sum _{k=0}^{\infty }s_{2A}(k)\,{\frac {1}{(j_{2A}(\tau ))^{k+{\frac {1}{2}}}}}=\pm \sum _{k=0}^{\infty }s_{2B}(k)\,{\frac {1}{(j_{2B}(\tau ))^{k+{\frac {1}{2}}}}}$
if the series converges and the sign chosen appropriately.
Examples:
${\frac {1}{\pi }}=32{\sqrt {2}}\,\sum _{k=0}^{\infty }s_{2A}(k)\,{\frac {58\cdot 455k+1103}{\left(396^{4}\right)^{k+{\frac {1}{2}}}}},\quad j_{2A}\left({\frac {\sqrt {-58}}{2}}\right)=396^{4}=24591257856$
${\frac {1}{\pi }}=16{\sqrt {2}}\,\sum _{k=0}^{\infty }s_{2B}(k)\,{\frac {-24184+9801{\sqrt {29}}\,\left(k+{\frac {1}{2}}\right)}{\left(64\,U_{29}^{12}\right)^{k+{\frac {1}{2}}}}},\quad j_{2B}\left({\frac {\sqrt {-58}}{2}}\right)=64\left({\frac {5+{\sqrt {29}}}{2}}\right)^{12}=64\,U_{29}^{12}$
The first formula, found by Ramanujan and mentioned at the start of the article, belongs to a family proven by D. Bailey and the Borwein brothers in a 1989 paper.[13]
Level 3
Define,
${\begin{aligned}j_{3A}(\tau )&=\left(\left({\frac {\eta (\tau )}{\eta (3\tau )}}\right)^{6}+3^{3}\left({\frac {\eta (3\tau )}{\eta (\tau )}}\right)^{6}\right)^{2}={\frac {1}{q}}+42+783q+8672q^{2}+65367q^{3}+\cdots \\j_{3B}(\tau )&=\left({\frac {\eta (\tau )}{\eta (3\tau )}}\right)^{12}={\frac {1}{q}}-12+54q-76q^{2}-243q^{3}+1188q^{4}+\cdots \\\end{aligned}}$
where 782 is the smallest degree greater than 1 of the irreducible representations of the Fischer group Fi23 and,
$s_{3A}(k)={\binom {2k}{k}}{\binom {2k}{k}}{\binom {3k}{k}}=1,12,540,33600,2425500,\ldots $ (OEIS: A184423)
$s_{3B}(k)=\sum _{j=0}^{k}{\binom {2j}{j}}{\binom {2j}{j}}{\binom {3j}{j}}{\binom {k+j}{k-j}}(-27)^{k-j}=1,-15,297,-6495,149481,\ldots $
Examples:
${\frac {1}{\pi }}=2\,{\boldsymbol {i}}\,\sum _{k=0}^{\infty }s_{3A}(k)\,{\frac {267\cdot 53k+827}{\left(-300^{3}\right)^{k+{\frac {1}{2}}}}},\quad j_{3A}\left({\frac {3+{\sqrt {-267}}}{6}}\right)=-300^{3}=-27000000$
${\frac {1}{\pi }}={\boldsymbol {i}}\,\sum _{k=0}^{\infty }s_{3B}(k)\,{\frac {12497-3000{\sqrt {89}}\,\left(k+{\frac {1}{2}}\right)}{\left(-27\,U_{89}^{2}\right)^{k+{\frac {1}{2}}}}},\quad j_{3B}\left({\frac {3+{\sqrt {-267}}}{6}}\right)=-27\,\left(500+53{\sqrt {89}}\right)^{2}=-27\,U_{89}^{2}$
Level 4
Define,
${\begin{aligned}j_{4A}(\tau )&=\left(\left({\frac {\eta (\tau )}{\eta (4\tau )}}\right)^{4}+4^{2}\left({\frac {\eta (4\tau )}{\eta (\tau )}}\right)^{4}\right)^{2}=\left({\frac {\eta ^{2}(2\tau )}{\eta (\tau )\,\eta (4\tau )}}\right)^{24}=-\left({\frac {\eta \left({\frac {2\tau +3}{2}}\right)}{\eta (2\tau +3)}}\right)^{24}={\frac {1}{q}}+24+276q+2048q^{2}+11202q^{3}+\cdots \\j_{4C}(\tau )&=\left({\frac {\eta (\tau )}{\eta (4\tau )}}\right)^{8}={\frac {1}{q}}-8+20q-62q^{3}+216q^{5}-641q^{7}+\ldots \\\end{aligned}}$
where the first is the 24th power of the Weber modular function ${\mathfrak {f}}(2\tau )$. And,
$s_{4A}(k)={\binom {2k}{k}}^{3}=1,8,216,8000,343000,\ldots $ (OEIS: A002897)
$s_{4C}(k)=\sum _{j=0}^{k}{\binom {2j}{j}}^{3}{\binom {k+j}{k-j}}(-16)^{k-j}=(-1)^{k}\sum _{j=0}^{k}{\binom {2j}{j}}^{2}{\binom {2k-2j}{k-j}}^{2}=1,-8,88,-1088,14296,\ldots $ (OEIS: A036917)
Examples:
${\frac {1}{\pi }}=8\,{\boldsymbol {i}}\,\sum _{k=0}^{\infty }s_{4A}(k)\,{\frac {6k+1}{\left(-2^{9}\right)^{k+{\frac {1}{2}}}}},\quad j_{4A}\left({\frac {1+{\sqrt {-4}}}{2}}\right)=-2^{9}=-512$
${\frac {1}{\pi }}=16\,{\boldsymbol {i}}\,\sum _{k=0}^{\infty }s_{4C}(k)\,{\frac {1-2{\sqrt {2}}\,\left(k+{\frac {1}{2}}\right)}{\left(-16\,U_{2}^{4}\right)^{k+{\frac {1}{2}}}}},\quad j_{4C}\left({\frac {1+{\sqrt {-4}}}{2}}\right)=-16\,\left(1+{\sqrt {2}}\right)^{4}=-16\,U_{2}^{4}$
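The first of these can be checked over the reals: since $(-2^{9})^{k+{\frac {1}{2}}}=\boldsymbol{i}\,(-1)^{k}\,512^{k+{\frac {1}{2}}}$, it is equivalent to $\sum (-1)^{k}{\tbinom {2k}{k}}^{3}(6k+1)/512^{k}=2{\sqrt {2}}/\pi $ (plain Python; 25 terms is an arbitrary truncation):
```python
from math import comb, sqrt, pi

s = sum((-1) ** k * comb(2 * k, k) ** 3 * (6 * k + 1) / 512 ** k
        for k in range(25))
print(s, 2 * sqrt(2) / pi)  # both 0.9003163161...
```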
Level 5
Define,
${\begin{aligned}j_{5A}(\tau )&=\left({\frac {\eta (\tau )}{\eta (5\tau )}}\right)^{6}+5^{3}\left({\frac {\eta (5\tau )}{\eta (\tau )}}\right)^{6}+22={\frac {1}{q}}+16+134q+760q^{2}+3345q^{3}+\cdots \\j_{5B}(\tau )&=\left({\frac {\eta (\tau )}{\eta (5\tau )}}\right)^{6}={\frac {1}{q}}-6+9q+10q^{2}-30q^{3}+6q^{4}+\cdots \end{aligned}}$
and,
$s_{5A}(k)={\binom {2k}{k}}\sum _{j=0}^{k}{\binom {k}{j}}^{2}{\binom {k+j}{j}}=1,6,114,2940,87570,\ldots $
$s_{5B}(k)=\sum _{j=0}^{k}(-1)^{j+k}{\binom {k}{j}}^{3}{\binom {4k-5j}{3k}}=1,-5,35,-275,2275,-19255,\ldots $ (OEIS: A229111)
where the first is the product of the central binomial coefficients and the Apéry numbers (OEIS: A005258).[9]
Examples:
${\frac {1}{\pi }}={\frac {5}{9}}\,{\boldsymbol {i}}\,\sum _{k=0}^{\infty }s_{5A}(k)\,{\frac {682k+71}{(-15228)^{k+{\frac {1}{2}}}}},\quad j_{5A}\left({\frac {5+{\sqrt {-5(47)}}}{10}}\right)=-15228=-(18{\sqrt {47}})^{2}$
${\frac {1}{\pi }}={\frac {6}{\sqrt {5}}}\,{\boldsymbol {i}}\,\sum _{k=0}^{\infty }s_{5B}(k)\,{\frac {25{\sqrt {5}}-141\left(k+{\frac {1}{2}}\right)}{\left(-5{\sqrt {5}}\,U_{5}^{15}\right)^{k+{\frac {1}{2}}}}},\quad j_{5B}\left({\frac {5+{\sqrt {-5(47)}}}{10}}\right)=-5{\sqrt {5}}\,\left({\frac {1+{\sqrt {5}}}{2}}\right)^{15}=-5{\sqrt {5}}\,U_{5}^{15}$
Level 6
Modular functions
In 2002, Sato[7] established the first results for levels above 4. It involved Apéry numbers which were first used to establish the irrationality of $\zeta (3)$. First, define,
${\begin{aligned}j_{6A}(\tau )&=\left({\sqrt {j_{6B}(\tau )}}-{\frac {1}{\sqrt {j_{6B}(\tau )}}}\right)^{2}=\left({\sqrt {j_{6C}(\tau )}}+{\frac {8}{\sqrt {j_{6C}(\tau )}}}\right)^{2}=\left({\sqrt {j_{6D}(\tau )}}+{\frac {9}{\sqrt {j_{6D}(\tau )}}}\right)^{2}-4={\frac {1}{q}}+10+79q+352q^{2}+\cdots \end{aligned}}$
${\begin{aligned}j_{6B}(\tau )&=\left({\frac {\eta (2\tau )\eta (3\tau )}{\eta (\tau )\eta (6\tau )}}\right)^{12}={\frac {1}{q}}+12+78q+364q^{2}+1365q^{3}+\cdots \end{aligned}}$
${\begin{aligned}j_{6C}(\tau )&=\left({\frac {\eta (\tau )\eta (3\tau )}{\eta (2\tau )\eta (6\tau )}}\right)^{6}={\frac {1}{q}}-6+15q-32q^{2}+87q^{3}-192q^{4}+\cdots \end{aligned}}$
${\begin{aligned}j_{6D}(\tau )&=\left({\frac {\eta (\tau )\eta (2\tau )}{\eta (3\tau )\eta (6\tau )}}\right)^{4}={\frac {1}{q}}-4-2q+28q^{2}-27q^{3}-52q^{4}+\cdots \end{aligned}}$
${\begin{aligned}j_{6E}(\tau )&=\left({\frac {\eta (2\tau )\eta ^{3}(3\tau )}{\eta (\tau )\eta ^{3}(6\tau )}}\right)^{3}={\frac {1}{q}}+3+6q+4q^{2}-3q^{3}-12q^{4}+\cdots \end{aligned}}$
The phenomenon of $j_{6A}$ being a square or near-square of the other functions will also be manifested by $j_{10A}$. Another similarity between levels 6 and 10 is that J. Conway and S. Norton showed there are linear relations between the McKay–Thompson series Tn,[14] one of which was,
$T_{6A}-T_{6B}-T_{6C}-T_{6D}+2T_{6E}=0$
or using the above eta quotients jn,
$j_{6A}-j_{6B}-j_{6C}-j_{6D}+2j_{6E}=22$
A similar relation exists for level 10.
α Sequences
The modular function j6A can be associated with three different sequences. (A similar situation happens for the level 10 function j10A.) Let,
$\alpha _{1}(k)={\binom {2k}{k}}\sum _{j=0}^{k}{\binom {k}{j}}^{3}=1,4,60,1120,24220,\ldots $ (OEIS: A181418, labeled as s6 in Cooper's paper)
$\alpha _{2}(k)={\binom {2k}{k}}\sum _{j=0}^{k}{\binom {k}{j}}\sum _{m=0}^{j}{\binom {j}{m}}^{3}={\binom {2k}{k}}\sum _{j=0}^{k}{\binom {k}{j}}^{2}{\binom {2j}{j}}=1,6,90,1860,44730,\ldots $ (OEIS: A002896)
$\alpha _{3}(k)={\binom {2k}{k}}\sum _{j=0}^{k}{\binom {k}{j}}(-8)^{k-j}\sum _{m=0}^{j}{\binom {j}{m}}^{3}=1,-12,252,-6240,167580,-4726512,\ldots $
The three sequences involve the product of the central binomial coefficients $c(k)={\tbinom {2k}{k}}$ with: first, the Franel numbers $\textstyle \sum _{j=0}^{k}{\tbinom {k}{j}}^{3}$; second, OEIS: A002893; and third, $(-1)^{k}$ OEIS: A093388. Note that the second sequence, α2(k), is also the number of 2n-step polygons on a cubic lattice. Their complements,
$\alpha '_{2}(k)={\binom {2k}{k}}\sum _{j=0}^{k}{\binom {k}{j}}(-1)^{k-j}\sum _{m=0}^{j}{\binom {j}{m}}^{3}=1,2,42,620,12250,\ldots $
$\alpha '_{3}(k)={\binom {2k}{k}}\sum _{j=0}^{k}{\binom {k}{j}}(8)^{k-j}\sum _{m=0}^{j}{\binom {j}{m}}^{3}=1,20,636,23840,991900,\ldots $
There are also associated sequences, namely the Apéry numbers,
$s_{6B}(k)=\sum _{j=0}^{k}{\binom {k}{j}}^{2}{\binom {k+j}{j}}^{2}=1,5,73,1445,33001,\ldots $ (OEIS: A005259)
the Domb numbers (unsigned) or the number of 2n-step polygons on a diamond lattice,
$s_{6C}(k)=(-1)^{k}\sum _{j=0}^{k}{\binom {k}{j}}^{2}{\binom {2(k-j)}{k-j}}{\binom {2j}{j}}=1,-4,28,-256,2716,\ldots $ (OEIS: A002895)
and the Almkvist-Zudilin numbers,
$s_{6D}(k)=\sum _{j=0}^{k}(-1)^{k-j}\,3^{k-3j}\,{\frac {(3j)!}{j!^{3}}}{\binom {k}{3j}}{\binom {k+j}{j}}=1,-3,9,-3,-279,2997,\ldots $ (OEIS: A125143)
where
${\frac {(3j)!}{j!^{3}}}={\binom {2j}{j}}{\binom {3j}{j}}$
Identities
The modular functions can be related as,
$P=\sum _{k=0}^{\infty }\alpha _{1}(k)\,{\frac {1}{\left(j_{6A}(\tau )\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }\alpha _{2}(k)\,{\frac {1}{\left(j_{6A}(\tau )+4\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }\alpha _{3}(k)\,{\frac {1}{\left(j_{6A}(\tau )-32\right)^{k+{\frac {1}{2}}}}}$
$Q=\sum _{k=0}^{\infty }s_{6B}(k)\,{\frac {1}{\left(j_{6B}(\tau )\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }s_{6C}(k)\,{\frac {1}{\left(j_{6C}(\tau )\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }s_{6D}(k)\,{\frac {1}{\left(j_{6D}(\tau )\right)^{k+{\frac {1}{2}}}}}$
if the series converges and the sign chosen appropriately. It can also be observed that,
$P=Q=\sum _{k=0}^{\infty }\alpha '_{2}(k)\,{\frac {1}{\left(j_{6A}(\tau )-4\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }\alpha '_{3}(k)\,{\frac {1}{\left(j_{6A}(\tau )+32\right)^{k+{\frac {1}{2}}}}}$
which implies,
$\sum _{k=0}^{\infty }\alpha _{2}(k)\,{\frac {1}{\left(j_{6A}(\tau )+4\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }\alpha '_{2}(k)\,{\frac {1}{\left(j_{6A}(\tau )-4\right)^{k+{\frac {1}{2}}}}}$
and similarly using α3 and α'3.
Examples
One can use a value for j6A in three ways. For example, starting with,
$\Delta =j_{6A}\left({\sqrt {\frac {-17}{6}}}\right)=198^{2}-4=\left(140{\sqrt {2}}\right)^{2}=39200$
and noting that $3\cdot 17=51$ then,
${\begin{aligned}{\frac {1}{\pi }}&={\frac {24{\sqrt {3}}}{35}}\,\sum _{k=0}^{\infty }\alpha _{1}(k)\,{\frac {51\cdot 11k+53}{(\Delta )^{k+{\frac {1}{2}}}}}\\{\frac {1}{\pi }}&={\frac {4{\sqrt {3}}}{99}}\,\sum _{k=0}^{\infty }\alpha _{2}(k)\,{\frac {17\cdot 560k+899}{(\Delta +4)^{k+{\frac {1}{2}}}}}\\{\frac {1}{\pi }}&={\frac {\sqrt {3}}{2}}\,\sum _{k=0}^{\infty }\alpha _{3}(k)\,{\frac {770k+73}{(\Delta -32)^{k+{\frac {1}{2}}}}}\\\end{aligned}}$
as well as,
${\begin{aligned}{\frac {1}{\pi }}&={\frac {12{\sqrt {3}}}{9799}}\,\sum _{k=0}^{\infty }\alpha '_{2}(k)\,{\frac {11\cdot 51\cdot 560k+29693}{(\Delta -4)^{k+{\frac {1}{2}}}}}\\{\frac {1}{\pi }}&={\frac {6{\sqrt {3}}}{613}}\,\sum _{k=0}^{\infty }\alpha '_{3}(k)\,{\frac {51\cdot 770k+3697}{(\Delta +32)^{k+{\frac {1}{2}}}}}\\\end{aligned}}$
though the formulas using the complements apparently do not yet have a rigorous proof. For the other modular functions,
${\frac {1}{\pi }}=8{\sqrt {15}}\,\sum _{k=0}^{\infty }s_{6B}(k)\,\left({\frac {1}{2}}-{\frac {3{\sqrt {5}}}{20}}+k\right)\left({\frac {1}{\phi ^{12}}}\right)^{k+{\frac {1}{2}}},\quad j_{6B}\left({\sqrt {\frac {-5}{6}}}\right)=\left({\frac {1+{\sqrt {5}}}{2}}\right)^{12}=\phi ^{12}$
${\frac {1}{\pi }}={\frac {1}{2}}\,\sum _{k=0}^{\infty }s_{6C}(k)\,{\frac {3k+1}{32^{k}}},\quad j_{6C}\left({\sqrt {\frac {-1}{3}}}\right)=32$
${\frac {1}{\pi }}=2{\sqrt {3}}\,\sum _{k=0}^{\infty }s_{6D}(k)\,{\frac {4k+1}{81^{k+{\frac {1}{2}}}}},\quad j_{6D}\left({\sqrt {\frac {-1}{2}}}\right)=81$
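The s6C example is easy to test, since the signed Domb numbers have the explicit double-binomial form given earlier (plain Python; 60 terms is an arbitrary truncation, the terms decaying roughly like 2^(−k)):
```python
from math import comb, pi

def s6C(k):
    # Signed Domb numbers: (-1)^k sum_j C(k,j)^2 C(2(k-j),k-j) C(2j,j).
    return (-1) ** k * sum(comb(k, j) ** 2 * comb(2 * (k - j), k - j)
                           * comb(2 * j, j) for j in range(k + 1))

s = sum(s6C(k) * (3 * k + 1) / 32 ** k for k in range(60))
print(s / 2, 1 / pi)  # both 0.3183098861...
```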
Level 7
Define
$s_{7A}(k)=\sum _{j=0}^{k}{\binom {k}{j}}^{2}{\binom {2j}{k}}{\binom {k+j}{j}}=1,4,48,760,13840,\ldots $ (OEIS: A183204)
and,
${\begin{aligned}j_{7A}(\tau )&=\left(\left({\frac {\eta (\tau )}{\eta (7\tau )}}\right)^{2}+7\left({\frac {\eta (7\tau )}{\eta (\tau )}}\right)^{2}\right)^{2}={\frac {1}{q}}+10+51q+204q^{2}+681q^{3}+\cdots \\j_{7B}(\tau )&=\left({\frac {\eta (\tau )}{\eta (7\tau )}}\right)^{4}={\frac {1}{q}}-4+2q+8q^{2}-5q^{3}-4q^{4}-10q^{5}+\cdots \end{aligned}}$
Example:
${\frac {1}{\pi }}={\frac {\sqrt {7}}{22^{3}}}\,\sum _{k=0}^{\infty }s_{7A}(k)\,{\frac {11895k+1286}{\left(-22^{3}\right)^{k}}},\quad j_{7A}\left({\frac {7+{\sqrt {-427}}}{14}}\right)=-22^{3}+1=-\left(39{\sqrt {7}}\right)^{2}=-10647$
No pi formula has yet been found using j7B.
Level 8
Modular functions
Levels $2,4,8$ are related since they are just powers of the same prime. Define,
${\begin{aligned}j_{4B}(\tau )&={\sqrt {j_{2A}(2\tau )}}=\left({\sqrt {j_{4D}(\tau )}}+{\frac {8}{\sqrt {j_{4D}(\tau )}}}\right)^{2}-16=\left({\sqrt {j_{8A}(\tau )}}-{\frac {4}{\sqrt {j_{8A}(\tau )}}}\right)^{2}=\left({\sqrt {j_{8A'}(\tau )}}+{\frac {4}{\sqrt {j_{8A'}(\tau )}}}\right)^{2}\\&=\left({\frac {\eta (2\tau )}{\eta (4\tau )}}\right)^{12}+2^{6}\left({\frac {\eta (4\tau )}{\eta (2\tau )}}\right)^{12}={\frac {1}{q}}+52q+834q^{3}+4760q^{5}+24703q^{7}+\cdots \\j_{4D}(\tau )&=\left({\frac {\eta (2\tau )}{\eta (4\tau )}}\right)^{12}={\frac {1}{q}}-12q+66q^{3}-232q^{5}+639q^{7}-1596q^{9}+\cdots \\j_{8A}(\tau )&=\left({\frac {\eta (2\tau )\,\eta (4\tau )}{\eta (\tau )\,\eta (8\tau )}}\right)^{8}={\frac {1}{q}}+8+36q+128q^{2}+386q^{3}+1024q^{4}+\cdots \\j_{8A'}(\tau )&=\left({\frac {\eta (\tau )\,\eta ^{2}(4\tau )}{\eta ^{2}(2\tau )\,\eta (8\tau )}}\right)^{8}={\frac {1}{q}}-8+36q-128q^{2}+386q^{3}-1024q^{4}+\cdots \\j_{8B}(\tau )&=\left({\frac {\eta ^{2}(4\tau )}{\eta (2\tau )\,\eta (8\tau )}}\right)^{12}={\sqrt {j_{4A}(2\tau )}}={\frac {1}{q}}+12q+66q^{3}+232q^{5}+639q^{7}+\cdots \\j_{8E}(\tau )&=\left({\frac {\eta ^{3}(4\tau )}{\eta (2\tau )\,\eta ^{2}(8\tau )}}\right)^{4}={\frac {1}{q}}+4q+2q^{3}-8q^{5}-q^{7}+20q^{9}-2q^{11}-40q^{13}+\cdots \end{aligned}}$
Just like for level 6, five of these functions have a linear relationship,
$j_{4B}-j_{4D}-j_{8A}-j_{8A'}+2j_{8E}=0$
But this is not one of the nine Conway-Norton-Atkin linear dependencies since $j_{8A'}$ is not a moonshine function. However, it is related to one as,
$j_{8A'}(\tau )=-j_{8A}{\Big (}\tau +{\tfrac {1}{2}}{\Big )}$
Sequences
$s_{4B}(k)={\binom {2k}{k}}\sum _{j=0}^{k}4^{k-2j}{\binom {k}{2j}}{\binom {2j}{j}}^{2}={\binom {2k}{k}}\sum _{j=0}^{k}{\binom {k}{j}}{\binom {2k-2j}{k-j}}{\binom {2j}{j}}=1,8,120,2240,47320,\ldots $
$s_{4D}(k)={\binom {2k}{k}}^{3}=1,8,216,8000,343000,\ldots $
$s_{8A}(k)=\sum _{j=0}^{k}{\binom {k}{j}}^{2}{\binom {2j}{k}}^{2}=1,4,40,544,8536,\ldots $ (OEIS: A290575)
$s_{8B}(k)=\sum _{j=0}^{k}{\binom {2j}{j}}^{3}{\binom {2k-4j}{k-2j}}=1,2,14,36,334,\ldots $
where the first is the product[2] of the central binomial coefficient and a sequence related to an arithmetic-geometric mean (OEIS: A081085).
Identities
The modular functions can be related as,
$\pm \sum _{k=0}^{\infty }s_{4B}(k)\,{\frac {1}{\left(j_{4B}(\tau )+16\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }s_{4D}(k)\,{\frac {1}{\left(j_{4D}(\tau )\right)^{2k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }s_{8A}(k)\,{\frac {1}{\left(j_{8A}(\tau )\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }(-1)^{k}s_{8A}(k)\,{\frac {1}{\left(j_{8A'}(\tau )\right)^{k+{\frac {1}{2}}}}}$
if the series converges and signs chosen appropriately. Note also the different exponent of $\left(j_{4D}(\tau )\right)^{2k+{\frac {1}{2}}}$ from the others.
Examples
Recall that $j_{2A}\left({\tfrac {\sqrt {-58}}{2}}\right)=396^{4},$ while $j_{4B}\left({\tfrac {\sqrt {-58}}{4}}\right)=396^{2}$. Hence,
${\frac {1}{\pi }}={\frac {2{\sqrt {2}}}{13}}\,\sum _{k=0}^{\infty }s_{4B}(k)\,{\frac {70\cdot 99\,k+579}{\left(396^{2}+16\right)^{k+{\frac {1}{2}}}}},\qquad j_{4B}\left({\frac {\sqrt {-58}}{4}}\right)=396^{2}$
${\frac {1}{\pi }}=2{\sqrt {2}}\,\sum _{k=0}^{\infty }s_{8A}(k)\,{\frac {-222+70{\sqrt {58}}\,\left(k+{\frac {1}{2}}\right)}{\left(4\left(99+13{\sqrt {58}}\right)^{2}\right)^{k+{\frac {1}{2}}}}},\qquad j_{8A}\left({\frac {\sqrt {-58}}{4}}\right)=4\left(99+13{\sqrt {58}}\right)^{2}=4U_{58}^{2}$
${\frac {1}{\pi }}=2\,\sum _{k=0}^{\infty }(-1)^{k}s_{8A}(k)\,{\frac {-222{\sqrt {2}}+13\times 58\,\left(k+{\frac {1}{2}}\right)}{\left(4\left(1+{\sqrt {2}}\right)^{12}\right)^{k+{\frac {1}{2}}}}},\qquad j_{8A'}\left({\frac {\sqrt {-58}}{4}}\right)=4\left(1+{\sqrt {2}}\right)^{12}=4U_{2}^{12},$
For another level 8 example,
${\frac {1}{\pi }}={\frac {1}{16}}{\sqrt {\frac {3}{5}}}\,\sum _{k=0}^{\infty }s_{8B}(k)\,{\frac {210k+43}{(64)^{k+{\frac {1}{2}}}}},\qquad j_{8B}\left({\frac {\sqrt {-7}}{4}}\right)=2^{6}=64$
Level 9
Define,
${\begin{aligned}j_{3C}(\tau )&=\left(j(3\tau )\right)^{\frac {1}{3}}=-6+\left({\frac {\eta ^{2}(3\tau )}{\eta (\tau )\,\eta (9\tau )}}\right)^{6}-27\left({\frac {\eta (\tau )\,\eta (9\tau )}{\eta ^{2}(3\tau )}}\right)^{6}={\frac {1}{q}}+248q^{2}+4124q^{5}+34752q^{8}+\cdots \\j_{9A}(\tau )&=\left({\frac {\eta ^{2}(3\tau )}{\eta (\tau )\,\eta (9\tau )}}\right)^{6}={\frac {1}{q}}+6+27q+86q^{2}+243q^{3}+594q^{4}+\cdots \\\end{aligned}}$
The expansion of the first is the McKay–Thompson series of class 3C (and related to the cube root of the j-function), while the second is that of class 9A. Let,
$s_{3C}(k)={\binom {2k}{k}}\sum _{j=0}^{k}(-3)^{k-3j}{\binom {k}{j}}{\binom {k-j}{j}}{\binom {k-2j}{j}}={\binom {2k}{k}}\sum _{j=0}^{k}(-3)^{k-3j}{\binom {k}{3j}}{\binom {2j}{j}}{\binom {3j}{j}}=1,-6,54,-420,630,\ldots $
$s_{9A}(k)=\sum _{j=0}^{k}{\binom {k}{j}}^{2}\sum _{m=0}^{j}{\binom {k}{m}}{\binom {j}{m}}{\binom {j+m}{k}}=1,3,27,309,4059,\ldots $
where the first is the product of the central binomial coefficients and OEIS: A006077 (though with different signs).
Examples:
${\frac {1}{\pi }}={\frac {-{\boldsymbol {i}}}{9}}\sum _{k=0}^{\infty }s_{3C}(k)\,{\frac {602k+85}{\left(-960-12\right)^{k+{\frac {1}{2}}}}},\quad j_{3C}\left({\frac {3+{\sqrt {-43}}}{6}}\right)=-960$
${\frac {1}{\pi }}=6\,{\boldsymbol {i}}\,\sum _{k=0}^{\infty }s_{9A}(k)\,{\frac {4-{\sqrt {129}}\,\left(k+{\frac {1}{2}}\right)}{\left(-3{\sqrt {3U_{129}}}\right)^{k+{\frac {1}{2}}}}},\quad j_{9A}\left({\frac {3+{\sqrt {-43}}}{6}}\right)=-3{\sqrt {3}}\left(53{\sqrt {3}}+14{\sqrt {43}}\right)=-3{\sqrt {3U_{129}}}$
Level 10
Modular functions
Define,
${\begin{aligned}j_{10A}(\tau )&=\left({\sqrt {j_{10D}(\tau )}}-{\frac {1}{\sqrt {j_{10D}(\tau )}}}\right)^{2}=\left({\sqrt {j_{6B}(\tau )}}+{\frac {4}{\sqrt {j_{10B}(\tau )}}}\right)^{2}=\left({\sqrt {j_{10C}(\tau )}}+{\frac {5}{\sqrt {j_{10C}(\tau )}}}\right)^{2}-4={\frac {1}{q}}+4+22q+56q^{2}+\cdots \end{aligned}}$
${\begin{aligned}j_{10B}(\tau )&=\left({\frac {\eta (\tau )\eta (5\tau )}{\eta (2\tau )\eta (10\tau )}}\right)^{4}={\frac {1}{q}}-4+6q-8q^{2}+17q^{3}-32q^{4}+\cdots \end{aligned}}$
${\begin{aligned}j_{10C}(\tau )&=\left({\frac {\eta (\tau )\eta (2\tau )}{\eta (5\tau )\eta (10\tau )}}\right)^{2}={\frac {1}{q}}-2-3q+6q^{2}+2q^{3}+2q^{4}+\cdots \end{aligned}}$
${\begin{aligned}j_{10D}(\tau )&=\left({\frac {\eta (2\tau )\eta (5\tau )}{\eta (\tau )\eta (10\tau )}}\right)^{6}={\frac {1}{q}}+6+21q+62q^{2}+162q^{3}+\cdots \end{aligned}}$
${\begin{aligned}j_{10E}(\tau )&=\left({\frac {\eta (2\tau )\eta ^{5}(5\tau )}{\eta (\tau )\eta ^{5}(10\tau )}}\right)={\frac {1}{q}}+1+q+2q^{2}+2q^{3}-2q^{4}+\cdots \end{aligned}}$
Just like $j_{6A}$, the function $j_{10A}$ is a square or a near-square of the others. Furthermore, there are also linear relations between these,
$T_{10A}-T_{10B}-T_{10C}-T_{10D}+2T_{10E}=0$
or using the above eta quotients jn,
$j_{10A}-j_{10B}-j_{10C}-j_{10D}+2j_{10E}=6$
β sequences
Let,
$\beta _{1}(k)=\sum _{j=0}^{k}{\binom {k}{j}}^{4}=1,2,18,164,1810,\ldots $ (OEIS: A005260, labeled as s10 in Cooper's paper)
$\beta _{2}(k)={\binom {2k}{k}}\sum _{j=0}^{k}{\binom {2j}{j}}^{-1}{\binom {k}{j}}\sum _{m=0}^{j}{\binom {j}{m}}^{4}=1,4,36,424,5716,\ldots $
$\beta _{3}(k)={\binom {2k}{k}}\sum _{j=0}^{k}{\binom {2j}{j}}^{-1}{\binom {k}{j}}(-4)^{k-j}\sum _{m=0}^{j}{\binom {j}{m}}^{4}=1,-6,66,-876,12786,\ldots $
their complements,
$\beta _{2}'(k)={\binom {2k}{k}}\sum _{j=0}^{k}{\binom {2j}{j}}^{-1}{\binom {k}{j}}(-1)^{k-j}\sum _{m=0}^{j}{\binom {j}{m}}^{4}=1,0,12,24,564,2784,\ldots $
$\beta _{3}'(k)={\binom {2k}{k}}\sum _{j=0}^{k}{\binom {2j}{j}}^{-1}{\binom {k}{j}}(4)^{k-j}\sum _{m=0}^{j}{\binom {j}{m}}^{4}=1,10,162,3124,66994,\ldots $
and,
$s_{10B}(k)=1,-2,10,-68,514,-4100,33940,\ldots $
$s_{10C}(k)=1,-1,1,-1,1,23,-263,1343,-2303,\ldots $
$s_{10D}(k)=1,3,25,267,3249,42795,594145,\ldots $
though closed forms are not yet known for the last three sequences.
Identities
The modular functions can be related as,[15]
$U=\sum _{k=0}^{\infty }\beta _{1}(k)\,{\frac {1}{\left(j_{10A}(\tau )\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }\beta _{2}(k)\,{\frac {1}{\left(j_{10A}(\tau )+4\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }\beta _{3}(k)\,{\frac {1}{\left(j_{10A}(\tau )-16\right)^{k+{\frac {1}{2}}}}}$
$V=\sum _{k=0}^{\infty }s_{10B}(k)\,{\frac {1}{\left(j_{10B}(\tau )\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }s_{10C}(k)\,{\frac {1}{\left(j_{10C}(\tau )\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }s_{10D}(k)\,{\frac {1}{\left(j_{10D}(\tau )\right)^{k+{\frac {1}{2}}}}}$
if the series converges. In fact, it can also be observed that,
$U=V=\sum _{k=0}^{\infty }\beta _{2}'(k)\,{\frac {1}{\left(j_{10A}(\tau )-4\right)^{k+{\frac {1}{2}}}}}=\sum _{k=0}^{\infty }\beta _{3}'(k)\,{\frac {1}{\left(j_{10A}(\tau )+16\right)^{k+{\frac {1}{2}}}}}$
Since the exponent has a fractional part, the sign of the square root must be chosen appropriately though it is less an issue when jn is positive.
Examples
Just like level 6, the level 10 function j10A can be used in three ways. Starting with,
$j_{10A}\left({\sqrt {\frac {-19}{10}}}\right)=76^{2}=5776$
and noting that $5\cdot 19=95$ then,
${\begin{aligned}{\frac {1}{\pi }}&={\frac {5}{\sqrt {95}}}\,\sum _{k=0}^{\infty }\beta _{1}(k)\,{\frac {408k+47}{\left(76^{2}\right)^{k+{\frac {1}{2}}}}}\\{\frac {1}{\pi }}&={\frac {1}{17{\sqrt {95}}}}\,\sum _{k=0}^{\infty }\beta _{2}(k)\,{\frac {19\cdot 1824k+3983}{\left(76^{2}+4\right)^{k+{\frac {1}{2}}}}}\\{\frac {1}{\pi }}&={\frac {1}{6{\sqrt {95}}}}\,\,\sum _{k=0}^{\infty }\beta _{3}(k)\,\,{\frac {19\cdot 646k+1427}{\left(76^{2}-16\right)^{k+{\frac {1}{2}}}}}\\\end{aligned}}$
as well as,
${\begin{aligned}{\frac {1}{\pi }}&={\frac {5}{481{\sqrt {95}}}}\,\sum _{k=0}^{\infty }\beta _{2}'(k)\,{\frac {19\cdot 10336k+22675}{\left(76^{2}-4\right)^{k+{\frac {1}{2}}}}}\\{\frac {1}{\pi }}&={\frac {5}{181{\sqrt {95}}}}\,\sum _{k=0}^{\infty }\beta _{3}'(k)\,{\frac {19\cdot 3876k+8405}{\left(76^{2}+16\right)^{k+{\frac {1}{2}}}}}\end{aligned}}$
though the ones using the complements do not yet have a rigorous proof. A conjectured formula using one of the last three sequences is,
${\frac {1}{\pi }}={\frac {\boldsymbol {i}}{\sqrt {5}}}\,\sum _{k=0}^{\infty }s_{10C}(k){\frac {10k+3}{\left(-5^{2}\right)^{k+{\frac {1}{2}}}}},\quad j_{10C}\left({\frac {1+\,{\boldsymbol {i}}}{2}}\right)=-5^{2}$
which implies there might be examples for all sequences of level 10.
Level 11
Define the McKay–Thompson series of class 11A,
$j_{11A}(\tau )=(1+3F)^{3}+\left({\frac {1}{\sqrt {F}}}+3{\sqrt {F}}\right)^{2}={\frac {1}{q}}+6+17q+46q^{2}+116q^{3}+\cdots $
or sequence (OEIS: A128525) and where,
$F={\frac {\eta (3\tau )\,\eta (33\tau )}{\eta (\tau )\,\eta (11\tau )}}$
and,
$s_{11A}(k)=1,4,28,268,3004,36784,476476,\ldots $ (OEIS: A284756)
No closed form in terms of binomial coefficients is yet known for the sequence, but it obeys the recurrence relation
$(k+1)^{3}s_{k+1}=2(2k+1)\left(5k^{2}+5k+2\right)s_{k}-8k\left(7k^{2}+1\right)s_{k-1}+22k(k-1)(2k-1)s_{k-2}$
with initial conditions s(0) = 1, s(1) = 4.
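The recurrence runs forward readily; each step divides exactly by (k + 1)³, which the sketch below asserts (plain Python; the function name is a hypothetical helper):
```python
def s11A_values(count):
    s = [1, 4]  # initial conditions s(0) = 1, s(1) = 4
    for k in range(1, count - 1):
        nxt = (2 * (2 * k + 1) * (5 * k ** 2 + 5 * k + 2) * s[k]
               - 8 * k * (7 * k ** 2 + 1) * s[k - 1]
               + (22 * k * (k - 1) * (2 * k - 1) * s[k - 2] if k >= 2 else 0))
        assert nxt % (k + 1) ** 3 == 0  # the recurrence stays integral
        s.append(nxt // (k + 1) ** 3)
    return s

print(s11A_values(7))  # [1, 4, 28, 268, 3004, 36784, 476476]
```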
Example:[16]
${\frac {1}{\pi }}={\frac {\boldsymbol {i}}{22}}\sum _{k=0}^{\infty }s_{11A}(k)\,{\frac {221k+67}{(-44)^{k+{\frac {1}{2}}}}},\quad j_{11A}\left({\frac {1+{\sqrt {\frac {-17}{11}}}}{2}}\right)=-44$
Higher levels
As pointed out by Cooper,[16] there are analogous sequences for certain higher levels.
Similar series
R. Steiner found examples using Catalan numbers $C_{k}$,
${\frac {1}{\pi }}=\sum _{k=0}^{\infty }\left(2C_{k-n}\right)^{2}{\frac {(4z)k+\left(4^{2n-3}-(4n-3)z\right)}{16^{k}}}\qquad z\in \mathbb {Z} ,\quad n\geq 2,\quad n\in \mathbb {N} $
and for these a modular form with a second period in k exists:
$k={\frac {(-20-12{\boldsymbol {i}})+16n}{16}},\qquad k={\frac {(-20+12{\boldsymbol {i}})+16n}{16}}$
Other similar series are
${\frac {1}{\pi }}=\sum _{k=0}^{\infty }\left(2C_{k-2}\right)^{2}{\frac {3k+{\frac {1}{4}}}{16^{k}}}$
${\frac {1}{\pi }}=\sum _{k=0}^{\infty }\left(2C_{k-1}\right)^{2}{\frac {(4z+1)k-z}{16^{k}}}\qquad z\in \mathbb {Z} $
${\frac {1}{\pi }}=\sum _{k=0}^{\infty }\left(2C_{k-1}\right)^{2}{\frac {-1k+{\frac {1}{2}}}{16^{k}}}$
${\frac {1}{\pi }}=\sum _{k=0}^{\infty }\left(2C_{k-1}\right)^{2}{\frac {0k+{\frac {1}{4}}}{16^{k}}}$
${\frac {1}{\pi }}=\sum _{k=0}^{\infty }\left(2C_{k-1}\right)^{2}{\frac {{\frac {k}{5}}+{\frac {1}{5}}}{16^{k}}}$
${\frac {1}{\pi }}=\sum _{k=0}^{\infty }\left(2C_{k-1}\right)^{2}{\frac {{\frac {k}{3}}+{\frac {1}{6}}}{16^{k}}}$
${\frac {1}{\pi }}=\sum _{k=0}^{\infty }\left(2C_{k-1}\right)^{2}{\frac {{\frac {k}{2}}+{\frac {1}{8}}}{16^{k}}}$
${\frac {1}{\pi }}=\sum _{k=0}^{\infty }\left(2C_{k-1}\right)^{2}{\frac {2k-{\frac {1}{4}}}{16^{k}}}$
${\frac {1}{\pi }}=\sum _{k=0}^{\infty }\left(2C_{k-1}\right)^{2}{\frac {3k-{\frac {1}{2}}}{16^{k}}}$
${\frac {1}{\pi }}=\sum _{k=0}^{\infty }\left(2C_{k}\right)^{2}{\frac {{\frac {k}{16}}+{\frac {1}{16}}}{16^{k}}}$
with the last (comments in OEIS: A013709) found by using a linear combination of higher parts of Wallis–Lambert series for ${\tfrac {4}{\pi }}$ and Euler series for the circumference of an ellipse.
Using the definition of the Catalan numbers in terms of the gamma function, the first and last, for example, give the identities
${\frac {1}{4}}=\sum _{k=0}^{\infty }{\left({\frac {\Gamma ({\frac {1}{2}}+k)}{\Gamma (2+k)}}\right)}^{2}\left(4zk-(4n-3)z+4^{2n-3}\right)\qquad z\in \mathbb {Z} ,\quad n\geq 2,\quad n\in \mathbb {N} $
...
$4=\sum _{k=0}^{\infty }{\left({\frac {\Gamma ({\frac {1}{2}}+k)}{\Gamma (2+k)}}\right)}^{2}(k+1)$.
The last is also equivalent to,
${\frac {1}{\pi }}={\frac {1}{4}}\sum _{k=0}^{\infty }{\frac {{\binom {2k}{k}}^{2}}{k+1}}\,{\frac {1}{16^{k}}}$
and is related to the fact that,
$\lim _{k\rightarrow \infty }{\frac {16^{k}}{k{\binom {2k}{k}}^{2}}}=\pi $
which is a consequence of Stirling's approximation.
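Both statements can be tested directly (plain Python; the truncation at 2000 terms and the choice k = 10⁴ are arbitrary). The series terms decay only like 1/k², so the partial sum matches 1/π to about four decimal places:
```python
from math import comb, pi

s = sum(comb(2 * k, k) ** 2 / ((k + 1) * 16 ** k) for k in range(2000))
print(s / 4, 1 / pi)  # 0.31827... vs 0.3183098...

k = 10 ** 4
print(16 ** k / (k * comb(2 * k, k) ** 2))  # 3.14167..., -> pi as k grows
```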
See also
• Chudnovsky algorithm
• Borwein's algorithm
References
1. Chan, Heng Huat; Chan, Song Heng; Liu, Zhiguo (2004). "Domb's numbers and Ramanujan–Sato type series for 1/π". Advances in Mathematics. 186 (2): 396–410. doi:10.1016/j.aim.2003.07.012.
2. Almkvist, Gert; Guillera, Jesus (2013). "Ramanujan–Sato-Like Series". In Borwein, J.; Shparlinski, I.; Zudilin, W. (eds.). Number Theory and Related Fields. Springer Proceedings in Mathematics & Statistics. Vol. 43. New York: Springer. pp. 55–74. doi:10.1007/978-1-4614-6642-0_2. ISBN 978-1-4614-6641-3. S2CID 44875082.
3. Chan, H. H.; Cooper, S. (2012). "Rational analogues of Ramanujan's series for 1/π" (PDF). Mathematical Proceedings of the Cambridge Philosophical Society. 153 (2): 361–383. doi:10.1017/S0305004112000254. S2CID 76656590. Archived from the original (PDF) on 2019-12-19.
4. Almkvist, G. (2012). "Some conjectured formulas for 1/π coming from polytopes, K3-surfaces and Moonshine". arXiv:1211.6563. {{cite journal}}: Cite journal requires |journal= (help)
5. Ramanujan, S. (1914). "Modular equations and approximations to π". Quart. J. Math. Oxford. 45.
6. Chan; Tanigawa; Yang; Zudilin (2011). "New analogues of Clausen's identities arising from the theory of modular forms". Advances in Mathematics. 228 (2): 1294–1314. doi:10.1016/j.aim.2011.06.011.
7. Sato, T. (2002). "Apéry numbers and Ramanujan's series for 1/π". Abstract of a Talk Presented at the Annual Meeting of the Mathematical Society of Japan.
8. Chan, H.; Verrill, H. (2009). "The Apéry numbers, the Almkvist–Zudilin Numbers, and new series for 1/π". Mathematical Research Letters. 16 (3): 405–420. doi:10.4310/MRL.2009.v16.n3.a3.
9. Cooper, S. (2012). "Sporadic sequences, modular forms and new series for 1/π". Ramanujan Journal. 29 (1–3): 163–183. doi:10.1007/s11139-011-9357-3. S2CID 122870693.
10. Zagier, D. (2000). "Traces of Singular Moduli" (PDF). pp. 15–16.
11. Chudnovsky, David V.; Chudnovsky, Gregory V. (1989), "The Computation of Classical Constants", Proceedings of the National Academy of Sciences of the United States of America, 86 (21): 8178–8182, Bibcode:1989PNAS...86.8178C, doi:10.1073/pnas.86.21.8178, ISSN 0027-8424, JSTOR 34831, PMC 298242, PMID 16594075.
12. Yee, Alexander; Kondo, Shigeru (2011), 10 Trillion Digits of Pi: A Case Study of summing Hypergeometric Series to high precision on Multicore Systems, Technical Report, Computer Science Department, University of Illinois, hdl:2142/28348.
13. Borwein, J. M.; Borwein, P. B.; Bailey, D. H. (1989). "Ramanujan, modular equations, and approximations to pi; Or how to compute one billion digits of pi" (PDF). Amer. Math. Monthly. 96 (3): 201–219. doi:10.1080/00029890.1989.11972169.
14. Conway, J.; Norton, S. (1979). "Monstrous Moonshine". Bulletin of the London Mathematical Society. 11 (3): 308–339 [p. 319]. doi:10.1112/blms/11.3.308.
15. Cooper, S. (2012). "Level 10 analogues of Ramanujan's series for 1/π". J. Ramanujan Math. Soc. 27 (1), Theorem 4.3, p. 85.
16. Cooper, S. (December 2013). "Ramanujan's theories of elliptic functions to alternative bases, and beyond" (PDF). Askey 80 Conference.
External links
• Franel numbers
• McKay–Thompson series
• Approximations to Pi via the Dedekind eta function
Ramchundra
Ramchundra (Ramachandra Lal) (Devanagari: रामचन्द्र लाल) (1821–1880) was a British Indian mathematician. His book, Treatise on Problems of Maxima and Minima, was promoted by mathematician Augustus De Morgan.
Writing in his preface to the treatise, De Morgan states that Ramchundra was born in 1821 in Panipat to Sunder Lal, a Kayasth of Delhi. He came to De Morgan’s attention when, in 1850, a friend sent him Ramchundra’s work on maxima and minima. The 29-year-old self-taught mathematician had published the book at his own expense in Calcutta in that year. De Morgan was so impressed that he arranged for the book to be republished in London under his own supervision, and in general undertook to bring Ramchundra's work to the notice of the broader European scientific community. Quoting De Morgan, from the preface of the treatise:
On examining this work I saw in it, not merely merit worthy of encouragement, but merit of a peculiar kind, the encouragement of which, as it appeared to me, was likely to promote native effort towards the restoration of the native mind in India.
Charles Muses, in an article in the Mathematical Intelligencer (1998), called Ramchundra "De Morgan's Ramanujan". He was mystified why, in spite of De Morgan's efforts to make this "remarkable Hindu algebraist" known, Ramchundra does not appear in most texts on the history of mathematics. Ramchundra is also known for his great ability to calculate.
Ramchundra was a teacher of science at Delhi College for some time. In 1858, he was the native head master at Thomason Civil Engineering College (now Indian Institute of Technology Roorkee) in Roorkee. Later that year, he was appointed head master of a school in Delhi.
References
• C. Muses, "De Morgan's Ramanujan". Mathematical Intelligencer, vol. 20, no. 3, 1998, pp. 47–51.
• Dhruv Raina, "Ramchundra's Treatise Through the Unsentimentalised Light of Mathematics or the Mathematical Foundation of a Cultural Project", Historia Mathematica, 19, 1992, pp. 371–384.
• S. Irfan Habib and Dhruv Raina, "The Introduction of Modern Science into India: A Study of Ramchundra, Educationist and Mathematician", Annals of Science, 46 (1989), pp. 597–610; also Habib and Raina, "Vaijnanik Soch ko Samarpit", Sancha, June–July 1988, pp. 76–83.
• Ramchundra (1859), A Treatise on Problems of Maxima and Minima: Solved by Algebra, London: Wm. H. Allen and Co.
Finite extensions of local fields
In algebraic number theory, through completion, the study of ramification of a prime ideal can often be reduced to the case of local fields where a more detailed analysis can be carried out with the aid of tools such as ramification groups.
In this article, a local field is non-archimedean and has finite residue field.
Unramified extension
Let $L/K$ be a finite Galois extension of nonarchimedean local fields with finite residue fields $\ell /k$ and Galois group $G$. Then the following are equivalent.
• (i) $L/K$ is unramified.
• (ii) ${\mathcal {O}}_{L}/{\mathfrak {p}}{\mathcal {O}}_{L}$ is a field, where ${\mathfrak {p}}$ is the maximal ideal of ${\mathcal {O}}_{K}$.
• (iii) $[L:K]=[\ell :k]$
• (iv) The inertia subgroup of $G$ is trivial.
• (v) If $\pi $ is a uniformizing element of $K$, then $\pi $ is also a uniformizing element of $L$.
When $L/K$ is unramified, by (iv) (or (iii)), G can be identified with $\operatorname {Gal} (\ell /k)$, which is finite cyclic.
The above implies that there is an equivalence of categories between the finite unramified extensions of a local field K and finite separable extensions of the residue field of K.
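For a standard concrete illustration (not part of the original text): take $K=\mathbb {Q} _{p}$, whose residue field is $k=\mathbf {F} _{p}$. Adjoining a primitive $(p^{n}-1)$-th root of unity gives $L=\mathbb {Q} _{p}(\zeta _{p^{n}-1})$, the unramified extension of degree $n$: its residue field is $\ell =\mathbf {F} _{p^{n}}$, so $[L:K]=[\ell :k]=n$ as in (iii), and $p$ remains a uniformizing element of $L$ as in (v). Under the equivalence of categories just described, $L/K$ corresponds to $\mathbf {F} _{p^{n}}/\mathbf {F} _{p}$.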
Totally ramified extension
Again, let $L/K$ be a finite Galois extension of nonarchimedean local fields with finite residue fields $\ell /k$ and Galois group $G$. The following are equivalent (a concrete example follows the list).
• $L/K$ is totally ramified.
• $G$ coincides with its inertia subgroup.
• $L=K[\pi ]$ where $\pi $ is a root of an Eisenstein polynomial.
• The norm $N(L/K)$ contains a uniformizer of $K$.
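As a standard illustration (again not part of the original text): the polynomial $x^{e}-p$ is Eisenstein over $K=\mathbb {Q} _{p}$, so $L=\mathbb {Q} _{p}({\sqrt[{e}]{p}})$ is totally ramified of degree $e$. Here $\pi ={\sqrt[{e}]{p}}$ is a root of an Eisenstein polynomial as in the third criterion, and $N_{L/K}(\pi )=\pm p$ is a uniformizer of $K$, as in the fourth.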
See also
• Abhyankar's lemma
• Unramified morphism
References
• Cassels, J.W.S. (1986). Local Fields. London Mathematical Society Student Texts. Vol. 3. Cambridge University Press. ISBN 0-521-31525-5. Zbl 0595.12006.
• Weiss, Edwin (1976). Algebraic Number Theory (2nd unaltered ed.). Chelsea Publishing. ISBN 0-8284-0293-0. Zbl 0348.12101.
Principia Mathematica
The Principia Mathematica (often abbreviated PM) is a three-volume work on the foundations of mathematics written by mathematician–philosophers Alfred North Whitehead and Bertrand Russell and published in 1910, 1912, and 1913. In 1925–1927, it appeared in a second edition with an important Introduction to the Second Edition, an Appendix A that replaced ✸9, and all-new Appendices B and C. PM is not to be confused with Russell's 1903 The Principles of Mathematics. PM was originally conceived as a sequel volume to Russell's 1903 Principles, but as PM states, this became an unworkable suggestion for practical and philosophical reasons: "The present work was originally intended by us to be comprised in a second volume of Principles of Mathematics... But as we advanced, it became increasingly evident that the subject is a very much larger one than we had supposed; moreover on many fundamental questions which had been left obscure and doubtful in the former work, we have now arrived at what we believe to be satisfactory solutions."
I can remember Bertrand Russell telling me of a horrible dream. He was in the top floor of the University Library, about A.D. 2100. A library assistant was going round the shelves carrying an enormous bucket, taking down books, glancing at them, restoring them to the shelves or dumping them into the bucket. At last he came to three large volumes which Russell could recognize as the last surviving copy of Principia Mathematica. He took down one of the volumes, turned over a few pages, seemed puzzled for a moment by the curious symbolism, closed the volume, balanced it in his hand and hesitated....
Hardy, G. H. (2004) [1940]. A Mathematician's Apology. Cambridge: University Press. p. 83. ISBN 978-0-521-42706-7.
He [Russell] said once, after some contact with the Chinese language, that he was horrified to find that the language of Principia Mathematica was an Indo-European one.
Littlewood, J. E. (1986). Littlewood's Miscellany. Cambridge: University Press. p. 130.
PM, according to its introduction, had three aims: (1) to analyze to the greatest possible extent the ideas and methods of mathematical logic and to minimize the number of primitive notions, axioms, and inference rules; (2) to precisely express mathematical propositions in symbolic logic using the most convenient notation that precise expression allows; (3) to solve the paradoxes that plagued logic and set theory at the turn of the 20th century, like Russell's paradox.[1]
This third aim motivated the adoption of the theory of types in PM. The theory of types adopts grammatical restrictions on formulas that rule out the unrestricted comprehension of classes, properties, and functions. The effect of this is that formulas such as those that would allow the comprehension of objects like the Russell set turn out to be ill-formed: they violate the grammatical restrictions of the system of PM.
There is no doubt that PM is of great importance in the history of mathematics and philosophy: as Irvine has noted, it sparked interest in symbolic logic and advanced the subject by popularizing it; it showcased the powers and capacities of symbolic logic; and it showed how advances in philosophy of mathematics and symbolic logic could go hand-in-hand with tremendous fruitfulness.[2] Indeed, PM was in part brought about by an interest in logicism, the view on which all mathematical truths are logical truths. It was in part thanks to the advances made in PM that, despite its defects, numerous advances in meta-logic were made, including Gödel's incompleteness theorems.
For all that, PM's notation is no longer widely used: probably the foremost reason for this is that practicing mathematicians tend to assume that the background foundation is some form of Zermelo–Fraenkel set theory. Nonetheless, the scholarly, historical, and philosophical interest in PM is great and ongoing: for example, the Modern Library placed it 23rd in a list of the top 100 English-language nonfiction books of the twentieth century.[3] There are also multiple articles on the work in the peer-reviewed Stanford Encyclopedia of Philosophy, and academic researchers continue working with Principia, whether for the historical reason of understanding the text or its authors, or for mathematical reasons of understanding or developing Principia's logical system.
Scope of foundations laid
The Principia covered only set theory, cardinal numbers, ordinal numbers, and real numbers. Deeper theorems from real analysis were not included, but by the end of the third volume it was clear to experts that a large amount of known mathematics could in principle be developed in the adopted formalism. It was also clear how lengthy such a development would be.
A fourth volume on the foundations of geometry had been planned, but the authors admitted to intellectual exhaustion upon completion of the third.
Theoretical basis
As noted in the criticism of the theory by Kurt Gödel (below), unlike a formalist theory, the "logicistic" theory of PM has no "precise statement of the syntax of the formalism". Furthermore, in the theory it is almost immediately observable that interpretations (in the sense of model theory) are presented in terms of truth-values for the behaviour of the symbols "⊢" (assertion of truth), "~" (logical not), and "V" (logical inclusive OR).
Truth-values: PM embeds the notions of "truth" and "falsity" in the notion "primitive proposition". A raw (pure) formalist theory would not provide the meaning of the symbols that form a "primitive proposition"—the symbols themselves could be absolutely arbitrary and unfamiliar. The theory would specify only how the symbols behave based on the grammar of the theory. Then later, by assignment of "values", a model would specify an interpretation of what the formulas are saying. Thus in the formal Kleene symbol set below, the "interpretation" of what the symbols commonly mean, and by implication how they end up being used, is given in parentheses, e.g., "¬ (not)". But this is not a pure Formalist theory.
Contemporary construction of a formal theory
The following formalist theory is offered as contrast to the logicistic theory of PM. A contemporary formal system would be constructed as follows:
1. Symbols used: This set is the starting set, and other symbols can appear but only by definition from these beginning symbols. A starting set might be the following set derived from Kleene 1952: logical symbols: "→" (implies, IF-THEN, and "⊃"), "&" (and), "V" (or), "¬" (not), "∀" (for all), "∃" (there exists); predicate symbol "=" (equals); function symbols "+" (arithmetic addition), "∙" (arithmetic multiplication), "'" (successor); individual symbol "0" (zero); variables "a", "b", "c", etc.; and parentheses "(" and ")".[4]
2. Symbol strings: The theory will build "strings" of these symbols by concatenation (juxtaposition).[5]
3. Formation rules: The theory specifies the rules of syntax (rules of grammar) usually as a recursive definition that starts with "0" and specifies how to build acceptable strings or "well-formed formulas" (wffs).[6] This includes a rule for "substitution"[7] of strings for the symbols called "variables".
4. Transformation rule(s): The axioms that specify the behaviours of the symbols and symbol sequences.
5. Rule of inference, detachment, modus ponens: The rule that allows the theory to "detach" a "conclusion" from the "premises" that led up to it, and thereafter to discard the "premises" (symbols to the left of the line │, or symbols above the line if horizontal). If this were not the case, then substitution would result in longer and longer strings that would have to be carried forward. Indeed, after the application of modus ponens, nothing is left but the conclusion; the rest disappears forever.
Contemporary theories often specify as their first axiom the classical modus ponens or "rule of detachment":
A, A ⊃ B | B
The symbol "│" is usually written as a horizontal line, here "⊃" means "implies". The symbols A and B are "stand-ins" for strings; this form of notation is called an "axiom schema" (i.e., there is a countable number of specific forms the notation could take). This can be read in a manner similar to IF-THEN but with a difference: given symbol string IF A and A implies B THEN B (and retain only B for further use). But the symbols have no "interpretation" (e.g., no "truth table" or "truth values" or "truth functions") and modus ponens proceeds mechanistically, by grammar alone.
Construction
The theory of PM has both significant similarities to, and significant differences from, a contemporary formal theory. Kleene states that "this deduction of mathematics from logic was offered as intuitive axiomatics. The axioms were intended to be believed, or at least to be accepted as plausible hypotheses concerning the world".[8] Indeed, unlike a Formalist theory that manipulates symbols according to rules of grammar, PM introduces the notion of "truth-values", i.e., truth and falsity in the real-world sense, and the "assertion of truth" almost immediately as the fifth and sixth elements in the structure of the theory (PM 1962:4–36):
1. Variables
2. Uses of various letters
3. The fundamental functions of propositions: "the Contradictory Function" symbolised by "~" and the "Logical Sum or Disjunctive Function" symbolised by "∨" being taken as primitive and logical implication defined (the following example also used to illustrate 9. Definition below) as
p ⊃ q .=. ~ p ∨ q Df. (PM 1962:11)
and logical product defined as
p . q .=. ~(~p ∨ ~q) Df. (PM 1962:12)
4. Equivalence: Logical equivalence, not arithmetic equivalence: "≡" given as a demonstration of how the symbols are used, i.e., "Thus ' p ≡ q ' stands for '( p ⊃ q ) . ( q ⊃ p )'." (PM 1962:7). Notice that to discuss a notation PM identifies a "meta"-notation with "[space] ... [space]":[9]
Logical equivalence appears again as a definition:
p ≡ q .=. ( p ⊃ q ) . ( q ⊃ p ) (PM 1962:12),
Notice the appearance of parentheses. This grammatical usage is not specified and appears sporadically; parentheses do play an important role in symbol strings, however, e.g., the notation "(x)" for the contemporary "∀x".
5. Truth-values: "The 'Truth-value' of a proposition is truth if it is true, and falsehood if it is false" (this phrase is due to Gottlob Frege) (PM 1962:7).
6. Assertion-sign: "'⊦. p may be read 'it is true that' ... thus '⊦: p .⊃. q ' means 'it is true that p implies q ', whereas '⊦. p .⊃⊦. q ' means ' p is true; therefore q is true'. The first of these does not necessarily involve the truth either of p or of q, while the second involves the truth of both" (PM 1962:92).
7. Inference: PM's version of modus ponens. "[If] '⊦. p ' and '⊦ (p ⊃ q)' have occurred, then '⊦ . q ' will occur if it is desired to put it on record. The process of the inference cannot be reduced to symbols. Its sole record is the occurrence of '⊦. q ' [in other words, the symbols on the left disappear or can be erased]" (PM 1962:9).
8. The use of dots
9. Definitions: These use the "=" sign with "Df" at the right end.
10. Summary of preceding statements: brief discussion of the primitive ideas "~ p" and "p ∨ q" and "⊦" prefixed to a proposition.
11. Primitive propositions: the axioms or postulates. This was significantly modified in the second edition.
12. Propositional functions: The notion of "proposition" was significantly modified in the second edition, including the introduction of "atomic" propositions linked by logical signs to form "molecular" propositions, and the use of substitution of molecular propositions into atomic or molecular propositions to create new expressions.
13. The range of values and total variation
14. Ambiguous assertion and the real variable: This and the next two sections were modified or abandoned in the second edition. In particular, the distinction between the concepts defined in sections 15 ("Definition and the real variable") and 16 ("Propositions connecting real and apparent variables") was abandoned in the second edition.
15. Formal implication and formal equivalence
16. Identity
17. Classes and relations
18. Various descriptive functions of relations
19. Plural descriptive functions
20. Unit classes
Primitive ideas
Cf. PM 1962:90–94, for the first edition:
• (1) Elementary propositions.
• (2) Elementary propositions of functions.
• (3) Assertion: introduces the notions of "truth" and "falsity".
• (4) Assertion of a propositional function.
• (5) Negation: "If p is any proposition, the proposition "not-p", or "p is false," will be represented by "~p" ".
• (6) Disjunction: "If p and q are any propositions, the proposition "p or q, i.e., "either p is true or q is true," where the alternatives are to be not mutually exclusive, will be represented by "p ∨ q" ".
• (cf. section B)
Primitive propositions
The first edition (see discussion relative to the second edition, below) begins with a definition of the sign "⊃"
✸1.01. p ⊃ q .=. ~ p ∨ q. Df.
✸1.1. Anything implied by a true elementary proposition is true. Pp modus ponens
(✸1.11 was abandoned in the second edition.)
✸1.2. ⊦: p ∨ p .⊃. p. Pp principle of tautology
✸1.3. ⊦: q .⊃. p ∨ q. Pp principle of addition
✸1.4. ⊦: p ∨ q .⊃. q ∨ p. Pp principle of permutation
✸1.5. ⊦: p ∨ ( q ∨ r ) .⊃. q ∨ ( p ∨ r ). Pp associative principle
✸1.6. ⊦:. q ⊃ r .⊃: p ∨ q .⊃. p ∨ r. Pp principle of summation
✸1.7. If p is an elementary proposition, ~p is an elementary proposition. Pp
✸1.71. If p and q are elementary propositions, p ∨ q is an elementary proposition. Pp
✸1.72. If φp and ψp are elementary propositional functions which take elementary propositions as arguments, φp ∨ ψp is an elementary proposition. Pp
Together with the "Introduction to the Second Edition", the second edition's Appendix A abandons the entire section ✸9. This includes six primitive propositions ✸9 through ✸9.15 together with the Axioms of reducibility.
The revised theory is made difficult by the introduction of the Sheffer stroke ("|") to symbolise "incompatibility" (i.e., if both elementary propositions p and q are true, their "stroke" p | q is false), the contemporary logical NAND (not-AND). In the revised theory, the Introduction presents the notion of "atomic proposition", a "datum" that "belongs to the philosophical part of logic". These have no parts that are propositions and do not contain the notions "all" or "some". For example: "this is red", or "this is earlier than that". Such things can exist ad infinitum, i.e., there can even be an "infinite enumeration" of them to replace "generality" (i.e., the notion of "for all").[10] PM then "advance[s] to molecular propositions" that are all linked by "the stroke". Definitions give equivalences for "~", "∨", "⊃", and ".".
The new introduction defines "elementary propositions" as atomic and molecular propositions together. It then replaces all the primitive propositions ✸1.2 to ✸1.72 with a single primitive proposition framed in terms of the stroke:
"If p, q, r are elementary propositions, given p and p|(q|r), we can infer r. This is a primitive proposition."
The new introduction keeps the notation for "there exists" (now recast as "sometimes true") and "for all" (recast as "always true"). Appendix A strengthens the notion of "matrix" or "predicative function" (a "primitive idea", PM 1962:164) and presents four new Primitive propositions as ✸8.1–✸8.13.
✸88. Multiplicative axiom
✸120. Axiom of infinity
Ramified types and the axiom of reducibility
In simple type theory objects are elements of various disjoint "types". Types are implicitly built up as follows. If τ1,...,τm are types then there is a type (τ1,...,τm) that can be thought of as the class of propositional functions of τ1,...,τm (which in set theory is essentially the set of subsets of τ1×...×τm). In particular there is a type () of propositions, and there may be a type ι (iota) of "individuals" from which other types are built. Russell and Whitehead's notation for building up types from other types is rather cumbersome, and the notation here is due to Church.
In the ramified type theory of PM all objects are elements of various disjoint ramified types. Ramified types are implicitly built up as follows. If τ1,...,τm,σ1,...,σn are ramified types then as in simple type theory there is a type (τ1,...,τm,σ1,...,σn) of "predicative" propositional functions of τ1,...,τm,σ1,...,σn. However, there are also ramified types (τ1,...,τm|σ1,...,σn) that can be thought of as the classes of propositional functions of τ1,...,τm obtained from propositional functions of type (τ1,...,τm,σ1,...,σn) by quantifying over σ1,...,σn. When n=0 (so there are no σs) these propositional functions are called predicative functions or matrices. This can be confusing because modern mathematical practice does not distinguish between predicative and non-predicative functions, and in any case PM never defines exactly what a "predicative function" actually is: this is taken as a primitive notion.
Russell and Whitehead found it impossible to develop mathematics while maintaining the difference between predicative and non-predicative functions, so they introduced the axiom of reducibility, saying that for every non-predicative function there is a predicative function taking the same values. In practice this axiom essentially means that the elements of type (τ1,...,τm|σ1,...,σn) can be identified with the elements of type (τ1,...,τm), which causes the hierarchy of ramified types to collapse down to simple type theory. (Strictly speaking, PM allows two propositional functions to be different even if they take the same values on all arguments; this differs from modern mathematical practice where one normally identifies two such functions.)
In Zermelo set theory one can model the ramified type theory of PM as follows. One picks a set ι to be the type of individuals. For example, ι might be the set of natural numbers, or the set of atoms (in a set theory with atoms) or any other set one is interested in. Then if τ1,...,τm are types, the type (τ1,...,τm) is the power set of the product τ1×...×τm, which can also be thought of informally as the set of (propositional predicative) functions from this product to a 2-element set {true,false}. The ramified type (τ1,...,τm|σ1,...,σn) can be modeled as the product of the type (τ1,...,τm,σ1,...,σn) with the set of sequences of n quantifiers (∀ or ∃) indicating which quantifier should be applied to each variable σi. (One can vary this slightly by allowing the σs to be quantified in any order, or allowing them to occur before some of the τs, but this makes little difference except to the bookkeeping.)
Notation
Main article: Glossary of Principia Mathematica
One author[2] observes that "The notation in that work has been superseded by the subsequent development of logic during the 20th century, to the extent that the beginner has trouble reading PM at all"; while much of the symbolic content can be converted to modern notation, the original notation itself is "a subject of scholarly dispute", and some notation "embodies substantive logical doctrines so that it cannot simply be replaced by contemporary symbolism".[11]
Kurt Gödel was harshly critical of the notation:
"It is to be regretted that this first comprehensive and thorough-going presentation of a mathematical logic and the derivation of mathematics from it [is] so greatly lacking in formal precision in the foundations (contained in ✸1–✸21 of Principia [i.e., sections ✸1–✸5 (propositional logic), ✸8–14 (predicate logic with identity/equality), ✸20 (introduction to set theory), and ✸21 (introduction to relations theory)]) that it represents in this respect a considerable step backwards as compared with Frege. What is missing, above all, is a precise statement of the syntax of the formalism. Syntactical considerations are omitted even in cases where they are necessary for the cogency of the proofs".[12]
This is reflected in the example below of the symbols "p", "q", "r" and "⊃" that can be formed into the string "p ⊃ q ⊃ r". PM requires a definition of what this symbol-string means in terms of other symbols; in contemporary treatments the "formation rules" (syntactical rules leading to "well formed formulas") would have prevented the formation of this string.
Source of the notation: Chapter I "Preliminary Explanations of Ideas and Notations" begins with the source of the elementary parts of the notation (the symbols =⊃≡−ΛVε and the system of dots):
"The notation adopted in the present work is based upon that of Peano, and the following explanations are to some extent modeled on those which he prefixes to his Formulario Mathematico [i.e., Peano 1889]. His use of dots as brackets is adopted, and so are many of his symbols" (PM 1927:4).[13]
PM changed Peano's Ɔ to ⊃, and also adopted a few of Peano's later symbols, such as ℩ and ι, and Peano's practice of turning letters upside down.
PM adopts the assertion sign "⊦" from Frege's 1879 Begriffsschrift:[14]
"(I)t may be read 'it is true that'"[15]
Thus to assert a proposition p PM writes:
"⊦. p." (PM 1927:92)
(Observe that, as in the original, the left dot is square and of greater size than the period on the right.)
Most of the rest of the notation in PM was invented by Whitehead.[16]
An introduction to the notation of "Section A Mathematical Logic" (formulas ✸1–✸5.71)
PM's dots[17] are used in a manner similar to parentheses. Each dot (or multiple dot) represents either a left or right parenthesis or the logical symbol ∧. More than one dot indicates the "depth" of the parentheses, for example, ".", ":" or ":.", "::". However, the position of the matching right or left parenthesis is not indicated explicitly in the notation but has to be deduced from some rules that are complex and at times ambiguous. Moreover, when the dots stand for the logical symbol ∧, its left and right operands have to be deduced using similar rules. First one has to decide, based on context, whether the dots stand for a left or right parenthesis or a logical symbol. Then one has to decide how far away the matching parenthesis is: here one carries on until one meets either a larger number of dots, or the same number of dots next to a sign of equal or greater "force", or the end of the line. Dots next to the signs ⊃, ≡, ∨, =Df have greater force than dots next to (x), (∃x) and so on, which have greater force than dots indicating a logical product ∧.
Example 1. The line
✸3.4. ⊢ : p . q . ⊃ . p ⊃ q
corresponds to
⊢ ((p ∧ q) ⊃ (p ⊃ q)).
The two dots standing together immediately following the assertion-sign indicate that what is asserted is the entire line: since there are two of them, their scope is greater than that of any of the single dots to their right. They are replaced by a left parenthesis standing where the dots are and a right parenthesis at the end of the formula, thus:
⊢ (p . q . ⊃ . p ⊃ q).
(In practice, these outermost parentheses, which enclose an entire formula, are usually suppressed.) The first of the single dots, standing between two propositional variables, represents conjunction. It belongs to the third group and has the narrowest scope. Here it is replaced by the modern symbol for conjunction "∧", thus
⊢ (p ∧ q . ⊃ . p ⊃ q).
The two remaining single dots pick out the main connective of the whole formula. They illustrate the utility of the dot notation in picking out those connectives which are relatively more important than the ones which surround them. The one to the left of the "⊃" is replaced by a pair of parentheses, the right one goes where the dot is and the left one goes as far to the left as it can without crossing a group of dots of greater force, in this case the two dots which follow the assertion-sign, thus
⊢ ((p ∧ q) ⊃ . p ⊃ q)
The dot to the right of the "⊃" is replaced by a left parenthesis which goes where the dot is and a right parenthesis which goes as far to the right as it can without going beyond the scope already established by a group of dots of greater force (in this case the two dots which followed the assertion-sign). So the right parenthesis which replaces the dot to the right of the "⊃" is placed in front of the right parenthesis which replaced the two dots following the assertion-sign, thus
⊢ ((p ∧ q) ⊃ (p ⊃ q)).
Example 2, with double, triple, and quadruple dots:
✸9.521. ⊢ : : (∃x). φx . ⊃ . q : ⊃ : . (∃x). φx . v . r : ⊃ . q v r
stands for
((((∃x)(φx)) ⊃ (q)) ⊃ ((((∃x) (φx)) v (r)) ⊃ (q v r)))
Example 3, with a double dot indicating a logical symbol (from volume 1, page 10):
p⊃q:q⊃r.⊃.p⊃r
stands for
(p⊃q) ∧ ((q⊃r)⊃(p⊃r))
where the double dot represents the logical symbol ∧; having more dots, it takes wider scope than the non-logical single dots around the middle "⊃".
Later in section ✸14, brackets "[ ]" appear, and in sections ✸20 and following, braces "{ }" appear. Whether these symbols have specific meanings or are just for visual clarification is unclear. Unfortunately the single dot (but also ":", ":.", "::", etc.) is also used to symbolise "logical product" (contemporary logical AND often symbolised by "&" or "∧").
Logical implication is represented by Peano's "Ɔ" simplified to "⊃", logical negation is symbolised by an elongated tilde, i.e., "~" (contemporary "~" or "¬"), the logical OR by "v". The symbol "=" together with "Df" is used to indicate "is defined as", whereas in sections ✸13 and following, "=" is defined as (mathematically) "identical with", i.e., contemporary mathematical "equality" (cf. discussion in section ✸13). Logical equivalence is represented by "≡" (contemporary "if and only if"); "elementary" propositional functions are written in the customary way, e.g., "f(p)", but later the function sign appears directly before the variable without parenthesis e.g., "φx", "χx", etc.
Example, PM introduces the definition of "logical product" as follows:
✸3.01. p . q .=. ~(~p v ~q) Df.
where "p . q" is the logical product of p and q.
✸3.02. p ⊃ q ⊃ r .=. p ⊃ q . q ⊃ r Df.
This definition serves merely to abbreviate proofs.
Translation of the formulas into contemporary symbols: Various authors use alternate symbols, so no definitive translation can be given. However, because of criticisms such as that of Kurt Gödel below, the best contemporary treatments will be very precise with respect to the "formation rules" (the syntax) of the formulas.
The first formula might be converted into modern symbolism as follows:[18]
(p & q) =df (~(~p v ~q))
alternately
(p & q) =df (¬(¬p v ¬q))
alternately
(p ∧ q) =df (¬(¬p v ¬q))
etc.
The second formula might be converted as follows:
(p → q → r) =df (p → q) & (q → r)
But note that this is not (logically) equivalent to (p → (q → r)) nor to ((p → q) → r), and these two are not logically equivalent either.
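These claims can be machine-checked over all truth assignments; a small Python sketch (illustrative, not from PM):

from itertools import product

def imp(a, b):  # material implication
    return (not a) or b

# *3.01: conjunction via De Morgan holds under every assignment.
for p, q in product([False, True], repeat=2):
    assert (p and q) == (not ((not p) or (not q)))

# *3.02: the chain (p > q) & (q > r) differs from both bracketings above.
differs_right = differs_left = False
for p, q, r in product([False, True], repeat=3):
    chain = imp(p, q) and imp(q, r)
    differs_right |= chain != imp(p, imp(q, r))  # vs p > (q > r)
    differs_left |= chain != imp(imp(p, q), r)   # vs (p > q) > r
print(differs_right, differs_left)  # True True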
An introduction to the notation of "Section B Theory of Apparent Variables" (formulas ✸8–✸14.34)
These sections concern what is now known as predicate logic, and predicate logic with identity (equality).
• NB: As a result of criticism and advances, the second edition of PM (1927) replaces ✸9 with a new ✸8 (Appendix A). This new section eliminates the first edition's distinction between real and apparent variables, and it eliminates "the primitive idea 'assertion of a propositional function'".[19] To add to the complexity of the treatment, ✸8 introduces the notion of substituting a "matrix", and the Sheffer stroke:
• Matrix: In contemporary usage, PM's matrix is (at least for propositional functions), a truth table, i.e., all truth-values of a propositional or predicate function.
• Sheffer stroke: Is the contemporary logical NAND (NOT-AND), i.e., "incompatibility", meaning:
"Given two propositions p and q, then ' p | q ' means "proposition p is incompatible with proposition q", i.e., if both propositions p and q evaluate as true, then and only then p | q evaluates as false." After section ✸8 the Sheffer stroke sees no usage.
Section ✸10: The existential and universal "operators": PM adds "(x)" to represent the contemporary symbolism "for all x " i.e., " ∀x", and it uses a backwards serifed E to represent "there exists an x", i.e., "(Ǝx)", i.e., the contemporary "∃x". The typical notation would be similar to the following:
"(x) . φx" means "for all values of variable x, function φ evaluates to true"
"(Ǝx) . φx" means "for some value of variable x, function φ evaluates to true"
Sections ✸10, ✸11, ✸12: Properties of a variable extended to all individuals: section ✸10 introduces the notion of "a property" of a "variable". PM gives the example: φ is a function that indicates "is a Greek", ψ indicates "is a man", and χ indicates "is a mortal"; these functions then apply to a variable x. PM can now write, and evaluate:
(x) . ψx
The notation above means "for all x, x is a man". Given a collection of individuals, one can evaluate the above formula for truth or falsity. For example, given the restricted collection of individuals { Socrates, Plato, Russell, Zeus } the above evaluates to "true" if we allow for Zeus to be a man. But it fails for:
(x) . φx
because Russell is not Greek. And it fails for
(x) . χx
because Zeus is not a mortal.
Equipped with this notation PM can create formulas to express the following: "If all Greeks are men and if all men are mortals then all Greeks are mortals". (PM 1962:138)
(x) . φx ⊃ ψx :(x). ψx ⊃ χx :⊃: (x) . φx ⊃ χx
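These evaluations are easy to mimic directly; a small sketch (with the membership choices made in the text, e.g., counting Zeus as a man but not as a mortal):

individuals = ["Socrates", "Plato", "Russell", "Zeus"]
is_greek = {"Socrates", "Plato", "Zeus"}              # phi
is_man = {"Socrates", "Plato", "Russell", "Zeus"}     # psi
is_mortal = {"Socrates", "Plato", "Russell"}          # chi

print(all(x in is_man for x in individuals))     # (x) . psi-x : True
print(all(x in is_greek for x in individuals))   # (x) . phi-x : False (Russell)
print(all(x in is_mortal for x in individuals))  # (x) . chi-x : False (Zeus)

# The displayed formula: IF all Greeks are men and all men are mortals,
# THEN all Greeks are mortals.  Over this restricted collection the
# antecedent fails (Zeus is no mortal), so the conditional holds vacuously.
antecedent = all(x in is_man for x in is_greek) and all(x in is_mortal for x in is_man)
conclusion = all(x in is_mortal for x in is_greek)
print((not antecedent) or conclusion)            # True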
Another example: the formula:
✸10.01. (Ǝx). φx . = . ~(x) . ~φx Df.
means "The symbols representing the assertion 'There exists at least one x that satisfies function φ' is defined by the symbols representing the assertion 'It's not true that, given all values of x, there are no values of x satisfying φ'".
The symbolisms ⊃x and "≡x" appear at ✸10.02 and ✸10.03. Both are abbreviations for universality (i.e., for all) that bind the variable x to the logical operator. Contemporary notation would have simply used parentheses outside of the equality ("=") sign:
✸10.02 φx ⊃x ψx .=. (x). φx ⊃ ψx Df
Contemporary notation: ∀x(φ(x) → ψ(x)) (or a variant)
✸10.03 φx ≡x ψx .=. (x). φx ≡ ψx Df
Contemporary notation: ∀x(φ(x) ↔︎ ψ(x)) (or a variant)
PM attributes the first symbolism to Peano.
Section ✸11 applies this symbolism to two variables. Thus the following notations: ⊃x, ⊃y, ⊃x,y could all appear in a single formula.
Section ✸12 reintroduces the notion of "matrix" (contemporary truth table), the notion of logical types, and in particular the notions of first-order and second-order functions and propositions.
New symbolism "φ ! x" represents any value of a first-order function. If a circumflex "^" is placed over a variable, then this is an "individual" value of y, meaning that "ŷ" indicates "individuals" (e.g., a row in a truth table); this distinction is necessary because of the matrix/extensional nature of propositional functions.
Now equipped with the matrix notion, PM can assert its controversial axiom of reducibility: a function of one or two variables (two being sufficient for PM's use) where all its values are given (i.e., in its matrix) is (logically) equivalent ("≡") to some "predicative" function of the same variables. The one-variable definition is given below as an illustration of the notation (PM 1962:166–167):
✸12.1 ⊢: (Ǝ f): φx .≡x. f ! x Pp;
Pp is a "Primitive proposition" ("Propositions assumed without proof") (PM 1962:12, i.e., contemporary "axioms"), adding to the 7 defined in section ✸1 (starting with ✸1.1 modus ponens). These are to be distinguished from the "primitive ideas" that include the assertion sign "⊢", negation "~", logical OR "V", the notions of "elementary proposition" and "elementary propositional function"; these are as close as PM comes to rules of notational formation, i.e., syntax.
This means: "We assert the truth of the following: There exists a function f with the property that: given all values of x, their evaluations in function φ (i.e., resulting their matrix) is logically equivalent to some f evaluated at those same values of x. (and vice versa, hence logical equivalence)". In other words: given a matrix determined by property φ applied to variable x, there exists a function f that, when applied to the x is logically equivalent to the matrix. Or: every matrix φx can be represented by a function f applied to x, and vice versa.
✸13: The identity operator "=" : This is a definition that uses the sign in two different ways, as noted by the quote from PM:
✸13.01. x = y .=: (φ): φ ! x . ⊃ . φ ! y Df
means:
"This definition states that x and y are to be called identical when every predicative function satisfied by x is also satisfied by y ... Note that the second sign of equality in the above definition is combined with "Df", and thus is not really the same symbol as the sign of equality which is defined."
The not-equals sign "≠" makes its appearance as a definition at ✸13.02.
✸14: Descriptions:
"A description is a phrase of the form "the term y which satisfies φŷ, where φŷ is some function satisfied by one and only one argument."[20]
From this PM employs two new symbols, a forward "E" and an inverted iota "℩". Here is an example:
✸14.02. E ! ( ℩y) (φy) .=: ( Ǝb):φy . ≡y . y = b Df.
This has the meaning:
"The y satisfying φŷ exists," which holds when, and only when φŷ is satisfied by one value of y and by no other value." (PM 1967:173–174)
Introduction to the notation of the theory of classes and relations
The text leaps from section ✸14 directly to the foundational sections ✸20 GENERAL THEORY OF CLASSES and ✸21 GENERAL THEORY OF RELATIONS. "Relations" are what is known in contemporary set theory as sets of ordered pairs. Sections ✸20 and ✸22 introduce many of the symbols still in contemporary usage. These include the symbols "ε", "⊂", "∩", "∪", "–", "Λ", and "V": "ε" signifies "is an element of" (PM 1962:188); "⊂" (✸22.01) signifies "is contained in", "is a subset of"; "∩" (✸22.02) signifies the intersection (logical product) of classes (sets); "∪" (✸22.03) signifies the union (logical sum) of classes (sets); "–" (✸22.04) signifies negation of a class (set); "Λ" signifies the null class; and "V" signifies the universal class or universe of discourse.
Small Greek letters (other than "ε", "ι", "π", "φ", "ψ", "χ", and "θ") represent classes (e.g., "α", "β", "γ", "δ", etc.) (PM 1962:188):
x ε α
"The use of single letter in place of symbols such as ẑ(φz) or ẑ(φ ! z) is practically almost indispensable, since otherwise the notation rapidly becomes intolerably cumbrous. Thus ' x ε α' will mean ' x is a member of the class α'". (PM 1962:188)
α ∪ –α = V
The union of a class (set) and its complement is the universal (completed) class.[21]
α ∩ –α = Λ
The intersection of a class (set) and its complement is the null (empty) class.
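Both equations can be checked directly in a toy finite model (an illustration of ours, with classes as Python sets inside a fixed universe):

V = set(range(10))            # a toy universal class
alpha = {0, 2, 4, 6, 8}
neg_alpha = V - alpha         # PM's "–alpha": the complement within V
print(alpha | neg_alpha == V)       # alpha ∪ –alpha = V : True
print(alpha & neg_alpha == set())   # alpha ∩ –alpha = Λ : True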
When applied to relations in section ✸23 CALCULUS OF RELATIONS, the symbols "⊂", "∩", "∪", and "–" acquire a dot: for example: "⊍", "∸".[22]
The notion, and notation, of "a class" (set): In the first edition PM asserts that no new primitive ideas are necessary to define what is meant by "a class", and only two new "primitive propositions" called the axioms of reducibility for classes and relations respectively (PM 1962:25).[23] But before this notion can be defined, PM feels it necessary to create a peculiar notation "ẑ(φz)" that it calls a "fictitious object". (PM 1962:188)
⊢: x ε ẑ(φz) .≡. (φx)
"i.e., ' x is a member of the class determined by (φẑ)' is [logically] equivalent to ' x satisfies (φẑ),' or to '(φx) is true.'". (PM 1962:25)
At least PM can tell the reader how these fictitious objects behave, because "A class is wholly determinate when its membership is known, that is, there cannot be two different classes having the same membership" (PM 1962:26). This is symbolised by the following equality (similar to ✸13.01 above):
ẑ(φz) = ẑ(ψz) . ≡ : (x): φx .≡. ψx
"This last is the distinguishing characteristic of classes, and justifies us in treating ẑ(ψz) as the class determined by [the function] ψẑ." (PM 1962:188)
Perhaps the above can be made clearer by the discussion of classes in Introduction to the Second Edition, which disposes of the Axiom of Reducibility and replaces it with the notion: "All functions of functions are extensional" (PM 1962:xxxix), i.e.,
φx ≡x ψx .⊃. (x): ƒ(φẑ) ≡ ƒ(ψẑ) (PM 1962:xxxix)
This has the reasonable meaning that "IF for all values of x the truth-values of the functions φ and ψ of x are [logically] equivalent, THEN the function ƒ of a given φẑ and ƒ of ψẑ are [logically] equivalent." PM asserts this is "obvious":
"This is obvious, since φ can only occur in ƒ(φẑ) by the substitution of values of φ for p, q, r, ... in a [logical-] function, and, if φx ≡ ψx, the substitution of φx for p in a [logical-] function gives the same truth-value to the truth-function as the substitution of ψx. Consequently there is no longer any reason to distinguish between functions classes, for we have, in virtue of the above,
φx ≡x ψx .⊃. (x). φẑ = . ψẑ".
Observe the change to the equality "=" sign on the right. PM goes on to state that it will continue to hang onto the notation "ẑ(φz)", but this is merely equivalent to φẑ, and this is a class. (All quotes: PM 1962:xxxix.)
Consistency and criticisms
According to Carnap's "Logicist Foundations of Mathematics", Russell wanted a theory that could plausibly be said to derive all of mathematics from purely logical axioms. However, Principia Mathematica required, in addition to the basic axioms of type theory, three further axioms that seemed not to be true as mere matters of logic, namely the axiom of infinity, the axiom of choice, and the axiom of reducibility. Since the first two were existential axioms, Russell phrased mathematical statements depending on them as conditionals. But reducibility was required to be sure that the formal statements even properly express statements of real analysis, so that statements depending on it could not be reformulated as conditionals. Frank Ramsey tried to argue that Russell's ramification of the theory of types was unnecessary, so that reducibility could be removed, but these arguments seemed inconclusive.
Beyond the status of the axioms as logical truths, one can ask the following questions about any system such as PM:
• whether a contradiction could be derived from the axioms (the question of inconsistency), and
• whether there exists a mathematical statement which could neither be proven nor disproven in the system (the question of completeness).
Propositional logic itself was known to be consistent, but the same had not been established for Principia's axioms of set theory. (See Hilbert's second problem.) Russell and Whitehead suspected that the system in PM is incomplete: for example, they pointed out that it does not seem powerful enough to show that the cardinal ℵω exists. However, one can ask if some recursively axiomatizable extension of it is complete and consistent.
Gödel 1930, 1931
In 1930, Gödel's completeness theorem showed that first-order predicate logic itself was complete in a much weaker sense—that is, any sentence that is unprovable from a given set of axioms must actually be false in some model of the axioms. However, this is not the stronger sense of completeness desired for Principia Mathematica, since a given system of axioms (such as those of Principia Mathematica) may have many models, in some of which a given statement is true and in others of which that statement is false, so that the statement is left undecided by the axioms.
Gödel's incompleteness theorems cast unexpected light on these two related questions.
Gödel's first incompleteness theorem showed that no recursive extension of Principia could be both consistent and complete for arithmetic statements. (As mentioned above, Principia itself was already known to be incomplete for some non-arithmetic statements.) According to the theorem, within every sufficiently powerful recursive logical system (such as Principia), there exists a statement G that essentially reads, "The statement G cannot be proved." Such a statement is a sort of Catch-22: if G is provable, then it is false, and the system is therefore inconsistent; and if G is not provable, then it is true, and the system is therefore incomplete.
Gödel's second incompleteness theorem (1931) shows that no formal system extending basic arithmetic can be used to prove its own consistency. Thus, the statement "there are no contradictions in the Principia system" cannot be proven in the Principia system unless there are contradictions in the system (in which case it can be proven both true and false).
Wittgenstein 1919, 1939
By the second edition of PM, Russell had replaced his axiom of reducibility with a new axiom (although he does not state it as such). Gödel 1944:126 describes it this way:
"This change is connected with the new axiom that functions can occur in propositions only "through their values", i.e., extensionally . . . [this is] quite unobjectionable even from the constructive standpoint . . . provided that quantifiers are always restricted to definite orders". This change from a quasi-intensional stance to a fully extensional stance also restricts predicate logic to the second order, i.e. functions of functions: "We can decide that mathematics is to confine itself to functions of functions which obey the above assumption" (PM 2nd edition p. 401, Appendix C).
This new proposal resulted in a dire outcome. An "extensional stance" and restriction to a second-order predicate logic means that a propositional function extended to all individuals such as "All 'x' are blue" now has to list all of the 'x' that satisfy (are true in) the proposition, listing them in a possibly infinite conjunction: e.g. x1 ∧ x2 ∧ . . . ∧ xn ∧ . . .. Ironically, this change came about as the result of criticism from Wittgenstein in his 1919 Tractatus Logico-Philosophicus. As described by Russell in the Introduction to the Second Edition of PM:
"There is another course, recommended by Wittgenstein† (†Tractatus Logico-Philosophicus, *5.54ff) for philosophical reasons. This is to assume that functions of propositions are always truth-functions, and that a function can only occur in a proposition through its values. [...] [Working through the consequences] it appears that everything in Vol. I remains true (though often new proofs are required); the theory of inductive cardinals and ordinals survives; but it seems that the theory of infinite Dedekindian and well-ordered series largely collapses, so that irrationals, and real numbers generally, can no longer be adequately dealt with. Also Cantor's proof that 2n > n breaks down unless n is finite." (PM 2nd edition reprinted 1962:xiv, also cf. new Appendix C).
In other words, the fact that an infinite list cannot realistically be specified means that the concept of "number" in the infinite sense (i.e. the continuum) cannot be described by the new theory proposed in PM Second Edition.
Wittgenstein in his Lectures on the Foundations of Mathematics, Cambridge 1939 criticised Principia on various grounds, such as:
• It purports to reveal the fundamental basis for arithmetic. However, it is our everyday arithmetical practices such as counting which are fundamental; for if a persistent discrepancy arose between counting and Principia, this would be treated as evidence of an error in Principia (e.g., that Principia did not characterise numbers or addition correctly), not as evidence of an error in everyday counting.
• The calculating methods in Principia can only be used in practice with very small numbers. To calculate using large numbers (e.g., billions), the formulae would become too long, and some short-cut method would have to be used, which would no doubt rely on everyday techniques such as counting (or else on non-fundamental and hence questionable methods such as induction). So again Principia depends on everyday techniques, not vice versa.
Wittgenstein did, however, concede that Principia may nonetheless make some aspects of everyday arithmetic clearer.
Gödel 1944
In his 1944 Russell's mathematical logic, Gödel offers a "critical but sympathetic discussion of the logicistic order of ideas":[24]
"It is to be regretted that this first comprehensive and thorough-going presentation of a mathematical logic and the derivation of mathematics from it [is] so greatly lacking in formal precision in the foundations (contained in *1-*21 of Principia) that it represents in this respect a considerable step backwards as compared with Frege. What is missing, above all, is a precise statement of the syntax of the formalism. Syntactical considerations are omitted even in cases where they are necessary for the cogency of the proofs . . . The matter is especially doubtful for the rule of substitution and of replacing defined symbols by their definiens . . . it is chiefly the rule of substitution which would have to be proved" (Gödel 1944:124)[25]
Contents
Part I Mathematical logic. Volume I ✸1 to ✸43
This section describes the propositional and predicate calculus, and gives the basic properties of classes, relations, and types.
Part II Prolegomena to cardinal arithmetic. Volume I ✸50 to ✸97
This part covers various properties of relations, especially those needed for cardinal arithmetic.
Part III Cardinal arithmetic. Volume II ✸100 to ✸126
This covers the definition and basic properties of cardinals. A cardinal is defined to be an equivalence class of similar classes (as opposed to ZFC, where a cardinal is a special sort of von Neumann ordinal). Each type has its own collection of cardinals associated with it, and there is a considerable amount of bookkeeping necessary for comparing cardinals of different types. PM defines addition, multiplication, and exponentiation of cardinals, and compares different definitions of finite and infinite cardinals. ✸120.03 is the Axiom of infinity.
Part IV Relation-arithmetic. Volume II ✸150 to ✸186
A "relation-number" is an equivalence class of isomorphic relations. PM defines analogues of addition, multiplication, and exponentiation for arbitrary relations. The addition and multiplication is similar to the usual definition of addition and multiplication of ordinals in ZFC, though the definition of exponentiation of relations in PM is not equivalent to the usual one used in ZFC.
Part V Series. Volume II ✸200 to ✸234 and volume III ✸250 to ✸276
This covers series, which is PM's term for what is now called a totally ordered set. In particular it covers complete series, continuous functions between series with the order topology (though of course they do not use this terminology), well-ordered series, and series without "gaps" (those with a member strictly between any two given members).
Part VI Quantity. Volume III ✸300 to ✸375
This section constructs the ring of integers, the fields of rational and real numbers, and "vector-families", which are related to what are now called torsors over abelian groups.
Comparison with set theory
This section compares the system in PM with the usual mathematical foundations of ZFC. The system of PM is roughly comparable in strength with Zermelo set theory (or more precisely a version of it where the axiom of separation has all quantifiers bounded).
• The system of propositional logic and predicate calculus in PM is essentially the same as that used now, except that the notation and terminology has changed.
• The most obvious difference between PM and set theory is that in PM all objects belong to one of a number of disjoint types. This means that everything gets duplicated for each (infinite) type: for example, each type has its own ordinals, cardinals, real numbers, and so on. This results in a lot of bookkeeping to relate the various types with each other.
• In ZFC functions are normally coded as sets of ordered pairs. In PM functions are treated rather differently. First of all, "function" means "propositional function", something taking values true or false. Second, functions are not determined by their values: it is possible to have several different functions all taking the same values (for example, one might regard 2x+2 and 2(x+1) as different functions on grounds that the computer programs for evaluating them are different). The functions in ZFC given by sets of ordered pairs correspond to what PM call "matrices", and the more general functions in PM are coded by quantifying over some variables. In particular PM distinguishes between functions defined using quantification and functions not defined using quantification, whereas ZFC does not make this distinction.
• PM has no analogue of the axiom of replacement, though this is of little practical importance as this axiom is used very little in mathematics outside set theory.
• PM emphasizes relations as a fundamental concept, whereas in modern mathematical practice it is functions rather than relations that are treated as more fundamental; for example, category theory emphasizes morphisms or functions rather than relations. (However, there is an analogue of categories called allegories that models relations rather than functions, and is quite similar to the type system of PM.)
• In PM, cardinals are defined as classes of similar classes, whereas in ZFC cardinals are special ordinals. In PM there is a different collection of cardinals for each type, with some complicated machinery for moving cardinals between types, whereas in ZFC there is only one sort of cardinal. Since PM does not have any equivalent of the axiom of replacement, it is unable to prove the existence of cardinals greater than ℵω.
• In PM ordinals are treated as equivalence classes of well-ordered sets, and as with cardinals there is a different collection of ordinals for each type. In ZFC there is only one collection of ordinals, usually defined as von Neumann ordinals. One strange quirk of PM is that they do not have an ordinal corresponding to 1, which causes numerous unnecessary complications in their theorems. The definition of ordinal exponentiation α^β in PM is not equivalent to the usual definition in ZFC and has some rather undesirable properties: for example, it is not continuous in β and is not well ordered (so is not even an ordinal).
• The constructions of the integers, rationals and real numbers in ZFC have been streamlined considerably over time since the constructions in PM.
Differences between editions
Apart from corrections of misprints, the main text of PM is unchanged between the first and second editions. The main text in Volumes 1 and 2 was reset, so that it occupies fewer pages in each. In the second edition, Volume 3 was not reset, being photographically reprinted with the same page numbering; corrections were still made. The total number of pages (excluding the endpapers) in the first edition is 1,996; in the second, 2,000. Volume 1 has five new additions:
• A 54-page introduction by Russell describing the changes they would have made had they had more time and energy. The main change he suggests is the removal of the controversial axiom of reducibility, though he admits that he knows no satisfactory substitute for it. He also seems more favorable to the idea that a function should be determined by its values (as is usual in modern mathematical practice).
• Appendix A, numbered as *8, 15 pages, about the Sheffer stroke.
• Appendix B, numbered as *89, discussing induction without the axiom of reducibility.
• Appendix C, 8 pages, discussing propositional functions.
• An 8-page list of definitions at the end, giving a much-needed index to the 500 or so notations used.
In 1962, Cambridge University Press published a shortened paperback edition containing parts of the second edition of Volume 1: the new introduction (and the old), the main text up to *56, and Appendices A and C.
Editions
• Whitehead, Alfred North; Russell, Bertrand (1910), Principia mathematica, vol. 1 (1 ed.), Cambridge: Cambridge University Press, JFM 41.0083.02
• Whitehead, Alfred North; Russell, Bertrand (1912), Principia mathematica, vol. 2 (1 ed.), Cambridge: Cambridge University Press, JFM 43.0093.03
• Whitehead, Alfred North; Russell, Bertrand (1913), Principia mathematica, vol. 3 (1 ed.), Cambridge: Cambridge University Press, JFM 44.0068.01
• Whitehead, Alfred North; Russell, Bertrand (1925), Principia mathematica, vol. 1 (2 ed.), Cambridge: Cambridge University Press, ISBN 978-0521067911, JFM 51.0046.06
• Whitehead, Alfred North; Russell, Bertrand (1927), Principia mathematica, vol. 2 (2 ed.), Cambridge: Cambridge University Press, ISBN 978-0521067911, JFM 53.0038.02
• Whitehead, Alfred North; Russell, Bertrand (1927), Principia mathematica, vol. 3 (2 ed.), Cambridge: Cambridge University Press, ISBN 978-0521067911, JFM 53.0038.02
• Whitehead, Alfred North; Russell, Bertrand (1997) [1962], Principia mathematica to *56, Cambridge Mathematical Library, Cambridge: Cambridge University Press, doi:10.1017/CBO9780511623585, ISBN 0-521-62606-4, MR 1700771, Zbl 0877.01042
The first edition was reprinted in 2009 by Merchant Books, ISBN 978-1-60386-182-3, ISBN 978-1-60386-183-0, ISBN 978-1-60386-184-7.
See also
• Axiomatic set theory
• Boolean algebra
• Information Processing Language – first computational demonstration of theorems in PM
• Introduction to Mathematical Philosophy
Footnotes
1. Whitehead, Alfred North; Russell, Bertrand (1963). Principia Mathematica. Cambridge: Cambridge University Press. p. 1.
2. Irvine, Andrew D. (1 May 2003). "Principia Mathematica (Stanford Encyclopedia of Philosophy)". Metaphysics Research Lab, CSLI, Stanford University. Retrieved 5 August 2009.
3. "The Modern Library's Top 100 Nonfiction Books of the Century". The New York Times Company. 30 April 1999. Retrieved 5 August 2009.
4. This set is taken from Kleene 1952:69 substituting → for ⊃.
5. Kleene 1952:71, Enderton 2001:15
6. Enderton 2001:16
7. This is the word used by Kleene 1952:78
8. Quote from Kleene 1952:45. See discussion LOGICISM at pp. 43–46.
9. In his section 8.5.4 Groping towards metalogic Grattan-Guinness 2000:454ff discusses the American logicians' critical reception of the second edition of PM. For instance Sheffer "puzzled that 'In order to give an account of logic, we must presuppose and employ logic'" (p. 452). And Bernstein ended his 1926 review with the comment that "This distinction between the propositional logic as a mathematical system and as a language must be made, if serious errors are to be avoided; this distinction the Principia does not make" (p. 454).
10. This idea is due to Wittgenstein's Tractatus. See the discussion at PM 1962:xiv–xv.
11. Linsky, Bernard (2018). Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 1 May 2018 – via Stanford Encyclopedia of Philosophy.
12. Kurt Gödel 1944 "Russell's mathematical logic" appearing at p. 120 in Feferman et al. 1990 Kurt Gödel Collected Works Volume II, Oxford University Press, NY, ISBN 978-0-19-514721-6 (v.2.pbk.) .
13. For comparison, see the translated portion of Peano 1889 in van Heijenoort 1967:81ff.
14. This work can be found at van Heijenoort 1967:1ff.
15. And see footnote, both at PM 1927:92
16. Bertrand Russell (1959). "Chapter VII". My Philosophical Development.
17. The original typography is a square of a heavier weight than the conventional period.
18. The first example comes from plato.stanford.edu (loc.cit.).
19. p. xiii of 1927 appearing in the 1962 paperback edition to ✸56.
20. The original typography employs an x with a circumflex rather than ŷ; this continues below
21. See the ten postulates of Huntington, in particular postulates IIa and IIb at PM 1962:205 and discussion at p. 206.
22. The "⊂" sign has a dot inside it, and the intersection sign "∩" has a dot above it; these are not available in the "Arial Unicode MS" font.
23. Wiener 1914 "A simplification of the logic of relations" (van Heijenoort 1967:224ff) disposed of the second of these when he showed how to reduce the theory of relations to that of classes.
24. Kleene 1952:46.
25. Gödel 1944 Russell's mathematical logic in Kurt Gödel: Collected Works Volume II, Oxford University Press, New York, NY, ISBN 978-0-19-514721-6.
References
• Stephen Kleene (1952). Introduction to Metamathematics, 6th Reprint, North-Holland Publishing Company, Amsterdam NY, ISBN 0-7204-2103-9.
• Stephen Cole Kleene; Michael Beeson (2009). Introduction to Metamathematics (Paperback ed.). Ishi Press. ISBN 978-0-923891-57-2.
• Ivor Grattan-Guinness (2000). The Search for Mathematical Roots 1870–1940, Princeton University Press, Princeton NJ, ISBN 0-691-05857-1.
• Ludwig Wittgenstein (2009), Major Works: Selected Philosophical Writings, Harper Collins, New York, ISBN 978-0-06-155024-9. In particular:
Tractatus Logico-Philosophicus (Vienna 1918; original publication in German).
• Jean van Heijenoort editor (1967). From Frege to Gödel: A Source book in Mathematical Logic, 1879–1931, 3rd printing, Harvard University Press, Cambridge MA, ISBN 0-674-32449-8.
• Michel Weber and Will Desmond (eds.) (2008) Handbook of Whiteheadian Process Thought, Frankfurt / Lancaster, Ontos Verlag, Process Thought X1 & X2.
External links
Wikimedia Commons has media related to Principia Mathematica.
• Stanford Encyclopedia of Philosophy:
• Principia Mathematica – by A. D. Irvine
• The Notation in Principia Mathematica – by Bernard Linsky.
• Proposition ✸54.43 in a more modern notation (Metamath)
Links to related articles
Bertrand Russell
British philosopher, logician, and social critic
Philosophy
Views on philosophy
• Copleston–Russell debate
• Logical atomism
• Russell's teapot
• Theory of descriptions
Views on society
• Russell–Einstein Manifesto
• Russell Tribunal
Mathematics
• Peano–Russell notation
• Russell's paradox
Works
• The Principles of Mathematics (1903)
• On Denoting (1905)
• Principia Mathematica (1910–1913)
• The Problems of Philosophy (1912)
• Why Men Fight (1916)
• Introduction to Mathematical Philosophy (1919)
• Free Thought and Official Propaganda (1922)
• Why I Am Not a Christian (1927)
• Marriage and Morals (1929)
• In Praise of Idleness and Other Essays (1935)
• Power: A New Social Analysis (1938)
• A History of Western Philosophy (1945)
• My Philosophical Development (1959)
Family
• Alys Pearsall Smith (wife, 1894–1921)
• Dora Russell (wife, 1921–35)
• Patricia Russell (wife, 1936–51)
• Edith Finch Russell (wife, 1952–70)
• John Russell, 4th Earl Russell (son)
• Conrad Russell, 5th Earl Russell (son)
• Frank Russell, 2nd Earl Russell (brother)
• John Russell, Viscount Amberley (father)
• Katharine Russell, Viscountess Amberley (mother)
• John Stuart Mill (godfather)
• John Russell, 1st Earl Russell (paternal grandfather)
• Henrietta Stanley, Baroness Stanley of Alderley (maternal grandmother)
Related
• Appointment court case
• Earl Russell
• Peace Foundation
• Professorship of Philosophy
Category: Works by Bertrand Russell
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list) & Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list), Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic systems (list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
Logic
• Outline
• History
Major fields
• Computer science
• Formal semantics (natural language)
• Inference
• Philosophy of logic
• Proof
• Semantics of logic
• Syntax
Logics
• Classical
• Informal
• Critical thinking
• Reason
• Mathematical
• Non-classical
• Philosophical
Theories
• Argumentation
• Metalogic
• Metamathematics
• Set
Foundations
• Abduction
• Analytic and synthetic propositions
• Contradiction
• Paradox
• Antinomy
• Deduction
• Deductive closure
• Definition
• Description
• Entailment
• Linguistic
• Form
• Induction
• Logical truth
• Name
• Necessity and sufficiency
• Premise
• Probability
• Reference
• Statement
• Substitution
• Truth
• Validity
Lists
topics
• Mathematical logic
• Boolean algebra
• Set theory
other
• Logicians
• Rules of inference
• Paradoxes
• Fallacies
• Logic symbols
• Philosophy portal
• Category
• WikiProject (talk)
• changes
Set theory
Overview
• Set (mathematics)
Axioms
• Adjunction
• Choice
• countable
• dependent
• global
• Constructibility (V=L)
• Determinacy
• Extensionality
• Infinity
• Limitation of size
• Pairing
• Power set
• Regularity
• Union
• Martin's axiom
• Axiom schema
• replacement
• specification
Operations
• Cartesian product
• Complement (i.e. set difference)
• De Morgan's laws
• Disjoint union
• Identities
• Intersection
• Power set
• Symmetric difference
• Union
• Concepts
• Methods
• Almost
• Cardinality
• Cardinal number (large)
• Class
• Constructible universe
• Continuum hypothesis
• Diagonal argument
• Element
• ordered pair
• tuple
• Family
• Forcing
• One-to-one correspondence
• Ordinal number
• Set-builder notation
• Transfinite induction
• Venn diagram
Set types
• Amorphous
• Countable
• Empty
• Finite (hereditarily)
• Filter
• base
• subbase
• Ultrafilter
• Fuzzy
• Infinite (Dedekind-infinite)
• Recursive
• Singleton
• Subset · Superset
• Transitive
• Uncountable
• Universal
Theories
• Alternative
• Axiomatic
• Naive
• Cantor's theorem
• Zermelo
• General
• Principia Mathematica
• New Foundations
• Zermelo–Fraenkel
• von Neumann–Bernays–Gödel
• Morse–Kelley
• Kripke–Platek
• Tarski–Grothendieck
• Paradoxes
• Problems
• Russell's paradox
• Suslin's problem
• Burali-Forti paradox
Set theorists
• Paul Bernays
• Georg Cantor
• Paul Cohen
• Richard Dedekind
• Abraham Fraenkel
• Kurt Gödel
• Thomas Jech
• John von Neumann
• Willard Quine
• Bertrand Russell
• Thoralf Skolem
• Ernst Zermelo
Branched covering
In mathematics, a branched covering is a map that is almost a covering map, except on a small set.
In topology
In topology, a map is a branched covering if it is a covering map everywhere except for a nowhere dense set known as the branch set. Examples include the map from a wedge of circles to a single circle, where the map is a homeomorphism on each circle; here the branch set is the wedge point.
In algebraic geometry
In algebraic geometry, the term branched covering is used to describe morphisms $f$ from an algebraic variety $V$ to another one $W$, the two dimensions being the same, and the typical fibre of $f$ being of dimension 0.
In that case, there will be an open set $W'$ of $W$ (for the Zariski topology) that is dense in $W$, such that the restriction of $f$ to $W'$ (from $V'=f^{-1}(W')$ to $W'$, that is) is unramified. Depending on the context, we can take this as local homeomorphism for the strong topology, over the complex numbers, or as an étale morphism in general (under some slightly stronger hypotheses, on flatness and separability). Generically, then, such a morphism resembles a covering space in the topological sense. For example, if $V$ and $W$ are both compact Riemann surfaces, we require only that $f$ is holomorphic and not constant, and then there is a finite set of points $P$ of $W$, outside of which we do find an honest covering
$V'\to W'$.
Ramification locus
The set of exceptional points on $W$ is called the ramification locus (i.e. this is the complement of the largest possible open set $W'$). In general monodromy occurs according to the fundamental group of $W'$ acting on the sheets of the covering (this topological picture can be made precise also in the case of a general base field).
Kummer extensions
Branched coverings are easily constructed as Kummer extensions, i.e. as algebraic extension of the function field. The hyperelliptic curves are prototypic examples.
Unramified covering
An unramified covering is then a covering whose ramification locus is empty.
Examples
Elliptic curve
Morphisms of curves provide many examples of ramified coverings. For example, let C be the elliptic curve of equation
$y^{2}-x(x-1)(x-2)=0.$
The projection of C onto the x-axis is a ramified cover with ramification locus given by
$x(x-1)(x-2)=0.$
This is because for these three values of x the fiber is the double point $y^{2}=0,$ while for any other value of x, the fiber consists of two distinct points (over an algebraically closed field).
This projection induces an algebraic extension of degree two of the function fields: taking the fraction fields of the underlying commutative rings, we get the morphism
$\mathbb {C} (x)\to \mathbb {C} (x)[y]/(y^{2}-x(x-1)(x-2))$
Hence this projection is a degree 2 branched covering. This can be homogenized to construct a degree 2 branched covering of the corresponding projective elliptic curve to the projective line.
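As a concrete check, the fibers over the ramification locus and over a generic point can be computed directly. The following short sketch is my own illustration using SymPy, not part of the article; it computes the fiber of the projection above over a chosen value of x.

# A small sketch (my own, using SymPy) of the fibers of the projection
# (x, y) -> x on the curve y^2 = x(x - 1)(x - 2).
from sympy import symbols, roots, Poly

y = symbols('y')

def fiber(a):
    # Fiber over x = a: the roots of y^2 - a(a - 1)(a - 2), with multiplicities.
    return roots(Poly(y**2 - a*(a - 1)*(a - 2), y))

print(fiber(3))  # {sqrt(6): 1, -sqrt(6): 1} -- two distinct points: unramified
print(fiber(0))  # {0: 2} -- the double point y^2 = 0: ramified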
Plane algebraic curve
The previous example may be generalized to any algebraic plane curve in the following way. Let C be a plane curve defined by the equation f(x,y) = 0, where f is a separable and irreducible polynomial in two indeterminates. If n is the degree of f in y, then the fiber of the projection of C onto the x-axis consists of n distinct points, except for a finite number of values of x. Thus, this projection is a branched covering of degree n.
The exceptional values of x are the roots of the coefficient of $y^{n}$ in f, and the roots of the discriminant of f with respect to y.
Over a root r of the discriminant, there is at least a ramified point, which is either a critical point or a singular point. If r is also a root of the coefficient of $y^{n}$ in f, then this ramified point is "at infinity".
Over a root s of the coefficient of $y^{n}$ in f, the curve C has an infinite branch, and the fiber at s has less than n points. However, if one extends the projection to the projective completions of C and the x-axis, and if s is not a root of the discriminant, the projection becomes a covering over a neighbourhood of s.
The fact that this projection is a branched covering of degree n may also be seen by considering the function fields. In fact, this projection corresponds to the field extension of degree n
$\mathbb {C} (x)\to \mathbb {C} (x)[y]/f(x,y).$
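In practice the exceptional values can be read off with a computer algebra system. Here is a hedged sketch of my own, with SymPy, applying the recipe above to the earlier elliptic-curve example.

# Locating the exceptional values of x via the discriminant with respect to y
# (illustration mine, not from the article).
from sympy import symbols, discriminant, solve

x, y = symbols('x y')
f = y**2 - x*(x - 1)*(x - 2)  # the curve from the elliptic-curve example

# The coefficient of y^2 in f is the constant 1, so no exceptional values come
# from it; the rest come from the discriminant of f with respect to y.
disc = discriminant(f, y)  # 4*x*(x - 1)*(x - 2)
print(solve(disc, x))      # [0, 1, 2] -- the x-values where the fiber degenerates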
Varying Ramifications
We can also generalize branched coverings of the line with varying ramification degrees. Consider a covering defined by a polynomial equation of the form
$f(y)=g(x)$
As we choose different points $x=\alpha $, the fibers, given by the vanishing locus of $f(y)-g(\alpha )$, vary. At any point where the multiplicity of one of the linear factors in the factorization of $f(y)-g(\alpha )$ increases by one, there is a ramification.
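The following sketch, an example of my own rather than one from the article, makes the varying multiplicities visible for f(y) = y^3 - 3y: the fiber over a is the vanishing locus of y^3 - 3y - a, and its multiplicities jump exactly at a = 2 and a = -2.

# Multiplicities of the linear factors of f(y) - g(alpha) = y^3 - 3y - a
# (illustration mine, using SymPy).
from sympy import symbols, roots, Poly

y = symbols('y')

for a in (0, 2, -2):
    print(a, roots(Poly(y**3 - 3*y - a, y)))

# a = 0  -> three simple roots: an unramified fiber
# a = 2  -> y = -1 with multiplicity 2 and y = 2 simple: ramification
# a = -2 -> y = 1 with multiplicity 2 and y = -2 simple: ramification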
Scheme Theoretic Examples
Elliptic Curves
Morphisms of curves provide many examples of ramified coverings of schemes. For example, the morphism from an affine elliptic curve to a line
${\text{Spec}}\left({\mathbb {C} [x,y]}/{(y^{2}-x(x-1)(x-2))}\right)\to {\text{Spec}}(\mathbb {C} [x])$
is a ramified cover with ramification locus given by
$X={\text{Spec}}\left({\mathbb {C} [x]}/{(x(x-1)(x-2))}\right)$
This is because at any point of $X$ in $\mathbb {A} ^{1}$ the fiber is the scheme
${\text{Spec}}\left({\mathbb {C} [y]}/{(y^{2})}\right)$
Also, if we take the fraction fields of the underlying commutative rings, we get the field homomorphism
$\mathbb {C} (x)\to {\mathbb {C} (x)[y]}/{(y^{2}-x(x-1)(x-2))},$
which is an algebraic extension of degree two; hence we got a degree 2 branched covering of an elliptic curve to the affine line. This can be homogenized to construct a morphism of a projective elliptic curve to $\mathbb {P} ^{1}$.
Hyperelliptic curve
A hyperelliptic curve provides a generalization of the above degree $2$ cover of the affine line, by considering the affine scheme defined over $\mathbb {C} $ by a polynomial of the form
$y^{2}-\prod (x-a_{i})$ where $a_{i}\neq a_{j}$ for $i\neq j$
Higher Degree Coverings of the Affine Line
We can generalize the previous example by taking the morphism
${\text{Spec}}\left({\frac {\mathbb {C} [x,y]}{(f(y)-g(x))}}\right)\to {\text{Spec}}(\mathbb {C} [x])$
where $g(x)$ has no repeated roots. Then the ramification locus is given by
$X={\text{Spec}}\left({\frac {\mathbb {C} [x]}{(f(x))}}\right)$
where the fibers are given by
${\text{Spec}}\left({\frac {\mathbb {C} [y]}{(f(y))}}\right)$
Then, we get an induced morphism of fraction fields
$\mathbb {C} (x)\to {\frac {\mathbb {C} (x)[y]}{(f(y)-g(x))}}$
There is a $\mathbb {C} (x)$-module isomorphism of the target with
$\mathbb {C} (x)\oplus \mathbb {C} (x)\cdot y\oplus \cdots \oplus \mathbb {C} (x)\cdot y^{\deg(f)-1}$
Hence the cover is of degree ${\text{deg}}(f)$.
Superelliptic Curves
Superelliptic curves are a generalization of hyperelliptic curves and a specialization of the previous family of examples since they are given by affine schemes $X/\mathbb {C} $ from polynomials of the form
$y^{k}-f(x)$ where $k>2$ and $f(x)$ has no repeated roots.
Ramified Coverings of Projective Space
Another useful class of examples come from ramified coverings of projective space. Given a homogeneous polynomial $f\in \mathbb {C} [x_{0},\ldots ,x_{n}]$ we can construct a ramified covering of $\mathbb {P} ^{n}$ with ramification locus
${\text{Proj}}\left({\frac {\mathbb {C} [x_{0},\ldots ,x_{n}]}{(f(x))}}\right)$
by considering the morphism of projective schemes
${\text{Proj}}\left({\frac {\mathbb {C} [x_{0},\ldots ,x_{n}][y]}{y^{{\text{deg}}(f)}-f(x)}}\right)\to \mathbb {P} ^{n}$
Again, this will be a covering of degree ${\text{deg}}(f)$.
Applications
Branched coverings $C\to X$ come with a symmetry group of transformations $G$. Since the symmetry group has stabilizers at the points of the ramification locus, branched coverings can be used to construct examples of orbifolds, or Deligne–Mumford stacks.
See also
• Étale morphism
• Orbifold
• Stack (mathematics)
References
• Dimca, Alexandru (1992), Singularities and Topology of Hypersurfaces, Berlin, New York: Springer-Verlag, ISBN 978-0-387-97709-6
• Hartshorne, Robin (1977), Algebraic Geometry, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157, OCLC 13348052
• Osserman, Brian, Branched Covers of the Riemann Sphere (PDF)
Ramification (mathematics)
In geometry, ramification is 'branching out', in the way that the square root function, for complex numbers, can be seen to have two branches differing in sign. The term is also used from the opposite perspective (branches coming together) as when a covering map degenerates at a point of a space, with some collapsing of the fibers of the mapping.
In complex analysis
See also: Branch point
In complex analysis, the basic model can be taken as the z → zn mapping in the complex plane, near z = 0. This is the standard local picture in Riemann surface theory, of ramification of order n. It occurs for example in the Riemann–Hurwitz formula for the effect of mappings on the genus.
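A quick numerical illustration of this model (my own, not from the article): counting preimages of z → z^n shows n distinct sheets over any nonzero point collapsing to a single point over 0.

# The fiber of z -> z^n over w is the set of roots of z^n - w
# (sketch mine, using NumPy).
import numpy as np

n = 5

def preimages(w):
    # Roots of the polynomial z^n - w.
    return np.roots([1] + [0] * (n - 1) + [-w])

print(len(set(np.round(preimages(1.0), 8))))  # 5 -- n distinct sheets over w = 1
print(len(set(np.round(preimages(0.0), 8))))  # 1 -- all n sheets meet at z = 0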
In algebraic topology
In a covering map the Euler–Poincaré characteristic should multiply by the number of sheets; ramification can therefore be detected by some dropping from that. The z → zn mapping shows this as a local pattern: if we exclude 0, looking at 0 < |z| < 1 say, we have (from the homotopy point of view) the circle mapped to itself by the n-th power map (Euler–Poincaré characteristic 0), but with the whole disk the Euler–Poincaré characteristic is 1, n – 1 being the 'lost' points as the n sheets come together at z = 0.
In geometric terms, ramification is something that happens in codimension two (like knot theory, and monodromy); since real codimension two is complex codimension one, the local complex example sets the pattern for higher-dimensional complex manifolds. In complex analysis, sheets can't simply fold over along a line (one variable), or codimension one subspace in the general case. The ramification set (branch locus on the base, double point set above) will be two real dimensions lower than the ambient manifold, and so will not separate it into two 'sides', locally―there will be paths that trace round the branch locus, just as in the example. In algebraic geometry over any field, by analogy, it also happens in algebraic codimension one.
In algebraic number theory
In algebraic extensions of the rational numbers
See also: Splitting of prime ideals in Galois extensions
Ramification in algebraic number theory means a prime ideal factoring in an extension so as to give some repeated prime ideal factors. Namely, let ${\mathcal {O}}_{K}$ be the ring of integers of an algebraic number field $K$, and ${\mathfrak {p}}$ a prime ideal of ${\mathcal {O}}_{K}$. For a field extension $L/K$ we can consider the ring of integers ${\mathcal {O}}_{L}$ (which is the integral closure of ${\mathcal {O}}_{K}$ in $L$), and the ideal ${\mathfrak {p}}{\mathcal {O}}_{L}$ of ${\mathcal {O}}_{L}$. This ideal may or may not be prime, but for finite $[L:K]$, it has a factorization into prime ideals:
${\mathfrak {p}}\cdot {\mathcal {O}}_{L}={\mathfrak {p}}_{1}^{e_{1}}\cdots {\mathfrak {p}}_{k}^{e_{k}}$
where the ${\mathfrak {p}}_{i}$ are distinct prime ideals of ${\mathcal {O}}_{L}$. Then ${\mathfrak {p}}$ is said to ramify in $L$ if $e_{i}>1$ for some $i$; otherwise it is unramified. In other words, ${\mathfrak {p}}$ ramifies in $L$ if the ramification index $e_{i}$ is greater than one for some ${\mathfrak {p}}_{i}$. An equivalent condition is that ${\mathcal {O}}_{L}/{\mathfrak {p}}{\mathcal {O}}_{L}$ has a non-zero nilpotent element: it is not a product of finite fields. The analogy with the Riemann surface case was already pointed out by Richard Dedekind and Heinrich M. Weber in the nineteenth century.
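As a worked instance of this definition, take K = Q and L = Q(i), so that O_L = Z[i]; for these fields the factorization of a rational prime p in Z[i] mirrors that of x^2 + 1 modulo p. The sketch below is my own, using SymPy, and checks the three behaviours: ramified, split, and inert.

# Factor x^2 + 1 over F_p; an exponent e > 1 signals ramification of p in Z[i]
# (illustration mine).
from sympy import symbols, factor_list, Poly

x = symbols('x')

for p in (2, 5, 7):
    _, factors = factor_list(Poly(x**2 + 1, x, modulus=p))
    if any(e > 1 for _, e in factors):
        behaviour = "ramifies"      # e.g. (2) = (1 + i)^2 up to units
    elif len(factors) == 2:
        behaviour = "splits"        # two distinct prime ideals above p
    else:
        behaviour = "is inert"      # (p) stays prime in Z[i]
    print(p, behaviour, factors)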
The ramification is encoded in $K$ by the relative discriminant and in $L$ by the relative different. The former is an ideal of ${\mathcal {O}}_{K}$ and is divisible by ${\mathfrak {p}}$ if and only if some ideal ${\mathfrak {p}}_{i}$ of ${\mathcal {O}}_{L}$ dividing ${\mathfrak {p}}$ is ramified. The latter is an ideal of ${\mathcal {O}}_{L}$ and is divisible by the prime ideal ${\mathfrak {p}}_{i}$ of ${\mathcal {O}}_{L}$ precisely when ${\mathfrak {p}}_{i}$ is ramified.
The ramification is tame when the ramification indices $e_{i}$ are all relatively prime to the residue characteristic p of ${\mathfrak {p}}$, otherwise wild. This condition is important in Galois module theory. A finite generically étale extension $B/A$ of Dedekind domains is tame if and only if the trace $\operatorname {Tr} :B\to A$ is surjective.
In local fields
Main article: Ramification of local fields
The more detailed analysis of ramification in number fields can be carried out using extensions of the p-adic numbers, because it is a local question. In that case a quantitative measure of ramification is defined for Galois extensions, basically by asking how far the Galois group moves field elements with respect to the metric. A sequence of ramification groups is defined, reifying (amongst other things) wild (non-tame) ramification. This goes beyond the geometric analogue.
In algebra
Main article: Ramification theory of valuations
In valuation theory, the ramification theory of valuations studies the set of extensions of a valuation of a field K to an extension field of K. This generalizes the notions in algebraic number theory, local fields, and Dedekind domains.
In algebraic geometry
There is also a corresponding notion of an unramified morphism in algebraic geometry; it serves to define étale morphisms.
Let $f:X\to Y$ be a morphism of schemes. The support of the quasicoherent sheaf $\Omega _{X/Y}$ is called the ramification locus of $f$ and the image of the ramification locus, $f\left(\operatorname {Supp} \Omega _{X/Y}\right)$, is called the branch locus of $f$. If $\Omega _{X/Y}=0$ we say that $f$ is formally unramified and if $f$ is also of locally finite presentation we say that $f$ is unramified (see Vakil 2017).
See also
• Eisenstein polynomial
• Newton polygon
• Puiseux expansion
• Branched covering
Look up ramification (mathematics) in Wiktionary, the free dictionary.
References
• Neukirch, Jürgen (1999). Algebraische Zahlentheorie. Grundlehren der mathematischen Wissenschaften. Vol. 322. Berlin: Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021.
• Vakil, Ravi (18 November 2017). The Rising Sea: Foundations of algebraic geometry (PDF). Retrieved 5 June 2019.
External links
• "Splitting and ramification in number fields and Galois extensions". PlanetMath.
Ramified forcing
In the mathematical discipline of set theory, ramified forcing is the original form of forcing introduced by Cohen (1963) to prove the independence of the continuum hypothesis from Zermelo–Fraenkel set theory. Ramified forcing starts with a model M of set theory in which the axiom of constructibility, V = L, holds, and then builds up a larger model M[G] of Zermelo–Fraenkel set theory by adding a generic subset G of a partially ordered set to M, imitating Kurt Gödel's constructible hierarchy.
Dana Scott and Robert Solovay realized that the use of constructible sets was an unnecessary complication, and could be replaced by a simpler construction similar to John von Neumann's construction of the universe as a union of sets Vα for ordinals α. Their simplification was originally called "unramified forcing" (Shoenfield 1971), but is now usually just called "forcing". As a result, ramified forcing is only rarely used.
References
• Cohen, P. J. (1966), Set Theory and the Continuum Hypothesis, Menlo Park, CA: W. A. Benjamin.
• Cohen, Paul J. (1963), "The Independence of the Continuum Hypothesis", Proceedings of the National Academy of Sciences of the United States of America, 50 (6): 1143–1148, Bibcode:1963PNAS...50.1143C, doi:10.1073/pnas.50.6.1143, ISSN 0027-8424, JSTOR 71858, PMC 221287, PMID 16578557.
• Shoenfield, J. R. (1971), "Unramified forcing", Axiomatic Set Theory, Proc. Sympos. Pure Math., vol. XIII, Part I, Providence, R.I.: Amer. Math. Soc., pp. 357–381, MR 0280359.
Ramin Takloo-Bighash
Ramin Takloo-Bighash (born 1974) is a mathematician who works in the field of automorphic forms and Diophantine geometry and is a professor at the University of Illinois at Chicago.
Ramin Takloo-Bighash
Nationality: American
Alma mater: Sharif University of Technology; Johns Hopkins University
Known for: Spinor L-functions, rational points
Scientific career
Fields: Number theory, arithmetic geometry, harmonic analysis
Institutions: University of Illinois at Chicago; Princeton University
Doctoral advisor: Joseph Shalika
Website: http://homepages.math.uic.edu/~rtakloo/
Mathematical career
Takloo-Bighash graduated from the Sharif University of Technology, where he enrolled after winning a silver medal at the 1992 International Mathematical Olympiad. In 2001 he completed his doctorate under Joseph Shalika at Johns Hopkins University. He spent 2001–2007 at Princeton University, first as an instructor and then as an assistant professor. He is a professor at the University of Illinois at Chicago.
Research
Takloo-Bighash computed the local factors of the spinor L-function attached to generic automorphic forms on the symplectic group GSp(4). He has written joint papers with Joseph Shalika and Yuri Tschinkel on the distribution of rational points on certain group compactifications. He is a co-author, with Steven J. Miller, of An Invitation to Modern Number Theory (Princeton University Press, 2006).
Books
• Takloo-Bighash, Ramin (2018). A Pythagorean Introduction to Number Theory. Undergraduate Texts in Mathematics. Springer Publishing. p. XVIII, 279. doi:10.1007/978-3-030-02604-2. ISBN 978-3-030-02603-5.
• Miller, Steven; Takloo-Bighash, Ramin (2006). An Invitation to Modern Number Theory. United States: Princeton University Press. p. 526. ISBN 9780691120607.
External links
• Takloo-Bighash's web page at UIC
• Ramin Takloo-Bighash at the Mathematics Genealogy Project
• Ramin Takloo-Bighash's results at International Mathematical Olympiad
Mathematics in Iran
Mathematicians
Before
20th Century
• Abu al-Wafa' Buzjani
• Jamshīd al-Kāshī (al-Kashi's theorem)
• Omar Khayyam (Khayyam-Pascal's triangle, Khayyam-Saccheri quadrilateral, Khayyam's Solution of Cubic Equations)
• Al-Mahani
• Muhammad Baqir Yazdi
• Nizam al-Din al-Nisapuri
• Al-Nayrizi
• Kushyar Gilani
• Ayn al-Quzat Hamadani
• Al-Isfahani
• Al-Isfizari
• Al-Khwarizmi (Al-jabr)
• Najm al-Din al-Qazwini al-Katibi
• Nasir al-Din al-Tusi
• Al-Biruni
Modern
• Maryam Mirzakhani
• Caucher Birkar
• Sara Zahedi
• Farideh Firoozbakht (Firoozbakht's conjecture)
• S. L. Hakimi (Havel–Hakimi algorithm)
• Siamak Yassemi
• Freydoon Shahidi (Langlands–Shahidi method)
• Hamid Naderi Yeganeh
• Esmail Babolian
• Ramin Takloo-Bighash
• Lotfi A. Zadeh (Fuzzy mathematics, Fuzzy set, Fuzzy logic)
• Ebadollah S. Mahmoodian
• Reza Sarhangi (The Bridges Organization)
• Siavash Shahshahani
• Gholamhossein Mosaheb
• Amin Shokrollahi
• Reza Sadeghi
• Mohammad Mehdi Zahedi
• Mohsen Hashtroodi
• Hossein Zakeri
• Amir Ali Ahmadi
Prize Recipients
Fields Medal
• Maryam Mirzakhani (2014)
• Caucher Birkar (2018)
EMS Prize
• Sara Zahedi (2016)
Satter Prize
• Maryam Mirzakhani (2013)
Organizations
• Iranian Mathematical Society
Institutions
• Institute for Research in Fundamental Sciences
Ramiro Rampinelli
Ramiro Rampinelli, born Lodovico Rampinelli (1697 – 1759), was an Italian mathematician and physicist. He was a monk in the Olivetan Order. He had a decisive influence on the spread of mathematical analysis, algebra and mathematical physics in the best universities of Italy.[1] He is one of the best known Italian scholars in the field of infinitesimal mathematics of the first half of the 18th century.
Biography
He was born in Brescia into the noble Rampinelli family and educated by the Jesuits; he learned the rudiments of mathematics from Giovan Battista Mazini.[2]
He studied first at the University of Bologna, where he was a disciple of Gabriele Manfredi, and took his monastic vows on 1 November 1722 at San Michele in Bosco.[1]
In 1727, after a brief stay at the Monastery of St. Helen in Venice, he entered the Abbey of St. Benedict in Padua, where he made the acquaintance of the best known professors of mathematics at the University of Padua, such as Marquess Giovanni Poleni and Count Jacopo Riccati; he formed a lasting friendship with the latter's family.[3]
In 1731 he was in Rome for a year, spending time with Celestino Galiani and Antonio Leprotti, studying subjects including architecture.[1]
After a period at the University of Naples Federico II, during which time he was always in contact with the best mathematicians, such as Nicola Antonio De Martino, he was assigned by his superiors to the University of Pavia for a year. He then returned to the University of Bologna in 1733, to teach mathematics.[1] Here he completed his Istituzioni Fisiche con il metodo analitico.[4]
In 1740, after a stay at the monastery of St. Francis in Brescia, he transferred to the Olivetan monastery of San Vittore al Corso in Milan, where he was also mathematics tutor to the noblewoman Maria Gaetana Agnesi, who remembered him with gratitude in the preface to her Instituzioni Analitiche per la gioventù d'Italia.[5]
In 1747, the Senate of Milan appointed him (at double salary) to the chair in Mathematics and Physics at the University of Pavia.[6] His expertise in river hydraulics also earned him the appointment as supervisor both for the construction of the Pavia-Milan canal and for the construction of the embankment to contain the Po River at Parpanese, in the Oltrepò Pavese.[4]
In 1758 his Lectiones opticæ Ramiri Rampinelii brixiani Congregationis Montis Oliveti monachi et in gymnasio Ticinensi Matheseos Professoris was published with the prestigious Brescia printer Bossini.[1] This work on optics was to have been followed by Trigonometria and Applicazione dei principi matematici alla fisica pratica, but Rampinelli suffered a stroke on 10 April 1758.[7]
After a short period of recuperation in Brescia, he returned to the monastery of San Vittore al Corso in Milan, where, on 8 February 1759, he had a second stroke and died.[4]
Giordano Riccati wrote in a supplement to his eulogy dated 9 January 1760:
In him were united doctrine and an indescribable modesty, and firm religious faith accompanied by all the moral and Christian virtues. His only thoughts were ever to fulfill the obligations of his own condition, and study his only innocent passion, by which he let himself be dominated, virtuously directing it outward in indefatigable service of his Religion and the Public. He dedicated himself willingly to others' benefit, and of benefits received, an indelible, grateful memory was preserved.[8]
Works
• Lectiones opticæ Ramiri Rampinelii brixiani Congregationis Montis Oliveti monachi et in gymnasio Ticinensi Matheseos Professoris. Brixiæ: excudebat Joannes Baptista Bossini. 1760.
Other works by Rampinelli, said by contemporaries to be preserved in manuscript at the monastery of San Vittore in Milan, are now lost.[1]
• Applicazione de' principi alla fisica pratica
• Trattato di trigonometria piana e sferica
• Istituzioni Fisiche con il metodo analitico
• Trattato di idrostatica (ad integrazione delle istituzioni fisiche)
References
1. A. Fappani, Enciclopedia Bresciana, Brescia: La Voce del Popolo, 1997 (in Italian)
2. P. Guerrini, La scuola cattolica, XVII, 1919.
3. D. Bonsi, Giordano Riccati, illuminista veneto ed europeo, Florence: Olschki, 2012 (in Italian)
4. Carlo Succi, Un Matematico Bresciano Ramiro Rampinelli Monaco Olivetano 1697–1759, Rodengo-Saiano (BS): Centro storico olivetano / Brescia: Ateneo di Brescia, 1992, OCLC 797874024 (in Italian) (pdf)
5. Giovanna Tilche, Maria Gaetana Agnesi: la scienziata santa del Settecento, Milan: Rizzoli, 1984, ISBN 9788817537841 (in Italian)
6. U. Baldini, Economia, istituzioni, cultura in Lombardia nell'Età di M. Teresa, Milan: Il Mulino, 1980 (in Italian)
7. P. Guerrini, Il maestro di M. G. Agnesi, Brescia, 1918 (in Italian)
8. Giordano Riccati, "Supplemento all'elogio del P.D.R. Rampinelli", Nuove memorie per servire alla Storia Letteraria, Venice, 1760: "Accoppiò egli colla dottrina una indicibile modestia, ed una soda religione accompagnata da tutte le virtù morali e cristiane. Furono sempre gli unici suoi pensieri l'adempiere gli obblighi del proprio stato, e lo studio unica innocente passione, da cui si lasciò dominare, indirizzandola per altro virtuosamente al servigio indefesso della sua Religione, e del Pubblico. S'impegnava volentieri in giovamento altrui, e dei ricevuti benefici ne conservava indelebile, grata memoria."
Sources and further reading
• Excerpta Totius Italiae necnon Helvetia littératoria Vol. III - 1759
• C. G. Pozzi. "Elogio del P.D. Ramiro Rampinelli Bresciano". Giornale de' Letterati, Rome, 1760
• F. Torricelli. "De Vita Rampinelli Epistola". in Lectiones Opticae. Brescia, 1760
• A. Fabroni. Vitae Italorum doctrina excellentium. Vol. VIII. Pisa, 1781
• F. Mandelli. Nuova raccolta di opuscoli scientifici e filosofici. Ed. A. Calogerà. Vol. XL. Venice, 1784
• A. Brognoli. Elogi de' Bresciani per dottrina eccellenti nel secolo XVIII. Brescia, 1785
• P. Verri. Memorie appartenenti alla vita ed agli studi di P. Frisi. Milan, 1787
• A. F. Frisi. Elogio storico di Donna M. G. Agnesi Milanese. Milan: Galeazzi, 1799
• V. Peroni. Biblioteca Bresciana. Vol. III. Brescia, 1821
• P. Gambara. Ragionamenti di cose patrie. Vol. IV. Brescia, 1840
• J. C. Poggendorf. Biographisch-literarisches Handwörterbuch zur Geschichte der exakten Wissenschaften. Vol. II. Leipzig, 1863
• C. Cocchetti. Del movimento intellettuale nella provincia di Brescia. Brescia, 1880
• U. Baldini. "L'insegnamento fisico matematico a Pavia alle soglie dell'età Teresiana". In Economia, istituzioni, cultura in Lombardia nell'età di M. Teresa. Vol. III. Milan: Il Mulino, 1980
Ramon E. Moore
Ramon Edgar (Ray) Moore (December 27, 1929 – April 1, 2015)[1] was an American mathematician, known for his pioneering work in the field of interval arithmetic.
Moore received an AB degree in physics from the University of California, Berkeley in 1950, and a PhD in mathematics from Stanford University in 1963. His early career included work on the earliest computers (including ENIAC). He was awarded the Humboldt Research Award for U.S. senior scientists twice, in 1975 and 1980.[1]
His best-known work is his first book, Interval Analysis, published in 1966. He wrote several more books and many journal articles and technical reports.[2][3][4]
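The core idea of the interval arithmetic Moore pioneered is easy to sketch. The class below is a minimal illustration of mine, not Moore's code, and omits the outward rounding a rigorous implementation needs: each operation returns an interval guaranteed to enclose every pointwise result.

# A minimal interval-arithmetic sketch (assumptions mine; no directed rounding).
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product is bounded by the extreme pairwise products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

x = Interval(1.0, 2.0)
y = Interval(-1.0, 3.0)
print(x + y)  # Interval(lo=0.0, hi=5.0)
print(x * y)  # Interval(lo=-2.0, hi=6.0)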
R. E. Moore Prize
The R. E. Moore Prize for Applications of Interval Analysis is an award in the interdisciplinary field of rigorous numerics. It is awarded biennially by the Computer Science Department at the University of Texas at El Paso,[5] and judged by the editorial board of the journal Reliable Computing.[6] The award was named in honor of Moore's contributions to interval analysis.[7]
Laureates
Year Name Citation
2002 Warwick Tucker Dr. Tucker has proved, using interval techniques, that the renowned Lorenz equations do in fact possess a strange attractor. This problem, Smale's 14th problem, is of particular note in large part because the Lorenz model is widely recognized as signaling the beginning of chaos theory.[8]
2004 Thomas C. Hales Dr. Hales solved the long-standing Kepler conjecture by using interval arithmetic. His preliminary results appeared in the Notices of the American Math Society in 2000; his full paper "The Kepler Conjecture" will appear in Annals of Mathematics, one of the world's leading journals in pure mathematics.[9]
2006 not awarded[10]
2008 Kyoko Makino and Martin Berz For their paper "Suppression of the Wrapping Effect by Taylor Model-based Verified Integrators: Long-term Stabilization by Preconditioning" published in International Journal of Differential Equations and Applications in 2005 (Vol. 10, No. 4, pp. 353–384).[11]
2012 Luc Jaulin For his paper "A nonlinear set-membership approach for the localization and map building of an underwater robot using interval constraint propagation" published in IEEE Transactions on Robotics in 2009 (Vol. 25, No. 1, pp. 88–98).[12]
2014 Kenta Kobayashi For his paper "Computer-Assisted Uniqueness Proof for Stokes' Wave of Extreme Form" published in Nankai Series in Pure, Applied Mathematics and Theoretical Physics in 2013 (Vol. 10, pp. 54–67).[13]
2016 Balazs Banhelyi, Tibor Csendes, Tibor Krisztin, and Arnold Neumaier For their paper "Global attractivity of the zero solution for Wright's equation" published in SIAM Journal on Applied Dynamical Systems in 2014 (Vol. 13, No. 1, pp. 537–563).[14]
2018 Jordi-Lluís Figueras, Alex Haro and Alejandro Luque For their paper "Rigorous Computer-Assisted Application of KAM Theory: A Modern Approach", published in Foundations of Computational Mathematics in 2017 (Vol. 17, No. 5, pp. 1123–1193).[15]
See also
• List of mathematics awards
References
1. "Ramon E. Moore (1929–2015)" (PDF). Reliable Computing. 2016.
2. Reviews of Interval Analysis:
• Richtmeyer, R. D. (1968). Mathematics of Computation. 22 (101): 219–212. JSTOR 2004792.
• Alefeld, Goetz (2011). SIAM Review. 53 (2): 380–381. JSTOR 23065173.
• Traub, J. F. (1967). Science. 158 (3799): 365. Bibcode:1967Sci...158..365M. doi:10.1126/science.158.3799.365. JSTOR 1722775.
• Hanson, Eldon (1967). SIAM Review. 9 (3): 610–612. JSTOR 2028021.
3. Review of Introduction to Interval Analysis:
• Gavrilyuk, I. P. (2010). Mathematics of Computation. 79 (269): 615–616. doi:10.1090/S0025-5718-09-02327-8. JSTOR 40590421.
4. Review of Methods and Applications of Interval Analysis:
• Hanson, Eldon (1981). SIAM Review. 23 (1): 121–123. JSTOR 2029862.
5. "The R. E. Moore Prize for Applications of Interval Analysis: Description and Rationale". Department of Computer Science, University of Texas at El Paso. Retrieved May 17, 2019.
6. "Reliable Computing - Springer". link.springer.com. Retrieved 2018-08-13.
7. "RE Moore Prize" (in Japanese). Retrieved May 17, 2019.
8. "Warwick Tucker Receives First R. E. Moore Prize". www.cs.utep.edu. Retrieved 2018-08-13.
9. "Thomas C. Hales Receives Second R. E. Moore Prize". www.cs.utep.edu. Retrieved 2018-08-13.
10. Department of Physics and Astronomy, University of Michigan. "R. E. Moore Prize for Applications of Interval Analysis". Retrieved May 17, 2019.
11. "Kyoko Makino and Martin Berz Will Receive Third R. E. Moore Prize". www.cs.utep.edu. Retrieved 2018-08-13.
12. "Luc Jaulin Awarded Receive Fourth R. E. Moore Prize". www.cs.utep.edu. Retrieved 2018-08-13.
13. "Kenta Kobayashi Receives Fifth R. E. Moore Prize". www.cs.utep.edu. Retrieved 2018-08-13.
14. "Balazs Banhelyi, Tibor Csendes, Tibor Krisztin, and Arnold Neumaier Receive Sixth R. E. Moore Prize". www.cs.utep.edu. Retrieved 2018-08-13.
15. "Jordi-Lluís Figueras, Alex Haro and Alejandro Luque Receive Seventh R. E. Moore Prize". www.cs.utep.edu. Retrieved 2020-03-09.
Further reading
• Moore, Ramon E. (1966). Interval Analysis. Prentice-Hall.
External links
• Ramon E. Moore publications indexed by Google Scholar
• Faculty webpage
• R. E. Moore Prize
• Ramon E. Moore at the Mathematics Genealogy Project
Ramon Picarte Mujica
Manuel Felipe Ramón Picarte Mujica, better known as Ramón Picarte Mujica (June 9, 1830 – 1884?) was a Chilean scientist.
Manuel Felipe Ramón Picarte Mujica
Born: Santiago, Chile
Died: France
Nationality: Chilean
Education: General José Miguel Carrera National Institute
Alma mater: University of Chile
Spouse: Clorinda Pardo
Scientific career
Fields: Mathematics
Thesis: "The importance of life insurance, and related projects susceptible of application in Chile" (Importancia de la Institución de Seguros de la Vida, y proyectos sobre el particular que son susceptibles de establecerse en Chile) (1862)
Early life
Picarte was born on 9 June 1830, to father Ramon Picarte and mother Carmen Mujica. His father, a colonel in the independence army, had an outstanding career under the command of José Miguel Carrera: he took part in many of the actions that led to Chilean independence, became commander of the garrison of Valparaíso, and later mayor of Valdivia. As a liberal in the newly formed nation, he opposed Diego Portales, whom he considered authoritarian and elitist. As a result of this, he was expelled from the army and died in poverty in 1830. The temperament of his father would later find expression in Picarte.
Education
Little is known of Picarte's early education. The subjects studied in Chile at the time were reading, writing, Christian doctrine, arithmetic (addition, subtraction, division, multiplication) and morality and etiquette. That was considered more than enough for ordinary citizens.[1]
His secondary education was better. At the time, secondary education in Chile was divided into two basic courses: humanities (intended for future lawyers) and mathematics (for future surveyors).[2] The former was a longer course but offered a more promising future.[3]
Picarte studied in one of the most prestigious and old state schools in Chile, the General José Miguel Carrera National Institute from 1840. Some of his classmates would become illustrious figures, including Guillermo, Joaquín and Alberto Blest Gana; Víctor and Miguel Luis Amunátegui; Diego Barros Arana, one of the fathers of Chilean historiography; Eusebio Lillo, poet and the composer of the national anthem, and Pedro León Gallo Goyenechea, a prominent politician.[4]
Picarte began his studies on the humanities course, but soon after finishing the Roman law module, he switched to maths. Fortunately for him, he had an excellent maths teacher in Andrés Antonio Gorbea. For Professor Gorbea, maths was an essential part of education and should not be limited to the requirements of future surveyors. He based his teaching on the book "A complete course of pure mathematics"[5] by Louis Benjamin Francoeur, a professor of the French Academy of Sciences. Picarte had lectures on topics including analytic geometry, probability theory, algebra, series, differential calculus and integrals. He obtained a degree as surveyor in 1852.
Career
Early work
In 1854, Picarte became a mathematics professor at the military academy of the Chilean Army. In those days, mathematics was dependent on mathematical tables, which were as important for mathematicians as computers are today. There were just a few limited division and multiplication tables available in Chile at the time. Picarte extended those tables to include numbers up to 10,000, and in doing so greatly improved the accuracy and reach of the tables.
He translated and improved the most commonly used logarithm tables in Chile, and invented a new way of calculating divisions by creating a table that allowed mathematicians to divide any number up to 10,000 with a simple sum (the sketch below illustrates the principle). He also improved the Lalande logarithm tables, which were widely used by engineers, architects, surveyors, merchants, or anyone needing to solve complex mathematical problems.
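To illustrate the general principle behind such tables (a sketch of my own; Picarte's actual table layout is not reproduced here), division by way of a logarithm table needs only table lookups and one subtraction; with a table of logarithms of reciprocals, that subtraction becomes the "simple sum" described above.

# Table-based division via logarithms (illustration mine).
import math

# A small stand-in for a printed table of base-10 logarithms up to 10,000.
log_table = {n: math.log10(n) for n in range(1, 10001)}

def divide(a, b):
    # log10(a / b) = log10(a) - log10(b): two lookups, one subtraction,
    # then an inverse (antilogarithm) lookup, simulated here with 10**.
    return 10 ** (log_table[a] - log_table[b])

print(divide(9996, 42))  # ~238.0, with no long division performed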
Picarte asked other Chilean mathematicians to examine his work but did not receive an enthusiastic response. He tried to sell the copyright for it at a very low price so that it could be published and distributed, but was unable to find a buyer. He then applied for support from the government but received the same response as from his peers: indifference and incredulity. This may have been because there were very few mathematicians in Chile at the time able to review or check his work. Lalande, who had produced the existing, widely distributed logarithm tables, was also a famous French astronomer and mathematician, so there was some resistance to replacing his tables with something new.
Essentially, Picarte's work did not align with the needs of president Manuel Montt's development project for the newly born nation. There was no government financial support for scientific or mathematical research, and Lalande's existing tables were sufficient for Pedro José Amadeo Pissis's work on topographical maps and for the astronomical studies carried out by Carlos Moesta, director of the newly created Santa Lucía Hill observatory.
Success in France
Confident in the value of his invention despite the lack of recognition in Chile, Picarte decided to leave his homeland. He travelled to Peru in 1857, where he also failed to find a publisher. He stayed for two months taking on whatever casual work he could find, and begged his Chilean acquaintances for money to continue onwards, first to Panama, and then to Southampton, England. Once in England, he sold his watch in order to complete the last leg of his journey to France.
In Paris, with no more than his tables and the clothes on his back, he followed the advice of other mathematicians and spent five months reviewing and compiling his work in order to present it to the French Academy of Sciences. He finally presented his work on 15 February 1859 and received warm praise from the academics. The report of this session was signed by famous members of the academy: Mathieu, Bienaymé and Charles Hermite.[6] Picarte's mathematical tables were welcomed in Europe. He stayed in France for some time and received some income from selling the copyright of his work. Soon, the government of Chile realized the magnitude of his achievement and offered him official recognition and a cash prize via the consulate in France.
Return to Chile
Picarte returned to Chile in 1862. In October of that year he joined the Faculty of Physics and Maths at the University of Chile, although he didn't teach classes until the beginning of 1890. Free of financial problems, he dedicated his life to his studies. He patented three inventions: A siphon pump, a steam siphon and a steam reciprocating pump.[7]
When he returned to Chile, Picarte was preoccupied with the social problems that he saw around him. In France he had learned about the socialist ideas of Charles Fourier (the social theorist, not to be confused with the mathematician Joseph Fourier). These ideas had a great impact on him, and he dedicated himself to helping to solve Chile's welfare issues.
Observing the conditions in his own country and comparing them to what was happening in Europe, Picarte developed an interesting theory that he incorporated into his university thesis, called "The importance of life insurance, and related projects which are likely to be established in Chile" (Importancia de la Institución de Seguros de la Vida, y proyectos sobre el particular que son susceptibles de establecerse en Chile).[8] In his thesis, Picarte expressed his views as follows: "If this horrible state of things (misery) is a necessity in that sad European civilization, sustained only by poverty and selfishness; in America, continent of new republics, and especially in Chile, where the arteries of blundering speculation are not yet formed or solidified, it would be an eternal shame if we (society) can do something about it and don't."[9] In this report, Picarte criticizes the economic system and sets out the basis of a scientific approach to a future social program in Chile, after seeing how such organizations operated in Europe. Through mathematics, he proposes that it is scientifically proven that these social institutions are possible ("es un hecho matemáticamente probado que son posibles").
In the years that followed, Picarte dedicated himself to this project, though again he received almost no support. But he was not just an intellectual: in 1863, he organized a union for tailors, and another for shoemakers. In 1864, he took his ideas to a larger scale with the "Sociedad Trabajo para Todos" (Society of Work for All), a production and mutual support organization with a people's savings bank, similar to a cooperative and organized according to his theories. His leaflets claimed that the organization would provide affordable and healthy food for everyone, eliminating intermediaries, and would reduce housing costs by sub-letting houses and rooms from other members of the organization at lower prices. The organization also promoted work, encouraging its members to produce and exchange goods within the organization. Picarte expected the organization to produce some income and, with that, to expand its benefits to other areas. In order to raise capital for his plans, he waited in his office every day from 12 to 3 to receive anyone who wanted to partner with him, but no one supported or financed his project.
In 1865 Picarte obtained a degree in law and became a lawyer. His degree thesis discussed similar social issues and highlighted the need to organize state finances in order to be a truly independent nation.
"All that is needed is that want it, that we believe we are now capable of being such men; that we leave behind the sad concerns that have made us view what comes from that out-of-date Europe with a kind of respect.".[10]
As a lawyer, Picarte offered his services for free to those who were unable to pay. He also wrote and published leaflets that explained the key rights enshrined in the Chilean Civil Code in simple words, putting them within reach of those with no knowledge of the law.
Around this time, Picarte moved to the southern Chilean town of San Carlos. Then, in 1866, he moved to Chillán, where in 1869 he married Clorinda Pardo, the daughter of a Chilean army colonel. It is not known if the couple had any children. He made the news again in 1883 when he published "Large logarithm tables to twelve decimal places" (Grandes Tablas de Logaritmos a doce decimales) in Chile and France, financed by the Chilean government. He then travelled back to France, after which there is no further record of him.[11]
Sources
• Almendras, Domingo, "Desarrollo de los estudios de Matemáticas en Chile antes de 1930", Folleto.
• Amunátegui Solar, Domingo, "Los primeros años del Instituto Nacional (1813–1835)", Santiago, Imprenta Cervantes, 1889.
• Amunátegui Solar, Domingo, "El Instituto Nacional bajo los rectorados de don Francisco Puente, don Manuel Montt y don Antonio Varas (1835–1845)", Santiago, Imprenta Cervantes, 1891.
• Barlow, Peter, "Tables of squares, cubes, square roots, cube roots and reciprocals of all integers up to 12,500," fourth edition, by L.J. Comrie, Chemical Publ. Co., New York, 1954. (First edition 1819).
• Bell, Eric T., "Los grandes matemáticos", Ed. Losada, 1948.
References
1. Decreto ley July 12, 1832. Programa de Educación Primaria, Anales, 1845(?), p. 30. retrieved on January 15, 2015
2. Las matemáticas en Chile en el siglo XIX, Archived 2009-09-25 at the Wayback Machine, http://escuela.med.puc.cl/, retrieved on January 17, 2015
3. Decreto ley July 12, 1832. Programa de Educación Primaria, Anales, 1845(?), p. 30. retrieved on January 15, 2015
4. Estudiar matemáticas en los comienzos de la República picarte.cl retrieved on January 17, 2015
5. A complete course of pure mathematics (1829) archive.org – 2014, retrieved on January 17, 2015
6. Nadie es profeta en su tierra – La proeza de hacer matematicas en Chile Claudio Gutiérrez, Departamento de Ciencias de la Computación, Universidad de Chile, Flavio Gutiérrez Universidad de Valparaíso, retrieved on January 15, 2015
7. Nadie es profeta en su tierra – La proeza de hacer matematicas en Chile, Claudio Gutiérrez, Departamento de Ciencias de la Computación, Universidad de Chile; Flavio Gutiérrez, Universidad de Valparaíso, retrieved on January 17, 2015
8. Anales del año 1862 – Memoria "Proyectos sobre seguros de vida: discurso'", Imprenta Nacional, 1862 retrieved on January 15, 2015
9. Matemáticas y bienestar social: "si este horrible estado de cosas (la miseria) es una necesidad en esa triste civilización europea que sólo se sostiene con la pobreza i el egoísmo, en América, continente de Repúblicas nuevas, i en Chile especialmente donde todavía no se han formado o no tienen consistencia las arterias de torpe especulación, sería una eterna vergüenza si pudiendo hacer algo útil a este respecto (la asociación), no lo realizamos." Claudio Gutiérrez, Departamento de Ciencias de la Computación, Universidad de Chile; Flavio Gutiérrez, Universidad de Valparaíso, retrieved on January 17, 2015
10. "Sólo se necesita que lo queramos, que nos creamos ya hombres capaces de ser tales; que dejemos pronto esa tristes preocupaciones que nos hacen mirar con cierto respeto lo que viene de esa caduca Europa" from his thesis, published on the Anales 1865, "Estudio sobre Bancos de Emisión, Imprenta Nacional, 1865 retrieved on January 16, 2015
11. "actualmente se encuentra en París, y en la ed. de 1897: ``Ha permanecido en París consagrado a sus estudios científicos. P. P. Figueroa Dictionary 1888 edition, retrieved on January 16, 2015
Ramp function
The ramp function is a unary real function, whose graph is shaped like a ramp. It can be expressed by numerous definitions, for example "0 for negative inputs, output equals input for non-negative inputs". The term "ramp" can also be used for other functions obtained by scaling and shifting, and the function in this article is the unit ramp function (slope 1, starting at 0).
In mathematics, the ramp function is also known as the positive part.
In machine learning, it is commonly known as a ReLU activation function[1][2] or a rectifier in analogy to half-wave rectification in electrical engineering. In statistics (when used as a likelihood function) it is known as a tobit model.
This function has numerous applications in mathematics and engineering, and goes by various names, depending on the context. There are differentiable variants of the ramp function.
Definitions
The ramp function ($R(x)\colon \mathbb {R} \to \mathbb {R} _{0}^{+}$) may be defined analytically in several ways. Possible definitions are:
• A piecewise function:
$R(x):={\begin{cases}x,&x\geq 0;\\0,&x<0\end{cases}}$
• The max function:
$R(x):=\max(x,0)$
• The mean of an independent variable and its absolute value (a straight line with unity gradient and its modulus):
$R(x):={\frac {x+|x|}{2}}$
this can be derived by noting the following definition of max(a, b),
$\max(a,b)={\frac {a+b+|a-b|}{2}}$
for which a = x and b = 0.
• The Heaviside step function multiplied by a straight line with unity gradient:
$R\left(x\right):=xH(x)$
• The convolution of the Heaviside step function with itself:
$R\left(x\right):=H(x)*H(x)$
• The integral of the Heaviside step function:[3]
$R(x):=\int _{-\infty }^{x}H(\xi )\,d\xi $
• Macaulay brackets:
$R(x):=\langle x\rangle $
• The positive part of the identity function:
$R:=\operatorname {id} ^{+}$
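These definitions agree pointwise, which is easy to check numerically. A minimal sketch in Python (NumPy assumed available; the variable names are illustrative):

```python
import numpy as np

x = np.linspace(-3, 3, 13)

ramp_piecewise = np.where(x >= 0, x, 0.0)   # piecewise definition
ramp_max       = np.maximum(x, 0.0)         # max(x, 0)
ramp_mean_abs  = (x + np.abs(x)) / 2        # (x + |x|)/2
ramp_heaviside = x * np.heaviside(x, 1.0)   # x·H(x); the value chosen for H(0) is irrelevant since 0·H(0) = 0

assert np.allclose(ramp_piecewise, ramp_max)
assert np.allclose(ramp_max, ramp_mean_abs)
assert np.allclose(ramp_mean_abs, ramp_heaviside)
```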
Applications
The ramp function has numerous applications in engineering, such as in the theory of digital signal processing.
In finance, the payoff of a call option is a ramp (shifted by strike price). Horizontally flipping a ramp yields a put option, while vertically flipping (taking the negative) corresponds to selling or being "short" an option. In finance, the shape is widely called a "hockey stick", due to the shape being similar to an ice hockey stick.
In statistics, hinge functions of multivariate adaptive regression splines (MARS) are ramps, and are used to build regression models.
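Both of these applications reduce to evaluating shifted or reflected copies of the ramp. A small illustration in Python (NumPy assumed; the strike value is an arbitrary example):

```python
import numpy as np

def ramp(x):
    return np.maximum(x, 0.0)

S = np.linspace(0.0, 200.0, 5)   # terminal asset prices
K = 100.0                        # strike price (example value)

call_payoff = ramp(S - K)                 # long call: ramp shifted right by the strike
put_payoff  = ramp(K - S)                 # long put: horizontally flipped ramp
short_call  = -ramp(S - K)                # vertically flipped ramp: a short option position
hinges      = (ramp(S - K), ramp(K - S))  # the MARS hinge-function pair with knot K
```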
Analytic properties
Non-negativity
The function is non-negative on its whole domain, so its absolute value is itself, i.e.
$\forall x\in \mathbb {R} :R(x)\geq 0$
and
$\left|R(x)\right|=R(x)$
Proof
By definition 2 ($R(x)=\max(x,0)$), $R(x)=x\geq 0$ for $x\geq 0$ and $R(x)=0$ for $x<0$; hence it is non-negative everywhere.
Derivative
Its derivative is the Heaviside step function:
$R'(x)=H(x)\quad {\mbox{for }}x\neq 0.$
Second derivative
The ramp function satisfies the differential equation:
${\frac {d^{2}}{dx^{2}}}R(x-x_{0})=\delta (x-x_{0}),$
where δ(x) is the Dirac delta. This means that R(x) is a Green's function for the second derivative operator. Thus, any function, f(x), with an integrable second derivative, f″(x), will satisfy the equation:
$f(x)=f(a)+(x-a)f'(a)+\int _{a}^{b}R(x-s)f''(s)\,ds\quad {\mbox{for }}a<x<b.$
Fourier transform
${\mathcal {F}}{\big \{}R(x){\big \}}(f)=\int _{-\infty }^{\infty }R(x)e^{-2\pi ifx}\,dx={\frac {i\delta '(f)}{4\pi }}-{\frac {1}{4\pi ^{2}f^{2}}},$
where δ(x) is the Dirac delta (in this formula, its derivative appears).
Laplace transform
The single-sided Laplace transform of R(x) is given as follows,[4]
${\mathcal {L}}{\big \{}R(x){\big \}}(s)=\int _{0}^{\infty }e^{-sx}R(x)dx={\frac {1}{s^{2}}}.$
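This transform can be checked symbolically. A minimal sketch with SymPy (assuming SymPy is available; on the half-line x ≥ 0 the ramp coincides with x, which is all the single-sided transform sees):

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)

# On [0, oo), R(x) = x, so the single-sided Laplace transform of the ramp
# equals the transform of x itself.
F, _, _ = sp.laplace_transform(x, x, s)
print(F)  # 1/s**2
```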
Algebraic properties
Iteration invariance
Every iterate of the ramp mapping is the ramp itself, as
$R{\big (}R(x){\big )}=R(x).$
Proof
$R{\big (}R(x){\big )}:={\frac {R(x)+|R(x)|}{2}}={\frac {R(x)+R(x)}{2}}=R(x).$
Here the non-negativity property was applied: $|R(x)|=R(x)$.
See also
• Tobit model
References
1. Brownlee, Jason (8 January 2019). "A Gentle Introduction to the Rectified Linear Unit (ReLU)". Machine Learning Mastery. Retrieved 8 April 2021.
2. Liu, Danqing (30 November 2017). "A Practical Guide to ReLU". Medium. Retrieved 8 April 2021.
3. Weisstein, Eric W. "Ramp Function". MathWorld.
4. "The Laplace Transform of Functions". lpsa.swarthmore.edu. Retrieved 2019-04-05.
Dvoretzky's theorem
In mathematics, Dvoretzky's theorem is an important structural theorem about normed vector spaces proved by Aryeh Dvoretzky in the early 1960s,[1] answering a question of Alexander Grothendieck. In essence, it says that every sufficiently high-dimensional normed vector space will have low-dimensional subspaces that are approximately Euclidean. Equivalently, every high-dimensional bounded symmetric convex set has low-dimensional sections that are approximately ellipsoids.
A new proof found by Vitali Milman in the 1970s[2] was one of the starting points for the development of asymptotic geometric analysis (also called asymptotic functional analysis or the local theory of Banach spaces).[3]
Original formulations
For every natural number k ∈ N and every ε > 0 there exists a natural number N(k, ε) ∈ N such that if (X, ‖·‖) is any normed space of dimension N(k, ε), there exists a subspace E ⊂ X of dimension k and a positive definite quadratic form Q on E such that the corresponding Euclidean norm
$|\cdot |={\sqrt {Q(\cdot )}}$
on E satisfies:
$|x|\leq \|x\|\leq (1+\varepsilon )|x|\quad {\text{for every}}\ x\in E.$
In terms of the multiplicative Banach–Mazur distance d, the theorem's conclusion can be formulated as:
$d(E,\ \ell _{k}^{2})\leq 1+\varepsilon $
where $\ell _{k}^{2}$ denotes the standard k-dimensional Euclidean space.
Since the unit ball of every normed vector space is a bounded, symmetric, convex set and the unit ball of every Euclidean space is an ellipsoid, the theorem may also be formulated as a statement about ellipsoid sections of convex sets.
Further developments
In 1971, Vitali Milman gave a new proof of Dvoretzky's theorem, making use of the concentration of measure on the sphere to show that a random k-dimensional subspace satisfies the above inequality with probability very close to 1. The proof gives the sharp dependence on k:
$N(k,\varepsilon )\leq \exp(C(\varepsilon )k)$
where the constant C(ε) only depends on ε.
We can thus state: for every ε > 0 there exists a constant C(ε) > 0 such that for every normed space (X, ‖·‖) of dimension N, there exists a subspace E ⊂ X of dimension k ≥ C(ε) log N and a Euclidean norm |·| on E such that
$|x|\leq \|x\|\leq (1+\varepsilon )|x|\quad {\text{for every}}\ x\in E.$
More precisely, let SN − 1 denote the unit sphere with respect to some Euclidean structure Q on X, and let σ be the invariant probability measure on SN − 1. Then:
• there exists such a subspace E with
$k=\dim E\geq C(\varepsilon )\,\left({\frac {\int _{S^{N-1}}\|\xi \|\,d\sigma (\xi )}{\max _{\xi \in S^{N-1}}\|\xi \|}}\right)^{2}\,N.$
• For any X one may choose Q so that the term in the brackets will be at most
$c_{1}{\sqrt {\frac {\log N}{N}}}.$
Here c1 is a universal constant. For given X and ε, the largest possible k is denoted k*(X) and called the Dvoretzky dimension of X.
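The bracketed ratio is easy to estimate by Monte Carlo for a concrete norm. A rough sketch in Python (NumPy assumed; illustrative only), taking ‖·‖ to be the ℓ∞ norm with the standard Euclidean structure: the maximum of ‖ξ‖∞ over the sphere is 1, the mean is of order √(log N / N), and the estimated term grows like 2 log N, consistent with the Dvoretzky dimension of $\ell _{\infty }^{N}$ being of order log N:

```python
import numpy as np

rng = np.random.default_rng(0)

def dvoretzky_term(N, samples=20000):
    """Monte Carlo estimate of (E||xi||_inf / max||xi||_inf)^2 * N over the sphere S^{N-1}."""
    g = rng.standard_normal((samples, N))
    xi = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform samples on S^{N-1}
    mean_norm = np.abs(xi).max(axis=1).mean()          # estimates E ||xi||_inf
    return mean_norm ** 2 * N                          # max of the l-inf norm on the sphere is 1

for N in (16, 64, 256, 1024):
    print(N, round(dvoretzky_term(N), 1), round(2 * np.log(N), 1))  # estimate vs. 2·log N
```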
The dependence on ε was studied by Yehoram Gordon,[4][5] who showed that k*(X) ≥ c2 ε2 log N. Another proof of this result was given by Gideon Schechtman.[6]
Noga Alon and Vitali Milman showed that the logarithmic bound on the dimension of the subspace in Dvoretzky's theorem can be significantly improved, if one is willing to accept a subspace that is close either to a Euclidean space or to a Chebyshev space. Specifically, for some constant c, every N-dimensional space has a subspace of dimension $k\geq \exp(c{\sqrt {\log N}})$ that is close either to $\ell _{k}^{2}$ or to $\ell _{k}^{\infty }$.[7]
Important related results were proved by Tadeusz Figiel, Joram Lindenstrauss and Milman.[8]
References
1. Dvoretzky, A. (1961). "Some results on convex bodies and Banach spaces". Proc. Internat. Sympos. Linear Spaces (Jerusalem, 1960). Jerusalem: Jerusalem Academic Press. pp. 123–160.
2. Milman, V. D. (1971). "A new proof of A. Dvoretzky's theorem on cross-sections of convex bodies". Funkcional. Anal. I Prilozhen. (in Russian). 5 (4): 28–37.
3. Gowers, W. T. (2000). "The two cultures of mathematics". Mathematics: frontiers and perspectives. Providence, RI: Amer. Math. Soc. pp. 65–78. ISBN 978-0-8218-2070-4. The full significance of measure concentration was first realized by Vitali Milman in his revolutionary proof [Mil1971] of the theorem of Dvoretzky ... Dvoretzky's theorem, especially as proved by Milman, is a milestone in the local (that is, finite-dimensional) theory of Banach spaces. While I feel sorry for a mathematician who cannot see its intrinsic appeal, this appeal on its own does not explain the enormous influence that the proof has had, well beyond Banach space theory, as a result of planting the idea of measure concentration in the minds of many mathematicians. Huge numbers of papers have now been published exploiting this idea or giving new techniques for showing that it holds.
4. Gordon, Y. (1985). "Some inequalities for Gaussian processes and applications". Israel Journal of Mathematics. 50 (4): 265–289. doi:10.1007/bf02759761.
5. Gordon, Y. (1988). "Gaussian processes and almost spherical sections of convex bodies". Annals of Probability. 16 (1): 180–188. doi:10.1214/aop/1176991893.
6. Schechtman, G. (1989). "A remark concerning the dependence on ε in Dvoretzky's theorem". Geometric aspects of functional analysis (1987–88). Lecture Notes in Math. Vol. 1376. Berlin: Springer. pp. 274–277. ISBN 978-0-387-51303-4.
7. Alon, N.; Milman, V. D. (1983), "Embedding of $\scriptstyle \ell _{\infty }^{k}$ in finite-dimensional Banach spaces", Israel Journal of Mathematics, 45 (4): 265–280, doi:10.1007/BF02804012, MR 0720303.
8. Figiel, T.; Lindenstrauss, J.; Milman, V. D. (1976). "The dimension of almost spherical sections of convex bodies". Bull. Amer. Math. Soc. 82 (4): 575–578. doi:10.1090/s0002-9904-1976-14108-0., expanded in "The dimension of almost spherical sections of convex bodies", Acta Math. 139 (1977), 53–94.
Further reading
• Vershynin, Roman (2018). "Dvoretzky–Milman Theorem". High-Dimensional Probability : An Introduction with Applications in Data Science. Cambridge University Press. pp. 254–264. doi:10.1017/9781108231596.014.
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
Ramsey-Turán theory
Ramsey-Turán theory is a subfield of extremal graph theory. It studies common generalizations of Ramsey's theorem and Turán's theorem. In brief, Ramsey-Turán theory asks for the maximum number of edges of a graph that satisfies given constraints on its subgraphs and structure. The theory organizes many natural questions which arise in extremal graph theory. The first authors to formalize the central ideas of the theory were Erdős and Sós in 1969,[1] though mathematicians had previously investigated many Ramsey-Turán-type problems.[2]
Ramsey's theorem and Turán's theorem
See also: Ramsey's theorem and Turán's theorem
Ramsey's theorem for two colors and the complete graph, proved in its original form in 1930, states that for any positive integer k there exists an integer n large enough that any coloring of the edges of the complete graph $K_{n}$ using two colors contains a monochromatic copy of $K_{k}$. More generally, for any graphs $L_{1},\dots ,L_{r}$, there is a threshold $R=R(L_{1},\dots ,L_{r})$ such that if $n\geq R$ and the edges of $K_{n}$ are colored arbitrarily with $r$ colors, then for some $1\leq i\leq r$ there is a copy of $L_{i}$ in the $i$th color.
Turán's theorem, proved in 1941, characterizes the graph with the maximal number of edges on $n$ vertices which does not contain a $K_{r+1}$. Specifically, the theorem states that for all positive integers $r,n$, the number of edges of an $n$-vertex graph which does not contain $K_{r+1}$ as a subgraph is at most
${\bigg (}1-{\frac {1}{r}}{\bigg )}{\frac {n^{2}}{2}}$
and that the maximum is attained uniquely by the Turán graph $T_{n,r}$.
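The extremal count is easy to compute and compare against the bound. A small sketch in Python (the helper name is illustrative):

```python
from math import isclose

def turan_edges(n, r):
    """Edge count of the Turán graph T(n, r): complete r-partite with parts as
    equal as possible; a complete multipartite graph has (n^2 - sum p_i^2)/2 edges."""
    parts = [n // r + (1 if i < n % r else 0) for i in range(r)]
    return (n * n - sum(p * p for p in parts)) // 2

# The bound (1 - 1/r)·n^2/2 always holds, with equality exactly when r divides n.
for n in range(2, 30):
    for r in range(1, n + 1):
        bound = (1 - 1 / r) * n * n / 2
        assert turan_edges(n, r) <= bound + 1e-9
        if n % r == 0:
            assert isclose(turan_edges(n, r), bound)

print(turan_edges(10, 3))  # 33, versus the bound (1 - 1/3)·10^2/2 ≈ 33.33
```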
Both of these classic results ask questions about how large a graph can be before it possesses a certain property. There is a notable stylistic difference, however. The extremal graph in Turán's theorem has a very strict structure, having a small chromatic number and containing a small number of large independent sets. On the other hand, the graph considered in Ramsey problems is the complete graph, which has large chromatic number and no nontrivial independent set. A natural way to combine these two kinds of problems is to ask the following question, posed by Andrásfai:[3]
Problem 1: For a given positive integer $m$, let $G$ be an $n$-vertex graph not containing $K_{r+1}$ and having independence number $\alpha (G)<m$. What is the maximum number of edges such a graph can have?
Essentially, this question asks for the answer to the Turán problem in a Ramsey setting; it restricts Turán's problem to a subset of graphs with less orderly, more random-like structure. The following question combines the problems in the opposite direction:
Problem 2: Let $L_{1},\dots ,L_{r}$ be fixed graphs. What is the maximum number of edges an $r$-edge colored graph on $n$ vertices can have under the condition that it does not contain an $L_{i}$ in the ith color?
General problem
The backbone of Ramsey-Turán theory is the common generalization of the above problems.
Problem 3: Let $L_{1},\dots ,L_{r}$ be fixed graphs. Let $G$ be an $r$-edge-colored $n$-vertex graph satisfying
(1) $\alpha (G)<m$
(2) the subgraph $G_{i}$ defined by the $i$th color contains no $L_{i}$.
What is the maximum number of edges $G$ can have? We denote the maximum by $\mathbf {RT} (n;L_{1},\dots ,L_{r},m)$.
Ramsey-Turán-type problems are special cases of problem 3. Many cases of this problem remain open, but several interesting cases have been resolved with precise asymptotic solutions.
Notable results
Problem 3 can be divided into three different cases, depending on the restriction on the independence number. There is the restriction-free case, where $m=n$, which reduces to the classic Ramsey problem. There is the "intermediate" case, where $m=cn$ for a fixed $0<c<1$. Lastly, there is the $m=o(n)$ case, which contains the richest problems.[2]
The most basic nontrivial problem in the $m=o(n)$ range is when $r=1$ and $L_{1}=K_{2k+1}.$ Erdős and Sós determined the asymptotic value of the Ramsey-Turán number in this situation in 1969:[1]
$\mathbf {RT} (n;K_{2k+1},o(n))={\bigg (}1-{\frac {1}{k}}{\bigg )}{\frac {n^{2}}{2}}+o(n^{2}).$
The case of the complete graph on an even number of vertices is much more challenging, and was resolved by Erdős, Hajnal, Sós and Szemerédi in 1983:[4]
$\mathbf {RT} (n;K_{2k},o(n))={\frac {6k-10}{6k-4}}{\frac {n^{2}}{2}}+o(n^{2}).$
Note that in both cases, the problem can be viewed as Turán's theorem with the extra condition that $\alpha (G)=o(n)$. In the first case, the effect of this condition is, asymptotically, the same as if we had excluded $K_{k+1}$ instead of $K_{2k+1}$; in the second, the asymptotic density lies strictly between the Turán densities for excluding $K_{k}$ and $K_{k+1}$.
References
1. Erdős, Paul; Sós, Vera T. (1970). Some remarks on Ramsey's and Turán's theorem. Combinatorial theory and its applications, Balatonfüred, 1969. Vol. II. North-Holland. pp. 395–404.
2. Simonovits, Miklós; T. Sós, Vera (2001-02-28). "Ramsey–Turán theory". Discrete Mathematics. 229 (1): 293–340. doi:10.1016/S0012-365X(00)00214-4. ISSN 0012-365X.
3. Andrásfal, B. (1964). "Graphentheoretische Extremalprobleme". Acta Mathematica Academiae Scientiarum Hungaricae. 15 (3–4): 413–438. doi:10.1007/bf01897150. ISSN 0001-5954. S2CID 189783307.
4. Erdős, Paul; Hajnal, András; Sós, Vera T.; Szemerédi, Endre (1983). "More results on Ramsey—Turán type problems". Combinatorica. 3 (1): 69–81. doi:10.1007/bf02579342. ISSN 0209-9683. S2CID 14815278.
Ramsey theory
Ramsey theory, named after the British mathematician and philosopher Frank P. Ramsey, is a branch of the mathematical field of combinatorics that focuses on the appearance of order in a substructure given a structure of a known size. Problems in Ramsey theory typically ask a question of the form: "how big must some structure be to guarantee that a particular property holds?"[1]
For Ramsey theory of infinite sets, see Infinitary combinatorics.
Examples
A typical result in Ramsey theory starts with some mathematical structure that is then cut into pieces. How big must the original structure be in order to ensure that at least one of the pieces has a given interesting property? This idea can be defined as partition regularity.
For example, consider a complete graph of order n; that is, there are n vertices and each vertex is connected to every other vertex by an edge. A complete graph of order 3 is called a triangle. Now colour each edge either red or blue. How large must n be in order to ensure that there is either a blue triangle or a red triangle? It turns out that the answer is 6. See the article on Ramsey's theorem for a rigorous proof.
Another way to express this result is as follows: at any party with at least six people, there are three people who are all either mutual acquaintances (each one knows the other two) or mutual strangers (none of them knows either of the other two). See theorem on friends and strangers.
This also is a special case of Ramsey's theorem, which says that for any given integer c and any given integers n1, ..., nc, there is a number, R(n1, ..., nc), such that if the edges of a complete graph of order R(n1, ..., nc) are coloured with c different colours, then for some i between 1 and c, it must contain a complete subgraph of order ni whose edges are all colour i. The special case above has c = 2 and n1 = n2 = 3.
Results
Two key theorems of Ramsey theory are:
• Van der Waerden's theorem: For any given c and n, there is a number V, such that if V consecutive numbers are coloured with c different colours, then they must contain an arithmetic progression of length n whose elements are all the same colour. (The smallest nontrivial case is verified by brute force in the sketch after this list.)
• Hales–Jewett theorem: For any given n and c, there is a number H such that if the cells of an H-dimensional n×n×n×...×n cube are coloured with c colours, there must be one row, column, etc. of length n all of whose cells are the same colour. That is: a multi-player n-in-a-row tic-tac-toe cannot end in a draw, no matter how large n is, and no matter how many people are playing, if you play on a board with sufficiently many dimensions. The Hales–Jewett theorem implies Van der Waerden's theorem.
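The smallest nontrivial instance of Van der Waerden's theorem (c = 2 colours, progressions of length n = 3) can be confirmed by brute force, since there are only 2⁹ colourings to check. A minimal sketch in Python (function names are illustrative):

```python
from itertools import product

def has_mono_3ap(colouring):
    """colouring: tuple of 0/1 colours assigned to the integers 1..n."""
    n = len(colouring)
    for a in range(1, n + 1):
        for d in range(1, (n - a) // 2 + 1):
            if colouring[a - 1] == colouring[a + d - 1] == colouring[a + 2 * d - 1]:
                return True
    return False

def every_colouring_has_3ap(n):
    return all(has_mono_3ap(c) for c in product((0, 1), repeat=n))

print(every_colouring_has_3ap(8))  # False: some 2-colouring of 1..8 avoids monochromatic 3-term APs
print(every_colouring_has_3ap(9))  # True: every 2-colouring of 1..9 contains one, so V = 9 here
```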
A theorem similar to van der Waerden's theorem is Schur's theorem: for any given c there is a number N such that if the numbers 1, 2, ..., N are coloured with c different colours, then there must be a pair of integers x, y such that x, y, and x+y are all the same colour. Many generalizations of this theorem exist, including Rado's theorem, Rado–Folkman–Sanders theorem, Hindman's theorem, and the Milliken–Taylor theorem. A classic reference for these and many other results in Ramsey theory is Graham, Rothschild, Spencer and Solymosi, updated and expanded in 2015 to its first new edition in 25 years.[2]
Results in Ramsey theory typically have two primary characteristics. Firstly, they are nonconstructive: they may show that some structure exists, but they give no process for finding this structure (other than brute-force search). For instance, the pigeonhole principle is of this form. Secondly, while Ramsey theory results do say that sufficiently large objects must necessarily contain a given structure, often the proof of these results requires these objects to be enormously large – bounds that grow exponentially, or even as fast as the Ackermann function, are not uncommon. In some small niche cases, the upper and lower bounds have been improved, but not in general. In many cases these bounds are artifacts of the proof, and it is not known whether they can be substantially improved. In other cases it is known that any bound must be extraordinarily large, sometimes even greater than any primitive recursive function; see the Paris–Harrington theorem for an example. Graham's number, one of the largest numbers ever used in serious mathematical proof, is an upper bound for a problem related to Ramsey theory. Another large example is the Boolean Pythagorean triples problem.[3]
Theorems in Ramsey theory are generally one of the following two types. Many such theorems, which are modeled after Ramsey's theorem itself, assert that in every partition of a large structured object, one of the classes necessarily contains its own structured object, but give no information about which class this is. In other cases, the reason behind a Ramsey-type result is that the largest partition class always contains the desired substructure. Results of this latter kind are called either density results or Turán-type results, after Turán's theorem. Notable examples include Szemerédi's theorem, which is such a strengthening of van der Waerden's theorem, and the density version of the Hales–Jewett theorem.[4]
See also
• Ergodic Ramsey theory
• Extremal graph theory
• Goodstein's theorem
• Bartel Leendert van der Waerden
• Discrepancy theory
References
1. Graham, Ron; Butler, Steve (2015). Rudiments of Ramsey Theory (2nd ed.). American Mathematical Society. p. 1. ISBN 978-0-8218-4156-3.
2. Graham, Ronald L.; Rothschild, Bruce L.; Spencer, Joel H.; Solymosi, József (2015), Ramsey Theory (3rd ed.), New York: John Wiley and Sons, ISBN 978-0470391853.
3. Lamb, Evelyn (2016-06-02). "Two-hundred-terabyte maths proof is largest ever". Nature. 534 (7605): 17–18. doi:10.1038/nature.2016.19990. PMID 27251254.
4. Furstenberg, Hillel; Katznelson, Yitzhak (1991), "A density version of the Hales–Jewett theorem", Journal d'Analyse Mathématique, 57 (1): 64–119, doi:10.1007/BF03041066.
Further reading
• Landman, B. M. & Robertson, A. (2004), Ramsey Theory on the Integers, Student Mathematical Library, vol. 24, Providence, RI: AMS, ISBN 0-8218-3199-2.
• Ramsey, F. P. (1930), "On a Problem of Formal Logic", Proceedings of the London Mathematical Society, s2-30 (1): 264–286, doi:10.1112/plms/s2-30.1.264 (behind a paywall).
• Erdős, Paul; Szekeres, George (1935), "A combinatorial problem in geometry", Compositio Mathematica, 2: 463–470, doi:10.1007/978-0-8176-4842-8_3, ISBN 978-0-8176-4841-1, Zbl 0012.27010.
• Boolos, G.; Burgess, J. P.; Jeffrey, R. (2007), Computability and Logic (5th ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-87752-7.
• Matthew Katz and Jan Reimann An Introduction to Ramsey Theory: Fast Functions, Infinity, and Metamathematics Student Mathematical Library Volume: 87; 2018; 207 pp; ISBN 978-1-4704-4290-3
Ramsey cardinal
In mathematics, a Ramsey cardinal is a certain kind of large cardinal number introduced by Erdős & Hajnal (1962) and named after Frank P. Ramsey, whose theorem establishes that ω enjoys a certain property that Ramsey cardinals generalize to the uncountable case.
Let [κ]<ω denote the set of all finite subsets of κ. A cardinal number κ is called Ramsey if, for every function
f: [κ]<ω → {0, 1}
there is a set A of cardinality κ that is homogeneous for f. That is, for every n, the function f is constant on the subsets of cardinality n from A. A cardinal κ is called ineffably Ramsey if A can be chosen to be a stationary subset of κ. A cardinal κ is called virtually Ramsey if for every function
f: [κ]<ω → {0, 1}
there is C, a closed and unbounded subset of κ, so that for every λ in C of uncountable cofinality, there is an unbounded subset of λ that is homogeneous for f; slightly weaker is the notion of almost Ramsey, where homogeneous sets for f are required only of order type λ, for every λ < κ.
The existence of any of these kinds of Ramsey cardinal is sufficient to prove the existence of 0#, or indeed that every set with rank less than κ has a sharp.
Every measurable cardinal is a Ramsey cardinal, and every Ramsey cardinal is a Rowbottom cardinal.
A property intermediate in strength between Ramseyness and measurability is existence of a κ-complete normal non-principal ideal I on κ such that for every A ∉ I and for every function
f: [κ]<ω → {0, 1}
there is a set B ⊂ A not in I that is homogeneous for f. This is strictly stronger than κ being ineffably Ramsey.
The existence of a Ramsey cardinal implies the existence of 0# and this in turn implies the falsity of the Axiom of Constructibility of Kurt Gödel.
References
• Drake, F. R. (1974). Set Theory: An Introduction to Large Cardinals (Studies in Logic and the Foundations of Mathematics; V. 76). Elsevier Science Ltd. ISBN 0-444-10535-2.
• Erdős, Paul; Hajnal, András (1962), "Some remarks concerning our paper "On the structure of set-mappings. Non-existence of a two-valued σ-measure for the first uncountable inaccessible cardinal", Acta Mathematica Academiae Scientiarum Hungaricae, 13 (1–2): 223–226, doi:10.1007/BF02033641, ISSN 0001-5954, MR 0141603, S2CID 121179872
• Kanamori, Akihiro (2003). The Higher Infinite : Large Cardinals in Set Theory from Their Beginnings (2nd ed.). Springer. ISBN 3-540-00384-3.
Ramsey class
In the area of mathematics known as Ramsey theory, a Ramsey class[1] is one which satisfies a generalization of Ramsey's theorem.
Suppose $A$, $B$ and $C$ are structures and $k$ is a positive integer. We denote by ${\binom {B}{A}}$ the set of all subobjects $A'$ of $B$ which are isomorphic to $A$. We further denote by $C\rightarrow (B)_{k}^{A}$ the property that for all partitions $X_{1}\cup X_{2}\cup \dots \cup X_{k}$ of ${\binom {C}{A}}$ there exists a $B'\in {\binom {C}{B}}$ and a $1\leq i\leq k$ such that ${\binom {B'}{A}}\subseteq X_{i}$.
Suppose $K$ is a class of structures closed under isomorphism and substructures. We say the class $K$ has the $A$-Ramsey property if for every positive integer $k$ and for every $B\in K$ there is a $C\in K$ such that $C\rightarrow (B)_{k}^{A}$ holds. If $K$ has the $A$-Ramsey property for all $A\in K$, then we say $K$ is a Ramsey class.
Ramsey's theorem is equivalent to the statement that the class of all finite sets is a Ramsey class.[2][3]
References
1. Nešetřil, Jaroslav (2016-06-14). "All the Ramsey Classes - צילום הרצאות סטודיו האנה בי - YouTube". www.youtube.com. Tel Aviv University. Retrieved 4 November 2020.
2. Bodirsky, Manuel (27 May 2015). "Ramsey Classes: Examples and Constructions". arXiv:1502.05146 [math.CO].
3. Hubička, Jan; Nešetřil, Jaroslav (November 2019). "All those Ramsey classes (Ramsey classes with closures and forbidden homomorphisms)". Advances in Mathematics. 356: 106791. arXiv:1606.07979. doi:10.1016/j.aim.2019.106791. S2CID 7750570.
Clique game
The clique game is a positional game where two players alternately pick edges, trying to occupy a complete clique of a given size.
The game is parameterized by two integers n > k. The game-board is the set of all edges of a complete graph on n vertices. The winning-sets are all the cliques on k vertices. There are several variants of this game:
• In the strong positional variant of the game, the first player who holds a k-clique wins. If no one wins, the game is a draw.
• In the Maker-Breaker variant, the first player (Maker) wins if he manages to hold a k-clique, otherwise the second player (Breaker) wins. There are no draws.
• In the Avoider-Enforcer variant, the first player (Avoider) wins if he manages not to hold a k-clique. Otherwise, the second player (Enforcer) wins. There are no draws. A special case of this variant is Sim.
The clique game (in its strong-positional variant) was first presented by Paul Erdős and John Selfridge, who attributed it to Simmons.[1] They called it the Ramsey game, since it is closely related to Ramsey's theorem (see below).
Winning conditions
Ramsey's theorem implies that whenever we 2-colour the edges of a sufficiently large complete graph, there is at least one monochromatic clique. More precisely, for every integer k there exists an integer $R_{2}(k,k)$ such that in every 2-colouring of the edges of the complete graph on $n\geq R_{2}(k,k)$ vertices there is a monochromatic clique of size at least k. This means that, if $n\geq R_{2}(k,k)$, the clique game can never end in a draw. A strategy-stealing argument implies that the first player can always force at least a draw; therefore, if $n\geq R_{2}(k,k)$, Maker wins. By substituting known bounds for the Ramsey number we get that Maker wins whenever $k\leq {\log _{2}n \over 2}$.
On the other hand, the Erdős–Selfridge theorem[1] implies that Breaker wins whenever $k\geq {2\log _{2}n}$.
Beck improved these bounds as follows:[2]
• Maker wins whenever $k\leq 2\log _{2}n-2\log _{2}\log _{2}n+2\log _{2}e-10/3+o(1)$;
• Breaker wins whenever $k\geq 2\log _{2}n-2\log _{2}\log _{2}n+2\log _{2}e-1+o(1)$.
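For very small boards, the Maker-Breaker variant can also be solved exactly by game-tree search rather than by these asymptotic bounds. A minimal sketch in Python (exhaustive minimax with memoization; the parameters are kept tiny so the search finishes quickly) for the triangle game on K5, where Maker is known to win:

```python
from itertools import combinations
from functools import lru_cache

N, K = 5, 3  # board: the edges of K_5; winning sets: the 10 triangles
EDGES = tuple(combinations(range(N), 2))
CLIQUES = [frozenset(combinations(c, 2)) for c in combinations(range(N), K)]

@lru_cache(maxsize=None)
def maker_wins(maker, breaker):
    """True iff Maker, moving next, can complete a k-clique against best play."""
    free = [e for e in EDGES if e not in maker and e not in breaker]
    for e in free:
        m2 = maker | frozenset((e,))
        if any(c <= m2 for c in CLIQUES):
            return True  # this move completes a clique immediately
        rest = [f for f in free if f != e]
        # the move e succeeds if every Breaker reply still leaves Maker a win
        if rest and all(maker_wins(m2, breaker | frozenset((f,))) for f in rest):
            return True
    return False  # no winning move; a full board (a draw) counts as a Breaker win

print(maker_wins(frozenset(), frozenset()))  # expected: True
```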
Ramsey game on higher-order hypergraphs
Instead of playing on complete graphs, the clique game can also be played on complete hypergraphs of higher orders. For example, in the clique game on triplets, the game-board is the set of triplets of integers 1,...,n (so its size is ${n \choose 3}$ ), and winning-sets are all sets of triplets of k integers (so the size of any winning-set in it is ${k \choose 3}$).
By Ramsey's theorem on triples, if $n\geq R_{3}(k,k)$, Maker wins. The currently known upper bound on $R_{3}(k,k)$ is very large, $2^{k^{2}/6}<R_{3}(k,k)<2^{2^{4k-10}}$. In contrast, Beck[3] proves that $2^{k^{2}/6}<R_{3}^{*}(k,k)<k^{4}2^{k^{3}/6}$, where $R_{3}^{*}(k,k)$ is the smallest integer such that Maker has a winning strategy. In particular, if $k^{4}2^{k^{3}/6}<n$ then the game is Maker's win.
References
1. Erdős, P.; Selfridge, J. L. (1973). "On a combinatorial game" (PDF). Journal of Combinatorial Theory. Series A. 14 (3): 298–301. doi:10.1016/0097-3165(73)90005-8. MR 0327313.
2. Beck, József (2002-04-01). "Positional Games and the Second Moment Method". Combinatorica. 22 (2): 169–216. doi:10.1007/s004930200009. ISSN 0209-9683.
3. Beck, József (1981). "Van der waerden and ramsey type games". Combinatorica. 1 (2): 103–116. doi:10.1007/bf02579267. ISSN 0209-9683.
Ramsey's theorem
In combinatorics, Ramsey's theorem, in one of its graph-theoretic forms, states that one will find monochromatic cliques in any edge labelling (with colours) of a sufficiently large complete graph. To demonstrate the theorem for two colours (say, blue and red), let r and s be any two positive integers.[1] Ramsey's theorem states that there exists a least positive integer R(r, s) for which every blue-red edge colouring of the complete graph on R(r, s) vertices contains a blue clique on r vertices or a red clique on s vertices. (Here R(r, s) signifies an integer that depends on both r and s.)
Ramsey's theorem is a foundational result in combinatorics. The first version of this result was proved by Frank Ramsey. This initiated the combinatorial theory now called Ramsey theory, that seeks regularity amid disorder: general conditions for the existence of substructures with regular properties. In this application it is a question of the existence of monochromatic subsets, that is, subsets of connected edges of just one colour.
An extension of this theorem applies to any finite number of colours, rather than just two. More precisely, the theorem states that for any given number of colours, c, and any given integers n1, …, nc, there is a number, R(n1, …, nc), such that if the edges of a complete graph of order R(n1, …, nc) are coloured with c different colours, then for some i between 1 and c, it must contain a complete subgraph of order ni whose edges are all colour i. The special case above has c = 2 (and n1 = r and n2 = s).
Examples
R(3, 3) = 6
Suppose the edges of a complete graph on 6 vertices are coloured red and blue. Pick a vertex, v. There are 5 edges incident to v and so (by the pigeonhole principle) at least 3 of them must be the same colour. Without loss of generality we can assume at least 3 of these edges, connecting the vertex, v, to vertices, r, s and t, are blue. (If not, exchange red and blue in what follows.) If any of the edges, (rs), (rt), (st), are also blue then we have an entirely blue triangle. If not, then those three edges are all red and we have an entirely red triangle. Since this argument works for any colouring, any K6 contains a monochromatic K3, and therefore R(3, 3) ≤ 6. The popular version of this is called the theorem on friends and strangers.
An alternative proof works by double counting. It goes as follows: Count the number of ordered triples of vertices, x, y, z, such that the edge, (xy), is red and the edge, (yz), is blue. Firstly, any given vertex will be the middle of either 0 × 5 = 0 (all edges from the vertex are the same colour), 1 × 4 = 4 (four are the same colour, one is the other colour), or 2 × 3 = 6 (three are the same colour, two are the other colour) such triples. Therefore, there are at most 6 × 6 = 36 such triples. Secondly, for any non-monochromatic triangle (xyz), there exist precisely two such triples. Therefore, there are at most 18 non-monochromatic triangles. Therefore, at least 2 of the 20 triangles in the K6 are monochromatic.
Conversely, it is possible to 2-colour a K5 without creating any monochromatic K3, showing that R(3, 3) > 5. The unique[lower-alpha 1] colouring is shown to the right. Thus R(3, 3) = 6.
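Because K6 has only 15 edges, the equality R(3, 3) = 6 can also be verified exhaustively by machine: every one of the 2¹⁵ colourings of K6 contains a monochromatic triangle, while K5 admits a colouring with none. A minimal sketch in Python (illustrative names):

```python
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    """colouring maps each edge (i, j) with i < j of K_n to colour 0 or 1."""
    return any(colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_colouring_has_mono_triangle(n):
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, cols)))
               for cols in product((0, 1), repeat=len(edges)))

print(every_colouring_has_mono_triangle(5))  # False: e.g. the pentagon/pentagram colouring avoids one
print(every_colouring_has_mono_triangle(6))  # True, hence R(3, 3) = 6
```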
The task of proving that R(3, 3) ≤ 6 was one of the problems of the William Lowell Putnam Mathematical Competition in 1953, and also appeared in the Hungarian Math Olympiad in 1947.
A multicolour example: R(3, 3, 3) = 17
The only two 3-colourings of K16 with no monochromatic K3, up to isomorphism and permutation of colors: the untwisted (left) and twisted (right) colorings.
A multicolour Ramsey number is a Ramsey number using 3 or more colours. There are (up to symmetries) only two non-trivial multicolour Ramsey numbers for which the exact value is known, namely R(3, 3, 3) = 17 and R(3, 3, 4) = 30.[2]
Suppose that we have an edge colouring of a complete graph using 3 colours, red, green and blue. Suppose further that the edge colouring has no monochromatic triangles. Select a vertex v. Consider the set of vertices that have a red edge to the vertex v. This is called the red neighbourhood of v. The red neighbourhood of v cannot contain any red edges, since otherwise there would be a red triangle consisting of the two endpoints of that red edge and the vertex v. Thus, the induced edge colouring on the red neighbourhood of v has edges coloured with only two colours, namely green and blue. Since R(3, 3) = 6, the red neighbourhood of v can contain at most 5 vertices. Similarly, the green and blue neighbourhoods of v can contain at most 5 vertices each. Since every vertex, except for v itself, is in one of the red, green or blue neighbourhoods of v, the entire complete graph can have at most 1 + 5 + 5 + 5 = 16 vertices. Thus, we have R(3, 3, 3) ≤ 17.
To see that R(3, 3, 3) = 17, it suffices to draw an edge colouring on the complete graph on 16 vertices with 3 colours that avoids monochromatic triangles. It turns out that there are exactly two such colourings on K16, the so-called untwisted and twisted colourings. Both colourings are shown in the figures to the right, with the untwisted colouring on the left, and the twisted colouring on the right.
If we select any colour of either the untwisted or twisted colouring on K16, and consider the graph whose edges are precisely those edges that have the specified colour, we will get the Clebsch graph.
It is known that there are exactly two edge colourings with 3 colours on K15 that avoid monochromatic triangles, which can be constructed by deleting any vertex from the untwisted and twisted colourings on K16, respectively.
It is also known that there are exactly 115 edge colourings with 3 colours on K14 that avoid monochromatic triangles, provided that we consider edge colourings that differ by a permutation of the colours as being the same.
Proof
2-colour case
The theorem for the 2-colour case can be proved by induction on r + s.[3] It is clear from the definition that for all n, R(n, 2) = R(2, n) = n. This starts the induction. We prove that R(r, s) exists by finding an explicit bound for it. By the inductive hypothesis R(r − 1, s) and R(r, s − 1) exist.
Lemma 1. $R(r,s)\leq R(r-1,s)+R(r,s-1).$
Proof. Consider a complete graph on R(r − 1, s) + R(r, s − 1) vertices whose edges are coloured with two colours. Pick a vertex v from the graph, and partition the remaining vertices into two sets M and N, such that for every vertex w, w is in M if edge (vw) is blue, and w is in N if (vw) is red. Because the graph has $R(r-1,s)+R(r,s-1)=|M|+|N|+1$ vertices, it follows that either $|M|\geq R(r-1,s)$ or $|N|\geq R(r,s-1).$ In the former case, if M has a red Ks then so does the original graph and we are finished. Otherwise M has a blue Kr − 1 and so $M\cup \{v\}$ has a blue Kr by the definition of M. The latter case is analogous. Thus the claim is true and we have completed the proof for 2 colours.
In this 2-colour case, if R(r − 1, s) and R(r, s − 1) are both even, the induction inequality can be strengthened to:[4]
$R(r,s)\leq R(r-1,s)+R(r,s-1)-1.$
Proof. Suppose p = R(r − 1, s) and q = R(r, s − 1) are both even. Let t = p + q − 1 and consider a two-coloured graph of t vertices. If di is the degree of the i-th vertex in the blue subgraph, then, according to the handshaking lemma, $\textstyle \sum _{i=1}^{t}d_{i}$ is even. Given that t is odd, there must be an even di. Assume d1 is even, and let M and N be the sets of vertices adjacent to vertex 1 in the blue and red subgraphs, respectively. Then both $|M|=d_{1}$ and $|N|=t-1-d_{1}$ are even. According to the pigeonhole principle, either $|M|\geq p-1,$ or $|N|\geq q.$ Since |M| is even, while p – 1 is odd, the first inequality can be strengthened, so either $|M|\geq p$ or $|N|\geq q.$ Suppose $|M|\geq p=R(r-1,s).$ Then either the M subgraph has a red Ks and the proof is complete, or it has a blue Kr – 1 which along with vertex 1 makes a blue Kr. The case $|N|\geq q=R(r,s-1)$ is treated similarly.
Case of more colours
Lemma 2. If c > 2, then $R(n_{1},\dots ,n_{c})\leq R(n_{1},\dots ,n_{c-2},R(n_{c-1},n_{c})).$
Proof. Consider a complete graph of $R(n_{1},\dots ,n_{c-2},R(n_{c-1},n_{c}))$ vertices and colour its edges with c colours. Now 'go colour-blind' and pretend that c − 1 and c are the same colour. Thus the graph is now (c − 1)-coloured. Due to the definition of $R(n_{1},\dots ,n_{c-2},R(n_{c-1},n_{c})),$ such a graph contains either a Kni mono-chromatically coloured with colour i for some 1 ≤ i ≤ c − 2 or a KR(nc − 1, nc)-coloured in the 'blurred colour'. In the former case we are finished. In the latter case, we recover our sight again and see from the definition of R(nc − 1, nc) we must have either a (c − 1)-monochrome Knc − 1 or a c-monochrome Knc. In either case the proof is complete.
Lemma 1 implies that any R(r,s) is finite. The right hand side of the inequality in Lemma 2 expresses a Ramsey number for c colours in terms of Ramsey numbers for fewer colours. Therefore any R(n1, …, nc) is finite for any number of colours. This proves the theorem.
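The two lemmas give a simple recursion for upper bounds, and the parity strengthening can be applied whenever both subproblem bounds are even (inspecting the proof above shows it only uses that p and q have the Ramsey property, i.e. that they are upper bounds). A minimal sketch in Python for the two-colour case (illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ramsey_upper(r, s):
    """Upper bound on R(r, s) from Lemma 1, with base case R(n, 2) = R(2, n) = n
    and the even-even strengthening R(r, s) <= R(r-1, s) + R(r, s-1) - 1."""
    if r == 2:
        return s
    if s == 2:
        return r
    a, b = ramsey_upper(r - 1, s), ramsey_upper(r, s - 1)
    return a + b - 1 if a % 2 == 0 and b % 2 == 0 else a + b

print(ramsey_upper(3, 3))  # 6  -- tight: R(3, 3) = 6
print(ramsey_upper(4, 3))  # 9  -- tight: R(4, 3) = 9
print(ramsey_upper(4, 4))  # 18 -- tight: R(4, 4) = 18
print(ramsey_upper(5, 5))  # 62 -- far from tight: 43 <= R(5, 5) <= 48
```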
Ramsey numbers
The numbers R(r, s) in Ramsey's theorem (and their extensions to more than two colours) are known as Ramsey numbers. The Ramsey number, R(m, n), gives the solution to the party problem, which asks the minimum number of guests, R(m, n), that must be invited so that at least m will know each other or at least n will not know each other. In the language of graph theory, the Ramsey number is the minimum number of vertices, v = R(m, n), such that all undirected simple graphs of order v, contain a clique of order m, or an independent set of order n. Ramsey's theorem states that such a number exists for all m and n.
By symmetry, it is true that R(m, n) = R(n, m). An upper bound for R(r, s) can be extracted from the proof of the theorem, and other arguments give lower bounds. (The first exponential lower bound was obtained by Paul Erdős using the probabilistic method.) However, there is a vast gap between the tightest lower bounds and the tightest upper bounds. There are also very few numbers r and s for which we know the exact value of R(r, s).
Computing a lower bound L for R(r, s) usually requires exhibiting a blue/red colouring of the graph KL−1 with no blue Kr subgraph and no red Ks subgraph. Such a counterexample is called a Ramsey graph. Brendan McKay maintains a list of known Ramsey graphs.[5] Upper bounds are often considerably more difficult to establish: one either has to check all possible colourings to confirm the absence of a counterexample, or to present a mathematical argument for its absence.
Computational complexity
Erdős asks us to imagine an alien force, vastly more powerful than us, landing on Earth and demanding the value of R(5, 5) or they will destroy our planet. In that case, he claims, we should marshal all our computers and all our mathematicians and attempt to find the value. But suppose, instead, that they ask for R(6, 6). In that case, he believes, we should attempt to destroy the aliens.[6]
— Joel Spencer
A sophisticated computer program does not need to look at all colourings individually in order to eliminate all of them; nevertheless it is a very difficult computational task that existing software can only manage on small sizes. Each complete graph Kn has $\tfrac {1}{2}n(n-1)$ edges, so there would be a total of $c^{n(n-1)/2}$ graphs to search through (for c colours) if brute force is used.[7] Therefore, the complexity for searching all possible graphs (via brute force) is $O(c^{n^{2}})$ for c colourings and at most n nodes.
The situation is unlikely to improve with the advent of quantum computers. One of the best-known searching algorithms for unstructured datasets exhibits only a quadratic speedup (cf. Grover's algorithm) relative to classical computers, so that the computation time is still exponential in the number of nodes.[8][9]
Known values
As described above, R(3, 3) = 6. It is easy to prove that R(4, 2) = 4, and, more generally, that R(s, 2) = s for all s: a graph on s − 1 nodes with all edges coloured red serves as a counterexample and proves that R(s, 2) ≥ s; among colourings of a graph on s nodes, the colouring with all edges coloured red contains an s-node red subgraph, and all other colourings contain a 2-node blue subgraph (that is, a pair of nodes connected with a blue edge).
Using induction inequalities, it can be concluded that R(4, 3) ≤ R(4, 2) + R(3, 3) − 1 = 9, and therefore R(4, 4) ≤ R(4, 3) + R(3, 4) ≤ 18. There are only two (4, 4, 16) graphs (that is, 2-colourings of a complete graph on 16 nodes without 4-node red or blue complete subgraphs) among 6.4 × 1022 different 2-colourings of 16-node graphs, and only one (4, 4, 17) graph (the Paley graph of order 17) among 2.46 × 1026 colourings.[5] (This was proven by Evans, Pulham and Sheehan in 1979.) It follows that R(4, 4) = 18.
The fact that R(4, 5) = 25 was first established by Brendan McKay and Stanisław Radziszowski in 1995.[10]
The exact value of R(5, 5) is unknown, although it is known to lie between 43 (Geoffrey Exoo (1989)[11]) and 48 (Angeltveit and McKay (2017)[12]) (inclusive).
In 1997, McKay, Radziszowski and Exoo employed computer-assisted graph generation methods to conjecture that R(5, 5) = 43. They were able to construct exactly 656 (5, 5, 42) graphs, arriving at the same set of graphs through different routes. None of the 656 graphs can be extended to a (5, 5, 43) graph.[13]
For R(r, s) with r, s > 5, only weak bounds are available. Lower bounds for R(6, 6) and R(8, 8) have not been improved since 1965 and 1972, respectively.[2]
R(r, s) with r, s ≤ 10 are shown in the table below. Where the exact value is unknown, the table lists the best known bounds. R(r, s) with r < 3 are given by R(1, s) = 1 and R(2, s) = s for all values of s.
The standard survey on the development of Ramsey number research is the Dynamic Survey 1 of the Electronic Journal of Combinatorics, by Stanisław Radziszowski, which is periodically updated.[2][14] Where not cited otherwise, entries in the table below are taken from the January 2021 edition. (Note there is a trivial symmetry across the diagonal since R(r, s) = R(s, r).)
Values / known bounding ranges for Ramsey numbers R(r, s) (sequence A212954 in the OEIS)

r\s  1   2   3      4        5        6            7            8            9            10
1    1   1   1      1        1        1            1            1            1            1
2        2   3      4        5        6            7            8            9            10
3            6      9        14       18           23           28           36           40–42
4                   18       25[10]   36–40        49–58        59[15]–79    73–106       92–136
5                            43–48    58–85        80–133       101–194      133–282      149[15]–381
6                                     102–161      115[15]–273  134[15]–427  183–656      204–949
7                                                  205–497      219–840      252–1379     292–2134
8                                                               282–1532     329–2683     343–4432
9                                                                            565–6588     581–12677
10                                                                                        798–23556
Asymptotics
The inequality R(r, s) ≤ R(r − 1, s) + R(r, s − 1) may be applied inductively to prove that
$R(r,s)\leq {\binom {r+s-2}{r-1}}.$
In particular, this result, due to Erdős and Szekeres, implies that when r = s,
$R(s,s)\leq (1+o(1)){\frac {4^{s-1}}{\sqrt {\pi s}}}.$
An exponential lower bound,
$R(s,s)\geq (1+o(1)){\frac {s}{{\sqrt {2}}e}}2^{s/2},$
was given by Erdős in 1947 and was instrumental in his introduction of the probabilistic method. There is obviously a huge gap between these two bounds: for example, for s = 10, this gives 101 ≤ R(10, 10) ≤ 48,620. Nevertheless, the exponential growth factors of either bound were not improved for a long time, and that for the lower bound still stands at √2. There is no known explicit construction producing an exponential lower bound. The best known lower and upper bounds for diagonal Ramsey numbers are
$[1+o(1)]{\frac {{\sqrt {2}}s}{e}}2^{\frac {s}{2}}\leq R(s,s)\leq s^{-(c\log s)/(\log \log s)}4^{s},$
due to Spencer and Conlon respectively, while a 2023 preprint[16][17] by Campos, Griffiths, Morris and Sahasrabudhe claims to have made exponential progress using an algorithmic construction relying on a graph structure dubbed "books", improving the upper bound to

$R(s,s)\leq (4-\varepsilon )^{s}{\text{ and }}R(s,t)\leq e^{-\delta t+o(s)}{\binom {s+t}{t}},$

with $\varepsilon =2^{-7}$ and $\delta =50^{-1}$; it is believed that these parameters can be optimized, in particular $\varepsilon $.
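Erdős's exponential lower bound above comes from a union-bound calculation that is easy to reproduce: if $\textstyle {\binom {n}{s}}2^{1-{\binom {s}{2}}}<1$, then a uniformly random 2-colouring of $K_{n}$ has no monochromatic $K_{s}$ with positive probability, so R(s, s) > n. A minimal sketch in Python (illustrative function name) recovering the bound 101 ≤ R(10, 10) quoted above:

```python
from math import comb

def erdos_lower_bound(s):
    """Largest n with C(n, s) * 2**(1 - C(s, 2)) < 1; then R(s, s) >= n + 1."""
    n = s
    while comb(n + 1, s) * 2 ** (1 - comb(s, 2)) < 1:
        n += 1
    return n

print(erdos_lower_bound(10) + 1)  # 101, i.e. R(10, 10) >= 101
```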
For the off-diagonal Ramsey numbers R(3, t), it is known that they are of order t2/log t; this may be stated equivalently as saying that the smallest possible independence number in an n-vertex triangle-free graph is
$\Theta \left({\sqrt {n\log n}}\right).$
The upper bound for R(3, t) is given by Ajtai, Komlós, and Szemerédi;[18] the lower bound was obtained originally by Kim,[19] and was improved by Griffiths, Morris and Fiz Pontiveros,[20] and by Bohman and Keevash,[21] by analysing the triangle-free process. More broadly, these works set the best known asymptotic bounds for general off-diagonal Ramsey numbers R(s, t):
$c'_{s}{\frac {t^{\frac {s+1}{2}}}{(\log t)^{{\frac {s+1}{2}}-{\frac {1}{s-2}}}}}\leq R(s,t)\leq c_{s}{\frac {t^{s-1}}{(\log t)^{s-2}}},$
For $s=4$ the bounds become $c'_{4}t^{\frac {5}{2}}(\log t)^{-2}\leq R(4,t)\leq c_{4}t^{3}(\log t)^{-2}$, but a 2023 preprint[22][23] has improved the lower bound to $c'_{4}t^{3}(\log t)^{-4}$, which settles a question of Erdős, who offered 250 dollars for a proof that the lower bound has the form $c'_{4}t^{3}(\log t)^{-d}$.[24][25]
Induced Ramsey
There is a less well-known yet interesting analogue of Ramsey's theorem for induced subgraphs. Roughly speaking, instead of finding a monochromatic subgraph, we are now required to find a monochromatic induced subgraph. In this variant, it is no longer sufficient to restrict our focus to complete graphs, since every induced subgraph of a complete graph is itself complete; the host graph must be chosen more carefully. The qualitative statement of the theorem in the next section was first proven independently by Erdős, Hajnal and Pósa, Deuber and Rödl in the 1970s.[26][27][28] Since then, there has been much research in obtaining good bounds for induced Ramsey numbers.
Statement
Let H be a graph on n vertices. Then, there exists a graph G such that any coloring of the edges of G using two colors contains a monochromatic induced copy of H (i.e. an induced subgraph of G such that it is isomorphic to H and its edges are monochromatic). The smallest possible number of vertices of G is the induced Ramsey number rind(H).
Sometimes, we also consider the asymmetric version of the problem. We define rind(X,Y) to be the smallest possible number of vertices of a graph G such that every coloring of the edges of G using only red or blue contains a red induced subgraph of X or blue induced subgraph of Y.
History and bounds
Similar to Ramsey's theorem, it is unclear a priori whether induced Ramsey numbers exist for every graph H. In the early 1970s, Erdős, Hajnal and Pósa, Deuber and Rödl independently proved that this is the case.[26][27][28] However, the original proofs gave enormous bounds (e.g. towers of twos) on the induced Ramsey numbers, and it is natural to ask whether better bounds can be achieved. In 1974, Paul Erdős conjectured that there exists a constant c such that every graph H on k vertices satisfies $r_{\text{ind}}(H)\leq 2^{ck}$.[29] If this conjecture is true, it would be optimal up to the constant c, because the complete graph achieves a lower bound of this form (in fact, the lower bound is the same as for Ramsey numbers). However, this conjecture remains open.
In 1984, Erdős and Hajnal claimed that they proved the bound[30]
$r_{\text{ind}}(H)\leq 2^{2^{k^{1+o(1)}}}.$
However, that was still far from the exponential bound conjectured by Erdős. It was not until 1998 that a major breakthrough was achieved by Kohayakawa, Prömel and Rödl, who proved the first almost-exponential bound of $r_{\text{ind}}(H)\leq 2^{ck(\log k)^{2}}$ for some constant c. Their approach was to consider a suitable random graph constructed on projective planes and show that it has the desired properties with nonzero probability. The idea of using random graphs on projective planes had also previously been used in studying Ramsey properties with respect to vertex colorings and the induced Ramsey problem on bounded degree graphs H.[31]
Kohayakawa, Prömel and Rödl's bound remained the best general bound for a decade. In 2008, Fox and Sudakov provided an explicit construction for induced Ramsey numbers with the same bound.[32] In fact, they showed that every (n,d,λ)-graph G with small λ and suitable d contains an induced monochromatic copy of any graph on k vertices in any coloring of edges of G in two colors. In particular, for some constant c, the Paley graph on $n\geq 2^{ck\log ^{2}k}$ vertices is such that all of its edge colorings in two colors contain an induced monochromatic copy of every k-vertex graph.
In 2010, Conlon, Fox and Sudakov were able to improve the bound to $r_{\text{ind}}(H)\leq 2^{ck\log k}$, which remains the current best upper bound for general induced Ramsey numbers.[33] Similar to the previous work in 2008, they showed that every (n,d,λ)-graph G with small λ and edge density 1⁄2 contains an induced monochromatic copy of every graph on k vertices in any edge coloring in two colors. Currently, Erdős's conjecture that $r_{\text{ind}}(H)\leq 2^{ck}$ remains open and is one of the important problems in extremal graph theory.
For lower bounds, not much is known in general except for the fact that induced Ramsey numbers must be at least the corresponding Ramsey numbers. Some lower bounds have been obtained for some special cases (see Special Cases).
Special cases
While the general bounds for the induced Ramsey numbers are exponential in the size of the graph, the behaviour is much different on special classes of graphs (in particular, sparse ones). Many of these classes have induced Ramsey numbers polynomial in the number of vertices.
If H is a cycle, path or star on k vertices, it is known that rind(H) is linear in k.[32]
If H is a tree on k vertices, it is known that $r_{\text{ind}}(H)=O(k^{2}\log ^{2}k)$.[34] It is also known that $r_{\text{ind}}(H)$ is superlinear (i.e. $r_{\text{ind}}(H)=\omega (k)$). Note that this is in contrast to the usual Ramsey numbers, where the Burr–Erdős conjecture (now proven) tells us that r(H) is linear (since trees are 1-degenerate).
For graphs H with number of vertices n and bounded degree Δ, it was conjectured that $r_{\text{ind}}(H)\leq cn^{d(\Delta )}$, for some constant d depending only on Δ. This result was first proven by Łuczak and Rödl in 1996, with d(Δ) growing as a tower of twos with height $O(\Delta ^{2})$.[35] More reasonable bounds for d(Δ) were obtained since then. In 2013, Conlon, Fox and Zhao showed using a counting lemma for sparse pseudorandom graphs that $r_{\text{ind}}(H)\leq cn^{2\Delta +8}$, where the exponent is best possible up to constant factors.[36]
Generalizations
Similar to Ramsey numbers, we can generalize the notion of induced Ramsey numbers to hypergraphs and multicolor settings.
More colors
We can also generalize the induced Ramsey's theorem to a multicolor setting. For graphs H1, H2, …, Hr, define rind(H1, H2, …, Hr) to be the minimum number of vertices in a graph G such that any coloring of the edges of G into r colors contain an induced subgraph isomorphic to Hi where all edges are colored in the i-th color for some 1 ≤ i ≤ r. Let rind(H;q) := rind(H, H, …, H) (q copies of H).
It is possible to derive a bound on $r_{\text{ind}}(H;q)$ which is approximately a tower of twos of height ~ log q by iteratively applying the bound for the two-color case. The current best known bound is due to Fox and Sudakov, which achieves $r_{\text{ind}}(H;q)\leq 2^{ck^{3}}$, where k is the number of vertices of H and c is a constant depending only on q.[37]
Hypergraphs
We can extend the definition of induced Ramsey numbers to d-uniform hypergraphs by simply changing the word graph in the statement to hypergraph. Furthermore, we can define the multicolor version of induced Ramsey numbers in the same way as the previous subsection.
Let H be a d-uniform hypergraph with k vertices. Define the tower function $t_{r}(x)$ by letting $t_{1}(x)=x$ and, for $i\geq 1$, $t_{i+1}(x)=2^{t_{i}(x)}$. Using the hypergraph container method, Conlon, Dellamonica, La Fleur, Rödl and Schacht were able to show that for d ≥ 3 and q ≥ 2, $r_{\text{ind}}(H;q)\leq t_{d}(ck)$ for some constant c depending only on d and q. In particular, this result mirrors the best known bound for the usual Ramsey number when d = 3.[38]
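Although nothing in the result hinges on it, the growth of the tower function is easy to see concretely; a one-function Python sketch (an illustration added here):

def tower(i, x):
    """Tower function: t_1(x) = x and t_{i+1}(x) = 2 ** t_i(x)."""
    for _ in range(i - 1):
        x = 2 ** x
    return x

# t_1(3) = 3, t_2(3) = 8, t_3(3) = 256, t_4(3) = 2 ** 256
assert (tower(1, 3), tower(2, 3), tower(3, 3)) == (3, 8, 256)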
Extensions of the theorem
Infinite graphs
A further result, also commonly called Ramsey's theorem, applies to infinite graphs. In a context where finite graphs are also being discussed it is often called the "Infinite Ramsey theorem". As intuition provided by the pictorial representation of a graph is diminished when moving from finite to infinite graphs, theorems in this area are usually phrased in set-theoretic terminology.[39]
Theorem. Let X be some infinite set and colour the elements of $X^{(n)}$ (the subsets of X of size n) in c different colours. Then there exists some infinite subset M of X such that the size n subsets of M all have the same colour.
Proof: The proof is by induction on n, the size of the subsets. For n = 1, the statement is equivalent to saying that if you split an infinite set into a finite number of sets, then one of them is infinite. This is evident. Assuming the theorem is true for n ≤ r, we prove it for n = r + 1. Given a c-colouring of the (r + 1)-element subsets of X, let $a_{0}$ be an element of X and let $Y=X\setminus \{a_{0}\}.$ We then induce a c-colouring of the r-element subsets of Y, by just adding $a_{0}$ to each r-element subset (to get an (r + 1)-element subset of X). By the induction hypothesis, there exists an infinite subset $Y_{1}$ of Y such that every r-element subset of $Y_{1}$ is coloured the same colour in the induced colouring. Thus there is an element $a_{0}$ and an infinite subset $Y_{1}$ such that all the (r + 1)-element subsets of X consisting of $a_{0}$ and r elements of $Y_{1}$ have the same colour. By the same argument, there is an element $a_{1}$ in $Y_{1}$ and an infinite subset $Y_{2}$ of $Y_{1}$ with the same properties. Inductively, we obtain a sequence $\{a_{0},a_{1},a_{2},\ldots \}$ such that the colour of each (r + 1)-element subset $(a_{i(1)},a_{i(2)},\ldots ,a_{i(r+1)})$ with i(1) < i(2) < … < i(r + 1) depends only on the value of i(1). Further, there are infinitely many values of i(n) such that this colour will be the same. Take these $a_{i(n)}$'s to get the desired monochromatic set.
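The induction step above, specialised to n = 2 (graphs), is a greedy procedure that can be run verbatim on a finite vertex set: repeatedly pick a point, record the colour it sees, and keep the largest colour class among the remaining points. A minimal Python sketch (the function name and the parity colouring are assumptions of this edit, purely for illustration):

def greedy_prehomogeneous(vertices, colour):
    """Finite analogue of the induction step: repeatedly pick a point a_i,
    record the colour it sees, and keep the largest colour class among the
    remaining points (the pigeonhole step)."""
    picked = []
    pool = list(vertices)
    while pool:
        a, rest = pool[0], pool[1:]
        classes = {}
        for v in rest:
            classes.setdefault(colour(a, v), []).append(v)
        best = max(classes.values(), key=len, default=[])
        picked.append((a, colour(a, best[0]) if best else None))
        pool = best
    return picked

# Example: colour the pair {u, v} by the parity of u + v.
seq = greedy_prehomogeneous(range(32), lambda u, v: (u + v) % 2)
# Any colour that repeats in the second coordinates yields a monochromatic
# clique: if a_i and a_j (i < j) both "see" colour c, then the pair
# {a_i, a_j} has colour c, because a_j survived a_i's selection.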
A stronger but unbalanced infinite form of Ramsey's theorem for graphs, the Erdős–Dushnik–Miller theorem, states that every infinite graph contains either a countably infinite independent set, or an infinite clique of the same cardinality as the original graph.[40]
Infinite version implies the finite
It is possible to deduce the finite Ramsey theorem from the infinite version by a proof by contradiction. Suppose the finite Ramsey theorem is false. Then there exist integers c, n, T such that for every integer k, there exists a c-colouring of $[k]^{(n)}$ without a monochromatic set of size T. Let $C_{k}$ denote the set of c-colourings of $[k]^{(n)}$ without a monochromatic set of size T.
For any k, the restriction of a colouring in $C_{k+1}$ to $[k]^{(n)}$ (by ignoring the colour of all sets containing k + 1) is a colouring in $C_{k}.$ Define $C_{k}^{1}$ to be the colourings in $C_{k}$ which are restrictions of colourings in $C_{k+1}.$ Since $C_{k+1}$ is not empty, neither is $C_{k}^{1}.$
Similarly, the restriction of any colouring in $C_{k+1}^{1}$ is in $C_{k}^{1}$, allowing one to define $C_{k}^{2}$ as the set of all such restrictions, a non-empty set. Continuing in this way, define $C_{k}^{m}$ for all integers m, k.
Now, for any integer k,
$C_{k}\supseteq C_{k}^{1}\supseteq C_{k}^{2}\supseteq \cdots $
and each set is non-empty. Furthermore, $C_{k}$ is finite, as
$|C_{k}|\leq c^{\frac {k!}{n!(k-n)!}}$
It follows that the intersection of all of these sets is non-empty (a decreasing sequence of non-empty finite sets has non-empty intersection). Let
$D_{k}=C_{k}\cap C_{k}^{1}\cap C_{k}^{2}\cap \cdots $
Then every colouring in $D_{k}$ is the restriction of a colouring in $D_{k+1}.$ Therefore, by extending a colouring in $D_{k}$ to a colouring in $D_{k+1}$ and continuing in this way, one constructs a colouring of $\mathbb {N} ^{(n)}$ without any monochromatic set of size T. This contradicts the infinite Ramsey theorem.
If a suitable topological viewpoint is taken, this argument becomes a standard compactness argument showing that the infinite version of the theorem implies the finite version.[41]
Hypergraphs
The theorem can also be extended to hypergraphs. An m-hypergraph is a graph whose "edges" are sets of m vertices – in a normal graph an edge is a set of 2 vertices. The full statement of Ramsey's theorem for hypergraphs is that for any integers m and c, and any integers n1, …, nc, there is an integer R(n1, …, nc; m) such that if the hyperedges of a complete m-hypergraph of order R(n1, …, nc; m) are coloured with c different colours, then for some i between 1 and c, the hypergraph must contain a complete sub-m-hypergraph of order ni whose hyperedges are all colour i. This theorem is usually proved by induction on m, the 'hyper-ness' of the graph. The base case for the proof is m = 2, which is exactly the theorem above.
For m = 3 we know the exact value of one non-trivial Ramsey number, namely R(4, 4; 3) = 13. This fact was established by Brendan McKay and Stanisław Radziszowski in 1991.[42] Additionally, we have: R(4, 5; 3) ≥ 35,[43] R(4, 6; 3) ≥ 63 and R(5, 5; 3) ≥ 88.[43]
Directed graphs
It is also possible to define Ramsey numbers for directed graphs; these were introduced by P. Erdős and L. Moser (1964). Let R(n) be the smallest number Q such that any complete graph with singly directed arcs (also called a "tournament") and with ≥ Q nodes contains an acyclic (also called "transitive") n-node subtournament.
This is the directed-graph analogue of what (above) has been called R(n, n; 2), the smallest number Z such that any 2-colouring of the edges of a complete undirected graph with ≥ Z nodes, contains a monochromatic complete graph on n nodes. (The directed analogue of the two possible arc colours is the two directions of the arcs, the analogue of "monochromatic" is "all arc-arrows point the same way"; i.e., "acyclic.")
We have R(0) = 0, R(1) = 1, R(2) = 2, R(3) = 4, R(4) = 8, R(5) = 14, R(6) = 28, and 34 ≤ R(7) ≤ 47.[44][45]
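The first nontrivial value R(3) = 4 is small enough to verify by exhaustive search. Below is a minimal Python sketch (an illustration added here, not from the cited sources): every tournament on 4 nodes contains a transitive 3-node subtournament, while the directed 3-cycle shows that 3 nodes do not suffice.

from itertools import combinations, product

def tournaments(n):
    """Yield each tournament on n nodes as a set of directed arcs (u, v)."""
    pairs = list(combinations(range(n), 2))
    for bits in product([0, 1], repeat=len(pairs)):
        yield {(u, v) if b else (v, u) for (u, v), b in zip(pairs, bits)}

def has_transitive_triple(arcs, n):
    """Check for 3 nodes whose induced subtournament is acyclic (transitive)."""
    for a, b, c in combinations(range(n), 3):
        sub = [(u, v) for u, v in arcs if {u, v} <= {a, b, c}]
        # A 3-node tournament is transitive iff it is not a directed 3-cycle,
        # i.e. iff some node has out-degree 2 within the triple.
        outdeg = {a: 0, b: 0, c: 0}
        for u, _ in sub:
            outdeg[u] += 1
        if 2 in outdeg.values():
            return True
    return False

# Every tournament on 4 nodes contains a transitive triple, so R(3) <= 4 ...
assert all(has_transitive_triple(t, 4) for t in tournaments(4))
# ... while the directed 3-cycle has none, so R(3) > 3.
assert not has_transitive_triple({(0, 1), (1, 2), (2, 0)}, 3)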
Ramsey cardinals
Main article: Ramsey cardinal
In terms of the partition calculus Ramsey's theorem can be stated as $\aleph _{0}\rightarrow (\aleph _{0})_{k}^{n}$ for all finite n and k. A Ramsey cardinal, $\kappa $, is a large cardinal axiomatically defined to satisfy the related formula: $\kappa \rightarrow (\kappa )_{2}^{<\omega }$.
Relationship to the axiom of choice
In reverse mathematics, there is a significant difference in proof strength between the version of Ramsey's theorem for infinite graphs (the case n = 2) and for infinite hypergraphs (the case n ≥ 3). The hypergraph version of the theorem is equivalent in strength to the arithmetical comprehension axiom, making it part of the subsystem ACA0 of second-order arithmetic, one of the big five subsystems in reverse mathematics. In contrast, by a theorem of David Seetapun, the graph version of the theorem is weaker than ACA0, and (combining Seetapun's result with others) it does not fall into any of the big five subsystems.[46] Over ZF, however, the graph version implies the classical Kőnig's lemma, whereas the converse implication does not hold,[47] since Kőnig's lemma is equivalent to countable choice from finite sets in this setting.[48]
See also
• Ramsey cardinal
• Paris–Harrington theorem
• Sim (pencil game)
• Infinite Ramsey theory
• Van der Waerden number
• Ramsey game
• Erdős–Rado theorem
Notes
a. up to automorphisms of the graph
1. Some authors restrict the values to be greater than one, for example (Brualdi 2010) and (Harary 1972), thus avoiding a discussion of edge colouring a graph with no edges, while others rephrase the statement of the theorem to require, in a simple graph, either an r-clique or an s-independent set, see (Gross 2008) or (Erdős & Szekeres 1935). In this form, the consideration of graphs with one vertex is more natural.
2. Radziszowski, Stanisław (2011). "Small Ramsey Numbers". Dynamic Surveys. Electronic Journal of Combinatorics. 1000. doi:10.37236/21.
3. Do, Norman (2006). "Party problems and Ramsey theory" (PDF). Austr. Math. Soc. Gazette. 33 (5): 306–312.
4. "Party Acquaintances".
5. "Ramsey Graphs".
6. Joel H. Spencer (1994), Ten Lectures on the Probabilistic Method, SIAM, p. 4, ISBN 978-0-89871-325-1
7. 2.6 Ramsey Theory from Mathematics Illuminated
8. Montanaro, Ashley (2016). "Quantum algorithms: an overview". npj Quantum Information. 2: 15023. arXiv:1511.04206. Bibcode:2016npjQI...215023M. doi:10.1038/npjqi.2015.23. S2CID 2992738 – via Nature.
9. Wang, Hefeng (2016). "Determining Ramsey numbers on a quantum computer". Physical Review A. 93 (3): 032301. arXiv:1510.01884. Bibcode:2016PhRvA..93c2301W. doi:10.1103/PhysRevA.93.032301. S2CID 118724989.
10. McKay, Brendan D.; Radziszowski, Stanislaw P. (May 1995). "R(4,5) = 25" (PDF). Journal of Graph Theory. 19 (3): 309–322. doi:10.1002/jgt.3190190304.
11. Exoo, Geoffrey (March 1989). "A lower bound for R(5, 5)". Journal of Graph Theory. 13 (1): 97–98. doi:10.1002/jgt.3190130113.
12. Vigleik Angeltveit; Brendan McKay (September 2018). "$R(5,5)\leq 48$". Journal of Graph Theory. 89 (1): 5–13. arXiv:1703.08768v2. doi:10.1002/jgt.22235.
13. Brendan D. McKay, Stanisław P. Radziszowski (1997). "Subgraph Counting Identities and Ramsey Numbers" (PDF). Journal of Combinatorial Theory. Series B. 69 (2): 193–209. doi:10.1006/jctb.1996.1741.
14. Stanisław Radziszowski. "DS1". Retrieved 17 August 2023.
15. Exoo, Geoffrey; Tatarevic, Milos (2015). "New Lower Bounds for 28 Classical Ramsey Numbers". Electronic Journal of Combinatorics. 22 (3): 3. arXiv:1504.02403. Bibcode:2015arXiv150402403E. doi:10.37236/5254.
16. Campos, Marcelo; Griffiths, Simon; Morris, Robert; Sahasrabudhe, Julian (2023). "An exponential improvement for diagonal Ramsey". arXiv:2303.09521 [math.CO].
17. Sloman, Leila (2 May 2023). "A Very Big Small Leap Forward in Graph Theory". Quanta Magazine.
18. Ajtai, Miklós; Komlós, János; Szemerédi, Endre (1980-11-01). "A note on Ramsey numbers". Journal of Combinatorial Theory, Series A. 29 (3): 354–360. doi:10.1016/0097-3165(80)90030-8. ISSN 0097-3165.
19. Kim, Jeong Han (1995), "The Ramsey Number R(3,t) has order of magnitude t2/log t", Random Structures and Algorithms, 7 (3): 173–207, CiteSeerX 10.1.1.46.5058, doi:10.1002/rsa.3240070302
20. "The Triangle-Free Process and the Ramsey Number $R(3,k)$". bookstore.ams.org. Retrieved 2023-06-27.
21. Bohman, Tom; Keevash, Peter (2010-08-01). "The early evolution of the H-free process". Inventiones mathematicae. 181 (2): 291–336. arXiv:0908.0429. doi:10.1007/s00222-010-0247-x. ISSN 1432-1297.
22. Mattheus, Sam; Verstraete, Jacques (6 Jun 2023). "The asymptotics of r(4,t)". arXiv:2306.04007 [math.CO].
23. Cepelewicz, Jordana (22 June 2023). "Mathematicians Discover Novel Way to Predict Structure in Graphs". Quanta Magazine.
24. Erdös, Paul (1990), Nešetřil, Jaroslav; Rödl, Vojtěch (eds.), "Problems and Results on Graphs and Hypergraphs: Similarities and Differences", Mathematics of Ramsey Theory, Algorithms and Combinatorics, Berlin, Heidelberg: Springer, pp. 12–28, doi:10.1007/978-3-642-72905-8_2, ISBN 978-3-642-72905-8, retrieved 2023-06-27
25. "Erdős Problems". www.erdosproblems.com. Retrieved 2023-07-12.
26. Erdős, P.; Hajnal, A.; Pósa, L. (1975). "Strong embeddings of graphs into colored graphs". Infinite and Finite Sets, Vol. 1. Colloquia Mathematica Societatis János Bolyai. Vol. 10. North-Holland, Amsterdam/London. pp. 585–595.
27. Deuber, W. (1975). "A generalization of Ramsey's theorem". Infinite and Finite Sets, Vol. 1. Colloquia Mathematica Societatis János Bolyai. Vol. 10. North-Holland, Amsterdam/London. pp. 323–332.
28. Rödl, V. (1973). The dimension of a graph and generalized Ramsey theorems (Master's thesis). Charles University.
29. Erdős, P. (1975). "Problems and results on finite and infinite graphs". Recent advances in graph theory (Proceedings of the Second Czechoslovak Symposium, Prague, 1974). Academia, Prague. pp. 183–192.
30. Erdős, Paul (1984). "On some problems in graph theory, combinatorial analysis and combinatorial number theory" (PDF). Graph Theory and Combinatorics: 1–17.
31. Kohayakawa, Y.; Prömel, H.J.; Rödl, V. (1998). "Induced Ramsey Numbers" (PDF). Combinatorica. 18 (3): 373–404. doi:10.1007/PL00009828.
32. Fox, Jacob; Sudakov, Benny (2008). "Induced Ramsey-type theorems". Advances in Mathematics. 219 (6): 1771–1800. doi:10.1016/j.aim.2008.07.009.
33. Conlon, David; Fox, Jacob; Sudakov, Benny (2012). "On two problems in graph Ramsey theory". Combinatorica. 32 (5): 513–535. arXiv:1002.0045. doi:10.1007/s00493-012-2710-3.
34. Beck, József (1990). "On Size Ramsey Number of Paths, Trees and Circuits. II". In Nešetřil, J.; Rödl, V. (eds.). Mathematics of Ramsey Theory. Algorithms and Combinatorics. Vol. 5. Springer, Berlin, Heidelberg. pp. 34–45. doi:10.1007/978-3-642-72905-8_4. ISBN 978-3-642-72907-2.
35. Łuczak, Tomasz; Rödl, Vojtěch (March 1996). "On induced Ramsey numbers for graphs with bounded maximum degree". Journal of Combinatorial Theory. Series B. 66 (2): 324–333. doi:10.1006/jctb.1996.0025.
36. Conlon, David; Fox, Jacob; Zhao, Yufei (May 2014). "Extremal results in sparse pseudorandom graphs". Advances in Mathematics. 256: 206–29. arXiv:1204.6645. doi:10.1016/j.aim.2013.12.004.
37. Fox, Jacob; Sudakov, Benny (2009). "Density theorems for bipartite graphs and related Ramsey-type results". Combinatorica. 29 (2): 153–196. arXiv:0707.4159v2. doi:10.1007/s00493-009-2475-5.
38. Conlon, David; Dellamonica Jr., Domingos; La Fleur, Steven; Rödl, Vojtěch; Schacht, Mathias (2017). "A note on induced Ramsey numbers". In Loebl, Martin; Nešetřil, Jaroslav; Thomas, Robin (eds.). A Journey Through Discrete Mathematics. Springer, Cham. pp. 357–366. arXiv:1601.01493. doi:10.1007/978-3-319-44479-6_13. ISBN 978-3-319-44478-9.
39. Martin Gould. "Ramsey Theory" (PDF).
40. Dushnik, Ben; Miller, E. W. (1941). "Partially ordered sets". American Journal of Mathematics. 63 (3): 600–610. doi:10.2307/2371374. hdl:10338.dmlcz/100377. JSTOR 2371374. MR 0004862.. See in particular Theorems 5.22 and 5.23.
41. Diestel, Reinhard (2010). "Chapter 8, Infinite Graphs". Graph Theory (4 ed.). Heidelberg: Springer-Verlag. pp. 209–210. ISBN 978-3-662-53621-6.
42. McKay, Brendan D.; Radziszowski, Stanislaw P. (1991). "The First Classical Ramsey Number for Hypergraphs is Computed". Proceedings of the Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA'91: 304–308.
43. Dybizbański, Janusz (2018-12-31). "A lower bound on the hypergraph Ramsey number R(4,5;3)". Contributions to Discrete Mathematics. 13 (2). doi:10.11575/cdm.v13i2.62416. ISSN 1715-0868.
44. Smith, Warren D.; Exoo, Geoff, Partial Answer to Puzzle #27: A Ramsey-like quantity, retrieved 2020-06-02
45. Neiman, David; Mackey, John; Heule, Marijn (2020-11-01). "Tighter Bounds on Directed Ramsey Number R(7)". arXiv:2011.00683 [math.CO].
46. Hirschfeldt, Denis R. (2014). Slicing the Truth. Lecture Notes Series of the Institute for Mathematical Sciences, National University of Singapore. Vol. 28. World Scientific.
47. Blass, Andreas (September 1977). "Ramsey's theorem in the hierarchy of choice principles". The Journal of Symbolic Logic. 42 (3): 387–390. doi:10.2307/2272866. ISSN 1943-5886.
48. Forster, T.E.; Truss, J.K. (January 2007). "Ramsey's theorem and König's Lemma". Archive for Mathematical Logic. 46 (1): 37–42. doi:10.1007/s00153-006-0025-z. ISSN 1432-0665.
References
• Ajtai, Miklós; Komlós, János; Szemerédi, Endre (1980), "A note on Ramsey numbers", J. Combin. Theory Ser. A, 29 (3): 354–360, doi:10.1016/0097-3165(80)90030-8.
• Bohman, Tom; Keevash, Peter (2010), "The early evolution of the H-free process", Invent. Math., 181 (2): 291–336, arXiv:0908.0429, Bibcode:2010InMat.181..291B, doi:10.1007/s00222-010-0247-x, S2CID 2429894
• Brualdi, Richard A. (2010), Introductory Combinatorics (5th ed.), Prentice-Hall, pp. 77–82, ISBN 978-0-13-602040-0
• Conlon, David (2009), "A new upper bound for diagonal Ramsey numbers", Annals of Mathematics, 170 (2): 941–960, arXiv:math/0607788v1, doi:10.4007/annals.2009.170.941, MR 2552114, S2CID 9238219.
• Erdős, Paul (1947), "Some remarks on the theory of graphs", Bull. Amer. Math. Soc., 53 (4): 292–294, doi:10.1090/S0002-9904-1947-08785-1.
• Erdős, P.; Moser, L. (1964), "On the representation of directed graphs as unions of orderings" (PDF), A Magyar Tudományos Akadémia, Matematikai Kutató Intézetének Közleményei, 9: 125–132, MR 0168494
• Erdős, Paul; Szekeres, George (1935), "A combinatorial problem in geometry" (PDF), Compositio Mathematica, 2: 463–470, doi:10.1007/978-0-8176-4842-8_3, ISBN 978-0-8176-4841-1.
• Exoo, G. (1989), "A lower bound for R(5,5)", Journal of Graph Theory, 13: 97–98, doi:10.1002/jgt.3190130113.
• Graham, R.; Rothschild, B.; Spencer, J. H. (1990), Ramsey Theory, New York: John Wiley and Sons.
• Gross, Jonathan L. (2008), Combinatorial Methods with Computer Applications, CRC Press, p. 458, ISBN 978-1-58488-743-0
• Harary, Frank (1972), Graph Theory, Addison-Wesley, pp. 16–17, ISBN 0-201-02787-9
• Ramsey, F. P. (1930), "On a problem of formal logic", Proceedings of the London Mathematical Society, 30: 264–286, doi:10.1112/plms/s2-30.1.264.
• Spencer, J. (1975), "Ramsey's theorem – a new lower bound", J. Combin. Theory Ser. A, 18: 108–115, doi:10.1016/0097-3165(75)90071-0.
• Bian, Zhengbing; Chudak, Fabian; Macready, William G.; Clark, Lane; Gaitan, Frank (2013), "Experimental determination of Ramsey numbers", Physical Review Letters, 111 (13): 130505, arXiv:1201.1842, Bibcode:2013PhRvL.111m0505B, doi:10.1103/PhysRevLett.111.130505, PMID 24116761, S2CID 1303361.
External links
The Wikibook Combinatorics has a page on the topic of: Ramsey numbers
• "Ramsey theorem", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Ramsey@Home is a distributed computing project designed to find new lower bounds for various Ramsey numbers using a host of different techniques.
• The Electronic Journal of Combinatorics dynamic survey of small Ramsey numbers (by Stanisław Radziszowski)
• Ramsey Number – from MathWorld (contains lower and upper bounds up to R(19, 19))
• Ramsey Number – Geoffrey Exoo (Contains R(5, 5) > 42 counter-proof)
Ultrafilter on a set
In the mathematical field of set theory, an ultrafilter on a set $X$ is a maximal filter on the set $X.$ In other words, it is a collection of subsets of $X$ that satisfies the definition of a filter on $X$ and that is maximal with respect to inclusion, in the sense that there does not exist a strictly larger collection of subsets of $X$ that is also a filter. (In the above, by definition a filter on a set does not contain the empty set.) Equivalently, an ultrafilter on the set $X$ can also be characterized as a filter on $X$ with the property that for every subset $A$ of $X$ either $A$ or its complement $X\setminus A$ belongs to the ultrafilter.
This article is about specific collections of subsets of a given set. For more general ultrafilters on partially ordered sets, see Ultrafilter. For the physical device, see ultrafiltration.
Ultrafilters on sets are an important special instance of ultrafilters on partially ordered sets, where the partially ordered set consists of the power set $\wp (X)$ and the partial order is subset inclusion $\,\subseteq .$ This article deals specifically with ultrafilters on a set and does not cover the more general notion.
There are two types of ultrafilter on a set. A principal ultrafilter on $X$ is the collection of all subsets of $X$ that contain a fixed element $x\in X$. The ultrafilters that are not principal are the free ultrafilters. The existence of free ultrafilters on any infinite set is implied by the ultrafilter lemma, which can be proven in ZFC. On the other hand, there exist models of ZF where every ultrafilter on a set is principal.
Ultrafilters have many applications in set theory, model theory, and topology.[1]: 186  Usually, only free ultrafilters lead to non-trivial constructions. For example, an ultraproduct modulo a principal ultrafilter is always isomorphic to one of the factors, while an ultraproduct modulo a free ultrafilter usually has a more complex structure.
Definitions
See also: Filter (mathematics) and Ultrafilter
Given an arbitrary set $X,$ an ultrafilter on $X$ is a non-empty family $U$ of subsets of $X$ such that:
1. Proper or non-degenerate: The empty set is not an element of $U.$
2. Upward closed in $X$: If $A\in U$ and if $B\subseteq X$ is any superset of $A$ (that is, if $A\subseteq B\subseteq X$) then $B\in U.$
3. π-system: If $A$ and $B$ are elements of $U$ then so is their intersection $A\cap B.$
4. If $A\subseteq X$ then either $A$ or its complement $X\setminus A$ is an element of $U.$[note 1]
Properties (1), (2), and (3) are the defining properties of a filter on $X.$ Some authors do not include non-degeneracy (which is property (1) above) in their definition of "filter". However, the definition of "ultrafilter" (and also of "prefilter" and "filter subbase") always includes non-degeneracy as a defining condition. This article requires that all filters be proper although a filter might be described as "proper" for emphasis.
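On a finite set, properties (1)–(4) can be checked by brute force. The following minimal Python sketch (the representation and function names are choices made for this illustration) tests whether a family of subsets of a finite set $X$ is an ultrafilter:

from itertools import combinations

def powerset(X):
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

def is_ultrafilter(X, U):
    """Brute-force check of properties (1)-(4) for a family U of
    frozensets, each a subset of the finite set X."""
    X, U = frozenset(X), set(U)
    if not U or frozenset() in U:                       # (1) proper
        return False
    if any(A <= B and B not in U                        # (2) upward closed
           for A in U for B in powerset(X)):
        return False
    if any(A & B not in U for A in U for B in U):       # (3) pi-system
        return False
    return all(A in U or X - A in U                     # (4) A or X \ A
               for A in powerset(X))

# The principal ultrafilter at a point x is {S subseteq X : x in S}.
X = {0, 1, 2}
assert is_ultrafilter(X, {S for S in powerset(X) if 0 in S})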
A filter subbase is a non-empty family of sets that has the finite intersection property (i.e. all finite intersections are non-empty). Equivalently, a filter subbase is a non-empty family of sets that is contained in some (proper) filter. The smallest (relative to $\subseteq $) filter containing a given filter subbase is said to be generated by the filter subbase.
The upward closure in $X$ of a family of sets $P$ is the set
$P^{\uparrow X}:=\{S:A\subseteq S\subseteq X{\text{ for some }}A\in P\}.$
A prefilter or filter base is a non-empty and proper (i.e. $\varnothing \not \in P$) family of sets $P$ that is downward directed, which means that if $B,C\in P$ then there exists some $A\in P$ such that $A\subseteq B\cap C.$ Equivalently, a prefilter is any family of sets $P$ whose upward closure $P^{\uparrow X}$ is a filter, in which case this filter is called the filter generated by $P$ and $P$ is said to be a filter base for $P^{\uparrow X}.$
The dual in $X$[2] of a family of sets $P$ is the set $X\setminus P:=\{X\setminus B:B\in P\}.$ For example, the dual of the power set $\wp (X)$ is itself: $X\setminus \wp (X)=\wp (X).$ A family of sets is a proper filter on $X$ if and only if its dual is a proper ideal on $X$ ("proper" means not equal to the power set).
Generalization to ultra prefilters
A family $U\neq \varnothing $ of subsets of $X$ is called ultra if $\varnothing \not \in U$ and any of the following equivalent conditions are satisfied:[2][3]
1. For every set $S\subseteq X$ there exists some set $B\in U$ such that $B\subseteq S$ or $B\subseteq X\setminus S$ (or equivalently, such that $B\cap S$ equals $B$ or $\varnothing $).
2. For every set $S\subseteq \bigcup \limits _{B\in U}B$ there exists some set $B\in U$ such that $B\cap S$ equals $B$ or $\varnothing .$
• Here, $\bigcup \limits _{B\in U}B$ is defined to be the union of all sets in $U.$
• This characterization of "$U$ is ultra" does not depend on the set $X,$ so mentioning the set $X$ is optional when using the term "ultra."
3. For every set $S$ (not necessarily even a subset of $X$) there exists some set $B\in U$ such that $B\cap S$ equals $B$ or $\varnothing .$
• If $U$ satisfies this condition then so does every superset $V\supseteq U.$ In particular, a set $V$ is ultra if and only if $\varnothing \not \in V$ and $V$ contains as a subset some ultra family of sets.
A filter subbase that is ultra is necessarily a prefilter.[proof 1]
The ultra property can now be used to define both ultrafilters and ultra prefilters:
An ultra prefilter[2][3] is a prefilter that is ultra. Equivalently, it is a filter subbase that is ultra.
An ultrafilter[2][3] on $X$ is a (proper) filter on $X$ that is ultra. Equivalently, it is any filter on $X$ that is generated by an ultra prefilter.
Ultra prefilters as maximal prefilters
To characterize ultra prefilters in terms of "maximality," the following relation is needed.
Given two families of sets $M$ and $N,$ the family $M$ is said to be coarser[4][5] than $N,$ and $N$ is finer than and subordinate to $M,$ written $M\leq N$ or N ⊢ M, if for every $C\in M,$ there is some $F\in N$ such that $F\subseteq C.$ The families $M$ and $N$ are called equivalent if $M\leq N$ and $N\leq M.$ The families $M$ and $N$ are comparable if one of these sets is finer than the other.[4]
The subordination relationship, i.e. $\,\geq ,\,$ is a preorder so the above definition of "equivalent" does form an equivalence relation. If $M\subseteq N$ then $M\leq N$ but the converse does not hold in general. However, if $N$ is upward closed, such as a filter, then $M\leq N$ if and only if $M\subseteq N.$ Every prefilter is equivalent to the filter that it generates. This shows that it is possible for filters to be equivalent to sets that are not filters.
If two families of sets $M$ and $N$ are equivalent then either both $M$ and $N$ are ultra (resp. prefilters, filter subbases) or otherwise neither one of them is ultra (resp. a prefilter, a filter subbase). In particular, if a filter subbase is not also a prefilter, then it is not equivalent to the filter or prefilter that it generates. If $M$ and $N$ are both filters on $X$ then $M$ and $N$ are equivalent if and only if $M=N.$ If a proper filter (resp. ultrafilter) is equivalent to a family of sets $M$ then $M$ is necessarily a prefilter (resp. ultra prefilter). Using the following characterization, it is possible to define prefilters (resp. ultra prefilters) using only the concept of filters (resp. ultrafilters) and subordination:
An arbitrary family of sets is a prefilter if and only if it is equivalent to a (proper) filter.
An arbitrary family of sets is an ultra prefilter if and only if it is equivalent to an ultrafilter.
A maximal prefilter on $X$[2][3] is a prefilter $U\subseteq \wp (X)$ that satisfies any of the following equivalent conditions:
1. $U$ is ultra.
2. $U$ is maximal on $\operatorname {Prefilters} (X)$ with respect to $\,\leq ,$ meaning that if $P\in \operatorname {Prefilters} (X)$ satisfies $U\leq P$ then $P\leq U.$[3]
3. There is no prefilter properly subordinate to $U.$[3]
4. If a (proper) filter $F$ on $X$ satisfies $U\leq F$ then $F\leq U.$
5. The filter on $X$ generated by $U$ is ultra.
Characterizations
There are no ultrafilters on the empty set, so it is henceforth assumed that $X$ is nonempty.
A filter subbase $U$ on $X$ is an ultrafilter on $X$ if and only if any of the following equivalent conditions hold:[2][3]
1. for any $S\subseteq X,$ either $S\in U$ or $X\setminus S\in U.$
2. $U$ is a maximal filter subbase on $X,$ meaning that if $F$ is any filter subbase on $X$ then $U\subseteq F$ implies $U=F.$[6]
A (proper) filter $U$ on $X$ is an ultrafilter on $X$ if and only if any of the following equivalent conditions hold:
1. $U$ is ultra;
2. $U$ is generated by an ultra prefilter;
3. For any subset $S\subseteq X,$ $S\in U$ or $X\setminus S\in U.$[6]
• So an ultrafilter $U$ decides for every $S\subseteq X$ whether $S$ is "large" (i.e. $S\in U$) or "small" (i.e. $X\setminus S\in U$).[7]
4. For each subset $A\subseteq X,$ either[note 1] $A$ is in $U$ or ($X\setminus A$) is.
5. $U\cup (X\setminus U)=\wp (X).$ This condition can be restated as: $\wp (X)$ is partitioned by $U$ and its dual $X\setminus U.$
• The sets $P$ and $X\setminus P$ are disjoint for all prefilters $P$ on $X.$
6. $\wp (X)\setminus U=\left\{S\in \wp (X):S\not \in U\right\}$ is an ideal on $X.$[6]
7. For any finite family $S_{1},\ldots ,S_{n}$ of subsets of $X$ (where $n\geq 1$), if $S_{1}\cup \cdots \cup S_{n}\in U$ then $S_{i}\in U$ for some index $i.$
• In words, a "large" set cannot be a finite union of sets none of which is large.[8]
8. For any $R,S\subseteq X,$ if $R\cup S=X$ then $R\in U$ or $S\in U.$
9. For any $R,S\subseteq X,$ if $R\cup S\in U$ then $R\in U$ or $S\in U$ (a filter with this property is called a prime filter).
10. For any $R,S\subseteq X,$ if $R\cup S\in U$ and $R\cap S=\varnothing $ then either $R\in U$ or $S\in U.$
11. $U$ is a maximal filter; that is, if $F$ is a filter on $X$ such that $U\subseteq F$ then $U=F.$ Equivalently, $U$ is a maximal filter if there is no filter $F$ on $X$ that contains $U$ as a proper subset (that is, no filter is strictly finer than $U$).[6]
Grills and filter-grills
If ${\mathcal {B}}\subseteq \wp (X)$ then its grill on $X$ is the family
${\mathcal {B}}^{\#X}:=\{S\subseteq X~:~S\cap B\neq \varnothing {\text{ for all }}B\in {\mathcal {B}}\}$
where ${\mathcal {B}}^{\#}$ may be written if $X$ is clear from context. For example, $\varnothing ^{\#}=\wp (X)$ and if $\varnothing \in {\mathcal {B}}$ then ${\mathcal {B}}^{\#}=\varnothing .$ If ${\mathcal {A}}\subseteq {\mathcal {B}}$ then ${\mathcal {B}}^{\#}\subseteq {\mathcal {A}}^{\#}$ and moreover, if ${\mathcal {B}}$ is a filter subbase then ${\mathcal {B}}\subseteq {\mathcal {B}}^{\#}.$[9] The grill ${\mathcal {B}}^{\#X}$ is upward closed in $X$ if and only if $\varnothing \not \in {\mathcal {B}},$ which will henceforth be assumed. Moreover, ${\mathcal {B}}^{\#\#}={\mathcal {B}}^{\uparrow X}$ so that ${\mathcal {B}}$ is upward closed in $X$ if and only if ${\mathcal {B}}^{\#\#}={\mathcal {B}}.$
The grill of a filter on $X$ is called a filter-grill on $X.$[9] For any $\varnothing \neq {\mathcal {B}}\subseteq \wp (X),$ ${\mathcal {B}}$ is a filter-grill on $X$ if and only if (1) ${\mathcal {B}}$ is upward closed in $X$ and (2) for all sets $R$ and $S,$ if $R\cup S\in {\mathcal {B}}$ then $R\in {\mathcal {B}}$ or $S\in {\mathcal {B}}.$ The grill operation ${\mathcal {F}}\mapsto {\mathcal {F}}^{\#X}$ induces a bijection
${\bullet }^{\#X}~:~\operatorname {Filters} (X)\to \operatorname {FilterGrills} (X)$
whose inverse is also given by ${\mathcal {F}}\mapsto {\mathcal {F}}^{\#X}.$[9] If ${\mathcal {F}}\in \operatorname {Filters} (X)$ then ${\mathcal {F}}$ is a filter-grill on $X$ if and only if ${\mathcal {F}}={\mathcal {F}}^{\#X},$[9] or equivalently, if and only if ${\mathcal {F}}$ is an ultrafilter on $X.$[9] That is, a filter on $X$ is a filter-grill if and only if it is ultra. For any non-empty ${\mathcal {F}}\subseteq \wp (X),$ ${\mathcal {F}}$ is both a filter on $X$ and a filter-grill on $X$ if and only if (1) $\varnothing \not \in {\mathcal {F}}$ and (2) for all $R,S\subseteq X,$ the following equivalences hold:
$R\cup S\in {\mathcal {F}}$ if and only if $R,S\in {\mathcal {F}}$ if and only if $R\cap S\in {\mathcal {F}}.$[9]
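Because the grill involves only finitely many intersection tests on a finite set, the identity ${\mathcal {B}}^{\#\#}={\mathcal {B}}^{\uparrow X}$ above can be checked experimentally. A minimal Python sketch (function names are choices made for this illustration):

from itertools import combinations

def powerset(X):
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

def grill(X, B):
    """All S subseteq X meeting every member of B."""
    return {S for S in powerset(X) if all(S & b for b in B)}

def upward_closure(X, B):
    return {S for S in powerset(X) if any(b <= S for b in B)}

X = frozenset({0, 1, 2})
B = {frozenset({0}), frozenset({1, 2})}   # does not contain the empty set
assert grill(X, grill(X, B)) == upward_closure(X, B)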
Free or principal
If $P$ is any non-empty family of sets then the Kernel of $P$ is the intersection of all sets in $P:$[10]
$\operatorname {ker} P:=\bigcap _{B\in P}B.$
A non-empty family of sets $P$ is called:
• free if $\operatorname {ker} P=\varnothing $ and fixed otherwise (that is, if $\operatorname {ker} P\neq \varnothing $).
• principal if $\operatorname {ker} P\in P.$
• principal at a point if $\operatorname {ker} P\in P$ and $\operatorname {ker} P$ is a singleton set; in this case, if $\operatorname {ker} P=\{x\}$ then $P$ is said to be principal at $x.$
If a family of sets $P$ is fixed then $P$ is ultra if and only if some element of $P$ is a singleton set, in which case $P$ will necessarily be a prefilter. Every principal prefilter is fixed, so a principal prefilter $P$ is ultra if and only if $\operatorname {ker} P$ is a singleton set. A singleton set is ultra if and only if its sole element is also a singleton set.
The next theorem shows that every ultrafilter falls into one of two categories: either it is free or else it is a principal filter generated by a single point.
Proposition — If $U$ is an ultrafilter on $X$ then the following are equivalent:
1. $U$ is fixed, or equivalently, not free.
2. $U$ is principal.
3. Some element of $U$ is a finite set.
4. Some element of $U$ is a singleton set.
5. $U$ is principal at some point of $X,$ which means $\operatorname {ker} U=\{x\}\in U$ for some $x\in X.$
6. $U$ does not contain the Fréchet filter on $X$ as a subset.
7. $U$ is sequential.[9]
Every filter on $X$ that is principal at a single point is an ultrafilter, and if in addition $X$ is finite, then there are no ultrafilters on $X$ other than these.[10] In particular, if a set $X$ has finite cardinality $n<\infty ,$ then there are exactly $n$ ultrafilters on $X$ and those are the ultrafilters generated by each singleton subset of $X.$ Consequently, free ultrafilters can only exist on an infinite set.
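This count can be verified exhaustively for small sets. The following self-contained Python sketch (an illustration added here, restating the brute-force test from the Definitions section so that it runs on its own) enumerates all $2^{8}=256$ families of subsets of a 3-element set and confirms that exactly three of them are ultrafilters, namely the principal ones:

from itertools import combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(4) for c in combinations(X, r)]

def is_ultrafilter(U):
    U = set(U)
    if not U or frozenset() in U:
        return False
    if any(A <= B and B not in U for A in U for B in subsets):
        return False
    if any(A & B not in U for A in U for B in U):
        return False
    return all(S in U or X - S in U for S in subsets)

families = [frozenset(f) for r in range(len(subsets) + 1)
            for f in combinations(subsets, r)]
ultrafilters = [U for U in families if is_ultrafilter(U)]

# Exactly |X| = 3 ultrafilters, each principal at a point of X.
assert len(ultrafilters) == 3
assert all(any(U == frozenset(S for S in subsets if x in S) for x in X)
           for U in ultrafilters)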
Examples, properties, and sufficient conditions
If $X$ is an infinite set then there are as many ultrafilters over $X$ as there are families of subsets of $X;$ explicitly, if $X$ has infinite cardinality $\kappa $ then the set of ultrafilters over $X$ has the same cardinality as $\wp (\wp (X));$ that cardinality being $2^{2^{\kappa }}.$[11]
If $U$ and $S$ are families of sets such that $U$ is ultra, $\varnothing \not \in S,$ and $U\leq S,$ then $S$ is necessarily ultra. A filter subbase $U$ that is not a prefilter cannot be ultra; but it is nevertheless still possible for the prefilter and filter generated by $U$ to be ultra.
Suppose $U\subseteq \wp (X)$ is ultra and $Y$ is a set. The trace $U\vert _{Y}:=\{B\cap Y:B\in U\}$ is ultra if and only if it does not contain the empty set. Furthermore, at least one of the sets $U\vert _{Y}\setminus \{\varnothing \}$ and $U\vert _{X\setminus Y}\setminus \{\varnothing \}$ will be ultra (this result extends to any finite partition of $X$). If $F_{1},\ldots ,F_{n}$ are filters on $X,$ $U$ is an ultrafilter on $X,$ and $F_{1}\cap \cdots \cap F_{n}\leq U,$ then there is some $F_{i}$ that satisfies $F_{i}\leq U.$[12] This result is not necessarily true for an infinite family of filters.[12]
The image under a map $f:X\to Y$ of an ultra set $U\subseteq \wp (X)$ is again ultra and if $U$ is an ultra prefilter then so is $f(U).$ The property of being ultra is preserved under bijections. However, the preimage of an ultrafilter is not necessarily ultra, not even if the map is surjective. For example, if $X$ has more than one point and if the range of $f:X\to Y$ consists of a single point $\{y\}$ then $\{y\}$ is an ultra prefilter on $Y$ but its preimage is not ultra. Alternatively, if $U$ is a principal filter generated by a point in $Y\setminus f(X)$ then the preimage of $U$ contains the empty set and so is not ultra.
The elementary filter induced by an infinite sequence, all of whose points are distinct, is not an ultrafilter.[12] If $n=2,$ then $U_{n}$ denotes the set consisting of all subsets of $X$ having cardinality $n,$ and if $X$ contains at least $2n-1$ ($=3$) distinct points, then $U_{n}$ is ultra but it is not contained in any prefilter. This example generalizes to any integer $n>1$ and also to $n=1$ if $X$ contains more than one element. Ultra sets that are not also prefilters are rarely used.
For every $S\subseteq X\times X$ and every $a\in X,$ let $S{\big \vert }_{\{a\}\times X}:=\{y\in X~:~(a,y)\in S\}.$ If ${\mathcal {U}}$ is an ultrafilter on $X$ then the set of all $S\subseteq X\times X$ such that $\left\{a\in X~:~S{\big \vert }_{\{a\}\times X}\in {\mathcal {U}}\right\}\in {\mathcal {U}}$ is an ultrafilter on $X\times X.$[13]
Monad structure
The functor associating to any set $X$ the set $U(X)$ of all ultrafilters on $X$ forms a monad called the ultrafilter monad. The unit map
$X\to U(X)$
sends any element $x\in X$ to the principal ultrafilter given by $x.$
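For a finite set the unit map can be written down directly; a minimal illustrative sketch (the helper name is an assumption of this edit):

from itertools import combinations

def unit(X, x):
    """Unit of the ultrafilter monad on a finite set: x |-> the principal
    ultrafilter at x, i.e. all subsets of X containing x."""
    X = list(X)
    return {frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r) if x in c}

assert frozenset({0, 1}) in unit({0, 1, 2}, 0)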
This ultrafilter monad is the codensity monad of the inclusion of the category of finite sets into the category of all sets,[14] which gives a conceptual explanation of this monad.
Similarly, the ultraproduct monad is the codensity monad of the inclusion of the category of finite families of sets into the category of all families of sets. So in this sense, ultraproducts are categorically inevitable.[14]
The ultrafilter lemma
The ultrafilter lemma was first proved by Alfred Tarski in 1930.[13]
The ultrafilter lemma/principle/theorem[4] — Every proper filter on a set $X$ is contained in some ultrafilter on $X.$
The ultrafilter lemma is equivalent to each of the following statements:
1. For every prefilter on a set $X,$ there exists a maximal prefilter on $X$ subordinate to it.[2]
2. Every proper filter subbase on a set $X$ is contained in some ultrafilter on $X.$
A consequence of the ultrafilter lemma is that every filter is equal to the intersection of all ultrafilters containing it.[15][note 2]
The following results can be proven using the ultrafilter lemma. A free ultrafilter exists on a set $X$ if and only if $X$ is infinite. Since there are filters that are not ultra, the preceding intersection result shows that the intersection of a family of ultrafilters need not be ultra.[4] A family of sets $\mathbb {F} \neq \varnothing $ can be extended to a free ultrafilter if and only if the intersection of any finite family of elements of $\mathbb {F} $ is infinite.
Relationships to other statements under ZF
See also: Boolean prime ideal theorem and Set-theoretic topology
Throughout this section, ZF refers to Zermelo–Fraenkel set theory and ZFC refers to ZF with the Axiom of Choice (AC). The ultrafilter lemma is independent of ZF. That is, there exist models in which the axioms of ZF hold but the ultrafilter lemma does not. There also exist models of ZF in which every ultrafilter is necessarily principal.
Every filter that contains a singleton set is necessarily an ultrafilter and given $x\in X,$ the definition of the discrete ultrafilter $\{S\subseteq X:x\in S\}$ does not require more than ZF. If $X$ is finite then every ultrafilter is a discrete filter at a point; consequently, free ultrafilters can only exist on infinite sets. In particular, if $X$ is finite then the ultrafilter lemma can be proven from the axioms of ZF. The existence of free ultrafilters on infinite sets can be proven if the axiom of choice is assumed. More generally, the ultrafilter lemma can be proven by using the axiom of choice, which in brief states that any Cartesian product of non-empty sets is non-empty. Under ZF, the axiom of choice is, in particular, equivalent to (a) Zorn's lemma, (b) Tychonoff's theorem, (c) the weak form of the vector basis theorem (which states that every vector space has a basis), (d) the strong form of the vector basis theorem, and other statements. However, the ultrafilter lemma is strictly weaker than the axiom of choice. While free ultrafilters can be proven to exist, it is not possible to construct an explicit example of a free ultrafilter (using only ZF and the ultrafilter lemma); that is, free ultrafilters are intangible.[16] Alfred Tarski proved that under ZFC, the cardinality of the set of all free ultrafilters on an infinite set $X$ is equal to the cardinality of $\wp (\wp (X)),$ where $\wp (X)$ denotes the power set of $X.$[17] Other authors attribute this discovery to Bedřich Pospíšil (following a combinatorial argument from Fichtenholz and Kantorovitch, improved by Hausdorff).[18][19]
Under ZF, the axiom of choice can be used to prove both the ultrafilter lemma and the Krein–Milman theorem; conversely, under ZF, the ultrafilter lemma together with the Krein–Milman theorem can prove the axiom of choice.[20]
Statements that cannot be deduced
The ultrafilter lemma is a relatively weak axiom. For example, each of the statements in the following list cannot be deduced from ZF together with only the ultrafilter lemma:
1. A countable union of countable sets is a countable set.
2. The axiom of countable choice (ACC).
3. The axiom of dependent choice (ADC).
Equivalent statements
Under ZF, the ultrafilter lemma is equivalent to each of the following statements:[21]
1. The Boolean prime ideal theorem (BPIT).
2. Stone's representation theorem for Boolean algebras.
3. Any product of Boolean spaces is a Boolean space.[22]
4. Boolean Prime Ideal Existence Theorem: Every nondegenerate Boolean algebra has a prime ideal.[23]
5. Tychonoff's theorem for Hausdorff spaces: Any product of compact Hausdorff spaces is compact.[22]
6. If $\{0,1\}$ is endowed with the discrete topology then for any set $I,$ the product space $\{0,1\}^{I}$ is compact.[22]
7. Each of the following versions of the Banach–Alaoglu theorem is equivalent to the ultrafilter lemma:
1. Any equicontinuous set of scalar-valued maps on a topological vector space (TVS) is relatively compact in the weak-* topology (that is, it is contained in some weak-* compact set).[24]
2. The polar of any neighborhood of the origin in a TVS $X$ is a weak-* compact subset of its continuous dual space.[24]
3. The closed unit ball in the continuous dual space of any normed space is weak-* compact.[24]
• If the normed space is separable then the ultrafilter lemma is sufficient but not necessary to prove this statement.
8. A topological space $X$ is compact if every ultrafilter on $X$ converges to some limit.[25]
9. A topological space $X$ is compact if and only if every ultrafilter on $X$ converges to some limit.[25]
• The addition of the words "and only if" is the only difference between this statement and the one immediately above it.
10. The Alexander subbase theorem.[26][27]
11. The Ultranet lemma: Every net has a universal subnet.[27]
• By definition, a net in $X$ is called an ultranet or a universal net if for every subset $S\subseteq X,$ the net is eventually in $S$ or in $X\setminus S.$
12. A topological space $X$ is compact if and only if every ultranet on $X$ converges to some limit.[25]
• If the words "and only if" are removed then the resulting statement remains equivalent to the ultrafilter lemma.[25]
13. A convergence space $X$ is compact if every ultrafilter on $X$ converges.[25]
14. A uniform space is compact if it is complete and totally bounded.[25]
15. The Stone–Čech compactification Theorem.[22]
16. Each of the following versions of the compactness theorem is equivalent to the ultrafilter lemma:
1. If $\Sigma $ is a set of first-order sentences such that every finite subset of $\Sigma $ has a model, then $\Sigma $ has a model.[28]
2. If $\Sigma $ is a set of zero-order sentences such that every finite subset of $\Sigma $ has a model, then $\Sigma $ has a model.[28]
17. The completeness theorem: If $\Sigma $ is a set of zero-order sentences that is syntactically consistent, then it has a model (that is, it is semantically consistent).
Weaker statements
Any statement that can be deduced from the ultrafilter lemma (together with ZF) is said to be weaker than the ultrafilter lemma. A weaker statement is said to be strictly weaker if under ZF, it is not equivalent to the ultrafilter lemma. Under ZF, the ultrafilter lemma implies each of the following statements:
1. The Axiom of Choice for Finite sets (ACF): Given $I\neq \varnothing $ and a family $\left(X_{i}\right)_{i\in I}$ of non-empty finite sets, their product $\prod \limits _{i\in I}X_{i}$ is not empty.[27]
2. A countable union of finite sets is a countable set.
• However, ZF with the ultrafilter lemma is too weak to prove that a countable union of countable sets is a countable set.
3. The Hahn–Banach theorem.[27]
• In ZF, the Hahn–Banach theorem is strictly weaker than the ultrafilter lemma.
4. The Banach–Tarski paradox.
• In fact, under ZF, the Banach–Tarski paradox can be deduced from the Hahn–Banach theorem,[29][30] which is strictly weaker than the Ultrafilter Lemma.
5. Every set can be linearly ordered.
6. Every field has a unique algebraic closure.
7. Non-trivial ultraproducts exist.
8. The weak ultrafilter theorem: A free ultrafilter exists on $\mathbb {N} .$
• Under ZF, the weak ultrafilter theorem does not imply the ultrafilter lemma; that is, it is strictly weaker than the ultrafilter lemma.
9. There exists a free ultrafilter on every infinite set;
• This statement is actually strictly weaker than the ultrafilter lemma.
• ZF alone does not even imply that there exists a non-principal ultrafilter on some set.
Completeness
The completeness of an ultrafilter $U$ on a powerset is the smallest cardinal κ such that there are κ elements of $U$ whose intersection is not in $U.$ The definition of an ultrafilter implies that the completeness of any powerset ultrafilter is at least $\aleph _{0}$. An ultrafilter whose completeness is greater than $\aleph _{0}$—that is, the intersection of any countable collection of elements of $U$ is still in $U$—is called countably complete or σ-complete.
The completeness of a countably complete nonprincipal ultrafilter on a powerset is always a measurable cardinal.
Ordering on ultrafilters
The Rudin–Keisler ordering (named after Mary Ellen Rudin and Howard Jerome Keisler) is a preorder on the class of powerset ultrafilters defined as follows: if $U$ is an ultrafilter on $\wp (X),$ and $V$ an ultrafilter on $\wp (Y),$ then $V\leq {}_{RK}U$ if there exists a function $f:X\to Y$ such that
$C\in V$ if and only if $f^{-1}[C]\in U$
for every subset $C\subseteq Y.$
Ultrafilters $U$ and $V$ are called Rudin–Keisler equivalent, denoted U ≡RK V, if there exist sets $A\in U$ and $B\in V$ and a bijection $f:A\to B$ that satisfies the condition above. (If $X$ and $Y$ have the same cardinality, the definition can be simplified by fixing $A=X,$ $B=Y.$)
It is known that ≡RK is the kernel of ≤RK, i.e., that U ≡RK V if and only if $U\leq {}_{RK}V$ and $V\leq {}_{RK}U.$[31]
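Equivalently, the condition $V\leq {}_{RK}U$ says that $V$ is the pushforward $f_{*}(U)=\left\{C\subseteq Y:f^{-1}[C]\in U\right\}$ of $U$ along some $f:X\to Y.$ A minimal illustrative Python sketch for finite (hence principal) ultrafilters (function names are assumptions of this edit):

from itertools import combinations

def powerset(X):
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

def principal(X, x):
    """The principal ultrafilter on X at the point x."""
    return {S for S in powerset(X) if x in S}

def pushforward(U, f, X, Y):
    """f_*(U) = {C subseteq Y : f^{-1}[C] in U}."""
    return {C for C in powerset(Y)
            if frozenset(x for x in X if f(x) in C) in U}

X, Y = frozenset({0, 1, 2, 3}), frozenset({0, 1})
f = lambda x: x % 2
# Pushing the principal ultrafilter at 3 forward lands on the one at f(3) = 1.
assert pushforward(principal(X, 3), f, X, Y) == principal(Y, 1)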
Ultrafilters on ℘(ω)
There are several special properties that an ultrafilter on $\wp (\omega ),$ where $\omega $ is the set of natural numbers, may possess, which prove useful in various areas of set theory and topology.
• A non-principal ultrafilter $U$ is called a P-point (or weakly selective) if for every partition $\left\{C_{n}:n<\omega \right\}$ of $\omega $ such that for all $n<\omega ,$ $C_{n}\not \in U,$ there exists some $A\in U$ such that $A\cap C_{n}$ is a finite set for each $n.$
• A non-principal ultrafilter $U$ is called Ramsey (or selective) if for every partition $\left\{C_{n}:n<\omega \right\}$ of $\omega $ such that for all $n<\omega ,$ $C_{n}\not \in U,$ there exists some $A\in U$ such that $A\cap C_{n}$ is a singleton set for each $n.$
It is a trivial observation that all Ramsey ultrafilters are P-points. Walter Rudin proved that the continuum hypothesis implies the existence of Ramsey ultrafilters.[32] In fact, many hypotheses imply the existence of Ramsey ultrafilters, including Martin's axiom. Saharon Shelah later showed that it is consistent that there are no P-point ultrafilters.[33] Therefore, the existence of these types of ultrafilters is independent of ZFC.
P-points are so called because they are topological P-points in the usual topology of the space βω \ ω of non-principal ultrafilters. The name Ramsey comes from Ramsey's theorem. To see why, one can prove that an ultrafilter is Ramsey if and only if for every 2-coloring of $[\omega ]^{2}$ there exists an element of the ultrafilter that has a homogeneous color.
An ultrafilter on $\wp (\omega )$ is Ramsey if and only if it is minimal in the Rudin–Keisler ordering of non-principal powerset ultrafilters.[34]
See also
• Extender (set theory) – in set theory, a system of ultrafilters representing an elementary embedding witnessing large cardinal properties
• Filter (mathematics) – In mathematics, a special subset of a partially ordered set
• Filter (set theory) – Family of sets representing "large" sets
• Filters in topology – Use of filters to describe and characterize all basic topological notions and results.
• Łoś's theorem – Mathematical construction
• Ultrafilter – Maximal proper filter
• Universal net – A generalization of a sequence of points
Notes
1. Properties 1 and 3 imply that $A$ and $X\setminus A$ cannot both be elements of $U.$
2. Let ${\mathcal {F}}$ be a filter on $X$ that is not an ultrafilter. If $S\subseteq X$ is such that $S\not \in {\mathcal {F}}$ then $\{X\setminus S\}\cup {\mathcal {F}}$ has the finite intersection property (because if $F\in {\mathcal {F}}$ then $F\cap (X\setminus S)=\varnothing $ if and only if $F\subseteq S$) so that by the ultrafilter lemma, there exists some ultrafilter ${\mathcal {U}}_{S}$ on $X$ such that $\{X\setminus S\}\cup {\mathcal {F}}\subseteq {\mathcal {U}}_{S}$ (so in particular $S\not \in {\mathcal {U}}_{S}$). It follows that ${\mathcal {F}}=\bigcap _{S\subseteq X,S\not \in {\mathcal {F}}}{\mathcal {U}}_{S}.$ $\blacksquare $
Proofs
1. Suppose ${\mathcal {B}}$ is a filter subbase that is ultra. Let $C,D\in {\mathcal {B}}$ and define $S=C\cap D.$ Because ${\mathcal {B}}$ is ultra, there exists some $B\in {\mathcal {B}}$ such that $B\cap S$ equals $B$ or $\varnothing .$ The finite intersection property implies that $B\cap S\neq \varnothing ,$ so necessarily $B\cap S=B,$ which is equivalent to $B\subseteq C\cap D.$ $\blacksquare $
References
1. Davey, B. A.; Priestley, H. A. (1990). Introduction to Lattices and Order. Cambridge Mathematical Textbooks. Cambridge University Press.
2. Narici & Beckenstein 2011, pp. 2–7.
3. Dugundji 1966, pp. 219–221.
4. Bourbaki 1989, pp. 57–68.
5. Schubert 1968, pp. 48–71.
6. Schechter 1996, pp. 100–130.
7. Higgins, Cecelia (2018). "Ultrafilters in set theory" (PDF). math.uchicago.edu. Retrieved August 16, 2020.
8. Kruckman, Alex (November 7, 2012). "Notes on Ultrafilters" (PDF). math.berkeley.edu. Retrieved August 16, 2020.
9. Dolecki & Mynard 2016, pp. 27–54.
10. Dolecki & Mynard 2016, pp. 33–35.
11. Pospíšil, Bedřich (1937). "Remark on Bicompact Spaces". The Annals of Mathematics. 38 (4): 845–846. doi:10.2307/1968840. JSTOR 1968840.
12. Bourbaki 1989, pp. 129–133.
13. Jech 2006, pp. 73–89.
14. Leinster, Tom (2013). "Codensity and the ultrafilter monad" (PDF). Theory and Applications of Categories. 28: 332–370. arXiv:1209.3606. Bibcode:2012arXiv1209.3606L.
15. Bourbaki 1989, pp. 57–68.
16. Schechter 1996, p. 105.
17. Schechter 1996, pp. 150–152.
18. Jech 2006, pp. 75–76.
19. Comfort 1977, p. 420.
20. Bell, J.; Fremlin, David (1972). "A geometric form of the axiom of choice" (PDF). Fundamenta Mathematicae. 77 (2): 167–170. doi:10.4064/fm-77-2-167-170. Retrieved 11 June 2018. Theorem 1.2. BPI [the Boolean Prime Ideal Theorem] & KM [Krein-Milman] $\implies $ (*) [the unit ball of the dual of a normed vector space has an extreme point].... Theorem 2.1. (*) $\implies $ AC [the Axiom of Choice].
21. Schechter 1996, pp. 105, 150–160, 166, 237, 317–315, 338–340, 344–346, 386–393, 401–402, 455–456, 463, 474, 506, 766–767.
22. Schechter 1996, p. 463.
23. Schechter 1996, p. 339.
24. Schechter 1996, pp. 766–767.
25. Schechter 1996, p. 455.
26. Hodel, R.E. (2005). "Restricted versions of the Tukey-Teichmüller theorem that are equivalent to the Boolean prime ideal theorem". Archive for Mathematical Logic. 44 (4): 459–472. doi:10.1007/s00153-004-0264-9. S2CID 6507722.
27. Muger, Michael (2020). Topology for the Working Mathematician.
28. Schechter 1996, pp. 391–392.
29. Foreman, M.; Wehrung, F. (1991). "The Hahn–Banach theorem implies the existence of a non-Lebesgue measurable set" (PDF). Fundamenta Mathematicae. 138: 13–19. doi:10.4064/fm-138-1-13-19.
30. Pawlikowski, Janusz (1991). "The Hahn–Banach theorem implies the Banach–Tarski paradox" (PDF). Fundamenta Mathematicae. 138: 21–22. doi:10.4064/fm-138-1-21-22.
31. Comfort, W. W.; Negrepontis, S. (1974). The theory of ultrafilters. Berlin, New York: Springer-Verlag. MR 0396267. Corollary 9.3.
32. Rudin, Walter (1956), "Homogeneity problems in the theory of Čech compactifications", Duke Mathematical Journal, 23 (3): 409–419, doi:10.1215/S0012-7094-56-02337-7, hdl:10338.dmlcz/101493
33. Wimmers, Edward (March 1982), "The Shelah P-point independence theorem", Israel Journal of Mathematics, 43 (1): 28–48, doi:10.1007/BF02761683, S2CID 122393776
34. Jech 2006, p. 91 (left as exercise 7.12).
Bibliography
• Arkhangel'skii, Alexander Vladimirovich; Ponomarev, V.I. (1984). Fundamentals of General Topology: Problems and Exercises. Mathematics and Its Applications. Vol. 13. Dordrecht Boston: D. Reidel. ISBN 978-90-277-1355-1. OCLC 9944489.
• Bourbaki, Nicolas (1989) [1966]. General Topology: Chapters 1–4 [Topologie Générale]. Éléments de mathématique. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64241-1. OCLC 18588129.
• Dixmier, Jacques (1984). General Topology. Undergraduate Texts in Mathematics. Translated by Berberian, S. K. New York: Springer-Verlag. ISBN 978-0-387-90972-1. OCLC 10277303.
• Dolecki, Szymon; Mynard, Frederic (2016). Convergence Foundations Of Topology. New Jersey: World Scientific Publishing Company. ISBN 978-981-4571-52-4. OCLC 945169917.
• Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485.
• Császár, Ákos (1978). General topology. Translated by Császár, Klára. Bristol England: Adam Hilger Ltd. ISBN 0-85274-275-4. OCLC 4146011.
• Jech, Thomas (2006). Set Theory: The Third Millennium Edition, Revised and Expanded. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-44085-7. OCLC 50422939.
• Joshi, K. D. (1983). Introduction to General Topology. New York: John Wiley and Sons Ltd. ISBN 978-0-85226-444-7. OCLC 9218750.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
• Schubert, Horst (1968). Topology. London: Macdonald & Co. ISBN 978-0-356-02077-8. OCLC 463753.
Further reading
• Comfort, W. W. (1977). "Ultrafilters: some old and some new results". Bulletin of the American Mathematical Society. 83 (4): 417–455. doi:10.1090/S0002-9904-1977-14316-4. ISSN 0002-9904. MR 0454893.
• Comfort, W. W.; Negrepontis, S. (1974), The theory of ultrafilters, Berlin, New York: Springer-Verlag, MR 0396267
• Ultrafilter at the nLab
Set theory
Overview
• Set (mathematics)
Axioms
• Adjunction
• Choice
• countable
• dependent
• global
• Constructibility (V=L)
• Determinacy
• Extensionality
• Infinity
• Limitation of size
• Pairing
• Power set
• Regularity
• Union
• Martin's axiom
• Axiom schema
• replacement
• specification
Operations
• Cartesian product
• Complement (i.e. set difference)
• De Morgan's laws
• Disjoint union
• Identities
• Intersection
• Power set
• Symmetric difference
• Union
• Concepts
• Methods
• Almost
• Cardinality
• Cardinal number (large)
• Class
• Constructible universe
• Continuum hypothesis
• Diagonal argument
• Element
• ordered pair
• tuple
• Family
• Forcing
• One-to-one correspondence
• Ordinal number
• Set-builder notation
• Transfinite induction
• Venn diagram
Set types
• Amorphous
• Countable
• Empty
• Finite (hereditarily)
• Filter
• base
• subbase
• Ultrafilter
• Fuzzy
• Infinite (Dedekind-infinite)
• Recursive
• Singleton
• Subset · Superset
• Transitive
• Uncountable
• Universal
Theories
• Alternative
• Axiomatic
• Naive
• Cantor's theorem
• Zermelo
• General
• Principia Mathematica
• New Foundations
• Zermelo–Fraenkel
• von Neumann–Bernays–Gödel
• Morse–Kelley
• Kripke–Platek
• Tarski–Grothendieck
• Paradoxes
• Problems
• Russell's paradox
• Suslin's problem
• Burali-Forti paradox
Set theorists
• Paul Bernays
• Georg Cantor
• Paul Cohen
• Richard Dedekind
• Abraham Fraenkel
• Kurt Gödel
• Thomas Jech
• John von Neumann
• Willard Quine
• Bertrand Russell
• Thoralf Skolem
• Ernst Zermelo
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list) & Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list), Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic systems (list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
Ramón Verea
Ramón Silvestre Verea Aguiar y García (Curantes, 11 December 1833 – Buenos Aires, 6 February 1899) was a Galician journalist, engineer and writer, known as the inventor of a calculator with an internal multiplication table (1878).
Works
Novels
• La cruz de Cobblestone.
• Una mujer con dos maridos.
Essays
• Artículos filosóficos y cartas a un campesino. Los Angeles: "La Aurora," Librería Mexicana, [1909]
• La religión universal: artículos, críticas y polémicas, publicados en El Progreso en 1886–87. Dios y la creación (unpublished). New York: Impr. el Poligloto, 1891.
• Catecismo librepensador, ó Cartas a un campesino... New York: Imprenta "El polígloto", 1894, and Managua: Tipografía nacional, 1894; reprinted under another title: Catecismo libre-pensador. Cartas a un campesino. San Salvador: Imp. R. Reyes [1923]
• En defensa de España, cuestiones de Cuba, Venezuela, "América para los Americanos"... Guatemala: Sánchez y de Guise, 1896.
References
Notes
• Biography of Ramón Verea
• Calculating machine of Ramón Verea
• Report about him in El País, August 4, 2013
• Un gallego inventó la calculadora, article published in La Voz de Galicia on December 30, 2004
• Image of Verea's calculating machine
• Ramon Verea's Internal Multiplication Table
Authority control
International
• VIAF
National
• Spain
• United States
Ran space
In mathematics, the Ran space (or Ran's space) of a topological space X is a topological space $\operatorname {Ran} (X)$ whose underlying set is the set of all nonempty finite subsets of X: for a metric space X the topology is induced by the Hausdorff distance. The notion is named after Ziv Ran.
Definition
In general, the topology of the Ran space is generated by sets
$\{S\in \operatorname {Ran} (U_{1}\cup \dots \cup U_{m})\mid S\cap U_{1}\neq \emptyset ,\dots ,S\cap U_{m}\neq \emptyset \}$
for any disjoint open subsets $U_{i}\subset X,i=1,...,m$.
There is an analog of a Ran space for a scheme:[1] the Ran prestack of a quasi-projective scheme X over a field k, denoted by $\operatorname {Ran} (X)$, is the category whose objects are triples $(R,S,\mu )$ consisting of a finitely generated k-algebra R, a nonempty set S and a map of sets $\mu :S\to X(R)$, and whose morphisms $(R,S,\mu )\to (R',S',\mu ')$ consist of a k-algebra homomorphism $R\to R'$ and a surjective map $S\to S'$ that commutes with $\mu $ and $\mu '$. Roughly, an R-point of $\operatorname {Ran} (X)$ is a nonempty finite set of R-rational points of X "with labels" given by $\mu $. A theorem of Beilinson and Drinfeld continues to hold: $\operatorname {Ran} (X)$ is acyclic if X is connected.
Properties
A theorem of Beilinson and Drinfeld states that the Ran space of a connected manifold is weakly contractible.[2]
Topological chiral homology
If F is a cosheaf on the Ran space $\operatorname {Ran} (M)$, then its space of global sections is called the topological chiral homology of M with coefficients in F. If A is, roughly, a family of commutative algebras parametrized by points in M, then there is a factorizable sheaf associated to A. Via this construction, one also obtains the topological chiral homology with coefficients in A. The construction is a generalization of Hochschild homology.[3]
See also
• Chiral homology
Notes
1. Lurie 2014
2. Beilinson, Alexander; Drinfeld, Vladimir (2004). Chiral algebras. American Mathematical Society. p. 173. ISBN 0-8218-3528-9.
3. Lurie 2017, Theorem 5.5.3.11
References
• Gaitsgory, Dennis (2012). "Contractibility of the space of rational maps". arXiv:1108.1741 [math.AG].
• Lurie, Jacob (19 February 2014). "Homology and Cohomology of Stacks (Lecture 7)" (PDF). Tamagawa Numbers via Nonabelian Poincare Duality (282y).
• Lurie, Jacob (18 September 2017). "Higher Algebra" (PDF).
• "Exponential space と Ran space". Algebraic Topology: A Guide to Literature. 2018.
Randall Munroe
Randall Patrick Munroe (born October 17, 1984)[1][2] is an American cartoonist, author, and engineer best known as the creator of the webcomic xkcd. Munroe has worked full-time on the comic since late 2006.[3] In addition to publishing a book of the webcomic's strips, he has written four books: What If?, Thing Explainer, How To: Absurd Scientific Advice for Common Real-World Problems, and What If? 2.
Munroe speaking at re:publica in 2016
Born: Randall Patrick Munroe, October 17, 1984, Easton, Pennsylvania, U.S.
Alma mater: Christopher Newport University (BS)
Genre: Webcomics, popular science
Notable works:
• xkcd
• What If?
• Thing Explainer
• How To: Absurd Scientific Advice for Common Real-World Problems
Website: www.xkcd.com
Early life
Munroe was born in Easton, Pennsylvania, and his father has worked as an engineer and marketer.[4] He has two younger siblings, including a brother named Doug,[5] and was raised as a Quaker.[4][6] He was a fan of comic strips in newspapers from an early age,[3] starting off with Calvin and Hobbes.[7] After graduating from the Chesterfield County Mathematics and Science High School at Clover Hill in Midlothian, Virginia, he graduated from Christopher Newport University in 2006 with a degree in physics.[8][9][10]
Career
NASA
Munroe worked as a contract programmer and roboticist for NASA at the Langley Research Center,[11][7] before and after his graduation with a physics degree.[4] In late 2006, he left NASA and moved to Boston to focus on webcomics full time.[12][11]
Webcomic
Main article: xkcd
Munroe's webcomic, entitled xkcd, is primarily a stick figure comic. Its tagline describes it as "A webcomic of romance, sarcasm, math, and language".[14]
Munroe had originally used xkcd as an instant messaging screenname because he wanted a name without a meaning so he would not eventually grow tired of it.[15] He registered the domain name, but left it idle until he started posting his drawings, perhaps in September 2005.[7] The webcomic quickly became very popular, garnering up to 70 million hits a month by October 2007.[16] Munroe has said, "I think the comic that's gotten me the most feedback is actually the one about the stoplights".[16][17]
Munroe now supports himself through the sale of xkcd-related merchandise, primarily thousands of t-shirts a month.[3][15] He licenses his xkcd creations under the Creative Commons Attribution-NonCommercial 2.5 license, stating that it is not just about the free culture movement, but that it also makes good business sense.[15]
In 2010, he published a collection of the comics.[18] He has also toured the lecture circuit, giving speeches at places such as Google's Googleplex in Mountain View, California.[19]
The popularity of the strip among science fiction fans resulted in Munroe being nominated for a Hugo Award for Best Fan Artist in 2011 and again in 2012.[20] In 2014, he won the Hugo Award for Best Graphic Story for the xkcd strip "Time".[21]
Other projects
Munroe is the creator of the now defunct websites "The Funniest",[22] "The Cutest",[23] and "The Fairest",[24] each of which presents users with two options and asks them to choose one over the other.
In January 2008, Munroe developed an open-source chat moderation script named "Robot9000". Originally developed to moderate one of Munroe's xkcd-related Internet Relay Chat (IRC) channels, the software's algorithm attempts to prevent repetition in IRC channels by temporarily muting users who send messages that are identical to a message that has been sent to the channel before. If users continue to send unoriginal messages, Robot9000 mutes the user for a longer period, quadrupling for each unoriginal message the user sends to the channel.[25] Shortly after Munroe's blog post about the script went live, 4chan administrator Christopher Poole adapted the script to moderate the site's experimental /r9k/ board.[26] Twitch trialed R9K mode but it did not pass beta.[27]
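The muting scheme described above fits in a few lines. The following is a hypothetical reimplementation of the idea, not Munroe's actual script; the class name, the two-second base penalty, and the case-insensitive comparison are illustrative choices:

```python
import time

class Robot9000:
    """A hypothetical sketch of the ROBOT9000 idea described above --
    not Munroe's original script. The base penalty is illustrative."""

    def __init__(self, base_mute_seconds=2.0):
        self.seen = set()      # every message the channel has accepted so far
        self.mute_until = {}   # user -> timestamp until which they are muted
        self.penalty = {}      # user -> current mute duration in seconds
        self.base = base_mute_seconds

    def handle(self, user, message, now=None):
        """Return True if the message may be posted, False if it is dropped."""
        now = time.time() if now is None else now
        if self.mute_until.get(user, 0.0) > now:
            return False       # user is still muted
        key = message.strip().lower()
        if key in self.seen:
            # Unoriginal message: mute, quadrupling the penalty each time.
            self.penalty[user] = self.penalty.get(user, self.base / 4) * 4
            self.mute_until[user] = now + self.penalty[user]
            return False
        self.seen.add(key)
        return True
```

A first duplicate mutes the sender for the base two seconds; each further duplicate quadruples the mute, matching the escalation described above.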
In October 2008, The New Yorker magazine online published an interview and "Cartoon Off" between Munroe and Farley Katz, in which each cartoonist drew a series of four humorous cartoons.[28]
In early 2010, Munroe ran the xkcd Color Name Survey, in which participants were shown a series of RGB colors and asked to enter a suitable name for each specific color. Munroe wanted to identify colors which were given identical or highly similar names by a large number of survey participants, which would then serve as an approximate list of the most common colors rendered similarly across a range of computer monitors. Over 200,000 people eventually completed the survey,[29] and Munroe published the resulting list of 954 named RGB web colors[30] on the xkcd website. They have since been adopted as conventional color identifiers in various programming and markup languages, including Python[31] and LaTeX.[32]
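For instance, matplotlib ships a copy of the survey's named colors (949 of them in current versions) under an "xkcd:" prefix, so they can be passed directly as color arguments:

```python
import matplotlib.colors as mcolors
import matplotlib.pyplot as plt

# Survey-derived names are exposed with an "xkcd:" prefix.
print(len(mcolors.XKCD_COLORS))               # 949 in current matplotlib
print(mcolors.XKCD_COLORS["xkcd:sky blue"])   # '#75bbfd'

plt.plot([0, 1], [0, 1], color="xkcd:sky blue")
plt.savefig("sky_blue_line.png")
```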
What If?
Munroe has a blog entitled What If?, where he has answered questions sent in by fans of his comics. These questions are usually absurd and related to math or physics, and he explains them using both his knowledge and various academic sources.[33] In 2014, he published a collection of some of the responses, as well as a few new ones and some rejected questions, in a book entitled What If?: Serious Scientific Answers to Absurd Hypothetical Questions.[18] Starting in November 2019, Munroe began writing a monthly column in the New York Times titled Good Question, answering user-submitted questions in the same style as What If.[34]
A sequel, What If? 2: Additional Serious Scientific Answers to Absurd Hypothetical Questions, was published in September 2022.[35]
Radioactivity visualization
In response to concerns about the radioactivity released by the Fukushima Daiichi nuclear disaster in 2011, and to remedy what he described as "confusing" reporting on radiation levels in the media, Munroe created a radiation chart of comparative radiation exposure levels.[36] The chart was rapidly adopted by print and online journalists in several countries, including being linked to by online writers for The Guardian,[37] and The New York Times.[38] As a result of requests for permission to reprint the chart and to translate it into Japanese, Munroe placed it in the public domain, but requested that his non-expert status be clearly stated in any reprinting.[39]
Munroe published an xkcd-style comic on scientific publishing and open access in Science in October 2013.[40]
Thing Explainer
Munroe's book Thing Explainer, announced in May 2015 and published later that year, explains concepts using only the 1,000 most common English words.[18][41][42] The book's publisher, Houghton Mifflin Harcourt, saw these illustrations as potentially useful for textbooks, and announced in March 2016 that the next editions of their high-school-level chemistry, biology, and physics textbooks will include selected drawings and accompanying text from Thing Explainer.[43][44]
How To
In February 2019, Munroe announced his next book, How To, which was released in September of that year. The book deals with everyday problems by using physics to find absurd, and generally extreme, solutions to them.[45][6]
Influence
In September 2013, Munroe announced that a group of xkcd readers had submitted his name as a candidate for the renaming of asteroid (4942) 1987 DU6 to 4942 Munroe. The name was accepted by the International Astronomical Union.[46][47]
Personal life
As of May 2008, Munroe lives in Somerville, Massachusetts.[3]
In October 2010, Munroe's fiancée was diagnosed with stage three breast cancer; there had been no prior family history.[48][49] The emotional effect of her illness was referenced in the comic panel "Emotion", published 18 months later in April 2012.[50] In September 2011, he announced that they had married.[51] In November 2012, he published a comic entitled "Two Years", and in December 2017, Munroe followed this with a comic entitled "Seven Years".[52] He revisited the subject in November 2020 in a comic entitled "Ten Years".[53]
His hobbies and interests include kite photography, in which cameras are attached to kites and photographs are then taken of the ground or buildings.[54]
Publications
Publications by Munroe
• xkcd: volume 0. Breadpig. 2009. ISBN 9780615314464.
• What If?: Serious Scientific Answers to Absurd Hypothetical Questions. London: John Murray. 2014. ISBN 9781848549579.
• Thing Explainer. Boston: Houghton Mifflin Harcourt. 2015. ISBN 9780544668256.
• How To: Absurd Scientific Advice for Common Real-World Problems. John Murray. 2019. ISBN 9780525537090.
• What If? 2: Additional Serious Scientific Answers to Absurd Hypothetical Questions. Riverhead Books. 2022. ISBN 9780525537113.
Publications with contributions by Munroe
• North, Ryan; Matthew, Bennardo; Malki, David (2010). Machine of Death (PDF). ISBN 9780982167120. Archived from the original on August 15, 2014.
References
1. Chamberlin, Alan. "JPL Small-Body Database Browser". JPL Solar System Dynamics. Archived from the original on March 5, 2017. Retrieved August 24, 2020.
2. Cavna, Michael (September 12, 2022). "The world's funniest former NASA roboticist will take your questions". The Washington Post. Retrieved September 13, 2022.
3. Cohen, Noam (May 26, 2008). "This Is Funny Only if You Know Unix". The New York Times. ISSN 0362-4331. Archived from the original on March 25, 2019. Retrieved September 25, 2008.
4. Tupponce, Joan (November 24, 2009). "A Cartoonist's Mind". Richmond Magazine. Archived from the original on March 27, 2020. Retrieved January 28, 2020.
5. "Pillar". xkcd. Retrieved April 25, 2023.
6. Martinelli, Marissa (September 6, 2019). "Xkcd Creator Randall Munroe on the Joys of Overthinking Everything". Slate. Archived from the original on September 10, 2019. Retrieved September 11, 2019.
7. Munroe, Randall (December 11, 2007). Authors@Google: Randall Munroe (@Google Talks Adobe Flash video). Mountain View, California: Google. Event occurs at 24:13, 48:05, other timepoints. Archived from the original on December 19, 2021. Retrieved September 25, 2008. ...Calvin and Hobbes was the first comic that I discovered. / ... I'm pretty sure I started [posting drawings] in September 2005
8. Munroe, Randall. "About". xkcd. Archived from the original on May 23, 2019. Retrieved September 26, 2008.
9. Munroe, Randall (October 6, 2006). "Many news things, some overdue". xkcd: The blag of the webcomic. WordPress. Job. Archived from the original on August 24, 2013. Retrieved January 1, 2014. My about page mentions that I work for NASA — I'm technically a contractor working repeated contracts for them. However, they recently ran out of money to rehire me for another contract, so I'm done there for now.
10. "Voyages 2012". December 2014.
11. Lineberry, Denise (2012). "Robots or Webcomics? That was the Question". NASA. Archived from the original on March 25, 2019. Retrieved December 11, 2015.
12. Harvkey, Mike (August 5, 2019). "Cartoonist Randall Munroe Will Be Your Answer Man". Publishers Weekly. Vol. 266, no. 31. p. 49. ProQuest 2268106353. Retrieved November 16, 2022 – via ProQuest.
13. Munroe, Randall. "Wikipedian Protester". xkcd.com. Archived from the original on March 22, 2019. Retrieved April 5, 2010.
14. Munroe, Randall. "xkcd". xkcd. Archived from the original on February 3, 2020. Retrieved February 5, 2020.
15. Fernandez, Rebecca (October 12, 2006). "xkcd: A comic strip for the computer geek". Red Hat Magazine. Raleigh, North Carolina: Red Hat. Archived from the original on March 6, 2007. Retrieved September 25, 2008.
16. So, Adrienne (November 13, 2007). "Real Geek Heart Beats in Xkcd's Stick Figures". Wired. San Francisco: Condé Nast Publications. ISSN 1059-1028. Archived from the original on October 11, 2008. Retrieved September 25, 2008.
17. Randall Munroe (June 15, 2007). "Long Light". xkcd. Archived from the original on March 25, 2019. Retrieved April 18, 2020.
18. Alter, Alexandra (November 23, 2015). "Randall Munroe Explains It All for Us". The New York Times. Archived from the original on March 25, 2019. Retrieved February 23, 2017.
19. Spertus, Ellen (December 21, 2007). "Randall Munroe's visit to Google (xkcd)". Beyond Satire. Archived from the original on October 5, 2008. Retrieved September 25, 2008.
20. Hugo Staff. "Hugo Awards 2012 nomination". Archived from the original on April 9, 2012. Retrieved April 7, 2012.
21. Hugo Staff (April 18, 2014). "Hugo Awards 2014 nomination". Archived from the original on September 6, 2015. Retrieved April 20, 2014.
22. Munroe, Randall. "The Funniest". Archived from the original on December 5, 2006.
23. Munroe, Randall. "The Cutest". Archived from the original on May 28, 2010.
24. Munroe, Randall. "The Fairest". Archived from the original on June 12, 2010. Retrieved September 26, 2008.
25. Munroe, Randall (January 14, 2008). "ROBOT9000 and #xkcd-signal: Attacking Noise in Chat". blog.xkcd.com. Archived from the original on March 25, 2019. Retrieved September 28, 2018.
26. Petersen, Kierran (October 2, 2015). "A short history of /r9k/ — the 4chan message board some believe may be connected to the Oregon shooting". Public Radio International. Archived from the original on March 25, 2019. Retrieved May 18, 2018. Surprisingly enough, however, the /r9k/ board, otherwise known as ROBOT9001, was originally conceived as a way to increase the quality of messages on the wildly popular webcomic xkcd. It used a type of auto-moderation that prevented people from posting the same comment multiple times. [...] 4chan eventually moved the idea and software behind ROBOT9000 on to its site. They just added a one.
27. "What Does R9K Mode Mean - Twitch - Streamer Tactics". streamertactics.com. September 30, 2022. Retrieved September 30, 2022.
28. Katz, Farley (October 15, 2008). "Cartoon-Off: XKCD". The New Yorker. Archived from the original on April 1, 2015.
29. "Color Survey Results". xkcd. May 4, 2010. Retrieved October 22, 2021.
30. "954 most common RGB colors (xkcd color survey results)". xkcd.com. Retrieved October 22, 2021.
31. "Specifying colors - Matplotlib 3.7.0 documentation". Retrieved March 2, 2023.
32. "CTAN: Package xkcdcolors". ctan.org. Retrieved March 2, 2023.
33. "What If? Serious Scientific Answers to Absurd Hypothetical Questions". www.goodreads.com. Retrieved January 17, 2021.
34. "Good Question". The New York Times. Archived from the original on June 16, 2020. Retrieved June 18, 2020.
35. Gartenberg, Chaim (January 31, 2022). "XKCD's Randall Munroe announces What If? 2, with more scientific answers to life's most absurd hypothetical questions". The Verge. Retrieved February 1, 2022.
36. "Radiation dosage chart". xkcd.com. Archived from the original on November 22, 2019. Retrieved January 28, 2020.
37. Monbiot, George (March 21, 2011). "Why Fukushima made me stop worrying and love nuclear power". The Guardian. London. Archived from the original on June 26, 2015. Retrieved March 29, 2011.
38. Revkin, Andrew (March 23, 2011). "The 'Dread to Risk' Ratio on Radiation and other Discontents". Dot Earth blog. The New York Times. Archived from the original on March 26, 2011. Retrieved March 29, 2011.
39. Munroe, Randall (March 19, 2011). "Radiation Chart". www.xkcd.com. Archived from the original on July 5, 2011. Retrieved March 29, 2011.
40. Munroe, Randall (October 4, 2013). "The Rise of Open Access". Science. 342 (6154): 58–59. Bibcode:2013Sci...342...58.. doi:10.1126/science.342.6154.58. PMID 24092724.
41. Kastrenakes, Jacob (May 13, 2015). "XKCD has a new book about explaining complicated subjects in simple ways". The Verge. Archived from the original on March 25, 2019. Retrieved May 14, 2015.
42. Alderman, Naomi (December 17, 2015). "Thing Explainer: Complicated Stuff in Simple Words by Randall Munroe – funny, precise and beautifully designed". The Guardian. Archived from the original on March 25, 2019. Retrieved December 12, 2016.
43. Chang, Kenneth (March 22, 2016). "Randall Munroe, XKCD Creator, Goes Back to High School". New York Times. Archived from the original on March 25, 2019. Retrieved March 22, 2016.
44. Jao, Charlene (March 23, 2016). "XKCD Creator Randall Munroe Making Content For High School Textbooks". The Mary Sue. Archived from the original on April 14, 2016. Retrieved April 6, 2016.
45. Munroe, Randall. "how to". xkcd. Archived from the original on March 29, 2019. Retrieved February 6, 2019.
46. "4942 Munroe (1987 DU6)". Jet Propulsion Laboratory. NASA Jet Propulsion Laboratory, California Institute of Technology. July 29, 2013. Archived from the original on September 6, 2014. Retrieved June 11, 2013.
47. Munroe, Randall (September 30, 2013). "Asteroid 4942 Munroe". xkcd | The blag of the webcomic. Archived from the original on October 3, 2013. Retrieved June 11, 2013.
48. Munroe, Randall (November 5, 2010). "November - 2010 - xkcd". blog.xkcd.com. Archived from the original on March 25, 2019. Retrieved June 3, 2018.
49. Munroe, Randall (June 30, 2011). "Family Illness". Archived from the original on May 21, 2018. Retrieved June 3, 2018.
50. Munroe, Randall. "xkcd: Emotion". xkcd.com. Archived from the original on March 8, 2019. Retrieved June 3, 2018.
51. Munroe, Randall (September 12, 2011). "<3". Blog. XKCD. Archived from the original on November 3, 2011. Retrieved September 12, 2011.
52. Munroe, Randall (December 13, 2017). "Seven Years". Webcomic. XKCD. Archived from the original on March 4, 2019. Retrieved January 6, 2018.
53. Munroe, Randall (November 16, 2020). "Ten Years". Webcomic. XKCD. Archived from the original on November 17, 2020. Retrieved November 17, 2020.
54. Kuchera, Ben (July 2, 2007). "The joys of kite photography". Ars Technica. Archived from the original on January 12, 2012. Retrieved June 14, 2017.
External links
• Official website
Randall Munroe
Solo works
• xkcd
• "Time"
• What If?: Serious Scientific Answers to Absurd Hypothetical Questions
• Thing Explainer
• How To: Absurd Scientific Advice for Common Real-World Problems
Related
• Machine of Death
• 4942 Munroe
• Nerd sniping
• Geohashing
Authority control
International
• ISNI
• VIAF
National
• Norway
• Spain
• Germany
• Israel
• United States
• Latvia
• Czech Republic
• Korea
• Croatia
• Netherlands
• Poland
• Portugal
Academics
• zbMATH
Artists
• MusicBrainz
Other
• IdRef
Randall Kamien
Randall David Kamien (born February 25, 1966) is a theoretical condensed matter physicist specializing in the physics of liquid crystals and is the Vicki and William Abrams Professor in the Natural Sciences at the University of Pennsylvania.[1]
Kamien in 2022
Born: February 25, 1966, Pittsburgh, Pennsylvania
Alma mater: California Institute of Technology (B.S., 1988; M.S., 1988); Harvard University (Ph.D., 1992)
Known for: Grain boundaries, focal conic domains, liquid crystals
Awards: G. W. Gray Medal, British Liquid Crystal Society (2016)
Fields: Condensed matter physics
Institutions: Harvard University; Institute for Advanced Study; University of Pennsylvania
Thesis: Directed Line Liquids (1992)
Doctoral advisor: David R. Nelson
Biography
Randall Kamien was born to economist Morton Kamien and Lenore Kamien on February 25, 1966, and grew up in Wilmette, Illinois, on the outskirts of Chicago.[2] Kamien completed a B.S. and an M.S. in physics at the California Institute of Technology in 1988 and completed a PhD in physics at Harvard University in 1992 under the supervision of David R. Nelson.[3] Prior to joining the faculty at the University of Pennsylvania, he was a member of the Institute for Advanced Study in Princeton, New Jersey, and a postdoctoral research associate at the University of Pennsylvania. Kamien was appointed assistant professor at the University of Pennsylvania in 1997 and promoted to full professor in 2003.[4] Kamien is a fellow of the American Physical Society and the American Association for the Advancement of Science.[4] Kamien is the editor of Reviews of Modern Physics.[5]
Research
Randall Kamien studies soft condensed matter – and in particular liquid crystalline phases of matter – through the lens of geometry and topology.[6] Among other contributions, Kamien has advanced the understanding of twist grain boundaries,[7] focal conic domains,[8] and defect topology in smectic liquid crystals.[9] He is also known for his idiosyncratic naming conventions, such as "Shnerk's Surface"[10] and "Shmessel Functions".
Publications
• Senyuk, B.; Liu, Q.; He, S.; Kamien, R. D.; Kusner, R. B.; Lubensky, T. C.; Smalyukh, I. I. (2013), "Topological colloids", Nature, 493 (7431): 200–205, arXiv:1612.08753, Bibcode:2013Natur.493..200S, doi:10.1038/nature11710, PMID 23263182, S2CID 4343186.
• Honglawan, A.; Beller, D. A.; Cavallaro, M.; Kamien, R. D.; Stebe, K. J.; Yang, S. (2013), "Topographically induced hierarchical assembly and geometrical transformation of focal conic domain arrays in smectic liquid crystals", Proceedings of the National Academy of Sciences, 110 (1): 34–39, doi:10.1073/pnas.1214708109, PMC 3538202, PMID 23213240.
• Snir, Y.; Kamien, R. D. (2005), "Entropically driven helix formation", Science, 307 (5712): 1067, arXiv:cond-mat/0502520, doi:10.1126/science.1106243, PMID 15718461, S2CID 14611285.
• Ziherl, P.; Kamien, R. D. (2001), "Maximizing entropy by minimizing area: Towards a new principle of self-organization", The Journal of Physical Chemistry B, 105 (42): 10147, arXiv:cond-mat/0103171, doi:10.1021/jp010944q, S2CID 119467204.
• Kamien, R. D.; Selinger, J. V. (2001), "Order and frustration in chiral liquid crystals", Journal of Physics: Condensed Matter, 13 (3): R1, arXiv:cond-mat/0009094, doi:10.1088/0953-8984/13/3/201, S2CID 93442372.
• Kamien, R. D.; Lubensky, T. C. (1999), "Minimal surfaces, screw dislocations, and twist grain boundaries", Physical Review Letters, 82 (14): 2892, arXiv:cond-mat/9808306, Bibcode:1999PhRvL..82.2892K, doi:10.1103/PhysRevLett.82.2892, S2CID 15354995.
References
1. "Randall Kamien". www.physics.upenn.edu. Retrieved 2022-05-05.
2. In memoriam: Professor Emeritus Morton I. Kamien, 1938-2011, retrieved 2022-05-05.
3. Harvard PhD Theses in Physics: 1971-2000, retrieved 2022-05-05.
4. Curriculum vitae (PDF), retrieved 2022-05-05.
5. APS Editorial Office: Reviews of Modern Physics, retrieved 2022-05-05.
6. Kamien Group, retrieved 2022-05-05.
7. Kamien, R. D.; Lubensky, T. C. (1999). "Minimal surfaces, screw dislocations, and twist grain boundaries". Physical Review Letters. 82 (14): 2892–2895. arXiv:cond-mat/9808306. Bibcode:1999PhRvL..82.2892K. doi:10.1103/PhysRevLett.82.2892. S2CID 15354995.
8. Alexander, G. P.; Chen, B. G.; Matsumoto, E. A.; Kamien, R. D. (2010). "The Power of Poincaré: Elucidating the Hidden Symmetries in Focal Conic Domains". Physical Review Letters. 104 (25): 257802. arXiv:1004.0465. doi:10.1103/PhysRevLett.104.257802. PMID 20867415. S2CID 8291259.
9. Machon, T.; Aharoni, H.; Hu, Y.; Kamien, R. D. (2019). "Aspects of Defect Topology in Smectic Liquid Crystals". Communications in Mathematical Physics. 372 (2): 525–542. arXiv:1808.04104. Bibcode:2019CMaPh.372..525M. doi:10.1007/s00220-019-03366-y. S2CID 52435763.
10. Santangelo, C. D.; Kamien, R. D. (2007). "Triply periodic smectic liquid crystals". Physical Review E. 75 (1 Pt 1): 011702. arXiv:cond-mat/0609596. Bibcode:2007PhRvE..75a1702S. doi:10.1103/PhysRevE.75.011702. PMID 17358168. S2CID 119371099.
Authority control: Academics
• ORCID
Random field
In physics and mathematics, a random field is a random function over an arbitrary domain (usually a multi-dimensional space such as $\mathbb {R} ^{n}$). That is, it is a function $f(x)$ that takes on a random value at each point $x\in \mathbb {R} ^{n}$ (or some other domain). It is also sometimes thought of as a synonym for a stochastic process with some restriction on its index set.[1] That is, by modern definitions, a random field is a generalization of a stochastic process where the underlying parameter need no longer be real or integer valued "time" but can instead take values that are multidimensional vectors or points on some manifold.[2]
Formal definition
Given a probability space $(\Omega ,{\mathcal {F}},P)$, an X-valued random field is a collection of X-valued random variables indexed by elements in a topological space T. That is, a random field F is a collection
$\{F_{t}:t\in T\}$
where each $F_{t}$ is an X-valued random variable.
Examples
In its discrete version, a random field is a list of random numbers whose indices are identified with a discrete set of points in a space (for example, n-dimensional Euclidean space). Suppose there are four random variables, $X_{1}$, $X_{2}$, $X_{3}$, and $X_{4}$, located in a 2D grid at (0,0), (0,2), (2,2), and (2,0), respectively. Suppose each random variable can take on the value of -1 or 1, and the probability of each random variable's value depends on its immediately adjacent neighbours. This is a simple example of a discrete random field.
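A minimal sketch of this example, assuming an Ising-style coupling of strength J = 0.5 (an arbitrary illustrative value, since the paragraph does not fix the joint distribution): it enumerates all 16 configurations of the four ±1 variables and assigns each a probability that favors agreement between grid neighbors.

```python
import itertools
import math

# The four grid sites and their adjacency: each corner of the 2x2 square
# neighbors the two corners with which it shares an edge.
neighbors = {0: (1, 3), 1: (0, 2), 2: (1, 3), 3: (0, 2)}
J = 0.5  # coupling strength (assumed value for illustration)

def energy(config):
    # Ising-style energy: aligned neighboring values lower the energy.
    return -J * sum(config[i] * config[j]
                    for i in neighbors for j in neighbors[i] if i < j)

configs = list(itertools.product([-1, 1], repeat=4))
weights = [math.exp(-energy(c)) for c in configs]
Z = sum(weights)                         # normalizing constant
probs = {c: w / Z for c, w in zip(configs, weights)}

# The two all-aligned configurations are the most likely ones.
print(max(probs, key=probs.get), max(probs.values()))
```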
More generally, the values each $X_{i}$ can take on might be defined over a continuous domain. In larger grids, it can also be useful to think of the random field as a "function valued" random variable as described above. In quantum field theory the notion is generalized to a random functional, one that takes on random values over a space of functions (see Feynman integral).
Several kinds of random fields exist, among them the Markov random field (MRF), Gibbs random field, conditional random field (CRF), and Gaussian random field. In 1974, Julian Besag proposed an approximation method relying on the relation between MRFs and Gibbs RFs.
Example properties
An MRF exhibits the Markov property
$P(X_{i}=x_{i}|X_{j}=x_{j},i\neq j)=P(X_{i}=x_{i}|X_{j}=x_{j},j\in \partial _{i}),\,$
for each choice of values $(x_{j})_{j}$, where $\partial _{i}$ denotes the set of neighbors of $i$. In other words, the probability that a random variable assumes a value depends only on its immediately neighboring random variables. The probability of a random variable in an MRF is given by
$P(X_{i}=x_{i}|\partial _{i})={\frac {P(X_{i}=x_{i},\partial _{i})}{\sum _{k}P(X_{i}=k,\partial _{i})}},$
where the sum (which may be an integral) runs over the possible values of $k$. It is sometimes difficult to compute this quantity exactly.
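A short sketch of this ratio for the toy 2×2 grid above; the energy function and the helper name `conditional` are hypothetical choices for illustration. Only terms touching site $i$ differ between numerator and denominator, so using the full energy still gives the right ratio.

```python
import math

# Toy Gibbs field on the 2x2 grid from the example above.
neighbors = {0: (1, 3), 1: (0, 2), 2: (1, 3), 3: (0, 2)}

def energy(config, J=0.5):
    return -J * sum(config[i] * config[j]
                    for i in neighbors for j in neighbors[i] if i < j)

def conditional(i, value, config, values=(-1, 1)):
    """P(X_i = value | all other sites), via the displayed ratio."""
    def weight(v):
        c = dict(config)
        c[i] = v
        return math.exp(-energy(c))
    return weight(value) / sum(weight(k) for k in values)

config = {0: 1, 1: 1, 2: -1, 3: 1}
print(conditional(0, 1, config))  # probability site 0 is +1 given its neighbors
```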
Applications
When used in the natural sciences, values in a random field are often spatially correlated: adjacent values (i.e. values with adjacent indices) do not differ as much as values that are further apart. This is an example of a covariance structure, many different types of which may be modeled in a random field. One example is the Ising model, where only nearest-neighbor interactions are sometimes included as a simplification to better understand the model.
A common use of random fields is in the generation of computer graphics, particularly those that mimic natural surfaces such as water and earth. Random fields have also been used in subsurface ground models.[3]
In neuroscience, particularly in task-related functional brain imaging studies using PET or fMRI, statistical analysis of random fields is one common alternative to correction for multiple comparisons for finding regions with truly significant activation.[4]
They are also used in machine learning applications (see graphical models).
Tensor-valued random fields
Random fields are of great use in studying natural processes by the Monte Carlo method in which the random fields correspond to naturally spatially varying properties. This leads to tensor-valued random fields in which the key role is played by a Statistical Volume Element (SVE); when the SVE becomes sufficiently large, its properties become deterministic and one recovers the representative volume element (RVE) of deterministic continuum physics. The second type of random fields that appear in continuum theories are those of dependent quantities (temperature, displacement, velocity, deformation, rotation, body and surface forces, stress, etc.).[5]
See also
• Covariance
• Kriging
• Variogram
• Resel
• Stochastic process
• Interacting particle system
• Stochastic cellular automata
References
1. "Random Fields" (PDF).
2. Vanmarcke, Erik (2010). Random Fields: Analysis and Synthesis. World Scientific Publishing Company. ISBN 978-9812563538.
3. Cardenas, IC (2023). "A two-dimensional approach to quantify stratigraphic uncertainty from borehole data using non-homogeneous random fields". Engineering Geology. doi:10.1016/j.enggeo.2023.107001.
4. Worsley, K. J.; Evans, A. C.; Marrett, S.; Neelin, P. (November 1992). "A Three-Dimensional Statistical Analysis for CBF Activation Studies in Human Brain". Journal of Cerebral Blood Flow & Metabolism. 12 (6): 900–918. doi:10.1038/jcbfm.1992.127. ISSN 0271-678X. PMID 1400644.
5. Malyarenko, Anatoliy; Ostoja-Starzewski, Martin (2019). Tensor-Valued Random Fields for Continuum Physics. Cambridge University Press. ISBN 9781108429856.
Further reading
• Adler, R. J. & Taylor, Jonathan (2007). Random Fields and Geometry. Springer. ISBN 978-0-387-48112-8.
• Besag, J. E. (1974). "Spatial Interaction and the Statistical Analysis of Lattice Systems". Journal of the Royal Statistical Society. Series B. 36 (2): 192–236. doi:10.1111/j.2517-6161.1974.tb00999.x.
• Griffeath, David (1976). "Random Fields". In Kemeny, John G.; Snell, Laurie; Knapp, Anthony W. (eds.). Denumerable Markov Chains (2nd ed.). Springer. ISBN 0-387-90177-9.
• Davar Khoshnevisan (2002). Multiparameter Processes : An Introduction to Random Fields. Springer. ISBN 0-387-95459-7.
Stochastic processes
Discrete time
• Bernoulli process
• Branching process
• Chinese restaurant process
• Galton–Watson process
• Independent and identically distributed random variables
• Markov chain
• Moran process
• Random walk
• Loop-erased
• Self-avoiding
• Biased
• Maximal entropy
Continuous time
• Additive process
• Bessel process
• Birth–death process
• pure birth
• Brownian motion
• Bridge
• Excursion
• Fractional
• Geometric
• Meander
• Cauchy process
• Contact process
• Continuous-time random walk
• Cox process
• Diffusion process
• Empirical process
• Feller process
• Fleming–Viot process
• Gamma process
• Geometric process
• Hawkes process
• Hunt process
• Interacting particle systems
• Itô diffusion
• Itô process
• Jump diffusion
• Jump process
• Lévy process
• Local time
• Markov additive process
• McKean–Vlasov process
• Ornstein–Uhlenbeck process
• Poisson process
• Compound
• Non-homogeneous
• Schramm–Loewner evolution
• Semimartingale
• Sigma-martingale
• Stable process
• Superprocess
• Telegraph process
• Variance gamma process
• Wiener process
• Wiener sausage
Both
• Branching process
• Galves–Löcherbach model
• Gaussian process
• Hidden Markov model (HMM)
• Markov process
• Martingale
• Differences
• Local
• Sub-
• Super-
• Random dynamical system
• Regenerative process
• Renewal process
• Stochastic chains with memory of variable length
• White noise
Fields and other
• Dirichlet process
• Gaussian random field
• Gibbs measure
• Hopfield model
• Ising model
• Potts model
• Boolean network
• Markov random field
• Percolation
• Pitman–Yor process
• Point process
• Cox
• Poisson
• Random field
• Random graph
Time series models
• Autoregressive conditional heteroskedasticity (ARCH) model
• Autoregressive integrated moving average (ARIMA) model
• Autoregressive (AR) model
• Autoregressive–moving-average (ARMA) model
• Generalized autoregressive conditional heteroskedasticity (GARCH) model
• Moving-average (MA) model
Financial models
• Binomial options pricing model
• Black–Derman–Toy
• Black–Karasinski
• Black–Scholes
• Chan–Karolyi–Longstaff–Sanders (CKLS)
• Chen
• Constant elasticity of variance (CEV)
• Cox–Ingersoll–Ross (CIR)
• Garman–Kohlhagen
• Heath–Jarrow–Morton (HJM)
• Heston
• Ho–Lee
• Hull–White
• LIBOR market
• Rendleman–Bartter
• SABR volatility
• Vašíček
• Wilkie
Actuarial models
• Bühlmann
• Cramér–Lundberg
• Risk process
• Sparre–Anderson
Queueing models
• Bulk
• Fluid
• Generalized queueing network
• M/G/1
• M/M/1
• M/M/c
Properties
• Càdlàg paths
• Continuous
• Continuous paths
• Ergodic
• Exchangeable
• Feller-continuous
• Gauss–Markov
• Markov
• Mixing
• Piecewise-deterministic
• Predictable
• Progressively measurable
• Self-similar
• Stationary
• Time-reversible
Limit theorems
• Central limit theorem
• Donsker's theorem
• Doob's martingale convergence theorems
• Ergodic theorem
• Fisher–Tippett–Gnedenko theorem
• Large deviation principle
• Law of large numbers (weak/strong)
• Law of the iterated logarithm
• Maximal ergodic theorem
• Sanov's theorem
• Zero–one laws (Blumenthal, Borel–Cantelli, Engelbert–Schmidt, Hewitt–Savage, Kolmogorov, Lévy)
Inequalities
• Burkholder–Davis–Gundy
• Doob's martingale
• Doob's upcrossing
• Kunita–Watanabe
• Marcinkiewicz–Zygmund
Tools
• Cameron–Martin formula
• Convergence of random variables
• Doléans-Dade exponential
• Doob decomposition theorem
• Doob–Meyer decomposition theorem
• Doob's optional stopping theorem
• Dynkin's formula
• Feynman–Kac formula
• Filtration
• Girsanov theorem
• Infinitesimal generator
• Itô integral
• Itô's lemma
• Karhunen–Loève theorem
• Kolmogorov continuity theorem
• Kolmogorov extension theorem
• Lévy–Prokhorov metric
• Malliavin calculus
• Martingale representation theorem
• Optional stopping theorem
• Prokhorov's theorem
• Quadratic variation
• Reflection principle
• Skorokhod integral
• Skorokhod's representation theorem
• Skorokhod space
• Snell envelope
• Stochastic differential equation
• Tanaka
• Stopping time
• Stratonovich integral
• Uniform integrability
• Usual hypotheses
• Wiener space
• Classical
• Abstract
Disciplines
• Actuarial mathematics
• Control theory
• Econometrics
• Ergodic theory
• Extreme value theory (EVT)
• Large deviations theory
• Mathematical finance
• Mathematical statistics
• Probability theory
• Queueing theory
• Renewal theory
• Ruin theory
• Signal processing
• Statistics
• Stochastic analysis
• Time series analysis
• Machine learning
• List of topics
• Category
Linear network coding
In computer networking, linear network coding is a technique in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations of the packets they receive.
Linear network coding may be used to improve a network's throughput, efficiency, and scalability, as well as to reduce attacks and eavesdropping. The nodes of a network take several packets and combine them for transmission. This process may be used to attain the maximum possible information flow in a network.
It has been proven that, theoretically, linear coding is enough to achieve the upper bound in multicast problems with one source.[1] However, linear coding is not sufficient in general, even for more general notions of linearity such as convolutional coding and filter-bank coding.[2] Finding optimal coding solutions for general network problems with arbitrary demands is a hard problem, which can be NP-hard[3][4] and even undecidable.[5][6]
Encoding and decoding
In a linear network coding problem, a group of nodes $P$ are involved in moving the data from $S$ source nodes to $K$ sink nodes. Each node generates new packets which are linear combinations of past received packets, multiplying them by coefficients chosen from a finite field, typically $GF(2^{s})$.
More formally, each node $p_{k}$ with indegree $InDeg(p_{k})=S$ generates a message $X_{k}$ from the linear combination of received messages $\{M_{i}\}_{i=1}^{S}$ by the formula:
$X_{k}=\sum _{i=1}^{S}g_{k}^{i}\cdot M_{i}$
where the values $g_{k}^{i}$ are coefficients selected from $GF(2^{s})$. Since operations are computed in a finite field, the generated message is of the same length as the original messages. Each node forwards the computed value $X_{k}$ along with the coefficients $g_{k}^{i}$ used in the $k^{\text{th}}$ level.
Sink nodes receive these network-coded messages and collect them in a matrix. The original messages can be recovered by performing Gaussian elimination on the matrix.[7] In reduced row echelon form, decoded packets correspond to rows of the form $e_{i}=[0...010...0]$.
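As a concrete illustration, here is a minimal sketch of this encode/decode cycle over the binary field $GF(2)$, where multiplying by a coefficient selects a packet and addition is XOR. The message contents and the collect-until-full-rank loop are illustrative choices, not part of any standard:

```python
import random

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """One coded packet over GF(2): random bit coefficients, XOR addition."""
    coeffs = [random.randint(0, 1) for _ in packets]
    payload = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            payload = xor(payload, p)
    return coeffs, payload

def try_decode(received, n):
    """Gauss-Jordan elimination on [coefficients | payload] rows. Returns
    the n original packets, or None if the rows received so far do not
    have full rank."""
    rows = [(c[:], p) for c, p in received]
    for col in range(n):
        piv = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if piv is None:
            return None                      # still rank deficient
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           xor(rows[r][1], rows[col][1]))
    return [p for _, p in rows[:n]]

msgs = [b"hello world!", b"network code", b"linear comb."]
received, decoded = [], None
while decoded is None:       # collect coded packets until decodable
    received.append(encode(msgs))
    decoded = try_decode(received, len(msgs))
assert decoded == msgs
```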
Background
A network is represented by a directed graph ${\mathcal {G}}=(V,E,C)$. $V$ is the set of nodes or vertices, $E$ is the set of directed links (or edges), and $C$ gives the capacity of each link of $E$. Let $T(s,t)$ be the maximum possible throughput from node $s$ to node $t$. By the max-flow min-cut theorem, $T(s,t)$ is upper bounded by the minimum capacity over all cuts between these two nodes, where the capacity of a cut is the sum of the capacities of its edges.
Karl Menger proved that in the unicast scenario there is always a set of edge-disjoint paths achieving this upper bound, a statement equivalent to the max-flow min-cut theorem. Later, the Ford–Fulkerson algorithm was proposed to find such paths in polynomial time. Then, Edmonds proved in the paper "Edge-Disjoint Branchings" that the upper bound is also achievable in the broadcast scenario, and proposed a polynomial-time algorithm.
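With a graph library, the bound and its achievability are easy to check numerically. A small sketch using networkx (assumed available); the graph shape and capacities are made-up values:

```python
import networkx as nx

# A tiny capacitated network: max flow from s to t equals the min cut.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=2)
G.add_edge("s", "b", capacity=1)
G.add_edge("a", "t", capacity=1)
G.add_edge("a", "b", capacity=1)
G.add_edge("b", "t", capacity=2)

print(nx.maximum_flow_value(G, "s", "t"))  # 3
print(nx.minimum_cut_value(G, "s", "t"))   # 3 -- equal, as the theorem states
```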
However, the situation in the multicast scenario is more complicated, and in fact, such an upper bound can't be reached using traditional routing ideas. Ahlswede et al. proved that it can be achieved if additional computing tasks (incoming packets are combined into one or several outgoing packets) can be done in the intermediate nodes.[8]
The Butterfly Network
The butterfly network[8] is often used to illustrate how linear network coding can outperform routing. Two source nodes (at the top of the picture) have information A and B that must be transmitted to the two destination nodes (at the bottom). Each destination node wants to know both A and B. Each edge can carry only a single value (we can think of an edge transmitting a bit in each time slot).
If only routing were allowed, then the central link would be able to carry A or B, but not both. Suppose we send A through the center; then the left destination would receive A twice and not know B at all. Sending B poses a similar problem for the right destination. We say that routing is insufficient because no routing scheme can transmit both A and B to both destinations simultaneously; it takes four time slots in total for both destination nodes to learn A and B.
Using a simple code, as shown, A and B can be transmitted to both destinations simultaneously by sending the sum of the symbols through the two relay nodes – encoding A and B using the formula "A+B". The left destination receives A and A + B, and can calculate B by subtracting the two values. Similarly, the right destination will receive B and A + B, and will also be able to determine both A and B. Therefore, with network coding, it takes only three time slots and improves the throughput.
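With one-bit symbols, the code "A+B" is just XOR, and the recovery argument can be checked directly:

```python
# Butterfly example with one-bit symbols: addition over GF(2) is XOR.
A, B = 1, 0

center = A ^ B                       # the relay sends A+B over the bottleneck

left = (A, center)                   # left sink gets A directly, plus A+B
recovered_B = left[0] ^ left[1]      # A ^ (A ^ B) = B

right = (B, center)                  # right sink gets B directly, plus A+B
recovered_A = right[0] ^ right[1]    # B ^ (A ^ B) = A

assert (recovered_A, recovered_B) == (A, B)
```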
Random Linear Network Coding
Random linear network coding[9] (RLNC) is a simple yet powerful encoding scheme which, in broadcast transmission schemes, allows close to optimal throughput using a decentralized algorithm. Nodes transmit random linear combinations of the packets they receive, with coefficients chosen uniformly at random from a Galois field. If the field size is sufficiently large, the probability that the receiver(s) will obtain linearly independent combinations (and therefore obtain innovative information) approaches 1. Note, however, that although random linear network coding has excellent throughput performance, a receiver that obtains an insufficient number of packets is extremely unlikely to recover any of the original packets. This can be addressed by sending additional random linear combinations until the receiver obtains the appropriate number of packets.
Operation and key parameters
There are three key parameters in RLNC. The first is the generation size. In RLNC, the original data transmitted over the network is divided into packets. The source and intermediate nodes in the network can combine and recombine the set of original and coded packets. The original $M$ packets form a block, usually called a generation; the number of original packets combined and recombined together is the generation size. The second parameter is the packet size. Usually, the size of the original packets is fixed; unequally-sized packets can be zero-padded if they are shorter or split into multiple packets if they are longer. In practice, the packet size is often the size of the maximum transmission unit (MTU) of the underlying network protocol, for example around 1500 bytes in an Ethernet frame. The third key parameter is the Galois field used. In practice, the most commonly used Galois fields are binary extension fields, and the most common sizes are the binary field $GF(2)$ and the so-called binary-8 field ($GF(2^{8})$). In the binary field each element is one bit long, while in binary-8 it is one byte long. Since the packet size is usually larger than the field size, each packet is seen as a set of elements from the Galois field (usually referred to as symbols) appended together. The packets have a fixed number of symbols, and since all operations are performed over Galois fields, the size of a packet does not change with subsequent linear combinations.
The sources and the intermediate nodes can combine any subset of the original and previously coded packets performing linear operations. To form a coded packet in RLNC, the original and previously coded packets are multiplied by randomly chosen coefficients and added together. Since each packet is just an appended set of Galois field elements, the operations of multiplication and addition are performed symbol-wise over each of the individual symbols of the packets, as shown in the picture from the example.
To preserve the statelessness of the code, the coding coefficients used to generate the coded packets are appended to the packets transmitted over the network. Therefore, each node in the network can see what coefficients were used to generate each coded packet. One novelty of linear network coding over traditional block codes is that it allows the recombination of previously coded packets into new and valid coded packets. This process is usually called recoding. After a recoding operation, the size of the appended coding coefficients does not change. Since all the operations are linear, the state of the recoded packet can be preserved by applying the same operations of addition and multiplication to the payload and the appended coding coefficients. In the following example, we will illustrate this process.
Any destination node must collect enough linearly independent coded packets to be able to reconstruct the original data. Each coded packet can be understood as a linear equation whose coefficients are known, since they are appended to the packet. In these equations, each of the original $M$ packets is an unknown. To solve the linear system of equations, the destination needs at least $M$ linearly independent equations (packets).
Example
In the figure, we can see an example of two packets linearly combined into a new coded packet. In the example, we have two packets, namely packet $f$ and packet $e$. The generation size of our example is two; we know this because each packet has two coding coefficients ($C_{ij}$) appended. The appended coefficients can take any value from the Galois field. However, an original, uncoded data packet would have the coding coefficients $[0,1]$ or $[1,0]$ appended, which means that it is constructed by a linear combination of zero times one of the packets plus one times the other packet. Any coded packet would have other coefficients appended; in our example, packet $f$ has the coefficients $[C_{11},C_{12}]$ appended. Since network coding can be applied at any layer of the communication protocol stack, these packets can carry headers from the other layers, which are ignored in the network coding operations.
Now, let's assume that the network node wants to produce a new coded packet combining packet $f$ and packet $e$. In RLNC, it will randomly choose two coding coefficients, $d_{1}$ and $d_{2}$ in the example. The node will multiply each symbol of packet $f$ by $d_{1}$ and each symbol of packet $e$ by $d_{2}$, then add the results symbol-wise to produce the new coded payload. It will perform the same operations of multiplication and addition on the coding coefficients of the coded packets.
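A sketch of this recoding step over $GF(2^{8})$; the reduction polynomial used here (the AES polynomial 0x11B) is one common choice, and the packet contents and coefficient values are made up for illustration. The final assertion checks the key property: applying the same combination to payloads and to the appended coefficient vectors yields a packet equivalent to one coded directly from the originals.

```python
def gf256_mul(a, b):
    """Carry-less multiplication modulo x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def combine(pkts, coeffs):
    """Symbol-wise linear combination of equal-length byte packets."""
    out = bytearray(len(pkts[0]))
    for pkt, c in zip(pkts, coeffs):
        for i, sym in enumerate(pkt):
            out[i] ^= gf256_mul(sym, c)
    return bytes(out)

originals = [b"\x10\x20", b"\x05\x06"]          # generation of size two

# Two already-coded packets f and e, carried with their coefficient vectors:
f_coeffs, f_payload = [3, 7], combine(originals, [3, 7])
e_coeffs, e_payload = [1, 2], combine(originals, [1, 2])

# Recoding: fresh coefficients d1, d2 applied to BOTH payloads and
# appended coefficient vectors.
d1, d2 = 9, 4
new_payload = combine([f_payload, e_payload], [d1, d2])
new_coeffs = [gf256_mul(d1, f_coeffs[k]) ^ gf256_mul(d2, e_coeffs[k])
              for k in range(2)]

# The recoded packet equals one coded directly from the originals.
assert new_payload == combine(originals, new_coeffs)
```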
Misconceptions
Linear network coding is still a relatively young subject; however, the topic has been extensively researched over the last twenty years. Nevertheless, some misconceptions persist that are no longer valid:
Decoding computational complexity: Network coding decoders have improved over the years and are now highly efficient. In 2016, on i5 processors with SIMD instructions enabled, the decoding goodput of network coding was 750 MB/s for a generation size of 16 packets and 250 MB/s for a generation size of 64 packets.[10] Furthermore, today's algorithms are highly parallelizable, increasing the encoding and decoding goodput even further.[11]
Transmission overhead: It is usually thought that the transmission overhead of network coding is high due to the need to append the coding coefficients to each coded packet. In reality, this overhead is negligible in most applications. The overhead due to coding coefficients can be computed as follows: each packet has $M$ coding coefficients appended, and the size of each coefficient is the number of bits needed to represent one element of the Galois field, i.e., $\log _{2}(q)$ bits for a field of $q$ elements. In practice, most network coding applications use a generation size of no more than 32 packets per generation and Galois fields of 256 elements (binary-8). With these numbers, each packet needs $M\cdot \log _{2}(q)/8=32$ bytes of appended overhead. If each packet is 1500 bytes long (i.e. the Ethernet MTU), then 32 bytes represent an overhead of only about 2%, as the quick computation below shows.
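The arithmetic, as a quick check (the values are those from the text):

```python
import math

M, q, packet_size = 32, 256, 1500        # generation size, field size, bytes
coeff_bytes = M * math.log2(q) / 8       # one byte per coefficient -> 32.0
print(coeff_bytes, f"{coeff_bytes / packet_size:.1%}")   # 32.0, 2.1%
```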
Overhead due to linear dependencies: Since the coding coefficients are chosen randomly in RLNC, there is a chance that some transmitted coded packets are not beneficial to the destination because they are formed using a linearly dependent combination of packets. However, this overhead is negligible in most applications. The linear dependencies depend on the Galois field's size and are practically independent of the generation size used. We can illustrate this with the following example. Let us assume we are using a Galois field of $q$ elements and a generation size of $M$ packets. If the destination has not received any coded packet, it is missing $M$ degrees of freedom, and almost any coded packet will be useful and innovative; in fact, only the zero packet (all coding coefficients zero) is non-innovative. The probability of generating the zero packet is the probability that each of the $M$ coding coefficients equals the zero element of the Galois field, so a non-innovative packet occurs with probability ${\frac {1}{q^{M}}}$. With each successive innovative transmission, the exponent in this probability decreases by one: when the destination has received $M-1$ innovative packets (i.e., it needs only one more packet to fully decode the data), the probability of a non-innovative packet is ${\frac {1}{q}}$. We can use this knowledge to calculate the expected number of linearly dependent packets per generation, as sketched below. In the worst case, when the Galois field contains only two elements ($q=2$), the expected number of linearly dependent packets per generation is about 1.6 extra packets. If the generation size is 32 or 64 packets, this represents an overhead of roughly 5% or 2.5%, respectively. If we use the binary-8 field ($q=256$), the expected number of linearly dependent packets per generation is practically zero. Since the last packets of a generation are the main contributors to this overhead, RLNC-based protocols such as tunable sparse network coding[12] exploit this knowledge: they introduce sparsity (zero elements) in the coding coefficients at the beginning of the transmission to reduce the decoding complexity, and reduce the sparsity at the end of the transmission to limit the overhead due to linear dependencies.
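The expected waste follows from the probabilities above: with $k$ degrees of freedom missing, a random packet is non-innovative with probability $q^{-k}$, so the expected number of extra transmissions at that stage is the excess of a geometric distribution, $1/(q^{k}-1)$; summing over $k=1,\dots ,M$ gives the figures quoted. A short sketch:

```python
def expected_extra_packets(q, M):
    """Expected number of linearly dependent (wasted) transmissions per
    generation: with k degrees of freedom missing, a random coded packet
    is non-innovative with probability q**-k, costing 1/(q**k - 1)
    expected extra sends at that stage."""
    return sum(1 / (q**k - 1) for k in range(1, M + 1))

for q in (2, 16, 256):
    print(q, round(expected_extra_packets(q, 32), 4))
# q=2 gives about 1.6067 extra packets; q=256 gives about 0.0039.
```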
Applications
Over the years, multiple researchers and companies have integrated network coding solutions into their applications.[13] We can list some of the applications of network coding in different areas:
• VoIP:[14] The performance of streaming services such as VoIP over wireless mesh networks can be improved with network coding by reducing the network delay and jitter.
• Video[15] and audio[16] streaming and conferencing:[17][18] The performance of MPEG-4 traffic in terms of delay, packet loss, and jitter over wireless networks prone to packet erasures can be improved with RLNC.[15] In the case of audio streaming over wireless mesh networks, the packet delivery ratio, latency, and jitter performance of the network can be significantly increased when using RLNC instead of packet forwarding-based protocols such as simplified multicast forwarding and partial dominant pruning.[16] The performance improvements of network coding for video conferencing are not only theoretical. In 2016, the authors of [17] built a real-world testbed of 15 wireless Android devices to evaluate the feasibility of network-coding-based video conference systems. Their results showed large improvements in packet delivery ratio and overall user experience, especially over poor quality links compared to multicasting technologies based on packet forwarding.
• Software-defined wide area networks (SD-WAN):[19][20][21][22] Large industrial IoT wireless networks can benefit from network coding. Researchers showed[19] that network coding and its channel bundling capabilities improved the performance of SD-WANs with a large number of nodes with multiple cellular connections. Nowadays, companies such as Barracuda are employing RLNC-based solutions due to their advantages in low latency, small footprint on computing devices, and low overhead.[21][22]
• Channel bundling:[23] Due to the stateless nature of RLNC, it can be used to efficiently perform channel bundling, i.e., the transmission of information through multiple network interfaces.[23] Since the coded packets are randomly generated, and the state of the code traverses the network together with the coded packets, a source can achieve bundling without much planning simply by sending coded packets through all its network interfaces. The destination can decode the information once enough coded packets arrive, irrespective of the network interface. A video demonstrating the channel bundling capabilities of RLNC is available online.[24]
• 5G private networks:[25][26] RLNC can be integrated into the 5G NR standard to improve the performance of video delivery over 5G systems.[25] In 2018, a demo presented at the Consumer Electronics Show demonstrated a practical deployment of RLNC with NFV and SDN technologies to improve video quality against packet loss due to congestion at the core network.[26]
• Remote collaboration.[27]
• Augmented reality remote support and training.[28]
• Remote vehicle driving applications.[29][30][31][32]
• Connected cars networks.[33][34]
• Gaming applications such as low latency streaming and multiplayer connectivity.[35][36][37][38]
• Healthcare applications.[39][40][41]
• Industry 4.0.[42][43][44]
• Satellite networks.[45]
• Agricultural sensor fields.[46][47][48]
• In-flight entertainment networks.[49]
• Major security and firmware updates for mobile product families.[50][51]
• Smart city infrastructure.[52][53]
• Information-centric networking and named data networking:[54] Linear network coding can improve the network efficiency of information-centric networking solutions by exploiting the multi-source multicast nature of such systems.[54] It has been shown that RLNC can be integrated into distributed content delivery networks such as IPFS to increase data availability while reducing storage resources.[55]
• Alternative to forward error correction and automatic repeat requests in traditional and wireless networks with packet loss, such as Coded TCP[56] and Multi-user ARQ[57]
• Protection against network attacks such as snooping, eavesdropping, replay, or data corruption.[58][59]
• Digital file distribution and P2P file sharing, e.g. Avalanche filesystem from Microsoft
• Distributed storage[54][60][61]
• Throughput increase in wireless mesh networks, e.g.: COPE,[62] CORE,[63] Coding-aware routing,[64] and B.A.T.M.A.N.[65]
• Buffer and delay reduction in spatial sensor networks: Spatial buffer multiplexing[66]
• Wireless broadcast:[67] RLNC can reduce the number of packet transmissions in a single-hop wireless multicast network, and hence improve network bandwidth[67]
• Distributed file sharing[68]
• Low-complexity video streaming to mobile devices[69]
• Device-to-device extensions[70][71][72][73][74]
See also
• Secret sharing protocol
• Homomorphic signatures for network coding
• Triangular network coding
References
1. S. Li, R. Yeung, and N. Cai, "Linear Network Coding"(PDF), in IEEE Transactions on Information Theory, Vol 49, No. 2, pp. 371–381, 2003
2. R. Dougherty, C. Freiling, and K. Zeger, "Insufficiency of Linear Coding in Network Information Flow" (PDF), in IEEE Transactions on Information Theory, Vol. 51, No. 8, pp. 2745-2759, August 2005 (erratum)
3. Rasala Lehman, A.; Lehman, E. (2004). Complexity classification of network information flow problems. 15th ACM-SIAM SODA. pp. 142–150.
4. Langberg, M.; Sprintson, A.; Bruck, J. (2006). "The encoding complexity of network coding". IEEE Transactions on Information Theory. 52 (6): 2386–2397. doi:10.1109/TIT.2006.874434. S2CID 1414385.
5. Li, C. T. (2023). "Undecidability of Network Coding, Conditional Information Inequalities, and Conditional Independence Implication". IEEE Transactions on Information Theory. 69 (6): 1. arXiv:2205.11461. doi:10.1109/TIT.2023.3247570. S2CID 248986512.
6. Kühne, L.; Yashfe, G. (2022). "Representability of Matroids by c-Arrangements is Undecidable". Israel Journal of Mathematics. 252: 95–147. arXiv:1912.06123. doi:10.1007/s11856-022-2345-z. S2CID 209324252.
7. Chou, Philip A.; Wu, Yunnan; Jain, Kamal (October 2003), "Practical network coding", Allerton Conference on Communication, Control, and Computing, Any receiver can then recover the source vectors using Gaussian elimination on the vectors in its h (or more) received packets.
8. Ahlswede, Rudolf; N. Cai; S.-Y. R. Li; R. W. Yeung (2000). "Network Information Flow". IEEE Transactions on Information Theory. 46 (4): 1204–1216. CiteSeerX 10.1.1.722.1409. doi:10.1109/18.850663.
9. T. Ho, R. Koetter, M. Médard, D. R. Karger and M. Effros, "The Benefits of Coding over Routing in a Randomized Setting" Archived 2017-10-31 at the Wayback Machine in 2003 IEEE International Symposium on Information Theory. doi:10.1109/ISIT.2003.1228459
10. Sørensen, Chres W.; Paramanathan, Achuthan; Cabrera, Juan A.; Pedersen, Morten V.; Lucani, Daniel E.; Fitzek, Frank H.P. (April 2016). "Leaner and meaner: Network coding in SIMD enabled commercial devices" (PDF). 2016 IEEE Wireless Communications and Networking Conference. pp. 1–6. doi:10.1109/WCNC.2016.7565066. ISBN 978-1-4673-9814-5. S2CID 10468008. Archived from the original on 2022-04-08.
11. Wunderlich, Simon; Cabrera, Juan A.; Fitzek, Frank H. P.; Reisslein, Martin (August 2017). "Network Coding in Heterogeneous Multicore IoT Nodes With DAG Scheduling of Parallel Matrix Block Operations" (PDF). IEEE Internet of Things Journal. 4 (4): 917–933. doi:10.1109/JIOT.2017.2703813. ISSN 2327-4662. S2CID 30243498. Archived from the original (PDF) on 8 Apr 2022.
12. Feizi, Soheil; Lucani, Daniel E.; Sørensen, Chres W.; Makhdoumi, Ali; Médard, Muriel (June 2014). "Tunable sparse network coding for multicast networks". 2014 International Symposium on Network Coding (NetCod). pp. 1–6. doi:10.1109/NETCOD.2014.6892129. ISBN 978-1-4799-6217-4. S2CID 18256950.
13. "Coding the Network: Next Generation Coding for Flexible Network Operation | IEEE Communications Society". www.comsoc.org. Retrieved 2022-06-06.
14. Lopetegui, I.; Carrasco, R.A.; Boussakta, S. (July 2010). "VoIP design and implementation with network coding schemes for wireless networks". 2010 7th International Symposium on Communication Systems, Networks & Digital Signal Processing (CSNDSP 2010). Newcastle upon Tyne: IEEE. pp. 857–861. doi:10.1109/CSNDSP16145.2010.5580304. ISBN 978-1-4244-8858-2. S2CID 1761089.
15. Shrimali, R.; Narmawala, Z. (December 2012). "A survey on MPEG-4 streaming using network coding in wireless networks". 2012 Nirma University International Conference on Engineering (NUiCONE). pp. 1–5. doi:10.1109/NUICONE.2012.6493203. ISBN 978-1-4673-1719-1. S2CID 7791774.
16. Saeed, Basil; Lung, Chung-Horng; Kunz, Thomas; Srinivasan, Anand (October 2011). "Audio streaming for ad hoc wireless mesh networks using network coding". 2011 IFIP Wireless Days (WD). pp. 1–5. doi:10.1109/WD.2011.6098167. ISBN 978-1-4577-2028-4. S2CID 8052927.
17. Wang, Lei; Yang, Zhen; Xu, Lijie; Yang, Yuwang (July 2016). "NCVCS: Network-coding-based video conference system for mobile devices in multicast networks". Ad Hoc Networks. 45: 13–21. doi:10.1016/j.adhoc.2016.03.002.
18. Wang, Hui; Chang, Ronald Y.; Kuo, C.-C. Jay (June 2009). "Wireless Multi-party video conferencing with network coding". 2009 IEEE International Conference on Multimedia and Expo. pp. 1492–1495. doi:10.1109/ICME.2009.5202786. ISBN 978-1-4244-4290-4. S2CID 8234088.
19. Rachuri, Sri Pramodh; Ansari, Ahtisham Ali; Tandur, Deepaknath; Kherani, Arzad A.; Chouksey, Sameer (December 2019). "Network-Coded SD-WAN in Multi-Access Systems for Delay Control". 2019 International Conference on contemporary Computing and Informatics (IC3I). Singapore, Singapore: IEEE. pp. 32–37. doi:10.1109/IC3I46837.2019.9055565. ISBN 978-1-7281-5529-6. S2CID 215723197.
20. Ansari, Ahtisham Ali; Rachuri, Sri Pramodh; Kherani, Arzad A.; Tandur, Deepaknath (December 2019). "An SD-WAN Controller for Delay Jitter Minimization in Coded Multi-access Systems". 2019 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS). pp. 1–6. doi:10.1109/ANTS47819.2019.9117981. ISBN 978-1-7281-3715-5. S2CID 219853700.
21. "Steinwurf's next-gen FECs aren't a choice for SD-WAN, they're an imperative". www.linkedin.com. Retrieved 2022-06-06.
22. "Barracuda Networks optimizes SD-WAN traffic with patented erasure correction technology from Steinwurf". Steinwurf. Retrieved 2022-06-06.
23. Pedersen, Morten V.; Lucani, Daniel E.; Fitzek, Frank H. P.; Sorensen, Chres W.; Badr, Arash S. (September 2013). "Network coding designs suited for the real world: What works, what doesn't, what's promising". 2013 IEEE Information Theory Workshop (ITW). Sevilla: IEEE. pp. 1–5. doi:10.1109/ITW.2013.6691231. ISBN 978-1-4799-1321-3. S2CID 286822.
24. Channel Bundling Using Random Linear Network Coding, retrieved 2022-06-06
25. Vukobratovic, Dejan; Tassi, Andrea; Delic, Savo; Khirallah, Chadi (April 2018). "Random Linear Network Coding for 5G Mobile Video Delivery". Information. 9 (4): 72. arXiv:1802.04873. doi:10.3390/info9040072. ISSN 2078-2489.
26. Gabriel, Frank; Nguyen, Giang T.; Schmoll, Robert-Steve; Cabrera, Juan A.; Muehleisen, Maciej; Fitzek, Frank H.P. (January 2018). "Practical deployment of network coding for real-time applications in 5G networks". 2018 15th IEEE Annual Consumer Communications & Networking Conference (CCNC). Las Vegas, NV: IEEE. pp. 1–2. doi:10.1109/CCNC.2018.8319320. ISBN 978-1-5386-4790-5. S2CID 3982619.
27. Magli, Enrico; Wang, Mea; Frossard, Pascal; Markopoulou, Athina (August 2013). "Network Coding Meets Multimedia: A Review". IEEE Transactions on Multimedia. 15 (5): 1195–1212. arXiv:1211.4206. doi:10.1109/TMM.2013.2241415. ISSN 1520-9210. S2CID 3200945.
28. Torres Vega, Maria; Liaskos, Christos; Abadal, Sergi; Papapetrou, Evangelos; Jain, Akshay; Mouhouche, Belkacem; Kalem, Gökhan; Ergüt, Salih; Mach, Marian; Sabol, Tomas; Cabellos-Aparicio, Albert (October 2020). "Immersive Interconnected Virtual and Augmented Reality: A 5G and IoT Perspective". Journal of Network and Systems Management. 28 (4): 796–826. doi:10.1007/s10922-020-09545-w. hdl:2117/330129. ISSN 1064-7570. S2CID 219589307.
29. De Jonckere, Olivier; Chorin, Jean; Feldmann, Marius (September 2017). "Simulation Environment for Network Coding Research in Ring Road Networks". 2017 6th International Conference on Space Mission Challenges for Information Technology (SMC-IT). Alcala de Henares: IEEE. pp. 128–131. doi:10.1109/SMC-IT.2017.29. ISBN 978-1-5386-3462-2. S2CID 6180560.
30. Jamil, Farhan; Javaid, Anam; Umer, Tariq; Rehmani, Mubashir Husain (November 2017). "A comprehensive survey of network coding in vehicular ad-hoc networks". Wireless Networks. 23 (8): 2395–2414. doi:10.1007/s11276-016-1294-z. ISSN 1022-0038. S2CID 13624914.
31. Park, Joon-Sang; Lee, Uichin; Gerla, Mario (May 2010). "Vehicular communications: emergency video streams and network coding". Journal of Internet Services and Applications. 1 (1): 57–68. doi:10.1007/s13174-010-0006-7. ISSN 1867-4828. S2CID 2143201.
32. Noor-A-Rahim, Md; Liu, Zilong; Lee, Haeyoung; Khyam, M. Omar; He, Jianhua; Pesch, Dirk; Moessner, Klaus; Saad, Walid; Poor, H. Vincent (2022-05-01). "6G for Vehicle-to-Everything (V2X) Communications: Enabling Technologies, Challenges, and Opportunities". arXiv:2012.07753 [cs.IT].
33. Achour, Imen; Bejaoui, Tarek; Busson, Anthony; Tabbane, Sami (October 2017). "Network Coding scheme behavior in a Vehicle-to-Vehicle safety message dissemination". 2017 IEEE International Conference on Communications Workshops (ICC Workshops). Paris, France: IEEE. pp. 441–446. doi:10.1109/ICCW.2017.7962697. ISBN 978-1-5090-1525-2. S2CID 22423560.
34. Wang, Shujuan; Lu, Shuguang; Zhang, Qian (April 2019). "Instantly decodable network coding–assisted data dissemination for prioritized services in vehicular ad hoc networks". International Journal of Distributed Sensor Networks. 15 (4): 155014771984213. doi:10.1177/1550147719842137. ISSN 1550-1477. S2CID 145983739.
35. Dammak, Marwa; Andriyanova, Iryna; Boujelben, Yassine; Sellami, Noura (2018-03-29). "Routing and Network Coding over a Cyclic Network for Online Video Gaming". arXiv:1803.11102 [cs.IT].
36. Lajtha, Balázs; Biczók, Gergely; Szabó, Róbert (2010). Aagesen, Finn Arve; Knapskog, Svein Johan (eds.). "Enabling P2P Gaming with Network Coding". Networked Services and Applications - Engineering, Control and Management. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. 6164: 76–86. doi:10.1007/978-3-642-13971-0_8. ISBN 978-3-642-13971-0.
37. Dammak, Marwa (2018-11-20). Network coding application for online games platformes (PhD thesis). Université de Cergy Pontoise ; École nationale d'ingénieurs de Sfax (Tunisie).
38. Lajtha, Balázs; Biczók, Gergely; Szabó, Róbert (2010), Aagesen, Finn Arve; Knapskog, Svein Johan (eds.), "Enabling P2P Gaming with Network Coding", Networked Services and Applications - Engineering, Control and Management, Berlin, Heidelberg: Springer Berlin Heidelberg, vol. 6164, pp. 76–86, doi:10.1007/978-3-642-13971-0_8, ISBN 978-3-642-13970-3
39. Ilyas, Mohammad; Alwakeel, Sami S.; Alwakeel, Mohammed M.; Aggoune, el-Hadi M. (2014). "Exploiting Network Coding for Smart Healthcare". Sensor networks for sustainable development. Boca Raton, FL. doi:10.1201/b17124-13. ISBN 978-1-4665-8207-1. OCLC 881429695.
40. Kartsakli, Elli; Antonopoulos, Angelos; Alonso, Luis; Verikoukis, Christos (2014-03-10). "A Cloud-Assisted Random Linear Network Coding Medium Access Control Protocol for Healthcare Applications". Sensors. 14 (3): 4806–4830. Bibcode:2014Senso..14.4806K. doi:10.3390/s140304806. ISSN 1424-8220. PMC 4003969. PMID 24618727.
41. Taparugssanagorn, Attaphongse; Ono, Fumie; Kohno, Ryuji (September 2010). "Network coding for non-invasive Wireless Body Area Networks". 2010 IEEE 21st International Symposium on Personal, Indoor and Mobile Radio Communications Workshops. pp. 134–138. doi:10.1109/PIMRCW.2010.5670413. ISBN 978-1-4244-9117-9. S2CID 25872472.
42. Peralta, Goiuri; Iglesias-Urkia, Markel; Barcelo, Marc; Gomez, Raul; Moran, Adrian; Bilbao, Josu (May 2017). "Fog computing based efficient IoT scheme for the Industry 4.0". 2017 IEEE International Workshop of Electronics, Control, Measurement, Signals and their Application to Mechatronics (ECMSM). Donostia, San Sebastian, Spain: IEEE. pp. 1–6. doi:10.1109/ECMSM.2017.7945879. ISBN 978-1-5090-5582-1. S2CID 37985560.
43. Peralta, Goiuri; Garrido, Pablo; Bilbao, Josu; Agüero, Ramón; Crespo, Pedro (2019-04-08). "On the Combination of Multi-Cloud and Network Coding for Cost-Efficient Storage in Industrial Applications". Sensors. 19 (7): 1673. Bibcode:2019Senso..19.1673P. doi:10.3390/s19071673. ISSN 1424-8220. PMC 6479523. PMID 30965629.
44. Zverev, Mihail; Agüero, Ramón; Garrido, Pablo; Bilbao, Josu (2019-10-22). "Network Coding for IIoT Multi-Cloud Environments". Proceedings of the 9th International Conference on the Internet of Things. IoT 2019. New York, NY, USA: Association for Computing Machinery. pp. 1–4. doi:10.1145/3365871.3365903. ISBN 978-1-4503-7207-7. S2CID 207940281.
45. "DLR - Institute of Communications and Navigation - NEXT - Network Coding Satellite Experiment". www.dlr.de. Retrieved 2022-06-06.
46. Hsu, Hsiao-Tzu; Wang, Tzu-Ming; Kuo, Yuan-Cheng (2018-11-05). "Implementation of Agricultural Monitoring System Based on the Internet of Things". Proceedings of the 2018 2nd International Conference on Education and E-Learning. ICEEL 2018. New York, NY, USA: Association for Computing Machinery. pp. 212–216. doi:10.1145/3291078.3291098. ISBN 978-1-4503-6577-2. S2CID 59337140.
47. Syed, Abid Husain; Ali, Syed Zakir (2021-08-11). "Towards Transforming Agriculture for Challenges of 21st Century by Optimizing Resources Using IOT, Fuzzy Logic and Network Coding". doi:10.20944/preprints202108.0262.v1. S2CID 238723260.
48. Camilli, Alberto; Cugnasca, Carlos E.; Saraiva, Antonio M.; Hirakawa, André R.; Corrêa, Pedro L. P. (2007-08-01). "From wireless sensors to field mapping: Anatomy of an application for precision agriculture". Computers and Electronics in Agriculture. Precision Agriculture in Latin America. 58 (1): 25–36. doi:10.1016/j.compag.2007.01.019. ISSN 0168-1699.
49. US8401021B2, Buga, Wladyslaw Jan & Trent, Tracy Raymond, "Systems and methods for prioritizing wireless communication of aircraft", issued 2013-03-19
50. Tonyali, Samet; Akkaya, Kemal; Saputro, Nico; Cheng, Xiuzhen (July 2017). "An Attribute & Network Coding-Based Secure Multicast Protocol for Firmware Updates in Smart Grid AMI Networks". 2017 26th International Conference on Computer Communication and Networks (ICCCN). Vancouver, BC, Canada: IEEE. pp. 1–9. doi:10.1109/ICCCN.2017.8038415. ISBN 978-1-5090-2991-4. S2CID 25131878.
51. Jalil, Syed Qaisar; Chalup, Stephan; Rehmani, Mubashir Husain (2019). Pathan, Al-Sakib Khan; Fadlullah, Zubair Md.; Guerroumi, Mohamed (eds.). "A Smart Meter Firmware Update Strategy Through Network Coding for AMI Network". Smart Grid and Internet of Things. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Cham: Springer International Publishing. 256: 68–77. doi:10.1007/978-3-030-05928-6_7. ISBN 978-3-030-05928-6. S2CID 59561476.
52. Kumar, Vaibhav; Cardiff, Barry; Flanagan, Mark F. (October 2017). "Physical-layer network coding with multiple antennas: An enabling technology for smart cities". 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). Montreal, QC: IEEE. pp. 1–6. doi:10.1109/PIMRC.2017.8292785. hdl:10197/11114. ISBN 978-1-5386-3529-2. S2CID 748535.
53. Darif, Anouar; Chaibi, Hasna; Saadane, Rachid (2020), Ben Ahmed, Mohamed; Boudhir, Anouar Abdelhakim; Santos, Domingos; El Aroussi, Mohamed (eds.), "Network Coding for Energy Optimization of SWIMAC in Smart Cities Using WSN Based on IR-UWB", Innovations in Smart Cities Applications Edition 3, Cham: Springer International Publishing, pp. 663–674, doi:10.1007/978-3-030-37629-1_48, ISBN 978-3-030-37628-4, S2CID 214486109, retrieved 2022-06-06
54. Bilal, Muhammad; et al. (2019). "Network-Coding Approach for Information-Centric Networking". IEEE Systems Journal. 13 (2): 1376–1385. arXiv:1808.00348. Bibcode:2019ISysJ..13.1376B. doi:10.1109/JSYST.2018.2862913. S2CID 51894197.
55. Zimmermann, Sandra; Rischke, Justus; Cabrera, Juan A.; Fitzek, Frank H. P. (December 2020). "Journey to MARS: Interplanetary Coding for relieving CDNS". GLOBECOM 2020 - 2020 IEEE Global Communications Conference. Taipei, Taiwan: IEEE. pp. 1–6. doi:10.1109/GLOBECOM42002.2020.9322478. ISBN 978-1-7281-8298-8. S2CID 231725197.
56. Kim, Minji (2012). "Network Coded TCP (CTCP)". arXiv:1212.2291 [cs.NI].
57. Larsson, P.; Johansson, N. (2006). "Multi-User ARQ". 2006 IEEE 63rd Vehicular Technology Conference. Vol. 4. Melbourne, Australia: IEEE. pp. 2052–2057. doi:10.1109/VETECS.2006.1683207. ISBN 0-7803-9392-9. S2CID 38823300.
58. "Welcome to Network Coding Security - Secure Network Coding". securenetworkcoding.wikidot.com. Retrieved 26 March 2022.{{cite web}}: CS1 maint: url-status (link)
59. http://home.eng.iastate.edu/~yuzhen/publications/ZhenYu_INFOCOM_2008.pdf
60. Acedański, Szymon; Deb, Supratim; Médard, Muriel; Koetter, Ralf. "How Good is Random Linear Coding Based Distributed Networked Storage?" (PDF). web.mit.edu. Retrieved 26 March 2022.
61. Dimakis, Alexandros (2007). "Network Coding for Distributed Storage Systems". arXiv:cs/0702015.
62. Katti, Sachin; Rahul, Hariharan; Hu, Wenjun; Katabi, Dina; Médard, Muriel; Crowcroft, Jon (2006-08-11). "XORs in the air" (PDF). Proceedings of the 2006 conference on Applications, technologies, architectures, and protocols for computer communications. SIGCOMM '06. New York, NY, USA: Association for Computing Machinery. pp. 243–254. doi:10.1145/1159913.1159942. ISBN 978-1-59593-308-9. S2CID 207160426.
63. Krigslund, Jeppe; Hansen, Jonas; Hundeboll, Martin; Lucani, Daniel E.; Fitzek, Frank H. P. (2013). "CORE: COPE with MORE in Wireless Meshed Networks". 2013 IEEE 77th Vehicular Technology Conference (VTC Spring). pp. 1–6. doi:10.1109/VTCSpring.2013.6692495. ISBN 978-1-4673-6337-2. S2CID 1319567.
64. Sengupta, S.; Rayanchu, S.; Banerjee, S. (May 2007). "An Analysis of Wireless Network Coding for Unicast Sessions: The Case for Coding-Aware Routing". IEEE INFOCOM 2007 - 26th IEEE International Conference on Computer Communications. pp. 1028–1036. doi:10.1109/INFCOM.2007.124. ISBN 978-1-4244-1047-7. S2CID 3056111.
65. "NetworkCoding - batman-adv - Open Mesh". www.open-mesh.org. Archived from the original on 12 May 2021. Retrieved 2015-10-28.
66. Bhadra, S.; Shakkottai, S. (April 2006). "Looking at Large Networks: Coding vs. Queueing". Proceedings IEEE INFOCOM 2006. 25TH IEEE International Conference on Computer Communications. pp. 1–12. doi:10.1109/INFOCOM.2006.266. ISBN 1-4244-0221-2. S2CID 730706.
67. Dong Nguyen; Tuan Tran; Thinh Nguyen; Bose, B. (2009). "Wireless Broadcast Using Network Coding". IEEE Transactions on Vehicular Technology. 58 (2): 914–925. CiteSeerX 10.1.1.321.1962. doi:10.1109/TVT.2008.927729. S2CID 16989586.
68. Firooz, Mohammad Hamed; Roy, Sumit (24 March 2012). "Data Dissemination in Wireless Networks with Network Coding". IEEE Communications Letters. 17 (5): 944–947. arXiv:1203.5395. doi:10.1109/LCOMM.2013.031313.121994. ISSN 1089-7798. S2CID 13576.
69. Fiandrotti, Attilio; Bioglio, Valerio; Grangetto, Marco; Gaeta, Rossano; Magli, Enrico (11 October 2013). "Band Codes for Energy-Efficient Network Coding With Application to P2P Mobile Streaming". IEEE Transactions on Multimedia. 16 (2): 521–532. arXiv:1309.0316. doi:10.1109/TMM.2013.2285518. ISSN 1941-0077. S2CID 10548996.
70. Wu, Yue; Liu, Wuling; Wang, Siyi; Guo, Weisi; Chu, Xiaoli (June 2015). "Network coding in device-to-device (D2D) communications underlaying cellular networks". 2015 IEEE International Conference on Communications (ICC). pp. 2072–2077. doi:10.1109/ICC.2015.7248631. ISBN 978-1-4673-6432-4. S2CID 19637201.
71. Zhao, Yulei; Li, Yong; Ge, Ning (December 2015). "Physical Layer Network Coding Aided Two-Way Device-to-Device Communication Underlaying Cellular Networks". 2015 IEEE Global Communications Conference (GLOBECOM). pp. 1–6. doi:10.1109/GLOCOM.2015.7417590. ISBN 978-1-4799-5952-5.
72. Abrardo, Andrea; Fodor, Gábor; Tola, Besmir (2015). "Network coding schemes for Device-to-Device communications based relaying for cellular coverage extension" (PDF). 2015 IEEE 16th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). pp. 670–674. doi:10.1109/SPAWC.2015.7227122. ISBN 978-1-4799-1931-4. S2CID 9591953.
73. Gao, Chuhan; Li, Yong; Zhao, Yulei; Chen, Sheng (October 2017). "A Two-Level Game Theory Approach for Joint Relay Selection and Resource Allocation in Network Coding Assisted D2D Communications" (PDF). IEEE Transactions on Mobile Computing. 16 (10): 2697–2711. doi:10.1109/TMC.2016.2642190. ISSN 1558-0660. S2CID 22233426.
74. Zhou, Ting; Xu, Bin; Xu, Tianheng; Hu, Honglin; Xiong, Lei (1 February 2015). "User‐specific link adaptation scheme for device‐to‐device network coding multicast". IET Communications. 9 (3): 367–374. doi:10.1049/iet-com.2014.0323. ISSN 1751-8636. S2CID 27108894.
• Fragouli, C.; Le Boudec, J. & Widmer, J. "Network coding: An instant primer" in Computer Communication Review, 2006. https://doi.org/10.1145/1111322.1111337
• Ali Farzamnia, Sharifah K. Syed-Yusof, Norsheila Fisa "Multicasting Multiple Description Coding Using p-Cycle Network Coding", KSII Transactions on Internet and Information Systems, Vol 7, No 12, 2013. doi:10.3837/tiis.2013.12.009
External links
• Network Coding Homepage
• A network coding bibliography
• Raymond W. Yeung, Information Theory and Network Coding, Springer 2008, http://iest2.ie.cuhk.edu.hk/~whyeung/book2/
• Raymond W. Yeung et al., Network Coding Theory, now Publishers, 2005, http://iest2.ie.cuhk.edu.hk/~whyeung/netcode/monograph.html
• Christina Fragouli et al., Network Coding: An Instant Primer, ACM SIGCOMM 2006, http://infoscience.epfl.ch/getfile.py?mode=best&recid=58339.
• Avalanche Filesystem, http://research.microsoft.com/en-us/projects/avalanche/default.aspx
• Random Network Coding, https://web.archive.org/web/20060618083034/http://www.mit.edu/~medard/coding1.htm
• Digital Fountain Codes, http://www.icsi.berkeley.edu/~luby/
• Coding-Aware Routing, https://web.archive.org/web/20081011124616/http://arena.cse.sc.edu/papers/rocx.secon06.pdf
• MIT offers a course: Introduction to Network Coding
• Network coding: Networking's next revolution?
• Coding-aware protocol design for wireless networks: http://scholarcommons.sc.edu/etd/230/
Sampling (statistics)
In statistics, quality assurance, and survey methodology, sampling is the selection of a subset or a statistical sample (termed sample for short) of individuals from within a statistical population to estimate characteristics of the whole population. Statisticians attempt to collect samples that are representative of the population. Sampling has lower costs and faster data collection compared to recording data from the entire population, and thus, it can provide insights in cases where it is infeasible to measure an entire population.
Each observation measures one or more properties (such as weight, location, colour or mass) of independent objects or individuals. In survey sampling, weights can be applied to the data to adjust for the sample design, particularly in stratified sampling.[1] Results from probability theory and statistical theory are employed to guide the practice. In business and medical research, sampling is widely used for gathering information about a population.[2] Acceptance sampling is used to determine if a production lot of material meets the governing specifications.
Population definition
Successful statistical practice is based on focused problem definition. In sampling, this includes defining the "population" from which our sample is drawn. A population can be defined as including all people or items with the characteristics one wishes to understand. Because there is very rarely enough time or money to gather information from everyone or everything in a population, the goal becomes finding a representative sample (or subset) of that population.
Sometimes what defines a population is obvious. For example, a manufacturer needs to decide whether a batch of material from production is of high enough quality to be released to the customer or should be scrapped or reworked due to poor quality. In this case, the batch is the population.
Although the population of interest often consists of physical objects, sometimes it is necessary to sample over time, space, or some combination of these dimensions. For instance, an investigation of supermarket staffing could examine checkout line length at various times, or a study on endangered penguins might aim to understand their usage of various hunting grounds over time. For the time dimension, the focus may be on periods or discrete occasions.
In other cases, the examined 'population' may be even less tangible. For example, Joseph Jagger studied the behaviour of roulette wheels at a casino in Monte Carlo, and used this to identify a biased wheel. In this case, the 'population' Jagger wanted to investigate was the overall behaviour of the wheel (i.e. the probability distribution of its results over infinitely many trials), while his 'sample' was formed from observed results from that wheel. Similar considerations arise when taking repeated measurements of some physical characteristic such as the electrical conductivity of copper.
This situation often arises when seeking knowledge about the cause system of which the observed population is an outcome. In such cases, sampling theory may treat the observed population as a sample from a larger 'superpopulation'. For example, a researcher might study the success rate of a new 'quit smoking' program on a test group of 100 patients, in order to predict the effects of the program if it were made available nationwide. Here the superpopulation is "everybody in the country, given access to this treatment" – a group that does not yet exist since the program isn't yet available to all.
The population from which the sample is drawn may not be the same as the population from which information is desired. Often there is a large but not complete overlap between these two groups due to frame issues etc. (see below). Sometimes they may be entirely separate – for instance, one might study rats in order to get a better understanding of human health, or one might study records from people born in 2008 in order to make predictions about people born in 2009.
Time spent in making the sampled population and population of concern precise is often well spent because it raises many issues, ambiguities, and questions that would otherwise have been overlooked at this stage.
Sampling frame
Main article: Sampling frame
In the most straightforward case, such as the sampling of a batch of material from production (acceptance sampling by lots), it would be most desirable to identify and measure every single item in the population and to include any one of them in our sample. However, in the more general case this is not usually possible or practical. There is no way to identify all rats in the set of all rats. Where voting is not compulsory, there is no way to identify which people will vote at a forthcoming election (in advance of the election). These imprecise populations are not amenable to sampling in any of the ways described below to which we could apply statistical theory.
As a remedy, we seek a sampling frame which has the property that we can identify every single element and include any in our sample.[3][4][5][6] The most straightforward type of frame is a list of elements of the population (preferably the entire population) with appropriate contact information. For example, in an opinion poll, possible sampling frames include an electoral register and a telephone directory.
A probability sample is a sample in which every unit in the population has a chance (greater than zero) of being selected in the sample, and this probability can be accurately determined. The combination of these traits makes it possible to produce unbiased estimates of population totals, by weighting sampled units according to their probability of selection.
Example: We want to estimate the total income of adults living in a given street. We visit each household in that street, identify all adults living there, and randomly select one adult from each household. (For example, we can allocate each person a random number, generated from a uniform distribution between 0 and 1, and select the person with the highest number in each household). We then interview the selected person and find their income.
People living on their own are certain to be selected, so we simply add their income to our estimate of the total. But a person living in a household of two adults has only a one-in-two chance of selection. To reflect this, when we come to such a household, we would count the selected person's income twice towards the total. (The person who is selected from that household can be loosely viewed as also representing the person who isn't selected.)
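A minimal Python sketch of this inverse-probability weighting (the household data are hypothetical, purely for illustration):

```python
# Each record: (income of the selected adult, number of adults in the household).
# The selected adult's probability of selection is 1/household_size, so each
# income is weighted by household_size (a Horvitz-Thompson estimator).
sampled = [(30_000, 1), (45_000, 2), (25_000, 2), (52_000, 3)]

estimated_total = sum(income * adults for income, adults in sampled)
print(estimated_total)  # each respondent also "stands in" for unselected adults
```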
In the above example, not everybody has the same probability of selection; what makes it a probability sample is the fact that each person's probability is known. When every element in the population does have the same probability of selection, this is known as an 'equal probability of selection' (EPS) design. Such designs are also referred to as 'self-weighting' because all sampled units are given the same weight.
Probability sampling includes: Simple Random Sampling, Systematic Sampling, Stratified Sampling, Probability Proportional to Size Sampling, and Cluster or Multistage Sampling. These various ways of probability sampling have two things in common:
1. Every element has a known nonzero probability of being sampled, and
2. the procedure involves random selection at some point.
Nonprobability sampling
Main article: Nonprobability sampling
Nonprobability sampling is any sampling method where some elements of the population have no chance of selection (these are sometimes referred to as 'out of coverage'/'undercovered'), or where the probability of selection can't be accurately determined. It involves the selection of elements based on assumptions regarding the population of interest, which forms the criteria for selection. Hence, because the selection of elements is nonrandom, nonprobability sampling does not allow the estimation of sampling errors. These conditions give rise to exclusion bias, placing limits on how much information a sample can provide about the population. Information about the relationship between sample and population is limited, making it difficult to extrapolate from the sample to the population.
Example: We visit every household in a given street, and interview the first person to answer the door. In any household with more than one occupant, this is a nonprobability sample, because some people are more likely to answer the door (e.g. an unemployed person who spends most of their time at home is more likely to answer than an employed housemate who might be at work when the interviewer calls) and it's not practical to calculate these probabilities.
Nonprobability sampling methods include convenience sampling, quota sampling, and purposive sampling. In addition, nonresponse effects may turn any probability design into a nonprobability design if the characteristics of nonresponse are not well understood, since nonresponse effectively modifies each element's probability of being sampled.
Sampling methods
Within any of the types of frames identified above, a variety of sampling methods can be employed individually or in combination. Factors commonly influencing the choice between these designs include:
• Nature and quality of the frame
• Availability of auxiliary information about units on the frame
• Accuracy requirements, and the need to measure accuracy
• Whether detailed analysis of the sample is expected
• Cost/operational concerns
Simple random sampling
In a simple random sample (SRS) of a given size, all subsets of a sampling frame have an equal probability of being selected. Each element of the frame thus has an equal probability of selection: the frame is not subdivided or partitioned. Furthermore, any given pair of elements has the same chance of selection as any other such pair (and similarly for triples, and so on). This minimizes bias and simplifies analysis of results. In particular, the variance between individual results within the sample is a good indicator of variance in the overall population, which makes it relatively easy to estimate the accuracy of results.
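In software, a simple random sample is typically drawn with a library routine that makes every subset of the requested size equally likely; a minimal Python sketch:

```python
import random

frame = list(range(1, 10_001))   # a sampling frame of 10,000 unit identifiers
srs = random.sample(frame, 100)  # SRS without replacement: every size-100
                                 # subset of the frame is equally probable
```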
Simple random sampling can be vulnerable to sampling error because the randomness of the selection may result in a sample that doesn't reflect the makeup of the population. For instance, a simple random sample of ten people from a given country will on average produce five men and five women, but any given trial is likely to overrepresent one sex and underrepresent the other. Systematic and stratified techniques attempt to overcome this problem by "using information about the population" to choose a more "representative" sample.
Also, simple random sampling can be cumbersome and tedious when sampling from a large target population. In some cases, investigators are interested in research questions specific to subgroups of the population. For example, researchers might be interested in examining whether cognitive ability as a predictor of job performance is equally applicable across racial groups. Simple random sampling cannot accommodate the needs of researchers in this situation, because it does not provide subsamples of the population, and other sampling strategies, such as stratified sampling, can be used instead.
Systematic sampling
Systematic sampling (also known as interval sampling) relies on arranging the study population according to some ordering scheme and then selecting elements at regular intervals through that ordered list. Systematic sampling involves a random start and then proceeds with the selection of every kth element from then onwards. In this case, k=(population size/sample size). It is important that the starting point is not automatically the first in the list, but is instead randomly chosen from within the first to the kth element in the list. A simple example would be to select every 10th name from the telephone directory (an 'every 10th' sample, also referred to as 'sampling with a skip of 10').
As long as the starting point is randomized, systematic sampling is a type of probability sampling. It is easy to implement and the stratification induced can make it efficient, if the variable by which the list is ordered is correlated with the variable of interest. 'Every 10th' sampling is especially useful for efficient sampling from databases.
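A minimal Python sketch of the procedure (illustrative; it assumes the frame size is a multiple of the sample size):

```python
import random

def systematic_sample(population, sample_size):
    """Select every k-th element after a random start within the first interval."""
    k = len(population) // sample_size       # sampling interval
    start = random.randrange(k)              # random start in [0, k)
    return [population[start + i * k] for i in range(sample_size)]

houses = list(range(1, 1001))                # house numbers 1..1000
sample = systematic_sample(houses, 100)      # spread evenly along the street
```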
For example, suppose we wish to sample people from a long street that starts in a poor area (house No. 1) and ends in an expensive district (house No. 1000). A simple random selection of addresses from this street could easily end up with too many from the high end and too few from the low end (or vice versa), leading to an unrepresentative sample. Selecting (e.g.) every 10th street number along the street ensures that the sample is spread evenly along the length of the street, representing all of these districts. (Note that if we always start at house #1 and end at #991, the sample is slightly biased towards the low end; by randomly selecting the start between #1 and #10, this bias is eliminated.)
However, systematic sampling is especially vulnerable to periodicities in the list. If periodicity is present and the period is a multiple or factor of the interval used, the sample is especially likely to be unrepresentative of the overall population, making the scheme less accurate than simple random sampling.
For example, consider a street where the odd-numbered houses are all on the north (expensive) side of the road, and the even-numbered houses are all on the south (cheap) side. Under the sampling scheme given above, it is impossible to get a representative sample; either the houses sampled will all be from the odd-numbered, expensive side, or they will all be from the even-numbered, cheap side, unless the researcher has previous knowledge of this bias and avoids it by using a skip which ensures jumping between the two sides (any odd-numbered skip).
Another drawback of systematic sampling is that even in scenarios where it is more accurate than SRS, its theoretical properties make it difficult to quantify that accuracy. (In the two examples of systematic sampling that are given above, much of the potential sampling error is due to variation between neighbouring houses – but because this method never selects two neighbouring houses, the sample will not give us any information on that variation.)
As described above, systematic sampling is an EPS method, because all elements have the same probability of selection (in the example given, one in ten). It is not 'simple random sampling' because different subsets of the same size have different selection probabilities – e.g. the set {4,14,24,...,994} has a one-in-ten probability of selection, but the set {4,13,24,34,...} has zero probability of selection.
Systematic sampling can also be adapted to a non-EPS approach; for an example, see discussion of PPS samples below.
Stratified sampling
When the population embraces a number of distinct categories, the frame can be organized by these categories into separate "strata." Each stratum is then sampled as an independent sub-population, out of which individual elements can be randomly selected.[3] The ratio of the size of this random selection (or sample) to the size of the population is called a sampling fraction.[7] There are several potential benefits to stratified sampling.[7]
First, dividing the population into distinct, independent strata can enable researchers to draw inferences about specific subgroups that may be lost in a more generalized random sample.
Second, utilizing a stratified sampling method can lead to more efficient statistical estimates (provided that strata are selected based upon relevance to the criterion in question, instead of availability of the samples). Even if a stratified sampling approach does not lead to increased statistical efficiency, such a tactic will not result in less efficiency than would simple random sampling, provided that each stratum is proportional to the group's size in the population.
Third, it is sometimes the case that data are more readily available for individual, pre-existing strata within a population than for the overall population; in such cases, using a stratified sampling approach may be more convenient than aggregating data across groups (though this may potentially be at odds with the previously noted importance of utilizing criterion-relevant strata).
Finally, since each stratum is treated as an independent population, different sampling approaches can be applied to different strata, potentially enabling researchers to use the approach best suited (or most cost-effective) for each identified subgroup within the population.
There are, however, some potential drawbacks to using stratified sampling. First, identifying strata and implementing such an approach can increase the cost and complexity of sample selection, as well as leading to increased complexity of population estimates. Second, when examining multiple criteria, stratifying variables may be related to some, but not to others, further complicating the design, and potentially reducing the utility of the strata. Finally, in some cases (such as designs with a large number of strata, or those with a specified minimum sample size per group), stratified sampling can potentially require a larger sample than would other methods (although in most cases, the required sample size would be no larger than would be required for simple random sampling).
A stratified sampling approach is most effective when three conditions are met:
1. Variability within strata is minimized,
2. Variability between strata is maximized, and
3. The variables upon which the population is stratified are strongly correlated with the desired dependent variable.
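A minimal Python sketch of proportional-allocation stratified sampling (illustrative; the helper and data are ours, and rounding may shift the total sample size by a unit or two):

```python
import random
from collections import defaultdict

def stratified_sample(population, strata_key, sample_size):
    """Sample each stratum in proportion to its share of the population."""
    strata = defaultdict(list)
    for element in population:
        strata[strata_key(element)].append(element)

    sample = []
    for members in strata.values():
        n = round(sample_size * len(members) / len(population))
        sample.extend(random.sample(members, n))   # SRS within each stratum
    return sample

people = [("north", i) for i in range(600)] + [("south", i) for i in range(400)]
s = stratified_sample(people, strata_key=lambda p: p[0], sample_size=100)
# yields 60 elements from "north" and 40 from "south"
```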
Advantages over other sampling methods
1. Focuses on important subpopulations and ignores irrelevant ones.
2. Allows use of different sampling techniques for different subpopulations.
3. Improves the accuracy/efficiency of estimation.
4. Permits greater balancing of statistical power of tests of differences between strata by sampling equal numbers from strata varying widely in size.
Disadvantages
1. Requires selection of relevant stratification variables which can be difficult.
2. Is not useful when there are no homogeneous subgroups.
3. Can be expensive to implement.
Poststratification
Stratification is sometimes introduced after the sampling phase in a process called "poststratification".[3] This approach is typically implemented due to a lack of prior knowledge of an appropriate stratifying variable or when the experimenter lacks the necessary information to create a stratifying variable during the sampling phase. Although the method is susceptible to the pitfalls of post hoc approaches, it can provide several benefits in the right situation. Implementation usually follows a simple random sample. In addition to allowing for stratification on an ancillary variable, poststratification can be used to implement weighting, which can improve the precision of a sample's estimates.[3]
Oversampling
Choice-based sampling is one of the stratified sampling strategies. In choice-based sampling,[8] the data are stratified on the target and a sample is taken from each stratum so that the rare target class will be more represented in the sample. The model is then built on this biased sample. The effects of the input variables on the target are often estimated with more precision with the choice-based sample even when a smaller overall sample size is taken, compared to a random sample. The results usually must be adjusted to correct for the oversampling.
Probability-proportional-to-size sampling
Main article: Probability-proportional-to-size sampling
In some cases the sample designer has access to an "auxiliary variable" or "size measure", believed to be correlated to the variable of interest, for each element in the population. These data can be used to improve accuracy in sample design. One option is to use the auxiliary variable as a basis for stratification, as discussed above.
Another option is probability proportional to size ('PPS') sampling, in which the selection probability for each element is set to be proportional to its size measure, up to a maximum of 1. In a simple PPS design, these selection probabilities can then be used as the basis for Poisson sampling. However, this has the drawback of variable sample size, and different portions of the population may still be over- or under-represented due to chance variation in selections.
Systematic sampling theory can be used to create a probability proportionate to size sample. This is done by treating each count within the size variable as a single sampling unit. Samples are then identified by selecting at even intervals among these counts within the size variable. This method is sometimes called PPS-sequential or monetary unit sampling in the case of audits or forensic sampling.
Example: Suppose we have six schools with populations of 150, 180, 200, 220, 260, and 490 students respectively (total 1500 students), and we want to use student population as the basis for a PPS sample of size three. To do this, we could allocate the first school numbers 1 to 150, the second school 151 to 330 (= 150 + 180), the third school 331 to 530, and so on to the last school (1011 to 1500). We then generate a random start between 1 and 500 (equal to 1500/3) and count through the school populations by multiples of 500. If our random start was 137, we would select the schools which have been allocated numbers 137, 637, and 1137, i.e. the first, fourth, and sixth schools.
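The school example can be reproduced with a short Python sketch (illustrative; the function name is ours):

```python
import random

def pps_systematic(sizes, sample_size):
    """Systematic PPS: cumulate the size measures, then step through them."""
    cumulative, running = [], 0
    for s in sizes:
        running += s
        cumulative.append(running)       # upper bound of each unit's number range

    interval = running // sample_size    # 1500 // 3 = 500 in the example
    start = random.randint(1, interval)  # e.g. 137
    picks = [start + i * interval for i in range(sample_size)]
    # Each pick falls in the range of the first unit whose bound covers it.
    return [next(i for i, c in enumerate(cumulative) if pick <= c)
            for pick in picks]

schools = [150, 180, 200, 220, 260, 490]
print(pps_systematic(schools, 3))  # a start of 137 gives [0, 3, 5]:
                                   # the first, fourth, and sixth schools
```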
The PPS approach can improve accuracy for a given sample size by concentrating sample on large elements that have the greatest impact on population estimates. PPS sampling is commonly used for surveys of businesses, where element size varies greatly and auxiliary information is often available – for instance, a survey attempting to measure the number of guest-nights spent in hotels might use each hotel's number of rooms as an auxiliary variable. In some cases, an older measurement of the variable of interest can be used as an auxiliary variable when attempting to produce more current estimates.[9]
Cluster sampling
Sometimes it is more cost-effective to select respondents in groups ('clusters'). Sampling is often clustered by geography, or by time periods. (Nearly all samples are in some sense 'clustered' in time – although this is rarely taken into account in the analysis.) For instance, if surveying households within a city, we might choose to select 100 city blocks and then interview every household within the selected blocks.
Clustering can reduce travel and administrative costs. In the example above, an interviewer can make a single trip to visit several households in one block, rather than having to drive to a different block for each household.
It also means that one does not need a sampling frame listing all elements in the target population. Instead, clusters can be chosen from a cluster-level frame, with an element-level frame created only for the selected clusters. In the example above, the sample only requires a block-level city map for initial selections, and then a household-level map of the 100 selected blocks, rather than a household-level map of the whole city.
Cluster sampling (also known as clustered sampling) generally increases the variability of sample estimates above that of simple random sampling, depending on how the clusters differ between one another as compared to the within-cluster variation. For this reason, cluster sampling requires a larger sample than SRS to achieve the same level of accuracy – but cost savings from clustering might still make this a cheaper option.
Cluster sampling is commonly implemented as multistage sampling. This is a complex form of cluster sampling in which two or more levels of units are embedded one in the other. The first stage consists of constructing the clusters that will be used to sample from. In the second stage, a sample of primary units is randomly selected from each cluster (rather than using all units contained in all selected clusters). In following stages, in each of those selected clusters, additional samples of units are selected, and so on. All ultimate units (individuals, for instance) selected at the last step of this procedure are then surveyed. This technique, thus, is essentially the process of taking random subsamples of preceding random samples.
Multistage sampling can substantially reduce sampling costs, where the complete population list would need to be constructed (before other sampling methods could be applied). By eliminating the work involved in describing clusters that are not selected, multistage sampling can reduce the large costs associated with traditional cluster sampling.[9] However, each sample may not be a full representative of the whole population.
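A minimal two-stage sketch in Python (illustrative; the block and household names are hypothetical). Note that an element-level frame is built only for the selected clusters:

```python
import random

def two_stage_sample(blocks, n_blocks, n_per_block):
    """Stage 1: pick blocks at random; stage 2: pick households within each."""
    chosen = random.sample(list(blocks), n_blocks)
    return {b: random.sample(blocks[b], n_per_block) for b in chosen}

city = {f"block-{i}": [f"household-{i}-{j}" for j in range(50)]
        for i in range(500)}
sample = two_stage_sample(city, n_blocks=100, n_per_block=5)  # 500 households
```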
Quota sampling
Main article: Quota sampling
In quota sampling, the population is first segmented into mutually exclusive sub-groups, just as in stratified sampling. Then judgement is used to select the subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the age of 45 and 60.
It is this second step which makes the technique one of non-probability sampling. In quota sampling the selection of the sample is non-random. For example, interviewers might be tempted to interview those who look most helpful. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is its greatest weakness, and quota versus probability sampling has been a matter of controversy for many years.
Minimax sampling
In imbalanced datasets, where the sampling ratio does not follow the population statistics, one can resample the dataset in a conservative manner called minimax sampling. Minimax sampling has its origin in the Anderson minimax ratio, whose value is proved to be 0.5: in a binary classification, the class-sample sizes should be chosen equally. This ratio can be proved to be the minimax ratio only under the assumption of an LDA classifier with Gaussian distributions. The notion of minimax sampling has recently been developed for a general class of classification rules, called class-wise smart classifiers. In this case, the sampling ratio of classes is selected so that the worst-case classifier error over all possible population statistics for class prior probabilities is minimized.[7]
Accidental sampling
Accidental sampling (sometimes known as grab, convenience or opportunity sampling) is a type of nonprobability sampling which involves the sample being drawn from that part of the population which is close to hand. That is, a population is selected because it is readily available and convenient, whether by meeting people in person or by finding them through technological means such as the internet or phone. The researcher using such a sample cannot scientifically make generalizations about the total population from this sample because it would not be representative enough. For example, if the interviewer were to conduct such a survey at a shopping center early in the morning on a given day, the people that they could interview would be limited to those present there at that given time, and their views would not necessarily represent those that would be obtained if the survey were conducted at different times of day and several times per week. This type of sampling is most useful for pilot testing. Several important considerations for researchers using convenience samples include:
1. Are there controls within the research design or experiment which can serve to lessen the impact of a non-random convenience sample, thereby ensuring the results will be more representative of the population?
2. Is there good reason to believe that a particular convenience sample would or should respond or behave differently than a random sample from the same population?
3. Is the question being asked by the research one that can adequately be answered using a convenience sample?
In social science research, snowball sampling is a similar technique, where existing study subjects are used to recruit more subjects into the sample. Some variants of snowball sampling, such as respondent driven sampling, allow calculation of selection probabilities and are probability sampling methods under certain conditions.
Voluntary Sampling
The voluntary sampling method is a type of non-probability sampling. Volunteers choose to complete a survey.
Volunteers may be invited through advertisements in social media.[10] The target population for advertisements can be selected by characteristics like location, age, sex, income, occupation, education, or interests using tools provided by the social medium. The advertisement may include a message about the research and link to a survey. After following the link and completing the survey, the volunteer submits the data to be included in the sample population. This method can reach a global population but is limited by the campaign budget. Volunteers outside the invited population may also be included in the sample.
It is difficult to make generalizations from this sample because it may not represent the total population. Often, volunteers have a strong interest in the main topic of the survey.
Line-intercept sampling
Line-intercept sampling is a method of sampling elements in a region whereby an element is sampled if a chosen line segment, called a "transect", intersects the element.
Panel sampling
Panel sampling is the method of first selecting a group of participants through a random sampling method and then asking that group for (potentially the same) information several times over a period of time. Therefore, each participant is interviewed at two or more time points; each period of data collection is called a "wave". The method was developed by sociologist Paul Lazarsfeld in 1938 as a means of studying political campaigns.[11] This longitudinal sampling-method allows estimates of changes in the population, on topics ranging from chronic illness to job stress to weekly food expenditures. Panel sampling can also be used to inform researchers about within-person health changes due to age or to help explain changes in continuous dependent variables such as spousal interaction.[12] There have been several proposed methods of analyzing panel data, including MANOVA, growth curves, and structural equation modeling with lagged effects.
Snowball sampling
Snowball sampling involves finding a small group of initial respondents and using them to recruit more respondents. It is particularly useful in cases where the population is hidden or difficult to enumerate.
Theoretical sampling
Theoretical sampling[13] occurs when samples are selected on the basis of the results of the data collected so far with a goal of developing a deeper understanding of the area or develop theories. Extreme or very specific cases might be selected in order to maximize the likelihood a phenomenon will actually be observable.
Replacement of selected units
Sampling schemes may be without replacement ('WOR' – no element can be selected more than once in the same sample) or with replacement ('WR' – an element may appear multiple times in the one sample). For example, if we catch fish, measure them, and immediately return them to the water before continuing with the sample, this is a WR design, because we might end up catching and measuring the same fish more than once. However, if we do not return the fish to the water or tag and release each fish after catching it, this becomes a WOR design.
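The distinction maps directly onto standard library routines; a minimal Python sketch:

```python
import random

catch = [f"fish-{i}" for i in range(20)]  # the population

wor = random.sample(catch, 5)    # without replacement: no fish can repeat
wr = random.choices(catch, k=5)  # with replacement: the same fish may recur
```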
Sample size determination
Main article: Sample size determination
Formulas, tables, and power function charts are well-known approaches to determining sample size.
Steps for using sample size tables:
1. Postulate the effect size of interest, α, and β.
2. Check a sample size table:[14]
   1. Select the table corresponding to the selected α.
   2. Locate the row corresponding to the desired power.
   3. Locate the column corresponding to the estimated effect size.
   4. The intersection of the column and row is the minimum sample size required.
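Where no table is at hand, the same inputs feed a closed-form normal approximation. The following Python sketch (standard library only; the function name and defaults are ours) computes the per-group sample size for a two-sample comparison of means:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_groups(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided, two-sample z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # quantile for the chosen alpha
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Medium effect (Cohen's d = 0.5), alpha = 0.05, power = 0.80 -> 63 per group,
# close to the tabulated value of about 64 once the t-correction is added.
print(sample_size_two_groups(0.5))
```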
Sampling and data collection
Good data collection involves:
• Following the defined sampling process
• Keeping the data in time order
• Noting comments and other contextual events
• Recording non-responses
Applications of sampling
Sampling enables the selection of the right data points from within the larger data set to estimate the characteristics of the whole population. For example, there are about 600 million tweets produced every day. It is not necessary to look at all of them to determine the topics that are discussed during the day, nor is it necessary to look at all the tweets to determine the sentiment on each of the topics. A theoretical formulation for sampling Twitter data has been developed.[15]
In manufacturing, different types of sensory data, such as acoustics, vibration, pressure, current, voltage, and controller data, are available at short time intervals. To predict down-time, it may not be necessary to look at all the data; a sample may be sufficient.
Errors in sample surveys
Main article: Sampling error
Survey results are typically subject to some error. Total errors can be classified into sampling errors and non-sampling errors. The term "error" here includes systematic biases as well as random errors.
Sampling errors and biases
Sampling errors and biases are induced by the sample design. They include:
1. Selection bias: When the true selection probabilities differ from those assumed in calculating the results.
2. Random sampling error: Random variation in the results due to the elements in the sample being selected at random.
Non-sampling error
Non-sampling errors are other errors which can impact final survey estimates, caused by problems in data collection, processing, or sample design. Such errors may include:
1. Over-coverage: inclusion of data from outside of the population
2. Under-coverage: sampling frame does not include elements in the population.
3. Measurement error: e.g. when respondents misunderstand a question, or find it difficult to answer
4. Processing error: mistakes in data coding
5. Non-response or Participation bias: failure to obtain complete data from all selected individuals
After sampling, a review should be made of the process actually followed in sampling, as opposed to the process intended, in order to study any effects that divergences might have on subsequent analysis.
A particular problem involves non-response. Two major types of non-response exist:[16][17]
• unit nonresponse (lack of completion of any part of the survey)
• item non-response (submission or participation in survey but failing to complete one or more components/questions of the survey)
In survey sampling, many of the individuals identified as part of the sample may be unwilling to participate, not have the time to participate (opportunity cost),[18] or survey administrators may not have been able to contact them. In this case, there is a risk of differences between respondents and nonrespondents, leading to biased estimates of population parameters. This is often addressed by improving survey design, offering incentives, and conducting follow-up studies which make a repeated attempt to contact the unresponsive and to characterize their similarities and differences with the rest of the frame.[19] The effects can also be mitigated by weighting the data (when population benchmarks are available) or by imputing data based on answers to other questions. Nonresponse is particularly a problem in internet sampling. Reasons for this problem may include improperly designed surveys,[17] over-surveying (or survey fatigue),[12][20] and the fact that potential participants may have multiple e-mail addresses, which they don't use anymore or don't check regularly.
Survey weights
In many situations the sample fraction may be varied by stratum and data will have to be weighted to correctly represent the population. Thus for example, a simple random sample of individuals in the United Kingdom might not include some in remote Scottish islands who would be inordinately expensive to sample. A cheaper method would be to use a stratified sample with urban and rural strata. The rural stratum could then be under-represented in the sample, but weighted up appropriately in the analysis to compensate.
More generally, data should usually be weighted if the sample design does not give each individual an equal chance of being selected. For instance, when households have equal selection probabilities but one person is interviewed from within each household, this gives people from large households a smaller chance of being interviewed. This can be accounted for using survey weights. Similarly, households with more than one telephone line have a greater chance of being selected in a random digit dialing sample, and weights can adjust for this.
Weights can also serve other purposes, such as helping to correct for non-response.
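As a sketch of the basic mechanics, the following Python example (the strata, sampling fractions, and incomes are invented) computes design weights as inverse selection probabilities and a weighted mean:

```python
# Hypothetical stratified design: urban households sampled at 1/100, rural at 1/500.
urban = {"prob": 1 / 100, "incomes": [30_000, 45_000, 52_000]}
rural = {"prob": 1 / 500, "incomes": [28_000, 33_000]}

weighted_sum = 0.0
weight_total = 0.0
for stratum in (urban, rural):
    w = 1 / stratum["prob"]  # each respondent "stands for" w population members
    for income in stratum["incomes"]:
        weighted_sum += w * income
        weight_total += w

print("Weighted mean income:", round(weighted_sum / weight_total))
```

Without the weights, the under-sampled rural stratum would contribute too little to the estimate.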
Methods of producing random samples
• Random number table
• Mathematical algorithms for pseudo-random number generators
• Physical randomization devices such as coins, playing cards or sophisticated devices such as ERNIE
History
Random sampling by using lots is an old idea, mentioned several times in the Bible. In 1786 Pierre-Simon Laplace estimated the population of France by using a sample together with a ratio estimator. He also computed probabilistic estimates of the error. These were not expressed as modern confidence intervals but as the sample size that would be needed to achieve a particular upper bound on the sampling error with probability 1000/1001. His estimates used Bayes' theorem with a uniform prior probability and assumed that his sample was random. Alexander Ivanovich Chuprov introduced sample surveys to Imperial Russia in the 1870s.[21]
In the US, the 1936 Literary Digest prediction of a Republican win in the presidential election went badly awry due to severe bias. More than two million people responded to the study, their names obtained through magazine subscription lists and telephone directories. It was not appreciated that these lists were heavily biased towards Republicans, and the resulting sample, though very large, was deeply flawed.[22][23]
See also
• Data collection
• Estimation theory
• Gy's sampling theory
• German tank problem
• Horvitz–Thompson estimator
• Official statistics
• Ratio estimator
• Replication (statistics)
• Random-sampling mechanism
• Resampling (statistics)
• Pseudo-random number sampling
• Sample size determination
• Sampling (case studies)
• Sampling bias
• Sampling distribution
• Sampling error
• Sortition
• Survey sampling
• Design effect
Notes
The textbook by Groves et alia provides an overview of survey methodology, including recent literature on questionnaire development (informed by cognitive psychology):
• Robert Groves, et alia. Survey methodology (2010 2nd ed. [2004]) ISBN 0-471-48348-6.
The other books focus on the statistical theory of survey sampling and require some knowledge of basic statistics, as discussed in the following textbooks:
• David S. Moore and George P. McCabe (February 2005). "Introduction to the practice of statistics" (5th edition). W.H. Freeman & Company. ISBN 0-7167-6282-X.
• Freedman, David; Pisani, Robert; Purves, Roger (2007). Statistics (4th ed.). New York: Norton. ISBN 978-0-393-92972-0.
The elementary book by Scheaffer et alia uses quadratic equations from high-school algebra:
• Scheaffer, Richard L., William Mendenhall and R. Lyman Ott. Elementary survey sampling, Fifth Edition. Belmont: Duxbury Press, 1996.
More mathematical statistics is required for Lohr, for Särndal et alia, and for Cochran (classic):
• Cochran, William G. (1977). Sampling techniques (Third ed.). Wiley. ISBN 978-0-471-16240-7.
• Lohr, Sharon L. (1999). Sampling: Design and analysis. Duxbury. ISBN 978-0-534-35361-2.
• Särndal, Carl-Erik; Swensson, Bengt; Wretman, Jan (1992). Model assisted survey sampling. Springer-Verlag. ISBN 978-0-387-40620-6.
The historically important books by Deming and Kish remain valuable for insights for social scientists (particularly about the U.S. census and the Institute for Social Research at the University of Michigan):
• Deming, W. Edwards (1966). Some Theory of Sampling. Dover Publications. ISBN 978-0-486-64684-8. OCLC 166526.
• Kish, Leslie (1995) Survey Sampling, Wiley, ISBN 0-471-10949-5
References
1. Lance, P.; Hattori, A. (2016). Sampling and Evaluation. Web: MEASURE Evaluation. pp. 6–8, 62–64.
2. Salant, Priscilla, I. Dillman, and A. Don. How to conduct your own survey. No. 300.723 S3. 1994.
3. Robert M. Groves; et al. (2009). Survey methodology. ISBN 978-0470465462.
4. Lohr, Sharon L. Sampling: Design and analysis.
5. Särndal, Carl-Erik; Swensson, Bengt; Wretman, Jan. Model Assisted Survey Sampling.
6. Scheaffer, Richard L.; William Mendenhall; R. Lyman Ott. (2006). Elementary survey sampling.
7. Shahrokh Esfahani, Mohammad; Dougherty, Edward (2014). "Effect of separate sampling on classification accuracy". Bioinformatics. 30 (2): 242–250. doi:10.1093/bioinformatics/btt662. PMID 24257187.
8. Scott, A.J.; Wild, C.J. (1986). "Fitting logistic models under case-control or choice-based sampling". Journal of the Royal Statistical Society, Series B. 48 (2): 170–182. JSTOR 2345712.
9. Lohr, Sharon L. Sampling: Design and Analysis; Särndal, Carl-Erik; Swensson, Bengt; Wretman, Jan. Model Assisted Survey Sampling.
10. Ariyaratne, Buddhika (30 July 2017). "Voluntary Sampling Method combined with Social Media advertising". heal-info.blogspot.com. Health Informatics. Retrieved 18 December 2018.
11. Lazarsfeld, P., & Fiske, M. (1938). The "panel" as a new tool for measuring opinion. The Public Opinion Quarterly, 2(4), 596–612.
12. Groves, et alia. Survey Methodology
13. "Examples of sampling methods" (PDF).
14. Cohen, 1988
15. Deepan Palguna; Vikas Joshi; Venkatesan Chakaravarthy; Ravi Kothari; L. V. Subramaniam (2015). Analysis of Sampling Algorithms for Twitter. International Joint Conference on Artificial Intelligence.
16. Berinsky, A. J. (2008). "Survey non-response". In: W. Donsbach & M. W. Traugott (Eds.), The Sage handbook of public opinion research (pp. 309–321). Thousand Oaks, CA: Sage Publications.
17. Dillman, D. A., Eltinge, J. L., Groves, R. M., & Little, R. J. A. (2002). "Survey nonresponse in design, data collection, and analysis". In: R. M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), Survey nonresponse (pp. 3–26). New York: John Wiley & Sons.
18. Dillman, D.A., Smyth, J.D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method. San Francisco: Jossey-Bass.
19. Vehovar, V., Batagelj, Z., Manfreda, K.L., & Zaletel, M. (2002). "Nonresponse in web surveys". In: R. M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), Survey nonresponse (pp. 229–242). New York: John Wiley & Sons.
20. Porter; Whitcomb; Weitzer (2004). "Multiple surveys of students and survey fatigue". In Porter, Stephen R (ed.). Overcoming survey research problems. New directions for institutional research. San Francisco: Jossey-Bass. pp. 63–74. ISBN 9780787974770. Retrieved 15 July 2019.
21. Seneta, E. (1985). "A Sketch of the History of Survey Sampling in Russia". Journal of the Royal Statistical Society. Series A (General). 148 (2): 118–125. doi:10.2307/2981944. JSTOR 2981944.
22. David S. Moore and George P. McCabe. "Introduction to the Practice of Statistics".
23. Freedman, David; Pisani, Robert; Purves, Roger. Statistics.
Further reading
• Singh, G N, Jaiswal, A. K., and Pandey A. K. (2021), Improved Imputation Methods for Missing Data in Two-Occasion Successive Sampling, Communications in Statistics: Theory and Methods. DOI:10.1080/03610926.2021.1944211
• Chambers, R L, and Skinner, C J (editors) (2003), Analysis of Survey Data, Wiley, ISBN 0-471-89987-9
• Deming, W. Edwards (1975) On probability as a basis for action, The American Statistician, 29(4), pp. 146–152.
• Gy, P (2012) Sampling of Heterogeneous and Dynamic Material Systems: Theories of Heterogeneity, Sampling and Homogenizing, Elsevier Science, ISBN 978-0444556066
• Korn, E.L., and Graubard, B.I. (1999) Analysis of Health Surveys, Wiley, ISBN 0-471-13773-1
• Lucas, Samuel R. (2012). "Beyond the Existence Proof: Ontological Conditions, Epistemological Implications, and In-Depth Interview Research", Quality & Quantity, doi:10.1007/s11135-012-9775-3.
• Stuart, Alan (1962) Basic Ideas of Scientific Sampling, Hafner Publishing Company, New York
• Smith, T. M. F. (1984). "Present Position and Potential Developments: Some Personal Views: Sample surveys". Journal of the Royal Statistical Society, Series A. 147 (The 150th Anniversary of the Royal Statistical Society, number 2): 208–221. doi:10.2307/2981677. JSTOR 2981677.
• Smith, T. M. F. (1993). "Populations and Selection: Limitations of Statistics (Presidential address)". Journal of the Royal Statistical Society, Series A. 156 (2): 144–166. doi:10.2307/2982726. JSTOR 2982726. (Portrait of T. M. F. Smith on page 144)
• Smith, T. M. F. (2001). "Centenary: Sample surveys". Biometrika. 88 (1): 167–243. doi:10.1093/biomet/88.1.167.
• Smith, T. M. F. (2001). "Biometrika centenary: Sample surveys". In D. M. Titterington and D. R. Cox (ed.). Biometrika: One Hundred Years. Oxford University Press. pp. 165–194. ISBN 978-0-19-850993-6.
• Whittle, P. (May 1954). "Optimum preventative sampling". Journal of the Operations Research Society of America. 2 (2): 197–203. doi:10.1287/opre.2.2.197. JSTOR 166605.
Standards
ISO
• ISO 2859 series
• ISO 3951 series
ASTM
• ASTM E105 Standard Practice for Probability Sampling Of Materials
• ASTM E122 Standard Practice for Calculating Sample Size to Estimate, With a Specified Tolerable Error, the Average for a Characteristic of a Lot or Process
• ASTM E141 Standard Practice for Acceptance of Evidence Based on the Results of Probability Sampling
• ASTM E1402 Standard Terminology Relating to Sampling
• ASTM E1994 Standard Practice for Use of Process Oriented AOQL and LTPD Sampling Plans
• ASTM E2234 Standard Practice for Sampling a Stream of Product by Attributes Indexed by AQL
ANSI, ASQ
• ANSI/ASQ Z1.4
U.S. federal and military standards
• MIL-STD-105
• MIL-STD-1916
External links
• Media related to Sampling (statistics) at Wikimedia Commons
Instrumental variables estimation
In statistics, econometrics, epidemiology and related disciplines, the method of instrumental variables (IV) is used to estimate causal relationships when controlled experiments are not feasible or when a treatment is not successfully delivered to every unit in a randomized experiment.[1] Intuitively, IVs are used when an explanatory variable of interest is correlated with the error term, in which case ordinary least squares and ANOVA give biased results. A valid instrument induces changes in the explanatory variable but has no independent effect on the dependent variable, allowing a researcher to uncover the causal effect of the explanatory variable on the dependent variable.
Instrumental variable methods allow for consistent estimation when the explanatory variables (covariates) are correlated with the error terms in a regression model. Such correlation may occur when:
1. changes in the dependent variable change the value of at least one of the covariates ("reverse" causation),
2. there are omitted variables that affect both the dependent and explanatory variables, or
3. the covariates are subject to non-random measurement error.
Explanatory variables that suffer from one or more of these issues in the context of a regression are sometimes referred to as endogenous. In this situation, ordinary least squares produces biased and inconsistent estimates.[2] However, if an instrument is available, consistent estimates may still be obtained. An instrument is a variable that does not itself belong in the explanatory equation but is correlated with the endogenous explanatory variables, conditionally on the value of other covariates.
In linear models, there are two main requirements for using IVs:
• The instrument must be correlated with the endogenous explanatory variables, conditionally on the other covariates. If this correlation is strong, then the instrument is said to have a strong first stage. A weak correlation may provide misleading inferences about parameter estimates and standard errors.[3][4]
• The instrument cannot be correlated with the error term in the explanatory equation, conditionally on the other covariates. In other words, the instrument cannot suffer from the same problem as the original predicting variable. If this condition is met, then the instrument is said to satisfy the exclusion restriction.
Example
Informally, in attempting to estimate the causal effect of some variable X ("covariate" or "explanatory variable") on another Y ("dependent variable"), an instrument is a third variable Z which affects Y only through its effect on X.
For example, suppose a researcher wishes to estimate the causal effect of smoking (X) on general health (Y).[5] Correlation between smoking and health does not imply that smoking causes poor health because other variables, such as depression, may affect both health and smoking, or because health may affect smoking. It is not possible to conduct controlled experiments on smoking status in the general population. The researcher may attempt to estimate the causal effect of smoking on health from observational data by using the tax rate for tobacco products (Z) as an instrument for smoking. The tax rate for tobacco products is a reasonable choice for an instrument because the researcher assumes that it can only be correlated with health through its effect on smoking. If the researcher then finds tobacco taxes and state of health to be correlated, this may be viewed as evidence that smoking causes changes in health.
History
The first use of an instrumental variable occurred in a 1928 book by Philip G. Wright, best known for his detailed description of the production, transport and sale of vegetable and animal oils in the early 1900s in the United States.[6][7] In 1945, Olav Reiersøl applied the same approach in the context of errors-in-variables models in his dissertation, giving the method its name.[8]
Wright attempted to determine the supply and demand for butter using panel data on prices and quantities sold in the United States. The idea was that a regression analysis could produce a demand or supply curve because both are formed by the relationship between prices and quantities demanded or supplied. The problem was that the observational data did not form a demand or supply curve as such, but rather a cloud of point observations that took different shapes under varying market conditions. Making deductions from the data therefore seemed elusive.
The problem was that price affected both supply and demand so that a function describing only one of the two could not be constructed directly from the observational data. Wright correctly concluded that he needed a variable that correlated with either demand or supply but not both – that is, an instrumental variable.
After much deliberation, Wright decided to use regional rainfall as his instrumental variable: he concluded that rainfall affected grass production and hence milk production and ultimately butter supply, but not butter demand. In this way he was able to construct a regression equation with only the instrumental variable of price and supply.[9]
Formal definitions of instrumental variables, using counterfactuals and graphical criteria, were given by Judea Pearl in 2000.[10] Angrist and Krueger (2001) present a survey of the history and uses of instrumental variable techniques.[11] Notions of causality in econometrics, and their relationship with instrumental variables and other methods, are discussed by Heckman (2008).[12]
Theory
While the ideas behind IV extend to a broad class of models, a very common context for IV is in linear regression. Traditionally,[13] an instrumental variable is defined as a variable Z that is correlated with the independent variable X and uncorrelated with the "error term" U in the linear equation
$Y=X\beta +U$
$Y$ is a vector. $X$ is a matrix, usually with a column of ones and perhaps with additional columns for other covariates. Consider how an instrument allows $\beta $ to be recovered. Recall that OLS solves for ${\widehat {\beta }}$ such that $\operatorname {cov} (X,{\widehat {U}})=0$ (when we minimize the sum of squared errors, $\min _{\beta }(Y-X\beta )'(Y-X\beta )$, the first-order condition is exactly $X'(Y-X{\widehat {\beta }})=X'{\widehat {U}}=0$.) If the true model is believed to have $\operatorname {cov} (X,U)\neq 0$ due to any of the reasons listed above—for example, if there is an omitted variable which affects both $X$ and $Y$ separately—then this OLS procedure will not yield the causal impact of $X$ on $Y$. OLS will simply pick the parameter that makes the resulting errors appear uncorrelated with $X$.
Consider for simplicity the single-variable case. Suppose we are considering a regression with one variable and a constant (perhaps no other covariates are necessary, or perhaps we have partialed out any other relevant covariates):
$y=\alpha +\beta x+u$
In this case, the coefficient on the regressor of interest is given by ${\widehat {\beta }}={\frac {\operatorname {cov} (x,y)}{\operatorname {var} (x)}}$. Substituting for $y$ gives
${\begin{aligned}{\widehat {\beta }}&={\frac {\operatorname {cov} (x,y)}{\operatorname {var} (x)}}={\frac {\operatorname {cov} (x,\alpha +\beta x+u)}{\operatorname {var} (x)}}\\[6pt]&={\frac {\operatorname {cov} (x,\alpha +\beta x)}{\operatorname {var} (x)}}+{\frac {\operatorname {cov} (x,u)}{\operatorname {var} (x)}}=\beta ^{*}+{\frac {\operatorname {cov} (x,u)}{\operatorname {var} (x)}},\end{aligned}}$
where $\beta ^{*}$ is what the estimated coefficient vector would be if x were not correlated with u. In this case, it can be shown that $\beta ^{*}$ is an unbiased estimator of $\beta .$ If $\operatorname {cov} (x,u)\neq 0$ in the underlying model that we believe, then OLS gives a coefficient which does not reflect the underlying causal effect of interest. IV helps to fix this problem by identifying the parameters ${\beta }$ not based on whether $x$ is uncorrelated with $u$, but based on whether another variable $z$ is uncorrelated with $u$. If theory suggests that $z$ is related to $x$ (the first stage) but uncorrelated with $u$ (the exclusion restriction), then IV may identify the causal parameter of interest where OLS fails. Because there are multiple specific ways of using and deriving IV estimators even in just the linear case (IV, 2SLS, GMM), we save further discussion for the Estimation section below.
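To make the contrast concrete, the following Python simulation (NumPy; the coefficients and the data-generating process are invented for illustration) shows OLS absorbing the confounding while the simple IV ratio of covariances recovers the causal coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: u is an unobserved confounder that
# enters both x and y, while z shifts x but is independent of u.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)  # cov(x, u) != 0: x is endogenous
y = 1.0 + 2.0 * x + u                       # true causal effect: beta = 2

beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)  # biased by cov(x, u) / var(x)
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]  # ratio of covariances with z

print(f"OLS estimate: {beta_ols:.3f}")  # about 2.26
print(f"IV estimate:  {beta_iv:.3f}")   # about 2.00
```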
Graphical definition
IV techniques have been developed among a much broader class of non-linear models. General definitions of instrumental variables, using counterfactual and graphical formalism, were given by Pearl (2000; p. 248).[10] The graphical definition requires that Z satisfy the following conditions:
$(Z\perp \!\!\!\perp Y)_{G_{\overline {X}}}\qquad (Z\not \!\!{\perp \!\!\!\perp }X)_{G}$
where $\perp \!\!\!\perp $ stands for d-separation and $G_{\overline {X}}$ stands for the graph in which all arrows entering X are cut off.
The counterfactual definition requires that Z satisfies
$(Z\perp \!\!\!\perp Y_{x})\qquad (Z\not \!\!{\perp \!\!\!\perp }X)$
where Yx stands for the value that Y would attain had X been x and $\perp \!\!\!\perp $ stands for independence.
If there are additional covariates W then the above definitions are modified so that Z qualifies as an instrument if the given criteria hold conditional on W.
The essence of Pearl's definition is:
1. The equations of interest are "structural," not "regression".
2. The error term U stands for all exogenous factors that affect Y when X is held constant.
3. The instrument Z should be independent of U.
4. The instrument Z should not affect Y when X is held constant (exclusion restriction).
5. The instrument Z should not be independent of X.
These conditions do not rely on the specific functional form of the equations and are therefore applicable to nonlinear equations, where U can be non-additive (see Non-parametric analysis). They are also applicable to a system of multiple equations, in which X (and other factors) affect Y through several intermediate variables. An instrumental variable need not be a cause of X; a proxy of such a cause may also be used, if it satisfies conditions 1–5.[10] The exclusion restriction (condition 4) is redundant; it follows from conditions 2 and 3.
Selecting suitable instruments
Since U is unobserved, the requirement that Z be independent of U cannot be inferred from data and must instead be determined from the model structure, i.e., the data-generating process. Causal graphs are a representation of this structure, and the graphical definition given above can be used to quickly determine whether a variable Z qualifies as an instrumental variable given a set of covariates W. To see how, consider the following example.
• Figure 1: Proximity qualifies as an instrumental variable given Library Hours
• Figure 2: $G_{\overline {X}}$, which is used to determine whether Proximity is an instrumental variable.
• Figure 3: Proximity does not qualify as an instrumental variable given Library Hours
• Figure 4: Proximity qualifies as an instrumental variable, as long as we do not include Library Hours as a covariate.
Suppose that we wish to estimate the effect of a university tutoring program on grade point average (GPA). The relationship between attending the tutoring program and GPA may be confounded by a number of factors. Students who attend the tutoring program may care more about their grades or may be struggling with their work. This confounding is depicted in the Figures 1–3 on the right through the bidirected arc between Tutoring Program and GPA. If students are assigned to dormitories at random, the proximity of the student's dorm to the tutoring program is a natural candidate for being an instrumental variable.
However, what if the tutoring program is located in the college library? In that case, Proximity may also cause students to spend more time at the library, which in turn improves their GPA (see Figure 1). Using the causal graph depicted in the Figure 2, we see that Proximity does not qualify as an instrumental variable because it is connected to GPA through the path Proximity $\rightarrow $ Library Hours $\rightarrow $ GPA in $G_{\overline {X}}$. However, if we control for Library Hours by adding it as a covariate then Proximity becomes an instrumental variable, since Proximity is separated from GPA given Library Hours in $G_{\overline {X}}$.
Now, suppose that we notice that a student's "natural ability" affects his or her number of hours in the library as well as his or her GPA, as in Figure 3. Using the causal graph, we see that Library Hours is a collider and conditioning on it opens the path Proximity $\rightarrow $ Library Hours $\leftrightarrow $ GPA. As a result, Proximity cannot be used as an instrumental variable.
Finally, suppose that Library Hours does not actually affect GPA because students who do not study in the library simply study elsewhere, as in Figure 4. In this case, controlling for Library Hours still opens a spurious path from Proximity to GPA. However, if we do not control for Library Hours and remove it as a covariate, then Proximity can again be used as an instrumental variable.
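These graphical checks can be automated. The sketch below encodes the Figure 1 scenario with networkx and tests the two conditions with its d_separated helper (renamed is_d_separator in newer releases); the node names, and the explicit latent node U standing in for the bidirected arc, are our encoding:

```python
import networkx as nx

# Hypothetical encoding of the tutoring example (Figure 1). The bidirected
# arc between Tutoring (X) and GPA (Y) is represented by a latent node U.
G = nx.DiGraph([
    ("Proximity", "Tutoring"), ("Tutoring", "GPA"),
    ("Proximity", "LibraryHours"), ("LibraryHours", "GPA"),
    ("U", "Tutoring"), ("U", "GPA"),
])

# Build the graph with all arrows entering X = Tutoring cut off.
G_bar = G.copy()
G_bar.remove_edges_from(list(G.in_edges("Tutoring")))

# Condition (i): Z d-separated from Y in the cut graph, given the covariates W.
print(nx.d_separated(G_bar, {"Proximity"}, {"GPA"}, set()))             # False: fails alone
print(nx.d_separated(G_bar, {"Proximity"}, {"GPA"}, {"LibraryHours"}))  # True: qualifies given W
# Condition (ii): Z d-connected to X in the original graph, given W.
print(not nx.d_separated(G, {"Proximity"}, {"Tutoring"}, {"LibraryHours"}))  # True
```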
Estimation
We now revisit and expand upon the mechanics of IV in greater detail. Suppose the data are generated by a process of the form
$y_{i}=X_{i}\beta +e_{i},$
where
• i indexes observations,
• $y_{i}$ is the i-th value of the dependent variable,
• $X_{i}$ is a vector of the i-th values of the independent variable(s) and a constant,
• $e_{i}$ is the i-th value of an unobserved error term representing all causes of $y_{i}$ other than $X_{i}$, and
• $\beta $ is an unobserved parameter vector.
The parameter vector $\beta $ is the causal effect on $y_{i}$ of a one unit change in each element of $X_{i}$, holding all other causes of $y_{i}$ constant. The econometric goal is to estimate $\beta $. For simplicity's sake assume the draws of e are uncorrelated and that they are drawn from distributions with the same variance (that is, that the errors are serially uncorrelated and homoskedastic).
Suppose also that a regression model of nominally the same form is proposed. Given a random sample of T observations from this process, the ordinary least squares estimator is
${\widehat {\beta }}_{\mathrm {OLS} }=(X^{\mathrm {T} }X)^{-1}X^{\mathrm {T} }y=(X^{\mathrm {T} }X)^{-1}X^{\mathrm {T} }(X\beta +e)=\beta +(X^{\mathrm {T} }X)^{-1}X^{\mathrm {T} }e$
where X, y and e denote column vectors of length T. This equation is similar to the equation involving $\operatorname {cov} (X,y)$ in the introduction (this is the matrix version of that equation). When X and e are uncorrelated, under certain regularity conditions the second term has an expected value conditional on X of zero and converges to zero in the limit, so the estimator is unbiased and consistent. When X and the other unmeasured, causal variables collapsed into the e term are correlated, however, the OLS estimator is generally biased and inconsistent for β. In this case, it is valid to use the estimates to predict values of y given values of X, but the estimate does not recover the causal effect of X on y.
To recover the underlying parameter $\beta $, we introduce a set of variables Z that is highly correlated with each endogenous component of X but (in our underlying model) is not correlated with e. For simplicity, one might consider X to be a T × 2 matrix composed of a column of constants and one endogenous variable, and Z to be a T × 2 matrix consisting of a column of constants and one instrumental variable. However, this technique generalizes to X being a matrix of a constant and, say, 5 endogenous variables, with Z being a matrix composed of a constant and 5 instruments. In the discussion that follows, we will assume that X is a T × K matrix and leave this value K unspecified. An estimator in which X and Z are both T × K matrices is referred to as just-identified.
Suppose that the relationship between each endogenous component xi and the instruments is given by
$x_{i}=Z_{i}\gamma +v_{i},$
The most common IV specification uses the following estimator:
${\widehat {\beta }}_{\mathrm {IV} }=(Z^{\mathrm {T} }X)^{-1}Z^{\mathrm {T} }y$
This specification approaches the true parameter as the sample gets large, so long as $Z^{\mathrm {T} }e=0$ in the true model:
${\widehat {\beta }}_{\mathrm {IV} }=(Z^{\mathrm {T} }X)^{-1}Z^{\mathrm {T} }y=(Z^{\mathrm {T} }X)^{-1}Z^{\mathrm {T} }X\beta +(Z^{\mathrm {T} }X)^{-1}Z^{\mathrm {T} }e\rightarrow \beta $
As long as $Z^{\mathrm {T} }e=0$ in the underlying process which generates the data, the appropriate use of the IV estimator will identify this parameter. This works because IV solves for the unique parameter that satisfies $Z^{\mathrm {T} }e=0$, and therefore homes in on the true underlying parameter as the sample size grows.
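A minimal NumPy sketch of this estimator on simulated data (the data-generating process is invented; in practice X and Z come from the application):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
z1 = rng.normal(size=n)
u = rng.normal(size=n)
x1 = 0.8 * z1 + 0.5 * u + rng.normal(size=n)  # endogenous regressor
y = 1.0 + 2.0 * x1 + u

X = np.column_stack([np.ones(n), x1])  # T x K: constant + endogenous variable
Z = np.column_stack([np.ones(n), z1])  # T x K: constant + instrument (just-identified)

beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)  # (Z'X)^{-1} Z'y
print(beta_iv)  # approximately [1.0, 2.0]
```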
Now an extension: suppose that there are more instruments than there are covariates in the equation of interest, so that Z is a T × M matrix with M > K. This is often called the over-identified case. In this case, the generalized method of moments (GMM) can be used. The GMM IV estimator is
${\widehat {\beta }}_{\mathrm {GMM} }=(X^{\mathrm {T} }P_{Z}X)^{-1}X^{\mathrm {T} }P_{Z}y,$
where $P_{Z}$ refers to the projection matrix $P_{Z}=Z(Z^{\mathrm {T} }Z)^{-1}Z^{\mathrm {T} }$.
This expression collapses to the first when the number of instruments is equal to the number of covariates in the equation of interest. The over-identified IV is therefore a generalization of the just-identified IV.
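A sketch of the over-identified estimator, again on invented data; note that $P_{Z}X$ is computed as $Z(Z^{\mathrm {T} }Z)^{-1}Z^{\mathrm {T} }X$ so that the large T × T matrix $P_{Z}$ is never materialized:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
z1, z2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)
x = 0.6 * z1 + 0.4 * z2 + 0.5 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])       # K = 2 columns
Z = np.column_stack([np.ones(n), z1, z2])  # M = 3 columns: over-identified

# P_Z X without forming P_Z itself; X'P_Z X = (P_Z X)'X since P_Z is
# symmetric and idempotent.
PZX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta_gmm = np.linalg.solve(PZX.T @ X, PZX.T @ y)  # (X'P_Z X)^{-1} X'P_Z y
print(beta_gmm)  # approximately [1.0, 2.0]
```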
Proof that βGMM collapses to βIV in the just-identified case
Developing the $\beta _{\text{GMM}}$ expression:
${\widehat {\beta }}_{\mathrm {GMM} }=(X^{\mathrm {T} }Z(Z^{\mathrm {T} }Z)^{-1}Z^{\mathrm {T} }X)^{-1}X^{\mathrm {T} }Z(Z^{\mathrm {T} }Z)^{-1}Z^{\mathrm {T} }y$
In the just-identified case, we have as many instruments as covariates, so that the dimension of X is the same as that of Z. Hence, $X^{\mathrm {T} }Z,Z^{\mathrm {T} }Z$ and $Z^{\mathrm {T} }X$ are all square matrices of the same dimension. We can expand the inverse, using the fact that, for any invertible n-by-n matrices A and B, (AB)−1 = B−1A−1 (see Invertible matrix#Properties):
${\begin{aligned}{\widehat {\beta }}_{\mathrm {GMM} }&=(Z^{\mathrm {T} }X)^{-1}(Z^{\mathrm {T} }Z)(X^{\mathrm {T} }Z)^{-1}X^{\mathrm {T} }Z(Z^{\mathrm {T} }Z)^{-1}Z^{\mathrm {T} }y\\&=(Z^{\mathrm {T} }X)^{-1}(Z^{\mathrm {T} }Z)(Z^{\mathrm {T} }Z)^{-1}Z^{\mathrm {T} }y\\&=(Z^{\mathrm {T} }X)^{-1}Z^{\mathrm {T} }y\\&={\widehat {\beta }}_{\mathrm {IV} }\end{aligned}}$
Reference: see Davidson and Mackinnon (1993)[14]: 218
There is an equivalent under-identified estimator for the case where M < K. Since the parameters are the solutions to a set of linear equations, an under-identified model using the set of equations $Z'v=0$ does not have a unique solution.
Interpretation as two-stage least squares
One computational method which can be used to calculate IV estimates is two-stage least squares (2SLS or TSLS). In the first stage, each explanatory variable that is an endogenous covariate in the equation of interest is regressed on all of the exogenous variables in the model, including both exogenous covariates in the equation of interest and the excluded instruments. The predicted values from these regressions are obtained:
Stage 1: Regress each column of X on Z, ($X=Z\delta +{\text{errors}}$):
${\widehat {\delta }}=(Z^{\mathrm {T} }Z)^{-1}Z^{\mathrm {T} }X,\,$
and save the predicted values:
${\widehat {X}}=Z{\widehat {\delta }}={\color {ProcessBlue}Z(Z^{\mathrm {T} }Z)^{-1}Z^{\mathrm {T} }}X={\color {ProcessBlue}P_{Z}}X.\,$
In the second stage, the regression of interest is estimated as usual, except that in this stage each endogenous covariate is replaced with the predicted values from the first stage:
Stage 2: Regress Y on the predicted values from the first stage:
$Y={\widehat {X}}\beta +\mathrm {noise} ,\,$
which gives
$\beta _{\text{2SLS}}=\left(X^{\mathrm {T} }{\color {ProcessBlue}P_{Z}}X\right)^{-1}X^{\mathrm {T} }{\color {ProcessBlue}P_{Z}}Y.$
This method is only valid in linear models. For categorical endogenous covariates, one might be tempted to use a different first stage than ordinary least squares, such as a probit model for the first stage followed by OLS for the second. This is commonly known in the econometric literature as the forbidden regression,[15] because second-stage IV parameter estimates are consistent only in special cases.[16]
Proof: computation of the 2SLS estimator
The usual OLS estimator is $({\widehat {X}}^{\mathrm {T} }{\widehat {X}})^{-1}{\widehat {X}}^{\mathrm {T} }Y$. Replacing ${\widehat {X}}=P_{Z}X$ and noting that $P_{Z}$ is a symmetric and idempotent matrix, so that $P_{Z}^{\mathrm {T} }P_{Z}=P_{Z}P_{Z}=P_{Z}$:
$\beta _{\text{2SLS}}=({\widehat {X}}^{\mathrm {T} }{\widehat {X}})^{-1}{\widehat {X}}^{\mathrm {T} }Y=\left(X^{\mathrm {T} }P_{Z}^{\mathrm {T} }P_{Z}X\right)^{-1}X^{\mathrm {T} }P_{Z}^{\mathrm {T} }Y=\left(X^{\mathrm {T} }P_{Z}X\right)^{-1}X^{\mathrm {T} }P_{Z}Y.$
The resulting estimator of $\beta $ is numerically identical to the expression displayed above. A small correction must be made to the sum-of-squared residuals in the second-stage fitted model in order that the covariance matrix of $\beta $ is calculated correctly.
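The two-stage recipe can be checked numerically. The following sketch (NumPy, invented data) runs both OLS stages explicitly and recovers the coefficients; per the caveat above, its second-stage standard errors would still need the residual correction:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
z = rng.normal(size=(n, 2))
u = rng.normal(size=n)
x = z @ np.array([0.6, 0.4]) + 0.5 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# Stage 1: regress each column of X on Z and keep the fitted values.
delta, *_ = np.linalg.lstsq(Z, X, rcond=None)
X_hat = Z @ delta

# Stage 2: ordinary least squares of y on the stage-1 fitted values.
beta_2sls, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
print(beta_2sls)  # approximately [1.0, 2.0]
```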
Non-parametric analysis
When the form of the structural equations is unknown, an instrumental variable $Z$ can still be defined through the equations:
$x=g(z,u)\,$
$y=f(x,u)\,$
where $f$ and $g$ are two arbitrary functions and $Z$ is independent of $U$. Unlike linear models, however, measurements of $Z,X$ and $Y$ do not allow for the identification of the average causal effect of $X$ on $Y$, denoted ACE
${\text{ACE}}=\Pr(y\mid {\text{do}}(x))=\operatorname {E} _{u}[f(x,u)].$
Balke and Pearl (1997) derived tight bounds on the ACE and showed that these can provide valuable information on the sign and size of the ACE.[17]
In linear analysis, there is no test to falsify the assumption that $Z$ is instrumental relative to the pair $(X,Y)$. This is not the case when $X$ is discrete. Pearl (2000) has shown that, for all $f$ and $g$, the following constraint, called the "Instrumental Inequality", must hold whenever $Z$ satisfies the two equations above:[10]
$\max _{x}\sum _{y}[\max _{z}\Pr(y,x\mid z)]\leq 1.$
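The inequality can be checked empirically for discrete data. The sketch below simulates a hypothetical binary model consistent with the two equations (the conditional probabilities are invented) and evaluates the left-hand side:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000

# Hypothetical discrete IV model: z is the instrument, u an unobserved confounder.
z = rng.integers(0, 2, size=n)
u = rng.integers(0, 2, size=n)
x = (rng.random(n) < 0.2 + 0.5 * z + 0.2 * u).astype(int)  # x = g(z, u)
y = (rng.random(n) < 0.3 + 0.4 * x + 0.2 * u).astype(int)  # y = f(x, u)

def p(yv, xv, zv):
    """Empirical estimate of P(y = yv, x = xv | z = zv)."""
    mask = z == zv
    return np.mean((y[mask] == yv) & (x[mask] == xv))

stat = max(
    sum(max(p(yv, xv, zv) for zv in (0, 1)) for yv in (0, 1))
    for xv in (0, 1)
)
print(f"max_x sum_y max_z P(y, x | z) = {stat:.3f}")  # about 0.80, <= 1
```

A value above 1 (beyond sampling noise) would falsify the claim that z is a valid instrument for this pair.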
Interpretation under treatment effect heterogeneity
The exposition above assumes that the causal effect of interest does not vary across observations, that is, that $\beta $ is a constant. Generally, different subjects will respond in different ways to changes in the "treatment" x. When this possibility is recognized, the average effect in the population of a change in x on y may differ from the effect in a given subpopulation. For example, the average effect of a job training program may substantially differ across the group of people who actually receive the training and the group which chooses not to receive training. For these reasons, IV methods invoke implicit assumptions on behavioral response, or more generally assumptions over the correlation between the response to treatment and propensity to receive treatment.[18]
The standard IV estimator can recover local average treatment effects (LATE) rather than average treatment effects (ATE).[1] Imbens and Angrist (1994) demonstrate that the linear IV estimate can be interpreted under weak conditions as a weighted average of local average treatment effects, where the weights depend on the elasticity of the endogenous regressor to changes in the instrumental variables. Roughly, that means that the effect of a variable is only revealed for the subpopulations affected by the observed changes in the instruments, and that subpopulations which respond most to changes in the instruments will have the largest effects on the magnitude of the IV estimate.
For example, if a researcher uses presence of a land-grant college as an instrument for college education in an earnings regression, she identifies the effect of college on earnings in the subpopulation which would obtain a college degree if a college is present but which would not obtain a degree if a college is not present. This empirical approach does not, without further assumptions, tell the researcher anything about the effect of college among people who would either always or never get a college degree regardless of whether a local college exists.
Weak instruments problem
As Bound, Jaeger, and Baker (1995) note, a problem is caused by the selection of "weak" instruments, instruments that are poor predictors of the endogenous question predictor in the first-stage equation.[19] In this case, the prediction of the question predictor by the instrument will be poor and the predicted values will have very little variation. Consequently, they are unlikely to have much success in predicting the ultimate outcome when they are used to replace the question predictor in the second-stage equation.
In the context of the smoking and health example discussed above, tobacco taxes are weak instruments for smoking if smoking status is largely unresponsive to changes in taxes. If higher taxes do not induce people to quit smoking (or not start smoking), then variation in tax rates tells us nothing about the effect of smoking on health. If taxes affect health through channels other than through their effect on smoking, then the instruments are invalid and the instrumental variables approach may yield misleading results. For example, places and times with relatively health-conscious populations may both implement high tobacco taxes and exhibit better health even holding smoking rates constant, so we would observe a correlation between health and tobacco taxes even if it were the case that smoking has no effect on health. In this case, we would be mistaken to infer a causal effect of smoking on health from the observed correlation between tobacco taxes and health.
Testing for weak instruments
The strength of the instruments can be directly assessed because both the endogenous covariates and the instruments are observable.[20] A common rule of thumb for models with one endogenous regressor is: the F-statistic against the null that the excluded instruments are irrelevant in the first-stage regression should be larger than 10.
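The following sketch (NumPy, invented data with two excluded instruments) computes that first-stage F-statistic directly as the usual restricted-versus-unrestricted comparison:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
z = rng.normal(size=(n, 2))                        # two excluded instruments
x = z @ np.array([0.4, 0.3]) + rng.normal(size=n)  # first stage is observable

Z = np.column_stack([np.ones(n), z])

# Unrestricted first stage: x on constant + instruments.
resid_u = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
rss_u = resid_u @ resid_u

# Restricted first stage: constant only (excluded instruments dropped).
rss_r = np.sum((x - x.mean()) ** 2)

q = 2  # number of excluded instruments under the null
f_stat = ((rss_r - rss_u) / q) / (rss_u / (n - Z.shape[1]))
print(f"First-stage F = {f_stat:.1f}  (rule of thumb: want F > 10)")
```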
Statistical inference and hypothesis testing
When the covariates are exogenous, the small-sample properties of the OLS estimator can be derived in a straightforward manner by calculating moments of the estimator conditional on X. When some of the covariates are endogenous so that instrumental variables estimation is implemented, simple expressions for the moments of the estimator cannot be so obtained. Generally, instrumental variables estimators only have desirable asymptotic, not finite sample, properties, and inference is based on asymptotic approximations to the sampling distribution of the estimator. Even when the instruments are uncorrelated with the error in the equation of interest and when the instruments are not weak, the finite sample properties of the instrumental variables estimator may be poor. For example, exactly identified models produce finite sample estimators with no moments, so the estimator can be said to be neither biased nor unbiased, the nominal size of test statistics may be substantially distorted, and the estimates may commonly be far away from the true value of the parameter.[21]
Testing the exclusion restriction
The assumption that the instruments are not correlated with the error term in the equation of interest is not testable in exactly identified models. If the model is overidentified, there is information available which may be used to test this assumption. The most common test of these overidentifying restrictions, called the Sargan–Hansen test, is based on the observation that the residuals should be uncorrelated with the set of exogenous variables if the instruments are truly exogenous.[22] The Sargan–Hansen test statistic can be calculated as $TR^{2}$ (the number of observations multiplied by the coefficient of determination) from the OLS regression of the residuals onto the set of exogenous variables. This statistic will be asymptotically chi-squared with m − k degrees of freedom under the null that the error term is uncorrelated with the instruments.
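For illustration, a sketch of the TR² computation on simulated data with one overidentifying restriction (NumPy/SciPy; the data-generating process is invented, and with valid instruments the statistic should behave like a χ²(1) draw):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 2_000
z = rng.normal(size=(n, 2))  # two instruments, one endogenous regressor
u = rng.normal(size=n)
x = z @ np.array([0.6, 0.4]) + 0.5 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# 2SLS estimate, then residuals from the equation of interest.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta = np.linalg.solve(X_hat.T @ X, X_hat.T @ y)
resid = y - X @ beta

# Regress the residuals on all exogenous variables and form T * R^2.
fitted = Z @ np.linalg.lstsq(Z, resid, rcond=None)[0]
r2 = 1 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)
sargan = n * r2
dof = Z.shape[1] - X.shape[1]  # m - k = 1 overidentifying restriction
print(f"Sargan statistic = {sargan:.2f}, p-value = {stats.chi2.sf(sargan, dof):.3f}")
```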
See also
• Control function (econometrics)
• Optimal instruments
References
1. Imbens, G.; Angrist, J. (1994). "Identification and estimation of local average treatment effects". Econometrica. 62 (2): 467–476. doi:10.2307/2951620. JSTOR 2951620. S2CID 153123153.
2. Bullock, J. G.; Green, D. P.; Ha, S. E. (2010). "Yes, But What's the Mechanism? (Don't Expect an Easy Answer)". Journal of Personality and Social Psychology. 98 (4): 550–558. CiteSeerX 10.1.1.169.5465. doi:10.1037/a0018933. PMID 20307128. S2CID 7913867.
3. https://www.stata.com/meeting/5nasug/wiv.pdf
4. Nichols, Austin (2006-07-23). "Weak Instruments: An Overview and New Techniques".
5. Leigh, J. P.; Schembri, M. (2004). "Instrumental Variables Technique: Cigarette Price Provided Better Estimate of Effects of Smoking on SF-12". Journal of Clinical Epidemiology. 57 (3): 284–293. doi:10.1016/j.jclinepi.2003.08.006. PMID 15066689.
6. Epstein, Roy J. (1989). "The Fall of OLS in Structural Estimation". Oxford Economic Papers. 41 (1): 94–107. doi:10.1093/oxfordjournals.oep.a041930. JSTOR 2663184.
7. Stock, James H.; Trebbi, Francesco (2003). "Retrospectives: Who Invented Instrumental Variable Regression?". Journal of Economic Perspectives. 17 (3): 177–194. doi:10.1257/089533003769204416.
8. Reiersøl, Olav (1945). Confluence Analysis by Means of Instrumental Sets of Variables. Arkiv for Mathematic, Astronomi, och Fysik. Vol. 32A. Uppsala: Almquist & Wiksells. OCLC 793451601.
9. Wooldridge, J.: Introductory Econometrics. South-Western, Scarborough, Canada, 2009.
10. Pearl, J. (2000). Causality: Models, Reasoning, and Inference. New York: Cambridge University Press. ISBN 978-0-521-89560-6.
11. Angrist, J.; Krueger, A. (2001). "Instrumental Variables and the Search for Identification: From Supply and Demand to Natural Experiments". Journal of Economic Perspectives. 15 (4): 69–85. doi:10.1257/jep.15.4.69.
12. Heckman, J. (2008). "Econometric Causality". International Statistical Review. 76 (1): 1–27. doi:10.1111/j.1751-5823.2007.00024.x.
13. Bowden, R.J.; Turkington, D.A. (1984). Instrumental Variables. Cambridge, England: Cambridge University Press.
14. Davidson, Russell; Mackinnon, James (1993). Estimation and Inference in Econometrics. New York: Oxford University Press. ISBN 978-0-19-506011-9.
15. Wooldridge, J. (2010). Econometric Analysis of Cross Section and Panel Data. Econometric Analysis of Cross Section and Panel Data. MIT Press.
16. Lergenmuller, Simon (2017). Two-stage predictor substitution for time-to-event data (Thesis). hdl:10852/57801.
17. Balke, A.; Pearl, J. (1997). "Bounds on treatment effects from studies with imperfect compliance". Journal of the American Statistical Association. 92 (439): 1172–1176. CiteSeerX 10.1.1.26.3952. doi:10.1080/01621459.1997.10474074. S2CID 18365761.
18. Heckman, J. (1997). "Instrumental variables: A study of implicit behavioral assumptions used in making program evaluations". Journal of Human Resources. 32 (3): 441–462. doi:10.2307/146178. JSTOR 146178.
19. Bound, J.; Jaeger, D. A.; Baker, R. M. (1995). "Problems with Instrumental Variables Estimation when the Correlation between the Instruments and the Endogenous Explanatory Variable is Weak". Journal of the American Statistical Association. 90 (430): 443. doi:10.1080/01621459.1995.10476536.
20. Stock, J.; Wright, J.; Yogo, M. (2002). "A Survey of Weak Instruments and Weak Identification in Generalized Method of Moments". Journal of the American Statistical Association. 20 (4): 518–529. CiteSeerX 10.1.1.319.2477. doi:10.1198/073500102288618658. S2CID 14793271.
21. Nelson, C. R.; Startz, R. (1990). "Some Further Results on the Exact Small Sample Properties of the Instrumental Variable Estimator". Econometrica. 58 (4): 967–976. doi:10.2307/2938359. JSTOR 2938359. S2CID 119872226.
22. Hayashi, Fumio (2000). "Testing Overidentifying Restrictions". Econometrics. Princeton: Princeton University Press. pp. 217–221. ISBN 978-0-691-01018-2.
Further reading
• Greene, William H. (2008). Econometric Analysis (Sixth ed.). Upper Saddle River: Pearson Prentice-Hall. pp. 314–353. ISBN 978-0-13-600383-0.
• Gujarati, Damodar N.; Porter, Dawn C. (2009). Basic Econometrics (Fifth ed.). New York: McGraw-Hill Irwin. pp. 711–736. ISBN 978-0-07-337577-9.
• Sargan, Denis (1988). Lectures on Advanced Econometric Theory. Oxford: Basil Blackwell. pp. 42–67. ISBN 978-0-631-14956-9.
• Wooldridge, Jeffrey M. (2013). Introductory Econometrics: A Modern Approach (Fifth international ed.). Mason, OH: South-Western. pp. 490–528. ISBN 978-1-111-53439-4.
Bibliography
• Wooldridge, J. (1997): Quasi-Likelihood Methods for Count Data, Handbook of Applied Econometrics, Volume 2, ed. M. H. Pesaran and P. Schmidt, Oxford, Blackwell, pp. 352–406
• Terza, J. V. (1998): "Estimating Count Models with Endogenous Switching: Sample Selection and Endogenous Treatment Effects." Journal of Econometrics (84), pp. 129–154
• Wooldridge, J. (2002): "Econometric Analysis of Cross Section and Panel Data", MIT Press, Cambridge, Massachusetts.
External links
• Chapter from Daniel McFadden's textbook
• Econometrics lecture (topic: instrumental variable) on YouTube by Mark Thoma.
• Econometrics lecture (topic: two-stages least square) on YouTube by Mark Thoma
Random binary tree
In computer science and probability theory, a random binary tree is a binary tree selected at random from some probability distribution on binary trees. Two different distributions are commonly used: binary trees formed by inserting nodes one at a time according to a random permutation, and binary trees chosen from a uniform discrete distribution in which all distinct trees are equally likely. It is also possible to form other distributions, for instance by repeated splitting. Adding and removing nodes directly in a random binary tree will in general disrupt its random structure, but the treap and related randomized binary search tree data structures use the principle of binary trees formed from a random permutation in order to maintain a balanced binary search tree dynamically as nodes are inserted and deleted.
For random trees that are not necessarily binary, see random tree.
Binary trees from random permutations
For any set of numbers (or, more generally, values from some total order), one may form a binary search tree in which each number is inserted in sequence as a leaf of the tree, without changing the structure of the previously inserted numbers. The position into which each number should be inserted is uniquely determined by a binary search in the tree formed by the previous numbers. For instance, if the three numbers (1,3,2) are inserted into a tree in that sequence, the number 1 will sit at the root of the tree, the number 3 will be placed as its right child, and the number 2 as the left child of the number 3. There are six different permutations of the numbers (1,2,3), but only five trees may be constructed from them. That is because the permutations (2,1,3) and (2,3,1) form the same tree.
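To make the construction concrete, here is a minimal Python sketch (an illustration, not code from any of the cited sources) that builds a binary search tree by leaf insertion and checks that the permutations (2,1,3) and (2,3,1) yield the same tree:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key as a leaf, leaving the previously inserted nodes in place."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def build(permutation):
    root = None
    for key in permutation:
        root = insert(root, key)
    return root

def shape(root):
    """Encode the tree as a nested tuple so two trees can be compared."""
    if root is None:
        return None
    return (root.key, shape(root.left), shape(root.right))

print(shape(build([2, 1, 3])) == shape(build([2, 3, 1])))  # True
```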
Expected depth of a node
For any fixed choice of a value x in a given set of n numbers, if one randomly permutes the numbers and forms a binary tree from them as described above, the expected value of the length of the path from the root of the tree to x is at most 2 log n + O(1), where "log" denotes the natural logarithm function and the O introduces big O notation. For, the expected number of ancestors of x is by linearity of expectation equal to the sum, over all other values y in the set, of the probability that y is an ancestor of x. And a value y is an ancestor of x exactly when y is the first element to be inserted from the elements in the interval [x,y]. Thus, the values that are adjacent to x in the sorted sequence of values have probability 1/2 of being an ancestor of x, the values one step away have probability 1/3, etc. Adding these probabilities for all positions in the sorted sequence gives twice a Harmonic number, leading to the bound above. A bound of this form holds also for the expected search length of a path to a fixed value x that is not part of the given set.[1]
This can also be understood via min–max records. A number in a random permutation is a min (max) record if it is the smallest (largest) value among the numbers from the first position up to its own position. Consider the permutation l = (2, 4, 3, 6, 5, 1). Scanning from the start, the min records of l are 2 and 1, and the max records are 2, 4, and 6. The element in position i is a record with probability 1/i, so by linearity of expectation the expected number of min (max) records is a Harmonic number. To search for 3 in l, note that the numbers (2, 1) are less than 3 and the numbers (4, 6, 5) are greater than 3. The search path in the binary search tree built from l visits exactly the max records of (2, 1) and the min records of (4, 6, 5)—which is why the expected path length is twice a Harmonic number.
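The ancestor characterization above lends itself to a direct Monte Carlo check. The following sketch (illustrative; the trial count and n are arbitrary choices of this example) estimates the expected depth of a fixed value and compares it with the 2 log n + O(1) bound:

```python
import math
import random

def average_depth_of(x, n, trials=300):
    """Estimate the expected number of ancestors of x in a random BST on 0..n-1."""
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        random.shuffle(perm)
        pos = {v: i for i, v in enumerate(perm)}
        # y is an ancestor of x exactly when y is inserted first among
        # all values in the closed interval [min(x, y), max(x, y)].
        for y in range(n):
            if y != x and pos[y] == min(pos[v] for v in range(min(x, y), max(x, y) + 1)):
                total += 1
    return total / trials

n = 100
print(average_depth_of(n // 2, n), "<=", 2 * math.log(n) + 1)  # stays below the bound
```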
The longest path
Although not as easy to analyze as the average path length, there has also been much research on determining the expectation (or high probability bounds) of the length of the longest path in a binary search tree generated from a random insertion order. It is now known that this length, for a tree with n nodes, is almost surely
${\frac {1}{\beta }}\log n\approx 4.311\log n,$
where β is the unique number in the range 0 < β < 1 satisfying the equation
$\displaystyle 2\beta e^{1-\beta }=1.$[2]
Expected number of leaves
In the random permutation model, each of the numbers from the set of numbers used to form the tree, except for the smallest and largest of the numbers, has probability 1/3 of being a leaf in the tree: it is a leaf exactly when it is inserted after both of its neighbors, which happens in two of the six equally likely relative orderings of these three values. By similar reasoning, the smallest and largest of the numbers have probability 1/2 of being a leaf. Therefore, the expected number of leaves is the sum of these probabilities, which for n ≥ 2 is exactly (n + 1)/3.
Strahler number
The Strahler number of a tree is a more sensitive measure of the distance from a leaf in which a node has Strahler number i whenever it has either a child with that number or two children with number i − 1. For n-node random binary search trees, simulations suggest that the expected Strahler number is $\log _{3}n+o(\log n)$. However, only the upper bound $\log _{3}n+O(1)$ has actually been proven.[3]
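The definition translates directly into a short recursion. In the following illustrative sketch (not from the cited sources), a tree is encoded as a nested (left, right) tuple, with None for an empty subtree and (None, None) for a leaf—an encoding assumed only for this example:

```python
def strahler(tree):
    if tree is None:
        return 0
    a = strahler(tree[0])
    b = strahler(tree[1])
    # A node gets i + 1 when both children have number i;
    # otherwise it inherits the larger child's number.
    return a + 1 if a == b else max(a, b)

leaf = (None, None)
print(strahler(leaf))                  # 1
print(strahler((leaf, leaf)))          # 2
print(strahler(((leaf, leaf), leaf)))  # 2
```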
Treaps and randomized binary search trees
In applications of binary search tree data structures, it is rare for the values in the tree to be inserted without deletion in a random order, limiting the direct applications of random binary trees. However, algorithm designers have devised data structures that allow insertions and deletions to be performed in a binary search tree, at each step maintaining as an invariant the property that the shape of the tree is a random variable with the same distribution as a random binary search tree.
If a given set of ordered numbers is assigned numeric priorities (distinct numbers unrelated to their values), these priorities may be used to construct a Cartesian tree for the numbers, a binary tree that has as its inorder traversal sequence the sorted sequence of the numbers and that is heap-ordered by priorities. Although more efficient construction algorithms are known, it is helpful to think of a Cartesian tree as being constructed by inserting the given numbers into a binary search tree in priority order. Thus, by choosing the priorities either to be a set of independent random real numbers in the unit interval, or by choosing them to be a random permutation of the numbers from 1 to n (where n is the number of nodes in the tree), and by maintaining the heap ordering property using tree rotations after any insertion or deletion of a node, it is possible to maintain a data structure that behaves like a random binary search tree. Such a data structure is known as a treap or a randomized binary search tree.[4]
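A compact sketch of treap insertion along these lines (an illustration under the description above, not a reference implementation):

```python
import random

class TreapNode:
    def __init__(self, key):
        self.key = key
        self.priority = random.random()  # independent random priority in the unit interval
        self.left = None
        self.right = None

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def treap_insert(root, key):
    """Standard BST insertion, then rotations to restore the max-heap order on priorities."""
    if root is None:
        return TreapNode(key)
    if key < root.key:
        root.left = treap_insert(root.left, key)
        if root.left.priority > root.priority:
            root = rotate_right(root)
    else:
        root.right = treap_insert(root.right, key)
        if root.right.priority > root.priority:
            root = rotate_left(root)
    return root

root = None
for k in [5, 2, 8, 1, 9, 3]:
    root = treap_insert(root, k)
```

With independent uniform priorities, the shape of the resulting tree has the same distribution as a binary search tree built from a random permutation of the keys.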
Uniformly random binary trees
The number of binary trees with n nodes is a Catalan number: for n = 1, 2, 3, ... these numbers of trees are
1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, … (sequence A000108 in the OEIS).
Thus, if one of these trees is selected uniformly at random, its probability is the reciprocal of a Catalan number. Trees in this model have expected depth proportional to the square root of n, rather than to the logarithm.[5] However, the expected Strahler number of a uniformly random n-node binary tree is $\log _{4}n+O(1)$,[6] which is lower than the expected Strahler number of random binary search trees.
Due to their large heights, this model of equiprobable random trees is not generally used for binary search trees, but it has been applied to problems of modeling the parse trees of algebraic expressions in compiler design[7] (where the above-mentioned bound on Strahler number translates into the number of registers needed to evaluate an expression[8]) and for modeling evolutionary trees.[9] In some cases the analysis of random binary trees under the random permutation model can be automatically transferred to the uniform model.[10]
Random split trees
Devroye & Kruszewski (1996) generate random binary trees with n nodes by generating a real-valued random variable x in the unit interval (0,1), assigning the first xn nodes (rounded down to an integer number of nodes) to the left subtree, the next node to the root, and the remaining nodes to the right subtree, and continuing recursively in each subtree. If x is chosen uniformly at random in the interval, the result is the same as the random binary search tree generated by a random permutation of the nodes, as any node is equally likely to be chosen as root; however, this formulation allows other distributions to be used instead. For instance, in the uniformly random binary tree model, once a root is fixed each of its two subtrees must also be uniformly random, so the uniformly random model may also be generated by a different choice of distribution for x. As Devroye and Kruszewski show, by choosing a beta distribution on x and by using an appropriate choice of shape to draw each of the branches, the mathematical trees generated by this process can be used to create realistic-looking botanical trees.
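A sketch of this splitting process (illustrative; the choice of distribution for x is the model parameter described above, uniform by default in this example):

```python
import random

def split_tree(n, draw=random.random):
    """Split n nodes: floor(x*n) go to the left subtree, one becomes the root, the rest go right."""
    if n == 0:
        return None
    x = draw()         # uniform by default; a beta variate also fits the model
    left = int(x * n)  # rounded down to an integer number of nodes
    return (split_tree(left, draw), split_tree(n - 1 - left, draw))

tree = split_tree(10)
```

With uniform x, each left-subtree size from 0 to n − 1 is equally likely, matching the random binary search tree model; other distributions for x generate the other models mentioned above.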
Notes
1. Hibbard (1962); Knuth (1973); Mahmoud (1992), p. 75.
2. Robson (1979); Pittel (1985); Devroye (1986); Mahmoud (1992), pp. 91–99; Reed (2003).
3. Kruszewski (1999).
4. Martinez & Roura (1998); Seidel & Aragon (1996).
5. Knuth (2005), p. 15.
6. Devroye & Kruszewski (1995).
7. Mahmoud (1992), p. 63.
8. Flajolet, Raoult & Vuillemin (1979).
9. Aldous (1996).
10. Mahmoud (1992), p. 70.
References
• Aldous, David (1996), "Probability distributions on cladograms", in Aldous, David; Pemantle, Robin (eds.), Random Discrete Structures, The IMA Volumes in Mathematics and its Applications, vol. 76, Springer-Verlag, pp. 1–18.
• Devroye, Luc (1986), "A note on the height of binary search trees", Journal of the ACM, 33 (3): 489–498, doi:10.1145/5925.5930, S2CID 8508624.
• Devroye, Luc; Kruszewski, Paul (1995), "A note on the Horton-Strahler number for random trees", Information Processing Letters, 56 (2): 95–99, doi:10.1016/0020-0190(95)00114-R.
• Devroye, Luc; Kruszewski, Paul (1996), "The botanical beauty of random binary trees", in Brandenburg, Franz J. (ed.), Graph Drawing: 3rd Int. Symp., GD'95, Passau, Germany, September 20-22, 1995, Lecture Notes in Computer Science, vol. 1027, Springer-Verlag, pp. 166–177, doi:10.1007/BFb0021801, ISBN 978-3-540-60723-6.
• Drmota, Michael (2009), Random Trees : An Interplay between Combinatorics and Probability, Springer-Verlag, ISBN 978-3-211-75355-2.
• Flajolet, P.; Raoult, J. C.; Vuillemin, J. (1979), "The number of registers required for evaluating arithmetic expressions", Theoretical Computer Science, 9 (1): 99–125, doi:10.1016/0304-3975(79)90009-4.
• Hibbard, Thomas N. (1962), "Some combinatorial properties of certain trees with applications to searching and sorting", Journal of the ACM, 9 (1): 13–28, doi:10.1145/321105.321108.
• Knuth, Donald E. (1973), "6.2.2 Binary Tree Searching", The Art of Computer Programming, vol. III, Addison-Wesley, pp. 422–451.
• Knuth, Donald E. (2005), "Draft of Section 7.2.1.6: Generating All Trees", The Art of Computer Programming, vol. IV.
• Kruszewski, Paul (1999), "A note on the Horton-Strahler number for random binary search trees", Information Processing Letters, 69 (1): 47–51, doi:10.1016/S0020-0190(98)00192-6.
• Mahmoud, Hosam M. (1992), Evolution of Random Search Trees, John Wiley & Sons.
• Martinez, Conrado; Roura, Salvador (1998), "Randomized binary search trees", Journal of the ACM, 45 (2): 288–323, CiteSeerX 10.1.1.17.243, doi:10.1145/274787.274812, S2CID 714621.
• Pittel, B. (1985), "Asymptotical growth of a class of random trees", Annals of Probability, 13 (2): 414–427, doi:10.1214/aop/1176993000.
• Reed, Bruce (2003), "The height of a random binary search tree", Journal of the ACM, 50 (3): 306–332, doi:10.1145/765568.765571, S2CID 6169610.
• Robson, J. M. (1979), "The height of binary search trees", Australian Computer Journal, 11: 151–153.
• Seidel, Raimund; Aragon, Cecilia R. (1996), "Randomized Search Trees", Algorithmica, 16 (4/5): 464–497, doi:10.1007/s004539900061.
External links
• Open Data Structures - Chapter 7 - Random Binary Search Trees, Pat Morin
Randomness
In common usage, randomness is the apparent or actual lack of definite pattern or predictability in information.[1][2] A random sequence of events, symbols or steps often has no order and does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if the probability distribution is known, the frequency of different outcomes over repeated events (or "trials") is predictable.[note 1] For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will tend to occur twice as often as 4. In this view, randomness is not haphazardness; it is a measure of uncertainty of an outcome. Randomness applies to concepts of chance, probability, and information entropy.
The fields of mathematics, probability, and statistics use formal definitions of randomness. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern, but follow an evolution described by probability distributions. These and other constructs are extremely useful in probability theory and the various applications of randomness.
Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input (such as from random number generators or pseudorandom number generators), are important techniques in science, particularly in the field of computational science.[3] By analogy, quasi-Monte Carlo methods use quasi-random number generators.
Random selection, when narrowly associated with a simple random sample, is a method of selecting items (often called units) from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. A random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue. In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen. That is, if the selection process is such that each member of a population, say research subjects, has the same probability of being chosen, then we can say the selection process is random.[2]
According to Ramsey theory, pure randomness is impossible, especially for large structures. Mathematician Theodore Motzkin suggested that "while disorder is more probable in general, complete disorder is impossible".[4] Misunderstanding this can lead to numerous conspiracy theories.[5] Cristian S. Calude stated that "given the impossibility of true randomness, the effort is directed towards studying degrees of randomness".[6] It can be proven that there is infinite hierarchy (in terms of quality or strength) of forms of randomness.[6]
History
Main article: History of randomness
In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. Most ancient cultures used various methods of divination to attempt to circumvent randomness and fate.[7][8] Beyond religion and games of chance, randomness has been attested for sortition since at least ancient Athenian democracy in the form of a kleroterion.[9]
The formalization of odds and chance was perhaps done earliest by the Chinese some 3,000 years ago. The Greek philosophers discussed randomness at length, but only in non-quantitative forms. It was only in the 16th century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention of calculus had a positive impact on the formal study of randomness. In the 1888 edition of his book The Logic of Chance, John Venn wrote a chapter on The conception of randomness that included his view of the randomness of the digits of pi, by using them to construct a random walk in two dimensions.[10]
The early part of the 20th century saw a rapid growth in the formal analysis of randomness, as various approaches to the mathematical foundations of probability were introduced. In the mid-to-late-20th century, ideas of algorithmic information theory introduced new dimensions to the field via the concept of algorithmic randomness.
Although randomness had often been viewed as an obstacle and a nuisance for many centuries, in the 20th century computer scientists began to realize that the deliberate introduction of randomness into computations can be an effective tool for designing better algorithms. In some cases, such randomized algorithms even outperform the best deterministic methods.[11]
In science
Many scientific fields are concerned with randomness:
• Algorithmic probability
• Chaos theory
• Cryptography
• Game theory
• Information theory
• Pattern recognition
• Percolation theory
• Probability theory
• Quantum mechanics
• Random walk
• Statistical mechanics
• Statistics
In the physical sciences
In the 19th century, scientists used the idea of random motions of molecules in the development of statistical mechanics to explain phenomena in thermodynamics and the properties of gases.
According to several standard interpretations of quantum mechanics, microscopic phenomena are objectively random.[12] That is, in an experiment that controls all causally relevant parameters, some aspects of the outcome still vary randomly. For example, if a single unstable atom is placed in a controlled environment, it cannot be predicted how long it will take for the atom to decay—only the probability of decay in a given time.[13] Thus, quantum mechanics does not specify the outcome of individual experiments, but only the probabilities. Hidden variable theories reject the view that nature contains irreducible randomness: such theories posit that in the processes that appear random, properties with a certain statistical distribution are at work behind the scenes, determining the outcome in each case.
In biology
The modern evolutionary synthesis ascribes the observed diversity of life to random genetic mutations followed by natural selection. The latter retains some random mutations in the gene pool due to the systematically improved chance for survival and reproduction that those mutated genes confer on individuals who possess them. However, the location of a mutation is not entirely random: for example, biologically important regions of the genome may be better protected from mutations.[14][15][16]
Several authors also claim that evolution (and sometimes development) requires a specific form of randomness, namely the introduction of qualitatively new behaviors. Instead of the choice of one possibility among several pre-given ones, this randomness corresponds to the formation of new possibilities.[17][18]
The characteristics of an organism arise to some extent deterministically (e.g., under the influence of genes and the environment), and to some extent randomly. For example, the density of freckles that appear on a person's skin is controlled by genes and exposure to light; whereas the exact location of individual freckles seems random.[19]
As far as behavior is concerned, randomness is important if an animal is to behave in a way that is unpredictable to others. For instance, insects in flight tend to move about with random changes in direction, making it difficult for pursuing predators to predict their trajectories.
In mathematics
The mathematical theory of probability arose from attempts to formulate mathematical descriptions of chance events, originally in the context of gambling, but later in connection with physics. Statistics is used to infer the underlying probability distribution of a collection of empirical observations. For the purposes of simulation, it is necessary to have a large supply of random numbers—or means to generate them on demand.
Algorithmic information theory studies, among other topics, what constitutes a random sequence. The central idea is that a string of bits is random if and only if it is shorter than any computer program that can produce that string (Kolmogorov randomness), which means that random strings are those that cannot be compressed. Pioneers of this field include Andrey Kolmogorov and his student Per Martin-Löf, Ray Solomonoff, and Gregory Chaitin. For the notion of infinite sequence, mathematicians generally accept Per Martin-Löf's semi-eponymous definition: An infinite sequence is random if and only if it withstands all recursively enumerable null sets.[20] The other notions of random sequences include, among others, recursive randomness and Schnorr randomness, which are based on recursively computable martingales. It was shown by Yongge Wang that these randomness notions are generally different.[21]
Randomness occurs in numbers such as log(2) and pi. The decimal digits of pi constitute an infinite sequence and "never repeat in a cyclical fashion." Numbers like pi are also considered likely to be normal:
Pi certainly seems to behave this way. In the first six billion decimal places of pi, each of the digits from 0 through 9 shows up about six hundred million times. Yet such results, conceivably accidental, do not prove normality even in base 10, much less normality in other number bases.[22]
In statistics
In statistics, randomness is commonly used to create simple random samples. This allows surveys of completely random groups of people to provide realistic data that is reflective of the population. Common methods of doing this include drawing names out of a hat or using a random digit chart (a large table of random digits).
In information science
In information science, irrelevant or meaningless data is considered noise. Noise consists of numerous transient disturbances, with a statistically randomized time distribution.
In communication theory, randomness in a signal is called "noise", and is opposed to that component of its variation that is causally attributable to the source, the signal.
In terms of the development of random networks, communication randomness rests on the two simple assumptions of Paul Erdős and Alfréd Rényi: that there is a fixed number of nodes, which remains fixed for the life of the network, and that all nodes are equal and linked randomly to each other.[23]
In finance
The random walk hypothesis considers that asset prices in an organized market evolve at random, in the sense that the expected value of their change is zero but the actual value may turn out to be positive or negative. More generally, asset prices are influenced by a variety of unpredictable events in the general economic environment.
In politics
Random selection can be an official method to resolve tied elections in some jurisdictions.[24] Its use in politics has a long history: many offices in ancient Athens were chosen by lot rather than by voting.
Randomness and religion
Randomness can be seen as conflicting with the deterministic ideas of some religions, such as those where the universe is created by an omniscient deity who is aware of all past and future events. If the universe is regarded as having a purpose, then randomness can be seen as impossible. This is one of the rationales for religious opposition to evolution, which holds that non-random selection is applied to the results of random genetic variation.
Hindu and Buddhist philosophies state that any event is the result of previous events, as is reflected in the concept of karma. As such, this conception is at odds with the idea of randomness, and any reconciliation between the two would require an explanation.[25]
In some religious contexts, procedures that are commonly perceived as randomizers are used for divination. Cleromancy uses the casting of bones or dice to reveal what is seen as the will of the gods.
Applications
Main article: Applications of randomness
In most of its mathematical, political, social and religious uses, randomness is used for its innate "fairness" and lack of bias.
Politics: Athenian democracy was based on the concept of isonomia (equality of political rights), and used complex allotment machines to ensure that the positions on the ruling committees that ran Athens were fairly allocated. Allotment is now largely restricted to situations where "fairness" is approximated by randomization, such as selecting jurors in Anglo-Saxon legal systems and military draft lotteries.
Games: Random numbers were first investigated in the context of gambling, and many randomizing devices, such as dice, shuffling playing cards, and roulette wheels, were first developed for use in gambling. The ability to produce random numbers fairly is vital to electronic gambling, and, as such, the methods used to create them are usually regulated by government Gaming Control Boards. Random drawings are also used to determine lottery winners. In fact, randomness has been used for games of chance throughout history, and to select out individuals for an unwanted task in a fair way (see drawing straws).
Sports: Some sports, including American football, use coin tosses to randomly select starting conditions for games or seed tied teams for postseason play. The National Basketball Association uses a weighted lottery to order teams in its draft.
Mathematics: Random numbers are also employed where their use is mathematically important, such as sampling for opinion polls and for statistical sampling in quality control systems. Computational solutions for some types of problems use random numbers extensively, such as in the Monte Carlo method and in genetic algorithms.
Medicine: Random allocation of a clinical intervention is used to reduce bias in controlled trials (e.g., randomized controlled trials).
Religion: Although not intended to be random, various forms of divination such as cleromancy see what appears to be a random event as a means for a divine being to communicate their will (see also Free will and Determinism for more).
Generation
It is generally accepted that there exist three mechanisms responsible for (apparently) random behavior in systems:
1. Randomness coming from the environment (for example, Brownian motion, but also hardware random number generators).
2. Randomness coming from the initial conditions. This aspect is studied by chaos theory, and is observed in systems whose behavior is very sensitive to small variations in initial conditions (such as pachinko machines and dice).
3. Randomness intrinsically generated by the system. This is also called pseudorandomness, and is the kind used in pseudo-random number generators. There are many algorithms (based on arithmetic or cellular automata) for generating pseudorandom numbers. The behavior of the system can be determined by knowing the seed state and the algorithm used, as the sketch following this list illustrates. These methods are often quicker than getting "true" randomness from the environment.
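As a minimal illustration of the third mechanism (a sketch, not from the article; the multiplier and increment are the well-known Numerical Recipes constants):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: the whole sequence is determined by the seed and constants."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # scale to the unit interval

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(5)])
```

Re-running with the same seed reproduces the same sequence exactly, which is precisely what distinguishes pseudorandomness from randomness drawn from the environment.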
The many applications of randomness have led to many different methods for generating random data. These methods may vary as to how unpredictable or statistically random they are, and how quickly they can generate random numbers.
Before the advent of computational random number generators, generating large amounts of sufficiently random numbers (which is important in statistics) required a lot of work. Results would sometimes be collected and distributed as random number tables.
Measures and tests
Main article: Randomness tests
There are many practical measures of randomness for a binary sequence. These include measures based on frequency, discrete transforms, complexity, or a mixture of these, such as the tests by Kak, Phillips, Yuen, Hopkins, Beth and Dai, Mund, and Marsaglia and Zaman.[26]
Quantum nonlocality has been used to certify the presence of a genuine or strong form of randomness in a given string of numbers.[27]
Misconceptions and logical fallacies
Popular perceptions of randomness are frequently mistaken, and are often based on fallacious reasoning or intuitions.
Fallacy: a number is "due"
See also: Coupon collector's problem
This argument is, "In a random selection of numbers, since all numbers eventually appear, those that have not come up yet are 'due', and thus more likely to come up soon." This logic is only correct if applied to a system where numbers that come up are removed from the system, such as when playing cards are drawn and not returned to the deck. In this case, once a jack is removed from the deck, the next draw is less likely to be a jack and more likely to be some other card. However, if the jack is returned to the deck, and the deck is thoroughly reshuffled, a jack is as likely to be drawn as any other card. The same applies in any other process where objects are selected independently, and none are removed after each event, such as the roll of a die, a coin toss, or most lottery number selection schemes. Truly random processes such as these do not have memory, which makes it impossible for past outcomes to affect future outcomes. In fact, there is no finite number of trials that can guarantee a success.
Fallacy: a number is "cursed" or "blessed"
See also: Benford's law
In a random sequence of numbers, a number may be said to be cursed because it has come up less often in the past, and so it is thought that it will occur less often in the future. A number may be assumed to be blessed because it has occurred more often than others in the past, and so it is thought likely to come up more often in the future. This logic is valid only if the randomization might be biased: for example, if a die is suspected to be loaded, then its failure to roll enough sixes would be evidence of that loading. If the die is known to be fair, then previous rolls can give no indication of future events.
In nature, events rarely occur with a frequency that is known a priori, so observing outcomes to determine which events are more probable makes sense. However, it is fallacious to apply this logic to systems designed and known to make all outcomes equally likely, such as shuffled cards, dice, and roulette wheels.
Fallacy: odds are never dynamic
In the beginning of a scenario, one might calculate the probability of a certain event. However, as soon as one gains more information about the scenario, one may need to re-calculate the probability accordingly.
For example, when being told that a woman has two children, one might be interested in knowing if either of them is a girl, and if yes, what the probability is that the other child is also a girl. Considering the two events independently, one might expect that the probability that the other child is female is ½ (50%), but by building a probability space illustrating all possible outcomes, one would notice that the probability is actually only ⅓ (33%).
To be sure, the probability space does illustrate four ways of having these two children: boy-boy, girl-boy, boy-girl, and girl-girl. But once it is known that at least one of the children is female, this rules out the boy-boy scenario, leaving only three ways of having the two children: boy-girl, girl-boy, girl-girl. From this, it can be seen that only ⅓ of these scenarios would have the other child also be a girl[28] (see Boy or girl paradox for more).
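The probability space described here is small enough to enumerate directly; a quick illustrative check in Python:

```python
from itertools import product

outcomes = list(product("BG", repeat=2))        # BB, BG, GB, GG
at_least_one_girl = [o for o in outcomes if "G" in o]
both_girls = [o for o in at_least_one_girl if o == ("G", "G")]
print(len(both_girls) / len(at_least_one_girl))  # 1/3 ≈ 0.333...
```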
In general, by using a probability space, one is less likely to miss out on possible scenarios, or to neglect the importance of new information. This technique can be used to provide insights in other situations such as the Monty Hall problem, a game show scenario in which a car is hidden behind one of three doors, and two goats are hidden as booby prizes behind the others. Once the contestant has chosen a door, the host opens one of the remaining doors to reveal a goat, eliminating that door as an option. With only two doors left (one with the car, the other with another goat), the player must decide to either keep their decision, or to switch and select the other door. Intuitively, one might think the player is choosing between two doors with equal probability, and that the opportunity to choose another door makes no difference. However, an analysis of the probability spaces would reveal that the contestant has received new information, and that changing to the other door would increase their chances of winning.[28]
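The Monty Hall analysis can likewise be checked by simulation (an illustrative sketch; when the contestant's initial pick is the car, the host here simply opens the lowest-numbered remaining door, which does not affect the result):

```python
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        choice = random.randrange(3)
        # The host opens a door that is neither the contestant's choice nor the car.
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

print(play(switch=False), play(switch=True))  # ≈ 1/3 vs ≈ 2/3
```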
See also
• Chaitin's constant
• Chance (disambiguation)
• Frequency probability
• Indeterminism
• Nonlinear system
• Probability interpretations
• Probability theory
• Pseudorandomness
• Random.org—generates random numbers using atmospheric noise
• Sortition
Notes
1. Strictly speaking, the frequency of an outcome will converge almost surely to a predictable value as the number of trials becomes arbitrarily large. Non-convergence or convergence to a different value is possible, but has probability zero.
References
1. The Oxford English Dictionary defines "random" as "Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice; haphazard."
2. "Definition of randomness | Dictionary.com". www.dictionary.com. Retrieved 21 November 2019.
3. Third Workshop on Monte Carlo Methods, Jun Liu, Professor of Statistics, Harvard University
4. Hans Jürgen Prömel (2005). "Complete Disorder is Impossible: The Mathematical Work of Walter Deuber". Combinatorics, Probability and Computing. Cambridge University Press. 14: 3–16. doi:10.1017/S0963548304006674. S2CID 37243306.
5. Ted.com, (May 2016). The origin of countless conspiracy theories
6. Cristian S. Calude, (2017). "Quantum Randomness: From Practice to Theory and Back" in "The Incomputable Journeys Beyond the Turing Barrier" Editors: S. Barry Cooper, Mariya I. Soskova, 169–181, doi:10.1007/978-3-319-43669-2_11.
7. Handbook to life in ancient Rome by Lesley Adkins 1998 ISBN 0-19-512332-8 page 279
8. Religions of the ancient world by Sarah Iles Johnston 2004 ISBN 0-674-01517-7 page 370
9. Hansen, Mogens Herman (1991). The Athenian Democracy in the Age of Demosthenes. Wiley. p. 230. ISBN 9780631180173.
10. Annotated readings in the history of statistics by Herbert Aron David, 2001 ISBN 0-387-98844-0 page 115. The 1866 edition of Venn's book (on Google books) does not include this chapter.
11. Reinert, Knut (2010). "Concept: Types of algorithms" (PDF). Freie Universität Berlin. Retrieved 20 November 2019.
12. Zeilinger, Anton; Aspelmeyer, Markus; Żukowski, Marek; Brukner, Časlav; Kaltenbaek, Rainer; Paterek, Tomasz; Gröblacher, Simon (April 2007). "An experimental test of non-local realism". Nature. 446 (7138): 871–875. arXiv:0704.2529. Bibcode:2007Natur.446..871G. doi:10.1038/nature05677. ISSN 1476-4687. PMID 17443179. S2CID 4412358.
13. "Each nucleus decays spontaneously, at random, in accordance with the blind workings of chance." Q for Quantum, John Gribbin
14. "Study challenges evolutionary theory that DNA mutations are random". U.C. Davis. Retrieved 12 February 2022.
15. Monroe, J. Grey; Srikant, Thanvi; Carbonell-Bejerano, Pablo; Becker, Claude; Lensink, Mariele; Exposito-Alonso, Moises; Klein, Marie; Hildebrandt, Julia; Neumann, Manuela; Kliebenstein, Daniel; Weng, Mao-Lun; Imbert, Eric; Ågren, Jon; Rutter, Matthew T.; Fenster, Charles B.; Weigel, Detlef (February 2022). "Mutation bias reflects natural selection in Arabidopsis thaliana". Nature. 602 (7895): 101–105. Bibcode:2022Natur.602..101M. doi:10.1038/s41586-021-04269-6. ISSN 1476-4687. PMC 8810380. PMID 35022609.
16. Belfield, Eric J.; Ding, Zhong Jie; Jamieson, Fiona J.C.; Visscher, Anne M.; Zheng, Shao Jian; Mithani, Aziz; Harberd, Nicholas P. (January 2018). "DNA mismatch repair preferentially protects genes from mutation". Genome Research. 28 (1): 66–74. doi:10.1101/gr.219303.116. PMC 5749183. PMID 29233924.
17. Longo, Giuseppe; Montévil, Maël; Kauffman, Stuart (1 January 2012). "No entailing laws, but enablement in the evolution of the biosphere". Proceedings of the 14th annual conference companion on Genetic and evolutionary computation. GECCO '12. New York, NY, US: ACM. pp. 1379–1392. arXiv:1201.2069. CiteSeerX 10.1.1.701.3838. doi:10.1145/2330784.2330946. ISBN 9781450311786. S2CID 15609415.
18. Longo, Giuseppe; Montévil, Maël (1 October 2013). "Extended criticality, phase spaces and enablement in biology". Chaos, Solitons & Fractals. Emergent Critical Brain Dynamics. 55: 64–79. Bibcode:2013CSF....55...64L. doi:10.1016/j.chaos.2013.03.008. S2CID 55589891.
19. Breathnach, A. S. (1982). "A long-term hypopigmentary effect of thorium-X on freckled skin". British Journal of Dermatology. 106 (1): 19–25. doi:10.1111/j.1365-2133.1982.tb00897.x. PMID 7059501. S2CID 72016377. The distribution of freckles seems entirely random, and not associated with any other obviously punctuate anatomical or physiological feature of skin.
20. Martin-Löf, Per (1966). "The definition of random sequences". Information and Control. 9 (6): 602–619. doi:10.1016/S0019-9958(66)80018-9.
21. Yongge Wang: Randomness and Complexity. PhD Thesis, 1996. http://webpages.uncc.edu/yonwang/papers/thesis.pdf
22. "Are the digits of pi random? researcher may hold the key". Lbl.gov. 23 July 2001. Retrieved 27 July 2012.
23. Barabási, Albert-László (2003), Linked, "Rich Get Richer", p. 81.
24. Municipal Elections Act (Ontario, Canada) 1996, c. 32, Sched., s. 62 (3) : "If the recount indicates that two or more candidates who cannot both or all be declared elected to an office have received the same number of votes, the clerk shall choose the successful candidate or candidates by lot."
25. Reichenbach, Bruce (1990). The Law of Karma: A Philosophical Study. Palgrave Macmillan UK. p. 121. ISBN 978-1-349-11899-1.
26. Terry Ritter, Randomness tests: a literature survey. ciphersbyritter.com
27. Pironio, S.; et al. (2010). "Random Numbers Certified by Bell's Theorem". Nature. 464 (7291): 1021–1024. arXiv:0911.3427. Bibcode:2010Natur.464.1021P. doi:10.1038/nature09008. PMID 20393558. S2CID 4300790.
28. Johnson, George (8 June 2008). "Playing the Odds". The New York Times.
Further reading
• Randomness by Deborah J. Bennett. Harvard University Press, 1998. ISBN 0-674-10745-4.
• Random Measures, 4th ed. by Olav Kallenberg. Academic Press, New York, London; Akademie-Verlag, Berlin, 1986. MR0854102.
• The Art of Computer Programming. Vol. 2: Seminumerical Algorithms, 3rd ed. by Donald E. Knuth. Reading, MA: Addison-Wesley, 1997. ISBN 0-201-89684-2.
• Fooled by Randomness, 2nd ed. by Nassim Nicholas Taleb. Thomson Texere, 2004. ISBN 1-58799-190-X.
• Exploring Randomness by Gregory Chaitin. Springer-Verlag London, 2001. ISBN 1-85233-417-7.
• Random by Kenneth Chan includes a "Random Scale" for grading the level of randomness.
• The Drunkard’s Walk: How Randomness Rules our Lives by Leonard Mlodinow. Pantheon Books, New York, 2008. ISBN 978-0-375-42404-5.
External links
• QuantumLab Quantum random number generator with single photons as interactive experiment.
• HotBits generates random numbers from radioactive decay.
• QRBG Quantum Random Bit Generator
• QRNG Fast Quantum Random Bit Generator
• Chaitin: Randomness and Mathematical Proof
• A Pseudorandom Number Sequence Test Program (Public Domain)
• Dictionary of the History of Ideas: Chance
• Computing a Glimpse of Randomness
• Chance versus Randomness, from the Stanford Encyclopedia of Philosophy
Bertrand paradox (probability)
The Bertrand paradox is a problem within the classical interpretation of probability theory. Joseph Bertrand introduced it in his work Calcul des probabilités (1889),[1] as an example to show that the principle of indifference may not produce definite, well-defined results for probabilities if it is applied uncritically when the domain of possibilities is infinite.[2]
Bertrand's formulation of the problem
The Bertrand paradox is generally presented as follows:[3] Consider an equilateral triangle inscribed in a circle. Suppose a chord of the circle is chosen at random. What is the probability that the chord is longer than a side of the triangle?
Bertrand gave three arguments (each using the principle of indifference), all apparently valid, yet yielding different results:
1. The "random endpoints" method: Choose two random points on the circumference of the circle and draw the chord joining them. To calculate the probability in question imagine the triangle rotated so its vertex coincides with one of the chord endpoints. Observe that if the other chord endpoint lies on the arc between the endpoints of the triangle side opposite the first point, the chord is longer than a side of the triangle. The length of the arc is one third of the circumference of the circle, therefore the probability that a random chord is longer than a side of the inscribed triangle is 1/3.
2. The "random radial point" method: Choose a radius of the circle, choose a point on the radius and construct the chord through this point and perpendicular to the radius. To calculate the probability in question imagine the triangle rotated so a side is perpendicular to the radius. The chord is longer than a side of the triangle if the chosen point is nearer the center of the circle than the point where the side of the triangle intersects the radius. The side of the triangle bisects the radius, therefore the probability a random chord is longer than a side of the inscribed triangle is 1/2.
3. The "random midpoint" method: Choose a point anywhere within the circle and construct a chord with the chosen point as its midpoint. The chord is longer than a side of the inscribed triangle if the chosen point falls within a concentric circle of radius 1/2 the radius of the larger circle. The area of the smaller circle is one fourth the area of the larger circle, therefore the probability a random chord is longer than a side of the inscribed triangle is 1/4.
These three selection methods differ as to the weight they give to chords which are diameters. This issue can be avoided by "regularizing" the problem so as to exclude diameters, without affecting the resulting probabilities.[3] But as presented above, in method 1, each chord can be chosen in exactly one way, regardless of whether or not it is a diameter; in method 2, each diameter can be chosen in two ways, whereas each other chord can be chosen in only one way; and in method 3, each choice of midpoint corresponds to a single chord, except the center of the circle, which is the midpoint of all the diameters.
Scatterplots showing simulated Bertrand distributions, with midpoints and chords chosen at random using each of the above methods.
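The three methods can be simulated directly. The following sketch (illustrative, not from the article) works on a unit circle, in which the inscribed equilateral triangle has side √3, and reproduces the probabilities 1/3, 1/2, and 1/4:

```python
import math
import random

SIDE = math.sqrt(3)  # side of the equilateral triangle inscribed in a unit circle

def method1():
    # Random endpoints: two uniform angles; the chord has length 2*sin(delta/2).
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2) > SIDE

def method2():
    # Random radial point: the chord's distance d from the center is uniform on [0, 1].
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d) > SIDE

def method3():
    # Random midpoint: uniform in the disk, sampled by rejection from the square.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - x * x - y * y) > SIDE

trials = 100_000
for m in (method1, method2, method3):
    print(sum(m() for _ in range(trials)) / trials)  # ≈ 1/3, 1/2, 1/4
```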
Other selection methods have been found. In fact, there exists an infinite family of them.[4]
Classical solution
The problem's classical solution (presented, for example, in Bertrand's own work) hinges on the method by which a chord is chosen "at random".[3] The argument is that if the method of random selection is specified, the problem will have a well-defined solution (determined by the principle of indifference). The three solutions presented by Bertrand correspond to different selection methods, and in the absence of further information there is no reason to prefer one over another; accordingly, the problem as stated has no unique solution.[5]
Jaynes's solution using the "maximum ignorance" principle
In his 1973 paper "The Well-Posed Problem",[6] Edwin Jaynes proposed a solution to Bertrand's paradox, based on the principle of "maximum ignorance"—that we should not use any information that is not given in the statement of the problem. Jaynes pointed out that Bertrand's problem does not specify the position or size of the circle, and argued that therefore any definite and objective solution must be "indifferent" to size and position. In other words: the solution must be both scale and translation invariant.
To illustrate: assume that chords are laid at random onto a circle with a diameter of 2, say by throwing straws onto it from far away and converting them to chords by extension/restriction. Now another circle with a smaller diameter (e.g., 1.1) is laid into the larger circle. Then the distribution of the chords on that smaller circle needs to be the same as the restricted distribution of chords on the larger circle (again using extension/restriction of the generating straws). Thus, if the smaller circle is moved around within the larger circle, the restricted distribution should not change. For method 3 there would be a change: the chord distribution on the smaller circle looks qualitatively different from the distribution on the larger circle.
The same occurs for method 1, though it is harder to see in a graphical representation. Method 2 is the only one that is both scale invariant and translation invariant; method 3 is only scale invariant, and method 1 is neither.
However, Jaynes did not just use invariances to accept or reject given methods: this would leave the possibility that there is another not yet described method that would meet his common-sense criteria. Jaynes used the integral equations describing the invariances to directly determine the probability distribution. In this problem, the integral equations indeed have a unique solution, and it is precisely what was called "method 2" above, the random radial point method.
In a 2015 article,[3] Alon Drory argued that Jaynes' principle can also yield Bertrand's other two solutions. Drory argues that the mathematical implementation of the above invariance properties is not unique, but depends on the underlying procedure of random selection that one uses (as mentioned above, Jaynes used a straw-throwing method to choose random chords). He shows that each of Bertrand's three solutions can be derived using rotational, scaling, and translational invariance, concluding that Jaynes' principle is just as subject to interpretation as the principle of indifference itself.
For example, we may consider throwing a dart at the circle, and drawing the chord having the chosen point as its center. Then the unique distribution which is translation, rotation, and scale invariant is the one called "method 3" above.
Likewise, "method 1" is the unique invariant distribution for a scenario where a spinner is used to select one endpoint of the chord, and then used again to select the orientation of the chord. Here the invariance in question consists of rotational invariance for each of the two spins. It is also the unique scale and rotation invariant distribution for a scenario where a rod is placed vertically over a point on the circle's circumference, and allowed to drop to the horizontal position (conditional on it landing partly inside the circle).
Physical experiments
"Method 2" is the only solution that fulfills the transformation invariants that are present in certain physical systems—such as in statistical mechanics and gas physics—in the specific case of Jaynes's proposed experiment of throwing straws from a distance onto a small circle. Nevertheless, one can design other practical experiments that give answers according to the other methods. For example, in order to arrive at the solution of "method 1", the random endpoints method, one can affix a spinner to the center of the circle, and let the results of two independent spins mark the endpoints of the chord. In order to arrive at the solution of "method 3", one could cover the circle with molasses and mark the first point that a fly lands on as the midpoint of the chord.[7] Several observers have designed experiments in order to obtain the different solutions and verified the results empirically.[8][9][3]
Notes
1. Bertrand, Joseph (1889), "Calcul des probabilités", Gauthier-Villars, pp. 5–6.
2. Shackel, N. (2007), "Bertrand's Paradox and the Principle of Indifference" (PDF), Philosophy of Science, 74 (2): 150–175, doi:10.1086/519028, S2CID 15760612
3. Drory, Alon (2015), "Failure and Uses of Jaynes' Principle of Transformation Groups", Foundations of Physics, 45 (4): 439–460, arXiv:1503.09072, Bibcode:2015FoPh...45..439D, doi:10.1007/s10701-015-9876-7, S2CID 88515906
4. Bower, O. K. (1934). "Note Concerning Two Problems in Geometrical Probability". The American Mathematical Monthly. 41 (8): 506–510. doi:10.2307/2300418. ISSN 0002-9890. JSTOR 2300418.
5. Marinoff, L. (1994), "A resolution of Bertrand's paradox", Philosophy of Science, 61: 1–24, doi:10.1086/289777, S2CID 122224925
6. Jaynes, E. T. (1973), "The Well-Posed Problem" (PDF), Foundations of Physics, 3 (4): 477–493, Bibcode:1973FoPh....3..477J, doi:10.1007/BF00709116, S2CID 2380040
7. Gardner, Martin (1987), The Second Scientific American Book of Mathematical Puzzles and Diversions, University of Chicago Press, pp. 223–226, ISBN 978-0-226-28253-4
8. Tissler, P.E. (March 1984), "Bertrand's Paradox", The Mathematical Gazette, The Mathematical Association, 68 (443): 15–19, doi:10.2307/3615385, JSTOR 3615385, S2CID 158690181
9. Kac, Mark (May–June 1984), "Marginalia: more on randomness", American Scientist, 72 (3): 282–283
Further reading
• Clark, Michael (2012), Paradoxes from A to Z (3rd ed.), Routledge, ISBN 978-0-415-53857-2
• Gyenis, Zalán; Rédei, Miklós (1 June 2015), "Defusing Bertrand's Paradox", British Journal for the Philosophy of Science, 66 (2): 349–373, doi:10.1093/bjps/axt036, archived from the original on 5 August 2014
External links
• Weisstein, Eric W. "Bertrand's Problem". MathWorld.
• Bertrand's Paradox and More on Bertrand's Paradox by Numberphile & 3Blue1Brown
Random coordinate descent
The randomized (block) coordinate descent method is an optimization algorithm popularized by Nesterov (2010) and Richtárik and Takáč (2011). The first analysis of this method, when applied to the problem of minimizing a smooth convex function, was performed by Nesterov (2010).[1] In Nesterov's analysis the method needs to be applied to a quadratic perturbation of the original function with an unknown scaling factor. Richtárik and Takáč (2011) give iteration complexity bounds which do not require this, i.e., the method is applied to the objective function directly. Furthermore, they generalize the setting to the problem of minimizing a composite function, i.e., the sum of a smooth convex function and a (possibly nonsmooth) convex block-separable function:
$F(x)=f(x)+\Psi (x),$
where $\Psi (x)=\sum _{i=1}^{n}\Psi _{i}(x^{(i)}),$ $x\in R^{N}$ is decomposed into $n$ blocks of variables/coordinates: $x=(x^{(1)},\dots ,x^{(n)})$ and $\Psi _{1},\dots ,\Psi _{n}$ are (simple) convex functions.
Example (block decomposition): If $x=(x_{1},x_{2},\dots ,x_{5})\in R^{5}$ and $n=3$, one may choose $x^{(1)}=(x_{1},x_{3}),x^{(2)}=(x_{2},x_{5})$ and $x^{(3)}=x_{4}$.
Example (block-separable regularizers):
1. $n=N;\Psi (x)=\|x\|_{1}=\sum _{i=1}^{n}|x_{i}|$
2. $N=N_{1}+N_{2}+\dots +N_{n};\Psi (x)=\sum _{i=1}^{n}\|x^{(i)}\|_{2}$, where $x^{(i)}\in R^{N_{i}}$ and $\|\cdot \|_{2}$ is the standard Euclidean norm.
Algorithm
Consider the optimization problem
$\min _{x\in R^{n}}f(x),$
where $f$ is a convex and smooth function.
Smoothness: By smoothness we mean the following: we assume the gradient of $f$ is coordinate-wise Lipschitz continuous with constants $L_{1},L_{2},\dots ,L_{n}$. That is, we assume that
$|\nabla _{i}f(x+he_{i})-\nabla _{i}f(x)|\leq L_{i}|h|,$
for all $x\in R^{n}$ and $h\in R$, where $\nabla _{i}$ denotes the partial derivative with respect to variable $x^{(i)}$.
Nesterov, and Richtárik and Takáč, showed that the following algorithm converges to the optimal point:
Algorithm Random Coordinate Descent Method
Input: $x_{0}\in R^{n}$ //starting point
Output: $x$
set x := x_0
for k := 1, ... do
choose coordinate $i\in \{1,2,\dots ,n\}$, uniformly at random
update $x^{(i)}=x^{(i)}-{\frac {1}{L_{i}}}\nabla _{i}f(x)$
end for
• "←" denotes assignment. For instance, "largest ← item" means that the value of largest changes to the value of item.
• "return" terminates the algorithm and outputs the following value.
Convergence rate
Since the iterates of this algorithm are random vectors, a complexity result would give a bound on the number of iterations needed for the method to output an approximate solution with high probability. It was shown in [2] that if $k\geq {\frac {2nR_{L}(x_{0})}{\epsilon }}\log \left({\frac {f(x_{0})-f^{*}}{\epsilon \rho }}\right)$, where $R_{L}(x)=\max _{y}\max _{x^{*}\in X^{*}}\{\|y-x^{*}\|_{L}:f(y)\leq f(x)\}$, $f^{*}$ is an optimal solution ($f^{*}=\min _{x\in R^{n}}\{f(x)\}$), $\rho \in (0,1)$ is a confidence level and $\epsilon >0$ is target accuracy, then $Prob(f(x_{k})-f^{*}>\epsilon )\leq \rho $.
Example on particular function
The following example illustrates, in principle, how $x_{k}$ develops during the iterations. The problem is
$f(x)={\tfrac {1}{2}}x^{T}\left({\begin{array}{cc}1&0.5\\0.5&1\end{array}}\right)x-\left({\begin{array}{cc}1.5&1.5\end{array}}\right)x,\quad x_{0}=\left({\begin{array}{cc}0&0\end{array}}\right)^{T}$
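A minimal Python sketch of the method applied to this particular quadratic (the iteration count and seed are arbitrary choices; for a quadratic, the coordinate-wise Lipschitz constants $L_{i}$ are the diagonal entries of the matrix):

import numpy as np

A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.5, 1.5])
L = np.diag(A)                   # coordinate-wise Lipschitz constants L_i = A_ii

rng = np.random.default_rng(0)   # seed chosen arbitrarily
x = np.zeros(2)                  # starting point x_0 = (0, 0)^T

for k in range(200):
    i = rng.integers(2)          # choose coordinate i uniformly at random
    grad_i = A[i] @ x - b[i]     # i-th partial derivative of f at x
    x[i] -= grad_i / L[i]        # update x^(i) by -(1/L_i) * grad_i

print(x)                         # approaches the minimizer A^{-1} b = (1, 1)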
Extension to block coordinate setting
One can naturally extend this algorithm not just to individual coordinates but to blocks of coordinates. Assume that we have the space $R^{5}$. This space has 5 coordinate directions, concretely $e_{1}=(1,0,0,0,0)^{T},e_{2}=(0,1,0,0,0)^{T},e_{3}=(0,0,1,0,0)^{T},e_{4}=(0,0,0,1,0)^{T},e_{5}=(0,0,0,0,1)^{T}$, in which the random coordinate descent method can move. However, one can group some coordinate directions into blocks, obtaining, for example, 3 block coordinate directions in place of those 5 coordinate directions.
See also
• Coordinate descent
• Gradient descent
• Mathematical optimization
References
1. Nesterov, Yurii (2010), "Efficiency of coordinate descent methods on huge-scale optimization problems", SIAM Journal on Optimization, 22 (2): 341–362, CiteSeerX 10.1.1.332.3336, doi:10.1137/100802001
2. Richtárik, Peter; Takáč, Martin (2011), "Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function", Mathematical Programming, Series A, 144 (1–2): 1–38, arXiv:1107.2848, doi:10.1007/s10107-012-0614-z
Random dynamical system
In the mathematical field of dynamical systems, a random dynamical system is a dynamical system in which the equations of motion have an element of randomness to them. Random dynamical systems are characterized by a state space S, a set of maps $\Gamma $ from S into itself that can be thought of as the set of all possible equations of motion, and a probability distribution Q on the set $\Gamma $ that represents the random choice of map. Motion in a random dynamical system can be informally thought of as a state $X\in S$ evolving according to a succession of maps randomly chosen according to the distribution Q.[1]
An example of a random dynamical system is a stochastic differential equation; in this case the distribution Q is typically determined by noise terms. It consists of a base flow, the "noise", and a cocycle dynamical system on the "physical" phase space. Another example is a discrete-state random dynamical system; some elementary contrasts between the Markov chain and random dynamical system descriptions of a stochastic dynamics are discussed below.[2]
Motivation 1: Solutions to a stochastic differential equation
Let $f:\mathbb {R} ^{d}\to \mathbb {R} ^{d}$ be a $d$-dimensional vector field, and let $\varepsilon >0$. Suppose that the solution $X(t,\omega ;x_{0})$ to the stochastic differential equation
$\left\{{\begin{matrix}\mathrm {d} X=f(X)\,\mathrm {d} t+\varepsilon \,\mathrm {d} W(t);\\X(0)=x_{0};\end{matrix}}\right.$
exists for all positive time and some (small) interval of negative time dependent upon $\omega \in \Omega $, where $W:\mathbb {R} \times \Omega \to \mathbb {R} ^{d}$ denotes a $d$-dimensional Wiener process (Brownian motion). Implicitly, this statement uses the classical Wiener probability space
$(\Omega ,{\mathcal {F}},\mathbb {P} ):=\left(C_{0}(\mathbb {R} ;\mathbb {R} ^{d}),{\mathcal {B}}(C_{0}(\mathbb {R} ;\mathbb {R} ^{d})),\gamma \right).$
In this context, the Wiener process is the coordinate process.
Now define a flow map (or solution operator) $\varphi :\mathbb {R} \times \Omega \times \mathbb {R} ^{d}\to \mathbb {R} ^{d}$ by
$\varphi (t,\omega ,x_{0}):=X(t,\omega ;x_{0})$
(whenever the right hand side is well-defined). Then $\varphi $ (or, more precisely, the pair $(\mathbb {R} ^{d},\varphi )$) is a (local, left-sided) random dynamical system. The process of generating a "flow" from the solution to a stochastic differential equation leads us to study suitably defined "flows" on their own. These "flows" are random dynamical systems.
Motivation 2: Connection to Markov Chain
An i.i.d. random dynamical system in the discrete space is described by a triplet $(S,\Gamma ,Q)$.
• $S$ is the state space, $\{s_{1},s_{2},\cdots ,s_{n}\}$.
• $\Gamma $ is a family of maps of $S\rightarrow S$. Each such map has an $n\times n$ matrix representation, called a deterministic transition matrix: a binary matrix with exactly one entry equal to 1 in each row and 0s elsewhere.
• $Q$ is a probability measure on the $\sigma $-field of $\Gamma $.
The discrete random dynamical system evolves as follows:
1. The system is in some state $x_{0}$ in $S$, a map $\alpha _{1}$ in $\Gamma $ is chosen according to the probability measure $Q$ and the system moves to the state $x_{1}=\alpha _{1}(x_{0})$ in step 1.
2. Independently of previous maps, another map $\alpha _{2}$ is chosen according to the probability measure $Q$ and the system moves to the state $x_{2}=\alpha _{2}(x_{1})$.
3. The procedure repeats.
The random variable $X_{n}$ is constructed by means of composition of independent random maps, $X_{n}=\alpha _{n}\circ \alpha _{n-1}\circ \dots \circ \alpha _{1}(X_{0})$. Clearly, $X_{n}$ is a Markov chain.
Conversely, can a given Markov chain be represented by compositions of i.i.d. random transformations, and if so, how? Yes, it can, but the representation is not unique. The proof of existence is similar to that of the Birkhoff–von Neumann theorem for doubly stochastic matrices.
Here is an example that illustrates the existence and non-uniqueness.
Example: Let the state space be $S=\{1,2\}$, with the set of transformations $\Gamma $ expressed in terms of deterministic transition matrices. Then a Markov transition matrix $M=\left({\begin{array}{cc}0.4&0.6\\0.7&0.3\end{array}}\right)$ can be represented by the following decomposition by the min-max algorithm, $M=0.6\left({\begin{array}{cc}0&1\\1&0\end{array}}\right)+0.3\left({\begin{array}{cc}1&0\\0&1\end{array}}\right)+0.1\left({\begin{array}{cc}1&0\\1&0\end{array}}\right).$
Another possible decomposition is $M=0.18\left({\begin{array}{cc}0&1\\0&1\end{array}}\right)+0.28\left({\begin{array}{cc}1&0\\1&0\end{array}}\right)+0.42\left({\begin{array}{cc}0&1\\1&0\end{array}}\right)+0.12\left({\begin{array}{cc}1&0\\0&1\end{array}}\right).$
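As an illustrative sanity check (an added sketch, not taken from the references), one can simulate the first decomposition and confirm that composing i.i.d. random deterministic maps reproduces $M$ as the one-step transition matrix:

import numpy as np

# States 1 and 2 are represented by indices 0 and 1. Each deterministic map
# is an array giving the image of each state; the maps and weights follow
# the first decomposition of M above.
maps = [np.array([1, 0]),   # swap:     1 -> 2, 2 -> 1   (weight 0.6)
        np.array([0, 1]),   # identity: 1 -> 1, 2 -> 2   (weight 0.3)
        np.array([0, 0])]   # constant: 1 -> 1, 2 -> 1   (weight 0.1)
weights = [0.6, 0.3, 0.1]

rng = np.random.default_rng(0)
counts = np.zeros((2, 2))
for _ in range(100_000):
    f = maps[rng.choice(3, p=weights)]   # draw an i.i.d. random map
    for s in (0, 1):
        counts[s, f[s]] += 1             # tally the transition s -> f(s)

print(counts / counts.sum(axis=1, keepdims=True))
# approximately [[0.4, 0.6], [0.7, 0.3]], i.e. the matrix M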
Formal definition
Formally,[3] a random dynamical system consists of a base flow, the "noise", and a cocycle dynamical system on the "physical" phase space. In detail:
Let $(\Omega ,{\mathcal {F}},\mathbb {P} )$ be a probability space, the noise space. Define the base flow $\vartheta :\mathbb {R} \times \Omega \to \Omega $ as follows: for each "time" $s\in \mathbb {R} $, let $\vartheta _{s}:\Omega \to \Omega $ be a measure-preserving measurable function:
$\mathbb {P} (E)=\mathbb {P} (\vartheta _{s}^{-1}(E))$ for all $E\in {\mathcal {F}}$ and $s\in \mathbb {R} $;
Suppose also that
1. $\vartheta _{0}=\mathrm {id} _{\Omega }:\Omega \to \Omega $, the identity function on $\Omega $;
2. for all $s,t\in \mathbb {R} $, $\vartheta _{s}\circ \vartheta _{t}=\vartheta _{s+t}$.
That is, $\vartheta _{s}$, $s\in \mathbb {R} $, forms a group of measure-preserving transformations of the noise $(\Omega ,{\mathcal {F}},\mathbb {P} )$. For one-sided random dynamical systems, one would consider only positive indices $s$; for discrete-time random dynamical systems, one would consider only integer-valued $s$; in these cases, the maps $\vartheta _{s}$ would only form a commutative monoid instead of a group.
While true in most applications, it is not usually part of the formal definition of a random dynamical system to require that the measure-preserving dynamical system $(\Omega ,{\mathcal {F}},\mathbb {P} ,\vartheta )$ is ergodic.
Now let $(X,d)$ be a complete separable metric space, the phase space. Let $\varphi :\mathbb {R} \times \Omega \times X\to X$ be a $({\mathcal {B}}(\mathbb {R} )\otimes {\mathcal {F}}\otimes {\mathcal {B}}(X),{\mathcal {B}}(X))$-measurable function such that
1. for all $\omega \in \Omega $, $\varphi (0,\omega )=\mathrm {id} _{X}:X\to X$, the identity function on $X$;
2. for (almost) all $\omega \in \Omega $, $(t,x)\mapsto \varphi (t,\omega ,x)$ is continuous;
3. $\varphi $ satisfies the (crude) cocycle property: for almost all $\omega \in \Omega $,
$\varphi (t,\vartheta _{s}(\omega ))\circ \varphi (s,\omega )=\varphi (t+s,\omega ).$
In the case of random dynamical systems driven by a Wiener process $W:\mathbb {R} \times \Omega \to X$, the base flow $\vartheta _{s}:\Omega \to \Omega $ would be given by
$W(t,\vartheta _{s}(\omega ))=W(t+s,\omega )-W(s,\omega )$.
This can be read as saying that $\vartheta _{s}$ "starts the noise at time $s$ instead of time 0". Thus, the cocycle property can be read as saying that evolving the initial condition $x_{0}$ with some noise $\omega $ for $s$ seconds and then through $t$ seconds with the same noise (as started from the $s$ seconds mark) gives the same result as evolving $x_{0}$ through $(t+s)$ seconds with that same noise.
Attractors for random dynamical systems
The notion of an attractor for a random dynamical system is not as straightforward to define as in the deterministic case. For technical reasons, it is necessary to "rewind time", as in the definition of a pullback attractor.[4] Moreover, the attractor is dependent upon the realisation $\omega $ of the noise.
See also
• Chaos theory
• Diffusion process
• Stochastic control
References
1. Bhattacharya, Rabi; Majumdar, Mukul (2003). "Random dynamical systems: a review". Economic Theory. 23 (1): 13–38. doi:10.1007/s00199-003-0357-4. S2CID 15055697.
2. Ye, Felix X.-F.; Wang, Yue; Qian, Hong (August 2016). "Stochastic dynamics: Markov chains and random transformations". Discrete and Continuous Dynamical Systems - Series B. 21 (7): 2337–2361. doi:10.3934/dcdsb.2016050.
3. Arnold, Ludwig (1998). Random Dynamical Systems. ISBN 9783540637585.
4. Crauel, Hans; Debussche, Arnaud; Flandoli, Franco (1997). "Random attractors". Journal of Dynamics and Differential Equations. 9 (2): 307–341. Bibcode:1997JDDE....9..307C. doi:10.1007/BF02219225. S2CID 192603977.
Stochastic processes
Discrete time
• Bernoulli process
• Branching process
• Chinese restaurant process
• Galton–Watson process
• Independent and identically distributed random variables
• Markov chain
• Moran process
• Random walk
• Loop-erased
• Self-avoiding
• Biased
• Maximal entropy
Continuous time
• Additive process
• Bessel process
• Birth–death process
• pure birth
• Brownian motion
• Bridge
• Excursion
• Fractional
• Geometric
• Meander
• Cauchy process
• Contact process
• Continuous-time random walk
• Cox process
• Diffusion process
• Empirical process
• Feller process
• Fleming–Viot process
• Gamma process
• Geometric process
• Hawkes process
• Hunt process
• Interacting particle systems
• Itô diffusion
• Itô process
• Jump diffusion
• Jump process
• Lévy process
• Local time
• Markov additive process
• McKean–Vlasov process
• Ornstein–Uhlenbeck process
• Poisson process
• Compound
• Non-homogeneous
• Schramm–Loewner evolution
• Semimartingale
• Sigma-martingale
• Stable process
• Superprocess
• Telegraph process
• Variance gamma process
• Wiener process
• Wiener sausage
Both
• Branching process
• Galves–Löcherbach model
• Gaussian process
• Hidden Markov model (HMM)
• Markov process
• Martingale
• Differences
• Local
• Sub-
• Super-
• Random dynamical system
• Regenerative process
• Renewal process
• Stochastic chains with memory of variable length
• White noise
Fields and other
• Dirichlet process
• Gaussian random field
• Gibbs measure
• Hopfield model
• Ising model
• Potts model
• Boolean network
• Markov random field
• Percolation
• Pitman–Yor process
• Point process
• Cox
• Poisson
• Random field
• Random graph
Time series models
• Autoregressive conditional heteroskedasticity (ARCH) model
• Autoregressive integrated moving average (ARIMA) model
• Autoregressive (AR) model
• Autoregressive–moving-average (ARMA) model
• Generalized autoregressive conditional heteroskedasticity (GARCH) model
• Moving-average (MA) model
Financial models
• Binomial options pricing model
• Black–Derman–Toy
• Black–Karasinski
• Black–Scholes
• Chan–Karolyi–Longstaff–Sanders (CKLS)
• Chen
• Constant elasticity of variance (CEV)
• Cox–Ingersoll–Ross (CIR)
• Garman–Kohlhagen
• Heath–Jarrow–Morton (HJM)
• Heston
• Ho–Lee
• Hull–White
• LIBOR market
• Rendleman–Bartter
• SABR volatility
• Vašíček
• Wilkie
Actuarial models
• Bühlmann
• Cramér–Lundberg
• Risk process
• Sparre–Anderson
Queueing models
• Bulk
• Fluid
• Generalized queueing network
• M/G/1
• M/M/1
• M/M/c
Properties
• Càdlàg paths
• Continuous
• Continuous paths
• Ergodic
• Exchangeable
• Feller-continuous
• Gauss–Markov
• Markov
• Mixing
• Piecewise-deterministic
• Predictable
• Progressively measurable
• Self-similar
• Stationary
• Time-reversible
Limit theorems
• Central limit theorem
• Donsker's theorem
• Doob's martingale convergence theorems
• Ergodic theorem
• Fisher–Tippett–Gnedenko theorem
• Large deviation principle
• Law of large numbers (weak/strong)
• Law of the iterated logarithm
• Maximal ergodic theorem
• Sanov's theorem
• Zero–one laws (Blumenthal, Borel–Cantelli, Engelbert–Schmidt, Hewitt–Savage, Kolmogorov, Lévy)
Inequalities
• Burkholder–Davis–Gundy
• Doob's martingale
• Doob's upcrossing
• Kunita–Watanabe
• Marcinkiewicz–Zygmund
Tools
• Cameron–Martin formula
• Convergence of random variables
• Doléans-Dade exponential
• Doob decomposition theorem
• Doob–Meyer decomposition theorem
• Doob's optional stopping theorem
• Dynkin's formula
• Feynman–Kac formula
• Filtration
• Girsanov theorem
• Infinitesimal generator
• Itô integral
• Itô's lemma
• Karhunen–Loève theorem
• Kolmogorov continuity theorem
• Kolmogorov extension theorem
• Lévy–Prokhorov metric
• Malliavin calculus
• Martingale representation theorem
• Optional stopping theorem
• Prokhorov's theorem
• Quadratic variation
• Reflection principle
• Skorokhod integral
• Skorokhod's representation theorem
• Skorokhod space
• Snell envelope
• Stochastic differential equation
• Tanaka
• Stopping time
• Stratonovich integral
• Uniform integrability
• Usual hypotheses
• Wiener space
• Classical
• Abstract
Disciplines
• Actuarial mathematics
• Control theory
• Econometrics
• Ergodic theory
• Extreme value theory (EVT)
• Large deviations theory
• Mathematical finance
• Mathematical statistics
• Probability theory
• Queueing theory
• Renewal theory
• Ruin theory
• Signal processing
• Statistics
• Stochastic analysis
• Time series analysis
• Machine learning
• List of topics
• Category
Random element
In probability theory, random element is a generalization of the concept of random variable to more complicated spaces than the simple real line. The concept was introduced by Maurice Fréchet (1948) who commented that the “development of probability theory and expansion of area of its applications have led to necessity to pass from schemes where (random) outcomes of experiments can be described by number or a finite set of numbers, to schemes where outcomes of experiments represent, for example, vectors, functions, processes, fields, series, transformations, and also sets or collections of sets.”[1]
The modern-day usage of “random element” frequently assumes the space of values is a topological vector space, often a Banach or Hilbert space with a specified natural sigma algebra of subsets.[2]
Definition
Let $(\Omega ,{\mathcal {F}},P)$ be a probability space, and $(E,{\mathcal {E}})$ a measurable space. A random element with values in E is a function X: Ω→E which is $({\mathcal {F}},{\mathcal {E}})$-measurable. That is, a function X such that for any $B\in {\mathcal {E}}$, the preimage of B lies in ${\mathcal {F}}$.
Sometimes random elements with values in $E$ are called $E$-valued random variables.
Note if $(E,{\mathcal {E}})=(\mathbb {R} ,{\mathcal {B}}(\mathbb {R} ))$, where $\mathbb {R} $ are the real numbers, and ${\mathcal {B}}(\mathbb {R} )$ is its Borel σ-algebra, then the definition of random element is the classical definition of random variable.
The definition of a random element $X$ with values in a Banach space $B$ is typically understood to utilize the smallest $\sigma $-algebra on B for which every bounded linear functional is measurable. In this case, an equivalent definition to the one above is that a map $X:\Omega \rightarrow B$, from a probability space, is a random element if $f\circ X$ is a random variable for every bounded linear functional f, or, equivalently, that $X$ is weakly measurable.
Examples of random elements
Random variable
Main article: Random variable
A random variable is the simplest type of random element. It is a measurable function $X\colon \Omega \to \mathbb {R} $ from the set of possible outcomes $\Omega $ to $\mathbb {R} $.
As a real-valued function, $X$ often describes some numerical quantity of a given event. E.g. the number of heads after a certain number of coin flips; the heights of different people.
When the image (or range) of $X$ is finite or countably infinite, the random variable is called a discrete random variable[3] and its distribution can be described by a probability mass function which assigns a probability to each value in the image of $X$. If the image is uncountably infinite then $X$ is called a continuous random variable. In the special case that it is absolutely continuous, its distribution can be described by a probability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous,[4] for example a mixture distribution. Such random variables cannot be described by a probability density or a probability mass function.
Random vector
Main article: Random vector
A random vector is a column vector $\mathbf {X} =(X_{1},...,X_{n})^{T}$ (or its transpose, which is a row vector) whose components are scalar-valued random variables on the same probability space $(\Omega ,{\mathcal {F}},P)$, where $\Omega $ is the sample space, ${\mathcal {F}}$ is the sigma-algebra (the collection of all events), and $P$ is the probability measure (a function returning each event's probability).
Random vectors are often used as the underlying implementation of various types of aggregate random variables, e.g. a random matrix, random tree, random sequence, random process, etc.
Random matrix
Main article: Random matrix theory
A random matrix is a matrix-valued random element. Many important properties of physical systems can be represented mathematically as matrix problems. For example, the thermal conductivity of a lattice can be computed from the dynamical matrix of the particle-particle interactions within the lattice.
Random function
Main article: Random function
A random function is a type of random element in which a single outcome is selected from some family of functions, where the family consists of some class of maps from the domain to the codomain. For example, the class may be restricted to all continuous functions or to all step functions. The values determined by a random function evaluated at different points from the same realization would not generally be statistically independent, but, depending on the model, values determined at the same or different points from different realisations might well be treated as independent.
Random process
Main article: Random process
A random process is a collection of random variables, representing the evolution of some system of random values over time. This is the probabilistic counterpart to a deterministic process (or deterministic system). Instead of describing a process which can only evolve in one way (as in the case, for example, of solutions of an ordinary differential equation), in a stochastic or random process there is some indeterminacy: even if the initial condition (or starting point) is known, there are several (often infinitely many) directions in which the process may evolve.
In the simple case of discrete time, as opposed to continuous time, a stochastic process involves a sequence of random variables and the time series associated with these random variables (for example, see Markov chain, also known as discrete-time Markov chain).
Random field
Main article: Random field
Given a probability space $(\Omega ,{\mathcal {F}},P)$ and a measurable space X, an X-valued random field is a collection of X-valued random variables indexed by elements in a topological space T. That is, a random field F is a collection
$\{F_{t}:t\in T\}$
where each $F_{t}$ is an X-valued random variable.
Several kinds of random fields exist, among them the Markov random field (MRF), Gibbs random field (GRF), conditional random field (CRF), and Gaussian random field. An MRF exhibits the Markovian property
$P(X_{i}=x_{i}|X_{j}=x_{j},i\neq j)=P(X_{i}=x_{i}|\partial _{i}),\,$
where $\partial _{i}$ is a set of neighbours of the random variable Xi. In other words, the probability that a random variable assumes a value depends on the other random variables only through the ones that are its immediate neighbours. The probability of a random variable in an MRF is given by
$P(X_{i}=x_{i}|\partial _{i})={\frac {P(\omega )}{\sum _{\omega '}P(\omega ')}},$
where $\omega '$ ranges over realizations that agree with $\omega $ everywhere except possibly at the random variable Xi. It is difficult to calculate with this equation without recourse to the relation between MRFs and GRFs proposed by Julian Besag in 1974.
Random measure
A random measure is a measure-valued random element.[5][6] Let X be a complete separable metric space and ${\mathfrak {B}}(X)$ the σ-algebra of its Borel sets. A Borel measure μ on X is boundedly finite if μ(A) < ∞ for every bounded Borel set A. Let $M_{X}$ be the space of all boundedly finite measures on ${\mathfrak {B}}(X)$. Let (Ω, ℱ, P) be a probability space; then a random measure is a measurable map from this probability space to the measurable space ($M_{X}$, ${\mathfrak {B}}(M_{X})$).[7] A measure may generally be decomposed as:
$\mu =\mu _{d}+\mu _{a}=\mu _{d}+\sum _{n=1}^{N}\kappa _{n}\delta _{X_{n}},$
Here $\mu _{d}$ is a diffuse measure without atoms, while $\mu _{a}$ is a purely atomic measure.
Random set
A random set is a set-valued random element.
One specific example is a random compact set. Let $(M,d)$ be a complete separable metric space. Let ${\mathcal {K}}$ denote the set of all compact subsets of $M$. The Hausdorff metric $h$ on ${\mathcal {K}}$ is defined by
$h(K_{1},K_{2}):=\max \left\{\sup _{a\in K_{1}}\inf _{b\in K_{2}}d(a,b),\sup _{b\in K_{2}}\inf _{a\in K_{1}}d(a,b)\right\}.$
$({\mathcal {K}},h)$ is also a complete separable metric space. The corresponding open subsets generate a σ-algebra on ${\mathcal {K}}$, the Borel sigma algebra ${\mathcal {B}}({\mathcal {K}})$ of ${\mathcal {K}}$.
A random compact set is a measurable function $K$ from a probability space $(\Omega ,{\mathcal {F}},\mathbb {P} )$ into $({\mathcal {K}},{\mathcal {B}}({\mathcal {K}}))$.
Put another way, a random compact set is a measurable function $K\colon \Omega \to 2^{M}$ such that $K(\omega )$ is almost surely compact and
$\omega \mapsto \inf _{b\in K(\omega )}d(x,b)$
is a measurable function for every $x\in M$.
Random geometric objects
These include random points, random figures,[8] and random shapes.[8]
References
1. Fréchet, M. (1948). "Les éléments aléatoires de nature quelconque dans un espace distancié". Annales de l'Institut Henri Poincaré. 10 (4): 215–310.
2. V.V. Buldygin, A.B. Kharazishvili. Geometric Aspects of Probability Theory and Mathematical Statistics. – Kluwer Academic Publishers, Dordrecht. – 2000
3. Yates, Daniel S.; Moore, David S; Starnes, Daren S. (2003). The Practice of Statistics (2nd ed.). New York: Freeman. ISBN 978-0-7167-4773-4. Archived from the original on 2005-02-09.
4. L. Castañeda; V. Arunachalam & S. Dharmaraja (2012). Introduction to Probability and Stochastic Processes with Applications. Wiley. p. 67. ISBN 9781118344941.
5. Kallenberg, O., Random Measures, 4th edition. Academic Press, New York, London; Akademie-Verlag, Berlin (1986). ISBN 0-12-394960-2 MR854102. An authoritative but rather difficult reference.
6. Jan Grandell, Point processes and random measures, Advances in Applied Probability 9 (1977) 502-526. MR0478331 JSTOR A nice and clear introduction.
7. Daley, D. J.; Vere-Jones, D. (2003). An Introduction to the Theory of Point Processes. Probability and its Applications. doi:10.1007/b97277. ISBN 0-387-95541-0.
8. Stoyan, D., and Stoyan, H. (1994) Fractals, Random Shapes and Point Fields. Methods of Geometrical Statistics. Chichester, New York: John Wiley & Sons. ISBN 0-471-93757-6
Literature
• Hoffmann-Jørgensen, J.; Pisier, G. (1976), Ann. Probab., v. 4, 587–589.
• Mourier E. (1955) Elements aleatoires dans un espace de Banach (These). Paris.
• Prokhorov Yu.V. (1999) Random element. Probability and Mathematical statistics. Encyclopedia. Moscow: "Great Russian Encyclopedia", P.623.
External links
• Entry in Springer Encyclopedia of Mathematics
Random Fibonacci sequence
In mathematics, the random Fibonacci sequence is a stochastic analogue of the Fibonacci sequence defined by the recurrence relation $f_{n}=f_{n-1}\pm f_{n-2}$, where the signs + or − are chosen at random with equal probability ${\tfrac {1}{2}}$, independently for different $n$. By a theorem of Harry Kesten and Hillel Furstenberg, random recurrent sequences of this kind grow at a certain exponential rate, but it is difficult to compute the rate explicitly. In 1999, Divakar Viswanath showed that the growth rate of the random Fibonacci sequence is equal to 1.1319882487943... (sequence A078416 in the OEIS), a mathematical constant that was later named Viswanath's constant.[1][2][3]
Description
A random Fibonacci sequence is an integer random sequence given by the numbers $f_{n}$ for natural numbers $n$, where $f_{1}=f_{2}=1$ and the subsequent terms are chosen randomly according to the random recurrence relation
$f_{n}={\begin{cases}f_{n-1}+f_{n-2},&{\text{ with probability }}{\tfrac {1}{2}};\\f_{n-1}-f_{n-2},&{\text{ with probability }}{\tfrac {1}{2}}.\end{cases}}$
An instance of the random Fibonacci sequence starts with 1,1 and the value of each subsequent term is determined by a fair coin toss: given two consecutive elements of the sequence, the next element is either their sum or their difference with probability 1/2, independently of all the choices made previously. If in the random Fibonacci sequence the plus sign is chosen at each step, the corresponding instance is the Fibonacci sequence (Fn),
$1,1,2,3,5,8,13,21,34,55,\ldots .$
If the signs alternate in minus-plus-plus-minus-plus-plus-... pattern, the result is the sequence
$1,1,0,1,1,0,1,1,0,1,\ldots .$
However, such patterns occur with vanishing probability in a random experiment. In a typical run, the terms will not follow a predictable pattern:
$1,1,2,3,1,-2,-3,-5,-2,-3,\ldots {\text{ for the signs }}+,+,+,-,-,+,-,-,\ldots .$
Similarly to the deterministic case, the random Fibonacci sequence may be profitably described via matrices:
${f_{n-1} \choose f_{n}}={\begin{pmatrix}0&1\\\pm 1&1\end{pmatrix}}{f_{n-2} \choose f_{n-1}},$
where the signs are chosen independently for different n with equal probabilities for + or −. Thus
${f_{n-1} \choose f_{n}}=M_{n}M_{n-1}\ldots M_{3}{f_{1} \choose f_{2}},$
where (Mk) is a sequence of independent identically distributed random matrices taking values A or B with probability 1/2:
$A={\begin{pmatrix}0&1\\1&1\end{pmatrix}},\quad B={\begin{pmatrix}0&1\\-1&1\end{pmatrix}}.$
Growth rate
Johannes Kepler discovered that as n increases, the ratio of the successive terms of the Fibonacci sequence (Fn) approaches the golden ratio $\varphi =(1+{\sqrt {5}})/2,$ which is approximately 1.61803. In 1765, Leonhard Euler published an explicit formula, known today as the Binet formula,
$F_{n}={{\varphi ^{n}-(-1/\varphi )^{n}} \over {\sqrt {5}}}.$
It demonstrates that the Fibonacci numbers grow at an exponential rate equal to the golden ratio φ.
In 1960, Hillel Furstenberg and Harry Kesten showed that for a general class of random matrix products, the norm grows as $\lambda ^{n}$, where $n$ is the number of factors. Their results apply to a broad class of random sequence generating processes that includes the random Fibonacci sequence. As a consequence, the $n$th root of $|f_{n}|$ converges to a constant value almost surely, or with probability one:
${\sqrt[{n}]{|f_{n}|}}\to 1.1319882487943\dots {\text{ as }}n\to \infty .$
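This limit can be checked numerically; a minimal simulation sketch follows (run length, seed, and renormalization threshold are arbitrary choices):

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
a, b = 1.0, 1.0      # f_1, f_2
log_scale = 0.0      # accumulated log-magnitude, to avoid floating-point overflow

signs = rng.choice((-1.0, 1.0), size=n)   # fair coin toss for each step
for s in signs:
    a, b = b, b + s * a                   # f_n = f_{n-1} +/- f_{n-2}
    m = max(abs(a), abs(b))
    if m > 1e100:                         # renormalize and remember the scale
        a, b = a / m, b / m
        log_scale += np.log(m)

print(np.exp((log_scale + np.log(max(abs(a), abs(b)))) / n))
# close to Viswanath's constant 1.13198824... for large n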
An explicit expression for this constant was found by Divakar Viswanath in 1999. It uses Furstenberg's formula for the Lyapunov exponent of a random matrix product and integration over a certain fractal measure on the Stern–Brocot tree. Moreover, Viswanath computed the numerical value above using floating point arithmetic validated by an analysis of the rounding error.
Generalization
Mark Embree and Nick Trefethen showed in 1999 that the sequence
$f_{n}=\pm f_{n-1}\pm \beta f_{n-2}$
decays almost surely if β is less than a critical value β* ≈ 0.70258, known as the Embree–Trefethen constant, and otherwise grows almost surely. They also showed that the asymptotic ratio σ(β) between consecutive terms converges almost surely for every value of β. The graph of σ(β) appears to have a fractal structure, with a global minimum near βmin ≈ 0.36747, where σ(βmin) ≈ 0.89517.[4]
References
1. Viswanath, D. (1999). "Random Fibonacci sequences and the number 1.13198824..." Mathematics of Computation. 69 (231): 1131–1155. doi:10.1090/S0025-5718-99-01145-X.
2. Oliveira, J. O. B.; De Figueiredo, L. H. (2002). "Interval Computation of Viswanath's Constant". Reliable Computing. 8 (2): 131. doi:10.1023/A:1014702122205. S2CID 29600050.
3. Makover, E.; McGowan, J. (2006). "An elementary proof that random Fibonacci sequences grow exponentially". Journal of Number Theory. 121: 40–44. arXiv:math.NT/0510159. doi:10.1016/j.jnt.2006.01.002. S2CID 119169165.
4. Embree, M.; Trefethen, L. N. (1999). "Growth and decay of random Fibonacci sequences" (PDF). Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 455 (1987): 2471. Bibcode:1999RSPSA.455.2471T. doi:10.1098/rspa.1999.0412. S2CID 16404862.
External links
• Weisstein, Eric W. "Random Fibonacci Sequence". MathWorld.
• OEIS sequence A078416 (Decimal expansion of Viswanath's constant)
• Random Fibonacci Numbers. Numberphile's video about the random Fibonnaci sequence.
Random geometric graph
In graph theory, a random geometric graph (RGG) is the mathematically simplest spatial network, namely an undirected graph constructed by randomly placing N nodes in some metric space (according to a specified probability distribution) and connecting two nodes by a link if and only if their distance is in a given range, e.g. smaller than a certain neighborhood radius, r.
Random geometric graphs resemble real human social networks in a number of ways. For instance, they spontaneously demonstrate community structure: clusters of nodes with high modularity. Other random graph generation algorithms, such as the Erdős–Rényi model or the Barabási–Albert (BA) model, do not create this type of structure. Additionally, random geometric graphs display degree assortativity according to their spatial dimension:[1] "popular" nodes (those with many links) are particularly likely to be linked to other popular nodes.
A real-world application of RGGs is the modeling of ad hoc networks.[2] Furthermore, they are used to perform benchmarks for graph algorithms.
Definition
In the following, let G = (V, E) denote an undirected graph with a set of vertices V and a set of edges E ⊆ V × V. The set sizes are denoted by |V| = n and |E| = m. Additionally, if not noted otherwise, the metric space [0,1)d with the Euclidean distance is considered, i.e. for any points $x,y\in [0,1)^{d}$ the Euclidean distance of x and y is defined as
$d(x,y)=||x-y||_{2}={\sqrt {\sum _{i=1}^{d}(x_{i}-y_{i})^{2}}}$.
A random geometric graph (RGG) is an undirected geometric graph with nodes randomly sampled from the uniform distribution of the underlying space [0,1)d.[3] Two vertices p, q ∈ V are connected if, and only if, their distance is less than a previously specified parameter r ∈ (0,1), excluding any loops. Thus, the parameters r and n fully characterize a RGG.
Algorithms
Naive algorithm
The naive approach is to calculate the distance of every vertex to every other vertex. As there are ${\frac {n(n-1)}{2}}$ possible connections that are checked, the time complexity of the naive algorithm is $\Theta (n^{2})$. The samples are generated by using a random number generator (RNG) on $[0,1)^{d}$. Practically, one can implement this using $d$ random number generators on $[0,1)$, one RNG for every dimension.
Pseudocode
V := generateSamples(n) // Generates n samples in the unit cube.
for each p ∈ V do
for each q ∈ V\{p} do
if distance(p, q) ≤ r then
addConnection(p, q) // Add the edge (p, q) to the edge data structure.
end if
end for
end for
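A direct translation of this pseudocode into Python might look as follows (n, r, and the seed below are arbitrary illustrative choices):

import numpy as np

def random_geometric_graph(n, r, d=2, seed=0):
    rng = np.random.default_rng(seed)
    points = rng.random((n, d))       # n samples from the unit cube [0,1)^d
    edges = []
    for p in range(n):
        for q in range(p + 1, n):     # check each of the n(n-1)/2 pairs once
            if np.linalg.norm(points[p] - points[q]) <= r:
                edges.append((p, q))  # add the edge (p, q)
    return points, edges

points, edges = random_geometric_graph(n=100, r=0.15)
print(len(edges))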
As this algorithm is not scalable (every vertex needs information of every other vertex), Holtgrewe et al. and Funke et al. have introduced new algorithms for this problem.
Holtgrewe et al.
This algorithm, which was proposed by Holtgrewe et al., was the first distributed RGG generator algorithm for dimension 2.[4] It partitions the unit square into equal sized cells with side length of at least $r$. For a given number $P=p^{2}$ of processors, each processor is assigned ${k \over p}\times {k \over p}$ cells, where $k=\left\lfloor {1/r}\right\rfloor $. For simplicity, $P$ is assumed to be a square number, but this can be generalized to any number of processors. Each processor then generates ${\frac {n}{P}}$ vertices, which are then distributed to their respective owners. The vertices are then sorted by the cell number they fall into, for example with Quicksort. Next, each processor sends its adjacent processors the information about the vertices in the border cells, so that each processing unit can calculate the edges in its partition independently of the other units. The expected running time is $O({\frac {n}{P}}\log {\frac {n}{P}})$. An upper bound for the communication cost of this algorithm is given by $T_{all-to-all}(n/P,P)+T_{all-to-all}(1,P)+T_{point-to-point}(n/(k\cdot {P})+2)$, where $T_{all-to-all}(l,c)$ denotes the time for an all-to-all communication with messages of length l bits to c communication partners, and $T_{point-to-point}(l)$ is the time taken for a point-to-point communication for a message of length l bits.
Since this algorithm is not communication free, Funke et al. proposed[4] a scalable distributed RGG generator for higher dimensions, which works without any communication between the processing units.
Funke et al.
The approach used in this algorithm[4] is similar to the approach in Holtgrewe: partition the unit cube into equal sized chunks with side length of at least r. So in d = 2 these will be squares, in d = 3 cubes. As at most ${\left\lfloor {1/r}\right\rfloor }$ chunks fit per dimension, the number of chunks is capped at ${\left\lfloor {1/r}\right\rfloor }^{d}$. As before, each processor is assigned ${{\left\lfloor {1/r}\right\rfloor }^{d} \over P}$ chunks, for which it generates the vertices. To achieve a communication-free process, each processor then generates the same vertices in the adjacent chunks by exploiting pseudorandomization of seeded hash functions. This way, each processor calculates the same vertices and there is no need for exchanging vertex information.
For dimension 3, Funke et al. showed that the expected running time is $ O({\frac {m+n}{P}}+\log {P})$, without any cost for communication between processing units.
Properties
Isolated vertices and connectivity
The probability that a single vertex is isolated in an RGG is $(1-\pi r^{2})^{n-1}$.[5] Let $X$ be the random variable counting how many vertices are isolated. Then the expected value of $X$ is $E(X)=n(1-\pi r^{2})^{n-1}=ne^{-\pi r^{2}n}-O(r^{4}n)$. The term $\mu =ne^{-\pi r^{2}n}$ provides information about the connectivity of the RGG. For $\mu \longrightarrow 0$, the RGG is asymptotically almost surely connected. For $\mu \longrightarrow \infty $, the RGG is asymptotically almost surely disconnected. And for $\mu =\Theta (1)$, the RGG has a giant component that covers more than ${\frac {n}{2}}$ vertices and $X$ is Poisson distributed with parameter $\mu $. It follows that if $\mu =\Theta (1)$, the probability that the RGG is connected is $P[X=0]\sim e^{-\mu }$ and the probability that the RGG is not connected is $P[X>0]\sim 1-e^{-\mu }$.
For any $l_{p}$-norm ($1\leq p\leq \infty $) and for any number of dimensions $d>2$, an RGG possesses a sharp threshold of connectivity at $r\sim \left({\ln(n) \over \alpha _{p,d}n}\right)^{1 \over d}$ with constant $\alpha _{p,d}$. In the special case of a two-dimensional space and the Euclidean norm ($d=2$ and $p=2$) this yields $r\sim {\sqrt {\ln(n) \over \pi n}}$.
Hamiltonicity
It has been shown that in the two-dimensional case, the threshold $r\sim {\sqrt {\ln(n) \over \pi n}}$ also provides information about the existence of a Hamiltonian cycle (Hamiltonian path).[6] For any $\epsilon >0$, if $r\sim {\sqrt {\ln(n) \over (\pi +\epsilon )n}}$, then the RGG asymptotically almost surely has no Hamiltonian cycle, and if $r\sim {\sqrt {\ln(n) \over (\pi -\epsilon )n}}$ for any $\epsilon >0$, then the RGG asymptotically almost surely has a Hamiltonian cycle.
Clustering coefficient
The clustering coefficient of RGGs only depends on the dimension d of the underlying space [0,1)d. The clustering coefficient is [7]
$C_{d}=1-H_{d}(1)$ for even $d$ and $C_{d}={3 \over 2}-H_{d}({1 \over 2})$ for odd $d$, where
$H_{d}(x)={1 \over {\sqrt {\pi }}}\sum _{i=x}^{d \over 2}{\Gamma (i) \over \Gamma (i+{1 \over 2})}\left({3 \over 4}\right)^{i+{1 \over 2}}$
For large $d$, this simplifies to $C_{d}\sim 3{\sqrt {2 \over \pi d}}\left({3 \over 4}\right)^{d+1 \over 2}$.
Generalized random geometric graphs
In 1988 Waxman[8] generalised the standard RGG by introducing a probabilistic connection function as opposed to the deterministic one suggested by Gilbert. The example introduced by Waxman was a stretched exponential where two nodes $i$ and $j$ connect with probability given by $H_{ij}=\beta e^{-{r_{ij} \over r_{0}}}$, where $r_{ij}$ is the Euclidean separation and $\beta $, $r_{0}$ are parameters determined by the system. This type of RGG with a probabilistic connection function is often referred to as a soft random geometric graph, which now has two sources of randomness: the location of nodes (vertices) and the formation of links (edges). This connection function has been generalized further in the literature to $H_{ij}=\beta e^{-\left({r_{ij} \over r_{0}}\right)^{\eta }}$, which is often used to study wireless networks without interference. The parameter $\eta $ represents how the signal decays with distance: $\eta =2$ models free space, $\eta >2$ models a more cluttered environment like a town ($\eta =6$ models cities like New York), whilst $\eta <2$ models highly reflective environments. For $\eta =1$ this is the Waxman model, whilst as $\eta \to \infty $ with $\beta =1$ we recover the standard RGG. Intuitively, these connection functions model how the probability of a link being made decays with distance.
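A sketch of sampling such a soft RGG with the generalized connection function (all parameter values below are illustrative choices):

import numpy as np

def soft_rgg(n=200, beta=1.0, r0=0.1, eta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2))          # node locations: first source of randomness
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            r_ij = np.linalg.norm(pts[i] - pts[j])
            # link formation: second source of randomness,
            # with probability H_ij = beta * exp(-(r_ij / r0)^eta)
            if rng.random() < beta * np.exp(-(r_ij / r0) ** eta):
                edges.append((i, j))
    return pts, edges

pts, edges = soft_rgg()
print(len(edges))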
Overview of some results for Soft RGG
In the high density limit for a network with exponential connection function the number of isolated nodes is Poisson distributed, and the resulting network contains a unique giant component and isolated nodes only.[9] Therefore, by ensuring there are no isolated nodes, in the dense regime the network is a.a.s. fully connected; similar to the results shown in [10] for the disk model. Often the properties of these networks, such as betweenness centrality[11] and connectivity,[9] are studied in the limit as the density $\to \infty $, which often means border effects become negligible. However, in real life, where networks are finite (although they can still be extremely dense), border effects will impact full connectivity; in fact [12] showed that full connectivity, with an exponential connection function, is greatly impacted by boundary effects, as nodes near the corner/face of a domain are less likely to connect compared with those in the bulk. As a result, full connectivity can be expressed as a sum of the contributions from the bulk and the geometry's boundaries. A more general analysis of the connection functions in wireless networks has shown that the probability of full connectivity can be well approximated by a few moments of the connection function and the region's geometry.[13]
References
1. Antonioni, Alberto; Tomassini, Marco (28 September 2012). "Degree correlations in random geometric graphs". Physical Review E. 86 (3): 037101. arXiv:1207.2573. Bibcode:2012PhRvE..86c7101A. doi:10.1103/PhysRevE.86.037101. PMID 23031054. S2CID 14750415.
2. Nekovee, Maziar (28 June 2007). "Worm epidemics in wireless ad hoc networks". New Journal of Physics. 9 (6): 189. arXiv:0707.2293. Bibcode:2007NJPh....9..189N. doi:10.1088/1367-2630/9/6/189. S2CID 203944.
3. Penrose, Mathew. (2003). Random geometric graphs. Oxford: Oxford University Press. ISBN 0198506260. OCLC 51316118.
4. von Looz, Moritz; Strash, Darren; Schulz, Christian; Penschuck, Manuel; Sanders, Peter; Meyer, Ulrich; Lamm, Sebastian; Funke, Daniel (2017-10-20). "Communication-free Massively Distributed Graph Generation". arXiv:1710.07565v3 [cs.DC].
5. Perez, Xavier; Mitsche, Dieter; Diaz, Josep (2007-02-13). "Dynamic Random Geometric Graphs". arXiv:cs/0702074. Bibcode:2007cs........2074D. {{cite journal}}: Cite journal requires |journal= (help)
6. Perez, X.; Mitsche, D.; Diaz, J. (2006-07-07). "Sharp threshold for hamiltonicity of random geometric graphs". arXiv:cs/0607023. Bibcode:2006cs........7023D. {{cite journal}}: Cite journal requires |journal= (help)
7. Christensen, Michael; Dall, Jesper (2002-03-01). "Random Geometric Graphs". Physical Review E. 66 (1 Pt 2): 016121. arXiv:cond-mat/0203026. Bibcode:2002PhRvE..66a6121D. doi:10.1103/PhysRevE.66.016121. PMID 12241440. S2CID 15193516.
8. Waxman, B.M (1988). "Routing of multipoint connections". IEEE Journal on Selected Areas in Communications. 6 (9): 1617–1622. doi:10.1109/49.12889.
9. Mao, G; Anderson, B.D (2013). "Connectivity of large wireless networks under a general connection model". IEEE Transactions on Information Theory. 59 (3): 1761–1772. doi:10.1109/tit.2012.2228894. S2CID 3027610.
10. Penrose, Mathew D (1997). "The longest edge of the random minimal spanning tree". The Annals of Applied Probability: 340361.
11. Giles, Alexander P.; Georgiou, Orestis; Dettmann, Carl P. (2015). "Betweenness centrality in dense random geometric networks". 2015 IEEE International Conference on Communications (ICC). pp. 6450–6455. arXiv:1410.8521. Bibcode:2014arXiv1410.8521K. doi:10.1109/ICC.2015.7249352. ISBN 978-1-4673-6432-4. S2CID 928409.
12. Coon, J; Dettmann, C P; Georgiou, O (2012). "Full connectivity: corners, edges and faces". Journal of Statistical Physics. 147 (4): 758–778. arXiv:1201.3123. Bibcode:2012JSP...147..758C. doi:10.1007/s10955-012-0493-y. S2CID 18794396.
13. Dettmann, C.P; Georgiou, O (2016). "Random geometric graphs with general connection functions". Physical Review E. 93 (3): 032313. arXiv:1411.3617. Bibcode:2016PhRvE..93c2313D. doi:10.1103/physreve.93.032313. PMID 27078372. S2CID 124506496.
Random group
In mathematics, random groups are certain groups obtained by a probabilistic construction. They were introduced by Misha Gromov to answer questions such as "What does a typical group look like?"
It so happens that, once a precise definition is given, random groups satisfy some properties with very high probability, whereas other properties fail with very high probability. For instance, very probably random groups are hyperbolic groups. In this sense, one can say that "most groups are hyperbolic".
Definition
The definition of random groups depends on a probabilistic model on the set of possible groups. Various such probabilistic models yield different (but related) notions of random groups.
Any group can be defined by a group presentation involving generators and relations. For instance, the Abelian group $\mathbb {Z} \times \mathbb {Z} $ has a presentation with two generators $a$ and $b$, and the relation $ab=ba$, or equivalently $aba^{-1}b^{-1}=1$. The main idea of random groups is to start with a fixed number of group generators $a_{1},\,a_{2},\,\ldots ,\,a_{m}$, and imposing relations of the form $r_{1}=1,\,r_{2}=1,\,\ldots ,\,r_{k}=1$ where each $r_{j}$ is a random word involving the letters $a_{i}$ and their formal inverses $a_{i}^{-1}$. To specify a model of random groups is to specify a precise way in which $m$, $k$ and the random relations $r_{j}$ are chosen.
Once the random relations $r_{k}$ have been chosen, the resulting random group $G$ is defined in the standard way for group presentations, namely: $G$ is the quotient of the free group $F_{m}$ with generators $a_{1},\,a_{2},\,\ldots ,\,a_{m}$, by the normal subgroup $R\subset F_{m}$ generated by the relations $r_{1},\,r_{2},\,\ldots ,\,r_{k}$ seen as elements of $F_{m}$:
$G=F_{m}/\langle r_{1},\,r_{2},\,\ldots ,\,r_{k}\rangle .$
The few-relator model of random groups
The simplest model of random groups is the few-relator model. In this model, a number of generators $m\geq 2$ and a number of relations $k\geq 1$ are fixed. Fix an additional parameter $\ell $ (the length of the relations), which is typically taken very large.
Then, the model consists in choosing the relations $r_{1},\,r_{2},\,\ldots ,\,r_{k}$ at random, uniformly and independently among all possible reduced words of length at most $\ell $ involving the letters $a_{i}$ and their formal inverses $a_{i}^{-1}$.
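For illustration, the following sketch samples a uniformly random reduced word of a given exact length (a slight simplification of "length at most $\ell $"); generators $a_{i}$ are encoded as integers $i$ and their formal inverses as $-i$:

import random

def random_reduced_word(m, length, rng):
    # Uniform reduced word on m generators: letter i stands for a_i,
    # letter -i for its formal inverse.
    letters = [s * i for i in range(1, m + 1) for s in (1, -1)]
    word = []
    while len(word) < length:
        letter = rng.choice(letters)
        if word and word[-1] == -letter:   # the word would not be reduced:
            continue                       # a_i a_i^{-1} cancels, so retry
        word.append(letter)
    return word

print(random_reduced_word(m=2, length=10, rng=random.Random(0)))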
This model is especially interesting when the relation length $\ell $ tends to infinity: with probability tending to $1$ as $\ell \to \infty $ a random group in this model is hyperbolic and satisfies other nice properties.
Further remarks
More refined models of random groups have been defined.
For instance, in the density model, the number of relations is allowed to grow with the length of the relations. Then there is a sharp "phase transition" phenomenon: if the number of relations is larger than some threshold, the random group "collapses" (because the relations make it possible to show that any word is equal to any other), whereas below the threshold the resulting random group is infinite and hyperbolic.
Constructions of random groups can also be twisted in specific ways to build groups with particular properties. For instance, Gromov used this technique to build new groups that are counter-examples to an extension of the Baum–Connes conjecture.
References
• Mikhail Gromov. Hyperbolic groups. Essays in group theory, 75–263, Math. Sci. Res. Inst. Publ., 8, Springer, New York, 1987.
• Mikhail Gromov. "Random walk in random groups." Geom. Funct. Anal., vol. 13 (2003), 73–146.
Random minimum spanning tree
In mathematics, a random minimum spanning tree may be formed by assigning random weights from some distribution to the edges of an undirected graph, and then constructing the minimum spanning tree of the graph.
When the given graph is a complete graph on n vertices, and the edge weights have a continuous distribution function whose derivative at zero is D > 0, then the expected weight of its random minimum spanning trees is bounded by a constant, rather than growing as a function of n. More precisely, this constant tends in the limit (as n goes to infinity) to ζ(3)/D, where ζ is the Riemann zeta function and ζ(3) is Apéry's constant. For instance, for edge weights that are uniformly distributed on the unit interval, the derivative is D = 1, and the limit is just ζ(3).[1]
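This limit can be observed in simulation; a quick sketch using SciPy's minimum spanning tree routine (n and the number of trials are arbitrary choices):

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
n, trials = 200, 20
total = 0.0
for _ in range(trials):
    # i.i.d. uniform(0,1) weights on each edge i < j of the complete graph;
    # zero entries are read as absent edges, and exact zeros occur with
    # negligible probability here.
    w = np.triu(rng.random((n, n)), 1)
    total += minimum_spanning_tree(w).sum()

print(total / trials)   # close to zeta(3) = 1.2020569... for large n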
In contrast to uniformly random spanning trees of complete graphs, for which the typical diameter is proportional to the square root of the number of vertices, random minimum spanning trees of complete graphs have typical diameter proportional to the cube root.[2]
Random minimum spanning trees of grid graphs may be used for invasion percolation models of liquid flow through a porous medium,[3] and for maze generation.[4]
References
1. Frieze, A. M. (1985), "On the value of a random minimum spanning tree problem", Discrete Applied Mathematics, 10 (1): 47–56, doi:10.1016/0166-218X(85)90058-7, MR 0770868.
2. Goldschmidt, Christina, Random minimum spanning trees, Mathematical Institute, University of Oxford, retrieved 2019-09-13
3. Duxbury, P. M.; Dobrin, R.; McGarrity, E.; Meinke, J. H.; Donev, A.; Musolff, C.; Holm, E. A. (2004), "Network algorithms and critical manifolds in disordered systems", Computer Simulation Studies in Condensed-Matter Physics XVI: Proceedings of the Fifteenth Workshop, Athens, GA, USA, February 24–28, 2003, Springer Proceedings in Physics, vol. 95, Springer-Verlag, pp. 181–194, doi:10.1007/978-3-642-59293-5_25.
4. Foltin, Martin (2011), Automated Maze Generation and Human Interaction (PDF), Diploma Thesis, Brno: Masaryk University, Faculty of Informatics.
Random graph
In mathematics, random graph is the general term to refer to probability distributions over graphs. Random graphs may be described simply by a probability distribution, or by a random process which generates them.[1][2] The theory of random graphs lies at the intersection between graph theory and probability theory. From a mathematical perspective, random graphs are used to answer questions about the properties of typical graphs. Its practical applications are found in all areas in which complex networks need to be modeled – many random graph models are thus known, mirroring the diverse types of complex networks encountered in different areas. In a mathematical context, random graph refers almost exclusively to the Erdős–Rényi random graph model. In other contexts, any graph model may be referred to as a random graph.
For the countably-infinite random graph, see Rado graph.
Part of a series on
Network science
• Theory
• Graph
• Complex network
• Contagion
• Small-world
• Scale-free
• Community structure
• Percolation
• Evolution
• Controllability
• Graph drawing
• Social capital
• Link analysis
• Optimization
• Reciprocity
• Closure
• Homophily
• Transitivity
• Preferential attachment
• Balance theory
• Network effect
• Social influence
Network types
• Informational (computing)
• Telecommunication
• Transport
• Social
• Scientific collaboration
• Biological
• Artificial neural
• Interdependent
• Semantic
• Spatial
• Dependency
• Flow
• on-Chip
Graphs
Features
• Clique
• Component
• Cut
• Cycle
• Data structure
• Edge
• Loop
• Neighborhood
• Path
• Vertex
• Adjacency list / matrix
• Incidence list / matrix
Types
• Bipartite
• Complete
• Directed
• Hyper
• Labeled
• Multi
• Random
• Weighted
• Metrics
• Algorithms
• Centrality
• Degree
• Motif
• Clustering
• Degree distribution
• Assortativity
• Distance
• Modularity
• Efficiency
Models
Topology
• Random graph
• Erdős–Rényi
• Barabási–Albert
• Bianconi–Barabási
• Fitness model
• Watts–Strogatz
• Exponential random (ERGM)
• Random geometric (RGG)
• Hyperbolic (HGN)
• Hierarchical
• Stochastic block
• Blockmodeling
• Maximum entropy
• Soft configuration
• LFR Benchmark
Dynamics
• Boolean network
• agent based
• Epidemic/SIR
• Lists
• Categories
• Topics
• Software
• Network scientists
• Category:Network theory
• Category:Graph theory
Models
A random graph is obtained by starting with a set of n isolated vertices and adding successive edges between them at random. The aim of the study in this field is to determine at what stage a particular property of the graph is likely to arise.[3] Different random graph models produce different probability distributions on graphs. Most commonly studied is the one proposed by Edgar Gilbert, denoted G(n,p), in which every possible edge occurs independently with probability 0 < p < 1. The probability of obtaining any one particular random graph with m edges is $p^{m}(1-p)^{N-m}$ with the notation $N={\tbinom {n}{2}}$.[4]
A closely related model, the Erdős–Rényi model denoted G(n,M), assigns equal probability to all graphs with exactly M edges. With 0 ≤ M ≤ N, G(n,M) has ${\tbinom {N}{M}}$ elements and every element occurs with probability $1/{\tbinom {N}{M}}$.[3] The latter model can be viewed as a snapshot at a particular time (M) of the random graph process ${\tilde {G}}_{n}$, which is a stochastic process that starts with n vertices and no edges, and at each step adds one new edge chosen uniformly from the set of missing edges.
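As a minimal sketch (not taken from the cited sources), both models can be sampled directly; the function names gilbert_gnp and erdos_renyi_gnm are illustrative.

```python
# Sampling Gilbert's G(n, p) and the Erdos-Renyi G(n, M) model.
import itertools
import random

def gilbert_gnp(n: int, p: float) -> set[tuple[int, int]]:
    """Each of the N = n(n-1)/2 possible edges appears independently with probability p."""
    return {e for e in itertools.combinations(range(n), 2) if random.random() < p}

def erdos_renyi_gnm(n: int, m: int) -> set[tuple[int, int]]:
    """A uniformly random graph with exactly m edges."""
    return set(random.sample(list(itertools.combinations(range(n), 2)), m))

if __name__ == "__main__":
    g1 = gilbert_gnp(100, 0.05)     # expected number of edges: 0.05 * 4950 = 247.5
    g2 = erdos_renyi_gnm(100, 247)  # exactly 247 edges
    print(len(g1), len(g2))
```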
If instead we start with an infinite set of vertices, and again let every possible edge occur independently with probability 0 < p < 1, then we get an object G called an infinite random graph. Except in the trivial cases when p is 0 or 1, such a G almost surely has the following property:
Given any n + m elements $a_{1},\ldots ,a_{n},b_{1},\ldots ,b_{m}\in V$, there is a vertex c in V that is adjacent to each of $a_{1},\ldots ,a_{n}$ and is not adjacent to any of $b_{1},\ldots ,b_{m}$.
It turns out that if the vertex set is countable then there is, up to isomorphism, only a single graph with this property, namely the Rado graph. Thus any countably infinite random graph is almost surely the Rado graph, which for this reason is sometimes called simply the random graph. However, the analogous result is not true for uncountable graphs: there are many (nonisomorphic) uncountable graphs satisfying the above property.
Another model, which generalizes Gilbert's random graph model, is the random dot-product model. A random dot-product graph associates with each vertex a real vector. The probability of an edge uv between any vertices u and v is some function of the dot product u • v of their respective vectors.
The network probability matrix models random graphs through edge probabilities, which represent the probability $p_{i,j}$ that a given edge $e_{i,j}$ exists for a specified time period. This model is extensible to directed and undirected, weighted and unweighted, and static or dynamic graph structures.
For M ≃ pN, where N is the maximal number of edges possible, the two most widely used models, G(n,M) and G(n,p), are almost interchangeable.[5]
Random regular graphs form a special case, with properties that may differ from random graphs in general.
Once we have a model of random graphs, every function on graphs becomes a random variable. The study of this model is to determine whether, or at least to estimate the probability that, a property may occur.[4]
Terminology
The term 'almost every' in the context of random graphs refers to a sequence of spaces and probabilities, such that the error probabilities tend to zero.[4]
Properties
The theory of random graphs studies typical properties of random graphs, those that hold with high probability for graphs drawn from a particular distribution. For example, we might ask for a given value of $n$ and $p$ what the probability is that $G(n,p)$ is connected. In studying such questions, researchers often concentrate on the asymptotic behavior of random graphs—the values that various probabilities converge to as $n$ grows very large. Percolation theory characterizes the connectedness of random graphs, especially infinitely large ones.
Percolation is related to the robustness of the graph (also called the network). Given a random graph of $n$ nodes with an average degree $\langle k\rangle $, remove a randomly chosen fraction $1-p$ of the nodes, leaving only a fraction $p$. There exists a critical percolation threshold $p_{c}={\tfrac {1}{\langle k\rangle }}$ below which the network becomes fragmented, while above $p_{c}$ a giant connected component exists.[1][5][6][7][8]
Localized percolation refers to removing a node, its neighbors, next nearest neighbors, etc., until a fraction $1-p$ of the nodes of the network is removed. It was shown that for random graphs with a Poisson distribution of degrees, $p_{c}={\tfrac {1}{\langle k\rangle }}$ exactly as for random removal.
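A minimal simulation sketch (not from the cited sources) of the threshold $p_{c}=1/\langle k\rangle $: nodes of a sparse $G(n,p)$ graph are kept independently with probability $q$ (written q in the code to avoid clashing with the edge probability p), and the size of the largest surviving component is measured. A giant component should appear only for $q$ above roughly $1/\langle k\rangle $; the helper largest_component and all parameter values are illustrative.

```python
# Node-removal percolation on a G(n, p) graph with <k> = p (n - 1).
import itertools
import random

def largest_component(n, edges, kept):
    """Size of the largest component among kept nodes, via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        if kept[u] and kept[v]:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    sizes = {}
    for v in range(n):
        if kept[v]:
            r = find(v)
            sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0)

if __name__ == "__main__":
    n, avg_k = 2000, 4.0                    # <k> = 4 gives p_c = 0.25
    p = avg_k / (n - 1)
    edges = [e for e in itertools.combinations(range(n), 2) if random.random() < p]
    for q in (0.1, 0.25, 0.5, 0.9):         # fraction of nodes kept
        kept = [random.random() < q for _ in range(n)]
        print(q, largest_component(n, edges, kept))
```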
Random graphs are widely used in the probabilistic method, where one tries to prove the existence of graphs with certain properties. The existence of a property on a random graph can often imply, via the Szemerédi regularity lemma, the existence of that property on almost all graphs.
In random regular graphs, $G(n,r{\text{-reg}})$ is the set of $r$-regular graphs with $r=r(n)$ such that $n$ and $m$ are natural numbers, $3\leq r<n$, and $rn=2m$ is even.[3]
The degree sequence of a graph $G$ in $G^{n}$ depends only on the number of edges in the sets[3]
$V_{n}^{(2)}=\left\{ij\ :\ 1\leq j\leq n,\ i\neq j\right\}\subset V^{(2)},\qquad i=1,\cdots ,n.$
If the number of edges $M$ in a random graph $G_{M}$ is large enough to ensure that almost every $G_{M}$ has minimum degree at least 1, then almost every $G_{M}$ is connected and, if $n$ is even, almost every $G_{M}$ has a perfect matching. In particular, the moment the last isolated vertex vanishes in almost every random graph, the graph becomes connected.[3]
Almost every graph process on an even number of vertices in which the edge raising the minimum degree to 1 has been added, or a random graph with slightly more than ${\tfrac {n}{4}}\log(n)$ edges, has, with probability close to 1, a complete matching, with the exception of at most one vertex.
For some constant $c$, almost every labeled graph with $n$ vertices and at least $cn\log(n)$ edges is Hamiltonian. With probability tending to 1, the particular edge that increases the minimum degree to 2 makes the graph Hamiltonian.
Properties of random graphs may change or remain invariant under graph transformations. Mashaghi et al., for example, demonstrated that a transformation which converts random graphs to their edge-dual graphs (or line graphs) produces an ensemble of graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient.[9]
Coloring
Given a random graph G of order n with vertex set V(G) = {1, ..., n}, by the greedy algorithm on the number of colors, the vertices can be colored with colors 1, 2, ... (vertex 1 is colored 1, vertex 2 is colored 1 if it is not adjacent to vertex 1, otherwise it is colored 2, and so on).[3] The number of proper colorings of random graphs given q colors, called its chromatic polynomial, remains unknown so far. The scaling of zeros of the chromatic polynomial of random graphs with parameters n and the number of edges m or the connection probability p has been studied empirically using an algorithm based on symbolic pattern matching.[10]
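A minimal sketch of the greedy coloring just described (the function name greedy_coloring and the example adjacency structure are assumptions of the example): vertices are scanned in order and each receives the smallest color not already used by a colored neighbor.

```python
# Greedy vertex coloring in scan order, as described above.
def greedy_coloring(n: int, adj: dict[int, set[int]]) -> list[int]:
    color = [0] * n                      # 0 means "not yet colored"
    for v in range(n):
        used = {color[u] for u in adj.get(v, ()) if color[u] > 0}
        c = 1
        while c in used:                 # smallest color unused by neighbors
            c += 1
        color[v] = c
    return color

# Example: a 4-cycle 0-1-2-3-0 gets two colors.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(greedy_coloring(4, adj))           # e.g. [1, 2, 1, 2]
```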
Random trees
A random tree is a tree or arborescence that is formed by a stochastic process. In a large range of random graphs of order n and size M(n) the distribution of the number of tree components of order k is asymptotically Poisson. Types of random trees include uniform spanning tree, random minimal spanning tree, random binary tree, treap, rapidly exploring random tree, Brownian tree, and random forest.
Conditional random graphs
Consider a given random graph model defined on the probability space $(\Omega ,{\mathcal {F}},P)$ and let ${\mathcal {P}}(G):\Omega \rightarrow R^{m}$ be a real valued function which assigns to each graph in $\Omega $ a vector of m properties. For a fixed $\mathbf {p} \in R^{m}$, conditional random graphs are models in which the probability measure $P$ assigns zero probability to all graphs such that ${\mathcal {P}}(G)\neq \mathbf {p} $.
Special cases are conditionally uniform random graphs, where $P$ assigns equal probability to all the graphs having specified properties. They can be seen as a generalization of the Erdős–Rényi model G(n,M), when the conditioning information is not necessarily the number of edges M, but whatever other arbitrary graph property ${\mathcal {P}}(G)$. In this case very few analytical results are available and simulation is required to obtain empirical distributions of average properties.
History
The earliest use of a random graph model was by Helen Hall Jennings and Jacob Moreno in 1938, where a "chance sociogram" (a directed Erdős–Rényi model) was considered in comparing the fraction of reciprocated links in their network data with that of the random model.[11] Another use, under the name "random net", was by Ray Solomonoff and Anatol Rapoport in 1951, using a model of directed graphs with fixed out-degree and randomly chosen attachments to other vertices.[12]
The Erdős–Rényi model of random graphs was first defined by Paul Erdős and Alfréd Rényi in their 1959 paper "On Random Graphs"[8] and independently by Gilbert in his paper "Random graphs".[6]
See also
• Bose–Einstein condensation: a network theory approach
• Cavity method
• Complex networks
• Dual-phase evolution
• Erdős–Rényi model
• Exponential random graph model
• Graph theory
• Interdependent networks
• Network science
• Percolation
• Percolation theory
• Random graph theory of gelation
• Regular graph
• Scale free network
• Semilinear response
• Stochastic block model
• Lancichinetti–Fortunato–Radicchi benchmark
References
1. Bollobás, Béla (2001). Random Graphs (2nd ed.). Cambridge University Press.
2. Frieze, Alan; Karonski, Michal (2015). Introduction to Random Graphs. Cambridge University Press.
3. Béla Bollobás, Random Graphs, 1985, Academic Press Inc., London Ltd.
4. Béla Bollobás, Probabilistic Combinatorics and Its Applications, 1991, Providence, RI: American Mathematical Society.
5. Bollobas, B. and Riordan, O.M. "Mathematical results on scale-free random graphs" in "Handbook of Graphs and Networks" (S. Bornholdt and H.G. Schuster (eds)), Wiley VCH, Weinheim, 1st ed., 2003
6. Gilbert, E. N. (1959), "Random graphs", Annals of Mathematical Statistics, 30 (4): 1141–1144, doi:10.1214/aoms/1177706098.
7. Newman, M. E. J. (2010). Networks: An Introduction. Oxford.
8. Erdős, P.; Rényi, A. (1959), "On Random Graphs I", Publ. Math. Debrecen, 6, pp. 290–297. Archived 2020-08-07 at the Wayback Machine
9. Ramezanpour, A.; Karimipour, V.; Mashaghi, A. (2003). "Generating correlated networks from uncorrelated ones". Phys. Rev. E. 67 (46107): 046107. arXiv:cond-mat/0212469. Bibcode:2003PhRvE..67d6107R. doi:10.1103/PhysRevE.67.046107. PMID 12786436. S2CID 33054818.
10. Van Bussel, Frank; Ehrlich, Christoph; Fliegner, Denny; Stolzenberg, Sebastian; Timme, Marc (2010). "Chromatic Polynomials of Random Graphs". J. Phys. A: Math. Theor. 43 (17): 175002. arXiv:1709.06209. Bibcode:2010JPhA...43q5002V. doi:10.1088/1751-8113/43/17/175002. S2CID 15723612.
11. Moreno, Jacob L; Jennings, Helen Hall (Jan 1938). "Statistics of Social Configurations". Sociometry. 1 (3/4): 342–374. doi:10.2307/2785588. JSTOR 2785588.
12. Solomonoff, Ray; Rapoport, Anatol (June 1951). "Connectivity of random nets". Bulletin of Mathematical Biophysics. 13 (2): 107–117. doi:10.1007/BF02478357.
Stochastic processes
Discrete time
• Bernoulli process
• Branching process
• Chinese restaurant process
• Galton–Watson process
• Independent and identically distributed random variables
• Markov chain
• Moran process
• Random walk
• Loop-erased
• Self-avoiding
• Biased
• Maximal entropy
Continuous time
• Additive process
• Bessel process
• Birth–death process
• pure birth
• Brownian motion
• Bridge
• Excursion
• Fractional
• Geometric
• Meander
• Cauchy process
• Contact process
• Continuous-time random walk
• Cox process
• Diffusion process
• Empirical process
• Feller process
• Fleming–Viot process
• Gamma process
• Geometric process
• Hawkes process
• Hunt process
• Interacting particle systems
• Itô diffusion
• Itô process
• Jump diffusion
• Jump process
• Lévy process
• Local time
• Markov additive process
• McKean–Vlasov process
• Ornstein–Uhlenbeck process
• Poisson process
• Compound
• Non-homogeneous
• Schramm–Loewner evolution
• Semimartingale
• Sigma-martingale
• Stable process
• Superprocess
• Telegraph process
• Variance gamma process
• Wiener process
• Wiener sausage
Both
• Branching process
• Galves–Löcherbach model
• Gaussian process
• Hidden Markov model (HMM)
• Markov process
• Martingale
• Differences
• Local
• Sub-
• Super-
• Random dynamical system
• Regenerative process
• Renewal process
• Stochastic chains with memory of variable length
• White noise
Fields and other
• Dirichlet process
• Gaussian random field
• Gibbs measure
• Hopfield model
• Ising model
• Potts model
• Boolean network
• Markov random field
• Percolation
• Pitman–Yor process
• Point process
• Cox
• Poisson
• Random field
• Random graph
Time series models
• Autoregressive conditional heteroskedasticity (ARCH) model
• Autoregressive integrated moving average (ARIMA) model
• Autoregressive (AR) model
• Autoregressive–moving-average (ARMA) model
• Generalized autoregressive conditional heteroskedasticity (GARCH) model
• Moving-average (MA) model
Financial models
• Binomial options pricing model
• Black–Derman–Toy
• Black–Karasinski
• Black–Scholes
• Chan–Karolyi–Longstaff–Sanders (CKLS)
• Chen
• Constant elasticity of variance (CEV)
• Cox–Ingersoll–Ross (CIR)
• Garman–Kohlhagen
• Heath–Jarrow–Morton (HJM)
• Heston
• Ho–Lee
• Hull–White
• LIBOR market
• Rendleman–Bartter
• SABR volatility
• Vašíček
• Wilkie
Actuarial models
• Bühlmann
• Cramér–Lundberg
• Risk process
• Sparre–Anderson
Queueing models
• Bulk
• Fluid
• Generalized queueing network
• M/G/1
• M/M/1
• M/M/c
Properties
• Càdlàg paths
• Continuous
• Continuous paths
• Ergodic
• Exchangeable
• Feller-continuous
• Gauss–Markov
• Markov
• Mixing
• Piecewise-deterministic
• Predictable
• Progressively measurable
• Self-similar
• Stationary
• Time-reversible
Limit theorems
• Central limit theorem
• Donsker's theorem
• Doob's martingale convergence theorems
• Ergodic theorem
• Fisher–Tippett–Gnedenko theorem
• Large deviation principle
• Law of large numbers (weak/strong)
• Law of the iterated logarithm
• Maximal ergodic theorem
• Sanov's theorem
• Zero–one laws (Blumenthal, Borel–Cantelli, Engelbert–Schmidt, Hewitt–Savage, Kolmogorov, Lévy)
Inequalities
• Burkholder–Davis–Gundy
• Doob's martingale
• Doob's upcrossing
• Kunita–Watanabe
• Marcinkiewicz–Zygmund
Tools
• Cameron–Martin formula
• Convergence of random variables
• Doléans-Dade exponential
• Doob decomposition theorem
• Doob–Meyer decomposition theorem
• Doob's optional stopping theorem
• Dynkin's formula
• Feynman–Kac formula
• Filtration
• Girsanov theorem
• Infinitesimal generator
• Itô integral
• Itô's lemma
• Karhunen–Loève theorem
• Kolmogorov continuity theorem
• Kolmogorov extension theorem
• Lévy–Prokhorov metric
• Malliavin calculus
• Martingale representation theorem
• Optional stopping theorem
• Prokhorov's theorem
• Quadratic variation
• Reflection principle
• Skorokhod integral
• Skorokhod's representation theorem
• Skorokhod space
• Snell envelope
• Stochastic differential equation
• Tanaka
• Stopping time
• Stratonovich integral
• Uniform integrability
• Usual hypotheses
• Wiener space
• Classical
• Abstract
Disciplines
• Actuarial mathematics
• Control theory
• Econometrics
• Ergodic theory
• Extreme value theory (EVT)
• Large deviations theory
• Mathematical finance
• Mathematical statistics
• Probability theory
• Queueing theory
• Renewal theory
• Ruin theory
• Signal processing
• Statistics
• Stochastic analysis
• Time series analysis
• Machine learning
• List of topics
• Category
|
Wikipedia
|
Random number
In mathematics and statistics, a random number is either a pseudo-random number or a number generated for, or part of, a set exhibiting statistical randomness.
Algorithms and implementations
An algorithm developed in 1964[1] is popularly known as the Knuth shuffle or the Fisher–Yates shuffle (based on work Fisher and Yates did in 1938). A real-world use for this is sampling water quality in a reservoir.
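A minimal sketch of the shuffle in Durstenfeld's in-place form (the function name fisher_yates is illustrative): each element is swapped with a uniformly chosen element at or below its position, producing a uniformly random permutation.

```python
# Fisher-Yates (Knuth) shuffle, Durstenfeld's in-place variant.
import random

def fisher_yates(items: list) -> list:
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)              # uniform in {0, ..., i}
        items[i], items[j] = items[j], items[i]
    return items

print(fisher_yates(list(range(10))))          # a uniformly random permutation
```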
In 1999, a new feature was added to the Pentium III: a hardware-based random number generator.[2][3] It has been described as "several oscillators combine their outputs and that odd waveform is sampled asynchronously."[4] These numbers, however, were only 32 bits, at a time when export controls were on 56 bits and higher, so they were not state of the art.[5]
Common understanding
In common understanding, "1 2 3 4 5" is not as random as "3 5 2 1 4" and certainly not as random as "47 88 1 32 41", but "we can't say authoritatively that the first sequence is not random ... it could have been generated by chance."[6]
When a police officer claims to have done a "random .. door-to-door" search, there is a certain expectation that members of a jury will have.[7][8]
Real world consequences
Flaws in randomness have real-world consequences.[9][10]
Researchers showed that randomness of only 99.8% negatively affected an estimated 27,000 customers of a large service,[9] and that the problem was not limited to just that situation.
See also
• Algorithmically random sequence
• Quasi-random sequence
• Random number generation
• Random sequence
• Random variable
• Random variate
• Random real
References
1. Richard Durstenfeld (July 1964). "Algorithm 235: Random permutation". Communications of the ACM (Association for Computing Machinery). Vol. 7, no. 7. p. 420. doi:10.1145/364520.364540.
2. Robert Moscowitz (July 12, 1999). "Privacy's Random Nature". Network Computing.
3. "Hardwiring Security". Wired. January 1999.
4. Terry Ritter (January 21, 1999). "The Pentium III RNG".
5. "Unpredictable Randomness Definition". IRISA.
6. Jonathan Knudson (January 1998). "Javatalk: Horseshoes, hand grenades and random numbers". Sun Server. pp. 16–17.
7. Tom Hays (April 16, 1995). "NYPD Bad Cop's Illegal Search Mars Career". Los Angeles Times.
8. A pre-compiled list of apartment numbers would be a violation thereof.
9. John Markoff (February 14, 2012). "Flaw Found in an Online Encryption Method". New York Times.
10. Reid Forgrave (May 3, 2018). "The man who cracked the lottery". New York Times.
|
Wikipedia
|
Arrival theorem
In queueing theory, a discipline within the mathematical theory of probability, the arrival theorem[1] (also referred to as the random observer property, ROP or job observer property[2]) states that "upon arrival at a station, a job observes the system as if in steady state at an arbitrary instant for the system without that job."[3]
The arrival theorem always holds in open product-form networks with unbounded queues at each node, but it also holds in more general networks. A necessary and sufficient condition for the arrival theorem to be satisfied in product-form networks is given in terms of Palm probabilities in Boucherie & Dijk, 1997.[4] A similar result also holds in some closed networks. Examples of product-form networks where the arrival theorem does not hold include reversible Kingman networks[4][5] and networks with a delay protocol.[3]
Mitrani offers the intuition that "The state of node i as seen by an incoming job has a different distribution from the state seen by a random observer. For instance, an incoming job can never see all k jobs present at node i, because it itself cannot be among the jobs already present."[6]
Theorem for arrivals governed by a Poisson process
For Poisson processes the property is often referred to as the PASTA property (Poisson Arrivals See Time Averages) and states that the probability of the state as seen by an outside random observer is the same as the probability of the state seen by an arriving customer.[7] The property also holds for the case of a doubly stochastic Poisson process where the rate parameter is allowed to vary depending on the state.[8]
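The PASTA property can be checked numerically. Below is a minimal sketch assuming an M/M/1 queue (the specific queue is an assumption of the example, not part of the theorem): the empirical distribution of the number in system seen by Poisson arrivals is compared with the time-averaged distribution, and the two columns should agree, both approaching $(1-\rho )\rho ^{k}$ for $\rho =\lambda /\mu $. All names and parameter values are illustrative.

```python
# PASTA check on an M/M/1 queue: arrival-seen vs time-averaged state distribution.
import random
from collections import defaultdict

lam, mu, T = 0.5, 1.0, 200_000.0      # arrival rate, service rate, time horizon
t, n = 0.0, 0                          # current time, number in system
seen = defaultdict(int)                # counts of states seen by arrivals
time_in = defaultdict(float)           # total time spent in each state
arrivals = 0
while t < T:
    # Memorylessness lets us resample both exponential clocks at every event.
    r_arr = random.expovariate(lam)
    r_dep = random.expovariate(mu) if n > 0 else float("inf")
    dt = min(r_arr, r_dep)
    time_in[n] += dt
    t += dt
    if r_arr <= r_dep:
        seen[n] += 1                   # state observed just before the arrival
        arrivals += 1
        n += 1
    else:
        n -= 1
for k in range(4):
    print(k, seen[k] / arrivals, time_in[k] / t)   # columns should agree
```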
Theorem for Jackson networks
In an open Jackson network with m queues, write $ \mathbf {n} =(n_{1},n_{2},\ldots ,n_{m})$ for the state of the network. Suppose $ \pi (\mathbf {n} )$ is the equilibrium probability that the network is in state $ \mathbf {n} $. Then the probability that the network is in state $ \mathbf {n} $ immediately before an arrival to any node is also $ \pi (\mathbf {n} )$.
Note that this theorem does not follow from Jackson's theorem, where the steady state in continuous time is considered. Here we are concerned with particular points in time, namely arrival times.[9] This theorem was first published by Sevcik and Mitrani in 1981.[10]
Theorem for Gordon–Newell networks
In a closed Gordon–Newell network with m queues, write ${\mathbf {n} =(n_{1},n_{2},\ldots ,n_{m})}$ for the state of the network. For a customer in transit to state $i$, let ${\alpha _{i}(\mathbf {n} -\mathbf {e} _{i})}$ denote the probability that immediately before arrival the customer 'sees' the state of the system to be
$\mathbf {n} -\mathbf {e} _{i}=(n_{1},n_{2},\ldots ,n_{i}-1,\ldots ,n_{m}).$
This probability, ${\alpha _{i}(\mathbf {n} -\mathbf {e} _{i})}$, is the same as the steady state probability for state ${\mathbf {n} -\mathbf {e} _{i}}$ for a network of the same type with one customer less.[11] It was published independently by Sevcik and Mitrani,[10] and Reiser and Lavenberg,[12] where the result was used to develop mean value analysis.
Notes
1. Asmussen, Søren (2003). "Queueing Networks and Insensitivity". Applied Probability and Queues. Stochastic Modelling and Applied Probability. Vol. 51. pp. 114–136. doi:10.1007/0-387-21525-5_4. ISBN 978-0-387-00211-8.
2. El-Taha, Muhammad (1999). Sample-path Analysis of Queueing Systems. Springer. p. 94. ISBN 0-7923-8210-2.
3. Van Dijk, N. M. (1993). "On the arrival theorem for communication networks". Computer Networks and ISDN Systems. 25 (10): 1135–2013. doi:10.1016/0169-7552(93)90073-D.
4. Boucherie, R. J.; Van Dijk, N. M. (1997). "On the arrival theorem for product form queueing networks with blocking". Performance Evaluation. 29 (3): 155. doi:10.1016/S0166-5316(96)00045-4.
5. Kingman, J. F. C. (1969). "Markov Population Processes". Journal of Applied Probability. Applied Probability Trust. 6 (1): 1–18. doi:10.2307/3212273. JSTOR 3212273.
6. Mitrani, Isi (1987). Modelling of Computer and Communication Systems. CUP. p. 114. ISBN 0521314224.
7. Wolff, R. W. (1982). "Poisson Arrivals See Time Averages". Operations Research. 30 (2): 223–231. doi:10.1287/opre.30.2.223.
8. Van Doorn, E. A.; Regterschot, G. J. K. (1988). "Conditional PASTA" (PDF). Operations Research Letters. 7 (5): 229. doi:10.1016/0167-6377(88)90036-3.
9. Harrison, Peter G.; Patel, Naresh M. (1992). Performance Modelling of Communication Networks and Computer Architectures. Addison-Wesley. p. 228. ISBN 0-201-54419-9.
10. Sevcik, K. C.; Mitrani, I. (1981). "The Distribution of Queuing Network States at Input and Output Instants". Journal of the ACM. 28 (2): 358. doi:10.1145/322248.322257.
11. Breuer, L.; Baum, Dave (2005). "Markovian Queueing Networks". An Introduction to Queueing Theory and Matrix-Analytic Methods. pp. 63–61. doi:10.1007/1-4020-3631-0_5. ISBN 1-4020-3630-2.
12. Reiser, M.; Lavenberg, S. S. (1980). "Mean-Value Analysis of Closed Multichain Queuing Networks". Journal of the ACM. 27 (2): 313. doi:10.1145/322186.322195.
Queueing theory
Single queueing nodes
• D/M/1 queue
• M/D/1 queue
• M/D/c queue
• M/M/1 queue
• Burke's theorem
• M/M/c queue
• M/M/∞ queue
• M/G/1 queue
• Pollaczek–Khinchine formula
• Matrix analytic method
• M/G/k queue
• G/M/1 queue
• G/G/1 queue
• Kingman's formula
• Lindley equation
• Fork–join queue
• Bulk queue
Arrival processes
• Poisson point process
• Markovian arrival process
• Rational arrival process
Queueing networks
• Jackson network
• Traffic equations
• Gordon–Newell theorem
• Mean value analysis
• Buzen's algorithm
• Kelly network
• G-network
• BCMP network
Service policies
• FIFO
• LIFO
• Processor sharing
• Round-robin
• Shortest job next
• Shortest remaining time
Key concepts
• Continuous-time Markov chain
• Kendall's notation
• Little's law
• Product-form solution
• Balance equation
• Quasireversibility
• Flow-equivalent server method
• Arrival theorem
• Decomposition method
• Beneš method
Limit theorems
• Fluid limit
• Mean-field theory
• Heavy traffic approximation
• Reflected Brownian motion
Extensions
• Fluid queue
• Layered queueing network
• Polling system
• Adversarial queueing network
• Loss network
• Retrial queue
Information systems
• Data buffer
• Erlang (unit)
• Erlang distribution
• Flow control (data)
• Message queue
• Network congestion
• Network scheduler
• Pipeline (software)
• Quality of service
• Scheduling (computing)
• Teletraffic engineering
Category
|
Wikipedia
|
Random optimization
Random optimization (RO) is a family of numerical optimization methods that do not require the gradient of the problem to be optimized and RO can hence be used on functions that are not continuous or differentiable. Such optimization methods are also known as direct-search, derivative-free, or black-box methods.
The name random optimization is attributed to Matyas [1] who made an early presentation of RO along with basic mathematical analysis. RO works by iteratively moving to better positions in the search-space which are sampled using e.g. a normal distribution surrounding the current position.
Algorithm
Let f: ℝn → ℝ be the fitness or cost function which must be minimized. Let x ∈ ℝn designate a position or candidate solution in the search-space. The basic RO algorithm can then be described as:
• Initialize x with a random position in the search-space.
• Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following:
• Sample a new position y by adding a normally distributed random vector to the current position x
• If (f(y) < f(x)) then move to the new position by setting x = y
• Now x holds the best-found position.
This algorithm corresponds to a (1+1) evolution strategy with constant step-size.
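A minimal sketch of the basic RO algorithm above, using the sphere function as an illustrative cost function (the function name random_optimization and all parameter values are assumptions of the example):

```python
# Basic random optimization: (1+1) strategy with normally distributed steps.
import random

def random_optimization(f, dim, iters=10_000, sigma=0.1):
    x = [random.uniform(-5, 5) for _ in range(dim)]    # random initial position
    fx = f(x)
    for _ in range(iters):
        # Sample a new position by adding a normally distributed random vector.
        y = [xi + random.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:                                    # move only on improvement
            x, fx = y, fy
    return x, fx

sphere = lambda v: sum(vi * vi for vi in v)
best, value = random_optimization(sphere, dim=5)
print(value)   # close to 0 after enough iterations
```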
Convergence and variants
Matyas showed that the basic form of RO converges to the optimum of a simple unimodal function by using a limit-proof, which shows that convergence to the optimum is certain to occur if a potentially infinite number of iterations are performed. However, this proof is not useful in practice because only a finite number of iterations can be executed. In fact, such a theoretical limit-proof also shows that purely random sampling of the search-space will inevitably yield a sample arbitrarily close to the optimum.
Mathematical analyses were also conducted by Baba [2] and by Solis and Wets [3] to establish that convergence to a region surrounding the optimum is inevitable under some mild conditions for RO variants using other probability distributions for the sampling. An estimate of the number of iterations required to approach the optimum was derived by Dorea.[4] These analyses were criticized through empirical experiments by Sarma,[5] who used the optimizer variants of Baba and Dorea on two real-world problems, showing that the optimum is approached very slowly and, moreover, that the methods were actually unable to locate a solution of adequate fitness unless the process was started sufficiently close to the optimum to begin with.
See also
• Random search is a closely related family of optimization methods which sample from a hypersphere instead of a normal distribution.
• Luus–Jaakola is a closely related optimization method using a uniform distribution in its sampling and a simple formula for exponentially decreasing the sampling range.
• Pattern search takes steps along the axes of the search-space using exponentially decreasing step sizes.
• Stochastic optimization
References
1. Matyas, J. (1965). "Random optimization". Automation and Remote Control. 26 (2): 246–253.
2. Baba, N. (1981). "Convergence of a random optimization method for constrained optimization problems". Journal of Optimization Theory and Applications. 33 (4): 451–461. doi:10.1007/bf00935752.
3. Solis, F.J.; Wets, R.J-B. (1981). "Minimization by random search techniques". Mathematics of Operations Research. 6 (1): 19–30. doi:10.1287/moor.6.1.19.
4. Dorea, C.C.Y. (1983). "Expected number of steps of a random optimization method". Journal of Optimization Theory and Applications. 39 (3): 165–171. doi:10.1007/bf00934526.
5. Sarma, M.S. (1990). "On the convergence of the Baba and Dorea random optimization methods". Journal of Optimization Theory and Applications. 66 (2): 337–343. doi:10.1007/bf00939542.
Major subfields of optimization
• Convex programming
• Fractional programming
• Integer programming
• Quadratic programming
• Nonlinear programming
• Stochastic programming
• Robust optimization
• Combinatorial optimization
• Infinite-dimensional optimization
• Metaheuristics
• Constraint satisfaction
• Multiobjective optimization
• Simulated annealing
|
Wikipedia
|
Random algebra
In set theory, the random algebra or random real algebra is the Boolean algebra of Borel sets of the unit interval modulo the ideal of measure zero sets. It is used in random forcing to add random reals to a model of set theory. The random algebra was studied by John von Neumann in 1935 (in work later published as Neumann (1998, p. 253)) who showed that it is not isomorphic to the Cantor algebra of Borel sets modulo meager sets. Random forcing was introduced by Solovay (1970).
See also
• Random number
References
• Bartoszyński, Tomek (2010), "Invariants of measure and category", Handbook of set theory, vol. 2, Springer, pp. 491–555, MR 2768686
• Bukowský, Lev (1977), "Random forcing", Set theory and hierarchy theory, V (Proc. Third Conf., Bierutowice, 1976), Lecture Notes in Math., vol. 619, Berlin: Springer, pp. 101–117, MR 0485358
• Solovay, Robert M. (1970), "A model of set-theory in which every set of reals is Lebesgue measurable", Annals of Mathematics, Second Series, 92: 1–56, doi:10.2307/1970696, ISSN 0003-486X, JSTOR 1970696, MR 0265151
• Neumann, John von (1998) [1960], Continuous geometry, Princeton Landmarks in Mathematics, Princeton University Press, ISBN 978-0-691-05893-1, MR 0120174
|
Wikipedia
|
Random regular graph
A random r-regular graph is a graph selected from ${\mathcal {G}}_{n,r}$, which denotes the probability space of all r-regular graphs on $n$ vertices, where $3\leq r<n$ and $nr$ is even.[1] It is therefore a particular kind of random graph, but the regularity restriction significantly alters the properties that will hold, since most graphs are not regular.
Properties of random regular graphs
As with more general random graphs, it is possible to prove that certain properties of random $r$–regular graphs hold asymptotically almost surely. In particular, for $r\geq 3$, a random r-regular graph of large size is asymptotically almost surely r-connected.[2] In other words, although $r$–regular graphs with connectivity less than $r$ exist, the probability of selecting such a graph tends to 0 as $n$ increases.
If $\epsilon >0$ is a positive constant, and $d$ is the least integer satisfying
$(r-1)^{d-1}\geq (2+\epsilon )rn\ln n$
then, asymptotically almost surely, a random r-regular graph has diameter at most d. There is also a (more complex) lower bound on the diameter of r-regular graphs, so that almost all r-regular graphs (of the same size) have almost the same diameter.[3]
The distribution of the number of short cycles is also known: for fixed $m\geq 3$, let $Y_{3},Y_{4},\ldots ,Y_{m}$ be the numbers of cycles of lengths up to $m$. Then the $Y_{i}$ are asymptotically independent Poisson random variables with means[4]
$\lambda _{i}={\frac {(r-1)^{i}}{2i}}$
Algorithms for random regular graphs
It is non-trivial to implement the random selection of r-regular graphs efficiently and in an unbiased way, since most graphs are not regular. The pairing model (also configuration model) is a method which takes nr points, and partitions them into n buckets with r points in each of them. Taking a random matching of the nr points, and then contracting the r points in each bucket into a single vertex, yields an r-regular graph or multigraph. If this object has no multiple edges or loops (i.e. it is a graph), then it is the required result. If not, a restart is required.[5]
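A minimal sketch of the pairing model with restarts (the function name random_regular_graph and the rejection strategy are illustrative; for small fixed r the expected number of restarts is modest):

```python
# Pairing (configuration) model with rejection of loops and multiple edges.
import random

def random_regular_graph(n: int, r: int) -> set[tuple[int, int]]:
    assert n * r % 2 == 0 and 3 <= r < n
    while True:                          # restart until a simple graph appears
        points = [v for v in range(n) for _ in range(r)]   # r points per bucket
        random.shuffle(points)           # pairing consecutive points gives a
        edges = set()                    # uniform perfect matching of nr points
        ok = True
        for i in range(0, len(points), 2):
            u, v = points[i], points[i + 1]
            if u == v or (min(u, v), max(u, v)) in edges:
                ok = False               # loop or multiple edge: reject and restart
                break
            edges.add((min(u, v), max(u, v)))
        if ok:
            return edges

g = random_regular_graph(10, 3)
print(len(g))   # 15 edges: every vertex has degree 3
```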
A refinement of this method was developed by Brendan McKay and Nicholas Wormald.[6]
References
1. Béla Bollobás, Random Graphs, 2nd edition, Cambridge University Press (2001), section 2.4: Random Regular Graphs
2. Bollobás, section 7.6: Random Regular Graphs
3. Bollobás, section 10.3: The Diameter of Random Regular Graphs
4. Bollobás, section 2.4: Random Regular Graphs (Corollary 2.19)
5. N. Wormald, "Models of Random Regular Graphs," in Surveys in Combinatorics, Cambridge University Press (1999), pp 239-298
6. B. McKay and N. Wormald, "Uniform Generation of Random Regular Graphs of Moderate Degree," Journal of Algorithms, Vol. 11 (1990), pp 52-67:
|
Wikipedia
|
Random search
Random search (RS) is a family of numerical optimization methods that do not require the gradient of the problem to be optimized, and RS can hence be used on functions that are not continuous or differentiable. Such optimization methods are also known as direct-search, derivative-free, or black-box methods.
Anderson in 1953 reviewed the progress of methods for finding the maximum or minimum of a problem using a series of guesses distributed with a certain order or pattern in the parameter search space, e.g. a confounded design with exponentially distributed spacings/steps.[1] The search proceeds sequentially over each parameter and refines iteratively on the best guesses from the previous sequence. The pattern can be a grid (factorial) search of all parameters, a sequential search on each parameter, or a combination of both. The method was developed to screen experimental conditions in chemical reactions by a number of scientists listed in Anderson's paper. A MATLAB code reproducing the sequential procedure for the general non-linear regression of an example mathematical model can be found here (FitNGuess @ GitHub).[2]
The name "random search" is attributed to Rastrigin[3] who made an early presentation of RS along with basic mathematical analysis. RS works by iteratively moving to better positions in the search space, which are sampled from a hypersphere surrounding the current position.
The algorithm described herein is a type of local random search, where every iteration is dependent on the prior iteration's candidate solution. There are alternative random search methods that sample from the entirety of the search space (for example pure random search or uniform global random search), but these are not described in this article.
Random search has been used in artificial neural network for hyper-parameter optimization.[4]
If good parts of the search space occupy 5% of the volume, the chance of hitting a good configuration with a single random draw is 5%. The probability of finding at least one good configuration after trying out 60 configurations is then above 95% ($1-0.95^{60}=0.953>0.95$, making use of the complementary probability).
Algorithm
Let f: ℝn → ℝ be the fitness or cost function which must be minimized. Let x ∈ ℝn designate a position or candidate solution in the search-space. The basic RS algorithm can then be described as:
1. Initialize x with a random position in the search-space.
2. Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following:
1. Sample a new position y from the hypersphere of a given radius surrounding the current position x (see e.g. Marsaglia's technique for sampling a hypersphere.)
2. If f(y) < f(x) then move to the new position by setting x = y
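A minimal sketch of fixed step size random search under these definitions (the function name random_search, the sphere test function, and all parameter values are assumptions of the example): candidate positions are drawn from the sphere of radius step around x, using the normalized-Gaussian trick to pick a uniform direction.

```python
# Fixed step size random search (FSSRS) with hypersphere sampling.
import math
import random

def random_search(f, dim, iters=10_000, step=0.1):
    x = [random.uniform(-5, 5) for _ in range(dim)]    # random initial position
    fx = f(x)
    for _ in range(iters):
        # A normalized Gaussian vector is a uniform direction on the sphere.
        d = [random.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(di * di for di in d))
        y = [xi + step * di / norm for xi, di in zip(x, d)]
        fy = f(y)
        if fy < fx:                                    # move only on improvement
            x, fx = y, fy
    return x, fx

sphere = lambda v: sum(vi * vi for vi in v)
print(random_search(sphere, dim=5)[1])
```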
Variants
Truly random search depends purely on luck and varies from very costly to very lucky, but structured random search is strategic. A number of RS variants have been introduced in the literature with structured sampling in the search space:
• Friedman–Savage procedure: Sequentially search each parameter with a set of guesses that follow a space pattern between the initial guess and the boundaries.[5] An example with exponentially distributed steps can be found in a MATLAB code (FigNGuess @ GitHub).[2] This example code converges 1–2 orders of magnitude slower than the Levenberg–Marquardt algorithm, with an example also provided in the GitHub repository.
• Fixed Step Size Random Search (FSSRS) is Rastrigin's [3] basic algorithm which samples from a hypersphere of fixed radius.
• Optimum Step Size Random Search (OSSRS) by Schumer and Steiglitz [6] is primarily a theoretical study on how to optimally adjust the radius of the hypersphere so as to allow for speedy convergence to the optimum. The actual implementation of the OSSRS needs to approximate this optimal radius by repeated sampling and is therefore expensive to execute.
• Adaptive Step Size Random Search (ASSRS) by Schumer and Steiglitz [6] attempts to heuristically adapt the hypersphere's radius: two new candidate solutions are generated, one with the current nominal step size and one with a larger step-size. The larger step size becomes the new nominal step size if and only if it leads to a larger improvement. If for several iterations neither of the steps leads to an improvement, the nominal step size is reduced.
• Optimized Relative Step Size Random Search (ORSSRS) by Schrack and Choit [7] approximates the optimal step size by a simple exponential decrease. However, the formula for computing the decrease factor is somewhat complicated.
See also
• Random optimization is a closely related family of optimization methods which sample from a normal distribution instead of a hypersphere.
• Luus–Jaakola is a closely related optimization method using a uniform distribution in its sampling and a simple formula for exponentially decreasing the sampling range.
• Pattern search takes steps along the axes of the search-space using exponentially decreasing step sizes.
References
1. Anderson, R.L. (1953). "Recent Advances in Finding Best Operating Conditions". Journal of the American Statistical Association. 48 (264): 789–798. doi:10.2307/2281072.
2. "GitHub - Jixin Chen/FigNGuess: A Random Search Algorithm for general mathematical model(s) fittings".
3. Rastrigin, L.A. (1963). "The convergence of the random search method in the extremal control of a many parameter system". Automation and Remote Control. 24 (11): 1337–1342. Retrieved 30 November 2021. 1964 translation of Russian Avtomat. i Telemekh pages 1467–1473
4. Bergstra, J.; Bengio, Y. (2012). "Random search for hyper-parameter optimization". Journal of Machine Learning Research. 13: 281–305.
5. Friedman, M.; Savage, L.J. (1947). Planning experiments seeking maxima, chapter 13 of Techniques of Statistical Analysis, edited by Eisenhart, Hastay, and Wallis. McGraw-Hill Book Co., New York. pp. 363–372. Retrieved 30 November 2021 – via Milton Friedman from Hoover Institution at Stanford University.
6. Schumer, M.A.; Steiglitz, K. (1968). "Adaptive step size random search". IEEE Transactions on Automatic Control. 13 (3): 270–276. CiteSeerX 10.1.1.118.9779. doi:10.1109/tac.1968.1098903.
7. Schrack, G.; Choit, M. (1976). "Optimized relative step size random searches". Mathematical Programming. 10 (1): 230–244. doi:10.1007/bf01580669.
Major subfields of optimization
• Convex programming
• Fractional programming
• Integer programming
• Quadratic programming
• Nonlinear programming
• Stochastic programming
• Robust optimization
• Combinatorial optimization
• Infinite-dimensional optimization
• Metaheuristics
• Constraint satisfaction
• Multiobjective optimization
• Simulated annealing
|
Wikipedia
|
Random sequential adsorption
Random sequential adsorption (RSA) refers to a process where particles are randomly introduced in a system, and if they do not overlap any previously adsorbed particle, they adsorb and remain fixed for the rest of the process. RSA can be carried out in computer simulation, in a mathematical analysis, or in experiments. It was first studied through one-dimensional models: the attachment of pendant groups in a polymer chain by Paul Flory, and the car-parking problem by Alfréd Rényi.[1] Other early works include those of Benjamin Widom.[2] In two and higher dimensions many systems have been studied by computer simulation, including, in 2d, disks, randomly oriented squares and rectangles, aligned squares and rectangles, various other shapes, etc.
An important result is the maximum surface coverage, called the saturation coverage or the packing fraction. On this page we list that coverage for many systems.
The blocking process has been studied in detail in terms of the random sequential adsorption (RSA) model.[3] The simplest RSA model related to deposition of spherical particles considers irreversible adsorption of circular disks. One disk after another is placed randomly at a surface. Once a disk is placed, it sticks at the same spot, and cannot be removed. When an attempt to deposit a disk would result in an overlap with an already deposited disk, this attempt is rejected. Within this model, the surface is initially filled rapidly, but the more one approaches saturation the slower the surface is being filled. Within the RSA model, saturation is sometimes referred to as jamming. For circular disks, saturation occurs at a coverage of 0.547. When the depositing particles are polydisperse, much higher surface coverage can be reached, since the small particles will be able to deposit into the holes in between the larger deposited particles. On the other hand, rod like particles may lead to much smaller coverage, since a few misaligned rods may block a large portion of the surface.
For the one-dimensional car-parking problem, Rényi[1] showed that the maximum coverage is equal to
$\theta _{1}=\int _{0}^{\infty }\exp \left(-2\int _{0}^{x}{\frac {1-e^{-y}}{y}}dy\right)dx=0.7475979202534\ldots $
the so-called Rényi car-parking constant.[4]
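A minimal simulation sketch (not part of the cited works) of the car-parking process: unit-length cars are placed uniformly at random into the remaining gaps of an interval until no gap can hold another car, and the covered fraction approaches the constant above as the interval grows. The recursive helper park and the parameter values are illustrative.

```python
# Renyi's car-parking problem: fill an interval to saturation with unit cars.
import random

def park(length: float) -> int:
    """Number of unit cars parked at saturation in an interval of given length."""
    if length < 1.0:
        return 0
    x = random.uniform(0.0, length - 1.0)     # left end of the newly parked car
    # The car splits the interval into two independent sub-intervals.
    return 1 + park(x) + park(length - x - 1.0)

L, trials = 1000.0, 50
avg = sum(park(L) for _ in range(trials)) / trials
print(avg / L)   # approaches the Renyi constant 0.7476... as L grows
```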
Then followed the conjecture of Ilona Palásti,[5] who proposed that the coverage of d-dimensional aligned squares, cubes and hypercubes is equal to $\theta _{1}^{d}$. This conjecture led to a great deal of work arguing in favor of it, against it, and finally computer simulations in two and three dimensions showing that it was a good approximation but not exact. The accuracy of this conjecture in higher dimensions is not known.
For $k$-mers on a one-dimensional lattice, we have for the fraction of vertices covered,[6]
$\theta _{k}=k\int _{0}^{\infty }\exp \left(-u-2\sum _{j=1}^{k-1}{\frac {1-e^{-ju}}{j}}\right)du=k\int _{0}^{1}\exp \left(-2\sum _{j=1}^{k-1}{\frac {1-v^{j}}{j}}\right)dv$
When $k$ goes to infinity, this gives the Rényi result above. For k = 2, this gives the Flory [7] result $\theta _{2}=1-e^{-2}$.
For percolation thresholds related to random sequentially adsorbed particles, see Percolation threshold.
Saturation coverage of k-mers on 1d lattice systems
system Saturated coverage $\theta _{k}$(fraction of sites filled)
dimers $1-e^{-2}=0.86466472$[7]
trimers ${\frac {3{\sqrt {\pi }}({\text{erfi}}(2)-{\text{erfi}}(1))}{2e^{4}}}\approx 0.82365296$[6]
k = 4 $0.80389348$[6]
k = 10 $0.76957741$[6]
k = 100 $0.74976335$[6]
k = 1000 $0.74781413$[6]
k = 10000 $0.74761954$[6]
k = 100000 $0.74760008$[6]
k = $\infty $ $0.74759792$[1]
Asymptotic behavior: $\theta _{k}\sim \theta _{\infty }+0.2162/k+\ldots $ .
Saturation coverage of segments of two lengths on a one dimensional continuum
R = size ratio of segments. Assume equal rates of adsorption
system Saturated coverage $\theta $(fraction of line filled)
R = 1 0.74759792[1]
R = 1.05 0.7544753(62) [9]
R = 1.1 0.7599829(63) [9]
R = 2 0.7941038(58) [9]
Saturation coverage of k-mers on a 2d square lattice
system Saturated coverage $\theta _{k}$(fraction of sites filled)
dimers k = 2 0.906820(2),[10] 0.906,[11] 0.9068,[12] 0.9062,[13] 0.906,[14] 0.905(9),[15] 0.906,[11] 0.906823(2),[16]
trimers k = 3 0.846,[11] 0.8366 [12]
k = 4 0.8094,[13] 0.81[11]
k = 5 0.7868 [11]
k = 6 0.7703 [11]
k = 7 0.7579 [11]
k = 8 0.7479,[13] 0.747[11]
k = 9 0.7405[11]
k = 16 0.7103,[13] 0.71[11]
k = 32 0.6892,[13] 0.689,[11] 0.6893(4)[17]
k = 48 0.6809(5),[17]
k = 64 0.6755,[13] 0.678,[11] 0.6765(6)[17]
k = 96 0.6714(5)[17]
k = 128 0.6686,[13] 0.668(9),[15] 0.668[11] 0.6682(6)[17]
k = 192 0.6655(7)[17]
k = 256 0.6628[13] 0.665,[11] 0.6637(6)[17]
k = 384 0.6634(6)[17]
k = 512 0.6618,[13] 0.6628(9)[17]
k = 1024 0.6592 [13]
k = 2048 0.6596 [13]
k = 4096 0.6575[13]
k = 8192 0.6571 [13]
k = 16384 0.6561 [13]
k = ∞ 0.660(2),[17] 0.583(10),[18]
Asymptotic behavior: $\theta _{k}\sim \theta _{\infty }+\ldots $ .
Saturation coverage of k-mers on a 2d triangular lattice
system Saturated coverage $\theta _{k}$(fraction of sites filled)
dimers k = 2 0.9142(12),[19]
k = 3 0.8364(6),[19]
k = 4 0.7892(5),[19]
k = 5 0.7584(6),[19]
k = 6 0.7371(7),[19]
k = 8 0.7091(6),[19]
k = 10 0.6912(6),[19]
k = 12 0.6786(6),[19]
k = 20 0.6515(6),[19]
k = 30 0.6362(6),[19]
k = 40 0.6276(6),[19]
k = 50 0.6220(7),[19]
k = 60 0.6183(6),[19]
k = 70 0.6153(6),[19]
k = 80 0.6129(7),[19]
k = 90 0.6108(7),[19]
k = 100 0.6090(8),[19]
k = 128 0.6060(13),[19]
Saturation coverage for particles with neighbors exclusion on 2d lattices
system Saturated coverage $\theta _{k}$(fraction of sites filled)
Square lattice with NN exclusion 0.3641323(1),[20] 0.36413(1),[21] 0.3641330(5),[22]
Honeycomb lattice with NN exclusion 0.37913944(1),[20] 0.38(1),[2] 0.379[23]
Saturation coverage of $k\times k$ squares on a 2d square lattice
system Saturated coverage $\theta _{k}$(fraction of sites filled)
k = 2 0.74793(1),[24] 0.747943(37),[25] 0.749(1),[26]
k = 3 0.67961(1),[24] 0.681(1),[26]
k = 4 0.64793(1),[24] 0.647927(22)[25] 0.646(1),[26]
k = 5 0.62968(1)[24] 0.628(1),[26]
k = 8 0.603355(55)[25] 0.603(1),[26]
k = 10 0.59476(4)[24] 0.593(1),[26]
k = 15 0.583(1),[26]
k = 16 0.582233(39)[25]
k = 20 0.57807(5)[24] 0.578(1),[26]
k = 30 0.574(1),[26]
k = 32 0.571916(27)[25]
k = 50 0.56841(10)[24]
k = 64 0.567077(40)[25]
k = 100 0.56516(10)[24]
k = 128 0.564405(51)[25]
k = 256 0.563074(52)[25]
k = 512 0.562647(31)[25]
k = 1024 0.562346(33)[25]
k = 4096 0.562127(33)[25]
k = 16384 0.562038(33)[25]
For k = ∞, see "2d aligned squares" below. Asymptotic behavior:[25] $\theta _{k}\sim \theta _{\infty }+0.316/k+0.114/k^{2}\ldots $ . See also [27]
Saturation coverage for randomly oriented 2d systems
system Saturated coverage
equilateral triangles 0.52590(4)[28]
squares 0.523-0.532,[29] 0.530(1),[30] 0.530(1),[31] 0.52760(5)[28]
regular pentagons 0.54130(5)[28]
regular hexagons 0.53913(5)[28]
regular heptagons 0.54210(6)[28]
regular octagons 0.54238(5)[28]
regular enneagons 0.54405(5)[28]
regular decagons 0.54421(6)[28]
2d oblong shapes with maximal coverage
system aspect ratio Saturated coverage
rectangle 1.618 0.553(1)[32]
dimer 1.5098 0.5793(1)[33]
ellipse 2.0 0.583(1)[32]
spherocylinder 1.75 0.583(1)[32]
smoothed dimer 1.6347 0.5833(5)[34]
Saturation coverage for 3d systems
system Saturated coverage
spheres 0.3841307(21),[35] 0.38278(5),[36] 0.384(1)[37]
randomly oriented cubes 0.3686(15),[38] 0.36306(60)[39]
randomly oriented cuboids 0.75:1:1.3 0.40187(97),[39]
Saturation coverages for disks, spheres, and hyperspheres
system Saturated coverage
2d disks 0.5470735(28),[35] 0.547067(3),[40] 0.547070,[41] 0.5470690(7),[42] 0.54700(6),[36] 0.54711(16),[43] 0.5472(2),[44] 0.547(2),[45] 0.5479,[16]
3d spheres 0.3841307(21),[35] 0.38278(5),[36] 0.384(1)[37]
4d hyperspheres 0.2600781(37),[35] 0.25454(9),[36]
5d hyperspheres 0.1707761(46),[35] 0.16102(4),[36]
6d hyperspheres 0.109302(19),[35] 0.09394(5),[36]
7d hyperspheres 0.068404(16),[35]
8d hyperspheres 0.04230(21),[35]
Saturation coverages for aligned squares, cubes, and hypercubes
system Saturated coverage
2d aligned squares 0.562009(4),[25] 0.5623(4),[16] 0.562(2),[45] 0.5565(15),[46] 0.5625(5),[47] 0.5444(24),[48] 0.5629(6),[49] 0.562(2),[50]
3d aligned cubes 0.4227(6),[50] 0.42(1),[51] 0.4262,[52] 0.430(8),[53] 0.422(8),[54] 0.42243(5)[38]
4d aligned hypercubes 0.3129,[50] 0.3341,[52]
See also
• Adsorption
• Particle deposition
• Percolation threshold
References
1. Rényi, A. (1958). "On a one-dimensional problem concerning random space filling". Publ. Math. Inst. Hung. Acad. Sci. 3 (109–127): 30–36.
2. Widom, B. J. (1966). "Random Sequential Addition of Hard Spheres to a Volume". J. Chem. Phys. 44 (10): 3888–3894. Bibcode:1966JChPh..44.3888W. doi:10.1063/1.1726548.
3. Evans, J. W. (1993). "Random and cooperative sequential adsorption". Rev. Mod. Phys. 65 (4): 1281–1329. Bibcode:1993RvMP...65.1281E. doi:10.1103/RevModPhys.65.1281.
4. Weisstein, Eric W., "Rényi's Parking Constants", From MathWorld--A Wolfram Web Resource
5. Palasti, I. (1960). "On some random space filling problems". Publ. Math. Inst. Hung. Acad. Sci. 5: 353–359.
6. Krapivsky, P.; S. Redner; E. Ben-Naim (2010). A Kinetic View of Statistical Physics. Cambridge Univ. Press.
7. Flory, P. J. (1939). "Intramolecular Reaction between Neighboring Substituents of Vinyl Polymers". J. Am. Chem. Soc. 61 (6): 1518–1521. doi:10.1021/ja01875a053.
8. Ziff, Robert M.; R. Dennis Vigil (1990). "Kinetics and fractal properties of the random sequential adsorption of line segments". J. Phys. A: Math. Gen. 23 (21): 5103–5108. Bibcode:1990JPhA...23.5103Z. doi:10.1088/0305-4470/23/21/044. hdl:2027.42/48820.
9. Araujo, N. A. M.; Cadilhe, A. (2006). "Gap-size distribution functions of a random sequential adsorption model of segments on a line". Phys. Rev. E. 73 (5): 051602. arXiv:cond-mat/0404422. Bibcode:2006PhRvE..73e1602A. doi:10.1103/PhysRevE.73.051602. PMID 16802941. S2CID 8046084.
10. Wang, Jian-Sheng; Pandey, Ras B. (1996). "Kinetics and jamming coverage in a random sequential adsorption of polymer chains". Phys. Rev. Lett. 77 (9): 1773–1776. arXiv:cond-mat/9605038. Bibcode:1996PhRvL..77.1773W. doi:10.1103/PhysRevLett.77.1773. PMID 10063168. S2CID 36659964.
11. Tarasevich, Yuri Yu; Laptev, Valeri V.; Vygornitskii, Nikolai V.; Lebovka, Nikolai I. (2015). "Impact of defects on percolation in random sequential adsorption of linear k-mers on square lattices". Phys. Rev. E. 91 (1): 012109. arXiv:1412.7267. Bibcode:2015PhRvE..91a2109T. doi:10.1103/PhysRevE.91.012109. PMID 25679572. S2CID 35537612.
12. Nord, R. S.; Evans, J. W. (1985). "Irreversible immobile random adsorption of dimers, trimers, ... on 2D lattices". J. Chem. Phys. 82 (6): 2795–2810. Bibcode:1985JChPh..82.2795N. doi:10.1063/1.448279.
13. Slutskii, M. G.; Barash, L. Yu.; Tarasevich, Yu. Yu. (2018). "Percolation and jamming of random sequential adsorption samples of large linear k-mers on a square lattice". Physical Review E. 98 (6): 062130. arXiv:1810.06800. Bibcode:2018PhRvE..98f2130S. doi:10.1103/PhysRevE.98.062130. S2CID 53709717.
14. Vandewalle, N.; Galam, S.; Kramer, M. (2000). "A new universality for random sequential deposition of needles". Eur. Phys. J. B. 14 (3): 407–410. arXiv:cond-mat/0004271. Bibcode:2000EPJB...14..407V. doi:10.1007/s100510051047. S2CID 11142384.
15. Lebovka, Nikolai I.; Karmazina, Natalia; Tarasevich, Yuri Yu; Laptev, Valeri V. (2011). "Random sequential adsorption of partially oriented linear k-mers on a square lattice". Phys. Rev. E. 85 (6): 029902. arXiv:1109.3271. Bibcode:2011PhRvE..84f1603L. doi:10.1103/PhysRevE.84.061603. PMID 22304098. S2CID 25377751.
16. Wang, J. S. (2000). "Series expansion and computer simulation studies of random sequential adsorption". Colloids and Surfaces A. 165 (1–3): 325–343. doi:10.1016/S0927-7757(99)00444-6.
Random surfing model
The random surfing model is a graph model which describes the probability of a random user visiting a web page. The model attempts to predict the chance that a random internet surfer will arrive at a page either by clicking a link or by accessing the site directly, for example by entering the website's URL in the address bar. Because users do not follow links forever, the model assumes that a surfer will eventually stop following links and switch to another site completely. The model is similar to a Markov chain, where the chain's states are the web pages the user lands on and the transitions are equally probable links between these pages.
Description
A user navigates the internet in two primary ways: the user may access a site directly, by entering the site's URL or clicking a bookmark, or the user may follow a series of hyperlinks to reach the desired page. The random surfer model assumes that the next link the user follows is picked at random. The model also assumes that the number of successive links is not infinite – the user will at some point lose interest and leave the current site for a completely new site.[1]
The random surfer model is presented as a series of nodes which indicate web pages that can be accessed at random by users. A new node is added to the graph when a new website is published. Movement among the graph's nodes is modeled by choosing a start node at random, then performing a short random traversal of the nodes, or random walk. This traversal is analogous to a user accessing a website and then following hyperlinks $t$ times, until the user either exits the page or accesses another site completely. Connections to other nodes in this graph are formed when outbound links are placed on the page.
Graph definitions
In the random surfing model, webgraphs are presented as a sequence of directed graphs $G_{t},t=1,2,\ldots $ such that the graph $G_{t}$ has $t$ vertices and $t$ edges. The process of defining graphs is parameterized by a probability $p$; we write $q=1-p$.[2]
Nodes of the model arrive one at a time, forming $k$ connections to the existing graph $G_{t}$. In some models the connections represent directed edges, and in others they represent undirected edges. Models start with a single node $v_{0}$ carrying $k$ self-loops. $v_{t}$ denotes the vertex added in the $t^{th}$ step, and $n$ denotes the total number of vertices.[1]
Model 1. (1-step walk with self-loop)
At time $t$, vertex $v_{t}$ makes $k$ connections by $k$ iterations of the following steps:
1. Pick an existing node $v$ uniformly at random from $\{v_{0},v_{1},\ldots ,v_{t-1}\}$
2. With probability $p$ stay at $v$; with probability $1-p$ take a 1-step walk to a random neighbor of $v$
3. Add an edge from $v_{t}$ to the current node
For directed graphs, the added edges are directed from $v_{t}$ into the existing graph; in the undirected version of the model, the same edges are added as undirected edges.
Model 2. (Random walks with coin flips)
At time $t$, vertex $v_{t}$ makes $k$ connections by $k$ iterations of the following steps:
1. Pick an existing node $v$ uniformly at random from $\{v_{0},v_{1},...,v_{t-1}\}$
2. Flip a coin of bias $p$
3. If the coin comes up heads add an edge from $v_{t}$ to the current node and stop
4. If the coin comes up tails, move to a random neighbor of the current node and go back to step 2
For directed graphs, the added edges are directed from $v_{t}$ into the existing graph; in the undirected version of the model, the same edges are added as undirected edges.
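As an illustration, the following Python sketch implements Model 2 for the undirected case. The function name and parameters are illustrative rather than taken from the sources cited here, and a bias $p>0$ is assumed so that each walk terminates.

```python
import random

def grow_graph(n, k, p, seed=None):
    """Grow an undirected random-surfer graph (Model 2 above).

    Starts from a single node 0 with k self-loops; each new node t makes
    k connections, each found by picking a uniform random existing node
    and walking to random neighbors until a coin of bias p comes up heads.
    Assumes 0 < p <= 1.
    """
    rng = random.Random(seed)
    adj = {0: [0] * k}                      # node v_0 with k self-loops
    for t in range(1, n):
        adj[t] = []
        for _ in range(k):
            v = rng.randrange(t)            # step 1: uniform existing node
            while rng.random() >= p:        # step 2: coin of bias p
                v = rng.choice(adj[v])      # tails: move to a random neighbor
            adj[t].append(v)                # heads: connect v_t to v
            adj[v].append(t)
    return adj

# Example: 1000 nodes, k = 2 connections per node, stopping bias p = 0.3
g = grow_graph(1000, 2, 0.3, seed=1)
```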
Limitations
There are some caveats to the standard random surfer model, one being that it ignores the content of the pages users select, since the model assumes links are chosen at random. Because users tend to have a goal in mind when surfing the internet, the content of the linked sites is a determining factor in whether or not the user will click a link.[1][2]
Application
The normalized eigenvector centrality, combined with the random surfer model's assumption of random jumps, formed the foundation of Google's PageRank algorithm.[2][3]
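PageRank itself can be sketched as a power iteration in which a damping factor $d$ plays the role of the probability of continuing to follow links, and $1-d$ the probability of a random jump. The following minimal Python sketch uses illustrative names and is not Google's production algorithm:

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank on a dict {page: [outlinks]}.

    With probability d the surfer follows a uniformly random outlink;
    with probability 1 - d they jump to a uniformly random page, which
    is the random surfer model's "switch to another site" step.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += d * share
            else:                 # dangling page: spread its rank uniformly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```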
See also
• Avrim Blum
• PageRank
• Webgraph
References
1. Blum, Avrim; Chan, T-H. Hubert; Rwebangira, Mugizi Robert (21 January 2006). "A Random-Surfer Web-Graph Model" (PDF). ANALCO '06: Proceedings of the Meeting on Analytic Algorithmics and Combinatorics. Philadelphia, PA: Society for Industrial and Applied Mathematics: 238–246.
2. Chebolu, Prasad; Melsted, Páll (1 January 2008). "PageRank and the random surfer model" (PDF). Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms. Department of Mathematical Sciences, Carnegie Mellon University: 1010–1018.
3. Zaki, Mohammed J.; Meira, Jr., Wagner (2014). Data Mining and Analysis: Fundamental Concepts and Algorithms. Cambridge University Press. ISBN 9780521766333.
External links
• Case study on random web surfers
• Data Mining and Analysis: Fundamental Concepts and Algorithms is freely available to download for personal use here
• Microsoft research on PageRank and the Random Surfer Model
• Paper on how Google web search implements PageRank to find relevant search results
Random variate
In probability and statistics, a random variate or simply variate is a particular outcome or realization of a random variable; other random variates drawn from the same random variable will typically have different values (random numbers).
A random deviate or simply deviate is the difference of a random variate with respect to the distribution central location (e.g., mean), often divided by the standard deviation of the distribution (i.e., as a standard score).[1]
Random variates are used when simulating processes driven by random influences (stochastic processes). In modern applications, such simulations derive random variates for any given probability distribution from computer procedures that produce uniformly distributed random variates; in practice, these procedures supply pseudorandom numbers that approximate a uniform distribution.
Procedures to generate random variates corresponding to a given distribution are known as procedures for (uniform) random number generation or non-uniform pseudo-random variate generation.
In probability theory, a random variable is a measurable function from a probability space to a measurable space of values that the variable can take on. In that context, those values are also known as random variates or random deviates, and this represents a wider meaning than just that associated with pseudorandom numbers.
Definition
Devroye[2] defines a random variate generation algorithm (for real numbers) as follows:
Assume that
1. Computers can manipulate real numbers.
2. Computers have access to a source of random variates that are uniformly distributed on the closed interval [0,1].
Then a random variate generation algorithm is any program that halts almost surely and exits with a real number x. This x is called a random variate.
(Both assumptions are violated in most real computers: computers cannot manipulate arbitrary real numbers, typically working with floating-point approximations, and most lack a source of true randomness (such as a hardware random number generator), using pseudorandom number sequences instead.)
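As a concrete instance of this definition, the following Python sketch generates an exponential random variate by inverting its cumulative distribution function. The function name is illustrative, and `random.random()` stands in for the assumed uniform source:

```python
import math
import random

def exponential_variate(lam):
    """Return one exponential(lam) random variate by inversion.

    Solves F(x) = 1 - exp(-lam * x) = u for a uniform [0, 1) draw u,
    giving x = -ln(1 - u) / lam; the program halts and exits with x,
    matching Devroye's definition of a variate-generation algorithm.
    """
    u = random.random()                  # the assumed uniform source
    return -math.log(1.0 - u) / lam

sample = [exponential_variate(2.0) for _ in range(100_000)]
print(sum(sample) / len(sample))         # ≈ 1 / lam = 0.5
```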
The distinction between random variable and random variate is subtle and is not always made in the literature. It is useful when one wants to distinguish between a random variable itself with an associated probability distribution on the one hand, and random draws from that probability distribution on the other, in particular when those draws are ultimately derived by floating-point arithmetic from a pseudo-random sequence.
Practical aspects
For the generation of uniform random variates, see Random number generation.
For the generation of non-uniform random variates, see Pseudo-random number sampling.
See also
• Deviation (statistics)
• Raw score
References
1. "Deviate: the value of a random variable measured from some standard point of location, usually the mean. It is often understood that the value is expressed in standard measure, i.e., as a proportion of the parent standard deviation." Y. Dodge (Ed.) The Oxford Dictionary of Statistical Terms,
2. Luc Devroye (1986). Non-Uniform Random Variate Generation. New York: Springer-Verlag, pp. 1–2. ("Non-Uniform Random Variate Generation". Archived from the original on 2009-05-05. Retrieved 2009-05-05.)
Random variable
A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events.[1] The term 'random variable' can be misleading as it is neither actually random nor a variable,[2] but rather a function from possible outcomes (e.g., the possible upper sides of a flipped coin, such as heads $H$ and tails $T$) in a sample space (e.g., the set $\{H,T\}$) to a measurable space (e.g., $\{-1,1\}$, in which 1 corresponds to $H$ and −1 corresponds to $T$), often to the real numbers.
Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error.[1] However, the interpretation of probability is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous axiomatic setup.
In the formal mathematical language of measure theory, a random variable is defined as a measurable function from a probability measure space (called the sample space) to a measurable space. This allows consideration of the pushforward measure, which is called the distribution of the random variable; the distribution is thus a probability measure on the set of all possible values of the random variable. It is possible for two random variables to have identical distributions but to differ in significant ways; for instance, they may be independent.
It is common to consider the special cases of discrete random variables and absolutely continuous random variables, corresponding to whether a random variable is valued in a discrete set (such as a finite set) or in an interval of real numbers. There are other important possibilities, especially in the theory of stochastic processes, wherein it is natural to consider random sequences or random functions. Sometimes a random variable is taken to be automatically valued in the real numbers, with more general random quantities instead being called random elements.
According to George Mackey, Pafnuty Chebyshev was the first person "to think systematically in terms of random variables".[3]
Definition
A random variable $X$ is a measurable function $X\colon \Omega \to E$ from a sample space $\Omega $ as a set of possible outcomes to a measurable space $E$. The technical axiomatic definition requires the sample space $\Omega $ to be a sample space of a probability triple $(\Omega ,{\mathcal {F}},\operatorname {P} )$ (see the measure-theoretic definition). A random variable is often denoted by capital Roman letters such as $X,Y,Z,T$.[4]
The probability that $X$ takes on a value in a measurable set $S\subseteq E$ is written as
$\operatorname {P} (X\in S)=\operatorname {P} (\{\omega \in \Omega \mid X(\omega )\in S\})$
Standard case
In many cases, $X$ is real-valued, i.e. $E=\mathbb {R} $. In some contexts, the term random element (see extensions) is used to denote a random variable not of this form.
When the image (or range) of $X$ is countable, the random variable is called a discrete random variable[5]: 399 and its distribution is a discrete probability distribution, i.e., it can be described by a probability mass function that assigns a probability to each value in the image of $X$. If the image is uncountably infinite (usually an interval) then $X$ is called a continuous random variable.[6][7] In the special case that it is absolutely continuous, its distribution can be described by a probability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous;[8] a mixture distribution is one such counterexample. Such random variables cannot be described by a probability density or a probability mass function.
Any random variable can be described by its cumulative distribution function, which describes the probability that the random variable will be less than or equal to a certain value.
Extensions
The term "random variable" in statistics is traditionally limited to the real-valued case ($E=\mathbb {R} $). In this case, the structure of the real numbers makes it possible to define quantities such as the expected value and variance of a random variable, its cumulative distribution function, and the moments of its distribution.
However, the definition above is valid for any measurable space $E$ of values. Thus one can consider random elements of other sets $E$, such as random boolean values, categorical values, complex numbers, vectors, matrices, sequences, trees, sets, shapes, manifolds, and functions. One may then specifically refer to a random variable of type $E$, or an $E$-valued random variable.
This more general concept of a random element is particularly useful in disciplines such as graph theory, machine learning, natural language processing, and other fields in discrete mathematics and computer science, where one is often interested in modeling the random variation of non-numerical data structures. In some cases, it is nonetheless convenient to represent each element of $E$, using one or more real numbers. In this case, a random element may optionally be represented as a vector of real-valued random variables (all defined on the same underlying probability space $\Omega $, which allows the different random variables to covary). For example:
• A random word may be represented as a random integer that serves as an index into the vocabulary of possible words. Alternatively, it can be represented as a random indicator vector, whose length equals the size of the vocabulary, where the only values of positive probability are $(1\ 0\ 0\ 0\ \cdots )$, $(0\ 1\ 0\ 0\ \cdots )$, $(0\ 0\ 1\ 0\ \cdots )$ and the position of the 1 indicates the word.
• A random sentence of given length $N$ may be represented as a vector of $N$ random words.
• A random graph on $N$ given vertices may be represented as a $N\times N$ matrix of random variables, whose values specify the adjacency matrix of the random graph.
• A random function $F$ may be represented as a collection of random variables $F(x)$, giving the function's values at the various points $x$ in the function's domain. The $F(x)$ are ordinary real-valued random variables provided that the function is real-valued. For example, a stochastic process is a random function of time, a random vector is a random function of some index set such as $1,2,\ldots ,n$, and random field is a random function on any set (typically time, space, or a discrete set).
Distribution functions
If a random variable $X\colon \Omega \to \mathbb {R} $ defined on the probability space $(\Omega ,{\mathcal {F}},\operatorname {P} )$ is given, we can ask questions like "How likely is it that the value of $X$ is equal to 2?". This is the same as the probability of the event $\{\omega :X(\omega )=2\}\,\!$ which is often written as $P(X=2)\,\!$ or $p_{X}(2)$ for short.
Recording all these probabilities of outputs of a random variable $X$ yields the probability distribution of $X$. The probability distribution "forgets" about the particular probability space used to define $X$ and only records the probabilities of various output values of $X$. Such a probability distribution, if $X$ is real-valued, can always be captured by its cumulative distribution function
$F_{X}(x)=\operatorname {P} (X\leq x)$
and sometimes also using a probability density function, $f_{X}$. In measure-theoretic terms, we use the random variable $X$ to "push-forward" the measure $P$ on $\Omega $ to a measure $p_{X}$ on $\mathbb {R} $. The measure $p_{X}$ is called the "(probability) distribution of $X$" or the "law of $X$".[9] The density is $f_{X}=dp_{X}/d\mu $, the Radon–Nikodym derivative of $p_{X}$ with respect to some reference measure $\mu $ on $\mathbb {R} $ (often this reference measure is the Lebesgue measure in the case of continuous random variables, or the counting measure in the case of discrete random variables). The underlying probability space $\Omega $ is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such as correlation and dependence or independence based on a joint distribution of two or more random variables on the same probability space. In practice, one often disposes of the space $\Omega $ altogether and just puts a measure on $\mathbb {R} $ that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables. See the article on quantile functions for fuller development.
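In computational practice, the cumulative distribution function of a real-valued random variable can be estimated from repeated draws. A minimal Python sketch with illustrative names:

```python
import random

def empirical_cdf(draws):
    """Return an estimate of F_X(x) = P(X <= x) from observed draws."""
    xs = sorted(draws)
    n = len(xs)
    def F(x):
        # fraction of draws at or below x (a bisection would be faster)
        return sum(1 for v in xs if v <= x) / n
    return F

F = empirical_cdf([random.uniform(0, 1) for _ in range(10_000)])
print(F(0.25), F(0.5))   # ≈ 0.25 and 0.5 for a uniform [0, 1] variable
```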
Examples
Discrete random variable
In an experiment a person may be chosen at random, and one random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to the person's height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190 cm, or the probability that the height is either less than 150 or more than 200 cm.
Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum $\operatorname {PMF} (0)+\operatorname {PMF} (2)+\operatorname {PMF} (4)+\cdots $.
In examples such as these, the sample space is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, for example so that questions of whether such random variables are correlated or not can be posed.
If $ \{a_{n}\},\{b_{n}\}$ are countable sets of real numbers, $ b_{n}>0$ and $ \sum _{n}b_{n}=1$, then $ F=\sum _{n}b_{n}\delta _{a_{n}}(x)$ is a discrete distribution function. Here $\delta _{t}(x)=0$ for $x<t$, $\delta _{t}(x)=1$ for $x\geq t$. Taking for instance an enumeration of all rational numbers as $\{a_{n}\}$, one gets a discrete distribution function that is not a step function (piecewise constant).
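A short Python sketch (illustrative, standard library only) of such a discrete distribution function built from atoms $a_{n}$ and weights $b_{n}$:

```python
def discrete_cdf(atoms, weights):
    """Return F(x) = sum of weights b_n over atoms a_n with a_n <= x."""
    pairs = sorted(zip(atoms, weights))
    def F(x):
        return sum(b for a, b in pairs if a <= x)
    return F

# Fair six-sided die: atoms 1..6, each of weight 1/6
F = discrete_cdf(range(1, 7), [1 / 6] * 6)
print(F(0), F(3.5), F(6))   # 0.0, 0.5, 1.0 (up to float rounding)
```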
Coin toss
The possible outcomes for one coin toss can be described by the sample space $\Omega =\{{\text{heads}},{\text{tails}}\}$. We can introduce a real-valued random variable $Y$ that models a $1 payoff for a successful bet on heads as follows:
$Y(\omega )={\begin{cases}1,&{\text{if }}\omega ={\text{heads}},\\[6pt]0,&{\text{if }}\omega ={\text{tails}}.\end{cases}}$
If the coin is a fair coin, Y has a probability mass function $f_{Y}$ given by:
$f_{Y}(y)={\begin{cases}{\tfrac {1}{2}},&{\text{if }}y=1,\\[6pt]{\tfrac {1}{2}},&{\text{if }}y=0,\end{cases}}$
Dice roll
A random variable can also be used to describe the process of rolling dice and the possible outcomes. The most obvious representation for the two-dice case is to take the set of pairs of numbers n1 and n2 from {1, 2, 3, 4, 5, 6} (representing the numbers on the two dice) as the sample space. The total number rolled (the sum of the numbers in each pair) is then a random variable X given by the function that maps the pair to the sum:
$X((n_{1},n_{2}))=n_{1}+n_{2}$
and (if the dice are fair) has a probability mass function fX given by:
$f_{X}(S)={\frac {\min(S-1,13-S)}{36}},{\text{ for }}S\in \{2,3,4,5,6,7,8,9,10,11,12\}$
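This probability mass function can be checked by brute-force enumeration of the 36 equally likely ordered pairs, as in the following Python sketch:

```python
from collections import Counter
from fractions import Fraction

# Tally the sums over all 36 equally likely ordered pairs of dice
counts = Counter(n1 + n2 for n1 in range(1, 7) for n2 in range(1, 7))
pmf = {s: Fraction(c, 36) for s, c in counts.items()}

# Agrees with f_X(S) = min(S - 1, 13 - S) / 36 for every possible sum
for s in range(2, 13):
    assert pmf[s] == Fraction(min(s - 1, 13 - s), 36)
print(pmf[7])   # 1/6
```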
Continuous random variable
Formally, a continuous random variable is a random variable whose cumulative distribution function is continuous everywhere.[10] There are no "gaps", which would correspond to numbers which have a finite probability of occurring. Instead, continuous random variables almost never take an exact prescribed value c (formally, $\forall c\in \mathbb {R} :\;\Pr(X=c)=0$) but there is a positive probability that its value will lie in particular intervals which can be arbitrarily small. Continuous random variables usually admit probability density functions (PDF), which characterize their CDF and probability measures; such distributions are also called absolutely continuous; but some continuous distributions are singular, or mixes of an absolutely continuous part and a singular part.
An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. Then the values taken by the random variable are directions. We could represent these directions by North, West, East, South, Southeast, etc. However, it is commonly more convenient to map the sample space to a random variable which takes values which are real numbers. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values which are real numbers from the interval [0, 360), with all parts of the range being "equally likely". In this case, X = the angle spun. Any real number has probability zero of being selected, but a positive probability can be assigned to any range of values. For example, the probability of choosing a number in [0, 180] is 1⁄2. Instead of speaking of a probability mass function, we say that the probability density of X is 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360. In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set.
More formally, given any interval $ I=[a,b]=\{x\in \mathbb {R} :a\leq x\leq b\}$, a random variable $X_{I}\sim \operatorname {U} (I)=\operatorname {U} [a,b]$ is called a "continuous uniform random variable" (CURV) if the probability that it takes a value in a subinterval depends only on the length of the subinterval. This implies that the probability of $X_{I}$ falling in any subinterval $[c,d]\subseteq [a,b]$ is proportional to the length of the subinterval, that is, if a ≤ c ≤ d ≤ b, one has
$\Pr \left(X_{I}\in [c,d]\right)={\frac {d-c}{b-a}}$
where the last equality results from the unitarity axiom of probability. The probability density function of a CURV $X\sim \operatorname {U} [a,b]$ is given by the indicator function of its interval of support normalized by the interval's length:
$f_{X}(x)={\begin{cases}\displaystyle {1 \over b-a},&a\leq x\leq b\\0,&{\text{otherwise}}.\end{cases}}$
Of particular interest is the uniform distribution on the unit interval $[0,1]$. Samples of any desired probability distribution $\operatorname {D} $ can be generated by calculating the quantile function of $\operatorname {D} $ on a randomly-generated number distributed uniformly on the unit interval. This exploits properties of cumulative distribution functions, which are a unifying framework for all random variables.
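A minimal Python sketch of this inverse-transform idea, using the standard library's normal quantile function (`statistics.NormalDist.inv_cdf`):

```python
import random
from statistics import NormalDist, fmean

def normal_variate():
    """Standard normal variate via the quantile function (inverse CDF)."""
    u = random.random()
    while u == 0.0:                    # inv_cdf needs p strictly in (0, 1)
        u = random.random()
    return NormalDist().inv_cdf(u)

normals = [normal_variate() for _ in range(100_000)]
print(fmean(normals))                  # ≈ 0 for a standard normal
```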
Mixed type
A mixed random variable is a random variable whose cumulative distribution function is neither discrete nor everywhere-continuous.[10] It can be realized as a mixture of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables.[10]
An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = −1; otherwise X = the value of the spinner as in the preceding example. There is a probability of 1⁄2 that this random variable will have the value −1. Other ranges of values would have half the probabilities of the last example.
Most generally, every probability distribution on the real line is a mixture of a discrete part, a singular part, and an absolutely continuous part; see Lebesgue's decomposition theorem § Refinement. The discrete part is concentrated on a countable set, but this set may be dense (like the set of all rational numbers).
Measure-theoretic definition
The most formal, axiomatic definition of a random variable involves measure theory. Continuous random variables are defined in terms of sets of numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. the Banach–Tarski paradox) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed a sigma-algebra to constrain the possible sets over which probabilities can be defined. Normally, a particular such sigma-algebra is used, the Borel σ-algebra, which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite or countably infinite number of unions and/or intersections of such intervals.[11]
The measure-theoretic definition is as follows.
Let $(\Omega ,{\mathcal {F}},P)$ be a probability space and $(E,{\mathcal {E}})$ a measurable space. Then an $(E,{\mathcal {E}})$-valued random variable is a measurable function $X\colon \Omega \to E$, which means that, for every subset $B\in {\mathcal {E}}$, its preimage is ${\mathcal {F}}$-measurable; $X^{-1}(B)\in {\mathcal {F}}$, where $X^{-1}(B)=\{\omega :X(\omega )\in B\}$.[12] This definition enables us to measure any subset $B\in {\mathcal {E}}$ in the target space by looking at its preimage, which by assumption is measurable.
In more intuitive terms, a member of $\Omega $ is a possible outcome, a member of ${\mathcal {F}}$ is a measurable subset of possible outcomes, the function $P$ gives the probability of each such measurable subset, $E$ represents the set of values that the random variable can take (such as the set of real numbers), and a member of ${\mathcal {E}}$ is a "well-behaved" (measurable) subset of $E$ (those for which the probability may be determined). The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability.
When $E$ is a topological space, then the most common choice for the σ-algebra ${\mathcal {E}}$ is the Borel σ-algebra ${\mathcal {B}}(E)$, which is the σ-algebra generated by the collection of all open sets in $E$. In such case the $(E,{\mathcal {E}})$-valued random variable is called an $E$-valued random variable. Moreover, when the space $E$ is the real line $\mathbb {R} $, then such a real-valued random variable is called simply a random variable.
Real-valued random variables
In this case the observation space is the set of real numbers. Recall, $(\Omega ,{\mathcal {F}},P)$ is the probability space. For a real observation space, the function $X\colon \Omega \rightarrow \mathbb {R} $ is a real-valued random variable if
$\{\omega :X(\omega )\leq r\}\in {\mathcal {F}}\qquad \forall r\in \mathbb {R} .$
This definition is a special case of the above because the set $\{(-\infty ,r]:r\in \mathbb {R} \}$ generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that $\{\omega :X(\omega )\leq r\}=X^{-1}((-\infty ,r])$.
Moments
The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of expected value of a random variable, denoted $\operatorname {E} [X]$, and also called the first moment. In general, $\operatorname {E} [f(X)]$ is not equal to $f(\operatorname {E} [X])$. Once the "average value" is known, one could then ask how far from this average value the values of $X$ typically are, a question that is answered by the variance and standard deviation of a random variable. $\operatorname {E} [X]$ can be viewed intuitively as an average obtained from an infinite population, the members of which are particular evaluations of $X$.
Mathematically, this is known as the (generalised) problem of moments: for a given class of random variables $X$, find a collection $\{f_{i}\}$ of functions such that the expectation values $\operatorname {E} [f_{i}(X)]$ fully characterise the distribution of the random variable $X$.
Moments can only be defined for real-valued functions of random variables (or complex-valued, etc.). If the random variable is itself real-valued, then moments of the variable itself can be taken, which are equivalent to moments of the identity function $f(X)=X$ of the random variable. However, even for non-real-valued random variables, moments can be taken of real-valued functions of those variables. For example, for a categorical random variable X that can take on the nominal values "red", "blue" or "green", the real-valued function $[X={\text{green}}]$ can be constructed; this uses the Iverson bracket, and has the value 1 if $X$ has the value "green", 0 otherwise. Then, the expected value and other moments of this function can be determined.
Functions of random variables
A new random variable Y can be defined by applying a real Borel measurable function $g\colon \mathbb {R} \rightarrow \mathbb {R} $ to the outcomes of a real-valued random variable $X$. That is, $Y=g(X)$. The cumulative distribution function of $Y$ is then
$F_{Y}(y)=\operatorname {P} (g(X)\leq y).$
If function $g$ is invertible (i.e., $h=g^{-1}$ exists, where $h$ is $g$'s inverse function) and is either increasing or decreasing, then the previous relation can be extended to obtain
$F_{Y}(y)=\operatorname {P} (g(X)\leq y)={\begin{cases}\operatorname {P} (X\leq h(y))=F_{X}(h(y)),&{\text{if }}h=g^{-1}{\text{ increasing}},\\\\\operatorname {P} (X\geq h(y))=1-F_{X}(h(y)),&{\text{if }}h=g^{-1}{\text{ decreasing}}.\end{cases}}$
With the same hypotheses of invertibility of $g$, assuming also differentiability, the relation between the probability density functions can be found by differentiating both sides of the above expression with respect to $y$, in order to obtain[10]
$f_{Y}(y)=f_{X}{\bigl (}h(y){\bigr )}\left|{\frac {dh(y)}{dy}}\right|.$
If there is no invertibility of $g$ but each $y$ admits at most a countable number of roots (i.e., a finite, or countably infinite, number of $x_{i}$ such that $y=g(x_{i})$) then the previous relation between the probability density functions can be generalized with
$f_{Y}(y)=\sum _{i}f_{X}(g_{i}^{-1}(y))\left|{\frac {dg_{i}^{-1}(y)}{dy}}\right|$
where $x_{i}=g_{i}^{-1}(y)$, according to the inverse function theorem. The formulas for densities do not demand $g$ to be increasing.
In the measure-theoretic, axiomatic approach to probability, if $X$ is a random variable on $\Omega $ and $g\colon \mathbb {R} \rightarrow \mathbb {R} $ is a Borel measurable function, then $Y=g(X)$ is also a random variable on $\Omega $, since the composition of measurable functions is also measurable. (However, this is not necessarily true if $g$ is merely Lebesgue measurable.) The same procedure that allowed one to go from a probability space $(\Omega ,P)$ to $(\mathbb {R} ,dF_{X})$ can be used to obtain the distribution of $Y$.
Example 1
Let $X$ be a real-valued, continuous random variable and let $Y=X^{2}$.
$F_{Y}(y)=\operatorname {P} (X^{2}\leq y).$
If $y<0$, then $P(X^{2}\leq y)=0$, so
$F_{Y}(y)=0\qquad {\hbox{if}}\quad y<0.$
If $y\geq 0$, then
$\operatorname {P} (X^{2}\leq y)=\operatorname {P} (|X|\leq {\sqrt {y}})=\operatorname {P} (-{\sqrt {y}}\leq X\leq {\sqrt {y}}),$
so
$F_{Y}(y)=F_{X}({\sqrt {y}})-F_{X}(-{\sqrt {y}})\qquad {\hbox{if}}\quad y\geq 0.$
Example 2
Suppose $X$ is a random variable with a cumulative distribution
$F_{X}(x)=P(X\leq x)={\frac {1}{(1+e^{-x})^{\theta }}}$
where $\theta >0$ is a fixed parameter. Consider the random variable $Y=\mathrm {log} (1+e^{-X}).$ Then,
$F_{Y}(y)=P(Y\leq y)=P(\mathrm {log} (1+e^{-X})\leq y)=P(X\geq -\mathrm {log} (e^{y}-1)).\,$
The last expression can be calculated in terms of the cumulative distribution of $X,$ so
${\begin{aligned}F_{Y}(y)&=1-F_{X}(-\log(e^{y}-1))\\[5pt]&=1-{\frac {1}{(1+e^{\log(e^{y}-1)})^{\theta }}}\\[5pt]&=1-{\frac {1}{(1+e^{y}-1)^{\theta }}}\\[5pt]&=1-e^{-y\theta }.\end{aligned}}$
which is the cumulative distribution function (CDF) of an exponential distribution.
Example 3
Suppose $X$ is a random variable with a standard normal distribution, whose density is
$f_{X}(x)={\frac {1}{\sqrt {2\pi }}}e^{-x^{2}/2}.$
Consider the random variable $Y=X^{2}.$ We can find the density using the above formula for a change of variables:
$f_{Y}(y)=\sum _{i}f_{X}(g_{i}^{-1}(y))\left|{\frac {dg_{i}^{-1}(y)}{dy}}\right|.$
In this case the change is not monotonic, because every value of $Y$ has two corresponding values of $X$ (one positive and one negative). However, because of symmetry, both halves will transform identically, i.e.,
$f_{Y}(y)=2f_{X}(g^{-1}(y))\left|{\frac {dg^{-1}(y)}{dy}}\right|.$
The inverse transformation is
$x=g^{-1}(y)={\sqrt {y}}$
and its derivative is
${\frac {dg^{-1}(y)}{dy}}={\frac {1}{2{\sqrt {y}}}}.$
Then,
$f_{Y}(y)=2{\frac {1}{\sqrt {2\pi }}}e^{-y/2}{\frac {1}{2{\sqrt {y}}}}={\frac {1}{\sqrt {2\pi y}}}e^{-y/2}.$
This is a chi-squared distribution with one degree of freedom.
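This result can be checked by simulation. The following Python sketch squares standard normal draws and compares the sample mean and a small bin probability with the chi-squared predictions; the printed values are Monte Carlo estimates, so they are approximate:

```python
import math
import random

# Square 100,000 standard normal draws: Y = X^2 is chi-squared(1)
ys = [random.gauss(0.0, 1.0) ** 2 for _ in range(100_000)]

print(sum(ys) / len(ys))                 # ≈ 1, the chi-squared(1) mean

def density(y):
    """Chi-squared(1) density derived above: exp(-y/2) / sqrt(2*pi*y)."""
    return math.exp(-y / 2) / math.sqrt(2 * math.pi * y)

frac = sum(0.9 <= y <= 1.1 for y in ys) / len(ys)
print(frac, 0.2 * density(1.0))          # both ≈ 0.048
```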
Example 4
Suppose $X$ is a random variable with a normal distribution, whose density is
$f_{X}(x)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-(x-\mu )^{2}/(2\sigma ^{2})}.$
Consider the random variable $Y=X^{2}.$ We can find the density using the above formula for a change of variables:
$f_{Y}(y)=\sum _{i}f_{X}(g_{i}^{-1}(y))\left|{\frac {dg_{i}^{-1}(y)}{dy}}\right|.$
In this case the change is not monotonic, because every value of $Y$ has two corresponding values of $X$ (one positive and one negative). Unlike the previous example, however, there is no symmetry here, and we have to compute the two distinct terms:
$f_{Y}(y)=f_{X}(g_{1}^{-1}(y))\left|{\frac {dg_{1}^{-1}(y)}{dy}}\right|+f_{X}(g_{2}^{-1}(y))\left|{\frac {dg_{2}^{-1}(y)}{dy}}\right|.$
The inverse transformation is
$x=g_{1,2}^{-1}(y)=\pm {\sqrt {y}}$
and its derivative is
${\frac {dg_{1,2}^{-1}(y)}{dy}}=\pm {\frac {1}{2{\sqrt {y}}}}.$
Then,
$f_{Y}(y)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}{\frac {1}{2{\sqrt {y}}}}(e^{-({\sqrt {y}}-\mu )^{2}/(2\sigma ^{2})}+e^{-(-{\sqrt {y}}-\mu )^{2}/(2\sigma ^{2})}).$
This is a noncentral chi-squared distribution with one degree of freedom.
Some properties
• The probability distribution of the sum of two independent random variables is the convolution of each of their distributions.
• Probability distributions are not a vector space—they are not closed under linear combinations, as these do not preserve non-negativity or total integral 1—but they are closed under convex combination, thus forming a convex subset of the space of functions (or measures).
Equivalence of random variables
There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, or equal in distribution.
In increasing order of strength, the precise definition of these notions of equivalence is given below.
Equality in distribution
If the sample space is a subset of the real line, random variables X and Y are equal in distribution (denoted $X{\stackrel {d}{=}}Y$) if they have the same distribution functions:
$\operatorname {P} (X\leq x)=\operatorname {P} (Y\leq x)\quad {\text{for all }}x.$
To be equal in distribution, random variables need not be defined on the same probability space. Two random variables having equal moment generating functions have the same distribution. This provides, for example, a useful method of checking equality of certain functions of independent, identically distributed (IID) random variables. However, the moment generating function exists only for distributions that have a defined Laplace transform.
Almost sure equality
Two random variables X and Y are equal almost surely (denoted $X\;{\stackrel {\text{a.s.}}{=}}\;Y$) if, and only if, the probability that they are different is zero:
$\operatorname {P} (X\neq Y)=0.$
For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance:
$d_{\infty }(X,Y)=\operatorname {ess} \sup _{\omega }|X(\omega )-Y(\omega )|,$
where "ess sup" represents the essential supremum in the sense of measure theory.
Equality
Finally, the two random variables X and Y are equal if they are equal as functions on their measurable space:
$X(\omega )=Y(\omega )\qquad {\hbox{for all }}\omega .$
This notion is typically the least useful in probability theory because in practice and in theory, the underlying measure space of the experiment is rarely explicitly characterized or even characterizable.
Convergence
Main article: Convergence of random variables
A significant theme in mathematical statistics consists of obtaining convergence results for certain sequences of random variables; for instance the law of large numbers and the central limit theorem.
There are various senses in which a sequence $X_{n}$ of random variables can converge to a random variable $X$. These are explained in the article on convergence of random variables.
See also
• Aleatoricism
• Algebra of random variables
• Event (probability theory)
• Multivariate random variable
• Pairwise independent random variables
• Observable variable
• Random compact set
• Random element
• Random function
• Random measure
• Random number generator
• Random variate
• Random vector
• Randomness
• Stochastic process
• Relationships among probability distributions
References
Inline citations
1. Blitzstein, Joe; Hwang, Jessica (2014). Introduction to Probability. CRC Press. ISBN 9781466575592.
2. Deisenroth, Marc Peter; Faisal, A. Aldo; Ong, Cheng Soon (2020). Mathematics for Machine Learning. Cambridge: Cambridge University Press. ISBN 978-1-108-47004-9. OCLC 1104219401.
3. George Mackey (July 1980). "Harmonic analysis as the exploitation of symmetry – a historical survey". Bulletin of the American Mathematical Society. New Series. 3 (1).
4. "Random Variables". www.mathsisfun.com. Retrieved 2020-08-21.
5. Yates, Daniel S.; Moore, David S; Starnes, Daren S. (2003). The Practice of Statistics (2nd ed.). New York: Freeman. ISBN 978-0-7167-4773-4. Archived from the original on 2005-02-09.
6. "Random Variables". www.stat.yale.edu. Retrieved 2020-08-21.
7. Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuhaä, Hendrik Paul; Meester, Ludolf Erwin (2005). "A Modern Introduction to Probability and Statistics". Springer Texts in Statistics. doi:10.1007/1-84628-168-7. ISBN 978-1-85233-896-1. ISSN 1431-875X.
8. L. Castañeda; V. Arunachalam & S. Dharmaraja (2012). Introduction to Probability and Stochastic Processes with Applications. Wiley. p. 67. ISBN 9781118344941.
9. Billingsley, Patrick (1995). Probability and Measure (3rd ed.). Wiley. p. 187.
10. Bertsekas, Dimitri P. (2002). Introduction to Probability. Tsitsiklis, John N., Τσιτσικλής, Γιάννης Ν. Belmont, Mass.: Athena Scientific. ISBN 188652940X. OCLC 51441829.
11. Steigerwald, Douglas G. "Economics 245A – Introduction to Measure Theory" (PDF). University of California, Santa Barbara. Retrieved April 26, 2013.
12. Fristedt & Gray (1996, page 11)
Literature
• Fristedt, Bert; Gray, Lawrence (1996). A modern approach to probability theory. Boston: Birkhäuser. ISBN 3-7643-3807-5.
• Billingsley, Patrick (1995). Probability and Measure. New York: Wiley. ISBN 8126517719.
• Kallenberg, Olav (1986). Random Measures (4th ed.). Berlin: Akademie Verlag. ISBN 0-12-394960-2. MR 0854102.
• Kallenberg, Olav (2001). Foundations of Modern Probability (2nd ed.). Berlin: Springer Verlag. ISBN 0-387-95313-2.
• Papoulis, Athanasios (1965). Probability, Random Variables, and Stochastic Processes (9th ed.). Tokyo: McGraw–Hill. ISBN 0-07-119981-0.
External links
• "Random variable", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Zukerman, Moshe (2014), Introduction to Queueing Theory and Stochastic Teletraffic Models (PDF), arXiv:1307.2968
• Zukerman, Moshe (2014), Basic Probability Topics (PDF)
Multivariate random variable
In probability and statistics, a multivariate random variable or random vector is a list or vector of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. The individual variables in a random vector are grouped together because they are all part of a single mathematical system; often they represent different properties of an individual statistical unit. For example, while a given person has a specific age, height and weight, the representation of these features of an unspecified person from within a group would be a random vector. Normally each element of a random vector is a real number.
Random vectors are often used as the underlying implementation of various types of aggregate random variables, e.g. a random matrix, random tree, random sequence, stochastic process, etc.
More formally, a multivariate random variable is a column vector $\mathbf {X} =(X_{1},\dots ,X_{n})^{\mathsf {T}}$ (or its transpose, which is a row vector) whose components are scalar-valued random variables on the same probability space as each other, $(\Omega ,{\mathcal {F}},P)$, where $\Omega $ is the sample space, ${\mathcal {F}}$ is the sigma-algebra (the collection of all events), and $P$ is the probability measure (a function returning each event's probability).
Probability distribution
Every random vector gives rise to a probability measure on $\mathbb {R} ^{n}$ with the Borel algebra as the underlying sigma-algebra. This measure is also known as the joint probability distribution, the joint distribution, or the multivariate distribution of the random vector.
The distributions of each of the component random variables $X_{i}$ are called marginal distributions. The conditional probability distribution of $X_{i}$ given $X_{j}$ is the probability distribution of $X_{i}$ when $X_{j}$ is known to be a particular value.
The cumulative distribution function $F_{\mathbf {X} }:\mathbb {R} ^{n}\mapsto [0,1]$ of a random vector $\mathbf {X} =(X_{1},\dots ,X_{n})^{\mathsf {T}}$ is defined as[1]: p.15
$F_{\mathbf {X} }(\mathbf {x} )=\operatorname {P} (X_{1}\leq x_{1},\ldots ,X_{n}\leq x_{n})$
(Eq.1)
where $\mathbf {x} =(x_{1},\dots ,x_{n})^{\mathsf {T}}$.
Operations on random vectors
Random vectors can be subjected to the same kinds of algebraic operations as can non-random vectors: addition, subtraction, multiplication by a scalar, and the taking of inner products.
Affine transformations
Similarly, a new random vector $\mathbf {Y} $ can be defined by applying an affine transformation $g\colon \mathbb {R} ^{n}\to \mathbb {R} ^{n}$ to a random vector $\mathbf {X} $:
$\mathbf {Y} =\mathbf {A} \mathbf {X} +b$, where $\mathbf {A} $ is an $n\times n$ matrix and $b$ is an $n\times 1$ column vector.
If $\mathbf {A} $ is an invertible matrix and $\textstyle \mathbf {X} $ has a probability density function $f_{\mathbf {X} }$, then the probability density of $\mathbf {Y} $ is
$f_{\mathbf {Y} }(y)={\frac {f_{\mathbf {X} }(\mathbf {A} ^{-1}(y-b))}{|\det \mathbf {A} |}}$.
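A short sketch of this change-of-variables formula, assuming NumPy and SciPy are available; the particular $\mathbf {A} $, $b$ and test point are arbitrary. Taking $\mathbf {X} $ standard normal gives both sides closed forms, since then $\mathbf {Y} =\mathbf {A} \mathbf {X} +b$ is normal with mean $b$ and covariance $\mathbf {A} \mathbf {A} ^{T}$:

```python
import numpy as np
from scipy.stats import multivariate_normal

A = np.array([[2.0, 1.0], [0.0, 1.0]])   # an invertible 2x2 matrix
b = np.array([1.0, -1.0])

y = np.array([1.0, 0.0])                 # test point
x = np.linalg.solve(A, y - b)            # x = A^{-1}(y - b)

# Left side: the change-of-variables formula with f_X the standard normal density
f_Y_formula = multivariate_normal(mean=[0.0, 0.0]).pdf(x) / abs(np.linalg.det(A))
# Right side: the density of Y ~ N(b, A A^T) evaluated directly
f_Y_direct = multivariate_normal(mean=b, cov=A @ A.T).pdf(y)
print(f_Y_formula, f_Y_direct)           # the two values agree
```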
Invertible mappings
More generally we can study invertible mappings of random vectors.[2]: p.290–291
Let $g$ be a one-to-one mapping from an open subset ${\mathcal {D}}$ of $\mathbb {R} ^{n}$ onto a subset ${\mathcal {R}}$ of $\mathbb {R} ^{n}$, let $g$ have continuous partial derivatives in ${\mathcal {D}}$ and let the Jacobian determinant of $g$ be zero at no point of ${\mathcal {D}}$. Assume that the real random vector $\mathbf {X} $ has a probability density function $f_{\mathbf {X} }(\mathbf {x} )$ and satisfies $P(\mathbf {X} \in {\mathcal {D}})=1$. Then the random vector $\mathbf {Y} =g(\mathbf {X} )$ is of probability density
$\left.f_{\mathbf {Y} }(\mathbf {y} )={\frac {f_{\mathbf {X} }(\mathbf {z} )}{\left|\det {\frac {\partial \mathbf {z} }{\partial \mathbf {y} }}\right|}}\right|_{\mathbf {z} =g^{-1}(\mathbf {y} )}\mathbf {1} (\mathbf {y} \in R_{\mathbf {Y} })$
where $\mathbf {1} $ denotes the indicator function and the set $R_{\mathbf {Y} }=\{\mathbf {y} =g(\mathbf {x} ):f_{\mathbf {X} }(\mathbf {x} )>0\}\subseteq {\mathcal {R}}$ denotes the support of $\mathbf {Y} $.
Expected value
The expected value or mean of a random vector $\mathbf {X} $ is a fixed vector $\operatorname {E} [\mathbf {X} ]$ whose elements are the expected values of the respective random variables.[3]: p.333
$\operatorname {E} [\mathbf {X} ]=(\operatorname {E} [X_{1}],...,\operatorname {E} [X_{n}])^{\mathrm {T} }$
(Eq.2)
Covariance and cross-covariance
Definitions
The covariance matrix (also called second central moment or variance-covariance matrix) of an $n\times 1$ random vector is an $n\times n$ matrix whose (i,j)th element is the covariance between the i th and the j th random variables. The covariance matrix is the expected value, element by element, of the $n\times n$ matrix computed as $[\mathbf {X} -\operatorname {E} [\mathbf {X} ]][\mathbf {X} -\operatorname {E} [\mathbf {X} ]]^{T}$, where the superscript T refers to the transpose of the indicated vector:[2]: p. 464 [3]: p.335
$\operatorname {K} _{\mathbf {X} \mathbf {X} }=\operatorname {Var} [\mathbf {X} ]=\operatorname {E} [(\mathbf {X} -\operatorname {E} [\mathbf {X} ])(\mathbf {X} -\operatorname {E} [\mathbf {X} ])^{T}]=\operatorname {E} [\mathbf {X} \mathbf {X} ^{T}]-\operatorname {E} [\mathbf {X} ]\operatorname {E} [\mathbf {X} ]^{T}$
(Eq.3)
By extension, the cross-covariance matrix between two random vectors $\mathbf {X} $ and $\mathbf {Y} $ ($\mathbf {X} $ having $n$ elements and $\mathbf {Y} $ having $p$ elements) is the $n\times p$ matrix[3]: p.336
$\operatorname {K} _{\mathbf {X} \mathbf {Y} }=\operatorname {Cov} [\mathbf {X} ,\mathbf {Y} ]=\operatorname {E} [(\mathbf {X} -\operatorname {E} [\mathbf {X} ])(\mathbf {Y} -\operatorname {E} [\mathbf {Y} ])^{T}]=\operatorname {E} [\mathbf {X} \mathbf {Y} ^{T}]-\operatorname {E} [\mathbf {X} ]\operatorname {E} [\mathbf {Y} ]^{T}$
(Eq.4)
where again the matrix expectation is taken element-by-element in the matrix. Here the (i,j)th element is the covariance between the i th element of $\mathbf {X} $ and the j th element of $\mathbf {Y} $.
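A minimal NumPy sketch of estimating Eq.3 and Eq.4 from samples (all data here is synthetic and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50_000
X = rng.standard_normal((N, 3))                   # n = 3 components
Y = X[:, :2] + 0.1 * rng.standard_normal((N, 2))  # p = 2, correlated with X

K_XX = np.cov(X, rowvar=False)      # n x n covariance matrix, as in Eq.3
Xc = X - X.mean(axis=0)             # centred samples
Yc = Y - Y.mean(axis=0)
K_XY = Xc.T @ Yc / (N - 1)          # n x p cross-covariance matrix, as in Eq.4
print(K_XX.shape, K_XY.shape)       # (3, 3) (3, 2)
```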
Properties
The covariance matrix is a symmetric matrix, i.e.[2]: p. 466
$\operatorname {K} _{\mathbf {X} \mathbf {X} }^{T}=\operatorname {K} _{\mathbf {X} \mathbf {X} }$.
The covariance matrix is a positive semidefinite matrix, i.e.[2]: p. 465
$\mathbf {a} ^{T}\operatorname {K} _{\mathbf {X} \mathbf {X} }\mathbf {a} \geq 0\quad {\text{for all }}\mathbf {a} \in \mathbb {R} ^{n}$.
The cross-covariance matrix $\operatorname {Cov} [\mathbf {Y} ,\mathbf {X} ]$ is simply the transpose of the matrix $\operatorname {Cov} [\mathbf {X} ,\mathbf {Y} ]$, i.e.
$\operatorname {K} _{\mathbf {Y} \mathbf {X} }=\operatorname {K} _{\mathbf {X} \mathbf {Y} }^{T}$.
Uncorrelatedness
Two random vectors $\mathbf {X} =(X_{1},...,X_{m})^{T}$ and $\mathbf {Y} =(Y_{1},...,Y_{n})^{T}$ are called uncorrelated if
$\operatorname {E} [\mathbf {X} \mathbf {Y} ^{T}]=\operatorname {E} [\mathbf {X} ]\operatorname {E} [\mathbf {Y} ]^{T}$.
They are uncorrelated if and only if their cross-covariance matrix $\operatorname {K} _{\mathbf {X} \mathbf {Y} }$ is zero.[3]: p.337
Correlation and cross-correlation
Definitions
The correlation matrix (also called second moment) of an $n\times 1$ random vector is an $n\times n$ matrix whose (i,j)th element is the correlation between the i th and the j th random variables. The correlation matrix is the expected value, element by element, of the $n\times n$ matrix computed as $\mathbf {X} \mathbf {X} ^{T}$, where the superscript T refers to the transpose of the indicated vector:[4]: p.190 [3]: p.334
$\operatorname {R} _{\mathbf {X} \mathbf {X} }=\operatorname {E} [\mathbf {X} \mathbf {X} ^{\mathrm {T} }]$
(Eq.5)
By extension, the cross-correlation matrix between two random vectors $\mathbf {X} $ and $\mathbf {Y} $ ($\mathbf {X} $ having $n$ elements and $\mathbf {Y} $ having $p$ elements) is the $n\times p$ matrix
$\operatorname {R} _{\mathbf {X} \mathbf {Y} }=\operatorname {E} [\mathbf {X} \mathbf {Y} ^{T}]$
(Eq.6)
Properties
The correlation matrix is related to the covariance matrix by
$\operatorname {R} _{\mathbf {X} \mathbf {X} }=\operatorname {K} _{\mathbf {X} \mathbf {X} }+\operatorname {E} [\mathbf {X} ]\operatorname {E} [\mathbf {X} ]^{T}$.
Similarly for the cross-correlation matrix and the cross-covariance matrix:
$\operatorname {R} _{\mathbf {X} \mathbf {Y} }=\operatorname {K} _{\mathbf {X} \mathbf {Y} }+\operatorname {E} [\mathbf {X} ]\operatorname {E} [\mathbf {Y} ]^{T}$
Orthogonality
Two random vectors of the same size $\mathbf {X} =(X_{1},...,X_{n})^{T}$ and $\mathbf {Y} =(Y_{1},...,Y_{n})^{T}$ are called orthogonal if
$\operatorname {E} [\mathbf {X} ^{T}\mathbf {Y} ]=0$.
Independence
Main article: Independence (probability theory)
Two random vectors $\mathbf {X} $ and $\mathbf {Y} $ are called independent if for all $\mathbf {x} $ and $\mathbf {y} $
$F_{\mathbf {X,Y} }(\mathbf {x,y} )=F_{\mathbf {X} }(\mathbf {x} )\cdot F_{\mathbf {Y} }(\mathbf {y} )$
where $F_{\mathbf {X} }(\mathbf {x} )$ and $F_{\mathbf {Y} }(\mathbf {y} )$ denote the cumulative distribution functions of $\mathbf {X} $ and $\mathbf {Y} $ and $F_{\mathbf {X,Y} }(\mathbf {x,y} )$ denotes their joint cumulative distribution function. Independence of $\mathbf {X} $ and $\mathbf {Y} $ is often denoted by $\mathbf {X} \perp \!\!\!\perp \mathbf {Y} $. Written component-wise, $\mathbf {X} $ and $\mathbf {Y} $ are called independent if for all $x_{1},\ldots ,x_{m},y_{1},\ldots ,y_{n}$
$F_{X_{1},\ldots ,X_{m},Y_{1},\ldots ,Y_{n}}(x_{1},\ldots ,x_{m},y_{1},\ldots ,y_{n})=F_{X_{1},\ldots ,X_{m}}(x_{1},\ldots ,x_{m})\cdot F_{Y_{1},\ldots ,Y_{n}}(y_{1},\ldots ,y_{n})$.
Characteristic function
The characteristic function of a random vector $\mathbf {X} $ with $n$ components is a function $\mathbb {R} ^{n}\to \mathbb {C} $ that maps every vector $\mathbf {\omega } =(\omega _{1},\ldots ,\omega _{n})^{T}$ to a complex number. It is defined by[2]: p. 468
$\varphi _{\mathbf {X} }(\mathbf {\omega } )=\operatorname {E} \left[e^{i(\mathbf {\omega } ^{T}\mathbf {X} )}\right]=\operatorname {E} \left[e^{i(\omega _{1}X_{1}+\ldots +\omega _{n}X_{n})}\right]$.
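The defining expectation can be approximated by a sample mean. A minimal NumPy sketch, using a standard normal vector so that the exact value $e^{-\|\omega \|^{2}/2}$ is available for comparison:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((100_000, 2))      # samples of X ~ N(0, I)

def empirical_cf(X, omega):
    """Monte Carlo estimate of E[exp(i * omega^T X)]."""
    return np.mean(np.exp(1j * X @ np.asarray(omega)))

omega = np.array([0.5, -1.0])
# For X ~ N(0, I) the exact characteristic function is exp(-|omega|^2 / 2)
print(empirical_cf(X, omega), np.exp(-omega @ omega / 2))
```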
Further properties
Expectation of a quadratic form
One can take the expectation of a quadratic form in the random vector $\mathbf {X} $ as follows:[5]: p.170–171
$\operatorname {E} [\mathbf {X} ^{T}A\mathbf {X} ]=\operatorname {E} [\mathbf {X} ]^{T}A\operatorname {E} [\mathbf {X} ]+\operatorname {tr} (AK_{\mathbf {X} \mathbf {X} }),$
where $K_{\mathbf {X} \mathbf {X} }$ is the covariance matrix of $\mathbf {X} $ and $\operatorname {tr} $ refers to the trace of a matrix — that is, to the sum of the elements on its main diagonal (from upper left to lower right). Since the quadratic form is a scalar, so is its expectation.
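Before the formal proof below, the identity is easy to check numerically. A minimal NumPy sketch (the particular $\mu $, $K$ and $A$ are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(4)
mu = np.array([1.0, -2.0, 0.5])
K = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 0.5]])            # a valid covariance matrix
A = rng.standard_normal((3, 3))            # an arbitrary matrix

X = rng.multivariate_normal(mu, K, size=500_000)
mc = np.mean(np.einsum('ij,jk,ik->i', X, A, X))  # Monte Carlo E[X^T A X]
exact = mu @ A @ mu + np.trace(A @ K)            # right-hand side of the identity
print(mc, exact)                                 # the two agree closely
```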
Proof: Let $\mathbf {z} $ be an $m\times 1$ random vector with $\operatorname {E} [\mathbf {z} ]=\mu $ and $\operatorname {Cov} [\mathbf {z} ]=V$ and let $A$ be an $m\times m$ non-stochastic matrix.
Then based on the formula for the covariance, if we denote $\mathbf {z} ^{T}=\mathbf {X} $ and $\mathbf {z} ^{T}A^{T}=\mathbf {Y} $, we see that:
$\operatorname {Cov} [\mathbf {X} ,\mathbf {Y} ]=\operatorname {E} [\mathbf {X} \mathbf {Y} ^{T}]-\operatorname {E} [\mathbf {X} ]\operatorname {E} [\mathbf {Y} ]^{T}$
Hence
${\begin{aligned}\operatorname {E} [XY^{T}]&=\operatorname {Cov} [X,Y]+\operatorname {E} [X]\operatorname {E} [Y]^{T}\\\operatorname {E} [z^{T}Az]&=\operatorname {Cov} [z^{T},z^{T}A^{T}]+\operatorname {E} [z^{T}]\operatorname {E} [z^{T}A^{T}]^{T}\\&=\operatorname {Cov} [z^{T},z^{T}A^{T}]+\mu ^{T}(\mu ^{T}A^{T})^{T}\\&=\operatorname {Cov} [z^{T},z^{T}A^{T}]+\mu ^{T}A\mu ,\end{aligned}}$
which leaves us to show that
$\operatorname {Cov} [z^{T},z^{T}A^{T}]=\operatorname {tr} (AV).$
This is true based on the fact that one can cyclically permute matrices when taking a trace without changing the end result (e.g.: $\operatorname {tr} (AB)=\operatorname {tr} (BA)$).
We see that
${\begin{aligned}\operatorname {Cov} [z^{T},z^{T}A^{T}]&=\operatorname {E} \left[\left(z^{T}-E(z^{T})\right)\left(z^{T}A^{T}-E\left(z^{T}A^{T}\right)\right)^{T}\right]\\&=\operatorname {E} \left[(z^{T}-\mu ^{T})(z^{T}A^{T}-\mu ^{T}A^{T})^{T}\right]\\&=\operatorname {E} \left[(z-\mu )^{T}(Az-A\mu )\right].\end{aligned}}$
And since
$\left({z-\mu }\right)^{T}\left({Az-A\mu }\right)$
is a scalar, then
$(z-\mu )^{T}(Az-A\mu )=\operatorname {tr} \left({(z-\mu )^{T}(Az-A\mu )}\right)=\operatorname {tr} \left((z-\mu )^{T}A(z-\mu )\right)$
trivially. Using the permutation we get:
$\operatorname {tr} \left({(z-\mu )^{T}A(z-\mu )}\right)=\operatorname {tr} \left({A(z-\mu )(z-\mu )^{T}}\right),$
and by plugging this into the original formula we get:
${\begin{aligned}\operatorname {Cov} \left[{z^{T},z^{T}A^{T}}\right]&=E\left[{\left({z-\mu }\right)^{T}(Az-A\mu )}\right]\\&=E\left[\operatorname {tr} \left(A(z-\mu )(z-\mu )^{T}\right)\right]\\&=\operatorname {tr} \left({A\cdot \operatorname {E} \left((z-\mu )(z-\mu )^{T}\right)}\right)\\&=\operatorname {tr} (AV).\end{aligned}}$
Expectation of the product of two different quadratic forms
One can take the expectation of the product of two different quadratic forms in a zero-mean Gaussian random vector $\mathbf {X} $ as follows:[5]: pp. 162–176
$\operatorname {E} \left[(\mathbf {X} ^{T}A\mathbf {X} )(\mathbf {X} ^{T}B\mathbf {X} )\right]=2\operatorname {tr} (AK_{\mathbf {X} \mathbf {X} }BK_{\mathbf {X} \mathbf {X} })+\operatorname {tr} (AK_{\mathbf {X} \mathbf {X} })\operatorname {tr} (BK_{\mathbf {X} \mathbf {X} })$
where again $K_{\mathbf {X} \mathbf {X} }$ is the covariance matrix of $\mathbf {X} $. Again, since both quadratic forms are scalars and hence their product is a scalar, the expectation of their product is also a scalar.
Applications
Portfolio theory
In portfolio theory in finance, an objective often is to choose a portfolio of risky assets such that the distribution of the random portfolio return has desirable properties. For example, one might want to choose the portfolio return having the lowest variance for a given expected value. Here the random vector is the vector $\mathbf {r} $ of random returns on the individual assets, and the portfolio return p (a random scalar) is the inner product of the vector of random returns with a vector w of portfolio weights — the fractions of the portfolio placed in the respective assets. Since p = wT$\mathbf {r} $, the expected value of the portfolio return is wTE($\mathbf {r} $) and the variance of the portfolio return can be shown to be wTCw, where C is the covariance matrix of $\mathbf {r} $.
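A minimal sketch of these two formulas in NumPy; the return figures and weights are hypothetical:

```python
import numpy as np

# Hypothetical figures for three risky assets (illustrative only)
mean_r = np.array([0.05, 0.08, 0.12])      # E(r), expected returns
C = np.array([[0.010, 0.002, 0.001],
              [0.002, 0.040, 0.010],
              [0.001, 0.010, 0.090]])      # covariance matrix of r
w = np.array([0.5, 0.3, 0.2])              # portfolio weights, summing to 1

print(w @ mean_r)                          # expected portfolio return, w^T E(r)
print(w @ C @ w)                           # portfolio return variance, w^T C w
```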
Regression theory
In linear regression theory, we have data on n observations on a dependent variable y and n observations on each of k independent variables xj. The observations on the dependent variable are stacked into a column vector y; the observations on each independent variable are also stacked into column vectors, and these latter column vectors are combined into a design matrix X (not denoting a random vector in this context) of observations on the independent variables. Then the following regression equation is postulated as a description of the process that generated the data:
$y=X\beta +e,$
where β is a postulated fixed but unknown vector of k response coefficients, and e is an unknown random vector reflecting random influences on the dependent variable. By some chosen technique such as ordinary least squares, a vector ${\hat {\beta }}$ is chosen as an estimate of β, and the estimate of the vector e, denoted ${\hat {e}}$, is computed as
${\hat {e}}=y-X{\hat {\beta }}.$
Then the statistician must analyze the properties of ${\hat {\beta }}$ and ${\hat {e}}$, which are viewed as random vectors since a randomly different selection of n cases to observe would have resulted in different values for them.
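A minimal NumPy sketch of this setup on synthetic data, with ordinary least squares as the chosen technique; the coefficient values are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 200, 3
X = np.column_stack([np.ones(n),
                     rng.standard_normal((n, k - 1))])  # design matrix
beta = np.array([2.0, -1.0, 0.5])           # "true" coefficients (invented)
y = X @ beta + 0.3 * rng.standard_normal(n) # y = X beta + e

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS estimate of beta
e_hat = y - X @ beta_hat                         # estimated error vector
print(beta_hat)                                  # close to the "true" beta
```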
Vector time series
The evolution of a k×1 random vector $\mathbf {X} $ through time can be modelled as a vector autoregression (VAR) as follows:
$\mathbf {X} _{t}=c+A_{1}\mathbf {X} _{t-1}+A_{2}\mathbf {X} _{t-2}+\cdots +A_{p}\mathbf {X} _{t-p}+\mathbf {e} _{t},\,$
where the i-periods-back vector observation $\mathbf {X} _{t-i}$ is called the i-th lag of $\mathbf {X} $, c is a k × 1 vector of constants (intercepts), Ai is a time-invariant k × k matrix and $\mathbf {e} _{t}$ is a k × 1 random vector of error terms.
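A minimal NumPy sketch simulating a VAR(1), i.e. p = 1, with invented coefficients chosen so that the process is stable:

```python
import numpy as np

rng = np.random.default_rng(6)
k, T = 2, 500
c = np.array([0.1, -0.2])                  # k x 1 vector of intercepts
A1 = np.array([[0.5, 0.1],
               [0.0, 0.4]])                # eigenvalues inside the unit circle

X = np.zeros((T, k))
for t in range(1, T):
    e_t = 0.1 * rng.standard_normal(k)     # random vector of error terms
    X[t] = c + A1 @ X[t - 1] + e_t         # the VAR(1) recursion

print(X.mean(axis=0))                      # sample mean of the simulated series
```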
References
1. Gallager, Robert G. (2013). Stochastic Processes Theory for Applications. Cambridge University Press. ISBN 978-1-107-03975-9.
2. Lapidoth, Amos (2009). A Foundation in Digital Communication. Cambridge University Press. ISBN 978-0-521-19395-5.
3. Gubner, John A. (2006). Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press. ISBN 978-0-521-86470-1.
4. Papoulis, Athanasius (1991). Probability, Random Variables and Stochastic Processes (Third ed.). McGraw-Hill. ISBN 0-07-048477-5.
5. Kendrick, David (1981). Stochastic Control for Economic Models. McGraw-Hill. ISBN 0-07-033962-7.
Further reading
• Stark, Henry; Woods, John W. (2012). "Random Vectors". Probability, Statistics, and Random Processes for Engineers (Fourth ed.). Pearson. pp. 295–339. ISBN 978-0-13-231123-6.
Random walk
In mathematics, a random walk is a random process that describes a path that consists of a succession of random steps on some mathematical space.
An elementary example of a random walk is the random walk on the integer number line $\mathbb {Z} $ which starts at 0, and at each step moves +1 or −1 with equal probability. Other examples include the path traced by a molecule as it travels in a liquid or a gas (see Brownian motion), the search path of a foraging animal, or the price of a fluctuating stock and the financial status of a gambler. Random walks have applications to engineering and many scientific fields including ecology, psychology, computer science, physics, chemistry, biology, economics, and sociology. The term random walk was first introduced by Karl Pearson in 1905.[1]
Lattice random walk
A popular random walk model is that of a random walk on a regular lattice, where at each step the location jumps to another site according to some probability distribution. In a simple random walk, the location can only jump to neighboring sites of the lattice, forming a lattice path. In a simple symmetric random walk on a locally finite lattice, the probabilities of the location jumping to each one of its immediate neighbors are the same. The best-studied example is the random walk on the d-dimensional integer lattice (sometimes called the hypercubic lattice) $\mathbb {Z} ^{d}$.[2]
If the state space is limited to finite dimensions, the random walk model is called a simple bordered symmetric random walk, and the transition probabilities depend on the location of the state because on margin and corner states the movement is limited.[3]
One-dimensional random walk
An elementary example of a random walk is the random walk on the integer number line, $\mathbb {Z} $, which starts at 0 and at each step moves +1 or −1 with equal probability.
This walk can be illustrated as follows. A marker is placed at zero on the number line, and a fair coin is flipped. If it lands on heads, the marker is moved one unit to the right. If it lands on tails, the marker is moved one unit to the left. After five flips, the marker could be on −5, −3, −1, 1, 3, or 5. With five flips, three heads and two tails, in any order, it will land on 1. There are 10 ways of landing on 1 (by flipping three heads and two tails), 10 ways of landing on −1 (by flipping three tails and two heads), 5 ways of landing on 3 (by flipping four heads and one tail), 5 ways of landing on −3 (by flipping four tails and one head), 1 way of landing on 5 (by flipping five heads), and 1 way of landing on −5 (by flipping five tails).
To define this walk formally, take independent random variables $Z_{1},Z_{2},\dots $, where each variable is either 1 or −1, with a 50% probability for either value, and set $S_{0}=0$ and $ S_{n}=\sum _{j=1}^{n}Z_{j}.$ The series $\{S_{n}\}$ is called the simple random walk on $\mathbb {Z} $. This series (the sum of the sequence of −1s and 1s) gives the net distance walked, if each part of the walk is of length one. The expectation $E(S_{n})$ of $S_{n}$ is zero. That is, the mean of all coin flips approaches zero as the number of flips increases. This follows by the finite additivity property of expectation:
$E(S_{n})=\sum _{j=1}^{n}E(Z_{j})=0.$
A similar calculation, using the independence of the random variables and the fact that $E(Z_{n}^{2})=1$, shows that:
$E(S_{n}^{2})=\sum _{i=1}^{n}E(Z_{i}^{2})+2\sum _{1\leq i<j\leq n}E(Z_{i}Z_{j})=n.$
This hints that $E(|S_{n}|)$, the expected translation distance after n steps, should be of the order of ${\sqrt {n}}$. In fact,[4]
$\lim _{n\to \infty }{\frac {E(|S_{n}|)}{\sqrt {n}}}={\sqrt {\frac {2}{\pi }}}.$
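Both moments are easy to check by simulation. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials = 1_000, 10_000
Z = rng.choice([-1, 1], size=(trials, n))  # i.i.d. +1/-1 steps
S_n = Z.sum(axis=1)                        # endpoint of each walk

print(np.mean(np.abs(S_n)) / np.sqrt(n))   # close to sqrt(2/pi) ~ 0.7979
print(np.mean(S_n**2) / n)                 # close to 1, since E(S_n^2) = n
```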
To answer the question of how many times will a random walk cross a boundary line if permitted to continue walking forever, a simple random walk on $\mathbb {Z} $ will cross every point an infinite number of times. This result has many names: the level-crossing phenomenon, recurrence or the gambler's ruin. The reason for the last name is as follows: a gambler with a finite amount of money will eventually lose when playing a fair game against a bank with an infinite amount of money. The gambler's money will perform a random walk, and it will reach zero at some point, and the game will be over.
If a and b are positive integers, then the expected number of steps until a one-dimensional simple random walk starting at 0 first hits b or −a is ab. The probability that this walk will hit b before −a is $a/(a+b)$, which can be derived from the fact that the simple random walk is a martingale. These expectations and hitting probabilities can be computed in $O(a+b)$ time in the general one-dimensional random walk Markov chain.
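A minimal Monte Carlo sketch of these two facts (a = 3 and b = 5 are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(8)
a, b, trials = 3, 5, 20_000
steps, hit_b = [], 0
for _ in range(trials):
    s, t = 0, 0
    while -a < s < b:                  # walk until it first hits b or -a
        s += 2 * rng.integers(0, 2) - 1
        t += 1
    steps.append(t)
    hit_b += (s == b)

print(np.mean(steps), a * b)           # expected number of steps: a*b = 15
print(hit_b / trials, a / (a + b))     # P(hit b before -a): a/(a+b) = 0.375
```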
Some of the results mentioned above can be derived from properties of Pascal's triangle. The number of different walks of n steps where each step is +1 or −1 is $2^{n}$. For the simple random walk, each of these walks is equally likely. In order for $S_{n}$ to be equal to a number k it is necessary and sufficient that the number of +1 steps in the walk exceeds the number of −1 steps by k. It follows that +1 must appear (n + k)/2 times among the n steps of a walk, hence the number of walks which satisfy $S_{n}=k$ equals the number of ways of choosing (n + k)/2 elements from an n element set,[5] denoted $ n \choose (n+k)/2$. For this to have meaning, it is necessary that n + k be an even number, which implies n and k are either both even or both odd. Therefore, the probability that $S_{n}=k$ is equal to $ 2^{-n}{n \choose (n+k)/2}$. By representing entries of Pascal's triangle in terms of factorials and using Stirling's formula, one can obtain good estimates for these probabilities for large values of $n$.
If the walk is confined to the nonnegative integers $\mathbb {Z} ^{+}$, then after five flips the number of ways of landing on 0, 1, 2, 3, 4 or 5 is {0, 5, 0, 4, 0, 1} respectively.
This relation with Pascal's triangle is demonstrated for small values of n. At zero turns, the only possibility is to remain at zero. However, at one turn, there is one chance of landing on −1 and one chance of landing on 1. At two turns, a marker at 1 could move to 2 or back to zero; a marker at −1 could move to −2 or back to zero. Therefore, there is one chance of landing on −2, two chances of landing on zero, and one chance of landing on 2.
| k | −5 | −4 | −3 | −2 | −1 | 0 | 1 | 2 | 3 | 4 | 5 |
|---|----|----|----|----|----|---|---|---|---|---|---|
| $P[S_{0}=k]$ | | | | | | 1 | | | | | |
| $2P[S_{1}=k]$ | | | | | 1 | | 1 | | | | |
| $2^{2}P[S_{2}=k]$ | | | | 1 | | 2 | | 1 | | | |
| $2^{3}P[S_{3}=k]$ | | | 1 | | 3 | | 3 | | 1 | | |
| $2^{4}P[S_{4}=k]$ | | 1 | | 4 | | 6 | | 4 | | 1 | |
| $2^{5}P[S_{5}=k]$ | 1 | | 5 | | 10 | | 10 | | 5 | | 1 |
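The probabilities $ 2^{-n}{n \choose (n+k)/2}$ derived above reproduce the last row of the table. A minimal Python sketch:

```python
from math import comb

n = 5
for k in range(-n, n + 1):
    if (n + k) % 2 == 0:                 # n + k must be even
        ways = comb(n, (n + k) // 2)     # number of n-step walks with S_n = k
        print(k, ways, ways / 2**n)      # reproduces the last row of the table
```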
The central limit theorem and the law of the iterated logarithm describe important aspects of the behavior of simple random walks on $\mathbb {Z} $. In particular, the former entails that as n increases, the probabilities (proportional to the numbers in each row) approach a normal distribution.
As a direct generalization, one can consider random walks on crystal lattices (infinite-fold abelian covering graphs over finite graphs). Actually it is possible to establish the central limit theorem and large deviation theorem in this setting.[6][7]
As a Markov chain
A one-dimensional random walk can also be looked at as a Markov chain whose state space is given by the integers $i=0,\pm 1,\pm 2,\dots .$ For some number p satisfying $\,0<p<1$, the transition probabilities (the probability Pi,j of moving from state i to state j) are given by
$\,P_{i,i+1}=p=1-P_{i,i-1}.$
Heterogeneous generalization
Main article: Heterogeneous random walk in one dimension
The heterogeneous random walk draws in each time step a random number that determines the local jumping probabilities and then a random number that determines the actual jump direction. The main question is the probability of staying in each of the various sites after $t$ jumps, and in the limit of this probability when $t$ is very large.
Higher dimensions
In higher dimensions, the set of randomly walked points has interesting geometric properties. In fact, one gets a discrete fractal, that is, a set which exhibits stochastic self-similarity on large scales. On small scales, one can observe "jaggedness" resulting from the grid on which the walk is performed. The trajectory of a random walk is the collection of points visited, considered as a set with disregard to when the walk arrived at the point. In one dimension, the trajectory is simply all points between the minimum height and the maximum height the walk achieved (both are, on average, on the order of ${\sqrt {n}}$).
To visualize the two-dimensional case, one can imagine a person walking randomly around a city. The city is effectively infinite and arranged in a square grid of sidewalks. At every intersection, the person randomly chooses one of the four possible routes (including the one originally travelled from). Formally, this is a random walk on the set of all points in the plane with integer coordinates.
The question of whether the person will ever get back to the original starting point of the walk is the 2-dimensional equivalent of the level-crossing problem discussed above. In 1921 George Pólya proved that the person almost surely would in a 2-dimensional random walk, but for 3 dimensions or higher, the probability of returning to the origin decreases as the number of dimensions increases. In 3 dimensions, the return probability is roughly 34%.[8] The mathematician Shizuo Kakutani was known to refer to this result with the following quote: "A drunk man will find his way home, but a drunk bird may get lost forever".[9]
Another variation of this question which was also asked by Pólya is: "if two people leave the same starting point, then will they ever meet again?"[10] It can be shown that the difference between their locations (two independent random walks) is also a simple random walk, so they almost surely meet again in a 2-dimensional walk, but for 3 dimensions and higher the probability decreases with the number of dimensions. Paul Erdős and Samuel James Taylor also showed in 1960 that for dimensions less than or equal to 4, two independent random walks starting from any two given points have infinitely many intersections almost surely, but for dimensions 5 and higher they almost surely intersect only finitely often.[11]
As the number of steps N increases, the distribution of the distance of a two-dimensional random walk from the origin (with constant step length) approaches a Rayleigh distribution, whose probability density is a function of the radius r from the origin:
$P(r)={\frac {2r}{N}}e^{-r^{2}/N}$
Relation to Wiener process
A Wiener process is a stochastic process with similar behavior to Brownian motion, the physical phenomenon of a minute particle diffusing in a fluid. (Sometimes the Wiener process is called "Brownian motion", although this is strictly speaking a confusion of a model with the phenomenon being modeled.)
A Wiener process is the scaling limit of random walk in dimension 1. This means that if there is a random walk with very small steps, there is an approximation to a Wiener process (and, less accurately, to Brownian motion). To be more precise, if the step size is ε, one needs to take a walk of length L/ε2 to approximate a Wiener length of L. As the step size tends to 0 (and the number of steps increases proportionally), random walk converges to a Wiener process in an appropriate sense. Formally, if B is the space of all paths of length L with the maximum topology, and if M is the space of measure over B with the norm topology, then the convergence is in the space M. Similarly, a Wiener process in several dimensions is the scaling limit of random walk in the same number of dimensions.
A random walk is a discrete fractal (a function with integer dimensions; 1, 2, ...), but a Wiener process trajectory is a true fractal, and there is a connection between the two. For example, take a random walk until it hits a circle of radius r times the step length. The average number of steps it performs is r2. This fact is the discrete version of the fact that a Wiener process walk is a fractal of Hausdorff dimension 2.
In two dimensions, the average number of points the same random walk has on the boundary of its trajectory is r4/3. This corresponds to the fact that the boundary of the trajectory of a Wiener process is a fractal of dimension 4/3, a fact predicted by Mandelbrot using simulations but proved only in 2000 by Lawler, Schramm and Werner.[12]
A Wiener process enjoys many symmetries a random walk does not. For example, a Wiener process walk is invariant to rotations, but the random walk is not, since the underlying grid is not (random walk is invariant to rotations by 90 degrees, but Wiener processes are invariant to rotations by, for example, 17 degrees too). This means that in many cases, problems on a random walk are easier to solve by translating them to a Wiener process, solving the problem there, and then translating back. On the other hand, some problems are easier to solve with random walks due to its discrete nature.
Random walk and Wiener process can be coupled, namely manifested on the same probability space in a dependent way that forces them to be quite close. The simplest such coupling is the Skorokhod embedding, but there exist more precise couplings, such as Komlós–Major–Tusnády approximation theorem.
The convergence of a random walk toward the Wiener process is controlled by the central limit theorem, and by Donsker's theorem. For a particle in a known fixed position at t = 0, the central limit theorem tells us that after a large number of independent steps in the random walk, the walker's position is distributed according to a normal distribution of total variance:
$\sigma ^{2}={\frac {t}{\delta t}}\,\varepsilon ^{2},$
where t is the time elapsed since the start of the random walk, $\varepsilon $ is the size of a step of the random walk, and $\delta t$ is the time elapsed between two successive steps.
This corresponds to the Green's function of the diffusion equation that controls the Wiener process, which suggests that, after a large number of steps, the random walk converges toward a Wiener process.
In 3D, the variance corresponding to the Green's function of the diffusion equation is:
$\sigma ^{2}=6\,D\,t.$
By equalizing this quantity with the variance associated to the position of the random walker, one obtains the equivalent diffusion coefficient to be considered for the asymptotic Wiener process toward which the random walk converges after a large number of steps:
$D={\frac {\varepsilon ^{2}}{6\delta t}}$
(valid only in 3D).
The two expressions of the variance above correspond to the distribution associated to the vector ${\vec {R}}$ that links the two ends of the random walk, in 3D. The variance associated to each component $R_{x}$, $R_{y}$ or $R_{z}$ is only one third of this value (still in 3D).
For 2D:[13]
$D={\frac {\varepsilon ^{2}}{4\delta t}}.$
For 1D:[14]
$D={\frac {\varepsilon ^{2}}{2\delta t}}.$
Gaussian random walk
A random walk having a step size that varies according to a normal distribution is used as a model for real-world time series data such as financial markets. The Black–Scholes formula for modeling option prices, for example, uses a Gaussian random walk as an underlying assumption.
Here, the step size is the inverse cumulative normal distribution $\Phi ^{-1}(z,\mu ,\sigma )$ where 0 ≤ z ≤ 1 is a uniformly distributed random number, and μ and σ are the mean and standard deviation of the normal distribution, respectively.
If μ is nonzero, the random walk will vary about a linear trend. If vs is the starting value of the random walk, the expected value after n steps will be vs + nμ.
For the special case where μ is equal to zero, after n steps, the translation distance's probability distribution is given by N(0, nσ2), where N() is the notation for the normal distribution, n is the number of steps, and σ is from the inverse cumulative normal distribution as given above.
Proof: The Gaussian random walk can be thought of as the sum of a sequence of independent and identically distributed random variables, Xi from the inverse cumulative normal distribution with mean zero and σ of the original inverse cumulative normal distribution:
$Z=\sum _{i=1}^{n}{X_{i}},$
and the distribution of the sum of two independent normally distributed random variables, Z = X + Y, is given by
${\mathcal {N}}(\mu _{X}+\mu _{Y},\sigma _{X}^{2}+\sigma _{Y}^{2})$
(see here).
In our case, μX = μY = 0 and σ2X = σ2Y = σ2 yield
${\mathcal {N}}(0,2\sigma ^{2})$
By induction, for n steps we have $Z\sim {\mathcal {N}}(0,n\sigma ^{2}).$ For steps distributed according to any distribution with zero mean and a finite variance (not necessarily just a normal distribution), the root mean square translation distance after n steps is (see Bienaymé's identity)
${\sqrt {Var(S_{n})}}={\sqrt {E[S_{n}^{2}]}}=\sigma {\sqrt {n}}.$
But for the Gaussian random walk, this is just the standard deviation of the translation distance's distribution after n steps. Hence, if μ is equal to zero, and since the root mean square (RMS) translation distance is one standard deviation, there is a 68.27% probability that the translation distance after n steps will fall between $\pm \sigma {\sqrt {n}}$. Likewise, there is a 50% probability that the translation distance after n steps will fall between $\pm 0.6745\sigma {\sqrt {n}}$.
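A minimal NumPy sketch checking the RMS translation distance and the 68.27% figure for the μ = 0 case:

```python
import numpy as np

rng = np.random.default_rng(9)
n, sigma, trials = 1_000, 2.0, 20_000
steps = rng.normal(0.0, sigma, size=(trials, n))   # mu = 0 case
S_n = steps.sum(axis=1)

print(np.sqrt(np.mean(S_n**2)), sigma * np.sqrt(n))   # RMS ~ sigma * sqrt(n)
print(np.mean(np.abs(S_n) <= sigma * np.sqrt(n)))     # ~ 0.6827
```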
Number of distinct sites
The number of distinct sites visited by a single random walker $S(t)$ has been studied extensively for square and cubic lattices and for fractals.[15][16] This quantity is useful for the analysis of problems of trapping and kinetic reactions. It is also related to the vibrational density of states,[17][18] diffusion reactions processes[19] and spread of populations in ecology.[20][21]
Information rate
The information rate of a Gaussian random walk with respect to the squared error distance, i.e. its quadratic rate distortion function, is given parametrically by[22]
$R(D_{\theta })={\frac {1}{2}}\int _{0}^{1}\max\{0,\log _{2}\left(S(\varphi )/\theta \right)\}\,d\varphi ,$
$D_{\theta }=\int _{0}^{1}\min\{S(\varphi ),\theta \}\,d\varphi ,$
where $S(\varphi )=\left(2\sin(\pi \varphi /2)\right)^{-2}$. Therefore, it is impossible to encode ${\{Z_{n}\}_{n=1}^{N}}$ using a binary code of less than $NR(D_{\theta })$ bits and recover it with expected mean squared error less than $D_{\theta }$. On the other hand, for any $\varepsilon >0$, there exists an $N\in \mathbb {N} $ large enough and a binary code of no more than $2^{NR(D_{\theta })}$ distinct elements such that the expected mean squared error in recovering ${\{Z_{n}\}_{n=1}^{N}}$ from this code is at most $D_{\theta }-\varepsilon $.
Applications
As mentioned, the range of natural phenomena which have been subject to attempts at description by some flavour of random walk is considerable, particularly in physics[23][24] and chemistry,[25] materials science,[26][27] and biology.[28][29][30] The following are some specific applications of random walks:
• In financial economics, the random walk hypothesis is used to model share prices and other factors.[31] Empirical studies have found some deviations from this theoretical model, especially in short-term and long-term correlations. See share prices.
• In population genetics, a random walk describes the statistical properties of genetic drift.
• In physics, random walks are used as simplified models of physical Brownian motion and diffusion such as the random movement of molecules in liquids and gases. See for example diffusion-limited aggregation. Also in physics, random walks and some of the self interacting walks play a role in quantum field theory.
• In semiconductor manufacturing, random walks are used to analyze the effects of thermal treatment at smaller nodes. They are applied to understand the diffusion of dopants, defects, impurities, etc., during critical fabrication steps, and to study the diffusion of reactants, products and plasma during chemical vapor deposition (CVD) processes. Continuum diffusion has been used to study the flow of gases, at macroscopic scales, in CVD reactors, but smaller dimensions and increased complexity have motivated random-walk treatments instead, allowing accurate analysis of stochastic processes at the molecular level and below.
• In mathematical ecology, random walks are used to describe individual animal movements, to empirically support processes of biodiffusion, and occasionally to model population dynamics.
• In polymer physics, random walk describes an ideal chain. It is the simplest model to study polymers.[32]
• In other fields of mathematics, random walk is used to calculate solutions to Laplace's equation, to estimate the harmonic measure, and for various constructions in analysis and combinatorics.
• In computer science, random walks are used to estimate the size of the Web.[33]
• In image segmentation, random walks are used to determine the labels (i.e., "object" or "background") to associate with each pixel.[34] This algorithm is typically referred to as the random walker segmentation algorithm.
• In brain research, random walks and reinforced random walks are used to model cascades of neuron firing in the brain.
• In vision science, ocular drift tends to behave like a random walk.[35] According to some authors, fixational eye movements in general are also well described by a random walk.[36]
• In psychology, random walks explain accurately the relation between the time needed to make a decision and the probability that a certain decision will be made.[37]
• Random walks can be used to sample from a state space which is unknown or very large, for example to pick a random page off the internet. In computer science, this method is known as Markov Chain Monte Carlo (MCMC).
• In wireless networking, a random walk is used to model node movement.
• Motile bacteria engage in biased random walks.[38]
• In physics, random walks underlie the method of Fermi estimation.
• On the web, the Twitter website uses random walks to make suggestions of whom to follow.[39]
• Dave Bayer and Persi Diaconis have proven that 7 riffle shuffles are sufficient to mix a deck of cards (see more details under shuffle). This result translates to a statement about random walk on the symmetric group which is what they prove, with a crucial use of the group structure via Fourier analysis.
Variants
A number of types of stochastic processes have been considered that are similar to the pure random walks but where the simple structure is allowed to be more generalized. The pure structure can be characterized by the steps being defined by independent and identically distributed random variables. Random walks can take place on a variety of spaces, such as graphs, the integers, the real line, the plane or higher-dimensional vector spaces, on curved surfaces or higher-dimensional Riemannian manifolds, and on groups. It is also possible to define random walks which take their steps at random times, and in that case, the position $X_{t}$ has to be defined for all times t ∈ [0, +∞). Specific cases or limits of random walks include the Lévy flight and diffusion models such as Brownian motion.
On graphs
A random walk of length k on a possibly infinite graph G with a root 0 is a stochastic process with random variables $X_{1},X_{2},\dots ,X_{k}$ such that $X_{1}=0$ and ${X_{i+1}}$ is a vertex chosen uniformly at random from the neighbors of $X_{i}$. Then the number $p_{v,w,k}(G)$ is the probability that a random walk of length k starting at v ends at w. In particular, if G is a graph with root 0, $p_{0,0,2k}$ is the probability that a $2k$-step random walk returns to 0.
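A minimal Python sketch estimating $p_{0,0,k}$ by simulation on a small hand-made graph (the graph itself is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(10)
# A small undirected graph as an adjacency list (an arbitrary example)
G = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}

def p_return(G, root, k, trials=100_000):
    """Estimate p_{root,root,k}: a k-step walk from root ends at root."""
    hits = 0
    for _ in range(trials):
        v = root
        for _ in range(k):
            v = G[v][rng.integers(len(G[v]))]  # uniform random neighbor
        hits += (v == root)
    return hits / trials

print(p_return(G, 0, 2))   # exact value for this graph is 5/12 ~ 0.4167
```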
Building on the analogy from the earlier section on higher dimensions, assume now that our city is no longer a perfect square grid. When our person reaches a certain junction, he picks between the variously available roads with equal probability. Thus, if the junction has seven exits the person will go to each one with probability one-seventh. This is a random walk on a graph. Will our person reach his home? It turns out that under rather mild conditions, the answer is still yes,[40] but depending on the graph, the answer to the variant question 'Will two persons meet again?' may not be that they meet infinitely often almost surely.[41]
An example of a case where the person will reach his home almost surely is when the lengths of all the blocks are between a and b (where a and b are any two finite positive numbers). Notice that we do not assume that the graph is planar, i.e. the city may contain tunnels and bridges. One way to prove this result is using the connection to electrical networks. Take a map of the city and place a one ohm resistor on every block. Now measure the "resistance between a point and infinity". In other words, choose some number R and take all the points in the electrical network with distance bigger than R from our point and wire them together. This is now a finite electrical network, and we may measure the resistance from our point to the wired points. Take R to infinity. The limit is called the resistance between a point and infinity. It turns out that the following is true (an elementary proof can be found in the book by Doyle and Snell):
Theorem: a graph is transient if and only if the resistance between a point and infinity is finite. It is not important which point is chosen if the graph is connected.
In other words, in a transient system, one only needs to overcome a finite resistance to get to infinity from any point. In a recurrent system, the resistance from any point to infinity is infinite.
This characterization of transience and recurrence is very useful, and specifically it allows us to analyze the case of a city drawn in the plane with the distances bounded.
A random walk on a graph is a very special case of a Markov chain. Unlike a general Markov chain, random walk on a graph enjoys a property called time symmetry or reversibility. Roughly speaking, this property, also called the principle of detailed balance, means that the probabilities to traverse a given path in one direction or the other have a very simple connection between them (if the graph is regular, they are just equal). This property has important consequences.
Starting in the 1980s, much research has gone into connecting properties of the graph to random walks. In addition to the electrical network connection described above, there are important connections to isoperimetric inequalities, see more here, functional inequalities such as Sobolev and Poincaré inequalities and properties of solutions of Laplace's equation. A significant portion of this research was focused on Cayley graphs of finitely generated groups. In many cases these discrete results carry over to, or are derived from manifolds and Lie groups.
In the context of random graphs, particularly that of the Erdős–Rényi model, analytical results for some properties of random walkers have been obtained. These include the distribution of first[42] and last hitting times[43] of the walker, where the first hitting time is given by the first time the walker steps into a previously visited site of the graph, and the last hitting time corresponds to the first time the walker cannot perform an additional move without revisiting a previously visited site.
A good reference for random walk on graphs is the online book by Aldous and Fill. For groups see the book of Woess. If the transition kernel $p(x,y)$ is itself random (based on an environment $\omega $) then the random walk is called a "random walk in random environment". When the law of the random walk includes the randomness of $\omega $, the law is called the annealed law; on the other hand, if $\omega $ is seen as fixed, the law is called a quenched law. See the book of Hughes, the book of Revesz, or the lecture notes of Zeitouni.
We can think about choosing every possible edge with the same probability as maximizing uncertainty (entropy) locally. We could also do it globally – in maximal entropy random walk (MERW) we want all paths to be equally probable, or in other words: for every two vertices, each path of given length is equally probable.[44] This random walk has much stronger localization properties.
Self-interacting random walks
There are a number of interesting models of random paths in which each step depends on the past in a complicated manner. All are harder to solve analytically than the usual random walk; still, the behavior of any model of a random walker is obtainable using computers. Examples include:
• The self-avoiding walk.[45]
The self-avoiding walk of length n on $\mathbb {Z} ^{d}$ is the random n-step path which starts at the origin, makes transitions only between adjacent sites in $\mathbb {Z} ^{d}$, never revisits a site, and is chosen uniformly among all such paths. In two dimensions, due to self-trapping, a typical self-avoiding walk is very short,[46] while in higher dimensions it grows beyond all bounds. This model has often been used in polymer physics (since the 1960s).
• The loop-erased random walk.[47][48]
• The reinforced random walk.[49]
• The exploration process.
• The multiagent random walk.[50]
Biased random walks on graphs
Main article: Biased random walk on a graph
Maximal entropy random walk
Main article: Maximal entropy random walk
A random walk chosen to maximize the entropy rate has much stronger localization properties.
Correlated random walks
Correlated random walks are random walks where the direction of movement at one time is correlated with the direction of movement at the next time. They are used to model animal movements.[51][52]
See also
• Branching random walk
• Brownian motion
• Law of the iterated logarithm
• Lévy flight
• Lévy flight foraging hypothesis
• Loop-erased random walk
• Maximal entropy random walk
• Self-avoiding walk
• Unit root
References
1. Pearson, Karl (1905). "The Problem of the Random Walk". Nature. 72 (1865): 294. Bibcode:1905Natur..72..294P. doi:10.1038/072294b0. S2CID 4010776.
2. Révész, Pal (1990) Random Walk in Random and Nonrandom Environments, World Scientific
3. Kohls, Moritz; Hernandez, Tanja (2016). "Expected Coverage of Random Walk Mobility Algorithm". arXiv:1611.02861 [stat.AP].
4. "Random Walk-1-Dimensional – from Wolfram MathWorld". Mathworld.wolfram.com. 26 April 2000. Retrieved 2 November 2016.
5. Edward A. Codling et al., Random walk models in biology, Journal of the Royal Society Interface, 2008
6. Kotani, M.; Sunada, T. (2003). Spectral geometry of crystal lattices. Contemporary Mathematics. Vol. 338. pp. 271–305. doi:10.1090/conm/338/06077. ISBN 978-0-8218-3383-4.
7. Kotani, M.; Sunada, T. (2006). "Large deviation and the tangent cone at infinity of a crystal lattice". Math. Z. 254 (4): 837–870. doi:10.1007/s00209-006-0951-9. S2CID 122531716.
8. "Pólya's Random Walk Constants". Mathworld.wolfram.com. Retrieved 2 November 2016.
9. Durrett, Rick (2010). Probability: Theory and Examples. Cambridge University Press. pp. 191. ISBN 978-1-139-49113-6.
10. Pólya, George (1984). Probability; Combinatorics; Teaching and learning in mathematics. Rota, Gian-Carlo, 1932-1999., Reynolds, M. C., Shortt, Rae Michael. Cambridge, Mass.: MIT Press. pp. 582–585. ISBN 0-262-16097-8. OCLC 10208449.
11. Erdős, P.; Taylor, S. J. (1960). "Some intersection properties of random walk paths". Acta Mathematica Academiae Scientiarum Hungaricae. 11 (3–4): 231–248. doi:10.1007/BF02020942. ISSN 0001-5954. S2CID 14143214.
12. MacKenzie, D. (2000). "MATHEMATICS: Taking the Measure of the Wildest Dance on Earth". Science. 290 (5498): 1883–4. doi:10.1126/science.290.5498.1883. PMID 17742050. S2CID 12829171. (Erratum: doi:10.1126/science.291.5504.597)
13. Chapter 2 DIFFUSION. dartmouth.edu.
14. Diffusion equation for the random walk Archived 21 April 2015 at the Wayback Machine. physics.uakron.edu.
15. Weiss, George H.; Rubin, Robert J. (1982). "Random Walks: Theory and Selected Applications". Advances in Chemical Physics. Vol. 52. pp. 363–505. doi:10.1002/9780470142769.ch5. ISBN 978-0-470-14276-9.
16. Blumen, A.; Klafter, J.; Zumofen, G. (1986). "Models for Reaction Dynamics in Glasses". Optical Spectroscopy of Glasses. Physic and Chemistry of Materials with Low-Dimensional Structures. Vol. 1. pp. 199–265. Bibcode:1986PCMLD...1..199B. doi:10.1007/978-94-009-4650-7_5. ISBN 978-94-010-8566-3.
17. Alexander, S.; Orbach, R. (1982). "Density of states on fractals: 'fractons'". Journal de Physique Lettres. 43 (17): 625–631. doi:10.1051/jphyslet:019820043017062500. S2CID 67757791.
18. Rammal, R.; Toulouse, G. (1983). "Random walks on fractal structures and percolation clusters". Journal de Physique Lettres. 44 (1): 13–22. doi:10.1051/jphyslet:0198300440101300.
19. Smoluchowski, M.V. (1917). "Versuch einer mathematischen Theorie der Koagulationskinetik kolloider Lösungen". Z. Phys. Chem. (29): 129–168.,Rice, S.A. (1 March 1985). Diffusion-Limited Reactions. Comprehensive Chemical Kinetics. Vol. 25. Elsevier. ISBN 978-0-444-42354-2. Retrieved 13 August 2013.
20. Skellam, J. G. (1951). "Random Dispersal in Theoretical Populations". Biometrika. 38 (1/2): 196–218. doi:10.2307/2332328. JSTOR 2332328. PMID 14848123.
21. Skellam, J. G. (1952). "Studies in Statistical Ecology: I. Spatial Pattern". Biometrika. 39 (3/4): 346–362. doi:10.2307/2334030. JSTOR 2334030.
22. Berger, T. (1970). "Information rates of Wiener processes". IEEE Transactions on Information Theory. 16 (2): 134–139. doi:10.1109/TIT.1970.1054423.
23. Risken H. (1984) The Fokker–Planck Equation. Springer, Berlin.
24. De Gennes P. G. (1979) Scaling Concepts in Polymer Physics. Cornell University Press, Ithaca and London.
25. Van Kampen N. G. (1992) Stochastic Processes in Physics and Chemistry, revised and enlarged edition. North-Holland, Amsterdam.
26. Weiss, George H. (1994). Aspects and Applications of the Random Walk. Random Materials and Processes. North-Holland Publishing Co., Amsterdam. ISBN 978-0-444-81606-1. MR 1280031.
27. Doi M. and Edwards S. F. (1986) The Theory of Polymer Dynamics. Clarendon Press, Oxford
28. Goel N. W. and Richter-Dyn N. (1974) Stochastic Models in Biology. Academic Press, New York.
29. Redner S. (2001) A Guide to First-Passage Process. Cambridge University Press, Cambridge, UK.
30. Cox D. R. (1962) Renewal Theory. Methuen, London.
31. David A. Kodde and Hein Schreuder (1984), Forecasting Corporate Revenue and Profit: Time-Series Models versus Management and Analysts, Journal of Business Finance and Accounting, vol. 11, no 3, Autumn 1984
32. Jones, R.A.L. (2004). Soft condensed matter (Reprint. ed.). Oxford [u.a.]: Oxford Univ. Pr. pp. 77–78. ISBN 978-0-19-850589-1.
33. Bar-Yossef, Ziv; Gurevich, Maxim (2008). "Random sampling from a search engine's index". Journal of the ACM. Association for Computing Machinery (ACM). 55 (5): 1–74. doi:10.1145/1411509.1411514. ISSN 0004-5411.
34. Grady, L (2006). "Random walks for image segmentation" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 28 (11): 1768–83. CiteSeerX 10.1.1.375.3389. doi:10.1109/TPAMI.2006.233. PMID 17063682. S2CID 489789. Archived from the original (PDF) on 5 July 2017. Retrieved 2 November 2016.
35. Rucci, M; Victor, J. D. (2015). "The unsteady eye: An information-processing stage, not a bug". Trends in Neurosciences. 38 (4): 195–206. doi:10.1016/j.tins.2015.01.005. PMC 4385455. PMID 25698649.
36. Engbert, R.; Mergenthaler, K.; Sinn, P.; Pikovsky, A. (2011). "An integrated model of fixational eye movements and microsaccades". Proceedings of the National Academy of Sciences. 108 (39): E765-70. Bibcode:2011PNAS..108E.765E. doi:10.1073/pnas.1102730108. PMC 3182695. PMID 21873243.
37. Nosofsky, R. M.; Palmeri, T. J. (1997). "An exemplar-based random walk model of speeded classification" (PDF). Psychological Review. 104 (2): 266–300. doi:10.1037/0033-295x.104.2.266. PMID 9127583. Archived from the original (PDF) on 10 December 2004.
38. Codling, E. A; Plank, M. J; Benhamou, S. (6 August 2008). "Random walk models in biology". Journal of the Royal Society Interface. 5 (25): 813–834. doi:10.1098/rsif.2008.0014. PMC 2504494. PMID 18426776.
39. Gupta, Pankaj et al. WTF: The who-to-follow system at Twitter, Proceedings of the 22nd international conference on World Wide Web
40. It is interesting to remark that in a general graph the meeting of two independent random walkers does not always reduce to the problem of a single random walk returning to its starting point.
41. Krishnapur, Manjunath; Peres, Yuval (2004). "Recurrent Graphs where Two Independent Random Walks Collide Finitely Often". Electronic Communications in Probability. 9: 72–81. arXiv:math/0406487. Bibcode:2004math......6487K. doi:10.1214/ECP.v9-1111. ISSN 1083-589X. S2CID 16584737.
42. Tishby, Ido; Biham, Ofer; Katzav, Eytan (2017). "The distribution of first hitting times of random walks on Erdős–Rényi networks". Journal of Physics A: Mathematical and Theoretical. 50 (11): 115001. arXiv:1606.01560. Bibcode:2017JPhA...50k5001T. doi:10.1088/1751-8121/aa5af3. S2CID 118850609.
43. Tishby, Ido; Biham, Ofer; Katzav, Eytan (2016). "The distribution of path lengths of self avoiding walks on Erdős–Rényi networks". Journal of Physics A: Mathematical and Theoretical. 49 (28): 285002. arXiv:1603.06613. Bibcode:2016JPhA...49B5002T. doi:10.1088/1751-8113/49/28/285002. S2CID 119182848.
44. Burda, Z.; Duda, J.; Luck, J. M.; Waclaw, B. (2009). "Localization of the Maximal Entropy Random Walk". Physical Review Letters. 102 (16): 160602. arXiv:0810.4113. Bibcode:2009PhRvL.102p0602B. doi:10.1103/PhysRevLett.102.160602. PMID 19518691. S2CID 32134048.
45. Madras, Neal and Slade, Gordon (1996) The Self-Avoiding Walk, Birkhäuser Boston. ISBN 0-8176-3891-1.
46. Hemmer, S.; Hemmer, P. C. (1984). "An average self-avoiding random walk on the square lattice lasts 71 steps". J. Chem. Phys. 81 (1): 584–585. Bibcode:1984JChPh..81..584H. doi:10.1063/1.447349.
47. Lawler, Gregory (1996). Intersection of random walks, Birkhäuser Boston. ISBN 0-8176-3892-X.
48. Lawler, Gregory Conformally Invariant Processes in the Plane, book.ps.
49. Pemantle, Robin (2007). "A survey of random processes with reinforcement" (PDF). Probability Surveys. 4: 1–79. arXiv:math/0610076. doi:10.1214/07-PS094. S2CID 11964062.
50. Alamgir, M and von Luxburg, U (2010). "Multi-agent random walks for local clustering on graphs" Archived 15 April 2012 at the Wayback Machine, IEEE 10th International Conference on Data Mining (ICDM), pp. 18–27.
51. Bovet, Pierre; Benhamou, Simon (1988). "Spatial analysis of animals' movements using a correlated random walk model". Journal of Theoretical Biology. 131 (4): 419–433. Bibcode:1988JThBi.131..419B. doi:10.1016/S0022-5193(88)80038-9.
52. Kareiva, P.M.; Shigesada, N. (1983). "Analyzing insect movement as a correlated random walk". Oecologia. 56 (2–3): 234–238. Bibcode:1983Oecol..56..234K. doi:10.1007/BF00379695. PMID 28310199. S2CID 20329045.
Bibliography
• Aldous, David; Fill, James Allen (2002). Reversible Markov Chains and Random Walks on Graphs. Archived from the original on 27 February 2019.
• Doyle, Peter G.; Snell, J. Laurie (1984). Random Walks and Electric Networks. Carus Mathematical Monographs. Vol. 22. Mathematical Association of America. arXiv:math.PR/0001057. ISBN 978-0-88385-024-4. MR 0920811.
• Feller, William (1968), An Introduction to Probability Theory and its Applications (Volume 1). ISBN 0-471-25708-7
• Hughes, Barry D. (1996), Random Walks and Random Environments, Oxford University Press. ISBN 0-19-853789-1
• Norris, James (1998), Markov Chains, Cambridge University Press. ISBN 0-521-63396-6
• Pólya G.(1921), "Über eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Strassennetz" Archived 4 March 2016 at the Wayback Machine, Mathematische Annalen, 84(1–2):149–160, March 1921.
• Révész, Pal (2013), Random Walk in Random and Non-random Environments (Third Edition), World Scientific Pub Co. ISBN 978-981-4447-50-8
• Sunada, Toshikazu (2012). Topological Crystallography: With a View Towards Discrete Geometric Analysis. Surveys and Tutorials in the Applied Mathematical Sciences. Vol. 6. Springer. ISBN 978-4-431-54177-6.
• Weiss G. Aspects and Applications of the Random Walk, North-Holland, 1994.
• Woess, Wolfgang (2000), Random Walks on Infinite Graphs and Groups, Cambridge tracts in mathematics 138, Cambridge University Press. ISBN 0-521-55292-3
External links
• Pólya's Random Walk Constants
• Random walk in Java Applet Archived 31 August 2007 at the Wayback Machine
• Quantum random walk
• Gaussian random walk estimator
• Electron Conductance Models Using Maximal Entropy Random Walks Wolfram Demonstrations Project
Stochastic processes
Discrete time
• Bernoulli process
• Branching process
• Chinese restaurant process
• Galton–Watson process
• Independent and identically distributed random variables
• Markov chain
• Moran process
• Random walk
• Loop-erased
• Self-avoiding
• Biased
• Maximal entropy
Continuous time
• Additive process
• Bessel process
• Birth–death process
• pure birth
• Brownian motion
• Bridge
• Excursion
• Fractional
• Geometric
• Meander
• Cauchy process
• Contact process
• Continuous-time random walk
• Cox process
• Diffusion process
• Empirical process
• Feller process
• Fleming–Viot process
• Gamma process
• Geometric process
• Hawkes process
• Hunt process
• Interacting particle systems
• Itô diffusion
• Itô process
• Jump diffusion
• Jump process
• Lévy process
• Local time
• Markov additive process
• McKean–Vlasov process
• Ornstein–Uhlenbeck process
• Poisson process
• Compound
• Non-homogeneous
• Schramm–Loewner evolution
• Semimartingale
• Sigma-martingale
• Stable process
• Superprocess
• Telegraph process
• Variance gamma process
• Wiener process
• Wiener sausage
Both
• Branching process
• Galves–Löcherbach model
• Gaussian process
• Hidden Markov model (HMM)
• Markov process
• Martingale
• Differences
• Local
• Sub-
• Super-
• Random dynamical system
• Regenerative process
• Renewal process
• Stochastic chains with memory of variable length
• White noise
Fields and other
• Dirichlet process
• Gaussian random field
• Gibbs measure
• Hopfield model
• Ising model
• Potts model
• Boolean network
• Markov random field
• Percolation
• Pitman–Yor process
• Point process
• Cox
• Poisson
• Random field
• Random graph
Time series models
• Autoregressive conditional heteroskedasticity (ARCH) model
• Autoregressive integrated moving average (ARIMA) model
• Autoregressive (AR) model
• Autoregressive–moving-average (ARMA) model
• Generalized autoregressive conditional heteroskedasticity (GARCH) model
• Moving-average (MA) model
Financial models
• Binomial options pricing model
• Black–Derman–Toy
• Black–Karasinski
• Black–Scholes
• Chan–Karolyi–Longstaff–Sanders (CKLS)
• Chen
• Constant elasticity of variance (CEV)
• Cox–Ingersoll–Ross (CIR)
• Garman–Kohlhagen
• Heath–Jarrow–Morton (HJM)
• Heston
• Ho–Lee
• Hull–White
• LIBOR market
• Rendleman–Bartter
• SABR volatility
• Vašíček
• Wilkie
Actuarial models
• Bühlmann
• Cramér–Lundberg
• Risk process
• Sparre–Anderson
Queueing models
• Bulk
• Fluid
• Generalized queueing network
• M/G/1
• M/M/1
• M/M/c
Properties
• Càdlàg paths
• Continuous
• Continuous paths
• Ergodic
• Exchangeable
• Feller-continuous
• Gauss–Markov
• Markov
• Mixing
• Piecewise-deterministic
• Predictable
• Progressively measurable
• Self-similar
• Stationary
• Time-reversible
Limit theorems
• Central limit theorem
• Donsker's theorem
• Doob's martingale convergence theorems
• Ergodic theorem
• Fisher–Tippett–Gnedenko theorem
• Large deviation principle
• Law of large numbers (weak/strong)
• Law of the iterated logarithm
• Maximal ergodic theorem
• Sanov's theorem
• Zero–one laws (Blumenthal, Borel–Cantelli, Engelbert–Schmidt, Hewitt–Savage, Kolmogorov, Lévy)
Inequalities
• Burkholder–Davis–Gundy
• Doob's martingale
• Doob's upcrossing
• Kunita–Watanabe
• Marcinkiewicz–Zygmund
Tools
• Cameron–Martin formula
• Convergence of random variables
• Doléans-Dade exponential
• Doob decomposition theorem
• Doob–Meyer decomposition theorem
• Doob's optional stopping theorem
• Dynkin's formula
• Feynman–Kac formula
• Filtration
• Girsanov theorem
• Infinitesimal generator
• Itô integral
• Itô's lemma
• Karhunen–Loève theorem
• Kolmogorov continuity theorem
• Kolmogorov extension theorem
• Lévy–Prokhorov metric
• Malliavin calculus
• Martingale representation theorem
• Optional stopping theorem
• Prokhorov's theorem
• Quadratic variation
• Reflection principle
• Skorokhod integral
• Skorokhod's representation theorem
• Skorokhod space
• Snell envelope
• Stochastic differential equation
• Tanaka
• Stopping time
• Stratonovich integral
• Uniform integrability
• Usual hypotheses
• Wiener space
• Classical
• Abstract
Disciplines
• Actuarial mathematics
• Control theory
• Econometrics
• Ergodic theory
• Extreme value theory (EVT)
• Large deviations theory
• Mathematical finance
• Mathematical statistics
• Probability theory
• Queueing theory
• Renewal theory
• Ruin theory
• Signal processing
• Statistics
• Stochastic analysis
• Time series analysis
• Machine learning
• List of topics
• Category
Random walk closeness centrality
Random walk closeness centrality is a measure of centrality in a network, which describes the average speed with which randomly walking processes reach a node from other nodes of the network. It is similar to the closeness centrality except that the farness is measured by the expected length of a random walk rather than by the shortest path.
The concept was first proposed by White and Smyth (2003) under the name Markov centrality.[1]
Intuition
Consider a network with a finite number of nodes and a random walk process that starts at a certain node and proceeds from node to node along the edges. At each node, it randomly chooses the edge to follow. In an unweighted network, the probability of choosing a certain edge is equal across all available edges, while in a weighted network it is proportional to the edge weights. A node is considered to be close to other nodes if a random walk initiated from any node of the network arrives at this particular node in relatively few steps on average.
Definition
Consider a weighted network – either directed or undirected – with n nodes denoted by j=1, …, n; and a random walk process on this network with a transition matrix M. The $m_{ij}$ element of M gives the probability that a random walker, having reached node i, proceeds directly to node j. These probabilities are defined in the following way.
$m_{ij}={\frac {a_{ij}}{\sum _{k=1}^{n}a_{ik}}}$
where $a_{ij}$ is the (i,j)th element of the weighting matrix A of the network. When there is no edge between two nodes, the corresponding element of the A matrix is zero.
The random walk closeness centrality of a node i is the inverse of the average mean first passage time to that node:
$C_{i}^{RWC}={\frac {n}{\sum _{j=1}^{n}H(j,i)}}$
where $H(j,i)$ is the mean first passage time from node j to node i.
Mean first passage time
The mean first passage time from node i to node j is the expected number of steps it takes for the process to reach node j from node i for the first time:
$H(i,j)=\sum _{r=1}^{\infty }rP(i,j,r)$
where P(i,j,r) denotes the probability that it takes exactly r steps to reach j from i for the first time. To calculate these probabilities of reaching a node for the first time in r steps, it is useful to regard the target node as an absorbing one, and introduce a transformation of M by deleting its j-th row and column and denoting it by $M_{-j}$. As the probability of a process starting at i and being in k after r-1 steps is simply given by the (i,k)th element of $M_{-j}^{r-1}$, P(i,j,r) can be expressed as
$P(i,j,r)=\sum _{k\neq j}(M_{-j}^{r-1})_{ik}m_{kj}$
Substituting this into the expression for mean first passage time yields
$H(i,j)=\sum _{r=1}^{\infty }r\sum _{k\neq j}(M_{-j}^{r-1})_{ik}m_{kj}$
Using the formula for the summation of geometric series for matrices yields
$H(i,j)=\sum _{k\neq j}((I-M_{-j})^{-2})_{ik}m_{kj}$
where I is the (n−1)-dimensional identity matrix.
For computational convenience, this expression can be vectorized as
$H(\cdot ,j)=(I-M_{-j})^{-1}e$
where $H(\cdot ,j)$ is the vector of first passage times for a walk ending at node j, and e is an (n−1)-dimensional vector of ones.
Mean first passage time is not symmetric, even for undirected graphs.
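The formulas above translate directly into a few lines of linear algebra. The following is a minimal NumPy sketch, not a reference implementation: it assumes a strongly connected weighted network given by a weight matrix A with no all-zero rows, and the function name and example graph are illustrative.

import numpy as np

def random_walk_closeness(A):
    # A: n x n weight matrix of a (strongly connected) network.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = A / A.sum(axis=1, keepdims=True)        # transition matrix, rows sum to 1
    C = np.empty(n)
    for j in range(n):
        keep = [i for i in range(n) if i != j]
        M_j = M[np.ix_(keep, keep)]             # M with j-th row and column deleted
        H = np.linalg.solve(np.eye(n - 1) - M_j, np.ones(n - 1))   # H(., j)
        C[j] = n / H.sum()                      # inverse of the average MFPT to j
    return C

# Tiny example: a weighted path 0 - 1 - 2; the middle node is the most central.
A = np.array([[0., 1., 0.],
              [1., 0., 2.],
              [0., 2., 0.]])
print(random_walk_closeness(A))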
In model networks
According to simulations performed by Noh and Rieger (2004), the distribution of random walk closeness centrality in a Barabási-Albert model is mainly determined by the degree distribution. In such a network, the random walk closeness centrality of a node is roughly proportional to, but does not increase monotonically with, its degree.
Applications for real networks
Random walk closeness centrality is a more relevant measure than simple closeness centrality for applications where the concept of shortest paths is not meaningful or is too restrictive for a reasonable assessment of the nature of the system. This is the case, for example, when the analyzed process evolves in the network without any specific intention to reach a certain point, or without the ability to find the shortest path to its target. One example of a random walk in a network is the way a certain coin circulates in an economy: it is passed from one person to another through transactions, without any intention of reaching a specific individual. Another example where the concept of shortest paths is not very useful is a densely connected network. Furthermore, since shortest paths are not influenced by self-loops, random walk closeness centrality is a more adequate measure than closeness centrality when analyzing networks where self-loops are important.
An important application in the field of economics is the analysis of the input-output model of an economy, which is represented by a densely connected weighted network with important self-loops.[2]
The concept is widely used in natural sciences as well. One biological application is the analysis of protein-protein interactions.[3]
Random walk betweenness centrality
A related concept, proposed by Newman,[4] is random walk betweenness centrality. Just as random walk closeness centrality is the random walk counterpart of closeness centrality, random walk betweenness centrality is the random walk counterpart of betweenness centrality. Unlike the usual betweenness centrality measure, it counts not only the shortest paths passing through the given node, but all possible paths crossing it.
Formally, the random walk betweenness centrality of a node is
$C_{i}^{RWB}=\sum _{j\neq i\neq k}r_{jk}$
where the $r_{jk}$ element of matrix R contains the probability of a random walk starting at node j with absorbing node k, passing through node i.
Calculating random walk betweenness in large networks is computationally very intensive.[5]
Second order centrality
Another random walk based centrality is the second order centrality.[6] Instead of counting the paths passing through a given node (as random walk betweenness centrality does), it focuses on another characteristic of random walks on graphs: the expected standard deviation of the return times of a random walk to a node constitutes its centrality. The lower that deviation, the more central the node is.
Calculating the second order centrality on large arbitrary graphs is also computationally intensive, as its complexity is $O(n^{3})$ (worst case achieved on the lollipop graph).
See also
• Centrality
References
1. White, Scott; Smyth, Padhraic (2003). Algorithms for Estimating Relative Importance in Networks (PDF). ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. doi:10.1145/956750.956782. ISBN 1581137370.
2. Blöchl F, Theis FJ, Vega-Redondo F, and Fisher E: Vertex Centralities in Input-Output Networks Reveal the Structure of Modern Economies, Physical Review E, 83(4):046127, 2011.
3. Aidong Zhang: Protein Interaction Networks: Computational Analysis (Cambridge University Press) 2007
4. Newman, M.E. J.: A measure of betweenness centrality based on random walks. Social Networks, Volume 27, Issue 1, January 2005, Pages 39–54
5. Kang, U., Papadimitriou, S., Sun, J., and Tong, H.: Centralities in Large Networks: Algorithms and Observations. SIAM International Conference on Data Mining 2011, Mesa, Arizona, USA.
6. A.-M. Kermarrec, E. Le Merrer, B. Sericola, G. Trédan: Second order centrality: Distributed assessment of nodes criticity in complex networks. Elsevier Computer Communications 34(5):619-628, 2011.
RL (complexity)
Randomized Logarithmic-space (RL),[1] sometimes called RLP (Randomized Logarithmic-space Polynomial-time),[2] is the complexity class of computational complexity theory problems solvable in logarithmic space and polynomial time with probabilistic Turing machines with one-sided error. It is named in analogy with RP, which is similar but has no logarithmic space restriction.
Definition
The probabilistic Turing machines in the definition of RL never accept incorrectly but are allowed to reject incorrectly less than 1/3 of the time; this is called one-sided error. The constant 1/3 is arbitrary; any x with 0 < x < 1 would suffice. This error can be made $2^{-p(x)}$ times smaller for any polynomial p(x) without using more than polynomial time or logarithmic space by running the algorithm repeatedly.
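The error reduction works because a machine with one-sided error can simply be run k times, accepting if any run accepts: a NO-instance is still never accepted, while a YES-instance is rejected in all k runs with probability at most (1/3)^k. The toy simulation below illustrates only this amplification arithmetic; it is ordinary Python, not a logspace machine, and the names are illustrative.

import random

def one_sided_run(is_yes_instance, error=1/3):
    # Never accepts a NO instance; rejects a YES instance with probability <= error.
    return is_yes_instance and random.random() > error

def amplified(is_yes_instance, k):
    # Accept if any of k independent runs accepts; error drops to error**k.
    return any(one_sided_run(is_yes_instance) for _ in range(k))

trials = 100_000
for k in (1, 2, 8):
    misses = sum(not amplified(True, k) for _ in range(trials))
    print(k, misses / trials)   # roughly (1/3)**k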
Relation to other complexity classes
Sometimes the name RL is reserved for the class of problems solvable by logarithmic-space probabilistic machines in unbounded time. However, this class can be shown to be equal to NL using a probabilistic counter, and so is usually referred to as NL instead; this also shows that RL is contained in NL. RL is contained in BPL, which is similar but allows two-sided error (incorrect accepts). RL contains L, the problems solvable by deterministic Turing machines in log space, since its definition is just more general.
Noam Nisan showed in 1992 the weak derandomization result that RL is contained in SC,[3] the class of problems solvable in polynomial time and polylogarithmic space on a deterministic Turing machine; in other words, given polylogarithmic space, a deterministic machine can simulate logarithmic space probabilistic algorithms.
It is believed that RL is equal to L, that is, that polynomial-time logspace computation can be completely derandomized; major evidence for this was presented by Reingold et al. in 2005.[4] A proof of this is the holy grail of the efforts in the field of unconditional derandomization of complexity classes. A major step forward was Omer Reingold's proof that SL is equal to L.
References
1. Complexity Zoo: RL
2. A. Borodin, S.A. Cook, P.W. Dymond, W.L. Ruzzo, and M. Tompa. Two applications of inductive counting for complementation problems. SIAM Journal on Computing, 18(3):559–578. 1989.
3. Nisan, Noam (1992), "RL ⊆ SC", Proceedings of the 24th ACM Symposium on Theory of Computing (STOC '92), Victoria, British Columbia, Canada, pp. 619–623, doi:10.1145/129712.129772.
4. O. Reingold and L. Trevisan and S. Vadhan. Pseudorandom walks in biregular graphs and the RL vs. L problem, ECCC TR05-022, 2004.
Important complexity classes
Considered feasible
• DLOGTIME
• AC0
• ACC0
• TC0
• L
• SL
• RL
• NL
• NL-complete
• NC
• SC
• CC
• P
• P-complete
• ZPP
• RP
• BPP
• BQP
• APX
• FP
Suspected infeasible
• UP
• NP
• NP-complete
• NP-hard
• co-NP
• co-NP-complete
• AM
• QMA
• PH
• ⊕P
• PP
• #P
• #P-complete
• IP
• PSPACE
• PSPACE-complete
Considered infeasible
• EXPTIME
• NEXPTIME
• EXPSPACE
• 2-EXPTIME
• ELEMENTARY
• PR
• R
• RE
• ALL
Class hierarchies
• Polynomial hierarchy
• Exponential hierarchy
• Grzegorczyk hierarchy
• Arithmetical hierarchy
• Boolean hierarchy
Families of classes
• DTIME
• NTIME
• DSPACE
• NSPACE
• Probabilistically checkable proof
• Interactive proof system
List of complexity classes
Randomized response
Randomised response is a research method used in structured survey interviews. It was first proposed by S. L. Warner in 1965 and later modified by B. G. Greenberg and coauthors in 1969.[1][2] It allows respondents to respond to sensitive issues (such as criminal behavior or sexuality) while maintaining confidentiality. Chance decides, unknown to the interviewer, whether the question is to be answered truthfully, or "yes", regardless of the truth.
For example, social scientists have used it to ask people whether they use drugs, whether they have illegally installed telephones, or whether they have evaded paying taxes. Before abortions were legal, social scientists used the method to ask women whether they had had abortions.[3]
The concept is somewhat similar to plausible deniability. Plausible deniability allows the subject to credibly say that they did not make a statement, while the randomized response technique allows the subject to credibly say that they had not been truthful when making a statement.
Example
With a coin
A person is asked if they had sex with a prostitute this month. Before they answer, they flip a coin. They are then instructed to answer "yes" if the coin comes up tails, and truthfully if it comes up heads. Only they know whether their answer reflects the toss of the coin or their true experience. It is important to assume that people who get heads answer truthfully; otherwise the surveyor cannot draw valid conclusions.
Half the people (or half the questionnaire population) get tails and the other half get heads when they flip the coin. The tails half will answer "yes" regardless of whether they have done it; the heads half will answer truthfully according to their experience. So whatever proportion of the group said "no", the true proportion who did not have sex with a prostitute is double that, on the assumption that, in a large random sample, the two halves are essentially alike. For example, if 20% of the population surveyed said "no", then the true fraction that did not have sex with a prostitute is 40%.
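A small simulation makes the doubling argument concrete. The sketch below assumes a true prevalence of 10% purely for illustration; the estimator simply doubles the observed share of "no" answers.

import random

def coin_response(has_attribute):
    # Tails (probability 1/2): forced "yes". Heads: truthful answer.
    if random.random() < 0.5:
        return True
    return has_attribute

true_prevalence = 0.10      # assumed value, only for the simulation
n = 100_000
answers = [coin_response(random.random() < true_prevalence) for _ in range(n)]
share_no = answers.count(False) / n
print("estimated share without the attribute:", 2 * share_no)   # ~ 0.90
print("estimated prevalence:", 1 - 2 * share_no)                # ~ 0.10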
With cards
The same question can be asked with three cards which are unmarked on one side, and bear a question on the other side. The cards are randomly mixed, and laid in front of the subject. The subject takes one card, turns it over, and answers the question on it truthfully with either "yes" or "no".
• One card asks: "Did you have sex with a prostitute this month?"
• Another card asks: "Is there a triangle on this card?" (There is no triangle.)
• The last card asks: "Is there a triangle on this card?" (There is a triangle.)
The researcher does not know which question has been asked.
Under the assumption that the "yes" and "no" answers to the control questions cancel each other out, the number of subjects who have had sex with a prostitute, in excess of those who have not, is triple the excess of "yes" answers over "no" answers.
Original version
Warner's original version (1965) is slightly different: The sensitive question is worded in two dichotomous alternatives, and chance decides, unknown to the interviewer, which one is to be answered honestly. The interviewer gets a "yes" or "no" without knowing which of the two questions it answers. For mathematical reasons chance cannot be "fair" (½ and ½). Let $p$ be the probability of answering the sensitive question and $EP$ the true proportion of those interviewed bearing the embarrassing property; then the proportion of "yes"-answers $YA$ is composed as follows:
• $YA=p\times EP+(1-p)(1-EP)$
Transformed to yield EP:
• $EP={\frac {YA+p-1}{2p-1}}$
Example
• Alternative 1: "I have consumed marijuana."
• Alternative 2: "I have never consumed marijuana."
The interviewed are asked to secretly throw a die and answer the first question only if they throw a 6, otherwise the second question ($p={\tfrac {1}{6}}$). The "yes"-answers are now composed of consumers who have thrown a 6 and non-consumers who have thrown a different number. Let the result be 75 "yes"-answers out of 100 interviewed ($YA={\tfrac {3}{4}}$). Inserting this into the formula yields
• $EP=({\tfrac {3}{4}}+{\tfrac {1}{6}}-1)/(2\times {\tfrac {1}{6}}-1)={\tfrac {1}{8}}$
If all interviewed have answered honestly then their true proportion of consumers is 1/8 (= 12.5%).
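The estimator is a one-line transcription of the formula above. The following sketch (function name illustrative) reproduces the worked example:

def warner_estimate(ya, p):
    # EP = (YA + p - 1) / (2p - 1); requires p != 1/2, which is why
    # the randomizing device cannot be a fair coin.
    return (ya + p - 1) / (2 * p - 1)

print(warner_estimate(ya=0.75, p=1/6))   # 0.125, i.e. the 1/8 of the example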
See also
• Bogus pipeline
• Differential privacy
• Loaded question
• Unmatched count
References
1. Warner, S. L. (March 1965). "Randomised response: a survey technique for eliminating evasive answer bias". Journal of the American Statistical Association. Taylor & Francis. 60 (309): 63–69. doi:10.1080/01621459.1965.10480775. JSTOR 2283137. PMID 12261830. S2CID 35435339.
2. Greenberg, B. G.; et al. (June 1969). "The Unrelated Question Randomised Response Model: Theoretical Framework". Journal of the American Statistical Association. Taylor & Francis. 64 (326): 520–39. doi:10.2307/2283636. JSTOR 2283636.
3. Abernathy, James R.; et al. (February 1970). "Estimates of induced abortion in urban North Carolina". Demography. 7 (1): 19–29. doi:10.2307/2060019. JSTOR 2060019. PMID 5524615.
Further reading
• Aoki, S.; Sezaki, K. (2014). "Privacy-preserving community sensing for medical research with duplicated perturbation". 2014 IEEE International Conference on Communications (ICC). pp. 4252–4257. doi:10.1109/ICC.2014.6883988. ISBN 978-1-4799-2003-7. S2CID 24050604.
• Chaudhuri, Arijit; Mukerjee, Rahul (1987). Randomized Response: Theory and Techniques. CRC Press. ISBN 9780824777852 – via Google Books.
• John, Leslie K.; et al. (2018). "When and Why Randomized Response Techniques (Fail to) Elicit the Truth". Organizational Behavior and Human Decision Processes. 148: 101–123. doi:10.1016/j.obhdp.2018.07.004. S2CID 52263233.
• Lee, Cheon-Sig; et al. (2013). "Estimating at least seven measures for qualitative variables using randomized response sampling". Statistics and Probability Letters. 83 (1): 399–409. doi:10.1016/j.spl.2012.10.004.
• Ostapczuk, M.; et al. (2009). "Assessing sensitive attributes using the randomized-response-technique: Evidence for the importance of response symmetry". Journal of Educational and Behavioral Statistics. 34 (2): 267–87. doi:10.3102/1076998609332747. S2CID 15064377.
• Ostapczuk, M.; et al. (2009). "A randomized-response investigation of the education effect in attitudes towards foreigners". European Journal of Social Psychology. 39 (6): 920–31. doi:10.1002/ejsp.588.
• Quercia, D.; et al. (2011). "SpotME if You Can: Randomized Responses for Location Obfuscation on Mobile Phones". 2011 31st International Conference on Distributed Computing Systems. pp. 363–372. doi:10.1109/ICDCS.2011.79. ISBN 978-1-61284-384-1. S2CID 15454609.
Randomized algorithms as zero-sum games
Randomized algorithms are algorithms that employ a degree of randomness as part of their logic. These algorithms can be used to give good average-case results (complexity-wise) to problems which are hard to solve deterministically, or display poor worst-case complexity. An algorithmic game theoretic approach can help explain why in the average case randomized algorithms may work better than deterministic algorithms.
Formalizing the game
Consider a zero-sum game between player A, whose strategies are deterministic algorithms, and player B, whose strategies are inputs for A's algorithms. The cost of a strategy profile is the running time of A's chosen algorithm on B's chosen input. Therefore, player A tries to minimize the cost, and player B tries to maximize it. In the world of pure strategies, for every algorithm that A chooses, B may choose the most costly input – this is the worst-case scenario, and can be found using standard complexity analysis.
But in the real world, inputs are normally not selected by an ‘evil opponent’ – rather, they come from some distribution over inputs. Since this is the case, if we allow the algorithms to also be drawn from some distribution, we may look at the game as one that allows mixed strategies. That is, each player chooses a distribution over its strategies.
Analysis
Incorporating mixed strategies into the game allows us to use von Neumann's minimax theorem:
$\min _{R}\max _{D}T(R,D)=\max _{D}\min _{A}T(A,D)\,$
where R is a distribution over the algorithms, D is a distribution over inputs, A is a single deterministic algorithm, and T(R, D) is the expected running time when the algorithm is drawn from R and the input from D. More specifically, for a deterministic algorithm A:
$T(A,D)=\,{\underset {x\sim D}{\operatorname {E} }}[T(A,x)].\,$
If we limit the set of algorithms to a specific family (for instance, all deterministic choices for pivots in the quick sort algorithm), choosing an algorithm A from R is equivalent to running a randomized algorithm (for instance, running quick sort and randomly choosing the pivots at each step).
This gives us insight into Yao's principle, which states that the expected cost of any randomized algorithm for solving a given problem, on the worst-case input for that algorithm, can be no better than the expected cost, for a worst-case random probability distribution on the inputs, of the deterministic algorithm that performs best against that distribution.
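The minimax equality can be checked numerically on a toy cost matrix by solving both players' linear programs. The sketch below uses scipy.optimize.linprog; the 3×3 matrix of "running times" is made up purely for illustration.

import numpy as np
from scipy.optimize import linprog

# Toy cost matrix: T[i][j] = running time of deterministic algorithm i on input j.
T = np.array([[3., 1., 4.],
              [2., 3., 2.],
              [1., 4., 3.]])
m, n = T.shape

# Player A: distribution r over algorithms minimizing the worst column.
# minimize v  s.t.  (r^T T)_j <= v for all j,  sum r = 1,  r >= 0
c = np.r_[np.zeros(m), 1.0]
A_ub = np.c_[T.T, -np.ones(n)]              # one row per input j
res_A = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                A_eq=np.r_[np.ones(m), 0.0].reshape(1, -1), b_eq=[1.0],
                bounds=[(0, None)] * m + [(None, None)], method="highs")

# Player B: distribution d over inputs maximizing the best row.
c2 = np.r_[np.zeros(n), -1.0]               # maximize v == minimize -v
A_ub2 = np.c_[-T, np.ones(m)]               # v - (T d)_i <= 0 for all i
res_B = linprog(c2, A_ub=A_ub2, b_ub=np.zeros(m),
                A_eq=np.r_[np.ones(n), 0.0].reshape(1, -1), b_eq=[1.0],
                bounds=[(0, None)] * n + [(None, None)], method="highs")

print(res_A.fun, -res_B.fun)                # equal: the value of the game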
Randomized algorithm
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, or the output (or both) are random variables.
Part of a series on
Probabilistic
data structures
• Bloom filter
• Count–min sketch
• Quotient filter
• Skip list
Random trees
• Random binary tree
• Treap
• Rapidly exploring random tree
Related
• Randomized algorithm
• HyperLogLog
One has to distinguish between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithms, for example Quicksort[1]), and algorithms which have a chance of producing an incorrect result (Monte Carlo algorithms, for example the Monte Carlo algorithm for the MFAS problem[2]) or fail to produce a result either by signaling a failure or failing to terminate. In some cases, probabilistic algorithms are the only practical means of solving a problem.[3]
In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior and mathematical guarantees which may depend on the existence of an ideal true random number generator.
Motivation
As a motivating example, consider the problem of finding an ‘a’ in an array of n elements.
Input: An array of n≥2 elements, in which half are ‘a’s and the other half are ‘b’s.
Output: Find an ‘a’ in the array.
We give two versions of the algorithm, one Las Vegas algorithm and one Monte Carlo algorithm.
Las Vegas algorithm:
findingA_LV(array A, n)
begin
repeat
Randomly select one element out of n elements.
until 'a' is found
end
This algorithm succeeds with probability 1. The number of iterations varies and can be arbitrarily large, but the expected number of iterations is
$\lim _{n\to \infty }\sum _{i=1}^{n}{\frac {i}{2^{i}}}=2$
Since it is constant, the expected run time over many calls is $\Theta (1)$. (See Big Theta notation)
Monte Carlo algorithm:
findingA_MC(array A, n, k)
begin
i := 0
repeat
Randomly select one element out of n elements.
i := i + 1
until i = k or 'a' is found
end
If an ‘a’ is found, the algorithm succeeds, else the algorithm fails. After k iterations, the probability of finding an ‘a’ is:
$\Pr[\mathrm {find~a} ]=1-(1/2)^{k}$
This algorithm does not guarantee success, but the run time is bounded. The number of iterations is always less than or equal to k. Taking k to be constant the run time (expected and absolute) is $\Theta (1)$.
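For concreteness, here are runnable Python counterparts of the two pseudocode routines above (a sketch; the function names are illustrative):

import random

def finding_a_LV(A):
    # Las Vegas: always returns a correct index; the running time is random,
    # with an expected two probes when half the entries are 'a'.
    while True:
        i = random.randrange(len(A))
        if A[i] == 'a':
            return i

def finding_a_MC(A, k):
    # Monte Carlo: at most k probes; fails (returns None) with probability (1/2)**k.
    for _ in range(k):
        i = random.randrange(len(A))
        if A[i] == 'a':
            return i
    return None

A = list('ab' * 8)   # n = 16 elements, half 'a's and half 'b's
print(finding_a_LV(A), finding_a_MC(A, k=5))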
Randomized algorithms are particularly useful when faced with a malicious "adversary" or attacker who deliberately tries to feed a bad input to the algorithm (see worst-case complexity and competitive analysis (online algorithm)) such as in the Prisoner's dilemma. It is for this reason that randomness is ubiquitous in cryptography. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore, either a source of truly random numbers or a cryptographically secure pseudo-random number generator is required. Another area in which randomness is inherent is quantum computing.
In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable. The Monte Carlo algorithm (related to the Monte Carlo method for simulation) is guaranteed to complete in an amount of time that can be bounded by a function of the input size and its parameter k, but allows a small probability of error. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm (via Markov's inequality), by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time. Conversely, if an efficient verification procedure exists to check whether an answer is correct, then a Monte Carlo algorithm can be converted into a Las Vegas algorithm by running the Monte Carlo algorithm repeatedly until a correct answer is obtained.
Computational complexity
Computational complexity theory models randomized algorithms as probabilistic Turing machines. Both Las Vegas and Monte Carlo algorithms are considered, and several complexity classes are studied. The most basic randomized complexity class is RP, which is the class of decision problems for which there is an efficient (polynomial time) randomized algorithm (or probabilistic Turing machine) which recognizes NO-instances with absolute certainty and recognizes YES-instances with a probability of at least 1/2. The complement class for RP is co-RP. Problem classes having (possibly nonterminating) algorithms with polynomial time average case running time whose output is always correct are said to be in ZPP.
The class of problems for which both YES and NO-instances are allowed to be identified with some error is called BPP. This class acts as the randomized equivalent of P, i.e. BPP represents the class of efficient randomized algorithms.
Early history
Sorting
Quicksort was discovered by Tony Hoare in 1959, and subsequently published in 1961.[4] In the same year, Hoare published the quickselect algorithm,[5] which finds the median element of a list in linear expected time. It remained open until 1973 whether a deterministic linear-time algorithm existed.[6]
Number Theory
In 1917, Henry Cabourn Pocklington introduced a randomized algorithm known as Pocklington's algorithm for efficiently finding square roots modulo prime numbers.[7] In 1970, Elwyn Berlekamp introduced a randomized algorithm for efficiently computing the roots of a polynomial over a finite field.[8] In 1977, Robert M. Solovay and Volker Strassen discovered a polynomial-time randomized primality test (i.e., determining the primality of a number). Soon afterwards Michael O. Rabin demonstrated that the 1976 Miller's primality test could also be turned into a polynomial-time randomized algorithm. At that time, no provably polynomial-time deterministic algorithms for primality testing were known.
Data Structures
One of the earliest randomized data structures is the hash table, which was introduced in 1953 by Hans Peter Luhn at IBM.[9] Luhn's hash table used chaining to resolve collisions and was also one of the first applications of linked lists.[9] Subsequently, in 1954, Gene Amdahl, Elaine M. McGraw, Nathaniel Rochester, and Arthur Samuel of IBM Research introduced linear probing,[9] although Andrey Ershov independently had the same idea in 1957.[9] In 1962, Donald Knuth performed the first correct analysis of linear probing,[9] although the memorandum containing his analysis was not published until much later.[10] The first published analysis was due to Konheim and Weiss in 1966.[11]
Early works on hash tables either assumed access to a fully random hash function or assumed that the keys themselves were random.[9] In 1979, Carter and Wegman introduced universal hash functions,[12] which they showed could be used to implement chained hash tables with constant expected time per operation.
Early work on randomized data structures also extended beyond hash tables. In 1970, Burton Howard Bloom introduced an approximate-membership data structure known as the Bloom filter.[13] In 1989, Raimund Seidel and Cecilia R. Aragon introduced a randomized balanced search tree known as the treap.[14] In the same year, William Pugh introduced another randomized search tree known as the skip list.[15]
Implicit Uses in Combinatorics
Prior to the popularization of randomized algorithms in computer science, Paul Erdős popularized the use of randomized constructions as a mathematical technique for establishing the existence of mathematical objects. This technique has become known as the probabilistic method.[16] Erdős gave his first application of the probabilistic method in 1947, when he used a simple randomized construction to establish the existence of Ramsey graphs.[17] He famously used a much more sophisticated randomized algorithm in 1959 to establish the existence of graphs with high girth and chromatic number.[18][16]
Examples
Quicksort
Quicksort is a familiar, commonly used algorithm in which randomness can be useful. Many deterministic versions of this algorithm require $O(n^{2})$ time to sort n numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing in $O(n\log n)$ time regardless of the characteristics of the input.
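A minimal sketch of the randomized variant follows; it is not an in-place implementation, and list-comprehension partitioning is chosen for clarity over efficiency.

import random

def randomized_quicksort(xs):
    # Uniformly random pivot: expected O(n log n) comparisons
    # for every input ordering, including already sorted arrays.
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    less  = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    more  = [x for x in xs if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(more)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))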
Randomized incremental constructions in geometry
In computational geometry, a standard technique to build a structure like a convex hull or Delaunay triangulation is to randomly permute the input points and then insert them one by one into the existing structure. The randomization ensures that the expected number of changes to the structure caused by an insertion is small, and so the expected running time of the algorithm can be bounded from above. This technique is known as randomized incremental construction.[19]
Min cut
Input: A graph G(V,E)
Output: A cut partitioning the vertices into L and R, with the minimum number of edges between L and R.
Recall that the contraction of two nodes, u and v, in a (multi-)graph yields a new node u ' with edges that are the union of the edges incident on either u or v, except for any edge(s) connecting u and v. Figure 1 gives an example of contraction of vertices A and B. After contraction, the resulting graph may have parallel edges, but contains no self loops.
Karger's[20] basic algorithm:
begin
i = 1
repeat
repeat
Take a random edge (u,v) ∈ E in G
replace u and v with the contraction u'
until only 2 nodes remain
obtain the corresponding cut result Ci
i = i + 1
until i = m
output the minimum cut among C1, C2, ..., Cm.
end
In each execution of the outer loop, the algorithm repeats the inner loop until only 2 nodes remain, and the corresponding cut is obtained. The run time of one execution is $O(n)$, where n denotes the number of vertices. After m executions of the outer loop, we output the minimum cut among all the results. Figure 2 gives an example of one execution of the algorithm. After execution, we get a cut of size 3.
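Before the analysis, here is a compact Python sketch of the procedure, assuming a connected graph given as an edge list on vertices 0..n−1; it uses a union-find structure to track contractions and is written for clarity rather than speed.

import random

def karger_min_cut(n, edges, runs):
    # edges: list of (u, v) pairs on vertices 0..n-1; parallel edges allowed.
    best = None
    for _ in range(runs):
        parent = list(range(n))            # union-find over contracted supernodes

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        remaining, E = n, list(edges)
        while remaining > 2:
            u, v = random.choice(E)        # take a random remaining edge
            parent[find(u)] = find(v)      # contract it
            remaining -= 1
            E = [e for e in E if find(e[0]) != find(e[1])]   # drop self-loops
        cut = len(E)                       # edges between the two supernodes
        best = cut if best is None else min(best, cut)
    return best

# Square with one diagonal; its minimum cut has size 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(karger_min_cut(4, edges, runs=50))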
Lemma 1 — Let k be the min cut size, and let C = {e1, e2, ..., ek} be the min cut. If, during iteration i, no edge e ∈ C is selected for contraction, then Ci = C.
Proof
If G is not connected, then G can be partitioned into L and R without any edge between them. So the min cut in a disconnected graph is 0. Now, assume G is connected. Let V=L∪R be the partition of V induced by C : C = { {u,v} ∈ E : u ∈ L, v ∈ R} (well-defined since G is connected). Consider an edge {u,v} of C. Initially, u and v are distinct vertices. As long as we pick only edges $f\notin C$ for contraction, vertices on the same side of the partition merge only with each other, so u and v never get merged. Thus, at the end of the algorithm, we have two compound nodes covering the entire graph, one consisting of the vertices of L and the other consisting of the vertices of R. As in figure 2, the size of the min cut is 1, and C = {(A,B)}. If we don't select (A,B) for contraction, we get the min cut.
Lemma 2 — If G is a multigraph with p vertices and whose min cut has size k, then G has at least pk/2 edges.
Proof
Because the min cut is k, every vertex v must satisfy degree(v) ≥ k. Therefore, the sum of the degrees is at least pk. But it is well known that the sum of vertex degrees equals 2|E|. The lemma follows.
Analysis of algorithm
The probability that the algorithm succeeds is 1 − the probability that all attempts fail. By independence, the probability that all attempts fail is
$\prod _{i=1}^{m}\Pr(C_{i}\neq C)=\prod _{i=1}^{m}(1-\Pr(C_{i}=C)).$
By lemma 1, the probability that Ci = C is the probability that no edge of C is selected during iteration i. Consider the inner loop and let Gj denote the graph after j edge contractions, where j ∈ {0, 1, …, n − 3}. Gj has n − j vertices. We use the chain rule of conditional probabilities. The probability that the edge chosen at iteration j is not in C, given that no edge of C has been chosen before, is $1-{\frac {k}{|E(G_{j})|}}$. Note that Gj still has min cut of size k, so by Lemma 2, it still has at least ${\frac {(n-j)k}{2}}$ edges.
Thus, $1-{\frac {k}{|E(G_{j})|}}\geq 1-{\frac {2}{n-j}}={\frac {n-j-2}{n-j}}$.
So by the chain rule, the probability of finding the min cut C is
$\Pr[C_{i}=C]\geq \left({\frac {n-2}{n}}\right)\left({\frac {n-3}{n-1}}\right)\left({\frac {n-4}{n-2}}\right)\ldots \left({\frac {3}{5}}\right)\left({\frac {2}{4}}\right)\left({\frac {1}{3}}\right).$
Cancellation gives $\Pr[C_{i}=C]\geq {\frac {2}{n(n-1)}}$. Thus the probability that the algorithm succeeds is at least $1-\left(1-{\frac {2}{n(n-1)}}\right)^{m}$. For $m={\frac {n(n-1)}{2}}\ln n$, this is at least $1-{\frac {1}{n}}$. The algorithm thus finds the min cut with probability at least $1-{\frac {1}{n}}$, in time $O(mn)=O(n^{3}\log n)$.
Derandomization
Randomness can be viewed as a resource, like space and time. Derandomization is then the process of removing randomness (or using as little of it as possible). It is not currently known if all algorithms can be derandomized without significantly increasing their running time. For instance, in computational complexity, it is unknown whether P = BPP, i.e., we do not know whether we can take an arbitrary randomized algorithm that runs in polynomial time with a small error probability and derandomize it to run in polynomial time without using randomness.
There are specific methods that can be employed to derandomize particular randomized algorithms:
• the method of conditional probabilities, and its generalization, pessimistic estimators
• discrepancy theory (which is used to derandomize geometric algorithms)
• the exploitation of limited independence in the random variables used by the algorithm, such as the pairwise independence used in universal hashing
• the use of expander graphs (or dispersers in general) to amplify a limited amount of initial randomness (this last approach is also referred to as generating pseudorandom bits from a random source, and leads to the related topic of pseudorandomness)
• changing the randomized algorithm to use a hash function as a source of randomness for the algorithm's tasks, and then derandomizing the algorithm by brute-forcing all possible parameters (seeds) of the hash function. This technique is usually used to exhaustively search a sample space and make the algorithm deterministic (e.g. randomized graph algorithms)
Where randomness helps
When the model of computation is restricted to Turing machines, it is currently an open question whether the ability to make random choices allows some problems to be solved in polynomial time that cannot be solved in polynomial time without this ability; this is the question of whether P = BPP. However, in other contexts, there are specific examples of problems where randomization yields strict improvements.
• Based on the initial motivating example: given an exponentially long string of $2^{k}$ characters, half a's and half b's, a random-access machine requires $2^{k-1}$ lookups in the worst case to find the index of an a; if it is permitted to make random choices, it can solve this problem in an expected polynomial number of lookups.
• The natural way of carrying out a numerical computation in embedded systems or cyber-physical systems is to provide a result that approximates the correct one with high probability (or Probably Approximately Correct Computation (PACC)). The hard problem associated with the evaluation of the discrepancy loss between the approximated and the correct computation can be effectively addressed by resorting to randomization[21]
• In communication complexity, the equality of two strings can be verified to some reliability using $\log n$ bits of communication with a randomized protocol; any deterministic protocol requires $\Theta (n)$ bits if defending against a strong opponent.[22] (A fingerprinting sketch follows this list.)
• The volume of a convex body can be estimated by a randomized algorithm to arbitrary precision in polynomial time.[23] Bárány and Füredi showed that no deterministic algorithm can do the same.[24] This is true unconditionally, i.e. without relying on any complexity-theoretic assumptions, assuming the convex body can be queried only as a black box.
• A more complexity-theoretic example of a place where randomness appears to help is the class IP. IP consists of all languages that can be accepted (with high probability) by a polynomially long interaction between an all-powerful prover and a verifier that implements a BPP algorithm. IP = PSPACE.[25] However, if it is required that the verifier be deterministic, then IP = NP.
• In a chemical reaction network (a finite set of reactions like A+B → 2C + D operating on a finite number of molecules), the ability to ever reach a given target state from an initial state is decidable, while even approximating the probability of ever reaching a given target state (using the standard concentration-based probability for which reaction will occur next) is undecidable. More specifically, a limited Turing machine can be simulated with arbitrarily high probability of running correctly for all time, only if a random chemical reaction network is used. With a simple nondeterministic chemical reaction network (any possible reaction can happen next), the computational power is limited to primitive recursive functions.[26]
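The randomized equality protocol in the communication-complexity bullet above is usually implemented by fingerprinting: one party hashes its string modulo a random prime and sends only the short fingerprint. The sketch below is a simplified illustration; a real protocol would draw a random prime with Θ(log n) bits rather than pick from the small fixed pool used here.

import random

def fingerprint(s, p, base=256):
    # Interpret the string as a number in the given base, reduced modulo p.
    h = 0
    for ch in s:
        h = (h * base + ord(ch)) % p
    return h

def probably_equal(s, t, primes):
    # Alice sends (p, fingerprint(s, p)): O(log n) bits. Bob compares locally.
    p = random.choice(primes)
    return fingerprint(s, p) == fingerprint(t, p)

primes = [997, 1009, 1013, 1019]        # toy pool; far too small in practice
s = 'a' * 1000 + 'b'
t = 'a' * 1000 + 'c'
print(probably_equal(s, s, primes))     # True
print(probably_equal(s, t, primes))     # False with high probability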
See also
• Probabilistic analysis of algorithms
• Atlantic City algorithm
• Monte Carlo algorithm
• Las Vegas algorithm
• Bogosort
• Principle of deferred decision
• Randomized algorithms as zero-sum games
• Probabilistic roadmap
• HyperLogLog
• count–min sketch
• approximate counting algorithm
• Karger's algorithm
Notes
1. Hoare, C. A. R. (July 1961). "Algorithm 64: Quicksort". Commun. ACM. 4 (7): 321–. doi:10.1145/366622.366644. ISSN 0001-0782.
2. Kudelić, Robert (2016-04-01). "Monte-Carlo randomized algorithm for minimal feedback arc set problem". Applied Soft Computing. 41: 235–246. doi:10.1016/j.asoc.2015.12.018.
3. "In testing primality of very large numbers chosen at random, the chance of stumbling upon a value that fools the Fermat test is less than the chance that cosmic radiation will cause the computer to make an error in carrying out a 'correct' algorithm. Considering an algorithm to be inadequate for the first reason but not for the second illustrates the difference between mathematics and engineering." Hal Abelson and Gerald J. Sussman (1996). Structure and Interpretation of Computer Programs. MIT Press, section 1.2.
4. Hoare, C. A. R. (July 1961). "Algorithm 64: Quicksort". Communications of the ACM. 4 (7): 321. doi:10.1145/366622.366644. ISSN 0001-0782.
5. Hoare, C. A. R. (July 1961). "Algorithm 65: find". Communications of the ACM. 4 (7): 321–322. doi:10.1145/366622.366647. ISSN 0001-0782.
6. Blum, Manuel; Floyd, Robert W.; Pratt, Vaughan; Rivest, Ronald L.; Tarjan, Robert E. (August 1973). "Time bounds for selection". Journal of Computer and System Sciences. 7 (4): 448–461. doi:10.1016/S0022-0000(73)80033-9.
7. Williams, H. C.; Shallit, J. O. (1994), "Factoring integers before computers", in Gautschi, Walter (ed.), Mathematics of Computation 1943–1993: a half-century of computational mathematics; Papers from the Symposium on Numerical Analysis and the Minisymposium on Computational Number Theory held in Vancouver, British Columbia, August 9–13, 1993, Proceedings of Symposia in Applied Mathematics, vol. 48, Amer. Math. Soc., Providence, RI, pp. 481–531, doi:10.1090/psapm/048/1314885, MR 1314885; see p. 504, "Perhaps Pocklington also deserves credit as the inventor of the randomized algorithm".
8. Berlekamp, E. R. (1971). "Factoring polynomials over large finite fields". Proceedings of the second ACM symposium on Symbolic and algebraic manipulation - SYMSAC '71. Los Angeles, California, United States: ACM Press. p. 223. doi:10.1145/800204.806290. ISBN 9781450377867. S2CID 6464612.
9. Knuth, Donald E. (1998). The art of computer programming, volume 3: (2nd ed.) sorting and searching. USA: Addison Wesley Longman Publishing Co., Inc. pp. 536–549. ISBN 978-0-201-89685-5.
10. Knuth, Donald (1963), Notes on "Open" Addressing, archived from the original on 2016-03-03
11. Konheim, Alan G.; Weiss, Benjamin (November 1966). "An Occupancy Discipline and Applications". SIAM Journal on Applied Mathematics. 14 (6): 1266–1274. doi:10.1137/0114101. ISSN 0036-1399.
12. Carter, J. Lawrence; Wegman, Mark N. (1979-04-01). "Universal classes of hash functions". Journal of Computer and System Sciences. 18 (2): 143–154. doi:10.1016/0022-0000(79)90044-8. ISSN 0022-0000.
13. Bloom, Burton H. (July 1970). "Space/time trade-offs in hash coding with allowable errors". Communications of the ACM. 13 (7): 422–426. doi:10.1145/362686.362692. ISSN 0001-0782. S2CID 7931252.
14. Aragon, C.R.; Seidel, R.G. (October 1989). "Randomized search trees". 30th Annual Symposium on Foundations of Computer Science. pp. 540–545. doi:10.1109/SFCS.1989.63531. ISBN 0-8186-1982-1.
15. Pugh, William (April 1989). Concurrent Maintenance of Skip Lists (PS, PDF) (Technical report). Dept. of Computer Science, U. Maryland. CS-TR-2222.
16. Alon, Noga (2016). The probabilistic method. Joel H. Spencer (Fourth ed.). Hoboken, New Jersey. ISBN 978-1-119-06195-3. OCLC 910535517.
17. P. Erdős: Some remarks on the theory of graphs, Bull. Amer. Math. Soc. 53 (1947), 292--294 MR8,479d; Zentralblatt 32,192.
18. Erdös, P. (1959). "Graph Theory and Probability". Canadian Journal of Mathematics. 11: 34–38. doi:10.4153/CJM-1959-003-9. ISSN 0008-414X. S2CID 122784453.
19. Seidel R. Backwards Analysis of Randomized Geometric Algorithms.
20. A. A. Tsay, W. S. Lovejoy, David R. Karger, Random Sampling in Cut, Flow, and Network Design Problems, Mathematics of Operations Research, 24(2):383–413, 1999.
21. Alippi, Cesare (2014), Intelligence for Embedded Systems, Springer, ISBN 978-3-319-05278-6.
22. Kushilevitz, Eyal; Nisan, Noam (2006), Communication Complexity, Cambridge University Press, ISBN 9780521029834. For the deterministic lower bound see p. 11; for the logarithmic randomized upper bound see pp. 31–32.
23. Dyer, M.; Frieze, A.; Kannan, R. (1991), "A random polynomial-time algorithm for approximating the volume of convex bodies" (PDF), Journal of the ACM, 38 (1): 1–17, doi:10.1145/102782.102783, S2CID 13268711
24. Füredi, Z.; Bárány, I. (1986), "Computing the volume is difficult", Proc. 18th ACM Symposium on Theory of Computing (Berkeley, California, May 28–30, 1986) (PDF), New York, NY: ACM, pp. 442–447, CiteSeerX 10.1.1.726.9448, doi:10.1145/12130.12176, ISBN 0-89791-193-8, S2CID 17867291
25. Shamir, A. (1992), "IP = PSPACE", Journal of the ACM, 39 (4): 869–877, doi:10.1145/146585.146609, S2CID 315182
26. Cook, Matthew; Soloveichik, David; Winfree, Erik; Bruck, Jehoshua (2009), "Programmability of chemical reaction networks", in Condon, Anne; Harel, David; Kok, Joost N.; Salomaa, Arto; Winfree, Erik (eds.), Algorithmic Bioprocesses (PDF), Natural Computing Series, Springer-Verlag, pp. 543–584, doi:10.1007/978-3-540-88869-7_27.
References
• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw–Hill, 1990. ISBN 0-262-03293-7. Chapter 5: Probabilistic Analysis and Randomized Algorithms, pp. 91–122.
• Dirk Draheim. "Semantics of the Probabilistic Typed Lambda Calculus (Markov Chain Semantics, Termination Behavior, and Denotational Semantics)." Springer, 2017.
• Jon Kleinberg and Éva Tardos. Algorithm Design. Chapter 13: "Randomized algorithms".
• Fallis, D. (2000). "The reliability of randomized algorithms". The British Journal for the Philosophy of Science. 51 (2): 255–271. doi:10.1093/bjps/51.2.255.
• M. Mitzenmacher and E. Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, New York (NY), 2005.
• Rajeev Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, New York (NY), 1995.
• Rajeev Motwani and P. Raghavan. A Survey on Randomized Algorithms.
• Christos Papadimitriou (1993), Computational Complexity (1st ed.), Addison Wesley, ISBN 978-0-201-53082-7 Chapter 11: Randomized computation, pp. 241–278.
• Rabin, Michael O. (1980). "Probabilistic algorithm for testing primality". Journal of Number Theory. 12: 128–138. doi:10.1016/0022-314X(80)90084-0.
• A. A. Tsay, W. S. Lovejoy, David R. Karger, Random Sampling in Cut, Flow, and Network Design Problems, Mathematics of Operations Research, 24(2):383–413, 1999.
• "Randomized Algorithms for Scientific Computing" (RASC), OSTI.GOV (July 10th, 2021).
EP matrix
In mathematics, an EP matrix (or range-Hermitian matrix[1] or RPN matrix[2]) is a square matrix A whose range is equal to the range of its conjugate transpose A*. Another equivalent characterization of EP matrices is that the range of A is orthogonal to the nullspace of A. Thus, EP matrices are also known as RPN (Range Perpendicular to Nullspace) matrices.
EP matrices were introduced in 1950 by Hans Schwerdtfeger,[1][3] and since then, many equivalent characterizations of EP matrices have been investigated in the literature.[4] The EP abbreviation originally stood for Equal Principal, but it is widely believed to stand for Equal Projectors instead, since an equivalent characterization of EP matrices is the equality of the projectors AA+ and A+A.[5]
The range of any matrix A is perpendicular to the null-space of A*, but is not necessarily perpendicular to the null-space of A. When A is an EP matrix, the range of A is precisely perpendicular to the null-space of A.
Properties
• An equivalent characterization of an EP matrix A is that A commutes with its Moore-Penrose inverse, that is, the projectors AA+ and A+A are equal. This is similar to the characterization of normal matrices where A commutes with its conjugate transpose.[4] As a corollary, nonsingular matrices are always EP matrices.
• The sum of EP matrices Ai is an EP matrix if the null-space of the sum is contained in the null-space of each matrix Ai.[6]
• Being an EP matrix is a necessary condition for normality: A is normal if and only if A is an EP matrix and $AA^{*}A^{2}=A^{2}A^{*}A$.[4]
• When A is an EP matrix, the Moore-Penrose inverse of A is equal to the group inverse of A.[4]
• A is an EP matrix if and only if the Moore-Penrose inverse of A is an EP matrix.[4]
Decomposition
The spectral theorem states that a matrix is normal if and only if it is unitarily similar to a diagonal matrix.
Weakening the normality condition to EPness, a similar statement is still valid. Precisely, a matrix A of rank r is an EP matrix if and only if it is unitarily similar to a core-nilpotent matrix,[2] that is,
$A=U{\begin{pmatrix}C&0\\0&0\end{pmatrix}}U^{*},$
where U is a unitary matrix and C is an r × r nonsingular matrix. Note that if A is full rank, then A = UCU*.
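The decomposition gives an easy way to generate EP matrices and to test the defining property numerically. The following sketch (real matrices, so U is orthogonal and * reduces to transpose) builds a rank-2 EP matrix and checks that A commutes with its Moore-Penrose inverse; the construction and seed are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

# Build a rank-2 EP matrix A = U diag(C, 0) U* with U orthogonal.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
C = rng.standard_normal((2, 2)) + 2 * np.eye(2)      # generically nonsingular core
Z = np.zeros((2, 2))
A = U @ np.block([[C, Z], [Z, Z]]) @ U.T

A_pinv = np.linalg.pinv(A)
print(np.allclose(A @ A_pinv, A_pinv @ A))   # True: A is EP

# A generic rank-deficient matrix typically fails the test:
B = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))
B_pinv = np.linalg.pinv(B)
print(np.allclose(B @ B_pinv, B_pinv @ B))   # typically False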
References
1. Drivaliaris, Dimosthenis; Karanasios, Sotirios; Pappas, Dimitrios (2008-10-01). "Factorizations of EP operators". Linear Algebra and Its Applications. 429 (7): 1555–1567. arXiv:0806.2088. doi:10.1016/j.laa.2008.04.026. ISSN 0024-3795.
2. Meyer, Carl D. (2000). Matrix analysis and applied linear algebra. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 0898714540. OCLC 43662189.
3. Schwerdtfeger, Hans (1950). Introduction to linear algebra and the theory of matrices. P. Noordhoff.
4. Cheng, Shizhen; Tian, Yongge (2003-12-01). "Two sets of new characterizations for normal and EP matrices". Linear Algebra and Its Applications. 375: 181–195. doi:10.1016/S0024-3795(03)00650-5. ISSN 0024-3795.
5. Bernstein, Dennis S. (2018). Scalar, Vector, and Matrix Mathematics: Theory, Facts, and Formulas. Princeton: Princeton University Press. ISBN 9781400888252. OCLC 1023540775.
6. Meenakshi, A.R. (1983). "On sums of EP matrices". Houston Journal of Mathematics. 9. CiteSeerX 10.1.1.638.7389.
Range of a function
In mathematics, the range of a function may refer to either of two closely related concepts:
• The codomain of the function
• The image of the function
Given two sets X and Y, a binary relation f between X and Y is a (total) function (from X to Y) if for every x in X there is exactly one y in Y such that f relates x to y. The sets X and Y are called domain and codomain of f, respectively. The image of f is then the subset of Y consisting of only those elements y of Y such that there is at least one x in X with f(x) = y.
Terminology
As the term "range" can have different meanings, it is considered a good practice to define it the first time it is used in a textbook or article. Older books, when they use the word "range", tend to use it to mean what is now called the codomain.[1] More modern books, if they use the word "range" at all, generally use it to mean what is now called the image.[2] To avoid any confusion, a number of modern books don't use the word "range" at all.[3]
Elaboration and example
Given a function
$f\colon X\to Y$
with domain $X$, the range of $f$, sometimes denoted $\operatorname {ran} (f)$ or $\operatorname {Range} (f)$,[4] may refer to the codomain or target set $Y$ (i.e., the set into which all of the output of $f$ is constrained to fall), or to $f(X)$, the image of the domain of $f$ under $f$ (i.e., the subset of $Y$ consisting of all actual outputs of $f$). The image of a function is always a subset of the codomain of the function.[5]
As an example of the two different usages, consider the function $f(x)=x^{2}$ as it is used in real analysis (that is, as a function that inputs a real number and outputs its square). In this case, its codomain is the set of real numbers $\mathbb {R} $, but its image is the set of non-negative real numbers $\mathbb {R} ^{+}$, since $x^{2}$ is never negative if $x$ is real. For this function, if we use "range" to mean codomain, it refers to $\mathbb {R} $; if we use "range" to mean image, it refers to $\mathbb {R} ^{+}$.
In many cases, the image and the codomain can coincide. For example, consider the function $f(x)=2x$, which inputs a real number and outputs its double. For this function, the codomain and the image are the same (both being the set of real numbers), so the word range is unambiguous.
See also
• Bijection, injection and surjection
• Essential range
Notes and references
1. Hungerford 1974, p. 3; Childs 2009, p. 140.
2. Dummit & Foote 2004, p. 2.
3. Rudin 1991, p. 99.
4. Weisstein, Eric W. "Range". mathworld.wolfram.com. Retrieved 2020-08-28.
5. Nykamp, Duane. "Range definition". Math Insight. Retrieved August 28, 2020.
Bibliography
• Childs, Lindsay N. (2009). A Concrete Introduction to Higher Algebra. Undergraduate Texts in Mathematics (3rd ed.). Springer. doi:10.1007/978-0-387-74725-5. ISBN 978-0-387-74527-5. OCLC 173498962.
• Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). Wiley. ISBN 978-0-471-43334-7. OCLC 52559229.
• Hungerford, Thomas W. (1974). Algebra. Graduate Texts in Mathematics. Vol. 73. Springer. doi:10.1007/978-1-4612-6101-8. ISBN 0-387-90518-9. OCLC 703268.
• Rudin, Walter (1991). Functional Analysis (2nd ed.). McGraw Hill. ISBN 0-07-054236-8.
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list)
& Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list),
Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic
systems
(list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
Vector measure
In mathematics, a vector measure is a function defined on a family of sets and taking vector values satisfying certain properties. It is a generalization of the concept of finite measure, which takes nonnegative real values only.
Definitions and first consequences
Given a field of sets $(\Omega ,{\mathcal {F}})$ and a Banach space $X,$ a finitely additive vector measure (or measure, for short) is a function $\mu :{\mathcal {F}}\to X$ such that for any two disjoint sets $A$ and $B$ in ${\mathcal {F}}$ one has
$\mu (A\cup B)=\mu (A)+\mu (B).$
A vector measure $\mu $ is called countably additive if for any sequence $(A_{i})_{i=1}^{\infty }$ of disjoint sets in ${\mathcal {F}}$ such that their union is in ${\mathcal {F}}$ it holds that
$\mu {\left(\bigcup _{i=1}^{\infty }A_{i}\right)}=\sum _{i=1}^{\infty }\mu (A_{i})$
with the series on the right-hand side convergent in the norm of the Banach space $X.$
It can be proved that an additive vector measure $\mu $ is countably additive if and only if for any sequence $(A_{i})_{i=1}^{\infty }$ as above one has
$\lim _{n\to \infty }\left\|\mu {\left(\bigcup _{i=n}^{\infty }A_{i}\right)}\right\|=0,$
(*)
where $\|\cdot \|$ is the norm on $X.$
Countably additive vector measures defined on sigma-algebras are more general than finite measures, finite signed measures, and complex measures, which are countably additive functions taking values respectively on the real interval $[0,\infty ),$ the set of real numbers, and the set of complex numbers.
Examples
Consider the field of sets made up of the interval $[0,1]$ together with the family ${\mathcal {F}}$ of all Lebesgue measurable sets contained in this interval. For any such set $A,$ define
$\mu (A)=\chi _{A}$
where $\chi _{A}$ is the indicator function of $A.$ Depending on where $\mu $ is declared to take values, two different outcomes are observed.
• $\mu ,$ viewed as a function from ${\mathcal {F}}$ to the $L^{p}$-space $L^{\infty }([0,1]),$ is a vector measure which is not countably-additive.
• $\mu ,$ viewed as a function from ${\mathcal {F}}$ to the $L^{p}$-space $L^{1}([0,1]),$ is a countably-additive vector measure.
Both of these statements follow quite easily from the criterion (*) stated above: the $L^{\infty }$-norm of $\chi _{A}$ is 1 whenever $A$ has positive measure, so the tail unions in (*) cannot tend to zero, while the $L^{1}$-norm of $\chi _{A}$ equals the Lebesgue measure of $A,$ which does tend to zero along the tail unions.
The variation of a vector measure
Given a vector measure $\mu :{\mathcal {F}}\to X,$ the variation $|\mu |$ of $\mu $ is defined as
$|\mu |(A)=\sup \sum _{i=1}^{n}\|\mu (A_{i})\|$
where the supremum is taken over all the partitions
$A=\bigcup _{i=1}^{n}A_{i}$
of $A$ into a finite number of disjoint sets, for all $A$ in ${\mathcal {F}}.$ Here, $\|\cdot \|$ is the norm on $X.$
The variation of $\mu $ is a finitely additive function taking values in $[0,\infty ].$ It holds that
$\|\mu (A)\|\leq |\mu |(A)$
for any $A$ in ${\mathcal {F}}.$ If $|\mu |(\Omega )$ is finite, the measure $\mu $ is said to be of bounded variation. One can prove that if $\mu $ is a vector measure of bounded variation, then $\mu $ is countably additive if and only if $|\mu |$ is countably additive.
Lyapunov's theorem
In the theory of vector measures, Lyapunov's theorem states that the range of a (non-atomic) finite-dimensional vector measure is closed and convex.[1][2][3] In fact, the range of a non-atomic vector measure is a zonoid (the closed and convex set that is the limit of a convergent sequence of zonotopes).[2] It is used in economics,[4][5][6] in ("bang–bang") control theory,[1][3][7][8] and in statistical theory.[8] Lyapunov's theorem has been proved by using the Shapley–Folkman lemma,[9] which has been viewed as a discrete analogue of Lyapunov's theorem.[8][10][11]
See also
• Bochner measurable function
• Bochner integral
• Bochner space – Mathematical concept
• Complex measure
• Signed measure – Generalized notion of measure in mathematics
• Vector-valued functions – Function valued in a vector space; typically a real or complex one
• Weakly measurable function
References
1. Kluvánek, I., Knowles, G., Vector Measures and Control Systems, North-Holland Mathematics Studies 20, Amsterdam, 1976.
2. Diestel, Joe; Uhl, Jerry J., Jr. (1977). Vector measures. Providence, R.I.: American Mathematical Society. ISBN 0-8218-1515-6.
3. Rolewicz, Stefan (1987). Functional analysis and control theory: Linear systems. Mathematics and its Applications (East European Series). Vol. 29 (Translated from the Polish by Ewa Bednarczuk ed.). Dordrecht; Warsaw: D. Reidel Publishing Co.; PWN—Polish Scientific Publishers. pp. xvi+524. ISBN 90-277-2186-6. MR 0920371. OCLC 13064804.
4. Roberts, John (July 1986). "Large economies". In David M. Kreps; John Roberts; Robert B. Wilson (eds.). Contributions to the New Palgrave (PDF). Research paper. Vol. 892. Palo Alto, CA: Graduate School of Business, Stanford University. pp. 30–35. (Draft of articles for the first edition of New Palgrave Dictionary of Economics). Retrieved 7 February 2011.
5. Aumann, Robert J. (January 1966). "Existence of competitive equilibrium in markets with a continuum of traders". Econometrica. 34 (1): 1–17. doi:10.2307/1909854. JSTOR 1909854. MR 0191623. S2CID 155044347. This paper builds on two papers by Aumann:
Aumann, Robert J. (January–April 1964). "Markets with a continuum of traders". Econometrica. 32 (1–2): 39–50. doi:10.2307/1913732. JSTOR 1913732. MR 0172689.
Aumann, Robert J. (August 1965). "Integrals of set-valued functions". Journal of Mathematical Analysis and Applications. 12 (1): 1–12. doi:10.1016/0022-247X(65)90049-1. MR 0185073.
6. Vind, Karl (May 1964). "Edgeworth-allocations in an exchange economy with many traders". International Economic Review. Vol. 5, no. 2. pp. 165–77. JSTOR 2525560. Vind's article was noted by Debreu (1991, p. 4) with this comment:
The concept of a convex set (i.e., a set containing the segment connecting any two of its points) had repeatedly been placed at the center of economic theory before 1964. It appeared in a new light with the introduction of integration theory in the study of economic competition: If one associates with every agent of an economy an arbitrary set in the commodity space and if one averages those individual sets over a collection of insignificant agents, then the resulting set is necessarily convex. [Debreu appends this footnote: "On this direct consequence of a theorem of A. A. Lyapunov, see Vind (1964)."] But explanations of the ... functions of prices ... can be made to rest on the convexity of sets derived by that averaging process. Convexity in the commodity space obtained by aggregation over a collection of insignificant agents is an insight that economic theory owes ... to integration theory. [Italics added]
Debreu, Gérard (March 1991). "The Mathematization of economic theory". The American Economic Review. 81 (1): 1–7. Presidential address delivered at the 103rd meeting of the American Economic Association, 29 December 1990, Washington, DC. JSTOR 2006785.
7. Hermes, Henry; LaSalle, Joseph P. (1969). Functional analysis and time optimal control. Mathematics in Science and Engineering. Vol. 56. New York—London: Academic Press. pp. viii+136. MR 0420366.
8. Artstein, Zvi (1980). "Discrete and continuous bang-bang and facial spaces, or: Look for the extreme points". SIAM Review. 22 (2): 172–185. doi:10.1137/1022026. JSTOR 2029960. MR 0564562.
9. Tardella, Fabio (1990). "A new proof of the Lyapunov convexity theorem". SIAM Journal on Control and Optimization. 28 (2): 478–481. doi:10.1137/0328026. MR 1040471.
10. Starr, Ross M. (2008). "Shapley–Folkman theorem". In Durlauf, Steven N.; Blume, Lawrence E. (eds.). The New Palgrave Dictionary of Economics (Second ed.). Palgrave Macmillan. pp. 317–318 (1st ed.). doi:10.1057/9780230226203.1518. ISBN 978-0-333-78676-5.
11. Page 210: Mas-Colell, Andreu (1978). "A note on the core equivalence theorem: How many blocking coalitions are there?". Journal of Mathematical Economics. 5 (3): 207–215. doi:10.1016/0304-4068(78)90010-1. MR 0514468.
Bibliography
• Cohn, Donald L. (1997) [1980]. Measure theory (reprint ed.). Boston–Basel–Stuttgart: Birkhäuser Verlag. pp. IX+373. ISBN 3-7643-3003-1. Zbl 0436.28001.
• Diestel, Joe; Uhl, Jerry J., Jr. (1977). Vector measures. Mathematical Surveys. Vol. 15. Providence, R.I.: American Mathematical Society. pp. xiii+322. ISBN 0-8218-1515-6.
• Kluvánek, I., Knowles, G, Vector Measures and Control Systems, North-Holland Mathematics Studies 20, Amsterdam, 1976.
• van Dulst, D. (2001) [1994], "Vector measures", Encyclopedia of Mathematics, EMS Press
• Rudin, W (1973). Functional analysis. New York: McGraw-Hill. p. 114. ISBN 9780070542259.
Analysis in topological vector spaces
Basic concepts
• Abstract Wiener space
• Classical Wiener space
• Bochner space
• Convex series
• Cylinder set measure
• Infinite-dimensional vector function
• Matrix calculus
• Vector calculus
Derivatives
• Differentiable vector–valued functions from Euclidean space
• Differentiation in Fréchet spaces
• Fréchet derivative
• Total
• Functional derivative
• Gateaux derivative
• Directional
• Generalizations of the derivative
• Hadamard derivative
• Holomorphic
• Quasi-derivative
Measurability
• Besov measure
• Cylinder set measure
• Canonical Gaussian
• Classical Wiener measure
• Measure like set functions
• infinite-dimensional Gaussian measure
• Projection-valued
• Vector
• Bochner / Weakly / Strongly measurable function
• Radonifying function
Integrals
• Bochner
• Direct integral
• Dunford
• Gelfand–Pettis/Weak
• Regulated
• Paley–Wiener
Results
• Cameron–Martin theorem
• Inverse function theorem
• Nash–Moser theorem
• Feldman–Hájek theorem
• No infinite-dimensional Lebesgue measure
• Sazonov's theorem
• Structure theorem for Gaussian measures
Related
• Crinkled arc
• Covariance operator
Functional calculus
• Borel functional calculus
• Continuous functional calculus
• Holomorphic functional calculus
Applications
• Banach manifold (bundle)
• Convenient vector space
• Choquet theory
• Fréchet manifold
• Hilbert manifold
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
Measure theory
Basic concepts
• Absolute continuity of measures
• Lebesgue integration
• Lp spaces
• Measure
• Measure space
• Probability space
• Measurable space/function
Sets
• Almost everywhere
• Atom
• Baire set
• Borel set
• equivalence relation
• Borel space
• Carathéodory's criterion
• Cylindrical σ-algebra
• Cylinder set
• 𝜆-system
• Essential range
• infimum/supremum
• Locally measurable
• π-system
• σ-algebra
• Non-measurable set
• Vitali set
• Null set
• Support
• Transverse measure
• Universally measurable
Types of Measures
• Atomic
• Baire
• Banach
• Besov
• Borel
• Brown
• Complex
• Complete
• Content
• (Logarithmically) Convex
• Decomposable
• Discrete
• Equivalent
• Finite
• Inner
• (Quasi-) Invariant
• Locally finite
• Maximising
• Metric outer
• Outer
• Perfect
• Pre-measure
• (Sub-) Probability
• Projection-valued
• Radon
• Random
• Regular
• Borel regular
• Inner regular
• Outer regular
• Saturated
• Set function
• σ-finite
• s-finite
• Signed
• Singular
• Spectral
• Strictly positive
• Tight
• Vector
Particular measures
• Counting
• Dirac
• Euler
• Gaussian
• Haar
• Harmonic
• Hausdorff
• Intensity
• Lebesgue
• Infinite-dimensional
• Logarithmic
• Product
• Projections
• Pushforward
• Spherical measure
• Tangent
• Trivial
• Young
Maps
• Measurable function
• Bochner
• Strongly
• Weakly
• Convergence: almost everywhere
• of measures
• in measure
• of random variables
• in distribution
• in probability
• Cylinder set measure
• Random: compact set
• element
• measure
• process
• variable
• vector
• Projection-valued measure
Main results
• Carathéodory's extension theorem
• Convergence theorems
• Dominated
• Monotone
• Vitali
• Decomposition theorems
• Hahn
• Jordan
• Maharam's
• Egorov's
• Fatou's lemma
• Fubini's
• Fubini–Tonelli
• Hölder's inequality
• Minkowski inequality
• Radon–Nikodym
• Riesz–Markov–Kakutani representation theorem
Other results
• Disintegration theorem
• Lifting theory
• Lebesgue's density theorem
• Lebesgue differentiation theorem
• Sard's theorem
For Lebesgue measure
• Isoperimetric inequality
• Brunn–Minkowski theorem
• Milman's reverse
• Minkowski–Steiner formula
• Prékopa–Leindler inequality
• Vitale's random Brunn–Minkowski inequality
Applications & related
• Convex analysis
• Descriptive set theory
• Probability theory
• Real analysis
• Spectral theory
Bucket sort
Bucket sort, or bin sort, is a sorting algorithm that works by distributing the elements of an array into a number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm. It is a distribution sort, a generalization of pigeonhole sort that allows multiple keys per bucket, and is a cousin of radix sort in the most-to-least significant digit flavor. Bucket sort can be implemented with comparisons and therefore can also be considered a comparison sort algorithm. The computational complexity depends on the algorithm used to sort each bucket, the number of buckets to use, and whether the input is uniformly distributed.
Bucket sort
Class: Sorting algorithm
Data structure: Array
Worst-case performance: $O\left(n^{2}\right)$
Average performance: $O\left(n+{\frac {n^{2}}{k}}+k\right)$, where k is the number of buckets; $O(n)$ when $k\approx n$
Worst-case space complexity: $O(n+k)$
Bucket sort works as follows:
1. Set up an array of initially empty "buckets".
2. Scatter: Go over the original array, putting each object in its bucket.
3. Sort each non-empty bucket.
4. Gather: Visit the buckets in order and put all elements back into the original array.
Pseudocode
function bucketSort(array, k) is
    buckets ← new array of k empty lists
    M ← 1 + the maximum key value in the array
    for i = 0 to length(array) - 1 do
        insert array[i] into buckets[floor(k × array[i] / M)]
    for i = 0 to k - 1 do
        nextSort(buckets[i])
    return the concatenation of buckets[0], ..., buckets[k - 1]
Let array denote the array to be sorted and k denote the number of buckets to use. The maximum key value can be computed in linear time by iterating over all the keys once. The floor function must be used to convert a floating-point number to an integer (possibly together with a datatype cast). The function nextSort is a sorting function used to sort each bucket. Conventionally, insertion sort is used, but other algorithms, such as selection sort or merge sort, could be used as well. Using bucketSort itself as nextSort produces a relative of radix sort; in particular, the case k = 2 corresponds to quicksort (although potentially with poor pivot choices).
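The pseudocode translates directly into a short runnable version. The sketch below (illustrative Python, assuming numeric non-negative keys; Python's built-in list sort stands in for nextSort) follows the same scatter, sort, and gather steps:

def bucket_sort(array, k):
    if not array:
        return []
    m = 1 + max(array)                      # M in the pseudocode above
    buckets = [[] for _ in range(k)]
    for x in array:                         # scatter
        buckets[int(k * x / m)].append(x)
    for b in buckets:                       # sort each non-empty bucket
        b.sort()
    return [x for b in buckets for x in b]  # gather

print(bucket_sort([29, 25, 3, 49, 9, 37, 21, 43], 4))  # [3, 9, 21, 25, 29, 37, 43, 49]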
Analysis
Worst-case analysis
When the input contains several keys that are close to each other (clustering), those elements are likely to be placed in the same bucket, which results in some buckets containing more elements than average. The worst-case scenario occurs when all the elements are placed in a single bucket. The overall performance would then be dominated by the algorithm used to sort each bucket, for example $O(n^{2})$ insertion sort or $O(n\log(n))$ comparison sort algorithms, such as merge sort.
Average-case analysis
Consider the case that the input is uniformly distributed. The first step, which initializes the buckets and finds the maximum key value in the array, can be done in $O(n)$ time. If division and multiplication can be done in constant time, then scattering each element to its bucket also costs $O(n)$. Assume insertion sort is used to sort each bucket; then the third step costs $O(\textstyle \sum _{i=1}^{k}\displaystyle n_{i}^{2})$, where $n_{i}$ is the length of the bucket indexed $i$. Since we are concerned with the average time, the expectation $E(n_{i}^{2})$ has to be evaluated instead. Let $X_{ij}$ be the random variable that is $1$ if element $j$ is placed in bucket $i$, and $0$ otherwise. We have $n_{i}=\sum _{j=1}^{n}X_{ij}$. Therefore,
${\begin{aligned}E(n_{i}^{2})&=E\left(\sum _{j=1}^{n}X_{ij}\sum _{l=1}^{n}X_{il}\right)\\&=E\left(\sum _{j=1}^{n}\sum _{l=1}^{n}X_{ij}X_{il}\right)\\&=E\left(\sum _{j=1}^{n}X_{ij}^{2}\right)+E\left(\sum _{1\leq j\neq l\leq n}X_{ij}X_{il}\right)\end{aligned}}$
The last line separates the summation into the case $j=l$ and the case $j\neq l$. Since the probability that an object is distributed to bucket $i$ is $1/k$, $X_{ij}$ is 1 with probability $1/k$ and 0 otherwise.
$E(X_{ij}^{2})=1^{2}\cdot \left({\frac {1}{k}}\right)+0^{2}\cdot \left(1-{\frac {1}{k}}\right)={\frac {1}{k}}$
$E(X_{ij}X_{il})=1\cdot \left({\frac {1}{k}}\right)\left({\frac {1}{k}}\right)={\frac {1}{k^{2}}}\qquad (j\neq l)$
Substituting these into the summation above gives
$E\left(\sum _{j=1}^{n}X_{ij}^{2}\right)+E\left(\sum _{1\leq j\neq l\leq n}X_{ij}X_{il}\right)=n\cdot {\frac {1}{k}}+n(n-1)\cdot {\frac {1}{k^{2}}}={\frac {n^{2}+nk-n}{k^{2}}}$
Finally, the complexity would be $O\left(\sum _{i=1}^{k}E(n_{i}^{2})\right)=O\left(\sum _{i=1}^{k}{\frac {n^{2}+nk-n}{k^{2}}}\right)=O\left({\frac {n^{2}}{k}}+n\right)$.
The last step of bucket sort, which concatenates the sorted objects in each bucket, requires $O(k)$ time. Therefore, the total complexity is $O\left(n+{\frac {n^{2}}{k}}+k\right)$. Note that if k is chosen to be $k=\Theta (n)$, then bucket sort runs in $O(n)$ average time, given a uniformly distributed input.[1]
Optimizations
A common optimization is to put the unsorted elements of the buckets back in the original array first, then run insertion sort over the complete array; because insertion sort's runtime is based on how far each element is from its final position, the number of comparisons remains relatively small, and the memory hierarchy is better exploited by storing the list contiguously in memory.[2]
If the input distribution is known or can be estimated, buckets can often be chosen which contain constant density (rather than merely having constant size). This allows $O(n)$ average time complexity even without uniformly distributed input.
Variants
Generic bucket sort
The most common variant of bucket sort operates on a list of n numeric inputs between zero and some maximum value M and divides the value range into n buckets each of size M/n. If each bucket is sorted using insertion sort, the sort can be shown to run in expected linear time (where the average is taken over all possible inputs).[3] However, the performance of this sort degrades with clustering; if many values occur close together, they will all fall into a single bucket and be sorted slowly. This performance degradation is avoided in the original bucket sort algorithm by assuming that the input is generated by a random process that distributes elements uniformly over the interval [0,1).[1]
ProxmapSort
Similar to generic bucket sort as described above, ProxmapSort works by dividing an array of keys into subarrays via the use of a "map key" function that preserves a partial ordering on the keys; as each key is added to its subarray, insertion sort is used to keep that subarray sorted, resulting in the entire array being in sorted order when ProxmapSort completes. ProxmapSort differs from bucket sorts in its use of the map key to place the data approximately where it belongs in sorted order, producing a "proxmap" — a proximity mapping — of the keys.
Histogram sort
Another variant of bucket sort known as histogram sort or counting sort adds an initial pass that counts the number of elements that will fall into each bucket using a count array.[4] Using this information, the array values can be arranged into a sequence of buckets in-place by a sequence of exchanges, leaving no space overhead for bucket storage.
Postman's sort
The Postman's sort is a variant of bucket sort that takes advantage of a hierarchical structure of elements, typically described by a set of attributes. This is the algorithm used by letter-sorting machines in post offices: mail is sorted first between domestic and international; then by state, province or territory; then by destination post office; then by routes, etc. Since keys are not compared against each other, sorting time is O(cn), where c depends on the size of the key and number of buckets. This is similar to a radix sort that works "top down," or "most significant digit first."[5]
Shuffle sort
The shuffle sort[6] is a variant of bucket sort that begins by removing the first 1/8 of the n items to be sorted, sorts them recursively, and puts them in an array. This creates n/8 "buckets" to which the remaining 7/8 of the items are distributed. Each "bucket" is then sorted, and the "buckets" are concatenated into a sorted array.
Comparison with other sorting algorithms
Bucket sort can be seen as a generalization of counting sort; in fact, if each bucket has size 1 then bucket sort degenerates to counting sort. The variable bucket size of bucket sort allows it to use O(n) memory instead of O(M) memory, where M is the number of distinct values; in exchange, it gives up counting sort's O(n + M) worst-case behavior.
Bucket sort with two buckets is effectively a version of quicksort where the pivot value is always selected to be the middle value of the value range. While this choice is effective for uniformly distributed inputs, other means of choosing the pivot in quicksort such as randomly selected pivots make it more resistant to clustering in the input distribution.
The n-way mergesort algorithm also begins by distributing the list into n sublists and sorting each one; however, the sublists created by mergesort have overlapping value ranges and so cannot be recombined by simple concatenation as in bucket sort. Instead, they must be interleaved by a merge algorithm. However, this added expense is counterbalanced by the simpler scatter phase and the ability to ensure that each sublist is the same size, providing a good worst-case time bound.
Top-down radix sort can be seen as a special case of bucket sort where both the range of values and the number of buckets is constrained to be a power of two. Consequently, each bucket's size is also a power of two, and the procedure can be applied recursively. This approach can accelerate the scatter phase, since we only need to examine a prefix of the bit representation of each element to determine its bucket.
See also
• Bucket evaluations, a similarly named correlation method
References
1. Thomas H. Cormen; Charles E. Leiserson; Ronald L. Rivest & Clifford Stein. Introduction to Algorithms. Bucket sort runs in linear time on the average. Like counting sort, bucket sort is fast because it assumes something about the input. Whereas counting sort assumes that the input consists of integers in a small range, bucket sort assumes that the input is generated by a random process that distributes elements uniformly over the interval [0,1). The idea of bucket sort is to divide the interval [0, 1) into n equal-sized subintervals, or buckets, and then distribute the n input numbers into the buckets. Since the inputs are uniformly distributed over [0, 1), we don't expect many numbers to fall into each bucket. To produce the output, we simply sort the numbers in each bucket and then go through the buckets in order, listing the elements in each.
2. Corwin, E. and Logar, A. "Sorting in linear time — variations on the bucket sort". Journal of Computing Sciences in Colleges, 20, 1, pp.197–202. October 2004.
3. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 8.4: Bucket sort, pp.174–177.
4. NIST's Dictionary of Algorithms and Data Structures: histogram sort
5. "Robert Ramey Software Development".
6. A revolutionary new sort from John Cohen Nov 26, 1997
• Paul E. Black "Postman's Sort" from Dictionary of Algorithms and Data Structures at NIST.
• Robert Ramey '"The Postman's Sort" C Users Journal Aug. 1992
• NIST's Dictionary of Algorithms and Data Structures: bucket sort
External links
• Bucket Sort Code for Ansi C
• Variant of Bucket Sort with Demo
Sorting algorithms
Theory
• Computational complexity theory
• Big O notation
• Total order
• Lists
• Inplacement
• Stability
• Comparison sort
• Adaptive sort
• Sorting network
• Integer sorting
• X + Y sorting
• Transdichotomous model
• Quantum sort
Exchange sorts
• Bubble sort
• Cocktail shaker sort
• Odd–even sort
• Comb sort
• Gnome sort
• Proportion extend sort
• Quicksort
Selection sorts
• Selection sort
• Heapsort
• Smoothsort
• Cartesian tree sort
• Tournament sort
• Cycle sort
• Weak-heap sort
Insertion sorts
• Insertion sort
• Shellsort
• Splaysort
• Tree sort
• Library sort
• Patience sorting
Merge sorts
• Merge sort
• Cascade merge sort
• Oscillating merge sort
• Polyphase merge sort
Distribution sorts
• American flag sort
• Bead sort
• Bucket sort
• Burstsort
• Counting sort
• Interpolation sort
• Pigeonhole sort
• Proxmap sort
• Radix sort
• Flashsort
Concurrent sorts
• Bitonic sorter
• Batcher odd–even mergesort
• Pairwise sorting network
• Samplesort
Hybrid sorts
• Block merge sort
• Kirkpatrick–Reisch sort
• Timsort
• Introsort
• Spreadsort
• Merge-insertion sort
Other
• Topological sorting
• Pre-topological order
• Pancake sorting
• Spaghetti sort
Impractical sorts
• Stooge sort
• Slowsort
• Bogosort
Rangle
In falconry, rangle is a term used for small stones which are fed to hawks to aid in digestion.[1] These stones, which are generally slightly larger than peas, are used less often now than they were historically.[2]
See also
• Gastrolith
References
1. Woodford, Michael (1960). A manual of falconry. CT Branford Co. p. 171.
2. Ford, Emma (1982). Falconry in mews and field. BT Batsford.
Andrew Ranicki
Andrew Alexander Ranicki (born Andrzej Aleksander Ranicki; 30 December 1948 – 21 February 2018) was a British mathematician who worked on algebraic topology. He was a professor of mathematics at the University of Edinburgh.
Andrew Ranicki
Ranicki in 2006
Born: 30 December 1948, London, England
Died: 21 February 2018 (aged 69), Edinburgh, Scotland
Alma mater: Trinity College, Cambridge
Spouse: Ida Thompson
Parents: Marcel Reich-Ranicki and Teofila Reich-Ranicki
Scientific career
Fields: Mathematics
Institutions: Trinity College, Cambridge; Princeton University; University of Edinburgh
Doctoral advisors: Andrew Casson, John Frank Adams
Website: https://www.maths.ed.ac.uk/~v1ranick/
Life
Ranicki was the only child of the well-known literary critic Marcel Reich-Ranicki[1] and the artist Teofila Reich-Ranicki; the family spoke Polish at home. Born in London, he lived in Warsaw, Frankfurt am Main, and Hamburg, and from the age of sixteen attended school in England at the King's School, Canterbury.[2][3][4]
Ranicki studied Mathematics at Trinity College, Cambridge, and graduated with a BA in 1969.[4] At Cambridge, he was a student of topologists Andrew Casson and John Frank Adams.[5] He earned his doctoral degree in 1973 with a thesis on algebraic L-theory. Ranicki received numerous awards and honors for his scientific achievements during his studies. From 1972 to 1977, he was a Fellow of Trinity College.[6]
From 1977 to 1982, he was assistant professor at Princeton University. In 1982, he joined the University of Edinburgh as a lecturer; in 1987, he was promoted to reader. In 1992, he became a Fellow of the Royal Society of Edinburgh.[7] From 1995, Ranicki held the Chair of Algebraic Surgery at the University of Edinburgh.[8] Several times he stayed as a visiting scientist at the Max Planck Institute for Mathematics in Bonn, most recently in 2011.[9]
Personal life, death, and legacy
Ranicki married the American paleontologist Ida Thompson in 1979; they have a daughter. Ranicki suffered from leukemia; he died peacefully in the presence of his wife.[10]
A conference celebrating his legacy was held at the International Centre for Mathematical Sciences (Edinburgh) in summer 2020.[11]
Published works
• Exact sequences in the algebraic theory of surgery, Princeton University Press, 1981.
• Lower K and L Theory, London Mathematical Society Lecture Notes, Vol. 178, Cambridge University Press. 1992.
• Algebraic L-Theory and Topological Manifolds, Cambridge Tracts in Mathematics Vol. 102, Cambridge University Press, 1992.
• Algebraic and Geometric Surgery, Oxford University Press, 2002.
• High dimensional knot theory, Springer, 1998.
• with Bruce Hughes: Ends of Complexes, Cambridge Tracts in Mathematics Vol. 123, Cambridge University Press, 1996.
• with Norman Levitt and Frank Quinn: "Algebraic and geometric topology" (Rutgers University conference, New Brunswick, 1983), Springer 1985, Lecture Notes in Mathematics Vol. 1126.
• Editor with David Lewis and Eva Bayer-Fluckiger: "Quadratic forms and their applications" (Conference Dublin 1999), Contemporary Mathematics Vol. 272, American Mathematical Society, 2000.
• Editor: Noncommutative Localization in Algebra and Topology, London Mathematical Society Lecture Notes Vol. 330, Cambridge University Press, 2006.
• Editor with Steven Ferry and Jonathan Rosenberg: "The Novikov conjectures, index theorems and rigidity" (Oberwolfach, 1993), London Mathematical Society Lecture Notes, Vol. 226, 227, Cambridge University Press, 1995.
• Editor: The Hauptvermutung Book, Kluwer, 1996.
• Editor with Sylvain Cappell and Jonathan Rosenberg: Surveys on surgery theory. Papers dedicated to C. T. C. Wall.
References
1. "Marcel Reich-Ranicki: Widely admired literary critic". Independent.co.uk. 22 September 2013. Archived from the original on 7 May 2022.
2. Emilia Smechowski: „Er hatte die Wucht eines Niagara-Falls“, Interview in TAZ, 13. September 2014, S. 32 f.
3. Volker Hage, Martin Doerry: Spiegel-Gespräch: „Ich habe nie gefragt“, Der Spiegel, May 26, 2014
4. 'Cambridge Tripos: English; Medical Sciences; Mathematics', Times, 20 June 1969.
5. Andrew Ranicki at the Mathematics Genealogy Project
6. Curriculum Vitae, Andrew Ranicki
7. Directory of Fellows Archived 2016-10-20 at the Wayback Machine, page 33.
8. Chair of Algebraic Surgery, University of Edinburgh, Communications and Marketing.
9. Johannes Seiler: "Mathematics is a drug!" A conversation with Andrew Ranicki, Bonner General-Anzeiger from January 8–9, 2011 on the website of the Max Planck Institute for Mathematics Bonn
10. Thomas Anz: Obituary of Andrew Ranicki, literaturkritik.de, February 22, 2018.
11. "Manifolds and K-theory: the legacy of Andrew Ranicki".
Authority control
International
• ISNI
• VIAF
National
• Norway
• France
• 2
• BnF data
• 2
• Catalonia
• Germany
• Israel
• Belgium
• United States
• Czech Republic
• Netherlands
Academics
• CiNii
• MathSciNet
• Mathematics Genealogy Project
• ORCID
• zbMATH
People
• Deutsche Biographie
Other
• IdRef
Rank-maximal allocation
Rank-maximal (RM) allocation is a rule for fair division of indivisible items. Suppose we have to allocate some items among people. Each person can rank the items from best to worst. The RM rule says that we have to give as many people as possible their best (#1) item. Subject to that, we have to give as many people as possible their next-best (#2) item, and so on.
In the special case in which each person should receive a single item (for example, when the "items" are tasks and each task has to be done by a single person), the problem is called rank-maximal matching or greedy matching.
The idea is similar to that of utilitarian cake-cutting, where the goal is to maximize the sum of utilities of all participants. However, the utilitarian rule works with cardinal (numeric) utility functions, while the RM rule works with ordinal utilities (rankings).
Definition
There are several items and several agents. Each agent has a total order on the items. Agents can be indifferent between some items; for each agent, we can partition the items into equivalence classes that contain items of the same rank. For example, if Alice's preference-relation is x > y,z > w, it means that Alice's 1st choice is x, which is better for her than all other items; Alice's 2nd choice is y and z, which are equally good in her eyes but not as good as x; and Alice's 3rd choice is w, which she considers worse than all other items.
For every allocation of items to the agents, we construct its rank-vector as follows. Element #1 in the vector is the total number of items that are 1st-choice for their owners; Element #2 is the total number of items that are 2nd-choice for their owners; and so on.
A rank-maximal allocation is one in which the rank-vector is maximum, in lexicographic order.
Example
Three items, x y and z, have to be divided among three agents whose rankings are:
• Alice: x > y > z
• Bob: x > y > z
• Carl: y > x > z
In the allocation (x, y, z), Alice gets her 1st choice (x), Bob gets his 2nd choice (y), and Carl gets his 3rd choice (z). The rank-vector is thus (1,1,1).
In the allocation (x,z,y), both Alice and Carl get their 1st choice and Bob gets his 3rd choice. The rank-vector is thus (2,0,1), which is lexicographically higher than (1,1,1) – it gives more people their 1st choice.
It is easy to check that no allocation produces a lexicographically higher rank-vector. Hence, the allocation (x,z,y) is rank-maximal. Similarly, the allocation (z,x,y) is rank-maximal – it produces the same rank-vector (2,0,1).
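For instances this small, a rank-maximal allocation can be found by brute force. The sketch below (illustrative Python, not an algorithm from the literature) enumerates all assignments of the three items and keeps one whose rank-vector is lexicographically largest:

from itertools import permutations

items = "xyz"
prefs = {"Alice": "xyz", "Bob": "xyz", "Carl": "yxz"}  # best-to-worst rankings
agents = list(prefs)

def rank_vector(assignment):
    # entry r-1 counts the agents who receive their r-th choice
    vec = [0] * len(items)
    for agent, item in zip(agents, assignment):
        vec[prefs[agent].index(item)] += 1
    return vec

best = max(permutations(items), key=rank_vector)  # lists compare lexicographically
print(best, rank_vector(best))  # ('x', 'z', 'y') with rank-vector [2, 0, 1]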
Algorithms
RM matchings were first studied by Robert Irving, who called them greedy matchings. He presented an algorithm that finds an RM matching in time $O(n^{2}c^{3})$, where n is the number of agents and c is the largest length of a preference-list of an agent.[1]
Later, an improved algorithm was found, which runs in time $O(m\cdot \min(n,C{\sqrt {n}}))$, where m is the total length of all preference-lists (total number of edges in the graph), and C is the maximal rank of an item used in an RM matching (i.e., the maximal number of non-zero elements in an optimal rank vector).[2] The algorithm reduces the problem to maximum-cardinality matching. Intuitively, we would like to first find a maximum-cardinality matching using only edges of rank 1; then, extend this matching to a maximum-cardinality matching using only edges of ranks 1 and 2; then, extend this matching to a maximum-cardinality matching using only edges of ranks 1, 2 and 3; and so on. The problem is that, if we pick the "wrong" maximum-cardinality matching for rank 1, then we might miss the optimal matching for rank 2. The algorithm of [2] solves this problem using the Dulmage–Mendelsohn decomposition, which is a decomposition that uses a maximum-cardinality matching, but does not depend on which matching is chosen (the decomposition is the same for every maximum-cardinality matching chosen). It works in the following way.
1. Let G1 be the sub-graph of G containing only edges of rank 1 (the highest rank).
2. Find a maximum-cardinality matching in G1, and use it to find the decomposition of G1 into E1, O1 and U1.
3. One property of the decomposition is that every maximum-cardinality matching in G1 saturates all vertices in O1 and U1. Therefore, in a rank-maximal matching, all vertices in O1 and U1 are adjacent to an edge of rank 1. So we can remove from the graph all edges with rank 2 or higher adjacent to any of these vertices.
4. Another property of the decomposition is that any maximum-cardinality matching in G1 contains only E1-O1 and U1-U1 edges. Therefore, we can remove all other edges (O1-O1 and O1-U1 edges) from the graph.
5. Add to G1 all the edges with the next-highest rank. If there are no such edges, stop. Else, go back to step 2.
A different solution, using maximum-weight matchings, attains a similar run-time: $O(m\cdot \min(n+C,C{\sqrt {n}}))$.[3]
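One simple form of such a reduction gives an edge of rank r the weight $(n+1)^{C-r}$: since a matching contains at most n edges of any rank, a matching with a lexicographically larger rank-vector then always has strictly larger total weight. The sketch below (illustrative Python, assuming the networkx library; these naive exponential weights are for illustration, not the refined weights of [3]) applies this to the example above:

import networkx as nx

prefs = {"Alice": "xyz", "Bob": "xyz", "Carl": "yxz"}
n, C = len(prefs), 3   # number of agents, worst rank in use

G = nx.Graph()
for agent, ranking in prefs.items():
    for r, item in enumerate(ranking, start=1):
        # one rank-r edge outweighs any n edges of higher (worse) rank
        G.add_edge(agent, item, weight=(n + 1) ** (C - r))

matching = nx.max_weight_matching(G)
print(sorted(tuple(sorted(e)) for e in matching))
# [('Alice', 'x'), ('Bob', 'z'), ('Carl', 'y')] -- rank-vector (2,0,1)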
Variants
The problem has several variants.
1. In maximum-cardinality RM matching, the goal is to find, among all different RM matchings, the one with the maximum number of matchings.
2. In fair matching, the goal is to find a maximum-cardinality matching that uses the minimum possible number of edges of rank r; subject to that, the minimum possible number of edges of rank r−1; and so on.
Both maximum-cardinality RM matching and fair matching can be found by reduction to maximum-weight matching.[3]
3. In the capacitated RM matching problem, each agent has an upper capacity denoting an upper bound on the total number of items he should get. Each item has an upper quota denoting an upper bound on the number of different agents it can be allocated to. It was first studied by Mehlhorn and Michail, who gave an algorithm with run-time $O(Cnm\log(n^{2}/m)\log(n))$.[4] There is an improved algorithm with run-time $O(m\cdot \min(B,C{\sqrt {B}}))$, where B is the minimum of the sum-of-quotas of the agents and the sum-of-quotas of the items. It is based on an extension of the Gallai–Edmonds decomposition to multi-edge matchings.[5]
See also
• Fair item assignment
• Stable matching
• Envy-free matching
• Priority matching
References
1. Irving, Robert W. (2003). Greedy matchings. Technical Report TR-2003-136. University of Glasgow. CiteSeerX 10.1.1.119.1747.
2. Irving, Robert W.; Kavitha, Telikepalli; Mehlhorn, Kurt; Michail, Dimitrios; Paluch, Katarzyna E. (1 October 2006). "Rank-maximal Matchings". ACM Trans. Algorithms. 2 (4): 602–610. doi:10.1145/1198513.1198520. ISSN 1549-6325.
3. Michail, Dimitrios (10 December 2007). "Reducing rank-maximal to maximum weight matching". Theoretical Computer Science. 389 (1): 125–132. doi:10.1016/j.tcs.2007.08.004. ISSN 0304-3975.
4. Kurt Mehlhorn and Dimitrios Michail (2005). "Network Problems with Non-Polynomial Weights and Applications".
5. Paluch, Katarzyna (22 May 2013). Capacitated Rank-Maximal Matchings. pp. 324–335. doi:10.1007/978-3-642-38233-8_27. ISBN 978-3-642-38232-1.
RRQR factorization
An RRQR factorization or rank-revealing QR factorization is a matrix decomposition algorithm based on the QR factorization which can be used to determine the rank of a matrix.[1] The singular value decomposition can be used to generate an RRQR, but it is not an efficient method to do so.[2] An RRQR implementation is available in MATLAB.[3]
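The simplest rank-revealing variant, column-pivoted QR, is enough to illustrate the idea (the strong RRQR of Gu and Eisenstat adds further column exchanges with guaranteed bounds). The sketch below (illustrative Python, assuming NumPy and SciPy) builds a matrix of known rank and reads the numerical rank off the diagonal of R:

import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5)) @ rng.standard_normal((5, 8))  # rank 5

Q, R, piv = qr(A, pivoting=True)  # A[:, piv] == Q @ R, |diag(R)| non-increasing
tol = abs(R[0, 0]) * max(A.shape) * np.finfo(float).eps
print(int((abs(np.diag(R)) > tol).sum()))  # estimated rank: 5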
References
1. Gu, Ming; Stanley C. Eisenstat (July 1996). "Efficient algorithms for computing a strong rank-revealing QR factorization" (PDF). SIAM Journal on Scientific Computing. 17 (4): 848–869. doi:10.1137/0917055. Retrieved 22 September 2014.
2. Hong, Y.P.; C.-T. Pan (January 1992). "Rank-Revealing QR Factorizations and the Singular Value Decomposition". Mathematics of Computation. 58 (197): 213–232. doi:10.2307/2153029. JSTOR 2153029.
3. "RRQR Factorization" (PDF). 29 March 2007. Retrieved 2 April 2011.
Numerical linear algebra
Key concepts
• Floating point
• Numerical stability
Problems
• System of linear equations
• Matrix decompositions
• Matrix multiplication (algorithms)
• Matrix splitting
• Sparse problems
Hardware
• CPU cache
• TLB
• Cache-oblivious algorithm
• SIMD
• Multiprocessing
Software
• MATLAB
• Basic Linear Algebra Subprograms (BLAS)
• LAPACK
• Specialized libraries
• General purpose software
Rank (differential topology)
In mathematics, the rank of a differentiable map $f:M\to N$ between differentiable manifolds at a point $p\in M$ is the rank of the derivative of $f$ at $p$. Recall that the derivative of $f$ at $p$ is a linear map
$d_{p}f:T_{p}M\to T_{f(p)}N\,$
from the tangent space at p to the tangent space at f(p). As a linear map between vector spaces it has a well-defined rank, which is just the dimension of the image in $T_{f(p)}N$:
$\operatorname {rank} (f)_{p}=\dim(\operatorname {im} (d_{p}f)).$
Constant rank maps
A differentiable map f : M → N is said to have constant rank if the rank of f is the same for all p in M. Constant rank maps have a number of nice properties and are an important concept in differential topology.
Three special cases of constant rank maps occur. A constant rank map f : M → N is
• an immersion if rank f = dim M (i.e. the derivative is everywhere injective),
• a submersion if rank f = dim N (i.e. the derivative is everywhere surjective),
• a local diffeomorphism if rank f = dim M = dim N (i.e. the derivative is everywhere bijective).
The map f itself need not be injective, surjective, or bijective for these conditions to hold, only the behavior of the derivative is important. For example, there are injective maps which are not immersions and immersions which are not injections. However, if f : M → N is a smooth map of constant rank then
• if f is injective it is an immersion,
• if f is surjective it is a submersion,
• if f is bijective it is a diffeomorphism.
Constant rank maps have a nice description in terms of local coordinates. Suppose M and N are smooth manifolds of dimensions m and n respectively, and f : M → N is a smooth map with constant rank k. Then for all p in M there exist coordinates (x1, ..., xm) centered at p and coordinates (y1, ..., yn) centered at f(p) such that f is given by
$f(x^{1},\ldots ,x^{m})=(x^{1},\ldots ,x^{k},0,\ldots ,0)\,$
in these coordinates.
Examples
Maps whose rank is generically maximal, but drops at certain singular points, occur frequently in coordinate systems. For example, in spherical coordinates, the rank of the map from the two angles to a point on the sphere (formally, a map T2 → S2 from the torus to the sphere) is 2 at regular points, but is only 1 at the north and south poles (zenith and nadir).
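This rank drop can be checked symbolically. The sketch below (illustrative Python, assuming the SymPy library) forms the Jacobian of the usual angles-to-sphere parametrization and evaluates its rank at a regular point and at the north pole:

import sympy as sp

theta, phi = sp.symbols("theta phi", real=True)
f = sp.Matrix([sp.sin(theta) * sp.cos(phi),
               sp.sin(theta) * sp.sin(phi),
               sp.cos(theta)])            # two angles -> point on the unit sphere
J = f.jacobian([theta, phi])

print(J.subs({theta: sp.pi / 2, phi: 0}).rank())  # 2 at a regular point
print(J.subs({theta: 0, phi: 0}).rank())          # 1 at the north pole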
A subtler example occurs in charts on SO(3), the rotation group. This group occurs widely in engineering, due to 3-dimensional rotations being heavily used in navigation, nautical engineering, and aerospace engineering, among many other uses. Topologically, SO(3) is the real projective space RP3, and it is often desirable to represent rotations by a set of three numbers, known as Euler angles (in numerous variants), both because this is conceptually simple, and because one can build a combination of three gimbals to produce rotations in three dimensions. Topologically this corresponds to a map from the 3-torus T3 of three angles to the real projective space RP3 of rotations, but this map does not have rank 3 at all points (formally because it cannot be a covering map, as the only (non-trivial) covering space is the hypersphere S3), and the phenomenon of the rank dropping to 2 at certain points is referred to in engineering as gimbal lock.
References
• Lee, John (2003). Introduction to Smooth Manifolds. Graduate Texts in Mathematics 218. New York: Springer. ISBN 978-0-387-95495-0.
Rank (graph theory)
In graph theory, a branch of mathematics, the rank of an undirected graph has two unrelated definitions. Let n equal the number of vertices of the graph.
• In the matrix theory of graphs the rank r of an undirected graph is defined as the rank of its adjacency matrix.
Analogously, the nullity of the graph is the nullity of its adjacency matrix, which equals n − r.
• In the matroid theory of graphs the rank of an undirected graph is defined as the number n − c, where c is the number of connected components of the graph.[1] Equivalently, the rank of a graph is the rank of the oriented incidence matrix associated with the graph.[2]
Analogously, the nullity of the graph is the nullity of its oriented incidence matrix, given by the formula m − n + c, where n and c are as above and m is the number of edges in the graph. The nullity is equal to the first Betti number of the graph. The sum of the rank and the nullity is the number of edges.
Relevant topics on
Graph connectivity
• Connectivity
• Algebraic connectivity
• Cycle rank
• Rank (graph theory)
• SPQR tree
• St-connectivity
• K-connectivity certificate
• Pixel connectivity
• Vertex separator
• Strongly connected component
• Biconnected graph
• Bridge
Examples
A sample graph with four vertices and four edges e1–e4, in which vertex 1 is joined to each of the vertices 2, 3 and 4, and vertices 3 and 4 are also joined, has the adjacency matrix
${\begin{pmatrix}0&1&1&1\\1&0&0&0\\1&0&0&1\\1&0&1&0\\\end{pmatrix}}.$
In this example, the matrix theory rank of the matrix is 4, because its column vectors are linearly independent.
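Both notions of rank can be computed directly for this graph. The sketch below (illustrative Python, assuming NumPy and NetworkX) reproduces the matrix-theory rank of 4 and the matroid-theory rank n − c = 4 − 1 = 3, with nullity m − n + c = 1:

import numpy as np
import networkx as nx

G = nx.Graph([(1, 2), (1, 3), (1, 4), (3, 4)])   # the four edges e1-e4

A = nx.to_numpy_array(G, nodelist=[1, 2, 3, 4])  # the adjacency matrix above
print(np.linalg.matrix_rank(A))                  # matrix-theory rank: 4

n, m = G.number_of_nodes(), G.number_of_edges()
c = nx.number_connected_components(G)
print(n - c, m - n + c)                          # matroid rank 3, nullity 1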
See also
• Circuit rank
• Cycle rank
• Nullity (graph theory)
Notes
1. Weisstein, Eric W. "Graph Rank." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/GraphRank.html
2. Grossman, Jerrold W.; Kulkarni, Devadatta M.; Schochetman, Irwin E. (1995), "On the minors of an incidence matrix and its Smith normal form", Linear Algebra and Its Applications, 218: 213–224, doi:10.1016/0024-3795(93)00173-W, MR 1324059. See in particular the discussion on p. 218.
References
• Chen, Wai-Kai (1976), Applied Graph Theory, North Holland Publishing Company, ISBN 0-7204-2371-6.
• Hedetniemi, S. T., Jacobs, D. P., Laskar, R. (1989), Inequalities involving the rank of a graph. Journal of Combinatorial Mathematics and Combinatorial Computing, vol. 6, pp. 173–176.
• Bevis, Jean H., Blount, Kevin K., Davis, George J., Domke, Gayla S., Miller, Valerie A. (1997), The rank of a graph after vertex addition. Linear Algebra and its Applications, vol. 265, pp. 55–69.
Rank 3 permutation group
In mathematical finite group theory, a rank 3 permutation group acts transitively on a set such that the stabilizer of a point has 3 orbits. The study of these groups was started by Higman (1964, 1971). Several of the sporadic simple groups were discovered as rank 3 permutation groups.
Classification
The primitive rank 3 permutation groups are all in one of the following classes:
• Cameron (1981) classified the ones such that $T\times T\leq G\leq T_{0}\operatorname {wr} Z/2Z$ where the socle T of T0 is simple, and T0 is a 2-transitive group of degree √n.
• Liebeck (1987) classified the ones with a regular elementary abelian normal subgroup
• Bannai (1971–72) classified the ones whose socle is a simple alternating group
• Kantor & Liebler (1982) classified the ones whose socle is a simple classical group
• Liebeck & Saxl (1986) classified the ones whose socle is a simple exceptional or sporadic group.
Examples
If G is any 4-transitive group acting on a set S, then its action on pairs of elements of S is a rank 3 permutation group.[1] In particular most of the alternating groups, symmetric groups, and Mathieu groups have 4-transitive actions, and so can be made into rank 3 permutation groups.
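This can be verified computationally for a small case. The sketch below (illustrative Python, assuming the SymPy library) takes the 4-transitive group A6 acting on the 15 unordered pairs of 6 points and computes the orbit lengths of the stabilizer of one pair, recovering the 1+6+8 split in the first row of the table below:

from itertools import combinations
from sympy.combinatorics.named_groups import AlternatingGroup

G = AlternatingGroup(6)
pairs = list(combinations(range(6), 2))

# elements of A6 fixing the pair {0, 1} setwise
stab = [g for g in G.generate() if tuple(sorted((g(0), g(1)))) == (0, 1)]

seen, lengths = set(), []
for p in pairs:
    if p in seen:
        continue
    orbit, frontier = {p}, [p]
    while frontier:                       # close the orbit of p under stab
        q = frontier.pop()
        for g in stab:
            r = tuple(sorted((g(q[0]), g(q[1]))))
            if r not in orbit:
                orbit.add(r)
                frontier.append(r)
    seen |= orbit
    lengths.append(len(orbit))
print(sorted(lengths))  # [1, 6, 8]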
The projective general linear group acting on lines in a projective space of dimension at least 3 is a rank-3 permutation group.
Several 3-transposition groups are rank-3 permutation groups (in the action on transpositions).
It is common for the point-stabilizer of a rank-3 permutation group acting on one of the orbits to be a rank-3 permutation group. This gives several "chains" of rank-3 permutation groups, such as the Suzuki chain and the chain ending with the Fischer groups.
Some unusual rank-3 permutation groups (many from (Liebeck & Saxl 1986)) are listed below.
In the table below, in the column marked "size", the number to the left of the equals sign is the degree of the permutation group in that row, and the sum to the right of the equals sign gives the lengths of the three orbits of the stabilizer of a point. For example, the expression 15 = 1+6+8 in the first row means that the permutation group of that row has degree 15, and that the orbits of a point stabilizer have lengths 1, 6 and 8 respectively.
Group | Point stabilizer | Size | Comments
A6 = L2(9) = Sp4(2)' = M10' | S4 | 15 = 1+6+8 | Pairs of points, or sets of 3 blocks of 2, in the 6-point permutation representation; two classes
A9 | L2(8):3 | 120 = 1+56+63 | Projective line P1(8); two classes
A10 | (A5×A5):4 | 126 = 1+25+100 | Sets of 2 blocks of 5 in the natural 10-point permutation representation
L2(8) | 7:2 = Dih(7) | 36 = 1+14+21 | Pairs of points in P1(8)
L3(4) | A6 | 56 = 1+10+45 | Hyperovals in P2(4); three classes
L4(3) | PSp4(3):2 | 117 = 1+36+80 | Symplectic polarities of P3(3); two classes
G2(2)' = U3(3) | PSL3(2) | 36 = 1+14+21 | Suzuki chain
U3(5) | A7 | 50 = 1+7+42 | The action on the vertices of the Hoffman-Singleton graph; three classes
U4(3) | L3(4) | 162 = 1+56+105 | Two classes
Sp6(2) | G2(2) = U3(3):2 | 120 = 1+56+63 | The Chevalley group of type G2 acting on the octonion algebra over GF(2)
Ω7(3) | G2(3) | 1080 = 1+351+728 | The Chevalley group of type G2 acting on the imaginary octonions of the octonion algebra over GF(3); two classes
U6(2) | U4(3):22 | 1408 = 1+567+840 | The point stabilizer is the image of the linear representation resulting from "bringing down" the complex representation of Mitchell's group (a complex reflection group) modulo 2; three classes
M11 | M9:2 = 32:SD16 | 55 = 1+18+36 | Pairs of points in the 11-point permutation representation
M12 | M10:2 = A6.22 = PΓL(2,9) | 66 = 1+20+45 | Pairs of points, or pairs of complementary blocks of S(5,6,12), in the 12-point permutation representation; two classes
M22 | 24:A6 | 77 = 1+16+60 | Blocks of S(3,6,22)
J2 | U3(3) | 100 = 1+36+63 | Suzuki chain; the action on the vertices of the Hall-Janko graph
Higman-Sims group HS | M22 | 100 = 1+22+77 | The action on the vertices of the Higman-Sims graph
M22 | A7 | 176 = 1+70+105 | Two classes
M23 | M21:2 = L3(4):22 = PΣL(3,4) | 253 = 1+42+210 | Pairs of points in the 23-point permutation representation
M23 | 24:A7 | 253 = 1+112+140 | Blocks of S(4,7,23)
McLaughlin group McL | U4(3) | 275 = 1+112+162 | The action on the vertices of the McLaughlin graph
M24 | M22:2 | 276 = 1+44+231 | Pairs of points in the 24-point permutation representation
G2(3) | U3(3):2 | 351 = 1+126+224 | Two classes
G2(4) | J2 | 416 = 1+100+315 | Suzuki chain
M24 | M12:2 | 1288 = 1+495+792 | Pairs of complementary dodecads in the 24-point permutation representation
Suzuki group Suz | G2(4) | 1782 = 1+416+1365 | Suzuki chain
G2(4) | U3(4):2 | 2016 = 1+975+1040 |
Co2 | PSU6(2):2 | 2300 = 1+891+1408 |
Rudvalis group Ru | 2F4(2) | 4060 = 1+1755+2304 |
Fi22 | 2.PSU6(2) | 3510 = 1+693+2816 | 3-transpositions
Fi22 | Ω7(3) | 14080 = 1+3159+10920 | Two classes
Fi23 | 2.Fi22 | 31671 = 1+3510+28160 | 3-transpositions
G2(8).3 | SU3(8).6 | 130816 = 1+32319+98496 |
Fi23 | PΩ8+(3).S3 | 137632 = 1+28431+109200 |
Fi24' | Fi23 | 306936 = 1+31671+275264 | 3-transpositions
Notes
1. The three orbits are: the fixed pair itself; those pairs having one element in common with the fixed pair; and those pairs having no element in common with the fixed pair.
References
• Bannai, Eiichi (1971–72), "Maximal subgroups of low rank of finite symmetric and alternating groups", Journal of the Faculty of Science. University of Tokyo. Section IA. Mathematics, 18: 475–486, ISSN 0040-8980, MR 0357559
• Brouwer, A. E.; Cohen, A. M.; Neumaier, Arnold (1989), Distance-regular graphs, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], vol. 18, Berlin, New York: Springer-Verlag, ISBN 978-3-540-50619-5, MR 1002568
• Cameron, Peter J. (1981), "Finite permutation groups and finite simple groups", The Bulletin of the London Mathematical Society, 13 (1): 1–22, CiteSeerX 10.1.1.122.1628, doi:10.1112/blms/13.1.1, ISSN 0024-6093, MR 0599634
• Higman, Donald G. (1964), "Finite permutation groups of rank 3" (PDF), Mathematische Zeitschrift, 86 (2): 145–156, doi:10.1007/BF01111335, hdl:2027.42/46298, ISSN 0025-5874, MR 0186724, S2CID 51836896
• Higman, Donald G. (1971), "A survey of some questions and results about rank 3 permutation groups", Actes du Congrès International des Mathématiciens (Nice, 1970), vol. 1, Gauthier-Villars, pp. 361–365, MR 0427435
• Kantor, William M.; Liebler, Robert A. (1982), "The rank 3 permutation representations of the finite classical groups" (PDF), Transactions of the American Mathematical Society, 271 (1): 1–71, doi:10.2307/1998750, ISSN 0002-9947, JSTOR 1998750, MR 0648077
• Liebeck, Martin W. (1987), "The affine permutation groups of rank three", Proceedings of the London Mathematical Society, Third Series, 54 (3): 477–516, CiteSeerX 10.1.1.135.7735, doi:10.1112/plms/s3-54.3.477, ISSN 0024-6115, MR 0879395
• Liebeck, Martin W.; Saxl, Jan (1986), "The finite primitive permutation groups of rank three", The Bulletin of the London Mathematical Society, 18 (2): 165–172, doi:10.1112/blms/18.2.165, ISSN 0024-6093, MR 0818821
Rank–size distribution
Rank–size distribution is the distribution of size by rank, in decreasing order of size. For example, if a data set consists of items of sizes 5, 100, 5, and 8, the rank-size distribution is 100, 8, 5, 5 (ranks 1 through 4). This is also known as the rank–frequency distribution, when the source data are from a frequency distribution. These are particularly of interest when the data vary significantly in scale, such as city size or word frequency. These distributions frequently follow a power law distribution, or less well-known ones such as a stretched exponential function or parabolic fractal distribution, at least approximately for certain ranges of ranks; see below.
A rank-size distribution is not a probability distribution or cumulative distribution function. Rather, it is a discrete form of a quantile function (inverse cumulative distribution) in reverse order, giving the size of the element at a given rank.
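As a concrete illustration, the example above can be reproduced in a few lines of Python (a minimal sketch; the variable names are ours):

```python
# Build a rank-size distribution: sort the sizes in decreasing order
# and pair each size with its 1-based rank.
sizes = [5, 100, 5, 8]                    # the example data set from above
rank_size = sorted(sizes, reverse=True)   # [100, 8, 5, 5]

for rank, size in enumerate(rank_size, start=1):
    print(rank, size)                     # ranks 1 through 4
```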
Simple rank–size distributions
In the case of city populations, the resulting distribution in a country, a region, or the world will be characterized by its largest city, with other cities decreasing in size relative to it, initially at a rapid rate and then more slowly. This results in a few large cities and a much larger number of cities orders of magnitude smaller. For example, under a Zipf's-law distribution, a rank 3 city would have one-third the population of a country's largest city, a rank 4 city would have one-fourth the population of the largest city, and so on.[2]
Segmentation
A rank-size (or rank–frequency) distribution is often segmented into ranges. This is frequently done somewhat arbitrarily or due to external factors, particularly for market segmentation, but can also be due to distinct behavior as rank varies.
Most simply and commonly, a distribution may be split in two pieces, termed the head and tail. If a distribution is broken into three pieces, the third (middle) piece has several terms, generically middle,[3] also belly,[4] torso,[5] and body.[6] These frequently have some adjectives added, most significantly long tail, also fat belly,[4] chunky middle, etc. In more traditional terms, these may be called top-tier, mid-tier, and bottom-tier.
The relative sizes and weights of these segments (how many ranks in each segment, and what proportion of the total population is in a given segment) qualitatively characterize a distribution, analogously to the skewness or kurtosis of a probability distribution. Namely: is it dominated by a few top members (head-heavy, like profits in the recorded music industry), or is it dominated by many small members (tail-heavy, like internet search queries), or distributed in some other way? Practically, this determines strategy: where should attention be focused?
These distinctions may be made for various reasons. For example, they may arise from differing properties of the population, as in the 90–9–1 principle, which posits that in an internet community, 90% of participants only view content, 9% edit content, and 1% actively create new content. As another example, in marketing, one may pragmatically consider the head to be all members that receive personalized attention, such as personal phone calls, while the tail is everything else, which receives only unpersonalized attention such as form letters; the line is then simply set wherever resources allow, or where it makes business sense to stop.
Purely quantitatively, a conventional way of splitting a distribution into head and tail is to consider the head to be the first p portion of ranks, which account for $1-p$ of the overall population, as in the 80:20 Pareto principle, where the top 20% (head) comprises 80% of the overall population. The exact cutoff depends on the distribution (each distribution has a single such cutoff point), and for power laws it can be computed from the Pareto index.
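One way to operationalize this cutoff numerically is to scan down the ranks until the head's share of the population first reaches the complement of its share of the ranks. A minimal sketch, with illustrative data and a helper name of our choosing:

```python
def head_cutoff(sizes):
    """Smallest rank k whose top-k share of the total is at least
    1 - k/N, i.e. the first p of ranks holds about 1 - p of the
    population (the loop always returns by k = N)."""
    s = sorted(sizes, reverse=True)
    total, n = sum(s), len(s)
    running = 0.0
    for k, size in enumerate(s, start=1):
        running += size
        if running / total >= 1 - k / n:
            return k  # ranks 1..k form the head

print(head_cutoff([100, 50, 20, 10, 5, 4, 3, 2, 2, 1]))  # 3: top 30% of ranks
```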
Segments may arise naturally due to actual changes in the behavior of the distribution as rank varies. Most common is the king effect, where the behavior of the top handful of items does not fit the pattern of the rest, as illustrated at the top for country populations, and above for most common words in English Wikipedia. For higher ranks, behavior may change at some point, and be well-modeled by different relations in different regions; on the whole by a piecewise function. For example, if two different power laws fit better in different regions, one can use a broken power law for the overall relation; the word frequency in English Wikipedia (above) also demonstrates this.
The Yule–Simon distribution that results from preferential attachment (intuitively, "the rich get richer" and "success breeds success") simulates a broken power law and has been shown to "very well capture" word frequency versus rank distributions.[7] It originated from trying to explain the population versus rank in different species. It has also been shown to fit city population versus rank better.[8]
Rank–size rule
The rank-size rule (or law) describes the remarkable regularity in many phenomena, including the distribution of city sizes, the sizes of businesses, the sizes of particles (such as sand), the lengths of rivers, the frequencies of word usage, and wealth among individuals.
All are real-world observations that follow power laws, such as Zipf's law, the Yule distribution, or the Pareto distribution. If one ranks the population sizes of cities in a given country or in the entire world and plots the natural logarithm of city population against the natural logarithm of rank, the resulting graph shows an approximately linear pattern; this log–log linearity is the signature of a power-law rank-size distribution.[9]
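For example, the log–log fit can be done directly with NumPy; a sketch with illustrative population figures (not real data):

```python
import numpy as np

# Slope of log(size) vs. log(rank); a value near -1 is the classic
# Zipf pattern described by the rank-size rule.
populations = np.array([8.8e6, 3.9e6, 2.7e6, 2.3e6, 1.6e6])  # illustrative
ranks = np.arange(1, len(populations) + 1)

slope, intercept = np.polyfit(np.log(ranks), np.log(populations), 1)
print(f"fitted exponent: {slope:.2f}")
```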
Known exceptions to simple rank–size distributions
While Zipf's law works well in many cases, it tends to not fit the largest cities in many countries; one type of deviation is known as the King effect. A 2002 study found that Zipf's law was rejected in 53 of 73 countries, far more than would be expected based on random chance.[10] The study also found that variations of the Pareto exponent are better explained by political variables than by economic geography variables like proxies for economies of scale or transportation costs.[11] A 2004 study showed that Zipf's law did not work well for the five largest cities in six countries.[12] In the richer countries, the distribution was flatter than predicted. For instance, in the United States, although its largest city, New York City, has more than twice the population of second-place Los Angeles, the two cities' metropolitan areas (also the two largest in the country) are much closer in population. In metropolitan-area population, New York City is only 1.3 times larger than Los Angeles. In other countries, the largest city would dominate much more than expected. For instance, in the Democratic Republic of the Congo, the capital, Kinshasa, is more than eight times larger than the second-largest city, Lubumbashi. When considering the entire distribution of cities, including the smallest ones, the rank-size rule does not hold. Instead, the distribution is log-normal. This follows from Gibrat's law of proportionate growth.
Because exceptions are so easy to find, the function of the rule for analyzing cities today is to compare the city systems in different countries. The rank-size rule is a common standard by which urban primacy is established. A distribution such as that in the United States or China does not exhibit a pattern of primacy, but countries with a dominant "primate city" clearly vary from the rank-size rule in the opposite manner. Therefore, the rule helps to classify national (or regional) city systems according to the degree of dominance exhibited by the largest city. Countries with a primate city, for example, have typically had a colonial history that accounts for that city pattern. If a normal city distribution pattern is expected to follow the rank-size rule (i.e. if the rank-size principle correlates with central place theory), then it suggests that those countries or regions with distributions that do not follow the rule have experienced some conditions that have altered the normal distribution pattern. For example, the presence of multiple regions within large nations such as China and the United States tends to favor a pattern in which more large cities appear than would be predicted by the rule. By contrast, small countries that had been connected (e.g. colonially/economically) to much larger areas will exhibit a distribution in which the largest city is much larger than would fit the rule, compared with the other cities—the excessive size of the city theoretically stems from its connection with a larger system rather than the natural hierarchy that central place theory would predict within that one country or region alone.
See also
• Pareto principle
• Long tail
References
1. "Stretched exponential distributions in nature and economy: "fat tails" with characteristic scales", J. Laherrère and D. Sornette
2. "The 200 Largest Cities in the United States by Population 2021". worldpopulationreview.com. Retrieved 2021-03-28.
3. Illustrating the Long Tail, Rand Fishkin, November 24th, 2009
4. Digg that Fat Belly!, Robert Young, Sep. 4, 2006
5. The Long Tail Keyword Optimization Guide - How to Profit from Long Tail Keywords, August 3, 2009, Tom Demers
6. The Small Head, the Medium Body, and the Long Tail .. so, where's Microsoft? Archived 2015-11-17 at the Wayback Machine, 12 Mar 2005, Lawrence Liu's Report from the Inside
7. Lin, Ruokuang; Ma, Qianli D. Y.; Bian, Chunhua (2014). "Scaling laws in human speech, decreasing emergence of new words and a generalized model". arXiv:1412.4846. Bibcode:2014arXiv1412.4846L.
8. Dacey, M F (1 April 1979). "A Growth Process for Zipf's and Yule's City-Size Laws". Environment and Planning A. 11 (4): 361–372. doi:10.1068/a110361. S2CID 122325866.
9. Zipf's Law, or the Rank–Size Distribution Archived 2007-02-13 at the Wayback Machine Steven Brakman, Harry Garretsen, and Charles van Marrewijk
10. "Kwok Tong Soo (2002)" (PDF).
11. Zipf's Law, or the Rank–Size Distribution Archived 2007-03-02 at the Wayback Machine
12. Cuberes, David, The Rise and Decline of Cities, University of Chicago, September 29, 2004,
Further reading
• Brakman, S.; Garretsen, H.; Van Marrewijk, C.; Van Den Berg, M. (1999). "The Return of Zipf: Towards a Further Understanding of the Rank–Size Distribution". Journal of Regional Science. 39 (1): 183–213. doi:10.1111/1467-9787.00129. S2CID 56011475.
• Guérin-Pace, F. (1995). "Rank–Size Distribution and the Process of Urban Growth". Urban Studies. 32 (3): 551–562. doi:10.1080/00420989550012960. S2CID 154660734.
• Reed, W.J. (2001). "The Pareto, Zipf and other power laws". Economics Letters. 74 (1): 15–19. doi:10.1016/S0165-1765(01)00524-9.
• Douglas R. White, Laurent Tambayong, and Nataša Kejžar. 2008. Oscillatory dynamics of city-size distributions in world-historical systems. Globalization as an Evolutionary Process: Modeling Global Change. Ed. by George Modelski, Tessaleno Devezas, and William R. Thompson. London: Routledge. ISBN 978-0-415-77361-4
• The Use of Agent-Based Models in Regional Science—an agent-based simulation study that explains rank–size distribution.
External links
• Media related to Rank-size distribution at Wikimedia Commons
Birch and Swinnerton-Dyer conjecture
In mathematics, the Birch and Swinnerton-Dyer conjecture (often called the Birch–Swinnerton-Dyer conjecture) describes the set of rational solutions to equations defining an elliptic curve. It is an open problem in the field of number theory and is widely recognized as one of the most challenging mathematical problems. It is named after mathematicians Bryan John Birch and Peter Swinnerton-Dyer, who developed the conjecture during the first half of the 1960s with the help of machine computation. As of 2023, only special cases of the conjecture have been proven.
Millennium Prize Problems
• Birch and Swinnerton-Dyer conjecture
• Hodge conjecture
• Navier–Stokes existence and smoothness
• P versus NP problem
• Poincaré conjecture (solved)
• Riemann hypothesis
• Yang–Mills existence and mass gap
The modern formulation of the conjecture relates arithmetic data associated with an elliptic curve E over a number field K to the behaviour of the Hasse–Weil L-function L(E, s) of E at s = 1. More specifically, it is conjectured that the rank of the abelian group E(K) of points of E is the order of the zero of L(E, s) at s = 1, and the first non-zero coefficient in the Taylor expansion of L(E, s) at s = 1 is given by more refined arithmetic data attached to E over K (Wiles 2006).
The conjecture was chosen as one of the seven Millennium Prize Problems listed by the Clay Mathematics Institute, which has offered a $1,000,000 prize for the first correct proof.[1]
Background
Mordell (1922) proved Mordell's theorem: the group of rational points on an elliptic curve has a finite basis. This means that for any elliptic curve there is a finite subset of the rational points on the curve, from which all further rational points may be generated.
If the number of rational points on a curve is infinite then some point in a finite basis must have infinite order. The number of independent basis points with infinite order is called the rank of the curve, and is an important invariant property of an elliptic curve.
If the rank of an elliptic curve is 0, then the curve has only a finite number of rational points. On the other hand, if the rank of the curve is greater than 0, then the curve has an infinite number of rational points.
Although Mordell's theorem shows that the rank of an elliptic curve is always finite, it does not give an effective method for calculating the rank of every curve. The rank of certain elliptic curves can be calculated using numerical methods but (in the current state of knowledge) it is unknown if these methods handle all curves.
An L-function L(E, s) can be defined for an elliptic curve E by constructing an Euler product from the number of points on the curve modulo each prime p. This L-function is analogous to the Riemann zeta function and the Dirichlet L-series that is defined for a binary quadratic form. It is a special case of a Hasse–Weil L-function.
The natural definition of L(E, s) only converges for values of s in the complex plane with Re(s) > 3/2. Helmut Hasse conjectured that L(E, s) could be extended by analytic continuation to the whole complex plane. This conjecture was first proved by Deuring (1941) for elliptic curves with complex multiplication. It was subsequently shown to be true for all elliptic curves over Q, as a consequence of the modularity theorem in 2001.
Finding rational points on a general elliptic curve is a difficult problem. Finding the points on an elliptic curve modulo a given prime p is conceptually straightforward, as there are only a finite number of possibilities to check. However, for large primes it is computationally intensive.
History
In the early 1960s Peter Swinnerton-Dyer used the EDSAC-2 computer at the University of Cambridge Computer Laboratory to calculate the number of points modulo p (denoted by Np) for a large number of primes p on elliptic curves whose rank was known. From these numerical results Birch & Swinnerton-Dyer (1965) conjectured that Np for a curve E with rank r obeys an asymptotic law
$\prod _{p\leq x}{\frac {N_{p}}{p}}\approx C\log(x)^{r}{\mbox{ as }}x\rightarrow \infty $
where C is a constant.
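The quantities Np are easy to compute by brute force for small p, so this product can be tabulated on a laptop. A minimal sketch (the curve y² = x³ − x is an arbitrary choice, and p = 2, its only prime of bad reduction, is simply skipped):

```python
from sympy import primerange

def count_points(a, b, p):
    """N_p: affine points on y^2 = x^3 + a*x + b over F_p, plus infinity."""
    sqrt_count = {}
    for y in range(p):
        sqrt_count[y * y % p] = sqrt_count.get(y * y % p, 0) + 1
    total = 1  # the point at infinity
    for x in range(p):
        total += sqrt_count.get((x**3 + a * x + b) % p, 0)
    return total

a, b = -1, 0  # y^2 = x^3 - x
product = 1.0
for p in primerange(3, 1000):
    product *= count_points(a, b, p) / p
print(product)  # under the conjecture, the product grows like C * log(x)^r
```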
Initially this was based on somewhat tenuous trends in graphical plots, which induced a measure of skepticism in J. W. S. Cassels (Birch's Ph.D. advisor).[2] Over time the numerical evidence accumulated.
This in turn led them to make a general conjecture about the behaviour of a curve's L-function L(E, s) at s = 1, namely that it would have a zero of order r at this point. This was a far-sighted conjecture for the time, given that the analytic continuation of L(E, s) there was only established for curves with complex multiplication, which were also the main source of numerical examples. (NB that the reciprocal of the L-function is from some points of view a more natural object of study; on occasion this means that one should consider poles rather than zeroes.)
The conjecture was subsequently extended to include the prediction of the precise leading Taylor coefficient of the L-function at s = 1. It is conjecturally given by[3]
${\frac {L^{(r)}(E,1)}{r!}}={\frac {\#\mathrm {Sha} (E)\Omega _{E}R_{E}\prod _{p|N}c_{p}}{(\#E_{\mathrm {Tor} })^{2}}}$
where the quantities on the right hand side are invariants of the curve, studied by Cassels, Tate, Shafarevich and others (Wiles 2006):
$\#E_{\mathrm {Tor} }$ is the order of the torsion group,
$\#\mathrm {Sha} (E)$ is the order of the Tate–Shafarevich group,
$\Omega _{E}$ is the real period of E multiplied by the number of connected components of E,
$R_{E}$ is the regulator of E which is defined via the canonical heights of a basis of rational points,
$c_{p}$ is the Tamagawa number of E at a prime p dividing the conductor N of E. It can be found by Tate's algorithm.
Current status
The Birch and Swinnerton-Dyer conjecture has been proved only in special cases:
1. Coates & Wiles (1977) proved that if E is a curve over a number field F with complex multiplication by an imaginary quadratic field K of class number 1, F = K or Q, and L(E, 1) is not 0 then E(F) is a finite group. This was extended to the case where F is any finite abelian extension of K by Arthaud (1978).
2. Gross & Zagier (1986) showed that if a modular elliptic curve has a first-order zero at s = 1 then it has a rational point of infinite order; see Gross–Zagier theorem.
3. Kolyvagin (1989) showed that a modular elliptic curve E for which L(E, 1) is not zero has rank 0, and a modular elliptic curve E for which L(E, 1) has a first-order zero at s = 1 has rank 1.
4. Rubin (1991) showed that for elliptic curves defined over an imaginary quadratic field K with complex multiplication by K, if the L-series of the elliptic curve was not zero at s = 1, then the p-part of the Tate–Shafarevich group had the order predicted by the Birch and Swinnerton-Dyer conjecture, for all primes p > 7.
5. Breuil et al. (2001), extending work of Wiles (1995), proved that all elliptic curves defined over the rational numbers are modular, which extends results #2 and #3 to all elliptic curves over the rationals, and shows that the L-functions of all elliptic curves over Q are defined at s = 1.
6. Bhargava & Shankar (2015) proved that the average rank of the Mordell–Weil group of an elliptic curve over Q is bounded above by 7/6. Combining this with the p-parity theorem of Nekovář (2009) and Dokchitser & Dokchitser (2010) and with the proof of the main conjecture of Iwasawa theory for GL(2) by Skinner & Urban (2014), they conclude that a positive proportion of elliptic curves over Q have analytic rank zero, and hence, by Kolyvagin (1989), satisfy the Birch and Swinnerton-Dyer conjecture.
There are currently no proofs involving curves with rank greater than 1.
There is extensive numerical evidence for the truth of the conjecture.[4]
Consequences
Much like the Riemann hypothesis, this conjecture has multiple consequences, including the following two:
• Let n be an odd square-free integer. Assuming the Birch and Swinnerton-Dyer conjecture, n is the area of a right triangle with rational side lengths (a congruent number) if and only if the number of triplets of integers (x, y, z) satisfying 2x2 + y2 + 8z2 = n is twice the number of triplets satisfying 2x2 + y2 + 32z2 = n. This statement, due to Tunnell's theorem (Tunnell 1983), is related to the fact that n is a congruent number if and only if the elliptic curve y2 = x3 − n2x has a rational point of infinite order (thus, under the Birch and Swinnerton-Dyer conjecture, its L-function has a zero at 1). The interest in this statement is that the condition is easily verified; a brute-force check is sketched after this list.[5]
• In a different direction, certain analytic methods allow for an estimation of the order of the zero in the center of the critical strip for families of L-functions. Assuming the BSD conjecture, these estimates translate into information about the rank of the corresponding families of elliptic curves. For example, assuming the generalized Riemann hypothesis and the BSD conjecture, the average rank of curves given by y2 = x3 + ax + b is smaller than 2.[6]
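The counting condition in Tunnell's theorem above is easy to check by brute force; a minimal sketch (the function name and search bound are ours):

```python
from itertools import product

def tunnell_counts(n):
    """Counts of integer triples (x, y, z), of either sign or zero,
    with 2x^2 + y^2 + 8z^2 = n and with 2x^2 + y^2 + 32z^2 = n."""
    bound = int(n ** 0.5) + 1           # |x|, |y|, |z| are at most sqrt(n)
    rng = range(-bound, bound + 1)
    a = sum(1 for x, y, z in product(rng, repeat=3) if 2*x*x + y*y + 8*z*z == n)
    b = sum(1 for x, y, z in product(rng, repeat=3) if 2*x*x + y*y + 32*z*z == n)
    return a, b

for n in (1, 3, 5, 7):                  # odd and square-free
    a, b = tunnell_counts(n)
    print(n, a == 2 * b)                # True for the congruent numbers 5 and 7
```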
Notes
1. Birch and Swinnerton-Dyer Conjecture at Clay Mathematics Institute
2. Stewart, Ian (2013), Visions of Infinity: The Great Mathematical Problems, Basic Books, p. 253, ISBN 9780465022403, Cassels was highly skeptical at first.
3. Cremona, John (2011). "Numerical evidence for the Birch and Swinnerton-Dyer Conjecture" (PDF). Talk at the BSD 50th Anniversary Conference, May 2011., page 50
4. Cremona, John (2011). "Numerical evidence for the Birch and Swinnerton-Dyer Conjecture" (PDF). Talk at the BSD 50th Anniversary Conference, May 2011.
5. Koblitz, Neal (1993). Introduction to Elliptic Curves and Modular Forms. Graduate Texts in Mathematics. Vol. 97 (2nd ed.). Springer-Verlag. ISBN 0-387-97966-2.
6. Heath-Brown, D. R. (2004). "The Average Analytic Rank of Elliptic Curves". Duke Mathematical Journal. 122 (3): 591–623. arXiv:math/0305114. doi:10.1215/S0012-7094-04-12235-3. MR 2057019. S2CID 15216987.
References
• Arthaud, Nicole (1978). "On Birch and Swinnerton-Dyer's conjecture for elliptic curves with complex multiplication". Compositio Mathematica. 37 (2): 209–232. MR 0504632.
• Bhargava, Manjul; Shankar, Arul (2015). "Ternary cubic forms having bounded invariants, and the existence of a positive proportion of elliptic curves having rank 0". Annals of Mathematics. 181 (2): 587–621. arXiv:1007.0052. doi:10.4007/annals.2015.181.2.4. S2CID 1456959.
• Birch, Bryan; Swinnerton-Dyer, Peter (1965). "Notes on Elliptic Curves (II)". J. Reine Angew. Math. 165 (218): 79–108. doi:10.1515/crll.1965.218.79. S2CID 122531425.
• Breuil, Christophe; Conrad, Brian; Diamond, Fred; Taylor, Richard (2001). "On the Modularity of Elliptic Curves over Q: Wild 3-Adic Exercises". Journal of the American Mathematical Society. 14 (4): 843–939. doi:10.1090/S0894-0347-01-00370-8.
• Coates, J.H.; Greenberg, R.; Ribet, K.A.; Rubin, K. (1999). Arithmetic Theory of Elliptic Curves. Lecture Notes in Mathematics. Vol. 1716. Springer-Verlag. ISBN 3-540-66546-3.
• Coates, J.; Wiles, A. (1977). "On the conjecture of Birch and Swinnerton-Dyer". Inventiones Mathematicae. 39 (3): 223–251. Bibcode:1977InMat..39..223C. doi:10.1007/BF01402975. S2CID 189832636. Zbl 0359.14009.
• Deuring, Max (1941). "Die Typen der Multiplikatorenringe elliptischer Funktionenkörper". Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg. 14 (1): 197–272. doi:10.1007/BF02940746. S2CID 124821516.
• Dokchitser, Tim; Dokchitser, Vladimir (2010). "On the Birch–Swinnerton-Dyer quotients modulo squares". Annals of Mathematics. 172 (1): 567–596. arXiv:math/0610290. doi:10.4007/annals.2010.172.567. MR 2680426. S2CID 9479748.
• Gross, Benedict H.; Zagier, Don B. (1986). "Heegner points and derivatives of L-series". Inventiones Mathematicae. 84 (2): 225–320. Bibcode:1986InMat..84..225G. doi:10.1007/BF01388809. MR 0833192. S2CID 125716869.
• Kolyvagin, Victor (1989). "Finiteness of E(Q) and X(E, Q) for a class of Weil curves". Math. USSR Izv. 32 (3): 523–541. Bibcode:1989IzMat..32..523K. doi:10.1070/im1989v032n03abeh000779.
• Mordell, Louis (1922). "On the rational solutions of the indeterminate equations of the third and fourth degrees". Proc. Camb. Phil. Soc. 21: 179–192.
• Nekovář, Jan (2009). "On the parity of ranks of Selmer groups IV". Compositio Mathematica. 145 (6): 1351–1359. doi:10.1112/S0010437X09003959.
• Rubin, Karl (1991). "The 'main conjectures' of Iwasawa theory for imaginary quadratic fields". Inventiones Mathematicae. 103 (1): 25–68. Bibcode:1991InMat.103...25R. doi:10.1007/BF01239508. S2CID 120179735. Zbl 0737.11030.
• Skinner, Christopher; Urban, Éric (2014). "The Iwasawa main conjectures for GL2". Inventiones Mathematicae. 195 (1): 1–277. Bibcode:2014InMat.195....1S. doi:10.1007/s00222-013-0448-1. S2CID 120848645.
• Tunnell, Jerrold B. (1983). "A classical Diophantine problem and modular forms of weight 3/2" (PDF). Inventiones Mathematicae. 72 (2): 323–334. Bibcode:1983InMat..72..323T. doi:10.1007/BF01389327. hdl:10338.dmlcz/137483. S2CID 121099824. Zbl 0515.10013.
• Wiles, Andrew (1995). "Modular elliptic curves and Fermat's last theorem". Annals of Mathematics. Second Series. 141 (3): 443–551. doi:10.2307/2118559. ISSN 0003-486X. JSTOR 2118559. MR 1333035.
• Wiles, Andrew (2006). "The Birch and Swinnerton-Dyer conjecture" (PDF). In Carlson, James; Jaffe, Arthur; Wiles, Andrew (eds.). The Millennium prize problems. American Mathematical Society. pp. 31–44. ISBN 978-0-8218-3679-8. MR 2238272.
External links
• Weisstein, Eric W. "Swinnerton-Dyer Conjecture". MathWorld.
• "Birch and Swinnerton-Dyer Conjecture". PlanetMath.
• The Birch and Swinnerton-Dyer Conjecture: An Interview with Professor Henri Darmon by Agnes F. Beaudry
• What is the Birch and Swinnerton-Dyer Conjecture? lecture by Manjul Bhargava (September 2016) given during the Clay Research Conference held at the University of Oxford
L-functions in number theory
Analytic examples
• Riemann zeta function
• Dirichlet L-functions
• L-functions of Hecke characters
• Automorphic L-functions
• Selberg class
Algebraic examples
• Dedekind zeta functions
• Artin L-functions
• Hasse–Weil L-functions
• Motivic L-functions
Theorems
• Analytic class number formula
• Riemann–von Mangoldt formula
• Weil conjectures
Analytic conjectures
• Riemann hypothesis
• Generalized Riemann hypothesis
• Lindelöf hypothesis
• Ramanujan–Petersson conjecture
• Artin conjecture
Algebraic conjectures
• Birch and Swinnerton-Dyer conjecture
• Deligne's conjecture
• Beilinson conjectures
• Bloch–Kato conjecture
• Langlands conjecture
p-adic L-functions
• Main conjecture of Iwasawa theory
• Selmer group
• Euler system
Rank correlation
In statistics, a rank correlation is any of several statistics that measure an ordinal association: the relationship between rankings of different ordinal variables or different rankings of the same variable, where a "ranking" is the assignment of the ordering labels "first", "second", "third", etc. to different observations of a particular variable. A rank correlation coefficient measures the degree of similarity between two rankings, and can be used to assess the significance of the relation between them. For example, two common nonparametric tests of significance that use rank correlation are the Mann–Whitney U test and the Wilcoxon signed-rank test.
Context
If, for example, one variable is the identity of a college basketball program and another variable is the identity of a college football program, one could test for a relationship between the poll rankings of the two types of program: do colleges with a higher-ranked basketball program tend to have a higher-ranked football program? A rank correlation coefficient can measure that relationship, and the measure of significance of the rank correlation coefficient can show whether the measured relationship is small enough to likely be a coincidence.
If there is only one variable, the identity of a college football program, but it is subject to two different poll rankings (say, one by coaches and one by sportswriters), then the similarity of the two different polls' rankings can be measured with a rank correlation coefficient.
As another example, in a contingency table with low income, medium income, and high income as the row variable and educational level (no high school, high school, university) as the column variable,[1] a rank correlation measures the relationship between income and educational level.
Correlation coefficients
Some of the more popular rank correlation statistics include
1. Spearman's ρ
2. Kendall's τ
3. Goodman and Kruskal's γ
4. Somers' D
An increasing rank correlation coefficient implies increasing agreement between rankings. The coefficient is inside the interval [−1, 1] and assumes the value:
• 1 if the agreement between the two rankings is perfect; the two rankings are the same.
• 0 if the rankings are completely independent.
• −1 if the disagreement between the two rankings is perfect; one ranking is the reverse of the other.
Following Diaconis (1988), a ranking can be seen as a permutation of a set of objects. Thus we can look at observed rankings as data obtained when the sample space is (identified with) a symmetric group. We can then introduce a metric, making the symmetric group into a metric space. Different metrics will correspond to different rank correlations.
General correlation coefficient
Kendall (1970)[2] showed that his $\tau $ (tau) and Spearman's $\rho $ (rho) are particular cases of a general correlation coefficient.
Suppose we have a set of $n$ objects, which are being considered in relation to two properties, represented by $x$ and $y$, forming the sets of values $\{x_{i}\}_{i\leq n}$ and $\{y_{i}\}_{i\leq n}$. To any pair of individuals, say the $i$-th and the $j$-th, we assign an $x$-score, denoted by $a_{ij}$, and a $y$-score, denoted by $b_{ij}$. The only requirement for these functions is that they be anti-symmetric, so $a_{ij}=-a_{ji}$ and $b_{ij}=-b_{ji}$. (Note that in particular $a_{ij}=b_{ij}=0$ if $i=j$.) Then the generalized correlation coefficient $\Gamma $ is defined as
$\Gamma ={\frac {\sum _{i,j=1}^{n}a_{ij}b_{ij}}{\sqrt {\sum _{i,j=1}^{n}a_{ij}^{2}\sum _{i,j=1}^{n}b_{ij}^{2}}}}$
Equivalently, if all coefficients are collected into matrices $A=(a_{ij})$ and $B=(b_{ij})$, with $A^{\textsf {T}}=-A$ and $B^{\textsf {T}}=-B$, then
$\Gamma ={\frac {\langle A,B\rangle _{\rm {F}}}{\|A\|_{\rm {F}}\|B\|_{\rm {F}}}}$
where $\langle A,B\rangle _{\rm {F}}$ is the Frobenius inner product and $\|A\|_{\rm {F}}={\sqrt {\langle A,A\rangle _{\rm {F}}}}$ the Frobenius norm. In particular, the general correlation coefficient is the cosine of the angle between the matrices $A$ and $B$.
See also: Inner product space § Norms on inner product spaces
Kendall's τ as a particular case
If $r_{i}$, $s_{i}$ are the ranks of the $i$-member according to the $x$-quality and $y$-quality respectively, then we can define
$a_{ij}=\operatorname {sgn}(r_{j}-r_{i}),\quad b_{ij}=\operatorname {sgn}(s_{j}-s_{i}).$
Restricting the sums to pairs with $i<j$ (this leaves $\Gamma $ unchanged, since each unordered pair contributes twice to both numerator and denominator), the sum $\sum a_{ij}b_{ij}$ is the number of concordant pairs minus the number of discordant pairs (see Kendall tau rank correlation coefficient). The sum $\sum a_{ij}^{2}$ is just $n(n-1)/2$, the number of terms $a_{ij}$, as is $\sum b_{ij}^{2}$. Thus in this case,
$\Gamma ={\frac {2\,(({\text{number of concordant pairs}})-({\text{number of discordant pairs}}))}{n(n-1)}}={\text{Kendall's }}\tau $
Spearman’s ρ as a particular case
If $r_{i}$, $s_{i}$ are the ranks of the $i$-member according to the $x$ and the $y$-quality respectively, we may consider the matrices $a,b\in M(n\times n;\mathbb {R} )$ defined by
$a_{ij}:=r_{j}-r_{i}$
$b_{ij}:=s_{j}-s_{i}$
The sums $\sum a_{ij}^{2}$ and $\sum b_{ij}^{2}$ are equal, since both $r_{i}$ and $s_{i}$ range from $1$ to $n$. Hence
$\Gamma ={\frac {\sum (r_{j}-r_{i})(s_{j}-s_{i})}{\sum (r_{j}-r_{i})^{2}}}$
To simplify this expression, let $d_{i}:=r_{i}-s_{i}$ denote the difference in the ranks for each $i$. Further, let $U$ be a uniformly distributed discrete random variable on $\{1,2,\ldots ,n\}$. Since the ranks $r,s$ are just permutations of $1,2,\ldots ,n$, we can view both as being random variables distributed like $U$. Using basic summation results from discrete mathematics, it is easy to see that for the uniformly distributed random variable $U$ we have $\mathbb {E} [U]=\textstyle {\frac {n+1}{2}}$ and $\mathbb {E} [U^{2}]=\textstyle {\frac {(n+1)(2n+1)}{6}}$ and thus $\mathrm {Var} (U)=\textstyle {\frac {(n+1)(2n+1)}{6}}-\textstyle {\frac {(n+1)(n+1)}{4}}=\textstyle {\frac {n^{2}-1}{12}}$. Now, observing symmetries allows us to compute the parts of $\Gamma $ as follows:
${\begin{aligned}{\frac {1}{n^{2}}}\sum _{i,j=1}^{n}(r_{j}-r_{i})(s_{j}-s_{i})&=2\left({\frac {1}{n^{2}}}\cdot n\sum _{i=1}^{n}r_{i}s_{i}-({\frac {1}{n}}\sum _{i=1}^{n}r_{i})({\frac {1}{n}}\sum _{j=1}^{n}s_{j})\right)\\&={\frac {1}{n}}\sum _{i=1}^{n}(r_{i}^{2}+s_{i}^{2}-d_{i}^{2})-2(\mathbb {E} [U])^{2}\\&={\frac {1}{n}}\sum _{i=1}^{n}r_{i}^{2}+{\frac {1}{n}}\sum _{i=1}^{n}s_{i}^{2}-{\frac {1}{n}}\sum _{i=1}^{n}d_{i}^{2}-2(\mathbb {E} [U])^{2}\\&=2(\mathbb {E} [U^{2}]-(\mathbb {E} [U])^{2})-{\frac {1}{n}}\sum _{i=1}^{n}d_{i}^{2}\\\end{aligned}}$
and
${\begin{aligned}{\frac {1}{n^{2}}}\sum _{i,j=1}^{n}(r_{j}-r_{i})^{2}&={\frac {1}{n^{2}}}\cdot n\sum _{i,j=1}^{n}(r_{i}^{2}+r_{j}^{2}-2r_{i}r_{j})\\&=2{\frac {1}{n}}\sum _{i=1}^{n}r_{i}^{2}-2({\frac {1}{n}}\sum _{i=1}^{n}r_{i})({\frac {1}{n}}\sum _{j=1}^{n}r_{j})\\&=2(\mathbb {E} [U^{2}]-(\mathbb {E} [U])^{2})\\\end{aligned}}$
Hence
$\Gamma =1-{\frac {\sum _{i=1}^{n}d_{i}^{2}}{2n\mathrm {Var} (U)}}=1-{\frac {6\sum _{i=1}^{n}d_{i}^{2}}{n(n^{2}-1)}}$
where $d_{i}=r_{i}-s_{i}$ is the difference between ranks, which is exactly Spearman's rank correlation coefficient $\rho $.
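Both special cases are easy to verify numerically from the general definition; a short sketch (the names are ours):

```python
import numpy as np

def general_gamma(A, B):
    """Gamma for two antisymmetric score matrices: the cosine of the
    angle between them under the Frobenius inner product."""
    return (A * B).sum() / np.sqrt((A * A).sum() * (B * B).sum())

r = np.array([1, 2, 3, 4, 5])        # ranks under the x-quality
s = np.array([1, 3, 2, 5, 4])        # ranks under the y-quality

diff_r = r[None, :] - r[:, None]     # a_ij = r_j - r_i
diff_s = s[None, :] - s[:, None]     # b_ij = s_j - s_i

print(general_gamma(np.sign(diff_r), np.sign(diff_s)))  # 0.6  (Kendall's tau)
print(general_gamma(diff_r, diff_s))                    # 0.8  (Spearman's rho)
```

On these tie-free rankings the two printed values agree with scipy.stats.kendalltau and scipy.stats.spearmanr.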
Rank-biserial correlation
Gene Glass (1965) noted that the rank-biserial can be derived from Spearman's $\rho $. "One can derive a coefficient defined on X, the dichotomous variable, and Y, the ranking variable, which estimates Spearman's rho between X and Y in the same way that biserial r estimates Pearson's r between two normal variables" (p. 91). The rank-biserial correlation had been introduced nine years before by Edward Cureton (1956) as a measure of rank correlation when the ranks are in two groups.
Kerby simple difference formula
Dave Kerby (2014) recommended the rank-biserial as the measure to introduce students to rank correlation, because the general logic can be explained at an introductory level. The rank-biserial is the correlation used with the Mann–Whitney U test, a method commonly covered in introductory college courses on statistics. The data for this test consists of two groups; and for each member of the groups, the outcome is ranked for the study as a whole.
Kerby showed that this rank correlation can be expressed in terms of two concepts: the percent of data that support a stated hypothesis, and the percent of data that do not support it. The Kerby simple difference formula states that the rank correlation can be expressed as the difference between the proportion of favorable evidence (f) minus the proportion of unfavorable evidence (u).
$r=f-u$
Example and interpretation
To illustrate the computation, suppose a coach trains long-distance runners for one month using two methods. Group A has 5 runners, and Group B has 4 runners. The stated hypothesis is that method A produces faster runners. The race to assess the results finds that the runners from Group A do indeed run faster, with the following ranks: 1, 2, 3, 4, and 6. The slower runners from Group B thus have ranks of 5, 7, 8, and 9.
The analysis is conducted on pairs, defined as a member of one group compared to a member of the other group. For example, the fastest runner in the study is a member of four pairs: (1,5), (1,7), (1,8), and (1,9). All four of these pairs support the hypothesis, because in each pair the runner from Group A is faster than the runner from Group B. There are a total of 20 pairs, and 19 pairs support the hypothesis. The only pair that does not support the hypothesis is the pair formed by the runners with ranks 5 and 6, because in this pair the runner from Group B had the faster time. By the Kerby simple difference formula, 95% of the data support the hypothesis (19 of 20 pairs), and 5% do not support it (1 of 20 pairs), so the rank correlation is r = .95 − .05 = .90.
The maximum value for the correlation is r = 1, which means that 100% of the pairs favor the hypothesis. A correlation of r = 0 indicates that half the pairs favor the hypothesis and half do not; in other words, the sample groups do not differ in ranks, so there is no evidence that they come from two different populations. An effect size of r = 0 can be said to describe no relationship between group membership and the members' ranks.
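The runner example translates directly into code; a minimal sketch (tied ranks are not handled):

```python
from itertools import product

group_a = [1, 2, 3, 4, 6]   # ranks of the 5 runners trained with method A
group_b = [5, 7, 8, 9]      # ranks of the 4 runners trained with method B

pairs = list(product(group_a, group_b))
favorable = sum(1 for a, b in pairs if a < b)   # the Group A runner is faster

f = favorable / len(pairs)          # 19/20 = 0.95
u = 1 - f                           # 1/20 = 0.05
print(f - u)                        # rank-biserial r = 0.90
```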
References
1. Kruskal, William H. (1958). "Ordinal Measures of Association". Journal of the American Statistical Association. 53 (284): 814–861. doi:10.2307/2281954. JSTOR 2281954.
2. Kendall, Maurice G (1970). Rank Correlation Methods (4 ed.). Griffin. ISBN 9780852641996.
Further reading
• Cureton, Edward E. (1956). "Rank-biserial correlation". Psychometrika. 21 (3): 287–290. doi:10.1007/BF02289138. S2CID 122500836.
• Everitt, B. S. (2002), The Cambridge Dictionary of Statistics, Cambridge: Cambridge University Press, ISBN 0-521-81099-X
• Diaconis, P. (1988), Group Representations in Probability and Statistics, Lecture Notes-Monograph Series, Hayward, CA: Institute of Mathematical Statistics, ISBN 0-940600-14-5
• Glass, Gene V. (1965). "A ranking variable analogue of biserial correlation: implications for short-cut item analysis". Journal of Educational Measurement. 2 (1): 91–95. doi:10.1111/j.1745-3984.1965.tb00396.x.
• Kendall, M. G. (1970), Rank Correlation Methods, London: Griffin, ISBN 0-85264-199-0
• Kerby, Dave S. (2014). "The Simple Difference Formula: An Approach to Teaching Nonparametric Correlation". Comprehensive Psychology. 3 (1): 11.IT.3.1. doi:10.2466/11.IT.3.1.
External links
• Brief guide by experimental psychologist Karl L. Wuensch - Nonparametric effect sizes (Copyright 2015 by Karl L. Wuensch)
Statistics
• Outline
• Index
Descriptive statistics
Continuous data
Center
• Mean
• Arithmetic
• Arithmetic-Geometric
• Cubic
• Generalized/power
• Geometric
• Harmonic
• Heronian
• Heinz
• Lehmer
• Median
• Mode
Dispersion
• Average absolute deviation
• Coefficient of variation
• Interquartile range
• Percentile
• Range
• Standard deviation
• Variance
Shape
• Central limit theorem
• Moments
• Kurtosis
• L-moments
• Skewness
Count data
• Index of dispersion
Summary tables
• Contingency table
• Frequency distribution
• Grouped data
Dependence
• Partial correlation
• Pearson product-moment correlation
• Rank correlation
• Kendall's τ
• Spearman's ρ
• Scatter plot
Graphics
• Bar chart
• Biplot
• Box plot
• Control chart
• Correlogram
• Fan chart
• Forest plot
• Histogram
• Pie chart
• Q–Q plot
• Radar chart
• Run chart
• Scatter plot
• Stem-and-leaf display
• Violin plot
Data collection
Study design
• Effect size
• Missing data
• Optimal design
• Population
• Replication
• Sample size determination
• Statistic
• Statistical power
Survey methodology
• Sampling
• Cluster
• Stratified
• Opinion poll
• Questionnaire
• Standard error
Controlled experiments
• Blocking
• Factorial experiment
• Interaction
• Random assignment
• Randomized controlled trial
• Randomized experiment
• Scientific control
Adaptive designs
• Adaptive clinical trial
• Stochastic approximation
• Up-and-down designs
Observational studies
• Cohort study
• Cross-sectional study
• Natural experiment
• Quasi-experiment
Statistical inference
Statistical theory
• Population
• Statistic
• Probability distribution
• Sampling distribution
• Order statistic
• Empirical distribution
• Density estimation
• Statistical model
• Model specification
• Lp space
• Parameter
• location
• scale
• shape
• Parametric family
• Likelihood (monotone)
• Location–scale family
• Exponential family
• Completeness
• Sufficiency
• Statistical functional
• Bootstrap
• U
• V
• Optimal decision
• loss function
• Efficiency
• Statistical distance
• divergence
• Asymptotics
• Robustness
Frequentist inference
Point estimation
• Estimating equations
• Maximum likelihood
• Method of moments
• M-estimator
• Minimum distance
• Unbiased estimators
• Mean-unbiased minimum-variance
• Rao–Blackwellization
• Lehmann–Scheffé theorem
• Median unbiased
• Plug-in
Interval estimation
• Confidence interval
• Pivot
• Likelihood interval
• Prediction interval
• Tolerance interval
• Resampling
• Bootstrap
• Jackknife
Testing hypotheses
• 1- & 2-tails
• Power
• Uniformly most powerful test
• Permutation test
• Randomization test
• Multiple comparisons
Parametric tests
• Likelihood-ratio
• Score/Lagrange multiplier
• Wald
Specific tests
• Z-test (normal)
• Student's t-test
• F-test
Goodness of fit
• Chi-squared
• G-test
• Kolmogorov–Smirnov
• Anderson–Darling
• Lilliefors
• Jarque–Bera
• Normality (Shapiro–Wilk)
• Likelihood-ratio test
• Model selection
• Cross validation
• AIC
• BIC
Rank statistics
• Sign
• Sample median
• Signed rank (Wilcoxon)
• Hodges–Lehmann estimator
• Rank sum (Mann–Whitney)
• Nonparametric anova
• 1-way (Kruskal–Wallis)
• 2-way (Friedman)
• Ordered alternative (Jonckheere–Terpstra)
• Van der Waerden test
Bayesian inference
• Bayesian probability
• prior
• posterior
• Credible interval
• Bayes factor
• Bayesian estimator
• Maximum posterior estimator
• Correlation
• Regression analysis
Correlation
• Pearson product-moment
• Partial correlation
• Confounding variable
• Coefficient of determination
Regression analysis
• Errors and residuals
• Regression validation
• Mixed effects models
• Simultaneous equations models
• Multivariate adaptive regression splines (MARS)
Linear regression
• Simple linear regression
• Ordinary least squares
• General linear model
• Bayesian regression
Non-standard predictors
• Nonlinear regression
• Nonparametric
• Semiparametric
• Isotonic
• Robust
• Heteroscedasticity
• Homoscedasticity
Generalized linear model
• Exponential families
• Logistic (Bernoulli) / Binomial / Poisson regressions
Partition of variance
• Analysis of variance (ANOVA, anova)
• Analysis of covariance
• Multivariate ANOVA
• Degrees of freedom
Categorical / Multivariate / Time-series / Survival analysis
Categorical
• Cohen's kappa
• Contingency table
• Graphical model
• Log-linear model
• McNemar's test
• Cochran–Mantel–Haenszel statistics
Multivariate
• Regression
• Manova
• Principal components
• Canonical correlation
• Discriminant analysis
• Cluster analysis
• Classification
• Structural equation model
• Factor analysis
• Multivariate distributions
• Elliptical distributions
• Normal
Time-series
General
• Decomposition
• Trend
• Stationarity
• Seasonal adjustment
• Exponential smoothing
• Cointegration
• Structural break
• Granger causality
Specific tests
• Dickey–Fuller
• Johansen
• Q-statistic (Ljung–Box)
• Durbin–Watson
• Breusch–Godfrey
Time domain
• Autocorrelation (ACF)
• partial (PACF)
• Cross-correlation (XCF)
• ARMA model
• ARIMA model (Box–Jenkins)
• Autoregressive conditional heteroskedasticity (ARCH)
• Vector autoregression (VAR)
Frequency domain
• Spectral density estimation
• Fourier analysis
• Least-squares spectral analysis
• Wavelet
• Whittle likelihood
Survival
Survival function
• Kaplan–Meier estimator (product limit)
• Proportional hazards models
• Accelerated failure time (AFT) model
• First hitting time
Hazard function
• Nelson–Aalen estimator
Test
• Log-rank test
Applications
Biostatistics
• Bioinformatics
• Clinical trials / studies
• Epidemiology
• Medical statistics
Engineering statistics
• Chemometrics
• Methods engineering
• Probabilistic design
• Process / quality control
• Reliability
• System identification
Social statistics
• Actuarial science
• Census
• Crime statistics
• Demography
• Econometrics
• Jurimetrics
• National accounts
• Official statistics
• Population statistics
• Psychometrics
Spatial statistics
• Cartography
• Environmental statistics
• Geographic information system
• Geostatistics
• Kriging
• Category
• Mathematics portal
• Commons
• WikiProject
Rank (linear algebra)
In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns.[1][2][3] This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows.[4] Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics.
The rank is commonly denoted by rank(A) or rk(A);[2] sometimes the parentheses are not written, as in rank A.[i]
Main definitions
In this section, we give some definitions of the rank of a matrix. Many definitions are possible; see Alternative definitions for several of these.
The column rank of A is the dimension of the column space of A, while the row rank of A is the dimension of the row space of A.
A fundamental result in linear algebra is that the column rank and the row rank are always equal. (Three proofs of this result are given in § Proofs that column rank = row rank, below.) This number (i.e., the number of linearly independent rows or columns) is simply called the rank of A.
A matrix is said to have full rank if its rank equals the largest possible for a matrix of the same dimensions, which is the lesser of the number of rows and columns. A matrix is said to be rank-deficient if it does not have full rank. The rank deficiency of a matrix is the difference between the lesser of the number of rows and columns, and the rank.
The rank of a linear map or operator $\Phi $ is defined as the dimension of its image:[5][6][7][8]
$\operatorname {rank} (\Phi ):=\dim(\operatorname {img} (\Phi ))$
where $\dim $ is the dimension of a vector space, and $\operatorname {img} $ is the image of a map.
Examples
The matrix
${\begin{bmatrix}1&0&1\\-2&-3&1\\3&3&0\end{bmatrix}}$
has rank 2: the first two columns are linearly independent, so the rank is at least 2, but since the third is a linear combination of the first two (the first column minus the second), the three columns are linearly dependent so the rank must be less than 3.
The matrix
$A={\begin{bmatrix}1&1&0&2\\-1&-1&0&-2\end{bmatrix}}$
has rank 1: there are nonzero columns, so the rank is positive, but any pair of columns is linearly dependent. Similarly, the transpose
$A^{\mathrm {T} }={\begin{bmatrix}1&-1\\1&-1\\0&0\\2&-2\end{bmatrix}}$
of A has rank 1. Indeed, since the column vectors of A are the row vectors of the transpose of A, the statement that the column rank of a matrix equals its row rank is equivalent to the statement that the rank of a matrix is equal to the rank of its transpose, i.e., rank(A) = rank(AT).
Computing the rank of a matrix
Rank from row echelon forms
Main article: Gaussian elimination
A common approach to finding the rank of a matrix is to reduce it to a simpler form, generally row echelon form, by elementary row operations. Row operations do not change the row space (hence do not change the row rank), and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row echelon form, the rank is clearly the same for both row rank and column rank, and equals the number of pivots (or basic columns) and also the number of non-zero rows.
For example, the matrix A given by
$A={\begin{bmatrix}1&2&1\\-2&-3&1\\3&5&0\end{bmatrix}}$
can be put in reduced row-echelon form by using the following elementary row operations:
${\begin{aligned}{\begin{bmatrix}1&2&1\\-2&-3&1\\3&5&0\end{bmatrix}}&\xrightarrow {2R_{1}+R_{2}\to R_{2}} {\begin{bmatrix}1&2&1\\0&1&3\\3&5&0\end{bmatrix}}\xrightarrow {-3R_{1}+R_{3}\to R_{3}} {\begin{bmatrix}1&2&1\\0&1&3\\0&-1&-3\end{bmatrix}}\\&\xrightarrow {R_{2}+R_{3}\to R_{3}} \,\,{\begin{bmatrix}1&2&1\\0&1&3\\0&0&0\end{bmatrix}}\xrightarrow {-2R_{2}+R_{1}\to R_{1}} {\begin{bmatrix}1&0&-5\\0&1&3\\0&0&0\end{bmatrix}}~.\end{aligned}}$
The final matrix (in row echelon form) has two non-zero rows and thus the rank of matrix A is 2.
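The pivot-counting procedure is short to implement. A minimal sketch in exact rational arithmetic (forward elimination only, which already suffices to count the pivots):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    n_rows, n_cols = len(m), len(m[0])
    r = col = 0
    while r < n_rows and col < n_cols:
        # find a pivot in this column, at or below row r
        pivot = next((i for i in range(r, n_rows) if m[i][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, n_rows):          # eliminate below the pivot
            factor = m[i][col] / m[r][col]
            m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
        col += 1
    return r

print(rank([[1, 2, 1], [-2, -3, 1], [3, 5, 0]]))  # 2, as computed above
```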
Computation
When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank-revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less expensive choices, such as QR decomposition with pivoting (so-called rank-revealing QR factorization), which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application.
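In NumPy this criterion looks as follows; the sketch mirrors the tolerance convention used by numpy.linalg.matrix_rank (largest dimension times machine epsilon, relative to the largest singular value):

```python
import numpy as np

def numerical_rank(A):
    """Number of singular values above a size-dependent tolerance."""
    s = np.linalg.svd(A, compute_uv=False)      # singular values, descending
    tol = max(A.shape) * np.finfo(A.dtype).eps * (s[0] if s.size else 0)
    return int((s > tol).sum())

A = np.array([[1.0, 2.0, 1.0], [-2.0, -3.0, 1.0], [3.0, 5.0, 0.0]])
print(numerical_rank(A), np.linalg.matrix_rank(A))  # 2 2
```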
Proofs that column rank = row rank
Proof using row reduction
The fact that the column and row ranks of any matrix are equal is fundamental in linear algebra. Many proofs have been given. One of the most elementary ones has been sketched in § Rank from row echelon forms. Here is a variant of this proof:
It is straightforward to show that neither the row rank nor the column rank is changed by an elementary row operation. As Gaussian elimination proceeds by elementary row operations, the reduced row echelon form of a matrix has the same row rank and the same column rank as the original matrix. Further elementary column operations allow putting the matrix in the form of an identity matrix possibly bordered by rows and columns of zeros. Again, this changes neither the row rank nor the column rank. It is immediate that both the row and column ranks of this resulting matrix equal the number of its nonzero entries.
We present two other proofs of this result. The first uses only basic properties of linear combinations of vectors, and is valid over any field. The proof is based upon Wardlaw (2005).[9] The second uses orthogonality and is valid for matrices over the real numbers; it is based upon Mackiw (1995).[4] Both proofs can be found in the book by Banerjee and Roy (2014).[10]
Proof using linear combinations
Let A be an m × n matrix. Let the column rank of A be r, and let c1, ..., cr be any basis for the column space of A. Place these as the columns of an m × r matrix C. Every column of A can be expressed as a linear combination of the r columns in C. This means that there is an r × n matrix R such that A = CR. R is the matrix whose ith column is formed from the coefficients giving the ith column of A as a linear combination of the r columns of C. In other words, R is the matrix which contains the multiples for the bases of the column space of A (which is C), which are then used to form A as a whole. Now, each row of A is given by a linear combination of the r rows of R. Therefore, the rows of R form a spanning set of the row space of A and, by the Steinitz exchange lemma, the row rank of A cannot exceed r. This proves that the row rank of A is less than or equal to the column rank of A. This result can be applied to any matrix, so apply the result to the transpose of A. Since the row rank of the transpose of A is the column rank of A and the column rank of the transpose of A is the row rank of A, this establishes the reverse inequality and we obtain the equality of the row rank and the column rank of A. (Also see Rank factorization.)
Proof using orthogonality
Let A be an m × n matrix with entries in the real numbers whose row rank is r. Therefore, the dimension of the row space of A is r. Let x1, x2, …, xr be a basis of the row space of A. We claim that the vectors Ax1, Ax2, …, Axr are linearly independent. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients c1, c2, …, cr:
$0=c_{1}A\mathbf {x} _{1}+c_{2}A\mathbf {x} _{2}+\cdots +c_{r}A\mathbf {x} _{r}=A(c_{1}\mathbf {x} _{1}+c_{2}\mathbf {x} _{2}+\cdots +c_{r}\mathbf {x} _{r})=A\mathbf {v} ,$
where v = c1x1 + c2x2 + ⋯ + crxr. We make two observations: (a) v is a linear combination of vectors in the row space of A, which implies that v belongs to the row space of A, and (b) since Av = 0, the vector v is orthogonal to every row vector of A and, hence, is orthogonal to every vector in the row space of A. The facts (a) and (b) together imply that v is orthogonal to itself, which proves that v = 0 or, by the definition of v,
$c_{1}\mathbf {x} _{1}+c_{2}\mathbf {x} _{2}+\cdots +c_{r}\mathbf {x} _{r}=0.$
But recall that the xi were chosen as a basis of the row space of A and so are linearly independent. This implies that c1 = c2 = ⋯ = cr = 0. It follows that Ax1, Ax2, …, Axr are linearly independent.
Now, each Axi is obviously a vector in the column space of A. So, Ax1, Ax2, …, Axr is a set of r linearly independent vectors in the column space of A and, hence, the dimension of the column space of A (i.e., the column rank of A) must be at least as big as r. This proves that row rank of A is no larger than the column rank of A. Now apply this result to the transpose of A to get the reverse inequality and conclude as in the previous proof.
Alternative definitions
In all the definitions in this section, the matrix A is taken to be an m × n matrix over an arbitrary field F.
Dimension of image
Given the matrix $A$, there is an associated linear mapping
$f:F^{n}\to F^{m}$
defined by
$f(x)=Ax.$
The rank of $A$ is the dimension of the image of $f$. This definition has the advantage that it can be applied to any linear map without need for a specific matrix.
Rank in terms of nullity
Given the same linear mapping f as above, the rank is n minus the dimension of the kernel of f. The rank–nullity theorem states that this definition is equivalent to the preceding one.
Column rank – dimension of column space
The rank of A is the maximal number of linearly independent columns $\mathbf {c} _{1},\mathbf {c} _{2},\dots ,\mathbf {c} _{k}$ of A; this is the dimension of the column space of A (the column space being the subspace of Fm generated by the columns of A, which is in fact just the image of the linear map f associated to A).
Row rank – dimension of row space
The rank of A is the maximal number of linearly independent rows of A; this is the dimension of the row space of A.
Decomposition rank
The rank of A is the smallest integer k such that A can be factored as $A=CR$, where C is an m × k matrix and R is a k × n matrix. In fact, for all integers k, the following are equivalent:
1. the column rank of A is less than or equal to k,
2. there exist k columns $\mathbf {c} _{1},\ldots ,\mathbf {c} _{k}$ of size m such that every column of A is a linear combination of $\mathbf {c} _{1},\ldots ,\mathbf {c} _{k}$,
3. there exist an $m\times k$ matrix C and a $k\times n$ matrix R such that $A=CR$ (when k is the rank, this is a rank factorization of A),
4. there exist k rows $\mathbf {r} _{1},\ldots ,\mathbf {r} _{k}$ of size n such that every row of A is a linear combination of $\mathbf {r} _{1},\ldots ,\mathbf {r} _{k}$,
5. the row rank of A is less than or equal to k.
Indeed, the following equivalences are obvious: $(1)\Leftrightarrow (2)\Leftrightarrow (3)\Leftrightarrow (4)\Leftrightarrow (5)$. For example, to prove (3) from (2), take C to be the matrix whose columns are $\mathbf {c} _{1},\ldots ,\mathbf {c} _{k}$ from (2). To prove (2) from (3), take $\mathbf {c} _{1},\ldots ,\mathbf {c} _{k}$ to be the columns of C.
It follows from the equivalence $(1)\Leftrightarrow (5)$ that the row rank is equal to the column rank.
As in the case of the "dimension of image" characterization, this can be generalized to a definition of the rank of any linear map: the rank of a linear map f : V → W is the minimal dimension k of an intermediate space X such that f can be written as the composition of a map V → X and a map X → W. Unfortunately, this definition does not suggest an efficient manner to compute the rank (for which it is better to use one of the alternative definitions). See rank factorization for details.
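A rank factorization is easy to extract from the reduced row echelon form: take C to be the pivot columns of A and R the nonzero rows of the echelon form. A sketch with SymPy, reusing the matrix from the row-reduction example above:

```python
from sympy import Matrix

A = Matrix([[1, 2, 1], [-2, -3, 1], [3, 5, 0]])

rref, pivots = A.rref()          # reduced row echelon form, pivot column indices
C = A[:, list(pivots)]           # m x k: the pivot columns of A
R = rref[:len(pivots), :]        # k x n: the nonzero rows of the echelon form

assert A == C * R                # a rank factorization with k = 2
print(C, R)
```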
Rank in terms of singular values
The rank of A equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in Σ in the singular value decomposition $A=U\Sigma V^{*}$.
Determinantal rank – size of largest non-vanishing minor
The rank of A is the largest order of any non-zero minor in A. (The order of a minor is the side-length of the square sub-matrix of which it is the determinant.) Like the decomposition rank characterization, this does not give an efficient way of computing the rank, but it is useful theoretically: a single non-zero minor witnesses a lower bound (namely its order) for the rank of the matrix, which can be useful (for example) to prove that certain operations do not lower the rank of a matrix.
A non-vanishing p-minor (p × p submatrix with non-zero determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward. The equivalence of determinantal rank and column rank is a strengthening of the statement that if the span of n vectors has dimension p, then p of those vectors span the space (equivalently, that one can choose a spanning set that is a subset of the vectors): the equivalence implies that a subset of the rows and a subset of the columns simultaneously define an invertible submatrix (equivalently, if the span of n vectors has dimension p, then p of these vectors span the space and there is a set of p coordinates on which they are linearly independent).
Tensor rank – minimum number of simple tensors
Main articles: Tensor rank decomposition and Tensor rank
The rank of A is the smallest number k such that A can be written as a sum of k rank 1 matrices, where a matrix is defined to have rank 1 if and only if it can be written as a nonzero product $c\cdot r$ of a column vector c and a row vector r. This notion of rank is called tensor rank; it can be generalized in the separable models interpretation of the singular value decomposition.
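The singular value decomposition exhibits such a sum explicitly: each term $\sigma _{i}u_{i}v_{i}^{\mathrm {T} }$ is a rank-1 matrix. A minimal sketch in Python with NumPy:

import numpy as np

A = np.array([[3., 1.],
              [1., 3.]])
U, s, Vt = np.linalg.svd(A)
r = np.linalg.matrix_rank(A)
# Each term sigma_i * u_i v_i^T is a rank-1 matrix (an outer product).
terms = [s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r)]
assert all(np.linalg.matrix_rank(t) == 1 for t in terms)
assert np.allclose(A, sum(terms))      # A is a sum of r rank-1 matrices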
Properties
We assume that A is an m × n matrix, and we define the linear map f by f(x) = Ax as above.
• The rank of an m × n matrix is a nonnegative integer and cannot be greater than either m or n. That is,
$\operatorname {rank} (A)\leq \min(m,n).$
A matrix that has rank min(m, n) is said to have full rank; otherwise, the matrix is rank deficient.
• Only a zero matrix has rank zero.
• f is injective (or "one-to-one") if and only if A has rank n (in this case, we say that A has full column rank).
• f is surjective (or "onto") if and only if A has rank m (in this case, we say that A has full row rank).
• If A is a square matrix (i.e., m = n), then A is invertible if and only if A has rank n (that is, A has full rank).
• If B is any n × k matrix, then
$\operatorname {rank} (AB)\leq \min(\operatorname {rank} (A),\operatorname {rank} (B)).$
• If B is an n × k matrix of rank n, then
$\operatorname {rank} (AB)=\operatorname {rank} (A).$
• If C is an l × m matrix of rank m, then
$\operatorname {rank} (CA)=\operatorname {rank} (A).$
• The rank of A is equal to r if and only if there exist an invertible m × m matrix X and an invertible n × n matrix Y such that
$XAY={\begin{bmatrix}I_{r}&0\\0&0\\\end{bmatrix}},$
where Ir denotes the r × r identity matrix.
• Sylvester’s rank inequality: if A is an m × n matrix and B is n × k, then[lower-roman 2]
$\operatorname {rank} (A)+\operatorname {rank} (B)-n\leq \operatorname {rank} (AB).$
This is a special case of the next inequality.
• The inequality due to Frobenius: if AB, ABC and BC are defined, then[lower-roman 3]
$\operatorname {rank} (AB)+\operatorname {rank} (BC)\leq \operatorname {rank} (B)+\operatorname {rank} (ABC).$
• Subadditivity:
$\operatorname {rank} (A+B)\leq \operatorname {rank} (A)+\operatorname {rank} (B)$
when A and B are of the same dimension. As a consequence, a rank-k matrix can be written as the sum of k rank-1 matrices, but not fewer.
• The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix. (This is the rank–nullity theorem.)
• If A is a matrix over the real numbers then the rank of A and the rank of its corresponding Gram matrix are equal. Thus, for real matrices
$\operatorname {rank} (A^{\mathrm {T} }A)=\operatorname {rank} (AA^{\mathrm {T} })=\operatorname {rank} (A)=\operatorname {rank} (A^{\mathrm {T} }).$
This can be shown by proving equality of their null spaces. The null space of the Gram matrix is given by vectors x for which $A^{\mathrm {T} }A\mathbf {x} =0.$ If this condition is fulfilled, we also have $0=\mathbf {x} ^{\mathrm {T} }A^{\mathrm {T} }A\mathbf {x} =\left|A\mathbf {x} \right|^{2},$ so that $A\mathbf {x} =0$; hence the null spaces of $A^{\mathrm {T} }A$ and $A$ coincide.[11] (This identity, together with the Sylvester and Frobenius inequalities above, is checked numerically in the sketch following this list.)
• If A is a matrix over the complex numbers and ${\overline {A}}$ denotes the complex conjugate of A and A∗ the conjugate transpose of A (i.e., the adjoint of A), then
$\operatorname {rank} (A)=\operatorname {rank} ({\overline {A}})=\operatorname {rank} (A^{\mathrm {T} })=\operatorname {rank} (A^{*})=\operatorname {rank} (A^{*}A)=\operatorname {rank} (AA^{*}).$
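Several of the properties above lend themselves to quick numerical spot checks. A minimal sketch in Python with NumPy (the random integer matrices are an arbitrary choice; this verifies particular instances and proves nothing):

import numpy as np

rank = np.linalg.matrix_rank
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 5)).astype(float)
B = rng.integers(-3, 4, size=(5, 3)).astype(float)
C = rng.integers(-3, 4, size=(3, 2)).astype(float)

assert rank(A @ B) <= min(rank(A), rank(B))                      # product bound
assert rank(A) + rank(B) - 5 <= rank(A @ B)                      # Sylvester, n = 5
assert rank(A @ B) + rank(B @ C) <= rank(B) + rank(A @ B @ C)    # Frobenius
assert rank(A.T @ A) == rank(A @ A.T) == rank(A)                 # Gram matrices (real case)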
Applications
One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. According to the Rouché–Capelli theorem, the system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, then the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters, where k is the difference between the number of variables and the rank. In this case (and assuming the system of equations is in the real or complex numbers) the system of equations has infinitely many solutions.
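The Rouché–Capelli classification is straightforward to carry out numerically. A minimal sketch in Python with NumPy (the helper classify is hypothetical, written for this illustration):

import numpy as np

def classify(A, b):
    r_coef = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if r_aug > r_coef:
        return "inconsistent"
    if r_coef == A.shape[1]:
        return "unique solution"
    return f"{A.shape[1] - r_coef} free parameter(s)"

A = np.array([[1., 1.],
              [2., 2.]])
print(classify(A, np.array([3., 6.])))   # 1 free parameter(s)
print(classify(A, np.array([3., 7.])))   # inconsistent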
In control theory, the rank of a matrix can be used to determine whether a linear system is controllable, or observable.
In the field of communication complexity, the rank of the communication matrix of a function gives bounds on the amount of communication needed for two parties to compute the function.
Generalization
There are different generalizations of the concept of rank to matrices over arbitrary rings, where column rank, row rank, dimension of column space, and dimension of row space of a matrix may be different from the others or may not exist.
Thinking of matrices as tensors, the tensor rank generalizes to arbitrary tensors; for tensors of order greater than 2 (matrices are order 2 tensors), rank is very hard to compute, unlike for matrices.
There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative.
Matrices as tensors
Matrix rank should not be confused with tensor order, which is called tensor rank. Tensor order is the number of indices required to write a tensor, and thus matrices all have tensor order 2. More precisely, matrices are tensors of type (1,1), having one row index and one column index, also called covariant order 1 and contravariant order 1; see Tensor (intrinsic definition) for details.
The tensor rank of a matrix can also mean the minimum number of simple tensors necessary to express the matrix as a linear combination; this definition agrees with matrix rank as discussed here.
See also
• Matroid rank
• Nonnegative rank (linear algebra)
• Rank (differential topology)
• Multicollinearity
• Linear dependence
Notes
1. Alternative notation includes $\rho (\Phi )$ from Katznelson & Katznelson (2008, p. 52, §2.5.1) and Halmos (1974, p. 90, § 50).
2. Proof: Apply the rank–nullity theorem to the inequality
$\dim \ker(AB)\leq \dim \ker(A)+\dim \ker(B).$
3. Proof. The map
$C:\ker(ABC)/\ker(BC)\to \ker(AB)/\ker(B)$
is well-defined and injective. We thus obtain the inequality in terms of dimensions of kernel, which can then be converted to the inequality in terms of ranks by the rank–nullity theorem. Alternatively, if $M$ is a linear subspace then $\dim(AM)\leq \dim(M)$; apply this inequality to the subspace defined by the orthogonal complement of the image of $BC$ in the image of $B$, whose dimension is $\operatorname {rank} (B)-\operatorname {rank} (BC)$; its image under $A$ has dimension $\operatorname {rank} (AB)-\operatorname {rank} (ABC)$.
References
1. Axler (2015) pp. 111-112, §§ 3.115, 3.119
2. Roman (2005) p. 48, § 1.16
3. Bourbaki, Algebra, ch. II, §10.12, p. 359
4. Mackiw, G. (1995), "A Note on the Equality of the Column and Row Rank of a Matrix", Mathematics Magazine, 68 (4): 285–286, doi:10.1080/0025570X.1995.11996337
5. Hefferon (2020) p. 200, ch. 3, Definition 2.1
6. Katznelson & Katznelson (2008) p. 52, § 2.5.1
7. Valenza (1993) p. 71, § 4.3
8. Halmos (1974) p. 90, § 50
9. Wardlaw, William P. (2005), "Row Rank Equals Column Rank", Mathematics Magazine, 78 (4): 316–318, doi:10.1080/0025570X.2005.11953349, S2CID 218542661
10. Banerjee, Sudipto; Roy, Anindya (2014), Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science (1st ed.), Chapman and Hall/CRC, ISBN 978-1420095388
11. Mirsky, Leonid (1955). An introduction to linear algebra. Dover Publications. ISBN 978-0-486-66434-7.
Sources
• Axler, Sheldon (2015). Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 978-3-319-11079-0.
• Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces. Undergraduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-90093-4.
• Hefferon, Jim (2020). Linear Algebra (4th ed.). ISBN 978-1-944325-11-4.
• Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9.
• Roman, Steven (2005). Advanced Linear Algebra. Undergraduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-24766-1.
• Valenza, Robert J. (1993) [1951]. Linear Algebra: An Introduction to Abstract Mathematics. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 3-540-94099-5.
Further reading
• Roger A. Horn and Charles R. Johnson (1985). Matrix Analysis. ISBN 978-0-521-38632-6.
• Kaw, Autar K. Two Chapters from the book Introduction to Matrix Algebra: 1. Vectors and System of Equations
• Mike Brookes: Matrix Reference Manual.
Linear algebra
• Outline
• Glossary
Basic concepts
• Scalar
• Vector
• Vector space
• Scalar multiplication
• Vector projection
• Linear span
• Linear map
• Linear projection
• Linear independence
• Linear combination
• Basis
• Change of basis
• Row and column vectors
• Row and column spaces
• Kernel
• Eigenvalues and eigenvectors
• Transpose
• Linear equations
Matrices
• Block
• Decomposition
• Invertible
• Minor
• Multiplication
• Rank
• Transformation
• Cramer's rule
• Gaussian elimination
Bilinear
• Orthogonality
• Dot product
• Hadamard product
• Inner product space
• Outer product
• Kronecker product
• Gram–Schmidt process
Multilinear algebra
• Determinant
• Cross product
• Triple product
• Seven-dimensional cross product
• Geometric algebra
• Exterior algebra
• Bivector
• Multivector
• Tensor
• Outermorphism
Vector space constructions
• Dual
• Direct sum
• Function space
• Quotient
• Subspace
• Tensor product
Numerical
• Floating-point
• Numerical stability
• Basic Linear Algebra Subprograms
• Sparse matrix
• Comparison of linear algebra libraries
Rank-into-rank
In set theory, a branch of mathematics, a rank-into-rank embedding is a large cardinal property defined by one of the following four axioms given in order of increasing consistency strength. (A set of rank < λ is one of the elements of the set Vλ of the von Neumann hierarchy.)
• Axiom I3: There is a nontrivial elementary embedding of Vλ into itself.
• Axiom I2: There is a nontrivial elementary embedding of V into a transitive class M that includes Vλ where λ is the first fixed point above the critical point.
• Axiom I1: There is a nontrivial elementary embedding of Vλ+1 into itself.
• Axiom I0: There is a nontrivial elementary embedding of L(Vλ+1) into itself with critical point below λ.
These are essentially the strongest known large cardinal axioms not known to be inconsistent in ZFC; the axiom for Reinhardt cardinals is stronger, but is not consistent with the axiom of choice.
If j is the elementary embedding mentioned in one of these axioms and κ is its critical point, then λ is the limit of $j^{n}(\kappa )$ as n goes to ω. More generally, if the axiom of choice holds, it is provable that if there is a nontrivial elementary embedding of Vα into itself then α is either a limit ordinal of cofinality ω or the successor of such an ordinal.
The axioms I0, I1, I2, and I3 were at first suspected to be inconsistent (in ZFC), as it was thought possible that Kunen's inconsistency theorem (that Reinhardt cardinals are inconsistent with the axiom of choice) could be extended to them, but this has not yet happened and they are now usually believed to be consistent.
Every I0 cardinal κ (speaking here of the critical point of j) is an I1 cardinal.
Every I1 cardinal κ (sometimes called ω-huge cardinals) is an I2 cardinal and has a stationary set of I2 cardinals below it.
Every I2 cardinal κ is an I3 cardinal and has a stationary set of I3 cardinals below it.
Every I3 cardinal κ has another I3 cardinal above it and is an n-huge cardinal for every n<ω.
Axiom I1 implies that Vλ+1 (equivalently, H(λ+)) does not satisfy V=HOD. There is no set S⊂λ definable in Vλ+1 (even from parameters Vλ and ordinals <λ+) with S cofinal in λ and |S|<λ, that is, no such S witnesses that λ is singular. And similarly for Axiom I0 and ordinal definability in L(Vλ+1) (even from parameters in Vλ). However globally, and even in Vλ,[1] V=HOD is relatively consistent with Axiom I1.
Notice that I0 is sometimes strengthened further by adding an "Icarus set", so that it would be
• Axiom Icarus set: There is a nontrivial elementary embedding of L(Vλ+1, Icarus) into itself with the critical point below λ.
The Icarus set should be in Vλ+2 − L(Vλ+1) but chosen to avoid creating an inconsistency. So for example, it cannot encode a well-ordering of Vλ+1. See section 10 of Dimonte for more details.
Notes
1. Consistency of V = HOD With the Wholeness Axiom, Paul Corazza, Archive for Mathematical Logic, No. 39, 2000.
References
• Dimonte, Vincenzo (2017), "I0 and rank-into-rank axioms", arXiv:1707.02613 [math.LO].
• Gaifman, Haim (1974), "Elementary embeddings of models of set-theory and certain subtheories", Axiomatic set theory, Proc. Sympos. Pure Math., vol. XIII, Part II, Providence R.I.: Amer. Math. Soc., pp. 33–101, MR 0376347
• Kanamori, Akihiro (2003), The Higher Infinite : Large Cardinals in Set Theory from Their Beginnings (2nd ed.), Springer, ISBN 3-540-00384-3.
• Laver, Richard (1997), "Implications between strong large cardinal axioms", Ann. Pure Appl. Logic, 90 (1–3): 79–90, doi:10.1016/S0168-0072(97)00031-6, MR 1489305.
• Solovay, Robert M.; Reinhardt, William N.; Kanamori, Akihiro (1978), "Strong axioms of infinity and elementary embeddings", Annals of Mathematical Logic, 13 (1): 73–116, doi:10.1016/0003-4843(78)90031-1.
Cartan subgroup
In the theory of algebraic groups, a Cartan subgroup of a connected linear algebraic group $G$ over a (not necessarily algebraically closed) field $k$ is the centralizer of a maximal torus. Cartan subgroups are smooth (equivalently reduced), connected, and nilpotent. If $k$ is algebraically closed, they are all conjugate to each other.[1]
For a Cartan subgroup of a Lie group, see Cartan subalgebra § Cartan subgroup.
Notice that, in the context of algebraic groups, a torus is an algebraic group $T$ such that the base extension $T_{({\bar {k}})}$ (where ${\bar {k}}$ is the algebraic closure of $k$) is isomorphic to a product of finitely many copies of the multiplicative group $\mathbf {G} _{m}=\mathbf {GL} _{1}$. Maximal such subgroups have, in the theory of algebraic groups, a role similar to that of maximal tori in the theory of Lie groups.
If $G$ is reductive (in particular, if it is semi-simple), then a torus is maximal if and only if it is its own centraliser,[2] and thus the Cartan subgroups of $G$ are precisely the maximal tori.
Example
The general linear groups $\mathbf {GL} _{n}$ are reductive. The diagonal subgroup is clearly a torus (indeed a split torus, since it is a product of n copies of $\mathbf {G} _{m}$ already before any base extension), and it can be shown to be maximal. Since $\mathbf {GL} _{n}$ is reductive, the diagonal subgroup is a Cartan subgroup.
See also
• Borel subgroup
• Algebraic group
• Algebraic torus
References
1. Milne (2017), Proposition 17.44.
2. Milne (2017), Corollary 17.84.
• Borel, Armand (1991-12-31). Linear algebraic groups. ISBN 3-540-97370-2.
• Lang, Serge (2002). Algebra. Springer. ISBN 978-0-387-95385-4.
• Milne, J. S. (2017), Algebraic Groups: The Theory of Group Schemes of Finite Type over a Field, Cambridge University Press, doi:10.1017/9781316711736, ISBN 978-1107167483, MR 3729270
• Popov, V. L. (2001) [1994], "Cartan subgroup", Encyclopedia of Mathematics, EMS Press
• Springer, Tonny A. (1998), Linear algebraic groups, Progress in Mathematics, vol. 9 (2nd ed.), Boston, MA: Birkhäuser Boston, ISBN 978-0-8176-4021-7, MR 1642713
Rank of a group
In the mathematical subject of group theory, the rank of a group G, denoted rank(G), can refer to the smallest cardinality of a generating set for G, that is
$\operatorname {rank} (G)=\min\{|X|:X\subseteq G,\langle X\rangle =G\}.$
For the torsion-free rank, see Rank of an abelian group. For the dimension of the Cartan subgroup, see Rank of a Lie group.
If G is a finitely generated group, then the rank of G is a nonnegative integer. The notion of rank of a group is a group-theoretic analog of the notion of dimension of a vector space. Indeed, for p-groups, the rank of the group P is the dimension of the vector space P/Φ(P), where Φ(P) is the Frattini subgroup.
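For very small finite groups, the rank can be computed by brute force directly from the definition. A sketch in plain Python (the helpers closure and group_rank are hypothetical, written for this illustration; the method is hopeless beyond tiny groups):

from itertools import combinations, product

def closure(gens, op, identity):
    # Subgroup generated by gens inside a finite group with operation op.
    elems = {identity}
    frontier = set(gens)
    while frontier:
        elems |= frontier
        frontier = {op(a, b) for a in elems for b in gens} - elems
    return elems

def group_rank(elements, op, identity):
    # Smallest cardinality of a generating set (brute force; tiny groups only).
    for k in range(len(elements) + 1):
        for gens in combinations(elements, k):
            if closure(gens, op, identity) == set(elements):
                return k

z6 = list(range(6))                                    # cyclic group Z/6Z
assert group_rank(z6, lambda a, b: (a + b) % 6, 0) == 1
z2z2 = list(product(range(2), repeat=2))               # Klein four-group
assert group_rank(z2z2, lambda a, b: ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2), (0, 0)) == 2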
The rank of a group is also often defined in such a way as to ensure subgroups have rank less than or equal to the whole group, which is automatically the case for dimensions of vector spaces, but not for groups such as affine groups. To distinguish these different definitions, one sometimes calls this rank the subgroup rank. Explicitly, the subgroup rank of a group G is the maximum of the ranks of its subgroups:
$\operatorname {sr} (G)=\max _{H\leq G}\min\{|X|:X\subseteq H,\langle X\rangle =H\}.$
Sometimes the subgroup rank is restricted to abelian subgroups.
Known facts and examples
• For a nontrivial group G, we have rank(G) = 1 if and only if G is a cyclic group. The trivial group T has rank(T) = 0, since the minimal generating set of T is the empty set.
• For a free abelian group $\mathbb {Z} ^{n}$ we have ${\rm {rank}}(\mathbb {Z} ^{n})=n.$
• If X is a set and G = F(X) is the free group with free basis X then rank(G) = |X|.
• If a group H is a homomorphic image (or a quotient group) of a group G then rank(H) ≤ rank(G).
• If G is a finite non-abelian simple group (e.g. G = An, the alternating group, for n > 4) then rank(G) = 2. This fact is a consequence of the Classification of finite simple groups.
• If G is a finitely generated group and Φ(G) ≤ G is the Frattini subgroup of G (which is always normal in G so that the quotient group G/Φ(G) is defined) then rank(G) = rank(G/Φ(G)).[1]
• If G is the fundamental group of a closed (that is, compact and without boundary) connected 3-manifold M, then rank(G) ≤ g(M), where g(M) is the Heegaard genus of M.[2]
• If H,K ≤ F(X) are finitely generated subgroups of a free group F(X) such that the intersection $L=H\cap K$ is nontrivial, then L is finitely generated and
rank(L) − 1 ≤ 2(rank(K) − 1)(rank(H) − 1).
This result is due to Hanna Neumann.[3][4] The Hanna Neumann conjecture states that in fact one always has rank(L) − 1 ≤ (rank(K) − 1)(rank(H) − 1). The conjecture was proved by Igor Mineyev[5] and announced independently by Joel Friedman.[6]
• According to the classic Grushko theorem, rank behaves additively with respect to taking free products, that is, for any groups A and B we have
rank(A$\ast $B) = rank(A) + rank(B).
• If $G=\langle x_{1},\dots ,x_{n}|r=1\rangle $ is a one-relator group such that r is not a primitive element in the free group F(x1,..., xn), that is, r does not belong to a free basis of F(x1,..., xn), then rank(G) = n.[7][8]
The rank problem
There is an algorithmic problem studied in group theory, known as the rank problem. The problem asks, for a particular class of finitely presented groups, whether there exists an algorithm that, given a finite presentation of a group from the class, computes the rank of that group. The rank problem is one of the harder algorithmic problems studied in group theory, and relatively little is known about it. Known results include:
• The rank problem is algorithmically undecidable for the class of all finitely presented groups. Indeed, by a classical result of Adian–Rabin, there is no algorithm to decide if a finitely presented group is trivial, so even the question of whether rank(G)=0 is undecidable for finitely presented groups.[9][10]
• The rank problem is decidable for finite groups and for finitely generated abelian groups.
• The rank problem is decidable for finitely generated nilpotent groups. The reason is that for such a group G, the Frattini subgroup of G contains the commutator subgroup of G and hence the rank of G is equal to the rank of the abelianization of G.[11]
• The rank problem is undecidable for word hyperbolic groups.[12]
• The rank problem is decidable for torsion-free Kleinian groups.[13]
• The rank problem is open for finitely generated virtually abelian groups (that is, groups containing an abelian subgroup of finite index), for virtually free groups, and for 3-manifold groups.
Generalizations and related notions
The rank of a finitely generated group G can be equivalently defined as the smallest cardinality of a set X such that there exists an onto homomorphism F(X) → G, where F(X) is the free group with free basis X. There is a dual notion of co-rank of a finitely generated group G defined as the largest cardinality of X such that there exists an onto homomorphism G → F(X). Unlike rank, co-rank is always algorithmically computable for finitely presented groups,[14] using the algorithm of Makanin and Razborov for solving systems of equations in free groups.[15][16] The notion of co-rank is related to the notion of a cut number for 3-manifolds.[17]
If p is a prime number, then the p-rank of G is the largest rank of an elementary abelian p-subgroup.[18] The sectional p-rank is the largest rank of an elementary abelian p-section (quotient of a subgroup).
See also
• Rank of an abelian group
• Prüfer rank
• Grushko theorem
• Free group
• Nielsen equivalence
Notes
1. D. J. S. Robinson. A course in the theory of groups, 2nd edn, Graduate Texts in Mathematics 80 (Springer-Verlag, 1996). ISBN 0-387-94461-3
2. Friedhelm Waldhausen. Some problems on 3-manifolds. Algebraic and geometric topology (Proc. Sympos. Pure Math., Stanford Univ., Stanford, Calif., 1976), Part 2, pp. 313–322, Proc. Sympos. Pure Math., XXXII, Amer. Math. Soc., Providence, R.I., 1978; ISBN 0-8218-1433-8
3. Hanna Neumann. On the intersection of finitely generated free groups. Publicationes Mathematicae Debrecen, vol. 4 (1956), 186–189.
4. Hanna Neumann. On the intersection of finitely generated free groups. Addendum. Publicationes Mathematicae Debrecen, vol. 5 (1957), p. 128
5. Igor Mineyev, "Submultiplicativity and the Hanna Neumann Conjecture." Ann. of Math., 175 (2012), no. 1, 393–414.
6. "Sheaves on Graphs and a Proof of the Hanna Neumann Conjecture". Math.ubc.ca. Retrieved 2012-06-12.
7. Wilhelm Magnus, Uber freie Faktorgruppen und freie Untergruppen Gegebener Gruppen, Monatshefte für Mathematik, vol. 47(1939), pp. 307–313.
8. Roger C. Lyndon and Paul E. Schupp. Combinatorial Group Theory. Springer-Verlag, New York, 2001. "Classics in Mathematics" series, reprint of the 1977 edition. ISBN 978-3-540-41158-1; Proposition 5.11, p. 107
9. W. W. Boone. Decision problems about algebraic and logical systems as a whole and recursively enumerable degrees of unsolvability. 1968 Contributions to Math. Logic (Colloquium, Hannover, 1966) pp. 13 33 North-Holland, Amsterdam
10. Charles F. Miller, III. Decision problems for groups — survey and reflections. Algorithms and classification in combinatorial group theory (Berkeley, CA, 1989), pp. 1–59, Math. Sci. Res. Inst. Publ., 23, Springer, New York, 1992; ISBN 0-387-97685-X
11. John Lennox, and Derek J. S. Robinson. The theory of infinite soluble groups. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, Oxford, 2004. ISBN 0-19-850728-3
12. G. Baumslag, C. F. Miller and H. Short. Unsolvable problems about small cancellation and word hyperbolic groups. Bulletin of the London Mathematical Society, vol. 26 (1994), pp. 97–101
13. Ilya Kapovich, and Richard Weidmann. Kleinian groups and the rank problem. Geometry and Topology, vol. 9 (2005), pp. 375–402
14. John R. Stallings. Problems about free quotients of groups. Geometric group theory (Columbus, OH, 1992), pp. 165–182, Ohio State Univ. Math. Res. Inst. Publ., 3, de Gruyter, Berlin, 1995. ISBN 3-11-014743-2
15. A. A. Razborov. Systems of equations in a free group. (in Russian) Izvestia Akademii Nauk SSSR, Seriya Matematischeskaya, vol. 48 (1984), no. 4, pp. 779–832.
16. G. S.Makanin Equations in a free group. (Russian), Izvestia Akademii Nauk SSSR, Seriya Matematischeskaya, vol. 46 (1982), no. 6, pp. 1199–1273
17. Shelly L. Harvey. On the cut number of a 3-manifold. Geometry & Topology, vol. 6 (2002), pp. 409–424
18. Aschbacher, M. (2002), Finite Group Theory, Cambridge University Press, p. 5, ISBN 978-0-521-78675-1
Finitely generated module
In mathematics, a finitely generated module is a module that has a finite generating set. A finitely generated module over a ring R may also be called a finite R-module, finite over R,[1] or a module of finite type.
Related concepts include finitely cogenerated modules, finitely presented modules, finitely related modules and coherent modules all of which are defined below. Over a Noetherian ring the concepts of finitely generated, finitely presented and coherent modules coincide.
A finitely generated module over a field is simply a finite-dimensional vector space, and a finitely generated module over the integers is simply a finitely generated abelian group.
Definition
The left R-module M is finitely generated if there exist a1, a2, ..., an in M such that for any x in M, there exist r1, r2, ..., rn in R with x = r1a1 + r2a2 + ... + rnan.
The set {a1, a2, ..., an} is referred to as a generating set of M in this case. A finite generating set need not be a basis, since it need not be linearly independent over R. What is true is: M is finitely generated if and only if there is a surjective R-linear map:
$R^{n}\to M$
for some n (M is a quotient of a free module of finite rank).
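For R = Z this surjection picture is concrete: a finitely generated abelian group is the cokernel of an integer relation matrix, and its decomposition can be read off from the Smith normal form. A minimal sketch, assuming SymPy's smith_normal_form as found in recent SymPy releases:

from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

M = Matrix([[2, 4],
            [6, 8]])                    # rows = relations among two generators
D = smith_normal_form(M, domain=ZZ)
print(D)                                # Matrix([[2, 0], [0, 4]])
# So Z^2 / (row space of M) is isomorphic to Z/2Z (+) Z/4Z.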
If a set S generates a module that is finitely generated, then there is a finite generating set that is included in S, since only finitely many elements in S are needed to express any finite generating set, and these finitely many elements form a generating set. However, it may occur that S does not contain any finite generating set of minimal cardinality. For example, the set of the prime numbers is a generating set of $\mathbb {Z} $ viewed as a $\mathbb {Z} $-module, and a generating set formed from prime numbers has at least two elements, while the singleton {1} is also a generating set.
In the case where the module M is a vector space over a field R, and the generating set is linearly independent, n is well-defined and is referred to as the dimension of M (well-defined means that any linearly independent generating set has n elements: this is the dimension theorem for vector spaces).
Any module is the union of the directed set of its finitely generated submodules.
A module M is finitely generated if and only if any increasing chain Mi of submodules with union M stabilizes: i.e., there is some i such that Mi = M. This fact with Zorn's lemma implies that every nonzero finitely generated module admits maximal submodules. If any increasing chain of submodules stabilizes (i.e., any submodule is finitely generated), then the module M is called a Noetherian module.
Examples
• If a module is generated by one element, it is called a cyclic module.
• Let R be an integral domain with K its field of fractions. Then every finitely generated R-submodule I of K is a fractional ideal: that is, there is some nonzero r in R such that rI is contained in R. Indeed, one can take r to be the product of the denominators of the generators of I. If R is Noetherian, then every fractional ideal arises in this way.
• Finitely generated modules over the ring of integers Z coincide with the finitely generated abelian groups. These are completely classified by the structure theorem, taking Z as the principal ideal domain.
• Finitely generated (say left) modules over a division ring are precisely finite dimensional vector spaces (over the division ring).
Some facts
Every homomorphic image of a finitely generated module is finitely generated. In general, submodules of finitely generated modules need not be finitely generated. As an example, consider the ring R = Z[X1, X2, ...] of all polynomials in countably many variables. R itself is a finitely generated R-module (with {1} as generating set). Consider the submodule K consisting of all those polynomials with zero constant term. Since any finite set of polynomials involves only finitely many of the variables Xi, no finite subset of K can generate K, so the R-module K is not finitely generated.
In general, a module is said to be Noetherian if every submodule is finitely generated. A finitely generated module over a Noetherian ring is a Noetherian module (and indeed this property characterizes Noetherian rings): A module over a Noetherian ring is finitely generated if and only if it is a Noetherian module. This resembles, but is not exactly Hilbert's basis theorem, which states that the polynomial ring R[X] over a Noetherian ring R is Noetherian. Both facts imply that a finitely generated commutative algebra over a Noetherian ring is again a Noetherian ring.
More generally, an algebra (e.g., ring) that is a finitely generated module is a finitely generated algebra. Conversely, if a finitely generated algebra is integral (over the coefficient ring), then it is a finitely generated module. (See integral element for more.)
Let 0 → M′ → M → M′′ → 0 be an exact sequence of modules. Then M is finitely generated if M′, M′′ are finitely generated. There are some partial converses to this. If M is finitely generated and M′′ is finitely presented (which is stronger than finitely generated; see below), then M′ is finitely generated. Also, M is Noetherian (resp. Artinian) if and only if M′, M′′ are Noetherian (resp. Artinian).
Let B be a ring and A its subring such that B is a faithfully flat right A-module. Then a left A-module F is finitely generated (resp. finitely presented) if and only if the B-module B ⊗A F is finitely generated (resp. finitely presented).[2]
Finitely generated modules over a commutative ring
For finitely generated modules over a commutative ring R, Nakayama's lemma is fundamental. Sometimes, the lemma allows one to prove phenomena of finite-dimensional vector spaces for finitely generated modules. For example, if f : M → M is a surjective R-endomorphism of a finitely generated module M, then f is also injective, and hence is an automorphism of M.[3] This says simply that M is a Hopfian module. Similarly, an Artinian module M is coHopfian: any injective endomorphism f is also a surjective endomorphism.[4]
Any R-module is an inductive limit of finitely generated R-submodules. This is useful for weakening an assumption to the finite case (e.g., the characterization of flatness with the Tor functor).
An example of a link between finite generation and integral elements can be found in commutative algebras. To say that a commutative algebra A is a finitely generated ring over R means that there exists a set of elements G = {x1, ..., xn} of A such that the smallest subring of A containing G and R is A itself. Because the ring product may be used to combine elements, more than just R-linear combinations of elements of G are generated. For example, a polynomial ring R[x] is finitely generated by {1, x} as a ring, but not as a module. If A is a commutative algebra (with unity) over R, then the following two statements are equivalent:[5]
• A is a finitely generated R module.
• A is both a finitely generated ring over R and an integral extension of R.
Generic rank
Let M be a finitely generated module over an integral domain A with the field of fractions K. Then the dimension $\operatorname {dim} _{K}(M\otimes _{A}K)$ is called the generic rank of M over A. This number is the same as the number of maximal A-linearly independent vectors in M or, equivalently, the rank of a maximal free submodule F of M (cf. Rank of an abelian group). Since $(M/F)_{(0)}=M_{(0)}/F_{(0)}=0$, $M/F$ is a torsion module. When A is Noetherian, by generic freeness, there is an element f (depending on M) such that $M[f^{-1}]$ is a free $A[f^{-1}]$-module. Then the rank of this free module is the generic rank of M.
Now suppose the integral domain A is generated as algebra over a field k by finitely many homogeneous elements of degrees $d_{i}$. Suppose M is graded as well and let $P_{M}(t)=\sum (\operatorname {dim} _{k}M_{n})t^{n}$ be the Poincaré series of M. By the Hilbert–Serre theorem, there is a polynomial F such that $P_{M}(t)=F(t)\prod (1-t^{d_{i}})^{-1}$. Then $F(1)$ is the generic rank of M.[6]
A finitely generated module over a principal ideal domain is torsion-free if and only if it is free. This is a consequence of the structure theorem for finitely generated modules over a principal ideal domain, the basic form of which says a finitely generated module over a PID is a direct sum of a torsion module and a free module. But it can also be shown directly as follows: let M be a torsion-free finitely generated module over a PID A and F a maximal free submodule. Let f be a nonzero element of A such that $fM\subset F$ (such an f exists because $M/F$ is a finitely generated torsion module). Then $fM$ is free since it is a submodule of a free module and A is a PID. But now $f:M\to fM$ is an isomorphism since M is torsion-free.
By the same argument as above, a finitely generated module over a Dedekind domain A (or more generally a semi-hereditary ring) is torsion-free if and only if it is projective; consequently, a finitely generated module over A is a direct sum of a torsion module and a projective module. A finitely generated projective module over a Noetherian integral domain has constant rank and so the generic rank of a finitely generated module over A is the rank of its projective part.
Equivalent definitions and finitely cogenerated modules
The following conditions are equivalent to M being finitely generated (f.g.):
• For any family of submodules {Ni | i ∈ I} in M, if $\sum _{i\in I}N_{i}=M\,$, then $\sum _{i\in F}N_{i}=M\,$ for some finite subset F of I.
• For any chain of submodules {Ni | i ∈ I} in M, if $\bigcup _{i\in I}N_{i}=M\,$, then Ni = M for some i in I.
• If $\phi :\bigoplus _{i\in I}R\to M$ is an epimorphism, then the restriction $\phi :\bigoplus _{i\in F}R\to M$ is an epimorphism for some finite subset F of I.
From these conditions it is easy to see that being finitely generated is a property preserved by Morita equivalence. The conditions are also convenient to define a dual notion of a finitely cogenerated module M. The following conditions are equivalent to a module being finitely cogenerated (f.cog.):
• For any family of submodules {Ni | i ∈ I} in M, if $\bigcap _{i\in I}N_{i}=\{0\}\,$, then $\bigcap _{i\in F}N_{i}=\{0\}\,$ for some finite subset F of I.
• For any chain of submodules {Ni | i ∈ I} in M, if $\bigcap _{i\in I}N_{i}=\{0\}\,$, then Ni = {0} for some i in I.
• If $\phi :M\to \prod _{i\in I}N_{i}\,$ is a monomorphism, where each $N_{i}$ is an R module, then $\phi :M\to \prod _{i\in F}N_{i}\,$ is a monomorphism for some finite subset F of I.
Both f.g. modules and f.cog. modules have interesting relationships to Noetherian and Artinian modules, and the Jacobson radical J(M) and socle soc(M) of a module. The following facts illustrate the duality between the two conditions. For a module M:
• M is Noetherian if and only if every submodule N of M is f.g.
• M is Artinian if and only if every quotient module M/N is f.cog.
• M is f.g. if and only if J(M) is a superfluous submodule of M, and M/J(M) is f.g.
• M is f.cog. if and only if soc(M) is an essential submodule of M, and soc(M) is f.g.
• If M is a semisimple module (such as soc(N) for any module N), it is f.g. if and only if f.cog.
• If M is f.g. and nonzero, then M has a maximal submodule and any quotient module M/N is f.g.
• If M is f.cog. and nonzero, then M has a minimal submodule, and any submodule N of M is f.cog.
• If N and M/N are f.g. then so is M. The same is true if "f.g." is replaced with "f.cog."
Finitely cogenerated modules must have finite uniform dimension. This is easily seen by applying the characterization using the finitely generated essential socle. Somewhat asymmetrically, finitely generated modules do not necessarily have finite uniform dimension. For example, an infinite direct product of nonzero rings is a finitely generated (cyclic!) module over itself; however, it clearly contains an infinite direct sum of nonzero submodules. Finitely generated modules do not necessarily have finite co-uniform dimension either: any ring R with unity such that R/J(R) is not a semisimple ring is a counterexample.
Finitely presented, finitely related, and coherent modules
Another formulation is this: a finitely generated module M is one for which there is an epimorphism mapping Rk onto M:
f : Rk → M.
Suppose now there is an epimorphism
φ : F → M
for a module M and a free module F.
• If the kernel of φ is finitely generated, then M is called a finitely related module. Since M is isomorphic to F/ker(φ), this basically expresses that M is obtained by taking a free module and introducing finitely many relations within F (the generators of ker(φ)).
• If the kernel of φ is finitely generated and F has finite rank (i.e. F = Rk), then M is said to be a finitely presented module. Here, M is specified using finitely many generators (the images of the k generators of F = Rk) and finitely many relations (the generators of ker(φ)). See also: free presentation. Finitely presented modules can be characterized by an abstract property within the category of R-modules: they are precisely the compact objects in this category.
• A coherent module M is a finitely generated module whose finitely generated submodules are finitely presented.
Over any ring R, coherent modules are finitely presented, and finitely presented modules are both finitely generated and finitely related. For a Noetherian ring R, finitely generated, finitely presented, and coherent are equivalent conditions on a module.
Some crossover occurs for projective or flat modules. A finitely generated projective module is finitely presented, and a finitely related flat module is projective.
It is true also that the following conditions are equivalent for a ring R:
1. R is a right coherent ring.
2. The module $R_{R}$ is a coherent module.
3. Every finitely presented right R-module is coherent.
Although coherence seems like a more cumbersome condition than finitely generated or finitely presented, it is nicer than them since the category of coherent modules is an abelian category, while, in general, neither finitely generated nor finitely presented modules form an abelian category.
See also
• Integral element
• Artin–Rees lemma
• Countably generated module
• Finite algebra
References
1. For example, Matsumura uses this terminology.
2. Bourbaki 1998, Ch 1, §3, no. 6, Proposition 11.
3. Matsumura 1989, Theorem 2.4.
4. Atiyah & Macdonald 1969, Exercise 6.1.
5. Kaplansky 1970, p. 11, Theorem 17.
6. Springer 1977, Theorem 2.5.6.
Textbooks
• Atiyah, M. F.; Macdonald, I. G. (1969), Introduction to commutative algebra, Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., pp. ix+128, MR 0242802
• Bourbaki, Nicolas (1998), Commutative algebra. Chapters 1--7 Translated from the French. Reprint of the 1989 English translation, Elements of Mathematics, Berlin: Springer-Verlag, ISBN 3-540-64239-0
• Kaplansky, Irving (1970), Commutative rings, Boston, Mass.: Allyn and Bacon Inc., pp. x+180, MR 0254021
• Lam, T. Y. (1999), Lectures on modules and rings, Graduate Texts in Mathematics No. 189, Springer-Verlag, ISBN 978-0-387-98428-5
• Lang, Serge (1997), Algebra (3rd ed.), Addison-Wesley, ISBN 978-0-201-55540-0
• Matsumura, Hideyuki (1989), Commutative ring theory, Cambridge Studies in Advanced Mathematics, vol. 8, Translated from the Japanese by M. Reid (2 ed.), Cambridge: Cambridge University Press, pp. xiv+320, ISBN 0-521-36764-6, MR 1011461
• Springer, Tonny A. (1977), Invariant theory, Lecture Notes in Mathematics, vol. 585, Springer, doi:10.1007/BFb0095644, ISBN 978-3-540-08242-2.
Rank of an elliptic curve
In mathematics, the rank of an elliptic curve is the rational Mordell–Weil rank of an elliptic curve $E$ defined over the field of rational numbers. Mordell's theorem says the group of rational points on an elliptic curve has a finite basis. This means that for any elliptic curve there is a finite subset of the rational points on the curve, from which all further rational points may be generated. If the number of rational points on a curve is infinite then some point in a finite basis must have infinite order. The number of independent basis points with infinite order is the rank of the curve.
The rank is related to several outstanding problems in number theory, most notably the Birch–Swinnerton-Dyer conjecture. It is widely believed that there is no maximum rank for an elliptic curve,[1] and it has been shown that there exist curves with rank as large as 28,[2] but it is widely believed that such curves are rare. Indeed, Goldfeld[3] and later Katz–Sarnak[4] conjectured that in a suitable asymptotic sense (see below), the rank of elliptic curves should be 1/2 on average. In other words, half of all elliptic curves should have rank 0 (meaning that the infinite part of the Mordell–Weil group is trivial) and the other half should have rank 1; curves of all higher ranks together should make up 0% of all elliptic curves.
Heights
The Mordell–Weil theorem shows that $E(\mathbb {Q} )$ is a finitely generated abelian group, thus $E(\mathbb {Q} )\cong E(\mathbb {Q} )_{\mathrm {tors} }\times \mathbb {Z} ^{r}$ where $E(\mathbb {Q} )_{\mathrm {tors} }$ is the finite torsion subgroup and r is the rank of the elliptic curve.
In order to obtain a reasonable notion of 'average', one must be able to count elliptic curves $E/\mathbb {Q} $ somehow. This requires the introduction of a height function on the set of rational elliptic curves. To define such a function, recall that a rational elliptic curve $E/\mathbb {Q} $ can be given in terms of a Weierstrass form, that is, we can write
$E:y^{2}=x^{3}+Ax+B$
for some integers $A,B$. Moreover, this model is unique if for any prime number $p$ such that $p^{4}$ divides $A$, we have $p^{6}\nmid B$. We can then assume that $A,B$ are integers that satisfy this property and define a height function on the set of elliptic curves $E/\mathbb {Q} $ by
$H(E)=H(E(A,B))=\max\{4|A|^{3},27B^{2}\}.$
It can then be shown that the number of elliptic curves $E/\mathbb {Q} $ with bounded height $H(E)$ is finite.
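The finiteness of this count is easy to check empirically. A sketch in plain Python (the helper names are hypothetical; pairs with $4A^{3}+27B^{2}=0$ are skipped, since a vanishing discriminant gives a singular curve rather than an elliptic curve):

def is_minimal_model(A, B):
    # True if no prime p has p**4 | A and p**6 | B; trial over all integers
    # is harmless, since a composite hit would imply an earlier prime hit.
    p = 2
    while p ** 4 <= abs(A) or p ** 6 <= abs(B):
        if A % p ** 4 == 0 and B % p ** 6 == 0:
            return False
        p += 1
    return True

def count_curves(X):
    n = 0
    A_bound = int((X / 4) ** (1 / 3)) + 1
    B_bound = int((X / 27) ** 0.5) + 1
    for A in range(-A_bound, A_bound + 1):
        for B in range(-B_bound, B_bound + 1):
            if max(4 * abs(A) ** 3, 27 * B ** 2) > X:
                continue                       # height exceeds X
            if 4 * A ** 3 + 27 * B ** 2 == 0:
                continue                       # singular, not an elliptic curve
            if is_minimal_model(A, B):
                n += 1
    return n

print(count_curves(10 ** 4))                   # a finite count, as claimed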
Average rank
We denote by $r(E)$ the Mordell–Weil rank of the elliptic curve $E/\mathbb {Q} $. With the height function $H(E)$ in hand, one can then define the "average rank" as a limit, provided that it exists:
$\lim _{X\rightarrow \infty }{\frac {\sum _{H(E(A,B))\leq X}r(E)}{\sum _{H(E(A,B))\leq X}1}}.$
It is not known whether or not this limit exists. However, by replacing the limit with the limit superior, one can obtain a well-defined quantity. Obtaining estimates for this quantity is therefore obtaining upper bounds for the size of the average rank of elliptic curves (provided that an average exists).
Upper bounds for the average rank
In the past two decades there has been some progress made towards the task of finding upper bounds for the average rank. A. Brumer[5] showed that, conditional on the Birch–Swinnerton-Dyer conjecture and the Generalized Riemann Hypothesis, one can obtain an upper bound of $2.3$ for the average rank. Heath-Brown showed[6] that one can obtain an upper bound of $2$, still assuming the same two conjectures. Finally, Young showed[7] that one can obtain a bound of $25/14$, still assuming both conjectures.
Bhargava and Shankar showed that the average rank of elliptic curves is bounded above by $1.5$[8] and ${\frac {7}{6}}$[9] without assuming either the Birch–Swinnerton-Dyer conjecture or the Generalized Riemann Hypothesis. This is achieved by computing the average size of the $2$-Selmer and $3$-Selmer groups of elliptic curves $E/\mathbb {Q} $ respectively.
Bhargava and Shankar's approach
Bhargava and Shankar's unconditional proof of the boundedness of the average rank of elliptic curves is obtained by using a certain exact sequence involving the Mordell-Weil group of an elliptic curve $E/\mathbb {Q} $. Denote by $E(\mathbb {Q} )$ the Mordell-Weil group of rational points on the elliptic curve $E$, $\operatorname {Sel} _{p}(E)$ the $p$-Selmer group of $E$, and let Ш${}_{E}[p]$ denote the $p$-part of the Tate–Shafarevich group of $E$. Then we have the following exact sequence
$0\rightarrow E(\mathbb {Q} )/pE(\mathbb {Q} )\rightarrow \operatorname {Sel} _{p}(E)\rightarrow $ Ш ${}_{E}[p]\rightarrow 0.$
This shows that the rank of $\operatorname {Sel} _{p}(E)$, also called the $p$-Selmer rank of $E$, defined as the non-negative integer $s$ such that $\#\operatorname {Sel} _{p}(E)=p^{s}$, is an upper bound for the Mordell-Weil rank $r$ of $E(\mathbb {Q} )$. Therefore, if one can compute or obtain an upper bound on $p$-Selmer rank of $E$, then one would be able to bound the Mordell-Weil rank on average as well.
In Binary quartic forms having bounded invariants, and the boundedness of the average rank of elliptic curves,[8] Bhargava and Shankar computed the 2-Selmer rank of elliptic curves on average. They did so by counting binary quartic forms, using a method used by Birch and Swinnerton-Dyer in their original computation of the analytic rank of elliptic curves which led to their famous conjecture.
Largest known ranks
A common conjecture is that there is no bound on the largest possible rank for an elliptic curve. In 2006, Noam Elkies discovered an elliptic curve with a rank of at least 28:[2]
y2 + xy + y = x3 − x2 − 20067762415575526585033208209338542750930230312178956502x + 34481611795030556467032985690390720374855944359319180361266008296291939448732243429
In 2020, Elkies and Zev Klagsbrun discovered a curve with a rank of exactly 20:[10][11]
y2 + xy + y = x3 − x2 − 244537673336319601463803487168961769270757573821859853707x + 961710182053183034546222979258806817743270682028964434238957830989898438151121499931
References
1. Hartnett, Kevin (31 October 2018). "Without a Proof, Mathematicians Wonder How Much Evidence Is Enough". Quanta Magazine. Retrieved 18 July 2019.
2. Dujella, Andrej. "History of elliptic curves rank records". Retrieved 3 August 2016.
3. D. Goldfeld, Conjectures on elliptic curves over quadratic fields, in Number Theory, Carbondale 1979 (Proc. Southern Illinois Conf., Southern Illinois Univ., Carbondale, Ill., 1979), Lecture Notes in Math. 751, Springer-Verlag, New York, 1979, pp. 108–118. MR0564926. Zbl 0417.14031. doi:10.1007/BFb0062705.
4. N. M. Katz and P. Sarnak, Random Matrices, Frobenius Eigenvalues, and Monodromy, Amer. Math. Soc. Colloq. Publ. 45, Amer. Math. Soc., 1999. MR1659828. Zbl 0958.11004.
5. A. Brumer, The average rank of elliptic curves. I, Invent. Math. 109 (1992), 445–472. MR1176198. Zbl 0783.14019. doi:10.1007/BF01232033.
6. D. R. Heath-Brown, The average analytic rank of elliptic curves, Duke Math. J. 122 (2004), 591–623. MR2057019. Zbl 1063.11013. doi:10.1215/S0012-7094-04-12235-3.
7. M. P. Young, Low-lying zeros of families of elliptic curves, J. Amer. Math. Soc. 19 (2006), 205–250. MR2169047. Zbl 1086.11032. doi:10.1090/S0894-0347-05-00503-5.
8. M. Bhargava and A. Shankar, Binary quartic forms having bounded invariants, and the boundedness of the average rank of elliptic curves, Annals of Mathematics 181 (2015), 191–242 doi:10.4007/annals.2015.181.1.3
9. M. Bhargava and A. Shankar, Ternary cubic forms having bounded invariants, and the existence of a positive proportion of elliptic curves having rank 0, Annals of Mathematics 181 (2015), 587–621 doi:10.4007/annals.2015.181.2.4
10. Dujella, Andrej. "History of elliptic curves rank records". Retrieved 30 March 2020.
11. Elkies, Noam. "New records for ranks of elliptic curves with torsion". NMBRTHRY Archives. Retrieved 30 March 2020.
Rank ring
In mathematics, a rank ring is a ring with a real-valued rank function behaving like the rank of an endomorphism. John von Neumann (1998) introduced rank rings in his work on continuous geometry, and showed that the ring associated to a continuous geometry is a rank ring.
Definition
John von Neumann (1998, p.231) defined a ring to be a rank ring if it is regular and has a real-valued rank function R with the following properties:
• 0 ≤ R(a) ≤ 1 for all a
• R(a) = 0 if and only if a = 0
• R(1) = 1
• R(ab) ≤ R(a), R(ab) ≤ R(b)
• If e2 = e, f 2 = f, ef = fe = 0 then R(e + f ) = R(e) + R(f ).
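The motivating example is the ring of n × n matrices over a field, which is regular, with R(a) = rank(a)/n. A minimal sketch in Python with NumPy spot-checking the axioms on this example (a numerical illustration, not a proof):

import numpy as np

n = 4
R = lambda a: np.linalg.matrix_rank(a) / n     # normalized matrix rank
rng = np.random.default_rng(1)
a = rng.standard_normal((n, n))
b = rng.standard_normal((n, n))

assert 0 <= R(a) <= 1                                    # axiom 1
assert R(np.zeros((n, n))) == 0 and R(np.eye(n)) == 1    # axioms 2 and 3
assert R(a @ b) <= min(R(a), R(b))                       # axiom 4

e = np.diag([1., 1., 0., 0.])                  # idempotents with ef = fe = 0
f = np.diag([0., 0., 1., 0.])
assert np.allclose(e @ f, 0) and np.allclose(f @ e, 0)
assert R(e + f) == R(e) + R(f)                 # axiom 5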
References
• Halperin, Israel (1965), "Regular rank rings", Canadian Journal of Mathematics, 17: 709–719, doi:10.4153/CJM-1965-071-4, ISSN 0008-414X, MR 0191926
• von Neumann, John (1936), "Examples of continuous geometries.", Proc. Natl. Acad. Sci. USA, 22 (2): 101–108, Bibcode:1936PNAS...22..101N, doi:10.1073/pnas.22.2.101, JFM 62.0648.03, JSTOR 86391, PMC 1076713, PMID 16588050
• von Neumann, John (1998) [1960], Continuous geometry, Princeton Landmarks in Mathematics, Princeton University Press, ISBN 978-0-691-05893-1, MR 0120174
Rank–nullity theorem
The rank–nullity theorem is a theorem in linear algebra, which asserts:
• the number of columns of a matrix M is the sum of the rank of M and the nullity of M; and
• the dimension of the domain of a linear transformation f is the sum of the rank of f (the dimension of the image of f) and the nullity of f (the dimension of the kernel of f).[1][2][3][4]
"Rank theorem" redirects here. For the rank theorem of multivariable calculus, see constant rank theorem.
It follows that for linear transformations of vector spaces of finite dimension, either injectivity or surjectivity implies bijectivity.
Stating the theorem
Linear transformations
Let $T:V\to W$ be a linear transformation between two vector spaces where $T$'s domain $V$ is finite dimensional. Then
$\operatorname {rank} (T)~+~\operatorname {nullity} (T)~=~\dim V,$
where $ \operatorname {rank} (T)$ is the rank of $T$ (the dimension of its image) and $\operatorname {nullity} (T)$ is the nullity of $T$ (the dimension of its kernel). In other words,
$\dim(\operatorname {Im} T)+\dim(\operatorname {Ker} T)=\dim(\operatorname {Domain} (T)).$
This theorem can be refined via the splitting lemma to be a statement about an isomorphism of spaces, not just dimensions. Explicitly, since $T$ induces an isomorphism from $V/\operatorname {Ker} (T)$ to $\operatorname {Image} (T),$ the existence of a basis for $V$ that extends any given basis of $\operatorname {Ker} (T)$ implies, via the splitting lemma, that $\operatorname {Image} (T)\oplus \operatorname {Ker} (T)\cong V.$ Taking dimensions, the rank–nullity theorem follows.
Matrices
Linear maps can be represented with matrices. More precisely, an $m\times n$ matrix M represents a linear map $f:F^{n}\to F^{m},$ where $F$ is the underlying field.[5] So, the dimension of the domain of $f$ is n, the number of columns of M, and the rank–nullity theorem for an $m\times n$ matrix M is
$\operatorname {rank} (M)+\operatorname {nullity} (M)=n.$
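A quick numerical check of the matrix form, using NumPy together with SciPy's null_space to obtain a basis of the kernel (an illustrative sketch):

import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])       # third row = first + second
rank = np.linalg.matrix_rank(A)        # 2
nullity = null_space(A).shape[1]       # 2
assert rank + nullity == A.shape[1]    # rank + nullity = n = 4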
Proofs
Here we provide two proofs. The first[2] operates in the general case, using linear maps. The second proof[6] looks at the homogeneous system $\mathbf {Ax} =\mathbf {0} ,$ where $\mathbf {A} $ is an $m\times n$ matrix of rank $r,$ and shows explicitly that there exists a set of $n-r$ linearly independent solutions that span the null space of $\mathbf {A} $.
While the theorem requires that the domain of the linear map be finite-dimensional, there is no such assumption on the codomain. This means that there are linear maps not given by matrices for which the theorem applies. Despite this, the first proof is not actually more general than the second: since the image of the linear map is finite-dimensional, we can represent the map from its domain to its image by a matrix, prove the theorem for that matrix, then compose with the inclusion of the image into the full codomain.
First proof
Let $V,W$ be vector spaces over some field $F,$ and $T$ defined as in the statement of the theorem with $\dim V=n$.
As $\operatorname {Ker} T\subset V$ is a subspace, there exists a basis for it. Suppose $\dim \operatorname {Ker} T=k$ and let
${\mathcal {K}}:=\{v_{1},\ldots ,v_{k}\}\subset \operatorname {Ker} (T)$
be such a basis.
We may now, by the Steinitz exchange lemma, extend ${\mathcal {K}}$ with $n-k$ linearly independent vectors $w_{1},\ldots ,w_{n-k}$ to form a full basis of $V$.
Let
${\mathcal {S}}:=\{w_{1},\ldots ,w_{n-k}\}\subset V\setminus \operatorname {Ker} (T)$
such that
${\mathcal {B}}:={\mathcal {K}}\cup {\mathcal {S}}=\{v_{1},\ldots ,v_{k},w_{1},\ldots ,w_{n-k}\}\subset V$
is a basis for $V$. From this, we know that
$\operatorname {Im} T=\operatorname {Span} T({\mathcal {B}})=\operatorname {Span} \{T(v_{1}),\ldots ,T(v_{k}),T(w_{1}),\ldots ,T(w_{n-k})\}$
$=\operatorname {Span} \{T(w_{1}),\ldots ,T(w_{n-k})\}=\operatorname {Span} T({\mathcal {S}}).$
We now claim that $T({\mathcal {S}})$ is a basis for $\operatorname {Im} T$. The above equality already states that $T({\mathcal {S}})$ is a generating set for $\operatorname {Im} T$; it remains to be shown that it is also linearly independent to conclude that it is a basis.
Suppose $T({\mathcal {S}})$ is not linearly independent, and let
$\sum _{j=1}^{n-k}\alpha _{j}T(w_{j})=0_{W}$
for some $\alpha _{j}\in F$.
Thus, owing to the linearity of $T$, it follows that
$T\left(\sum _{j=1}^{n-k}\alpha _{j}w_{j}\right)=0_{W}\implies \left(\sum _{j=1}^{n-k}\alpha _{j}w_{j}\right)\in \operatorname {Ker} T=\operatorname {Span} {\mathcal {K}}\subset V.$
This is a contradiction to ${\mathcal {B}}$ being a basis, unless all $\alpha _{j}$ are equal to zero. This shows that $T({\mathcal {S}})$ is linearly independent, and more specifically that it is a basis for $\operatorname {Im} T$.
To summarize, we have ${\mathcal {K}}$, a basis for $\operatorname {Ker} T$, and $T({\mathcal {S}})$, a basis for $\operatorname {Im} T$.
Finally we may state that
$\operatorname {Rank} (T)+\operatorname {Nullity} (T)=\dim \operatorname {Im} T+\dim \operatorname {Ker} T$
$=|T({\mathcal {S}})|+|{\mathcal {K}}|=(n-k)+k=n=\dim V.$
This concludes our proof.
Second proof
Let $\mathbf {A} $ be an $m\times n$ matrix with $r$ linearly independent columns (i.e. $\operatorname {Rank} (\mathbf {A} )=r$). We will show that:
1. There exists a set of $n-r$ linearly independent solutions to the homogeneous system $\mathbf {Ax} =\mathbf {0} $.
2. That every other solution is a linear combination of these $n-r$ solutions.
To do this, we will produce an $n\times (n-r)$ matrix $\mathbf {X} $ whose columns form a basis of the null space of $\mathbf {A} $.
Without loss of generality, assume that the first $r$ columns of $\mathbf {A} $ are linearly independent. So, we can write
$\mathbf {A} ={\begin{pmatrix}\mathbf {A} _{1}&\mathbf {A} _{2}\end{pmatrix}},$
where
• $\mathbf {A} _{1}$ is an $m\times r$ matrix with $r$ linearly independent column vectors, and
• $\mathbf {A} _{2}$ is an $m\times (n-r)$ matrix such that each of its $n-r$ columns is a linear combination of the columns of $\mathbf {A} _{1}$.
This means that $\mathbf {A} _{2}=\mathbf {A} _{1}\mathbf {B} $ for some $r\times (n-r)$ matrix $\mathbf {B} $ (see rank factorization) and, hence,
$\mathbf {A} ={\begin{pmatrix}\mathbf {A} _{1}&\mathbf {A} _{1}\mathbf {B} \end{pmatrix}}.$
Let
$\mathbf {X} ={\begin{pmatrix}-\mathbf {B} \\\mathbf {I} _{n-r}\end{pmatrix}},$
where $\mathbf {I} _{n-r}$ is the $(n-r)\times (n-r)$ identity matrix. So, $\mathbf {X} $ is an $n\times (n-r)$ matrix such that
$\mathbf {A} \mathbf {X} ={\begin{pmatrix}\mathbf {A} _{1}&\mathbf {A} _{1}\mathbf {B} \end{pmatrix}}{\begin{pmatrix}-\mathbf {B} \\\mathbf {I} _{n-r}\end{pmatrix}}=-\mathbf {A} _{1}\mathbf {B} +\mathbf {A} _{1}\mathbf {B} =\mathbf {0} _{m\times (n-r)}.$
Therefore, each of the $n-r$ columns of $\mathbf {X} $ are particular solutions of $\mathbf {Ax} ={0}_{{F}^{m}}$.
Furthermore, the $n-r$ columns of $\mathbf {X} $ are linearly independent because $\mathbf {Xu} =\mathbf {0} _{{F}^{n}}$ will imply $\mathbf {u} =\mathbf {0} _{{F}^{n-r}}$ for $\mathbf {u} \in {F}^{n-r}$:
$\mathbf {X} \mathbf {u} =\mathbf {0} _{{F}^{n}}\implies {\begin{pmatrix}-\mathbf {B} \\\mathbf {I} _{n-r}\end{pmatrix}}\mathbf {u} =\mathbf {0} _{{F}^{n}}\implies {\begin{pmatrix}-\mathbf {B} \mathbf {u} \\\mathbf {u} \end{pmatrix}}={\begin{pmatrix}\mathbf {0} _{{F}^{r}}\\\mathbf {0} _{{F}^{n-r}}\end{pmatrix}}\implies \mathbf {u} =\mathbf {0} _{{F}^{n-r}}.$
Therefore, the column vectors of $\mathbf {X} $ constitute a set of $n-r$ linearly independent solutions for $\mathbf {Ax} =\mathbf {0} _{\mathbb {F} ^{m}}$.
We next prove that any solution of $\mathbf {Ax} =\mathbf {0} _{{F}^{m}}$ must be a linear combination of the columns of $\mathbf {X} $.
For this, let
$\mathbf {u} ={\begin{pmatrix}\mathbf {u} _{1}\\\mathbf {u} _{2}\end{pmatrix}}\in {F}^{n}$
be any vector such that $\mathbf {Au} =\mathbf {0} _{{F}^{m}}$. Since the columns of $\mathbf {A} _{1}$ are linearly independent, $\mathbf {A} _{1}\mathbf {x} =\mathbf {0} _{{F}^{m}}$ implies $\mathbf {x} =\mathbf {0} _{{F}^{r}}$.
Therefore,
${\begin{array}{rcl}\mathbf {A} \mathbf {u} &=&\mathbf {0} _{{F}^{m}}\\\implies {\begin{pmatrix}\mathbf {A} _{1}&\mathbf {A} _{1}\mathbf {B} \end{pmatrix}}{\begin{pmatrix}\mathbf {u} _{1}\\\mathbf {u} _{2}\end{pmatrix}}&=&\mathbf {A} _{1}\mathbf {u} _{1}+\mathbf {A} _{1}\mathbf {B} \mathbf {u} _{2}&=&\mathbf {A} _{1}(\mathbf {u} _{1}+\mathbf {B} \mathbf {u} _{2})&=&\mathbf {0} _{{F}^{m}}\\\implies \mathbf {u} _{1}+\mathbf {B} \mathbf {u} _{2}&=&\mathbf {0} _{{F}^{r}}\\\implies \mathbf {u} _{1}&=&-\mathbf {B} \mathbf {u} _{2}\end{array}}$
$\implies \mathbf {u} ={\begin{pmatrix}\mathbf {u} _{1}\\\mathbf {u} _{2}\end{pmatrix}}={\begin{pmatrix}-\mathbf {B} \\\mathbf {I} _{n-r}\end{pmatrix}}\mathbf {u} _{2}=\mathbf {X} \mathbf {u} _{2}.$
This proves that any vector $\mathbf {u} $ that is a solution of $\mathbf {Ax} =\mathbf {0} $ must be a linear combination of the $n-r$ special solutions given by the columns of $\mathbf {X} $. And we have already seen that the columns of $\mathbf {X} $ are linearly independent. Hence, the columns of $\mathbf {X} $ constitute a basis for the null space of $\mathbf {A} $. Therefore, the nullity of $\mathbf {A} $ is $n-r$. Since $r$ equals the rank of $\mathbf {A} $, it follows that $\operatorname {Rank} (\mathbf {A} )+\operatorname {Nullity} (\mathbf {A} )=n$. This concludes our proof.
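The construction above is easy to check numerically. The following is a minimal sketch (not taken from any of the cited texts; the matrices are arbitrary illustrative choices) that builds a matrix of the form $(\mathbf {A} _{1}\ \ \mathbf {A} _{1}\mathbf {B} )$, forms $\mathbf {X} $, and verifies both $\mathbf {AX} =\mathbf {0} $ and the rank–nullity identity:

```python
# Minimal numerical check of the second proof's construction (illustrative only).
import numpy as np

A1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 2.0]])              # m x r block with independent columns (m=3, r=2)
B = np.array([[1.0, -1.0],
              [2.0,  0.5]])              # r x (n-r) block, so n = 4
A = np.hstack([A1, A1 @ B])              # A = (A1 | A1 B) has rank r = 2

X = np.vstack([-B, np.eye(B.shape[1])])  # X = (-B ; I_{n-r}), an n x (n-r) matrix

print(np.allclose(A @ X, 0))             # True: every column of X solves Ax = 0
r = np.linalg.matrix_rank(A)
nullity = X.shape[1]                     # n - r, as the proof shows
print(r + nullity == A.shape[1])         # True: Rank(A) + Nullity(A) = n
```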
A third fundamental subspace
When $T:V\to W$ is a linear transformation between two finite-dimensional vector spaces, with $n=\dim(V)$ and $m=\dim(W)$ (so that it can be represented by an $m\times n$ matrix $M$), the rank–nullity theorem asserts that if $T$ has rank $r$, then $n-r$ is the dimension of the null space of $M$, which represents the kernel of $T$. In some texts, a third fundamental subspace associated to $T$ is considered alongside its image and kernel: the cokernel of $T$ is the quotient space $W/\operatorname {Image} (T)$, and its dimension is $m-r$. This dimension formula (which might also be rendered $\dim \operatorname {Image} (T)+\dim \operatorname {Coker} (T)=\dim(W)$) together with the rank–nullity theorem is sometimes called the fundamental theorem of linear algebra.[7][8]
Reformulations and generalizations
This theorem is a statement of the first isomorphism theorem of algebra for the case of vector spaces; it generalizes to the splitting lemma.
In more modern language, the theorem can also be phrased as saying that each short exact sequence of vector spaces splits. Explicitly, given that
$0\rightarrow U\rightarrow V\mathbin {\overset {T}{\rightarrow }} R\rightarrow 0$
is a short exact sequence of vector spaces, then $U\oplus R\cong V$, hence
$\dim(U)+\dim(R)=\dim(V).$
Here $R$ plays the role of $\operatorname {Im} T$ and $U$ is $\operatorname {Ker} T$, i.e.
$0\rightarrow \ker T\mathbin {\hookrightarrow } V\mathbin {\overset {T}{\rightarrow }} \operatorname {im} T\rightarrow 0$
In the finite-dimensional case, this formulation admits a generalization: if
$0\rightarrow V_{1}\rightarrow V_{2}\rightarrow \cdots \rightarrow V_{r}\rightarrow 0$
is an exact sequence of finite-dimensional vector spaces, then[9]
$\sum _{i=1}^{r}(-1)^{i}\dim(V_{i})=0.$
The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the index of a linear map. The index of a linear map $T\in \operatorname {Hom} (V,W)$, where $V$ and $W$ are finite-dimensional, is defined by
$\operatorname {index} T=\dim \operatorname {Ker} T-\dim \operatorname {Coker} T.$
Intuitively, $\dim \operatorname {Ker} T$ is the number of independent solutions $v$ of the equation $Tv=0$, and $\dim \operatorname {Coker} T$ is the number of independent restrictions that have to be put on $w$ to make $Tv=w$ solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement
$\operatorname {index} T=\dim V-\dim W.$
We see that we can easily read off the index of the linear map $T$ from the involved spaces, without any need to analyze $T$ in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off the geometry of the involved spaces.
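As a quick numerical illustration of this independence (a sketch with an arbitrarily chosen matrix, not from the cited texts), note that $\dim \operatorname {Ker} T=n-r$ and $\dim \operatorname {Coker} T=m-r$ both depend on the rank $r$, but their difference does not:

```python
# index T = dim Ker T - dim Coker T equals n - m for any m x n matrix.
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # a map T: R^3 -> R^2, so m = 2, n = 3
m, n = M.shape
r = np.linalg.matrix_rank(M)

index = (n - r) - (m - r)         # dim Ker T - dim Coker T; the r's cancel
print(index == n - m)             # True, whatever the entries of M are
```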
Citations
1. Axler (2015) p. 63, §3.22
2. Friedberg, Insel & Spence (2014) p. 70, §2.1, Theorem 2.3
3. Katznelson & Katznelson (2008) p. 52, §2.5.1
4. Valenza (1993) p. 71, §4.3
5. Friedberg, Insel & Spence (2014) pp. 103-104, §2.4, Theorem 2.20
6. Banerjee, Sudipto; Roy, Anindya (2014), Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science (1st ed.), Chapman and Hall/CRC, ISBN 978-1420095388
7. Strang, Gilbert (1988), Linear Algebra and Its Applications (3rd ed.), Orlando: Saunders
8. Strang, Gilbert (1993), "The fundamental theorem of linear algebra" (PDF), American Mathematical Monthly, 100 (9): 848–855, CiteSeerX 10.1.1.384.2309, doi:10.2307/2324660, JSTOR 2324660
9. Zaman, Ragib. "Dimensions of vector spaces in an exact sequence". Mathematics Stack Exchange. Retrieved 27 October 2015.
References
• Axler, Sheldon (2015). Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 978-3-319-11079-0.
• Banerjee, Sudipto; Roy, Anindya (2014), Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science (1st ed.), Chapman and Hall/CRC, ISBN 978-1420095388
• Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (2014). Linear Algebra (4th ed.). Pearson Education. ISBN 978-0130084514.
• Meyer, Carl D. (2000), Matrix Analysis and Applied Linear Algebra, SIAM, ISBN 978-0-89871-454-8.
• Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9.
• Valenza, Robert J. (1993) [1951]. Linear Algebra: An Introduction to Abstract Mathematics. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 3-540-94099-5.
External links
• Gilbert Strang, MIT Linear Algebra Lecture on the Four Fundamental Subspaces, from MIT OpenCourseWare
Ranked poset
In mathematics, a ranked partially ordered set or ranked poset may be either:
• a graded poset, or
• a poset with the property that for every element x, all maximal chains among those with x as greatest element have the same finite length, or
• a poset in which all maximal chains have the same finite length.
The second definition differs from the first in that it requires all minimal elements to have the same rank; for posets with a least element, however, the two requirements are equivalent. The third definition is even more strict in that it excludes posets with infinite chains and also requires all maximal elements to have the same rank. Richard P. Stanley defines a graded poset of length n as one in which all maximal chains have length n.[1]
References
1. Richard Stanley, Enumerative Combinatorics, vol.1 p.99, Cambridge Studies in Advanced Mathematics 49, Cambridge University Press, 1995, ISBN 0-521-66351-2
Rankin–Cohen bracket
In mathematics, the Rankin–Cohen bracket of two modular forms is another modular form, generalizing the product of two modular forms. Rankin (1956, 1957) gave some general conditions for polynomials in derivatives of modular forms to be modular forms, and Cohen (1975) found explicit examples of such polynomials that give Rankin–Cohen brackets. They were named by Zagier (1994), who introduced Rankin–Cohen algebras as an abstract setting for Rankin–Cohen brackets.
Definition
If $f(\tau )$ and $g(\tau )$ are modular forms of weight k and h respectively then their nth Rankin–Cohen bracket [f,g]n is given by
$[f,g]_{n}={\frac {1}{(2\pi i)^{n}}}\sum _{r+s=n}(-1)^{r}{\binom {k+n-1}{s}}{\binom {h+n-1}{r}}{\frac {\mathrm {d} ^{r}f}{\mathrm {d} \tau ^{r}}}{\frac {\mathrm {d} ^{s}g}{\mathrm {d} \tau ^{s}}}\ .$
It is a modular form of weight k + h + 2n. Note that the factor of $(2\pi i)^{n}$ is included so that the q-expansion coefficients of $[f,g]_{n}$ are rational if those of $f$ and $g$ are. $d^{r}f/d\tau ^{r}$ and $d^{s}g/d\tau ^{s}$ are the standard derivatives, as opposed to the derivative with respect to the square of the nome which is sometimes also used.
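For orientation, the two lowest brackets can be written out directly from the definition (a routine expansion, not displayed in the sources cited here). The zeroth bracket is the ordinary product and the first bracket is antisymmetric:
$[f,g]_{0}=fg,\qquad [f,g]_{1}={\frac {1}{2\pi i}}\left(kf{\frac {\mathrm {d} g}{\mathrm {d} \tau }}-h{\frac {\mathrm {d} f}{\mathrm {d} \tau }}g\right),$
of weights $k+h$ and $k+h+2$ respectively, with $[g,f]_{1}=-[f,g]_{1}$.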
Representation theory
The mysterious formula for the Rankin–Cohen bracket can be explained in terms of representation theory. Modular forms can be regarded as lowest weight vectors for discrete series representations of SL2(R) in a space of functions on SL2(R)/SL2(Z). The tensor product of two lowest weight representations corresponding to modular forms f and g splits as a direct sum of lowest weight representations indexed by non-negative integers n, and a short calculation shows that the corresponding lowest weight vectors are the Rankin–Cohen brackets [f,g]n.
Rings of modular forms
The zeroth Rankin–Cohen bracket of two modular forms is simply their product, while the first Rankin–Cohen bracket, being antisymmetric, gives a Lie bracket when a ring of modular forms is considered as a Lie algebra.
References
• Cohen, Henri (1975), "Sums involving the values at negative integers of L-functions of quadratic characters", Math. Ann., 217 (3): 271–285, doi:10.1007/BF01436180, MR 0382192, Zbl 0311.10030
• Rankin, R. A. (1956), "The construction of automorphic forms from the derivatives of a given form", J. Indian Math. Soc., New Series, 20: 103–116, MR 0082563, Zbl 0072.08601
• Rankin, R. A. (1957), "The construction of automorphic forms from the derivatives of given forms", Michigan Math. J., 4: 181–186, doi:10.1307/mmj/1028989013, MR 0092870
• Zagier, Don (1994), "Modular forms and differential operators", Proc. Indian Acad. Sci. Math. Sci., K. G. Ramanathan memorial issue, 104 (1): 57–75, doi:10.1007/BF02830874, MR 1280058, Zbl 0806.11022
Rankin–Selberg method
In mathematics, the Rankin–Selberg method, introduced by Rankin (1939) and Selberg (1940), also known as the theory of integral representations of L-functions, is a technique for directly constructing and analytically continuing several important examples of automorphic L-functions. Some authors reserve the term for a special type of integral representation, namely those that involve an Eisenstein series. It has been one of the most powerful techniques for studying the Langlands program.
History
The theory in some sense dates back to Bernhard Riemann, who constructed his zeta function as the Mellin transform of Jacobi's theta function. Riemann used asymptotics of the theta function to obtain the analytic continuation, and the automorphy of the theta function to prove the functional equation. Erich Hecke, and later Hans Maass, applied the same Mellin transform method to modular forms on the upper half-plane, after which Riemann's example can be seen as a special case.
Robert Alexander Rankin and Atle Selberg independently constructed their convolution L-functions, now thought of as the Langlands L-function associated to the tensor product of standard representation of GL(2) with itself. Like Riemann, they used an integral of modular forms, but one of a different type: they integrated the product of two weight k modular forms f, g with a real analytic Eisenstein series E(τ,s) over a fundamental domain D of the modular group SL2(Z) acting on the upper half plane
$\displaystyle \int _{D}f(\tau ){\overline {g(\tau )}}E(\tau ,s)\,y^{k-2}\,dx\,dy,\qquad \tau =x+iy$.
The integral converges absolutely if one of the two forms is cuspidal; otherwise the asymptotics must be used to get a meromorphic continuation, as Riemann did. The analytic continuation and functional equation then boil down to those of the Eisenstein series. The integral was identified with the convolution L-function by a technique called "unfolding", in which the definition of the Eisenstein series and the range of integration are converted into a simpler expression that more readily exhibits the L-function as a Dirichlet series. The combination of unfolding with global control over the analytic properties is what makes the technique special and successful.
Modern adelic theory
Hervé Jacquet and Robert Langlands later gave adelic integral representations for the standard, and tensor product L-functions that had been earlier obtained by Riemann, Hecke, Maass, Rankin, and Selberg. They gave a very complete theory, in that they elucidated formulas for all local factors, stated the functional equation in a precise form, and gave sharp analytic continuations.
Generalizations and limitations
Nowadays one has integral representations for a large constellation of automorphic L-functions, though with two frustrating caveats. The first is that it is not at all clear which L-functions have integral representations, or how they may be found; it is feared that the method is near exhaustion, though time and again new examples are found via clever arguments. The second is that, in general, it is difficult or perhaps even impossible to compute the local integrals after the unfolding stage. This means that the integrals may have the desired analytic properties but may not represent an L-function (instead representing something close to it).
Thus, having an integral representation for an L-function by no means indicates its analytic properties are resolved: there may be serious analytic issues remaining. At minimum, though, it ensures the L-function has an algebraic construction through formal manipulations of an integral of automorphic forms, and that at all but a finite number of places it has the conjectured Euler product of a particular L-function. In many situations the Langlands–Shahidi method gives complementary information.
Notable examples
• Standard L-function on GL(n) (Godement–Jacquet). The theory was completely resolved in the original manuscript.
• Standard L-function on classical groups (Piatetski-Shapiro–Rallis). This construction was known as the doubling method and works for non-generic representations as well.
• Tensor product L-function on GL(n) × G with G a classical group (Cai–Friedberg–Ginzburg–Kaplan). This construction was a vast generalization of the doubling method, now known as the generalized doubling method.
• Tensor product L-function on GL(n) × GL(m) (includes the standard L-function if m = 1), due to Jacquet, Piatetski-Shapiro, and Shalika. The theory was completely resolved by Moeglin–Waldspurger, and was reverse-engineered to establish the "converse theorem".
• Symmetric square on GL(n) due to Shimura, and Gelbart–Jacquet (n = 2), Piatetski-Shapiro and Patterson (n = 3), and Bump–Ginzburg (n > 3).
• Exterior square on GL(n), due to Jacquet–Shalika and Bump–Ginzburg.
• Triple Product on GL(2) × GL(2) × GL(2) (Garrett, as well as Harris, Ikeda, Piatetski-Shapiro, Rallis, Ramakrishnan, and Orloff).
• Symmetric cube on GL(2) (Bump–Ginzburg–Hoffstein).
• Symmetric fourth power on GL(2) (Ginzburg–Rallis).
• Standard L-function of E6 and E7 (Ginzburg).
• Standard L-function of G2 (Ginzburg–Hundley, Gurevich–Segal).
References
• Bump, Daniel (1989), "The Rankin-Selberg method: a survey", Number theory, trace formulas and discrete groups (Oslo, 1987), Boston, MA: Academic Press, pp. 49–109, MR 0993311
• Bump, Daniel (2005), "The Rankin-Selberg method: an introduction and survey", in Cogdell, James W.; Jiang, Dihua; Kudla, Stephen S.; Soudry, David; Stanton, Robert (eds.), Automorphic representations, L-functions and applications: progress and prospects, Ohio State Univ. Math. Res. Inst. Publ., vol. 11, Berlin: de Gruyter, pp. 41–73, ISBN 978-3-11-017939-2, MR 2192819
• Rankin, Robert A. (1939), "Contributions to the theory of Ramanujan's function τ(n) and similar arithmetical functions. I. The zeros of the function Σn=1∞τ(n)/ns on the line R s=13/2. II. The order of the Fourier coefficients of integral modular forms", Proc. Cambridge Philos. Soc., 35: 351–372, doi:10.1017/S0305004100021095, MR 0000411
• Selberg, Atle (1940), "Bemerkungen über eine Dirichletsche Reihe, die mit der Theorie der Modulformen nahe verbunden ist", Arch. Math. Naturvid., 43: 47–50, MR 0002626
Rankit
In statistics, the rankits of a set of data are the expected values of the order statistics of a sample, of the same size as the data, drawn from the standard normal distribution. They are primarily used in the normal probability plot, a graphical technique for normality testing.
Example
This is perhaps most readily understood by means of an example. If an i.i.d. sample of six items is taken from a normally distributed population with expected value 0 and variance 1 (the standard normal distribution) and then sorted into increasing order, the expected values of the resulting order statistics are:
−1.2672, −0.6418, −0.2016, 0.2016, 0.6418, 1.2672.
Suppose the numbers in a data set are
65, 75, 16, 22, 43, 40.
Then one may sort these and line them up with the corresponding rankits; in order they are
16, 22, 40, 43, 65, 75,
which yields the points:
data point  rankit
16  −1.2672
22  −0.6418
40  −0.2016
43  0.2016
65  0.6418
75  1.2672
These points are then plotted as the vertical and horizontal coordinates of a scatter plot.
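Rankits can be computed numerically from the standard formula for the density of the i-th order statistic. The following is a minimal sketch (a numerical illustration, not part of the original article) that reproduces the six values above:

```python
# Rankit = E[X_(i)] for n i.i.d. standard normals, computed from the
# order-statistic density n*C(n-1, i-1) * phi(x) * Phi(x)^(i-1) * (1-Phi(x))^(n-i).
from math import comb
from scipy import integrate
from scipy.stats import norm

def rankit(i, n):
    """Expected value of the i-th smallest of n standard normal variates."""
    integrand = lambda x: (x * norm.pdf(x)
                           * norm.cdf(x) ** (i - 1)
                           * norm.sf(x) ** (n - i))
    value, _ = integrate.quad(integrand, -10, 10)   # tails beyond ±10 are negligible
    return n * comb(n - 1, i - 1) * value

print([round(rankit(i, 6), 4) for i in range(1, 7)])
# [-1.2672, -0.6418, -0.2016, 0.2016, 0.6418, 1.2672]
```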
Alternative method
Alternatively, rather than sort the data points, one may rank them, and rearrange the rankits accordingly. This yields the same pairs of numbers, but in a different order.
For:
65, 75, 16, 22, 43, 40,
the corresponding ranks are:
5, 6, 1, 2, 4, 3,
i.e., the number appearing first is the 5th-smallest, the number appearing second is the 6th-smallest, the number appearing third is the smallest, the number appearing fourth is the 2nd-smallest, etc. One rearranges the expected normal order statistics accordingly, getting the rankits of this data set:
data point  rank  rankit
65  5  0.6418
75  6  1.2672
16  1  −1.2672
22  2  −0.6418
43  4  0.2016
40  3  −0.2016
Rankit plot
A graph plotting the rankits on the horizontal axis and the data points on the vertical axis is called a rankit plot or a normal probability plot. Such a plot is necessarily nondecreasing. In large samples from a normally distributed population, such a plot will approximate a straight line. Substantial deviations from straightness are considered evidence against normality of the distribution.
Rankit plots are usually used to visually demonstrate whether data are from a specified probability distribution.
A rankit plot is a kind of Q–Q plot – it plots the order statistics (quantiles) of the sample against certain quantiles (the rankits) of the assumed normal distribution. Q–Q plots may use other quantiles for the normal distribution, however.
History
The rankit plot and the word rankit were introduced by the biologist and statistician Chester Ittner Bliss (1899–1979).
See also
• Probit analysis developed by C. I. Bliss in 1934.
External links
• Engineering Statistics Handbook
Ranklet
In statistics, a ranklet is an orientation-selective non-parametric feature which is based on the computation of Mann–Whitney–Wilcoxon (MWW) rank-sum test statistics.[1] Ranklets achieve a response similar to Haar wavelets, as they share the same pattern of orientation selectivity, multi-scale nature and a suitable notion of completeness.[2] They were invented by Fabrizio Smeraldi in 2002.
Rank-based (non-parametric) features have become popular in the field of image processing for their robustness in detecting outliers and invariance to monotonic transformations such as brightness, contrast changes and gamma correction.
The MWW is a combination of the Wilcoxon rank-sum test and the Mann–Whitney U-test. It is a non-parametric alternative to the t-test for testing the hypothesis that two independent samples come from the same distribution. It assesses whether two samples of observations, usually referred to as Treatment T and Control C, come from the same distribution; the samples do not have to be normally distributed.
The Wilcoxon rank-sum statistics Ws is determined as:[3]
$W_{s}=\sum _{i=1}^{N}\pi _{i}V_{i}{\text{ where }}\pi _{i}={\text{rank of element }}i{\text{ and }}V_{i}={\begin{cases}0&{\text{ for }}\pi _{i}\in C\\[3pt]1&{\text{ for }}\pi _{i}\in T\end{cases}}$
Subsequently, let MW be the Mann–Whitney statistics defined by:
$MW=W_{s}-{\frac {m(m+1)}{2}}$
where m is the number of Treatment values.
A ranklet R is defined as the normalization of MW in the range [−1, +1]:
$R={\frac {MW}{mn/2}}-1$
where a positive value means that the Treatment region is brighter than the Control region, and a negative value otherwise.
Example
Suppose $T=\lbrace 5,9,1,10,15\rbrace $ and $C=\lbrace 20,4,7,13,19,11\rbrace $ then
Intensity 1 4 5 7 9 10 11 13 15 19 20
Sample T C T C T T C C T C C
Rank 1 2 3 4 5 6 7 8 9 10 11
• $W_{s}={\Big \lbrace }1+3+5+6+9{\Big \rbrace }=24$
• $MW=24-[5\times (5+1)/2]=9$
• $R=[9/[5\times 6/2]]-1=-0.4$
Hence, in the above example the Control region was a little bit brighter than the Treatment region.
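The whole computation is straightforward to script. Below is a minimal sketch (an illustration written for this example, not code from the cited sources) that reproduces the value above:

```python
# Ranklet of a Treatment/Control split: rank the pooled intensities, take the
# Wilcoxon rank-sum of the Treatment ranks, convert to Mann-Whitney, normalize.
from scipy.stats import rankdata

def ranklet(T, C):
    pooled = list(T) + list(C)
    ranks = rankdata(pooled)            # ranks 1..(m+n); ties would get average ranks
    m, n = len(T), len(C)
    Ws = ranks[:m].sum()                # rank-sum over the Treatment sample
    MW = Ws - m * (m + 1) / 2           # Mann-Whitney statistic
    return MW / (m * n / 2) - 1         # normalized to the range [-1, +1]

print(ranklet([5, 9, 1, 10, 15], [20, 4, 7, 13, 19, 11]))   # -0.4
```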
Method
Since ranklets are non-linear filters, they can only be applied in the spatial domain. Filtering with ranklets involves dividing an image window W into Treatment and Control regions according to the required orientation (in the original article this split is shown in a figure).
Subsequently, Wilcoxon rank-sum test statistics are computed in order to determine the intensity variations among conveniently chosen regions of the samples in W. The intensity values of both regions are then replaced by their respective ranking scores. These ranking scores determine a pairwise comparison between the T and C regions; a ranklet essentially counts the number of T×C pairs in which the pixel from T is brighter. Hence a positive value means that the Treatment values are brighter than the Control values, and vice versa.
References
1. "www.Ranklets.net". www.eecs.qmul.ac.uk. Retrieved 2022-06-05.
2. Smeraldi, F. (August 2002). "Ranklets: Orientation selective non-parametric features applied to face detection". Object recognition supported by user interaction for service robots. Vol. 3. pp. 379–382. doi:10.1109/ICPR.2002.1047924. ISBN 0-7695-1695-X. S2CID 16667804.
3. "www.Ranklets.net". www.eecs.qmul.ac.uk. Retrieved 2022-06-05.
External links
• Matlab RankletFilter.m -> source file to decompose an image into Intensity Ranklets
Ranthony Edmonds
Ranthony A. C. Edmonds is an American mathematician specializing in commutative ring theory, factorization theory, and applied algebraic topology. She is a postdoctoral fellow in the department of mathematics at the Ohio State University.
Ranthony Edmonds
Alma mater: University of Iowa
Known for: Data Science
Scientific career
Fields: Mathematics
Institutions: Ohio State University
Doctoral advisor: Daniel D. Anderson
Early life and education
Edmonds was born in Birmingham, Alabama and was raised in Lexington, Kentucky. She earned dual degrees in mathematics and English from the University of Kentucky in 2011. Edmonds earned her master's degree in Mathematical Sciences at Eastern Kentucky University in 2013 and her Ph.D. in mathematics from the University of Iowa in 2018.[1] Edmonds was a Mathematical Association of America (MAA) Project NExT fellow until 2019.[2]
Career
Edmonds is a National Science Foundation Mathematical and Physical Sciences Ascending Postdoctoral Research Fellow in the Department of Mathematics at Ohio State University.[3] Her main areas of expertise include commutative ring theory, factorization theory, and applied algebraic topology.[3] She is currently a fellow of the Early Career Fellowship from Mathematically Gifted and Black and the Society for Industrial and Applied Mathematics, which recognizes the achievements of early-career applied mathematicians, especially those belonging to racial and ethnic groups that have been historically excluded from the mathematical sciences.[4]
While working at Ohio State University, Edmonds was a first-round awardee of the Seed Fund for Racial Justice in 2020.[5] The Fund seeks to develop research approaches that contribute to the elimination of racism on a local and national scale. Edmonds was awarded for leading a case study seeking to become the first comprehensive historical study of Black mathematicians at a single U.S. institution.[6]
In 2021, Edmonds and John Johnson co-developed a course titled Intersections of Math and Society: Hidden Figures.[7] She advocates for math community outreach.[8]
References
1. "| Ranthony A.C. Edmonds". Retrieved 2023-04-10.
2. "Fellow Search Form | Mathematical Association of America". www.maa.org. Retrieved 2023-04-10.
3. "Ranthony Edmonds | Department of Mathematics". math.osu.edu. Retrieved 2023-04-10.
4. "MGB-SIAM Early Career Fellowship". www.siam.org. Retrieved 2023-04-10.
5. "First round awardees for the Seed Fund for Racial Justice announced". research.osu.edu. Retrieved 2023-04-10.
6. Hendrix, Sheridan (March 10, 2023). "Meet an Ohio State researcher introducing students to Black mathematicians through history". The Columbus Dispatch. Retrieved 2023-04-21.
7. Ramirez, Rebecca; Sofia, Madeline K. (April 19, 2021). "A Classroom Where Math And Community Intersect". NPR. Retrieved 2023-04-21.
8. Ramirez, Rebecca; Sofia, Madeline K. (December 28, 2021). "Our Favorite Things: Math And Community In The Classroom". NPR. Retrieved 2023-04-21.
Raoul Bott
Raoul Bott (September 24, 1923 – December 20, 2005)[1] was a Hungarian-American mathematician known for numerous foundational contributions to geometry in its broad sense. He is best known for his Bott periodicity theorem, the Morse–Bott functions which he used in this context, and the Borel–Bott–Weil theorem.
Raoul Bott
Raoul Bott in 1986
Born: September 24, 1923, Budapest, Hungary
Died: December 20, 2005 (aged 82), San Diego, California
Nationality: Hungarian American
Alma mater: McGill University; Carnegie Mellon University
Known for: Bott cannibalistic class; Bott periodicity theorem; Bott residue formula; Bott–Duffin synthesis; Bott–Samelson resolution; Bott–Taubes polytope; Bott–Virasoro group; Atiyah–Bott formula; Atiyah–Bott fixed-point theorem; Borel–Weil–Bott theorem; Morse–Bott theory
Awards: Veblen Prize (1964); Jeffery–Williams Prize (1983); National Medal of Science (1987); Steele Prize (1990); Wolf Prize (2000); ForMemRS (2005)
Scientific career
Fields: Mathematics
Institutions: University of Michigan in Ann Arbor; Harvard University
Doctoral advisor: Richard Duffin
Doctoral students: Edward B. Curtis, Harold Edwards, Nancy Hingston, Peter Landweber, Robert MacPherson, Daniel Quillen, Stephen Smale, Susan Tolman, Eric Weinstein
Early life
Bott was born in Budapest, Hungary, the son of Margit Kovács and Rudolph Bott.[2] His father was of Austrian descent, and his mother was of Hungarian Jewish descent; Bott was raised a Catholic by his mother and stepfather.[3][4] Bott grew up in Czechoslovakia and spent his working life in the United States. His family emigrated to Canada in 1938, and subsequently he served in the Canadian Army in Europe during World War II.
Career
Bott later went to college at McGill University in Montreal, where he studied electrical engineering. He then earned a PhD in mathematics from Carnegie Mellon University in Pittsburgh in 1949. His thesis, titled Electrical Network Theory, was written under the direction of Richard Duffin. Afterward, he began teaching at the University of Michigan in Ann Arbor. Bott continued his studies at the Institute for Advanced Study in Princeton.[5] He was a professor at Harvard University from 1959 to 1999. In 2005 Bott died of cancer in San Diego.
With Richard Duffin at Carnegie Mellon, Bott studied the existence of electronic filters corresponding to given positive-real functions. In 1949 they proved[6] a fundamental theorem of filter synthesis. Duffin and Bott extended earlier work by Otto Brune, showing that the requisite functions of complex frequency s could be realized by a passive network of inductors and capacitors. The proof, relying on induction on the sum of the degrees of the polynomials in the numerator and denominator of the rational function, was published in the Journal of Applied Physics, volume 20, page 816. In his 2000 interview[7] with Allyn Jackson of the American Mathematical Society, he explained that he sees "networks as discrete versions of harmonic theory", so his experience with network synthesis and electronic filter topology introduced him to algebraic topology.
Bott met Arnold S. Shapiro at the IAS and they worked together. He studied the homotopy theory of Lie groups, using methods from Morse theory, leading to the Bott periodicity theorem (1957). In the course of this work, he introduced Morse–Bott functions, an important generalization of Morse functions.
This led to his role as collaborator over many years with Michael Atiyah, initially via the part played by periodicity in K-theory. Bott made important contributions towards the index theorem, especially in formulating related fixed-point theorems, in particular the so-called 'Woods Hole fixed-point theorem', a combination of the Riemann–Roch theorem and Lefschetz fixed-point theorem (it is named after Woods Hole, Massachusetts, the site of a conference at which collective discussion formulated it).[8] The major Atiyah–Bott papers on what is now the Atiyah–Bott fixed-point theorem were written in the years up to 1968; they collaborated further in recovering, in contemporary language, the results of Ivan Petrovsky on Petrovsky lacunas of hyperbolic partial differential equations, prompted by Lars Gårding. In the 1980s, Atiyah and Bott investigated gauge theory, using the Yang–Mills equations on a Riemann surface to obtain topological information about the moduli spaces of stable bundles on Riemann surfaces. In 1983 he spoke to the Canadian Mathematical Society in a talk he called "A topologist marvels at Physics".[9]
He is also well known in connection with the Borel–Bott–Weil theorem on representation theory of Lie groups via holomorphic sheaves and their cohomology groups, and for work on foliations. With Chern he worked on Nevanlinna theory, studied holomorphic vector bundles over complex analytic manifolds, and introduced the Bott–Chern classes, useful in Arakelov theory and in algebraic number theory.
He introduced Bott–Samelson varieties and the Bott residue formula for complex manifolds and the Bott cannibalistic class.
Awards
In 1964, he was awarded the Oswald Veblen Prize in Geometry by the American Mathematical Society. In 1983, he was awarded the Jeffery–Williams Prize by the Canadian Mathematical Society. In 1987, he was awarded the National Medal of Science.[10]
In 2000, he received the Wolf Prize. In 2005, he was elected an Overseas Fellow of the Royal Society of London.
Students
Bott had 35 PhD students, including Stephen Smale, Lawrence Conlon, Daniel Quillen, Peter Landweber, Robert MacPherson, Robert W. Brooks, Robin Forman, Rama Kocherlakota, Susan Tolman, András Szenes, Kevin Corlette,[11] and Eric Weinstein.[12][13][14] Smale and Quillen won Fields Medals in 1966 and 1978 respectively.
Publications
• 1995: Collected Papers. Vol. 4. Mathematics Related to Physics. Edited by Robert MacPherson. Contemporary Mathematicians. Birkhäuser Boston, xx+485 pp. ISBN 0-8176-3648-X MR1321890
• 1995: Collected Papers. Vol. 3. Foliations. Edited by Robert D. MacPherson. Contemporary Mathematicians. Birkhäuser, xxxii+610 pp. ISBN 0-8176-3647-1 MR1321886
• 1994: Collected Papers. Vol. 2. Differential Operators. Edited by Robert D. MacPherson. Contemporary Mathematicians. Birkhäuser, xxxiv+802 pp. ISBN 0-8176-3646-3 MR1290361
• 1994: Collected Papers. Vol. 1. Topology and Lie Groups. Edited by Robert D. MacPherson. Contemporary Mathematicians. Birkhäuser, xii+584 pp. ISBN 0-8176-3613-7 MR1280032
• 1982: (with Loring W. Tu) Differential Forms in Algebraic Topology. Graduate Texts in Mathematics #82. Springer-Verlag, New York-Berlin. xiv+331 pp. ISBN 0-387-90613-4 doi:10.1007/978-1-4757-3951-0 MR0658304[15]
• 1969: Lectures on K(X). Mathematics Lecture Note Series. W. A. Benjamin, New York-Amsterdam, x+203 pp. MR0258020
See also
• Bott–Duffin inverse
• Parallelizable manifold
• Thom's and Bott's proofs of the Lefschetz hyperplane theorem
References
1. Atiyah, Michael (2007). "Raoul Harry Bott. 24 September 1923 – 20 December 2005: Elected ForMemRS 2005". Biographical Memoirs of Fellows of the Royal Society. 53: 63. doi:10.1098/rsbm.2007.0006.
2. McMurray, Emily J.; Kosek, Jane Kelly; Valade, Roger M. (1 January 1995). Notable Twentieth-century Scientists: A-E. Gale Research. ISBN 9780810391826. Retrieved 28 October 2016 – via Internet Archive.
3. "Raoul Bott". MacTutor History of Mathematics. Retrieved 28 October 2016.
4. Tu, Loring W. (May 2006). "The Life and Works of Raoul Bott" (PDF). Notices of the American Mathematical Society. 53 (5): 554–570. ISSN 0002-9920.
5. "Community of Scholars". ias.edu. Institute for Advanced Study. Archived from the original on 2013-03-10. Retrieved 4 April 2018.
6. John H. Hubbard (2010) "The Bott-Duffin Synthesis of Electrical Circuits", pp 33 to 40 in A Celebration of the Mathematical Legacy of Raoul Bott, P. Robert Kotiuga editor, CRM Proceedings and Lecture Notes #50, American Mathematical Society
7. Jackson, Allyn, "Interview with Raoul Bott", Notices of the American Mathematical Society 48 (2001), no. 4, 374–382.
8. "Marine Policy Center - Woods Hole Oceanographic Institution". Archived from the original on 2005-08-27. Retrieved 2005-12-23.
9. R. Bott (1985). "On some recent interactions between mathematics and physics". Canadian Mathematical Bulletin. 28 (2): 129–164. doi:10.4153/CMB-1985-016-3. S2CID 120399958.
10. "The President's National Medal of Science: Recipient Details - NSF - National Science Foundation". Retrieved 28 October 2016.
11. Raoul H. Bott at the Mathematics Genealogy Project
12. Eric Weinstein at the Mathematics Genealogy Project
13. Tu, Loring W., ed. (2018). Raoul Bott: Collected Papers, Volume 5. Contemporary Mathematicians. Birkhäuser. p. 47. ISBN 9783319517810. Retrieved 14 April 2020.
14. "PhD Dissertations Archival Listing". Harvard Mathematics Department. Retrieved 2020-04-14.
15. Stasheff, James D. (1984). "Review: Differential forms in algebraic topology, by Raoul Bott and Loring W. Tu". Bulletin of the American Mathematical Society. New Series. 10 (1): 117–121. doi:10.1090/S0273-0979-1984-15208-X.
External links
• Raoul Bott at the Mathematics Genealogy Project
• Commemorative website at Harvard Math Department
• "The Life and Works of Raoul Bott", by Loring Tu.
• "Raoul Bott, an Innovator in Mathematics, Dies at 82", The New York Times, January 8, 2006.
Raoul Bricard
Raoul Bricard (23 March 1870 – 26 November 1943) was a French engineer and a mathematician. He is best known for his work in geometry, especially descriptive geometry and scissors congruence, and kinematics, especially mechanical linkages.
Raoul Bricard
Born: 23 March 1870
Died: 26 November 1943 (aged 73)
Scientific career
Fields: Mathematics
Biography
Bricard taught geometry at Ecole Centrale des Arts et Manufactures. In 1908 he became a professor of applied geometry at the National Conservatory of Arts and Crafts in Paris.[1] In 1932 he received the Poncelet Prize in mathematics from the Paris Academy of Sciences for his work in geometry.[2]
Work
In 1896 Bricard published a paper on Hilbert's third problem, even before the problem was stated by Hilbert.[3] In it he proved that mirror symmetric polytopes are scissors congruent, and proved a weak version of Dehn's criterion.
In 1897 Bricard published an important investigation on flexible polyhedra.[4] In it he classified all flexible octahedra, now known as Bricard octahedra.[5] This work was the subject of Henri Lebesgue's lectures in 1938.[6] Later Bricard discovered notable 6-bar linkages.[7][8]
Bricard also gave one of the first geometric proofs of Morley's trisector theorem in 1922.[9][10]
Books
Bricard authored six books, including a mathematics survey in Esperanto.[11] He is listed in Encyclopedia of Esperanto.[12]
• Matematika terminaro kaj krestomatio (in Esperanto), Hachette, Paris, 1905
• Géométrie descriptive, O. Doin et fils, 1911
• Cinématique et mécanismes, A. Colin, 1921
• Petit traité de perspective, Vuibert, 1924[13]
• Leçons de cinématique, Gauthier-Villars et cie., 1926
• Le calcul vectoriel, A. Colin, 1929
Notes
1. Science, vol. 28 (1908), p. 707.
2. "Prize Awards of the Paris Academy of Sciences", Nature vol. 131 (1933) 174-175.
3. R. Bricard, "Sur une question de géométrie relative aux polyèdres", Nouvelles annales de mathématiques, Ser. 3, Vol. 15 (1896), 331-334.
4. R. Bricard, Mémoire sur la théorie de l’octaèdre articulé Archived 2011-07-17 at the Wayback Machine, J. Math. Pures Appl., Vol. 3 (1897), 113–150 (see also the English translation and an alternative scan).
5. P. Cromwell, Polyhedra, Cambridge University Press, 1997.
6. Lebesgue H. (1967). "Octaedres articules de Bricard". Enseign. Math. Series 2. 13 (3): 175–185. doi:10.5169/seals-41541.
7. K. Wohlhart, The two types of the orthogonal Bricard linkage, Mechanism and machine theory, vol. 28 (1993), 809-817.
8. Bricard 6 Bar Linkage Origami on YouTube.
9. Guy Richard K. (2007). "The Lighthouse Theorem, Morley & Malfatti - A Budget of Paradoxes" (PDF). American Mathematical Monthly. 114 (2): 97–141. doi:10.1080/00029890.2007.11920398. JSTOR 27642143. S2CID 46275242. Archived from the original (PDF) on April 19, 2012.
10. Alain Connes, "Symmetries", European Mathematical Society Newsletter No. 54 (December 2004).
11. Raoul Bricard, from Open Library.
12. Encyclopedia of Esperanto Archived 2008-12-18 at the Wayback Machine
13. Emch, Arnold (1925). "Review: Petit Traité de Perspective by Raoul Bricard" (PDF). Bull. Amer. Math. Soc. 31 (9): 564–565. doi:10.1090/s0002-9904-1925-04125-7.
References
• Laurent R., Raoul Bricard, Professeur de Géométrie appliquée aux arts, in Fontanon C., Grelon A. (éds.), Les professeurs du Conservatoire national des arts et métiers, dictionnaire biographique, 1794-1955, INRP-CNAM, Paris 1994, vol. 1, pp. 286–291.
External links
• Works by or about Raoul Bricard at Internet Archive
Rafael Bombelli
Rafael Bombelli (baptised on 20 January 1526; died 1572)[note 1][1][2] was an Italian mathematician. Born in Bologna, he is the author of a treatise on algebra and is a central figure in the understanding of imaginary numbers.
He was the one who finally managed to put the use of imaginary numbers on a sound footing. In his 1572 book, L'Algebra, Bombelli solved equations using the method of del Ferro and Tartaglia. He introduced verbal expressions, precursors of the modern symbols +i and −i, and described how they both worked.
Life
Rafael Bombelli was baptised on 20 January 1526[3] in Bologna, Papal States. He was born to Antonio Mazzoli, a wool merchant, and Diamante Scudieri, a tailor's daughter. The Mazzoli family was once quite powerful in Bologna. When Pope Julius II came to power in 1506, he exiled the ruling family, the Bentivoglios. The Bentivoglio family attempted to retake Bologna in 1508, but failed. Rafael's grandfather participated in the coup attempt, and was captured and executed. Later, Antonio was able to return to Bologna, having changed his surname to Bombelli to escape the reputation of the Mazzoli family. Rafael was the oldest of six children. Rafael received no college education, but was instead taught by an engineer-architect by the name of Pier Francesco Clementi.
Bombelli felt that none of the works on algebra by the leading mathematicians of his day provided a careful and thorough exposition of the subject. Instead of another convoluted treatise that only mathematicians could comprehend, Rafael decided to write a book on algebra that could be understood by anyone. His text would be self-contained and easily read by those without higher education.
Bombelli died in 1572 in Rome.
Bombelli's Algebra
In the book, published in 1572 and entitled Algebra, Bombelli gave a comprehensive account of the algebra known at the time. He was the first European to write down rules for performing computations with negative numbers. The following is an excerpt from the text:
"Plus times plus makes plus
Minus times minus makes plus
Plus times minus makes minus
Minus times plus makes minus
Plus 8 times plus 8 makes plus 64
Minus 5 times minus 6 makes plus 30
Minus 4 times plus 5 makes minus 20
Plus 5 times minus 4 makes minus 20"
As was intended, Bombelli used simple language as can be seen above so that anybody could understand it. But at the same time, he was thorough.
Notation
Bombelli introduced, for the first time in a printed text (in Book II of his Algebra), a form of index notation in which the equation
$x^{3}=6x+40$
appeared as
1U3 a. 6U1 p. 40.[4]
in which he wrote the U3 as a raised bowl-shape (like the curved part of the capital letter U) with the number 3 above it. Full symbolic notation was developed shortly thereafter by the French mathematician François Viète.
Complex numbers
Perhaps more importantly than his work with algebra, however, the book also includes Bombelli's monumental contributions to complex number theory. Before he writes about complex numbers, he points out that they occur in solutions of equations of the form $x^{3}=ax+b,$ given that $(a/3)^{3}>(b/2)^{2},$ which is another way of stating that the discriminant of the cubic is negative. The solution of this kind of equation requires taking the cube root of the sum of one number and the square root of some negative number.
Before Bombelli delves into using imaginary numbers practically, he goes into a detailed explanation of the properties of complex numbers. Right away, he makes it clear that the rules of arithmetic for imaginary numbers are not the same as for real numbers. This was a big accomplishment, as even numerous subsequent mathematicians were extremely confused on the topic.
Bombelli avoided confusion by giving a special name to square roots of negative numbers, instead of just trying to deal with them as regular radicals as other mathematicians did. This made it clear that these numbers were neither positive nor negative. This kind of system avoids the confusion that Euler later encountered. Bombelli called the imaginary number i "plus of minus" and used "minus of minus" for −i.
Bombelli had the foresight to see that imaginary numbers were crucial and necessary to solving quartic and cubic equations. At the time, people cared about complex numbers only as tools to solve practical equations. As such, Bombelli was able to get solutions using Scipione del Ferro's rule, even in the irreducible case, where other mathematicians such as Cardano had given up.
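The standard illustration, treated by Bombelli himself, is the cubic $x^{3}=15x+4$. Here del Ferro's rule produces square roots of a negative number, yet the real root $x=4$ emerges once one computes with them (a worked instance added for clarity; the cube roots are verified by expanding $(2\pm i)^{3}=2\pm 11i$):
$x={\sqrt[{3}]{2+{\sqrt {-121}}}}+{\sqrt[{3}]{2-{\sqrt {-121}}}}={\sqrt[{3}]{2+11i}}+{\sqrt[{3}]{2-11i}}=(2+i)+(2-i)=4.$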
In his book, Bombelli explains complex arithmetic as follows:
"Plus by plus of minus, makes plus of minus.
Minus by plus of minus, makes minus of minus.
Plus by minus of minus, makes minus of minus.
Minus by minus of minus, makes plus of minus.
Plus of minus by plus of minus, makes minus.
Plus of minus by minus of minus, makes plus.
Minus of minus by plus of minus, makes plus.
Minus of minus by minus of minus makes minus."
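These eight verbal rules are exactly the multiplication rules for ±1 and ±i. The following is a minimal sketch (a modern transcription, assuming the mapping "plus of minus" = +i and "minus of minus" = −i) that checks them:

```python
# Bombelli's eight rules, transcribed with "plus of minus" = 1j
# and "minus of minus" = -1j (modern complex notation).
pairs = [(1, 1j), (-1, 1j), (1, -1j), (-1, -1j),
         (1j, 1j), (1j, -1j), (-1j, 1j), (-1j, -1j)]
for a, b in pairs:
    print(f"{a} * {b} = {a * b}")
# In order, the products are +i, -i, -i, +i, -1, +1, +1, -1,
# matching the eight lines of Bombelli's text above.
```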
After dealing with the multiplication of real and imaginary numbers, Bombelli goes on to talk about the rules of addition and subtraction. He is careful to point out that real parts add to real parts, and imaginary parts add to imaginary parts.
Reputation
Bombelli is generally regarded as the inventor of complex numbers, as no one before him had made rules for dealing with such numbers, and no one believed that working with imaginary numbers would have useful results. Upon reading Bombelli's Algebra, Leibniz praised Bombelli as an "... outstanding master of the analytical art." Crossley writes in his book, "Thus we have an engineer, Bombelli, making practical use of complex numbers perhaps because they gave him useful results, while Cardan found the square roots of negative numbers useless. Bombelli is the first to give a treatment of any complex numbers... It is remarkable how thorough he is in his presentation of the laws of calculation of complex numbers..."[5]
In honor of his accomplishments, a crater on the Moon was named Bombelli.
Bombelli's method of calculating square roots
Bombelli used a method related to continued fractions to calculate square roots. He did not yet have the concept of a continued fraction; the algorithm below is a later version of his method, given by Pietro Cataldi (1613).[6]
The method for finding ${\sqrt {n}}$ begins with $n=(a\pm r)^{2}=a^{2}\pm 2ar+r^{2}\ $ with $0<r<1\ $, from which it can be shown that $r={\frac {|n-a^{2}|}{2a\pm r}}$. Repeated substitution of the expression on the right hand side for $r$ into itself yields a continued fraction
$a\pm {\frac {|n-a^{2}|}{2a\pm {\frac {|n-a^{2}|}{2a\pm {\frac {|n-a^{2}|}{2a\pm \cdots }}}}}}$
for the root, but Bombelli is more concerned with better approximations for $r$. The value chosen for $a$ is either of the two whole numbers whose squares $n$ lies between. The method gives the following convergents for ${\sqrt {13}}$, whose actual value is 3.605551275...:
$3{\frac {2}{3}},\ 3{\frac {3}{5}},\ 3{\frac {20}{33}},\ 3{\frac {66}{109}},\ 3{\frac {109}{180}},\ 3{\frac {720}{1189}},\ \cdots $
The last convergent equals 3.605550883... . Bombelli's method should be compared with the formulas and results used by Heron and Archimedes. The result ${\frac {265}{153}}<{\sqrt {3}}<{\frac {1351}{780}}$ used by Archimedes in his determination of the value of $\pi $ can be found by using 1 and 0 for the initial values of $r$.
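A minimal sketch of this iteration in Python, using exact rational arithmetic (the function name and the choice of the lower whole number for $a$, i.e. the $+$ branch, are ours), reproduces the convergents listed above for ${\sqrt {13}}$:

```python
from fractions import Fraction

def bombelli_convergents(n, steps=6):
    """Iterate r <- (n - a^2) / (2a + r) starting from r = 0,
    where a is the whole number with a^2 < n < (a + 1)^2."""
    a = int(n ** 0.5)          # lower of the two whole numbers bracketing sqrt(n)
    r = Fraction(0)
    out = []
    for _ in range(steps):
        r = Fraction(n - a * a) / (2 * a + r)
        out.append(a + r)
    return out

for c in bombelli_convergents(13):
    print(c, float(c))         # 11/3, 18/5, 119/33, ..., 4287/1189 = 3.60555088...
```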
References
Footnotes
1. Dates follow the Julian calendar. The Gregorian calendar was adopted in Italy in 1582 (4 October 1582 was followed by 15 October 1582).
Citations
1. "The Gregorian calendar".
2. Crossley 1987, p. 95.
3. "Rafael Bombelli". www.gavagai.de. Archived from the original on 19 November 2003.
4. Stedall, Jacqueline Anne (2000). A large discourse concerning algebra: John Wallis's 1685 Treatise of algebra (Thesis). The Open University Press.
5. Crossley 1987.
6. Bombelli, Algebra.
Sources
• Morris Kline, Mathematical Thought from Ancient to Modern Times, 1972, Oxford University Press, New York, ISBN 0-19-501496-0
• David Eugene Smith, A Source Book in Mathematics, 1959, Dover Publications, New York, ISBN 0-486-64690-4
• Crossley, John N. (1987). The emergence of number. Singapore: World Scientific. doi:10.1142/0462. ISBN 978-9971-5-0413-7.{{cite book}}: CS1 maint: ref duplicates default (link)
• Daniel J. Curtin, et al., Rafael Bombelli's L'Algebra, 1996, https://www.people.iup.edu/gsstoudt/history/bombelli/bombelli.pdf
External links
• L'Algebra, Libri I, II, III, IV e V, original Italian texts.
• O'Connor, John J.; Robertson, Edmund F., "Rafael Bombelli", MacTutor History of Mathematics Archive, University of St Andrews
• Background
Raphael Høegh-Krohn
Jan Raphael Høegh-Krohn (10 February 1938 – 24 January 1988) was a Norwegian mathematician.
Raphael Høegh-Krohn
Born: 10 February 1938, Ålesund
Died: 24 January 1988
Nationality: Norwegian
Education: Ph.D., New York University
Alma mater: New York University
Scientific career
Fields: Mathematics
Doctoral advisor: Kurt Friedrichs
Doctoral students: Helge Holden
He finished his Ph.D. in 1966, titled On Partly Gentle Perturbation with Application to Perturbation by Annihilation-Creation Operator, under the supervision of Kurt Friedrichs at New York University.
He authored over 150 papers and is known for the discovery of a fundamental duality in relativistic quantum statistical mechanics by representing the basic correlation functions in terms of a certain stochastic process, now known as the Høegh-Krohn process.
Books
• Albeverio, Sergio; Gesztesy, Friedrich; Høegh-Krohn, Raphael; Holden, Helge: Solvable models in quantum mechanics. Texts and Monographs in Physics. Springer-Verlag, New York, 1988.
• Albeverio, Sergio; Høegh-Krohn, Raphael; Fenstad, Jens Erik; Lindstrøm, Tom: Nonstandard methods in stochastic analysis and mathematical physics. Pure and Applied Mathematics, 122. Academic Press, Inc., Orlando, FL, 1986.
• Albeverio, S.; Gesztesy, F.; Høegh-Krohn, R.; Holden, H.: Solvable models in quantum mechanics. Second edition. With an appendix by Pavel Exner. AMS Chelsea Publishing, Providence, RI, 2005.
• Albeverio, Sergio A.; Høegh-Krohn, Raphael J.: Mathematical theory of Feynman path integrals. Lecture Notes in Mathematics, Vol. 523. Springer-Verlag, Berlin-New York, 1976.
External links
• Raphael Høegh-Krohn at the Mathematics Genealogy Project
Raphaël Rouquier
Raphaël Alexis Marcel Rouquier (born 9 December 1969) is a French mathematician and a professor of mathematics at UCLA.
Raphaël Rouquier
Born: 9 December 1969, Étampes, France[1]
Alma mater: Paris Diderot University[1][2]
Awards: Whitehead Prize (2006), Adams Prize (2009), Elie Cartan Prize (2009)
Scientific career
Institutions: CNRS; University of Leeds; University of Oxford; UCLA
Doctoral advisor: Michel Broué[2] and J. G. Thompson[1]
Education
Rouquier was born in Étampes, France.[3]
Rouquier studied at the École Normale Supérieure from 1988 to 1989, and from 1989 to 1990 he completed a DEA in mathematics under the direction of Michel Broué, with whom he continued to study for his PhD. Rouquier spent the second year of his PhD study at the University of Cambridge under the supervision of J. G. Thompson.[1]
Career
He was hired by the CNRS in 1992, where he completed his PhD (1992) and Habilitation (1998–1999). He was appointed director of research there in 2003. From 2005 to 2006 he was Professor of Representation Theory at the Department of Pure Mathematics at the University of Leeds[3] before moving to the University of Oxford as the Waynflete Professor of Pure Mathematics.[4] In 2012, he moved to UCLA.[5][6]
Awards and honors
He was awarded the Whitehead Prize in 2006[7] and the Adams Prize in 2009 for contributions to representation theory.[8][9] He was awarded the Elie Cartan Prize in 2009. In 2012 he became a fellow of the American Mathematical Society.[10] In 2015 he became a Simons Investigator.[11]
He gave the Peccot Lectures at Collège de France in 2000, the Whittemore Lectures at Yale University in 2005, an Algebra Section lecture at the International Congress of Mathematicians in 2006, the Albert Lectures at the University of Chicago in 2008, the Moursund Lectures at the University of Oregon in 2013, the Simons Lectures at MIT in 2013, the CBMS Lectures in 2014 and the Ellis Kolchin Memorial Lecture at Columbia University in 2016.
Notes
1. "CURRICULUM VITAE" (PDF). Raphaël Rouquier at the CNRS. Retrieved 17 June 2009.
2. Raphaël Rouquier at the Mathematics Genealogy Project
3. "Who's Who 2009: New Names" (PDF). The Daily Telegraph. Retrieved 17 June 2009.
4. "On the move..." Times Higher Education. 8 December 2006. Retrieved 17 June 2009.
5. "UCLA website".
6. "Oxford advertising his previous position" (PDF). Archived from the original (PDF) on 14 November 2012.
7. "Prizewinners 2006". Bulletin of the London Mathematical Society. 38 (5): 873–880. 2006. doi:10.1112/S0024609306019448. S2CID 247740410.
8. "'Representation Theory' work wins 2009 Adams Prize". 31 March 2009. Archived from the original on 1 April 2009. Retrieved 17 June 2009.
9. "Raphaël Rouquier wins the 2009 Adams Prize". Mathematical Institute, University of Oxford. 10 April 2009. Retrieved 17 June 2009.
10. List of Fellows of the American Mathematical Society, retrieved 7 July 2013.
11. "UCLA Department of Mathematics Newsletter" (PDF).
External links
• Raphaël Rouquier at Oxford
• Raphaël Rouquier at UCLA
• Raphaël Rouquier at the Mathematics Genealogy Project
Michael Rapoport
Michael Rapoport (born 2 October 1948)[1] is an Austrian mathematician.
Michael Rapoport
Born: 2 October 1948, Cincinnati, Ohio
Nationality: Austrian
Alma mater: Paris-Sud 11 University
Known for: Works on Shimura varieties and the Langlands program
Awards: Leibniz Prize (1992), Heinz Hopf Prize (2011)
Scientific career
Fields: Mathematics
Institutions: University of Bonn
Doctoral advisor: Pierre Deligne
Doctoral students: Peter Scholze, Eva Viehmann
Career
Rapoport received his PhD from Paris-Sud 11 University in 1976, under the supervision of Pierre Deligne.[2] He held a chair for arithmetic algebraic geometry at the University of Bonn,[3] as well as a visiting appointment at the University of Maryland. In 1992, he was awarded the Gottfried Wilhelm Leibniz Prize,[4] in 1999 he won the Gay-Lussac Humboldt Prize,[5] and he is the recipient of the 2011 Heinz Hopf Prize.[6] In 1994, he was an Invited Speaker (with talk Non-Archimedean period domains) at the ICM in Zürich.
Rapoport's students include Maria Heep-Altiner, Werner Baer, Peter Scholze, and Eva Viehmann.[2]
Personal life
Michael Rapoport is the son of pediatrician Ingeborg Rapoport and biochemist Samuel Mitja Rapoport, and brother of biochemist Tom Rapoport.
Selected publications
• Deligne, P.; Rapoport, M. (1973). "Les Schémas de Modules de Courbes Elliptiques" (PDF). Lecture Notes in Mathematics. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-540-37855-6_4. ISBN 978-3-540-06558-6. ISSN 0075-8434.
• Ash, Avner; Mumford, David; Rapoport, Michael; Tai, Yung-sheng (2009). Smooth Compactifications of Locally Symmetric Varieties (PDF). Cambridge: Cambridge University Press. doi:10.1017/cbo9780511674693. ISBN 978-0-511-67469-3.
• Rapoport, M.; Zink, Th. (1982). "Über die lokale Zetafunktion von Shimuravarietäten. Monodromiefiltration und verschwindende Zyklen in ungleicher Charakteristik". Inventiones Mathematicae (in German). Springer Science and Business Media LLC. 68 (1): 21–101. Bibcode:1982InMat..68...21R. doi:10.1007/bf01394268. ISSN 0020-9910. S2CID 118533956.
• Laumon, G.; Rapoport, M.; Stuhler, U. (1993). "D-elliptic sheaves and the langlands correspondence". Inventiones Mathematicae. Springer Science and Business Media LLC. 113 (1): 217–338. Bibcode:1993InMat.113..217L. doi:10.1007/bf01244308. ISSN 0020-9910. S2CID 124557672.
• Rapoport, Michael (1995). "Non-Archimedian [sic] Period Domains" (PDF). Proceedings of the International Congress of Mathematicians. Basel: Birkhäuser Basel. pp. 423–434. doi:10.1007/978-3-0348-9078-6_35. ISBN 978-3-0348-9897-3.
• with M. Richartz: On the classification and specialization of F-isocrystals with additional structure. In: Composito Mathematica 103(1996), no. 2, pp. 153–182. MR1411570
• Rapoport, M (1996). Period spaces for p-divisible groups (PDF). Princeton, N.J: Princeton University Press. ISBN 978-0-691-02782-1. OCLC 945632434.
• Kudla, Stephen (2006). Modular forms and special cycles on Shimura curves (PDF). Princeton: Princeton University Press. ISBN 0-691-12551-1. OCLC 803434031.
• Kudla, Stephen; Rapoport, Michael (25 November 2010). "Special cycles on unitary Shimura varieties I. Unramified local theory". Inventiones Mathematicae. Springer Science and Business Media LLC. 184 (3): 629–682. arXiv:0804.0600. doi:10.1007/s00222-010-0298-z. ISSN 0020-9910. S2CID 15824793.
References
1. "HCM: Prof. Dr. Michael Rapoport". hcm.uni-bonn.de. Retrieved 18 December 2021.
2. Michael Rapoport at the Mathematics Genealogy Project
3. "Prof. Dr. Michael Rapoport (i.R.)". Mathematisches Institut der Universität Bonn (in German). 19 October 2019. Retrieved 18 December 2021.
4. List of Leibniz Prize winners from 1986 to 2022, DFG
5. "Gay-Lussac/Humboldt-Preis für Professor Rapoport". idw – Informationsdienst Wissenschaft e.V. (in German). 17 July 2000. Retrieved 18 December 2021.
6. "Laureates". ETH Zurich. 22 September 2021. Retrieved 18 December 2021.
External links
• Homepage in Bonn
• Oberwolfach Photo Collection, Details for Michael Rapoport
• Michael Rapoport publications indexed by Google Scholar
Rarita–Schwinger equation
In theoretical physics, the Rarita–Schwinger equation is the relativistic field equation of spin-3/2 fermions. It is similar to the Dirac equation for spin-1/2 fermions. This equation was first introduced by William Rarita and Julian Schwinger in 1941.
In modern notation it can be written as:[1]
$\left(\epsilon ^{\mu \kappa \rho \nu }\gamma _{5}\gamma _{\kappa }\partial _{\rho }-im\sigma ^{\mu \nu }\right)\psi _{\nu }=0$
where $\epsilon ^{\mu \kappa \rho \nu }$ is the Levi-Civita symbol, $\gamma _{5}$ and $\gamma _{\nu }$ are Dirac matrices, $m$ is the mass, $\sigma ^{\mu \nu }\equiv {\frac {i}{2}}[\gamma ^{\mu },\gamma ^{\nu }]$, and $\psi _{\nu }$ is a vector-valued spinor with additional components compared to the four component spinor in the Dirac equation. It corresponds to the (1/2, 1/2) ⊗ ((1/2, 0) ⊕ (0, 1/2)) representation of the Lorentz group, or rather, its (1, 1/2) ⊕ (1/2, 1) part.[2]
This field equation can be derived as the Euler–Lagrange equation corresponding to the Rarita–Schwinger Lagrangian:[3]
${\mathcal {L}}=-{\tfrac {1}{2}}\;{\bar {\psi }}_{\mu }\left(\epsilon ^{\mu \kappa \rho \nu }\gamma _{5}\gamma _{\kappa }\partial _{\rho }-im\sigma ^{\mu \nu }\right)\psi _{\nu }$
where the bar above $\psi _{\mu }$ denotes the Dirac adjoint.
This equation controls the propagation of the wave function of composite objects such as the delta baryons (Δ) or of the conjectural gravitino. So far, no elementary particle with spin 3/2 has been found experimentally.
The massless Rarita–Schwinger equation has a fermionic gauge symmetry: it is invariant under the gauge transformation $\psi _{\mu }\rightarrow \psi _{\mu }+\partial _{\mu }\epsilon $, where $\epsilon \equiv \epsilon _{\alpha }$ is an arbitrary spinor field. This is simply the local supersymmetry of supergravity, and the field must be a gravitino.
"Weyl" and "Majorana" versions of the Rarita–Schwinger equation also exist.
Equations of motion in the massless case
Consider a massless Rarita–Schwinger field described by the Lagrangian density
${\mathcal {L}}_{RS}={\bar {\psi }}_{\mu }\gamma ^{\mu \nu \rho }\partial _{\nu }\psi _{\rho },$
where the sum over spin indices is implicit, $\psi _{\mu }$ are Majorana spinors, and
$\gamma ^{\mu \nu \rho }\equiv {\frac {1}{3!}}\gamma ^{[\mu }\gamma ^{\nu }\gamma ^{\rho ]}.$
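For concreteness, this antisymmetrization can be checked numerically. The sketch below builds $\gamma ^{\mu \nu \rho }$ in the Dirac representation (the choice of representation and all names are ours, assumed for illustration) and verifies that it is totally antisymmetric:

```python
import numpy as np
from itertools import permutations

# Dirac gamma matrices in the Dirac representation (an assumption of this sketch).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

def gamma3(mu, nu, rho):
    """gamma^{mu nu rho}: the triple product antisymmetrized over all
    permutations of (mu, nu, rho) and divided by 3!."""
    even = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}
    idx, total = (mu, nu, rho), np.zeros((4, 4), dtype=complex)
    for p in permutations(range(3)):
        sign = 1 if p in even else -1
        a, b, c = (idx[i] for i in p)
        total += sign * gamma[a] @ gamma[b] @ gamma[c]
    return total / 6

# Swapping two indices flips the sign; repeated indices give zero.
assert np.allclose(gamma3(0, 1, 2), -gamma3(1, 0, 2))
assert np.allclose(gamma3(0, 0, 3), 0)
```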
To obtain the equations of motion we vary the Lagrangian with respect to the fields $\psi _{\mu }$, obtaining:
$\delta {\mathcal {L}}_{RS}=\delta {\bar {\psi }}_{\mu }\gamma ^{\mu \nu \rho }\partial _{\nu }\psi _{\rho }+{\bar {\psi }}_{\mu }\gamma ^{\mu \nu \rho }\partial _{\nu }\delta \psi _{\rho }=\delta {\bar {\psi }}_{\mu }\gamma ^{\mu \nu \rho }\partial _{\nu }\psi _{\rho }-\partial _{\nu }{\bar {\psi }}_{\mu }\gamma ^{\mu \nu \rho }\delta \psi _{\rho }+{\text{ boundary terms}}$
Using the Majorana flip properties,[4] we see that the first and second terms on the RHS are equal, concluding that
$\delta {\mathcal {L}}_{RS}=2\delta {\bar {\psi }}_{\mu }\gamma ^{\mu \nu \rho }\partial _{\nu }\psi _{\rho },$
plus unimportant boundary terms. Imposing $\delta {\mathcal {L}}_{RS}=0$ we thus see that the equation of motion for a massless Majorana Rarita–Schwinger spinor reads:
$\gamma ^{\mu \nu \rho }\partial _{\nu }\psi _{\rho }=0.$
Drawbacks of the equation
The current description of massive, higher spin fields through either Rarita–Schwinger or Fierz–Pauli formalisms is afflicted with several maladies.
Superluminal propagation
As in the case of the Dirac equation, electromagnetic interaction can be added by promoting the partial derivative to a gauge covariant derivative:
$\partial _{\mu }\rightarrow D_{\mu }=\partial _{\mu }-ieA_{\mu }$.
In 1969, Velo and Zwanziger showed that the Rarita–Schwinger Lagrangian coupled to electromagnetism leads to an equation with solutions representing wavefronts, some of which propagate faster than light. In other words, the field then suffers from acausal, superluminal propagation; consequently, the quantization in interaction with electromagnetism is essentially flawed. In extended supergravity, though, Das and Freedman[5] have shown that local supersymmetry solves this problem.
References
1. S. Weinberg, "The quantum theory of fields", Vol. 3, Cambridge p. 335
2. S. Weinberg, "The quantum theory of fields", Vol. 1, Cambridge p. 232
3. S. Weinberg, "The quantum theory of fields", Vol. 3, Cambridge p. 335
4. Pierre Ramond - Field theory, a Modern Primer - p.40
5. Das, A.; Freedman, D. Z. (1976). "Gauge quantization for spin-3/2 fields". Nuclear Physics B. 114 (2): 271. Bibcode:1976NuPhB.114..271D. doi:10.1016/0550-3213(76)90589-7.; Freedman, D. Z.; Das, A. (1977). "Gauge internal symmetry in extended supergravity". Nuclear Physics B. 120 (2): 221. Bibcode:1977NuPhB.120..221F. doi:10.1016/0550-3213(77)90041-4.
Sources
• Rarita, William; Schwinger, Julian (1941-07-01). "On a Theory of Particles with Half-Integral Spin". Physical Review. American Physical Society (APS). 60 (1): 61. Bibcode:1941PhRv...60...61R. doi:10.1103/physrev.60.61. ISSN 0031-899X.
• Collins P.D.B., Martin A.D., Squires E.J., Particle physics and cosmology (1989) Wiley, Section 1.6.
• Velo, Giorgio; Zwanziger, Daniel (1969-10-25). "Propagation and Quantization of Rarita-Schwinger Waves in an External Electromagnetic Potential". Physical Review. American Physical Society (APS). 186 (5): 1337–1341. Bibcode:1969PhRv..186.1337V. doi:10.1103/physrev.186.1337. ISSN 0031-899X.
• Velo, Giorgio; Zwanzinger, Daniel (1969-12-25). "Noncausality and Other Defects of Interaction Lagrangians for Particles with Spin One and Higher". Physical Review. American Physical Society (APS). 188 (5): 2218–2222. Bibcode:1969PhRv..188.2218V. doi:10.1103/physrev.188.2218. ISSN 0031-899X.
• Kobayashi, M.; Shamaly, A. (1978-04-15). "Minimal electromagnetic coupling for massive spin-two fields". Physical Review D. American Physical Society (APS). 17 (8): 2179–2181. Bibcode:1978PhRvD..17.2179K. doi:10.1103/physrevd.17.2179. ISSN 0556-2821.
Rasiowa–Sikorski lemma
In axiomatic set theory, the Rasiowa–Sikorski lemma (named after Helena Rasiowa and Roman Sikorski) is one of the most fundamental facts used in the technique of forcing. In the area of forcing, a subset E of a poset (P, ≤) is called dense in P if for any p ∈ P there is e ∈ E with e ≤ p. If D is a family of dense subsets of P, then a filter F in P is called D-generic if
F ∩ E ≠ ∅ for all E ∈ D.
Now we can state the Rasiowa–Sikorski lemma:
Let (P, ≤) be a poset and p ∈ P. If D is a countable family of dense subsets of P then there exists a D-generic filter F in P such that p ∈ F.
Proof of the Rasiowa–Sikorski lemma
The proof runs as follows: since D is countable, one can enumerate the dense subsets of P as D1, D2, …. By assumption, there exists p ∈ P. Then by density, there exists p1 ≤ p with p1 ∈ D1. Repeating, one gets … ≤ p2 ≤ p1 ≤ p with pi ∈ Di. Then G = { q ∈ P: ∃ i, q ≥ pi} is a D-generic filter, and p ∈ G since p ≥ p1.
The Rasiowa–Sikorski lemma can be viewed as equivalent to a weaker form of Martin's axiom. More specifically, it is equivalent to MA($\aleph _{0}$).
Examples
• For (P, ≤) = (Func(X, Y), ⊇), the poset of partial functions from X to Y, reverse-ordered by inclusion, define Dx = {s ∈ P: x ∈ dom(s)}. If X is countable, the Rasiowa–Sikorski lemma yields a {Dx: x ∈ X}-generic filter F and thus a function F: X → Y (a construction along these lines is sketched after this list).
• If we adhere to the notation used in dealing with D-generic filters, {H ∪ G0: PijPt} forms an H-generic filter.
• If D is uncountable, but of cardinality strictly smaller than 2ℵ0 and the poset has the countable chain condition, we can instead use Martin's axiom.
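As an illustration of the first example combined with the proof's construction, the following sketch (the function name, the finite enumeration used for display, and the constant choice y0 are ours) extends the empty condition through each dense set Dx in turn; the union of the resulting descending chain is a total function:

```python
def generic_function(xs, y0=0):
    """Meet each dense set D_x = {s : x in dom(s)} in enumeration order.

    Conditions are finite partial functions (dicts) ordered by reverse
    inclusion, so extending a dict moves *down* in the poset (Func(X, Y), ⊇).
    """
    p = {}                      # p_0: the weakest condition, the empty function
    chain = [dict(p)]
    for x in xs:                # the countably many dense sets D_x, enumerated
        if x not in p:          # density: some extension of p lies in D_x
            p[x] = y0
        chain.append(dict(p))
    # The D-generic filter is {q : q ⊆ p_i for some i}; its union is p itself.
    return p, chain

F, _ = generic_function(range(4))
print(F)                        # {0: 0, 1: 0, 2: 0, 3: 0}: a total function on X
```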
See also
• Generic filter – in set theory, given a collection of dense open subsets of a poset, a filter that meets all sets in that collectionPages displaying wikidata descriptions as a fallback
• Martin's axiom – axiom in mathematical logic that all cardinals less than the cardinality of the continuum behave like ℵ₀ in a specific sensePages displaying wikidata descriptions as a fallback
References
• Ciesielski, Krzysztof (1997). Set theory for the working mathematician. London Mathematical Society Student Texts. Vol. 39. Cambridge: Cambridge University Press. ISBN 0-521-59441-3. Zbl 0938.03067.
• Kunen, Kenneth (1980). Set Theory: An Introduction to Independence Proofs. Studies in Logic and the Foundations of Mathematics. Vol. 102. North-Holland. ISBN 0-444-85401-0. Zbl 0443.03021.
External links
• Timothy Chow's paper A beginner’s guide to forcing is a good introduction to the concepts and ideas behind forcing.
Rastrigin function
In mathematical optimization, the Rastrigin function is a non-convex function used as a performance test problem for optimization algorithms. It is a typical example of a non-linear multimodal function. It was first proposed in 1974 by Rastrigin[1] as a 2-dimensional function and has been generalized by Rudolph.[2] The generalized version was popularized by Hoffmeister & Bäck[3] and Mühlenbein et al.[4] Finding the minimum of this function is a fairly difficult problem due to its large search space and its large number of local minima.
Figure: the Rastrigin function of two variables, shown in 3D and as a contour plot.
On an $n$-dimensional domain it is defined by:
$f(\mathbf {x} )=An+\sum _{i=1}^{n}\left[x_{i}^{2}-A\cos(2\pi x_{i})\right]$
where $A=10$ and $x_{i}\in [-5.12,5.12]$. There are many extrema:
• The global minimum is at $\mathbf {x} =\mathbf {0} $ where $f(\mathbf {x} )=0$.
• The maximum function value for $x_{i}\in [-5.12,5.12]$ is attained at $x_{i}=\pm 4.52299366\ldots$ in every coordinate:
Number of dimensions Maximum value at $\pm 4.52299366$
1 40.35329019
2 80.70658039
3 121.0598706
4 161.4131608
5 201.7664509
6 242.1197412
7 282.4730314
8 322.8263216
9 363.1796117
Here are all the values of the 2D Rastrigin function at intervals of 0.5, with $x_{1}$ along the columns and $x_{2}$ along the rows, for $x_{i}\in [-5.12,5.12]$:

x2 \ x1 |     0  ±0.5    ±1  ±1.5    ±2  ±2.5    ±3  ±3.5    ±4  ±4.5    ±5 ±5.12
0       |     0 20.25     1 22.25     4 26.25     9 32.25    16 40.25    25 28.92
±0.5    | 20.25  40.5 21.25  42.5 24.25  46.5 29.25  52.5 36.25  60.5 45.25 49.17
±1      |     1 21.25     2 23.25     5 27.25    10 33.25    17 41.25    26 29.92
±1.5    | 22.25  42.5 23.25  44.5 26.25  48.5 31.25  54.5 38.25  62.5 47.25 51.17
±2      |     4 24.25     5 26.25     8 30.25    13 36.25    20 44.25    29 32.92
±2.5    | 26.25  46.5 27.25  48.5 30.25  52.5 35.25  58.5 42.25  66.5 51.25 55.17
±3      |     9 29.25    10 31.25    13 35.25    18 41.25    25 49.25    34 37.92
±3.5    | 32.25  52.5 33.25  54.5 36.25  58.5 41.25  64.5 48.25  72.5 57.25 61.17
±4      |    16 36.25    17 38.25    20 42.25    25 48.25    32 56.25    41 44.92
±4.5    | 40.25  60.5 41.25  62.5 44.25  66.5 49.25  72.5 56.25  80.5 65.25 69.17
±5      |    25 45.25    26 47.25    29 51.25    34 57.25    41 65.25    50 53.92
±5.12   | 28.92 49.17 29.92 51.17 32.92 55.17 37.92 61.17 44.92 69.17 53.92 57.85
The abundance of local minima underlines the necessity of a global optimization algorithm when one needs to find the global minimum. Local optimization algorithms are likely to get stuck in a local minimum.
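A direct implementation of the definition is short; the sketch below (names ours) reproduces the special values quoted above:

```python
import numpy as np

def rastrigin(x, A=10.0):
    """Generalized Rastrigin function evaluated at an n-dimensional point x."""
    x = np.asarray(x, dtype=float)
    return A * x.size + np.sum(x * x - A * np.cos(2.0 * np.pi * x))

print(rastrigin([0.0, 0.0]))                 # 0.0: the global minimum
print(rastrigin([4.52299366, 4.52299366]))   # ~80.70658: the 2-D maximum
print(rastrigin([-5.0, 4.5]))                # 65.25, matching the table above
```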
See also
• Test functions for optimization
Notes
1. Rastrigin, L. A. "Systems of extremal control." Mir, Moscow (1974).
2. G. Rudolph. "Globale Optimierung mit parallelen Evolutionsstrategien". Diplomarbeit. Department of Computer Science, University of Dortmund, July 1990.
3. F. Hoffmeister and T. Bäck. "Genetic Algorithms and Evolution Strategies: Similarities and Differences", pages 455–469 in: H.-P. Schwefel and R. Männer (eds.): Parallel Problem Solving from Nature, PPSN I, Proceedings, Springer, 1991.
4. H. Mühlenbein, D. Schomisch and J. Born. "The Parallel Genetic Algorithm as Function Optimizer ". Parallel Computing, 17, pages 619–632, 1991.
Ratan Shankar Mishra
Ratan Shankar Mishra (1918–1999) was an Indian mathematician and academic who was known for his solutions to the unified field theory of Albert Einstein.[1] He headed the department of Mathematics of the University of Gorakhpur (1958) and the University of Allahabad (1963–1968), and served as the vice chancellor of Lucknow University (1982–1985),[2] as a reader at the University of Delhi (1954–1958) and as a dean at Banaras Hindu University, Varanasi (1965–1968).[1] He was honoured by the Government of India in 1971 with the Padma Shri, the fourth highest Indian civilian award.[3]
Ratan Shankar Mishra
Born: 15 October 1918, Ajgain, Unnao district, Uttar Pradesh, India
Died: 23 August 1999 (aged 80), India
Occupation(s): Mathematician, academic
Years active: 1944–1999
Known for: Unified field theory
Awards: Padma Shri, Banerjee Prize, IMS Distinguished Service Award, Teachers' Day Award
Biography
Ratan Shankar Mishra was born on 15 October 1918 at Ajgain, a small hamlet in Unnao district in the Indian state of Uttar Pradesh.[4] He completed his schooling at the Government High School in Unnao in 1937 and did his intermediate course at Kanyakubj Inter College, Lucknow, after which he passed BSc with honours and MSc from Lucknow University.[4] He continued his studies at Delhi University and secured a doctoral degree (PhD) in 1947, the first PhD awarded by the university in Mathematics.[4] His Doctor of Science degree (DSc) came from Lucknow University in 1952,[1] again the first DSc degree awarded by Lucknow University.[4]
Mishra had already started his career while doing his doctoral research, joining the faculty of Mathematics at Ramjas College in 1944;[1] he later worked at Delhi College of Arts and Commerce and moved to Lucknow University, where he worked till 1954.[4] That year, he was appointed as a reader at the University of Delhi and stayed in the Indian capital till 1958.[4] When Gorakhpur University invited him to head the department of Mathematics in 1958,[1] he accepted the offer,[1] and in 1963 he shifted to the University of Allahabad to head the department of mathematics there.[5] He was promoted as dean in 1965, and in 1968 he joined Banaras Hindu University as a selection grade professor and headed the maths and statistics department.[4] In 1973, he became the Chief Proctor of the university, and in 1975 he became the dean,[1] finally retiring in 1978.[4] After retirement, he worked as a visiting professor at Jammu University for a short term and took up the post of vice chancellor of Lucknow University in 1982.[2] In 1985, he resigned from the post[2] to associate himself with Tensor, the University International Maths Journal, published from Japan.[6] He also served as a visiting professor at the University of Kuwait (1970, 1980–81, 1986), the University of Windsor (1974) and the University of Waterloo (1967, 1972)[1] and had associations with Kanpur University in academic matters.[7]
Ratan Shankar Mishra died on 23 August 1999, at the age of 80.[1]
Legacy
Mishra specialised in differential geometry, relativity and fluid mechanics, and his contributions to these fields have been documented.[1][8][9][10] He was known to have elucidated the complete solutions to the unified field theory of Albert Einstein.[1] He contributed to index-free notation and developed his own notation in differential geometry.[1] He also wrote on structures on differentiable manifolds and almost contact metric manifolds.[1]
Several academic and administrative reforms have been credited to Mishra during his tenure at the University of Allahabad. He guided several mathematicians for their PhD, DSc and DPhil research and introduced many new subjects, such as Modern Algebra, Topology, Riemannian Geometry and Statistics & Probability, into the curriculum.[5] Under his guidance, the university introduced a graduate-level course in Abstract Algebra, the first time in India the subject was taught at that level. He was also instrumental in conducting conferences and seminars,[11] with financial assistance from the University Grants Commission, where mathematicians from India and abroad, such as Jack P. Tull,[12] moderated the proceedings.[5] It is also reported that the department of mathematics had the highest number of faculty members during his tenure as its head.[5] Apart from several articles,[13] he is the author of twelve textbooks and a report for the Indian Science Congress Association published under the name Progress of Mathematics - A decade (1963-1972).[4]
Positions
Mishra's efforts were behind the establishment of the Tensor Society of India, a mathematical society started in 1983, of which he was the founder president.[6] He was associated with the Indian Science Congress Association for a number of years and served as a member of its executive council, as the sectional president in 1965, as the general secretary (1968-1971) and as the president[1] in 1974.[4] He served the National Academy of Sciences, India as its council member, as the president of the Physical Sciences section (1965-1966) and as its vice president (1969-1979).[4] He was the president of the Indian Mathematical Society from 1982 to 1984 and a member of the executive council of the Indian National Science Academy from 1968 to 1970.[1] He was a member of the board of directors of the United States Education Foundation in India and the nominating committee of the International Society on General Relativity and Gravitation.[1] He served several government and semi government bodies and two award committees, Shanti Swarup Bhatnagar Prize for Science and Technology, the highest Indian award in science category and Magsaysay Award in their mathematical science research committee.[4] He also chaired the All India panel for writing text books in mathematics and sat on the committee of National Council of Educational Research and Training (NCERT).[4]
Mishra was also active in academic circles and held the presidency of the Gorakhpur University Teachers' Association in 1958 and was the hostel warden during his tenure there.[4] While working for Lucknow University, he was the president of the Teachers' Association (1975) and the Alumni Association.[4] He was the president of the National Academy of Mathematics, Gorakhpur[4] and was a founder member of the Society for Scientific Values.[14] He was the editor in chief of the Allahabad University journal Progress of Mathematics,[15] editorial advisor of the Forum of Mathematics journal and a member of the editorial boards of the Research Journal in Science of Kanpur University[4] and the Indian Journal of Pure and Applied Mathematics.[1]
Awards and honours
Ratan Shankar Mishra was an elected fellow of the Indian National Science Academy (INSA),[1] the Indian Academy of Sciences (IAS),[16] the National Academy of Sciences, India (NASI),[17] the International Academy of Physical Sciences and the Bihar Academy of Sciences.[4] Banaras Hindu University honoured him by selecting him as an Emeritus Professor of the university.[4] Václav Hlavatý, the Czech-American mathematician, bequeathed his unfinished problem on field equations to Mishra by way of a note left on his deathbed, and Mishra completed the problem, the text of which runs to about 100 pages.[4]
He received the Banerjee Prize from Lucknow University in 1952 for the best research work.[4] The Government of India awarded him the civilian honour of the Padma Shri in 1971.[3] The Indian Mathematical Society conferred its Distinguished Service Award on him in 1982, and the Jawaharlal Nehru Rashtriya Yuwa Kendra selected him for the Teachers' Day Award, Shikha Shiromani Alankaran, in 1994.[4] On 5 May 1995, Banaras Hindu University honoured him as the chief guest of its Teachers' Day functions.[4]
See also
• Riemannian manifold
• Differential manifold
• Differential geometry
• Fluid mechanics
References
1. "Insa profile". Indian National Science Academy. 2015. Retrieved 27 May 2015.
2. "List of Vice Chancellors". Lucknow University. 2015. Retrieved 27 May 2015.
3. "Padma Shri" (PDF). Padma Shri. 2015. Retrieved 11 November 2014.
4. "IAPS Bio". Indian Academy of Physical Sciences. 2015. Retrieved 27 May 2015.
5. "Mathematics and Mathematicians at Prayag". Academia. 2015. Retrieved 27 May 2015.
6. "Tensor Society of India". Tensor Society of India. 2015. Retrieved 27 May 2015.
7. "Kanpur University". Kanpur University. 2015. Retrieved 27 May 2015.
8. "R. S. Mishra - articles". Indian Academy of Sciences. 2015. Retrieved 27 May 2015.
9. G. P. Pokhariyal; R. S. Mishra (28 September 1970). "Curvature Tensors and their Relativistics Significance" (PDF). Google Scholar. Retrieved 27 May 2015.
10. Mishra, R. S.; Pandey, S. B. (1976). "On general differentiable structure, Nijenhuis tensor". Indian Journal of Pure and Applied Mathematics. 7 (3): 328–336. ISSN 0019-5588.
11. "National Conference" (PDF). Banares Hindu University. 2015. Retrieved 27 May 2015.
12. Tull, Jack P. (2015). "Journal of the Australian Mathematical Society". 5 (2). Journal of the Australian Mathematical Society: 196–206. doi:10.1017/S1446788700026768. S2CID 121316306. Retrieved 27 May 2015. {{cite journal}}: Cite journal requires |journal= (help)
13. "Worldcat search". Worldcat. 2015. Retrieved 28 May 2015.
14. "Society for Scientific Values". Society for Scientific Values. 2015. Retrieved 27 May 2015.
15. Progress of mathematics. Worldcat. 2015. OCLC 1762981. Retrieved 28 May 2015.
16. "Past Fellows". Indian Academy of Sciences. 2015. Retrieved 27 May 2015.
17. "Deceased Fellows". National Academy of Sciences, India. 2015. Archived from the original on 28 May 2015. Retrieved 27 May 2015.
Further reading
• G. P. Pokhariyal; R. S. Mishra (28 September 1970). "Curvature Tensors and their Relativistics Significance" (PDF). Google Scholar. Retrieved 27 May 2015.
• Mishra, R. S.; Pandey, S. B. (1976). "On general differentiable structure, Nijenhuis tensor". Indian Journal of Pure and Applied Mathematics. 7 (3): 328–336. ISSN 0019-5588.
Rate–distortion theory
Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that should be communicated over a channel, so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding an expected distortion D.
Introduction
Rate–distortion theory gives an analytical expression for how much compression can be achieved using lossy compression methods. Many of the existing audio, speech, image, and video compression techniques have transforms, quantization, and bit-rate allocation procedures that capitalize on the general shape of rate–distortion functions.
Rate–distortion theory was created by Claude Shannon in his foundational work on information theory.
In rate–distortion theory, the rate is usually understood as the number of bits per data sample to be stored or transmitted. The notion of distortion is a subject of ongoing discussion.[1] In the simplest case (which is actually used in most cases), the distortion is defined as the expected value of the square of the difference between input and output signal (i.e., the mean squared error). However, since most lossy compression techniques operate on data that will be perceived by human consumers (listening to music, watching pictures and video), the distortion measure should preferably be modeled on human perception and perhaps aesthetics: much like the use of probability in lossless compression, distortion measures can ultimately be identified with loss functions as used in Bayesian estimation and decision theory. In audio compression, perceptual models (and therefore perceptual distortion measures) are relatively well developed and routinely used in compression techniques such as MP3 or Vorbis, but are often not easy to include in rate–distortion theory. In image and video compression, the human perception models are less well developed, and their inclusion is mostly limited to the JPEG and MPEG weighting (quantization, normalization) matrices.
Distortion functions
Distortion functions measure the cost of representing a symbol $x$ by an approximated symbol ${\hat {x}}$. Typical distortion functions are the Hamming distortion and the Squared-error distortion.
Hamming distortion
$d(x,{\hat {x}})={\begin{cases}0&{\text{if }}x={\hat {x}}\\1&{\text{if }}x\neq {\hat {x}}\end{cases}}$
Squared-error distortion
$d(x,{\hat {x}})=\left(x-{\hat {x}}\right)^{2}$
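In code, the two measures are one-liners (a trivial sketch for concreteness; the function names are ours):

```python
def hamming_distortion(x, xhat):
    """Cost 0 when the reconstruction matches the source symbol, else 1."""
    return 0 if x == xhat else 1

def squared_error_distortion(x, xhat):
    """Cost grows as the square of the reconstruction error."""
    return (x - xhat) ** 2
```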
Rate–distortion functions
The functions that relate the rate and distortion are found as the solution of the following minimization problem:
$\inf _{Q_{Y\mid X}(y\mid x)}I_{Q}(Y;X){\text{ subject to }}D_{Q}\leq D^{*}.$
Here $Q_{Y\mid X}(y\mid x)$, sometimes called a test channel, is the conditional probability density function (PDF) of the communication channel output (compressed signal) $Y$ for a given input (original signal) $X$, and $I_{Q}(Y;X)$ is the mutual information between $Y$ and $X$ defined as
$I(Y;X)=H(Y)-H(Y\mid X)\,$
where $H(Y)$ and $H(Y\mid X)$ are the entropy of the output signal Y and the conditional entropy of the output signal given the input signal, respectively:
$H(Y)=-\int _{-\infty }^{\infty }P_{Y}(y)\log _{2}(P_{Y}(y))\,dy$
$H(Y\mid X)=-\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }Q_{Y\mid X}(y\mid x)P_{X}(x)\log _{2}(Q_{Y\mid X}(y\mid x))\,dx\,dy.$
The problem can also be formulated as a distortion–rate function, where we find the infimum over achievable distortions for a given rate constraint. The relevant expression is:
$\inf _{Q_{Y\mid X}(y\mid x)}E[D_{Q}[X,Y]]{\text{ subject to }}I_{Q}(Y;X)\leq R.$
The two formulations lead to functions which are inverses of each other.
The mutual information can be understood as a measure of the 'prior' uncertainty the receiver has about the sender's signal (H(Y)), diminished by the uncertainty that is left after receiving information about the sender's signal ($H(Y\mid X)$). Of course, the decrease in uncertainty is due to the communicated amount of information, which is $I\left(Y;X\right)$.
As an example, if there is no communication at all, then $H(Y\mid X)=H(Y)$ and $I(Y;X)=0$. Alternatively, if the communication channel is perfect and the received signal $Y$ is identical to the signal $X$ at the sender, then $H(Y\mid X)=0$ and $I(Y;X)=H(X)=H(Y)$.
In the definition of the rate–distortion function, $D_{Q}$ and $D^{*}$ are the distortion between $X$ and $Y$ for a given $Q_{Y\mid X}(y\mid x)$ and the prescribed maximum distortion, respectively. When we use the mean squared error as distortion measure, we have (for amplitude-continuous signals):
$D_{Q}=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }P_{X,Y}(x,y)(x-y)^{2}\,dx\,dy=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }Q_{Y\mid X}(y\mid x)P_{X}(x)(x-y)^{2}\,dx\,dy.$
As the above equations show, calculating a rate–distortion function requires the stochastic description of the input $X$ in terms of the PDF $P_{X}(x)$, and then aims at finding the conditional PDF $Q_{Y\mid X}(y\mid x)$ that minimizes the rate for a given distortion $D^{*}$. These definitions can be formulated measure-theoretically to account for discrete and mixed random variables as well.
An analytical solution to this minimization problem is often difficult to obtain, except in some instances, for which we next offer two of the best known examples. The rate–distortion function of any source is known to obey several fundamental properties, the most important ones being that it is a continuous, monotonically decreasing convex (U) function; thus the shape of the function in the examples is typical (even measured rate–distortion functions in real life tend to have very similar forms).
Although analytical solutions to this problem are scarce, there are upper and lower bounds to these functions including the famous Shannon lower bound (SLB), which in the case of squared error and memoryless sources, states that for arbitrary sources with finite differential entropy,
$R(D)\geq h(X)-h(D)\,$
where h(D) is the differential entropy of a Gaussian random variable with variance D. This lower bound is extensible to sources with memory and other distortion measures. One important feature of the SLB is that it is asymptotically tight in the low-distortion regime for a wide class of sources, and on some occasions it actually coincides with the rate–distortion function. Shannon lower bounds can generally be found if the distortion between any two numbers can be expressed as a function of the difference between the values of these two numbers.
The Blahut–Arimoto algorithm, co-invented by Richard Blahut, is an elegant iterative technique for numerically obtaining rate–distortion functions of arbitrary finite input/output alphabet sources and much work has been done to extend it to more general problem instances.
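A minimal sketch of one standard form of the iteration (the variable names and the binary example are ours): each value of the Lagrange parameter beta traces out one point on the R(D) curve by alternating between the optimal test channel for the current output marginal and the output marginal induced by that channel.

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, iters=200):
    """One point on the R(D) curve for source p_x and distortion matrix d.

    beta > 0 is the Lagrange multiplier trading rate against distortion.
    """
    q = np.full(d.shape[1], 1.0 / d.shape[1])      # output marginal, start uniform
    for _ in range(iters):
        Q = q * np.exp(-beta * d)                  # unnormalized test channel
        Q /= Q.sum(axis=1, keepdims=True)          # Q(xhat | x)
        q = p_x @ Q                                # induced output marginal
    D = float(np.sum(p_x[:, None] * Q * d))
    R = float(np.sum(p_x[:, None] * Q * np.log2(Q / q)))
    return R, D

p = np.array([0.5, 0.5])                           # uniform binary source
d = np.array([[0.0, 1.0], [1.0, 0.0]])             # Hamming distortion
for beta in (1.0, 2.0, 4.0):
    R, D = blahut_arimoto(p, d, beta)
    print(f"beta={beta}: R={R:.4f} bits at D={D:.4f}")
```

For this symmetric binary source, the printed $(R,D)$ pairs fall on the closed-form curve $1-H_{b}(D)$ given in the Bernoulli example below.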
When working with stationary sources with memory, it is necessary to modify the definition of the rate–distortion function; it must be understood as a limit taken over sequences of increasing length.
$R(D)=\lim _{n\rightarrow \infty }R_{n}(D)$
where
$R_{n}(D)={\frac {1}{n}}\inf _{Q_{Y^{n}\mid X^{n}}\in {\mathcal {Q}}}I(Y^{n},X^{n})$
and
${\mathcal {Q}}=\{Q_{Y^{n}\mid X^{n}}(Y^{n}\mid X^{n},X_{0}):E[d(X^{n},Y^{n})]\leq D\}$
where superscripts denote a complete sequence up to that time and the subscript 0 indicates initial state.
Memoryless (independent) Gaussian source with squared-error distortion
If we assume that $X$ is a Gaussian random variable with variance $\sigma ^{2}$, and if we assume that successive samples of the signal $X$ are stochastically independent (or equivalently, the source is memoryless, or the signal is uncorrelated), we find the following analytical expression for the rate–distortion function:
$R(D)={\begin{cases}{\frac {1}{2}}\log _{2}(\sigma _{x}^{2}/D),&{\text{if }}0\leq D\leq \sigma _{x}^{2}\\0,&{\text{if }}D>\sigma _{x}^{2}.\end{cases}}$ [2]: 310
When plotted, this rate–distortion function divides the rate–distortion plane into two regions: rate–distortion theory tells us that no compression system exists that performs in the region below the curve. The closer a practical compression system operates to this lower bound, the better it performs. As a general rule, the bound can only be attained by increasing the coding block length parameter. Nevertheless, even at unit blocklengths one can often find good (scalar) quantizers that operate at distances from the rate–distortion function that are practically relevant.[2]
This rate–distortion function holds only for Gaussian memoryless sources. It is known that the Gaussian source is the most "difficult" source to encode: for a given mean square error, it requires the greatest number of bits. The performance of a practical compression system working on, say, images, may well fall short of the $R\left(D\right)$ lower bound shown.
Memoryless (independent) Bernoulli source with Hamming distortion
The rate–distortion function of a Bernoulli(p) random variable with Hamming distortion is given by:
$R(D)=\left\{{\begin{matrix}H_{b}(p)-H_{b}(D),&0\leq D\leq \min {(p,1-p)}\\0,&D>\min {(p,1-p)}\end{matrix}}\right.$
where $H_{b}$ denotes the binary entropy function.
Figure: plot of the rate–distortion function for $p=0.5$.
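Evaluating this closed form is straightforward (a small sketch; the function names are ours):

```python
import math

def Hb(p):
    """Binary entropy function, in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bernoulli_rd(D, p):
    """R(D) for a Bernoulli(p) source under Hamming distortion."""
    return Hb(p) - Hb(D) if 0.0 <= D <= min(p, 1 - p) else 0.0

print(bernoulli_rd(0.1, 0.5))   # 1 - Hb(0.1) ≈ 0.5310 bits/symbol
```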
Connecting rate-distortion theory to channel capacity[3]
Suppose we want to transmit information about a source to the user with a distortion not exceeding D. Rate–distortion theory tells us that at least $R(D)$ bits/symbol of information from the source must reach the user. We also know from Shannon's channel coding theorem that if the source entropy is H bits/symbol, and the channel capacity is C (where $C<H$), then $H-C$ bits/symbol will be lost when transmitting this information over the given channel. For the user to have any hope of reconstructing with a maximum distortion D, we must impose the requirement that the information lost in transmission does not exceed the maximum tolerable loss of $H-R(D)$ bits/symbol. This means that the channel capacity must be at least as large as $R(D)$.
See also
• Decorrelation
• Rate–distortion optimization
• Data compression
• Sphere packing
• White noise
• Blahut–Arimoto algorithm
References
1. Blau, Y. & Michaeli, T. "Rethinking Lossy Compression: The Rate-Distortion-Perception Tradeoff". Proceedings of the International Conference on Machine Learning, 2019.
2. Thomas M. Cover, Joy A. Thomas (2006). Elements of Information Theory. John Wiley & Sons, New York.
3. Berger, Toby (1971). Rate Distortion Theory: A Mathematical Basis for Data Compression. Prentice Hall. LCCN 75-148254.
External links
• PyRated: Python code for basic calculations in rate-distortion theory.
• VcDemo Image and Video Compression Learning Tool