Semiorder
In order theory, a branch of mathematics, a semiorder is a type of ordering for items with numerical scores, where items with widely differing scores are compared by their scores and where scores within a given margin of error are deemed incomparable. Semiorders were introduced and applied in mathematical psychology by Duncan Luce (1956) as a model of human preference. They generalize strict weak orderings, in which items with equal scores may be tied but there is no margin of error. They are a special case of partial orders and of interval orders, and can be characterized among the partial orders by additional axioms, or by two forbidden four-item suborders.
Utility theory
The original motivation for introducing semiorders was to model human preferences without assuming that incomparability is a transitive relation. For instance, suppose that $x$, $y$, and $z$ represent three quantities of the same material, and that $x$ is larger than $z$ by the smallest amount that is perceptible as a difference, while $y$ is halfway between the two of them. Then, a person who desires more of the material would prefer $x$ to $z$, but would not have a preference between the other two pairs. In this example, $x$ and $y$ are incomparable in the preference ordering, as are $y$ and $z$, but $x$ and $z$ are comparable, so incomparability does not obey the transitive law.[1]
To model this mathematically, suppose that objects are given numerical utility values, by letting $u$ be any utility function that maps the objects to be compared (a set $X$) to real numbers. Set a numerical threshold (which may be normalized to 1) such that utilities within that threshold of each other are declared incomparable, and define a binary relation $<$ on the objects, by setting $x<y$ whenever $u(x)\leq u(y)-1$. Then $(X,<)$ forms a semiorder.[2] If, instead, objects are declared comparable whenever their utilities differ, the result would be a strict weak ordering, for which incomparability of objects (based on equality of numbers) would be transitive.[1]
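As a concrete illustration, here is a minimal Python sketch of this construction (the utility values come from the example above; the function name is our own choice), implementing the relation $x<y$ whenever $u(x)\leq u(y)-1$:

```python
# A minimal sketch: Luce's threshold construction with the margin of error
# normalized to 1. "semiorder_from_utility" is an illustrative name.
def semiorder_from_utility(u, threshold=1.0):
    """Return the strict relation x < y  iff  u(x) <= u(y) - threshold."""
    def less(a, b):
        return u[a] <= u[b] - threshold
    return less

# The perceptibility example above: x exceeds z by exactly one unit,
# and y sits halfway between them.
u = {"x": 1.0, "y": 0.5, "z": 0.0}
lt = semiorder_from_utility(u)
print(lt("z", "x"))                 # True:  z < x, a perceptible difference
print(lt("z", "y"), lt("y", "z"))   # False False: y and z are incomparable
print(lt("y", "x"), lt("x", "y"))   # False False: x and y are incomparable
```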
Axiomatics
Forbidden: two mutually incomparable two-point linear orders
Forbidden: three linearly ordered points and a fourth incomparable point
A semiorder, defined from a utility function as above, is a partially ordered set with the following two properties:[3]
• Whenever two disjoint pairs of elements are comparable, for instance as $w<x$ and $y<z$, there must be an additional comparison among these elements, because $u(w)\leq u(y)$ would imply $w<z$ while $u(w)\geq u(y)$ would imply $y<x$. Therefore, it is impossible to have two mutually incomparable two-point linear orders.[3]
• If three elements form a linear ordering $w<x<y$, then every fourth point $z$ must be comparable to at least one of them, because $u(z)\leq u(x)$ would imply $z<y$ while $u(z)\geq u(x)$ would imply $w<z$, in either case showing that $z$ is comparable to $w$ or to $y$. So it is impossible to have a three-point linear order with a fourth incomparable point.[3]
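These two prohibitions, together with the partial-order axioms, can be checked by brute force on a finite relation. A minimal Python sketch (the function name and the encoding of the relation as a set of pairs are our own choices):

```python
# A brute-force test of a finite strict relation (a set of (smaller, larger)
# pairs) against the partial-order axioms and the two four-point prohibitions.
from itertools import permutations

def is_semiorder(elements, less):
    lt = lambda a, b: (a, b) in less
    # Partial order: irreflexive, asymmetric, transitive.
    if any(lt(a, a) for a in elements):
        return False
    if any(lt(a, b) and lt(b, a) for a, b in permutations(elements, 2)):
        return False
    if any(lt(a, b) and lt(b, c) and not lt(a, c)
           for a, b, c in permutations(elements, 3)):
        return False
    for w, x, y, z in permutations(elements, 4):
        # No two mutually incomparable two-point linear orders.
        if lt(w, x) and lt(y, z) and not (lt(w, z) or lt(y, x)):
            return False
        # No three-point chain w < x < y with z incomparable to both ends.
        if lt(w, x) and lt(x, y) and not (
                lt(z, w) or lt(w, z) or lt(z, y) or lt(y, z)):
            return False
    return True

print(is_semiorder("abcd", {("a", "b"), ("c", "d")}))              # False: 2 + 2
print(is_semiorder("abcd", {("a", "b"), ("a", "c"), ("a", "d")}))  # True
```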
Conversely, every finite partial order that avoids the two forbidden four-point orderings described above can be given utility values making it into a semiorder.[4] Therefore, rather than being a consequence of a definition in terms of utility, these forbidden orderings, or equivalent systems of axioms, can be taken as a combinatorial definition of semiorders.[5] If a semiorder on $n$ elements is given only in terms of the order relation between its pairs of elements, obeying these axioms, then it is possible to construct a utility function that represents the order in time $O(n^{2})$, where the $O$ is an instance of big O notation.[6]
For orderings on infinite sets of elements, the orderings that can be defined by utility functions and the orderings that can be defined by forbidden four-point orders differ from each other. For instance, if a semiorder $(X,<)$ (as defined by forbidden orders) includes an uncountable totally ordered subset then there do not exist sufficiently many sufficiently well-spaced real numbers for it to be representable by a utility function. Fishburn (1973) supplies a precise characterization of the semiorders that may be defined numerically.[7]
Relation to other kinds of order
Partial orders
One may define a partial order $(X,\leq )$ from a semiorder $(X,<)$ by declaring that $x\leq y$ whenever either $x<y$ or $x=y$. Of the axioms that a partial order is required to obey, reflexivity ($x\leq x$) follows automatically from this definition. Antisymmetry (if $x\leq y$ and $y\leq x$ then $x=y$) follows from the first semiorder axiom. Transitivity (if $x\leq y$ and $y\leq z$ then $x\leq z$) follows from the second semiorder axiom. Therefore, the binary relation $(X,\leq )$ defined in this way meets the three requirements of a partial order that it be reflexive, antisymmetric, and transitive.
Conversely, suppose that $(X,\leq )$ is a partial order that has been constructed in this way from a semiorder. Then the semiorder may be recovered by declaring that $x<y$ whenever $x\leq y$ and $x\neq y$. Not every partial order leads to a semiorder in this way, however: The first of the semiorder axioms listed above follows automatically from the axioms defining a partial order, but the others do not. A partial order that includes four elements forming two two-element chains would lead to a relation $(X,<)$ that violates the second semiorder axiom, and a partial order that includes four elements forming a three-element chain and an unrelated item would violate the third semiorder axiom (cf. the forbidden configurations pictured under Axiomatics above).
Weak orders
Every strict weak ordering < is also a semiorder. More particularly, transitivity of < and transitivity of incomparability with respect to < together imply the above axiom 2, while transitivity of incomparability alone implies axiom 3. The semiorder shown in the top image is not a strict weak ordering, since the rightmost vertex is incomparable to its two closest left neighbors, but they are comparable.
Interval orders
The semiorder defined from a utility function $u$ may equivalently be defined as the interval order defined by the intervals $[u(x),u(x)+1]$,[8] so every semiorder is an example of an interval order. A relation is a semiorder if, and only if, it can be obtained as an interval order of unit length intervals $(\ell _{i},\ell _{i}+1)$.
Quasitransitive relations
According to Amartya K. Sen,[9] semiorders were examined by Dean T. Jamison and Lawrence J. Lau[10] and found to be a special case of quasitransitive relations. In fact, every semiorder is quasitransitive,[11] and quasitransitivity is invariant to adding all pairs of incomparable items.[12] Removing all non-vertical red lines from the topmost image results in a Hasse diagram for a relation that is still quasitransitive, but violates both axioms 2 and 3; this relation might no longer be useful as a preference ordering.
Combinatorial enumeration
The number of distinct semiorders on $n$ unlabeled items is given by the Catalan numbers[13]
${\frac {1}{n+1}}{\binom {2n}{n}},$
while the number of semiorders on $n$ labeled items is given by the sequence[14]
1, 1, 3, 19, 183, 2371, 38703, 763099, 17648823, ... (sequence A006531 in the OEIS)
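As a quick sanity check of the unlabeled count, the Catalan formula can be evaluated directly; a short Python sketch:

```python
# Unlabeled semiorders on n items are counted by the Catalan numbers.
from math import comb

print([comb(2 * n, n) // (n + 1) for n in range(1, 8)])
# [1, 2, 5, 14, 42, 132, 429]
```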
Other results
Any finite semiorder has order dimension at most three.[15]
Among all partial orders with a fixed number of elements and a fixed number of comparable pairs, the partial orders that have the largest number of linear extensions are semiorders.[16]
Semiorders are known to obey the 1/3–2/3 conjecture: in any finite semiorder that is not a total order, there exists a pair of elements $x$ and $y$ such that $x$ appears earlier than $y$ in between 1/3 and 2/3 of the linear extensions of the semiorder.[3]
The set of semiorders on an $n$-element set is well-graded: if two semiorders on the same set differ from each other by the addition or removal of $k$ order relations, then it is possible to find a path of $k$ steps from the first semiorder to the second one, in such a way that each step of the path adds or removes a single order relation and each intermediate state in the path is itself a semiorder.[17]
The incomparability graphs of semiorders are called indifference graphs, and are a special case of the interval graphs.[18]
Notes
1. Luce (1956), p. 179.
2. Luce (1956), Theorem 3 describes a more general situation in which the threshold for comparability between two utilities is a function of the utility rather than being identically 1; however, this does not lead to a different class of orderings.
3. Brightwell (1989).
4. This result is typically credited to Scott & Suppes (1958); see, e.g., Rabinovitch (1977).
5. Luce (1956, p. 181) used four axioms, the first two of which combine asymmetry and the definition of incomparability, while each of the remaining two is equivalent to one of the above prohibition properties.
6. Avery (1992).
7. Fishburn (1973).
8. Fishburn (1970).
9. Sen (1971, Section 10, p. 314) Since Luce modelled indifference between x and y as "neither xRy nor yRx", while Sen modelled it as "both xRy and yRx", Sen's remark on p.314 is likely to mean the latter property.
10. Jamison & Lau (1970).
11. Since it is transitive.
12. More generally, to adding any symmetric relation.
13. Dean & Keller (1968); Kim & Roush (1978)
14. Chandon, Lemaire & Pouget (1978).
15. Rabinovitch (1978).
16. Fishburn & Trotter (1992).
17. Doignon & Falmagne (1997).
18. Roberts (1969).
References
• Avery, Peter (1992), "An algorithmic proof that semiorders are representable", Journal of Algorithms, 13 (1): 144–147, doi:10.1016/0196-6774(92)90010-A, MR 1146337.
• Brightwell, Graham R. (1989), "Semiorders and the 1/3–2/3 conjecture", Order, 5 (4): 369–380, doi:10.1007/BF00353656, S2CID 86860160.
• Chandon, J.-L.; Lemaire, J.; Pouget, J. (1978), "Dénombrement des quasi-ordres sur un ensemble fini", Centre de Mathématique Sociale. École Pratique des Hautes Études. Mathématiques et Sciences Humaines (62): 61–80, 83, MR 0517680.
• Dean, R. A.; Keller, Gordon (1968), "Natural partial orders", Canadian Journal of Mathematics, 20: 535–554, doi:10.4153/CJM-1968-055-7, MR 0225686.
• Doignon, Jean-Paul; Falmagne, Jean-Claude (1997), "Well-graded families of relations", Discrete Mathematics, 173 (1–3): 35–44, doi:10.1016/S0012-365X(96)00095-7, MR 1468838.
• Fishburn, Peter C. (1970), "Intransitive indifference with unequal indifference intervals", Journal of Mathematical Psychology, 7: 144–149, doi:10.1016/0022-2496(70)90062-3, MR 0253942.
• Fishburn, Peter C. (1973), "Interval representations for interval orders and semiorders", Journal of Mathematical Psychology, 10: 91–105, doi:10.1016/0022-2496(73)90007-2, MR 0316322.
• Fishburn, Peter C.; Trotter, W. T. (1992), "Linear extensions of semiorders: a maximization problem", Discrete Mathematics, 103 (1): 25–40, doi:10.1016/0012-365X(92)90036-F, MR 1171114.
• Jamison, Dean T.; Lau, Lawrence J. (Sep 1973), "Semiorders and the Theory of Choice", Econometrica, 41 (5): 901–912, doi:10.2307/1913813, JSTOR 1913813.
• Jamison, Dean T.; Lau, Lawrence J. (Sep–Nov 1975), "Semiorders and the Theory of Choice: A Correction", Econometrica, 43 (5–6): 979–980, doi:10.2307/1911339, JSTOR 1911339.
• Jamison, Dean T.; Lau, Lawrence J. (July 1970), Semiorders, Revealed Preference, and the Theory of the Consumer Demand, Stanford University, Institute for Mathematical Studies in the Social Sciences. Presented at the World Economics Congress, Cambridge, Sep 1970.
• Jamison, Dean T.; Lau, Lawrence J. (October 1977), "The nature of equilibrium with semiordered preferences", Econometrica, 45 (7): 1595–1605, doi:10.2307/1913952, JSTOR 1913952.
• Kim, K. H.; Roush, F. W. (1978), "Enumeration of isomorphism classes of semiorders", Journal of Combinatorics, Information & System Sciences, 3 (2): 58–61, MR 0538212.
• Luce, R. Duncan (1956), "Semiorders and a theory of utility discrimination" (PDF), Econometrica, 24 (2): 178–191, doi:10.2307/1905751, JSTOR 1905751, MR 0078632.
• Rabinovitch, Issie (1977), "The Scott-Suppes theorem on semiorders", Journal of Mathematical Psychology, 15 (2): 209–212, doi:10.1016/0022-2496(77)90030-x, MR 0437404.
• Rabinovitch, Issie (1978), "The dimension of semiorders", Journal of Combinatorial Theory, Series A, 25 (1): 50–61, doi:10.1016/0097-3165(78)90030-4, MR 0498294.
• Roberts, Fred S. (1969), "Indifference graphs", Proof Techniques in Graph Theory (Proc. Second Ann Arbor Graph Theory Conf., Ann Arbor, Mich., 1968), Academic Press, New York, pp. 139–146, MR 0252267.
• Scott, Dana; Suppes, Patrick (1958), "Foundational aspects of theories of measurement", The Journal of Symbolic Logic, 23 (2): 113–128, doi:10.2307/2964389, JSTOR 2964389, MR 0115919.
• Sen, Amartya K. (July 1971), "Choice Functions and Revealed Preference" (PDF), The Review of Economic Studies, 38 (3): 307–317, doi:10.2307/2296384, JSTOR 2296384.
Further reading
• Pirlot, M.; Vincke, Ph. (1997), Semiorders: Properties, representations, applications, Theory and Decision Library. Series B: Mathematical and Statistical Methods, vol. 36, Dordrecht: Kluwer Academic Publishers Group, ISBN 0-7923-4617-3, MR 1472236.
Semiorthogonal decomposition
In mathematics, a semiorthogonal decomposition is a way to divide a triangulated category into simpler pieces. One way to produce a semiorthogonal decomposition is from an exceptional collection, a special sequence of objects in a triangulated category. For an algebraic variety X, it has been fruitful to study semiorthogonal decompositions of the bounded derived category of coherent sheaves, ${\text{D}}^{\text{b}}(X)$.
Semiorthogonal decomposition
Alexei Bondal and Mikhail Kapranov (1989) defined a semiorthogonal decomposition of a triangulated category ${\mathcal {T}}$ to be a sequence ${\mathcal {A}}_{1},\ldots ,{\mathcal {A}}_{n}$ of strictly full triangulated subcategories such that:[1]
• for all $1\leq i<j\leq n$ and all objects $A_{i}\in {\mathcal {A}}_{i}$ and $A_{j}\in {\mathcal {A}}_{j}$, every morphism from $A_{j}$ to $A_{i}$ is zero. That is, there are "no morphisms from right to left".
• ${\mathcal {T}}$ is generated by ${\mathcal {A}}_{1},\ldots ,{\mathcal {A}}_{n}$. That is, the smallest strictly full triangulated subcategory of ${\mathcal {T}}$ containing ${\mathcal {A}}_{1},\ldots ,{\mathcal {A}}_{n}$ is equal to ${\mathcal {T}}$.
The notation ${\mathcal {T}}=\langle {\mathcal {A}}_{1},\ldots ,{\mathcal {A}}_{n}\rangle $ is used for a semiorthogonal decomposition.
Having a semiorthogonal decomposition implies that every object of ${\mathcal {T}}$ has a canonical "filtration" whose graded pieces are (successively) in the subcategories ${\mathcal {A}}_{1},\ldots ,{\mathcal {A}}_{n}$. That is, for each object T of ${\mathcal {T}}$, there is a sequence
$0=T_{n}\to T_{n-1}\to \cdots \to T_{0}=T$
of morphisms in ${\mathcal {T}}$ such that the cone of $T_{i}\to T_{i-1}$ is in ${\mathcal {A}}_{i}$, for each i. Moreover, this sequence is unique up to a unique isomorphism.[2]
One can also consider "orthogonal" decompositions of a triangulated category, by requiring that there are no morphisms from ${\mathcal {A}}_{i}$ to ${\mathcal {A}}_{j}$ for any $i\neq j$. However, that property is too strong for most purposes. For example, for an (irreducible) smooth projective variety X over a field, the bounded derived category ${\text{D}}^{\text{b}}(X)$ of coherent sheaves never has a nontrivial orthogonal decomposition, whereas it may have a semiorthogonal decomposition, by the examples below.
A semiorthogonal decomposition of a triangulated category may be considered as analogous to a finite filtration of an abelian group. Alternatively, one may consider a semiorthogonal decomposition ${\mathcal {T}}=\langle {\mathcal {A}},{\mathcal {B}}\rangle $ as closer to a split exact sequence, because the exact sequence $0\to {\mathcal {A}}\to {\mathcal {T}}\to {\mathcal {T}}/{\mathcal {A}}\to 0$ of triangulated categories is split by the subcategory ${\mathcal {B}}\subset {\mathcal {T}}$, mapping isomorphically to ${\mathcal {T}}/{\mathcal {A}}$.
Using that observation, a semiorthogonal decomposition ${\mathcal {T}}=\langle {\mathcal {A}}_{1},\ldots ,{\mathcal {A}}_{n}\rangle $ implies a direct sum splitting of Grothendieck groups:
$K_{0}({\mathcal {T}})\cong K_{0}({\mathcal {A}}_{1})\oplus \cdots \oplus K_{0}({\mathcal {A_{n}}}).$
For example, when ${\mathcal {T}}={\text{D}}^{\text{b}}(X)$ is the bounded derived category of coherent sheaves on a smooth projective variety X, $K_{0}({\mathcal {T}})$ can be identified with the Grothendieck group $K_{0}(X)$ of algebraic vector bundles on X. In this geometric situation, using that ${\text{D}}^{\text{b}}(X)$ comes from a dg-category, a semiorthogonal decomposition actually gives a splitting of all the algebraic K-groups of X:
$K_{i}(X)\cong K_{i}({\mathcal {A}}_{1})\oplus \cdots \oplus K_{i}({\mathcal {A_{n}}})$
for all i.[3]
Admissible subcategory
One way to produce a semiorthogonal decomposition is from an admissible subcategory. By definition, a full triangulated subcategory ${\mathcal {A}}\subset {\mathcal {T}}$ is left admissible if the inclusion functor $i\colon {\mathcal {A}}\to {\mathcal {T}}$ has a left adjoint functor, written $i^{*}$. Likewise, ${\mathcal {A}}\subset {\mathcal {T}}$ is right admissible if the inclusion has a right adjoint, written $i^{!}$, and it is admissible if it is both left and right admissible.
A right admissible subcategory ${\mathcal {B}}\subset {\mathcal {T}}$ determines a semiorthogonal decomposition
${\mathcal {T}}=\langle {\mathcal {B}}^{\perp },{\mathcal {B}}\rangle $,
where
${\mathcal {B}}^{\perp }:=\{T\in {\mathcal {T}}:\operatorname {Hom} ({\mathcal {B}},T)=0\}$
is the right orthogonal of ${\mathcal {B}}$ in ${\mathcal {T}}$.[2] Conversely, every semiorthogonal decomposition ${\mathcal {T}}=\langle {\mathcal {A}},{\mathcal {B}}\rangle $ arises in this way, in the sense that ${\mathcal {B}}$ is right admissible and ${\mathcal {A}}={\mathcal {B}}^{\perp }$. Likewise, for any semiorthogonal decomposition ${\mathcal {T}}=\langle {\mathcal {A}},{\mathcal {B}}\rangle $, the subcategory ${\mathcal {A}}$ is left admissible, and ${\mathcal {B}}={}^{\perp }{\mathcal {A}}$, where
${}^{\perp }{\mathcal {A}}:=\{T\in {\mathcal {T}}:\operatorname {Hom} (T,{\mathcal {A}})=0\}$
is the left orthogonal of ${\mathcal {A}}$.
If ${\mathcal {T}}$ is the bounded derived category of a smooth projective variety over a field k, then every left or right admissible subcategory of ${\mathcal {T}}$ is in fact admissible.[4] By results of Bondal and Michel Van den Bergh, this holds more generally for ${\mathcal {T}}$ any regular proper triangulated category that is idempotent-complete.[5]
Moreover, for a regular proper idempotent-complete triangulated category ${\mathcal {T}}$, a full triangulated subcategory is admissible if and only if it is regular and idempotent-complete. These properties are intrinsic to the subcategory.[6] For example, for X a smooth projective variety and Y a subvariety not equal to X, the subcategory of ${\text{D}}^{\text{b}}(X)$ of objects supported on Y is not admissible.
Exceptional collection
Let k be a field, and let ${\mathcal {T}}$ be a k-linear triangulated category. An object E of ${\mathcal {T}}$ is called exceptional if Hom(E,E) = k and Hom(E,E[t]) = 0 for all nonzero integers t, where [t] is the shift functor in ${\mathcal {T}}$. (In the derived category of a smooth complex projective variety X, the first-order deformation space of an object E is $\operatorname {Ext} _{X}^{1}(E,E)\cong \operatorname {Hom} (E,E[1])$, and so an exceptional object is in particular rigid. It follows, for example, that there are at most countably many exceptional objects in ${\text{D}}^{\text{b}}(X)$, up to isomorphism. That helps to explain the name.)
The triangulated subcategory generated by an exceptional object E is equivalent to the derived category ${\text{D}}^{\text{b}}(k)$ of finite-dimensional k-vector spaces, the simplest triangulated category in this context. (For example, every object of that subcategory is isomorphic to a finite direct sum of shifts of E.)
Alexei Gorodentsev and Alexei Rudakov (1987) defined an exceptional collection to be a sequence of exceptional objects $E_{1},\ldots ,E_{m}$ such that $\operatorname {Hom} (E_{j},E_{i}[t])=0$ for all i < j and all integers t. (That is, there are "no morphisms from right to left".) In a proper triangulated category ${\mathcal {T}}$ over k, such as the bounded derived category of coherent sheaves on a smooth projective variety, every exceptional collection generates an admissible subcategory, and so it determines a semiorthogonal decomposition:
${\mathcal {T}}=\langle {\mathcal {A}},E_{1},\ldots ,E_{m}\rangle ,$
where ${\mathcal {A}}=\langle E_{1},\ldots ,E_{m}\rangle ^{\perp }$, and $E_{i}$ denotes the full triangulated subcategory generated by the object $E_{i}$.[7] An exceptional collection is called full if the subcategory ${\mathcal {A}}$ is zero. (Thus a full exceptional collection breaks the whole triangulated category up into finitely many copies of ${\text{D}}^{\text{b}}(k)$.)
In particular, if X is a smooth projective variety such that ${\text{D}}^{\text{b}}(X)$ has a full exceptional collection $E_{1},\ldots ,E_{m}$, then the Grothendieck group of algebraic vector bundles on X is the free abelian group on the classes of these objects:
$K_{0}(X)\cong \mathbb {Z} \{E_{1},\ldots ,E_{m}\}.$
A smooth complex projective variety X with a full exceptional collection must have trivial Hodge theory, in the sense that $h^{p,q}(X)=0$ for all $p\neq q$; moreover, the cycle class map $CH^{*}(X)\otimes \mathbb {Q} \to H^{*}(X,\mathbb {Q} )$ must be an isomorphism.[8]
Examples
The original example of a full exceptional collection was discovered by Alexander Beilinson (1978): the derived category of projective space over a field has the full exceptional collection
${\text{D}}^{\text{b}}(\mathbf {P} ^{n})=\langle O,O(1),\ldots ,O(n)\rangle $,
where O(j) for integers j are the line bundles on projective space.[9] Full exceptional collections have also been constructed on all smooth projective toric varieties, del Pezzo surfaces, many projective homogeneous varieties, and some other Fano varieties.[10]
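For instance, combining Beilinson's collection with the Grothendieck-group splitting described earlier gives (a standard computation, stated here for illustration)
$K_{0}(\mathbf {P} ^{n})\cong K_{0}(\langle O\rangle )\oplus \cdots \oplus K_{0}(\langle O(n)\rangle )\cong \mathbb {Z} ^{n+1},$
with basis the classes $[O],[O(1)],\ldots ,[O(n)]$, since each $O(j)$ generates a copy of ${\text{D}}^{\text{b}}(k)$, whose Grothendieck group is $\mathbb {Z} $.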
More generally, if X is a smooth projective variety of positive dimension such that the coherent sheaf cohomology groups $H^{i}(X,O_{X})$ are zero for i > 0, then the object $O_{X}$ in ${\text{D}}^{\text{b}}(X)$ is exceptional, and so it induces a nontrivial semiorthogonal decomposition ${\text{D}}^{\text{b}}(X)=\langle (O_{X})^{\perp },O_{X}\rangle $. This applies to every Fano variety over a field of characteristic zero, for example. It also applies to some other varieties, such as Enriques surfaces and some surfaces of general type.
A source of examples is Orlov's blowup formula concerning the blowup $X=\operatorname {Bl} _{Z}(Y)$ of a scheme $Y$ at a codimension $k$ locally complete intersection subscheme $Z$ with exceptional locus $\iota \colon E\simeq \mathbb {P} _{Z}(N_{Z/Y})\to X$. There is a semiorthogonal decomposition $D^{b}(X)=\langle \Phi _{1-k}(D^{b}(Z)),\ldots ,\Phi _{-1}(D^{b}(Z)),\pi ^{*}(D^{b}(Y))\rangle $, where $\Phi _{i}\colon D^{b}(Z)\to D^{b}(X)$ is the functor $\Phi _{i}(-)=\iota _{*}({\mathcal {O}}_{E}(i)\otimes p^{*}(-))$, $p\colon E\to Z$ is the natural projection, and $\pi \colon X\to Y$ is the blowdown map.[11]
While these examples encompass a large number of well-studied derived categories, many naturally occurring triangulated categories are "indecomposable". In particular, for a smooth projective variety X whose canonical bundle $K_{X}$ is basepoint-free, every semiorthogonal decomposition ${\text{D}}^{\text{b}}(X)=\langle {\mathcal {A}},{\mathcal {B}}\rangle $ is trivial in the sense that ${\mathcal {A}}$ or ${\mathcal {B}}$ must be zero.[12] For example, this applies to every variety which is Calabi–Yau in the sense that its canonical bundle is trivial.
See also
• Derived noncommutative algebraic geometry
Notes
1. Huybrechts 2006, Definition 1.59.
2. Bondal & Kapranov 1990, Proposition 1.5.
3. Orlov 2016, Section 1.2.
4. Kuznetsov 2007, Lemmas 2.10, 2.11, and 2.12.
5. Orlov 2016, Theorem 3.16.
6. Orlov 2016, Propositions 3.17 and 3.20.
7. Huybrechts 2006, Lemma 1.58.
8. Marcolli & Tabuada 2015, Proposition 1.9.
9. Huybrechts 2006, Corollary 8.29.
10. Kuznetsov 2014, Section 2.2.
11. Orlov, D. O. (1993), "Projective bundles, monoidal transformations, and derived categories of coherent sheaves", Russian Academy of Sciences. Izvestiya Mathematics, 41 (1): 133–141, doi:10.1070/im1993v041n01abeh002182, ISSN 1064-5632.
12. Kuznetsov 2014, Section 2.5.
References
• Bondal, Alexei; Kapranov, Mikhail (1990), "Representable functors, Serre functors, and reconstructions", Mathematics of the USSR-Izvestiya, 35: 519–541, doi:10.1070/IM1990v035n03ABEH000716, MR 1039961
• Huybrechts, Daniel (2006), Fourier–Mukai transforms in algebraic geometry, Oxford University Press, ISBN 978-0199296866, MR 2244106
• Kuznetsov, Alexander (2007), "Homological projective duality", Publications Mathématiques de l'IHÉS, 105: 157–220, arXiv:math/0507292, doi:10.1007/s10240-007-0006-8, MR 2354207
• Kuznetsov, Alexander (2014), "Semiorthogonal decompositions in algebraic geometry", Proceedings of the International Congress of Mathematicians (Seoul, 2014), vol. 2, Seoul: Kyung Moon Sa, pp. 635–660, arXiv:1404.3143, MR 3728631
• Marcolli, Matilde; Tabuada, Gonçalo (2015), "From exceptional collections to motivic decompositions via noncommutative motives", Journal für die reine und angewandte Mathematik, 701: 153–167, arXiv:1202.6297, doi:10.1515/crelle-2013-0027, MR 3331729
• Orlov, Dmitri (2016), "Smooth and proper noncommutative schemes and gluing of DG categories", Advances in Mathematics, 302: 59–105, arXiv:1402.7364, doi:10.1016/j.aim.2016.07.014, MR 3545926
Semiparametric model
In statistics, a semiparametric model is a statistical model that has parametric and nonparametric components.
A statistical model is a parameterized family of distributions: $\{P_{\theta }:\theta \in \Theta \}$ indexed by a parameter $\theta $.
• A parametric model is a model in which the indexing parameter $\theta $ is a vector in $k$-dimensional Euclidean space, for some nonnegative integer $k$.[1] Thus, $\theta $ is finite-dimensional, and $\Theta \subseteq \mathbb {R} ^{k}$.
• With a nonparametric model, the set of possible values of the parameter $\theta $ is a subset of some space $V$, which is not necessarily finite-dimensional. For example, we might consider the set of all distributions with mean 0. Such spaces are vector spaces with topological structure, but may not be finite-dimensional as vector spaces. Thus, $\Theta \subseteq V$ for some possibly infinite-dimensional space $V$.
• With a semiparametric model, the parameter has both a finite-dimensional component and an infinite-dimensional component (often a real-valued function defined on the real line). Thus, $\Theta \subseteq \mathbb {R} ^{k}\times V$, where $V$ is an infinite-dimensional space.
It may appear at first that semiparametric models include nonparametric models, since they have an infinite-dimensional as well as a finite-dimensional component. However, a semiparametric model is considered to be "smaller" than a completely nonparametric model because we are often interested only in the finite-dimensional component of $\theta $. That is, the infinite-dimensional component is regarded as a nuisance parameter.[2] In nonparametric models, by contrast, the primary interest is in estimating the infinite-dimensional parameter. Thus the estimation task is statistically harder in nonparametric models.
These models often use smoothing or kernels.
Example
A well-known example of a semiparametric model is the Cox proportional hazards model.[3] If we are interested in studying the time $T$ to an event such as death due to cancer or failure of a light bulb, the Cox model specifies the following distribution function for $T$:
$F(t)=1-\exp \left(-\int _{0}^{t}\lambda _{0}(u)e^{\beta x}du\right),$
where $x$ is the covariate vector and $\theta =(\beta ,\lambda _{0}(u))$ collects the unknown parameters. Here $\beta $ is finite-dimensional and is of interest; $\lambda _{0}(u)$ is an unknown non-negative function of time (known as the baseline hazard function) and is often a nuisance parameter. The set of possible candidates for $\lambda _{0}(u)$ is infinite-dimensional.
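To illustrate how the two components of $\theta $ enter the formula, here is a minimal Python sketch that evaluates $F(t)$ for one chosen baseline hazard; the particular $\lambda _{0}$, $\beta $, and $x$ are illustrative assumptions, since in the actual semiparametric problem $\lambda _{0}$ is unknown:

```python
# Evaluate the Cox distribution function F(t) for one *chosen* baseline
# hazard. In the real semiparametric problem lambda0 is an unknown,
# infinite-dimensional nuisance parameter; fixing it here is illustrative.
import numpy as np
from scipy.integrate import quad

beta = np.array([0.5])                # finite-dimensional parameter of interest
x = np.array([1.2])                   # covariate vector
lambda0 = lambda u: 0.1 + 0.02 * u    # one candidate baseline hazard (nuisance)

def F(t):
    integral, _ = quad(lambda u: lambda0(u) * np.exp(beta @ x), 0.0, t)
    return 1.0 - np.exp(-integral)

print(F(5.0))   # probability that the event occurs by t = 5 under this lambda0
```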
See also
• Semiparametric regression
• Statistical model
• Generalized method of moments
Notes
1. Bickel, P. J.; Klaassen, C. A. J.; Ritov, Y.; Wellner, J. A. (2006), "Semiparametrics", in Kotz, S.; et al. (eds.), Encyclopedia of Statistical Sciences, Wiley.
2. Oakes, D. (2006), "Semi-parametric models", in Kotz, S.; et al. (eds.), Encyclopedia of Statistical Sciences, Wiley.
3. Balakrishnan, N.; Rao, C. R. (2004). Handbook of Statistics 23: Advances in Survival Analysis. Elsevier. p. 126.
References
• Bickel, P. J.; Klaassen, C. A. J.; Ritov, Y.; Wellner, J. A. (1998), Efficient and Adaptive Estimation for Semiparametric Models, Springer
• Härdle, Wolfgang; Müller, Marlene; Sperlich, Stefan; Werwatz, Axel (2004), Nonparametric and Semiparametric Models, Springer
• Kosorok, Michael R. (2008), Introduction to Empirical Processes and Semiparametric Inference, Springer
• Tsiatis, Anastasios A. (2006), Semiparametric Theory and Missing Data, Springer
• Begun, Janet M.; Hall, W. J.; Huang, Wei-Min; Wellner, Jon A. (1983), "Information and asymptotic efficiency in parametric–nonparametric models", Annals of Statistics, 11 (2): 432–452
Maximum likelihood estimation
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate.[1] The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.[2][3][4]
If the likelihood function is differentiable, the derivative test for finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, the ordinary least squares estimator for a linear regression model maximizes the likelihood when the random errors are assumed to have normal distributions with the same variance.[5]
From the perspective of Bayesian inference, MLE is generally equivalent to maximum a posteriori (MAP) estimation with uniform prior distributions (or a normal prior distribution with a standard deviation of infinity). In frequentist inference, MLE is a special case of an extremum estimator, with the objective function being the likelihood.
Principles
We model a set of observations as a random sample from an unknown joint probability distribution which is expressed in terms of a set of parameters. The goal of maximum likelihood estimation is to determine the parameters for which the observed data have the highest joint probability. We write the parameters governing the joint distribution as a vector $\;\theta =\left[\theta _{1},\,\theta _{2},\,\ldots ,\,\theta _{k}\right]^{\mathsf {T}}\;$ so that this distribution falls within a parametric family $\;\{f(\cdot \,;\theta )\mid \theta \in \Theta \}\;,$ where $\,\Theta \,$ is called the parameter space, a finite-dimensional subset of Euclidean space. Evaluating the joint density at the observed data sample $\;\mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})\;$ gives a real-valued function,
${\mathcal {L}}_{n}(\theta )={\mathcal {L}}_{n}(\theta ;\mathbf {y} )=f_{n}(\mathbf {y} ;\theta )\;,$
which is called the likelihood function. For independent and identically distributed random variables, $f_{n}(\mathbf {y} ;\theta )$ will be the product of univariate density functions:
$f_{n}(\mathbf {y} ;\theta )=\prod _{k=1}^{n}\,f_{k}^{\mathsf {univar}}(y_{k};\theta )~.$
The goal of maximum likelihood estimation is to find the values of the model parameters that maximize the likelihood function over the parameter space,[6] that is
${\hat {\theta }}={\underset {\theta \in \Theta }{\operatorname {arg\;max} }}\,{\mathcal {L}}_{n}(\theta \,;\mathbf {y} )~.$
Intuitively, this selects the parameter values that make the observed data most probable. The specific value $~{\hat {\theta }}={\hat {\theta }}_{n}(\mathbf {y} )\in \Theta ~$ that maximizes the likelihood function $\,{\mathcal {L}}_{n}\,$ is called the maximum likelihood estimate. Further, if the function $\;{\hat {\theta }}_{n}:\mathbb {R} ^{n}\to \Theta \;$ so defined is measurable, then it is called the maximum likelihood estimator. It is generally a function defined over the sample space, i.e. taking a given sample as its argument. A sufficient but not necessary condition for its existence is for the likelihood function to be continuous over a parameter space $\,\Theta \,$ that is compact.[7] For an open $\,\Theta \,$ the likelihood function may increase without ever reaching a supremum value.
In practice, it is often convenient to work with the natural logarithm of the likelihood function, called the log-likelihood:
$\ell (\theta \,;\mathbf {y} )=\ln {\mathcal {L}}_{n}(\theta \,;\mathbf {y} )~.$
Since the logarithm is a monotonic function, the maximum of $\;\ell (\theta \,;\mathbf {y} )\;$ occurs at the same value of $\theta $ as does the maximum of $\,{\mathcal {L}}_{n}~.$[8] If $\ell (\theta \,;\mathbf {y} )$ is differentiable in $\,\Theta \,,$ the necessary conditions for the occurrence of a maximum (or a minimum) are
${\frac {\partial \ell }{\partial \theta _{1}}}=0,\quad {\frac {\partial \ell }{\partial \theta _{2}}}=0,\quad \ldots ,\quad {\frac {\partial \ell }{\partial \theta _{k}}}=0~,$
known as the likelihood equations. For some models, these equations can be explicitly solved for $\,{\widehat {\theta \,}}\,,$ but in general no closed-form solution to the maximization problem is known or available, and an MLE can only be found via numerical optimization. Another problem is that in finite samples, there may exist multiple roots for the likelihood equations.[9] Whether the identified root $\,{\widehat {\theta \,}}\,$ of the likelihood equations is indeed a (local) maximum depends on whether the matrix of second-order partial and cross-partial derivatives, the so-called Hessian matrix
$\mathbf {H} \left({\widehat {\theta \,}}\right)={\begin{bmatrix}\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}\,\partial \theta _{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}\,\partial \theta _{k}}}\right|_{\theta ={\widehat {\theta \,}}}\\\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}\,\partial \theta _{1}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}\,\partial \theta _{k}}}\right|_{\theta ={\widehat {\theta \,}}}\\\vdots &\vdots &\ddots &\vdots \\\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}\,\partial \theta _{1}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}\,\partial \theta _{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}\end{bmatrix}}~,$
is negative semi-definite at ${\widehat {\theta \,}}$, as this indicates local concavity. Conveniently, most common probability distributions – in particular the exponential family – are logarithmically concave.[10][11]
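As an illustration of these principles, the following Python sketch numerically maximizes the log-likelihood of a normal sample; the log-parameterization of $\sigma $ (to keep it positive) and the choice of scipy's BFGS optimizer are our own, not part of the general theory:

```python
# Maximize a normal log-likelihood numerically and compare with the
# closed-form MLE; then check local concavity at the optimum.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=1000)

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # Negative log-likelihood, dropping the additive constant n/2 * log(2*pi).
    return 0.5 * np.sum(((y - mu) / sigma) ** 2) + y.size * np.log(sigma)

res = minimize(neg_log_lik, x0=[0.0, 0.0], method="BFGS")
print(res.x[0], y.mean())          # both are the MLE of mu
print(np.exp(res.x[1]), y.std())   # both are the MLE of sigma (ddof=0)
# BFGS's estimate of the inverse Hessian of the *negative* log-likelihood
# should be positive definite, i.e. the log-likelihood is locally concave:
print(np.all(np.linalg.eigvalsh(res.hess_inv) > 0))
```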
Restricted parameter space
While the domain of the likelihood function—the parameter space—is generally a finite-dimensional subset of Euclidean space, additional restrictions sometimes need to be incorporated into the estimation process. The parameter space can be expressed as
$\Theta =\left\{\theta :\theta \in \mathbb {R} ^{k},\;h(\theta )=0\right\}~,$
where $\;h(\theta )=\left[h_{1}(\theta ),h_{2}(\theta ),\ldots ,h_{r}(\theta )\right]\;$ is a vector-valued function mapping $\,\mathbb {R} ^{k}\,$ into $\;\mathbb {R} ^{r}~.$ Estimating the true parameter $\theta $ belonging to $\Theta $ then, as a practical matter, means to find the maximum of the likelihood function subject to the constraint $~h(\theta )=0~.$
Theoretically, the most natural approach to this constrained optimization problem is the method of substitution, that is "filling out" the restrictions $\;h_{1},h_{2},\ldots ,h_{r}\;$ to a set $\;h_{1},h_{2},\ldots ,h_{r},h_{r+1},\ldots ,h_{k}\;$ in such a way that $\;h^{\ast }=\left[h_{1},h_{2},\ldots ,h_{k}\right]\;$ is a one-to-one function from $\mathbb {R} ^{k}$ to itself, and reparameterize the likelihood function by setting $\;\phi _{i}=h_{i}(\theta _{1},\theta _{2},\ldots ,\theta _{k})~.$[12] Because of the equivariance of the maximum likelihood estimator, the properties of the MLE apply to the restricted estimates also.[13] For instance, in a multivariate normal distribution the covariance matrix $\,\Sigma \,$ must be positive-definite; this restriction can be imposed by replacing $\;\Sigma =\Gamma ^{\mathsf {T}}\Gamma \;,$ where $\Gamma $ is a real upper triangular matrix and $\Gamma ^{\mathsf {T}}$ is its transpose.[14]
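A minimal Python sketch of this substitution for a bivariate normal, where setting $\Sigma =\Gamma ^{\mathsf {T}}\Gamma $ turns the positive-definiteness constraint into an unconstrained search (the optimizer and the small jitter term are illustrative choices):

```python
# Parameterize the covariance as Sigma = Gamma^T Gamma with Gamma upper
# triangular, so the optimizer searches an unconstrained space.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
data = rng.multivariate_normal([0.0, 0.0], [[2.0, 0.6], [0.6, 1.0]], size=500)

def neg_log_lik(params):
    mu = params[:2]
    g11, g12, g22 = params[2:]                  # entries of upper-triangular Gamma
    gamma = np.array([[g11, g12], [0.0, g22]])
    sigma = gamma.T @ gamma + 1e-9 * np.eye(2)  # positive definite by construction
    return -multivariate_normal(mean=mu, cov=sigma).logpdf(data).sum()

res = minimize(neg_log_lik, x0=[0.0, 0.0, 1.0, 0.0, 1.0], method="Nelder-Mead")
g11, g12, g22 = res.x[2:]
gamma = np.array([[g11, g12], [0.0, g22]])
print(gamma.T @ gamma)              # approximately the MLE of Sigma, i.e.
print(np.cov(data.T, bias=True))    # the sample covariance with denominator n
```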
In practice, restrictions are usually imposed using the method of Lagrange which, given the constraints as defined above, leads to the restricted likelihood equations
${\frac {\partial \ell }{\partial \theta }}-{\frac {\partial h(\theta )^{\mathsf {T}}}{\partial \theta }}\lambda =0$ and $h(\theta )=0\;,$
where $~\lambda =\left[\lambda _{1},\lambda _{2},\ldots ,\lambda _{r}\right]^{\mathsf {T}}~$ is a column-vector of Lagrange multipliers and $\;{\frac {\partial h(\theta )^{\mathsf {T}}}{\partial \theta }}\;$ is the k × r Jacobian matrix of partial derivatives.[12] Naturally, if the constraints are not binding at the maximum, the Lagrange multipliers should be zero.[15] This in turn allows for a statistical test of the "validity" of the constraint, known as the Lagrange multiplier test.
Nonparametric maximum likelihood estimation
Nonparametric maximum likelihood estimation can be performed using the empirical likelihood.
Properties
A maximum likelihood estimator is an extremum estimator obtained by maximizing, as a function of θ, the objective function ${\widehat {\ell \,}}(\theta \,;x)$. If the data are independent and identically distributed, then we have
${\widehat {\ell \,}}(\theta \,;x)={\frac {1}{n}}\sum _{i=1}^{n}\ln f(x_{i}\mid \theta ),$
this being the sample analogue of the expected log-likelihood $\ell (\theta )=\operatorname {\mathbb {E} } [\,\ln f(x_{i}\mid \theta )\,]$, where this expectation is taken with respect to the true density.
Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value.[16] However, like other estimation methods, maximum likelihood estimation possesses a number of attractive limiting properties: As the sample size increases to infinity, sequences of maximum likelihood estimators have these properties:
• Consistency: the sequence of MLEs converges in probability to the value being estimated.
• Invariance: If ${\hat {\theta }}$ is the maximum likelihood estimator for $\theta $, and if $g(\theta )$ is any transformation of $\theta $, then the maximum likelihood estimator for $\alpha =g(\theta )$ is ${\hat {\alpha }}=g({\hat {\theta }})$. This property is less commonly known as functional equivariance. The invariance property holds for arbitrary transformation $g$, although the proof simplifies if $g$ is restricted to one-to-one transformations.
• Efficiency, i.e. it achieves the Cramér–Rao lower bound when the sample size tends to infinity. This means that no consistent estimator has lower asymptotic mean squared error than the MLE (or other estimators attaining this bound), which also means that MLE has asymptotic normality.
• Second-order efficiency after correction for bias.
Consistency
Under the conditions outlined below, the maximum likelihood estimator is consistent. The consistency means that if the data were generated by $f(\cdot \,;\theta _{0})$ and we have a sufficiently large number of observations n, then it is possible to find the value of θ0 with arbitrary precision. In mathematical terms this means that as n goes to infinity the estimator ${\widehat {\theta \,}}$ converges in probability to its true value:
${\widehat {\theta \,}}_{\mathrm {mle} }\ {\xrightarrow {\text{p}}}\ \theta _{0}.$
Under slightly stronger conditions, the estimator converges almost surely (or strongly):
${\widehat {\theta \,}}_{\mathrm {mle} }\ {\xrightarrow {\text{a.s.}}}\ \theta _{0}.$
In practical applications, data is never generated by $f(\cdot \,;\theta _{0})$. Rather, $f(\cdot \,;\theta _{0})$ is a model, often in idealized form, of the process that generated the data. It is a common aphorism in statistics that all models are wrong. Thus, true consistency does not occur in practical applications. Nevertheless, consistency is often considered to be a desirable property for an estimator to have.
To establish consistency, the following conditions are sufficient.[17]
1. Identification of the model:
$\theta \neq \theta _{0}\quad \Leftrightarrow \quad f(\cdot \mid \theta )\neq f(\cdot \mid \theta _{0}).$
In other words, different parameter values θ correspond to different distributions within the model. If this condition did not hold, there would be some value θ1 such that θ0 and θ1 generate an identical distribution of the observable data. Then we would not be able to distinguish between these two parameters even with an infinite amount of data—these parameters would have been observationally equivalent.
The identification condition is absolutely necessary for the ML estimator to be consistent. When this condition holds, the limiting likelihood function ℓ(θ|·) has a unique global maximum at θ0.
2. Compactness: the parameter space Θ of the model is compact.
The identification condition establishes that the log-likelihood has a unique global maximum. Compactness implies that the likelihood cannot approach its maximum value arbitrarily closely at some other point (as demonstrated for example in the picture on the right).
Compactness is only a sufficient condition and not a necessary condition. Compactness can be replaced by some other conditions, such as:
• both concavity of the log-likelihood function and compactness of some (nonempty) upper level sets of the log-likelihood function, or
• existence of a compact neighborhood N of θ0 such that outside of N the log-likelihood function is less than the maximum by at least some ε > 0.
3. Continuity: the function ln f(x | θ) is continuous in θ for almost all values of x:
$\operatorname {\mathbb {P} } {\Bigl [}\;\ln f(x\mid \theta )\;\in \;C^{0}(\Theta )\;{\Bigr ]}=1.$
The continuity here can be replaced with a slightly weaker condition of upper semi-continuity.
4. Dominance: there exists D(x) integrable with respect to the distribution f(x | θ0) such that
${\Bigl |}\ln f(x\mid \theta ){\Bigr |}<D(x)\quad {\text{ for all }}\theta \in \Theta .$
By the uniform law of large numbers, the dominance condition together with continuity establish the uniform convergence in probability of the log-likelihood:
$\sup _{\theta \in \Theta }\left|{\widehat {\ell \,}}(\theta \mid x)-\ell (\theta )\,\right|\ {\xrightarrow {\text{p}}}\ 0.$
The dominance condition can be employed in the case of i.i.d. observations. In the non-i.i.d. case, the uniform convergence in probability can be checked by showing that the sequence ${\widehat {\ell \,}}(\theta \mid x)$ is stochastically equicontinuous.
If one wants to demonstrate that the ML estimator ${\widehat {\theta \,}}$ converges to θ0 almost surely, then a stronger condition of uniform convergence almost surely has to be imposed:
$\sup _{\theta \in \Theta }\left|\;{\widehat {\ell \,}}(\theta \mid x)-\ell (\theta )\;\right|\ \xrightarrow {\text{a.s.}} \ 0.$
Additionally, if (as assumed above) the data were generated by $f(\cdot \,;\theta _{0})$, then under certain conditions, it can also be shown that the maximum likelihood estimator converges in distribution to a normal distribution. Specifically,[18]
${\sqrt {n}}\left({\widehat {\theta \,}}_{\mathrm {mle} }-\theta _{0}\right)\ \xrightarrow {d} \ {\mathcal {N}}\left(0,\,I^{-1}\right)$
where I is the Fisher information matrix.
Functional invariance
The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability (or probability density, in the continuous case). If the parameter consists of a number of components, then we define their separate maximum likelihood estimators, as the corresponding component of the MLE of the complete parameter. Consistent with this, if ${\widehat {\theta \,}}$ is the MLE for $\theta $, and if $g(\theta )$ is any transformation of $\theta $, then the MLE for $\alpha =g(\theta )$ is by definition[19]
${\widehat {\alpha }}=g(\,{\widehat {\theta \,}}\,).\,$
It maximizes the so-called profile likelihood:
${\bar {L}}(\alpha )=\sup _{\theta :\alpha =g(\theta )}L(\theta ).\,$
The MLE is also equivariant with respect to certain transformations of the data. If $y=g(x)$ where $g$ is one to one and does not depend on the parameters to be estimated, then the density functions satisfy
$f_{Y}(y)={\frac {f_{X}(x)}{|g'(x)|}}$
and hence the likelihood functions for $X$ and $Y$ differ only by a factor that does not depend on the model parameters.
For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data.
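A quick numerical check of this equivalence (scipy's lognorm parameterizes the distribution by shape $s=\sigma $ and scale $e^{\mu }$; the seed and sample size are arbitrary):

```python
# Fitting a log-normal directly agrees with fitting a normal to log(data).
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(2)
x = rng.lognormal(mean=1.0, sigma=0.5, size=2000)

logx = np.log(x)
mu_hat, sigma_hat = logx.mean(), logx.std()   # normal MLE on the logs

s, loc, scale = lognorm.fit(x, floc=0)        # log-normal MLE on the raw data
print(mu_hat, np.log(scale))                  # approximately equal
print(sigma_hat, s)                           # approximately equal
```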
Efficiency
As assumed above, if the data were generated by $~f(\cdot \,;\theta _{0})~,$ then under certain conditions, it can also be shown that the maximum likelihood estimator converges in distribution to a normal distribution. It is √n-consistent and asymptotically efficient, meaning that it reaches the Cramér–Rao bound. Specifically,[18]
${\sqrt {n\,}}\,\left({\widehat {\theta \,}}_{\text{mle}}-\theta _{0}\right)\ \ \xrightarrow {d} \ \ {\mathcal {N}}\left(0,\ {\mathcal {I}}^{-1}\right)~,$
where $~{\mathcal {I}}~$ is the Fisher information matrix:
${\mathcal {I}}_{jk}=\operatorname {\mathbb {E} } \,{\biggl [}\;-{\frac {\partial ^{2}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{j}\,\partial \theta _{k}}}\;{\biggr ]}~.$
In particular, it means that the bias of the maximum likelihood estimator is equal to zero up to the order 1/√n .
Second-order efficiency after correction for bias
However, when we consider the higher-order terms in the expansion of the distribution of this estimator, it turns out that ${\widehat {\theta \,}}_{\text{mle}}$ has bias of order 1/n. This bias is equal to (componentwise)[20]
$b_{h}\;\equiv \;\operatorname {\mathbb {E} } {\biggl [}\;\left({\widehat {\theta }}_{\mathrm {mle} }-\theta _{0}\right)_{h}\;{\biggr ]}\;=\;{\frac {1}{\,n\,}}\,\sum _{i,j,k=1}^{m}\;{\mathcal {I}}^{hi}\;{\mathcal {I}}^{jk}\left({\frac {1}{\,2\,}}\,K_{ijk}\;+\;J_{j,ik}\right)$
where ${\mathcal {I}}^{jk}$ (with superscripts) denotes the (j,k)-th component of the inverse Fisher information matrix ${\mathcal {I}}^{-1}$, and
${\frac {1}{\,2\,}}\,K_{ijk}\;+\;J_{j,ik}\;=\;\operatorname {\mathbb {E} } \,{\biggl [}\;{\frac {1}{2}}{\frac {\partial ^{3}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{i}\;\partial \theta _{j}\;\partial \theta _{k}}}+{\frac {\;\partial \ln f_{\theta _{0}}(X_{t})\;}{\partial \theta _{j}}}\,{\frac {\;\partial ^{2}\ln f_{\theta _{0}}(X_{t})\;}{\partial \theta _{i}\,\partial \theta _{k}}}\;{\biggr ]}~.$
Using these formulae it is possible to estimate the second-order bias of the maximum likelihood estimator, and correct for that bias by subtracting it:
${\widehat {\theta \,}}_{\text{mle}}^{*}={\widehat {\theta \,}}_{\text{mle}}-{\widehat {b\,}}~.$
This estimator is unbiased up to the terms of order 1/n, and is called the bias-corrected maximum likelihood estimator.
This bias-corrected estimator is second-order efficient (at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to the terms of the order 1/n². It is possible to continue this process, that is to derive the third-order bias-correction term, and so on. However, the maximum likelihood estimator is not third-order efficient.[21]
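A familiar special case, sketched in Python below, is the normal variance: the MLE $\tfrac {1}{n}\sum (x_{i}-{\bar {x}})^{2}$ has bias exactly $-\sigma ^{2}/n$, and multiplying by $n/(n-1)$ removes it (here the correction happens to be exact rather than merely second-order):

```python
# Monte Carlo check: the variance MLE is biased by -sigma^2/n.
import numpy as np

rng = np.random.default_rng(3)
n, sigma2, reps = 20, 4.0, 200_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

var_mle = x.var(axis=1)                 # ddof=0: the maximum likelihood estimator
var_corrected = var_mle * n / (n - 1)   # bias-corrected estimator

print(var_mle.mean())                   # about sigma2 * (n - 1)/n = 3.8
print(var_corrected.mean())             # about sigma2 = 4.0
```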
Relation to Bayesian inference
A maximum likelihood estimator coincides with the most probable Bayesian estimator given a uniform prior distribution on the parameters. Indeed, the maximum a posteriori estimate is the parameter θ that maximizes the probability of θ given the data, given by Bayes' theorem:
$\operatorname {\mathbb {P} } (\theta \mid x_{1},x_{2},\ldots ,x_{n})={\frac {f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )\operatorname {\mathbb {P} } (\theta )}{\operatorname {\mathbb {P} } (x_{1},x_{2},\ldots ,x_{n})}}$
where $\operatorname {\mathbb {P} } (\theta )$ is the prior distribution for the parameter θ and where $\operatorname {\mathbb {P} } (x_{1},x_{2},\ldots ,x_{n})$ is the probability of the data averaged over all parameters. Since the denominator is independent of θ, the Bayesian estimator is obtained by maximizing $f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )\operatorname {\mathbb {P} } (\theta )$ with respect to θ. If we further assume that the prior $\operatorname {\mathbb {P} } (\theta )$ is a uniform distribution, the Bayesian estimator is obtained by maximizing the likelihood function $f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )$. Thus the Bayesian estimator coincides with the maximum likelihood estimator for a uniform prior distribution $\operatorname {\mathbb {P} } (\theta )$.
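A minimal grid-based Python sketch of this coincidence (the grid resolution and the binomial example, which anticipates the coin example below, are our own choices):

```python
# With a uniform prior, the posterior is proportional to the likelihood,
# so the MAP estimate and the MLE land on the same grid point.
import numpy as np
from scipy.stats import binom

heads, tosses = 49, 80
grid = np.linspace(0.001, 0.999, 999)

likelihood = binom.pmf(heads, tosses, grid)
posterior = likelihood * np.ones_like(grid)   # uniform prior, up to normalization

print(grid[np.argmax(likelihood)])   # MLE, ~49/80 = 0.6125
print(grid[np.argmax(posterior)])    # MAP, the same point
```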
Application of maximum-likelihood estimation in Bayes decision theory
In many practical applications in machine learning, maximum-likelihood estimation is used as the model for parameter estimation.
Bayesian decision theory is about designing a classifier that minimizes total expected risk; in particular, when the costs (the loss function) associated with different decisions are equal, the classifier minimizes the error over the whole distribution.[22]
Thus, the Bayes Decision Rule is stated as
"decide $\;w_{1}\;$ if $~\operatorname {\mathbb {P} } (w_{1}|x)\;>\;\operatorname {\mathbb {P} } (w_{2}|x)~;~$ otherwise decide $\;w_{2}\;$"
where $\;w_{1}\,,w_{2}\;$ are predictions of different classes. From a perspective of minimizing error, it can also be stated as
$w={\underset {w}{\operatorname {arg\;min} }}\;\int _{-\infty }^{\infty }\operatorname {\mathbb {P} } ({\text{ error}}\mid x)\operatorname {\mathbb {P} } (x)\,\operatorname {d} x~$
where
$\operatorname {\mathbb {P} } ({\text{ error}}\mid x)=\operatorname {\mathbb {P} } (w_{1}\mid x)~$
if we decide $\;w_{2}\;$ and $\;\operatorname {\mathbb {P} } ({\text{ error}}\mid x)=\operatorname {\mathbb {P} } (w_{2}\mid x)\;$ if we decide $\;w_{1}\;.$
By applying Bayes' theorem
$\operatorname {\mathbb {P} } (w_{i}\mid x)={\frac {\operatorname {\mathbb {P} } (x\mid w_{i})\operatorname {\mathbb {P} } (w_{i})}{\operatorname {\mathbb {P} } (x)}}$,
and if we further assume the zero-or-one loss function, which assigns the same loss to all errors, the Bayes Decision rule can be reformulated as:
$h_{\text{Bayes}}={\underset {w}{\operatorname {arg\;max} }}\,{\bigl [}\,\operatorname {\mathbb {P} } (x\mid w)\,\operatorname {\mathbb {P} } (w)\,{\bigr ]}\;,$
where $h_{\text{Bayes}}$ is the prediction and $\;\operatorname {\mathbb {P} } (w)\;$ is the prior probability.
Relation to minimizing Kullback–Leibler divergence and cross entropy
Finding ${\hat {\theta }}$ that maximizes the likelihood is asymptotically equivalent to finding the ${\hat {\theta }}$ that defines a probability distribution ($Q_{\hat {\theta }}$) that has a minimal distance, in terms of Kullback–Leibler divergence, to the real probability distribution from which our data were generated (i.e., generated by $P_{\theta _{0}}$).[23] In an ideal world, P and Q are the same (and the only thing unknown is $\theta $ that defines P), but even if they are not and the model we use is misspecified, still the MLE will give us the "closest" distribution (within the restriction of a model Q that depends on ${\hat {\theta }}$) to the real distribution $P_{\theta _{0}}$.[24]
Proof.
For simplicity of notation, let's assume that P=Q. Let there be n i.i.d. data samples $\mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})$ from some distribution $y\sim P_{\theta _{0}}$ that we try to estimate by finding ${\hat {\theta }}$ that will maximize the likelihood using $P_{\theta }$; then:
${\begin{aligned}{\hat {\theta }}&={\underset {\theta }{\operatorname {arg\,max} }}\,L_{P_{\theta }}(\mathbf {y} )={\underset {\theta }{\operatorname {arg\,max} }}\,P_{\theta }(\mathbf {y} )={\underset {\theta }{\operatorname {arg\,max} }}\,P(\mathbf {y} \mid \theta )\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\prod _{i=1}^{n}P(y_{i}\mid \theta )={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\log P(y_{i}\mid \theta )\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\left(\sum _{i=1}^{n}\log P(y_{i}\mid \theta )-\sum _{i=1}^{n}\log P(y_{i}\mid \theta _{0})\right)={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\left(\log P(y_{i}\mid \theta )-\log P(y_{i}\mid \theta _{0})\right)\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta )}{P(y_{i}\mid \theta _{0})}}={\underset {\theta }{\operatorname {arg\,min} }}\,\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta _{0})}{P(y_{i}\mid \theta )}}={\underset {\theta }{\operatorname {arg\,min} }}\,{\frac {1}{n}}\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta _{0})}{P(y_{i}\mid \theta )}}\\&={\underset {\theta }{\operatorname {arg\,min} }}\,{\frac {1}{n}}\sum _{i=1}^{n}h_{\theta }(y_{i})\quad {\underset {n\to \infty }{\longrightarrow }}\quad {\underset {\theta }{\operatorname {arg\,min} }}\,E[h_{\theta }(y)]\\&={\underset {\theta }{\operatorname {arg\,min} }}\,\int P_{\theta _{0}}(y)h_{\theta }(y)dy={\underset {\theta }{\operatorname {arg\,min} }}\,\int P_{\theta _{0}}(y)\log {\frac {P(y\mid \theta _{0})}{P(y\mid \theta )}}dy\\&={\underset {\theta }{\operatorname {arg\,min} }}\,D_{\text{KL}}(P_{\theta _{0}}\parallel P_{\theta })\end{aligned}}$
where $h_{\theta }(x)=\log {\frac {P(x\mid \theta _{0})}{P(x\mid \theta )}}$. Writing the summand as $h$ makes explicit how the law of large numbers carries the average of $h(y)$ over to its expectation, via the law of the unconscious statistician. The first several transitions rely on properties of logarithms and on the fact that the ${\hat {\theta }}$ maximizing a function also maximizes any monotonic transformation of that function (e.g., one obtained by adding or multiplying by a constant).
Since cross entropy is just Shannon's entropy plus KL divergence, and since the entropy of $P_{\theta _{0}}$ is constant, the MLE also asymptotically minimizes cross entropy.[25]
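For intuition, here is a minimal sketch (Bernoulli model with simulated data; the seed, sample size, and grid are arbitrary choices) showing that the θ maximizing the average log-likelihood and the θ minimizing the empirical KL term from the derivation above coincide:

# Sketch: the theta maximizing the average log-likelihood equals the theta
# minimizing (1/n) sum log P(y|theta0)/P(y|theta); the two differ by a constant.
import math, random
random.seed(0)
theta0 = 0.3
y = [1 if random.random() < theta0 else 0 for _ in range(2_000)]

def avg_loglik(theta):
    return sum(math.log(theta if yi else 1 - theta) for yi in y) / len(y)

def empirical_kl(theta):
    return sum(math.log((theta0 if yi else 1 - theta0) /
                        (theta if yi else 1 - theta)) for yi in y) / len(y)

grid = [i / 1000 for i in range(1, 1000)]
print(max(grid, key=avg_loglik))    # ~0.3, roughly the sample mean
print(min(grid, key=empirical_kl))  # the same theta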
Examples
Discrete uniform distribution
Main article: German tank problem
Consider a case where n tickets numbered from 1 to n are placed in a box and one is selected at random (see uniform distribution); thus, the sample size is 1. If n is unknown, then the maximum likelihood estimator ${\widehat {n}}$ of n is the number m on the drawn ticket. (The likelihood is 0 for n < m, 1⁄n for n ≥ m, and this is greatest when n = m. Note that the maximum likelihood estimate of n occurs at the lower extreme of possible values {m, m + 1, ...}, rather than somewhere in the "middle" of the range of possible values, which would result in less bias.) The expected value of the number m on the drawn ticket, and therefore the expected value of ${\widehat {n}}$, is (n + 1)/2. As a result, with a sample size of 1, the maximum likelihood estimator for n will systematically underestimate n by (n − 1)/2.
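A short simulation (a sketch with a hypothetical n) makes the bias visible:

# Sketch: with one ticket drawn from {1, ..., n}, the MLE n_hat = m has
# expectation (n + 1)/2, so it underestimates n by (n - 1)/2 on average.
import random
random.seed(1)
n = 100
draws = [random.randint(1, n) for _ in range(200_000)]  # each draw m is the MLE
mean_mle = sum(draws) / len(draws)
print(mean_mle)       # ~50.5 = (n + 1)/2
print(n - mean_mle)   # ~49.5 = (n - 1)/2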
Discrete distribution, finite parameter space
Suppose one wishes to determine just how biased an unfair coin is. Call the probability of tossing a 'head' p. The goal then becomes to determine p.
Suppose the coin is tossed 80 times: i.e. the sample might be something like x1 = H, x2 = T, ..., x80 = T, and the count of the number of heads "H" is observed.
The probability of tossing tails is 1 − p (so here p is θ above). Suppose the outcome is 49 heads and 31 tails, and suppose the coin was taken from a box containing three coins: one which gives heads with probability p = 1⁄3, one which gives heads with probability p = 1⁄2 and another which gives heads with probability p = 2⁄3. The coins have lost their labels, so which one it was is unknown. Using maximum likelihood estimation, the coin that has the largest likelihood can be found, given the data that were observed. By using the probability mass function of the binomial distribution with sample size equal to 80 and number of successes equal to 49, but for different values of p (the "probability of success"), the likelihood function (defined below) takes one of three values:
${\begin{aligned}\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {1}{3}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {1}{3}})^{49}(1-{\tfrac {1}{3}})^{31}\approx 0.000,\\[6pt]\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {1}{2}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {1}{2}})^{49}(1-{\tfrac {1}{2}})^{31}\approx 0.012,\\[6pt]\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {2}{3}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {2}{3}})^{49}(1-{\tfrac {2}{3}})^{31}\approx 0.054~.\end{aligned}}$
The likelihood is maximized when p = 2⁄3, and so this is the maximum likelihood estimate for p.
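These three values can be checked directly, for instance with SciPy's binomial probability mass function (assuming scipy is available):

# Sketch: reproducing the three likelihood values above with scipy.
from scipy.stats import binom

for p in (1/3, 1/2, 2/3):
    print(p, binom.pmf(49, 80, p))  # ~0.000, ~0.012, ~0.054 -> argmax at p = 2/3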
Discrete distribution, continuous parameter space
Now suppose that there was only one coin but its p could have been any value 0 ≤ p ≤ 1 . The likelihood function to be maximised is
$L(p)=f_{D}(\mathrm {H} =49\mid p)={\binom {80}{49}}p^{49}(1-p)^{31}~,$
and the maximisation is over all possible values 0 ≤ p ≤ 1 .
One way to maximize this function is by differentiating with respect to p and setting to zero:
${\begin{aligned}0&={\frac {\partial }{\partial p}}\left({\binom {80}{49}}p^{49}(1-p)^{31}\right)~,\\[8pt]0&=49p^{48}(1-p)^{31}-31p^{49}(1-p)^{30}\\[8pt]&=p^{48}(1-p)^{30}\left[49(1-p)-31p\right]\\[8pt]&=p^{48}(1-p)^{30}\left[49-80p\right]~.\end{aligned}}$
This is a product of three terms. The first term is 0 when p = 0. The second is 0 when p = 1. The third is zero when p = 49⁄80. The solution that maximizes the likelihood is clearly p = 49⁄80 (since p = 0 and p = 1 result in a likelihood of 0). Thus the maximum likelihood estimator for p is 49⁄80.
This result is easily generalized by substituting a letter such as s in the place of 49 to represent the observed number of 'successes' of our Bernoulli trials, and a letter such as n in the place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yields s⁄n which is the maximum likelihood estimator for any sequence of n Bernoulli trials resulting in s 'successes'.
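A quick numerical check of this result (a sketch; the grid resolution is an arbitrary choice):

# Sketch: confirming numerically that p = s/n maximizes the binomial likelihood.
from scipy.stats import binom

n, s = 80, 49
grid = [i / 10_000 for i in range(1, 10_000)]
p_hat = max(grid, key=lambda p: binom.pmf(s, n, p))
print(p_hat, s / n)  # both 0.6125 = 49/80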
Continuous distribution, continuous parameter space
For the normal distribution ${\mathcal {N}}(\mu ,\sigma ^{2})$ which has probability density function
$f(x\mid \mu ,\sigma ^{2})={\frac {1}{{\sqrt {2\pi \sigma ^{2}}}\ }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right),$
the corresponding probability density function for a sample of n independent identically distributed normal random variables (the likelihood) is
$f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})=\prod _{i=1}^{n}f(x_{i}\mid \mu ,\sigma ^{2})=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left(-{\frac {\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2\sigma ^{2}}}\right).$
This family of distributions has two parameters: θ = (μ, σ); so we maximize the likelihood, ${\mathcal {L}}(\mu ,\sigma ^{2})=f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})$, over both parameters simultaneously, or if possible, individually.
Since the logarithm function itself is a continuous strictly increasing function over the range of the likelihood, the values which maximize the likelihood will also maximize its logarithm (the log-likelihood itself is not necessarily strictly increasing). The log-likelihood can be written as follows:
$\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=-{\frac {\,n\,}{2}}\log(2\pi \sigma ^{2})-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}(\,x_{i}-\mu \,)^{2}$
(Note: the log-likelihood is closely related to information entropy and Fisher information.)
We now compute the derivatives of this log-likelihood as follows.
${\begin{aligned}0&={\frac {\partial }{\partial \mu }}\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=0-{\frac {\;-2n({\bar {x}}-\mu )\;}{2\sigma ^{2}}}.\end{aligned}}$
where ${\bar {x}}$ is the sample mean. This is solved by
${\widehat {\mu }}={\bar {x}}=\sum _{i=1}^{n}{\frac {\,x_{i}\,}{n}}.$
This is indeed the maximum of the function, since it is the only turning point in μ and the second derivative is strictly less than zero. Its expected value is equal to the parameter μ of the given distribution,
$\operatorname {\mathbb {E} } {\bigl [}\;{\widehat {\mu }}\;{\bigr ]}=\mu ,\,$
which means that the maximum likelihood estimator ${\widehat {\mu }}$ is unbiased.
Similarly we differentiate the log-likelihood with respect to σ and equate to zero:
${\begin{aligned}0&={\frac {\partial }{\partial \sigma }}\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=-{\frac {\,n\,}{\sigma }}+{\frac {1}{\sigma ^{3}}}\sum _{i=1}^{n}(\,x_{i}-\mu \,)^{2}.\end{aligned}}$
which is solved by
${\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}.$
Inserting the estimate $\mu ={\widehat {\mu }}$ we obtain
${\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}x_{i}x_{j}.$
To calculate its expected value, it is convenient to rewrite the expression in terms of zero-mean random variables (statistical error) $\delta _{i}\equiv \mu -x_{i}$. Expressing the estimate in these variables yields
${\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(\mu -\delta _{i})^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}(\mu -\delta _{i})(\mu -\delta _{j}).$
Simplifying the expression above, utilizing the facts that $\operatorname {\mathbb {E} } {\bigl [}\;\delta _{i}\;{\bigr ]}=0$ and $\operatorname {\mathbb {E} } {\bigl [}\;\delta _{i}^{2}\;{\bigr ]}=\sigma ^{2}$, allows us to obtain
$\operatorname {\mathbb {E} } {\bigl [}\;{\widehat {\sigma }}^{2}\;{\bigr ]}={\frac {\,n-1\,}{n}}\sigma ^{2}.$
This means that the estimator ${\widehat {\sigma }}^{2}$ is biased for $\sigma ^{2}$. It can also be shown that ${\widehat {\sigma }}$ is biased for $\sigma $, but that both ${\widehat {\sigma }}^{2}$ and ${\widehat {\sigma }}$ are consistent.
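The bias factor (n − 1)/n can be confirmed by simulation (a sketch; the values of μ, σ, and n are arbitrary):

# Sketch: simulating E[sigma_hat^2] = ((n - 1)/n) * sigma^2 for the normal MLE.
import random
random.seed(2)
mu, sigma, n, reps = 0.0, 2.0, 5, 100_000

def sigma2_mle(xs):
    xbar = sum(xs) / len(xs)
    return sum((x - xbar) ** 2 for x in xs) / len(xs)

est = [sigma2_mle([random.gauss(mu, sigma) for _ in range(n)]) for _ in range(reps)]
print(sum(est) / reps)           # ~3.2
print((n - 1) / n * sigma ** 2)  # 3.2 exactly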
Formally we say that the maximum likelihood estimator for $\theta =(\mu ,\sigma ^{2})$ is
${\widehat {\theta \,}}=\left({\widehat {\mu }},{\widehat {\sigma }}^{2}\right).$
In this case the MLEs could be obtained individually. In general this may not be the case, and the MLEs would have to be obtained simultaneously.
The normal log-likelihood at its maximum takes a particularly simple form:
$\log {\Bigl (}{\mathcal {L}}({\widehat {\mu }},{\widehat {\sigma }}){\Bigr )}={\frac {\,-n\;\;}{2}}{\bigl (}\,\log(2\pi {\widehat {\sigma }}^{2})+1\,{\bigr )}$
This maximum log-likelihood can be shown to be the same for more general least squares, even for non-linear least squares. This is often used in determining likelihood-based approximate confidence intervals and confidence regions, which are generally more accurate than those using the asymptotic normality discussed above.
Non-independent variables
It may be the case that variables are correlated, that is, not independent. Two random variables $y_{1}$ and $y_{2}$ are independent if and only if their joint probability density function is the product of the individual probability density functions, i.e.
$f(y_{1},y_{2})=f(y_{1})f(y_{2})\,$
Suppose one constructs an order-n Gaussian vector out of random variables $(y_{1},\ldots ,y_{n})$, whose means are given by $(\mu _{1},\ldots ,\mu _{n})$. Furthermore, let the covariance matrix be denoted by ${\mathit {\Sigma }}$. The joint probability density function of these n random variables then follows a multivariate normal distribution given by:
$f(y_{1},\ldots ,y_{n})={\frac {1}{(2\pi )^{n/2}{\sqrt {\det({\mathit {\Sigma }})}}}}\exp \left(-{\frac {1}{2}}\left[y_{1}-\mu _{1},\ldots ,y_{n}-\mu _{n}\right]{\mathit {\Sigma }}^{-1}\left[y_{1}-\mu _{1},\ldots ,y_{n}-\mu _{n}\right]^{\mathrm {T} }\right)$
In the bivariate case, the joint probability density function is given by:
$f(y_{1},y_{2})={\frac {1}{2\pi \sigma _{1}\sigma _{2}{\sqrt {1-\rho ^{2}}}}}\exp \left[-{\frac {1}{2(1-\rho ^{2})}}\left({\frac {(y_{1}-\mu _{1})^{2}}{\sigma _{1}^{2}}}-{\frac {2\rho (y_{1}-\mu _{1})(y_{2}-\mu _{2})}{\sigma _{1}\sigma _{2}}}+{\frac {(y_{2}-\mu _{2})^{2}}{\sigma _{2}^{2}}}\right)\right]$
In this and other cases where a joint density function exists, the likelihood function is defined as above, in the section "principles," using this density.
Example
$X_{1},\ X_{2},\ldots ,\ X_{m}$ are counts in cells/boxes 1 up to m; each box has a different probability (think of the boxes as being bigger or smaller), and the total number of balls that fall is fixed at $n$: $x_{1}+x_{2}+\cdots +x_{m}=n$. The probability of each box is $p_{i}$, subject to the constraint $p_{1}+p_{2}+\cdots +p_{m}=1$. This is a case in which the $X_{i}$ s are not independent; the joint probability of a vector $x_{1},\ x_{2},\ldots ,x_{m}$ is called the multinomial and has the form:
$f(x_{1},x_{2},\ldots ,x_{m}\mid p_{1},p_{2},\ldots ,p_{m})={\frac {n!}{\prod x_{i}!}}\prod p_{i}^{x_{i}}={\binom {n}{x_{1},x_{2},\ldots ,x_{m}}}p_{1}^{x_{1}}p_{2}^{x_{2}}\cdots p_{m}^{x_{m}}$
Each box taken separately against all the other boxes is a binomial and this is an extension thereof.
The log-likelihood of this is:
$\ell (p_{1},p_{2},\ldots ,p_{m})=\log n!-\sum _{i=1}^{m}\log x_{i}!+\sum _{i=1}^{m}x_{i}\log p_{i}$
The constraint has to be taken into account, which is done using Lagrange multipliers:
$L(p_{1},p_{2},\ldots ,p_{m},\lambda )=\ell (p_{1},p_{2},\ldots ,p_{m})+\lambda \left(1-\sum _{i=1}^{m}p_{i}\right)$
Setting all of these derivatives to 0, the most natural estimate is derived:
${\hat {p}}_{i}={\frac {x_{i}}{n}}$
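That this is indeed the constrained maximum can be spot-checked numerically (a sketch with hypothetical counts; random points on the probability simplex serve as comparisons):

# Sketch: p_i = x_i / n beats random points of the probability simplex.
import math, random
random.seed(3)
x = [12, 30, 8, 50]   # hypothetical cell counts
n = sum(x)

def loglik(p):        # the constant log n! - sum log x_i! is omitted
    return sum(xi * math.log(pi) for xi, pi in zip(x, p))

p_mle = [xi / n for xi in x]
best = loglik(p_mle)
for _ in range(100_000):
    w = [random.expovariate(1.0) for _ in x]   # uniform draw on the simplex
    s = sum(w)
    assert loglik([wi / s for wi in w]) <= best
print(p_mle)          # [0.12, 0.3, 0.08, 0.5]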
Maximizing the log-likelihood, with or without constraints, may have no closed-form solution; in that case, iterative procedures have to be used.
Iterative procedures
Except for special cases, the likelihood equations
${\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}=0$
cannot be solved explicitly for an estimator ${\widehat {\theta }}={\widehat {\theta }}(\mathbf {y} )$. Instead, they need to be solved iteratively: starting from an initial guess of $\theta $ (say ${\widehat {\theta }}_{1}$), one seeks to obtain a convergent sequence $\left\{{\widehat {\theta }}_{r}\right\}$. Many methods for this kind of optimization problem are available,[26][27] but the most commonly used ones are algorithms based on an updating formula of the form
${\widehat {\theta }}_{r+1}={\widehat {\theta }}_{r}+\eta _{r}\mathbf {d} _{r}\left({\widehat {\theta }}\right)$
where the vector $\mathbf {d} _{r}\left({\widehat {\theta }}\right)$ indicates the descent direction of the rth "step," and the scalar $\eta _{r}$ captures the "step length,"[28][29] also known as the learning rate.[30]
Gradient descent method
(Note: here it is a maximization problem, so the sign before the gradient is flipped.)
$\eta _{r}\in \mathbb {R} ^{+}$, small enough for convergence, and $\mathbf {d} _{r}\left({\widehat {\theta }}\right)=\nabla \ell \left({\widehat {\theta }}_{r};\mathbf {y} \right)$.
The gradient descent method requires calculating the gradient at the rth iteration, but does not require calculating the inverse of the second-order derivative, i.e., the Hessian matrix. Therefore, it is computationally faster than the Newton–Raphson method.
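A minimal sketch of this update for a Poisson log-likelihood, where the exact MLE is the sample mean (the learning rate, iteration count, and data-generating values are assumptions chosen so the loop converges):

# Sketch: gradient ascent on a Poisson log-likelihood (sign flipped relative
# to descent, since here we maximize). The exact MLE is the sample mean.
import math, random
random.seed(4)

def poisson_sample(lam):
    # Knuth's method for a single Poisson draw
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

y = [poisson_sample(4.0) for _ in range(500)]
n, total = len(y), sum(y)

lam, eta = 1.0, 1e-4        # initial guess and learning rate (assumed values)
for _ in range(20_000):
    grad = total / lam - n  # d/d lam of sum_i (y_i log lam - lam)
    lam += eta * grad
print(lam, total / n)       # both ~4.0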
Newton–Raphson method
$\eta _{r}=1$ and $\mathbf {d} _{r}\left({\widehat {\theta }}\right)=-\mathbf {H} _{r}^{-1}\left({\widehat {\theta }}\right)\mathbf {s} _{r}\left({\widehat {\theta }}\right)$
where $\mathbf {s} _{r}({\widehat {\theta }})$ is the score and $\mathbf {H} _{r}^{-1}\left({\widehat {\theta }}\right)$ is the inverse of the Hessian matrix of the log-likelihood function, both evaluated at the rth iteration.[31][32] But because the calculation of the Hessian matrix is computationally costly, numerous alternatives have been proposed. The popular Berndt–Hall–Hall–Hausman algorithm approximates the Hessian with the outer product of the expected gradient, such that
$\mathbf {d} _{r}\left({\widehat {\theta }}\right)=-\left[{\frac {1}{n}}\sum _{t=1}^{n}{\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}\left({\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}\right)^{\mathsf {T}}\right]^{-1}\mathbf {s} _{r}\left({\widehat {\theta }}\right)$
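For comparison, the plain Newton–Raphson update (not the BHHH approximation) can be written out for a Poisson rate: with score s(λ) = T/λ − n and Hessian H(λ) = −T/λ², it converges in a handful of iterations. The sufficient statistics below are hypothetical numbers for the sketch.

# Sketch: Newton-Raphson for a Poisson rate. Update: lam - H(lam)^-1 s(lam).
total, n = 2011, 500   # hypothetical sufficient statistics sum(y) and sample size
lam = 1.0
for r in range(8):
    score = total / lam - n
    hess = -total / lam ** 2
    lam = lam - score / hess
    print(r, lam)      # converges to total / n = 4.022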
Quasi-Newton methods
Other quasi-Newton methods use more elaborate secant updates to approximate the Hessian matrix.
Davidon–Fletcher–Powell formula
The DFP formula finds a solution that is symmetric, positive-definite, and closest to the current approximate value of the second-order derivative:
$\mathbf {H} _{k+1}=\left(I-\gamma _{k}y_{k}s_{k}^{\mathsf {T}}\right)\mathbf {H} _{k}\left(I-\gamma _{k}s_{k}y_{k}^{\mathsf {T}}\right)+\gamma _{k}y_{k}y_{k}^{\mathsf {T}},$
where
$y_{k}=\nabla \ell (x_{k}+s_{k})-\nabla \ell (x_{k}),$
$\gamma _{k}={\frac {1}{y_{k}^{T}s_{k}}},$
$s_{k}=x_{k+1}-x_{k}.$
Broyden–Fletcher–Goldfarb–Shanno algorithm
BFGS also gives a solution that is symmetric and positive-definite:
$B_{k+1}=B_{k}+{\frac {y_{k}y_{k}^{\mathsf {T}}}{y_{k}^{\mathsf {T}}s_{k}}}-{\frac {B_{k}s_{k}s_{k}^{\mathsf {T}}B_{k}^{\mathsf {T}}}{s_{k}^{\mathsf {T}}B_{k}s_{k}}}\ ,$
where
$y_{k}=\nabla \ell (x_{k}+s_{k})-\nabla \ell (x_{k}),$
$s_{k}=x_{k+1}-x_{k}.$
The BFGS method is not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. However, BFGS can have acceptable performance even for non-smooth optimization instances.
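In practice these updates are rarely hand-coded. Here is a hedged sketch using SciPy's off-the-shelf BFGS on a negative normal log-likelihood (minimizing −log L maximizes L; optimizing log σ rather than σ is a choice made here to keep σ positive):

# Sketch: BFGS via scipy.optimize applied to a negative normal log-likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = rng.normal(loc=3.0, scale=1.5, size=1_000)

def neg_loglik(params):
    mu, log_sigma = params       # optimize log(sigma) so that sigma > 0
    sigma = np.exp(log_sigma)
    return 0.5 * len(x) * np.log(2 * np.pi * sigma ** 2) \
        + np.sum((x - mu) ** 2) / (2 * sigma ** 2)

res = minimize(neg_loglik, x0=[0.0, 0.0], method="BFGS")
print(res.x[0], np.exp(res.x[1]))  # ~3.0 and ~1.5 (sample mean and MLE sd)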
Fisher's scoring
Another popular method is to replace the Hessian with the Fisher information matrix, ${\mathcal {I}}(\theta )=-\operatorname {\mathbb {E} } \left[\mathbf {H} _{r}\left({\widehat {\theta }}\right)\right]$, giving us the Fisher scoring algorithm. This procedure is standard in the estimation of many methods, such as generalized linear models.
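A minimal Fisher-scoring sketch for the rate λ of an exponential distribution, where the score is n/λ − Σy and the Fisher information is n/λ². (For this particular model the observed and expected information coincide, so scoring reproduces Newton–Raphson; the data and starting point are illustrative.)

# Sketch: Fisher scoring for an exponential rate. Update: lam + I(lam)^-1 s(lam).
import random
random.seed(6)
y = [random.expovariate(2.5) for _ in range(2_000)]  # true rate 2.5
n, total = len(y), sum(y)

lam = 1.0
for _ in range(25):
    score = n / lam - total  # d/d lam of sum_i (log lam - lam * y_i)
    info = n / lam ** 2      # Fisher information, replacing -Hessian
    lam = lam + score / info
print(lam, n / total)        # both ~2.5; the MLE is 1 / ybar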
Although popular, quasi-Newton methods may converge to a stationary point that is not necessarily a local or global maximum,[33] but rather a local minimum or a saddle point. Therefore, it is important to assess the validity of the obtained solution to the likelihood equations, by verifying that the Hessian, evaluated at the solution, is both negative definite and well-conditioned.[34]
History
Early users of maximum likelihood were Carl Friedrich Gauss, Pierre-Simon Laplace, Thorvald N. Thiele, and Francis Ysidro Edgeworth.[35][36] However, its widespread use rose between 1912 and 1922 when Ronald Fisher recommended, widely popularized, and carefully analyzed maximum-likelihood estimation (with fruitless attempts at proofs).[37]
Maximum-likelihood estimation finally transcended heuristic justification in a proof published by Samuel S. Wilks in 1938, now called Wilks' theorem.[38] The theorem shows that the error in the logarithm of likelihood values for estimates from multiple independent observations is asymptotically χ²-distributed, which enables convenient determination of a confidence region around any estimate of the parameters. The only difficult part of Wilks' proof depends on the expected value of the Fisher information matrix, which is provided by a theorem proven by Fisher.[39] Wilks continued to improve on the generality of the theorem throughout his life, with his most general proof published in 1962.[40]
Reviews of the development of maximum likelihood estimation have been provided by a number of authors.[41][42][43][44][45][46][47][48]
See also
Related concepts
• Akaike information criterion: a criterion to compare statistical models, based on MLE
• Extremum estimator: a more general class of estimators to which MLE belongs
• Fisher information: information matrix, its relationship to covariance matrix of ML estimates
• Mean squared error: a measure of how 'good' an estimator of a distributional parameter is (be it the maximum likelihood estimator or some other estimator)
• RANSAC: a method to estimate parameters of a mathematical model given data that contains outliers
• Rao–Blackwell theorem: yields a process for finding the best possible unbiased estimator (in the sense of having minimal mean squared error); the MLE is often a good starting place for the process
• Wilks' theorem: provides a means of estimating the size and shape of the region of roughly equally-probable estimates for the population's parameter values, using the information from a single sample, using a chi-squared distribution
Other estimation methods
• Generalized method of moments: methods related to the likelihood equation in maximum likelihood estimation
• M-estimator: an approach used in robust statistics
• Maximum a posteriori (MAP) estimator: for a contrast in the way to calculate estimators when prior knowledge is postulated
• Maximum spacing estimation: a related method that is more robust in many situations
• Maximum entropy estimation
• Method of moments (statistics): another popular method for finding parameters of distributions
• Method of support, a variation of the maximum likelihood technique
• Minimum-distance estimation
• Partial likelihood methods for panel data
• Quasi-maximum likelihood estimator: an MLE estimator that is misspecified, but still consistent
• Restricted maximum likelihood: a variation using a likelihood function calculated from a transformed set of data
References
1. Rossi, Richard J. (2018). Mathematical Statistics: An Introduction to Likelihood Based Inference. New York: John Wiley & Sons. p. 227. ISBN 978-1-118-77104-4.
2. Hendry, David F.; Nielsen, Bent (2007). Econometric Modeling: A Likelihood Approach. Princeton: Princeton University Press. ISBN 978-0-691-13128-3.
3. Chambers, Raymond L.; Steel, David G.; Wang, Suojin; Welsh, Alan (2012). Maximum Likelihood Estimation for Sample Surveys. Boca Raton: CRC Press. ISBN 978-1-58488-632-7.
4. Ward, Michael Don; Ahlquist, John S. (2018). Maximum Likelihood for Social Science: Strategies for Analysis. New York: Cambridge University Press. ISBN 978-1-107-18582-1.
5. Press, W.H.; Flannery, B.P.; Teukolsky, S.A.; Vetterling, W.T. (1992). "Least Squares as a Maximum Likelihood Estimator". Numerical Recipes in FORTRAN: The Art of Scientific Computing (2nd ed.). Cambridge: Cambridge University Press. pp. 651–655. ISBN 0-521-43064-X.
6. Myung, I.J. (2003). "Tutorial on maximum likelihood Estimation". Journal of Mathematical Psychology. 47 (1): 90–100. doi:10.1016/S0022-2496(02)00028-7.
7. Gourieroux, Christian; Monfort, Alain (1995). Statistics and Econometrics Models. Cambridge University Press. p. 161. ISBN 0-521-40551-3.
8. Kane, Edward J. (1968). Economic Statistics and Econometrics. New York, NY: Harper & Row. p. 179.
9. Small, Christoper G.; Wang, Jinfang (2003). "Working with roots". Numerical Methods for Nonlinear Estimating Equations. Oxford University Press. pp. 74–124. ISBN 0-19-850688-0.
10. Kass, Robert E.; Vos, Paul W. (1997). Geometrical Foundations of Asymptotic Inference. New York, NY: John Wiley & Sons. p. 14. ISBN 0-471-82668-5.
11. Papadopoulos, Alecos (25 September 2013). "Why we always put log() before the joint pdf when we use MLE (Maximum likelihood Estimation)?". Stack Exchange.
12. Silvey, S. D. (1975). Statistical Inference. London, UK: Chapman and Hall. p. 79. ISBN 0-412-13820-4.
13. Olive, David (2004). "Does the MLE maximize the likelihood?" (PDF). Southern Illinois University.
14. Schwallie, Daniel P. (1985). "Positive definite maximum likelihood covariance estimators". Economics Letters. 17 (1–2): 115–117. doi:10.1016/0165-1765(85)90139-9.
15. Magnus, Jan R. (2017). Introduction to the Theory of Econometrics. Amsterdam: VU University Press. pp. 64–65. ISBN 978-90-8659-766-6.
16. Pfanzagl (1994, p. 206) harvtxt error: no target: CITEREFPfanzagl1994 (help)
17. By Theorem 2.5 in Newey, Whitney K.; McFadden, Daniel (1994). "Chapter 36: Large sample estimation and hypothesis testing". In Engle, Robert; McFadden, Dan (eds.). Handbook of Econometrics, Vol.4. Elsevier Science. pp. 2111–2245. ISBN 978-0-444-88766-5.
18. By Theorem 3.3 in Newey, Whitney K.; McFadden, Daniel (1994). "Chapter 36: Large sample estimation and hypothesis testing". In Engle, Robert; McFadden, Dan (eds.). Handbook of Econometrics, Vol.4. Elsevier Science. pp. 2111–2245. ISBN 978-0-444-88766-5.
19. Zacks, Shelemyahu (1971). The Theory of Statistical Inference. New York: John Wiley & Sons. p. 223. ISBN 0-471-98103-6.
20. See formula 20 in Cox, David R.; Snell, E. Joyce (1968). "A general definition of residuals". Journal of the Royal Statistical Society, Series B. 30 (2): 248–275. JSTOR 2984505.
21. Kano, Yutaka (1996). "Third-order efficiency implies fourth-order efficiency". Journal of the Japan Statistical Society. 26: 101–117. doi:10.14490/jjss1995.26.101.
22. Christensen, Henrikt I. "Pattern Recognition" (PDF) (lecture). Bayesian Decision Theory - CS 7616. Georgia Tech.
23. cmplx96 (https://stats.stackexchange.com/users/177679/cmplx96). "Kullback–Leibler divergence". Cross Validated, https://stats.stackexchange.com/q/314472 (version: 2017-11-18). (In the linked video, see minutes 13 to 25.)
24. "Introduction to Statistical Inference". Stanford (Lecture 16: MLE under model misspecification).
25. Sycorax says Reinstate Monica (https://stats.stackexchange.com/users/22311/sycorax-says-reinstate-monica). "The relationship between maximizing the likelihood and minimizing the cross-entropy". Cross Validated, https://stats.stackexchange.com/q/364237 (version: 2019-11-06).
26. Fletcher, R. (1987). Practical Methods of Optimization (Second ed.). New York, NY: John Wiley & Sons. ISBN 0-471-91547-5.
27. Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (Second ed.). New York, NY: Springer. ISBN 0-387-30303-0.
28. Daganzo, Carlos (1979). Multinomial Probit: The Theory and its Application to Demand Forecasting. New York: Academic Press. pp. 61–78. ISBN 0-12-201150-3.
29. Gould, William; Pitblado, Jeffrey; Poi, Brian (2010). Maximum Likelihood Estimation with Stata (Fourth ed.). College Station: Stata Press. pp. 13–20. ISBN 978-1-59718-078-8.
30. Murphy, Kevin P. (2012). Machine Learning: A Probabilistic Perspective. Cambridge: MIT Press. p. 247. ISBN 978-0-262-01802-9.
31. Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge: Harvard University Press. pp. 137–138. ISBN 0-674-00560-0.
32. Sargan, Denis (1988). "Methods of Numerical Optimization". Lecture Notes on Advanced Econometric Theory. Oxford: Basil Blackwell. pp. 161–169. ISBN 0-631-14956-2.
33. See theorem 10.1 in Avriel, Mordecai (1976). Nonlinear Programming: Analysis and Methods. Englewood Cliffs, NJ: Prentice-Hall. pp. 293–294. ISBN 978-0-486-43227-4.
34. Gill, Philip E.; Murray, Walter; Wright, Margaret H. (1981). Practical Optimization. London, UK: Academic Press. pp. 312–313. ISBN 0-12-283950-1.
35. Edgeworth, Francis Y. (Sep 1908). "On the probable errors of frequency-constants". Journal of the Royal Statistical Society. 71 (3): 499–512. doi:10.2307/2339293. JSTOR 2339293.
36. Edgeworth, Francis Y. (Dec 1908). "On the probable errors of frequency-constants". Journal of the Royal Statistical Society. 71 (4): 651–678. doi:10.2307/2339378. JSTOR 2339378.
37. Pfanzagl, Johann; Hamböker, R. (1994). Parametric Statistical Theory. Walter de Gruyter. pp. 207–208. ISBN 978-3-11-013863-4.
38. Wilks, S.S. (1938). "The large-sample distribution of the likelihood ratio for testing composite hypotheses". Annals of Mathematical Statistics. 9: 60–62. doi:10.1214/aoms/1177732360.
39. Owen, Art B. (2001). Empirical Likelihood. London, UK; Boca Raton, FL: Chapman & Hall; CRC Press. ISBN 978-1-58488-071-4.
40. Wilks, Samuel S. (1962). Mathematical Statistics. New York, NY: John Wiley & Sons. ISBN 978-0-471-94650-2.
41. Savage, Leonard J. (1976). "On rereading R.A. Fisher". The Annals of Statistics. 4 (3): 441–500. doi:10.1214/aos/1176343456. JSTOR 2958221.
42. Pratt, John W. (1976). "F. Y. Edgeworth and R. A. Fisher on the efficiency of maximum likelihood estimation". The Annals of Statistics. 4 (3): 501–514. doi:10.1214/aos/1176343457. JSTOR 2958222.
43. Stigler, Stephen M. (1978). "Francis Ysidro Edgeworth, statistician". Journal of the Royal Statistical Society, Series A. 141 (3): 287–322. doi:10.2307/2344804. JSTOR 2344804.
44. Stigler, Stephen M. (1986). The history of statistics: the measurement of uncertainty before 1900. Harvard University Press. ISBN 978-0-674-40340-6.
45. Stigler, Stephen M. (1999). Statistics on the table: the history of statistical concepts and methods. Harvard University Press. ISBN 978-0-674-83601-3.
46. Hald, Anders (1998). A history of mathematical statistics from 1750 to 1930. New York, NY: Wiley. ISBN 978-0-471-17912-2.
47. Hald, Anders (1999). "On the history of maximum likelihood in relation to inverse probability and least squares". Statistical Science. 14 (2): 214–222. doi:10.1214/ss/1009212248. JSTOR 2676741.
48. Aldrich, John (1997). "R.A. Fisher and the making of maximum likelihood 1912–1922". Statistical Science. 12 (3): 162–176. doi:10.1214/ss/1030037906. MR 1617519.
Further reading
• Cramer, J.S. (1986). Econometric Applications of Maximum Likelihood Methods. New York, NY: Cambridge University Press. ISBN 0-521-25317-9.
• Eliason, Scott R. (1993). Maximum Likelihood Estimation: Logic and Practice. Newbury Park: Sage. ISBN 0-8039-4107-2.
• King, Gary (1989). Unifying Political Methodology: the Likehood Theory of Statistical Inference. Cambridge University Press. ISBN 0-521-36697-6.
• Le Cam, Lucien (1990). "Maximum likelihood: An Introduction". ISI Review. 58 (2): 153–171. doi:10.2307/1403464. JSTOR 1403464.
• Magnus, Jan R. (2017). "Maximum Likelihood". Introduction to the Theory of Econometrics. Amsterdam, NL: VU University Press. pp. 53–68. ISBN 978-90-8659-766-6.
• Millar, Russell B. (2011). Maximum Likelihood Estimation and Inference. Hoboken, NJ: Wiley. ISBN 978-0-470-09482-2.
• Pickles, Andrew (1986). An Introduction to Likelihood Analysis. Norwich: W. H. Hutchins & Sons. ISBN 0-86094-190-6.
• Severini, Thomas A. (2000). Likelihood Methods in Statistics. New York, NY: Oxford University Press. ISBN 0-19-850650-3.
• Ward, Michael D.; Ahlquist, John S. (2018). Maximum Likelihood for Social Science: Strategies for Analysis. Cambridge University Press. ISBN 978-1-316-63682-4.
External links
• Tilevik, Andreas (2022). Maximum likelihood vs least squares in linear regression (video)
• "Maximum-likelihood method", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Purcell, S. "Maximum Likelihood Estimation".
• Sargent, Thomas; Stachurski, John. "Maximum Likelihood Estimation". Quantitative Economics with Python.
• Toomet, Ott; Henningsen, Arne (2019-05-19). "maxLik: A package for maximum likelihood estimation in R".
• Lesser, Lawrence M. (2007). "'MLE' song lyrics". Mathematical Sciences / College of Science. University of Texas. El Paso, TX. Retrieved 2021-03-06.
Semiparametric regression
In statistics, semiparametric regression includes regression models that combine parametric and nonparametric models. They are often used in situations where the fully nonparametric model may not perform well or when the researcher wants to use a parametric model but the functional form with respect to a subset of the regressors or the density of the errors is not known. Semiparametric regression models are a particular type of semiparametric modelling and, since semiparametric models contain a parametric component, they rely on parametric assumptions and may be misspecified and inconsistent, just like a fully parametric model.
Methods
Many different semiparametric regression methods have been proposed and developed. The most popular methods are the partially linear, index and varying coefficient models.
Partially linear models
A partially linear model is given by
$Y_{i}=X'_{i}\beta +g\left(Z_{i}\right)+u_{i},\,\quad i=1,\ldots ,n,\,$
where $Y_{i}$ is the dependent variable, $X_{i}$ is a $p\times 1$ vector of explanatory variables, $\beta $ is a $p\times 1$ vector of unknown parameters and $Z_{i}\in \operatorname {R} ^{q}$. The parametric part of the partially linear model is given by the parameter vector $\beta $ while the nonparametric part is the unknown function $g\left(Z_{i}\right)$. The data is assumed to be i.i.d. with $E\left(u_{i}|X_{i},Z_{i}\right)=0$ and the model allows for a conditionally heteroskedastic error process $E\left(u_{i}^{2}|x,z\right)=\sigma ^{2}\left(x,z\right)$ of unknown form. This type of model was proposed by Robinson (1988) and extended to handle categorical covariates by Racine and Li (2007).
This method is implemented by obtaining a ${\sqrt {n}}$ consistent estimator of $\beta $ and then deriving an estimator of $g\left(Z_{i}\right)$ from the nonparametric regression of $Y_{i}-X'_{i}{\hat {\beta }}$ on $z$ using an appropriate nonparametric regression method.[1]
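Below is a minimal sketch of this two-step idea on simulated data, using Robinson's double-residual construction as the √n-consistent first step and a Nadaraya–Watson smoother for the nonparametric parts. The data-generating process, bandwidth, and the use of a scalar X are illustrative assumptions.

# Sketch: partially linear model Y = X*beta + g(Z) + u with beta = 2, g = sin.
import numpy as np

rng = np.random.default_rng(7)
n = 2_000
z = rng.uniform(0, 1, n)
x = z + rng.normal(0, 1, n)  # X correlated with Z
y = 2.0 * x + np.sin(2 * np.pi * z) + rng.normal(0, 0.5, n)

def nw(z0, zdata, vdata, h=0.05):
    # Nadaraya-Watson kernel estimate of E[v | z = z0]
    w = np.exp(-0.5 * ((zdata - z0) / h) ** 2)
    return np.sum(w * vdata) / np.sum(w)

# Step 1: beta_hat from the residuals of Y and X on Z (Robinson 1988).
ey = np.array([nw(zi, z, y) for zi in z])
ex = np.array([nw(zi, z, x) for zi in z])
beta_hat = np.sum((x - ex) * (y - ey)) / np.sum((x - ex) ** 2)

# Step 2: nonparametric regression of Y - X*beta_hat on Z estimates g.
g_hat = lambda z0: nw(z0, z, y - x * beta_hat)
print(beta_hat)                               # ~2.0
print(g_hat(0.25), np.sin(2 * np.pi * 0.25))  # both ~1.0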
Index models
A single index model takes the form
$Y=g\left(X'\beta _{0}\right)+u,\,$
where $Y$, $X$ and $\beta _{0}$ are defined as earlier and the error term $u$ satisfies $E\left(u|X\right)=0$. The single index model takes its name from the parametric part of the model $x'\beta $ which is a scalar single index. The nonparametric part is the unknown function $g\left(\cdot \right)$.
Ichimura's method
The single index model method developed by Ichimura (1993) is as follows. Consider the situation in which $y$ is continuous. Given a known form for the function $g\left(\cdot \right)$, $\beta _{0}$ could be estimated using the nonlinear least squares method to minimize the function
$\sum _{i=1}^{n}\left(Y_{i}-g\left(X'_{i}\beta \right)\right)^{2}.$
Since the functional form of $g\left(\cdot \right)$ is not known, we need to estimate it. For a given value of $\beta $, an estimate of the function
$G\left(X'_{i}\beta \right)=E\left(Y_{i}|X'_{i}\beta \right)=E\left[g\left(X'_{i}\beta _{0}\right)|X'_{i}\beta \right]$
can be obtained using kernel methods. Ichimura (1993) proposes estimating $g\left(X'_{i}\beta \right)$ with
${\hat {G}}_{-i}\left(X'_{i}\beta \right),\,$
the leave-one-out nonparametric kernel estimator of $G\left(X'_{i}\beta \right)$.
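A compact sketch of this semiparametric least squares idea follows. The scale of β is not identified, so the first coefficient is normalized to 1; the grid search, bandwidth, and data-generating process are illustrative simplifications rather than Ichimura's full procedure.

# Sketch: Ichimura-style SLS with a leave-one-out Nadaraya-Watson estimator.
import numpy as np

rng = np.random.default_rng(8)
n = 400
X = rng.normal(size=(n, 2))
y = np.tanh(X @ np.array([1.0, -0.5])) + rng.normal(0, 0.1, n)  # link g = tanh

def sls_objective(b, h=0.15):
    v = X @ np.array([1.0, b])  # candidate index, first coefficient fixed at 1
    total = 0.0
    for i in range(n):
        w = np.exp(-0.5 * ((v - v[i]) / h) ** 2)
        w[i] = 0.0              # leave observation i out
        total += (y[i] - np.sum(w * y) / np.sum(w)) ** 2
    return total

grid = np.linspace(-1.5, 0.5, 41)
print(min(grid, key=sls_objective))  # ~ -0.5, the true second coefficient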
Klein and Spady's estimator
If the dependent variable $y$ is binary and $X_{i}$ and $u_{i}$ are assumed to be independent, Klein and Spady (1993) propose a technique for estimating $\beta $ using maximum likelihood methods. The log-likelihood function is given by
$L\left(\beta \right)=\sum _{i}\left(1-Y_{i}\right)\ln \left(1-{\hat {g}}_{-i}\left(X'_{i}\beta \right)\right)+\sum _{i}Y_{i}\ln \left({\hat {g}}_{-i}\left(X'_{i}\beta \right)\right),$
where ${\hat {g}}_{-i}\left(X'_{i}\beta \right)$ is the leave-one-out estimator.
Smooth coefficient/varying coefficient models
Hastie and Tibshirani (1993) propose a smooth coefficient model given by
$Y_{i}=\alpha \left(Z_{i}\right)+X'_{i}\beta \left(Z_{i}\right)+u_{i}=\left(1,X'_{i}\right)\left({\begin{array}{c}\alpha \left(Z_{i}\right)\\\beta \left(Z_{i}\right)\end{array}}\right)+u_{i}=W'_{i}\gamma \left(Z_{i}\right)+u_{i},$
where $X_{i}$ is a $k\times 1$ vector and $\beta \left(z\right)$ is a vector of unspecified smooth functions of $z$.
$\gamma \left(\cdot \right)$ may be expressed as
$\gamma \left(Z_{i}\right)=\left(E\left[W_{i}W'_{i}|Z_{i}\right]\right)^{-1}E\left[W_{i}Y_{i}|Z_{i}\right].$
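A sample analogue of this expression replaces the conditional expectations with kernel-weighted averages, i.e., a weighted least squares fit at each z. The sketch below uses simulated data; the bandwidth and the chosen α and β functions are illustrative assumptions.

# Sketch: smooth coefficient model Y = alpha(Z) + beta(Z) X + u, with
# gamma_hat(z0) computed by kernel-weighted least squares around z0.
import numpy as np

rng = np.random.default_rng(9)
n = 3_000
z = rng.uniform(0, 1, n)
x = rng.normal(size=n)
y = (1.0 + z) + np.sin(2 * np.pi * z) * x + rng.normal(0, 0.3, n)

def gamma_hat(z0, h=0.08):
    k = np.exp(-0.5 * ((z - z0) / h) ** 2)  # Gaussian kernel weights
    W = np.column_stack([np.ones(n), x])    # W_i = (1, X_i)'
    A = (W * k[:, None]).T @ W              # weighted analogue of E[W W' | z0]
    b = (W * k[:, None]).T @ y              # weighted analogue of E[W Y | z0]
    return np.linalg.solve(A, b)

print(gamma_hat(0.25))  # ~ (1.25, 1.0) = (alpha(0.25), beta(0.25))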
See also
• Nonparametric regression
• Effective degree of freedom
Notes
1. See Li and Racine (2007) for an in-depth look at nonparametric regression methods.
References
• Robinson, P.M. (1988). "Root-n Consistent Semiparametric Regression". Econometrica. The Econometric Society. 56 (4): 931–954. doi:10.2307/1912705. JSTOR 1912705.
• Li, Qi; Racine, Jeffrey S. (2007). Nonparametric Econometrics: Theory and Practice. Princeton University Press. ISBN 978-0-691-12161-1.
• Racine, J.S.; Li, Q. (2007). "A Partially Linear Kernel Estimator for Categorical Data". Unpublished Manuscript, McMaster University.
• Ichimura, H. (1993). "Semiparametric Least Squares (SLS) and Weighted SLS Estimation of Single Index Models". Journal of Econometrics. 58 (1–2): 71–120. doi:10.1016/0304-4076(93)90114-K.
• Klein, R. W.; R. H. Spady (1993). "An Efficient Semiparametric Estimator for Binary Response Models". Econometrica. The Econometric Society. 61 (2): 387–421. CiteSeerX 10.1.1.318.4925. doi:10.2307/2951556. JSTOR 2951556.
• Hastie, T.; R. Tibshirani (1993). "Varying-Coefficient Models". Journal of the Royal Statistical Society, Series B. 55: 757–796.
Semiperfect magic cube
In mathematics, a semiperfect magic cube is a magic cube that is not a perfect magic cube, i.e., a magic cube for which the cross section diagonals do not necessarily sum up to the cube's magic constant.
References
• Pickover, Clifford A. (2003), The Zen of Magic Squares, Circles, and Stars: An Exhibition of Surprising Structures across Dimensions, Princeton University Press, p. 98, ISBN 1400841518.
Perfect ring
In the area of abstract algebra known as ring theory, a left perfect ring is a type of ring in which all left modules have projective covers. The right case is defined by analogy, and the condition is not left-right symmetric; that is, there exist rings which are perfect on one side but not the other. Perfect rings were introduced by Hyman Bass in 1960.[1]
This article is about perfect rings as introduced by Hyman Bass. For perfect rings of characteristic p generalizing perfect fields, see perfect field.
A semiperfect ring is a ring over which every finitely generated left module has a projective cover. This property is left-right symmetric.
Perfect ring
Definitions
The following equivalent definitions of a left perfect ring R are found in Anderson and Fuller:[2]
• Every left R module has a projective cover.
• R/J(R) is semisimple and J(R) is left T-nilpotent (that is, for every infinite sequence of elements of J(R) there is an n such that the product of the first n terms is zero), where J(R) is the Jacobson radical of R.
• (Bass' Theorem P) R satisfies the descending chain condition on principal right ideals. (There is no mistake; this condition on right principal ideals is equivalent to the ring being left perfect.)
• Every flat left R-module is projective.
• R/J(R) is semisimple and every non-zero left R module contains a maximal submodule.
• R contains no infinite orthogonal set of idempotents, and every non-zero right R module contains a minimal submodule.
Examples
• Right or left Artinian rings, and semiprimary rings are known to be right-and-left perfect.
• The following is an example (due to Bass) of a local ring which is right but not left perfect. Let F be a field, and consider a certain ring of infinite matrices over F.
Take the set of infinite matrices with entries indexed by $\mathbb {N} \times \mathbb {N} $, and which have only finitely many nonzero entries, all of them above the diagonal, and denote this set by $J$. Also take the matrix $I\,$ with all 1's on the diagonal, and form the set
$R=\{f\cdot I+j\mid f\in F,j\in J\}\,$
It can be shown that R is a ring with identity, whose Jacobson radical is J. Furthermore R/J is a field, so that R is local, and R is right but not left perfect.[3]
Properties
For a left perfect ring R:
• From the equivalences above, every left R module has a maximal submodule and a projective cover, and the flat left R modules coincide with the projective left modules.
• An analogue of Baer's criterion holds for projective modules.
Semiperfect ring
Definition
Let R be a ring. Then R is semiperfect if any of the following equivalent conditions hold:
• R/J(R) is semisimple and idempotents lift modulo J(R), where J(R) is the Jacobson radical of R.
• R has a complete orthogonal set e1, ..., en of idempotents with each ei R ei a local ring.
• Every simple left (right) R-module has a projective cover.
• Every finitely generated left (right) R-module has a projective cover.
• The category of finitely generated projective $R$-modules is a Krull–Schmidt category.
Examples
Examples of semiperfect rings include:
• Left (right) perfect rings.
• Local rings (cf. Kaplansky's theorem on projective modules).
• Left (right) Artinian rings.
• Finite dimensional k-algebras.
Properties
Since a ring R is semiperfect iff every simple left R-module has a projective cover, every ring Morita equivalent to a semiperfect ring is also semiperfect.
Citations
1. Bass 1960.
2. Anderson & Fuller 1992, p. 315.
3. Lam 2001, pp. 345–346.
References
• Anderson, Frank W; Fuller, Kent R (1992), Rings and Categories of Modules (2nd ed.), Springer-Verlag, ISBN 978-0-387-97845-1
• Bass, Hyman (1960), "Finitistic dimension and a homological generalization of semi-primary rings", Transactions of the American Mathematical Society, 95 (3): 466–488, doi:10.2307/1993568, ISSN 0002-9947, JSTOR 1993568, MR 0157984
• Lam, T. Y. (2001), A first course in noncommutative rings, Graduate Texts in Mathematics, vol. 131 (2 ed.), New York: Springer-Verlag, doi:10.1007/978-1-4419-8616-0, ISBN 0-387-95183-0, MR 1838439
Semipermutable subgroup
In mathematics, in algebra, in the realm of group theory, a subgroup $H$ of a finite group $G$ is said to be semipermutable if $H$ commutes with every subgroup $K$ whose order is relatively prime to that of $H$.
Clearly, every permutable subgroup of a finite group is semipermutable. The converse, however, is not necessarily true.
External links
• The Influence of semipermutable subgroups on the structure of finite groups
Semiregular polytope
In geometry, by Thorold Gosset's definition a semiregular polytope is usually taken to be a polytope that is vertex-transitive and has all its facets being regular polytopes. E.L. Elte compiled a longer list in 1912 as The Semiregular Polytopes of the Hyperspaces which included a wider definition.
Gosset's figures
[Image gallery: 3D honeycombs (simple tetroctahedric check, complex tetroctahedric check) and 4D polytopes (tetroctahedric, octicosahedric, tetricosahedric).]
Gosset's list
In three-dimensional space and below, the terms semiregular polytope and uniform polytope have identical meanings, because all uniform polygons must be regular. However, since not all uniform polyhedra are regular, the number of semiregular polytopes in dimensions higher than three is much smaller than the number of uniform polytopes in the same number of dimensions.
The three convex semiregular 4-polytopes are the rectified 5-cell, snub 24-cell and rectified 600-cell. The only semiregular polytopes in higher dimensions are the k21 polytopes, where the rectified 5-cell is the special case of k = 0. These were all listed by Gosset, but a proof of the completeness of this list was not published until the work of Makarov (1988) for four dimensions, and Blind & Blind (1991) for higher dimensions.
Gosset's 4-polytopes (with his names in parentheses):
• Rectified 5-cell (Tetroctahedric)
• Rectified 600-cell (Octicosahedric)
• Snub 24-cell (Tetricosahedric)
Semiregular E-polytopes in higher dimensions:
• 5-demicube (5-ic semi-regular), a 5-polytope
• 221 polytope (6-ic semi-regular), a 6-polytope
• 321 polytope (7-ic semi-regular), a 7-polytope
• 421 polytope (8-ic semi-regular), an 8-polytope
Euclidean honeycombs
Semiregular polytopes can be extended to semiregular honeycombs. The semiregular Euclidean honeycombs are the tetrahedral-octahedral honeycomb (3D), gyrated alternated cubic honeycomb (3D) and the 521 honeycomb (8D).
Gosset honeycombs:
1. Tetrahedral-octahedral honeycomb or alternated cubic honeycomb (Simple tetroctahedric check) (also a quasiregular polytope)
2. Gyrated alternated cubic honeycomb (Complex tetroctahedric check)
Semiregular E-honeycomb:
• 521 honeycomb (9-ic check), an 8D Euclidean honeycomb
Gosset (1900) additionally allowed Euclidean honeycombs as facets of higher-dimensional Euclidean honeycombs, giving the following additional figures:
1. Hypercubic honeycomb prism, named by Gosset as the (n – 1)-ic semi-check (analogous to a single rank or file of a chessboard)
2. Alternated hexagonal slab honeycomb (tetroctahedric semi-check)
Hyperbolic honeycombs
There are also hyperbolic uniform honeycombs composed of only regular cells (Coxeter & Whitrow 1950), including:
• Hyperbolic uniform honeycombs, 3D honeycombs:
1. Alternated order-5 cubic honeycomb (also a quasiregular polytope)
2. Tetrahedral-octahedral honeycomb
3. Tetrahedron-icosahedron honeycomb
• Paracompact uniform honeycombs, 3D honeycombs, which include uniform tilings as cells:
1. Rectified order-6 tetrahedral honeycomb
2. Rectified square tiling honeycomb
3. Rectified order-4 square tiling honeycomb
4. Alternated order-6 cubic honeycomb (also quasiregular)
5. Alternated hexagonal tiling honeycomb
6. Alternated order-4 hexagonal tiling honeycomb
7. Alternated order-5 hexagonal tiling honeycomb
8. Alternated order-6 hexagonal tiling honeycomb
9. Alternated square tiling honeycomb (also quasiregular)
10. Cubic-square tiling honeycomb
11. Order-4 square tiling honeycomb
12. Tetrahedral-triangular tiling honeycomb
• 9D hyperbolic paracompact honeycomb:
1. 621 honeycomb (10-ic check)
See also
• Semiregular polyhedron
References
• Blind, G.; Blind, R. (1991). "The semiregular polytopes". Commentarii Mathematici Helvetici. 66 (1): 150–154. doi:10.1007/BF02566640. MR 1090169. S2CID 119695696.
• Coxeter, H. S. M. (1973). Regular Polytopes (3rd ed.). New York: Dover Publications. ISBN 0-486-61480-8.
• Coxeter, H. S. M.; Whitrow, G. J. (1950). "World-structure and non-Euclidean honeycombs". Proceedings of the Royal Society. 201 (1066): 417–437. Bibcode:1950RSPSA.201..417C. doi:10.1098/rspa.1950.0070. MR 0041576. S2CID 120322123.
• Elte, E. L. (1912). The Semiregular Polytopes of the Hyperspaces. Groningen: University of Groningen. ISBN 1-4181-7968-X.
• Gosset, Thorold (1900). "On the regular and semi-regular figures in space of n dimensions". Messenger of Mathematics. 29: 43–48.
• Makarov, P. V. (1988). "On the derivation of four-dimensional semi-regular polytopes". Voprosy Diskret. Geom. Mat. Issled. Akad. Nauk. Mold. 103: 139–150, 177. MR 0958024.
|
Wikipedia
|
Uniform k21 polytope
In geometry, a uniform k21 polytope is a polytope in k + 4 dimensions constructed from the En Coxeter group, having only regular polytope facets. The family is named by its Coxeter symbol k21, describing its bifurcating Coxeter–Dynkin diagram with a single ring on the end of the k-node sequence.
Thorold Gosset discovered this family as a part of his 1900 enumeration of the regular and semiregular polytopes, and so they are sometimes called Gosset's semiregular figures. Gosset named them by their dimension from 5 to 9, for example the 5-ic semiregular figure.
Family members
The sequence as identified by Gosset ends as an infinite tessellation (space-filling honeycomb) in 8-space, the 521 honeycomb, whose vertex arrangement is the E8 lattice. (A final form, not discovered by Gosset, is the 621 honeycomb, sometimes called the E9 lattice: a tessellation of hyperbolic 9-space constructed of ∞ 9-simplex and ∞ 9-orthoplex facets with all vertices at infinity.)
The family has unique members beginning with the 6-polytopes; the triangular prism and rectified 5-cell are included at the beginning for completeness, and the demipenteract also occurs in the demihypercube family.
They are also sometimes named by their symmetry group, like E6 polytope, although there are many uniform polytopes within the E6 symmetry.
The complete family of Gosset semiregular polytopes is:
1. triangular prism: −121 (2 triangles and 3 square faces)
2. rectified 5-cell: 021, Tetroctahedric (5 tetrahedra and 5 octahedra cells)
3. demipenteract: 121, 5-ic semiregular figure (16 5-cell and 10 16-cell facets)
4. 221 polytope: 221, 6-ic semiregular figure (72 5-simplex and 27 5-orthoplex facets)
5. 321 polytope: 321, 7-ic semiregular figure (576 6-simplex and 126 6-orthoplex facets)
6. 421 polytope: 421, 8-ic semiregular figure (17280 7-simplex and 2160 7-orthoplex facets)
7. 521 honeycomb: 521, 9-ic semiregular check; tessellates Euclidean 8-space (∞ 8-simplex and ∞ 8-orthoplex facets)
8. 621 honeycomb: 621, 10-ic semiregular check; tessellates hyperbolic 9-space (∞ 9-simplex and ∞ 9-orthoplex facets)
Each polytope is constructed from (n − 1)-simplex and (n − 1)-orthoplex facets.
The orthoplex facets are constructed from the Coxeter group Dn−1 and have the Schläfli symbol $\{3^{n-4,1,1}\}$ rather than the regular $\{3^{n-3},4\}$. This construction reflects the two "facet types" of each figure: at each ridge of an orthoplex facet, the adjacent facet is another orthoplex for half of the ridges and a simplex for the other half, while every ridge of a simplex facet adjoins an orthoplex.
The vertex figure of each member is the previous member of the sequence; for example, the vertex figure of the rectified 5-cell is a triangular prism.
Elements
Gosset semiregular figures. The facets of the n-ic figure are (n − 1)-simplexes $\{3^{n-2}\}$ and (n − 1)-orthoplexes $\{3^{n-4,1,1}\}$; the element counts (vertices, edges, faces, ...) follow the facets.
• 3-ic (−121), triangular prism: 2 triangles and 3 squares; 6 vertices, 9 edges, 5 faces
• 4-ic (021), rectified 5-cell: 5 tetrahedra and 5 octahedra; 10 vertices, 30 edges, 30 faces, 10 cells
• 5-ic (121), demipenteract: 16 5-cells and 10 16-cells; 16 vertices, 80 edges, 160 faces, 120 cells, 26 4-faces
• 6-ic (221), 221 polytope: 72 5-simplexes and 27 5-orthoplexes; 27 vertices, 216 edges, 720 faces, 1080 cells, 648 4-faces, 99 5-faces
• 7-ic (321), 321 polytope: 576 6-simplexes and 126 6-orthoplexes; 56 vertices, 756 edges, 4032 faces, 10080 cells, 12096 4-faces, 6048 5-faces, 702 6-faces
• 8-ic (421), 421 polytope: 17280 7-simplexes and 2160 7-orthoplexes; 240 vertices, 6720 edges, 60480 faces, 241920 cells, 483840 4-faces, 483840 5-faces, 207360 6-faces, 19440 7-faces
• 9-ic (521), 521 honeycomb: ∞ 8-simplexes and ∞ 8-orthoplexes; element counts are all infinite
• 10-ic (621), 621 honeycomb: ∞ 9-simplexes and ∞ 9-orthoplexes; element counts are all infinite
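The finite rows of this list can be sanity-checked with Euler's relation: for a convex n-polytope the alternating sum of its face counts, $f_{0}-f_{1}+f_{2}-\cdots $, equals $1-(-1)^{n}$. A minimal sketch in Python; the dictionary simply transcribes the counts above:

```python
# Euler's relation for the finite Gosset polytopes: the alternating sum of
# the face counts of an n-polytope equals 1 - (-1)**n.
face_counts = {
    3: [6, 9, 5],                      # triangular prism (-1_21)
    4: [10, 30, 30, 10],               # rectified 5-cell (0_21)
    5: [16, 80, 160, 120, 26],         # demipenteract (1_21)
    6: [27, 216, 720, 1080, 648, 99],  # 2_21
    7: [56, 756, 4032, 10080, 12096, 6048, 702],                   # 3_21
    8: [240, 6720, 60480, 241920, 483840, 483840, 207360, 19440],  # 4_21
}

for n, f in face_counts.items():
    euler = sum((-1) ** k * fk for k, fk in enumerate(f))
    assert euler == 1 - (-1) ** n, (n, euler)
    print(f"n={n}: alternating sum = {euler}")
```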
See also
• Uniform 2k1 polytope family
• Uniform 1k2 polytope family
References
• T. Gosset: On the Regular and Semi-Regular Figures in Space of n Dimensions, Messenger of Mathematics, Macmillan, 1900
• Alicia Boole Stott: "Geometrical deduction of semiregular from regular polytopes and space fillings", Verhandelingen der Koninklijke Akademie van Wetenschappen te Amsterdam (eerste sectie), Vol. 11, No. 1, pp. 1–24 plus 3 plates, 1910
• Schoute, P. H., Analytical treatment of the polytopes regularly derived from the regular polytopes, Ver. der Koninklijke Akad. van Wetenschappen te Amsterdam (eerste sectie), vol 11.5, 1913
• H. S. M. Coxeter: Regular and Semi-Regular Polytopes, Part I, Mathematische Zeitschrift, Springer, Berlin, 1940
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
• H.S.M. Coxeter: Regular and Semi-Regular Polytopes, Part II, Mathematische Zeitschrift, Springer, Berlin, 1985
• H.S.M. Coxeter: Regular and Semi-Regular Polytopes, Part III, Mathematische Zeitschrift, Springer, Berlin, 1988
• G. Blind and R. Blind, "The semiregular polytopes", Commentarii Mathematici Helvetici 66 (1991) 150–154
• John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, ISBN 978-1-56881-220-5 (Chapter 26. pp. 411–413: The Gosset Series: n21)
External links
• PolyGloss v0.05: Gosset figures (Gossetoicosatope)
• Regular, SemiRegular, Regular faced and Archimedean polytopes
|
Wikipedia
|
Biregular graph
In graph-theoretic mathematics, a biregular graph[1] or semiregular bipartite graph[2] is a bipartite graph $G=(U,V,E)$ for which every two vertices on the same side of the given bipartition have the same degree as each other. If the degree of the vertices in $U$ is $x$ and the degree of the vertices in $V$ is $y$, then the graph is said to be $(x,y)$-biregular.
Example
Every complete bipartite graph $K_{a,b}$ is $(b,a)$-biregular.[3] The rhombic dodecahedron is another example; it is (3,4)-biregular.[4]
Vertex counts
An $(x,y)$-biregular graph $G=(U,V,E)$ must satisfy the equation $x|U|=y|V|$. This follows from a simple double counting argument: the number of endpoints of edges in $U$ is $x|U|$, the number of endpoints of edges in $V$ is $y|V|$, and each edge contributes the same amount (one) to both numbers.
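This identity is easy to verify on the complete bipartite examples above; a minimal sketch using the networkx library (its generator marks the two sides with a bipartite node attribute):

```python
import networkx as nx

# K_{3,4}: nodes 0..2 form U (each of degree 4), nodes 3..6 form V (degree 3),
# so the graph is (4,3)-biregular and must satisfy x|U| = y|V|.
G = nx.complete_bipartite_graph(3, 4)
U = [n for n, side in G.nodes(data="bipartite") if side == 0]
V = [n for n, side in G.nodes(data="bipartite") if side == 1]
x = G.degree(U[0])  # common degree on the U side
y = G.degree(V[0])  # common degree on the V side
assert all(G.degree(u) == x for u in U) and all(G.degree(v) == y for v in V)
# Both counts equal the number of edges, since each edge has one end in U
# and one end in V.
assert x * len(U) == y * len(V) == G.number_of_edges()
print(x, y, len(U), len(V))  # 4 3 3 4
```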
Symmetry
Every regular bipartite graph is also biregular. Every edge-transitive graph (disallowing graphs with isolated vertices) that is not also vertex-transitive must be biregular.[3] In particular every edge-transitive graph is either regular or biregular.
Configurations
The Levi graphs of geometric configurations are biregular; a biregular graph is the Levi graph of an (abstract) configuration if and only if its girth is at least six.[5]
References
1. Scheinerman, Edward R.; Ullman, Daniel H. (1997), Fractional graph theory, Wiley-Interscience Series in Discrete Mathematics and Optimization, New York: John Wiley & Sons Inc., p. 137, ISBN 0-471-17864-0, MR 1481157.
2. Dehmer, Matthias; Emmert-Streib, Frank (2009), Analysis of Complex Networks: From Biology to Linguistics, John Wiley & Sons, p. 149, ISBN 9783527627998.
3. Lauri, Josef; Scapellato, Raffaele (2003), Topics in Graph Automorphisms and Reconstruction, London Mathematical Society Student Texts, Cambridge University Press, pp. 20–21, ISBN 9780521529037.
4. Réti, Tamás (2012), "On the relationships between the first and second Zagreb indices" (PDF), MATCH Commun. Math. Comput. Chem., 68: 169–188, archived from the original (PDF) on 2017-08-29, retrieved 2012-09-02.
5. Gropp, Harald (2007), "VI.7 Configurations", in Colbourn, Charles J.; Dinitz, Jeffrey H. (eds.), Handbook of combinatorial designs, Discrete Mathematics and its Applications (Boca Raton) (Second ed.), Chapman & Hall/CRC, Boca Raton, Florida, pp. 353–355.
|
Wikipedia
|
Semiregular space
A semiregular space is a topological space whose regular open sets (sets that equal the interiors of their closures) form a base for the topology.[1] For example, in the real line every open interval is regular open, whereas the punctured interval $(0,1)\cup (1,2)$ is not, since the interior of its closure is $(0,2)$.
Examples and sufficient conditions
Every regular space is semiregular, and every topological space may be embedded into a semiregular space.[1]
The space $X=\mathbb {R} ^{2}\cup \{0^{*}\}$ with the double origin topology[2] and the Arens square[3] are examples of spaces that are Hausdorff semiregular, but not regular.
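For finite spaces the definition can be checked mechanically. The sketch below uses a hypothetical three-point topology (an illustrative choice, not one of the spaces cited above): it computes the regular open sets and tests whether they form a base; this particular space fails to be semiregular.

```python
from itertools import chain

# A topology on X = {0, 1, 2}, given by listing its open sets.
X = frozenset({0, 1, 2})
opens = {frozenset(), frozenset({0}), frozenset({0, 1}), X}

def interior(S):
    # Union of all open sets contained in S.
    return frozenset(chain.from_iterable(U for U in opens if U <= S))

def closure(S):
    return X - interior(X - S)

# Regular open sets: those equal to the interior of their closure.
regular = {U for U in opens if interior(closure(U)) == U}

# Semiregular: every open set is a union of regular open sets.
semiregular = all(
    U == frozenset(chain.from_iterable(B for B in regular if B <= U))
    for U in opens
)
print([sorted(U) for U in regular], semiregular)  # e.g. [[], [0, 1, 2]] False
```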
See also
• Separation axiom – Axioms in topology defining notions of "separation"
Notes
1. Willard, Stephen (2004), "14E. Semiregular spaces", General Topology, Dover, p. 98, ISBN 978-0-486-43479-7.
2. Steen & Seebach, example #74
3. Steen & Seebach, example #80
References
• Lynn Arthur Steen and J. Arthur Seebach, Jr., Counterexamples in Topology. Springer-Verlag, New York, 1978. Reprinted by Dover Publications, New York, 1995. ISBN 0-486-68735-X (Dover edition).
• Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
|
Wikipedia
|
Semiset
In set theory, a semiset is a proper class that is a subclass of a set. The theory of semisets was proposed and developed by Czech mathematicians Petr Vopěnka and Petr Hájek (1972). It is based on a modification of the von Neumann–Bernays–Gödel set theory; in standard NBG, the existence of semisets is precluded by the axiom of separation.
Not to be confused with Semialgebraic set.
The concept of semisets opens the way for a formulation of an alternative set theory. In particular, Vopěnka's Alternative Set Theory (1979) axiomatizes the concept of semiset, supplemented with several additional principles.
Semisets can be used to represent sets with imprecise boundaries. Novák (1984) studied approximation of semisets by fuzzy sets, which are often more suitable for practical applications of the modeling of imprecision.
References
• Vopěnka, P., and Hájek, P. The Theory of Semisets. Amsterdam: North-Holland, 1972.
• Vopěnka, P. Mathematics in the Alternative Set Theory. Teubner, Leipzig, 1979.
• Holmes, M.R. Alternative Axiomatic Set Theories, §9.2, Vopenka's alternative set theory. In E. N. Zalta (ed.): The Stanford Encyclopedia of Philosophy (Fall 2014 Edition).
• Novák, V. "Fuzzy sets—the approximation of semisets." Fuzzy Sets and Systems 14 (1984): 259–272.
|
Wikipedia
|
Semisimple algebra
In ring theory, a branch of mathematics, a semisimple algebra is an associative artinian algebra over a field which has trivial Jacobson radical (only the zero element of the algebra is in the Jacobson radical). If the algebra is finite-dimensional this is equivalent to saying that it can be expressed as a Cartesian product of simple subalgebras.
See also: Semisimple module
Definition
The Jacobson radical of an algebra over a field is the ideal consisting of all elements that annihilate every simple left-module. The radical contains all nilpotent ideals, and if the algebra is finite-dimensional, the radical itself is a nilpotent ideal. A finite-dimensional algebra is then said to be semisimple if its radical contains only the zero element.
An algebra A is called simple if it has no two-sided ideals other than {0} and A, and if A2 = {ab | a, b ∈ A} ≠ {0}. As the terminology suggests, simple algebras are semisimple: since A2 is a nonzero ideal of A and A is simple, A2 = A, and by induction An = A for every positive integer n, so A is not nilpotent.
Any self-adjoint subalgebra A of n × n matrices with complex entries is semisimple. Let Rad(A) be the radical of A. Suppose a matrix M is in Rad(A). Then M*M lies in a nilpotent ideal of A, therefore (M*M)k = 0 for some positive integer k. By positive-semidefiniteness of M*M, this implies M*M = 0. So Mx is the zero vector for all x, i.e. M = 0.
If {Ai} is a finite collection of simple algebras, then their Cartesian product A = Π Ai is semisimple. If (ai) is an element of Rad(A) and e1 is the multiplicative identity in A1 (all simple algebras possess a multiplicative identity), then (a1, a2, ...) · (e1, 0, ...) = (a1, 0, ..., 0) lies in a nilpotent ideal of Π Ai. This implies that a1b is nilpotent in A1 for all b in A1, i.e. a1 ∈ Rad(A1) = {0}. So a1 = 0, and similarly ai = 0 for all other i.
It is less apparent from the definition that the converse of the above is also true, that is, any finite-dimensional semisimple algebra is isomorphic to a Cartesian product of a finite number of simple algebras.
Characterization
Let A be a finite-dimensional semisimple algebra, and
$\{0\}=J_{0}\subset \cdots \subset J_{n}\subset A$
be a composition series of A, then A is isomorphic to the following Cartesian product:
$A\simeq J_{1}\times J_{2}/J_{1}\times J_{3}/J_{2}\times ...\times J_{n}/J_{n-1}\times A/J_{n}$
where each
$J_{i+1}/J_{i}\,$
is a simple algebra.
The proof can be sketched as follows. First, invoking the assumption that A is semisimple, one can show that J1 is a simple algebra (therefore unital). So J1 is a unital subalgebra and an ideal of J2. Therefore, one can decompose
$J_{2}\simeq J_{1}\times J_{2}/J_{1}.$
By maximality of J1 as an ideal in J2 and also the semisimplicity of A, the algebra
$J_{2}/J_{1}\,$
is simple. Proceeding by induction in a similar fashion proves the claim. For example, J3 is the Cartesian product of simple algebras
$J_{3}\simeq J_{2}\times J_{3}/J_{2}\simeq J_{1}\times J_{2}/J_{1}\times J_{3}/J_{2}.$
The above result can be restated in a different way. For a semisimple algebra A = A1 ×...× An expressed in terms of its simple factors, consider the units ei ∈ Ai. The elements Ei = (0,...,ei,...,0) are idempotent elements in A and they lie in the center of A. Furthermore, Ei A = Ai, EiEj = 0 for i ≠ j, and Σ Ei = 1, the multiplicative identity in A.
Therefore, for every semisimple algebra A, there exist idempotents {Ei} in the center of A, such that
1. EiEj = 0 for i ≠ j (such a set of idempotents is called central orthogonal),
2. Σ Ei = 1,
3. A is isomorphic to the Cartesian product of simple algebras E1 A ×...× En A.
Classification
A theorem due to Joseph Wedderburn completely classifies finite-dimensional semisimple algebras over a field $k$. Any such algebra is isomorphic to a finite product $\prod M_{n_{i}}(D_{i})$ where the $n_{i}$ are natural numbers, the $D_{i}$ are division algebras over $k$, and $M_{n_{i}}(D_{i})$ is the algebra of $n_{i}\times n_{i}$ matrices over $D_{i}$. This product is unique up to permutation of the factors.[1]
This theorem was later generalized by Emil Artin to semisimple rings. This more general result is called the Wedderburn-Artin theorem.
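As a small numerical illustration of the commutative case of Wedderburn's theorem (an illustrative sketch, not taken from the references), the group algebra $\mathbb {C} [\mathbb {Z} /3]$ decomposes as $\mathbb {C} \times \mathbb {C} \times \mathbb {C} $: diagonalizing the regular representation of a generator simultaneously diagonalizes the whole algebra.

```python
import numpy as np

# Regular representation of a generator of Z/3: the cyclic shift matrix.
g = np.roll(np.eye(3), 1, axis=0)
eigvals, P = np.linalg.eig(g)
Pinv = np.linalg.inv(P)

# Every algebra element is a linear combination of I, g, g^2; each power of g
# becomes diagonal in the same basis, exhibiting the algebra as C x C x C.
for k in range(3):
    d = Pinv @ np.linalg.matrix_power(g, k) @ P
    assert np.allclose(d, np.diag(np.diag(d)))
print(np.round(eigvals, 3))  # the three cube roots of unity
```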
References
1. Anthony Knapp (2007). Advanced Algebra, Chap. II: Wedderburn-Artin Ring Theory (PDF). Springer Verlag.
|
Wikipedia
|
Semisimple representation
In mathematics, specifically in representation theory, a semisimple representation (also called a completely reducible representation) is a linear representation of a group or an algebra that is a direct sum of simple representations (also called irreducible representations).[1] It is an example of the general mathematical notion of semisimplicity.
Many representations that appear in applications of representation theory are semisimple or can be approximated by semisimple representations. A semisimple module over an algebra over a field is an example of a semisimple representation. Conversely, a semisimple representation of a group G over a field k is a semisimple module over the group ring k[G].
Equivalent characterizations
Let V be a representation of a group G; or more generally, let V be a vector space with a set of linear endomorphisms acting on it. In general, a vector space acted on by a set of linear endomorphisms is said to be simple (or irreducible) if the only invariant subspaces for those operators are zero and the vector space itself; a semisimple representation then is a direct sum of simple representations in that sense.[1]
The following are equivalent:[2]
1. V is semisimple as a representation.
2. V is a sum of simple subrepresentations.
3. Each subrepresentation W of V admits a complementary representation: a subrepresentation W' such that $V=W\oplus W'$.
The equivalences of the above conditions can be shown based on the next lemma, which is of independent interest:
Lemma[3] — Let p:V → W be a surjective equivariant map between representations. If V is semisimple, then p splits; i.e., it admits a section.
Proof of the lemma: Write $V=\bigoplus _{i\in I}V_{i}$ where $V_{i}$ are simple representations. Without loss of generality, we can assume $V_{i}$ are subrepresentations; i.e., we can assume the direct sum is internal. Now, consider the family of all possible direct sums $V_{J}:=\bigoplus _{i\in J}V_{i}\subset V$ with various subsets $J\subset I$. Put the partial ordering on it by saying the direct sum over K is less than the direct sum over J if $K\subset J$. By Zorn's lemma, we can find a maximal $J\subset I$ such that $\operatorname {ker} p\cap V_{J}=0$. We claim that $V=\operatorname {ker} p\oplus V_{J}$. By definition, $\operatorname {ker} p\cap V_{J}=0$ so we only need to show that $V=\operatorname {ker} p+V_{J}$. If $\operatorname {ker} p+V_{J}$ is a proper subrepresentation of $V$ then there exists $k\in I-J$ such that $V_{k}\not \subset \operatorname {ker} p+V_{J}$. Since $V_{k}$ is simple (irreducible), $V_{k}\cap (\operatorname {ker} p+V_{J})=0$. This contradicts the maximality of $J$, so $V=\operatorname {ker} p\oplus V_{J}$ as claimed. Hence, $W\simeq V/\operatorname {ker} p\simeq V_{J}\to V$ is a section of p. $\square $
Note that we cannot take $J$ to be the set of $i$ such that $\ker(p)\cap V_{i}=0$. The reason is that it can happen, and frequently does, that $X$ is a subspace of $Y\oplus Z$ and yet $X\cap Y=0=X\cap Z$. For example, take $X$, $Y$ and $Z$ to be three distinct lines through the origin in $\mathbb {R} ^{2}$. For an explicit counterexample, let $A=\operatorname {Mat} _{2}(F)$ be the algebra of $2\times 2$ matrices and set $V=A$, the regular representation of $A$. Set $V_{1}={\Bigl \{}{\begin{pmatrix}a&0\\b&0\end{pmatrix}}{\Bigr \}}$ and $V_{2}={\Bigl \{}{\begin{pmatrix}0&c\\0&d\end{pmatrix}}{\Bigr \}}$ and set $W={\Bigl \{}{\begin{pmatrix}c&c\\d&d\end{pmatrix}}{\Bigr \}}$. Then $V_{1}$, $V_{2}$ and $W$ are all irreducible $A$-modules and $V=V_{1}\oplus V_{2}$. Let $p:V\to V/W$ be the natural surjection. Then $\operatorname {ker} p=W\neq 0$ and $V_{1}\cap \operatorname {ker} p=0=V_{2}\cap \operatorname {ker} p$. In this case, $W\simeq V_{1}\simeq V_{2}$ but $V\neq \operatorname {ker} p\oplus V_{1}\oplus V_{2}$ because this sum is not direct.
Proof of equivalences[4] $1.\Rightarrow 3.$: Take p to be the natural surjection $V\to V/W$. Since V is semisimple, p splits and so, through a section, $V/W$ is isomorphic to a subrepresentation that is complementary to W.
$3.\Rightarrow 2.$: We shall first observe that every nonzero subrepresentation W has a simple subrepresentation. Shrinking W to a (nonzero) cyclic subrepresentation we can assume it is finitely generated. Then it has a maximal subrepresentation U. By condition 3., $V=U\oplus U'$ for some $U'$. By the modular law, this implies $W=U\oplus (W\cap U')$. Then $(W\cap U')\simeq W/U$ is a simple subrepresentation of W ("simple" because of maximality). This establishes the observation. Now, take $W$ to be the sum of all simple subrepresentations, which, by 3., admits a complementary representation $W'$. If $W'\neq 0$, then, by the earlier observation, $W'$ contains a simple subrepresentation and so $W\cap W'\neq 0$, a contradiction. Hence, $W'=0$.
$2.\Rightarrow 1.$:[5] The implication is a direct generalization of a basic fact in linear algebra that a basis can be extracted from a spanning set of a vector space. That is we can prove the following slightly more precise statement:
• When $V=\sum _{i\in I}V_{i}$ is a sum of simple subrepresentations, a semisimple decomposition $V=\bigoplus _{i\in I'}V_{i}$, some subset $I'\subset I$, can be extracted from the sum.
As in the proof of the lemma, we can find a maximal direct sum $W$ that consists of some $V_{i}$'s. Now, for each i in I, by simplicity, either $V_{i}\subset W$ or $V_{i}\cap W=0$. In the second case, the direct sum $W\oplus V_{i}$ would contradict the maximality of W. Hence, $V_{i}\subset W$ for every i, so $V=W$ is a direct sum of simple subrepresentations. $\square $
Examples and non-examples
Unitary representations
A finite-dimensional unitary representation (i.e., a representation factoring through a unitary group) is a basic example of a semisimple representation. Such a representation is semisimple since if W is a subrepresentation, then the orthogonal complement to W is a complementary representation[6] because if $v\in W^{\bot }$ and $g\in G$, then $\langle \pi (g)v,w\rangle =\langle v,\pi (g^{-1})w\rangle =0$ for any w in W since W is G-invariant, and so $\pi (g)v\in W^{\bot }$.
For example, given a continuous finite-dimensional complex representation $\pi :G\to GL(V)$ of a finite group or a compact group G, by the averaging argument, one can define an inner product $\langle ,\rangle $ on V that is G-invariant: i.e., $\langle \pi (g)v,\pi (g)w\rangle =\langle v,w\rangle $, which is to say $\pi (g)$ is a unitary operator and so $\pi $ is a unitary representation.[6] Hence, every finite-dimensional continuous complex representation of G is semisimple.[7] For a finite group, this is a special case of Maschke's theorem, which says a finite-dimensional representation of a finite group G over a field k with characteristic not dividing the order of G is semisimple.[8][9]
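The averaging argument is easy to carry out numerically. In the sketch below (an illustrative example with the cyclic group of order 3, not taken from the sources), a representation conjugate to a rotation fails to be unitary in the standard inner product, but averaging produces a Gram matrix M with $\pi (g)^{*}M\pi (g)=M$ for every g, i.e. an invariant inner product $\langle v,w\rangle =v^{*}Mw$:

```python
import numpy as np

# A non-unitary representation of C3: conjugate a rotation by 120 degrees
# with an arbitrary invertible change of basis A.
theta = 2 * np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = np.array([[1.0, 2.0], [0.0, 1.0]])
reps = [A @ np.linalg.matrix_power(R, k) @ np.linalg.inv(A) for k in range(3)]

# Average the pullbacks of the standard inner product over the group.
M = sum(g.conj().T @ g for g in reps) / len(reps)
for g in reps:
    # Invariance: pi(g)* M pi(g) = M, since h -> hg permutes the group.
    assert np.allclose(g.conj().T @ M @ g, M)
print(np.round(M, 3))
```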
Representations of semisimple Lie algebras
By Weyl's theorem on complete reducibility, every finite-dimensional representation of a semisimple Lie algebra over a field of characteristic zero is semisimple.[10]
Separable minimal polynomials
Given a linear endomorphism T of a vector space V, V is semisimple as a representation of T (i.e., T is a semisimple operator) if and only if the minimal polynomial of T is separable; i.e., a product of distinct irreducible polynomials.[11]
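This criterion is mechanical to test. A minimal sketch with sympy, using an equivalent form: T is semisimple exactly when the squarefree part of its characteristic polynomial already annihilates T (which forces the minimal polynomial to be squarefree):

```python
from sympy import Matrix, Poly, eye, gcd, symbols, zeros

x = symbols("x")

def poly_at_matrix(p, M):
    # Horner evaluation of the polynomial p at the square matrix M.
    out = zeros(M.rows, M.rows)
    for c in p.all_coeffs():
        out = out * M + c * eye(M.rows)
    return out

def is_semisimple(M):
    # T is semisimple iff its minimal polynomial is squarefree, i.e. iff the
    # squarefree part of the characteristic polynomial annihilates T.
    p = Poly(M.charpoly(x).as_expr(), x)
    squarefree = p.quo(gcd(p, p.diff(x)))
    return poly_at_matrix(squarefree, M) == zeros(M.rows, M.rows)

print(is_semisimple(Matrix([[0, -1], [1, 0]])))  # True: min poly x**2 + 1
print(is_semisimple(Matrix([[1, 1], [0, 1]])))   # False: min poly (x - 1)**2
```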
Associated semisimple representation
Given a finite-dimensional representation V, the Jordan–Hölder theorem says there is a filtration by subrepresentations: $V=V_{0}\supset V_{1}\supset \cdots \supset V_{n}=0$ such that each successive quotient $V_{i}/V_{i+1}$ is a simple representation. Then the associated vector space $\operatorname {gr} V:=\bigoplus _{i=0}^{n-1}V_{i}/V_{i+1}$ is a semisimple representation called an associated semisimple representation, which, up to an isomorphism, is uniquely determined by V.[12]
Unipotent group non-example
A representation of a unipotent group is generally not semisimple. Take $G$ to be the group consisting of real matrices ${\begin{bmatrix}1&a\\0&1\end{bmatrix}}$; it acts on $V=\mathbb {R} ^{2}$ in a natural way and makes V a representation of G. If W is a subrepresentation of V that has dimension 1, then a simple calculation shows that it must be spanned by the vector ${\begin{bmatrix}1\\0\end{bmatrix}}$. That is, there are exactly three G-subrepresentations of V; in particular, V is not semisimple (as the unique one-dimensional subrepresentation does not admit a complementary representation).[13]
Semisimple decomposition and multiplicity
See also: Decomposition of a module
The decomposition of a semisimple representation into simple ones, called a semisimple decomposition, need not be unique; for example, for a trivial representation, simple representations are one-dimensional vector spaces and thus a semisimple decomposition amounts to a choice of a basis of the representation vector space.[14] The isotypic decomposition, on the other hand, is an example of a unique decomposition.[15]
However, for a finite-dimensional semisimple representation V over an algebraically closed field, the numbers of simple representations up to isomorphisms appearing in the decomposition of V (1) are unique and (2) completely determine the representation up to isomorphisms;[16] this is a consequence of Schur's lemma in the following way. Suppose a finite-dimensional semisimple representation V over an algebraically closed field is given: by definition, it is a direct sum of simple representations. By grouping together simple representations in the decomposition that are isomorphic to each other, up to an isomorphism, one finds a decomposition (not necessarily unique):[16]
$V\simeq \bigoplus _{i}V_{i}^{\oplus m_{i}}$
where $V_{i}$ are simple representations, mutually non-isomorphic to one another, and $m_{i}$ are positive integers. By Schur's lemma,
$m_{i}=\dim \operatorname {Hom} _{\text{equiv}}(V_{i},V)=\dim \operatorname {Hom} _{\text{equiv}}(V,V_{i})$,
where $\operatorname {Hom} _{\text{equiv}}$ refers to the equivariant linear maps. Also, each $m_{i}$ is unchanged if $V_{i}$ is replaced by another simple representation isomorphic to $V_{i}$. Thus, the integers $m_{i}$ are independent of chosen decompositions; they are the multiplicities of simple representations $V_{i}$, up to isomorphisms, in V.[17]
In general, given a finite-dimensional representation $\pi :G\to GL(V)$ of a group G over a field k, the composition $\chi _{V}:G{\overset {\pi }{\to }}GL(V){\overset {\text{tr}}{\to }}k$ is called the character of $(\pi ,V)$.[18] When $(\pi ,V)$ is semisimple with the decomposition $V\simeq \bigoplus _{i}V_{i}^{\oplus m_{i}}$ as above, the trace $\operatorname {tr} (\pi (g))$ is the sum of the traces of $\pi (g):V_{i}\to V_{i}$ with multiplicities and thus, as functions on G,
$\chi _{V}=\sum _{i}m_{i}\chi _{V_{i}}$
where $\chi _{V_{i}}$ are the characters of $V_{i}$. When G is a finite group or more generally a compact group and $V$ is a unitary representation with the inner product given by the averaging argument, the Schur orthogonality relations say:[19] the irreducible characters (characters of simple representations) of G are an orthonormal subset of the space of complex-valued functions on G and thus $m_{i}=\langle \chi _{V},\chi _{V_{i}}\rangle $.
Isotypic decomposition
There is a decomposition of a semisimple representation that is unique, called the isotypic decomposition of the representation. By definition, given a simple representation S, the isotypic component of type S of a representation V is the sum of all subrepresentations of V that are isomorphic to S;[15] note the component is also isomorphic to the direct sum of some choice of subrepresentations isomorphic to S (so the component is unique, while the summands are not necessarily so).
Then the isotypic decomposition of a semisimple representation V is the (unique) direct sum decomposition:[15][20]
$V=\bigoplus _{\lambda \in {\widehat {G}}}V^{\lambda }$
where ${\widehat {G}}$ is the set of isomorphism classes of simple representations of G and $V^{\lambda }$ is the isotypic component of V of type S for some $S\in \lambda $.
Example
Let $V$ be the space of homogeneous degree-three polynomials over the complex numbers in variables $x_{1},x_{2},x_{3}$. Then $S_{3}$ acts on $V$ by permutation of the three variables. This is a finite-dimensional complex representation of a finite group, and so is semisimple. Therefore, this 10-dimensional representation can be broken up into three isotypic components, each corresponding to one of the three irreducible representations of $S_{3}$. In particular, $V$ contains three copies of the trivial representation, one copy of the sign representation, and three copies of the two-dimensional irreducible representation $W$ of $S_{3}$. For example, the span of $x_{1}^{2}x_{2}-x_{2}^{2}x_{1}+x_{1}^{2}x_{3}-x_{2}^{2}x_{3}$ and $x_{2}^{2}x_{3}-x_{3}^{2}x_{2}+x_{2}^{2}x_{1}-x_{3}^{2}x_{1}$ is isomorphic to $W$. This can more easily be seen by writing this two-dimensional subspace as
$W_{1}=\{a(x_{1}^{2}x_{2}+x_{1}^{2}x_{3})+b(x_{2}^{2}x_{1}+x_{2}^{2}x_{3})+c(x_{3}^{2}x_{1}+x_{3}^{2}x_{2})\mid a+b+c=0\}$.
Another copy of $W$ can be written in a similar form:
$W_{2}=\{a(x_{2}^{2}x_{1}+x_{3}^{2}x_{1})+b(x_{1}^{2}x_{2}+x_{3}^{2}x_{2})+c(x_{1}^{2}x_{3}+x_{2}^{2}x_{3})\mid a+b+c=0\}$.
So can the third:
$W_{3}=\{ax_{1}^{3}+bx_{2}^{3}+cx_{3}^{3}\mid a+b+c=0\}$.
Then $W_{1}\oplus W_{2}\oplus W_{3}$ is the isotypic component of type $W$ in $V$.
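The multiplicities 3, 1, 3 in this example can be recomputed from the character formula $m_{i}=\langle \chi _{V},\chi _{V_{i}}\rangle $ above, since the value of the permutation character at $g$ is the number of degree-three monomials fixed by $g$. A minimal sketch; the character table of $S_{3}$ is a standard fact, entered by hand and keyed by element order:

```python
from itertools import combinations_with_replacement
from sympy.combinatorics import SymmetricGroup

# Degree-3 monomials in x1, x2, x3, encoded as sorted tuples of variable indices.
monomials = list(combinations_with_replacement(range(3), 3))  # 10 of them
G = list(SymmetricGroup(3).elements)

def chi(g):
    # Permutation character: the number of monomials fixed by g.
    return sum(tuple(sorted(g(i) for i in m)) == m for m in monomials)

# Irreducible characters of S3, keyed by element order
# (1: identity, 2: transpositions, 3: three-cycles).
irreducible = {
    "trivial":  {1: 1, 2: 1,  3: 1},
    "sign":     {1: 1, 2: -1, 3: 1},
    "standard": {1: 2, 2: 0,  3: -1},
}

for name, table in irreducible.items():
    mult = sum(chi(g) * table[g.order()] for g in G) // len(G)
    print(name, mult)  # trivial 3, sign 1, standard 3
```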
Completion
In Fourier analysis, one decomposes a (nice) function as the limit of the Fourier series of the function. In much the same way, a representation itself may not be semisimple but it may be the completion (in a suitable sense) of a semisimple representation. The most basic case of this is the Peter–Weyl theorem, which decomposes the left (or right) regular representation of a compact group into the Hilbert-space completion of the direct sum of all simple unitary representations. As a corollary,[21] there is a natural decomposition for $W=L^{2}(G)$ = the Hilbert space of (classes of) square-integrable functions on a compact group G:
$W\simeq {\widehat {\bigoplus _{[(\pi ,V)]}}}V^{\oplus \dim V}$
where ${\widehat {\bigoplus }}$ means the completion of the direct sum and the direct sum runs over all isomorphism classes of simple finite-dimensional unitary representations $(\pi ,V)$ of G.[note 1] Note here that every simple unitary representation (up to an isomorphism) appears in the sum with the multiplicity the dimension of the representation.
When the group G is a finite group, the vector space $W=\mathbb {C} [G]$ is simply the group algebra of G and also the completion is vacuous. Thus, the theorem simply says that
$\mathbb {C} [G]=\bigoplus _{[(\pi ,V)]}V^{\oplus \dim V}.$
That is, each simple representation of G appears in the regular representation with multiplicity the dimension of the representation.[22] This is one of the standard facts in the representation theory of a finite group (and is much easier to prove).
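For $G=S_{3}$ this can be verified directly from characters: the character of the regular representation is $|G|$ at the identity and 0 elsewhere, and pairing it against the irreducible characters returns each irreducible's dimension (a minimal sketch; the $S_{3}$ character table is entered by hand):

```python
from sympy.combinatorics import SymmetricGroup

G = list(SymmetricGroup(3).elements)

# Irreducible characters of S3, keyed by element order
# (1: identity, 2: transpositions, 3: three-cycles).
irreducible = {
    "trivial":  {1: 1, 2: 1,  3: 1},
    "sign":     {1: 1, 2: -1, 3: 1},
    "standard": {1: 2, 2: 0,  3: -1},
}

def chi_reg(g):
    # Character of the regular representation C[G].
    return len(G) if g.order() == 1 else 0

for name, table in irreducible.items():
    mult = sum(chi_reg(g) * table[g.order()] for g in G) // len(G)
    print(name, mult, "== dim =", table[1])  # multiplicity equals dimension
```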
When the group G is the circle group $S^{1}$, the theorem exactly amounts to the classical Fourier analysis.[23]
Applications to physics
In quantum mechanics and particle physics, the angular momentum of an object can be described by complex representations of the rotation group SO(3), all of which are semisimple.[24] Due to the connection between SO(3) and SU(2), the non-relativistic spin of an elementary particle is described by complex representations of SU(2) and the relativistic spin is described by complex representations of SL2(C), all of which are semisimple.[24] In angular momentum coupling, Clebsch–Gordan coefficients arise from the multiplicities of irreducible representations occurring in the semisimple decomposition of a tensor product of irreducible representations.[25]
Notes
1. To be precise, the theorem concerns the regular representation of $G\times G$ and the above statement is a corollary.
References
Citations
1. Procesi 2007, Ch. 6, § 1.1, Definition 1 (ii).
2. Procesi 2007, Ch. 6, § 2.1.
3. Anderson & Fuller 1992, Proposition 9.4.
4. Anderson & Fuller 1992, Theorem 9.6.
5. Anderson & Fuller 1992, Lemma 9.2.
6. Fulton & Harris 1991, § 9.3. A
7. Hall 2015, Theorem 4.28
8. Fulton & Harris 1991, Corollary 1.6.
9. Serre 1977, Theorem 2.
10. Hall 2015 Theorem 10.9
11. Jacobson 1989, § 3.5. Exercise 4.
12. Artin 1999, Ch. V, § 14.
13. Fulton & Harris 1991, just after Corollary 1.6.
14. Serre 1977, § 1.4. remark
15. Procesi 2007, Ch. 6, § 2.3.
16. Fulton & Harris 1991, Proposition 1.8.
17. Fulton & Harris 1991, § 2.3.
18. Fulton & Harris 1991, § 2.1. Definition
19. Serre 1977, § 2.3. Theorem 3 and § 4.3.
20. Serre 1977, § 2.6. Theorem 8 (i)
21. Procesi 2007, Ch. 8, Theorem 3.2.
22. Serre 1977, § 2.4. Corollary 1 to Proposition 5
23. Procesi 2007, Ch. 8, § 3.3.
24. Hall, Brian C. (2013). "Angular Momentum and Spin". Quantum Theory for Mathematicians. Graduate Texts in Mathematics. Vol. 267. Springer. pp. 367–392. ISBN 978-1461471158.
25. Klimyk, A. U.; Gavrilik, A. M. (1979). "Representation matrix elements and Clebsch–Gordan coefficients of the semisimple Lie groups". Journal of Mathematical Physics. 20 (1624): 1624–1642. Bibcode:1979JMP....20.1624K. doi:10.1063/1.524268.
Sources
• Anderson, Frank W.; Fuller, Kent R. (1992), Rings and categories of modules, Graduate Texts in Mathematics, vol. 13 (2nd ed.), New York, NY: Springer-Verlag, pp. x+376, doi:10.1007/978-1-4612-4418-9, ISBN 0-387-97845-3, MR 1245487; NB: this reference, nominally, considers a semisimple module over a ring not over a group but this is not a material difference (the abstract part of the discussion goes through for groups as well).
• Artin, Michael (1999). "Noncommutative Rings" (PDF).
• Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
• Hall, Brian C. (2015). Lie Groups, Lie Algebras, and Representations: An Elementary Introduction. Graduate Texts in Mathematics. Vol. 222 (2nd ed.). Springer. ISBN 978-3319134666.
• Jacobson, Nathan (1989), Basic algebra II (2nd ed.), W. H. Freeman, ISBN 978-0-7167-1933-5
• Procesi, Claudio (2007). Lie Groups: An Approach through Invariants and Representations. Springer. ISBN 9780387260402.
• Serre, Jean-Pierre (1977-09-01). Linear Representations of Finite Groups. Graduate Texts in Mathematics, 42. New York–Heidelberg: Springer-Verlag. ISBN 978-0-387-90190-9. MR 0450380. Zbl 0355.20006.
|
Wikipedia
|
Semi-simplicity
In mathematics, semi-simplicity is a widespread concept in disciplines such as linear algebra, abstract algebra, representation theory, category theory, and algebraic geometry. A semi-simple object is one that can be decomposed into a sum of simple objects, and simple objects are those that do not contain non-trivial proper sub-objects. The precise definitions of these words depend on the context.
For example, if G is a finite group, then a nontrivial finite-dimensional representation V over a field is said to be simple if the only subrepresentations it contains are either {0} or V (these are also called irreducible representations). Now Maschke's theorem says that any finite-dimensional representation of a finite group is a direct sum of simple representations (provided the characteristic of the base field does not divide the order of the group). So in the case of finite groups with this condition, every finite-dimensional representation is semi-simple. Especially in algebra and representation theory, "semi-simplicity" is also called complete reducibility. For example, Weyl's theorem on complete reducibility says a finite-dimensional representation of a semisimple Lie algebra over a field of characteristic zero is semisimple.
A square matrix (in other words a linear operator $T:V\to V$ with V a finite-dimensional vector space) is said to be simple if its only invariant subspaces under T are {0} and V. If the field is algebraically closed (such as the complex numbers), then the only simple matrices are of size 1 by 1. A semi-simple matrix is one that is similar to a direct sum of simple matrices; if the field is algebraically closed, this is the same as being diagonalizable.
These notions of semi-simplicity can be unified using the language of semi-simple modules, and generalized to semi-simple categories.
Introductory example of vector spaces
If one considers all vector spaces (over a field, such as the real numbers), the simple vector spaces are those that contain no proper nontrivial subspaces. Therefore, the one-dimensional vector spaces are the simple ones. So it is a basic result of linear algebra that any finite-dimensional vector space is the direct sum of simple vector spaces; in other words, all finite-dimensional vector spaces are semi-simple.
Semi-simple matrices
A square matrix or, equivalently, a linear operator T on a finite-dimensional vector space V is called semi-simple if every T-invariant subspace has a complementary T-invariant subspace.[1][2] This is equivalent to the minimal polynomial of T being square-free.
For vector spaces over an algebraically closed field F, semi-simplicity of a matrix is equivalent to diagonalizability.[1] This is because such an operator always has an eigenvector; if it is, in addition, semi-simple, then it has a complementary invariant hyperplane, which itself has an eigenvector, and thus by induction is diagonalizable. Conversely, diagonalizable operators are easily seen to be semi-simple, as invariant subspaces are direct sums of eigenspaces, and any eigenbasis for this subspace can be extended to an eigenbasis of the full space.
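A quick check of the algebraically closed case with sympy, whose is_diagonalizable method tests diagonalizability over the complex numbers (a minimal sketch):

```python
from sympy import Matrix

# Semi-simple over C means diagonalizable: a real rotation by 90 degrees is
# diagonalizable over C (eigenvalues +i and -i), while a Jordan block is not.
print(Matrix([[0, -1], [1, 0]]).is_diagonalizable())  # True
print(Matrix([[1, 1], [0, 1]]).is_diagonalizable())   # False
```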
Semi-simple modules and rings
Further information: Semisimple module and Semisimple ring
For a fixed ring R, a nontrivial R-module M is simple, if it has no submodules other than 0 and M. An R-module M is semi-simple if every R-submodule of M is an R-module direct summand of M (the trivial module 0 is semi-simple, but not simple). For an R-module M, M is semi-simple if and only if it is the direct sum of simple modules (the trivial module is the empty direct sum). Finally, R is called a semi-simple ring if it is semi-simple as an R-module. As it turns out, this is equivalent to requiring that any finitely generated R-module M is semi-simple.[3]
Examples of semi-simple rings include fields and, more generally, finite direct products of fields. For a finite group G Maschke's theorem asserts that the group ring R[G] over some ring R is semi-simple if and only if R is semi-simple and |G| is invertible in R. Since the theory of modules of R[G] is the same as the representation theory of G on R-modules, this fact is an important dichotomy, which causes modular representation theory, i.e., the case when the characteristic of R divides |G|, to be more difficult than the case when it does not (for example, when R is a field of characteristic zero). By the Artin–Wedderburn theorem, a unital Artinian ring R is semisimple if and only if it is (isomorphic to) $M_{n_{1}}(D_{1})\times M_{n_{2}}(D_{2})\times \cdots \times M_{n_{r}}(D_{r})$, where each $D_{i}$ is a division ring and $M_{n}(D)$ is the ring of n-by-n matrices with entries in D.
An operator T is semi-simple in the sense above if and only if the subalgebra $F[T]\subseteq \operatorname {End} _{F}(V)$ generated by the powers (i.e., iterations) of T inside the ring of endomorphisms of V is semi-simple.
As indicated above, the theory of semi-simple rings is much easier than that of general rings. For example, any short exact sequence
$0\to M'\to M\to M''\to 0$
of modules over a semi-simple ring must split, i.e., $M\cong M'\oplus M''$. From the point of view of homological algebra, this means that there are no non-trivial extensions. The ring Z of integers is not semi-simple: for n ≥ 2, the sequence $0\to n\mathbb {Z} \to \mathbb {Z} \to \mathbb {Z} /n\to 0$ does not split, since Z is not isomorphic to the direct sum of nZ and Z/n.
Semi-simple categories
Many of the above notions of semi-simplicity are recovered by the concept of a semi-simple category C. Briefly, a category is a collection of objects and maps between such objects, the idea being that the maps between the objects preserve some structure inherent in these objects. For example, R-modules and R-linear maps between them form a category, for any ring R.
An abelian category[4] C is called semi-simple if there is a collection of simple objects $X_{\alpha }\in C$, i.e., ones with no subobject other than the zero object 0 and $X_{\alpha }$ itself, such that any object X is the direct sum (i.e., coproduct or, equivalently, product) of finitely many simple objects. It follows from Schur's lemma that the endomorphism ring
$\operatorname {End} _{C}(X)=\operatorname {Hom} _{C}(X,X)$
in a semi-simple category is a product of matrix rings over division rings, i.e., semi-simple.
Moreover, a ring R is semi-simple if and only if the category of finitely generated R-modules is semisimple.
An example from Hodge theory is the category of polarizable pure Hodge structures, i.e., pure Hodge structures equipped with a suitable positive definite bilinear form. The presence of this so-called polarization causes the category of polarizable Hodge structures to be semi-simple.[5] Another example from algebraic geometry is the category of pure motives of smooth projective varieties over a field k $\operatorname {Mot} (k)_{\sim }$ modulo an adequate equivalence relation $\sim $. As was conjectured by Grothendieck and shown by Jannsen, this category is semi-simple if and only if the equivalence relation is numerical equivalence.[6] This fact is a conceptual cornerstone in the theory of motives.
Semisimple abelian categories also arise from a combination of a t-structure and a (suitably related) weight structure on a triangulated category.[7]
Semi-simplicity in representation theory
Main article: Semisimple representation
See also: Maschke's theorem and Weyl's theorem on complete reducibility
One can ask whether the category of finite-dimensional representations of a group or a Lie algebra is semisimple, that is, whether every finite-dimensional representation decomposes as a direct sum of irreducible representations. The answer, in general, is no. For example, the representation of $\mathbb {R} $ given by
$\Pi (x)={\begin{pmatrix}1&x\\0&1\end{pmatrix}}$
is not a direct sum of irreducibles.[8] (There is precisely one nontrivial invariant subspace, the span of the first basis element, $e_{1}$.) On the other hand, if $G$ is compact, then every finite-dimensional representation $\Pi $ of $G$ admits an inner product with respect to which $\Pi $ is unitary, showing that $\Pi $ decomposes as a sum of irreducibles.[9] Similarly, if ${\mathfrak {g}}$ is a complex semisimple Lie algebra, every finite-dimensional representation of ${\mathfrak {g}}$ is a sum of irreducibles.[10] Weyl's original proof of this used the unitarian trick: Every such ${\mathfrak {g}}$ is the complexification of the Lie algebra of a simply connected compact Lie group $K$. Since $K$ is simply connected, there is a one-to-one correspondence between the finite-dimensional representations of $K$ and of ${\mathfrak {g}}$.[11] Thus, the just-mentioned result about representations of compact groups applies. It is also possible to prove semisimplicity of representations of ${\mathfrak {g}}$ directly by algebraic means, as in Section 10.3 of Hall's book.
See also: Fusion category (fusion categories are semisimple).
See also
• A semisimple Lie algebra is a Lie algebra that is a direct sum of simple Lie algebras.
• A semisimple algebraic group is a linear algebraic group whose radical of the identity component is trivial.
• Semisimple algebra
• Semisimple representation
References
1. Lam (2001), p. 39
2. Hoffman, Kenneth; Kunze, Ray (1971). "Semi-Simple operators". Linear algebra (2nd ed.). Englewood Cliffs, N.J.: Prentice-Hall, Inc. MR 0276251.
3. Lam, Tsit-Yuen (2001). A first course in noncommutative rings. Graduate texts in mathematics. Vol. 131 (2 ed.). Springer. p. 27. ISBN 0-387-95183-0. "(2.5) Theorem and Definition"
4. More generally, the same definition of semi-simplicity works for pseudo-abelian additive categories. See for example Yves André, Bruno Kahn: Nilpotence, radicaux et structures monoïdales. With an appendix by Peter O'Sullivan. Rend. Sem. Mat. Univ. Padova 108 (2002), 107–291. https://arxiv.org/abs/math/0203273.
5. Peters, Chris A. M.; Steenbrink, Joseph H. M. Mixed Hodge structures. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics], 52. Springer-Verlag, Berlin, 2008. xiv+470 pp. ISBN 978-3-540-77015-2; see Corollary 2.12
6. Uwe Jannsen: Motives, numerical equivalence, and semi-simplicity, Invent. math. 107, 447~452 (1992)
7. Bondarko, Mikhail V. (2012), "Weight structures and 'weights' on the hearts of t-structures", Homology Homotopy Appl., 14 (1): 239–261, doi:10.4310/HHA.2012.v14.n1.a12, Zbl 1251.18006
8. Hall 2015 Example 4.25
9. Hall 2015 Theorem 4.28
10. Hall 2015 Theorem 10.9
11. Hall 2015 Theorem 5.6
• Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer
External links
• MathOverflow:Are abelian non-degenerate tensor categories semisimple?
• Semisimple category at the nLab
|
Wikipedia
|
Semisimple operator
In mathematics, a linear operator T on a vector space is semisimple if every T-invariant subspace has a complementary T-invariant subspace;[1] in other words, the vector space is a semisimple representation of the operator T. Equivalently, a linear operator is semisimple if the minimal polynomial of it is a product of distinct irreducible polynomials.[2]
A linear operator on a finite dimensional vector space over an algebraically closed field is semisimple if and only if it is diagonalizable.[1][3]
Over a perfect field, the Jordan–Chevalley decomposition expresses an endomorphism $x:V\to V$ as a sum of a semisimple endomorphism s and a nilpotent endomorphism n such that both s and n are polynomials in x.
See also
• Jordan–Chevalley decomposition
Notes
1. Lam (2001), p. 39
2. Jacobson 1979, A paragraph before Ch. II, § 5, Theorem 11.
3. This is trivial by the definition in terms of a minimal polynomial but can be seen more directly as follows. Such an operator always has an eigenvector; if it is, in addition, semi-simple, then it has a complementary invariant hyperplane, which itself has an eigenvector, and thus by induction is diagonalizable. Conversely, diagonalizable operators are easily seen to be semi-simple, as invariant subspaces are direct sums of eigenspaces, and any eigenbasis for this subspace can be extended to an eigenbasis of the full space.
References
• Hoffman, Kenneth; Kunze, Ray (1971). "Semi-Simple operators". Linear algebra (2nd ed.). Englewood Cliffs, N.J.: Prentice-Hall, Inc. MR 0276251.
• Jacobson, Nathan (1979). Lie algebras. New York. ISBN 0-486-63832-4. OCLC 6499793.{{cite book}}: CS1 maint: location missing publisher (link)
• Lam, Tsit-Yuen (2001). A first course in noncommutative rings. Graduate texts in mathematics. Vol. 131 (2 ed.). Springer. ISBN 0-387-95183-0.
|
Wikipedia
|
Semisimple module
In mathematics, especially in the area of abstract algebra known as module theory, a semisimple module or completely reducible module is a type of module that can be understood easily from its parts. A ring that is a semisimple module over itself is known as an Artinian semisimple ring. Some important rings, such as group rings of finite groups over fields of characteristic zero, are semisimple rings. An Artinian ring is initially understood via its largest semisimple quotient. The structure of Artinian semisimple rings is well understood by the Artin–Wedderburn theorem, which exhibits these rings as finite direct products of matrix rings.
See also: Semisimple algebra
For a group-theory analog of the same notion, see Semisimple representation.
Definition
A module over a (not necessarily commutative) ring is said to be semisimple (or completely reducible) if it is the direct sum of simple (irreducible) submodules.
For a module M, the following are equivalent:
1. M is semisimple; i.e., a direct sum of irreducible modules.
2. M is the sum of its irreducible submodules.
3. Every submodule of M is a direct summand: for every submodule N of M, there is a complement P such that M = N ⊕ P.
For the proof of the equivalences, see Semisimple representation § Equivalent characterizations.
The most basic example of a semisimple module is a module over a field, i.e., a vector space. On the other hand, the ring Z of integers is not a semisimple module over itself, since the submodule 2Z is not a direct summand.
Semisimple is stronger than completely decomposable, which is a direct sum of indecomposable submodules.
Let A be an algebra over a field K. Then a left module M over A is said to be absolutely semisimple if, for any field extension F of K, F ⊗K M is a semisimple module over F ⊗K A.
Properties
• If M is semisimple and N is a submodule, then N and M/N are also semisimple.
• An arbitrary direct sum of semisimple modules is semisimple.
• A module M is finitely generated and semisimple if and only if it is Artinian and its radical is zero.
Endomorphism rings
• A semisimple module M over a ring R can also be thought of as a ring homomorphism from R into the ring of abelian group endomorphisms of M. The image of this homomorphism is a semiprimitive ring, and every semiprimitive ring is isomorphic to such an image.
• The endomorphism ring of a semisimple module is not only semiprimitive, but also von Neumann regular, (Lam 2001, p. 62).
Semisimple rings
A ring is said to be (left)-semisimple if it is semisimple as a left module over itself.[1] Surprisingly, a left-semisimple ring is also right-semisimple and vice versa. The left/right distinction is therefore unnecessary, and one can speak of semisimple rings without ambiguity.
A semisimple ring may be characterized in terms of homological algebra: namely, a ring R is semisimple if and only if any short exact sequence of left (or right) R-modules splits. That is, for a short exact sequence
$0\to A\xrightarrow {f} B\xrightarrow {g} C\to 0$
there exists s : C → B such that the composition g ∘ s : C → C is the identity. The map s is known as a section. From this it follows that
$B\cong A\oplus C$
or in more exact terms
$B\cong f(A)\oplus s(C).$
In particular, any module over a semisimple ring is injective and projective. Since "projective" implies "flat", a semisimple ring is a von Neumann regular ring.
Semisimple rings are of particular interest to algebraists. For example, if the base ring R is semisimple, then all R-modules are automatically semisimple. Furthermore, every simple (left) R-module is isomorphic to a minimal left ideal of R, that is, R is a left Kasch ring.
Semisimple rings are both Artinian and Noetherian. From the above properties, a ring is semisimple if and only if it is Artinian and its Jacobson radical is zero.
If an Artinian semisimple ring contains a field as a central subring, it is called a semisimple algebra.
Examples
• For a commutative ring, the four following properties are equivalent: being a semisimple ring; being artinian and reduced;[2] being a reduced Noetherian ring of Krull dimension 0; and being isomorphic to a finite direct product of fields.
• If K is a field and G is a finite group of order n, then the group ring K[G] is semisimple if and only if the characteristic of K does not divide n. This is Maschke's theorem, an important result in group representation theory.
• By the Wedderburn–Artin theorem, a unital ring R is semisimple if and only if it is (isomorphic to) Mn1(D1) × Mn2(D2) × ... × Mnr(Dr), where each Di is a division ring and each ni is a positive integer, and Mn(D) denotes the ring of n-by-n matrices with entries in D.
• An example of a semisimple non-unital ring is M∞(K), the row-finite, column-finite, infinite matrices over a field K.
Simple rings
Main article: Simple ring
One should beware that despite the terminology, not all simple rings are semisimple. The problem is that the ring may be "too big", that is, not (left/right) Artinian. In fact, if R is a simple ring with a minimal left/right ideal, then R is semisimple.
Classic examples of simple, but not semisimple, rings are the Weyl algebras, such as the $\mathbb {Q} $-algebra
$A=\mathbb {Q} {\left[x,y\right]}/\langle xy-yx-1\rangle \ ,$
which is a simple noncommutative domain. These and many other nice examples are discussed in more detail in several noncommutative ring theory texts, including chapter 3 of Lam's text, in which they are described as nonartinian simple rings. The module theory for the Weyl algebras is well studied and differs significantly from that of semisimple rings.
Jacobson semisimple
Main article: Semiprimitive ring
A ring is called Jacobson semisimple (or J-semisimple or semiprimitive) if the intersection of the maximal left ideals is zero, that is, if the Jacobson radical is zero. Every ring that is semisimple as a module over itself has zero Jacobson radical, but not every ring with zero Jacobson radical is semisimple as a module over itself. A J-semisimple ring is semisimple if and only if it is an artinian ring, so semisimple rings are often called artinian semisimple rings to avoid confusion.
For example, the ring of integers, Z, is J-semisimple, but not artinian semisimple.
See also
• Socle
• Semisimple algebra
References
Notes
1. Sengupta 2012, p. 125
2. Bourbaki 2012, VIII, pg. 133.
References
• Bourbaki, Nicolas (2012), Algèbre Ch. 8 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-35315-7
• Jacobson, Nathan (1989), Basic algebra II (2nd ed.), W. H. Freeman, ISBN 978-0-7167-1933-5
• Lam, Tsit-Yuen (2001), A First Course in Noncommutative Rings, Graduate Texts in Mathematics, vol. 131 (2nd ed.), Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4419-8616-0, ISBN 978-0-387-95325-0, MR 1838439
• Lang, Serge (2002), Algebra (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0387953854
• Pierce, R.S. (1982), Associative Algebras, Graduate Texts in Mathematics, Springer-Verlag, ISBN 978-1-4757-0165-4
• Sengupta, Ambar (2012). Representing finite groups: a semisimple introduction. New York. doi:10.1007/978-1-4614-1231-1_8. ISBN 9781461412311. OCLC 769756134.{{cite book}}: CS1 maint: location missing publisher (link)
|
Wikipedia
|
Semistable abelian variety
In algebraic geometry, a semistable abelian variety is an abelian variety defined over a global or local field, which is characterized by how it reduces at the primes of the field.
For an abelian variety $A$ defined over a field $F$ with ring of integers $R$, consider the Néron model of $A$, which is a 'best possible' model of $A$ defined over $R$. This model may be represented as a scheme over $\mathrm {Spec} (R)$ (cf. spectrum of a ring) for which the generic fibre constructed by means of the morphism $\mathrm {Spec} (F)\to \mathrm {Spec} (R)$ gives back $A$. The Néron model is a smooth group scheme, so we can consider $A^{0}$, the connected component of the Néron model which contains the identity for the group law. This is an open subgroup scheme of the Néron model. For a residue field $k$, $A_{k}^{0}$ is a group variety over $k$, hence an extension of an abelian variety by a linear group. If this linear group is an algebraic torus, so that $A_{k}^{0}$ is a semiabelian variety, then $A$ has semistable reduction at the prime corresponding to $k$. If $F$ is a global field, then $A$ is semistable if it has good or semistable reduction at all primes.
The fundamental semistable reduction theorem of Alexander Grothendieck states that an abelian variety acquires semistable reduction over a finite extension of $F$.[1]
Semistable elliptic curve
A semistable elliptic curve may be described more concretely as an elliptic curve that has bad reduction only of multiplicative type.[2] Suppose E is an elliptic curve defined over the rational number field $\mathbb {Q} $. It is known that there is a finite, non-empty set S of prime numbers p for which E has bad reduction modulo p. The latter means that the curve $E_{p}$ obtained by reduction of E to the prime field with p elements has a singular point. Roughly speaking, the condition of multiplicative reduction amounts to saying that the singular point is a double point, rather than a cusp.[3] Deciding whether this condition holds is effectively computable by Tate's algorithm.[4][5] Therefore in a given case it is decidable whether or not the reduction is semistable, namely multiplicative reduction at worst.
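For a curve in short Weierstrass form and a prime $p\geq 5$ at which that equation is minimal, the distinction can be sketched directly from the standard quantities $c_{4}$ and $\Delta $. The following function is an illustrative simplification under those assumptions, not a substitute for Tate's algorithm:

```python
# Rough reduction-type classifier for y^2 = x^3 + a*x + b at a prime p >= 5,
# assuming this equation is already minimal at p; Tate's algorithm handles
# the general case, including p = 2, 3 and non-minimal models.
def reduction_type(a, b, p):
    c4 = -48 * a                          # standard Weierstrass quantity c4
    disc = -16 * (4 * a**3 + 27 * b**2)   # discriminant
    if disc % p != 0:
        return "good"                     # the reduced curve is non-singular
    # Singular reduction: a node (multiplicative type) if p does not divide
    # c4, and a cusp (additive type) otherwise.
    return "multiplicative" if c4 % p != 0 else "additive"

print(reduction_type(1, 1, 31))   # discriminant -496 = -16*31: multiplicative
print(reduction_type(0, 5, 5))    # y^2 = x^3 mod 5, a cusp: additive
```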
The semistable reduction theorem for E may also be made explicit: E acquires semistable reduction over the extension of the base field generated by the coordinates of the points of order 12.[6][5]
References
1. Grothendieck (1972) Théorème 3.6, p. 351
2. Husemöller (1987) pp. 116–117
3. Husemöller (1987) pp. 116–117
4. Husemöller (1987) pp. 266–269
5. Tate, John (1975), "Algorithm for determining the type of a singular fiber in an elliptic pencil", in Birch, B.J.; Kuyk, W. (eds.), Modular Functions of One Variable IV, Lecture Notes in Mathematics, vol. 476, Berlin / Heidelberg: Springer, pp. 33–52, doi:10.1007/BFb0097582, ISBN 978-3-540-07392-5, ISSN 1617-9692, MR 0393039, Zbl 1214.14020
6. This is implicit in Husemöller (1987) pp.117-118
• Grothendieck, Alexandre (1972). Séminaire de Géométrie Algébrique du Bois Marie - 1967-69 - Groupes de monodromie en géométrie algébrique - (SGA 7) - vol. 1. Lecture Notes in Mathematics (in French). Vol. 288. Berlin; New York: Springer-Verlag. viii+523. doi:10.1007/BFb0068688. ISBN 978-3-540-05987-5. MR 0354656.
• Husemöller, Dale H. (1987). Elliptic curves. Graduate Texts in Mathematics. Vol. 111. With an appendix by Ruth Lawrence. Springer-Verlag. ISBN 0-387-96371-5. Zbl 0605.14032.
• Lang, Serge (1997). Survey of Diophantine geometry. Springer-Verlag. p. 70. ISBN 3-540-61223-8. Zbl 0869.11051.
Stable curve
In algebraic geometry, a stable curve is an algebraic curve that is asymptotically stable in the sense of geometric invariant theory.
See also: Stable map of curves
This is equivalent to the condition that it is a complete connected curve whose only singularities are ordinary double points and whose automorphism group is finite. The condition that the automorphism group is finite can be replaced by the condition that it is not of arithmetic genus one and every non-singular rational component meets the other components in at least 3 points (Deligne & Mumford 1969).
A semi-stable curve is one satisfying similar conditions, except that the automorphism group is allowed to be reductive rather than finite (or equivalently its connected component may be a torus). Alternatively the condition that non-singular rational components meet the other components in at least three points is replaced by the condition that they meet in at least two points.
Similarly a curve with a finite number of marked points is called stable if it is complete, connected, has only ordinary double points as singularities, and has finite automorphism group. For example, an elliptic curve (a non-singular genus 1 curve with 1 marked point) is stable.
Over the complex numbers, a connected curve is stable if and only if, after removing all singular and marked points, the universal covers of all its components are isomorphic to the unit disk.
Definition
Given an arbitrary scheme $S$ and an integer $g\geq 2$, a stable genus $g$ curve over $S$ is defined as a proper flat morphism $\pi :C\to S$ whose geometric fibers $C_{s}$ are reduced, connected, 1-dimensional schemes such that
1. $C_{s}$ has only ordinary double-point singularities
2. Every non-singular rational component $E$ meets the other components in more than $2$ points
3. $\dim H^{1}({\mathcal {O}}_{C_{s}})=g$
These technical conditions are necessary because (1) reduces the technical complexity (Picard–Lefschetz theory can also be used here), (2) rigidifies the curves so that there are no infinitesimal automorphisms of the moduli stack constructed later on, and (3) guarantees that the arithmetic genus of every fiber is the same. Note that for (1), the types of singularities found in elliptic surfaces are completely classified.
Examples
One classical example of a family of stable curves is given by the Weierstrass family of curves
${\begin{matrix}\operatorname {Proj} \left({\frac {\mathbb {Q} [t][x,y,z]}{(y^{2}z-x(x-z)(x-tz))}}\right)\\\downarrow \\\operatorname {Spec} (\mathbb {Q} [t])\end{matrix}}$
where the fibers over every point $t\neq 0,1$ are smooth and the degenerate fibers have only a single double-point singularity. This example can be generalized to the case of a one-parameter family of smooth hyperelliptic curves degenerating at finitely many points.
Non-examples
In the general case of more than one parameter, care has to be taken to remove curves which have worse than double-point singularities. For example, consider the family over $\mathbb {A} _{s,t}^{2}$ constructed from the polynomials
$y^{2}=x(x-s)(x-t)(x-1)(x-2)$
since along the diagonal $s=t$ the fibers are singular, and at the points of the diagonal where $s=t\in \{0,1,2\}$ the singularities are worse than ordinary double points. Another non-example is the family over $\mathbb {A} _{t}^{1}$ given by the polynomials
$x^{3}-y^{2}+t$
which is a family of elliptic curves degenerating to a rational curve with a cusp.
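The distinction between the allowed and forbidden degenerations can be tested symbolically: a plane-curve singularity at the origin is an ordinary double point exactly when the Hessian of the defining polynomial is nondegenerate there. A minimal sympy sketch (the function name is illustrative), applied to the $t=0$ fibers of the two families above:

```python
# At an ordinary double point (node) the Hessian of the defining polynomial
# is nondegenerate; at a cusp it is degenerate.
import sympy as sp

x, y = sp.symbols('x y')

def origin_singularity(F):
    vars_ = (x, y)
    assert F.subs({x: 0, y: 0}) == 0
    assert all(sp.diff(F, v).subs({x: 0, y: 0}) == 0 for v in vars_), "not singular"
    H = sp.Matrix(2, 2, lambda i, j: sp.diff(F, vars_[i], vars_[j])).subs({x: 0, y: 0})
    return "node" if H.det() != 0 else "worse than a node (e.g. a cusp)"

print(origin_singularity(y**2 - x**2*(x - 1)))  # t = 0 Weierstrass fiber: node
print(origin_singularity(y**2 - x**3))          # cuspidal cubic: worse
```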
Properties
One of the most important properties of stable curves is the fact that they are local complete intersections. This implies that standard Serre duality theory can be used. In particular, it can be shown that for every stable curve, $\omega _{C/S}^{\otimes 3}$ is a relatively very ample sheaf; it can be used to embed the curve into $\mathbb {P} _{S}^{5g-6}$. Using standard Hilbert scheme theory, we can construct a moduli scheme of curves of genus $g$ embedded in some projective space. The Hilbert polynomial is given by
$P_{g}(n)=(6n-1)(g-1)$
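The formula is a direct consequence of the Riemann–Roch theorem: on a genus-$g$ fiber, $\omega ^{\otimes 3n}$ has degree $3n(2g-2)>2g-2$, so $h^{0}=\deg +1-g$. A short sympy verification:

```python
# Riemann-Roch check: a line bundle of degree d > 2g - 2 on a genus-g curve
# has h^0 = d + 1 - g, and omega^{3n} has degree 3n(2g - 2) on each fiber.
import sympy as sp

n, g = sp.symbols('n g')
d = 3 * n * (2 * g - 2)                                   # deg omega^{3n}
assert sp.expand(d + 1 - g - (6*n - 1)*(g - 1)) == 0      # equals (6n-1)(g-1)
assert sp.expand((d + 1 - g).subs(n, 1) - (5*g - 5)) == 0 # n = 1: h^0 = 5g - 5
```

The case $n=1$ recovers the $5g-5$ sections giving the embedding into $\mathbb {P} ^{5g-6}$.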
There is a sublocus of stable curves contained in the Hilbert scheme
$H_{g}\subset {\textbf {Hilb}}_{\mathbb {P} _{\mathbb {Z} }^{5g-6}}^{P_{g}}$
This represents the functor
${\mathcal {M}}_{g}(S)\cong \left.\left\{{\begin{matrix}&{\text{stable curves }}\pi :C\to S\\&{\text{ with an iso }}\\&\mathbb {P} (\pi _{*}(\omega _{C/S}^{\otimes 3}))\cong \mathbb {P} ^{5g-6}\times S\end{matrix}}\right\}{\Bigg /}{\sim }\right.\cong \operatorname {Hom} (S,H_{g})$
where $\sim $ denotes isomorphism of stable curves. In order to make this the moduli space of curves without regard to the embedding (which is encoded by the isomorphism of projective spaces), we have to quotient by $PGL(5g-5)$, the automorphism group of $\mathbb {P} ^{5g-6}$. This gives us the moduli stack
${\mathcal {M}}_{g}:=[{\underline {H}}_{g}/{\underline {PGL}}(5g-5)]$
See also
• Moduli of algebraic curves
• Stable map of curves
References
• Artin, M.; Winters, G. (1971-11-01). "Degenerate fibres and stable reduction of curves". Topology. 10 (4): 373–383. doi:10.1016/0040-9383(71)90028-0. ISSN 0040-9383.
• Deligne, Pierre; Mumford, David (1969), "The irreducibility of the space of curves of given genus", Publications Mathématiques de l'IHÉS, 36 (36): 75–109, CiteSeerX 10.1.1.589.288, doi:10.1007/BF02684599, MR 0262240, S2CID 16482150
• Gieseker, D. (1982), Lectures on moduli of curves (PDF), Tata Institute of Fundamental Research Lectures on Mathematics and Physics, vol. 69, Published for the Tata Institute of Fundamental Research, Bombay, ISBN 978-3-540-11953-1, MR 0691308
• Harris, Joe; Morrison, Ian (1998), Moduli of curves, Graduate Texts in Mathematics, vol. 187, Berlin, New York: Springer-Verlag, ISBN 978-0-387-98429-2, MR 1631825
Semistable reduction theorem
In algebraic geometry, semistable reduction theorems state that, given a proper flat morphism $X\to S$, there exists a morphism $S'\to S$ (called base change) such that $X\times _{S}S'\to S'$ is semistable (i.e., the singularities are mild in some sense). Precise formulations depend on the specific versions of the theorem. For example, if $S$ is the unit disk in $\mathbb {C} $, then "semistable" means that the special fiber is a divisor with normal crossings.[1]
The fundamental semistable reduction theorem for abelian varieties, due to Grothendieck, shows that if $A$ is an abelian variety over the fraction field $K$ of a discrete valuation ring ${\mathcal {O}}$, then there is a finite field extension $L/K$ such that $A_{(L)}=A\otimes _{K}L$ has semistable reduction over the integral closure ${\mathcal {O}}_{L}$ of ${\mathcal {O}}$ in $L$. Semistability here means more precisely that if ${\mathcal {A}}_{L}$ is the Néron model of $A_{(L)}$ over ${\mathcal {O}}_{L},$ then the fibres ${\mathcal {A}}_{L,s}$ of ${\mathcal {A}}_{L}$ over the closed points $s\in S=\mathrm {Spec} ({\mathcal {O}}_{L})$ (which are always smooth algebraic groups) are extensions of abelian varieties by tori.[2] Here $S$ is the algebro-geometric analogue of a small disc around a point, and the hypothesis of the theorem says essentially that $A$ can be thought of as a smooth family of abelian varieties away from the closed points; the conclusion then shows that, after base change, this family extends over the closed points as well, with fibres there that are close to being abelian varieties.
The important semistable reduction theorem for algebraic curves was first proved by Deligne and Mumford.[3] The proof proceeds by showing that the curve has semistable reduction if and only if its Jacobian variety (which is an abelian variety) has semistable reduction; the theorem for abelian varieties above then applies.
References
1. Morrison 1984, § 1.
2. Grothendieck (1972), Théorème 3.6, p. 351
3. Deligne & Mumford 1969, Corollary 2.7.
• Deligne, P.; Mumford, D. (1969). "The irreducibility of the space of curves of given genus". Publications Mathématiques de l'Institut des Hautes Études Scientifiques. 36 (36): 75–109. doi:10.1007/BF02684599. S2CID 16482150.
• Grothendieck, Alexandre (1972). Séminaire de Géométrie Algébrique du Bois Marie - 1967-69 - Groupes de monodromie en géométrie algébrique - (SGA 7) - vol. 1. Lecture Notes in Mathematics (in French). Vol. 288. Berlin; New York: Springer-Verlag. viii+523. doi:10.1007/BFb0068688. ISBN 978-3-540-05987-5. MR 0354656.
• Kempf, G.; Knudsen, Finn Faye; Mumford, David; Saint-Donat, B. (1973), Toroidal embeddings. I, Lecture Notes in Mathematics, vol. 339, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0070318, ISBN 978-3-540-06432-9, MR 0335518
• Morrison, David R. (1984). "Chapter VI. The Clemens-Schmid exact sequence and applications" (PDF). Topics in Transcendental Algebraic Geometry. (AM-106). pp. 101–120. doi:10.1515/9781400881659-007. ISBN 9781400881659. S2CID 125739605.
Further reading
• The Stacks Project Chapter 55: Semistable Reduction: Introduction, https://stacks.math.columbia.edu/tag/0C2Q
Semitopological group
In mathematics, a semitopological group is a topological space that is also a group, whose group operation is continuous with respect to each variable considered separately. It is a weakening of the concept of a topological group; every topological group is a semitopological group, but the converse does not hold.
Formal definition
A semitopological group $G$ is a topological space that is also a group such that
$g_{1}:G\times G\to G:(x,y)\mapsto xy$
is continuous in each of $x$ and $y$ separately. (In a topological group, by contrast, $g_{1}$ is required to be continuous in both variables simultaneously, with $G\times G$ viewed as a topological space with the product topology, and the inversion map $g_{2}:G\to G:x\mapsto x^{-1}$ is also required to be continuous.)[1]
Clearly, every topological group is a semitopological group. To see that the converse does not hold, consider the real line $(\mathbb {R} ,+)$ with its usual structure as an additive abelian group, equipped with the lower limit topology, with topological basis the family $\{[a,b):-\infty <a<b<\infty \}$. Then $g_{1}$ is continuous (addition is in fact jointly continuous here, since $[a,a+\varepsilon /2)+[b,b+\varepsilon /2)\subseteq [a+b,a+b+\varepsilon )$), but $g_{2}$ is not continuous at 0: $[0,b)$ is an open neighbourhood of 0, yet $g_{2}^{-1}([0,b))=(-b,0]$, which contains no basic neighbourhood $[0,\varepsilon )$ of 0.
It is known that any locally compact Hausdorff semitopological group is a topological group.[2] Other similar results are also known.[3]
See also
• Lie group
• Algebraic group
• Compact group
• Topological ring
References
1. Husain, Taqdir (2018). Introduction to Topological Groups. Courier Dover Publications. p. 27. ISBN 9780486828206.
2. Arhangel’skii, Alexander; Tkachenko, Mikhail (2008). Topological Groups and Related Structures, An Introduction to Topological Algebra. Springer Science & Business Media. p. 114. ISBN 9789491216350.
3. Aull, C. E.; Lowen, R. (2013). Handbook of the History of General Topology. Springer Science & Business Media. p. 1119. ISBN 9789401704700.
Chamfered square tiling
In geometry, the chamfered square tiling or semitruncated square tiling is a tiling of the Euclidean plane. It is a square tiling with each edge chamfered into new hexagonal faces.
Chamfered square tiling, 4 colorings
Symmetry: p4m, [4,4], *442
Rotation symmetry: p4, [4,4]+, 442
Dual: Semikis square tiling
It can also be seen as the intersection of two truncated square tilings with offset positions. Its appearance is similar to that of a truncated square tiling, except that only half of the vertices have been truncated, which leads to its descriptive name semitruncated square tiling.
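The chamfer construction itself is easy to draw: shrink each face of the square tiling about its centre and replace every original edge by a hexagon joining the edge's endpoints to the nearby corners of the two shrunken faces. A minimal matplotlib sketch (the coordinates and the depth parameter r are illustrative choices):

```python
# Draw an N x N patch of the chamfered square tiling: shrunken squares plus
# one hexagon per original edge of the square tiling.
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

r = 0.6          # shrink factor for the squares; varying it changes the chamfer depth
h = 0.5 * r      # half-side of a shrunken square
g = 0.5 - h      # reach of a hexagon out from its original edge
N = 4

fig, ax = plt.subplots(figsize=(6, 6))

for i in range(N):
    for j in range(N):
        cx, cy = i + 0.5, j + 0.5   # face centre of the original square tiling
        square = [(cx - h, cy - h), (cx + h, cy - h), (cx + h, cy + h), (cx - h, cy + h)]
        ax.add_patch(Polygon(square, facecolor='lightsteelblue', edgecolor='k'))

for i in range(N + 1):
    for j in range(N):
        # hexagon replacing the vertical edge from (i, j) to (i, j + 1)
        hexagon = [(i, j), (i + g, j + 0.5 - h), (i + g, j + 0.5 + h),
                   (i, j + 1), (i - g, j + 0.5 + h), (i - g, j + 0.5 - h)]
        ax.add_patch(Polygon(hexagon, facecolor='navajowhite', edgecolor='k'))

for i in range(N):
    for j in range(N + 1):
        # hexagon replacing the horizontal edge from (i, j) to (i + 1, j)
        hexagon = [(i, j), (i + 0.5 - h, j + g), (i + 0.5 + h, j + g),
                   (i + 1, j), (i + 0.5 + h, j - g), (i + 0.5 - h, j - g)]
        ax.add_patch(Polygon(hexagon, facecolor='navajowhite', edgecolor='k'))

ax.set_xlim(-0.5, N + 0.5)
ax.set_ylim(-0.5, N + 0.5)
ax.set_aspect('equal')
ax.axis('off')
plt.show()
```

Varying r between 0 and 1 produces the shallow-to-deep variations discussed below, consistent with the statement that geometric variations exist within a given symmetry.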
Usage and names in tiling patterns
In floor tiling, this pattern with small squares has been labeled as Metro Broadway Matte and alternate corner square tile.[1]
With large squares it has been called a Dijon tile pattern.[1]
As 3 rows of rectangles, it has been called a basketweave tiling and a triple block tile pattern.[2][1]
Variations
Variations can be seen in different degrees of truncation, and geometric variations also exist within a given symmetry. Forms drawn with a 45-degree rotation look slightly different again. Lower symmetry forms are related to the Cairo pentagonal tiling, with axial edges expanded into rectangles.
[Gallery: *442 and 2*22 symmetry forms: shallow (Dijon tile), deep (alternate corner square tile), flat (triple block, basketweave), rectangular, and concave variations.]
The chiral forms can be seen as two overlapping Pythagorean tilings.
[Gallery: 442 symmetry pinwheel forms: flat, shallow, deep, skew, and concave.]
Semikis square tiling
The dual tiling looks like a square tiling with half of the squares divided into central triangles. It can be called a semikis square tiling, since the kis operator is applied to alternate squares only. It can also be seen as 4 sets of parallel lines.
References
1. Tile Patterns Gallery
2. Laying Patterns
Verbal arithmetic
Verbal arithmetic, also known as alphametics, cryptarithmetic, cryptarithm or word addition, is a type of mathematical game consisting of a mathematical equation among unknown numbers, whose digits are represented by letters of the alphabet. The goal is to identify the value of each letter. The name can be extended to puzzles that use non-alphabetic symbols instead of letters.
The equation is typically a basic operation of arithmetic, such as addition, multiplication, or division. The classic example, published in the July 1924 issue of Strand Magazine by Henry Dudeney,[1] is:
${\begin{matrix}&&{\text{S}}&{\text{E}}&{\text{N}}&{\text{D}}\\+&&{\text{M}}&{\text{O}}&{\text{R}}&{\text{E}}\\\hline =&{\text{M}}&{\text{O}}&{\text{N}}&{\text{E}}&{\text{Y}}\\\end{matrix}}$
The solution to this puzzle is O = 0, M = 1, Y = 2, E = 5, N = 6, D = 7, R = 8, and S = 9.
Traditionally, each letter should represent a different digit, and (as an ordinary arithmetic notation) the leading digit of a multi-digit number must not be zero. A good puzzle should have one unique solution, and the letters should make up a phrase (as in the example above).
Verbal arithmetic can be useful as a motivation and source of exercises in the teaching of algebra.
History
Cryptarithmic puzzles are quite old and their inventor is unknown. An 1864 example in The American Agriculturist[2] disproves the popular notion that it was invented by Sam Loyd. The name "cryptarithm" was coined by puzzlist Minos (pseudonym of Simon Vatriquant) in the May 1931 issue of Sphinx, a Belgian magazine of recreational mathematics, and was translated as "cryptarithmetic" by Maurice Kraitchik in 1942.[3] In 1955, J. A. H. Hunter introduced the word "alphametic" to designate cryptarithms, such as Dudeney's, whose letters form meaningful words or phrases.[4]
Types of cryptarithms
Types of cryptarithm include the alphametic, the digimetic, and the skeletal division.
Alphametic
A type of cryptarithm in which a set of words is written down in the form of a long addition sum or some other mathematical problem. The object is to replace the letters of the alphabet with decimal digits to make a valid arithmetic sum.
Digimetic
A cryptarithm in which digits are used to represent other digits.
Skeletal division
A long division in which most or all of the digits are replaced by symbols (usually asterisks) to form a cryptarithm.
Reverse cryptarithm
A rare variation where a formula is written, and the solution is the corresponding cryptarithm whose solution is the formula given.
Solving cryptarithms
Solving a cryptarithm by hand usually involves a mix of deductions and exhaustive tests of possibilities. For instance the following sequence of deductions solves Dudeney's SEND+MORE = MONEY puzzle above (columns are numbered from right to left):
${\begin{matrix}&&{\text{S}}&{\text{E}}&{\text{N}}&{\text{D}}\\+&&{\text{M}}&{\text{O}}&{\text{R}}&{\text{E}}\\\hline =&{\text{M}}&{\text{O}}&{\text{N}}&{\text{E}}&{\text{Y}}\\\end{matrix}}$
1. From column 5, M = 1 since it is the only carry-over possible from the sum of two single digit numbers in column 4.
2. Since there is a carry in column 5, O must be less than or equal to M (from column 4). But O cannot be equal to M, so O is less than M. Therefore O = 0.
3. Since O is 1 less than M, S is either 8 or 9, depending on whether there is a carry out of column 3. But if there were such a carry, then (from column 3) N would be less than or equal to O, which is impossible since O = 0. Therefore there is no carry out of column 3, and S = 9.
4. If there were no carry into column 3, then E = N, which is impossible. Therefore there is a carry into column 3, and N = E + 1.
5. If there were no carry into column 2, then ( N + R ) mod 10 = E, and N = E + 1, so ( E + 1 + R ) mod 10 = E, which means ( 1 + R ) mod 10 = 0, so R = 9. But S = 9, so there must be a carry into column 2, and R = 8.
6. To produce a carry out of column 1 into column 2, we must have D + E = 10 + Y.
7. Y cannot be 0 or 1 (both are already taken), so Y is at least 2 and D + E is at least 12.
8. The only two pairs of available numbers that sum to at least 12 are (5,7) and (6,7) so either E = 7 or D = 7.
9. Since N = E + 1, E can't be 7 because then N = 8 = R so D = 7.
10. E can't be 6 because then N = 7 = D so E = 5 and N = 6.
11. D + E = 12 so Y = 2.
Another example is TO+GO=OUT (the original source is unknown):
${\begin{matrix}&&{\text{T}}&{\text{O}}\\+&&{\text{G}}&{\text{O}}\\\hline =&{\text{O}}&{\text{U}}&{\text{T}}\\\end{matrix}}$
1. The largest possible sum of two two-digit numbers is 99+99=198, so O=1 and there is a carry into column 3.
2. Column 1 is the rightmost column, so it cannot receive a carry. Therefore O+O = 1+1 = T, and T=2, with no carry out of column 1.
3. By step 2 there is no carry into column 2, while by step 1 there is a carry out of column 2. Therefore T+G = 2+G ≥ 10. If G were equal to 9, U would equal 1, but this is impossible as O also equals 1. So G=8, and from 2+8=10+U, U=0.
The use of modular arithmetic often helps. For example, use of mod-10 arithmetic allows the columns of an addition problem to be treated as simultaneous equations, while the use of mod-2 arithmetic allows inferences based on the parity of the variables.
In computer science, cryptarithms provide good examples to illustrate the brute force method, and algorithms that generate all permutations of m choices from n possibilities. For example, the Dudeney puzzle above can be solved by testing all assignments of eight values among the digits 0 to 9 to the eight letters S, E, N, D, M, O, R, Y, giving 10!/2! = 1,814,400 possibilities. They also provide good examples for the backtracking paradigm of algorithm design.
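A minimal brute-force solver in this spirit (function and variable names are illustrative), which enumerates digit assignments with itertools.permutations exactly as described:

```python
# Brute-force cryptarithm solver: try every assignment of distinct digits to
# the letters and keep those satisfying the addition (no leading zeros).
from itertools import permutations

def solve(words, result):
    letters = sorted(set(''.join(words) + result))
    assert len(letters) <= 10, "more than ten distinct letters"
    first_letters = {w[0] for w in words + [result]}
    solutions = []
    for perm in permutations(range(10), len(letters)):
        digit = dict(zip(letters, perm))
        if any(digit[l] == 0 for l in first_letters):   # no leading zeros
            continue
        def value(word):
            return int(''.join(str(digit[c]) for c in word))
        if sum(value(w) for w in words) == value(result):
            solutions.append(digit)
    return solutions

print(solve(['SEND', 'MORE'], 'MONEY'))
# [{'D': 7, 'E': 5, 'M': 1, 'N': 6, 'O': 0, 'R': 8, 'S': 9, 'Y': 2}]
```

Replacing the full enumeration with incremental, column-by-column assignment and early rejection turns the same search into the backtracking approach mentioned above.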
Other information
When generalized to arbitrary bases, the problem of determining if a cryptarithm has a solution is NP-complete.[6] (The generalization is necessary for the hardness result because in base 10, there are only 10! possible assignments of digits to letters, and these can be checked against the puzzle in linear time.)
Alphametics can be combined with other number puzzles such as Sudoku and Kakuro to create cryptic Sudoku and Kakuro.
Longest alphametics
Anton Pavlis constructed an alphametic in 1983 with 41 addends:
SO+MANY+MORE+MEN+SEEM+TO+SAY+THAT+
THEY+MAY+SOON+TRY+TO+STAY+AT+HOME+
SO+AS+TO+SEE+OR+HEAR+THE+SAME+ONE+
MAN+TRY+TO+MEET+THE+TEAM+ON+THE+
MOON+AS+HE+HAS+AT+THE+OTHER+TEN
=TESTS
(The answer is that MANYOTHERS=2764195083.)[7]
See also
• Diophantine equation
• Mathematical puzzles
• Permutation
• Puzzles
• Sideways Arithmetic From Wayside School - A book whose plot revolves around these puzzles
• Cryptogram
References
1. H. E. Dudeney, in Strand Magazine vol. 68 (July 1924), pp. 97 and 214.
2. "No. 109 Mathematical puzzle". American Agriculturist. Vol. 23, no. 12. December 1864. p. 349.
3. Maurice Kraitchik, Mathematical Recreations (1953), pp. 79-80.
4. J. A. H. Hunter, in the Toronto Globe and Mail (27 October 1955), p. 27.
5. Feynman, Richard P. (August 2008). Perfectly Reasonable Deviations from the Beaten Track: The Letters of Richard P. Feynman. ISBN 9780786722426.
6. David Eppstein (1987). "On the NP-completeness of cryptarithms" (PDF). SIGACT News. 18 (3): 38–40. doi:10.1145/24658.24662. S2CID 2814715.
7. Pavlis, Anton. "Crux Mathematicorum" (PDF). Canadian Mathematical Society. Canadian Mathematical Society. p. 115. Retrieved 14 December 2016.
• Martin Gardner, Mathematics, Magic, and Mystery. Dover (1956)
• Journal of Recreational Mathematics, had a regular alphametics column.
• Jack van der Elsen, Alphametics. Maastricht (1998)
• Kahan S., Have some sums to solve: The complete alphametics book, Baywood Publishing, (1978)
• Brooke M. One Hundred & Fifty Puzzles in Crypt-Arithmetic. New York: Dover, (1963)
• Hitesh Tikamchand Jain, ABC of Cryptarithmetic/Alphametics. India(2017)
External links
• Solution using Matlab code and tutorial
• Cryptarithms at cut-the-knot
• Weisstein, Eric W. "Alphametic". MathWorld.
• Weisstein, Eric W. "Cryptarithmetic". MathWorld.
• Alphametics and Cryptarithms
Alphametics solvers
• Alphametics Solver!
• Alphametics Puzzle Solver
• Android app to solve Crypt Arithmatic problems
• Alphametic Solver written in Python
• An online tool to create and solve Alphametics and Cryptarithms
• An online tool to solve, create, store and retrieve alphametics - over 4000 English alphametics available with solutions
Sendov's conjecture
In mathematics, Sendov's conjecture, sometimes also called Ilieff's conjecture, concerns the relationship between the locations of roots and critical points of a polynomial function of a complex variable. It is named after Blagovest Sendov.
The conjecture states that for a polynomial
$f(z)=(z-r_{1})\cdots (z-r_{n}),\qquad (n\geq 2)$
with all roots r1, ..., rn inside the closed unit disk |z| ≤ 1, each of the n roots is at a distance no more than 1 from at least one critical point.
The Gauss–Lucas theorem says that all of the critical points lie within the convex hull of the roots. It follows that the critical points must be within the unit disk, since the roots are.
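The statement is easy to probe numerically, though such a test is of course not a proof. A minimal numpy spot-check that samples random root sets in the closed unit disk and verifies the claimed distance bound (degrees are kept in the range where the conjecture is known to hold):

```python
# Numerical spot-check of Sendov's conjecture for random polynomials of
# degree 2..8 with all roots in the closed unit disk.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = rng.integers(2, 9)   # degrees where the conjecture is proven
    # random roots uniformly distributed in the unit disk
    roots = np.sqrt(rng.uniform(0, 1, n)) * np.exp(2j * np.pi * rng.uniform(0, 1, n))
    coeffs = np.poly(roots)               # coefficients of f(z) = prod (z - r_k)
    crit = np.roots(np.polyder(coeffs))   # critical points: roots of f'
    for r in roots:
        assert np.min(np.abs(crit - r)) <= 1 + 1e-9, (roots, r)
print("no counterexample found")
```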
The conjecture has been proven for polynomials of degree n < 9 by Brown and Xiang, and for all sufficiently large n by Tao.[1][2]
History
The conjecture was first proposed by Blagovest Sendov in 1959; he described the conjecture to his colleague Nikola Obreshkov. In 1967 the conjecture was misattributed[3] to Ljubomir Iliev by Walter Hayman.[4] In 1969 Meir and Sharma proved the conjecture for polynomials with n < 6. In 1991 Brown proved the conjecture for n < 7. Borcea extended the proof to n < 8 in 1996. Brown and Xiang[5] proved the conjecture for n < 9 in 1999. Terence Tao proved the conjecture for sufficiently large n in 2020.
References
1. Terence Tao (2020). "Sendov's conjecture for sufficiently high degree polynomials". arXiv:2012.04125 [math.CV].
2. Terence Tao (9 December 2020). "Sendov's conjecture for sufficiently high degree polynomials". What's new.
3. Marden, Morris. Conjectures on the Critical Points of a Polynomial. The American Mathematical Monthly 90 (1983), no. 4, 267-276.
4. Problem 4.5, W. K. Hayman, Research Problems in Function Theory. Althlone Press, London, 1967.
5. Brown, Johnny E.; Xiang, Guangping Proof of the Sendov conjecture for polynomials of degree at most eight. Journal of Mathematical Analysis and Applications 232 (1999), no. 2, 272–292.
• G. Schmeisser, "The Conjectures of Sendov and Smale," Approximation Theory: A Volume Dedicated to Blagovest Sendov (B. Bojoanov, ed.), Sofia: DARBA, 2002 pp. 353–369.
External links
• Sendov's Conjecture by Bruce Torrence with contributions from Paul Abbott at The Wolfram Demonstrations Project
Seneca effect
The Seneca effect, or Seneca cliff or Seneca collapse, is a mathematical model proposed by Ugo Bardi to describe situations where a system's rate of decline is much sharper than its earlier rate of growth.
Description
In 2017, Bardi published a book titled The Seneca Effect: When Growth is Slow but Collapse is Rapid, named after the Roman philosopher and writer Seneca, who wrote that "Fortune is of sluggish growth, but ruin is rapid" (Letters to Lucilius, 91.6):[1]
Whatever structure has been reared by a long sequence of years, at the cost of great toil and through the great kindness of the gods, is scattered and dispersed by a single day. Nay, he who has said "a day" has granted too long a postponement to swift-coming misfortune; an hour, an instant of time, suffices for the overthrow of empires! It would be some consolation for the feebleness of our selves and our works, if all things should perish as slowly as they come into being; but as it is, increases are of sluggish growth, but the way to ruin is rapid.
— Lucius Annaeus Seneca, Letters to Lucilius, 91.6
Bardi's book looked at cases of rapid decline across societies (including the fall of empires, financial crises, and major famines), in nature (including avalanches), and through man-made systems (including cracks in metal objects). Bardi concluded that rapid collapse is not a flaw, or "bug" as he terms it, but a "varied and ubiquitous phenomena" with multiple causes and resultant pathways. The collapse of a system can often clear the path for new, and better adapted, structures.[2] In a 2019 book titled Before the Collapse: A Guide to the Other Side of Growth, Bardi describes a "Seneca Rebound" that often takes place when new systems replace the collapsed one, often at a rate faster than the preceding growth, since the collapse has eliminated many of the impediments or constraints of the previous system.[2]
The "Seneca effect" model is related to the "World3" model from the 1972 report The Limits to Growth, issued by the Club of Rome.[1][3]
Use
One of the model's main practical applications has been to describe the resultant outcomes given the condition of a global shortage of fossil fuels.[1] Unlike the symmetrical Hubbert curve fossil fuel model, the Seneca cliff model shows material asymmetry, where the global rate of decline in fossil fuel production is far steeper than forecasted by the Hubbert curve.[4]
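The shape can be illustrated with a toy system-dynamics sketch; the equations and constants below are illustrative assumptions in the spirit of Bardi's description, not his published model. Capital grows by depleting a finite resource, while pollution generated by capital feeds back and erodes it, making the decline of production steeper than its growth:

```python
# Toy "Seneca curve" (illustrative assumptions only): resource R, capital C,
# pollution P.  Capital grows by extracting the resource; pollution,
# generated by capital, destroys capital and steepens the decline.
import numpy as np

def simulate(k1=1.0, k2=1.0, k3=0.03, k4=0.3, k5=0.05, dt=0.01, steps=8000):
    R, C, P = 1.0, 0.01, 0.0        # resource, capital, pollution stocks
    production = []
    for _ in range(steps):
        prod = k1 * R * C           # extraction rate, i.e. "production"
        R += dt * (-prod)
        C += dt * (k2 * prod - k3 * C - k4 * P * C)
        P += dt * (k5 * C)
        production.append(prod)
    return np.array(production)

prod = simulate()
peak = int(prod.argmax())
half = prod.max() / 2
rise = peak - int(np.argmax(prod[:peak] > half))  # steps from half-peak up to peak
fall = int(np.argmax(prod[peak:] < half))         # steps from peak down to half-peak
print(f"rise: {rise} steps, fall: {fall} steps")  # with these constants, fall < rise
```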
The term has also been used to describe rapid declines in businesses that had grown for decades, with the rapid post-2005 decline and resultant bankruptcy of Kodak as a commonly quoted example.[5]
See also
• Hubbert curve
• Societal collapse
• Joseph Tainter
References
1. "The Seneca Effect". Business Insider. 3 August 2011. Retrieved 20 January 2011.
2. Ahmed, Nafeez (22 November 2019). "The Collapse of Civilization May Have Already Begun". Vice. Retrieved 20 January 2022.
3. Turner, Graham. Is Global Collapse Imminent?. University of Melbourne, Melbourne Sustainable Society Institute, 2014.
4. Heinrich, Torsten. "Resource Depletion, Growth, Collapse, and the Measurement of Capital." (2014).
5. Reeves, Martin; Fæste, Lars (3 October 2018). "Business transformations: Why pre-emption is better than cure". Management Today. Retrieved 20 January 2022.
Further reading
• Bardi, Ugo (2018). The Seneca Effect: Why Growth is Slow But Collapse is Rapid (2 ed.). Springer. ISBN 978-3319861036.
• Bardi, Ugo (2014). Extracted: How the quest for mineral wealth is plundering the planet. Chelsea Green Publishing. ISBN 978-1603585415.
• Bardi, Ugo (2005). The mineral economy: a model for the shape of oil production curves (PDF). Energy Policy 33.1. pp. 53–61.
• Orlov, Dmitry (2013). The Five Stages of Collapse: Survivors' Toolkit. New Society Publishers. ISBN 9780865717367.
• Jackson, Tim and Robin Webster. "Limits to Growth revisited." Reframing Global Social Policy: Social Investment for Sustainable and Inclusive Growth (2017): 295.
• Novak, Peter. "Sustainable energy system with zero emissions of GHG for cities and countries." Energy and Buildings 98 (2015): 27-33.
• Illig, Aude, and Ian Schindler. "Oil Extraction, Economic Growth, and Oil Price Dynamics." BioPhysical Economics and Resource Quality 2.1 (2017): 1.
External links
• Ugo Bardi (28 August 2011). "Cassandra Legacy Blog, The Seneca effect: why decline is faster than growth".
Anne Bennett Prize
The Anne Bennett Prize and Senior Anne Bennett Prize are awards given by the London Mathematical Society.[1][2]
In every third year, the society offers the Senior Anne Bennett prize to a mathematician normally based in the United Kingdom for work in, influence on or service to mathematics, particularly in relation to advancing the careers of women in mathematics.[1]
In the two years out of three in which the Senior Anne Bennett Prize is not awarded, the society offers the Anne Bennett Prize to a mathematician within ten years of their doctorate for work in and influence on mathematics, particularly acting as an inspiration for women mathematicians.[1]
Both prizes are awarded in memory of Anne Bennett, an administrator for the London Mathematical Society who died in 2012.[3]
The Anne Bennett Prizes should be distinguished from the Anne Bennett Memorial Award for Distinguished Service of the Royal Society of Chemistry,[4] for which Anne Bennett also worked.[3]
Winners
The winners of the Anne Bennett Prize have been:
• 2015 Apala Majumdar, in recognition of her outstanding contributions to the mathematics of liquid crystals and to the liquid crystal community.[5][6]
• 2016 Julia Wolf, in recognition of her outstanding contributions to additive number theory, combinatorics and harmonic analysis and to the mathematical community.[7][8]
• 2018 Lotte Hollands, in recognition of her outstanding research at the interface between quantum theory and geometry and of her leadership in mathematical outreach activities.[9][10]
• 2019 Eva-Maria Graefe, in recognition of her outstanding research in quantum theory and the inspirational role she has played among female students and early career researchers in mathematics and physics.[11]
• 2021 Viveka Erlandsson, "for her outstanding achievements in geometry and topology and her inspirational active role in promoting women mathematicians".[12]
• 2022 Asma Hassannezhad, in recognition of her "work in spectral geometry and her substantial contributions toward the advancement of women in mathematics".[13]
The winners of the Senior Anne Bennett Prize have been:
• 2014 Caroline Series, in recognition of her leading contributions to hyperbolic geometry and symbolic dynamics, and of the major impact of her numerous initiatives towards the advancement of women in mathematics.[14][15]
• 2017 Alison Etheridge, in recognition of her outstanding research on measure-valued stochastic processes and applications to population biology; and for her impressive leadership and service to the profession.[16][17]
• 2020 Peter Clarkson, "in recognition of his tireless work to support gender equality in UK mathematics, and particularly for his leadership in developing good practice among departments of mathematical sciences".[18]
See also
• List of mathematics awards
References
1. "LMS prizes - details and regulations | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
2. List of LMS prize winners, London Mathematical Society, retrieved 2019-07-23
3. Nixon, Fiona (2012). "LMS Obituary - Anne Bennett | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
4. "The Anne Bennett Memorial Award for Distinguished Service". www.rsc.org. Retrieved 2019-10-08.
5. "Citations for 2015 LMS prize winners | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
6. "Prizes of the London Mathematical Society" (PDF), Mathematics People, Notices of the American Mathematical Society, 62 (9): 1081, October 2015
7. "2016 LMS Prize Winners | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
8. "Prizes of the London Mathematical Society" (PDF), Mathematics People, Notices of the American Mathematical Society, 63 (9): 1064, October 2016
9. "2018 LMS Prize Winners | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
10. "Prizes of the London Mathematical Society" (PDF), Mathematics People, Notices of the American Mathematical Society, 65 (9): 1122, October 2018
11. "2019 LMS Prize Winners | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
12. "Anne Bennett Prize: citation for Viveka Erlandsson" (PDF). London Mathematical Society. 2021. Retrieved 2022-02-04.
13. "LMS Prize Winners 2022 | London Mathematical Society". www.lms.ac.uk. Retrieved 21 August 2022.
14. "LMS Prizes 2014 | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
15. "Prizes of the London Mathematical Society" (PDF), Mathematics People, Notices of the American Mathematical Society, 61 (9): 1090, October 2014
16. "LMS Prizes 2017 | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
17. "Prizes of the London Mathematical Society" (PDF), Mathematics People, Notices of the American Mathematical Society, 64 (9): 1036, October 2017
18. "Senior Anne Bennett Prize citation: Peter Clarkson" (PDF). London Mathematical Society. 2020. Retrieved 2020-06-27.
Senior Wrangler
The Senior Wrangler is the top mathematics undergraduate at the University of Cambridge in England, a position which has been described as "the greatest intellectual achievement attainable in Britain".[1]
Specifically, it is the person who achieves the highest overall mark among the Wranglers – the students at Cambridge who gain first-class degrees in mathematics. The Cambridge undergraduate mathematics course, or Mathematical Tripos, is famously difficult.
Many Senior Wranglers have become world-leading figures in mathematics, physics, and other fields. They include George Airy, Jacob Bronowski, Christopher Budd, Kevin Buzzard, Arthur Cayley, Donald Coxeter, Arthur Eddington, Ben Green, John Herschel, James Inman, J. E. Littlewood, Lee Hsien Loong, Jayant Narlikar, Morris Pell, John Polkinghorne, Frank Ramsey, Lord Rayleigh (John Strutt), Sir George Stokes, Isaac Todhunter, Sir Gilbert Walker, and James H. Wilkinson.
Senior Wranglers were once fêted with torchlit processions and took pride of place in the University's graduation ceremony.[2] Years in Cambridge were often remembered by who had been Senior Wrangler that year.[1]
The annual ceremony in which the Senior Wrangler becomes known was first held in the 18th century. Standing on the balcony of the University's Senate House, the examiner reads out the class results for mathematics,[3] and printed copies of the results are then thrown to the audience below. The examiner no longer announces the students' exact rankings, but they still identify the Senior Wrangler, nowadays by tipping their academic hat when reading out the person's name.
Others who finished in the top 12
Those who have achieved second place, known as Second Wranglers, include Alfred Marshall, James Clerk Maxwell, J. J. Thomson, Lord Kelvin, William Clifford, and William Whewell.
Those who have finished between third and 12th include Archibald Hill, Karl Pearson and William Henry Bragg (third), George Green, G. H. Hardy, and Alfred North Whitehead (fourth), Adam Sedgwick (fifth), John Venn (sixth), Bertrand Russell, Nevil Maskelyne and Sir James Timmins Chance (seventh), Thomas Malthus (ninth), and John Maynard Keynes and William Henry Fox Talbot (12th).
History
Between 1748 and 1909, the University publicly announced the ranking,[4] which was then reported in newspapers such as The Times. The examination was considered to be by far the most important in Britain and the Empire. The prestige of being a high Wrangler was great; the respect accorded to the Senior Wrangler was immense. Andrew Warwick, author of Masters of Theory, describes the term 'Senior Wrangler' as "synonymous with academic supremacy".[5]
Since 1910, successful students in the examinations have been told their rankings privately, and not all Senior Wranglers have become publicly known as such. In recent years, the custom of discretion regarding ranking has progressively vanished, and all Senior Wranglers since 2010 have announced their identity publicly.
The youngest person to be Senior Wrangler is probably Arran Fernandez, who came top in 2013, aged 18 years and 0 months.[6] The previous youngest was probably James Wilkinson in 1939, aged 19 years and nine months.[7] The youngest up to 1909 were Alfred Flux in 1887, aged 20 years and two months[8] and Peter Tait in 1852, aged 20 years and eight months.[9]
Two individuals have placed first without becoming known as Senior Wrangler. One was the student Philippa Fawcett in 1890. At that time, although the University allowed women to take the examinations, it did not allow them to be members of the University, nor to receive degrees. Therefore they could not be known as 'Wranglers', and were merely told how they had performed compared to the male candidates, for example, "equal to the Third Wrangler", or "between the Seventh and Eighth Wranglers". Having gained the highest mark, Fawcett was declared to have finished "above the Senior Wrangler".
The other was the mathematics professor George Pólya. As he had contributed to reforming the Tripos with the aim that an excellent performance would be less dependent on solving hard problems and more so on showing a broad mathematical understanding and knowledge, G.H. Hardy asked Pólya to sit the examinations himself, unofficially, during his stay in England in 1924–5. Pólya did so, and to Hardy's surprise, received the highest mark, an achievement which, had he been a student, would have made him the Senior Wrangler.[10]
Derived uses of the term
Senior Wrangler's Walk is a path in Cambridge, the walk to and along which was considered to be sufficient constitutional exercise for a student aspiring to become the Senior Wrangler. The route was shorter than other walks, such as Wranglers' Walk and the Grantchester Grind, undertaken by undergraduates whose aspirations were lower.[11]
Senior Wrangler sauce is a Cambridge term for brandy butter, a type of hard sauce made from brandy, butter, and sugar, traditionally served in Britain with Christmas pudding and warm mince pies.[12]
Senior Wrangler is also the name of a solitaire card game, alternatively known as Mathematics and Double Calculation, played with two decks of cards and involving elementary modular arithmetic.[13][14]
Literary references
Fictional Senior Wranglers appearing in novels include Roger Hamley, a character in Elizabeth Gaskell's Wives and Daughters, and Tom Jericho, the cryptanalyst in Robert Harris's novel Enigma, who is described as having been Senior Wrangler in 1938. In Catherine Hall's The Proof of Love, Victor Turner is listed as having been Senior Wrangler in 1968.
In George Bernard Shaw's play Mrs. Warren's Profession, the title character's daughter Vivie is praised for "tieing with the third wrangler," and she comments that "the mathematical tripos" means "grind, grind, grind for six to eight hours a day at mathematics, and nothing but mathematics."
In Ford Madox Ford's Parade's End, the character Christopher Tietjens is described as having settled deliberately for only being Second Wrangler, in order to avoid the weight of expectation that the title would create.
In his Discworld series of novels, Terry Pratchett has a character called the Senior Wrangler, a faculty member at the Unseen University, whose first name is Horace.
The compiler of crosswords for The Leader in the 1930s used 'Senior Wrangler' as a pseudonym.[15]
Coaches
The two most successful 19th-century coaches of Senior Wranglers were William Hopkins and Edward Routh. Hopkins, the 'Senior Wrangler Maker', who himself was the 7th Wrangler, coached 17 Senior Wranglers. Routh, who had himself been the Senior Wrangler, coached 27.[16] Another, described by his student (and Senior Wrangler) J.E. Littlewood as "the last of the great coaches", was another Senior Wrangler, Robert Alfred Herman.[17]
Senior Wranglers and runners up, 1748–1909
During 1748–1909, the top two colleges in terms of number of Senior Wranglers were Trinity and St John's with 56 and 54 respectively. Gonville and Caius was third with 13.
Year | Senior Wrangler(s)[18][note 1] | College | Proxime accessit/accesserunt (runner(s) up) | College
1748 | John Bates | Gonville and Caius | John Cranwell | Sidney Sussex
1749 | John Greene | Corpus Christi | Francis Coventry | Magdalene
1750 | William Hazeland | St John's | John Gooch | Gonville and Caius
1751 | John Hewthwaite | Christ's | William Cardale | Pembroke
1752 | Henry Best | Magdalene | John Cay | Clare
1753 | William Disney | Trinity | William Preston | Trinity
1754 | William Abbot | St John's | Samuel Hallifax | Jesus
1755 | Thomas Castley | Jesus | John Hatsell | Queens'
1756 | John Webster | Corpus Christi | William Bearcroft | Peterhouse
1757 | Edward Waring | Magdalene | John Jebb | Peterhouse
1758 | Robert Thorp | Peterhouse | George Wollaston | Sidney Sussex
1759 | Joshua Massey | St John's | Richard Watson | Trinity
1760 | George Cross | Clare | Anthony Hamilton | Corpus Christi
1761 | John Wilson | Peterhouse | Timothy Lowten | St John's
1762 | Richard Haighton | Christ's | Jeremiah Pemberton | Pembroke
1763 | William Paley | Christ's | John Frere | Gonville and Caius
1764 | Luke Heslop | Corpus Christi | John Fairfax Francklin | Emmanuel
1765 | John White | Gonville and Caius | John Clement Ives | Gonville and Caius
1766 | William Arnald | St John's | John Law | Christ's
1767 | Joseph Turner | Pembroke | George Dutens | Queens'
1768 | Thomas Kipling | St John's | George Fielding | Trinity
1769 | Thomas Parkinson | Christ's | William Burslem | St John's
1770 | Lewis Hughes | St John's | William Smith | St John's
1771 | Thomas Starkie | St John's | Roger Kedington | Gonville and Caius
1772 | George Pretyman Tomline | Pembroke | Mark Anthony Stephenson | Clare
1773 | John Jelland Brundish | Gonville and Caius | George Whitmore | St John's
1774 | Isaac Milner | Queens' | George Mounsey | Peterhouse
1775 | Samuel Vince | Gonville and Caius | Henry William Coulthurst | St John's
1776 | John Oldershaw | Emmanuel | Gilbert Wakefield | Jesus
1777 | David Owen | Trinity | Thomas Cautley | Trinity
1778 | William Farish | Magdalene | William Taylor | Emmanuel
1779 | Thomas Jones | Trinity | Herbert Marsh[note 2] | St John's
1780 | St John Priest | Pembroke | William Frend | Christ's
1781 | Henry Ainslie | Pembroke | Montague Farrer Ainslie & George Henry Law | Trinity & Queens'
1782 | James Wood | St John's | John Hailstone | Trinity
1783 | Francis John Hyde Wollaston | Sidney Sussex | Richard Buck | Magdalene
1784 | Robert Acklom Ingram | Queens' | John Holden | Sidney Sussex
1785 | William Lax | Trinity | John Dudley | Clare
1786 | John Bell | Trinity | Edward Otter | Jesus
1787 | Joseph Littledale | St John's | Algernon Frampton | St John's
1788 | John Brinkley | Gonville and Caius | Edmund Outram | St John's
1789 | William Millers | St John's | Joseph Bewsher | Trinity
1790 | Bewick Bridge | Peterhouse | Fletcher Raincock | Pembroke
1791 | Daniel Mitford Peacock | Trinity | William Gooch | Gonville and Caius
1792 | John Palmer | St John's | George Frederick Tavel | Trinity
1793 | Thomas Harrison | Queens' | Thomas Strickland | Trinity
1794 | George Butler | Sidney Sussex | John Singleton Copley | Trinity
1795 | Robert Woodhouse | Gonville and Caius | William Atthill | Gonville and Caius
1796 | John Kempthorne | St John's | William Dealtry | Trinity
1797 | John Hudson | Trinity | John Lowthian | Trinity
1798 | Thomas Sowerby | Trinity | Robert Martin | Trinity
1799 | William Fuller Boteler | St John's | John Brown | Trinity
1800 | James Inman | St John's | George D'Oyly | Corpus Christi
1801 | Henry Martyn | St John's | William Woodall | Pembroke
1802 | Thomas Penny White | Queens' | John Grisdale | Christ's
1803 | Thomas Starkie | St John's | Charles James Hoare | St John's
1804 | John Kaye | Christ's | William Albin Garratt[19] | Trinity
1805 | Thomas Turton | St Catharine's | Samuel Hunter Christie | Trinity
1806 | Frederick Pollock | Trinity | Henry Walter | St John's
1807 | Henry Gipps | St John's | John Carr | Trinity
1808 | Henry Bickersteth | Gonville and Caius | Miles Bland | St John's
1809 | Edward Hall Alderson | Gonville and Caius | John Standly | Gonville and Caius
1810 | William Henry Maule | Trinity | Thomas Shaw Brandreth | Trinity
1811 | Thomas Edward Dicey | Trinity | William French | Caius
1812 | Cornelius Neale | St John's | Joseph William Jordan | Trinity
1813 | John Herschel | St John's | George Peacock | Trinity
1814 | Richard Gwatkin | St John's | Henry Wilkinson | St John's
1815 | Charles George Frederick Leicester | Trinity | Frederick Calvert | Jesus
1816 | Edward Jacob | Gonville and Caius | William Whewell | Trinity
1817 | John Thomas Austen | St John's | Temple Chevallier | Pembroke
1818 | John George Shaw-Lefevre | Trinity | John Hind | St John's
1819 | Joshua King | Queens' | George Miles Cooper | St John's
1820 | Henry Coddington | Trinity | Watkin Maddy | St John's
1821 | Solomon Atkinson | Trinity | Henry Melvill | St John's
1822 | Hamnett Holditch | Gonville and Caius | Mitford Peacock | Corpus Christi
1823 | George Biddell Airy | Trinity | Charles Jeffreys | St John's
1824[20] | John Cowling | St John's | James Bowstead | Corpus Christi
1825 | James Challis | Trinity | William Williamson | Clare
1826 | William Law | Trinity | John Hymers[21] | St John's
1827 | Henry Percy Gordon | Peterhouse | Thomas Turner | Trinity
1828 | Charles Perry | Trinity | John Baily | St John's
1829 | Henry Philpott | St Catharine's | William Cavendish | Trinity
1830 | Charles Thomas Whitley | St John's | James William Lucas Heaviside | Sidney Sussex
1831 | Samuel Earnshaw | St John's | Thomas Gaskin | St John's
1832 | Douglas Denon Heath | Trinity | Samuel Laing | St John's
1833 | Alexander Ellice | Gonville and Caius | Joseph Bowstead | Pembroke
1834 | Philip Kelland | Queens' | Thomas Rawson Birks | Trinity
1835 | Henry Cotterill | St John's | Henry Goulburn[note 3] | Trinity
1836 | Archibald Smith | Trinity | John William Colenso | St John's
1837 | William Nathaniel Griffin | St John's | James Joseph Sylvester | St John's
1838 | Thomas John Main | St John's | James George Mould | Corpus Christi
1839 | Benjamin Morgan Cowie | St John's | Percival Frost | St John's
1840 | Robert Leslie Ellis | Trinity | Harvey Goodwin | Caius
1841 | George Gabriel Stokes | Pembroke | Henry Cadman Jones | Trinity
1842 | Arthur Cayley | Trinity | Charles Turner Simpson | St John's
1843 | John Couch Adams | St John's | Francis Bashforth | St John's
1844 | George Wirgman Hemming | St John's | William Bonner Hopkins | Gonville and Caius
1845 | Stephen Parkinson | St John's | William Thomson (later known as Lord Kelvin)[note 4] | Peterhouse
1846 | Lewis Hensley | Trinity | John Alfred Airey (or Lumb) | Pembroke
1847 | William Parkinson Wilson | St John's | Robert Walker | Trinity
1848 | Isaac Todhunter | St John's | Charles Mackenzie | Gonville and Caius
1849 | Morris Birkbeck Pell | St John's | Henry Carlyon Phear | Gonville and Caius
1850 | William Henry Besant | Corpus Christi | Henry William Watson | Trinity
1851 | Norman Macleod Ferrers | Gonville and Caius | William Charles Evans | St John's
1852 | Peter Guthrie Tait | Peterhouse | William John Steele | Peterhouse
1853 | Thomas Bond Sprague | St John's | Robert Braithwaite Batty | Emmanuel
1854 | Edward Routh[note 5] | Peterhouse | James Clerk Maxwell | Peterhouse & Trinity
1855 | James Savage | St John's | Leonard Courtney | St John's
1856 | Augustus Vaughton Hadley | St John's | John Rigby | Trinity
1857 | Gerard Brown Finch | Queens' | Thomas Savage | Pembroke
1858 | George Middleton Slesser | Queens' | Charles Abercrombie Smith | Peterhouse
1859 | James Wilson | St John's | Frederick Brown & Anthony William Wilson Steel | Trinity & Gonville and Caius
1860 | James Stirling | Trinity | Walter Baily | St John's
1861 | William Steadman Aldis | Trinity | John Bond | Magdalene
1862 | Thomas Barker | Trinity | John George Laing | St John's
1863 | Robert Romer | Trinity Hall | Edward Tucker Leeke | Trinity
1864 | Henry John Purkiss | Trinity | William Peverill Turnbull | Trinity
1865 | John Strutt (Lord Rayleigh) | Trinity | Alfred Marshall | St John's
1866 | Robert Morton | Peterhouse | Thomas Steadman Aldis | Trinity
1867 | Charles Niven | Trinity | William Kingdon Clifford | Trinity
1868 | John Fletcher Moulton | St John's | George Darwin | Trinity
1869 | Numa Edward Hartog[note 6] | Trinity | John Eliot | St John's
1870 | Richard Pendlebury | St John's | Alfred George Greenhill | St John's
1871 | John Hopkinson | Trinity | James Whitbread Lee Glaisher | Trinity
1872 | Robert Rumsey Webb | St John's | Horace Lamb | Trinity
1873 | Thomas Oliver Harding | Trinity | Edward John Nanson | Trinity
1874 | George Constantine Calliphronas | Gonville and Caius | W. W. Rouse Ball | Trinity
1875 | John William Lord | Trinity | William Burnside & George Chrystal | Pembroke & Peterhouse
1876 | Joseph Timmis Ward | St John's | William Loudon Mollison | Clare
1877 | Donald MacAlister | St John's | Frederic Brian De Malbisse Gibbons | Gonville and Caius
1878 | E. W. Hobson | Christ's | John Edward Aloysius Steggall | Trinity
1879 | Andrew James Campbell Allen | Peterhouse | George Francis Walker | Queens'
1880 | Joseph Larmor | St John's | J. J. Thomson[note 4] | Trinity
1881 | Andrew Forsyth[note 7] | Trinity | Robert Samuel Heath | Trinity
1882[24] | Robert Alfred Herman | Trinity | John Shapland Yeo | St John's
1882[note 8] | William Welsh | Jesus | Herbert Hall Turner | Trinity
1883 | George Ballard Mathews | St John's | Edward Gurner Gallop | Trinity
1884 | William Fleetwood Sheppard | Trinity | Walter Percy Workman | Trinity
1885 | Arthur Berry | King's | Augustus Edward Hough Love | St John's
1886 | Alfred Cardew Dixon | Trinity | William Charles Fletcher | St John's
1887 | H. F. Baker, Sir Alfred William Flux, John Henry Michell & John Cyril Iles | St John's, St John's, Trinity & Trinity | James Bennet Peace | Emmanuel
1888 | William McFadden Orr | St John's | William Edwin Brunyate | Trinity
1889 | Gilbert Walker | Trinity | Frank Watson Dyson & Percy Cory Gaul[25] | Trinity & Trinity
1890 | Geoffrey Thomas Bennett; Philippa Fawcett placed "Above the Senior Wrangler"[note 9] | St John's (Fawcett: Newnham) | Hugh William Segar | Trinity
1891 | James Goodwillie | Corpus Christi | David Beveridge Mair & Robert Hume Davison Mayall | Christ's & Sidney Sussex
1892 | Philip Herbert Cowell | Trinity | Francis Robert Sharpe | Christ's
1893 | George Thomas Manley | Christ's | Gilbert Harrison John Hurst & Charles Percy Sanger | King's & Trinity
1894 | Walter Sibbald Adie & William Fellows Sedgwick | Trinity & Trinity | William Edward Philip | Clare
1895 | Thomas John I'Anson Bromwich | St John's | John Hilton Grace & E. T. Whittaker | Peterhouse & Trinity
1896 | William Garden Fraser | Queens' | Ernest William Barnes, George Edward St Lawrence Carson & Algernon Charles Legge Wilkinson | Trinity, Trinity & Trinity
1897 | William Henry Austin | Trinity | Francis John Welsh Whipple | Trinity
1898 | Ronald William Henry Turnbull Hudson | St John's | John Forbes Cameron & James Hopwood Jeans | Gonville and Caius & Trinity
1899 | George Birtwhistle & R. P. Paranjpye[note 10] | Pembroke & St John's | Samuel Bruce McLaren | Trinity
1900 | Joseph Edmund Wright | Trinity | Arthur Cyril Webb Aldis | Trinity Hall
1901 | Alexander Brown | Gonville and Caius | Herbert Knapman | Emmanuel
1902 | Ebenezer Cunningham | St John's | Frank Slator | St John's
1903 | Harry Bateman & Philip Edward Marrack | Trinity & Trinity | James Sidney Barnes, Ernest Gold, George Frederic Sowden Hills and Sidney Hill Phillips | Trinity, St John's, Trinity and St John's
1904 | Arthur Stanley Eddington[note 11] | Trinity | G. R. Blanco-White | Trinity
1905 | John Edensor Littlewood & James Mercer | Trinity & Trinity | H. Smith | Trinity Hall
1906 | Arunachala Tyaga Rajan & Clarence John Threlkeld Sewell | Trinity & Trinity | W. J. Harrison | Clare
1907 | G. N. Watson | Trinity | Herbert Westren Turnbull | Trinity
1908 | Selig Brodetsky & A. W. Ibbotson | Trinity & Pembroke | H. Minson | Christ's
1909 | Percy John Daniell | Trinity | E. H. Neville | Trinity
Senior Wranglers since 1910
Year | Senior Wrangler | College
1912 | Bhupati Mohan Sen[27] | King's
1914 | Brian Charles Molony[28] | Trinity
1923 | Frank Ramsey[29] | Trinity
1928 | Donald Coxeter[30] | Trinity
1930 | Jacob Bronowski[31] | Jesus
1934 | David Scott Dunbar[32] | Clare
1939 | James Wilkinson[33] | Trinity
1940 | Hermann Bondi[34] | Trinity
1944 | Denis Sargan[35] | St John's
1948 | Michael Edward Ash[36] | Trinity
1952 | John Polkinghorne | Trinity
1953 | Crispin Nash-Williams[37] | Trinity Hall
1959 | Jayant Narlikar[38] | non-collegiate
1964 | Geoffrey Fox[39] | Trinity
1966 | Nigel Kalton | Trinity
1967 | Colin Myerscough[40] | Churchill
1970 | Derek Wanless[41] | King's
1972 | Gordon Woo[42] | Christ's
1973 | Lee Hsien Loong[43][44] | Trinity
1975 | Peter J. Young[45] | St John's
1977 | Glyn Moody[46][47] | Trinity
1981 | Mike Giles | Churchill
1982 | Christopher Budd[48] | St John's
1983 | John Lister | Trinity
1985 | Nick Mee[49] | Trinity
1990 | Kevin Buzzard[50] | Trinity
1992 | Ruth Hendry[51][52] | Queens'
1993 | Ian Dowker | Trinity
1994 | Wee Teck Gan | Churchill
1995 | Balazs Szendroi | Trinity
1996 | David W. Essex | Trinity
1997 | Alexander G. Barnard | Trinity
1998 | Ben Joseph Green[53] | Trinity
1999 | Paul Russell | Peterhouse
2000 | Toby Gee[54][55] | Trinity
2001 | Mohan Ganesalingam[56] | Trinity
2002 | Jeremy Young | Trinity
2003 | Thomas Barnet-Lamb[57] | Trinity
2004 | David Loeffler[58] | Trinity
2005 | Tim Austin | Trinity
2006 | Antonio Lei | Trinity
2007 | Paul Jefferys[59] | Trinity
2008 | Le Hung Viet Bao[60] | Trinity
2009 | Thomas Beck[61] | Trinity Hall
2010 | Zihan Hans Liu[62] | Trinity
2011 | Sean Eberhard[63] | Gonville and Caius
2012 | Sean Moss[64] | Trinity
2013 | Arran Fernandez[6][65] | Fitzwilliam
2014 | Yang Li[66] | Downing
2015 | Timothy Large[67] | Trinity
2016 | Leo Lai | Churchill
2017 | Jonathan Zheng | Trinity
2018 | Barnabas Janzer | Trinity
2019 | Warren Li | Trinity
2020 | Exam cancelled due to COVID-19 outbreak | N/A
2021 | Alejandro Epelde Blanco | Trinity
2022 | Gheehyun Nahm | Trinity
2023 | Rubaiyat Khondaker | Trinity
Senior Wranglers since 1910 also include:
• David Hobson[68] (Christ's College) (1940s)
• Peter Swinnerton-Dyer[69] (Trinity College) (1940s)
• Jack Leeming[70] (St John's College)
• Michael Hall[71] (Trinity College) (1950s)
See also
• Wooden spoon (award)
Notes
1. In years where there was a tie, the individuals tied have been shown as Senior Wrangler, with the next placed candidate(s) as Proxime Accessit; strictly speaking, if n individuals are tied as Senior Wrangler, any runner-up is (n+1)st Wrangler.
2. Thomas Jones, the Senior Wrangler that year, acted as his tutor.
3. Also senior classic.
4. According to legend, Kelvin was so confident he had come top that he asked his servant to run to the Senate House and check who the Second Wrangler was. The servant returned and told him, "You, sir!" Kelvin was reportedly beaten largely on the basis of Parkinson's superior exam technique. The result was reversed in the Smith Prize. This story has also been attributed to J.J. Thomson in 1880, and others.[22]
5. Routh found more fame subsequently as a coach of other Senior Wranglers. Indeed, for twenty-two consecutive years from 1862, one of his pupils was Senior Wrangler, and he coached twenty-seven in all. His first pupil, in 1856, was Third Wrangler, and in 1858 both the Senior and the Second Wrangler were coached by him.[23]
6. First Jewish Senior Wrangler. A special grace was passed to allow him to graduate using a special form of wording, so as not to offend his religious beliefs.
7. Forsyth was one of the men who were principally responsible for the reform of the Tripos system that led to the end of the Tripos ranking.
8. Regulations were changed to split the class list into Parts I & II, and Part III. The examinations for the former were held in June and retained the ordered class list (in contrast to Part III), so two sets of results exist for this year.
9. Women were allowed to take the Tripos from 1881, when Charlotte Scott achieved the eighth highest mark. However, women's results were published on a separate list and they were not officially ranked among the wranglers, so Scott was not officially ranked as eighth wrangler, and Fawcett was not officially Senior Wrangler despite receiving the highest mark on the Tripos. Women students were finally admitted as full members of the university in 1948.
10. First Indian Senior Wrangler.
11. Eddington was the first person to be Senior Wrangler after only two years of study.[26]
References
1. Forfar, David (1996). "What became of the Senior Wranglers?". Mathematical Spectrum. 29 (1).
2. Moore, Gregory (2005). "Masters of Theory and its Relevance to the History of Economic Thought". History of Economics Review. 42 (1): 77–99. doi:10.1080/18386318.2005.11681216. S2CID 148477456.
3. "Peter Guthrie Tait" (PDF).
4. Craik, A.D.D. (2007). Mr Hopkins' Men. Springer London. doi:10.1007/978-1-84628-791-6. ISBN 978-1-84628-790-9.
5. Warwick, Andrew (2003). Masters of Theory: Cambridge and the Rise of Mathematical Physics. University Of Chicago Press. p. 205. ISBN 0-226-87375-7.
6. "Student, 18, youngest ever to come top in Cambridge maths finals". Daily Telegraph. 21 June 2013.
7. Wilkinson, James H.; Hammarling, Sven (2003). Encyclopedia of Computer Science. Springer London. ISBN 0-470-86412-5.
8. "To the Editor of the Spectator". The Spectator. 24 June 1899. Retrieved 6 December 2013.
9. Crilly, Tony (2006). Arthur Cayley: mathematician laureate of the Victorian age. Johns Hopkins University Press. p. 160. ISBN 0-8018-8011-4.
10. Alexanderson, Gerald L. (2000). The random walks of George Pólya. Cambridge: Cambridge University Press. p. 68.
11. Shapin, Stephen; Lawrence, Christopher, eds. (1998). Science incarnate: historical embodiments of natural knowledge. Chicago: University of Chicago Press. p. 303. ISBN 0-226-47014-8.
12. "Brandy butter". Archived from the original on 3 March 2016. Retrieved 9 December 2011.
13. Coops, Helen L (1939). 100 Games of Solitaire (Complete with layouts for playing). Whitman Publishing Company. p. 205.
14. Goren, Charles Henry (1961). Goren's Hoyle Encyclopedia of Games: With Official Rules and Pointers on Play, Including the Latest Laws of Contract Bridge. Greystone Press. p. 643.
15. "Senior Wrangler" of the Leader (1932). The Handy Crossword Companion. Odhams Press.
16. Aris, Rutherford; Davis, H. Ted; Stuewer, Roger H., eds. (1983). Springs of scientific creativity : essays on founders of modern science. Minneapolis: University of Minnesota Press. p. 164. ISBN 0-8166-6830-2. OCLC 814078408.
17. Littlewood, John Edensor (1953). A Mathematician's Miscellany. Methuen Publishing. p. 70.
18. Neale, Charles Montague (1907). The senior wranglers of the University of Cambridge, from 1748 to 1907. With biographical, & c., notes. Bury St. Edmunds: Groom and Son.
19. It appears that '22nd wrangler' in the entry for William Albin Garratt in Venn. "Garratt, William Albin (GRT800WA)". A Cambridge Alumni Database. University of Cambridge. is a misprint for '2nd wrangler'; cf Neale, Charles Montague (1907), The Senior Wranglers of the University of Cambridge, from 1748 to 1907: With Biographical, etc., Notes (Bury St. Edmunds: F.T. Groom and Son; 61pp), p. 26; at all events, Garratt took the First Smith's Prize in 1804, with the Senior Wrangler, Kaye, placing Second, although Kaye also took the Senior Classical Medal (for reference without prejudice, at the time, other things being equal, undergraduates at Trinity were given preference for the Smith's Prizes)
20. Classical Tripos established.
21. Founded Hymers College.
22. A History of Mathematics in Cambridge
23. O'Connor, J. J.; Robertson, E. F. (October 2003). "Routh biography". Retrieved 6 July 2013.
24. An account exists of the 1882 graduation ceremony. "University Intelligence". The Times. 30 January 1882. Retrieved 25 September 2008.
25. John Venn (15 September 2011). Alumni Cantabrigienses: A Biographical List of All Known Students, Graduates and Holders of Office at the University of Cambridge, from the Earliest Times to 1900. Cambridge University Press. p. 25. ISBN 978-1-108-03613-9.
26. Hutchinson, Ian H. (December 2002). "Astrophysics and Mysticism: the life of Arthur Stanley Eddington". Archived from the original on 22 September 2008. Retrieved 25 September 2008.
27. "Bhupati Mohan Sen". Retrieved 14 June 2023.
28. "Brian Charles Molony (1892–1963)". Retrieved 6 July 2013.
29. Krantz, Stephen; Parks, Harold (2014). A Mathematical Odyssey: Journey from the Real to the Complex. Springer. p. 64. ISBN 978-1461489382.
30. Roberts, Siobhan; Ivić Weiss, Asia (2006). "Harold Scott MacDonald Coxeter. 9 February 1907 — 31 March 2003". Biographical Memoirs of Fellows of the Royal Society. The Royal Society. 52: 45–66. doi:10.1098/rsbm.2006.0004. ISSN 0080-4606. S2CID 70400674.
31. Bronowski's biography at the MacTutor History of Mathematics Archive: "Jacob Bronowski". University of St Andrews. Retrieved 6 July 2013.
32. Uppingham School and Clare College Archives.
33. Wilkinson's biography at the MacTutor History of Mathematics Archive: "James Hardy Wilkinson". University of St Andrews. Archived from the original on 22 November 2008. Retrieved 6 July 2013.
34. "Oral History Transcript — Dr. Hermann Bondi". American Institute of Physics. Retrieved 6 July 2013.
35. "John Denis Sargan" (PDF). Retrieved 3 April 2022.
36. Trinity College Cambridge, "Making Guinness Guinness – Michael Ash", The Fountain, Issue 23.
37. "Crispin St John Alvah Nash-Williams". University of St Andrews. Retrieved 6 July 2013.
38. Mitton, Simon (2005). Fred Hoyle: A Life in Science. Aurum. p. 275. ISBN 978-1-85410-961-3.
39. "Geoffrey Charles Fox". Retrieved 3 February 2021.
40. "Sudden end to the first Mastership". Churchill College, Cambridge. Retrieved 8 November 2021.
41. "Profile: Banking's boy wonder: Derek Wanless – NatWest's chief has a personal touch but a pragmatic vision, says William Kay". Independent. 27 March 1994.
42. Woo, Gordon (1999). The Mathematics of Natural Catastrophes. Imperial College Press. p. 292. ISBN 1-86094-182-6.
43. Kuan Yew, Lee (2000). From Third World to First: The Singapore Story: 1965-2000. Harper. pp. 750–751. ISBN 978-0-06019-776-6.
44. Neo Hui Min (12 August 2004). "Dennis Marrian, University Tutor". Straits Times. Retrieved 2 June 2013.
45. "Oral history interview with Wallace Sargent". 24 September 2021. Retrieved 12 November 2022.
46. "Seven things people didn't know about me". Retrieved 6 January 2011.
47. "(correction by author)". Retrieved 26 February 2012.
48. "Curriculum vitae of Prof. C. J. Budd" (PDF). University of Bath. Retrieved 8 May 2019.
49. "Whirlpool numbers". Plus Magazine. Retrieved 6 July 2013.
50. "CV of Kevin Buzzard" (PDF). Retrieved 16 September 2014.
51. "Letter of confirmation of first place 1992 pt II mathematical tripos" (PDF). Retrieved 6 April 2015.
52. "Where are they now? Ruth Hendry (1989): the only known female Senior Wrangler in history" (PDF). Retrieved 9 September 2016.
53. "Mathematical biography of Ben Green" (PDF). Clay Mathematics Institute. Retrieved 6 January 2011.
54. "Toby Gee" (PDF). Archived from the original (PDF) on 27 September 2011. Retrieved 6 January 2011.
55. Timothy Gowers (14 April 2013). "Answers, results of polls, and a brief description of the program". Retrieved 11 May 2013.
56. "The Next Generation of Proof Assistants" (PDF). Computing Science Department – Radboud University Nijmegen. Retrieved 6 July 2013.
57. "The 46th International Mathematical Olympiad in Mexico". Retrieved 19 January 2011.
58. "David Loeffler Curriculum Vitae" (PDF). Retrieved 1 October 2015.
59. "Varsity 100" (PDF). Mercer Management Consulting. Retrieved 10 August 2012.
60. "Dấu ấn Việt ở Cambridge". Tuổi Trẻ Online. 7 February 2008. Retrieved 18 March 2012.
61. "Trinity Hall rises to 7th in the Baxter tables". Trinity Hall. Retrieved 6 July 2013.
62. "CV Zihan Hans Liu" (PDF). Trinity.
63. "Sean Eberhard '08 Reaches Pinnacle at Cambridge". Archived from the original on 1 April 2012. Retrieved 5 October 2011.
64. "Sean is Cambridge University's Top Maths Student" (PDF). Havering Sixth Form College. Archived from the original (PDF) on 27 May 2013. Retrieved 6 December 2012.
65. Limb, Lottie (23 November 2020). "The maths genius who joined Cambridge Uni at 15 and took his GCSEs at 5". CambridgeshireLive. Retrieved 16 November 2022.
66. "Downing College News – Senior Wrangler". Archived from the original on 16 July 2014. Retrieved 22 June 2014.
67. "Trinity College Cambridge Annual Record 2014–2015" (PDF). Archived from the original (PDF) on 8 August 2016. Retrieved 10 June 2016.
68. "David Hobson: senior partner of Coopers & Lybrand (obituary)". The Times. Retrieved 6 January 2011.(subscription required)
69. "Sir Peter is Unhorsed". Sports Illustrated. Retrieved 6 July 2013.
70. "Tributes have been paid to Dame Cheryl Gillan's husband Jack Leeming". Bucks Free Press. Retrieved 1 April 2019.
71. "F M Hall (1935–2005) (obituary)". Shrewsbury School. 29 March 2011. Retrieved 6 July 2013.
Bibliography
• Galton, Francis (2000). "Classification of Men According to their Natural Gifts". In James Roy Newman (ed.). The World of Mathematics. Vol. 2. Courier Dover Publications. ISBN 0-486-41150-8.
• Matthew, H. C. G.; Harrison, Brian, eds. (2004). Oxford Dictionary of National Biography. Oxford University Press.
• Tanner, Joseph Robson (1917). The historical register of the University of Cambridge, being a supplement to the Calendar with a record of University offices, honours and distinctions to the year 1910 (PDF). Cambridge University Press. Retrieved 19 September 2008.
• Venn, John (1922–27). Alumni Cantabrigienses. Cambridge University Press.
• Paul, Margaret (2012). Frank Ramsey (1903–1930): A Sister's Memoir. Smith-Gordon.
Separation of variables
In mathematics, separation of variables (also known as the Fourier method) is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.
Ordinary differential equations (ODE)
A differential equation for the unknown $f(x)$ will be separable if it can be written in the form
${\frac {d}{dx}}f(x)=g(x)h(f(x))$
where $g$ and $h$ are given functions. This is perhaps more transparent when written using $y=f(x)$ as:
${\frac {dy}{dx}}=g(x)h(y).$
So, as long as h(y) ≠ 0, we can rearrange terms to obtain:
${dy \over h(y)}=g(x)\,dx,$
where the two variables x and y have been separated. Note that dx (and dy) can be viewed, at a simple level, as just a convenient notation that provides a mnemonic aid for the manipulations. A formal definition of dx as a differential (infinitesimal) is somewhat advanced.
Alternative notation
Those who dislike Leibniz's notation may prefer to write this as
${\frac {1}{h(y)}}{\frac {dy}{dx}}=g(x),$
but that fails to make it quite as obvious why this is called "separation of variables". Integrating both sides of the equation with respect to $x$, we have
$\int {\frac {1}{h(y)}}{\frac {dy}{dx}}\,dx=\int g(x)\,dx,$
(A1)
or equivalently,
$\int {\frac {1}{h(y)}}\,dy=\int g(x)\,dx$
because of the substitution rule for integrals.
If one can evaluate the two integrals, one can find a solution to the differential equation. Observe that this process effectively allows us to treat the derivative ${\frac {dy}{dx}}$ as a fraction which can be separated. This allows us to solve separable differential equations more conveniently, as demonstrated in the example below.
(Note that we do not need to use two constants of integration, in equation (A1) as in
$\int {\frac {1}{h(y)}}\,dy+C_{1}=\int g(x)\,dx+C_{2},$
because a single constant $C=C_{2}-C_{1}$ is equivalent.)
Example
Population growth is often modeled by the "logistic" differential equation
${\frac {dP}{dt}}=kP\left(1-{\frac {P}{K}}\right)$
where $P$ is the population with respect to time $t$, $k$ is the rate of growth, and $K$ is the carrying capacity of the environment. Separation of variables now leads to
${\begin{aligned}&\int {\frac {dP}{P\left(1-P/K\right)}}=\int k\,dt\end{aligned}}$
which is readily integrated using partial fractions on the left side yielding
$P(t)={\frac {K}{1+Ae^{-kt}}}$
where A is the constant of integration. We can find $A$ in terms of the initial value $P\left(0\right)=P_{0}$ at t = 0. Noting $e^{0}=1$, we get
$A={\frac {K-P_{0}}{P_{0}}}.$
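As a concrete check, the following minimal sketch (assuming the SymPy library is available; the symbol names are illustrative) substitutes the claimed solution back into the logistic equation and confirms both the equation and the initial value:

import sympy as sp

t = sp.symbols('t')
k, K, P0 = sp.symbols('k K P_0', positive=True)

A = (K - P0) / P0
P = K / (1 + A * sp.exp(-k * t))  # the claimed solution

# Residual of dP/dt = k P (1 - P/K); this should simplify to zero.
print(sp.simplify(sp.diff(P, t) - k * P * (1 - P / K)))  # 0
print(sp.simplify(P.subs(t, 0)))                         # P_0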
Generalization of separable ODEs to the nth order
Much like one can speak of a separable first-order ODE, one can speak of a separable second-order, third-order or nth-order ODE. Consider the separable first-order ODE:
${\frac {dy}{dx}}=f(y)g(x)$
The derivative can alternatively be written the following way to underscore that it is an operator working on the unknown function, y:
${\frac {dy}{dx}}={\frac {d}{dx}}(y)$
Thus, when one separates variables for first-order equations, one in fact moves the dx denominator of the operator to the side with the x variable, and the d(y) is left on the side with the y variable. The second-derivative operator, by analogy, breaks down as follows:
${\frac {d^{2}y}{dx^{2}}}={\frac {d}{dx}}\left({\frac {dy}{dx}}\right)={\frac {d}{dx}}\left({\frac {d}{dx}}(y)\right)$
The third-, fourth- and nth-derivative operators break down in the same way. Thus, much like a first-order separable ODE is reducible to the form
${\frac {dy}{dx}}=f(y)g(x)$
a separable second-order ODE is reducible to the form
${\frac {d^{2}y}{dx^{2}}}=f\left(y'\right)g(x)$
and an nth-order separable ODE is reducible to
${\frac {d^{n}y}{dx^{n}}}=f\!\left(y^{(n-1)}\right)g(x)$
Example
Consider the simple nonlinear second-order differential equation:
$y''=(y')^{2}.$
This equation involves only y'' and y', meaning it is reducible to the general form described above and is therefore separable. Since it is a second-order separable equation, collect all x variables on one side and all y' variables on the other to get:
${\frac {d(y')}{(y')^{2}}}=dx.$
Now, integrate the right side with respect to x and the left with respect to y':
$\int {\frac {d(y')}{(y')^{2}}}=\int dx.$
This gives
$-{\frac {1}{y'}}=x+C_{1},$
which simplifies to:
$y'=-{\frac {1}{x+C_{1}}}~.$
This is now a simple integral problem that gives the final answer:
$y=C_{2}-\ln |x+C_{1}|.$
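A quick symbolic verification (again a sketch assuming SymPy; the absolute value is dropped, which amounts to assuming x + C₁ > 0) confirms that this answer satisfies the original equation:

import sympy as sp

x, C1, C2 = sp.symbols('x C_1 C_2')
y = C2 - sp.log(x + C1)  # candidate solution, assuming x + C_1 > 0

# y'' - (y')^2 should vanish identically.
print(sp.simplify(sp.diff(y, x, 2) - sp.diff(y, x)**2))  # 0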
Partial differential equations
See also: Separable partial differential equation
The method of separation of variables is also used to solve a wide range of linear partial differential equations with boundary and initial conditions, such as the heat equation, wave equation, Laplace equation, Helmholtz equation and biharmonic equation.
The analytical method of separation of variables for solving partial differential equations has also been generalized into a computational method of decomposition in invariant structures that can be used to solve systems of partial differential equations.[4]
Example: homogeneous case
Consider the one-dimensional heat equation. The equation is
${\frac {\partial u}{\partial t}}-\alpha {\frac {\partial ^{2}u}{\partial x^{2}}}=0$
(1)
The variable u denotes temperature. The boundary condition is homogeneous, that is
$u{\big |}_{x=0}=u{\big |}_{x=L}=0$
(2)
Let us attempt to find a solution which is not identically zero satisfying the boundary conditions but with the following property: u is a product in which the dependence of u on x, t is separated, that is:
$u(x,t)=X(x)T(t).$
(3)
Substituting u back into equation (1) and using the product rule,
${\frac {T'(t)}{\alpha T(t)}}={\frac {X''(x)}{X(x)}}.$
(4)
Since the right hand side depends only on x and the left hand side only on t, both sides are equal to some constant value −λ. Thus:
$T'(t)=-\lambda \alpha T(t),$
(5)
and
$X''(x)=-\lambda X(x).$
(6)
−λ here is the eigenvalue for both differential operators, and T(t) and X(x) are corresponding eigenfunctions.
We will now show that nontrivial solutions for X(x) cannot occur for values of λ ≤ 0:
Suppose that λ < 0. Then there exist real numbers B, C such that
$X(x)=Be^{{\sqrt {-\lambda }}\,x}+Ce^{-{\sqrt {-\lambda }}\,x}.$
From (2) we get
$X(0)=0=X(L),$
(7)
and therefore B = 0 = C, which implies u is identically 0.
Suppose that λ = 0. Then there exist real numbers B, C such that
$X(x)=Bx+C.$
From (7) we conclude, in the same manner as in the case λ < 0, that u is identically 0.
Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that
$T(t)=Ae^{-\lambda \alpha t},$
and
$X(x)=B\sin({\sqrt {\lambda }}\,x)+C\cos({\sqrt {\lambda }}\,x).$
From (7) we get C = 0 and that for some positive integer n,
${\sqrt {\lambda }}=n{\frac {\pi }{L}}.$
This solves the heat equation in the special case that the dependence of u has the special form of (3).
In general, a sum of solutions to (1) which satisfy the boundary conditions (2) also satisfies (1) and (2). Hence a complete solution can be given as
$u(x,t)=\sum _{n=1}^{\infty }D_{n}\sin {\frac {n\pi x}{L}}\exp \left(-{\frac {n^{2}\pi ^{2}\alpha t}{L^{2}}}\right),$
where Dn are coefficients determined by initial condition.
Given the initial condition
$u{\big |}_{t=0}=f(x),$
we can get
$f(x)=\sum _{n=1}^{\infty }D_{n}\sin {\frac {n\pi x}{L}}.$
This is the sine series expansion of f(x) which is amenable to Fourier analysis. Multiplying both sides with $ \sin {\frac {n\pi x}{L}}$ and integrating over [0, L] results in
$D_{n}={\frac {2}{L}}\int _{0}^{L}f(x)\sin {\frac {n\pi x}{L}}\,dx.$
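The coefficients and the truncated series are easy to compute numerically. The sketch below (assuming NumPy; the choices L = 1, α = 1, N = 50 terms, and the initial condition f(x) = x(L − x) are illustrative assumptions) approximates each Dₙ by a Riemann sum and checks that the series reproduces f at t = 0:

import numpy as np

L_len, alpha, N = 1.0, 1.0, 50
x = np.linspace(0.0, L_len, 1001)
dx = x[1] - x[0]
f = x * (L_len - x)  # sample initial condition

# D_n = (2/L) * integral of f(x) sin(n pi x / L) dx, by a Riemann sum
n = np.arange(1, N + 1)
D = np.array([2.0 / L_len * np.sum(f * np.sin(k * np.pi * x / L_len)) * dx
              for k in n])

def u(xs, t):
    # truncated N-term series solution of the heat equation
    modes = np.sin(np.outer(xs, n * np.pi / L_len))
    decay = np.exp(-(n * np.pi / L_len)**2 * alpha * t)
    return modes @ (D * decay)

print(np.max(np.abs(u(x, 0.0) - f)))  # small: the sine series recovers f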
This method requires that the eigenfunctions X, here $ \left\{\sin {\frac {n\pi x}{L}}\right\}_{n=1}^{\infty }$, are orthogonal and complete. In general this is guaranteed by Sturm–Liouville theory.
Example: nonhomogeneous case
Suppose the equation is nonhomogeneous,
${\frac {\partial u}{\partial t}}-\alpha {\frac {\partial ^{2}u}{\partial x^{2}}}=h(x,t)$
(8)
with the boundary condition the same as (2).
Expand h(x,t), u(x,t) and f(x) into
$h(x,t)=\sum _{n=1}^{\infty }h_{n}(t)\sin {\frac {n\pi x}{L}},$
(9)
$u(x,t)=\sum _{n=1}^{\infty }u_{n}(t)\sin {\frac {n\pi x}{L}},$
(10)
$f(x)=\sum _{n=1}^{\infty }b_{n}\sin {\frac {n\pi x}{L}},$
(11)
where hn(t) and bn can be calculated by integration, while un(t) is to be determined.
Substituting (9) and (10) back into (8) and using the orthogonality of the sine functions, we get
$u'_{n}(t)+\alpha {\frac {n^{2}\pi ^{2}}{L^{2}}}u_{n}(t)=h_{n}(t),$
which is a sequence of linear differential equations that can be readily solved with, for instance, the Laplace transform or an integrating factor. Finally, we get
$u_{n}(t)=e^{-\alpha {\frac {n^{2}\pi ^{2}}{L^{2}}}t}\left(b_{n}+\int _{0}^{t}h_{n}(s)e^{\alpha {\frac {n^{2}\pi ^{2}}{L^{2}}}s}\,ds\right).$
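One can verify this formula symbolically without choosing a particular hₙ. The sketch below (assuming SymPy; hₙ is left as an unspecified function) differentiates the formula and checks both the ODE and the initial value uₙ(0) = bₙ:

import sympy as sp

t, s = sp.symbols('t s')
alpha, n, L, b = sp.symbols('alpha n L b_n', positive=True)
h = sp.Function('h_n')

c = alpha * n**2 * sp.pi**2 / L**2
u = sp.exp(-c * t) * (b + sp.Integral(h(s) * sp.exp(c * s), (s, 0, t)))

# u' + c u - h(t) should vanish, and u(0) should equal b_n.
print(sp.simplify(sp.diff(u, t) + c * u - h(t)))  # 0
print(u.subs(t, 0).doit())                        # b_n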
If the boundary condition is nonhomogeneous, then the expansions (9) and (10) are no longer valid. One has to find a function v that satisfies the boundary condition only, and subtract it from u. The function u − v then satisfies the homogeneous boundary condition, and can be found with the above method.
Example: mixed derivatives
For some equations involving mixed derivatives, the equation does not separate as easily as the heat equation did in the first example above, but nonetheless separation of variables may still be applied. Consider the two-dimensional biharmonic equation
${\frac {\partial ^{4}u}{\partial x^{4}}}+2{\frac {\partial ^{4}u}{\partial x^{2}\partial y^{2}}}+{\frac {\partial ^{4}u}{\partial y^{4}}}=0.$
Proceeding in the usual manner, we look for solutions of the form
$u(x,y)=X(x)Y(y)$
and we obtain the equation
${\frac {X^{(4)}(x)}{X(x)}}+2{\frac {X''(x)}{X(x)}}{\frac {Y''(y)}{Y(y)}}+{\frac {Y^{(4)}(y)}{Y(y)}}=0.$
Writing this equation in the form
$E(x)+F(x)G(y)+H(y)=0,$
we see that taking the derivative of this expression with respect to $x$ gives $E'(x)+F'(x)G(y)=0$, which means $G(y)=const.$; likewise, taking the derivative with respect to $y$ leads to $F(x)G'(y)+H'(y)=0$, and thus $F(x)=const.$ Hence either F(x) or G(y) must be a constant, say −λ. This further implies that either $-E(x)=F(x)G(y)+H(y)$ or $-H(y)=E(x)+F(x)G(y)$ is constant. Returning to the equation for X and Y, we have two cases
${\begin{aligned}X''(x)&=-\lambda _{1}X(x)\\X^{(4)}(x)&=\mu _{1}X(x)\\Y^{(4)}(y)-2\lambda _{1}Y''(y)&=-\mu _{1}Y(y)\end{aligned}}$
and
${\begin{aligned}Y''(y)&=-\lambda _{2}Y(y)\\Y^{(4)}(y)&=\mu _{2}Y(y)\\X^{(4)}(x)-2\lambda _{2}X''(x)&=-\mu _{2}X(x)\end{aligned}}$
which can each be solved by considering the separate cases for $\lambda _{i}<0,\lambda _{i}=0,\lambda _{i}>0$ and noting that $\mu _{i}=\lambda _{i}^{2}$.
Curvilinear coordinates
In orthogonal curvilinear coordinates, separation of variables can still be used, but the details differ from those in Cartesian coordinates. For instance, a regularity or periodicity condition may determine the eigenvalues in place of boundary conditions. See spherical harmonics for an example.
Applicability
Partial differential equations
For many PDEs, such as the wave equation, Helmholtz equation and Schrödinger equation, the applicability of separation of variables is a result of the spectral theorem. In some cases, separation of variables may not be possible. Separation of variables may be possible in some coordinate systems but not others,[5] and which coordinate systems allow for separation depends on the symmetry properties of the equation.[6] Below is an outline of an argument demonstrating the applicability of the method to certain linear equations, although the precise method may differ in individual cases (for instance in the biharmonic equation above).
Consider an initial boundary value problem for a function $u(x,t)$ on $D=\{(x,t):x\in [0,l],t\geq 0\}$ in two variables:
$(Tu)(x,t)=(Su)(x,t)$
where $T$ is a differential operator with respect to $x$ and $S$ is a differential operator with respect to $t$ with boundary data:
$(Tu)(0,t)=(Tu)(l,t)=0$ for $t\geq 0$
$(Su)(x,0)=h(x)$ for $0\leq x\leq l$
where $h$ is a known function.
We look for solutions of the form $u(x,t)=f(x)g(t)$. Dividing the PDE through by $f(x)g(t)$ gives
${\frac {Tf}{f}}={\frac {Sg}{g}}$
The left hand side depends only on $x$ and the right hand side only on $t$, so both must be equal to a constant $K$, which gives two ordinary differential equations
$Tf=Kf,Sg=Kg$
which we can recognize as eigenvalue problems for the operators for $T$ and $S$. If $T$ is a compact, self-adjoint operator on the space $L^{2}[0,l]$ along with the relevant boundary conditions, then by the Spectral theorem there exists a basis for $L^{2}[0,l]$ consisting of eigenfunctions for $T$. Let the spectrum of $T$ be $E$ and let $f_{\lambda }$ be an eigenfunction with eigenvalue $\lambda \in E$. Then for any function which at each time $t$ is square-integrable with respect to $x$, we can write this function as a linear combination of the $f_{\lambda }$. In particular, we know the solution $u$ can be written as
$u(x,t)=\sum _{\lambda \in E}c_{\lambda }(t)f_{\lambda }(x)$
for some functions $c_{\lambda }(t)$. In the separation of variables, these functions are given by solutions to $Sg=Kg$.
Hence, the spectral theorem ensures that the separation of variables will (when it is possible) find all the solutions.
For many differential operators, such as ${\frac {d^{2}}{dx^{2}}}$, we can show that they are self-adjoint by integration by parts. While these operators may not be compact, their inverses (when they exist) may be, as in the case of the wave equation, and these inverses have the same eigenfunctions and eigenvalues as the original operator (with the possible exception of zero).[7]
Matrices
The matrix form of the separation of variables is the Kronecker sum.
As an example we consider the 2D discrete Laplacian on a regular grid:
$L=\mathbf {D_{xx}} \oplus \mathbf {D_{yy}} =\mathbf {D_{xx}} \otimes \mathbf {I} +\mathbf {I} \otimes \mathbf {D_{yy}} ,\,$
where $\mathbf {D_{xx}} $ and $\mathbf {D_{yy}} $ are 1D discrete Laplacians in the x- and y-directions, correspondingly, and $\mathbf {I} $ are the identities of appropriate sizes. See the main article Kronecker sum of discrete Laplacians for details.
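A small NumPy sketch (the 4×5 grid size and the Dirichlet stencil below are illustrative assumptions) makes the separation concrete: the spectrum of the Kronecker sum consists exactly of all pairwise sums of the 1D eigenvalues.

import numpy as np

def lap1d(m):
    # 1D discrete Laplacian (Dirichlet boundaries, unit spacing)
    return 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)

nx, ny = 4, 5
Dxx, Dyy = lap1d(nx), lap1d(ny)
L = np.kron(Dxx, np.eye(ny)) + np.kron(np.eye(nx), Dyy)  # Kronecker sum

# Every eigenvalue of L is a sum of one eigenvalue of Dxx and one of Dyy.
lx, ly = np.linalg.eigvalsh(Dxx), np.linalg.eigvalsh(Dyy)
sums = np.sort((lx[:, None] + ly[None, :]).ravel())
print(np.allclose(np.sort(np.linalg.eigvalsh(L)), sums))  # True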
Software
Some mathematical programs are able to do separation of variables: Xcas[8] among others.
See also
• Inseparable differential equation
Notes
1. "How do you solve this differential equation using the separation of variables dy/dx= (y-2)/x?". Quora. Retrieved 2022-01-22.
2. "Separation of Variables". www.mathsisfun.com. Retrieved 2021-09-18.
3. "How do you solve this differential equation using the separation of variables dy/dx= (y-2)/x?". Quora. Retrieved 2022-01-22.
4. Miroshnikov, Victor A. (15 December 2017). Harmonic Wave Systems: Partial Differential Equations of the Helmholtz Decomposition. ISBN 9781618964069.
5. John Renze, Eric W. Weisstein, Separation of variables
6. Willard Miller(1984) Symmetry and Separation of Variables, Cambridge University Press
7. David Benson (2007) Music: A Mathematical Offering, Cambridge University Press, Appendix W
8. "Symbolic algebra and Mathematics with Xcas" (PDF).
References
• Polyanin, Andrei D. (2001-11-28). Handbook of Linear Partial Differential Equations for Engineers and Scientists. Boca Raton, FL: Chapman & Hall/CRC. ISBN 1-58488-299-9.
• Myint-U, Tyn; Debnath, Lokenath (2007). Linear Partial Differential Equations for Scientists and Engineers. Boston, MA: Birkhäuser Boston. doi:10.1007/978-0-8176-4560-1. ISBN 978-0-8176-4393-5.
• Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Graduate Studies in Mathematics. Vol. 140. Providence, RI: American Mathematical Society. ISBN 978-0-8218-8328-0.
External links
• "Fourier method", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• John Renze, Eric W. Weisstein, Separation of variables (Differential Equation) at MathWorld.
• Methods of Generalized and Functional Separation of Variables at EqWorld: The World of Mathematical Equations
• Examples of separating variables to solve PDEs
• "A Short Justification of Separation of Variables"
Hilbert space
In mathematics, Hilbert spaces (named after David Hilbert) allow the methods of linear algebra and calculus to be generalized from (finite-dimensional) Euclidean vector spaces to spaces that may be infinite-dimensional. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as function spaces. Formally, a Hilbert space is a vector space equipped with an inner product that induces a distance function for which the space is a complete metric space.
The earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis (which includes applications to signal processing and heat transfer), and ergodic theory (which forms the mathematical underpinning of thermodynamics). John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Apart from the classical Euclidean vector spaces, examples of Hilbert spaces include spaces of square-integrable functions, spaces of sequences, Sobolev spaces consisting of generalized functions, and Hardy spaces of holomorphic functions.
Geometric intuition plays an important role in many aspects of Hilbert space theory. Exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space. At a deeper level, perpendicular projection onto a linear subspace (the analog of "dropping the altitude" of a triangle) plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be uniquely specified by its coordinates with respect to an orthonormal basis, in analogy with Cartesian coordinates in classical geometry. When this basis is countably infinite, it allows the Hilbert space to be identified with the space of infinite sequences that are square-summable. In the older literature, the latter space is often referred to as the Hilbert space.
Definition and illustration
Motivating example: Euclidean vector space
One of the most familiar examples of a Hilbert space is the Euclidean vector space consisting of three-dimensional vectors, denoted by R3, and equipped with the dot product. The dot product takes two vectors x and y, and produces a real number x ⋅ y. If x and y are represented in Cartesian coordinates, then the dot product is defined by
${\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}\cdot {\begin{pmatrix}y_{1}\\y_{2}\\y_{3}\end{pmatrix}}=x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3}\,.$
The dot product satisfies the properties[1]
1. It is symmetric in x and y: x ⋅ y = y ⋅ x.
2. It is linear in its first argument: (ax1 + bx2) ⋅ y = a(x1 ⋅ y) + b(x2 ⋅ y) for any scalars a, b, and vectors x1, x2, and y.
3. It is positive definite: for all vectors x, x ⋅ x ≥ 0 , with equality if and only if x = 0.
An operation on pairs of vectors that, like the dot product, satisfies these three properties is known as a (real) inner product. A vector space equipped with such an inner product is known as a (real) inner product space. Every finite-dimensional inner product space is also a Hilbert space.[2] The basic feature of the dot product that connects it with Euclidean geometry is that it is related to both the length (or norm) of a vector, denoted ‖x‖, and to the angle θ between two vectors x and y by means of the formula
$\mathbf {x} \cdot \mathbf {y} =\left\|\mathbf {x} \right\|\left\|\mathbf {y} \right\|\,\cos \theta \,.$
Multivariable calculus in Euclidean space relies on the ability to compute limits, and to have useful criteria for concluding that limits exist. A mathematical series
$\sum _{n=0}^{\infty }\mathbf {x} _{n}$
consisting of vectors in R3 is absolutely convergent provided that the sum of the lengths converges as an ordinary series of real numbers:[3]
$\sum _{k=0}^{\infty }\|\mathbf {x} _{k}\|<\infty \,.$
Just as with a series of scalars, a series of vectors that converges absolutely also converges to some limit vector L in the Euclidean space, in the sense that
${\Biggl \|}\mathbf {L} -\sum _{k=0}^{N}\mathbf {x} _{k}{\Biggr \|}\to 0\quad {\text{as }}N\to \infty \,.$
This property expresses the completeness of Euclidean space: that a series that converges absolutely also converges in the ordinary sense.
Hilbert spaces are often taken over the complex numbers. The complex plane denoted by C is equipped with a notion of magnitude, the complex modulus |z|, which is defined as the square root of the product of z with its complex conjugate:
$|z|^{2}=z{\overline {z}}\,.$
If z = x + iy is a decomposition of z into its real and imaginary parts, then the modulus is the usual Euclidean two-dimensional length:
$|z|={\sqrt {x^{2}+y^{2}}}\,.$
The inner product of a pair of complex numbers z and w is the product of z with the complex conjugate of w:
$\langle z,w\rangle =z{\overline {w}}\,.$
This is complex-valued. The real part of ⟨z, w⟩ gives the usual two-dimensional Euclidean dot product.
A second example is the space C2 whose elements are pairs of complex numbers z = (z1, z2). Then the inner product of z with another such vector w = (w1, w2) is given by
$\langle z,w\rangle =z_{1}{\overline {w_{1}}}+z_{2}{\overline {w_{2}}}\,.$
The real part of ⟨z, w⟩ is then the two-dimensional Euclidean dot product. This inner product is Hermitian symmetric, which means that the result of interchanging z and w is the complex conjugate:
$\langle w,z\rangle ={\overline {\langle z,w\rangle }}\,.$
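These properties are easy to check numerically. A minimal NumPy sketch follows (the two vectors are arbitrary illustrative choices; note that np.vdot conjugates its first argument, so the convention above corresponds to np.vdot(w, z)):

import numpy as np

z = np.array([1 + 2j, 3 - 1j])
w = np.array([2 - 1j, 1j])

# <z, w> = z1*conj(w1) + z2*conj(w2); np.vdot conjugates its FIRST argument.
inner = lambda z, w: np.vdot(w, z)

print(np.allclose(inner(w, z), np.conj(inner(z, w))))  # Hermitian symmetry
print(inner(z, z).real > 0, np.isclose(inner(z, z).imag, 0.0))  # positivity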
Definition
A Hilbert space is a real or complex inner product space that is also a complete metric space with respect to the distance function induced by the inner product.[4]
To say that a complex vector space H is a complex inner product space means that there is an inner product $\langle x,y\rangle $ associating a complex number to each pair of elements $x,y$ of H that satisfies the following properties:
1. The inner product is conjugate symmetric; that is, the inner product of a pair of elements is equal to the complex conjugate of the inner product of the swapped elements:
$\langle y,x\rangle ={\overline {\langle x,y\rangle }}\,.$
Importantly, this implies that $\langle x,x\rangle $ is a real number.
2. The inner product is linear in its first[nb 1] argument. For all complex numbers $a$ and $b,$
$\langle ax_{1}+bx_{2},y\rangle =a\langle x_{1},y\rangle +b\langle x_{2},y\rangle \,.$
3. The inner product of an element with itself is positive definite:
${\begin{alignedat}{4}\langle x,x\rangle >0&\quad {\text{ if }}x\neq 0,\\\langle x,x\rangle =0&\quad {\text{ if }}x=0\,.\end{alignedat}}$
It follows from properties 1 and 2 that a complex inner product is antilinear, also called conjugate linear, in its second argument, meaning that
$\langle x,ay_{1}+by_{2}\rangle ={\bar {a}}\langle x,y_{1}\rangle +{\bar {b}}\langle x,y_{2}\rangle \,.$
A real inner product space is defined in the same way, except that H is a real vector space and the inner product takes real values. Such an inner product will be a bilinear map and $(H,H,\langle \cdot ,\cdot \rangle )$ will form a dual system.[5]
The norm is the real-valued function
$\|x\|={\sqrt {\langle x,x\rangle }}\,,$
and the distance $d$ between two points $x,y$ in H is defined in terms of the norm by
$d(x,y)=\|x-y\|={\sqrt {\langle x-y,x-y\rangle }}\,.$
That this function is a distance function means firstly that it is symmetric in $x$ and $y,$ secondly that the distance between $x$ and itself is zero, and otherwise the distance between $x$ and $y$ must be positive, and lastly that the triangle inequality holds, meaning that the length of one leg of a triangle xyz cannot exceed the sum of the lengths of the other two legs:
$d(x,z)\leq d(x,y)+d(y,z)\,.$
This last property is ultimately a consequence of the more fundamental Cauchy–Schwarz inequality, which asserts
$\left|\langle x,y\rangle \right|\leq \|x\|\|y\|$
with equality if and only if $x$ and $y$ are linearly dependent.
With a distance function defined in this way, any inner product space is a metric space, sometimes known as a Hausdorff pre-Hilbert space.[6] Any pre-Hilbert space that is additionally also a complete space is a Hilbert space.[7]
The completeness of H is expressed using a form of the Cauchy criterion for sequences in H: a pre-Hilbert space H is complete if every Cauchy sequence converges with respect to this norm to an element in the space. Completeness can be characterized by the following equivalent condition: if a series of vectors
$\sum _{k=0}^{\infty }u_{k}$
converges absolutely in the sense that
$\sum _{k=0}^{\infty }\|u_{k}\|<\infty \,,$
then the series converges in H, in the sense that the partial sums converge to an element of H.[8]
As a complete normed space, Hilbert spaces are by definition also Banach spaces. As such they are topological vector spaces, in which topological notions like the openness and closedness of subsets are well defined. Of special importance is the notion of a closed linear subspace of a Hilbert space that, with the inner product induced by restriction, is also complete (being a closed set in a complete metric space) and therefore a Hilbert space in its own right.
Second example: sequence spaces
The sequence space l2 consists of all infinite sequences z = (z1, z2, …) of complex numbers such that the following series converges:[9]
$\sum _{n=1}^{\infty }|z_{n}|^{2}$
The inner product on l2 is defined by:
$\langle \mathbf {z} ,\mathbf {w} \rangle =\sum _{n=1}^{\infty }z_{n}{\overline {w_{n}}}\,,$
This second series converges as a consequence of the Cauchy–Schwarz inequality and the convergence of the previous series.
Completeness of the space holds provided that whenever a series of elements from l2 converges absolutely (in norm), then it converges to an element of l2. The proof is basic in mathematical analysis, and permits mathematical series of elements of the space to be manipulated with the same ease as series of complex numbers (or vectors in a finite-dimensional Euclidean space).[10]
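As an illustration (a NumPy sketch; the sequences zₙ = 1/n and wₙ = 1/(n+1) are sample square-summable elements), partial sums of the inner product converge to a finite limit, here the telescoping sum Σ 1/(n(n+1)) = 1:

import numpy as np

n = np.arange(1, 200001).astype(float)
z = 1.0 / n        # square-summable, since sum 1/n^2 converges
w = 1.0 / (n + 1)

print(np.sum(z * w))                 # close to 1 = sum of 1/(n(n+1))
print(np.sum(z * z), np.pi**2 / 6)   # <z, z> tends to pi^2/6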
History
Prior to the development of Hilbert spaces, other generalizations of Euclidean spaces were known to mathematicians and physicists. In particular, the idea of an abstract linear space (vector space) had gained some traction towards the end of the 19th century:[11] this is a space whose elements can be added together and multiplied by scalars (such as real or complex numbers) without necessarily identifying these elements with "geometric" vectors, such as position and momentum vectors in physical systems. Other objects studied by mathematicians at the turn of the 20th century, in particular spaces of sequences (including series) and spaces of functions,[12] can naturally be thought of as linear spaces. Functions, for instance, can be added together or multiplied by constant scalars, and these operations obey the algebraic laws satisfied by addition and scalar multiplication of spatial vectors.
In the first decade of the 20th century, parallel developments led to the introduction of Hilbert spaces. The first of these was the observation, which arose during David Hilbert and Erhard Schmidt's study of integral equations,[13] that two square-integrable real-valued functions f and g on an interval [a, b] have an inner product
$\langle f,g\rangle =\int _{a}^{b}f(x)g(x)\,\mathrm {d} x$
which has many of the familiar properties of the Euclidean dot product. In particular, the idea of an orthogonal family of functions has meaning. Schmidt exploited the similarity of this inner product with the usual dot product to prove an analog of the spectral decomposition for an operator of the form
$f(x)\mapsto \int _{a}^{b}K(x,y)f(y)\,\mathrm {d} y$
where K is a continuous function symmetric in x and y. The resulting eigenfunction expansion expresses the function K as a series of the form
$K(x,y)=\sum _{n}\lambda _{n}\varphi _{n}(x)\varphi _{n}(y)$
where the functions φn are orthogonal in the sense that ⟨φn, φm⟩ = 0 for all n ≠ m. The individual terms in this series are sometimes referred to as elementary product solutions. However, there are eigenfunction expansions that fail to converge in a suitable sense to a square-integrable function: the missing ingredient, which ensures convergence, is completeness.[14]
The second development was the Lebesgue integral, an alternative to the Riemann integral introduced by Henri Lebesgue in 1904.[15] The Lebesgue integral made it possible to integrate a much broader class of functions. In 1907, Frigyes Riesz and Ernst Sigismund Fischer independently proved that the space L2 of square Lebesgue-integrable functions is a complete metric space.[16] As a consequence of the interplay between geometry and completeness, the 19th century results of Joseph Fourier, Friedrich Bessel and Marc-Antoine Parseval on trigonometric series easily carried over to these more general spaces, resulting in a geometrical and analytical apparatus now usually known as the Riesz–Fischer theorem.[17]
Further basic results were proved in the early 20th century. For example, the Riesz representation theorem was independently established by Maurice Fréchet and Frigyes Riesz in 1907.[18] John von Neumann coined the term abstract Hilbert space in his work on unbounded Hermitian operators.[19] Although other mathematicians such as Hermann Weyl and Norbert Wiener had already studied particular Hilbert spaces in great detail, often from a physically motivated point of view, von Neumann gave the first complete and axiomatic treatment of them.[20] Von Neumann later used them in his seminal work on the foundations of quantum mechanics,[21] and in his continued work with Eugene Wigner. The name "Hilbert space" was soon adopted by others, for example by Hermann Weyl in his book on quantum mechanics and the theory of groups.[22]
The significance of the concept of a Hilbert space was underlined with the realization that it offers one of the best mathematical formulations of quantum mechanics.[23] In short, the states of a quantum mechanical system are vectors in a certain Hilbert space, the observables are hermitian operators on that space, the symmetries of the system are unitary operators, and measurements are orthogonal projections. The relation between quantum mechanical symmetries and unitary operators provided an impetus for the development of the unitary representation theory of groups, initiated in the 1928 work of Hermann Weyl.[22] On the other hand, in the early 1930s it became clear that classical mechanics can be described in terms of Hilbert space (Koopman–von Neumann classical mechanics) and that certain properties of classical dynamical systems can be analyzed using Hilbert space techniques in the framework of ergodic theory.[24]
The algebra of observables in quantum mechanics is naturally an algebra of operators defined on a Hilbert space, according to Werner Heisenberg's matrix mechanics formulation of quantum theory.[25] Von Neumann began investigating operator algebras in the 1930s, as rings of operators on a Hilbert space. The kind of algebras studied by von Neumann and his contemporaries are now known as von Neumann algebras.[26] In the 1940s, Israel Gelfand, Mark Naimark and Irving Segal gave a definition of a kind of operator algebras called C*-algebras that on the one hand made no reference to an underlying Hilbert space, and on the other extrapolated many of the useful features of the operator algebras that had previously been studied. The spectral theorem for self-adjoint operators in particular that underlies much of the existing Hilbert space theory was generalized to C*-algebras.[27] These techniques are now basic in abstract harmonic analysis and representation theory.
Examples
Lebesgue spaces
Main article: Lp space
Lebesgue spaces are function spaces associated to measure spaces (X, M, μ), where X is a set, M is a σ-algebra of subsets of X, and μ is a countably additive measure on M. Let L2(X, μ) be the space of those complex-valued measurable functions on X for which the Lebesgue integral of the square of the absolute value of the function is finite, i.e., for a function f in L2(X, μ),
$\int _{X}|f|^{2}\mathrm {d} \mu <\infty \,,$
and where functions are identified if and only if they differ only on a set of measure zero.
The inner product of functions f and g in L2(X, μ) is then defined as
$\langle f,g\rangle =\int _{X}f(t){\overline {g(t)}}\,\mathrm {d} \mu (t)$
or
$\langle f,g\rangle =\int _{X}{\overline {f(t)}}g(t)\,\mathrm {d} \mu (t)\,,$
where the second form (conjugation of the first element) is commonly found in the theoretical physics literature. For f and g in L2, the integral exists because of the Cauchy–Schwarz inequality, and defines an inner product on the space. Equipped with this inner product, L2 is in fact complete.[28] The Lebesgue integral is essential to ensure completeness: on domains of real numbers, for instance, not enough functions are Riemann integrable.[29]
The Lebesgue spaces appear in many natural settings. The spaces L2(R) and L2([0,1]) of square-integrable functions with respect to the Lebesgue measure on the real line and unit interval, respectively, are natural domains on which to define the Fourier transform and Fourier series. In other situations, the measure may be something other than the ordinary Lebesgue measure on the real line. For instance, if w is any positive measurable function, the space of all measurable functions f on the interval [0, 1] satisfying
$\int _{0}^{1}{\bigl |}f(t){\bigr |}^{2}w(t)\,\mathrm {d} t<\infty $
is called the weighted $L^{2}$ space $L_{w}^{2}([0,1])$, and w is called the weight function. The inner product is defined by
$\langle f,g\rangle =\int _{0}^{1}f(t){\overline {g(t)}}w(t)\,\mathrm {d} t\,.$
The weighted space $L_{w}^{2}([0,1])$ is identical with the Hilbert space L2([0, 1], μ) where the measure μ of a Lebesgue-measurable set A is defined by
$\mu (A)=\int _{A}w(t)\,\mathrm {d} t\,.$
Weighted L2 spaces like this are frequently used to study orthogonal polynomials, because different families of orthogonal polynomials are orthogonal with respect to different weighting functions.[30]
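For instance, the Chebyshev polynomials are orthogonal for the weight w(x) = (1 − x²)^(−1/2) on [−1, 1]. The sketch below (assuming NumPy; it uses the standard Gauss–Chebyshev quadrature rule for this weight, and the polynomials T₂ and T₃ are written out explicitly) checks this numerically:

import numpy as np

# Gauss-Chebyshev quadrature: the weighted integral of f over [-1, 1] is
# approximated by (pi/N) * sum of f at the nodes cos((2k - 1) pi / (2N)).
N = 64
k = np.arange(1, N + 1)
nodes = np.cos((2 * k - 1) * np.pi / (2 * N))

def weighted_inner(f, g):
    return np.pi / N * np.sum(f(nodes) * g(nodes))

T2 = lambda x: 2 * x**2 - 1      # Chebyshev polynomial T_2
T3 = lambda x: 4 * x**3 - 3 * x  # Chebyshev polynomial T_3

print(weighted_inner(T2, T3))  # ~0: orthogonal in the weighted space
print(weighted_inner(T2, T2))  # ~pi/2: squared norm of T_2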
Sobolev spaces
Sobolev spaces, denoted by Hs or Ws, 2, are Hilbert spaces. These are a special kind of function space in which differentiation may be performed, but that (unlike other Banach spaces such as the Hölder spaces) support the structure of an inner product. Because differentiation is permitted, Sobolev spaces are a convenient setting for the theory of partial differential equations.[31] They also form the basis of the theory of direct methods in the calculus of variations.[32]
For s a non-negative integer and Ω ⊂ Rn, the Sobolev space Hs(Ω) contains L2 functions whose weak derivatives of order up to s are also L2. The inner product in Hs(Ω) is
$\langle f,g\rangle =\int _{\Omega }f(x){\bar {g}}(x)\,\mathrm {d} x+\int _{\Omega }Df(x)\cdot D{\bar {g}}(x)\,\mathrm {d} x+\cdots +\int _{\Omega }D^{s}f(x)\cdot D^{s}{\bar {g}}(x)\,\mathrm {d} x$
where the dot indicates the dot product in the Euclidean space of partial derivatives of each order. Sobolev spaces can also be defined when s is not an integer.
Sobolev spaces are also studied from the point of view of spectral theory, relying more specifically on the Hilbert space structure. If Ω is a suitable domain, then one can define the Sobolev space Hs(Ω) as the space of Bessel potentials;[33] roughly,
$H^{s}(\Omega )=\left\{(1-\Delta )^{-s/2}f\mathrel {\Big |} f\in L^{2}(\Omega )\right\}\,.$
Here Δ is the Laplacian and (1 − Δ)−s / 2 is understood in terms of the spectral mapping theorem. Apart from providing a workable definition of Sobolev spaces for non-integer s, this definition also has particularly desirable properties under the Fourier transform that make it ideal for the study of pseudodifferential operators. Using these methods on a compact Riemannian manifold, one can obtain for instance the Hodge decomposition, which is the basis of Hodge theory.[34]
Hardy spaces
The Hardy spaces are function spaces, arising in complex analysis and harmonic analysis, whose elements are certain holomorphic functions in a complex domain.[35] Let U denote the unit disc in the complex plane. Then the Hardy space H2(U) is defined as the space of holomorphic functions f on U such that the means
$M_{r}(f)={\frac {1}{2\pi }}\int _{0}^{2\pi }\left|f{\bigl (}re^{i\theta }{\bigr )}\right|^{2}\,\mathrm {d} \theta $
remain bounded for r < 1. The norm on this Hardy space is defined by
$\left\|f\right\|_{2}=\lim _{r\to 1}{\sqrt {M_{r}(f)}}\,.$
Hardy spaces in the disc are related to Fourier series. A function f is in H2(U) if and only if
$f(z)=\sum _{n=0}^{\infty }a_{n}z^{n}$
where
$\sum _{n=0}^{\infty }|a_{n}|^{2}<\infty \,.$
Thus H2(U) consists of those functions that are L2 on the circle, and whose negative frequency Fourier coefficients vanish.
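The identity behind this can be checked numerically: for such an f with coefficients aₙ, the circle mean Mᵣ(f) equals Σ |aₙ|² r^(2n). A NumPy sketch follows (the function f(z) = 1/(1 − z/2), with aₙ = 2^(−n), is an illustrative choice):

import numpy as np

r = 0.9
theta = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
z = r * np.exp(1j * theta)

M_r = np.mean(np.abs(1.0 / (1.0 - z / 2))**2)  # circle mean of |f|^2
print(M_r, 1.0 / (1.0 - r**2 / 4))  # both equal sum |a_n|^2 r^(2n)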
Bergman spaces
The Bergman spaces are another family of Hilbert spaces of holomorphic functions.[36] Let D be a bounded open set in the complex plane (or a higher-dimensional complex space) and let L2, h(D) be the space of holomorphic functions f in D that are also in L2(D) in the sense that
$\|f\|^{2}=\int _{D}|f(z)|^{2}\,\mathrm {d} \mu (z)<\infty \,,$
where the integral is taken with respect to the Lebesgue measure in D. Clearly L2, h(D) is a subspace of L2(D); in fact, it is a closed subspace, and so a Hilbert space in its own right. This is a consequence of the estimate, valid on compact subsets K of D, that
$\sup _{z\in K}\left|f(z)\right|\leq C_{K}\left\|f\right\|_{2}\,,$
which in turn follows from Cauchy's integral formula. Thus convergence of a sequence of holomorphic functions in L2(D) implies also compact convergence, and so the limit function is also holomorphic. Another consequence of this inequality is that the linear functional that evaluates a function f at a point of D is actually continuous on L2, h(D). The Riesz representation theorem implies that the evaluation functional can be represented as an element of L2, h(D). Thus, for every z ∈ D, there is a function ηz ∈ L2, h(D) such that
$f(z)=\int _{D}f(\zeta ){\overline {\eta _{z}(\zeta )}}\,\mathrm {d} \mu (\zeta )$
for all f ∈ L2, h(D). The integrand
$K(\zeta ,z)={\overline {\eta _{z}(\zeta )}}$
is known as the Bergman kernel of D. This integral kernel satisfies a reproducing property
$f(z)=\int _{D}f(\zeta )K(\zeta ,z)\,\mathrm {d} \mu (\zeta )\,.$
A Bergman space is an example of a reproducing kernel Hilbert space, which is a Hilbert space of functions along with a kernel K(ζ, z) that verifies a reproducing property analogous to this one. The Hardy space H2(D) also admits a reproducing kernel, known as the Szegő kernel.[37] Reproducing kernels are common in other areas of mathematics as well. For instance, in harmonic analysis the Poisson kernel is a reproducing kernel for the Hilbert space of square-integrable harmonic functions in the unit ball. That the latter is a Hilbert space at all is a consequence of the mean value theorem for harmonic functions.
Applications
Many of the applications of Hilbert spaces exploit the fact that Hilbert spaces support generalizations of simple geometric concepts like projection and change of basis from their usual finite dimensional setting. In particular, the spectral theory of continuous self-adjoint linear operators on a Hilbert space generalizes the usual spectral decomposition of a matrix, and this often plays a major role in applications of the theory to other areas of mathematics and physics.
Sturm–Liouville theory
Main articles: Sturm–Liouville theory and Spectral theory of ordinary differential equations
In the theory of ordinary differential equations, spectral methods on a suitable Hilbert space are used to study the behavior of eigenvalues and eigenfunctions of differential equations. For example, the Sturm–Liouville problem arises in the study of the harmonics of waves in a violin string or a drum, and is a central problem in ordinary differential equations.[38] The problem is a differential equation of the form
$-{\frac {\mathrm {d} }{\mathrm {d} x}}\left[p(x){\frac {\mathrm {d} y}{\mathrm {d} x}}\right]+q(x)y=\lambda w(x)y$
for an unknown function y on an interval [a, b], satisfying general homogeneous Robin boundary conditions
${\begin{cases}\alpha y(a)+\alpha 'y'(a)&=0\\\beta y(b)+\beta 'y'(b)&=0\,.\end{cases}}$
The functions p, q, and w are given in advance, and the problem is to find the function y and constants λ for which the equation has a solution. The problem has solutions only for certain values of λ, called eigenvalues of the system, and this is a consequence of the spectral theorem for compact operators applied to the integral operator defined by the Green's function for the system. Furthermore, another consequence of this general result is that the eigenvalues λ of the system can be arranged in an increasing sequence tending to infinity.[39][nb 2]
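A minimal concrete case (an illustration with p = w = 1, q = 0, and Dirichlet rather than general Robin conditions) is

$-y''=\lambda y\,,\quad y(0)=y(\pi )=0\,,$

whose eigenvalues are $\lambda _{n}=n^{2}$ with eigenfunctions $y_{n}(x)=\sin(nx)$ for n = 1, 2, …: an increasing sequence of eigenvalues tending to infinity, whose eigenfunctions form an orthogonal basis of L2([0, π]).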
Partial differential equations
Hilbert spaces form a basic tool in the study of partial differential equations.[31] For many classes of partial differential equations, such as linear elliptic equations, it is possible to consider a generalized solution (known as a weak solution) by enlarging the class of functions. Many weak formulations involve the class of Sobolev functions, which is a Hilbert space. A suitable weak formulation reduces the analytic problem of finding a solution (or, often more importantly, of showing that a solution exists and is unique for given boundary data) to a geometrical problem. For linear elliptic equations, one geometrical result that ensures unique solvability for a large class of problems is the Lax–Milgram theorem. This strategy forms the rudiment of the Galerkin method (a finite element method) for numerical solution of partial differential equations.[40]
A typical example is the Poisson equation −Δu = g with Dirichlet boundary conditions in a bounded domain Ω in R2. The weak formulation consists of finding a function u such that, for all continuously differentiable functions v in Ω vanishing on the boundary:
$\int _{\Omega }\nabla u\cdot \nabla v=\int _{\Omega }gv\,.$
This can be recast in terms of the Hilbert space $H_{0}^{1}(\Omega )$ consisting of functions u such that u, along with its weak partial derivatives, are square integrable on Ω, and vanish on the boundary. The question then reduces to finding u in this space such that for all v in this space
$a(u,v)=b(v)$
where a is a continuous bilinear form, and b is a continuous linear functional, given respectively by
$a(u,v)=\int _{\Omega }\nabla u\cdot \nabla v,\quad b(v)=\int _{\Omega }gv\,.$
Since the Poisson equation is elliptic, it follows from Poincaré's inequality that the bilinear form a is coercive. The Lax–Milgram theorem then ensures the existence and uniqueness of solutions of this equation.[41]
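The Galerkin strategy mentioned above can be made concrete in one dimension. The following sketch (Python with NumPy; the mesh size, right-hand side, and hat-function basis are illustrative choices, not taken from the text) solves −u″ = g on (0, 1) with zero boundary values by restricting the weak formulation a(u, v) = b(v) to a finite-dimensional subspace:

```python
import numpy as np

# A minimal Galerkin / finite element sketch (illustrative, not from the text):
# solve -u'' = g on (0, 1), u(0) = u(1) = 0, with piecewise-linear hat functions.
n = 50                          # number of interior mesh nodes
h = 1.0 / (n + 1)               # uniform mesh width
x = np.linspace(h, 1.0 - h, n)  # interior nodes

def g(t):                       # right-hand side; the exact solution is sin(pi t)
    return np.pi ** 2 * np.sin(np.pi * t)

# Stiffness matrix A[i, j] = a(phi_j, phi_i) = integral of phi_j' phi_i'
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# Load vector b[i] = b(phi_i), approximated by g(x_i) times the integral of phi_i
b = g(x) * h

u = np.linalg.solve(A, b)       # coefficients of u_h in the hat-function basis
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small: O(h^2) nodal error
```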
Hilbert spaces allow for many elliptic partial differential equations to be formulated in a similar way, and the Lax–Milgram theorem is then a basic tool in their analysis. With suitable modifications, similar techniques can be applied to parabolic partial differential equations and certain hyperbolic partial differential equations.[42]
Ergodic theory
The field of ergodic theory is the study of the long-term behavior of chaotic dynamical systems. The prototypical example of a field to which ergodic theory applies is thermodynamics, in which—though the microscopic state of a system is extremely complicated (it is impossible to understand the ensemble of individual collisions between particles of matter)—the average behavior over sufficiently long time intervals is tractable. The laws of thermodynamics are assertions about such average behavior. In particular, one formulation of the zeroth law of thermodynamics asserts that over sufficiently long timescales, the only functionally independent measurement that one can make of a thermodynamic system in equilibrium is its total energy, in the form of temperature.
An ergodic dynamical system is one for which, apart from the energy—measured by the Hamiltonian—there are no other functionally independent conserved quantities on the phase space. More explicitly, suppose that the energy E is fixed, and let ΩE be the subset of the phase space consisting of all states of energy E (an energy surface), and let Tt denote the evolution operator on the phase space. The dynamical system is ergodic if there are no continuous non-constant functions on ΩE such that
$f(T_{t}w)=f(w)$
for all w on ΩE and all time t. Liouville's theorem implies that there exists a measure μ on the energy surface that is invariant under the time translation. As a result, time translation is a unitary transformation of the Hilbert space L2(ΩE, μ) consisting of square-integrable functions on the energy surface ΩE with respect to the inner product
$\left\langle f,g\right\rangle _{L^{2}\left(\Omega _{E},\mu \right)}=\int _{\Omega _{E}}f{\bar {g}}\,\mathrm {d} \mu \,.$
The von Neumann mean ergodic theorem[24] states the following:
• If Ut is a (strongly continuous) one-parameter semigroup of unitary operators on a Hilbert space H, and P is the orthogonal projection onto the space of common fixed points of Ut, {x ∈ H | Utx = x, ∀t > 0}, then
$Px=\lim _{T\to \infty }{\frac {1}{T}}\int _{0}^{T}U_{t}x\,\mathrm {d} t\,.$
For an ergodic system, the fixed set of the time evolution consists only of the constant functions, so the ergodic theorem implies the following:[43] for any function f ∈ L2(ΩE, μ),
${\underset {T\to \infty }{L^{2}-\lim }}{\frac {1}{T}}\int _{0}^{T}f(T_{t}w)\,\mathrm {d} t=\int _{\Omega _{E}}f(y)\,\mathrm {d} \mu (y)\,.$
That is, the long time average of an observable f is equal to its expectation value over an energy surface.
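A discrete-time analogue can be watched numerically (an illustration, not from the text; the irrational rotation of the circle used here is a standard example of an ergodic map):

```python
import numpy as np

# Discrete-time illustration: the irrational rotation T(w) = w + alpha (mod 1)
# is ergodic, so time averages of an observable converge to its space average.
alpha = np.sqrt(2.0) - 1.0                 # irrational rotation number
def f(w):                                  # observable with space average 1/2
    return np.cos(2.0 * np.pi * w) ** 2

w0, N = 0.1, 100000
orbit = (w0 + alpha * np.arange(N)) % 1.0  # T^0 w0, T^1 w0, ..., T^{N-1} w0
print(np.mean(f(orbit)))                   # ≈ 0.5, the integral of f over [0, 1)
```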
Fourier analysis
One of the basic goals of Fourier analysis is to decompose a function into a (possibly infinite) linear combination of given basis functions: the associated Fourier series. The classical Fourier series associated to a function f defined on the interval [0, 1] is a series of the form
$\sum _{n=-\infty }^{\infty }a_{n}e^{2\pi in\theta }$
where
$a_{n}=\int _{0}^{1}f(\theta )\;\!e^{-2\pi in\theta }\,\mathrm {d} \theta \,.$
The example of adding up the first few terms in a Fourier series for a sawtooth function is shown in the figure. The basis functions are sine waves with wavelengths λ/n (for integer n) shorter than the wavelength λ of the sawtooth itself (except for n = 1, the fundamental wave). All basis functions have nodes at the nodes of the sawtooth, but all but the fundamental have additional nodes. The oscillation of the summed terms about the sawtooth is called the Gibbs phenomenon.
A significant problem in classical Fourier series asks in what sense the Fourier series converges, if at all, to the function f. Hilbert space methods provide one possible answer to this question.[44] The functions en(θ) = e2πinθ form an orthonormal basis of the Hilbert space L2([0, 1]). Consequently, any square-integrable function can be expressed as a series
$f(\theta )=\sum _{n}a_{n}e_{n}(\theta )\,,\quad a_{n}=\langle f,e_{n}\rangle $
and, moreover, this series converges in the Hilbert space sense (that is, in the L2 mean).
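This L2 convergence can be observed numerically. A sketch (Python with NumPy; the sawtooth f(θ) = θ − 1/2 and its coefficients $a_{n}=i/(2\pi n)$ are a standard computation, used here only as an illustration):

```python
import numpy as np

# Partial Fourier sums of the sawtooth f(θ) = θ - 1/2 on [0, 1], whose nonzero
# coefficients are a_n = i / (2πn); the discrete L2 error shrinks as N grows.
theta = np.linspace(0.0, 1.0, 2000, endpoint=False)
f = theta - 0.5

def partial_sum(N):
    s = np.zeros_like(theta, dtype=complex)
    for n in range(1, N + 1):
        a_n = 1j / (2.0 * np.pi * n)       # a_{-n} is the complex conjugate
        s += a_n * np.exp(2j * np.pi * n * theta)
        s += np.conj(a_n) * np.exp(-2j * np.pi * n * theta)
    return s.real

for N in (1, 10, 100):
    print(N, np.sqrt(np.mean((f - partial_sum(N)) ** 2)))   # L2-mean error decreases
```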
The problem can also be studied from the abstract point of view: every Hilbert space has an orthonormal basis, and every element of the Hilbert space can be written in a unique way as a sum of multiples of these basis elements. The coefficients appearing on these basis elements are sometimes known abstractly as the Fourier coefficients of the element of the space.[45] The abstraction is especially useful when it is more natural to use different basis functions for a space such as L2([0, 1]). In many circumstances, it is desirable not to decompose a function into trigonometric functions, but rather into orthogonal polynomials or wavelets for instance,[46] and in higher dimensions into spherical harmonics.[47]
For instance, if en are any orthonormal basis functions of L2[0, 1], then a given function in L2[0, 1] can be approximated as a finite linear combination[48]
$f(x)\approx f_{n}(x)=a_{1}e_{1}(x)+a_{2}e_{2}(x)+\cdots +a_{n}e_{n}(x)\,.$
The coefficients {aj} are selected to make the magnitude of the difference ‖f − fn‖2 as small as possible. Geometrically, the best approximation is the orthogonal projection of f onto the subspace consisting of all linear combinations of the {ej}, and can be calculated by[49]
$a_{j}=\int _{0}^{1}{\overline {e_{j}(x)}}f(x)\,\mathrm {d} x\,.$
That this formula minimizes the difference ‖f − fn‖2 is a consequence of Bessel's inequality and Parseval's formula.
In various applications to physical problems, a function can be decomposed into physically meaningful eigenfunctions of a differential operator (typically the Laplace operator): this forms the foundation for the spectral study of functions, in reference to the spectrum of the differential operator.[50] A concrete physical application involves the problem of hearing the shape of a drum: given the fundamental modes of vibration that a drumhead is capable of producing, can one infer the shape of the drum itself?[51] The mathematical formulation of this question involves the Dirichlet eigenvalues of the Laplace equation in the plane, that represent the fundamental modes of vibration in direct analogy with the integers that represent the fundamental modes of vibration of the violin string.
Spectral theory also underlies certain aspects of the Fourier transform of a function. Whereas Fourier analysis decomposes a function defined on a compact set into the discrete spectrum of the Laplacian (which corresponds to the vibrations of a violin string or drum), the Fourier transform of a function is the decomposition of a function defined on all of Euclidean space into its components in the continuous spectrum of the Laplacian. The Fourier transformation is also geometrical, in a sense made precise by the Plancherel theorem, which asserts that it is an isometry of one Hilbert space (the "time domain") with another (the "frequency domain"). This isometry property of the Fourier transformation is a recurring theme in abstract harmonic analysis (since it reflects the conservation of energy for the continuous Fourier transform), as evidenced for instance by the Plancherel theorem for spherical functions occurring in noncommutative harmonic analysis.
Quantum mechanics
In the mathematically rigorous formulation of quantum mechanics, developed by John von Neumann,[52] the possible states (more precisely, the pure states) of a quantum mechanical system are represented by unit vectors (called state vectors) residing in a complex separable Hilbert space, known as the state space, well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projectivization of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system; for example, the space of position and momentum states for a single non-relativistic spin zero particle is the space of all square-integrable functions, while the states for the spin of a single proton are unit elements of the two-dimensional complex Hilbert space of spinors. Each observable is represented by a self-adjoint linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate.[53]
The inner product between two state vectors is a complex number known as a probability amplitude. During an ideal measurement of a quantum mechanical system, the probability that a system collapses from a given initial state to a particular eigenstate is given by the square of the absolute value of the probability amplitude between the initial and final states.[54] The possible results of a measurement are the eigenvalues of the operator—which explains the choice of self-adjoint operators, for all the eigenvalues must be real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator.[55]
For a general system, states are typically not pure, but instead are represented as statistical mixtures of pure states, or mixed states, given by density matrices: self-adjoint operators of trace one on a Hilbert space.[56] Moreover, for general quantum mechanical systems, the effects of a single measurement can influence other parts of a system in a manner that is described instead by a positive operator valued measure. Thus the structure both of the states and observables in the general theory is considerably more complicated than the idealization for pure states.[57]
Probability theory
In probability theory, Hilbert spaces also have diverse applications. Here a fundamental Hilbert space is the space of random variables of class $L^{2}$ (those with finite first and second moments) on a given probability space. A common operation in statistics is that of centering a random variable by subtracting its expectation. Thus if $X$ is a random variable, then $X-E(X)$ is its centering. In the Hilbert space view, this is the orthogonal projection of $X$ onto the kernel of the expectation operator, which is a continuous linear functional on the Hilbert space (in fact, the inner product with the constant random variable 1), and so this kernel is a closed subspace.
The conditional expectation has a natural interpretation in the Hilbert space.[58] Suppose that a probability space $(\Omega ,P,{\mathcal {B}})$ is given, where ${\mathcal {B}}$ is a sigma algebra on the set $\Omega $, and $P$ is a probability measure on the measure space $(\Omega ,{\mathcal {B}})$. If ${\mathcal {F}}\leq {\mathcal {B}}$ is a sigma subalgebra of ${\mathcal {B}}$, then the conditional expectation $E[X|{\mathcal {F}}]$ is the orthogonal projection of $X$ onto the subspace of $L^{2}(\Omega ,P)$ consisting of the ${\mathcal {F}}$-measurable functions. If the random variable $X$ in $L^{2}(\Omega ,P)$ is independent of the sigma algebra ${\mathcal {F}}$, then the conditional expectation satisfies $E(X|{\mathcal {F}})=E(X)$; that is, its projection onto the ${\mathcal {F}}$-measurable functions is constant. Equivalently, the projection of its centering is zero.
In particular, if two random variables $X$ and $Y$ (in $L^{2}(\Omega ,P)$) are independent, then the centered random variables $X-E(X)$ and $Y-E(Y)$ are orthogonal. (This means that the two variables have zero covariance: they are uncorrelated.) In that case, the Pythagorean theorem in the kernel of the expectation operator implies that the variances of $X$ and $Y$ satisfy the identity:
$\operatorname {Var} (X+Y)=\operatorname {Var} (X)+\operatorname {Var} (Y),$
sometimes called the Pythagorean theorem of statistics.
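The identity follows by expanding the square (the one step left implicit above):

$\operatorname {Var} (X+Y)=E\left[{\bigl (}(X-E(X))+(Y-E(Y)){\bigr )}^{2}\right]=\operatorname {Var} (X)+2\operatorname {Cov} (X,Y)+\operatorname {Var} (Y)\,,$

where the cross term $2\operatorname {Cov} (X,Y)$ vanishes precisely because the centered variables are orthogonal.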
Hilbert spaces are also used throughout the foundations of the Itô calculus.[59] To any square-integrable martingale, it is possible to associate a Hilbert norm on the space of equivalence classes of progressively measurable processes with respect to the martingale (using the quadratic variation of the martingale as the measure). The Itô integral can be constructed by first defining it for simple processes, and then exploiting their density in the Hilbert space. A noteworthy result is then the Itô isometry, which states that for any martingale M having quadratic variation measure $d\langle M\rangle _{t}$, and any progressively measurable process H:
$E\left[\left(\int _{0}^{t}H_{s}dM_{s}\right)^{2}\right]=E\left[\int _{0}^{t}H_{s}^{2}d\langle M\rangle _{s}\right]$
whenever the expectation on the right-hand side is finite.
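A Monte Carlo sketch of the isometry in the simplest case (an illustration with M a standard Brownian motion, so $d\langle M\rangle _{t}=dt$, and the deterministic integrand $H_{s}=s$; none of these choices come from the text):

```python
import numpy as np

# Monte Carlo check: with M = W (Brownian motion, so d<M>_t = dt) and the
# deterministic integrand H_s = s, the Ito isometry predicts
#   E[(integral_0^1 s dW_s)^2] = integral_0^1 s^2 ds = 1/3.
rng = np.random.default_rng(0)
n_paths, n_steps = 10000, 400
dt = 1.0 / n_steps
s = np.arange(n_steps) * dt                       # left endpoints of the steps
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)

ito = (s * dW).sum(axis=1)                        # simple-process approximation
print(np.mean(ito ** 2), 1.0 / 3.0)               # both ≈ 0.333
```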
Color perception
Any true physical color can be represented by a combination of pure spectral colors. As physical colors can be composed of any number of spectral colors, the space of physical colors may aptly be represented by a Hilbert space over spectral colors. Humans have three types of cone cells for color perception, so the perceivable colors can be represented by 3-dimensional Euclidean space. The many-to-one linear mapping from the Hilbert space of physical colors to the Euclidean space of human perceivable colors explains why many distinct physical colors may be perceived by humans to be identical (e.g., pure yellow light versus a mix of red and green light, see metamerism).[60][61]
Properties
Pythagorean identity
Two vectors u and v in a Hilbert space H are orthogonal when ⟨u, v⟩ = 0. The notation for this is u ⊥ v. More generally, when S is a subset in H, the notation u ⊥ S means that u is orthogonal to every element from S.
When u and v are orthogonal, one has
$\|u+v\|^{2}=\langle u+v,u+v\rangle =\langle u,u\rangle +2\,\operatorname {Re} \langle u,v\rangle +\langle v,v\rangle =\|u\|^{2}+\|v\|^{2}\,.$
By induction on n, this is extended to any family u1, ..., un of n orthogonal vectors,
$\left\|u_{1}+\cdots +u_{n}\right\|^{2}=\left\|u_{1}\right\|^{2}+\cdots +\left\|u_{n}\right\|^{2}.$
Whereas the Pythagorean identity as stated is valid in any inner product space, completeness is required for the extension of the Pythagorean identity to series.[62] A series Σuk of orthogonal vectors converges in H if and only if the series of squares of norms converges, and
${\Biggl \|}\sum _{k=0}^{\infty }u_{k}{\Biggr \|}^{2}=\sum _{k=0}^{\infty }\left\|u_{k}\right\|^{2}\,.$
Furthermore, the sum of a series of orthogonal vectors is independent of the order in which it is taken.
Parallelogram identity and polarization
By definition, every Hilbert space is also a Banach space. Furthermore, in every Hilbert space the following parallelogram identity holds:[63]
$\|u+v\|^{2}+\|u-v\|^{2}=2{\bigl (}\|u\|^{2}+\|v\|^{2}{\bigr )}\,.$
Conversely, every Banach space in which the parallelogram identity holds is a Hilbert space, and the inner product is uniquely determined by the norm by the polarization identity.[64] For real Hilbert spaces, the polarization identity is
$\langle u,v\rangle ={\tfrac {1}{4}}{\bigl (}\|u+v\|^{2}-\|u-v\|^{2}{\bigr )}\,.$
For complex Hilbert spaces, it is
$\langle u,v\rangle ={\tfrac {1}{4}}{\bigl (}\|u+v\|^{2}-\|u-v\|^{2}+i\|u+iv\|^{2}-i\|u-iv\|^{2}{\bigr )}\,.$
The parallelogram law implies that any Hilbert space is a uniformly convex Banach space.[65]
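The complex polarization identity above can be checked numerically (an illustrative sketch; NumPy's vdot conjugates its first argument, so the ordering below matches the convention here that the inner product is linear in its first argument):

```python
import numpy as np

# Check of the complex polarization identity in C^3. np.vdot conjugates its
# first argument, so <u, v> (linear in the first slot) is np.vdot(v, u).
rng = np.random.default_rng(0)
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

inner = np.vdot(v, u)
polar = 0.25 * (np.linalg.norm(u + v) ** 2 - np.linalg.norm(u - v) ** 2
                + 1j * np.linalg.norm(u + 1j * v) ** 2
                - 1j * np.linalg.norm(u - 1j * v) ** 2)
print(np.allclose(inner, polar))   # True
```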
Best approximation
Main article: Hilbert projection theorem
If C is a non-empty closed convex subset of a Hilbert space H and x a point in H, there exists a unique point y ∈ C that minimizes the distance between x and points in C,[66]
$y\in C\,,\quad \|x-y\|=\operatorname {dist} (x,C)=\min {\bigl \{}\|x-z\|\mathrel {\big |} z\in C{\bigr \}}\,.$
This is equivalent to saying that there is a point with minimal norm in the translated convex set D = C − x. The proof consists in showing that every minimizing sequence (dn) ⊂ D is Cauchy (using the parallelogram identity) hence converges (using completeness) to a point in D that has minimal norm. More generally, this holds in any uniformly convex Banach space.[67]
When this result is applied to a closed subspace F of H, it can be shown that the point y ∈ F closest to x is characterized by[68]
$y\in F\,,\quad x-y\perp F\,.$
This point y is the orthogonal projection of x onto F, and the mapping PF : x ↦ y is linear (see Orthogonal complements and projections). This result is especially significant in applied mathematics, notably numerical analysis, where it forms the basis of least squares methods.[69]
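In finite dimensions this characterization is exactly the normal equations of least squares. A sketch (Python with NumPy; the random subspace and vector are illustrative):

```python
import numpy as np

# The closest point y to x in F = span(columns of A) satisfies x - y ⊥ F;
# solving the normal equations computes the orthogonal projection.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))            # basis of a 2-dimensional subspace of R^5
x = rng.standard_normal(5)

coef = np.linalg.solve(A.T @ A, A.T @ x)   # normal equations A^T A c = A^T x
y = A @ coef                               # orthogonal projection of x onto F
print(A.T @ (x - y))                       # ≈ 0: the residual is orthogonal to F
```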
In particular, when F is not equal to H, one can find a nonzero vector v orthogonal to F (select x ∉ F and v = x − y). A very useful criterion is obtained by applying this observation to the closed subspace F generated by a subset S of H.
A subset S of H spans a dense vector subspace if (and only if) the vector 0 is the sole vector v ∈ H orthogonal to S.
Duality
The dual space H* is the space of all continuous linear functions from the space H into the base field. It carries a natural norm, defined by
$\|\varphi \|=\sup _{\|x\|=1,x\in H}|\varphi (x)|\,.$
This norm satisfies the parallelogram law, and so the dual space is also an inner product space where this inner product can be defined in terms of this dual norm by using the polarization identity. The dual space is also complete so it is a Hilbert space in its own right. If e• = (ei)i ∈ I is a complete orthonormal basis for H then the inner product on the dual space of any two $f,g\in H^{*}$ is
$\langle f,g\rangle _{H^{*}}=\sum _{i\in I}f(e_{i}){\overline {g(e_{i})}}$
where all but countably many of the terms in this series are zero.
The Riesz representation theorem affords a convenient description of the dual space. To every element u of H, there is a unique element φu of H*, defined by
$\varphi _{u}(x)=\langle x,u\rangle $
where moreover, $\left\|\varphi _{u}\right\|=\left\|u\right\|.$
The Riesz representation theorem states that the map from H to H* defined by u ↦ φu is surjective, which makes this map an isometric antilinear isomorphism.[70] So to every element φ of the dual H* there exists one and only one uφ in H such that
$\langle x,u_{\varphi }\rangle =\varphi (x)$
for all x ∈ H. The inner product on the dual space H* satisfies
$\langle \varphi ,\psi \rangle =\langle u_{\psi },u_{\varphi }\rangle \,.$
The reversal of order on the right-hand side restores linearity in φ from the antilinearity of uφ. In the real case, the antilinear isomorphism from H to its dual is actually an isomorphism, and so real Hilbert spaces are naturally isomorphic to their own duals.
The representing vector uφ is obtained in the following way. When φ ≠ 0, the kernel F = Ker(φ) is a closed vector subspace of H, not equal to H, hence there exists a nonzero vector v orthogonal to F. The vector u is a suitable scalar multiple λv of v. The requirement that φ(v) = ⟨v, u⟩ yields
$u=\langle v,v\rangle ^{-1}\,{\overline {\varphi (v)}}\,v\,.$
This correspondence φ ↔ u is exploited by the bra–ket notation popular in physics.[71] It is common in physics to assume that the inner product, denoted by ⟨x|y⟩, is linear on the right,
$\langle x|y\rangle =\langle y,x\rangle \,.$
The result ⟨x|y⟩ can be seen as the action of the linear functional ⟨x| (the bra) on the vector |y⟩ (the ket).
The Riesz representation theorem relies fundamentally not just on the presence of an inner product, but also on the completeness of the space. In fact, the theorem implies that the topological dual of any inner product space can be identified with its completion. An immediate consequence of the Riesz representation theorem is also that a Hilbert space H is reflexive, meaning that the natural map from H into its double dual space is an isomorphism.
Weakly-convergent sequences
Main article: Weak convergence (Hilbert space)
In a Hilbert space H, a sequence {xn} is weakly convergent to a vector x ∈ H when
$\lim _{n}\langle x_{n},v\rangle =\langle x,v\rangle $
for every v ∈ H.
For example, any orthonormal sequence {fn} converges weakly to 0, as a consequence of Bessel's inequality. Every weakly convergent sequence {xn} is bounded, by the uniform boundedness principle.
Conversely, every bounded sequence in a Hilbert space admits weakly convergent subsequences (Alaoglu's theorem).[72] This fact may be used to prove minimization results for continuous convex functionals, in the same way that the Bolzano–Weierstrass theorem is used for continuous functions on Rd. Among several variants, one simple statement is as follows:[73]
If f : H → R is a convex continuous function such that f(x) tends to +∞ when ‖x‖ tends to ∞, then f admits a minimum at some point x0 ∈ H.
This fact (and its various generalizations) is fundamental for direct methods in the calculus of variations. Minimization results for convex functionals are also a direct consequence of the slightly more abstract fact that closed bounded convex subsets in a Hilbert space H are weakly compact, since H is reflexive. The existence of weakly convergent subsequences is a special case of the Eberlein–Šmulian theorem.
Banach space properties
Any general property of Banach spaces continues to hold for Hilbert spaces. The open mapping theorem states that a continuous surjective linear transformation from one Banach space to another is an open mapping meaning that it sends open sets to open sets. A corollary is the bounded inverse theorem, that a continuous and bijective linear function from one Banach space to another is an isomorphism (that is, a continuous linear map whose inverse is also continuous). This theorem is considerably simpler to prove in the case of Hilbert spaces than in general Banach spaces.[74] The open mapping theorem is equivalent to the closed graph theorem, which asserts that a linear function from one Banach space to another is continuous if and only if its graph is a closed set.[75] In the case of Hilbert spaces, this is basic in the study of unbounded operators (see closed operator).
The (geometrical) Hahn–Banach theorem asserts that a closed convex set can be separated from any point outside it by means of a hyperplane of the Hilbert space. This is an immediate consequence of the best approximation property: if y is the element of a closed convex set F closest to x, then the separating hyperplane is the plane perpendicular to the segment xy passing through its midpoint.[76]
Operators on Hilbert spaces
Bounded operators
The continuous linear operators A : H1 → H2 from a Hilbert space H1 to a second Hilbert space H2 are bounded in the sense that they map bounded sets to bounded sets.[77] Conversely, if an operator is bounded, then it is continuous. The space of such bounded linear operators has a norm, the operator norm given by
$\lVert A\rVert =\sup {\bigl \{}\|Ax\|\mathrel {\big |} \|x\|\leq 1{\bigr \}}\,.$
The sum and the composite of two bounded linear operators is again bounded and linear. For y in H2, the map that sends x ∈ H1 to ⟨Ax, y⟩ is linear and continuous, and according to the Riesz representation theorem can therefore be represented in the form
$\left\langle x,A^{*}y\right\rangle =\langle Ax,y\rangle $
for some vector A*y in H1. This defines another bounded linear operator A* : H2 → H1, the adjoint of A. The adjoint satisfies A** = A. When the Riesz representation theorem is used to identify each Hilbert space with its continuous dual space, the adjoint of A can be shown to be identical to the transpose tA : H2* → H1* of A, which by definition sends $\psi \in H_{2}^{*}$ to the functional $\psi \circ A\in H_{1}^{*}.$
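In finite dimensions the adjoint is the conjugate transpose, and the defining identity can be checked directly (an illustrative sketch; the inner product convention matches the article, linear in the first argument):

```python
import numpy as np

# In finite dimensions the adjoint is the conjugate transpose:
# <Ax, y> = <x, A*y>, with <a, b> = np.vdot(b, a) (linear in the first slot).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))  # A : C^4 -> C^3
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

lhs = np.vdot(y, A @ x)             # <Ax, y>
rhs = np.vdot(A.conj().T @ y, x)    # <x, A*y>
print(np.allclose(lhs, rhs))        # True
```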
The set B(H) of all bounded linear operators on H (meaning operators H → H), together with the addition and composition operations, the norm and the adjoint operation, is a C*-algebra, which is a type of operator algebra.
An element A of B(H) is called 'self-adjoint' or 'Hermitian' if A* = A. If A is Hermitian and ⟨Ax, x⟩ ≥ 0 for every x, then A is called 'nonnegative', written A ≥ 0; if equality holds only when x = 0, then A is called 'positive'. The set of self-adjoint operators admits a partial order, in which A ≥ B if A − B ≥ 0. If A has the form B*B for some B, then A is nonnegative; if B is invertible, then A is positive. A converse is also true in the sense that, for a non-negative operator A, there exists a unique non-negative square root B such that
$A=B^{2}=B^{*}B\,.$
In a sense made precise by the spectral theorem, self-adjoint operators can usefully be thought of as operators that are "real". An element A of B(H) is called normal if A*A = AA*. Normal operators decompose into the sum of a self-adjoint operator and an imaginary multiple of a self-adjoint operator
$A={\frac {A+A^{*}}{2}}+i{\frac {A-A^{*}}{2i}}$
that commute with each other. Normal operators can also usefully be thought of in terms of their real and imaginary parts.
An element U of B(H) is called unitary if U is invertible and its inverse is given by U*. This can also be expressed by requiring that U be onto and ⟨Ux, Uy⟩ = ⟨x, y⟩ for all x, y ∈ H. The unitary operators form a group under composition, which is the isometry group of H.
An element of B(H) is compact if it sends bounded sets to relatively compact sets. Equivalently, a bounded operator T is compact if, for any bounded sequence {xk}, the sequence {Txk} has a convergent subsequence. Many integral operators are compact, and in fact define a special class of operators known as Hilbert–Schmidt operators that are especially important in the study of integral equations. Fredholm operators differ from a compact operator by a multiple of the identity, and are equivalently characterized as operators with a finite dimensional kernel and cokernel. The index of a Fredholm operator T is defined by
$\operatorname {index} T=\dim \ker T-\dim \operatorname {coker} T\,.$
The index is homotopy invariant, and plays a deep role in differential geometry via the Atiyah–Singer index theorem.
Unbounded operators
Unbounded operators are also tractable in Hilbert spaces, and have important applications to quantum mechanics.[78] An unbounded operator T on a Hilbert space H is defined as a linear operator whose domain D(T) is a linear subspace of H. Often the domain D(T) is a dense subspace of H, in which case T is known as a densely defined operator.
The adjoint of a densely defined unbounded operator is defined in essentially the same manner as for bounded operators. Self-adjoint unbounded operators play the role of the observables in the mathematical formulation of quantum mechanics. Examples of self-adjoint unbounded operators on the Hilbert space L2(R) are:[79]
• A suitable extension of the differential operator
$(Af)(x)=-i{\frac {\mathrm {d} }{\mathrm {d} x}}f(x)\,,$
where i is the imaginary unit and f is a differentiable function of compact support.
• The multiplication-by-x operator:
$(Bf)(x)=xf(x)\,.$
These correspond to the momentum and position observables, respectively. Neither A nor B is defined on all of H, since in the case of A the derivative need not exist, and in the case of B the product function need not be square integrable. In both cases, the set of possible arguments forms a dense subspace of L2(R).
Constructions
Direct sums
Two Hilbert spaces H1 and H2 can be combined into another Hilbert space, called the (orthogonal) direct sum,[80] and denoted
$H_{1}\oplus H_{2}\,,$
consisting of the set of all ordered pairs (x1, x2) where xi ∈ Hi, i = 1, 2, and inner product defined by
${\bigl \langle }(x_{1},x_{2}),(y_{1},y_{2}){\bigr \rangle }_{H_{1}\oplus H_{2}}=\left\langle x_{1},y_{1}\right\rangle _{H_{1}}+\left\langle x_{2},y_{2}\right\rangle _{H_{2}}\,.$
More generally, if Hi is a family of Hilbert spaces indexed by i ∈ I, then the direct sum of the Hi, denoted
$\bigoplus _{i\in I}H_{i}$
consists of the set of all indexed families
$x=(x_{i}\in H_{i}\mid i\in I)\in \prod _{i\in I}H_{i}$
in the Cartesian product of the Hi such that
$\sum _{i\in I}\|x_{i}\|^{2}<\infty \,.$
The inner product is defined by
$\langle x,y\rangle =\sum _{i\in I}\left\langle x_{i},y_{i}\right\rangle _{H_{i}}\,.$
Each of the Hi is included as a closed subspace in the direct sum of all of the Hi. Moreover, the Hi are pairwise orthogonal. Conversely, if there is a system of closed subspaces, Vi, i ∈ I, in a Hilbert space H, that are pairwise orthogonal and whose linear span is dense in H, then H is canonically isomorphic to the direct sum of the Vi. In this case, H is called the internal direct sum of the Vi. A direct sum (internal or external) is also equipped with a family of orthogonal projections Ei onto the ith direct summand Hi. These projections are bounded, self-adjoint, idempotent operators that satisfy the orthogonality condition
$E_{i}E_{j}=0,\quad i\neq j\,.$
The spectral theorem for compact self-adjoint operators on a Hilbert space H states that H splits into an orthogonal direct sum of the eigenspaces of an operator, and also gives an explicit decomposition of the operator as a sum of projections onto the eigenspaces. The direct sum of Hilbert spaces also appears in quantum mechanics as the Fock space of a system containing a variable number of particles, where each Hilbert space in the direct sum corresponds to an additional degree of freedom for the quantum mechanical system. In representation theory, the Peter–Weyl theorem guarantees that any unitary representation of a compact group on a Hilbert space splits as the direct sum of finite-dimensional representations.
Tensor products
Main article: Tensor product of Hilbert spaces
If x1, y1 ∊ H1 and x2, y2 ∊ H2, then one defines an inner product on the (ordinary) tensor product as follows. On simple tensors, let
$\langle x_{1}\otimes x_{2},\,y_{1}\otimes y_{2}\rangle =\langle x_{1},y_{1}\rangle \,\langle x_{2},y_{2}\rangle \,.$
This formula then extends by sesquilinearity to an inner product on H1 ⊗ H2. The Hilbertian tensor product of H1 and H2, sometimes denoted by H1 ${\widehat {\otimes }}$ H2, is the Hilbert space obtained by completing H1 ⊗ H2 for the metric associated to this inner product.[81]
An example is provided by the Hilbert space L2([0, 1]). The Hilbertian tensor product of two copies of L2([0, 1]) is isometrically and linearly isomorphic to the space L2([0, 1]2) of square-integrable functions on the square [0, 1]2. This isomorphism sends a simple tensor f1 ⊗ f2 to the function
$(s,t)\mapsto f_{1}(s)\,f_{2}(t)$
on the square.
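On a discrete grid, this identification of a simple tensor with a function of two variables is just an outer product (an illustration with arbitrarily chosen sample functions):

```python
import numpy as np

# A simple tensor f1 ⊗ f2 sampled on a grid is the outer product of the samples,
# i.e. the matrix F with F[i, j] = f1(s_i) f2(t_j), a "function on the square".
s = np.linspace(0.0, 1.0, 100)
f1 = np.sin(np.pi * s)         # illustrative choice of f1
f2 = s ** 2                    # illustrative choice of f2
F = np.outer(f1, f2)
print(F.shape)                 # (100, 100)
```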
This example is typical in the following sense.[82] Associated to every simple tensor product x1 ⊗ x2 is the rank one operator from H1∗ to H2 that maps a given x* ∈ H1∗ as

$x^{*}\mapsto x^{*}(x_{1})x_{2}\,.$

This mapping defined on simple tensors extends to a linear identification between H1 ⊗ H2 and the space of finite rank operators from H1∗ to H2. This extends to a linear isometry of the Hilbertian tensor product H1 ${\widehat {\otimes }}$ H2 with the Hilbert space HS(H1∗, H2) of Hilbert–Schmidt operators from H1∗ to H2.
Orthonormal bases
The notion of an orthonormal basis from linear algebra generalizes over to the case of Hilbert spaces.[83] In a Hilbert space H, an orthonormal basis is a family {ek}k ∈ B of elements of H satisfying the conditions:
1. Orthogonality: Every two different elements of B are orthogonal: ⟨ek, ej⟩ = 0 for all k, j ∈ B with k ≠ j.
2. Normalization: Every element of the family has norm 1: ‖ek‖ = 1 for all k ∈ B.
3. Completeness: The linear span of the family ek, k ∈ B, is dense in H.
A system of vectors satisfying the first two conditions is called an orthonormal system or an orthonormal set (or an orthonormal sequence if B is countable). Such a system is always linearly independent.
Despite the name, an orthonormal basis is not, in general, a basis in the sense of linear algebra (Hamel basis). More precisely, an orthonormal basis is a Hamel basis if and only if the Hilbert space is a finite-dimensional vector space.[84]
Completeness of an orthonormal system of vectors of a Hilbert space can be equivalently restated as:
for every v ∈ H, if ⟨v, ek⟩ = 0 for all k ∈ B, then v = 0.
This is related to the fact that the only vector orthogonal to a dense linear subspace is the zero vector, for if S is any orthonormal set and v is orthogonal to S, then v is orthogonal to the closure of the linear span of S, which is the whole space.
Examples of orthonormal bases include:
• the set {(1, 0, 0), (0, 1, 0), (0, 0, 1)} forms an orthonormal basis of R3 with the dot product;
• the sequence { fn | n ∈ Z} with fn(x) = exp(2πinx) forms an orthonormal basis of the complex space L2([0, 1]);
In the infinite-dimensional case, then, an orthonormal basis is not a basis in the sense of linear algebra. That the span of the basis vectors is dense implies that every vector in the space can be written as the sum of an infinite series, and the orthogonality implies that this decomposition is unique.
Sequence spaces
The space $\ell _{2}$ of square-summable sequences is the set of infinite sequences[85]
$(c_{1},c_{2},c_{3},\dots )$
of real or complex numbers such that
$\left|c_{1}\right|^{2}+\left|c_{2}\right|^{2}+\left|c_{3}\right|^{2}+\cdots <\infty \,.$
This space has an orthonormal basis:
${\begin{aligned}e_{1}&=(1,0,0,\dots )\\e_{2}&=(0,1,0,\dots )\\&\ \ \vdots \end{aligned}}$
This space is the infinite-dimensional generalization of the $\ell _{2}^{n}$ space of finite-dimensional vectors. It is usually the first example used to show that in infinite-dimensional spaces, a set that is closed and bounded is not necessarily (sequentially) compact (as is the case in all finite-dimensional spaces). Indeed, the set of orthonormal vectors above shows this: it is an infinite sequence of vectors in the unit ball (i.e., the ball of points with norm less than or equal to one). This set is clearly bounded and closed; yet, no subsequence of these vectors converges to anything, and consequently the unit ball in $\ell _{2}$ is not compact. Intuitively, this is because "there is always another coordinate direction" into which the next elements of the sequence can evade.
One can generalize the space $\ell _{2}$ in many ways. For example, if B is any set, then one can form a Hilbert space of sequences with index set B, defined by[86]
$\ell ^{2}(B)={\biggl \{}x:B\xrightarrow {x} \mathbb {C} \mathrel {\bigg |} \sum _{b\in B}\left|x(b)\right|^{2}<\infty {\biggr \}}\,.$
The summation over B is here defined by
$\sum _{b\in B}\left|x(b)\right|^{2}=\sup \sum _{n=1}^{N}\left|x(b_{n})\right|^{2}$
the supremum being taken over all finite subsets of B. It follows that, for this sum to be finite, every element of l2(B) has only countably many nonzero terms. This space becomes a Hilbert space with the inner product
$\langle x,y\rangle =\sum _{b\in B}x(b){\overline {y(b)}}$
for all x, y ∈ l2(B). Here the sum also has only countably many nonzero terms, and is unconditionally convergent by the Cauchy–Schwarz inequality.
An orthonormal basis of l2(B) is indexed by the set B, given by
$e_{b}(b')={\begin{cases}1&{\text{if }}b=b'\\0&{\text{otherwise.}}\end{cases}}$
Bessel's inequality and Parseval's formula
Let f1, …, fn be a finite orthonormal system in H. For an arbitrary vector x ∈ H, let
$y=\sum _{j=1}^{n}\langle x,f_{j}\rangle \,f_{j}\,.$
Then ⟨x, fk⟩ = ⟨y, fk⟩ for every k = 1, …, n. It follows that x − y is orthogonal to each fk, hence x − y is orthogonal to y. Using the Pythagorean identity twice, it follows that
$\|x\|^{2}=\|x-y\|^{2}+\|y\|^{2}\geq \|y\|^{2}=\sum _{j=1}^{n}{\bigl |}\langle x,f_{j}\rangle {\bigr |}^{2}\,.$
Let {fi}, i ∈ I, be an arbitrary orthonormal system in H. Applying the preceding inequality to every finite subset J of I gives Bessel's inequality:[87]
$\sum _{i\in I}{\bigl |}\langle x,f_{i}\rangle {\bigr |}^{2}\leq \|x\|^{2},\quad x\in H$
(according to the definition of the sum of an arbitrary family of non-negative real numbers).
Geometrically, Bessel's inequality implies that the orthogonal projection of x onto the linear subspace spanned by the fi has norm that does not exceed that of x. In two dimensions, this is the assertion that the length of the leg of a right triangle may not exceed the length of the hypotenuse.
Bessel's inequality is a stepping stone to the stronger result called Parseval's identity, which governs the case when Bessel's inequality is actually an equality. By definition, if {ek}k ∈ B is an orthonormal basis of H, then every element x of H may be written as
$x=\sum _{k\in B}\left\langle x,e_{k}\right\rangle \,e_{k}\,.$
Even if B is uncountable, Bessel's inequality guarantees that the expression is well-defined and consists only of countably many nonzero terms. This sum is called the Fourier expansion of x, and the individual coefficients ⟨x, ek⟩ are the Fourier coefficients of x. Parseval's identity then asserts that
$\|x\|^{2}=\sum _{k\in B}|\langle x,e_{k}\rangle |^{2}\,.$
Conversely, if {ek} is an orthonormal set such that Parseval's identity holds for every x, then {ek} is an orthonormal basis.
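In Cn the identity can be checked with any orthonormal basis, for instance one produced by a QR factorization (an illustrative sketch):

```python
import numpy as np

# Parseval's identity in C^6 with an orthonormal basis from a QR factorization:
# ||x||^2 equals the sum of |<x, e_k>|^2 over the basis vectors e_k.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6)))
x = rng.standard_normal(6) + 1j * rng.standard_normal(6)

coeffs = Q.conj().T @ x        # Fourier coefficients <x, e_k> (columns of Q)
print(np.linalg.norm(x) ** 2, np.sum(np.abs(coeffs) ** 2))   # equal
```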
Hilbert dimension
As a consequence of Zorn's lemma, every Hilbert space admits an orthonormal basis; furthermore, any two orthonormal bases of the same space have the same cardinality, called the Hilbert dimension of the space.[88] For instance, since l2(B) has an orthonormal basis indexed by B, its Hilbert dimension is the cardinality of B (which may be a finite integer, or a countable or uncountable cardinal number).
The Hilbert dimension is not greater than the Hamel dimension (the usual dimension of a vector space). The two dimensions are equal if and only if one of them is finite.
As a consequence of Parseval's identity, if {ek}k ∈ B is an orthonormal basis of H, then the map Φ : H → l2(B) defined by Φ(x) = ⟨x, ek⟩k∈B is an isometric isomorphism of Hilbert spaces: it is a bijective linear mapping such that
${\bigl \langle }\Phi (x),\Phi (y){\bigr \rangle }_{l^{2}(B)}=\left\langle x,y\right\rangle _{H}$
for all x, y ∈ H. The cardinal number of B is the Hilbert dimension of H. Thus every Hilbert space is isometrically isomorphic to a sequence space l2(B) for some set B.
Separable spaces
By definition, a Hilbert space is separable provided it contains a dense countable subset. Along with Zorn's lemma, this means a Hilbert space is separable if and only if it admits a countable orthonormal basis. All infinite-dimensional separable Hilbert spaces are therefore isometrically isomorphic to l2.
In the past, Hilbert spaces were often required to be separable as part of the definition.[89]
In quantum field theory
Most spaces used in physics are separable, and since these are all isomorphic to each other, one often refers to any infinite-dimensional separable Hilbert space as "the Hilbert space" or just "Hilbert space".[90] Even in quantum field theory, most of the Hilbert spaces are in fact separable, as stipulated by the Wightman axioms. However, it is sometimes argued that non-separable Hilbert spaces are also important in quantum field theory, roughly because the systems in the theory possess an infinite number of degrees of freedom and any infinite Hilbert tensor product (of spaces of dimension greater than one) is non-separable.[91] For instance, a bosonic field can be naturally thought of as an element of a tensor product whose factors represent harmonic oscillators at each point of space. From this perspective, the natural state space of a boson might seem to be a non-separable space.[91] However, it is only a small separable subspace of the full tensor product that can contain physically meaningful fields (on which the observables can be defined). Another non-separable Hilbert space models the state of an infinite collection of particles in an unbounded region of space. An orthonormal basis of the space is indexed by the density of the particles, a continuous parameter, and since the set of possible densities is uncountable, the basis is not countable.[91]
Orthogonal complements and projections
If S is a subset of a Hilbert space H, the set of vectors orthogonal to S is defined by
$S^{\perp }=\left\{x\in H\mid \langle x,s\rangle =0\ {\text{ for all }}s\in S\right\}\,.$
The set S⊥ is a closed subspace of H (this can be proved easily using the linearity and continuity of the inner product) and so is itself a Hilbert space. If V is a closed subspace of H, then V⊥ is called the orthogonal complement of V. In fact, every x ∈ H can then be written uniquely as x = v + w, with v ∈ V and w ∈ V⊥. Therefore, H is the internal Hilbert direct sum of V and V⊥.
The linear operator PV : H → H that maps x to v is called the orthogonal projection onto V. There is a natural one-to-one correspondence between the set of all closed subspaces of H and the set of all bounded self-adjoint operators P such that P2 = P. Specifically,
Theorem — The orthogonal projection PV is a self-adjoint linear operator on H of norm ≤ 1 with the property $P_{V}^{2}=P_{V}$. Moreover, any self-adjoint linear operator E such that E2 = E is of the form PV, where V is the range of E. For every x in H, PV(x) is the unique element v of V that minimizes the distance ‖x − v‖.
This provides the geometrical interpretation of PV(x): it is the best approximation to x by elements of V.[92]
Projections PU and PV are called mutually orthogonal if PUPV = 0. This is equivalent to U and V being orthogonal as subspaces of H. The sum of the two projections PU and PV is a projection only if U and V are orthogonal to each other, and in that case PU + PV = PU+V.[93] The composite PUPV is generally not a projection; in fact, the composite is a projection if and only if the two projections commute, and in that case PUPV = PU∩V.[94]
By restricting the codomain to the Hilbert space V, the orthogonal projection PV gives rise to a projection mapping π : H → V; it is the adjoint of the inclusion mapping
$i:V\to H\,,$
meaning that
$\left\langle ix,y\right\rangle _{H}=\left\langle x,\pi y\right\rangle _{V}$
for all x ∈ V and y ∈ H.
The operator norm of the orthogonal projection PV onto a nonzero closed subspace V is equal to 1:
$\|P_{V}\|=\sup _{x\in H,x\neq 0}{\frac {\|P_{V}x\|}{\|x\|}}=1\,.$
Every closed subspace V of a Hilbert space is therefore the image of an operator P of norm one such that P2 = P. The property of possessing appropriate projection operators characterizes Hilbert spaces:[95]
• A Banach space of dimension higher than 2 is (isometrically) a Hilbert space if and only if, for every closed subspace V, there is an operator PV of norm one whose image is V such that $P_{V}^{2}=P_{V}$.
While this result characterizes the metric structure of a Hilbert space, the structure of a Hilbert space as a topological vector space can itself be characterized in terms of the presence of complementary subspaces:[96]
• A Banach space X is topologically and linearly isomorphic to a Hilbert space if and only if, to every closed subspace V, there is a closed subspace W such that X is equal to the internal direct sum V ⊕ W.
The orthogonal complement satisfies some more elementary results. It is a monotone function in the sense that if U ⊂ V, then V⊥ ⊆ U⊥ with equality holding if and only if V is contained in the closure of U. This result is a special case of the Hahn–Banach theorem. The closure of a subspace can be completely characterized in terms of the orthogonal complement: if V is a subspace of H, then the closure of V is equal to V⊥⊥. The orthogonal complement is thus a Galois connection on the partial order of subspaces of a Hilbert space. In general, the orthogonal complement of a sum of subspaces is the intersection of the orthogonal complements:[97]
${\biggl (}\sum _{i}V_{i}{\biggr )}^{\perp }=\bigcap _{i}V_{i}^{\perp }\,.$
If the Vi are in addition closed, then
${\overline {\sum _{i}V_{i}^{\perp }}}={\biggl (}\bigcap _{i}V_{i}{\biggr )}^{\perp }\,.$
Spectral theory
There is a well-developed spectral theory for self-adjoint operators in a Hilbert space, that is roughly analogous to the study of symmetric matrices over the reals or self-adjoint matrices over the complex numbers.[98] In the same sense, one can obtain a "diagonalization" of a self-adjoint operator as a suitable sum (actually an integral) of orthogonal projection operators.
The spectrum of an operator T, denoted σ(T), is the set of complex numbers λ such that T − λ lacks a continuous inverse. If T is bounded, then the spectrum is always a compact set in the complex plane, and lies inside the disc |z| ≤ ‖T‖. If T is self-adjoint, then the spectrum is real. In fact, it is contained in the interval [m, M] where
$m=\inf _{\|x\|=1}\langle Tx,x\rangle \,,\quad M=\sup _{\|x\|=1}\langle Tx,x\rangle \,.$
Moreover, m and M are both actually contained within the spectrum.
The eigenspaces of an operator T are given by
$H_{\lambda }=\ker(T-\lambda )\,.$
Unlike with finite matrices, not every element of the spectrum of T must be an eigenvalue: the linear operator T − λ may only lack an inverse because it is not surjective. Elements of the spectrum of an operator in the general sense are known as spectral values. Since spectral values need not be eigenvalues, the spectral decomposition is often more subtle than in finite dimensions.
However, the spectral theorem of a self-adjoint operator T takes a particularly simple form if, in addition, T is assumed to be a compact operator. The spectral theorem for compact self-adjoint operators states:[99]
• A compact self-adjoint operator T has only countably (or finitely) many spectral values. The spectrum of T has no limit point in the complex plane except possibly zero. The eigenspaces of T decompose H into an orthogonal direct sum:
$H=\bigoplus _{\lambda \in \sigma (T)}H_{\lambda }\,.$
Moreover, if Eλ denotes the orthogonal projection onto the eigenspace Hλ, then
$T=\sum _{\lambda \in \sigma (T)}\lambda E_{\lambda }\,,$
where the sum converges with respect to the norm on B(H).
This theorem plays a fundamental role in the theory of integral equations, as many integral operators are compact, in particular those that arise from Hilbert–Schmidt operators.
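In finite dimensions every self-adjoint operator is compact, and the theorem reduces to the eigendecomposition of a Hermitian matrix; a numerical sketch (illustrative):

```python
import numpy as np

# Finite-dimensional case of the theorem: a real symmetric T is the sum of its
# eigenvalues times the orthogonal projections onto the eigenspaces.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
T = (M + M.T) / 2.0            # self-adjoint

eigvals, eigvecs = np.linalg.eigh(T)
T_rebuilt = sum(lam * np.outer(v, v)          # lam * E_lam (rank-one projections here)
                for lam, v in zip(eigvals, eigvecs.T))
print(np.allclose(T, T_rebuilt))              # True
```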
The general spectral theorem for self-adjoint operators involves a kind of operator-valued Riemann–Stieltjes integral, rather than an infinite summation.[100] The spectral family associated to T associates to each real number λ an operator Eλ, which is the projection onto the nullspace of the operator (T − λ)+, where the positive part of a self-adjoint operator is defined by
$A^{+}={\tfrac {1}{2}}{\Bigl (}{\sqrt {A^{2}}}+A{\Bigr )}\,.$
The operators Eλ are monotone increasing relative to the partial order defined on self-adjoint operators; the eigenvalues correspond precisely to the jump discontinuities. One has the spectral theorem, which asserts
$T=\int _{\mathbb {R} }\lambda \,\mathrm {d} E_{\lambda }\,.$
The integral is understood as a Riemann–Stieltjes integral, convergent with respect to the norm on B(H). In particular, one has the ordinary scalar-valued integral representation
$\langle Tx,y\rangle =\int _{\mathbb {R} }\lambda \,\mathrm {d} \langle E_{\lambda }x,y\rangle \,.$
A somewhat similar spectral decomposition holds for normal operators, although because the spectrum may now contain non-real complex numbers, the operator-valued Stieltjes measure dEλ must instead be replaced by a resolution of the identity.
A major application of spectral methods is the spectral mapping theorem, which allows one to apply to a self-adjoint operator T any continuous complex function f defined on the spectrum of T by forming the integral
$f(T)=\int _{\sigma (T)}f(\lambda )\,\mathrm {d} E_{\lambda }\,.$
The resulting continuous functional calculus has applications in particular to pseudodifferential operators.[101]
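In the finite-dimensional case the functional calculus amounts to applying f to the eigenvalues while keeping the spectral projections; the following sketch (illustrative, with f = exp, checked against partial sums of the power series) makes this concrete:

```python
import numpy as np

# f(T) for self-adjoint T: apply f to the eigenvalues, keep the eigenprojections.
# Here f = exp, verified against partial sums of the power series sum T^k / k!.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
T = (M + M.T) / 2.0

eigvals, eigvecs = np.linalg.eigh(T)
exp_T = eigvecs @ np.diag(np.exp(eigvals)) @ eigvecs.T   # spectral definition of exp(T)

series, term = np.eye(4), np.eye(4)
for k in range(1, 40):
    term = term @ T / k
    series = series + term
print(np.allclose(exp_T, series))                        # True
```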
The spectral theory of unbounded self-adjoint operators is only marginally more difficult than for bounded operators. The spectrum of an unbounded operator is defined in precisely the same way as for bounded operators: λ is a spectral value if the resolvent operator
$R_{\lambda }=(T-\lambda )^{-1}$
fails to be a well-defined continuous operator. The self-adjointness of T still guarantees that the spectrum is real. Thus the essential idea of working with unbounded operators is to look instead at the resolvent Rλ where λ is nonreal. This is a bounded normal operator, which admits a spectral representation that can then be transferred to a spectral representation of T itself. A similar strategy is used, for instance, to study the spectrum of the Laplace operator: rather than address the operator directly, one instead looks at an associated resolvent such as a Riesz potential or Bessel potential.
A precise version of the spectral theorem in this case is:[102]
Theorem — Given a densely defined self-adjoint operator T on a Hilbert space H, there corresponds a unique resolution of the identity E on the Borel sets of R, such that
$\langle Tx,y\rangle =\int _{\mathbb {R} }\lambda \,\mathrm {d} E_{x,y}(\lambda )$
for all x ∈ D(T) and y ∈ H. The spectral measure E is concentrated on the spectrum of T.
There is also a version of the spectral theorem that applies to unbounded normal operators.
In popular culture
In Gravity's Rainbow (1973), a novel by Thomas Pynchon, one of the characters is called "Sammy Hilbert-Spaess", a pun on "Hilbert Space". The novel refers also to Gödel's incompleteness theorems.[103]
See also
• Banach space – Normed vector space that is complete
• Fock space – Multi particle state space
• Fundamental theorem of Hilbert spaces
• Hadamard space – geodesically complete metric space of non-positive curvature
• Hausdorff space – Type of topological space
• Hilbert algebra
• Hilbert C*-module – Mathematical objects that generalise the notion of Hilbert spaces
• Hilbert manifold – manifold modeled on Hilbert spaces; separable Hausdorff space in which each point has a neighborhood homeomorphic to an infinite dimensional Hilbert space
• L-semi-inner product – Generalization of inner products that applies to all normed spaces
• Locally convex topological vector space – A vector space with a topology defined by convex open sets
• Operator theory – Mathematical field of study
• Operator topologies – Topologies on the set of operators on a Hilbert space
• Rigged Hilbert space – Construction linking the study of "bound" and continuous eigenvalues in functional analysis
• Topological vector space – Vector space with a notion of nearness
Remarks
1. In some conventions, inner products are linear in their second arguments instead.
2. The eigenvalues of the Fredholm kernel are 1/λ, which tend to zero.
Notes
1. Axler 2014, p. 164 §6.2
2. However, some sources call finite-dimensional spaces with these properties pre-Hilbert spaces, reserving the term "Hilbert space" for infinite-dimensional spaces; see, e.g., Levitan 2001.
3. Marsden 1974, §2.8
4. The mathematical material in this section can be found in any good textbook on functional analysis, such as Dieudonné (1960), Hewitt & Stromberg (1965), Reed & Simon (1980) or Rudin (1987).
5. Schaefer & Wolff 1999, pp. 122–202.
6. Dieudonné 1960, §6.2
7. Roman 2008, p. 327
8. Roman 2008, p. 330 Theorem 13.8
9. Stein & Shakarchi 2005, p. 163
10. Dieudonné 1960
11. Largely from the work of Hermann Grassmann, at the urging of August Ferdinand Möbius (Boyer & Merzbach 1991, pp. 584–586). The first modern axiomatic account of abstract vector spaces ultimately appeared in Giuseppe Peano's 1888 account (Grattan-Guinness 2000, §5.2.2; O'Connor & Robertson 1996).
12. A detailed account of the history of Hilbert spaces can be found in Bourbaki 1987.
13. Schmidt 1908
14. Titchmarsh 1946, §IX.1
15. Lebesgue 1904. Further details on the history of integration theory can be found in Bourbaki (1987) and Saks (2005).
16. Bourbaki 1987.
17. Dunford & Schwartz 1958, §IV.16
18. In Dunford & Schwartz (1958, §IV.16), the result that every linear functional on L2[0,1] is represented by integration is jointly attributed to Fréchet (1907) and Riesz (1907). The general result, that the dual of a Hilbert space is identified with the Hilbert space itself, can be found in Riesz (1934).
19. von Neumann 1929.
20. Kline 1972, p. 1092
21. Hilbert, Nordheim & von Neumann 1927
22. Weyl 1931.
23. Prugovečki 1981, pp. 1–10.
24. von Neumann 1932
25. Peres 1993, pp. 79–99.
26. Murphy 1990, p. 112
27. Murphy 1990, p. 72
28. Halmos 1957, Section 42.
29. Hewitt & Stromberg 1965.
30. Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 22". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 773. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
31. Bers, John & Schechter 1981.
32. Giusti 2003.
33. Stein 1970
34. Details can be found in Warner (1983).
35. A general reference on Hardy spaces is the book Duren (1970).
36. Krantz 2002, §1.4
37. Krantz 2002, §1.5
38. Young 1988, Chapter 9.
39. Pedersen 1995, §4.4
40. More detail on finite element methods from this point of view can be found in Brenner & Scott (2005).
41. Brezis 2010, section 9.5
42. Evans 1998
43. Reed & Simon 1980
44. A treatment of Fourier series from this point of view is available, for instance, in Rudin (1987) or Folland (2009).
45. Halmos 1957, §5
46. Bachman, Narici & Beckenstein 2000
47. Stein & Weiss 1971, §IV.2.
48. Lanczos 1988, pp. 212–213
49. Lanczos 1988, Equation 4-3.10
50. The classic reference for spectral methods is Courant & Hilbert 1953. A more up-to-date account is Reed & Simon 1975.
51. Kac 1966
52. von Neumann 1955
53. Holevo 2001, p. 17
54. Rieffel & Polak 2011, p. 55
55. Peres 1993, p. 101
56. Peres 1993, pp. 73
57. Nielsen & Chuang 2000, p. 90
58. Billingsley (1986), p. 477, ex. 34.13
59. Karatzas & Shreve 2019, Chapter 3
60. Hermann Weyl (2009), "Mind and nature", Mind and nature: , selected writings on philosophy, mathematics, and physics, Princeton University Press.
61. Berthier, M. (2020), "Geometry of color perception. Part 2: perceived colors from real quantum states and Hering's rebit", The Journal of Mathematical Neuroscience, 10 (1): 14, doi:10.1186/s13408-020-00092-x, PMC 7481323, PMID 32902776.
62. Reed & Simon 1980, Theorem 12.6
63. Reed & Simon 1980, p. 38
64. Young 1988, p. 23.
65. Clarkson 1936.
66. Rudin 1987, Theorem 4.10
67. Dunford & Schwartz 1958, II.4.29
68. Rudin 1987, Theorem 4.11
69. Blanchet, Gérard; Charbit, Maurice (2014). Digital Signal and Image Processing Using MATLAB. Digital Signal and Image Processing. Vol. 1 (Second ed.). New Jersey: Wiley. pp. 349–360. ISBN 978-1848216402.
70. Weidmann 1980, Theorem 4.8
71. Peres 1993, pp. 77–78.
72. Weidmann 1980, §4.5
73. Buttazzo, Giaquinta & Hildebrandt 1998, Theorem 5.17
74. Halmos 1982, Problem 52, 58
75. Rudin 1973
76. Trèves 1967, Chapter 18
77. A general reference for this section is Rudin (1973), chapter 12.
78. See Prugovečki (1981), Reed & Simon (1980, Chapter VIII) and Folland (1989).
79. Prugovečki 1981, III, §1.4
80. Dunford & Schwartz 1958, IV.4.17-18
81. Weidmann 1980, §3.4
82. Kadison & Ringrose 1983, Theorem 2.6.4
83. Dunford & Schwartz 1958, §IV.4.
84. Roman 2008, p. 218
85. Stein & Shakarchi 2005, p. 163
86. Rudin 1987, Definition 3.7
87. For the case of finite index sets, see, for instance, Halmos 1957, §5. For infinite index sets, see Weidmann 1980, Theorem 3.6.
88. Levitan 2001. Many authors, such as Dunford & Schwartz (1958, §IV.4), refer to this just as the dimension. Unless the Hilbert space is finite dimensional, this is not the same thing as its dimension as a linear space (the cardinality of a Hamel basis).
89. Prugovečki 1981, I, §4.2
90. von Neumann (1955) defines a Hilbert space via a countable Hilbert basis, which amounts to an isometric isomorphism with l2. The convention still persists in most rigorous treatments of quantum mechanics; see for instance Sobrino 1996, Appendix B.
91. Streater & Wightman 1964, pp. 86–87
92. Young 1988, Theorem 15.3
93. von Neumann 1955, Theorem 16
94. von Neumann 1955, Theorem 14
95. Kakutani 1939
96. Lindenstrauss & Tzafriri 1971
97. Halmos 1957, §12
98. A general account of spectral theory in Hilbert spaces can be found in Riesz & Sz.-Nagy (1990). A more sophisticated account in the language of C*-algebras is in Rudin (1973) or Kadison & Ringrose (1997)
99. See, for instance, Riesz & Sz.-Nagy (1990, Chapter VI) or Weidmann 1980, Chapter 7. This result was already known to Schmidt (1908) in the case of operators arising from integral kernels.
100. Riesz & Sz.-Nagy 1990, §§107–108
101. Shubin 1987
102. Rudin 1973, Theorem 13.30.
103. Pynchon, Thomas (1973). Gravity's Rainbow. Viking Press. pp. 217, 275. ISBN 978-0143039945.
References
• Axler, Sheldon (18 December 2014), Linear Algebra Done Right, Undergraduate Texts in Mathematics (3rd ed.), Springer Publishing (published 2015), p. 296, ISBN 978-3-319-11079-0
• Bachman, George; Narici, Lawrence; Beckenstein, Edward (2000), Fourier and wavelet analysis, Universitext, Berlin, New York: Springer-Verlag, ISBN 978-0-387-98899-3, MR 1729490.
• Bers, Lipman; John, Fritz; Schechter, Martin (1981), Partial differential equations, American Mathematical Society, ISBN 978-0-8218-0049-2.
• Billingsley, Patrick (1986), Probability and measure, Wiley.
• Bourbaki, Nicolas (1986), Spectral theories, Elements of mathematics, Berlin: Springer-Verlag, ISBN 978-0-201-00767-1.
• Bourbaki, Nicolas (1987), Topological vector spaces, Elements of mathematics, Berlin: Springer-Verlag, ISBN 978-3-540-13627-9.
• Boyer, Carl Benjamin; Merzbach, Uta C (1991), A History of Mathematics (2nd ed.), John Wiley & Sons, Inc., ISBN 978-0-471-54397-8.
• Brenner, S.; Scott, R. L. (2005), The Mathematical Theory of Finite Element Methods (2nd ed.), Springer, ISBN 978-0-387-95451-6.
• Brezis, Haim (2010), Functional analysis, Sobolev spaces, and partial differential equations, Springer.
• Buttazzo, Giuseppe; Giaquinta, Mariano; Hildebrandt, Stefan (1998), One-dimensional variational problems, Oxford Lecture Series in Mathematics and its Applications, vol. 15, The Clarendon Press Oxford University Press, ISBN 978-0-19-850465-8, MR 1694383.
• Clarkson, J. A. (1936), "Uniformly convex spaces", Trans. Amer. Math. Soc., 40 (3): 396–414, doi:10.2307/1989630, JSTOR 1989630.
• Courant, Richard; Hilbert, David (1953), Methods of Mathematical Physics, Vol. I, Interscience.
• Dieudonné, Jean (1960), Foundations of Modern Analysis, Academic Press.
• Dirac, P.A.M. (1930), The Principles of Quantum Mechanics, Oxford: Clarendon Press.
• Dunford, N.; Schwartz, J.T. (1958), Linear operators, Parts I and II, Wiley-Interscience.
• Duren, P. (1970), Theory of Hp-Spaces, New York: Academic Press.
• Evans, L. C. (1998), Partial Differential Equations, Providence: American Mathematical Society, ISBN 0-8218-0772-2.
• Folland, Gerald B. (2009), Fourier analysis and its application (Reprint of Wadsworth and Brooks/Cole 1992 ed.), American Mathematical Society Bookstore, ISBN 978-0-8218-4790-9.
• Folland, Gerald B. (1989), Harmonic analysis in phase space, Annals of Mathematics Studies, vol. 122, Princeton University Press, ISBN 978-0-691-08527-2.
• Fréchet, Maurice (1907), "Sur les ensembles de fonctions et les opérations linéaires", C. R. Acad. Sci. Paris, 144: 1414–1416.
• Fréchet, Maurice (1904), "Sur les opérations linéaires", Transactions of the American Mathematical Society, 5 (4): 493–499, doi:10.2307/1986278, JSTOR 1986278.
• Giusti, Enrico (2003), Direct Methods in the Calculus of Variations, World Scientific, ISBN 978-981-238-043-2.
• Grattan-Guinness, Ivor (2000), The search for mathematical roots, 1870–1940, Princeton Paperbacks, Princeton University Press, ISBN 978-0-691-05858-0, MR 1807717.
• Halmos, Paul (1957), Introduction to Hilbert Space and the Theory of Spectral Multiplicity, Chelsea Pub. Co
• Halmos, Paul (1982), A Hilbert Space Problem Book, Springer-Verlag, ISBN 978-0-387-90685-0.
• Hewitt, Edwin; Stromberg, Karl (1965), Real and Abstract Analysis, New York: Springer-Verlag.
• Hilbert, David; Nordheim, Lothar Wolfgang; von Neumann, John (1927), "Über die Grundlagen der Quantenmechanik", Mathematische Annalen, 98: 1–30, doi:10.1007/BF01451579, S2CID 120986758.
• Holevo, Alexander S. (2001), Statistical Structure of Quantum Theory, Lecture Notes in Physics, Springer, ISBN 3-540-42082-7, OCLC 318268606.
• Kac, Mark (1966), "Can one hear the shape of a drum?", American Mathematical Monthly, 73 (4, part 2): 1–23, doi:10.2307/2313748, JSTOR 2313748.
• Kadison, Richard V.; Ringrose, John R. (1997), Fundamentals of the theory of operator algebras. Vol. I, Graduate Studies in Mathematics, vol. 15, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0819-1, MR 1468229.
• Kadison, Richard V.; Ringrose, John R. (1983), Fundamentals of the Theory of Operator Algebras, Vol. I: Elementary Theory, New York: Academic Press, Inc.
• Karatzas, Ioannis; Shreve, Steven (2019), Brownian Motion and Stochastic Calculus (2nd ed.), Springer, ISBN 978-0-387-97655-6
• Kakutani, Shizuo (1939), "Some characterizations of Euclidean space", Japanese Journal of Mathematics, 16: 93–97, doi:10.4099/jjm1924.16.0_93, MR 0000895.
• Kline, Morris (1972), Mathematical thought from ancient to modern times, Volume 3 (3rd ed.), Oxford University Press (published 1990), ISBN 978-0-19-506137-6.
• Kolmogorov, Andrey; Fomin, Sergei V. (1970), Introductory Real Analysis (Revised English edition, trans. by Richard A. Silverman (1975) ed.), Dover Press, ISBN 978-0-486-61226-3.
• Krantz, Steven G. (2002), Function Theory of Several Complex Variables, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2724-6.
• Lanczos, Cornelius (1988), Applied analysis (Reprint of 1956 Prentice-Hall ed.), Dover Publications, ISBN 978-0-486-65656-4.
• Lebesgue, Henri (1904), Leçons sur l'intégration et la recherche des fonctions primitives, Gauthier-Villars.
• Levitan, B.M. (2001) [1994], "Hilbert space", Encyclopedia of Mathematics, EMS Press.
• Lindenstrauss, J.; Tzafriri, L. (1971), "On the complemented subspaces problem", Israel Journal of Mathematics, 9 (2): 263–269, doi:10.1007/BF02771592, ISSN 0021-2172, MR 0276734, S2CID 119575718.
• Marsden, Jerrold E. (1974), Elementary classical analysis, W. H. Freeman and Co., MR 0357693.
• Murphy, Gerald J. (1990), C*-algebras and Operator Theory, Academic Press, ISBN 0-12-511360-9.
• von Neumann, John (1929), "Allgemeine Eigenwerttheorie Hermitescher Funktionaloperatoren", Mathematische Annalen, 102: 49–131, doi:10.1007/BF01782338, S2CID 121249803.
• von Neumann, John (1932), "Physical Applications of the Ergodic Hypothesis", Proc Natl Acad Sci USA, 18 (3): 263–266, Bibcode:1932PNAS...18..263N, doi:10.1073/pnas.18.3.263, JSTOR 86260, PMC 1076204, PMID 16587674.
• von Neumann, John (1955), Mathematical Foundations of Quantum Mechanics, Princeton Landmarks in Mathematics, translated by Beyer, Robert T., Princeton University Press (published 1996), ISBN 978-0-691-02893-4, MR 1435976.
• Nielsen, Michael A.; Chuang, Isaac L. (2000), Quantum Computation and Quantum Information (1st ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-63503-5, OCLC 634735192.
• O'Connor, John J.; Robertson, Edmund F. (1996), "Abstract linear spaces", MacTutor History of Mathematics Archive, University of St Andrews
• Pedersen, Gert (1995), Analysis Now, Graduate Texts in Mathematics, vol. 118, Berlin, New York: Springer-Verlag, ISBN 978-1-4612-6981-6, MR 0971256
• Peres, Asher (1993), Quantum Theory: Concepts and Methods, Kluwer, ISBN 0-7923-2549-4, OCLC 28854083
• Prugovečki, Eduard (1981), Quantum mechanics in Hilbert space (2nd ed.), Dover (published 2006), ISBN 978-0-486-45327-9.
• Reed, Michael; Simon, Barry (1980), Functional Analysis (vol I of 4 vols), Methods of Modern Mathematical Physics, Academic Press, ISBN 978-0-12-585050-6.
• Reed, Michael; Simon, Barry (1975), Fourier Analysis, Self-Adjointness (vol II of 4 vols), Methods of Modern Mathematical Physics, Academic Press, ISBN 9780125850025.
• Rieffel, Eleanor G.; Polak, Wolfgang H. (2011-03-04), Quantum Computing: A Gentle Introduction, MIT Press, ISBN 978-0-262-01506-6.
• Riesz, Frigyes (1907), "Sur une espèce de Géométrie analytique des systèmes de fonctions sommables", C. R. Acad. Sci. Paris, 144: 1409–1411.
• Riesz, Frigyes (1934), "Zur Theorie des Hilbertschen Raumes", Acta Sci. Math. Szeged, 7: 34–38.
• Riesz, Frigyes; Sz.-Nagy, Béla (1990), Functional analysis, Dover, ISBN 978-0-486-66289-3.
• Roman, Stephen (2008), Advanced Linear Algebra, Graduate Texts in Mathematics (Third ed.), Springer, ISBN 978-0-387-72828-5
• Rudin, Walter (1973). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 25 (First ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 9780070542259.
• Rudin, Walter (1987), Real and Complex Analysis, McGraw-Hill, ISBN 978-0-07-100276-9.
• Saks, Stanisław (2005), Theory of the integral (2nd Dover ed.), Dover, ISBN 978-0-486-44648-6; originally published Monografje Matematyczne, vol. 7, Warszawa, 1937.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Schmidt, Erhard (1908), "Über die Auflösung linearer Gleichungen mit unendlich vielen Unbekannten", Rend. Circ. Mat. Palermo, 25: 63–77, doi:10.1007/BF03029116, S2CID 120666844.
• Shubin, M. A. (1987), Pseudodifferential operators and spectral theory, Springer Series in Soviet Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-13621-7, MR 0883081.
• Sobrino, Luis (1996), Elements of non-relativistic quantum mechanics, River Edge, New Jersey: World Scientific Publishing Co. Inc., Bibcode:1996lnrq.book.....S, doi:10.1142/2865, ISBN 978-981-02-2386-1, MR 1626401.
• Stewart, James (2006), Calculus: Concepts and Contexts (3rd ed.), Thomson/Brooks/Cole.
• Stein, E (1970), Singular Integrals and Differentiability Properties of Functions, Princeton Univ. Press, ISBN 978-0-691-08079-6.
• Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton, N.J.: Princeton University Press, ISBN 978-0-691-08078-9.
• Stein, E; Shakarchi, R (2005), Real analysis, measure theory, integration, and Hilbert spaces, Princeton University Press.
• Streater, Ray; Wightman, Arthur (1964), PCT, Spin and Statistics and All That, W. A. Benjamin, Inc.
• Teschl, Gerald (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. Providence: American Mathematical Society. ISBN 978-0-8218-4660-5.
• Titchmarsh, Edward Charles (1946), Eigenfunction expansions, part 1, Oxford University: Clarendon Press.
• Trèves, François (1967), Topological Vector Spaces, Distributions and Kernels, Academic Press.
• Warner, Frank (1983), Foundations of Differentiable Manifolds and Lie Groups, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90894-6.
• Weidmann, Joachim (1980), Linear operators in Hilbert spaces, Graduate Texts in Mathematics, vol. 68, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90427-6, MR 0566954.
• Weyl, Hermann (1931), The Theory of Groups and Quantum Mechanics (English 1950 ed.), Dover Press, ISBN 978-0-486-60269-1.
• Young, Nicholas (1988), An introduction to Hilbert space, Cambridge University Press, ISBN 978-0-521-33071-8, Zbl 0645.46024.
External links
• "Hilbert space", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Hilbert space at Mathworld
• 245B, notes 5: Hilbert spaces by Terence Tao
Separable algebra
In mathematics, a separable algebra is a kind of semisimple algebra. It is a generalization to associative algebras of the notion of a separable field extension.
Definition and first properties
A ring homomorphism (of unital, but not necessarily commutative rings)
$K\to A$
is called separable (or a separable extension) if the multiplication map
$\mu :A\otimes _{K}A\to A,a\otimes b\mapsto ab$
admits a section
$\sigma :A\to A\otimes _{K}A$
by means of a homomorphism σ of A-A-bimodules. Such a section σ is determined by its value
$p:=\sigma (1)=\sum a_{i}\otimes b_{i}$
The condition that σ is a section of μ is equivalent to
$\sum a_{i}b_{i}=1$
and the condition that σ be a homomorphism of A-A-bimodules is equivalent to the following requirement for any a in A:
$\sum aa_{i}\otimes b_{i}=\sum a_{i}\otimes b_{i}a.$
Such an element p is called a separability idempotent, since it satisfies $p^{2}=p$.
Examples
For any commutative ring R, the (non-commutative) ring of n-by-n matrices $M_{n}(R)$ is a separable R-algebra. For any $1\leq j\leq n$, a separability idempotent is given by $\sum _{i=1}^{n}e_{ij}\otimes e_{ji}$, where $e_{ij}$ denotes the elementary matrix which is 0 except for the entry in position (i, j), which is 1. In particular, this shows that separability idempotents need not be unique.
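These identities can be checked numerically. The sketch below (an illustration, not from the source) models the tensor square $A\otimes _{K}A$ for $A=M_{n}(\mathbb {R} )$ by Kronecker products, using the fact that np.kron(a, b) @ np.kron(c, d) = np.kron(ac, bd); the names E and P are ad hoc.

```python
# Verify that p = sum_i e_{ij} (x) e_{ji} is a separability idempotent
# for the matrix algebra M_n(R), modelling x (x) y as np.kron(x, y).
import numpy as np

n, j = 3, 1                                   # any fixed column j works
def E(i, k):                                  # elementary matrix e_{ik}
    m = np.zeros((n, n)); m[i, k] = 1.0; return m

# mu(p) = sum_i e_{ij} e_{ji} must be the identity of M_n(R)
assert np.allclose(sum(E(i, j) @ E(j, i) for i in range(n)), np.eye(n))

# bimodule condition: (a (x) 1) p = p (1 (x) a), tested on a random a
P = sum(np.kron(E(i, j), E(j, i)) for i in range(n))
a = np.random.rand(n, n)
assert np.allclose(np.kron(a, np.eye(n)) @ P, P @ np.kron(np.eye(n), a))
```

Running the check for different values of j confirms the remark that separability idempotents need not be unique.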
Separable algebras over a field
A field extension L/K of finite degree is a separable extension if and only if L is separable as an associative K-algebra. If L/K has a primitive element $a$ with irreducible polynomial $p(x)=(x-a)\sum _{i=0}^{n-1}b_{i}x^{i}$, then a separability idempotent is given by $\sum _{i=0}^{n-1}a^{i}\otimes _{K}{\frac {b_{i}}{p'(a)}}$. The tensorands are dual bases for the trace map: if $\sigma _{1},\ldots ,\sigma _{n}$ are the distinct K-monomorphisms of L into an algebraic closure of K, the trace mapping Tr of L into K is defined by $Tr(x)=\sum _{i=1}^{n}\sigma _{i}(x)$. The trace map and its dual bases exhibit L as a Frobenius algebra over K.
More generally, separable algebras over a field K can be classified as follows: they are the same as finite products of matrix algebras over finite-dimensional division algebras whose centers are finite-dimensional separable field extensions of the field K. In particular, every separable algebra is itself finite-dimensional. If K is a perfect field (for example a field of characteristic zero, a finite field, or an algebraically closed field), then every extension of K is separable, so that separable K-algebras are finite products of matrix algebras over finite-dimensional division algebras over K. In other words, if K is a perfect field, there is no difference between a separable algebra over K and a finite-dimensional semisimple algebra over K. It can be shown by a generalized theorem of Maschke that an associative K-algebra A is separable if and only if the algebra $\scriptstyle A\otimes _{K}L$ is semisimple for every field extension $\scriptstyle L/K$.
Group rings
If K is a commutative ring and G is a finite group such that the order of G is invertible in K, then the group ring K[G] is a separable K-algebra.[1] A separability idempotent is given by ${\frac {1}{o(G)}}\sum _{g\in G}g\otimes g^{-1}$.
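This can again be verified directly; the sketch below (illustrative, with ad-hoc names) encodes elements of $K[G]\otimes _{K}K[G]$ for the cyclic group G = Z/3 over K = Q as dictionaries keyed by pairs of group elements.

```python
# Check the separability idempotent e = (1/|G|) sum_g g (x) g^{-1}
# for Q[Z/3], with Z/3 written additively (identity element 0).
from fractions import Fraction

n = 3
G = range(n)
e = {(g, (-g) % n): Fraction(1, n) for g in G}

# mu(e): multiply the two tensor factors and collect; must equal 1 = the identity 0
mu = {}
for (g, h), c in e.items():
    mu[(g + h) % n] = mu.get((g + h) % n, Fraction(0)) + c
assert mu == {0: Fraction(1)}

# bimodule condition: a.e = e.a for every group element a
for a in G:
    left  = {((a + g) % n, h): c for (g, h), c in e.items()}
    right = {(g, (h + a) % n): c for (g, h), c in e.items()}
    assert left == right
```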
Equivalent characterizations of separability
There are several equivalent definitions of separable algebras. A K-algebra A is separable if and only if it is projective when considered as a left module over $A^{e}$ in the usual way.[2] Moreover, an algebra A is separable if and only if it is flat when considered as a right module over $A^{e}$ in the usual way. Separable extensions can also be characterized by means of split extensions: A is separable over K if and only if all short exact sequences of A-A-bimodules that are split as A-K-bimodules also split as A-A-bimodules. Indeed, this condition is necessary since the multiplication mapping $\mu :A\otimes _{K}A\rightarrow A$ arising in the definition above is an A-A-bimodule epimorphism, which is split as an A-K-bimodule map by the right inverse mapping $A\rightarrow A\otimes _{K}A$ given by $a\mapsto a\otimes 1$. The converse can be proven by a judicious use of the separability idempotent (similarly to the proof of Maschke's theorem, applying its components inside and outside the splitting maps).[3]
Equivalently, the relative Hochschild cohomology groups $H^{n}(R,S;M)$ of (R,S) vanish for n > 0 and every coefficient bimodule M. Examples of separable extensions are plentiful, beginning with separable algebras themselves: take R a separable algebra and S the ground field, embedded in R as multiples of the identity. Any ring R with elements a and b satisfying ab = 1, but ba different from 1, is a separable extension over the subring S generated by 1 and bRa.
Relation to Frobenius algebras
A separable algebra is said to be strongly separable if there exists a separability idempotent that is symmetric, meaning
$e=\sum _{i=1}^{n}x_{i}\otimes y_{i}=\sum _{i=1}^{n}y_{i}\otimes x_{i}$
An algebra is strongly separable if and only if its trace form is nondegenerate, thus making the algebra into a particular kind of Frobenius algebra called a symmetric algebra (not to be confused with the symmetric algebra arising as the quotient of the tensor algebra).
If K is commutative and A is a separable K-algebra that is finitely generated and projective as a K-module, then A is a symmetric Frobenius algebra.[4]
Relation to formally unramified and formally étale extensions
Any separable extension A / K of commutative rings is formally unramified. The converse holds if A is a finitely generated K-algebra.[5] A separable flat (commutative) K-algebra A is formally étale.[6]
Further results
A theorem in this area is due to J. Cuadra: a separable Hopf–Galois extension R | S has R finitely generated as a natural S-module. A fundamental fact about a separable extension R | S is that it is a left or right semisimple extension: a short exact sequence of left or right R-modules that is split as S-modules is also split as R-modules. In terms of G. Hochschild's relative homological algebra, one says that all R-modules are relative (R,S)-projective. Relative properties of subrings or ring extensions, such as the notion of separable extension, usually serve to promote theorems saying that the over-ring shares a property of the subring. For example, a separable extension R of a semisimple algebra S is itself semisimple, which follows from the preceding discussion.
There is the celebrated theorem of Jans that a finite group algebra A over a field of characteristic p is of finite representation type if and only if its Sylow p-subgroup is cyclic. The clearest proof is to note this fact for p-groups, then to note that the group algebra is a separable extension of its Sylow p-subgroup algebra B, since the index is coprime to the characteristic. The separability condition above implies that every finitely generated A-module M is isomorphic to a direct summand of its restricted, induced module. But if B has finite representation type, the restricted module is uniquely a direct sum of multiples of finitely many indecomposables, which induce to a finite number of constituent indecomposable modules of which M is a direct sum. Hence A is of finite representation type if B is. The converse is proven by a similar argument, noting that every subgroup algebra B is a B-bimodule direct summand of the group algebra A.
References
1. Ford (2017, §4.2)
2. Reiner (2003, p. 102)
3. Ford (2017, Theorem 4.4.1)
4. Endo & Watanabe (1967, Theorem 4.2). If A is commutative, the proof is simpler, see Kadison (1999, Lemma 5.11)
5. Ford (2017, Corollary 4.7.2, Theorem 8.3.6)
6. Ford (2017, Corollary 4.7.3)
• DeMeyer, F.; Ingraham, E. (1971). Separable algebras over commutative rings. Lecture Notes in Mathematics. Vol. 181. Berlin-Heidelberg-New York: Springer-Verlag. ISBN 978-3-540-05371-2. Zbl 0215.36602.
• Samuel Eilenberg and Tadasi Nakayama, On the dimension of modules and algebras. II. Frobenius algebras and quasi-Frobenius rings, Nagoya Math. J. Volume 9 (1955), 1–16.
• Endo, Shizuo; Watanabe, Yutaka (1967), "On separable algebras over a commutative ring", Osaka Journal of Mathematics, 4: 233–242, MR 0227211
• Ford, Timothy J. (2017), Separable algebras, Providence, RI: American Mathematical Society, ISBN 978-1-4704-3770-1, MR 3618889
• Hirata, H.; Sugano, K. (1966), "On semisimple and separable extensions of noncommutative rings", J. Math. Soc. Jpn., 18: 360–373.
• Kadison, Lars (1999), New examples of Frobenius extensions, University Lecture Series, vol. 14, Providence, RI: American Mathematical Society, doi:10.1090/ulect/014, ISBN 0-8218-1962-3, MR 1690111
• Reiner, I. (2003), Maximal Orders, London Mathematical Society Monographs. New Series, vol. 28, Oxford University Press, ISBN 0-19-852673-3, Zbl 1024.16008
• Weibel, Charles A. (1994). An introduction to homological algebra. Cambridge Studies in Advanced Mathematics. Vol. 38. Cambridge University Press. ISBN 978-0-521-55987-4. MR 1269324. OCLC 36131259.
Separable extension
In field theory, a branch of algebra, an algebraic field extension $E/F$ is called a separable extension if for every $\alpha \in E$, the minimal polynomial of $\alpha $ over F is a separable polynomial (i.e., its formal derivative is not the zero polynomial, or equivalently it has no repeated roots in any extension field).[1] There is also a more general definition that applies when E is not necessarily algebraic over F. An extension that is not separable is said to be inseparable.
Every algebraic extension of a field of characteristic zero is separable, and every algebraic extension of a finite field is separable.[2] It follows that most extensions that are considered in mathematics are separable. Nevertheless, the concept of separability is important, as the existence of inseparable extensions is the main obstacle for extending many theorems proved in characteristic zero to non-zero characteristic. For example, the fundamental theorem of Galois theory is a theorem about normal extensions, which remains true in non-zero characteristic only if the extensions are also assumed to be separable.[3]
The opposite concept, a purely inseparable extension, also occurs naturally, as every algebraic extension may be decomposed uniquely as a purely inseparable extension of a separable extension. An algebraic extension $E/F$ of fields of non-zero characteristics p is a purely inseparable extension if and only if for every $\alpha \in E\setminus F$, the minimal polynomial of $\alpha $ over F is not a separable polynomial, or, equivalently, for every element x of E, there is a positive integer k such that $x^{p^{k}}\in F$.[4]
The simplest example of a (purely) inseparable extension is $E=\mathbb {F} _{p}(x)\supset F=\mathbb {F} _{p}(x^{p})$, fields of rational functions in the indeterminate x with coefficients in the finite field $\mathbb {F} _{p}=\mathbb {Z} /(p)$. The element $x\in E$ has minimal polynomial $f(X)=X^{p}-x^{p}\in F[X]$, having $f'\!(X)=0$ and a p-fold multiple root, as $f(X)=(X-x)^{p}\in E[X]$. This is a simple algebraic extension of degree p, as $E=F[x]$, but it is not a Galois extension since the Galois group ${\text{Gal}}(E/F)$ is trivial.
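The factorization $X^{p}-x^{p}=(X-x)^{p}$ used here is the "freshman's dream", which holds because the interior binomial coefficients vanish modulo p; a minimal illustrative check:

```python
# In characteristic p, (X - x)^p = X^p - x^p because p divides C(p, k)
# for 0 < k < p, so only the extreme terms of the binomial expansion survive.
from math import comb

p = 5
assert all(comb(p, k) % p == 0 for k in range(1, p))
```

For the same reason the formal derivative $pX^{p-1}$ of f vanishes modulo p, matching the definition of inseparability.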
Informal discussion
An arbitrary polynomial f with coefficients in some field F is said to have distinct roots or to be square-free if it has deg f roots in some extension field $E\supseteq F$. For instance, the polynomial g(X) = X² − 1 has precisely deg g = 2 roots in the complex plane, namely 1 and −1, and hence does have distinct roots. On the other hand, the polynomial h(X) = (X − 2)², which is the square of a non-constant polynomial, does not have distinct roots, as its degree is two and 2 is its only root.
Every polynomial may be factored in linear factors over an algebraic closure of the field of its coefficients. Therefore, the polynomial does not have distinct roots if and only if it is divisible by the square of a polynomial of positive degree. This is the case if and only if the greatest common divisor of the polynomial and its derivative is not a constant. Thus for testing if a polynomial is square-free, it is not necessary to consider explicitly any field extension nor to compute the roots.
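For instance, the two polynomials above can be tested with a computer algebra system; the following sketch (illustrative, using sympy) never computes a root:

```python
# Square-freeness via gcd(f, f'): a constant gcd means distinct roots.
from sympy import symbols, gcd, diff

X = symbols('X')
g = X**2 - 1           # distinct roots 1 and -1
h = (X - 2)**2         # repeated root 2
assert gcd(g, diff(g, X)) == 1        # constant: g is square-free
assert gcd(h, diff(h, X)) == X - 2    # non-constant: h is not
```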
In this context, the case of irreducible polynomials requires some care. A priori, it may seem that being divisible by a square is impossible for an irreducible polynomial, which has no non-constant divisor except itself. However, irreducibility depends on the ambient field, and a polynomial may be irreducible over F and reducible over some extension of F. Similarly, divisibility by a square depends on the ambient field. If an irreducible polynomial f over F is divisible by a square over some field extension, then (by the discussion above) the greatest common divisor of f and its derivative f′ is not constant. Note that the coefficients of f′ belong to the same field as those of f, and the greatest common divisor of two polynomials is independent of the ambient field, so the greatest common divisor of f and f′ has coefficients in F. Since f is irreducible over F, this greatest common divisor is necessarily f itself; but it also divides f′, whose degree is strictly less than the degree of f, so the derivative of f must be zero. This implies that the characteristic of the field is a prime number p, and f may be written
$f(x)=\sum _{i=0}^{k}a_{i}x^{pi}.$
A polynomial such as this one, whose formal derivative is zero, is said to be inseparable. Polynomials that are not inseparable are said to be separable. A separable extension is an extension that may be generated by separable elements, that is elements whose minimal polynomials are separable.
Separable and inseparable polynomials
An irreducible polynomial f in F[X] is separable if and only if it has distinct roots in any extension of F (that is if it may be factored in distinct linear factors over an algebraic closure of F).[5] Let f in F[X] be an irreducible polynomial and f ' its formal derivative. Then the following are equivalent conditions for the irreducible polynomial f to be separable:
• If E is an extension of F in which f is a product of linear factors then no square of these factors divides f in E[X] (that is f is square-free over E).[6]
• There exists an extension E of F such that f has deg(f) pairwise distinct roots in E.[6]
• The constant 1 is a polynomial greatest common divisor of f and f '.[7]
• The formal derivative f ' of f is not the zero polynomial.[8]
• Either the characteristic of F is zero, or the characteristic is p, and f is not of the form $\textstyle \sum _{i=0}^{k}a_{i}X^{pi}.$
Since the formal derivative of a positive degree polynomial can be zero only if the field has prime characteristic, for an irreducible polynomial to not be separable, its coefficients must lie in a field of prime characteristic. More generally, an irreducible (non-zero) polynomial f in F[X] is not separable, if and only if the characteristic of F is a (non-zero) prime number p, and f(X)=g(Xp) for some irreducible polynomial g in F[X].[9] By repeated application of this property, it follows that in fact, $f(X)=g(X^{p^{n}})$ for a non-negative integer n and some separable irreducible polynomial g in F[X] (where F is assumed to have prime characteristic p).[10]
If the Frobenius endomorphism $x\mapsto x^{p}$ of F is not surjective, there is an element $a\in F$ which is not a pth power of an element of F. In this case, the polynomial $X^{p}-a$ is irreducible and inseparable. Conversely, if there exists an inseparable irreducible (non-zero) polynomial $\textstyle f(X)=\sum a_{i}X^{ip}$ in F[X], then the Frobenius endomorphism of F cannot be an automorphism, since, otherwise, we would have $a_{i}=b_{i}^{p}$ for some $b_{i}$, and the polynomial f would factor as $\textstyle \sum a_{i}X^{ip}=\left(\sum b_{i}X^{i}\right)^{p}.$[11]
If K is a finite field of prime characteristic p, and if X is an indeterminate, then the field of rational functions over K, K(X), is necessarily imperfect, and the polynomial $f(Y)=Y^{p}-X$ is inseparable (its formal derivative in Y is 0).[1] More generally, if F is any field of (non-zero) prime characteristic for which the Frobenius endomorphism is not an automorphism, F possesses an inseparable algebraic extension.[12]
A field F is perfect if and only if all irreducible polynomials are separable. It follows that F is perfect if and only if either F has characteristic zero, or F has (non-zero) prime characteristic p and the Frobenius endomorphism of F is an automorphism. This includes every finite field.
Separable elements and separable extensions
Let $E\supseteq F$ be a field extension. An element $\alpha \in E$ is separable over F if it is algebraic over F, and its minimal polynomial is separable (the minimal polynomial of an element is necessarily irreducible).
If $\alpha ,\beta \in E$ are separable over F, then $\alpha +\beta $, $\alpha \beta $ and $1/\alpha $ are separable over F.
Thus the set of all elements in E separable over F forms a subfield of E, called the separable closure of F in E.[13]
The separable closure of F in an algebraic closure of F is simply called the separable closure of F. Like the algebraic closure, it is unique up to an isomorphism, and in general, this isomorphism is not unique.
A field extension $E\supseteq F$ is separable if E is the separable closure of F in E. This is the case if and only if E is generated over F by separable elements.
If $E\supseteq L\supseteq F$ are field extensions, then E is separable over F if and only if E is separable over L and L is separable over F.[14]
If $E\supseteq F$ is a finite extension (that is E is a F-vector space of finite dimension), then the following are equivalent.
1. E is separable over F.
2. $E=F(a_{1},\ldots ,a_{r})$ where $a_{1},\ldots ,a_{r}$ are separable elements of E.
3. $E=F(a)$ where a is a separable element of E.
4. If K is an algebraic closure of F, then there are exactly $[E:F]$ field homomorphisms of E into K which fix F.
5. For any normal extension K of F which contains E, then there are exactly $[E:F]$ field homomorphisms of E into K which fix F.
The equivalence of 3. and 1. is known as the primitive element theorem or Artin's theorem on primitive elements. Properties 4. and 5. are the basis of Galois theory, and, in particular, of the fundamental theorem of Galois theory.
Separable extensions within algebraic extensions
Let $E\supseteq F$ be an algebraic extension of fields of characteristic p. The separable closure of F in E is $S=\{\alpha \in E\mid \alpha {\text{ is separable over }}F\}.$ For every element $x\in E\setminus S$ there exists a positive integer k such that $x^{p^{k}}\in S,$ and thus E is a purely inseparable extension of S. It follows that S is the unique intermediate field that is separable over F and over which E is purely inseparable.[15]
If $E\supseteq F$ is a finite extension, its degree [E : F] is the product of the degrees [S : F] and [E : S]. The former, often denoted [E : F]sep, is referred to as the separable part of [E : F], or as the separable degree of E/F; the latter is referred to as the inseparable part of the degree or the inseparable degree.[16] The inseparable degree is 1 in characteristic zero and a power of p in characteristic p > 0.[17]
On the other hand, an arbitrary algebraic extension $E\supseteq F$ may not possess an intermediate extension K that is purely inseparable over F and over which E is separable. However, such an intermediate extension may exist if, for example, $E\supseteq F$ is a finite degree normal extension (in this case, K is the fixed field of the Galois group of E over F). If such an intermediate extension does exist and [E : F] is finite, then [S : F] = [E : K], where S is the separable closure of F in E.[18] The known proofs of this equality use the fact that if $K\supseteq F$ is a purely inseparable extension, and if f is a separable irreducible polynomial in F[X], then f remains irreducible in K[X].[19] This equality implies that, if [E : F] is finite, and U is an intermediate field between F and E, then [E : F]sep = [E : U]sep⋅[U : F]sep.[20]
The separable closure $F^{\mathrm {sep} }$ of a field F is the separable closure of F in an algebraic closure of F. It is the maximal Galois extension of F. By definition, F is perfect if and only if its separable and algebraic closures coincide.
Separability of transcendental extensions
Separability problems may arise when dealing with transcendental extensions. This is typically the case for algebraic geometry over a field of prime characteristic, where the function field of an algebraic variety has a transcendence degree over the ground field that is equal to the dimension of the variety.
For defining the separability of a transcendental extension, it is natural to use the fact that every field extension is an algebraic extension of a purely transcendental extension. This leads to the following definition.
A separating transcendence basis of an extension $E\supseteq F$ is a transcendence basis T of E such that E is a separable algebraic extension of F(T). A finitely generated field extension is separable if and only if it has a separating transcendence basis; an extension that is not finitely generated is called separable if every finitely generated subextension has a separating transcendence basis.[21]
Let $E\supseteq F$ be a field extension of characteristic exponent p (that is p = 1 in characteristic zero and, otherwise, p is the characteristic). The following properties are equivalent:
• E is a separable extension of F,
• $E^{p}$ and F are linearly disjoint over $F^{p},$
• $F^{1/p}\otimes _{F}E$ is reduced,
• $L\otimes _{F}E$ is reduced for every field extension L of E,
where $\otimes _{F}$ denotes the tensor product of fields, $F^{p}$ is the field of the pth powers of the elements of F (for any field F), and $F^{1/p}$ is the field obtained by adjoining to F the pth root of all its elements (see Separable algebra for details).
Differential criteria
Separability can be studied with the aid of derivations. Let E be a finitely generated field extension of a field F. Denoting $\operatorname {Der} _{F}(E,E)$ the E-vector space of the F-linear derivations of E, one has
$\dim _{E}\operatorname {Der} _{F}(E,E)\geq \operatorname {tr.deg} _{F}E,$
and the equality holds if and only if E is separable over F (here "tr.deg" denotes the transcendence degree).
In particular, if $E/F$ is an algebraic extension, then $\operatorname {Der} _{F}(E,E)=0$ if and only if $E/F$ is separable.[22]
Let $D_{1},\ldots ,D_{m}$ be a basis of $\operatorname {Der} _{F}(E,E)$ and $a_{1},\ldots ,a_{m}\in E$. Then $E$ is separable algebraic over $F(a_{1},\ldots ,a_{m})$ if and only if the matrix $D_{i}(a_{j})$ is invertible. In particular, when $m=\operatorname {tr.deg} _{F}E$, this matrix is invertible if and only if $\{a_{1},\ldots ,a_{m}\}$ is a separating transcendence basis.
Notes
1. Isaacs, p. 281
2. Isaacs, Theorem 18.11, p. 281
3. Isaacs, Theorem 18.13, p. 282
4. Isaacs, p. 298
5. Isaacs, p. 280
6. Isaacs, Lemma 18.7, p. 280
7. Isaacs, Theorem 19.4, p. 295
8. Isaacs, Corollary 19.5, p. 296
9. Isaacs, Corollary 19.6, p. 296
10. Isaacs, Corollary 19.9, p. 298
11. Isaacs, Theorem 19.7, p. 297
12. Isaacs, p. 299
13. Isaacs, Lemma 19.15, p. 300
14. Isaacs, Corollary 18.12, p. 281 and Corollary 19.17, p. 301
15. Isaacs, Theorem 19.14, p. 300
16. Isaacs, p. 302
17. Lang 2002, Corollary V.6.2
18. Isaacs, Theorem 19.19, p. 302
19. Isaacs, Lemma 19.20, p. 302
20. Isaacs, Corollary 19.21, p. 303
21. Fried & Jarden (2008) p.38
22. Fried & Jarden (2008) p.49
References
• Borel, A. Linear algebraic groups, 2nd ed.
• Cohn, P. M. (2003). Basic algebra.
• Fried, Michael D.; Jarden, Moshe (2008). Field arithmetic. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. Vol. 11 (3rd ed.). Springer-Verlag. ISBN 978-3-540-77269-9. Zbl 1145.12001.
• I. Martin Isaacs (1993). Algebra, a graduate course (1st ed.). Brooks/Cole Publishing Company. ISBN 0-534-19002-2.
• Kaplansky, Irving (1972). Fields and rings. Chicago lectures in mathematics (Second ed.). University of Chicago Press. pp. 55–59. ISBN 0-226-42451-0. Zbl 1001.16500.
• M. Nagata (1985). Commutative field theory: new edition, Shokabo. (Japanese)
• Silverman, Joseph (1993). The Arithmetic of Elliptic Curves. Springer. ISBN 0-387-96203-4.
External links
• "separable extension of a field k", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Connectivity (graph theory)
In mathematics and computer science, connectivity is one of the basic concepts of graph theory: it asks for the minimum number of elements (nodes or edges) that need to be removed to separate the remaining nodes into two or more isolated subgraphs.[1] It is closely related to the theory of network flow problems. The connectivity of a graph is an important measure of its resilience as a network.
Connected vertices and graphs
In an undirected graph G, two vertices u and v are called connected if G contains a path from u to v. Otherwise, they are called disconnected. If the two vertices are additionally connected by a path of length 1, i.e. by a single edge, the vertices are called adjacent.
A graph is said to be connected if every pair of vertices in the graph is connected. This means that there is a path between every pair of vertices. An undirected graph that is not connected is called disconnected. An undirected graph G is therefore disconnected if there exist two vertices in G such that no path in G has these vertices as endpoints. A graph with just one vertex is connected. An edgeless graph with two or more vertices is disconnected.
A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph. It is unilaterally connected or unilateral (also called semiconnected) if it contains a directed path from u to v or a directed path from v to u for every pair of vertices u, v.[2] It is strongly connected, or simply strong, if it contains a directed path from u to v and a directed path from v to u for every pair of vertices u, v.
Components and cuts
A connected component is a maximal connected subgraph of an undirected graph. Each vertex belongs to exactly one connected component, as does each edge. A graph is connected if and only if it has exactly one connected component.
The strong components are the maximal strongly connected subgraphs of a directed graph.
A vertex cut or separating set of a connected graph G is a set of vertices whose removal renders G disconnected. The vertex connectivity κ(G) (where G is not a complete graph) is the size of a minimal vertex cut. A graph is called k-vertex-connected or k-connected if its vertex connectivity is k or greater.
More precisely, any graph G (complete or not) is said to be k-vertex-connected if it contains at least k+1 vertices, but does not contain a set of k − 1 vertices whose removal disconnects the graph; and κ(G) is defined as the largest k such that G is k-connected. In particular, a complete graph with n vertices, denoted Kn, has no vertex cuts at all, but κ(Kn) = n − 1.
A vertex cut for two vertices u and v is a set of vertices whose removal from the graph disconnects u and v. The local connectivity κ(u, v) is the size of a smallest vertex cut separating u and v. Local connectivity is symmetric for undirected graphs; that is, κ(u, v) = κ(v, u). Moreover, except for complete graphs, κ(G) equals the minimum of κ(u, v) over all nonadjacent pairs of vertices u, v.
2-connectivity is also called biconnectivity and 3-connectivity is also called triconnectivity. A graph G which is connected but not 2-connected is sometimes called separable.
Analogous concepts can be defined for edges. In the simple case in which cutting a single, specific edge would disconnect the graph, that edge is called a bridge. More generally, an edge cut of G is a set of edges whose removal renders the graph disconnected. The edge-connectivity λ(G) is the size of a smallest edge cut, and the local edge-connectivity λ(u, v) of two vertices u, v is the size of a smallest edge cut disconnecting u from v. Again, local edge-connectivity is symmetric. A graph is called k-edge-connected if its edge connectivity is k or greater.
A graph is said to be maximally connected if its connectivity equals its minimum degree. A graph is said to be maximally edge-connected if its edge-connectivity equals its minimum degree.[3]
Super- and hyper-connectivity
A graph is said to be super-connected or super-κ if every minimum vertex cut isolates a vertex. A graph is said to be hyper-connected or hyper-κ if the deletion of each minimum vertex cut creates exactly two components, one of which is an isolated vertex. A graph is semi-hyper-connected or semi-hyper-κ if any minimum vertex cut separates the graph into exactly two components.[4]
More precisely: a connected graph G is said to be super-connected or super-κ if all minimum vertex-cuts consist of the vertices adjacent with one (minimum-degree) vertex. A connected graph G is said to be super-edge-connected or super-λ if all minimum edge-cuts consist of the edges incident on some (minimum-degree) vertex.[5]
A cutset X of G is called a non-trivial cutset if X does not contain the neighborhood N(u) of any vertex u ∉ X. Then the superconnectivity κ1 of G is:
κ1(G) = min{|X| : X is a non-trivial cutset}.
A non-trivial edge-cut and the edge-superconnectivity λ1(G) are defined analogously.[6]
Menger's theorem
Main article: Menger's theorem
One of the most important facts about connectivity in graphs is Menger's theorem, which characterizes the connectivity and edge-connectivity of a graph in terms of the number of independent paths between vertices.
If u and v are vertices of a graph G, then a collection of paths between u and v is called independent if no two of them share a vertex (other than u and v themselves). Similarly, the collection is edge-independent if no two paths in it share an edge. The number of mutually independent paths between u and v is written as κ′(u, v), and the number of mutually edge-independent paths between u and v is written as λ′(u, v).
Menger's theorem asserts that for distinct vertices u,v, λ(u, v) equals λ′(u, v), and if u is also not adjacent to v then κ(u, v) equals κ′(u, v).[7][8] This fact is actually a special case of the max-flow min-cut theorem.
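The theorem is easy to probe on small graphs; the sketch below (illustrative, assuming the connectivity helpers of the networkx library) compares local connectivities with counts of disjoint paths on a 5-cycle:

```python
# Menger's theorem on C_5: local (edge-)connectivity equals the maximum
# number of internally (edge-)disjoint u-v paths.
import networkx as nx

G = nx.cycle_graph(5)
u, v = 0, 2                            # non-adjacent vertices of the cycle
assert nx.node_connectivity(G, u, v) == len(list(nx.node_disjoint_paths(G, u, v))) == 2
assert nx.edge_connectivity(G, u, v) == len(list(nx.edge_disjoint_paths(G, u, v))) == 2
```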
Computational aspects
The problem of determining whether two vertices in a graph are connected can be solved efficiently using a search algorithm, such as breadth-first search. More generally, it is easy to determine computationally whether a graph is connected (for example, by using a disjoint-set data structure), or to count the number of connected components. A simple algorithm might be written in pseudo-code as follows (a Python rendering of the same steps appears after the list):
1. Begin at any arbitrary node of the graph, G
2. Proceed from that node using either depth-first or breadth-first search, counting all nodes reached.
3. Once the graph has been entirely traversed, if the number of nodes counted is equal to the number of nodes of G, the graph is connected; otherwise it is disconnected.
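A minimal Python version of these three steps (illustrative; the adjacency-dictionary input format is an assumption):

```python
# Connectivity test: BFS from an arbitrary node, then compare the number
# of reached nodes with the total number of nodes.
from collections import deque

def is_connected(adj):
    """adj maps each node to an iterable of its neighbours."""
    start = next(iter(adj))            # step 1: any arbitrary node
    seen, queue = {start}, deque([start])
    while queue:                       # step 2: breadth-first search
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)       # step 3: compare the counts

assert is_connected({1: [2], 2: [1, 3], 3: [2]})
assert not is_connected({1: [2], 2: [1], 3: []})
```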
By Menger's theorem, for any two vertices u and v in a connected graph G, the numbers κ(u, v) and λ(u, v) can be determined efficiently using the max-flow min-cut algorithm. The connectivity and edge-connectivity of G can then be computed as the minimum values of κ(u, v) and λ(u, v), respectively.
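A sketch of this reduction (illustrative, again assuming networkx, whose connectivity routines are flow-based), run on the complete bipartite graph K3,3:

```python
# kappa(G) and lambda(G) as minima of the local, max-flow-based values.
import networkx as nx
from itertools import combinations

G = nx.complete_bipartite_graph(3, 3)
kappa = min(nx.node_connectivity(G, u, v)
            for u, v in combinations(G, 2) if not G.has_edge(u, v))
lam = min(nx.edge_connectivity(G, u, v) for u, v in combinations(G, 2))
assert kappa == nx.node_connectivity(G) == 3
assert lam == nx.edge_connectivity(G) == 3
```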
In computational complexity theory, SL is the class of problems log-space reducible to the problem of determining whether two vertices in a graph are connected, which was proved to be equal to L by Omer Reingold in 2004.[9] Hence, undirected graph connectivity may be solved in O(log n) space.
The problem of computing the probability that a Bernoulli random graph is connected is called network reliability and the problem of computing whether two given vertices are connected the ST-reliability problem. Both of these are #P-hard.[10]
Number of connected graphs
Main article: Graph enumeration
The number of distinct connected labeled graphs with n nodes is tabulated in the On-Line Encyclopedia of Integer Sequences as sequence A001187. The first few non-trivial terms are listed below; a brute-force check of the small cases follows the table.
n graphs
1 1
2 1
3 4
4 38
5 728
6 26704
7 1866256
8 251548592
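These small values can be confirmed by exhaustive enumeration over all labeled graphs (an illustrative sketch, feasible only for n ≤ 4 or so):

```python
# Count connected labeled graphs on n nodes by enumerating all edge subsets.
from itertools import combinations

def count_connected(n):
    edges = list(combinations(range(n), 2))
    total = 0
    for mask in range(2 ** len(edges)):
        adj = {v: [] for v in range(n)}
        for i, (u, v) in enumerate(edges):
            if mask >> i & 1:
                adj[u].append(v); adj[v].append(u)
        stack, seen = [0], {0}        # DFS from node 0
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        total += (len(seen) == n)
    return total

assert [count_connected(n) for n in range(1, 5)] == [1, 1, 4, 38]
```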
Examples
• The vertex- and edge-connectivities of a disconnected graph are both 0.
• 1-connectedness is equivalent to connectedness for graphs of at least 2 vertices.
• The complete graph on n vertices has edge-connectivity equal to n − 1. Every other simple graph on n vertices has strictly smaller edge-connectivity.
• In a tree, the local edge-connectivity between every pair of vertices is 1.
Bounds on connectivity
• The vertex-connectivity of a graph is less than or equal to its edge-connectivity. That is, κ(G) ≤ λ(G). Both are less than or equal to the minimum degree of the graph, since deleting all neighbors of a vertex of minimum degree will disconnect that vertex from the rest of the graph.[1]
• For a vertex-transitive graph of degree d, we have: 2(d + 1)/3 ≤ κ(G) ≤ λ(G) = d.[11]
• For a vertex-transitive graph of degree d ≤ 4, or for any (undirected) minimal Cayley graph of degree d, or for any symmetric graph of degree d, both kinds of connectivity are equal: κ(G) = λ(G) = d.[12]
Other properties
• Connectedness is preserved by graph homomorphisms.
• If G is connected then its line graph L(G) is also connected.
• A graph G is 2-edge-connected if and only if it has an orientation that is strongly connected.
• Balinski's theorem states that the polytopal graph (1-skeleton) of a k-dimensional convex polytope is a k-vertex-connected graph.[13] Steinitz's previous theorem that any 3-vertex-connected planar graph is a polytopal graph (Steinitz theorem) gives a partial converse.
• According to a theorem of G. A. Dirac, if a graph is k-connected for k ≥ 2, then for every set of k vertices in the graph there is a cycle that passes through all the vertices in the set.[14][15] The converse is true when k = 2.
See also
• Algebraic connectivity
• Cheeger constant (graph theory)
• Dynamic connectivity, Disjoint-set data structure
• Expander graph
• Strength of a graph
References
1. Diestel, R. (2005). "Graph Theory, Electronic Edition". p. 12.
2. Chapter 11: Digraphs: Principle of duality for digraphs: Definition
3. Gross, Jonathan L.; Yellen, Jay (2004). Handbook of graph theory. CRC Press. p. 335. ISBN 978-1-58488-090-5.
4. Liu, Qinghai; Zhang, Zhao (2010-03-01). "The existence and upper bound for two types of restricted connectivity". Discrete Applied Mathematics. 158 (5): 516–521. doi:10.1016/j.dam.2009.10.017.
5. Gross, Jonathan L.; Yellen, Jay (2004). Handbook of graph theory. CRC Press. p. 338. ISBN 978-1-58488-090-5.
6. Balbuena, Camino; Carmona, Angeles (2001-10-01). "On the connectivity and superconnectivity of bipartite digraphs and graphs". Ars Combinatorica. 61: 3–22. CiteSeerX 10.1.1.101.1458.
7. Gibbons, A. (1985). Algorithmic Graph Theory. Cambridge University Press.
8. Nagamochi, H.; Ibaraki, T. (2008). Algorithmic Aspects of Graph Connectivity. Cambridge University Press.
9. Reingold, Omer (2008). "Undirected connectivity in log-space". Journal of the ACM. 55 (4): 1–24. doi:10.1145/1391289.1391291. S2CID 207168478.
10. Provan, J. Scott; Ball, Michael O. (1983). "The complexity of counting cuts and of computing the probability that a graph is connected". SIAM Journal on Computing. 12 (4): 777–788. doi:10.1137/0212053. MR 0721012..
11. Godsil, C.; Royle, G. (2001). Algebraic Graph Theory. Springer Verlag.
12. Babai, L. (1996). Automorphism groups, isomorphism, reconstruction. Technical Report TR-94-10. University of Chicago. Archived from the original on 2010-06-11. Chapter 27 of The Handbook of Combinatorics.
13. Balinski, M. L. (1961). "On the graph structure of convex polyhedra in n-space". Pacific Journal of Mathematics. 11 (2): 431–434. doi:10.2140/pjm.1961.11.431.
14. Dirac, Gabriel Andrew (1960). "In abstrakten Graphen vorhandene vollständige 4-Graphen und ihre Unterteilungen". Mathematische Nachrichten. 22 (1–2): 61–85. doi:10.1002/mana.19600220107. MR 0121311..
15. Flandrin, Evelyne; Li, Hao; Marczyk, Antoni; Woźniak, Mariusz (2007). "A generalization of Dirac's theorem on cycles through k vertices in k-connected graphs". Discrete Mathematics. 307 (7–8): 878–884. doi:10.1016/j.disc.2005.11.052. MR 2297171..
Separable partial differential equation
A separable partial differential equation is one that can be broken into a set of separate equations of lower dimensionality (fewer independent variables) by a method of separation of variables. This generally relies upon the problem having some special form or symmetry. In this way, the partial differential equation (PDE) can be solved by solving a set of simpler PDEs, or even ordinary differential equations (ODEs) if the problem can be broken down into one-dimensional equations.
The most common form of separation of variables is simple separation of variables in which a solution is obtained by assuming a solution of the form given by a product of functions of each individual coordinate. There is a special form of separation of variables called $R$-separation of variables which is accomplished by writing the solution as a particular fixed function of the coordinates multiplied by a product of functions of each individual coordinate. Laplace's equation on ${\mathbb {R} }^{n}$ is an example of a partial differential equation which admits solutions through $R$-separation of variables; in the three-dimensional case this uses 6-sphere coordinates.
(This should not be confused with the case of a separable ODE, which refers to a somewhat different class of problems that can be broken into a pair of integrals; see separation of variables.)
Example
For example, consider the time-independent Schrödinger equation
$[-\nabla ^{2}+V(\mathbf {x} )]\psi (\mathbf {x} )=E\psi (\mathbf {x} )$
for the function $\psi (\mathbf {x} )$ (in dimensionless units, for simplicity). (Equivalently, consider the inhomogeneous Helmholtz equation.) If the function $V(\mathbf {x} )$ in three dimensions is of the form
$V(x_{1},x_{2},x_{3})=V_{1}(x_{1})+V_{2}(x_{2})+V_{3}(x_{3}),$
then it turns out that the problem can be separated into three one-dimensional ODEs for functions $\psi _{1}(x_{1})$, $\psi _{2}(x_{2})$, and $\psi _{3}(x_{3})$, and the final solution can be written as $\psi (\mathbf {x} )=\psi _{1}(x_{1})\cdot \psi _{2}(x_{2})\cdot \psi _{3}(x_{3})$. (More generally, the separable cases of the Schrödinger equation were enumerated by Eisenhart in 1948.[1])
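To make the separation explicit, substitute the product form into the equation and divide by $\psi _{1}\psi _{2}\psi _{3}$: the left-hand side splits into three terms, each depending on a single coordinate, so each term must equal a constant. This yields the three ODEs
$-\psi _{i}''(x_{i})+V_{i}(x_{i})\psi _{i}(x_{i})=E_{i}\psi _{i}(x_{i}),\qquad i=1,2,3,$
whose separation constants satisfy $E_{1}+E_{2}+E_{3}=E$.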
References
1. Eisenhart, L. P. (1948-07-01). "Enumeration of Potentials for Which One-Particle Schroedinger Equations Are Separable". Physical Review. American Physical Society (APS). 74 (1): 87–89. doi:10.1103/physrev.74.87. ISSN 0031-899X.
|
Wikipedia
|
Separable permutation
In combinatorial mathematics, a separable permutation is a permutation that can be obtained from the trivial permutation 1 by direct sums and skew sums.[1] Separable permutations may be characterized by the forbidden permutation patterns 2413 and 3142;[2] they are also the permutations whose permutation graphs are cographs and the permutations that realize the series-parallel partial orders. It is possible to test in polynomial time whether a given separable permutation is a pattern in a larger permutation, or to find the longest common subpattern of two separable permutations.
Block structuring of the (transposed) permutation matrix of the separable permutation (4,5,2,1,3,8,6,7) and corresponding labeled binary tree; colors indicate depth in the tree
Definition and characterization
Bose, Buss & Lubiw (1998) define a separable permutation to be a permutation that has a separating tree: a rooted binary tree in which the elements of the permutation appear (in permutation order) at the leaves of the tree, and in which the descendants of each tree node form a contiguous subset of these elements. Each interior node of the tree is either a positive node, in which all descendants of the left child are smaller than all descendants of the right child, or a negative node, in which all descendants of the left child are greater than all descendants of the right child. There may be more than one tree for a given permutation: if two nodes that are adjacent in the same tree have the same sign, then they may be replaced by a different pair of nodes using a tree rotation operation.
Each subtree of a separating tree may be interpreted as itself representing a smaller separable permutation, whose element values are determined by the shape and sign pattern of the subtree. A one-node tree represents the trivial permutation, a tree whose root node is positive represents the direct sum of permutations given by its two child subtrees, and a tree whose root node is negative represents the skew sum of the permutations given by its two child subtrees. In this way, a separating tree is equivalent to a construction of the permutation by direct and skew sums, starting from the trivial permutation.
As Bose, Buss & Lubiw (1998) prove, separable permutations may also be characterized in terms of permutation patterns: a permutation is separable if and only if it contains neither 2413 nor 3142 as a pattern.[2]
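The pattern characterization translates directly into a naive separability test: check every four-element subsequence against the two forbidden patterns. The sketch below is illustrative only; the function names and the brute-force $O(n^{4})$ approach are assumptions, not taken from the sources cited.

```python
from itertools import combinations

def pattern_of(values):
    """Relative-order pattern of a sequence of distinct numbers, as ranks 1..k."""
    order = sorted(values)
    return tuple(order.index(v) + 1 for v in values)

def is_separable(perm):
    """A permutation is separable iff it avoids the patterns 2413 and 3142."""
    forbidden = {(2, 4, 1, 3), (3, 1, 4, 2)}
    return all(pattern_of(sub) not in forbidden for sub in combinations(perm, 4))

print(is_separable((4, 5, 2, 1, 3, 8, 6, 7)))  # True: the separable example above
print(is_separable((2, 4, 1, 3)))              # False: it is itself the pattern 2413
```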
The separable permutations also have a characterization from algebraic geometry: if a collection of distinct real polynomials all have equal values at some number x, then the permutation that describes how the numerical ordering of the polynomials changes at x is separable, and every separable permutation can be realized in this way.[3]
Combinatorial enumeration
The separable permutations are enumerated by the Schröder numbers. That is, there is one separable permutation of length one, two of length two, and in general the number of separable permutations of a given length (starting with length one) is
1, 2, 6, 22, 90, 394, 1806, 8558, .... (sequence A006318 in the OEIS)
This result was proven for a class of permutation matrices equivalent to the separable permutations by Shapiro & Stephens (1991), by using a canonical form of the separating tree in which the right child of each node has a different sign than the node itself and then applying the theory of generating functions to these trees. Another proof, applying more directly to separable permutations themselves, was given by West (1995).[4]
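For illustration, the large Schröder numbers satisfy the classical three-term recurrence $(n+1)S_{n}=3(2n-1)S_{n-1}-(n-2)S_{n-2}$, which makes the counts easy to tabulate. The following sketch uses that standard recurrence (it is not taken from the references above):

```python
def schroder_numbers(count):
    """Large Schroeder numbers S_0, S_1, ...; S_{n-1} counts the separable
    permutations of length n, giving 1, 2, 6, 22, 90, ..."""
    s = [1, 2]
    for n in range(2, count):
        s.append((3 * (2 * n - 1) * s[-1] - (n - 2) * s[-2]) // (n + 1))
    return s[:count]

print(schroder_numbers(8))  # [1, 2, 6, 22, 90, 394, 1806, 8558]
```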
Algorithms
Bose, Buss & Lubiw (1998) showed that it is possible to determine in polynomial time whether a given separable permutation is a pattern in a larger permutation, in contrast to the same problem for non-separable permutations, which is NP-complete.
The problem of finding the longest separable pattern that is common to a set of input permutations may be solved in polynomial time for a fixed number of input permutations, but is NP-hard when the number of input permutations may be variable, and remains NP-hard even when the inputs are all themselves separable.[5]
History
Separable permutations first arose in the work of Avis & Newborn (1981), who showed that they are precisely the permutations which can be sorted by an arbitrary number of pop-stacks in series, where a pop-stack is a restricted form of stack in which any pop operation pops all items at once.
Shapiro & Stephens (1991) considered separable permutations again in their study of bootstrap percolation, a process in which an initial permutation matrix is modified by repeatedly changing to one any matrix coefficient that has two or more orthogonal neighbors equal to one. As they show, the class of permutations that are transformed by this process into the all-one matrix is exactly the class of separable permutations.
The term "separable permutation" was introduced later by Bose, Buss & Lubiw (1998), who considered them for their algorithmic properties.
Related structures
Every permutation can be used to define a permutation graph, a graph whose vertices are the elements of the permutation and whose edges are the inversions of the permutation. In the case of a separable permutation, the structure of this graph can be read off from the separation tree of the permutation: two vertices of the graph are adjacent if and only if their lowest common ancestor in the separation tree is negative. The graphs that can be formed from trees in this way are called cographs (short for complement-reducible graphs) and the trees from which they are formed are called cotrees. Thus, the separable permutations are exactly the permutations whose permutation graphs are cographs.[6] The forbidden graph characterization of the cographs (they are the graphs with no four-vertex induced path) corresponds to the two four-element forbidden patterns of the separable permutations.
Separable permutations are also closely related to series-parallel partial orders, the partially ordered sets whose comparability graphs are the cographs. As with the cographs and separable permutations, the series-parallel partial orders may also be characterized by four-element forbidden suborders. Every permutation defines a partial order whose order dimension is two, in which the elements to be ordered are the elements of the permutation, and in which x ≤ y whenever x has a smaller numerical value than y and is left of it in the permutation. The permutations for which this partial order is series-parallel are exactly the separable permutations.
Separable permutations may also be used to describe hierarchical partitions of rectangles into smaller rectangles (so-called "slicing floorplans", used for instance in the design of integrated circuits) by using the positive and negative signs of the separating tree to describe horizontal and vertical slices of a rectangle into smaller rectangles.[7]
The separable permutations include as a special case the stack-sortable permutations, which avoid the pattern 231.
Notes
1. Kitaev (2011), p. 57.
2. Bose, Buss & Lubiw (1998); Kitaev (2011), Theorem 2.2.36, p. 58.
3. Ghys (2017), p. 15.
4. See Kitaev (2011), Theorem 2.2.45, p. 60.
5. Bouvel, Rossin & Vialette (2007).
6. Bose, Buss & Lubiw (1998).
7. Szepieniec & Otten (1980); Ackerman, Barequet & Pinter (2006).
References
• Ackerman, Eyal; Barequet, Gill; Pinter, Ron Y. (2006), "A bijection between permutations and floorplans, and its applications", Discrete Applied Mathematics, 154 (12): 1674–1684, doi:10.1016/j.dam.2006.03.018, MR 2233287
• Avis, David; Newborn, Monroe (1981), "On pop-stacks in series", Utilitas Mathematica, 19: 129–140, MR 0624050.
• Bouvel, Mathilde; Rossin, Dominique; Vialette, Stéphane (2007), "Longest common separable pattern among permutations", Combinatorial Pattern Matching (CPM 2007), Lecture Notes in Computer Science, vol. 4580, Springer, pp. 316–327, doi:10.1007/978-3-540-73437-6_32, ISBN 978-3-540-73436-9.
• Bose, Prosenjit; Buss, Jonathan; Lubiw, Anna (1998), "Pattern matching for permutations", Information Processing Letters, 65 (5): 277–283, doi:10.1016/S0020-0190(97)00209-3, MR 1620935.
• Ghys, Étienne (2017), A singular mathematical promenade, Lyon: ENS Éditions, arXiv:1612.06373, ISBN 978-2-84788-939-0, MR 3702027
• Kitaev, Sergey (2011), "2.2.5 Separable permutations", Patterns in permutations and words, Monographs in Theoretical Computer Science. An EATCS Series, Berlin: Springer-Verlag, pp. 57–66, doi:10.1007/978-3-642-17333-2, ISBN 978-3-642-17332-5, Zbl 1257.68007.
• Shapiro, Louis; Stephens, Arthur B. (1991), "Bootstrap percolation, the Schröder numbers, and the N-kings problem", SIAM Journal on Discrete Mathematics, 4 (2): 275–280, doi:10.1137/0404025, MR 1093199.
• Szepieniec, A. A.; Otten, R. H. J. M. (1980), "The genealogical approach to the layout problem", 17th Conf. on Design Automation (DAC 1980), pp. 535–542, doi:10.1145/800139.804582, S2CID 2031785.
• West, Julian (1995), "Generating trees and the Catalan and Schröder numbers", Discrete Mathematics, 146 (1–3): 247–262, doi:10.1016/0012-365X(94)00067-1, MR 1360119.
|
Wikipedia
|
Separable polynomial
In mathematics, a polynomial P(X) over a given field K is separable if its roots are distinct in an algebraic closure of K, that is, the number of distinct roots is equal to the degree of the polynomial.[1]
This concept is closely related to square-free polynomial. If K is a perfect field then the two concepts coincide. In general, P(X) is separable if and only if it is square-free over any field that contains K, which holds if and only if P(X) is coprime to its formal derivative D P(X).
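The derivative criterion is easy to test in a computer algebra system. The following sketch uses SymPy (the helper name is an illustrative assumption); note how $X^{2}+1$, separable over the rationals, becomes inseparable over GF(2), where it equals $(X+1)^{2}$ and its formal derivative vanishes:

```python
from sympy import symbols, Poly

x = symbols('x')

def is_separable(p):
    """A polynomial is separable iff it is coprime to its formal derivative."""
    return p.gcd(p.diff(x)).degree() == 0

print(is_separable(Poly(x**2 + 1, x)))             # True: roots +i, -i are distinct
print(is_separable(Poly((x - 1)**2, x)))           # False: repeated root 1
print(is_separable(Poly(x**2 + 1, x, modulus=2)))  # False: equals (x+1)^2 over GF(2)
```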
Older definition
In an older definition, P(X) was considered separable if each of its irreducible factors in K[X] is separable in the modern definition.[2] In this definition, separability depended on the field K; for example, any polynomial over a perfect field would have been considered separable. This definition, although it can be convenient for Galois theory, is no longer in use.
Separable field extensions
Separable polynomials are used to define separable extensions: A field extension K ⊂ L is a separable extension if and only if for every α in L which is algebraic over K, the minimal polynomial of α over K is a separable polynomial.
Inseparable extensions (that is, extensions which are not separable) may occur only in positive characteristic.
The criterion above leads to the quick conclusion that if P is irreducible and not separable, then D P(X) = 0. Thus we must have
$P(X)=Q(X^{p})$
for some polynomial Q over K, where the prime number p is the characteristic.
With this clue we can construct an example:
$P(X)=X^{p}-T$
with K the field of rational functions in the indeterminate T over the finite field with p elements. Here one can prove directly that P(X) is irreducible and not separable. This is actually a typical example of why inseparability matters; in geometric terms P represents the mapping on the projective line over the finite field, taking co-ordinates to their pth power. Such mappings are fundamental to the algebraic geometry of finite fields. Put another way, there are coverings in that setting that cannot be 'seen' by Galois theory. (See Radical morphism for a higher-level discussion.)
If L is the field extension
$K(T^{1/p}),$
in other words the splitting field of P, then L/K is an example of a purely inseparable field extension. It is of degree p, but has no automorphism fixing K, other than the identity, because $T^{1/p}$ is the unique root of P. This shows directly that Galois theory must here break down. A field such that there are no such extensions is called perfect. That finite fields are perfect follows a posteriori from their known structure.
One can show that the tensor product of fields of L with itself over K for this example has nilpotent elements that are non-zero. This is another manifestation of inseparability: that is, the tensor product operation on fields need not produce a ring that is a product of fields (so, not a commutative semisimple ring).
If P(x) is separable, and its roots form a group (a subgroup of the additive group of the field K), then P(x) is an additive polynomial.
Applications in Galois theory
Separable polynomials occur frequently in Galois theory.
For example, let P be an irreducible polynomial with integer coefficients and p be a prime number which does not divide the leading coefficient of P. Let Q be the polynomial over the finite field with p elements, which is obtained by reducing modulo p the coefficients of P. Then, if Q is separable (which is the case for all but finitely many p), the degrees of the irreducible factors of Q are the lengths of the cycles of some permutation in the Galois group of P.
Another example: P being as above, a resolvent R for a group G is a polynomial whose coefficients are polynomials in the coefficients of P, which provides some information on the Galois group of P. More precisely, if R is separable and has a rational root then the Galois group of P is contained in G. For example, if D is the discriminant of P then $X^{2}-D$ is a resolvent for the alternating group. This resolvent is always separable (assuming the characteristic is not 2) if P is irreducible, but most resolvents are not always separable.
See also
• Frobenius endomorphism
References
1. Pages 240-241 of Lang, Serge (1993), Algebra (Third ed.), Reading, Mass.: Addison-Wesley, ISBN 978-0-201-55540-0, Zbl 0848.13001
2. N. Jacobson, Basic Algebra I, p. 233
|
Wikipedia
|
Algebraic closure
In mathematics, particularly abstract algebra, an algebraic closure of a field K is an algebraic extension of K that is algebraically closed. It is one of many closures in mathematics.
Using Zorn's lemma[1][2][3] or the weaker ultrafilter lemma,[4][5] it can be shown that every field has an algebraic closure, and that the algebraic closure of a field K is unique up to an isomorphism that fixes every member of K. Because of this essential uniqueness, we often speak of the algebraic closure of K, rather than an algebraic closure of K.
The algebraic closure of a field K can be thought of as the largest algebraic extension of K. To see this, note that if L is any algebraic extension of K, then the algebraic closure of L is also an algebraic closure of K, and so L is contained within the algebraic closure of K. The algebraic closure of K is also the smallest algebraically closed field containing K, because if M is any algebraically closed field containing K, then the elements of M that are algebraic over K form an algebraic closure of K.
The algebraic closure of a field K has the same cardinality as K if K is infinite, and is countably infinite if K is finite.[3]
Examples
• The fundamental theorem of algebra states that the algebraic closure of the field of real numbers is the field of complex numbers.
• The algebraic closure of the field of rational numbers is the field of algebraic numbers.
• There are many countable algebraically closed fields within the complex numbers, and strictly containing the field of algebraic numbers; these are the algebraic closures of transcendental extensions of the rational numbers, e.g. the algebraic closure of Q(π).
• For a finite field of prime power order q, the algebraic closure is a countably infinite field that contains a copy of the field of order $q^{n}$ for each positive integer n (and is in fact the union of these copies).[6]
Existence of an algebraic closure and splitting fields
Let $S=\{f_{\lambda }|\lambda \in \Lambda \}$ be the set of all monic irreducible polynomials in K[x]. For each $f_{\lambda }\in S$, introduce new variables $u_{\lambda ,1},\ldots ,u_{\lambda ,d}$ where $d={\rm {degree}}(f_{\lambda })$. Let R be the polynomial ring over K generated by $u_{\lambda ,i}$ for all $\lambda \in \Lambda $ and all $i\leq {\rm {degree}}(f_{\lambda })$. Write
$f_{\lambda }-\prod _{i=1}^{d}(x-u_{\lambda ,i})=\sum _{j=0}^{d-1}r_{\lambda ,j}\cdot x^{j}\in R[x]$
with $r_{\lambda ,j}\in R$. Let I be the ideal in R generated by the $r_{\lambda ,j}$. Since I is strictly smaller than R, Zorn's lemma implies that there exists a maximal ideal M in R that contains I. The field $K_{1}=R/M$ has the property that every polynomial $f_{\lambda }$ with coefficients in K splits as the product of $x-(u_{\lambda ,i}+M),$ and hence has all roots in $K_{1}$. In the same way, an extension $K_{2}$ of $K_{1}$ can be constructed, etc. The union of all these extensions is the algebraic closure of K, because any polynomial with coefficients in this new field has its coefficients in some $K_{n}$ with sufficiently large n, and then its roots are in $K_{n+1}$, and hence in the union itself.
It can be shown along the same lines that for any subset S of K[x], there exists a splitting field of S over K.
Separable closure
An algebraic closure $K^{\mathrm {alg} }$ of K contains a unique separable extension $K^{\mathrm {sep} }$ of K containing all (algebraic) separable extensions of K within $K^{\mathrm {alg} }$. This subextension is called a separable closure of K. Since a separable extension of a separable extension is again separable, there are no finite separable extensions of $K^{\mathrm {sep} }$ of degree greater than 1. Saying this another way, K is contained in a separably-closed algebraic extension field. It is unique (up to isomorphism).[7]
The separable closure is the full algebraic closure if and only if K is a perfect field. For example, if K is a field of characteristic p and if X is transcendental over K, $K(X)({\sqrt[{p}]{X}})\supset K(X)$ is a non-separable algebraic field extension.
In general, the absolute Galois group of K is the Galois group of $K^{\mathrm {sep} }$ over K.[8]
See also
• Algebraically closed field
• Algebraic extension
• Puiseux expansion
• Complete field
References
1. McCarthy (1991) p.21
2. M. F. Atiyah and I. G. Macdonald (1969). Introduction to commutative algebra. Addison-Wesley publishing Company. pp. 11–12.
3. Kaplansky (1972) pp.74-76
4. Banaschewski, Bernhard (1992), "Algebraic closure without choice.", Z. Math. Logik Grundlagen Math., 38 (4): 383–385, doi:10.1002/malq.19920380136, Zbl 0739.03027
5. Mathoverflow discussion
6. Brawley, Joel V.; Schnibben, George E. (1989), "2.2 The Algebraic Closure of a Finite Field", Infinite Algebraic Extensions of Finite Fields, Contemporary Mathematics, vol. 95, American Mathematical Society, pp. 22–23, ISBN 978-0-8218-5428-0, Zbl 0674.12009.
7. McCarthy (1991) p.22
8. Fried, Michael D.; Jarden, Moshe (2008). Field arithmetic. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. Vol. 11 (3rd ed.). Springer-Verlag. p. 12. ISBN 978-3-540-77269-9. Zbl 1145.12001.
• Kaplansky, Irving (1972). Fields and rings. Chicago lectures in mathematics (Second ed.). University of Chicago Press. ISBN 0-226-42451-0. Zbl 1001.16500.
• McCarthy, Paul J. (1991). Algebraic extensions of fields (Corrected reprint of the 2nd ed.). New York: Dover Publications. Zbl 0768.12001.
|
Wikipedia
|
Separated sets
In topology and related branches of mathematics, separated sets are pairs of subsets of a given topological space that are related to each other in a certain way: roughly speaking, neither overlapping nor touching. The notion of when two sets are separated or not is important both to the notion of connected spaces (and their connected components) as well as to the separation axioms for topological spaces.
Separated sets should not be confused with separated spaces (defined below), which are somewhat related but different. Separable spaces are again a completely different topological concept.
Definitions
There are various ways in which two subsets $A$ and $B$ of a topological space $X$ can be considered to be separated. A most basic way in which two sets can be separated is if they are disjoint, that is, if their intersection is the empty set. This property has nothing to do with topology as such, but only set theory. Each of the properties below is stricter than disjointness, incorporating some topological information. The properties are presented in increasing order of specificity, each being a stronger notion than the preceding one.
A more restrictive property is that $A$ and $B$ are separated in $X$ if each is disjoint from the other's closure:
$A\cap {\bar {B}}=\varnothing ={\bar {A}}\cap B.$
This property is known as the Hausdorff−Lennes Separation Condition.[1] Since every set is contained in its closure, two separated sets automatically must be disjoint. The closures themselves do not have to be disjoint from each other; for example, the intervals $[0,1)$ and $(1,2]$ are separated in the real line $\mathbb {R} ,$ even though the point 1 belongs to both of their closures. A more general example is that in any metric space, two open balls $B_{r}(p)=\{x\in X:d(p,x)<r\}$ and $B_{s}(q)=\{x\in X:d(q,x)<s\}$ are separated whenever $d(p,q)\geq r+s.$ The property of being separated can also be expressed in terms of derived set (indicated by the prime symbol): $A$ and $B$ are separated when they are disjoint and each is disjoint from the other's derived set, that is, $ A'\cap B=\varnothing =B'\cap A.$ (As in the case of the first version of the definition, the derived sets $A'$ and $B'$ are not required to be disjoint from each other.)
The sets $A$ and $B$ are separated by neighbourhoods if there are neighbourhoods $U$ of $A$ and $V$ of $B$ such that $U$ and $V$ are disjoint. (Sometimes you will see the requirement that $U$ and $V$ be open neighbourhoods, but this makes no difference in the end.) For the example of $A=[0,1)$ and $B=(1,2],$ you could take $U=(-1,1)$ and $V=(1,3).$ Note that if any two sets are separated by neighbourhoods, then certainly they are separated. If $A$ and $B$ are open and disjoint, then they must be separated by neighbourhoods; just take $U=A$ and $V=B.$ For this reason, separatedness is often used with closed sets (as in the normal separation axiom).
The sets $A$ and $B$ are separated by closed neighbourhoods if there is a closed neighbourhood $U$ of $A$ and a closed neighbourhood $V$ of $B$ such that $U$ and $V$ are disjoint. Our examples, $[0,1)$ and $(1,2],$ are not separated by closed neighbourhoods. You could make either $U$ or $V$ closed by including the point 1 in it, but you cannot make them both closed while keeping them disjoint. Note that if any two sets are separated by closed neighbourhoods, then certainly they are separated by neighbourhoods.
The sets $A$ and $B$ are separated by a continuous function if there exists a continuous function $f:X\to \mathbb {R} $ from the space $X$ to the real line $\mathbb {R} $ such that $A\subseteq f^{-1}(0)$ and $B\subseteq f^{-1}(1)$, that is, members of $A$ map to 0 and members of $B$ map to 1. (Sometimes the unit interval $[0,1]$ is used in place of $\mathbb {R} $ in this definition, but this makes no difference.) In our example, $[0,1)$ and $(1,2]$ are not separated by a function, because there is no way to continuously define $f$ at the point 1.[2] If two sets are separated by a continuous function, then they are also separated by closed neighbourhoods; the neighbourhoods can be given in terms of the preimage of $f$ as $U=f^{-1}[-c,c]$ and $V=f^{-1}[1-c,1+c],$ where $c$ is any positive real number less than $1/2.$
The sets $A$ and $B$ are precisely separated by a continuous function if there exists a continuous function $f:X\to \mathbb {R} $ such that $A=f^{-1}(0)$ and $B=f^{-1}(1).$ (Again, you may also see the unit interval in place of $\mathbb {R} ,$ and again it makes no difference.) Note that if any two sets are precisely separated by a function, then they are separated by a function. Since $\{0\}$ and $\{1\}$ are closed in $\mathbb {R} ,$ only closed sets are capable of being precisely separated by a function, but just because two sets are closed and separated by a function does not mean that they are automatically precisely separated by a function (even a different function).
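In a metric space, any two disjoint closed sets are in fact precisely separated by the classical function $f(x)=d(x,A)/(d(x,A)+d(x,B))$. This construction is standard but not discussed above, and the helper names in the sketch below are illustrative assumptions; the snippet also shows why the formula breaks down for $[0,1)$ and $(1,2]$, where both distances vanish at the point 1:

```python
def dist(x, lo, hi):
    """Distance from the point x to the closed interval [lo, hi]."""
    return max(lo - x, 0.0, x - hi)

def separating_function(x, A, B):
    """f(x) = d(x,A) / (d(x,A) + d(x,B)): equals 0 exactly on A and 1 exactly
    on B, and is continuous whenever the closed sets A and B are disjoint."""
    dA, dB = dist(x, *A), dist(x, *B)
    return dA / (dA + dB)

A, B = (0.0, 1.0), (2.0, 3.0)                # disjoint closed intervals
print(separating_function(0.5, A, B))        # 0.0 (inside A)
print(separating_function(2.5, A, B))        # 1.0 (inside B)
print(separating_function(1.5, A, B))        # 0.5 (halfway between)
# For A = [0,1) and B = (1,2] the formula is undefined at x = 1, where
# d(x,A) = d(x,B) = 0 -- matching the discussion of these sets above.
```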
Relation to separation axioms and separated spaces
Main article: separation axiom
The separation axioms are various conditions that are sometimes imposed upon topological spaces, many of which can be described in terms of the various types of separated sets. As an example we will define the T2 axiom, which is the condition imposed on separated spaces. Specifically, a topological space is separated if, given any two distinct points x and y, the singleton sets {x} and {y} are separated by neighbourhoods.
Separated spaces are usually called Hausdorff spaces or T2 spaces.
Relation to connected spaces
Main article: Connected space
Given a topological space X, it is sometimes useful to consider whether it is possible for a subset A to be separated from its complement. This is certainly true if A is either the empty set or the entire space X, but there may be other possibilities. A topological space X is connected if these are the only two possibilities. Conversely, if a nonempty subset A is separated from its own complement, and if the only subset of A to share this property is the empty set, then A is an open-connected component of X. (In the degenerate case where X is itself the empty set $\emptyset $, authorities differ on whether $\emptyset $ is connected and whether $\emptyset $ is an open-connected component of itself.)
Relation to topologically distinguishable points
Main article: Topological distinguishability
Given a topological space X, two points x and y are topologically distinguishable if there exists an open set that one point belongs to but the other point does not. If x and y are topologically distinguishable, then the singleton sets {x} and {y} must be disjoint. On the other hand, if the singletons {x} and {y} are separated, then the points x and y must be topologically distinguishable. Thus for singletons, topological distinguishability is a condition in between disjointness and separatedness.
See also
• Hausdorff space – Type of topological space
• Locally Hausdorff space
• Separation axiom – Axioms in topology defining notions of "separation"
Citations
1. Pervin 1964, p. 51
2. Munkres, James R. (2000). Topology (2 ed.). Prentice Hall. p. 211. ISBN 0-13-181629-2.
Sources
• Munkres, James R. (2000). Topology. Prentice-Hall. ISBN 0-13-181629-2.
• Willard, Stephen (2004). General Topology. Addison-Wesley. ISBN 0-486-43479-6.
• Pervin, William J. (1964), Foundations of General Topology, Academic Press
|
Wikipedia
|
Diagonal morphism (algebraic geometry)
In algebraic geometry, given a morphism of schemes $p:X\to S$, the diagonal morphism
$\delta :X\to X\times _{S}X$
is a morphism determined by the universal property of the fiber product $X\times _{S}X$ of p and p applied to the identity $1_{X}:X\to X$ and the identity $1_{X}$.
It is a special case of a graph morphism: given a morphism $f:X\to Y$ over S, the graph morphism of it is $X\to X\times _{S}Y$ induced by $f$ and the identity $1_{X}$. The diagonal embedding is the graph morphism of $1_{X}$.
By definition, X is a separated scheme over S ($p:X\to S$ is a separated morphism) if the diagonal morphism is a closed immersion. Also, a morphism $p:X\to S$ locally of finite presentation is an unramified morphism if and only if the diagonal embedding is an open immersion.
Explanation
As an example, consider an algebraic variety over an algebraically closed field k and $p:X\to \operatorname {Spec} (k)$ the structure map. Then, identifying X with the set of its k-rational points, $X\times _{k}X=\{(x,y)\in X\times X\}$ and $\delta :X\to X\times _{k}X$ is given as $x\mapsto (x,x)$; whence the name diagonal morphism.
Separated morphism
A separated morphism is a morphism $f$ such that the fiber product of $f$ with itself along $f$ has its diagonal as a closed subscheme — in other words, the diagonal morphism is a closed immersion.
As a consequence, a scheme $X$ is separated when the diagonal of $X$ within the scheme product of $X$ with itself is a closed immersion. Emphasizing the relative point of view, one might equivalently define a scheme to be separated if the unique morphism $X\rightarrow {\textrm {Spec}}(\mathbb {Z} )$ is separated.
Notice that a topological space Y is Hausdorff iff the diagonal embedding
$Y{\stackrel {\Delta }{\longrightarrow }}Y\times Y,\,y\mapsto (y,y)$
is closed. In algebraic geometry, the above formulation is used because a scheme which is a Hausdorff space is necessarily empty or zero-dimensional. The difference between the topological and algebro-geometric context comes from the topological structure of the fiber product (in the category of schemes) $X\times _{{\textrm {Spec}}(\mathbb {Z} )}X$, which is different from the product of topological spaces.
Any affine scheme Spec A is separated, because the diagonal corresponds to the surjective map of rings (hence is a closed immersion of schemes):
$A\otimes _{\mathbb {Z} }A\rightarrow A,a\otimes a'\mapsto a\cdot a'$.
Let $S$ be a scheme obtained by identifying two affine lines through the identity map except at the origins (see gluing scheme#Examples). It is not separated.[1] Indeed, the image of the diagonal morphism $S\to S\times S$ has two origins, while its closure contains four origins.
Use in intersection theory
A classic way to define the intersection product of algebraic cycles $A,B$ on a smooth variety X is by intersecting (restricting) their cartesian product with (to) the diagonal: precisely,
$A\cdot B=\delta ^{*}(A\times B)$
where $\delta ^{*}$ is the pullback along the diagonal embedding $\delta :X\to X\times X$.
See also
• regular embedding
• Diagonal morphism
References
1. Hartshorne 1977, Example 4.0.1.
• Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
|
Wikipedia
|
Hyperplane separation theorem
In geometry, the hyperplane separation theorem is a theorem about disjoint convex sets in n-dimensional Euclidean space. There are several rather similar versions. In one version of the theorem, if both these sets are closed and at least one of them is compact, then there is a hyperplane in between them and even two parallel hyperplanes in between them separated by a gap. In another version, if both disjoint convex sets are open, then there is a hyperplane in between them, but not necessarily any gap. An axis which is orthogonal to a separating hyperplane is a separating axis, because the orthogonal projections of the convex bodies onto the axis are disjoint.
Illustration of the hyperplane separation theorem.
The hyperplane separation theorem is due to Hermann Minkowski. The Hahn–Banach separation theorem generalizes the result to topological vector spaces.
A related result is the supporting hyperplane theorem.
In the context of support-vector machines, the optimally separating hyperplane or maximum-margin hyperplane is a hyperplane which separates two convex hulls of points and is equidistant from the two.[1][2][3]
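As a concrete illustration of this use, a hard-margin linear SVM recovers such a hyperplane for linearly separable data. The sketch below approximates the hard-margin case in scikit-learn with a very large penalty parameter; the data and parameter choices are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable point clouds.
A = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])
B = np.array([[3.0, 3.0], [4.0, 3.5], [3.5, 4.0]])
X = np.vstack([A, B])
y = np.array([0] * len(A) + [1] * len(B))

# A very large C approximates the hard-margin (maximum-margin) hyperplane.
clf = SVC(kernel="linear", C=1e10).fit(X, y)
v, c = clf.coef_[0], -clf.intercept_[0]        # hyperplane: <x, v> = c
print(v, c)
print(clf.predict([[0.5, 0.5], [3.5, 3.5]]))   # [0 1]
```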
Statements and proof
In all cases, assume $A,B$ to be disjoint, nonempty, and convex subsets of $\mathbb {R} ^{n}$. The results are summarized in the following table:

$A$ | $B$ | $\langle x,v\rangle $ | $\langle y,v\rangle $
(any) | (any) | $\geq c$ | $\leq c$
closed compact | closed | $>c_{1}$ | $<c_{2}$ with $c_{2}<c_{1}$
closed | closed compact | $>c_{1}$ | $<c_{2}$ with $c_{2}<c_{1}$
open | (any) | $>c$ | $\leq c$
open | open | $>c$ | $<c$
Hyperplane separation theorem[4] — Let $A$ and $B$ be two disjoint nonempty convex subsets of $\mathbb {R} ^{n}$. Then there exist a nonzero vector $v$ and a real number $c$ such that
$\langle x,v\rangle \geq c\,{\text{ and }}\langle y,v\rangle \leq c$
for all $x$ in $A$ and $y$ in $B$; i.e., the hyperplane $\langle \cdot ,v\rangle =c$, $v$ the normal vector, separates $A$ and $B$.
If both sets are closed, and at least one of them is compact, then the separation can be strict, that is, $\langle x,v\rangle >c_{1}\,{\text{ and }}\langle y,v\rangle <c_{2}$ for some $c_{1}>c_{2}$.
The number of dimensions must be finite. In infinite-dimensional spaces there are examples of two closed, convex, disjoint sets which cannot be separated by a closed hyperplane (a hyperplane where a continuous linear functional equals some constant) even in the weak sense where the inequalities are not strict.[5]
Here, the compactness in the hypothesis cannot be relaxed; see an example in the section Counterexamples and uniqueness. This version of the separation theorem does generalize to infinite dimensions; the generalization is more commonly known as the Hahn–Banach separation theorem.
The proof is based on the following lemma:
Lemma — Let $A$ and $B$ be two disjoint closed subsets of $\mathbb {R} ^{n}$, and assume $A$ is compact. Then there exist points $a_{0}\in A$ and $b_{0}\in B$ minimizing the distance $\|a-b\|$ over $a\in A$ and $b\in B$.
Proof of lemma
Let $a\in A$ and $b\in B$ be any pair of points, and let $r_{1}=\|b-a\|$. Since $A$ is compact, it is contained in some ball centered on $a$; let the radius of this ball be $r_{2}$. Let $S=B\cap {\overline {B_{r_{1}+r_{2}}(a)}}$ be the intersection of $B$ with a closed ball of radius $r_{1}+r_{2}$ around $a$. Then $S$ is compact and nonempty because it contains $b$. Since the distance function is continuous, there exist points $a_{0}$ and $b_{0}$ whose distance $\|a_{0}-b_{0}\|$ is the minimum over all pairs of points in $A\times S$. It remains to show that $a_{0}$ and $b_{0}$ in fact have the minimum distance over all pairs of points in $A\times B$. Suppose for contradiction that there exist points $a'$ and $b'$ such that $\|a'-b'\|<\|a_{0}-b_{0}\|$. Then in particular, $\|a'-b'\|<r_{1}$, and by the triangle inequality, $\|a-b'\|\leq \|a'-b'\|+\|a-a'\|<r_{1}+r_{2}$. Therefore $b'$ is contained in $S$, which contradicts the fact that $a_{0}$ and $b_{0}$ had minimum distance over $A\times S$. $\square $
Proof of theorem
We first prove the second case. (See the diagram.)
WLOG, $A$ is compact. By the lemma, there exist points $a_{0}\in A$ and $b_{0}\in B$ of minimum distance to each other. Since $A$ and $B$ are disjoint, we have $a_{0}\neq b_{0}$. Now, construct two hyperplanes $L_{A},L_{B}$ perpendicular to line segment $[a_{0},b_{0}]$, with $L_{A}$ across $a_{0}$ and $L_{B}$ across $b_{0}$. We claim that neither $A$ nor $B$ enters the space between $L_{A},L_{B}$, and thus the perpendicular hyperplanes to $(a_{0},b_{0})$ satisfy the requirement of the theorem.
Algebraically, the hyperplanes $L_{A},L_{B}$ are defined by the vector $v:=b_{0}-a_{0}$, and two constants $c_{A}:=\langle v,a_{0}\rangle <c_{B}:=\langle v,b_{0}\rangle $, such that $L_{A}=\{x:\langle v,x\rangle =c_{A}\},L_{B}=\{x:\langle v,x\rangle =c_{B}\}$. Our claim is that $\forall a\in A,\langle v,a\rangle \leq c_{A}$ and $\forall b\in B,\langle v,b\rangle \geq c_{B}$.
Suppose for contradiction that there is some $a\in A$ such that $\langle v,a\rangle >c_{A}$, and let $a'$ be the foot of the perpendicular from $b_{0}$ to the line segment $[a_{0},a]$. Since $A$ is convex, $a'$ is inside $A$, and by planar geometry, $a'$ is closer to $b_{0}$ than $a_{0}$ is, a contradiction. A similar argument applies to $B$.
Now for the first case.
Approach both $A,B$ from the inside by $A_{1}\subseteq A_{2}\subseteq \cdots \subseteq A$ and $B_{1}\subseteq B_{2}\subseteq \cdots \subseteq B$, such that each $A_{k},B_{k}$ is closed and compact, and the unions are the relative interiors $\mathrm {relint} (A),\mathrm {relint} (B)$. (See relative interior page for details.)
Now by the second case, for each pair $A_{k},B_{k}$ there exists some unit vector $v_{k}$ and real number $c_{k}$, such that $\langle v_{k},A_{k}\rangle <c_{k}<\langle v_{k},B_{k}\rangle $.
Since the unit sphere is compact, we can take a convergent subsequence, so that $v_{k}\to v$. Let $c_{A}:=\sup _{a\in A}\langle v,a\rangle ,c_{B}:=\inf _{b\in B}\langle v,b\rangle $. We claim that $c_{A}\leq c_{B}$, thus separating $A,B$.
Assume not, then there exists some $a\in A,b\in B$ such that $\langle v,a\rangle >\langle v,b\rangle $, then since $v_{k}\to v$, for large enough $k$, we have $\langle v_{k},a\rangle >\langle v_{k},b\rangle $, contradiction.
Since a separating hyperplane cannot intersect the interiors of open convex sets, we have a corollary:
Separation theorem I — Let $A$ and $B$ be two disjoint nonempty convex sets. If $A$ is open, then there exist a nonzero vector $v$ and real number $c$ such that
$\langle x,v\rangle >c\,{\text{ and }}\langle y,v\rangle \leq c$
for all $x$ in $A$ and $y$ in $B$. If both sets are open, then there exist a nonzero vector $v$ and real number $c$ such that
$\langle x,v\rangle >c\,{\text{ and }}\langle y,v\rangle <c$
for all $x$ in $A$ and $y$ in $B$.
Case with possible intersections
If the sets $A,B$ have possible intersections, but their relative interiors are disjoint, then the proof of the first case still applies with no change, thus yielding:
Separation theorem II — Let $A$ and $B$ be two nonempty convex subsets of $\mathbb {R} ^{n}$ with disjoint relative interiors. Then there exist a nonzero vector $v$ and a real number $c$ such that
$\langle x,v\rangle \geq c\,{\text{ and }}\langle y,v\rangle \leq c$
for all $x$ in $A$ and $y$ in $B$. In particular, we have the supporting hyperplane theorem.
Supporting hyperplane theorem — if $A$ is a convex set in $\mathbb {R} ^{n},$ and $a_{0}$ is a point on the boundary of $A$, then there exists a supporting hyperplane of $A$ containing $a_{0}$.
Proof
If the affine span of $A$ is not all of $\mathbb {R} ^{n}$, then extend the affine span to a supporting hyperplane. Else, $\mathrm {relint} (A)=\mathrm {int} (A)$ is disjoint from $\mathrm {relint} (\{a_{0}\})=\{a_{0}\}$, so apply the above theorem.
Converse of theorem
Note that the existence of a hyperplane that only "separates" two convex sets in the weak sense of both inequalities being non-strict obviously does not imply that the two sets are disjoint. Both sets could have points located on the hyperplane.
Counterexamples and uniqueness
If one of A or B is not convex, then there are many possible counterexamples. For example, A and B could be concentric circles. A more subtle counterexample is one in which A and B are both closed but neither one is compact. For example, if A is a closed half plane and B is bounded by one arm of a hyperbola, then there is no strictly separating hyperplane:
$A=\{(x,y):x\leq 0\}$
$B=\{(x,y):x>0,y\geq 1/x\}.$
(Although, by an instance of the second theorem, there is a hyperplane that separates their interiors.) Another type of counterexample has A compact and B open. For example, A can be a closed square and B can be an open square that touches A.
In the first version of the theorem, evidently the separating hyperplane is never unique. In the second version, it may or may not be unique. Technically a separating axis is never unique because it can be translated; in the second version of the theorem, a separating axis can be unique up to translation.
The horn angle provides a good counterexample to many hyperplane separations. For example, in $\mathbb {R} ^{2}$, the unit disk is disjoint from the open interval $((1,0),(1,1))$, but the only line separating them contains the entirety of $((1,0),(1,1))$. This shows that if $A$ is closed and $B$ is relatively open, then there does not necessarily exist a separation that is strict for $B$. However, if $A$ is a closed polytope then such a separation exists.[6]
More variants
Farkas' lemma and related results can be understood as hyperplane separation theorems when the convex bodies are defined by finitely many linear inequalities.
More results may be found in Stoer & Witzgall.[6]
Use in collision detection
In collision detection, the hyperplane separation theorem is usually used in the following form:
Separating axis theorem — Two closed convex objects are disjoint if there exists a line ("separating axis") onto which the two objects' projections are disjoint.
Regardless of dimensionality, the separating axis is always a line. For example, in 3D, the space is separated by planes, but the separating axis is perpendicular to the separating plane.
The separating axis theorem can be applied for fast collision detection between polygon meshes. Each face's normal or other feature direction is used as a separating axis. Note that this yields possible separating axes, not separating lines/planes.
In 3D, using face normals alone will fail to separate some edge-on-edge non-colliding cases. Additional axes, consisting of the cross-products of pairs of edges, one taken from each object, are required.[7]
For increased efficiency, parallel axes may be calculated as a single axis.
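A minimal 2-D sketch of the test for convex polygons follows; it uses each edge normal as a candidate axis and reports a collision only if the projections overlap on every axis (the function names are illustrative, not from the cited source):

```python
def project(polygon, axis):
    """Interval of the polygon's projection onto the axis (no normalization needed)."""
    dots = [x * axis[0] + y * axis[1] for x, y in polygon]
    return min(dots), max(dots)

def edge_normals(polygon):
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        yield (y1 - y2, x2 - x1)              # perpendicular to edge i

def convex_polygons_collide(p, q):
    """Separating axis test: p and q are disjoint iff some edge normal
    yields disjoint projections."""
    for axis in list(edge_normals(p)) + list(edge_normals(q)):
        (pmin, pmax), (qmin, qmax) = project(p, axis), project(q, axis)
        if pmax < qmin or qmax < pmin:        # gap found: this axis separates them
            return False
    return True

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangle = [(2, 0), (3, 0), (2.5, 1)]
print(convex_polygons_collide(square, triangle))  # False: separated along x
```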
See also
• Dual cone
• Farkas's lemma
• Kirchberger's theorem
• Optimal control
Notes
1. Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome (2008). The Elements of Statistical Learning : Data Mining, Inference, and Prediction (PDF) (Second ed.). New York: Springer. pp. 129–135.
2. Witten, Ian H.; Frank, Eibe; Hall, Mark A.; Pal, Christopher J. (2016). Data Mining: Practical Machine Learning Tools and Techniques (Fourth ed.). Morgan Kaufmann. pp. 253–254. ISBN 9780128043578.
3. Deisenroth, Marc Peter; Faisal, A. Aldo; Ong, Cheng Soon (2020). Mathematics for Machine Learning. Cambridge University Press. pp. 337–338. ISBN 978-1-108-45514-5.
4. Boyd & Vandenberghe 2004, Exercise 2.22.
5. Haïm Brezis, Analyse fonctionnelle : théorie et applications, 1983, remarque 4, p. 7.
6. Stoer, Josef; Witzgall, Christoph (1970). Convexity and Optimization in Finite Dimensions I. Springer Berlin, Heidelberg. (2.12.9). doi:10.1007/978-3-642-46216-0. ISBN 978-3-642-46216-0.
7. "Advanced vector math".
References
• Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization (PDF). Cambridge University Press. ISBN 978-0-521-83378-3.
• Golshtein, E. G.; Tretyakov, N.V. (1996). Modified Lagrangians and monotone maps in optimization. New York: Wiley. p. 6. ISBN 0-471-54821-9.
• Shimizu, Kiyotaka; Ishizuka, Yo; Bard, Jonathan F. (1997). Nondifferentiable and two-level mathematical programming. Boston: Kluwer Academic Publishers. p. 19. ISBN 0-7923-9821-1.
• Soltan, V. (2021). Support and separation properties of convex sets in finite dimension. Extracta Math. Vol. 36, no. 2, 241-278.
External links
• Collision detection and response
|
Wikipedia
|
Separating set
In mathematics, a set $S$ of functions with domain $D$ is called a separating set for $D$ and is said to separate the points of $D$ (or just to separate points) if for any two distinct elements $x$ and $y$ of $D,$ there exists a function $f\in S$ such that $f(x)\neq f(y).$[1]
Separating sets can be used to formulate a version of the Stone–Weierstrass theorem for real-valued functions on a compact Hausdorff space $X,$ with the topology of uniform convergence. It states that any subalgebra of this space of functions is dense if and only if it separates points. This is the version of the theorem originally proved by Marshall H. Stone.[1]
Examples
• The singleton set consisting of the identity function on $\mathbb {R} $ separates the points of $\mathbb {R} .$
• If $X$ is a T1 normal topological space, then Urysohn's lemma states that the set $C(X)$ of continuous functions on $X$ with real (or complex) values separates points on $X.$
• If $X$ is a locally convex Hausdorff topological vector space over $\mathbb {R} $ or $\mathbb {C} ,$ then the Hahn–Banach separation theorem implies that continuous linear functionals on $X$ separate points.
See also
• Dual system
References
1. Carothers, N. L. (2000), Real Analysis, Cambridge University Press, pp. 201–204, ISBN 9781139643160.
|
Wikipedia
|
Separation oracle
A separation oracle (also called a cutting-plane oracle) is a concept in the mathematical theory of convex optimization. It is a method to describe a convex set that is given as an input to an optimization algorithm. Separation oracles are used as input to ellipsoid methods.[1]: 87, 96, 98
Definition
Let K be a convex and compact set in Rn. A strong separation oracle for K is an oracle (black box) that, given a vector y in Rn, returns one of the following:[1]: 48
• Assert that y is in K.
• Find a hyperplane that separates y from K: a vector a in Rn, such that $a\cdot y>a\cdot x$ for all x in K.
A strong separation oracle is completely accurate, and thus may be hard to construct. For practical reasons, a weaker version is considered, which allows for small errors in the boundary of K and the inequalities. Given a small error tolerance d>0, we say that:
• A vector y is d-near K if its Euclidean distance from K is at most d;
• A vector y is d-deep in K if it is in K, and its Euclidean distance from any point outside K is at least d.
The weak version also considers rational numbers, which have a representation of finite length, rather than arbitrary real numbers. A weak separation oracle for K is an oracle that, given a vector y in Qn and a rational number d>0, returns one of the following:[1]: 51
• Assert that y is d-near K;
• Find a vector a in Qn, normalized such that its maximum element is 1, such that $a\cdot y+d\geq a\cdot x$ for all x that are d-deep in K.
Implementation
A special case of a convex set is a set represented by linear inequalities: $K=\{x|Ax\leq b\}$. Such a set is called a convex polytope. A strong separation oracle for a convex polytope can be implemented, but its run-time depends on the input format.
Representation by inequalities
If the matrix A and the vector b are given as input, so that $K=\{x|Ax\leq b\}$, then a strong separation oracle can be implemented as follows.[2] Given a point y, compute $Ay$:
• If the outcome is at most $b$, then y is in K by definition;
• Otherwise, there is at least one row $c$ of A, such that $c\cdot y$ is larger than the corresponding value in $b$; this row $c$ gives us the separating hyperplane, as $c\cdot y>b\geq c\cdot x$ for all x in K.
This oracle runs in polynomial time as long as the number of constraints is polynomial.
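A direct transcription of this oracle might look like the following NumPy sketch (the function name and example data are illustrative assumptions):

```python
import numpy as np

def separation_oracle(A, b, y):
    """Strong separation oracle for K = {x : Ax <= b}: returns (True, None)
    if y is in K, else (False, c) for a row c with c.y > c.x for all x in K."""
    violations = A @ y - b
    j = int(np.argmax(violations))
    if violations[j] <= 0:
        return True, None            # every constraint holds, so y is in K
    return False, A[j]               # row j satisfies A[j].y > b[j] >= A[j].x on K

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, 0.0])
print(separation_oracle(A, b, np.array([0.5, 0.25])))  # (True, None)
print(separation_oracle(A, b, np.array([2.0, 0.0])))   # (False, array([1., 0.]))
```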
Representation by vertices
Suppose the set of vertices of K is given as an input, so that $K={\text{conv}}(v_{1},\ldots ,v_{k})=$ the convex hull of its vertices. Then, deciding whether y is in K requires checking whether y is a convex combination of the input vectors, that is, whether there exist coefficients z1,...,zk such that: [1]: 49
• $z_{1}\cdot v_{1}+\cdots +z_{k}\cdot v_{k}=y$;
• $z_{1}+\cdots +z_{k}=1$;
• $0\leq z_{i}\leq 1$ for all i in 1,...,k.
This is a linear program with k variables and n+1 equality constraints (one for each element of y, plus one for the sum of the coefficients). If y is not in K, then the above program has no solution, and the separation oracle needs to find a vector c such that
• $c\cdot y>c\cdot v_{i}$ for all i in 1,...,k.
Note that the two above representations can be very different in size: it is possible that a polytope can be represented by a small number of inequalities, but has exponentially many vertices (for example, an n-dimensional cube). Conversely, it is possible that a polytope has a small number of vertices, but requires exponentially many inequalities (for example, the convex hull of the 2n vectors of the form (0,...,±1,...,0)).
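The membership half of the vertex-representation oracle can be phrased as the feasibility LP described above and handed to an off-the-shelf solver. The sketch below uses SciPy's linprog and only decides membership; a full separation oracle would additionally extract a separating vector c from a Farkas certificate of infeasibility, which is omitted here (the function name and data are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(vertices, y):
    """Decide whether y is a convex combination of the given vertices by
    solving the feasibility LP described above (with a zero objective)."""
    V = np.asarray(vertices, dtype=float).T          # columns are the v_i
    k = V.shape[1]
    A_eq = np.vstack([V, np.ones((1, k))])           # sum z_i v_i = y, sum z_i = 1
    b_eq = np.append(np.asarray(y, dtype=float), 1.0)
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * k)
    return res.status == 0                           # status 0: a feasible z exists

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(in_convex_hull(square, (0.5, 0.5)))  # True
print(in_convex_hull(square, (2.0, 0.0)))  # False
```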
Problem-specific representation
In some linear optimization problems, even though the number of constraints is exponential, one can still write a custom separation oracle that works in polynomial time. Some examples are:
• The minimum-cost arborescence problem: given a weighted directed graph and a vertex r in it, find a subgraph of minimum cost that contains a directed path from r to any other vertex. The problem can be presented as an LP with a constraint for each subset of vertices, which is an exponential number of constraints. However, a separation oracle can be implemented using n-1 applications of the minimum cut procedure.[3]
• The maximum independent set problem. It can be approximated by an LP with a constraint for every odd-length cycle. While there are exponentially many such cycles, a separation oracle that works in polynomial time can be implemented by just finding an odd cycle of minimum length, which can be done in polynomial time.[3]
• The dual of the configuration linear program for the bin packing problem. It can be approximated by an LP with a constraint for each feasible configuration. While there are exponentially many such configurations, a separation oracle that works in pseudopolynomial time can be implemented by solving a knapsack problem. This is used by the Karmarkar-Karp bin packing algorithms.
Non-linear sets
Let f be a convex function on Rn. The set $K=\{(x,t)|f(x)\leq t\}$ is a convex set in Rn+1. Given an evaluation oracle for f (a black box that returns the value of f for every given point), one can easily check whether a vector (y, t) is in K. In order to get a separation oracle, we also need an oracle to evaluate the subgradient of f.[1]: 49 Suppose some vector (y, s) is not in K, so f(y) > s. Let g be the subgradient of f at y (g is a vector in Rn), and denote $c:=(g,-1)$. Then $c\cdot (y,s)=g\cdot y-s>g\cdot y-f(y)$, and for all (x, t) in K: $c\cdot (x,t)=g\cdot x-t\leq g\cdot x-f(x)$. By definition of a subgradient: $f(x)\geq f(y)+g\cdot (x-y)$ for all x in Rn. Therefore, $g\cdot y-f(y)\geq g\cdot x-f(x)$, so $c\cdot (y,s)>c\cdot (x,t)$, and c represents a separating hyperplane.
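Written out in code, this epigraph oracle might look like the following sketch (the convex function, its gradient, and the names are illustrative assumptions):

```python
import numpy as np

def epigraph_separation_oracle(f, grad_f, y, s):
    """Separation oracle for K = {(x, t) : f(x) <= t}, given evaluation and
    (sub)gradient oracles for the convex function f."""
    if f(y) <= s:
        return True, None                  # (y, s) lies in the epigraph K
    g = grad_f(y)
    return False, np.append(g, -1.0)       # c = (g, -1) separates (y, s) from K

f = lambda x: float(np.dot(x, x))          # f(x) = ||x||^2 is convex
grad_f = lambda x: 2.0 * np.asarray(x)

print(epigraph_separation_oracle(f, grad_f, np.array([1.0, 1.0]), 3.0))
# (True, None): f(1,1) = 2 <= 3
print(epigraph_separation_oracle(f, grad_f, np.array([1.0, 1.0]), 1.0))
# (False, array([ 2.,  2., -1.])): f(1,1) = 2 > 1
```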
Usage
A strong separation oracle can be given as an input to the ellipsoid method for solving a linear program. Consider the linear program ${\text{maximize}}~~c\cdot x~~{\text{subject to}}~~Ax\leq b,x\geq 0$. The ellipsoid method maintains an ellipsoid that initially contains the entire feasible domain $Ax\leq b$. At each iteration t, it takes the center $x_{t}$ of the current ellipsoid, and sends it to the separation oracle:
• If the oracle says that $x_{t}$ is feasible (that is, contained in the set $Ax\leq b$), then we do an "optimality cut" at $x_{t}$: we cut from the ellipsoid all points x for which $c\cdot x<c\cdot x_{t}$. These points are definitely not optimal.
• If the oracle says that $x_{t}$ is infeasible, then it typically returns a specific constraint that is violated by $x_{t}$, that is, a row $a_{j}$ in the matrix A, such that $a_{j}\cdot x_{t}>b_{j}$. Since $a_{j}\cdot x\leq b_{j}$ for all feasible x, this implies that $a_{j}\cdot x_{t}>a_{j}\cdot x$ for all feasible x. Then, we do a "feasibility cut" at $x_{t}$: we cut from the ellipsoid all points y for which $a_{j}\cdot y>a_{j}\cdot x_{t}$. These points are definitely not feasible.
After making a cut, we construct a new, smaller ellipsoid that contains the remaining region. It can be shown that this process converges to an approximate solution in time polynomial in the required accuracy.
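A bare-bones sketch of this loop follows, under our own interface assumptions: the oracle returns None for feasible points and a violated row a otherwise; central-cut updates are used; the dimension is at least 2; and there is no stopping criterion beyond an iteration cap. A real implementation adds tolerances and guards against numerical drift.

```python
import numpy as np

def ellipsoid_method(oracle, c, x0, R, iters=200):
    """Sketch: maximize c.x over a feasible set given a separation oracle.

    The current ellipsoid is {z : (z-x)^T P^{-1} (z-x) <= 1}; each cut
    keeps the half-space {z : g.(z - x) <= 0} and the update yields the
    smallest ellipsoid containing the remaining half-ellipsoid.
    """
    c = np.asarray(c, dtype=float)
    x = np.asarray(x0, dtype=float)
    n = len(x)
    P = (R ** 2) * np.eye(n)       # start: ball of radius R around x0
    best, best_val = None, -np.inf
    for _ in range(iters):
        a = oracle(x)
        if a is None:              # feasible: optimality cut,
            if c @ x > best_val:   # keep {z : c.z >= c.x}
                best, best_val = x.copy(), float(c @ x)
            g = -c
        else:                      # infeasible: feasibility cut,
            g = np.asarray(a, float)  # keep {z : a.z <= a.x}
        Pg = P @ g
        b = Pg / np.sqrt(g @ Pg)
        x = x - b / (n + 1)
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(b, b))
    return best
```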
Converting a weak oracle to a strong oracle
Given a weak separation oracle for a polyhedron, it is possible to construct a strong separation oracle by a careful method of rounding, or by Diophantine approximation.[1]: 159
Relation to other oracles
Membership oracle
A membership oracle is a weaker variant of the separation oracle, which does not require the separating hyperplane.
Formally, a strong membership oracle for K is an oracle that, given a vector y in Rn, returns one of the following:
• Assert that y is in K.
• Assert that y is not in K.
Obviously, a strong separation oracle implies a strong membership oracle.
A weak membership oracle for K is an oracle that, given a vector y in Qn and a rational number d>0, returns one of the following:
• Assert that y is d-near K;
• Assert that y is not d-deep in K.
Using a weak separation oracle (WSO), one can construct a weak membership oracle as follows.
• Run WSO(K,y,d). If it returns "yes", then return "yes".
• Otherwise, run WSO(K,y,d/3). If it returns "yes", then return "yes".
• Otherwise, return "no"; see [1]: 52 for proof of correctness.
Optimization oracle
An optimization oracle is an oracle which finds an optimal point in a given convex set.
Formally, a strong optimization oracle for K is an oracle that, given a vector c in Rn, returns one of the following:
• Assert that K is empty.
• Find a vector y in K that maximizes $c\cdot y$ (that is, $c\cdot y\geq c\cdot x$ for all x in K).
A weak optimization oracle for K is an oracle that, given a vector c in Qn and a rational number d>0, returns one of the following:
• Assert that no point is d-deep in K;
• Find a vector y that is d-near K, such that $c\cdot y+d\geq c\cdot x$ for all x that are d-deep in K.
Violation oracle
A strong violation oracle for K is an oracle that, given a vector c in Rn and a real number r, returns one of the following:
• Assert that $c\cdot x\leq r$ for all x in K;
• Find a vector y in K with $c\cdot y>r$.
A weak violation oracle for K is an oracle that, given a vector c in Qn, a rational number r, and a rational number d>0, returns one of the following:
• Assert that $c\cdot x\leq r+d$ for all x that are d-deep in K;
• Find a vector y that is d-near K, such that $c\cdot y+d\geq r$.
A major result in convex optimization is that, for any "well-described" polyhedron, the strong-separation problem, the strong-optimization problem and the strong-violation problem are polynomial-time-equivalent. That is, given an oracle for any one of these three problems, the other two problems can be solved in polynomial time.[1]: 158 A polyhedron is called "well-described" if the input contains n (the number of dimensions of the space it lies in) and a number p such that K can be defined by linear inequalities with encoding-length at most p.[1]: 163 The result is proved using the ellipsoid method.
References
1. M. Grötschel, L. Lovász, A. Schrijver: Geometric Algorithms and Combinatorial Optimization, Springer, 1988.
2. "MIT 6.854 Spring 2016 Lecture 12: From Separation to Optimization and Back; Ellipsoid Method - YouTube". www.youtube.com. Retrieved 2021-01-03.
3. Vempala, Santosh (2016). "Separation oracle" (PDF).
Separation relation
In mathematics, a separation relation is a formal way to arrange a set of objects in an unoriented circle. It is defined as a quaternary relation S(a, b, c, d) satisfying certain axioms, which is interpreted as asserting that a and c separate b from d.[1]
Whereas a linear order endows a set with a positive end and a negative end, a separation relation forgets not only which end is which, but also where the ends are located. In this way it is a final, further weakening of the concepts of a betweenness relation and a cyclic order. There is nothing else that can be forgotten: up to the relevant sense of interdefinability, these three relations are the only nontrivial reducts of the ordered set of rational numbers.[2]
Application
The separation relation may be used in showing that the real projective plane is a complete space. The separation relation was axiomatized in 1898 by Giovanni Vailati:[3]
• abcd = badc
• abcd = adcb
• abcd ⇒ ¬ acbd
• abcd ∨ acdb ∨ adbc
• abcd ∧ acde ⇒ abde.
The relation of separation of points was written AC//BD by H. S. M. Coxeter in his textbook The Real Projective Plane.[4] The axiom of continuity used is "Every monotonic sequence of points has a limit." The separation relation is used to provide definitions:
• {An} is monotonic ≡ ∀ n > 1 $A_{0}A_{n}//A_{1}A_{n+1}.$
• M is a limit ≡ (∀ n > 2 $A_{1}A_{n}//A_{2}M$) ∧ (∀ P $A_{1}P//A_{2}M$ ⇒ ∃ n $A_{1}A_{n}//PM$ ).
References
1. Huntington, Edward V. (July 1935), "Inter-Relations Among the Four Principal Types of Order" (PDF), Transactions of the American Mathematical Society, 38 (1): 1–9, doi:10.1090/S0002-9947-1935-1501800-1, retrieved 8 May 2011
2. Macpherson, H. Dugald (2011), "A survey of homogeneous structures" (PDF), Discrete Mathematics, 311 (15): 1599–1634, doi:10.1016/j.disc.2011.01.024, retrieved 28 April 2011
3. Bertrand Russell (1903) Principles of Mathematics, page 214
4. H. S. M. Coxeter (1949) The Real Projective Plane, Chapter 10: Continuity, McGraw Hill
Separation test
A separation test is a statistical procedure for early-phase research, to decide whether to pursue further research. It is designed to avoid the prevalent situation in early-phase research, when a statistically underpowered test gives a negative result.
Readings
• Aickin M. (2004) "Separation Tests for Early-Phase Complementary and Alternative Medicine Comparative Trials". Evidence-Based Integrative Medicine, 1(4), 225–231
Generator (category theory)
In mathematics, specifically category theory, a family of generators (or family of separators) of a category ${\mathcal {C}}$ is a collection ${\mathcal {G}}\subseteq Ob({\mathcal {C}})$ of objects in ${\mathcal {C}}$, such that for any two distinct morphisms $f,g:X\to Y$ in ${\mathcal {C}}$, that is with $f\neq g$, there is some $G$ in ${\mathcal {G}}$ and some morphism $h:G\to X$ such that $f\circ h\neq g\circ h.$ If the collection consists of a single object $G$, we say it is a generator (or separator).
Generators are central to the definition of Grothendieck categories.
The dual concept is called a cogenerator or coseparator.
Examples
• In the category of abelian groups, the group of integers $\mathbf {Z} $ is a generator: if f and g are different, then there is an element $x\in X$ such that $f(x)\neq g(x)$. Hence the map $h\colon \mathbf {Z} \rightarrow X,$ $n\mapsto n\cdot x$ satisfies $f\circ h\neq g\circ h$.
• Similarly, the one-point set is a generator for the category of sets. In fact, any nonempty set is a generator.
• In the category of sets, any set with at least two elements is a cogenerator.
• In the category of modules over a ring R, a module G is a generator if and only if some finite direct sum of copies of G contains an isomorphic copy of R as a direct summand. Consequently, a generator module is faithful, i.e. has zero annihilator.
References
• Mac Lane, Saunders (1998), Categories for the Working Mathematician (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-98403-2, p. 123, section V.7
External links
• separator at the nLab
Separoid
In mathematics, a separoid is a binary relation between disjoint sets which is stable as an ideal in the canonical order induced by inclusion. Many mathematical objects which appear to be quite different find a common generalisation in the framework of separoids; e.g., graphs, configurations of convex sets, oriented matroids, and polytopes. Any countable category is an induced subcategory of separoids when they are endowed with homomorphisms[1] (viz., mappings that preserve the so-called minimal Radon partitions).
In this general framework, some results and invariants of different categories turn out to be special cases of the same aspect; e.g., the pseudoachromatic number from graph theory and the Tverberg theorem from combinatorial convexity are simply two faces of the same aspect, namely, complete colouring of separoids.
The axioms
A separoid[2] is a set $S$ endowed with a binary relation $\mid \ \subseteq 2^{S}\times 2^{S}$ on its power set, which satisfies the following simple properties for $A,B\subseteq S$:
$A\mid B\Leftrightarrow B\mid A,$
$A\mid B\Rightarrow A\cap B=\varnothing ,$
$A\mid B{\hbox{ and }}A'\subset A\Rightarrow A'\mid B.$
A related pair $A\mid B$ is called a separation and we often say that A is separated from B. It is enough to know the maximal separations to reconstruct the separoid.
A mapping $\varphi \colon S\to T$ is a morphism of separoids if the preimages of separations are separations; that is, for $A,B\subseteq T$
$A\mid B\Rightarrow \varphi ^{-1}(A)\mid \varphi ^{-1}(B).$
Examples
Examples of separoids can be found in almost every branch of mathematics.[3][4][5] Here we list just a few.
1. Given a graph G=(V,E), we can define a separoid on its vertices by saying that two (disjoint) subsets of V, say A and B, are separated if there are no edges going from one to the other (see the sketch after this list); i.e.,
$A\mid B\Leftrightarrow \forall a\in A{\hbox{ and }}b\in B\colon ab\not \in E.$
2. Given an oriented matroid[5] M = (E,T), given in terms of its topes T, we can define a separoid on E by saying that two subsets are separated if they are contained in opposite signs of a tope. In other words, the topes of an oriented matroid are the maximal separations of a separoid. This example includes, of course, all directed graphs.
3. Given a family of objects in a Euclidean space, we can define a separoid in it by saying that two subsets are separated if there exists a hyperplane that separates them, i.e., leaving them on the two opposite sides of it.
4. Given a topological space, we can define a separoid saying that two subsets are separated if there exist two disjoint open sets which contain them (one for each of them).
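A minimal sketch of the relation from example 1 (brute force over vertex pairs; names are illustrative):

```python
def separated(A, B, edges):
    """Separoid relation induced by a graph (example 1 above):
    disjoint vertex sets A and B are separated iff no edge joins them."""
    if A & B:
        return False               # the relation requires disjointness
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((a, b)) not in edge_set for a in A for b in B)

# Path graph 1 - 2 - 3: {1} and {3} are separated, {1} and {2} are not.
E = [(1, 2), (2, 3)]
print(separated({1}, {3}, E))  # True
print(separated({1}, {2}, E))  # False
```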
The basic lemma
Every separoid can be represented with a family of convex sets in some Euclidean space and their separations by hyperplanes.
References
1. Strausz, Ricardo (1 March 2007). "Homomorphisms of separoids". Electronic Notes in Discrete Mathematics. 28: 461–468. doi:10.1016/j.endm.2007.01.064. Zbl 1291.05036.
2. Strausz, Ricardo (2005). "Separoids and a Tverberg-type problem". Geombinatorics. 15 (2): 79–92. Zbl 1090.52005.
3. Arocha, Jorge Luis; Bracho, Javier; Montejano, Luis; Oliveros, Deborah; Strausz, Ricardo (2002). "Separoids, their categories and a Hadwiger-type theorem for transversals". Discrete and Computational Geometry. 27 (3): 377–385. doi:10.1007/s00454-001-0075-2.
4. Nešetřil, Jaroslav; Strausz, Ricardo (2006). "Universality of separoids" (PDF). Archivum Mathematicum (Brno). 42 (1): 85–101.
5. Montellano-Ballesteros, Juan José; Strausz, Ricardo (July 2006). "A characterization of cocircuit graphs of uniform oriented matroids". Journal of Combinatorial Theory. Series B. 96 (4): 445–454. doi:10.1016/j.jctb.2005.09.008. Zbl 1109.52016.
Further reading
• Strausz, Ricardo (1998). "Separoides". Situs, Serie B, No 5. Universidad Nacional Autónoma de México.
• Montellano-Ballesteros, Juan José; Por, Attila; Strausz, Ricardo (2006). "Tverberg-type theorems for separoids". Discrete and Computational Geometry. 35 (3): 513–523. doi:10.1007/s00454-005-1229-4.
• Bracho, Javier; Strausz, Ricardo (2006). "Two geometric representations of separoids". Periodica Mathematica Hungarica. 53 (1–2): 115–120. doi:10.1007/s10998-006-0025-0.
• Strausz, Ricardo (2008). "Erdös-Szekeres 'happy end'-type theorems for separoids". European Journal of Combinatorics. 29 (4): 1076–1085. doi:10.1016/j.ejc.2007.11.011.
Separable space
In mathematics, a topological space is called separable if it contains a countable, dense subset; that is, there exists a sequence $\{x_{n}\}_{n=1}^{\infty }$ of elements of the space such that every nonempty open subset of the space contains at least one element of the sequence.
Not to be confused with Separated space or Separation axiom.
Like the other axioms of countability, separability is a "limitation on size", not necessarily in terms of cardinality (though, in the presence of the Hausdorff axiom, this does turn out to be the case; see below) but in a more subtle topological sense. In particular, every continuous function on a separable space whose image is a subset of a Hausdorff space is determined by its values on the countable dense subset.
Contrast separability with the related notion of second countability, which is in general stronger but equivalent on the class of metrizable spaces.
First examples
Any topological space that is itself finite or countably infinite is separable, for the whole space is a countable dense subset of itself. An important example of an uncountable separable space is the real line, in which the rational numbers form a countable dense subset. Similarly the set of all length-$n$ vectors of rational numbers, ${\boldsymbol {r}}=(r_{1},\ldots ,r_{n})\in \mathbb {Q} ^{n}$, is a countable dense subset of the set of all length-$n$ vectors of real numbers, $\mathbb {R} ^{n}$; so for every $n$, $n$-dimensional Euclidean space is separable.
A simple example of a space that is not separable is a discrete space of uncountable cardinality.
Further examples are given below.
Separability versus second countability
Any second-countable space is separable: if $\{U_{n}\}$ is a countable base, choosing any $x_{n}\in U_{n}$ from the non-empty $U_{n}$ gives a countable dense subset. Conversely, a metrizable space is separable if and only if it is second countable, which is the case if and only if it is Lindelöf.
To further compare these two properties:
• An arbitrary subspace of a second-countable space is second countable; subspaces of separable spaces need not be separable (see below).
• Any continuous image of a separable space is separable (Willard 1970, Th. 16.4a); even a quotient of a second-countable space need not be second countable.
• A product of at most continuum many separable spaces is separable (Willard 1970, p. 109, Th 16.4c). A countable product of second-countable spaces is second countable, but an uncountable product of second-countable spaces need not even be first countable.
We can construct an example of a separable topological space that is not second countable. Consider any uncountable set $X$, pick some $x_{0}\in X$, and define the topology to be the collection of all sets that contain $x_{0}$ (together with the empty set). Then, the closure of $\{x_{0}\}$ is the whole space ($X$ is the smallest closed set containing $x_{0}$), so the space is separable. But every set of the form $\{x_{0},x\}$ is open, so any base must contain, for each $x\in X$, an open set included in $\{x_{0},x\}$; this forces at least as many basic sets as points of $X$, so there cannot be a countable base.
Cardinality
The property of separability does not in and of itself give any limitations on the cardinality of a topological space: any set endowed with the trivial topology is separable, as well as second countable, quasi-compact, and connected. The "trouble" with the trivial topology is its poor separation properties: its Kolmogorov quotient is the one-point space.
A first-countable, separable Hausdorff space (in particular, a separable metric space) has at most the continuum cardinality ${\mathfrak {c}}$. In such a space, closure is determined by limits of sequences and any convergent sequence has at most one limit, so there is a surjective map from the set of convergent sequences with values in the countable dense subset to the points of $X$; since there are at most ${\mathfrak {c}}$ sequences with values in a countable set, the space has at most ${\mathfrak {c}}$ points.
A separable Hausdorff space has cardinality at most $2^{\mathfrak {c}}$, where ${\mathfrak {c}}$ is the cardinality of the continuum. To see this, note that closure is characterized in terms of limits of filter bases: if $Y\subseteq X$ and $z\in X$, then $z\in {\overline {Y}}$ if and only if there exists a filter base ${\mathcal {B}}$ consisting of subsets of $Y$ that converges to $z$. The cardinality of the set $S(Y)$ of such filter bases is at most $2^{2^{|Y|}}$. Moreover, in a Hausdorff space, there is at most one limit to every filter base. Therefore, there is a surjection $S(Y)\rightarrow X$ when ${\overline {Y}}=X.$
The same arguments establish a more general result: suppose that a Hausdorff topological space $X$ contains a dense subset of cardinality $\kappa $. Then $X$ has cardinality at most $2^{2^{\kappa }}$ and cardinality at most $2^{\kappa }$ if it is first countable.
The product of at most continuum many separable spaces is a separable space (Willard 1970, p. 109, Th 16.4c). In particular the space $\mathbb {R} ^{\mathbb {R} }$ of all functions from the real line to itself, endowed with the product topology, is a separable Hausdorff space of cardinality $2^{\mathfrak {c}}$. More generally, if $\kappa $ is any infinite cardinal, then a product of at most $2^{\kappa }$ spaces with dense subsets of size at most $\kappa $ has itself a dense subset of size at most $\kappa $ (Hewitt–Marczewski–Pondiczery theorem).
Constructive mathematics
Separability is especially important in numerical analysis and constructive mathematics, since many theorems that can be proved for nonseparable spaces have constructive proofs only for separable spaces. Such constructive proofs can be turned into algorithms for use in numerical analysis, and they are the only sorts of proofs acceptable in constructive analysis. A famous example of a theorem of this sort is the Hahn–Banach theorem.
Further examples
Separable spaces
• Every compact metric space (or metrizable space) is separable.
• Any topological space that is the union of a countable number of separable subspaces is separable. Together, these first two examples give a different proof that $n$-dimensional Euclidean space is separable.
• The space $C(K)$ of all continuous functions from a compact subset $K\subseteq \mathbb {R} $ to the real line $\mathbb {R} $ is separable.
• The Lebesgue spaces $L^{p}\left(X,\mu \right)$, over a separable measure space $\left\langle X,{\mathcal {M}},\mu \right\rangle $, are separable for any $1\leq p<\infty $.
• The space $C([0,1])$ of continuous real-valued functions on the unit interval $[0,1]$ with the metric of uniform convergence is a separable space, since it follows from the Weierstrass approximation theorem that the set $\mathbb {Q} [x]$ of polynomials in one variable with rational coefficients is a countable dense subset of $C([0,1])$. The Banach–Mazur theorem asserts that any separable Banach space is isometrically isomorphic to a closed linear subspace of $C([0,1])$.
• A Hilbert space is separable if and only if it has a countable orthonormal basis. It follows that any separable, infinite-dimensional Hilbert space is isometric to the space $\ell ^{2}$ of square-summable sequences.
• An example of a separable space that is not second-countable is the Sorgenfrey line $\mathbb {S} $, the set of real numbers equipped with the lower limit topology.
• A separable σ-algebra is a σ-algebra ${\mathcal {F}}$ that is a separable space when considered as a metric space with metric $\rho (A,B)=\mu (A\triangle B)$ for $A,B\in {\mathcal {F}}$ and a given measure $\mu $ (and with $\triangle $ being the symmetric difference operator).[1]
Non-separable spaces
• The first uncountable ordinal $\omega _{1}$, equipped with its natural order topology, is not separable.
• The Banach space $\ell ^{\infty }$ of all bounded real sequences, with the supremum norm, is not separable. The same holds for $L^{\infty }$.
• The Banach space of functions of bounded variation is not separable; note however that this space has very important applications in mathematics, physics and engineering.
Properties
• A subspace of a separable space need not be separable (see the Sorgenfrey plane and the Moore plane), but every open subspace of a separable space is separable (Willard 1970, Th 16.4b). Also every subspace of a separable metric space is separable.
• In fact, every topological space is a subspace of a separable space of the same cardinality. A construction adding at most countably many points is given in (Sierpiński 1952, p. 49); if the space is Hausdorff, then the separable space into which it embeds can also be taken to be Hausdorff.
• The set of all real-valued continuous functions on a separable space has a cardinality equal to ${\mathfrak {c}}$, the cardinality of the continuum. This follows since such functions are determined by their values on dense subsets.
• From the above property, one can deduce the following: If X is a separable space having an uncountable closed discrete subspace, then X cannot be normal. This shows that the Sorgenfrey plane is not normal.
• For a compact Hausdorff space X, the following are equivalent:
1. X is second countable.
2. The space ${\mathcal {C}}(X,\mathbb {R} )$ of continuous real-valued functions on X with the supremum norm is separable.
3. X is metrizable.
Embedding separable metric spaces
• Every separable metric space is homeomorphic to a subset of the Hilbert cube. This is established in the proof of the Urysohn metrization theorem.
• Every separable metric space is isometric to a subset of the (non-separable) Banach space l∞ of all bounded real sequences with the supremum norm; this is known as the Fréchet embedding. (Heinonen 2003)
• Every separable metric space is isometric to a subset of C([0,1]), the separable Banach space of continuous functions [0,1] → R, with the supremum norm. This is due to Stefan Banach. (Heinonen 2003)
• Every separable metric space is isometric to a subset of the Urysohn universal space.
For nonseparable spaces:
• A metric space of density equal to an infinite cardinal α is isometric to a subspace of C([0,1]α, R), the space of real continuous functions on the product of α copies of the unit interval (Kleiber & Pervin 1969).
References
1. Džamonja, Mirna; Kunen, Kenneth (1995). "Properties of the class of measure separable compact spaces" (PDF). Fundamenta Mathematicae: 262. arXiv:math/9408201. Bibcode:1994math......8201D. If $\mu $ is a Borel measure on $X$, the measure algebra of $(X,\mu )$ is the Boolean algebra of all Borel sets modulo $\mu $-null sets. If $\mu $ is finite, then such a measure algebra is also a metric space, with the distance between the two sets being the measure of their symmetric difference. Then, we say that $\mu $ is separable iff this metric space is separable as a topological space.
• Heinonen, Juha (January 2003), Geometric embeddings of metric spaces (PDF), retrieved 6 February 2009
• Kelley, John L. (1975), General Topology, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90125-1, MR 0370454
• Kleiber, Martin; Pervin, William J. (1969), "A generalized Banach-Mazur theorem", Bull. Austral. Math. Soc., 1 (2): 169–173, doi:10.1017/S0004972700041411
• Sierpiński, Wacław (1952), General topology, Mathematical Expositions, No. 7, Toronto, Ont.: University of Toronto Press, MR 0050870
• Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446
• Willard, Stephen (1970), General Topology, Addison-Wesley, ISBN 978-0-201-08707-9, MR 0264581
Separation axiom
In topology and related fields of mathematics, there are several restrictions that one often makes on the kinds of topological spaces that one wishes to consider. Some of these restrictions are given by the separation axioms. These are sometimes called Tychonoff separation axioms, after Andrey Tychonoff.
For the axiom of set theory, see Axiom schema of separation.
The separation axioms are not fundamental axioms like those of set theory, but rather defining properties which may be specified to distinguish certain types of topological spaces. The separation axioms are denoted with the letter "T" after the German Trennungsaxiom ("separation axiom"), and increasing numerical subscripts denote stronger and stronger properties.
The precise definitions of the separation axioms have varied over time. Especially in older literature, different authors might have different definitions of each condition.
Preliminary definitions
Before we define the separation axioms themselves, we give concrete meaning to the concept of separated sets (and points) in topological spaces. (Separated sets are not the same as separated spaces, defined in the next section.)
The separation axioms are about the use of topological means to distinguish disjoint sets and distinct points. It's not enough for elements of a topological space to be distinct (that is, unequal); we may want them to be topologically distinguishable. Similarly, it's not enough for subsets of a topological space to be disjoint; we may want them to be separated (in any of various ways). The separation axioms all say, in one way or another, that points or sets that are distinguishable or separated in some weak sense must also be distinguishable or separated in some stronger sense.
Let X be a topological space. Then two points x and y in X are topologically distinguishable if they do not have exactly the same neighbourhoods (or equivalently the same open neighbourhoods); that is, at least one of them has a neighbourhood that is not a neighbourhood of the other (or equivalently there is an open set that one point belongs to but the other point does not). That is, at least one of the points does not belong to the other's closure.
Two points x and y are separated if each of them has a neighbourhood that is not a neighbourhood of the other; that is, neither belongs to the other's closure. More generally, two subsets A and B of X are separated if each is disjoint from the other's closure, though the closures themselves do not have to be disjoint. Equivalently, each subset is included in an open set disjoint from the other subset. All of the remaining conditions for separation of sets may also be applied to points (or to a point and a set) by using singleton sets. Points x and y will be considered separated, separated by neighbourhoods, separated by closed neighbourhoods, separated by a continuous function, or precisely separated by a function, if and only if their singleton sets {x} and {y} are separated according to the corresponding criterion.
Subsets A and B are separated by neighbourhoods if they have disjoint neighbourhoods. They are separated by closed neighbourhoods if they have disjoint closed neighbourhoods. They are separated by a continuous function if there exists a continuous function f from the space X to the real line R such that A is a subset of the preimage f−1({0}) and B is a subset of the preimage f−1({1}). Finally, they are precisely separated by a continuous function if there exists a continuous function f from X to R such that A equals the preimage f−1({0}) and B equals f−1({1}).
These conditions are given in order of increasing strength: Any two topologically distinguishable points must be distinct, and any two separated points must be topologically distinguishable. Any two separated sets must be disjoint, any two sets separated by neighbourhoods must be separated, and so on.
For more on these conditions (including their use outside the separation axioms), see Separated sets and Topological distinguishability.
Main definitions
These definitions all use essentially the preliminary definitions above.
Many of these names have alternative meanings in some of mathematical literature; for example, the meanings of "normal" and "T4" are sometimes interchanged, similarly "regular" and "T3", etc. Many of the concepts also have several names; however, the one listed first is always least likely to be ambiguous.
Most of these axioms have alternative definitions with the same meaning; the definitions given here fall into a consistent pattern that relates the various notions of separation defined in the previous section. Other possible definitions can be found in the individual articles.
In all of the following definitions, X is again a topological space.
• X is T0, or Kolmogorov, if any two distinct points in X are topologically distinguishable. (It will be a common theme among the separation axioms to have one version of an axiom that requires T0 and one version that doesn't.)
• X is R0, or symmetric, if any two topologically distinguishable points in X are separated.
• X is T1, or accessible or Fréchet, if any two distinct points in X are separated. Equivalently, every single-point set is a closed set. Thus, X is T1 if and only if it is both T0 and R0. (Although one may say such things as "T1 space", "Fréchet topology", and "suppose that the topological space X is Fréchet"; one should avoid saying "Fréchet space" in this context, since there is another entirely different notion of Fréchet space in functional analysis.)
• X is R1, or preregular, if any two topologically distinguishable points in X are separated by neighbourhoods. Every R1 space is also R0.
• X is Hausdorff, or T2 or separated, if any two distinct points in X are separated by neighbourhoods. Thus, X is Hausdorff if and only if it is both T0 and R1. Every Hausdorff space is also T1.
• X is T2½, or Urysohn, if any two distinct points in X are separated by closed neighbourhoods. Every T2½ space is also Hausdorff.
• X is completely Hausdorff, or completely T2, if any two distinct points in X are separated by a continuous function. Every completely Hausdorff space is also T2½.
• X is regular if, given any point x and closed set F in X such that x does not belong to F, they are separated by neighbourhoods. (In fact, in a regular space, any such x and F will also be separated by closed neighbourhoods.) Every regular space is also R1.
• X is regular Hausdorff, or T3, if it is both T0 and regular.[1] Every regular Hausdorff space is also T2½.
• X is completely regular if, given any point x and closed set F in X such that x does not belong to F, they are separated by a continuous function.[2] Every completely regular space is also regular.
• X is Tychonoff, or T3½, completely T3, or completely regular Hausdorff, if it is both T0 and completely regular.[3] Every Tychonoff space is both regular Hausdorff and completely Hausdorff.
• X is normal if any two disjoint closed subsets of X are separated by neighbourhoods. (In fact, a space is normal if and only if any two disjoint closed sets can be separated by a continuous function; this is Urysohn's lemma.)
• X is normal regular if it is both R0 and normal. Every normal regular space is also completely regular.
• X is normal Hausdorff, or T4, if it is both T1 and normal. Every normal Hausdorff space is also both Tychonoff and normal regular.
• X is completely normal if any two separated sets are separated by neighbourhoods. Every completely normal space is also normal.
• X is completely normal Hausdorff, or T5 or completely T4, if it is both completely normal and T1. Every completely normal Hausdorff space is also normal Hausdorff.
• X is perfectly normal if any two disjoint closed sets are precisely separated by a continuous function. Every perfectly normal space is also both completely normal and completely regular.
• X is perfectly normal Hausdorff, or T6 or perfectly T4, if it is both perfectly normal and T0. Every perfectly normal Hausdorff space is also completely normal Hausdorff.
The following summarizes the separation axioms as well as the implications between them. The columns of the underlying table are: separated, separated by neighbourhoods, separated by closed neighbourhoods, separated by a continuous function, precisely separated by a continuous function; entries sharing a description are equivalent properties, each property implies the ones before it in the same row, and, if we assume the T1 axiom, each property also implies the corresponding ones in the rows above it (for example, all normal T1 spaces are also completely regular).
• Topologically distinguishable points: separated: symmetric;[4] separated by neighbourhoods: preregular.
• Distinct points: separated: Fréchet; separated by neighbourhoods: Hausdorff; separated by closed neighbourhoods: Urysohn; separated by a continuous function: completely Hausdorff; precisely separated by a continuous function: perfectly Hausdorff.
• A closed set and a point outside it: separated: symmetric;[5] separated by neighbourhoods and by closed neighbourhoods: regular; separated by a continuous function: completely regular; precisely separated by a continuous function: perfectly normal.
• Disjoint closed sets: separated: always; separated by neighbourhoods: normal.
• Separated sets: separated: always; separated by neighbourhoods: completely normal; precisely separated by a continuous function: discrete space.
Relationships between the axioms
The T0 axiom is special in that it can not only be added to a property (so that completely regular plus T0 is Tychonoff) but also be subtracted from a property (so that Hausdorff minus T0 is R1), in a fairly precise sense; see Kolmogorov quotient for more information. When applied to the separation axioms, this leads to the pairings listed below. In each pair, one goes from the right-hand version to the left-hand one by adding the requirement of T0, and from left to right by removing that requirement, using the Kolmogorov quotient operation. (The names in parentheses on the left-hand side are generally ambiguous or at least less well known; but they are used in the diagram below.)
T0 version / non-T0 version:
• T0 / (no requirement)
• T1 / R0
• Hausdorff (T2) / R1
• T2½ / (no special name)
• Completely Hausdorff / (no special name)
• Regular Hausdorff (T3) / Regular
• Tychonoff (T3½) / Completely regular
• Normal T0 / Normal
• Normal Hausdorff (T4) / Normal regular
• Completely normal T0 / Completely normal
• Completely normal Hausdorff (T5) / Completely normal regular
• Perfectly normal Hausdorff (T6) / Perfectly normal
Other than the inclusion or exclusion of T0, the relationships between the separation axioms are indicated in the diagram to the right. In this diagram, the non-T0 version of a condition is on the left side of the slash, and the T0 version is on the right side. Letters are used for abbreviation as follows: "P" = "perfectly", "C" = "completely", "N" = "normal", and "R" (without a subscript) = "regular". A bullet indicates that there is no special name for a space at that spot. The dash at the bottom indicates no condition.
Two properties may be combined using this diagram by following the diagram upwards until both branches meet. For example, if a space is both completely normal ("CN") and completely Hausdorff ("CT2"), then following both branches up, one finds the spot "•/T5". Since completely Hausdorff spaces are T0 (even though completely normal spaces may not be), one takes the T0 side of the slash, so a completely normal completely Hausdorff space is the same as a T5 space (less ambiguously known as a completely normal Hausdorff space, as can be seen in the list above).
As can be seen from the diagram, normal and R0 together imply a host of other properties, since combining the two properties leads through the many nodes on the right-side branch. Since regularity is the most well known of these, spaces that are both normal and R0 are typically called "normal regular spaces". In a somewhat similar fashion, spaces that are both normal and T1 are often called "normal Hausdorff spaces" by people that wish to avoid the ambiguous "T" notation. These conventions can be generalised to other regular spaces and Hausdorff spaces.
Other separation axioms
There are some other conditions on topological spaces that are sometimes classified with the separation axioms, but these don't fit in with the usual separation axioms as completely. Other than their definitions, they aren't discussed here; see their individual articles.
• X is sober if, for every closed set C that is not the (possibly nondisjoint) union of two smaller closed sets, there is a unique point p such that the closure of {p} equals C. More briefly, every irreducible closed set has a unique generic point. Any Hausdorff space must be sober, and any sober space must be T0.
• X is weak Hausdorff if, for every continuous map f to X from a compact Hausdorff space, the image of f is closed in X. Any Hausdorff space must be weak Hausdorff, and any weak Hausdorff space must be T1.
• X is semiregular if the regular open sets form a base for the open sets of X. Any regular space must also be semiregular.
• X is quasi-regular if for any nonempty open set G, there is a nonempty open set H such that the closure of H is contained in G.
• X is fully normal if every open cover has an open star refinement. X is fully T4, or fully normal Hausdorff, if it is both T1 and fully normal. Every fully normal space is normal and every fully T4 space is T4. Moreover, one can show that every fully T4 space is paracompact. In fact, fully normal spaces actually have more to do with paracompactness than with the usual separation axioms.
• The axiom that all compact subsets are closed is strictly between T1 and T2 (Hausdorff) in strength. A space satisfying this axiom is necessarily T1 because every single-point set is necessarily compact and thus closed, but the reverse is not necessarily true; for the cofinite topology on infinitely many points, which is T1, every subset is compact but not every subset is closed. Furthermore, every T2 (Hausdorff) space satisfies the axiom that all compact subsets are closed, but the reverse is not necessarily true; for the cocountable topology on uncountably many points, the compact sets are all finite and hence all closed but the space is not T2 (Hausdorff).
See also
• General topology
Notes
1. Schechter 1997, p. 441.
2. Schechter 1997, 16.16, p. 442.
3. Schechter 1997, 16.17, p. 443.
4. Schechter 1997, 16.6(D), p. 438.
5. Schechter 1997, 16.6(C), p. 438.
References
• Schechter, Eric (1997). Handbook of Analysis and its Foundations. San Diego: Academic Press. ISBN 0126227608. (has Ri axioms, among others)
• Willard, Stephen (1970). General topology. Reading, Mass.: Addison-Wesley Pub. Co. ISBN 0-486-43479-6. (has all of the non-Ri axioms mentioned in the Main Definitions, with these definitions)
External links
• Separation Axioms at ProvenMath
• Table of separation and metrisability axioms from Schechter
Seppo Linnainmaa
Seppo Ilmari Linnainmaa (born 28 September 1945) is a Finnish mathematician and computer scientist known for creating the modern version of backpropagation.
Biography
He was born in Pori.[1] In 1974 he obtained the first doctorate ever awarded in computer science at the University of Helsinki.[2] In 1976, he became Assistant Professor. From 1984 to 1985 he was Visiting Professor at the University of Maryland, USA. From 1986 to 1989 he was Chairman of the Finnish Artificial Intelligence Society. From 1989 to 2007, he was Research Professor at the VTT Technical Research Centre of Finland. He retired in 2007.
Backpropagation
Explicit, efficient error backpropagation in arbitrary, discrete, possibly sparsely connected, neural network-like networks was first described in Linnainmaa's 1970 master's thesis,[3][4] albeit without reference to neural networks.[5] There he introduced the reverse mode of automatic differentiation (AD), which efficiently computes the derivative of a differentiable composite function that can be represented as a graph, by recursively applying the chain rule to the building blocks of the function.[2][3][4][6] Linnainmaa was the first to publish the method; Gerardi Ostrowski had used it in the context of certain process models in chemical engineering some five years earlier, but did not publish it.
With faster computers emerging, the method has become heavily used in numerous applications. For example, backpropagation of errors in multi-layer perceptrons, a technique used in machine learning, is a special case of reverse mode AD.
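A minimal sketch of reverse-mode AD on a computation graph follows (illustrative names; it supports only addition and multiplication, and naively re-traverses shared subexpressions, which real implementations avoid by sweeping the graph once in reverse topological order):

```python
class Var:
    """Node in a computation graph supporting reverse-mode AD."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        # d(a+b)/da = 1 and d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b and d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Chain rule, applied from the output back toward the inputs.
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

x, y = Var(2.0), Var(3.0)
z = x * y + x           # z = x*y + x
z.backward()
print(x.grad, y.grad)   # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```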
Notes
1. Ellonen, Leena, ed. (2008). Suomen professorit 1640–2007 (in Finnish). Helsinki: Professoriliitto. p. 405. ISBN 978-952-99281-1-8.
2. Griewank, Andreas (2012). "Who Invented the Reverse Mode of Differentiation?" (PDF). Documenta Matematica, Extra Volume ISMP. pp. 389–400. S2CID 15568746.
3. Linnainmaa, Seppo (1970). Algoritmin kumulatiivinen pyöristysvirhe yksittäisten pyöristysvirheiden Taylor-kehitelmänä [The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors] (PDF) (Thesis) (in Finnish). pp. 6–7.
4. Linnainmaa, Seppo (1976). "Taylor expansion of the accumulated rounding error". BIT Numerical Mathematics. 16 (2): 146–160. doi:10.1007/BF01931367. S2CID 122357351.
5. Schmidhuber, Jürgen (2015). Who Invented Backpropagation?
6. Griewank, Andreas; Walther, Andrea (2008). Principles and Techniques of Algorithmic Differentiation (2nd ed.). SIAM.
External links
• Seppo Linnainmaa on LinkedIn
Heptagon
In geometry, a heptagon or septagon is a seven-sided polygon or 7-gon.
Regular heptagon
A regular heptagon
• Type: Regular polygon
• Edges and vertices: 7
• Schläfli symbol: {7}
• Symmetry group: Dihedral (D7), order 2×7
• Internal angle (degrees): ≈128.571°
• Properties: Convex, cyclic, equilateral, isogonal, isotoxal
• Dual polygon: Self
The heptagon is sometimes referred to as the septagon, using "sept-" (an elision of septua-, a Latin-derived numerical prefix, rather than hepta-, a Greek-derived numerical prefix; both are cognate) together with the Greek suffix "-agon" meaning angle.
Regular heptagon
A regular heptagon, in which all sides and all angles are equal, has internal angles of 5π/7 radians (128 4⁄7 degrees). Its Schläfli symbol is {7}.
Area
The area (A) of a regular heptagon of side length a is given by:
$A={\frac {7}{4}}a^{2}\cot {\frac {\pi }{7}}\simeq 3.634a^{2}.$
This can be seen by subdividing the unit-sided heptagon into seven triangular "pie slices" with vertices at the center and at the heptagon's vertices, and then halving each triangle using the apothem as the common side. The apothem is half the cotangent of $\pi /7,$ and the area of each of the 14 small triangles is one-fourth of the apothem.
The area of a regular heptagon inscribed in a circle of radius R is ${\tfrac {7R^{2}}{2}}\sin {\tfrac {2\pi }{7}},$ while the area of the circle itself is $\pi R^{2};$ thus the regular heptagon fills approximately 0.8710 of its circumscribed circle.
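A quick numerical check of the two formulas above (plain Python; cot x = 1/tan x):

```python
import math

a = 1.0                                    # side length
area = 7 / 4 * a**2 / math.tan(math.pi / 7)
print(area)                                # ~ 3.634, matching (7/4) cot(pi/7)

R = 1.0                                    # circumradius
heptagon = 7 * R**2 / 2 * math.sin(2 * math.pi / 7)
print(heptagon / (math.pi * R**2))         # ~ 0.8710 of the circumscribed circle
```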
Construction
As 7 is a Pierpont prime but not a Fermat prime, the regular heptagon is not constructible with compass and straightedge but is constructible with a marked ruler and compass. It is the smallest regular polygon with this property. This type of construction is called a neusis construction. It is also constructible with compass, straightedge and angle trisector. The impossibility of straightedge and compass construction follows from the observation that $\scriptstyle {2\cos {\tfrac {2\pi }{7}}\approx 1.247}$ is a zero of the irreducible cubic x3 + x2 − 2x − 1. Consequently, this polynomial is the minimal polynomial of 2cos(2π⁄7), whereas the degree of the minimal polynomial for a constructible number must be a power of 2.
A neusis construction of the interior angle in a regular heptagon.
An animation from a neusis construction with radius of circumcircle ${\overline {OA}}=6$, according to Andrew M. Gleason[1] based on the angle trisection by means of the tomahawk. This construction relies on the fact that
$6\cos \left({\frac {2\pi }{7}}\right)=2{\sqrt {7}}\cos \left({\frac {1}{3}}\arctan \left(3{\sqrt {3}}\right)\right)-1.$
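This identity is easy to verify numerically (plain Python):

```python
import math

lhs = 6 * math.cos(2 * math.pi / 7)
rhs = 2 * math.sqrt(7) * math.cos(math.atan(3 * math.sqrt(3)) / 3) - 1
print(lhs, rhs)   # both ~ 3.7409, agreeing to machine precision
```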
Approximation
An approximation for practical use with an error of about 0.2% is to use half the side of an equilateral triangle inscribed in the same circle as the length of the side of a regular heptagon. It is unknown who first found this approximation, but it was mentioned in Heron of Alexandria's Metrica in the 1st century AD, was well known to medieval Islamic mathematicians, and can be found in the work of Albrecht Dürer.[2][3] Let A lie on the circumference of the circumcircle. Draw arc BOC. Then $\scriptstyle {BD={1 \over 2}BC}$ gives an approximation for the edge of the heptagon.
This approximation uses $\scriptstyle {{\sqrt {3}} \over 2}\approx 0.86603$ for the side of the heptagon inscribed in the unit circle while the exact value is $\scriptstyle 2\sin {\pi \over 7}\approx 0.86777$.
Example to illustrate the error:
At a circumscribed circle radius r = 1 m, the absolute error of the 1st side would be approximately -1.7 mm
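The stated error is easy to reproduce (plain Python; r is the circumradius in metres):

```python
import math

r = 1.0                                  # circumradius in metres
exact = 2 * r * math.sin(math.pi / 7)    # true side length, ~ 0.86777
approx = r * math.sqrt(3) / 2            # half the inscribed triangle's side
print(approx - exact)                    # ~ -0.00175 m, i.e. about -1.7 mm
```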
Symmetry
The regular heptagon belongs to the D7h point group (Schoenflies notation), order 28. The symmetry elements are: a 7-fold proper rotation axis C7, a 7-fold improper rotation axis, S7, 7 vertical mirror planes, σv, 7 2-fold rotation axes, C2, in the plane of the heptagon and a horizontal mirror plane, σh, also in the heptagon's plane.[5]
Diagonals and heptagonal triangle
Main article: Heptagonal triangle
The regular heptagon's side a, shorter diagonal b, and longer diagonal c, with a<b<c, satisfy[6]: Lemma 1
$a^{2}=c(c-b),$
$b^{2}=a(c+a),$
$c^{2}=b(a+b),$
${\frac {1}{a}}={\frac {1}{b}}+{\frac {1}{c}}$ (the optic equation)
and hence
$ab+ac=bc,$
and[6]: Coro. 2
$b^{3}+2b^{2}c-bc^{2}-c^{3}=0,$
$c^{3}-2c^{2}a-ca^{2}+a^{3}=0,$
$a^{3}-2a^{2}b-ab^{2}+b^{3}=0,$
Thus –b/c, c/a, and a/b all satisfy the cubic equation $t^{3}-2t^{2}-t+1=0.$ However, no algebraic expressions with purely real terms exist for the solutions of this equation, because it is an example of casus irreducibilis.
The approximate lengths of the diagonals in terms of the side of the regular heptagon are given by
$b\approx 1.80193\cdot a,\qquad c\approx 2.24698\cdot a.$
We also have[7]
$b^{2}-a^{2}=ac,$
$c^{2}-b^{2}=ab,$
$a^{2}-c^{2}=-bc,$
and
${\frac {b^{2}}{a^{2}}}+{\frac {c^{2}}{b^{2}}}+{\frac {a^{2}}{c^{2}}}=5.$
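These identities can be checked numerically, using the fact that the side and diagonals of a regular heptagon are proportional to sin(kπ/7) for k = 1, 2, 3 (plain Python):

```python
import math

s = lambda k: math.sin(k * math.pi / 7)
a, b, c = s(1), s(2), s(3)            # side and diagonals, up to a common factor
print(b / a, c / a)                   # ~ 1.80194 and 2.24698
print(1 / a - 1 / b - 1 / c)          # optic equation: ~ 0
p = lambda t: t**3 - 2 * t**2 - t + 1
print(p(-b / c), p(c / a), p(a / b))  # all ~ 0: roots of the cubic above
```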
A heptagonal triangle has vertices coinciding with the first, second, and fourth vertices of a regular heptagon (from an arbitrary starting vertex) and angles $\pi /7,2\pi /7,$ and $4\pi /7.$ Thus its sides coincide with one side and two particular diagonals of the regular heptagon.[6]
In polyhedra
Apart from the heptagonal prism and heptagonal antiprism, no convex polyhedron made entirely out of regular polygons contains a heptagon as a face.
Star heptagons
Two kinds of star heptagons (heptagrams) can be constructed from regular heptagons, labeled by Schläfli symbols {7/2}, and {7/3}, with the divisor being the interval of connection.
Blue, {7/2} and green {7/3} star heptagons inside a red heptagon.
Tiling and packing
Triangle, heptagon, and 42-gon vertex
Hyperbolic heptagon tiling
A regular triangle, heptagon, and 42-gon can completely fill a plane vertex. However, there is no tiling of the plane with only these polygons, because there is no way to fit one of them onto the third side of the triangle without leaving a gap or creating an overlap. In the hyperbolic plane, tilings by regular heptagons are possible.
The regular heptagon has a double lattice packing of the Euclidean plane of packing density approximately 0.89269. This has been conjectured to be the lowest density possible for the optimal double lattice packing density of any convex set, and more generally for the optimal packing density of any convex set.[8]
Empirical examples
The United Kingdom, as of 2022, has two heptagonal coins, the 50p and 20p pieces; the Barbados dollar is also heptagonal. Strictly, the shape of these coins is a Reuleaux heptagon, a curvilinear heptagon with curves of constant width; the sides are curved outwards to allow the coins to roll smoothly when they are inserted into a vending machine. Botswana pula coins in the denominations of 2 Pula, 1 Pula, 50 Thebe and 5 Thebe are also shaped as equilateral-curve heptagons. Coins in the shape of Reuleaux heptagons are also in circulation in Mauritius, U.A.E., Tanzania, Samoa, Papua New Guinea, São Tomé and Príncipe, Haiti, Jamaica, Liberia, Ghana, the Gambia, Jordan, Jersey, Guernsey, the Isle of Man, Gibraltar, Guyana, the Solomon Islands, the Falkland Islands and Saint Helena. The 1000 Kwacha coin of Zambia is a true heptagon.
The Brazilian 25-cent coin has a heptagon inscribed in the coin's disk. Some old versions of the coat of arms of Georgia, including in Soviet days, used a {7/2} heptagram as an element.
A number of coins, including the 20 euro cent coin, have heptagonal symmetry in a shape called the Spanish flower.
In architecture, heptagonal floor plans are very rare. A remarkable example is the Mausoleum of Prince Ernst in Stadthagen, Germany.
Many police badges in the US have a {7/2} heptagram outline.
See also
• Heptagram
• Polygon
References
1. Gleason, Andrew Mattei (March 1988). "Angle trisection, the heptagon, and the triskaidecagon p. 186 (Fig.1) –187" (PDF). The American Mathematical Monthly. 95 (3): 185–194. doi:10.2307/2323624. JSTOR 2323624. Archived from the original (PDF) on 19 December 2015.
2. Hogendijk, Jan P. (1987). "Abu'l-Jūd's Answer to a Question of al-Bīrūnī Concerning the Regular Heptagon" (PDF). Annals of the New York Academy of Sciences. 500 (1): 175–183. doi:10.1111/j.1749-6632.1987.tb37202.x.
3. G.H. Hughes, "The Polygons of Albrecht Dürer-1525, The Regular Heptagon", Fig. 11 the side of the Heptagon (7) Fig. 15, image on the left side, retrieved on 4 December 2015
4. John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, (2008) The Symmetries of Things, ISBN 978-1-56881-220-5 (Chapter 20, Generalized Schaefli symbols, Types of symmetry of a polygon pp. 275-278)
5. Salthouse, J.A; Ware, M.J. (1972). Point group character tables and related data. Cambridge: Cambridge University Press. ISBN 0-521-08139-4.
6. Abdilkadir Altintas, "Some Collinearities in the Heptagonal Triangle", Forum Geometricorum 16, 2016, 249–256.http://forumgeom.fau.edu/FG2016volume16/FG201630.pdf
7. Leon Bankoff and Jack Garfunkel, "The heptagonal triangle", Mathematics Magazine 46 (1), January 1973, 7–19.
8. Kallus, Yoav (2015). "Pessimal packing shapes". Geometry & Topology. 19 (1): 343–363. arXiv:1305.0289. doi:10.2140/gt.2015.19.343. MR 3318753.
External links
Look up heptagon in Wiktionary, the free dictionary.
• Definition and properties of a heptagon With interactive animation
• Heptagon according Johnson
• Another approximate construction method
• Polygons – Heptagons
• Recently discovered and highly accurate approximation for the construction of a regular heptagon.
• Heptagon, an approximating construction as an animation
• A heptagon with a given side, an approximating construction as an animation
Series acceleration
In mathematics, series acceleration is one of a collection of sequence transformations for improving the rate of convergence of a series. Techniques for series acceleration are often applied in numerical analysis, where they are used to improve the speed of numerical integration. Series acceleration techniques may also be used, for example, to obtain a variety of identities on special functions. Thus, the Euler transform applied to the hypergeometric series gives some of the classic, well-known hypergeometric series identities.
Definition
Given a sequence
$S=\{s_{n}\}_{n\in \mathbb {N} }$
having a limit
$\lim _{n\to \infty }s_{n}=\ell ,$
an accelerated series is a second sequence
$S'=\{s'_{n}\}_{n\in \mathbb {N} }$
which converges faster to $\ell $ than the original sequence, in the sense that
$\lim _{n\to \infty }{\frac {s'_{n}-\ell }{s_{n}-\ell }}=0.$
If the original sequence is divergent, the sequence transformation acts as an extrapolation method to the antilimit $\ell $.
The mappings from the original to the transformed series may be linear (as defined in the article sequence transformations), or non-linear. In general, the non-linear sequence transformations tend to be more powerful.
Overview
Two classical techniques for series acceleration are Euler's transformation of series[1] and Kummer's transformation of series.[2] A variety of much more rapidly convergent and special-case tools have been developed in the 20th century, including Richardson extrapolation, introduced by Lewis Fry Richardson in the early 20th century but also known and used by Katahiro Takebe in 1722; the Aitken delta-squared process, introduced by Alexander Aitken in 1926 but also known and used by Takakazu Seki in the 18th century; the epsilon method given by Peter Wynn in 1956; the Levin u-transform; and the Wilf-Zeilberger-Ekhad method or WZ method.
For alternating series, several powerful techniques, offering convergence rates from $5.828^{-n}$ all the way to $17.93^{-n}$ for a summation of $n$ terms, are described by Cohen et al.[3]
Euler's transform
A basic example of a linear sequence transformation, offering improved convergence, is Euler's transform. It is intended to be applied to an alternating series; it is given by
$\sum _{n=0}^{\infty }(-1)^{n}a_{n}=\sum _{n=0}^{\infty }(-1)^{n}{\frac {(\Delta ^{n}a)_{0}}{2^{n+1}}}$
where $\Delta $ is the forward difference operator, for which one has the formula
$(\Delta ^{n}a)_{0}=\sum _{k=0}^{n}(-1)^{k}{n \choose k}a_{n-k}.$
If the original series, on the left hand side, is only slowly converging, the forward differences will tend to become small quite rapidly; the additional power of two further improves the rate at which the right hand side converges.
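A direct implementation of the transform above (a sketch in plain Python; math.comb requires Python 3.8 or newer):

```python
import math

def euler_transform(a, n_terms):
    """Approximate sum_{n>=0} (-1)^n a[n] via Euler's transform.

    a: list of the first m coefficients a_0..a_{m-1} (with m >= n_terms).
    Uses (Delta^n a)_0 = sum_k (-1)^k C(n,k) a_{n-k}, as in the formula above.
    """
    total = 0.0
    for n in range(n_terms):
        delta = sum((-1) ** k * math.comb(n, k) * a[n - k]
                    for k in range(n + 1))
        total += (-1) ** n * delta / 2 ** (n + 1)
    return total

# Example: ln 2 = 1 - 1/2 + 1/3 - ..., i.e. a_n = 1/(n+1).
coeffs = [1 / (n + 1) for n in range(20)]
print(euler_transform(coeffs, 10), math.log(2))  # agree to ~ 5e-5
```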
A particularly efficient numerical implementation of the Euler transform is the van Wijngaarden transformation.[4]
Conformal mappings
A series
$S=\sum _{n=0}^{\infty }a_{n}$
can be written as f(1), where the function f is defined as
$f(z)=\sum _{n=0}^{\infty }a_{n}z^{n}.$
The function f(z) can have singularities in the complex plane (branch point singularities, poles or essential singularities), which limit the radius of convergence of the series. If the point z = 1 is close to or on the boundary of the disk of convergence, the series for S will converge very slowly. One can then improve the convergence of the series by means of a conformal mapping that moves the singularities such that the point that is mapped to z = 1 ends up deeper in the new disk of convergence.
The conformal transform $z=\Phi (w)$ needs to be chosen such that $\Phi (0)=0$, and one usually chooses a function that has a finite derivative at w = 0. One can assume that $\Phi (1)=1$ without loss of generality, as one can always rescale w to redefine $\Phi $. We then consider the function
$g(w)=f(\Phi (w)).$
Since $\Phi (1)=1$, we have f(1) = g(1). We can obtain the series expansion of g(w) by putting $z=\Phi (w)$ in the series expansion of f(z) because $\Phi (0)=0$; the first n terms of the series expansion for f(z) will yield the first n terms of the series expansion for g(w) if $\Phi '(0)\neq 0$. Putting w = 1 in that series expansion will thus yield a series such that if it converges, it will converge to the same value as the original series.
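As a concrete illustration (the choice of f and Φ here is an assumption for the example, not part of the article), take f(z) = ln(1+z), whose series at z = 1 converges very slowly because of the branch point at z = −1, and the map Φ(w) = w/(2−w), which satisfies Φ(0) = 0 and Φ(1) = 1. The composed function g(w) = ln(2/(2−w)) is singular only at w = 2, so its series converges geometrically at w = 1. The Python sketch below composes the truncated power series numerically:

```python
import math

def compose(a, phi, N):
    # Coefficients of g(w) = f(Phi(w)) up to order N, where a[n] are the
    # coefficients of f and phi[n] those of Phi (with phi[0] = 0).
    g = [0.0] * (N + 1)
    power = [1.0] + [0.0] * N                  # coefficients of Phi(w)**0
    for a_n in a[: N + 1]:
        for k in range(N + 1):
            g[k] += a_n * power[k]
        power = [sum(power[j] * phi[k - j] for j in range(k + 1))
                 for k in range(N + 1)]        # multiply by Phi, truncate
    return g

N = 12
a = [0.0] + [(-1) ** (n + 1) / n for n in range(1, N + 1)]   # ln(1+z)
phi = [0.0] + [2.0 ** -n for n in range(1, N + 1)]           # w/(2-w)
g = compose(a, phi, N)
print(sum(a))          # direct partial sum at z = 1: error ~ 1/N
print(sum(g))          # mapped partial sum at w = 1: error ~ 2^-N
print(math.log(2))
```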
Non-linear sequence transformations
Examples of nonlinear sequence transformations include Padé approximants, the Shanks transformation, and Levin-type sequence transformations.
Nonlinear sequence transformations often provide powerful numerical methods for the summation of divergent series or asymptotic series, such as those arising in perturbation theory, and may be used as highly effective extrapolation methods.
Aitken method
Main article: Aitken's delta-squared process
A simple nonlinear sequence transformation is the Aitken extrapolation or delta-squared method,
$\mathbb {A} :S\to S'=\mathbb {A} (S)={(s'_{n})}_{n\in \mathbb {N} }$
defined by
$s'_{n}=s_{n+2}-{\frac {(s_{n+2}-s_{n+1})^{2}}{s_{n+2}-2s_{n+1}+s_{n}}}.$
This transformation is commonly used to improve the rate of convergence of a slowly converging sequence; heuristically, it eliminates the largest part of the absolute error.
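A minimal Python sketch follows (the Leibniz-series test case is an illustrative assumption): one pass of the transformation over the partial sums of a slowly converging series already gains several digits.

```python
import math

def aitken(s):
    # One pass of Aitken's delta-squared process over a sequence of
    # partial sums; assumes no denominator vanishes.
    return [s[n + 2] - (s[n + 2] - s[n + 1]) ** 2
            / (s[n + 2] - 2 * s[n + 1] + s[n])
            for n in range(len(s) - 2)]

# Partial sums of the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ...
partial, total = [], 0.0
for n in range(10):
    total += (-1) ** n / (2 * n + 1)
    partial.append(total)

print(4 * partial[-1], 4 * aitken(partial)[-1], math.pi)
```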
See also
• Shanks transformation
• Minimum polynomial extrapolation
• Van Wijngaarden transformation
References
1. Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 3, eqn 3.6.27". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 16. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
2. Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 3, eqn 3.6.26". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 16. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
3. Henri Cohen, Fernando Rodriguez Villegas, and Don Zagier, "Convergence Acceleration of Alternating Series", Experimental Mathematics, 9:1 (2000) page 3.
4. William H. Press, et al., Numerical Recipes in C, (1987) Cambridge University Press, ISBN 0-521-43108-5 (See section 5.1).
• C. Brezinski and M. Redivo Zaglia, Extrapolation Methods. Theory and Practice, North-Holland, 1991.
• G. A. Baker Jr. and P. Graves-Morris, Padé Approximants, Cambridge U.P., 1996.
• Weisstein, Eric W. "Convergence Improvement". MathWorld.
• Homeier, H. H. H. (2000). "Scalar Levin-type sequence transformations". Journal of Computational and Applied Mathematics. 122 (1–2): 81. arXiv:math/0005209. Bibcode:2000JCoAM.122...81H. doi:10.1016/S0377-0427(00)00359-9.
• Brezinski, Claude; Redivo-Zaglia, Michela: "The genesis and early developments of Aitken's process, Shanks transformation, the $\epsilon $-algorithm, and related fixed point methods", Numerical Algorithms, Vol. 80, No. 1 (2019), pp. 11–133.
• Delahaye, J. P.: Sequence Transformations, Springer-Verlag, Berlin, ISBN 978-3540152835 (1988).
• Sidi, Avram: Vector Extrapolation Methods with Applications, SIAM, ISBN 978-1-61197-495-9 (2017).
• Brezinski, Claude; Redivo-Zaglia, Michela; Saad, Yousef: "Shanks Sequence Transformations and Anderson Acceleration", SIAM Review, Vol. 60, No. 3 (2018), pp. 646–669. doi:10.1137/17M1120725.
• Brezinski, Claude: "Reminiscences of Peter Wynn", Numerical Algorithms, Vol. 80 (2019), pp. 5–10.
• Brezinski, Claude; Redivo-Zaglia, Michela: Extrapolation and Rational Approximation, Springer, ISBN 978-3-030-58417-7 (2020).
External links
• Convergence acceleration of series
• GNU Scientific Library, Series Acceleration
• Digital Library of Mathematical Functions
Sequence transformation
In mathematics, a sequence transformation is an operator acting on a given space of sequences (a sequence space). Sequence transformations include linear mappings such as convolution with another sequence and resummation of a sequence; more generally, they are commonly used for series acceleration, that is, for improving the rate of convergence of a slowly convergent sequence or series. Sequence transformations are also commonly used to compute the antilimit of a divergent series numerically, and are used in conjunction with extrapolation methods.
Overview
Classical examples for sequence transformations include the binomial transform, Möbius transform, Stirling transform and others.
Definitions
For a given sequence
$S=\{s_{n}\}_{n\in \mathbb {N} },\,$
the transformed sequence is
$\mathbf {T} (S)=S'=\{s'_{n}\}_{n\in \mathbb {N} },\,$
where the members of the transformed sequence are usually computed from some finite number of members of the original sequence, i.e.
$s_{n}'=T(s_{n},s_{n+1},\dots ,s_{n+k})$
for some $k$ which often depends on $n$ (cf. e.g. Binomial transform). In the simplest case, the $s_{n}$ and the $s'_{n}$ are real or complex numbers. More generally, they may be elements of some vector space or algebra.
In the context of acceleration of convergence, the transformed sequence is said to converge faster than the original sequence if
$\lim _{n\to \infty }{\frac {s'_{n}-\ell }{s_{n}-\ell }}=0$
where $\ell $ is the limit of $S$, assumed to be convergent. In this case, convergence acceleration is obtained. If the original sequence is divergent, the sequence transformation acts as an extrapolation method to the antilimit $\ell $.
If the mapping $T$ is linear in each of its arguments, i.e., if
$s'_{n}=\sum _{m=0}^{k}c_{m}s_{n+m}$
for some constants $c_{0},\dots ,c_{k}$ (which may depend on n), the sequence transformation $\mathbf {T} $ is called a linear sequence transformation. Sequence transformations that are not linear are called nonlinear sequence transformations.
Examples
The simplest examples of (linear) sequence transformations include shifting all elements by a fixed offset k, $s'_{n}=s_{n+k}$ (taking $s'_{n}=0$ if n + k < 0), and scalar multiplication of the sequence.
A less trivial example would be the discrete convolution with a fixed sequence. A particularly basic form is the difference operator, which is convolution with the sequence $(-1,1,0,\ldots ),$ and is a discrete analog of the derivative. The binomial transform is another linear transformation of a still more general type.
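A short Python sketch of such a linear transformation (the helper name is an illustrative assumption): convolving with the sequence (−1, 1, 0, …) implements the difference operator, and iterating it gives higher-order differences.

```python
def convolve_transform(s, c):
    # Linear sequence transformation s'_n = sum_m c_m * s_{n+m},
    # i.e. convolution of s with the finite fixed sequence c.
    k = len(c) - 1
    return [sum(c[m] * s[n + m] for m in range(k + 1))
            for n in range(len(s) - k)]

s = [n ** 2 for n in range(8)]            # 0, 1, 4, 9, 16, ...
print(convolve_transform(s, [-1, 1]))     # first differences: 1, 3, 5, ...
print(convolve_transform(s, [1, -2, 1]))  # second differences: 2, 2, 2, ...
```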
An example of a nonlinear sequence transformation is Aitken's delta-squared process, used to improve the rate of convergence of a slowly convergent sequence. An extended form of this is the Shanks transformation. The Möbius transform is also a nonlinear transformation, only possible for integer sequences.
See also
• Aitken's delta-squared process
• Minimum polynomial extrapolation
• Richardson extrapolation
• Series acceleration
• Steffensen's method
External links
• Transformations of Integer Sequences, a subpage of the On-Line Encyclopedia of Integer Sequences
Sequences (book)
Sequences is a mathematical monograph on integer sequences. It was written by Heini Halberstam and Klaus Roth, published in 1966 by the Clarendon Press, and republished in 1983 with minor corrections by Springer-Verlag. Although planned to be part of a two-volume set,[1][2] the second volume was never published.
Topics
The book has five chapters,[1] each largely self-contained[2][3] and loosely organized around different techniques used to solve problems in this area,[2] with an appendix on the background material in number theory needed for reading the book.[1] Rather than being concerned with specific sequences such as the prime numbers or square numbers, its topic is the mathematical theory of sequences in general.[4][5]
The first chapter considers the natural density of sequences, and related concepts such as the Schnirelmann density. It proves theorems on the density of sumsets of sequences, including Mann's theorem that the Schnirelmann density of a sumset is at least the sum of the Schnirelmann densities and Kneser's theorem on the structure of sequences whose lower asymptotic density is subadditive. It studies essential components, sequences that when added to another sequence of Schnirelmann density between zero and one, increase their density, proves that additive bases are essential components, and gives examples of essential components that are not additive bases.[1][4][5][6]
The second chapter concerns the number of representations of the integers as sums of a given number of elements from a given sequence, and includes the Erdős–Fuchs theorem according to which this number of representations cannot be close to a linear function. The third chapter continues the study of numbers of representations, using the probabilistic method; it includes the theorem that there exists an additive basis of order two whose number of representations is logarithmic, later strengthened to all orders in the Erdős–Tetali theorem.[1][4][5][6]
After a chapter on sieve theory and the large sieve (unfortunately missing significant developments that happened soon after the book's publication),[4][5] the final chapter concerns primitive sequences of integers, sequences like the prime numbers in which no element is divisible by another. It includes Behrend's theorem that such a sequence must have logarithmic density zero, and the seemingly-contradictory construction by Abram Samoilovitch Besicovitch of primitive sequences with natural density close to 1/2. It also discusses the sequences that contain all integer multiples of their members, the Davenport–Erdős theorem according to which the lower natural and logarithmic density exist and are equal for such sequences, and a related construction of Besicovitch of a sequence of multiples that has no natural density.[1][4][5]
Audience and reception
This book is aimed at other mathematicians and students of mathematics; it is not suitable for a general audience.[2] However, reviewer J. W. S. Cassels suggests that it could be accessible to advanced undergraduates in mathematics.[4]
Reviewer E. M. Wright notes the book's "accurate scholarship", "most readable exposition", and "fascinating topics".[3] Reviewer Marvin Knopp describes the book as "masterly", and as the first book to overview additive combinatorics.[2] Similarly, although Cassels notes the existence of material on additive combinatorics in the books Additive Zahlentheorie (Ostmann, 1956) and Addition Theorems (Mann, 1965), he calls this "the first connected account" of the area,[4] and reviewer Harold Stark notes that much of material covered by the book is "unique in book form".[5] Knopp also praises the book for, in many cases, correcting errors or deficiencies in the original sources that it surveys.[2] Reviewer Harold Stark writes that the book "should be a standard reference in this area for years to come".[5]
References
1. Kubilius, J., "Review of Sequences", Mathematical Reviews, MR 0210679
2. Knopp, Marvin I. (January 1967), "Questions and methods in number theory", Science, 155 (3761): 442–443, Bibcode:1967Sci...155..442H, doi:10.1126/science.155.3761.442, JSTOR 1720189, S2CID 241017491
3. Wright, E. M. (1968), "Review of Sequences", Journal of the London Mathematical Society, s1-43 (1): 157, doi:10.1112/jlms/s1-43.1.157a
4. Cassels, J. W. S. (February 1968), "Review of Sequences", The Mathematical Gazette, 52 (379): 85–86, doi:10.2307/3614509, JSTOR 3614509, S2CID 126260926
5. Stark, H. M. (1971), "Review of Sequences", Bulletin of the American Mathematical Society, 77 (6): 943–957, doi:10.1090/s0002-9904-1971-12812-4
6. Briggs, W. E., "Review of Sequences", zbMATH, Zbl 0141.04405
Partial permutation
In combinatorial mathematics, a partial permutation, or sequence without repetition, on a finite set S is a bijection between two specified subsets of S. That is, it is defined by two subsets U and V of equal size, and a one-to-one mapping from U to V. Equivalently, it is a partial function on S that can be extended to a permutation.[1][2]
Representation
It is common to consider the case when the set S is simply the set {1, 2, ..., n} of the first n integers. In this case, a partial permutation may be represented by a string of n symbols, some of which are distinct numbers in the range from 1 to $n$ and the remaining ones of which are a special "hole" symbol ◊. In this formulation, the domain U of the partial permutation consists of the positions in the string that do not contain a hole, and each such position is mapped to the number in that position. For instance, the string "1 ◊ 2" would represent the partial permutation that maps 1 to itself and maps 3 to 2.[3] The seven partial permutations on two items are
◊◊, ◊1, ◊2, 1◊, 2◊, 12, 21.
Combinatorial enumeration
The number of partial permutations on n items, for n = 0, 1, 2, ..., is given by the integer sequence
1, 2, 7, 34, 209, 1546, 13327, 130922, 1441729, 17572114, 234662231, ... (sequence A002720 in the OEIS)
where the nth item in the sequence is given by the summation formula
$\sum _{i=0}^{n}i!{\binom {n}{i}}^{2}$
in which the ith term counts the number of partial permutations with support of size i, that is, the number of partial permutations with i non-hole entries. Alternatively, it can be computed by a recurrence relation
$P(n)=2nP(n-1)-(n-1)^{2}P(n-2).$
This is determined as follows (a short numerical check of both formulas appears after the list):
1. $P(n-1)$ partial permutations where the final elements of each set are omitted.
2. $P(n-1)$ partial permutations where the final elements of each set map to each other.
3. $(n-1)P(n-1)$ partial permutations where the final element of the first set is included, but does not map to the final element of the second set
4. $(n-1)P(n-1)$ partial permutations where the final element of the second set is included, but does not map to the final element of the first set
5. $-(n-1)^{2}P(n-2)$, the partial permutations included in both counts 3 and 4, those permutations where the final elements of both sets are included, but do not map to each other.
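Both the summation formula and the recurrence are easy to check numerically. A minimal Python sketch (the function names are illustrative assumptions):

```python
from math import comb, factorial

def count_sum(n):
    # Summation formula: sum over the support size i.
    return sum(factorial(i) * comb(n, i) ** 2 for i in range(n + 1))

def count_rec(n):
    # Recurrence P(n) = 2n P(n-1) - (n-1)^2 P(n-2), with P(0)=1, P(1)=2.
    if n == 0:
        return 1
    p_prev, p = 1, 2
    for k in range(2, n + 1):
        p_prev, p = p, 2 * k * p - (k - 1) ** 2 * p_prev
    return p

assert all(count_sum(n) == count_rec(n) for n in range(10))
print([count_sum(n) for n in range(6)])  # [1, 2, 7, 34, 209, 1546]
```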
Restricted partial permutations
Some authors restrict partial permutations so that either the domain[4] or the range[3] of the bijection is forced to consist of the first k items in the set of n items being permuted, for some k. In the former case, a partial permutation of length k from an n-set is just a sequence of k terms from the n-set without repetition. (In elementary combinatorics, these objects are sometimes confusingly called "k-permutations" of the n-set.)
References
1. Straubing, Howard (1983), "A combinatorial proof of the Cayley-Hamilton theorem", Discrete Mathematics, 43 (2–3): 273–279, doi:10.1016/0012-365X(83)90164-4, MR 0685635.
2. Ku, C. Y.; Leader, I. (2006), "An Erdős-Ko-Rado theorem for partial permutations", Discrete Mathematics, 306 (1): 74–86, doi:10.1016/j.disc.2005.11.007, MR 2202076.
3. Claesson, Anders; Jelínek, Vít; Jelínková, Eva; Kitaev, Sergey (2011), "Pattern avoidance in partial permutations", Electronic Journal of Combinatorics, 18 (1): Paper 25, 41, MR 2770130.
4. Burstein, Alexander; Lankham, Isaiah (2010), "Restricted patience sorting and barred pattern avoidance", Permutation patterns, London Math. Soc. Lecture Note Ser., vol. 376, Cambridge: Cambridge Univ. Press, pp. 233–257, arXiv:math/0512122, doi:10.1017/CBO9780511902499.013, MR 2732833.
Sequent
In mathematical logic, a sequent is a very general kind of conditional assertion.
$A_{1},\,\dots ,A_{m}\,\vdash \,B_{1},\,\dots ,B_{n}.$
A sequent may have any number m of condition formulas Ai (called "antecedents") and any number n of asserted formulas Bj (called "succedents" or "consequents"). A sequent is understood to mean that if all of the antecedent conditions are true, then at least one of the consequent formulas is true. This style of conditional assertion is almost always associated with the conceptual framework of sequent calculus.
Introduction
The form and semantics of sequents
Sequents are best understood in the context of the following three kinds of logical judgments:
1. Unconditional assertion. No antecedent formulas.
• Example: ⊢ B
• Meaning: B is true.
2. Conditional assertion. Any number of antecedent formulas.
1. Simple conditional assertion. Single consequent formula.
• Example: A1, A2, A3 ⊢ B
• Meaning: IF A1 AND A2 AND A3 are true, THEN B is true.
2. Sequent. Any number of consequent formulas.
• Example: A1, A2, A3 ⊢ B1, B2, B3, B4
• Meaning: IF A1 AND A2 AND A3 are true, THEN B1 OR B2 OR B3 OR B4 is true.
Thus sequents are a generalization of simple conditional assertions, which are a generalization of unconditional assertions.
The word "OR" here is the inclusive OR.[1] The motivation for disjunctive semantics on the right side of a sequent comes from three main benefits.
1. The symmetry of the classical inference rules for sequents with such semantics.
2. The ease and simplicity of converting such classical rules to intuitionistic rules.
3. The ability to prove completeness for predicate calculus when it is expressed in this way.
All three of these benefits were identified in the founding paper by Gentzen (1934, p. 194).
Not all authors have adhered to Gentzen's original meaning for the word "sequent". For example, Lemmon (1965) used the word "sequent" strictly for simple conditional assertions with one and only one consequent formula.[2] The same single-consequent definition for a sequent is given by Huth & Ryan 2004, p. 5.
Syntax details
In a general sequent of the form
$\Gamma \vdash \Sigma $
both Γ and Σ are sequences of logical formulas, not sets. Therefore both the number and order of occurrences of formulas are significant. In particular, the same formula may appear twice in the same sequence. The full set of sequent calculus inference rules contains rules to swap adjacent formulas on the left and on the right of the assertion symbol (and thereby arbitrarily permute the left and right sequences), and also to insert arbitrary formulas and remove duplicate copies within the left and the right sequences. (However, Smullyan (1995, pp. 107–108), uses sets of formulas in sequents instead of sequences of formulas. Consequently the three pairs of structural rules called "thinning", "contraction" and "interchange" are not required.)
The symbol ' $\vdash $ ' is often referred to as the "turnstile", "right tack", "tee", "assertion sign" or "assertion symbol". It is often read, suggestively, as "yields", "proves" or "entails".
Effects of inserting and removing propositions
Since every formula in the antecedent (the left side) must be true to conclude the truth of at least one formula in the succedent (the right side), adding formulas to either side results in a weaker sequent, while removing them from either side gives a stronger one. This is one of the symmetry advantages which follows from the use of disjunctive semantics on the right hand side of the assertion symbol, whereas conjunctive semantics is adhered to on the left hand side.
Consequences of empty lists of formulas
In the extreme case where the list of antecedent formulas of a sequent is empty, the consequent is unconditional. This differs from the simple unconditional assertion because the number of consequents is arbitrary, not necessarily a single consequent. Thus for example, ' ⊢ B1, B2 ' means that either B1, or B2, or both must be true. An empty antecedent formula list is equivalent to the "always true" proposition, called the "verum", denoted "⊤". (See Tee (symbol).)
In the extreme case where the list of consequent formulas of a sequent is empty, the rule is still that at least one term on the right be true, which is clearly impossible. This is signified by the 'always false' proposition, called the "falsum", denoted "⊥". Since the consequence is false, at least one of the antecedents must be false. Thus for example, ' A1, A2 ⊢ ' means that at least one of the antecedents A1 and A2 must be false.
One sees here again a symmetry because of the disjunctive semantics on the right hand side. If the left side is empty, then one or more right-side propositions must be true. If the right side is empty, then one or more of the left-side propositions must be false.
The doubly extreme case ' ⊢ ', where both the antecedent and consequent lists of formulas are empty is "not satisfiable".[3] In this case, the meaning of the sequent is effectively ' ⊤ ⊢ ⊥ '. This is equivalent to the sequent ' ⊢ ⊥ ', which clearly cannot be valid.
Examples
A sequent of the form ' ⊢ α, β ', for logical formulas α and β, means that either α is true or β is true (or both). But it does not mean that either α is a tautology or β is a tautology. To clarify this, consider the example ' ⊢ B ∨ A, C ∨ ¬A '. This is a valid sequent because either B ∨ A is true or C ∨ ¬A is true. But neither of these expressions is a tautology in isolation. It is the disjunction of these two expressions which is a tautology.
Similarly, a sequent of the form ' α, β ⊢ ', for logical formulas α and β, means that either α is false or β is false. But it does not mean that either α is a contradiction or β is a contradiction. To clarify this, consider the example ' B ∧ A, C ∧ ¬A ⊢ '. This is a valid sequent because either B ∧ A is false or C ∧ ¬A is false. But neither of these expressions is a contradiction in isolation. It is the conjunction of these two expressions which is a contradiction.
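The validity notion used in these examples — every truth assignment making all antecedents true must make at least one consequent true — can be checked by brute force over assignments. A minimal Python sketch (the representation of formulas as Boolean functions is an illustrative assumption):

```python
from itertools import product

def sequent_valid(antecedents, consequents, atoms):
    # A sequent is valid if every truth assignment that makes all
    # antecedent formulas true makes at least one consequent formula true.
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(f(env) for f in antecedents) and not any(f(env) for f in consequents):
            return False
    return True

atoms = ["A", "B", "C"]
# ' ⊢ B ∨ A, C ∨ ¬A ' is valid although neither consequent is a tautology.
print(sequent_valid([],
                    [lambda e: e["B"] or e["A"],
                     lambda e: e["C"] or not e["A"]], atoms))       # True
# ' B ∧ A, C ∧ ¬A ⊢ ' is valid: the antecedents cannot both be true.
print(sequent_valid([lambda e: e["B"] and e["A"],
                     lambda e: e["C"] and not e["A"]], [], atoms))  # True
```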
Rules
Most proof systems provide ways to deduce one sequent from another. These inference rules are written with a list of sequents above and below a line. This rule indicates that if everything above the line is true, so is everything under the line.
A typical rule is:
${\frac {\Gamma ,\alpha \vdash \Sigma \qquad \Gamma \vdash \alpha }{\Gamma \vdash \Sigma }}$
This indicates that if we can deduce that $\Gamma ,\alpha $ yields $\Sigma $, and that $\Gamma $ yields $\alpha $, then we can also deduce that $\Gamma $ yields $\Sigma $. (See also the full set of sequent calculus inference rules.)
Interpretation
History of the meaning of sequent assertions
The assertion symbol in sequents originally meant exactly the same as the implication operator. But over time, its meaning has changed to signify provability within a theory rather than semantic truth in all models.
In 1934, Gentzen did not define the assertion symbol ' ⊢ ' in a sequent to signify provability. He defined it to mean exactly the same as the implication operator ' ⇒ '. Using ' → ' instead of ' ⊢ ' and ' ⊃ ' instead of ' ⇒ ', he wrote: "The sequent A1, ..., Aμ → B1, ..., Bν signifies, as regards content, exactly the same as the formula (A1 & ... & Aμ) ⊃ (B1 ∨ ... ∨ Bν)".[4] (Gentzen employed the right-arrow symbol between the antecedents and consequents of sequents. He employed the symbol ' ⊃ ' for the logical implication operator.)
In 1939, Hilbert and Bernays stated likewise that a sequent has the same meaning as the corresponding implication formula.[5]
In 1944, Alonzo Church emphasized that Gentzen's sequent assertions did not signify provability.
"Employment of the deduction theorem as primitive or derived rule must not, however, be confused with the use of Sequenzen by Gentzen. For Gentzen's arrow, →, is not comparable to our syntactical notation, ⊢, but belongs to his object language (as is clear from the fact that expressions containing it appear as premisses and conclusions in applications of his rules of inference)."[6]
Numerous publications after this time have stated that the assertion symbol in sequents does signify provability within the theory where the sequents are formulated. Curry in 1963,[7] Lemmon in 1965,[2] and Huth and Ryan in 2004[8] all state that the sequent assertion symbol signifies provability. However, Ben-Ari (2012, p. 69) states that the assertion symbol in Gentzen-system sequents, which he denotes as ' ⇒ ', is part of the object language, not the metalanguage.[9]
According to Prawitz (1965): "The calculi of sequents can be understood as meta-calculi for the deducibility relation in the corresponding systems of natural deduction."[10] And furthermore: "A proof in a calculus of sequents can be looked upon as an instruction on how to construct a corresponding natural deduction."[11] In other words, the assertion symbol is part of the object language for the sequent calculus, which is a kind of meta-calculus, but simultaneously signifies deducibility in an underlying natural deduction system.
Intuitive meaning
A sequent is a formalized statement of provability that is frequently used when specifying calculi for deduction. In the sequent calculus, the name sequent is used for the construct, which can be regarded as a specific kind of judgment, characteristic to this deduction system.
The intuitive meaning of the sequent $\Gamma \vdash \Sigma $ is that under the assumption of Γ the conclusion of Σ is provable. Classically, the formulae on the left of the turnstile can be interpreted conjunctively while the formulae on the right can be considered as a disjunction. This means that, when all formulae in Γ hold, then at least one formula in Σ also has to be true. If the succedent is empty, this is interpreted as falsity, i.e. $\Gamma \vdash $ means that Γ proves falsity and is thus inconsistent. On the other hand an empty antecedent is assumed to be true, i.e., $\vdash \Sigma $ means that Σ follows without any assumptions, i.e., it is always true (as a disjunction). A sequent of this form, with Γ empty, is known as a logical assertion.
Of course, other intuitive explanations are possible, which are classically equivalent. For example, $\Gamma \vdash \Sigma $ can be read as asserting that it cannot be the case that every formula in Γ is true and every formula in Σ is false (this is related to the double-negation interpretations of classical intuitionistic logic, such as Glivenko's theorem).
In any case, these intuitive readings are only pedagogical. Since formal proofs in proof theory are purely syntactic, the meaning of (the derivation of) a sequent is only given by the properties of the calculus that provides the actual rules of inference.
Setting aside technical precision, we can describe sequents in their introductory logical form. $\Gamma $ represents a set of assumptions that we begin our logical process with, for example "Socrates is a man" and "All men are mortal". $\Sigma $ represents a logical conclusion that follows under these premises. For example, "Socrates is mortal" follows from a reasonable formalization of the above points, and we could expect to see it on the $\Sigma $ side of the turnstile. In this sense, $\vdash $ stands for the process of reasoning, or "therefore" in English.
Variations
The general notion of sequent introduced here can be specialized in various ways. A sequent is said to be an intuitionistic sequent if there is at most one formula in the succedent (although multi-succedent calculi for intuitionistic logic are also possible). More precisely, the restriction of the general sequent calculus to single-succedent-formula sequents, with the same inference rules as for general sequents, constitutes an intuitionistic sequent calculus. (This restricted sequent calculus is denoted LJ.)
Similarly, one can obtain calculi for dual-intuitionistic logic (a type of paraconsistent logic) by requiring that sequents be singular in the antecedent.
In many cases, sequents are also assumed to consist of multisets or sets instead of sequences. Thus one disregards the order or even the numbers of occurrences of the formulae. For classical propositional logic this does not yield a problem, since the conclusions that one can draw from a collection of premises do not depend on these data. In substructural logic, however, this may become quite important.
Natural deduction systems use single-consequence conditional assertions, but they typically do not use the same sets of inference rules as Gentzen introduced in 1934. In particular, tabular natural deduction systems, which are very convenient for practical theorem-proving in propositional calculus and predicate calculus, were applied by Suppes (1957) and Lemmon (1965) for teaching introductory logic in textbooks.
Etymology
Historically, sequents were introduced by Gerhard Gentzen in order to specify his famous sequent calculus.[12] In his German publication he used the word "Sequenz". However, in English, the word "sequence" is already used as a translation of the German "Folge" and appears quite frequently in mathematics. The term "sequent" was then created in search of an alternative translation of the German expression.
Kleene[13] makes the following comment on the translation into English: "Gentzen says 'Sequenz', which we translate as 'sequent', because we have already used 'sequence' for any succession of objects, where the German is 'Folge'."
See also
• Gerhard Gentzen
• Intuitionistic logic
• Natural deduction
• Sequent calculus
Notes
1. The disjunctive semantics for the right side of a sequent is stated and explained by Curry 1977, pp. 189–190, Kleene 2002, pp. 290, 297, Kleene 2009, p. 441, Hilbert & Bernays 1970, p. 385, Smullyan 1995, pp. 104–105, Takeuti 2013, p. 9, and Gentzen 1934, p. 180.
2. Lemmon 1965, p. 12, wrote: "Thus a sequent is an argument-frame containing a set of assumptions and a conclusion which is claimed to follow from them. [...] The propositions to the left of '⊢' become assumptions of the argument, and the proposition to the right becomes a conclusion validly drawn from those assumptions."
3. Smullyan 1995, p. 105.
4. Gentzen 1934, p. 180.
2.4. The sequent A1, ..., Aμ → B1, ..., Bν means, in terms of content, exactly the same as the formula
(A1 & ... & Aμ) ⊃ (B1 ∨ ... ∨ Bν).
5. Hilbert & Bernays 1970, p. 385.
For the contentual interpretation, a sequent
A1, ..., Ar → B1, ..., Bs,
in which the numbers r and s are both different from 0, means the same as the implication
(A1 & ... & Ar) → (B1 ∨ ... ∨ Bs)
6. Church 1996, p. 165.
7. Curry 1977, p. 184
8. Huth & Ryan (2004, p. 5)
9. Ben-Ari 2012, p. 69, defines sequents to have the form U ⇒ V for (possibly non-empty) sets of formulas U and V. Then he writes:
"Intuitively, a sequent represents 'provable from' in the sense that the formulas in U are assumptions for the set of formulas V that are to be proved. The symbol ⇒ is similar to the symbol ⊢ in Hilbert systems, except that ⇒ is part of the object language of the deductive system being formalized, while ⊢ is a metalanguage notation used to reason about deductive systems."
10. Prawitz 2006, p. 90.
11. See Prawitz 2006, p. 91, for this and further details of interpretation.
12. Gentzen 1934, Gentzen 1935.
13. Kleene 2002, p. 441
References
• Ben-Ari, Mordechai (2012) [1993]. Mathematical logic for computer science. London: Springer. ISBN 978-1-4471-4128-0.
• Church, Alonzo (1996) [1944]. Introduction to mathematical logic. Princeton, New Jersey: Princeton University Press. ISBN 978-0-691-02906-1.
• Curry, Haskell Brooks (1977) [1963]. Foundations of mathematical logic. New York: Dover Publications Inc. ISBN 978-0-486-63462-3.
• Gentzen, Gerhard (1934). "Untersuchungen über das logische Schließen. I". Mathematische Zeitschrift. 39 (2): 176–210. doi:10.1007/bf01201353. S2CID 121546341.
• Gentzen, Gerhard (1935). "Untersuchungen über das logische Schließen. II". Mathematische Zeitschrift. 39 (3): 405–431. doi:10.1007/bf01201363. S2CID 186239837.
• Hilbert, David; Bernays, Paul (1970) [1939]. Grundlagen der Mathematik II (Second ed.). Berlin, New York: Springer-Verlag. ISBN 978-3-642-86897-9.
• Huth, Michael; Ryan, Mark (2004). Logic in Computer Science (Second ed.). Cambridge, United Kingdom: Cambridge University Press. ISBN 978-0-521-54310-1.
• Kleene, Stephen Cole (2009) [1952]. Introduction to metamathematics. Ishi Press International. ISBN 978-0-923891-57-2.
• Kleene, Stephen Cole (2002) [1967]. Mathematical logic. Mineola, New York: Dover Publications. ISBN 978-0-486-42533-7.
• Lemmon, Edward John (1965). Beginning logic. Thomas Nelson. ISBN 0-17-712040-1.
• Prawitz, Dag (2006) [1965]. Natural deduction: A proof-theoretical study. Mineola, New York: Dover Publications. ISBN 978-0-486-44655-4.
• Smullyan, Raymond Merrill (1995) [1968]. First-order logic. New York: Dover Publications. ISBN 978-0-486-68370-6.
• Suppes, Patrick Colonel (1999) [1957]. Introduction to logic. Mineola, New York: Dover Publications. ISBN 978-0-486-40687-9.
• Takeuti, Gaisi (2013) [1975]. Proof theory (Second ed.). Mineola, New York: Dover Publications. ISBN 978-0-486-49073-1.
External links
• "Sequent (in logic)", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Sequential auction
A sequential auction is an auction in which several items are sold, one after the other, to the same group of potential buyers. In a sequential first-price auction (SAFP), each individual item is sold using a first price auction, while in a sequential second-price auction (SASP), each individual item is sold using a second price auction.
A sequential auction differs from a combinatorial auction, in which many items are auctioned simultaneously and the agents can bid on bundles of items. A sequential auction is much simpler to implement and more common in practice. However, the bidders in each auction know that there are going to be future auctions, and this may affect their strategic considerations. Here are some examples.
Example 1.[1] There are two items for sale and two potential buyers: Alice and Bob, with the following valuations:
• Alice values each item as 5, and both items as 10 (i.e., her valuation is additive).
• Bob values each item as 4, and both items as 4 (i.e., his valuation is unit demand).
In a SASP, each item is put to a second-price auction. Usually, such an auction is a truthful mechanism, so if each item is sold in isolation, Alice wins both items and pays 4 for each; her total payment is 4+4=8 and her net utility is 5 + 5 − 8 = 2. But, if Alice knows Bob's valuations, she has a better strategy: she can let Bob win the first item (e.g. by bidding 0). Then, Bob will not participate in the second auction at all, so Alice will win the second item and pay 0, and her net utility will be 5 − 0 = 5.
A similar outcome happens in a SAFP. If each item is sold in isolation, there is a Nash equilibrium in which Alice bids slightly above 4 and wins, and her net utility is slightly below 2. But, if Alice knows Bob's valuations, she can deviate to a strategy that lets Bob win in the first round so that in the second round she can win for a price slightly above 0.
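The payoff comparison in Example 1 can be spelled out in a few lines. The following Python sketch is an illustration only, under the assumption (as in the text) that Bob bids his marginal value in each round; it reproduces the two SASP outcomes:

```python
def alice_utility(alice_round1_bid):
    # Example 1's SASP. Bob bids his marginal value: 4 while he owns
    # nothing, 0 once he owns an item (his valuation is unit demand).
    utility = 0.0
    bob_bid = 4.0
    # Round 1 (second price): the winner pays the loser's bid.
    if alice_round1_bid > bob_bid:
        utility += 5 - bob_bid        # Alice wins and pays Bob's bid
    else:
        bob_bid = 0.0                 # Bob wins; his unit demand is satisfied
    # Round 2: Alice bids her value 5 against Bob's remaining marginal value.
    utility += 5 - bob_bid
    return utility

print(alice_utility(5.0))  # truthful bidding: wins both items, utility 2.0
print(alice_utility(0.0))  # letting Bob win round 1: utility 5.0
```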
Example 2.[2] Multiple identical objects are auctioned, and the agents have budget constraints. It may be advantageous for a bidder to bid aggressively on one object with a view to raising the price paid by his rival and depleting his budget so that the second object may then be obtained at a lower price. In effect, a bidder may wish to “raise a rival’s costs” in one market in order to gain advantage in another. Such considerations seem to have played a significant role in the auctions for radio spectrum licenses conducted by the Federal Communications Commission. Assessment of rival bidders’ budget constraints was a primary component of the pre-bidding preparation of GTE’s bidding team.
Nash equilibrium
A sequential auction is a special case of a sequential game. A natural question to ask for such a game is when there exists a subgame perfect equilibrium in pure strategies (SPEPS). When the players have full information (i.e., they know the sequence of auctions in advance), and a single item is sold in each round, a SAFP always has a SPEPS, regardless of the players' valuations. The proof is by backward induction:[1]: 872–874
• In the last round, we have a simple first price auction. It has a pure-strategy Nash equilibrium in which the highest-value agent wins by bidding slightly above the second-highest value.
• In each previous round, the situation is a special case of a first-price auction with externalities. In such an auction, each agent may gain value, not only when he wins, but also when other agents win. In general, the valuation of agent $i$ is represented by a vector $v_{i}[1],\dots ,v_{i}[n]$, where $v_{i}[j]$ is the value of agent $i$ when agent $j$ wins. In a sequential auction, the externalities are determined by the equilibrium outcomes in the future rounds. In the introductory example, there are two possible outcomes:
• If Alice wins the first round, then the equilibrium outcome in the second round is that Alice buys an item worth $5 for $4,[3] so her net gain is $1. Therefore, her total value for winning the first round is $v_{\text{Alice}}[{\text{Alice}}]=5+1=6$.
• If Bob wins the first round, then the equilibrium outcome in the second round is that Alice buys an item worth $5 for $0, so her net gain is $5. Therefore, her total value for letting Bob win is $v_{\text{Alice}}[{\text{Bob}}]=0+5=5$.
• Each first-price auction with externalities has a pure-strategy Nash equilibrium.[1] In the above example, the equilibrium in the first round is that Bob wins and pays $1.
• Therefore, by backward induction, each SAFP has a pure-strategy SPE.
Notes:
• The existence result also holds for SASP. In fact, any equilibrium-outcome of a first-price auction with externalities is also an equilibrium-outcome of a second-price auction with the same externalities.
• The existence result holds regardless of the valuations of the bidders – they may have arbitrary utility functions on indivisible goods. In contrast, if all auctions are done simultaneously, a pure-strategy Nash equilibrium does not always exist, even if the bidders have subadditive utility functions.[4]
Social welfare
Once we know that a subgame perfect equilibrium exists, the next natural question is how efficient it is – does it obtain the maximum social welfare? This is quantified by the price of anarchy (PoA) – the ratio of the maximum attainable social welfare to the social welfare in the worst equilibrium. In the introductory Example 1, the maximum attainable social welfare is 10 (when Alice wins both items), but the welfare in equilibrium is 9 (Bob wins the first item and Alice wins the second), so the PoA is 10/9. In general, the PoA of sequential auctions depends on the utility functions of the bidders.
The first five results apply to agents with complete information (all agents know the valuations of all other agents):
Case 1: Identical items.[5][6] There are several identical items. There are two bidders. At least one of them has a concave valuation function (diminishing returns). The PoA of SASP is at most $1/(1-1/e)\approx 1.58$. Numerical results show that, when there are many bidders with concave valuation functions, the efficiency loss decreases as the number of users increases.
Case 2: Additive bidders.[1]: 885 The items are different, and all bidders regard all items as independent goods, so their valuations are additive set functions. The PoA of SASP is unbounded – the welfare in a SPEPS might be arbitrarily small.
Case 3: Unit-demand bidders.[1] All bidders regard all items as pure substitute goods, so their valuations are unit demand. The PoA of SAFP is at most 2 – the welfare in a SPEPS is at least half the maximum (if mixed strategies are allowed, the PoA is at most 4). In contrast, the PoA in SASP is again unbounded.
These results are surprising and they emphasize the importance of the design decision of using a first-price auction (rather than a second-price auction) in each round.
Case 4: submodular bidders.[1] The bidders' valuations are arbitrary submodular set functions (note that additive and unit-demand are special cases of submodular). In this case, the PoA of both SAFP and SASP is unbounded, even when there are only four bidders. The intuition is that the high-value bidder might prefer to let a low-value bidder win, in order to decrease the competition that he might face in the future rounds.
Case 5: additive+UD.[7] Some bidders have additive valuations while others have unit-demand valuations. The PoA of SAFP can be as large as $\min(n,m)$, where m is the number of items and n is the number of bidders. Moreover, the inefficient equilibria persist even under iterated elimination of weakly dominated strategies. This implies linear inefficiency for many natural settings, including:
• Bidders with gross substitute valuations,
• capacitated valuations,
• budget-additive valuations,
• additive valuations with hard budget constraints on the payments.
Case 6: unit-demand bidders with incomplete information.[8] The agents do not know the valuations of the other agents, but only the probability-distribution from which their valuations are drawn. The sequential auction is then a Bayesian game, and its PoA might be higher. When all bidders have unit demand valuations, the PoA of a Bayesian Nash equilibrium in a SAFP is at most 3.
Revenue maximization
An important practical question for sellers selling several items is how to design an auction that maximizes their revenue. There are several questions:
• 1. Is it better to use a sequential auction or a simultaneous auction? Sequential auctions with bids announced between sales seem preferable because the bids may convey information about the value of objects to be sold later. The auction literature shows that this information effect increases the seller's expected revenue since it reduces the winner's curse. However, there is also a deception effect which develops in the sequential sales. If a bidder knows that his current bid will reveal information about later objects then he has an incentive to underbid.[9]
• 2. If a sequential auction is used, in what order should the items be sold in order to maximize the seller's revenue?
Suppose there are two items and there is a group of bidders who are subject to budget constraints. The objects have common values to all bidders but need not be identical, and may be either complement goods or substitute goods. In a game with complete information:[2]
• 1. A sequential auction yields more revenue than a simultaneous ascending auction if: (a) the difference between the items' values is large, or (b) there are significant complementarities.
A hybrid simultaneous-sequential form yields higher revenue than the sequential auction.
• 2. If the objects are sold by means of a sequence of open ascending auctions, then it is always optimal to sell the more valuable object first (assuming the objects' values are common knowledge).
Moreover, budget constraints may arise endogenously. That is, a bidding company may tell its representative "you may spend at most X on this auction", although the company itself has much more money to spend. Limiting the budget in advance gives the bidders some strategic advantages.
When multiple objects are sold, budget constraints can have some other unanticipated consequences. For example, a reserve price can raise the seller's revenue even though it is set at such a low level that it is never binding in equilibrium.
Composeable mechanisms
Sequential auctions and simultaneous auctions are both special cases of a more general setting, in which the same bidders participate in several different mechanisms. Syrgkanis and Tardos[10] suggest a general framework for efficient mechanism design with guaranteed good properties even when players participate in multiple mechanisms simultaneously or sequentially. The class of smooth mechanisms – mechanisms that generate approximately market-clearing prices – results in high-quality outcomes both in equilibrium and in learning outcomes in the full-information setting, as well as in Bayesian equilibrium with uncertainty about participants. Smooth mechanisms compose well: smoothness locally at each mechanism implies global efficiency. For mechanisms where good performance requires that bidders do not bid above their value, weakly smooth mechanisms can be used, such as the Vickrey auction. They are approximately efficient under the no-overbidding assumption, and the weak smoothness property is also maintained by composition. Some of the results are valid also when participants have budget constraints.
References
1. Leme, Renato Paes; Syrgkanis, Vasilis; Tardos, Eva (2012). "Sequential Auctions and Externalities". Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms. p. 869. arXiv:1108.2452. doi:10.1137/1.9781611973099.70. ISBN 978-1-61197-210-8.
2. Benoit, J.-P.; Krishna, V. (2001). "Multiple-Object Auctions with Budget Constrained Bidders". The Review of Economic Studies. 68: 155–179. doi:10.1111/1467-937X.00164.
3. In fact, Alice may pay slightly more than $4 (e.g, if the bids are in whole cents, Alice may pay $4.01). For simplicity, we ignore this infinitesimal difference.
4. Hassidim, Avinatan; Kaplan, Haim; Mansour, Yishay; Nisan, Noam (2011). "Non-price equilibria in markets of discrete goods". Proceedings of the 12th ACM conference on Electronic commerce – EC '11. p. 295. arXiv:1103.3950. doi:10.1145/1993574.1993619. ISBN 9781450302616.
5. Bae, Junjik; Beigman, Eyal; Berry, Randall; Honig, Michael; Vohra, Rakesh (2008). "Sequential Bandwidth and Power Auctions for Distributed Spectrum Sharing". IEEE Journal on Selected Areas in Communications. 26 (7): 1193. doi:10.1109/JSAC.2008.080916. S2CID 28436853.
6. Bae, Junjik; Beigman, Eyal; Berry, Randall; Honig, Michael L.; Vohra, Rakesh (2009). "On the efficiency of sequential auctions for spectrum sharing". 2009 International Conference on Game Theory for Networks. p. 199. doi:10.1109/gamenets.2009.5137402. ISBN 978-1-4244-4176-1.
7. Feldman, Michal; Lucier, Brendan; Syrgkanis, Vasilis (2013). "Limits of Efficiency in Sequential Auctions". Web and Internet Economics. Lecture Notes in Computer Science. Vol. 8289. p. 160. arXiv:1309.2529. doi:10.1007/978-3-642-45046-4_14. ISBN 978-3-642-45045-7.
8. Syrgkanis, Vasilis; Tardos, Eva (2012). "Bayesian sequential auctions". Proceedings of the 13th ACM Conference on Electronic Commerce – EC '12. p. 929. arXiv:1206.4771. doi:10.1145/2229012.2229082. ISBN 9781450314152.
9. Hausch, Donald B. (1986). "Multi-Object Auctions: Sequential vs. Simultaneous Sales". Management Science. 32 (12): 1599–1610. doi:10.1287/mnsc.32.12.1599.
10. Syrgkanis, Vasilis; Tardos, Eva (2013). "Composable and efficient mechanisms". Proceedings of the 45th annual ACM symposium on Symposium on theory of computing – STOC '13. p. 211. arXiv:1211.1325. doi:10.1145/2488608.2488635. ISBN 9781450320290.
Sequential decoding
Introduced by John Wozencraft, sequential decoding is a limited-memory technique for decoding tree codes. Sequential decoding is mainly used as an approximate decoding algorithm for long constraint-length convolutional codes. This approach may not be as accurate as the Viterbi algorithm but can save a substantial amount of computer memory. It was used to decode a convolutional code in the 1968 Pioneer 9 mission.
Sequential decoding explores the tree code in a way that tries to minimise the computational cost and the memory required to store the tree.
There is a range of sequential decoding approaches based on the choice of metric and algorithm. Metrics include:
• Fano metric
• Zigangirov metric
• Gallager metric
Algorithms include:
• Stack algorithm
• Fano algorithm
• Creeper algorithm
Fano metric
Given a partially explored tree (represented by a set of nodes which form the current limit of exploration), we would like to know the best node from which to explore further. The Fano metric (named after Robert Fano) allows one to calculate which is the best node to explore further. This metric is optimal given no other constraints (e.g. memory).
For a binary symmetric channel (with error probability $p$) the Fano metric can be derived via Bayes theorem. We are interested in following the most likely path $P_{i}$ given an explored state of the tree $X$ and a received sequence ${\mathbf {r} }$. Using the language of probability and Bayes theorem we want to choose the maximum over $i$ of:
$\Pr(P_{i}|X,{\mathbf {r} })\propto \Pr({\mathbf {r} }|P_{i},X)\Pr(P_{i}|X)$
We now introduce the following notation:
• $N$ to represent the maximum length of transmission in branches
• $b$ to represent the number of bits on a branch of the code (the denominator of the code rate, $R$).
• $d_{i}$ to represent the number of bit errors on path $P_{i}$ (the Hamming distance between the branch labels and the received sequence)
• $n_{i}$ to be the length of $P_{i}$ in branches.
We express the likelihood $\Pr({\mathbf {r} }|P_{i},X)$ as $p^{d_{i}}(1-p)^{n_{i}b-d_{i}}2^{-(N-n_{i})b}$ (by using the binary symmetric channel likelihood for the first $n_{i}b$ bits followed by a uniform prior over the remaining bits).
We express the prior $\Pr(P_{i}|X)$ in terms of the number of branch choices one has made, $n_{i}$, and the number of branches from each node, $2^{Rb}$.
Therefore:
${\begin{aligned}\Pr(P_{i}|X,{\mathbf {r} })&\propto p^{d_{i}}(1-p)^{n_{i}b-d_{i}}2^{-(N-n_{i})b}2^{-n_{i}Rb}\\&\propto p^{d_{i}}(1-p)^{n_{i}b-d_{i}}2^{n_{i}b}2^{-n_{i}Rb}\end{aligned}}$
We can equivalently maximise the log of this probability, i.e.
${\begin{aligned}&d_{i}\log _{2}p+(n_{i}b-d_{i})\log _{2}(1-p)+n_{i}b-n_{i}Rb\\=&d_{i}(\log _{2}p+1-R)+(n_{i}b-d_{i})(\log _{2}(1-p)+1-R)\end{aligned}}$
This last expression is the Fano metric. The important point to see is that we have two terms here: one based on the number of wrong bits and one based on the number of right bits. We can therefore update the Fano metric simply by adding $\log _{2}p+1-R$ for each non-matching bit and $\log _{2}(1-p)+1-R$ for each matching bit.
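A minimal Python sketch of this per-bit update rule (the function name and the sample numbers are illustrative assumptions):

```python
import math

def fano_metric(path_bits, received_bits, p, R):
    # Add log2(p) + 1 - R per mismatched bit and log2(1-p) + 1 - R per
    # matched bit, as derived above for the binary symmetric channel.
    metric = 0.0
    for b, r in zip(path_bits, received_bits):
        metric += (math.log2(p) if b != r else math.log2(1 - p)) + 1 - R
    return metric

# With p = 0.05 and R = 1/2: each match adds ~+0.43, each mismatch ~-3.82.
print(fano_metric([0, 1, 1, 0], [0, 1, 0, 0], p=0.05, R=0.5))
```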
Computational cutoff rate
For sequential decoding to be a good choice of decoding algorithm, the number of states explored needs to remain small (otherwise an algorithm which deliberately explores all states, e.g. the Viterbi algorithm, may be more suitable). For a particular noise level there is a maximum coding rate $R_{0}$ called the computational cutoff rate, below which the expected amount of backtracking remains bounded. For the binary symmetric channel:
$R_{0}=1-\log _{2}(1+2{\sqrt {p(1-p)}})$
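For example (a small illustrative computation in Python):

```python
import math

def cutoff_rate(p):
    # Computational cutoff rate R0 of the binary symmetric channel.
    return 1 - math.log2(1 + 2 * math.sqrt(p * (1 - p)))

# Sequential decoding remains tractable for code rates R below R0.
print(cutoff_rate(0.05))  # ~0.478
```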
Algorithms
Stack algorithm
The simplest algorithm to describe is the "stack algorithm" in which the best $N$ paths found so far are stored. Sequential decoding may introduce an additional error above Viterbi decoding when the correct path has $N$ or more higher-scoring paths above it; at that point the correct path drops off the stack and is no longer considered.
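The following Python sketch shows the best-first skeleton of the stack algorithm. It is a toy illustration: paths are raw bit tuples rather than paths through a real code tree, the stack is unbounded, and the per-bit scoring function is supplied by the caller.

```python
import heapq

def stack_decode(received, step_score, depth):
    # Repeatedly extend the partial path with the best metric so far.
    heap = [(0.0, ())]                      # (negated metric, partial path)
    while heap:
        neg_m, path = heapq.heappop(heap)   # current best partial path
        if len(path) == depth:
            return path, -neg_m             # first full-length path wins
        for bit in (0, 1):
            score = -neg_m + step_score(bit, received[len(path)])
            heapq.heappush(heap, (-score, path + (bit,)))

# Fano-like per-bit scores: +1 for a match, -3 for a mismatch.
path, metric = stack_decode((1, 0, 1, 1), lambda b, r: 1 if b == r else -3, 4)
print(path, metric)  # (1, 0, 1, 1) 4.0
```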
Fano algorithm
The famous Fano algorithm (named after Robert Fano) has a very low memory requirement and hence is suited to hardware implementations. This algorithm explores backwards and forward from a single point on the tree.
1. The Fano algorithm is a sequential decoding algorithm that does not require a stack.
2. The Fano algorithm can only operate over a code tree because it cannot examine path merging.
3. At each decoding stage, the Fano algorithm retains the information regarding three paths: the current path, its immediate predecessor path, and one of its successor paths.
4. Based on this information, the Fano algorithm can move from the current path to either its immediate predecessor path or the selected successor path; hence, no stack is required for queuing all examined paths.
5. The movement of the Fano algorithm is guided by a dynamic threshold T that is an integer multiple of a fixed step size Δ.
6. Only the path whose path metric is no less than T can be next visited. According to the algorithm, the process of codeword search continues to move forward along a code path, as long as the Fano metric along the code path remains non-decreasing.
7. Once all the successor path metrics are smaller than T, the algorithm moves backward to the predecessor path if the predecessor path metric beats T; thereafter, threshold examination will be subsequently performed on another successor path of this revisited predecessor.
8. In case the predecessor path metric is also less than T, the threshold T is one-step lowered so that the algorithm is not trapped on the current path.
9. For the Fano algorithm, if a path is revisited, the presently examined dynamic threshold is always lower than the momentary dynamic threshold at the previous visit, guaranteeing that looping in the algorithm does not occur, and that the algorithm can ultimately reach a terminal node of the code tree, and stop.
External links
• "Correction trees" - simulator of correction process using priority queue to choose maximum metric node (called weight)
Sequential dynamical system
Sequential dynamical systems (SDSs) are a class of graph dynamical systems. They are discrete dynamical systems which generalize many aspects of systems such as classical cellular automata, and they provide a framework for studying asynchronous processes over graphs. The analysis of SDSs uses techniques from combinatorics, abstract algebra, graph theory, dynamical systems and probability theory.
Definition
An SDS is constructed from the following components:
• A finite graph Y with vertex set v[Y] = {1,2, ... , n}. Depending on the context the graph can be directed or undirected.
• A state xi for each vertex i of Y, taken from a finite set K. The system state is the n-tuple x = (x1, x2, ... , xn), and x[i] is the tuple consisting of the states associated to the vertices in the 1-neighborhood of i in Y (in some fixed order).
• A vertex function fi for each vertex i. The vertex function maps the state of vertex i at time t to the vertex state at time t + 1 based on the states associated to the 1-neighborhood of i in Y.
• A word w = (w1, w2, ... , wm) over v[Y].
It is convenient to introduce the Y-local maps Fi constructed from the vertex functions by
$F_{i}(x)=(x_{1},x_{2},\ldots ,x_{i-1},f_{i}(x[i]),x_{i+1},\ldots ,x_{n})\;.$
The word w specifies the sequence in which the Y-local maps are composed to derive the sequential dynamical system map F: Kn → Kn as
$[F_{Y},w]=F_{w(m)}\circ F_{w(m-1)}\circ \cdots \circ F_{w(2)}\circ F_{w(1)}\;.$
If the update sequence is a permutation one frequently speaks of a permutation SDS to emphasize this point. The phase space associated to a sequential dynamical system with map F: Kn → Kn is the finite directed graph with vertex set Kn and directed edges (x, F(x)). The structure of the phase space is governed by the properties of the graph Y, the vertex functions (fi)i, and the update sequence w. A large part of SDS research seeks to infer phase space properties based on the structure of the system constituents.
Example
Consider the case where Y is the graph with vertex set {1,2,3} and undirected edges {1,2}, {1,3} and {2,3} (a triangle or 3-circle) with vertex states from K = {0,1}. For vertex functions use the symmetric, boolean function nor : K3 → K defined by nor(x,y,z) = (1+x)(1+y)(1+z) with boolean arithmetic. Thus, the only case in which the function nor returns the value 1 is when all the arguments are 0. Pick w = (1,2,3) as update sequence. Starting from the initial system state (0,0,0) at time t = 0 one computes the state of vertex 1 at time t = 1 as nor(0,0,0) = 1. The state of vertex 2 at time t = 1 is nor(1,0,0) = 0. Note that the state of vertex 1 at time t = 1 is used immediately. Next one obtains the state of vertex 3 at time t = 1 as nor(1,0,0) = 0. This completes the update sequence, and one concludes that the Nor-SDS map sends the system state (0,0,0) to (1,0,0). The system state (1,0,0) is in turn mapped to (0,1,0) by an application of the SDS map.
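The example can be reproduced directly. A minimal Python sketch (vertex indices shifted to 0, 1, 2; helper names are illustrative assumptions):

```python
def nor(*args):
    # nor returns 1 only when all of its arguments are 0.
    return int(not any(args))

# Triangle (3-circle): each vertex's 1-neighborhood is itself plus its
# two neighbors, listed in a fixed order.
neighborhood = {0: (0, 1, 2), 1: (1, 0, 2), 2: (2, 0, 1)}

def sds_step(state, order=(0, 1, 2)):
    # One sweep of the Nor-SDS: vertices are updated sequentially, so each
    # vertex sees states already updated earlier in the same sweep.
    x = list(state)
    for v in order:
        x[v] = nor(*(x[u] for u in neighborhood[v]))
    return tuple(x)

state = (0, 0, 0)
for _ in range(3):
    nxt = sds_step(state)
    print(state, "->", nxt)   # (0,0,0) -> (1,0,0) -> (0,1,0) -> ...
    state = nxt
```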
See also
• Graph dynamical system
• Boolean network
• Gene regulatory network
• Dynamic Bayesian network
• Petri net
Particle filter
Particle filters, or sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions to filtering problems for nonlinear state-space systems, arising for example in signal processing and Bayesian statistical inference.[1] The filtering problem consists of estimating the internal states in dynamical systems when partial observations are made and random perturbations are present in the sensors as well as in the dynamical system. The objective is to compute the posterior distributions of the states of a Markov process, given the noisy and partial observations. The term "particle filters" was first coined in 1996 by Pierre Del Moral, in reference to mean-field interacting particle methods used in fluid mechanics since the beginning of the 1960s.[2] The term "Sequential Monte Carlo" was coined by Jun S. Liu and Rong Chen in 1998.[3]
Particle filtering uses a set of particles (also called samples) to represent the posterior distribution of a stochastic process given the noisy and/or partial observations. The state-space model can be nonlinear and the initial state and noise distributions can take any form required. Particle filter techniques provide a well-established methodology[2][4][5] for generating samples from the required distribution without requiring assumptions about the state-space model or the state distributions. However, these methods do not perform well when applied to very high-dimensional systems.
Particle filters update their prediction in an approximate (statistical) manner. The samples from the distribution are represented by a set of particles; each particle has a likelihood weight assigned to it that represents the probability of that particle being sampled from the probability density function. Weight disparity leading to weight collapse is a common issue encountered in these filtering algorithms. However, it can be mitigated by including a resampling step before the weights become uneven. Several adaptive resampling criteria can be used including the variance of the weights and the relative entropy concerning the uniform distribution.[6] In the resampling step, the particles with negligible weights are replaced by new particles in the proximity of the particles with higher weights.
From the statistical and probabilistic point of view, particle filters may be interpreted as mean-field particle interpretations of Feynman-Kac probability measures.[7][8][9][10][11] These particle integration techniques were developed in molecular chemistry and computational physics by Theodore E. Harris and Herman Kahn in 1951, Marshall N. Rosenbluth and Arianna W. Rosenbluth in 1955,[12] and more recently by Jack H. Hetherington in 1984.[13] In computational physics, these Feynman-Kac type path particle integration methods are also used in Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods.[14][15][16] Feynman-Kac interacting particle methods are also strongly related to mutation-selection genetic algorithms currently used in evolutionary computation to solve complex optimization problems.
The particle filter methodology is used to solve Hidden Markov Model (HMM) and nonlinear filtering problems. With the notable exception of linear-Gaussian signal-observation models (Kalman filter) or wider classes of models (Benes filter[17]), Mireille Chaleyat-Maurel and Dominique Michel proved in 1984 that the sequence of posterior distributions of the random states of a signal, given the observations (a.k.a. optimal filter), has no finite recursion.[18] Various other numerical methods based on fixed grid approximations, Markov Chain Monte Carlo techniques, conventional linearization, extended Kalman filters, or determining the best linear system (in the expected cost-error sense) are unable to cope with large-scale systems, unstable processes, or insufficiently smooth nonlinearities.
Particle filters and Feynman-Kac particle methodologies find application in signal and image processing, Bayesian inference, machine learning, risk analysis and rare event sampling, engineering and robotics, artificial intelligence, bioinformatics,[19] phylogenetics, computational science, economics and mathematical finance, molecular chemistry, computational physics, pharmacokinetics, and other fields.
History
Heuristic-like algorithms
From a statistical and probabilistic viewpoint, particle filters belong to the class of branching/genetic type algorithms, and mean-field type interacting particle methodologies. The interpretation of these particle methods depends on the scientific discipline. In Evolutionary Computing, mean-field genetic type particle methodologies are often used as heuristic and natural search algorithms (a.k.a. Metaheuristic). In computational physics and molecular chemistry, they are used to solve Feynman-Kac path integration problems or to compute Boltzmann-Gibbs measures, top eigenvalues, and ground states of Schrödinger operators. In Biology and Genetics, they represent the evolution of a population of individuals or genes in some environment.
The origins of mean-field type evolutionary computational techniques can be traced back to 1950 and 1954 with Alan Turing's work on genetic type mutation-selection learning machines[20] and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.[21][22] The first trace of particle filters in statistical methodology dates back to the mid-1950s: the 'Poor Man's Monte Carlo'[23] proposed by Hammersley et al. in 1954 contained hints of the genetic type particle filtering methods used today. In 1963, Nils Aall Barricelli simulated a genetic type algorithm to mimic the ability of individuals to play a simple game.[24] In the evolutionary computing literature, genetic-type mutation-selection algorithms became popular through the seminal work of John Holland in the early 1970s, particularly his book[25] published in 1975.
In Biology and Genetics, the Australian geneticist Alex Fraser also published in 1957 a series of papers on the genetic type simulation of artificial selection of organisms.[26] Computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)[27] and Crosby (1973).[28] Fraser's simulations included all of the essential elements of modern mutation-selection genetic particle algorithms.
From the mathematical viewpoint, the conditional distribution of the random states of a signal given some partial and noisy observations is described by a Feynman-Kac probability on the random trajectories of the signal weighted by a sequence of likelihood potential functions.[7][8] Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods can also be interpreted as a mean-field genetic type particle approximation of Feynman-Kac path integrals.[7][8][9][13][14][29][30] The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron-chain reactions,[31] but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984.[13] One can also quote the earlier seminal works of Theodore E. Harris and Herman Kahn in particle physics, published in 1951, using mean-field but heuristic-like genetic methods for estimating particle transmission energies.[32] In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.[12]
The use of genetic particle algorithms in advanced signal processing and Bayesian inference is more recent. In January 1993, Genshiro Kitagawa developed a "Monte Carlo filter";[33] a slightly modified version of this article appeared in 1996.[34] In April 1993, Gordon et al. published in their seminal work[35] an application of a genetic type algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter' and demonstrated that, compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state space or the noise of the system. Independently, works by Pierre Del Moral[2] and by Himilcon Carvalho, Pierre Del Moral, André Monin, and Gérard Salut[36] on particle filters were published in the mid-1990s. Particle filters were also developed in signal processing during 1989-1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut at the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems), in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales) and the IT company DIGILOG, on RADAR/SONAR and GPS signal processing problems.[37][38][39][40][41][42]
Mathematical foundations
From 1950 to 1996, all the publications on particle filters and genetic algorithms, including the pruning and resampling Monte Carlo methods introduced in computational physics and molecular chemistry, presented natural and heuristic-like algorithms applied to different situations, without a single proof of their consistency or any discussion of the bias of the estimates and of genealogical and ancestral tree-based algorithms.
The mathematical foundations and the first rigorous analysis of these particle algorithms are due to Pierre Del Moral[2][4] in 1996. The article[2] also contains proof of the unbiased properties of a particle approximation of likelihood functions and unnormalized conditional probability measures. The unbiased particle estimator of the likelihood functions presented in this article is used today in Bayesian statistical inference.
Dan Crisan, Jessica Gaines, and Terry Lyons,[43][44][45] as well as Dan Crisan, Pierre Del Moral, and Terry Lyons,[46] created branching-type particle techniques with various population sizes around the end of the 1990s. P. Del Moral, A. Guionnet, and L. Miclo[8][47][48] made further advances in this subject in 2000. The first central limit theorems were proved by Pierre Del Moral and Alice Guionnet[49] in 1999 and by Pierre Del Moral and Laurent Miclo[8] in 2000. The first uniform convergence results concerning the time parameter for particle filters were developed at the end of the 1990s by Pierre Del Moral and Alice Guionnet.[47][48] The first rigorous analysis of genealogical tree-based particle filter smoothers is due to P. Del Moral and L. Miclo in 2001.[50]
The theory on Feynman-Kac particle methodologies and related particle filter algorithms was developed in two books published in 2000 and 2004.[8][5] These abstract probabilistic models encapsulate genetic type algorithms, particle and bootstrap filters, interacting Kalman filters (a.k.a. Rao–Blackwellized particle filters[51]), and importance sampling and resampling style particle filter techniques, including genealogical tree-based and particle backward methodologies for solving filtering and smoothing problems. Other classes of particle filtering methodologies include genealogical tree-based models,[10][5][52] backward Markov particle models,[10][53] adaptive mean-field particle models,[6] island-type particle models,[54][55] and particle Markov chain Monte Carlo methodologies.[56][57]
The filtering problem
Objective
A particle filter's goal is to estimate the posterior density of state variables given observation variables. The particle filter is intended for use with a hidden Markov Model, in which the system includes both hidden and observable variables. The observable variables (observation process) are linked to the hidden variables (state-process) via a known functional form. Similarly, the probabilistic description of the dynamical system defining the evolution of the state variables is known.
A generic particle filter estimates the posterior distribution of the hidden states using the observation measurement process. With respect to a state-space such as the one below:
${\begin{array}{cccccccccc}X_{0}&\to &X_{1}&\to &X_{2}&\to &X_{3}&\to &\cdots &{\text{signal}}\\\downarrow &&\downarrow &&\downarrow &&\downarrow &&\cdots &\\Y_{0}&&Y_{1}&&Y_{2}&&Y_{3}&&\cdots &{\text{observation}}\end{array}}$
the filtering problem is to estimate sequentially the values of the hidden states $X_{k}$, given the values of the observation process $Y_{0},\cdots ,Y_{k},$ at any time step k.
All Bayesian estimates of $X_{k}$ follow from the posterior density $p(x_{k}|y_{0},y_{1},...,y_{k})$. The particle filter methodology provides an approximation of these conditional probabilities using the empirical measure associated with a genetic type particle algorithm. In contrast, the Markov Chain Monte Carlo or importance sampling approach would model the full posterior $p(x_{0},x_{1},...,x_{k}|y_{0},y_{1},...,y_{k})$.
The Signal-Observation model
Particle methods often assume $X_{k}$ and the observations $Y_{k}$ can be modeled in this form:
• $X_{0},X_{1},\cdots $ is a Markov process on $\mathbb {R} ^{d_{x}}$ (for some $d_{x}\geqslant 1$) that evolves according to the transition probability density $p(x_{k}|x_{k-1})$. This model is also often written in a synthetic way as
$X_{k}|X_{k-1}=x_{k}\sim p(x_{k}|x_{k-1})$
with an initial probability density $p(x_{0})$.
• The observations $Y_{0},Y_{1},\cdots $ take values in some state space on $\mathbb {R} ^{d_{y}}$ (for some $d_{y}\geqslant 1$) and are conditionally independent provided that $X_{0},X_{1},\cdots $ are known. In other words, each $Y_{k}$ depends only on $X_{k}$. In addition, we assume the conditional distribution of $Y_{k}$ given $X_{k}=x_{k}$ is absolutely continuous, and in a synthetic way we have
$Y_{k}|X_{k}=x_{k}\sim p(y_{k}|x_{k})$
An example of a system with these properties is:
$X_{k}=g(X_{k-1})+W_{k-1}$
$Y_{k}=h(X_{k})+V_{k}$
where both $W_{k}$ and $V_{k}$ are mutually independent sequences with known probability density functions and g and h are known functions. These two equations can be viewed as state space equations and look similar to the state space equations for the Kalman filter. If the functions g and h in the above example are linear, and if both $W_{k}$ and $V_{k}$ are Gaussian, the Kalman filter finds the exact Bayesian filtering distribution. If not, Kalman filter-based methods are a first-order approximation (EKF) or a second-order approximation (UKF in general, but if the probability distribution is Gaussian a third-order approximation is possible).
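For concreteness, here is a short Python sketch that simulates such a signal-observation pair. The particular choices of g, h and the noise scales are illustrative assumptions, not part of the general model:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # illustrative nonlinear signal dynamics (an assumption)
    return 0.5 * x + 25.0 * x / (1.0 + x**2)

def h(x):
    # illustrative nonlinear observation function (an assumption)
    return x**2 / 20.0

def simulate(n_steps, sigma_w=np.sqrt(10.0), sigma_v=1.0):
    xs, ys = [], []
    x = rng.normal(0.0, 1.0)                  # X_0 drawn from p(x_0)
    for _ in range(n_steps):
        x = g(x) + rng.normal(0.0, sigma_w)   # X_k = g(X_{k-1}) + W_{k-1}
        y = h(x) + rng.normal(0.0, sigma_v)   # Y_k = h(X_k) + V_k
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

states, observations = simulate(50)
```

Because h is not injective and g is far from linear, the Kalman-type approximations mentioned above degrade quickly on this kind of model, which is where particle filters are typically used.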
The assumption that the initial distribution and the transitions of the Markov chain are absolutely continuous with respect to the Lebesgue measure can be relaxed. To design a particle filter we simply need to assume that we can sample the transitions $X_{k-1}\to X_{k}$ of the Markov chain $X_{k},$ and compute the likelihood function $x_{k}\mapsto p(y_{k}|x_{k})$ (see for instance the genetic selection-mutation description of the particle filter given below). The continuity assumption on the Markov transitions of $X_{k}$ is only used to derive, in an informal (and rather abusive) way, different formulae between posterior distributions using Bayes' rule for conditional densities.
Approximate Bayesian computation models
Main article: Approximate Bayesian computation
In certain problems, the conditional distribution of observations, given the random states of the signal, may fail to have a density; the latter may be impossible or too complex to compute.[19] In this situation, an additional level of approximation is needed. One strategy is to replace the signal $X_{k}$ by the Markov chain ${\mathcal {X}}_{k}=\left(X_{k},Y_{k}\right)$ and to introduce a virtual observation of the form
${\mathcal {Y}}_{k}=Y_{k}+\epsilon {\mathcal {V}}_{k}\quad {\mbox{for some parameter}}\quad \epsilon \in [0,1]$
for some sequence of independent random variables ${\mathcal {V}}_{k}$ with known probability density functions. The central idea is to observe that
${\text{Law}}\left(X_{k}|{\mathcal {Y}}_{0}=y_{0},\cdots ,{\mathcal {Y}}_{k}=y_{k}\right)\approx _{\epsilon \downarrow 0}{\text{Law}}\left(X_{k}|Y_{0}=y_{0},\cdots ,Y_{k}=y_{k}\right)$
The particle filter associated with the Markov process ${\mathcal {X}}_{k}=\left(X_{k},Y_{k}\right)$ given the partial observations ${\mathcal {Y}}_{0}=y_{0},\cdots ,{\mathcal {Y}}_{k}=y_{k},$ is defined in terms of particles evolving in $\mathbb {R} ^{d_{x}+d_{y}}$ with a likelihood function given with some obvious abusive notation by $p({\mathcal {Y}}_{k}|{\mathcal {X}}_{k})$. These probabilistic techniques are closely related to Approximate Bayesian Computation (ABC). In the context of particle filters, these ABC particle filtering techniques were introduced in 1998 by P. Del Moral, J. Jacod and P. Protter.[58] They were further developed by P. Del Moral, A. Doucet and A. Jasra.[59][60]
The nonlinear filtering equation
Bayes' rule for conditional probability gives:
$p(x_{0},\cdots ,x_{k}|y_{0},\cdots ,y_{k})={\frac {p(y_{0},\cdots ,y_{k}|x_{0},\cdots ,x_{k})p(x_{0},\cdots ,x_{k})}{p(y_{0},\cdots ,y_{k})}}$
where
${\begin{aligned}p(y_{0},\cdots ,y_{k})&=\int p(y_{0},\cdots ,y_{k}|x_{0},\cdots ,x_{k})p(x_{0},\cdots ,x_{k})dx_{0}\cdots dx_{k}\\p(y_{0},\cdots ,y_{k}|x_{0},\cdots ,x_{k})&=\prod _{l=0}^{k}p(y_{l}|x_{l})\\p(x_{0},\cdots ,x_{k})&=p_{0}(x_{0})\prod _{l=1}^{k}p(x_{l}|x_{l-1})\end{aligned}}$
Like the numerical methods mentioned above, particle filters are an approximation, but with enough particles they can be much more accurate.[2][4][5][47][48] The nonlinear filtering equation is given by the recursion
${\begin{aligned}p(x_{k}|y_{0},\cdots ,y_{k-1})&{\stackrel {\text{updating}}{\longrightarrow }}p(x_{k}|y_{0},\cdots ,y_{k})={\frac {p(y_{k}|x_{k})p(x_{k}|y_{0},\cdots ,y_{k-1})}{\int p(y_{k}|x'_{k})p(x'_{k}|y_{0},\cdots ,y_{k-1})dx'_{k}}}\\&{\stackrel {\text{prediction}}{\longrightarrow }}p(x_{k+1}|y_{0},\cdots ,y_{k})=\int p(x_{k+1}|x_{k})p(x_{k}|y_{0},\cdots ,y_{k})dx_{k}\end{aligned}}$
(Eq. 1)
with the convention $p(x_{0}|y_{0},\cdots ,y_{k-1})=p(x_{0})$ for k = 0. The nonlinear filtering problem consists in computing these conditional distributions sequentially.
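When the state space is finite, the integrals in this recursion reduce to sums and the recursion can be carried out exactly. The following Python sketch illustrates (Eq. 1) for a hypothetical two-state chain with Gaussian observation densities; the transition matrix and emission parameters are assumptions chosen only for illustration:

```python
import numpy as np

P = np.array([[0.9, 0.1],      # P[i, j] = p(x_{k+1} = j | x_k = i)
              [0.2, 0.8]])
means = np.array([0.0, 2.0])   # Y_k ~ Normal(means[x_k], 1), an assumption

def emission(y):
    # p(y | x) for each of the two states
    return np.exp(-0.5 * (y - means) ** 2) / np.sqrt(2.0 * np.pi)

def filter_recursion(ys, prior):
    pred = prior                    # convention: p(x_0 | y_0,...,y_{-1}) = p(x_0)
    post = prior
    for y in ys:
        post = emission(y) * pred   # updating step of (Eq. 1), numerator
        post /= post.sum()          # normalization, denominator of (Eq. 1)
        pred = post @ P             # prediction step of (Eq. 1)
    return post                     # p(x_k | y_0,...,y_k)

print(filter_recursion([0.1, 1.9, 2.2], prior=np.array([0.5, 0.5])))
```

Particle filters replace the exact sums above, which are unavailable for general state spaces, by empirical averages over a population of samples.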
Feynman-Kac formulation
Main article: Feynman–Kac formula
We fix a time horizon n and a sequence of observations $Y_{0}=y_{0},\cdots ,Y_{n}=y_{n}$, and for each k = 0, ..., n we set:
$G_{k}(x_{k})=p(y_{k}|x_{k}).$
In this notation, for any bounded function F on the set of trajectories of $X_{k}$ from the origin k = 0 up to time k = n, we have the Feynman-Kac formula
${\begin{aligned}\int F(x_{0},\cdots ,x_{n})p(x_{0},\cdots ,x_{n}|y_{0},\cdots ,y_{n})dx_{0}\cdots dx_{n}&={\frac {\int F(x_{0},\cdots ,x_{n})\left\{\prod \limits _{k=0}^{n}p(y_{k}|x_{k})\right\}p(x_{0},\cdots ,x_{n})dx_{0}\cdots dx_{n}}{\int \left\{\prod \limits _{k=0}^{n}p(y_{k}|x_{k})\right\}p(x_{0},\cdots ,x_{n})dx_{0}\cdots dx_{n}}}\\&={\frac {E\left(F(X_{0},\cdots ,X_{n})\prod \limits _{k=0}^{n}G_{k}(X_{k})\right)}{E\left(\prod \limits _{k=0}^{n}G_{k}(X_{k})\right)}}\end{aligned}}$
Feynman-Kac path integration models arise in a variety of scientific disciplines, including computational physics, biology, information theory and computer science.[8][10][5] Their interpretations are dependent on the application domain. For instance, if we choose the indicator function $G_{n}(x_{n})=1_{A}(x_{n})$ of some subset of the state space, they represent the conditional distribution of a Markov chain given that it stays in a given tube; that is, we have:
$E\left(F(X_{0},\cdots ,X_{n})|X_{0}\in A,\cdots ,X_{n}\in A\right)={\frac {E\left(F(X_{0},\cdots ,X_{n})\prod \limits _{k=0}^{n}G_{k}(X_{k})\right)}{E\left(\prod \limits _{k=0}^{n}G_{k}(X_{k})\right)}}$
and
$P\left(X_{0}\in A,\cdots ,X_{n}\in A\right)=E\left(\prod \limits _{k=0}^{n}G_{k}(X_{k})\right)$
as soon as the normalizing constant is strictly positive.
Particle filters
A Genetic type particle algorithm
Initially, such an algorithm starts with N independent random variables $\left(\xi _{0}^{i}\right)_{1\leqslant i\leqslant N}$ with common probability density $p(x_{0})$. The genetic algorithm selection-mutation transitions[2][4]
$\xi _{k}:=\left(\xi _{k}^{i}\right)_{1\leqslant i\leqslant N}{\stackrel {\text{selection}}{\longrightarrow }}{\widehat {\xi }}_{k}:=\left({\widehat {\xi }}_{k}^{i}\right)_{1\leqslant i\leqslant N}{\stackrel {\text{mutation}}{\longrightarrow }}\xi _{k+1}:=\left(\xi _{k+1}^{i}\right)_{1\leqslant i\leqslant N}$
mimic/approximate the updating-prediction transitions of the optimal filter evolution (Eq. 1):
• During the selection-updating transition we sample N (conditionally) independent random variables ${\widehat {\xi }}_{k}:=\left({\widehat {\xi }}_{k}^{i}\right)_{1\leqslant i\leqslant N}$ with common (conditional) distribution
$\sum _{i=1}^{N}{\frac {p(y_{k}|\xi _{k}^{i})}{\sum _{j=1}^{N}p(y_{k}|\xi _{k}^{j})}}\delta _{\xi _{k}^{i}}(dx_{k})$
where $\delta _{a}$ stands for the Dirac measure at a given state a.
• During the mutation-prediction transition, from each selected particle ${\widehat {\xi }}_{k}^{i}$ we sample independently a transition
${\widehat {\xi }}_{k}^{i}\longrightarrow \xi _{k+1}^{i}\sim p(x_{k+1}|{\widehat {\xi }}_{k}^{i}),\qquad i=1,\cdots ,N.$
In the above displayed formulae $p(y_{k}|\xi _{k}^{i})$ stands for the likelihood function $x_{k}\mapsto p(y_{k}|x_{k})$ evaluated at $x_{k}=\xi _{k}^{i}$, and $p(x_{k+1}|{\widehat {\xi }}_{k}^{i})$ stands for the conditional density $p(x_{k+1}|x_{k})$ evaluated at $x_{k}={\widehat {\xi }}_{k}^{i}$.
At each time k, we have the particle approximations
${\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{{\widehat {\xi }}_{k}^{i}}(dx_{k})\approx _{N\uparrow \infty }p(dx_{k}|y_{0},\cdots ,y_{k})\approx _{N\uparrow \infty }\sum _{i=1}^{N}{\frac {p(y_{k}|\xi _{k}^{i})}{\sum _{j=1}^{N}p(y_{k}|\xi _{k}^{j})}}\delta _{\xi _{k}^{i}}(dx_{k})$
and
${\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{k}^{i}}(dx_{k})\approx _{N\uparrow \infty }p(dx_{k}|y_{0},\cdots ,y_{k-1})$
In the genetic algorithms and evolutionary computing community, the mutation-selection Markov chain described above is often called the genetic algorithm with proportional selection. Several branching variants, including ones with random population sizes, have also been proposed in the literature.[5][43][46]
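A minimal Python sketch of one selection-mutation transition follows. The scalar linear-Gaussian model (the functions g, h and the noise scales) is an illustrative assumption; only the two-step structure mirrors the algorithm above:

```python
import numpy as np

rng = np.random.default_rng(1)
g = lambda x: 0.9 * x        # transition mean of p(x_{k+1} | x_k), assumed
h = lambda x: x              # observation function in p(y_k | x_k), assumed
sigma_w, sigma_v = 1.0, 0.5  # assumed noise scales

def selection_mutation(particles, y):
    # Selection-updating: sample N particles with probabilities
    # proportional to the likelihoods p(y_k | xi_k^i).
    w = np.exp(-0.5 * ((y - h(particles)) / sigma_v) ** 2)
    selected = rng.choice(particles, size=particles.size, p=w / w.sum())
    # Mutation-prediction: move each selected particle independently
    # according to the transition density p(x_{k+1} | x_k).
    return g(selected) + rng.normal(0.0, sigma_w, size=particles.size)

particles = rng.normal(0.0, 1.0, size=1000)   # xi_0^i drawn from p(x_0)
particles = selection_mutation(particles, y=0.3)
```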
Monte Carlo principles
Particle methods, like all sampling-based approaches (e.g., Markov Chain Monte Carlo), generate a set of samples that approximate the filtering density
$p(x_{k}|y_{0},\cdots ,y_{k}).$
For example, we may have N samples from the approximate posterior distribution of $X_{k}$, where the samples are labeled with superscripts as:
${\widehat {\xi }}_{k}^{1},\cdots ,{\widehat {\xi }}_{k}^{N}.$
Then, expectations with respect to the filtering distribution are approximated by
$\int f(x_{k})p(x_{k}|y_{0},\cdots ,y_{k})\,dx_{k}\approx _{N\uparrow \infty }{\frac {1}{N}}\sum _{i=1}^{N}f\left({\widehat {\xi }}_{k}^{i}\right)=\int f(x_{k}){\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k})$
(Eq. 2)
with
${\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k})={\frac {1}{N}}\sum _{i=1}^{N}\delta _{{\widehat {\xi }}_{k}^{i}}(dx_{k})$
where $\delta _{a}$ stands for the Dirac measure at a given state a. In the usual Monte Carlo fashion, the function f can recover all the moments, etc., of the distribution up to some approximation error. When the approximation equation (Eq. 2) is satisfied for any bounded function f we write
$p(dx_{k}|y_{0},\cdots ,y_{k}):=p(x_{k}|y_{0},\cdots ,y_{k})dx_{k}\approx _{N\uparrow \infty }{\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k})={\frac {1}{N}}\sum _{i=1}^{N}\delta _{{\widehat {\xi }}_{k}^{i}}(dx_{k})$
Particle filters can be interpreted as a genetic type particle algorithm evolving with mutation and selection transitions. We can keep track of the ancestral lines
$\left({\widehat {\xi }}_{0,k}^{i},{\widehat {\xi }}_{1,k}^{i},\cdots ,{\widehat {\xi }}_{k-1,k}^{i},{\widehat {\xi }}_{k,k}^{i}\right)$
of the particles $i=1,\cdots ,N$. The random states ${\widehat {\xi }}_{l,k}^{i}$, with the lower indices l=0,...,k, stand for the ancestors of the individual ${\widehat {\xi }}_{k,k}^{i}={\widehat {\xi }}_{k}^{i}$ at levels l=0,...,k. In this situation, we have the approximation formula
${\begin{aligned}\int F(x_{0},\cdots ,x_{k})p(x_{0},\cdots ,x_{k}|y_{0},\cdots ,y_{k})\,dx_{0}\cdots dx_{k}&\approx _{N\uparrow \infty }{\frac {1}{N}}\sum _{i=1}^{N}F\left({\widehat {\xi }}_{0,k}^{i},{\widehat {\xi }}_{1,k}^{i},\cdots ,{\widehat {\xi }}_{k,k}^{i}\right)\\&=\int F(x_{0},\cdots ,x_{k}){\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})\end{aligned}}$
(Eq. 3)
with the empirical measure
${\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\left({\widehat {\xi }}_{0,k}^{i},{\widehat {\xi }}_{1,k}^{i},\cdots ,{\widehat {\xi }}_{k,k}^{i}\right)}(d(x_{0},\cdots ,x_{k}))$
Here F stands for any bounded function on the path space of the signal. In a more synthetic form, (Eq. 3) is equivalent to
${\begin{aligned}p(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})&:=p(x_{0},\cdots ,x_{k}|y_{0},\cdots ,y_{k})\,dx_{0}\cdots dx_{k}\\&\approx _{N\uparrow \infty }{\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})\\&:={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\left({\widehat {\xi }}_{0,k}^{i},\cdots ,{\widehat {\xi }}_{k,k}^{i}\right)}(d(x_{0},\cdots ,x_{k}))\end{aligned}}$
Particle filters can be interpreted in many different ways. From the probabilistic point of view they coincide with a mean-field particle interpretation of the nonlinear filtering equation. The updating-prediction transitions of the optimal filter evolution can also be interpreted as the classical genetic type selection-mutation transitions of individuals. The sequential importance resampling technique provides another interpretation of the filtering transitions coupling importance sampling with the bootstrap resampling step. Last, but not least, particle filters can be seen as an acceptance-rejection methodology equipped with a recycling mechanism.[10][5]
Mean-field particle simulation
The general probabilistic principle
The nonlinear filtering evolution can be interpreted as a dynamical system in the set of probability measures of the form $\eta _{n+1}=\Phi _{n+1}\left(\eta _{n}\right)$ where $\Phi _{n+1}$ stands for some mapping from the set of probability distribution into itself. For instance, the evolution of the one-step optimal predictor $\eta _{n}(dx_{n})=p(x_{n}|y_{0},\cdots ,y_{n-1})dx_{n}$
satisfies a nonlinear evolution starting with the probability distribution $\eta _{0}(dx_{0})=p(x_{0})dx_{0}$. One of the simplest ways to approximate these probability measures is to start with N independent random variables $\left(\xi _{0}^{i}\right)_{1\leqslant i\leqslant N}$ with common probability distribution $\eta _{0}(dx_{0})=p(x_{0})dx_{0}$. Suppose we have defined a sequence of N random variables $\left(\xi _{n}^{i}\right)_{1\leqslant i\leqslant N}$ such that
${\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{n}^{i}}(dx_{n})\approx _{N\uparrow \infty }\eta _{n}(dx_{n})$
At the next step we sample N (conditionally) independent random variables $\xi _{n+1}:=\left(\xi _{n+1}^{i}\right)_{1\leqslant i\leqslant N}$ with common law
$\Phi _{n+1}\left({\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{n}^{i}}\right)\approx _{N\uparrow \infty }\Phi _{n+1}\left(\eta _{n}\right)=\eta _{n+1}$
A particle interpretation of the filtering equation
We illustrate this mean-field particle principle in the context of the evolution of the one step optimal predictors
$p(x_{k}|y_{0},\cdots ,y_{k-1})dx_{k}\to p(x_{k+1}|y_{0},\cdots ,y_{k})=\int p(x_{k+1}|x'_{k}){\frac {p(y_{k}|x_{k}')p(x'_{k}|y_{0},\cdots ,y_{k-1})dx'_{k}}{\int p(y_{k}|x''_{k})p(x''_{k}|y_{0},\cdots ,y_{k-1})dx''_{k}}}$
(Eq. 4)
For k = 0 we use the convention $p(x_{0}|y_{0},\cdots ,y_{-1}):=p(x_{0})$.
By the law of large numbers, we have
${\widehat {p}}(dx_{0})={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{0}^{i}}(dx_{0})\approx _{N\uparrow \infty }p(x_{0})dx_{0}$
in the sense that
$\int f(x_{0}){\widehat {p}}(dx_{0})={\frac {1}{N}}\sum _{i=1}^{N}f(\xi _{0}^{i})\approx _{N\uparrow \infty }\int f(x_{0})p(x_{0})dx_{0}$
for any bounded function $f$. We further assume that we have constructed a sequence of particles $\left(\xi _{k}^{i}\right)_{1\leqslant i\leqslant N}$ at some rank k such that
${\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{k}^{i}}(dx_{k})\approx _{N\uparrow \infty }~p(x_{k}~|~y_{0},\cdots ,y_{k-1})dx_{k}$
in the sense that for any bounded function $f$ we have
$\int f(x_{k}){\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})={\frac {1}{N}}\sum _{i=1}^{N}f(\xi _{k}^{i})\approx _{N\uparrow \infty }\int f(x_{k})p(x_{k}|y_{0},\cdots ,y_{k-1})dx_{k}$
In this situation, replacing $p(x_{k}|y_{0},\cdots ,y_{k-1})dx_{k}$ by the empirical measure ${\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})$ in the evolution equation of the one-step optimal filter stated in (Eq. 4) we find that
$p(x_{k+1}|y_{0},\cdots ,y_{k})\approx _{N\uparrow \infty }\int p(x_{k+1}|x'_{k}){\frac {p(y_{k}|x_{k}'){\widehat {p}}(dx'_{k}|y_{0},\cdots ,y_{k-1})}{\int p(y_{k}|x''_{k}){\widehat {p}}(dx''_{k}|y_{0},\cdots ,y_{k-1})}}$
Notice that the right hand side in the above formula is a weighted probability mixture
$\int p(x_{k+1}|x'_{k}){\frac {p(y_{k}|x_{k}'){\widehat {p}}(dx'_{k}|y_{0},\cdots ,y_{k-1})}{\int p(y_{k}|x''_{k}){\widehat {p}}(dx''_{k}|y_{0},\cdots ,y_{k-1})}}=\sum _{i=1}^{N}{\frac {p(y_{k}|\xi _{k}^{i})}{\sum _{j=1}^{N}p(y_{k}|\xi _{k}^{j})}}p(x_{k+1}|\xi _{k}^{i})=:{\widehat {q}}(x_{k+1}|y_{0},\cdots ,y_{k})$
where $p(y_{k}|\xi _{k}^{i})$ stands for the density $p(y_{k}|x_{k})$ evaluated at $x_{k}=\xi _{k}^{i}$, and $p(x_{k+1}|\xi _{k}^{i})$ stands for the density $p(x_{k+1}|x_{k})$ evaluated at $x_{k}=\xi _{k}^{i}$ for $i=1,\cdots ,N.$
Then, we sample N independent random variable $\left(\xi _{k+1}^{i}\right)_{1\leqslant i\leqslant N}$ with common probability density ${\widehat {q}}(x_{k+1}|y_{0},\cdots ,y_{k})$ so that
${\widehat {p}}(dx_{k+1}|y_{0},\cdots ,y_{k}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{k+1}^{i}}(dx_{k+1})\approx _{N\uparrow \infty }{\widehat {q}}(x_{k+1}|y_{0},\cdots ,y_{k})dx_{k+1}\approx _{N\uparrow \infty }p(x_{k+1}|y_{0},\cdots ,y_{k})dx_{k+1}$
Iterating this procedure, we design a Markov chain such that
${\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{k}^{i}}(dx_{k})\approx _{N\uparrow \infty }p(dx_{k}|y_{0},\cdots ,y_{k-1}):=p(x_{k}|y_{0},\cdots ,y_{k-1})dx_{k}$
Notice that the optimal filter is approximated at each time step k using the Bayes' formulae
$p(dx_{k}|y_{0},\cdots ,y_{k})\approx _{N\uparrow \infty }{\frac {p(y_{k}|x_{k}){\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})}{\int p(y_{k}|x'_{k}){\widehat {p}}(dx'_{k}|y_{0},\cdots ,y_{k-1})}}=\sum _{i=1}^{N}{\frac {p(y_{k}|\xi _{k}^{i})}{\sum _{j=1}^{N}p(y_{k}|\xi _{k}^{j})}}~\delta _{\xi _{k}^{i}}(dx_{k})$
The terminology "mean-field approximation" comes from the fact that we replace at each time step the probability measure $p(dx_{k}|y_{0},\cdots ,y_{k-1})$ by the empirical approximation ${\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})$. The mean-field particle approximation of the filtering problem is far from being unique. Several strategies are developed in the books.[10][5]
Some convergence results
The analysis of the convergence of particle filters was started in 1996[2][4] and continued in 2000 in the book[8] and the series of articles.[46][47][48][49][50][61][62] More recent developments can be found in the books.[10][5] When the filtering equation is stable (in the sense that it corrects any erroneous initial condition), the bias and the variance of the particle estimates
$I_{k}(f):=\int f(x_{k})p(dx_{k}|y_{0},\cdots ,y_{k-1})\approx _{N\uparrow \infty }{\widehat {I}}_{k}(f):=\int f(x_{k}){\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})$
are controlled by the non asymptotic uniform estimates
$\sup _{k\geqslant 0}\left\vert E\left({\widehat {I}}_{k}(f)\right)-I_{k}(f)\right\vert \leqslant {\frac {c_{1}}{N}}$
$\sup _{k\geqslant 0}E\left(\left[{\widehat {I}}_{k}(f)-I_{k}(f)\right]^{2}\right)\leqslant {\frac {c_{2}}{N}}$
for any function f bounded by 1, and for some finite constants $c_{1},c_{2}.$ In addition, for any $x\geqslant 0$:
$\mathbf {P} \left(\left|{\widehat {I}}_{k}(f)-I_{k}(f)\right|\leqslant c_{1}{\frac {x}{N}}+c_{2}{\sqrt {\frac {x}{N}}}\land \sup _{0\leqslant k\leqslant n}\left|{\widehat {I}}_{k}(f)-I_{k}(f)\right|\leqslant c{\sqrt {\frac {x\log(n)}{N}}}\right)>1-e^{-x}$
for some finite constants $c_{1},c_{2}$ related to the asymptotic bias and variance of the particle estimate, and some finite constant c. The same results are satisfied if we replace the one step optimal predictor by the optimal filter approximation.
Genealogical trees and Unbiasedness properties
Genealogical tree based particle smoothing
Tracing back in time the ancestral lines
$\left({\widehat {\xi }}_{0,k}^{i},{\widehat {\xi }}_{1,k}^{i},\cdots ,{\widehat {\xi }}_{k-1,k}^{i},{\widehat {\xi }}_{k,k}^{i}\right),\quad \left(\xi _{0,k}^{i},\xi _{1,k}^{i},\cdots ,\xi _{k-1,k}^{i},\xi _{k,k}^{i}\right)$
of the individuals ${\widehat {\xi }}_{k}^{i}\left(={\widehat {\xi }}_{k,k}^{i}\right)$ and $\xi _{k}^{i}\left(={\xi }_{k,k}^{i}\right)$ at every time step k, we also have the particle approximations
${\begin{aligned}{\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})&:={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\left({\widehat {\xi }}_{0,k}^{i},\cdots ,{\widehat {\xi }}_{k,k}^{i}\right)}(d(x_{0},\cdots ,x_{k}))\\&\approx _{N\uparrow \infty }p(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})\\&\approx _{N\uparrow \infty }\sum _{i=1}^{N}{\frac {p(y_{k}|\xi _{k,k}^{i})}{\sum _{j=1}^{N}p(y_{k}|\xi _{k,k}^{j})}}\delta _{\left(\xi _{0,k}^{i},\cdots ,\xi _{k,k}^{i}\right)}(d(x_{0},\cdots ,x_{k}))\\&\ \\{\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k-1})&:={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\left(\xi _{0,k}^{i},\cdots ,\xi _{k,k}^{i}\right)}(d(x_{0},\cdots ,x_{k}))\\&\approx _{N\uparrow \infty }p(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k-1})\\&:=p(x_{0},\cdots ,x_{k}|y_{0},\cdots ,y_{k-1})dx_{0}\cdots dx_{k}\end{aligned}}$
These empirical approximations are equivalent to the particle integral approximations
${\begin{aligned}\int F(x_{0},\cdots ,x_{k}){\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})&:={\frac {1}{N}}\sum _{i=1}^{N}F\left({\widehat {\xi }}_{0,k}^{i},\cdots ,{\widehat {\xi }}_{k,k}^{i}\right)\\&\approx _{N\uparrow \infty }\int F(x_{0},\cdots ,x_{k})p(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k})\\&\approx _{N\uparrow \infty }\sum _{i=1}^{N}{\frac {p(y_{k}|\xi _{k,k}^{i})}{\sum _{j=1}^{N}p(y_{k}|\xi _{k,k}^{j})}}F\left(\xi _{0,k}^{i},\cdots ,\xi _{k,k}^{i}\right)\\&\ \\\int F(x_{0},\cdots ,x_{k}){\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k-1})&:={\frac {1}{N}}\sum _{i=1}^{N}F\left(\xi _{0,k}^{i},\cdots ,\xi _{k,k}^{i}\right)\\&\approx _{N\uparrow \infty }\int F(x_{0},\cdots ,x_{k})p(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k-1})\end{aligned}}$
for any bounded function F on the random trajectories of the signal. As shown in[52] the evolution of the genealogical tree coincides with a mean-field particle interpretation of the evolution equations associated with the posterior densities of the signal trajectories. For more details on these path space models, we refer to the books.[10][5]
Unbiased particle estimates of likelihood functions
We use the product formula
$p(y_{0},\cdots ,y_{n})=\prod _{k=0}^{n}p(y_{k}|y_{0},\cdots ,y_{k-1})$
with
$p(y_{k}|y_{0},\cdots ,y_{k-1})=\int p(y_{k}|x_{k})p(dx_{k}|y_{0},\cdots ,y_{k-1})$
and the conventions $p(y_{0}|y_{0},\cdots ,y_{-1})=p(y_{0})$ and $p(x_{0}|y_{0},\cdots ,y_{-1})=p(x_{0}),$ for k = 0. Replacing $p(x_{k}|y_{0},\cdots ,y_{k-1})dx_{k}$ by the empirical approximation
${\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1}):={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{k}^{i}}(dx_{k})\approx _{N\uparrow \infty }p(dx_{k}|y_{0},\cdots ,y_{k-1})$
in the above displayed formula, we design the following unbiased particle approximation of the likelihood function
$p(y_{0},\cdots ,y_{n})\approx _{N\uparrow \infty }{\widehat {p}}(y_{0},\cdots ,y_{n})=\prod _{k=0}^{n}{\widehat {p}}(y_{k}|y_{0},\cdots ,y_{k-1})$
with
${\widehat {p}}(y_{k}|y_{0},\cdots ,y_{k-1})=\int p(y_{k}|x_{k}){\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})={\frac {1}{N}}\sum _{i=1}^{N}p(y_{k}|\xi _{k}^{i})$
where $p(y_{k}|\xi _{k}^{i})$ stands for the density $p(y_{k}|x_{k})$ evaluated at $x_{k}=\xi _{k}^{i}$. The design of this particle estimate and the unbiasedness property were proved in 1996 in the article.[2] Refined variance estimates can be found in [5] and [10].
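A sketch of this estimator for the illustrative scalar model used earlier follows. Note that unbiasedness holds for the likelihood estimate itself; its logarithm, accumulated below for numerical stability, is a biased (but consistent) estimate of the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(2)
g = lambda x: 0.9 * x        # assumed transition mean
h = lambda x: x              # assumed observation function
sigma_w, sigma_v = 1.0, 0.5  # assumed noise scales

def log_likelihood_estimate(ys, n_particles=1000):
    particles = rng.normal(0.0, 1.0, size=n_particles)  # xi_0^i ~ p(x_0)
    log_lik = 0.0
    for y in ys:
        # p(y_k | xi_k^i), with the Gaussian normalizing constant kept
        # so that the product over k estimates p(y_0,...,y_n) itself
        w = np.exp(-0.5 * ((y - h(particles)) / sigma_v) ** 2) \
            / (sigma_v * np.sqrt(2.0 * np.pi))
        # hat{p}(y_k | y_0,...,y_{k-1}) = (1/N) sum_i p(y_k | xi_k^i)
        log_lik += np.log(w.mean())
        # selection-mutation step producing the next predictive sample
        selected = rng.choice(particles, size=n_particles, p=w / w.sum())
        particles = g(selected) + rng.normal(0.0, sigma_w, size=n_particles)
    return log_lik

print(log_likelihood_estimate([0.2, -0.1, 0.4]))
```

This estimator underlies particle Markov chain Monte Carlo methods, where the unbiased likelihood estimate is plugged into a Metropolis–Hastings acceptance ratio.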
Backward particle smoothers
Using Bayes' rule, we have the formula
$p(x_{0},\cdots ,x_{n}|y_{0},\cdots ,y_{n-1})=p(x_{n}|y_{0},\cdots ,y_{n-1})p(x_{n-1}|x_{n},y_{0},\cdots ,y_{n-1})\cdots p(x_{1}|x_{2},y_{0},y_{1})p(x_{0}|x_{1},y_{0})$
Notice that
${\begin{aligned}p(x_{k-1}|x_{k},(y_{0},\cdots ,y_{k-1}))&\propto p(x_{k}|x_{k-1})p(x_{k-1}|(y_{0},\cdots ,y_{k-1}))\\p(x_{k-1}|(y_{0},\cdots ,y_{k-1}))&\propto p(y_{k-1}|x_{k-1})p(x_{k-1}|(y_{0},\cdots ,y_{k-2}))\end{aligned}}$
This implies that
$p(x_{k-1}|x_{k},(y_{0},\cdots ,y_{k-1}))={\frac {p(y_{k-1}|x_{k-1})p(x_{k}|x_{k-1})p(x_{k-1}|y_{0},\cdots ,y_{k-2})}{\int p(y_{k-1}|x'_{k-1})p(x_{k}|x'_{k-1})p(x'_{k-1}|y_{0},\cdots ,y_{k-2})dx'_{k-1}}}$
Replacing the one-step optimal predictors $p(x_{k-1}|(y_{0},\cdots ,y_{k-2}))dx_{k-1}$ by the particle empirical measures
${\widehat {p}}(dx_{k-1}|(y_{0},\cdots ,y_{k-2}))={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{k-1}^{i}}(dx_{k-1})\left(\approx _{N\uparrow \infty }p(dx_{k-1}|(y_{0},\cdots ,y_{k-2})):={p}(x_{k-1}|(y_{0},\cdots ,y_{k-2}))dx_{k-1}\right)$
we find that
${\begin{aligned}p(dx_{k-1}|x_{k},(y_{0},\cdots ,y_{k-1}))&\approx _{N\uparrow \infty }{\widehat {p}}(dx_{k-1}|x_{k},(y_{0},\cdots ,y_{k-1}))\\&:={\frac {p(y_{k-1}|x_{k-1})p(x_{k}|x_{k-1}){\widehat {p}}(dx_{k-1}|y_{0},\cdots ,y_{k-2})}{\int p(y_{k-1}|x'_{k-1})~p(x_{k}|x'_{k-1}){\widehat {p}}(dx'_{k-1}|y_{0},\cdots ,y_{k-2})}}\\&=\sum _{i=1}^{N}{\frac {p(y_{k-1}|\xi _{k-1}^{i})p(x_{k}|\xi _{k-1}^{i})}{\sum _{j=1}^{N}p(y_{k-1}|\xi _{k-1}^{j})p(x_{k}|\xi _{k-1}^{j})}}\delta _{\xi _{k-1}^{i}}(dx_{k-1})\end{aligned}}$
We conclude that
$p(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))\approx _{N\uparrow \infty }{\widehat {p}}_{backward}(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))$
with the backward particle approximation
${\begin{aligned}{\widehat {p}}_{backward}(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))={\widehat {p}}(dx_{n}|(y_{0},\cdots ,y_{n-1})){\widehat {p}}(dx_{n-1}|x_{n},(y_{0},\cdots ,y_{n-1}))\cdots {\widehat {p}}(dx_{1}|x_{2},(y_{0},y_{1})){\widehat {p}}(dx_{0}|x_{1},y_{0})\end{aligned}}$
The probability measure
${\widehat {p}}_{backward}(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))$
is the probability of the random paths of a Markov chain $\left(\mathbb {X} _{k,n}^{\flat }\right)_{0\leqslant k\leqslant n}$ running backward in time from time k=n to time k=0, and evolving at each time step k in the state space associated with the population of particles $\xi _{k}^{i},\ i=1,\cdots ,N.$
• Initially (at time k=n) the chain $\mathbb {X} _{n,n}^{\flat }$ chooses randomly a state with the distribution
${\widehat {p}}(dx_{n}|(y_{0},\cdots ,y_{n-1}))={\frac {1}{N}}\sum _{i=1}^{N}\delta _{\xi _{n}^{i}}(dx_{n})$
• From time k to the time (k-1), the chain starting at some state $\mathbb {X} _{k,n}^{\flat }=\xi _{k}^{i}$ for some $i=1,\cdots ,N$ at time k moves at time (k-1) to a random state $\mathbb {X} _{k-1,n}^{\flat }$ chosen with the discrete weighted probability
${\widehat {p}}(dx_{k-1}|\xi _{k}^{i},(y_{0},\cdots ,y_{k-1}))=\sum _{j=1}^{N}{\frac {p(y_{k-1}|\xi _{k-1}^{j})p(\xi _{k}^{i}|\xi _{k-1}^{j})}{\sum _{l=1}^{N}p(y_{k-1}|\xi _{k-1}^{l})p(\xi _{k}^{i}|\xi _{k-1}^{l})}}~\delta _{\xi _{k-1}^{j}}(dx_{k-1})$
In the above displayed formula, ${\widehat {p}}(dx_{k-1}|\xi _{k}^{i},(y_{0},\cdots ,y_{k-1}))$ stands for the conditional distribution ${\widehat {p}}(dx_{k-1}|x_{k},(y_{0},\cdots ,y_{k-1}))$ evaluated at $x_{k}=\xi _{k}^{i}$. In the same vein, $p(y_{k-1}|\xi _{k-1}^{j})$ and $p(\xi _{k}^{i}|\xi _{k-1}^{j})$ stand for the conditional densities $p(y_{k-1}|x_{k-1})$ and $p(x_{k}|x_{k-1})$ evaluated at $x_{k}=\xi _{k}^{i}$ and $x_{k-1}=\xi _{k-1}^{j}.$ These models allow one to reduce integration with respect to the densities $p((x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))$ to matrix operations with respect to the Markov transitions of the chain described above.[53] For instance, for any function $f_{k}$ we have the particle estimates
${\begin{aligned}\int p(d(x_{0},\cdots ,x_{n})&|(y_{0},\cdots ,y_{n-1}))f_{k}(x_{k})\\&\approx _{N\uparrow \infty }\int {\widehat {p}}_{backward}(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))f_{k}(x_{k})\\&=\int {\widehat {p}}(dx_{n}|(y_{0},\cdots ,y_{n-1})){\widehat {p}}(dx_{n-1}|x_{n},(y_{0},\cdots ,y_{n-1}))\cdots {\widehat {p}}(dx_{k}|x_{k+1},(y_{0},\cdots ,y_{k}))f_{k}(x_{k})\\&=\underbrace {\left[{\tfrac {1}{N}},\cdots ,{\tfrac {1}{N}}\right]} _{N{\text{ times}}}\mathbb {M} _{n}\cdots \mathbb {M} _{k+1}{\begin{bmatrix}f_{k}(\xi _{k}^{1})\\\vdots \\f_{k}(\xi _{k}^{N})\end{bmatrix}}\end{aligned}}$
where
$\mathbb {M} _{k}=(\mathbb {M} _{k}(i,j))_{1\leqslant i,j\leqslant N}:\qquad \mathbb {M} _{k}(i,j)={\frac {p(\xi _{k}^{i}|\xi _{k-1}^{j})~p(y_{k-1}|\xi _{k-1}^{j})}{\sum \limits _{l=1}^{N}p(\xi _{k}^{i}|\xi _{k-1}^{l})p(y_{k-1}|\xi _{k-1}^{l})}}$
This also shows that if
${\overline {F}}(x_{0},\cdots ,x_{n}):={\frac {1}{n+1}}\sum _{k=0}^{n}f_{k}(x_{k})$
then
${\begin{aligned}\int {\overline {F}}(x_{0},\cdots ,x_{n})p(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))&\approx _{N\uparrow \infty }\int {\overline {F}}(x_{0},\cdots ,x_{n}){\widehat {p}}_{backward}(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))\\&={\frac {1}{n+1}}\sum _{k=0}^{n}\underbrace {\left[{\tfrac {1}{N}},\cdots ,{\tfrac {1}{N}}\right]} _{N{\text{ times}}}\mathbb {M} _{n}\mathbb {M} _{n-1}\cdots \mathbb {M} _{k+1}{\begin{bmatrix}f_{k}(\xi _{k}^{1})\\\vdots \\f_{k}(\xi _{k}^{N})\end{bmatrix}}\end{aligned}}$
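As a sketch, the matrices $\mathbb{M}_k$ can be assembled directly from the stored particle clouds. The densities below are the same illustrative scalar-model assumptions used earlier; only the matrix formula itself comes from the text:

```python
import numpy as np

def transition_density(x_next, x_prev, sigma_w=1.0):
    # assumed p(x_k | x_{k-1}) with transition mean 0.9 * x_{k-1}
    return np.exp(-0.5 * ((x_next - 0.9 * x_prev) / sigma_w) ** 2)

def likelihood(y, x, sigma_v=0.5):
    # assumed p(y | x) with observation function h(x) = x
    return np.exp(-0.5 * ((y - x) / sigma_v) ** 2)

def backward_matrix(xi_k, xi_km1, y_km1):
    # M_k(i, j) proportional to p(xi_k^i | xi_{k-1}^j) p(y_{k-1} | xi_{k-1}^j),
    # normalized over j so that each row sums to one
    M = transition_density(xi_k[:, None], xi_km1[None, :]) \
        * likelihood(y_km1, xi_km1)[None, :]
    return M / M.sum(axis=1, keepdims=True)

def smoothed_mean(clouds, ys, f, k):
    """Estimate the smoothed expectation of f(x_k); clouds[l] holds the
    particles xi_l^1..xi_l^N and ys[l] holds the observation y_l."""
    n = len(clouds) - 1
    row = np.full(clouds[n].size, 1.0 / clouds[n].size)  # uniform at time n
    for l in range(n, k, -1):   # multiply the backward matrices M_l in turn
        row = row @ backward_matrix(clouds[l], clouds[l - 1], ys[l - 1])
    return row @ f(clouds[k])
```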
Some convergence results
We shall assume that the filtering equation is stable, in the sense that it corrects any erroneous initial condition.
In this situation, the particle approximations of the likelihood functions are unbiased and the relative variance is controlled by
$E\left({\widehat {p}}(y_{0},\cdots ,y_{n})\right)=p(y_{0},\cdots ,y_{n}),\qquad E\left(\left[{\frac {{\widehat {p}}(y_{0},\cdots ,y_{n})}{p(y_{0},\cdots ,y_{n})}}-1\right]^{2}\right)\leqslant {\frac {cn}{N}},$
for some finite constant c. In addition, for any $x\geqslant 0$:
$\mathbf {P} \left(\left\vert {\frac {1}{n}}\log {{\widehat {p}}(y_{0},\cdots ,y_{n})}-{\frac {1}{n}}\log {p(y_{0},\cdots ,y_{n})}\right\vert \leqslant c_{1}{\frac {x}{N}}+c_{2}{\sqrt {\frac {x}{N}}}\right)>1-e^{-x}$
for some finite constants $c_{1},c_{2}$ related to the asymptotic bias and variance of the particle estimate, and for some finite constant c.
The bias and the variance of the particle estimates based on the ancestral lines of the genealogical trees
${\begin{aligned}I_{k}^{path}(F)&:=\int F(x_{0},\cdots ,x_{k})p(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k-1})\\&\approx _{N\uparrow \infty }{\widehat {I}}_{k}^{path}(F)\\&:=\int F(x_{0},\cdots ,x_{k}){\widehat {p}}(d(x_{0},\cdots ,x_{k})|y_{0},\cdots ,y_{k-1})\\&={\frac {1}{N}}\sum _{i=1}^{N}F\left(\xi _{0,k}^{i},\cdots ,\xi _{k,k}^{i}\right)\end{aligned}}$
are controlled by the non asymptotic uniform estimates
$\left|E\left({\widehat {I}}_{k}^{path}(F)\right)-I_{k}^{path}(F)\right|\leqslant {\frac {c_{1}k}{N}},\qquad E\left(\left[{\widehat {I}}_{k}^{path}(F)-I_{k}^{path}(F)\right]^{2}\right)\leqslant {\frac {c_{2}k}{N}},$
for any function F bounded by 1, and for some finite constants $c_{1},c_{2}.$ In addition, for any $x\geqslant 0$:
$\mathbf {P} \left(\left|{\widehat {I}}_{k}^{path}(F)-I_{k}^{path}(F)\right|\leqslant c_{1}{\frac {kx}{N}}+c_{2}{\sqrt {\frac {kx}{N}}}\land \sup _{0\leqslant k\leqslant n}\left|{\widehat {I}}_{k}^{path}(F)-I_{k}^{path}(F)\right|\leqslant c{\sqrt {\frac {xn\log(n)}{N}}}\right)>1-e^{-x}$
for some finite constants $c_{1},c_{2}$ related to the asymptotic bias and variance of the particle estimate, and for some finite constant c. The same type of bias and variance estimates hold for the backward particle smoothers. For additive functionals of the form
${\overline {F}}(x_{0},\cdots ,x_{n}):={\frac {1}{n+1}}\sum _{0\leqslant k\leqslant n}f_{k}(x_{k})$
with
$I_{n}^{path}({\overline {F}})\approx _{N\uparrow \infty }{\widehat {I}}_{n}^{\flat ,path}({\overline {F}}):=\int {\overline {F}}(x_{0},\cdots ,x_{n}){\widehat {p}}_{backward}(d(x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))$
with functions $f_{k}$ bounded by 1, we have
$\sup _{n\geqslant 0}{\left\vert E\left({\widehat {I}}_{n}^{\flat ,path}({\overline {F}})\right)-I_{n}^{path}({\overline {F}})\right\vert }\leqslant {\frac {c_{1}}{N}}$
and
$E\left(\left[{\widehat {I}}_{n}^{\flat ,path}({\overline {F}})-I_{n}^{path}({\overline {F}})\right]^{2}\right)\leqslant {\frac {c_{2}}{nN}}+{\frac {c_{3}}{N^{2}}}$
for some finite constants $c_{1},c_{2},c_{3}.$ More refined estimates including exponentially small probability of errors are developed in.[10]
Sequential Importance Resampling (SIR)
Monte Carlo filter and bootstrap filter
Sequential importance resampling (SIR), Monte Carlo filtering (Kitagawa 1993[33]), the bootstrap filtering algorithm (Gordon et al. 1993[35]), and single distribution resampling (Bejuri W.M.Y.B et al. 2017[63]) are commonly applied filtering algorithms, which approximate the filtering probability density $p(x_{k}|y_{0},\cdots ,y_{k})$ by a weighted set of N samples
$\left\{\left(w_{k}^{(i)},x_{k}^{(i)}\right)\ :\ i\in \{1,\cdots ,N\}\right\}.$
The importance weights $w_{k}^{(i)}$ are approximations to the relative posterior probabilities (or densities) of the samples such that
$\sum _{i=1}^{N}w_{k}^{(i)}=1.$
Sequential importance sampling (SIS) is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function f can be approximated as a weighted average
$\int f(x_{k})p(x_{k}|y_{0},\dots ,y_{k})dx_{k}\approx \sum _{i=1}^{N}w_{k}^{(i)}f(x_{k}^{(i)}).$
For a finite set of samples, the algorithm performance is dependent on the choice of the proposal distribution
$\pi (x_{k}|x_{0:k-1},y_{0:k})\,$.
The "optimal" proposal distribution is given as the target distribution
$\pi (x_{k}|x_{0:k-1},y_{0:k})=p(x_{k}|x_{k-1},y_{k})={\frac {p(y_{k}|x_{k})}{\int p(y_{k}|x_{k})p(x_{k}|x_{k-1})dx_{k}}}~p(x_{k}|x_{k-1}).$
This particular choice of proposal transition was proposed by P. Del Moral in 1996 and 1998.[4] When it is difficult to sample transitions according to the distribution $p(x_{k}|x_{k-1},y_{k})$, one natural strategy is to use the following particle approximation
${\begin{aligned}{\frac {p(y_{k}|x_{k})}{\int p(y_{k}|x_{k})p(x_{k}|x_{k-1})dx_{k}}}p(x_{k}|x_{k-1})dx_{k}&\simeq _{N\uparrow \infty }{\frac {p(y_{k}|x_{k})}{\int p(y_{k}|x_{k}){\widehat {p}}(dx_{k}|x_{k-1})}}{\widehat {p}}(dx_{k}|x_{k-1})\\&=\sum _{i=1}^{N}{\frac {p(y_{k}|X_{k}^{i}(x_{k-1}))}{\sum _{j=1}^{N}p(y_{k}|X_{k}^{j}(x_{k-1}))}}\delta _{X_{k}^{i}(x_{k-1})}(dx_{k})\end{aligned}}$
with the empirical approximation
${\widehat {p}}(dx_{k}|x_{k-1})={\frac {1}{N}}\sum _{i=1}^{N}\delta _{X_{k}^{i}(x_{k-1})}(dx_{k})~\simeq _{N\uparrow \infty }p(x_{k}|x_{k-1})dx_{k}$
associated with N (or any other large number of samples) independent random samples $X_{k}^{i}(x_{k-1}),\ i=1,\cdots ,N$ with the conditional distribution of the random state $X_{k}$ given $X_{k-1}=x_{k-1}$. The consistency of the particle filter resulting from this approximation, and other extensions, are developed in.[4] In the above display $\delta _{a}$ stands for the Dirac measure at a given state a.
However, the transition prior probability distribution is often used as importance function, since it is easier to draw particles (or samples) and perform subsequent importance weight calculations:
$\pi (x_{k}|x_{0:k-1},y_{0:k})=p(x_{k}|x_{k-1}).$
Sequential Importance Resampling (SIR) filters with the transition prior probability distribution as importance function are commonly known as the bootstrap filter and the condensation algorithm.
Resampling is used to avoid the problem of the degeneracy of the algorithm, that is, avoiding the situation that all but one of the importance weights are close to zero. The performance of the algorithm can also be affected by the proper choice of resampling method. The stratified sampling proposed by Kitagawa (1993[33]) is optimal in terms of variance.
A single step of sequential importance resampling is as follows (a code sketch is given after the steps):
1) For $i=1,\cdots ,N$ draw samples from the proposal distribution
$x_{k}^{(i)}\sim \pi (x_{k}|x_{0:k-1}^{(i)},y_{0:k})$
2) For $i=1,\cdots ,N$ update the importance weights up to a normalizing constant:
${\hat {w}}_{k}^{(i)}=w_{k-1}^{(i)}{\frac {p(y_{k}|x_{k}^{(i)})p(x_{k}^{(i)}|x_{k-1}^{(i)})}{\pi (x_{k}^{(i)}|x_{0:k-1}^{(i)},y_{0:k})}}.$
Note that when we use the transition prior probability distribution as the importance function,
$\pi (x_{k}^{(i)}|x_{0:k-1}^{(i)},y_{0:k})=p(x_{k}^{(i)}|x_{k-1}^{(i)}),$
this simplifies to the following:
${\hat {w}}_{k}^{(i)}=w_{k-1}^{(i)}p(y_{k}|x_{k}^{(i)}),$
3) For $i=1,\cdots ,N$ compute the normalized importance weights:
$w_{k}^{(i)}={\frac {{\hat {w}}_{k}^{(i)}}{\sum _{j=1}^{N}{\hat {w}}_{k}^{(j)}}}$
4) Compute an estimate of the effective number of particles as
${\hat {N}}_{\mathit {eff}}={\frac {1}{\sum _{i=1}^{N}\left(w_{k}^{(i)}\right)^{2}}}$
This criterion reflects the variance of the weights. Other criteria can be found in the article,[6] including their rigorous analysis and central limit theorems.
5) If the effective number of particles is less than a given threshold ${\hat {N}}_{\mathit {eff}}<N_{thr}$, then perform resampling:
a) Draw N particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.
b) For $i=1,\cdots ,N$ set $w_{k}^{(i)}=1/N.$
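The following Python sketch assembles steps 1–5 into one SIR update, using the transition prior as proposal so that the weight update reduces to multiplication by the likelihood. The scalar model and the threshold $N_{thr}=N/2$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
g = lambda x: 0.9 * x        # assumed transition mean
h = lambda x: x              # assumed observation function
sigma_w, sigma_v = 1.0, 0.5  # assumed noise scales

def sir_step(particles, weights, y, n_thr=None):
    n = particles.size
    # 1) draw from the proposal, here the transition prior p(x_k | x_{k-1})
    particles = g(particles) + rng.normal(0.0, sigma_w, size=n)
    # 2) update the weights with the likelihood p(y_k | x_k)
    weights = weights * np.exp(-0.5 * ((y - h(particles)) / sigma_v) ** 2)
    # 3) normalize the weights
    weights /= weights.sum()
    # 4) effective number of particles
    n_eff = 1.0 / np.sum(weights ** 2)
    # 5) resample if degeneracy threatens
    if n_eff < (n_thr if n_thr is not None else n / 2):
        idx = rng.choice(n, size=n, p=weights)   # 5a) multinomial resampling
        particles, weights = particles[idx], np.full(n, 1.0 / n)  # 5b)
    return particles, weights

particles = rng.normal(0.0, 1.0, size=1000)      # x_0^(i) ~ p(x_0)
weights = np.full(1000, 1.0 / 1000)
particles, weights = sir_step(particles, weights, y=0.3)
```

Multinomial resampling is used above for brevity; the stratified scheme mentioned earlier has lower variance and is usually preferred.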
The term "Sampling Importance Resampling" is also sometimes used when referring to SIR filters, but the term Importance Resampling is more accurate because the word "resampling" implies that the initial sampling has already been done.[64]
Sequential importance sampling (SIS)
• Is the same as sequential importance resampling, but without the resampling stage.
"Direct version" algorithm
The "direct version" algorithm is rather simple (compared to other particle filtering algorithms) and it uses composition and rejection. To generate a single sample x at k from $p_{x_{k}|y_{1:k}}(x|y_{1:k})$:
1) Set n = 0 (This will count the number of particles generated so far)
2) Uniformly choose an index i from the range $\{1,...,N\}$
3) Generate a test ${\hat {x}}$ from the distribution $p(x_{k}|x_{k-1})$ with $x_{k-1}=x_{k-1|k-1}^{(i)}$
4) Generate the probability of ${\hat {y}}$ using ${\hat {x}}$ from $p(y_{k}|x_{k}),~{\mbox{with}}~x_{k}={\hat {x}}$ where $y_{k}$ is the measured value
5) Generate another uniform u from $[0,m_{k}]$ where $m_{k}=\sup _{x_{k}}p(y_{k}|x_{k})$
6) Compare u and $p\left({\hat {y}}\right)$
6a) If u is larger then repeat from step 2
6b) If u is smaller then save ${\hat {x}}$ as $x_{k|k}^{(i)}$ and increment n
7) If n == N then quit
The goal is to generate N "particles" at k using only the particles from $k-1$. This requires that a Markov equation can be written (and computed) to generate a $x_{k}$ based only upon $x_{k-1}$. This algorithm uses the composition of the N particles from $k-1$ to generate a particle at k and repeats (steps 2–6) until N particles are generated at k.
This can be more easily visualized if x is viewed as a two-dimensional array. One dimension is k and the other dimension is the particle number. For example, $x(k,i)$ would be the ith particle at $k$ and can also be written $x_{k}^{(i)}$ (as done above in the algorithm). Step 3 generates a potential $x_{k}$ based on a randomly chosen particle ($x_{k-1}^{(i)}$) at time $k-1$ and rejects or accepts it in step 6. In other words, the $x_{k}$ values are generated using the previously generated $x_{k-1}$.
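A Python sketch of steps 1–7 follows. The Gaussian observation model is an illustrative assumption, chosen because its supremum $m_{k}=1/(\sigma _{v}{\sqrt {2\pi }})$ is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(4)
g = lambda x: 0.9 * x        # assumed transition mean
h = lambda x: x              # assumed observation function
sigma_w, sigma_v = 1.0, 0.5  # assumed noise scales

def direct_step(prev_particles, y, n_out):
    m_k = 1.0 / (sigma_v * np.sqrt(2.0 * np.pi))  # sup over x of p(y | x)
    out = []
    while len(out) < n_out:                                      # steps 1, 7
        i = rng.integers(prev_particles.size)                    # step 2
        x_hat = g(prev_particles[i]) + rng.normal(0.0, sigma_w)  # step 3
        p_y = np.exp(-0.5 * ((y - h(x_hat)) / sigma_v) ** 2) \
              / (sigma_v * np.sqrt(2.0 * np.pi))                 # step 4
        u = rng.uniform(0.0, m_k)                                # step 5
        if u < p_y:                                              # step 6
            out.append(x_hat)                                    # step 6b
    return np.array(out)

new_particles = direct_step(rng.normal(size=1000), y=0.3, n_out=1000)
```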
Applications
Particle filters and Feynman-Kac particle methodologies find application in several contexts, as an effective means of tackling noisy observations or strong nonlinearities, such as:
• Bayesian inference, machine learning, risk analysis and rare event sampling
• Bioinformatics[19]
• Computational science
• Economics, financial mathematics and mathematical finance: particle filters can perform simulations which are needed to compute the high-dimensional and/or complex integrals related to problems such as dynamic stochastic general equilibrium models in macro-economics and option pricing[65]
• Engineering
• Fault detection and isolation: in observer-based schemes a particle filter can forecast the expected sensor output, enabling fault isolation[66][67][68]
• Molecular chemistry and computational physics
• Pharmacokinetics[69]
• Phylogenetics
• Robotics, artificial intelligence: Monte Carlo localization is a de facto standard in mobile robot localization[70][71][72]
• Signal and image processing: visual localization, tracking, feature recognition[73]
Other particle filters
• Auxiliary particle filter[74]
• Cost Reference particle filter
• Exponential Natural Particle Filter[75]
• Feynman-Kac and mean-field particle methodologies[2][10][5]
• Gaussian particle filter
• Gauss–Hermite particle filter
• Hierarchical/Scalable particle filter[76]
• Nudged particle filter[77]
• Particle Markov-Chain Monte-Carlo, see e.g. pseudo-marginal Metropolis–Hastings algorithm.
• Rao–Blackwellized particle filter[51]
• Regularized auxiliary particle filter[78]
• Rejection-sampling based optimal particle filter[79][80]
• Unscented particle filter
See also
• Ensemble Kalman filter
• Generalized filtering
• Genetic algorithm
• Mean-field particle methods
• Monte Carlo localization
• Moving horizon estimation
• Recursive Bayesian estimation
References
1. Wills, Adrian G.; Schön, Thomas B. (3 May 2023). "Sequential Monte Carlo: A Unified Review". Annual Review of Control, Robotics, and Autonomous Systems. 6 (1): 159–182. doi:10.1146/annurev-control-042920-015119. ISSN 2573-5144. S2CID 255638127.
2. Del Moral, Pierre (1996). "Non Linear Filtering: Interacting Particle Solution" (PDF). Markov Processes and Related Fields. 2 (4): 555–580.
3. Liu, Jun S.; Chen, Rong (1998-09-01). "Sequential Monte Carlo Methods for Dynamic Systems". Journal of the American Statistical Association. 93 (443): 1032–1044. doi:10.1080/01621459.1998.10473765. ISSN 0162-1459.
4. Del Moral, Pierre (1998). "Measure Valued Processes and Interacting Particle Systems. Application to Non Linear Filtering Problems". Annals of Applied Probability (Publications du Laboratoire de Statistique et Probabilités, 96-15 (1996) ed.). 8 (2): 438–495. doi:10.1214/aoap/1028903535.
5. Del Moral, Pierre (2004). Feynman-Kac formulae. Genealogical and interacting particle approximations. Springer. Series: Probability and Applications. p. 556. ISBN 978-0-387-20268-6.
6. Del Moral, Pierre; Doucet, Arnaud; Jasra, Ajay (2012). "On Adaptive Resampling Procedures for Sequential Monte Carlo Methods" (PDF). Bernoulli. 18 (1): 252–278. doi:10.3150/10-bej335. S2CID 4506682.
7. Del Moral, Pierre (2004). Feynman-Kac formulae. Genealogical and interacting particle approximations. Probability and its Applications. Springer. p. 575. ISBN 9780387202686. Series: Probability and Applications
8. Del Moral, Pierre; Miclo, Laurent (2000). "Branching and Interacting Particle Systems Approximations of Feynman-Kac Formulae with Applications to Non-Linear Filtering". In Jacques Azéma; Michel Ledoux; Michel Émery; Marc Yor (eds.). Séminaire de Probabilités XXXIV (PDF). Lecture Notes in Mathematics. Vol. 1729. pp. 1–145. doi:10.1007/bfb0103798. ISBN 978-3-540-67314-9.
9. Del Moral, Pierre; Miclo, Laurent (2000). "A Moran particle system approximation of Feynman-Kac formulae". Stochastic Processes and Their Applications. 86 (2): 193–216. doi:10.1016/S0304-4149(99)00094-0. S2CID 122757112.
10. Del Moral, Pierre (2013). Mean field simulation for Monte Carlo integration. Chapman & Hall/CRC Press. p. 626. Monographs on Statistics & Applied Probability
11. Moral, Piere Del; Doucet, Arnaud (2014). "Particle methods: An introduction with applications". ESAIM: Proc. 44: 1–46. doi:10.1051/proc/201444001.
12. Rosenbluth, Marshall N.; Rosenbluth, Arianna W. (1955). "Monte-Carlo calculations of the average extension of macromolecular chains". J. Chem. Phys. 23 (2): 356–359. Bibcode:1955JChPh..23..356R. doi:10.1063/1.1741967. S2CID 89611599.
13. Hetherington, Jack H. (1984). "Observations on the statistical iteration of matrices". Phys. Rev. A. 30: 2713–2719. Bibcode:1984PhRvA..30.2713H. doi:10.1103/PhysRevA.30.2713.
14. Del Moral, Pierre (2003). "Particle approximations of Lyapunov exponents connected to Schrödinger operators and Feynman-Kac semigroups". ESAIM Probability & Statistics. 7: 171–208. doi:10.1051/ps:2003001.
15. Assaraf, Roland; Caffarel, Michel; Khelif, Anatole (2000). "Diffusion Monte Carlo Methods with a fixed number of walkers" (PDF). Phys. Rev. E. 61 (4): 4566–4575. Bibcode:2000PhRvE..61.4566A. doi:10.1103/physreve.61.4566. PMID 11088257. Archived from the original (PDF) on 2014-11-07.
16. Caffarel, Michel; Ceperley, David; Kalos, Malvin (1993). "Comment on Feynman-Kac Path-Integral Calculation of the Ground-State Energies of Atoms". Phys. Rev. Lett. 71 (13): 2159. Bibcode:1993PhRvL..71.2159C. doi:10.1103/physrevlett.71.2159. PMID 10054598.
17. Ocone, D. L. (January 1, 1999). "Asymptotic stability of beneš filters". Stochastic Analysis and Applications. 17 (6): 1053–1074. doi:10.1080/07362999908809648. ISSN 0736-2994.
18. Maurel, Mireille Chaleyat; Michel, Dominique (January 1, 1984). "Des resultats de non existence de filtre de dimension finie". Stochastics. 13 (1–2): 83–102. doi:10.1080/17442508408833312. ISSN 0090-9491.
19. Hajiramezanali, Ehsan; Imani, Mahdi; Braga-Neto, Ulisses; Qian, Xiaoning; Dougherty, Edward R. (2019). "Scalable optimal Bayesian classification of single-cell trajectories under regulatory model uncertainty". BMC Genomics. 20 (Suppl 6): 435. arXiv:1902.03188. Bibcode:2019arXiv190203188H. doi:10.1186/s12864-019-5720-3. PMC 6561847. PMID 31189480.
20. Turing, Alan M. (October 1950). "Computing machinery and intelligence". Mind. LIX (236): 433–460. doi:10.1093/mind/LIX.236.433.
21. Barricelli, Nils Aall (1954). "Esempi numerici di processi di evoluzione". Methodos: 45–68.
22. Barricelli, Nils Aall (1957). "Symbiogenetic evolution processes realized by artificial methods". Methodos: 143–182.
23. Hammersley, J. M.; Morton, K. W. (1954). "Poor Man's Monte Carlo". Journal of the Royal Statistical Society. Series B (Methodological). 16 (1): 23–38. doi:10.1111/j.2517-6161.1954.tb00145.x. JSTOR 2984008.
24. Barricelli, Nils Aall (1963). "Numerical testing of evolution theories. Part II. Preliminary tests of performance, symbiogenesis and terrestrial life". Acta Biotheoretica. 16 (3–4): 99–126. doi:10.1007/BF01556602. S2CID 86717105.
25. "Adaptation in Natural and Artificial Systems | The MIT Press". mitpress.mit.edu. Retrieved 2015-06-06.
26. Fraser, Alex (1957). "Simulation of genetic systems by automatic digital computers. I. Introduction". Aust. J. Biol. Sci. 10 (4): 484–491. doi:10.1071/BI9570484.
27. Fraser, Alex; Burnell, Donald (1970). Computer Models in Genetics. New York: McGraw-Hill. ISBN 978-0-07-021904-5.
28. Crosby, Jack L. (1973). Computer Simulation in Genetics. London: John Wiley & Sons. ISBN 978-0-471-18880-3.
29. Assaraf, Roland; Caffarel, Michel; Khelif, Anatole (2000). "Diffusion Monte Carlo Methods with a fixed number of walkers" (PDF). Phys. Rev. E. 61 (4): 4566–4575. Bibcode:2000PhRvE..61.4566A. doi:10.1103/physreve.61.4566. PMID 11088257. Archived from the original (PDF) on 2014-11-07.
30. Caffarel, Michel; Ceperley, David; Kalos, Malvin (1993). "Comment on Feynman-Kac Path-Integral Calculation of the Ground-State Energies of Atoms". Phys. Rev. Lett. 71 (13): 2159. Bibcode:1993PhRvL..71.2159C. doi:10.1103/physrevlett.71.2159. PMID 10054598.
31. Fermi, Enrico; Richtmyer, Robert D. (1948). "Note on census-taking in Monte Carlo calculations" (PDF). LAM. 805 (A). Declassified report, Los Alamos Archive.
32. Kahn, Herman; Harris, Theodore E. (1951). "Estimation of particle transmission by random sampling" (PDF). Natl. Bur. Stand. Appl. Math. Ser. 12: 27–30.
33. Kitagawa, G. (January 1993). "A Monte Carlo Filtering and Smoothing Method for Non-Gaussian Nonlinear State Space Models" (PDF). Proceedings of the 2nd U.S.-Japan Joint Seminar on Statistical Time Series Analysis: 110–131.
34. Kitagawa, G. (1996). "Monte carlo filter and smoother for non-Gaussian nonlinear state space models". Journal of Computational and Graphical Statistics. 5 (1): 1–25. doi:10.2307/1390750. JSTOR 1390750.
35. Gordon, N.J.; Salmond, D.J.; Smith, A.F.M. (April 1993). "Novel approach to nonlinear/non-Gaussian Bayesian state estimation". IEE Proceedings F - Radar and Signal Processing. 140 (2): 107–113. doi:10.1049/ip-f-2.1993.0015. ISSN 0956-375X.
36. Carvalho, Himilcon; Del Moral, Pierre; Monin, André; Salut, Gérard (July 1997). "Optimal Non-linear Filtering in GPS/INS Integration" (PDF). IEEE Transactions on Aerospace and Electronic Systems. 33 (3): 835. Bibcode:1997ITAES..33..835C. doi:10.1109/7.599254. S2CID 27966240.
37. P. Del Moral, G. Rigal, and G. Salut. Estimation and nonlinear optimal control: An unified framework for particle solutions. LAAS-CNRS, Toulouse, Research Report no. 91137, DRET-DIGILOG-LAAS/CNRS contract, April (1991).
38. P. Del Moral, G. Rigal, and G. Salut. Nonlinear and non-Gaussian particle filters applied to inertial platform repositioning. LAAS-CNRS, Toulouse, Research Report no. 92207, STCAN/DIGILOG-LAAS/CNRS Convention STCAN no. A.91.77.013 (94p.), September (1991).
39. P. Del Moral, G. Rigal, and G. Salut. Estimation and nonlinear optimal control: Particle resolution in filtering and estimation. Experimental results. Convention DRET no. 89.34.553.00.470.75.01, Research report no. 2 (54p.), January (1992).
40. P. Del Moral, G. Rigal, and G. Salut. Estimation and nonlinear optimal control: Particle resolution in filtering and estimation. Theoretical results. Convention DRET no. 89.34.553.00.470.75.01, Research report no. 3 (123p.), October (1992).
41. P. Del Moral, J.-Ch. Noyer, G. Rigal, and G. Salut. Particle filters in radar signal processing: detection, estimation and air targets recognition. LAAS-CNRS, Toulouse, Research report no. 92495, December (1992).
42. P. Del Moral, G. Rigal, and G. Salut. Estimation and nonlinear optimal control: Particle resolution in filtering and estimation. Studies on: filtering, optimal control, and maximum likelihood estimation. Convention DRET no. 89.34.553.00.470.75.01, Research report no. 4 (210p.), January (1993).
43. Crisan, Dan; Gaines, Jessica; Lyons, Terry (1998). "Convergence of a branching particle method to the solution of the Zakai equation". SIAM Journal on Applied Mathematics. 58 (5): 1568–1590. doi:10.1137/s0036139996307371. S2CID 39982562.
44. Crisan, Dan; Lyons, Terry (1997). "Nonlinear filtering and measure-valued processes". Probability Theory and Related Fields. 109 (2): 217–244. doi:10.1007/s004400050131. S2CID 119809371.
45. Crisan, Dan; Lyons, Terry (1999). "A particle approximation of the solution of the Kushner–Stratonovitch equation". Probability Theory and Related Fields. 115 (4): 549–578. doi:10.1007/s004400050249. S2CID 117725141.
46. Crisan, Dan; Del Moral, Pierre; Lyons, Terry (1999). "Discrete filtering using branching and interacting particle systems" (PDF). Markov Processes and Related Fields. 5 (3): 293–318.
47. Del Moral, Pierre; Guionnet, Alice (1999). "On the stability of Measure Valued Processes with Applications to filtering". C. R. Acad. Sci. Paris, Série I. 329: 429–434.
48. Del Moral, Pierre; Guionnet, Alice (2001). "On the stability of interacting processes with applications to filtering and genetic algorithms". Annales de l'Institut Henri Poincaré. 37 (2): 155–194. Bibcode:2001AIHPB..37..155D. doi:10.1016/s0246-0203(00)01064-5. Archived from the original on 2014-11-07.
49. Del Moral, P.; Guionnet, A. (1999). "Central limit theorem for nonlinear filtering and interacting particle systems". The Annals of Applied Probability. 9 (2): 275–297. doi:10.1214/aoap/1029962742. ISSN 1050-5164.
50. Del Moral, Pierre; Miclo, Laurent (2001). "Genealogies and Increasing Propagation of Chaos For Feynman-Kac and Genetic Models". The Annals of Applied Probability. 11 (4): 1166–1198. doi:10.1214/aoap/1015345399. ISSN 1050-5164.
51. Doucet, A.; De Freitas, N.; Murphy, K.; Russell, S. (2000). Rao–Blackwellised particle filtering for dynamic Bayesian networks. Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence. pp. 176–183. CiteSeerX 10.1.1.137.5199.
52. Del Moral, Pierre; Miclo, Laurent (2001). "Genealogies and Increasing Propagations of Chaos for Feynman-Kac and Genetic Models". Annals of Applied Probability. 11 (4): 1166–1198.
53. Del Moral, Pierre; Doucet, Arnaud; Singh, Sumeetpal S. (2010). "A Backward Particle Interpretation of Feynman-Kac Formulae" (PDF). M2AN. 44 (5): 947–976. doi:10.1051/m2an/2010048. S2CID 14758161.
54. Vergé, Christelle; Dubarry, Cyrille; Del Moral, Pierre; Moulines, Eric (2013). "On parallel implementation of Sequential Monte Carlo methods: the island particle model". Statistics and Computing. 25 (2): 243–260. arXiv:1306.3911. Bibcode:2013arXiv1306.3911V. doi:10.1007/s11222-013-9429-x. S2CID 39379264.
55. Chopin, Nicolas; Jacob, Pierre E.; Papaspiliopoulos, Omiros (2011). "SMC^2: an efficient algorithm for sequential analysis of state-space models". arXiv:1101.1528v3 [stat.CO].
56. Andrieu, Christophe; Doucet, Arnaud; Holenstein, Roman (2010). "Particle Markov chain Monte Carlo methods". Journal of the Royal Statistical Society, Series B. 72 (3): 269–342. doi:10.1111/j.1467-9868.2009.00736.x.
57. Del Moral, Pierre; Patras, Frédéric; Kohn, Robert (2014). "On Feynman-Kac and particle Markov chain Monte Carlo models". arXiv:1404.5733 [math.PR].
58. Del Moral, Pierre; Jacod, Jean; Protter, Philip (2001-07-01). "The Monte-Carlo method for filtering with discrete-time observations". Probability Theory and Related Fields. 120 (3): 346–368. doi:10.1007/PL00008786. hdl:1813/9179. ISSN 0178-8051. S2CID 116274.
59. Del Moral, Pierre; Doucet, Arnaud; Jasra, Ajay (2011). "An adaptive sequential Monte Carlo method for approximate Bayesian computation". Statistics and Computing. 22 (5): 1009–1020. CiteSeerX 10.1.1.218.9800. doi:10.1007/s11222-011-9271-y. ISSN 0960-3174. S2CID 4514922.
60. Martin, James S.; Jasra, Ajay; Singh, Sumeetpal S.; Whiteley, Nick; Del Moral, Pierre; McCoy, Emma (May 4, 2014). "Approximate Bayesian Computation for Smoothing". Stochastic Analysis and Applications. 32 (3): 397–420. arXiv:1206.5208. doi:10.1080/07362994.2013.879262. ISSN 0736-2994. S2CID 17117364.
61. Del Moral, Pierre; Rio, Emmanuel (2011). "Concentration inequalities for mean field particle models". The Annals of Applied Probability. 21 (3): 1017–1052. arXiv:1211.1837. doi:10.1214/10-AAP716. ISSN 1050-5164. S2CID 17693884.
62. Del Moral, Pierre; Hu, Peng; Wu, Liming (2012). On the Concentration Properties of Interacting Particle Processes. Hanover, MA, USA: Now Publishers Inc. ISBN 978-1601985125.
63. Bejuri, Wan Mohd Yaakob Wan; Mohamad, Mohd Murtadha; Raja Mohd Radzi, Raja Zahilah; Salleh, Mazleena; Yusof, Ahmad Fadhil (2017-10-18). "Adaptive memory-based single distribution resampling for particle filter". Journal of Big Data. 4 (1): 33. doi:10.1186/s40537-017-0094-3. ISSN 2196-1115. S2CID 256407088.
64. Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Dunson, David B.; Vehtari, Aki; Rubin, Donald B. (2013). Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC. ISBN 978-1-4398-4095-5.
65. Creal, Drew (2012). "A Survey of Sequential Monte Carlo Methods for Economics and Finance". Econometric Reviews. 31 (2): 245–296. doi:10.1080/07474938.2011.607333. S2CID 2730761.
66. Shen, Yin; Xiangping, Zhu (2015). "Intelligent Particle Filter and Its Application to Fault Detection of Nonlinear System". IEEE Transactions on Industrial Electronics. 62 (6): 1. doi:10.1109/TIE.2015.2399396. S2CID 23951880.
67. D'Amato, Edigio; Notaro, Immacolata; Nardi, Vito Antonio; Scordamaglia, Valerio (2021). "A Particle Filtering Approach for Fault Detection and Isolation of UAV IMU Sensors: Design, Implementation and Sensitivity Analysis". Sensors. 21 (9): 3066. Bibcode:2021Senso..21.3066D. doi:10.3390/s21093066. PMC 8124649. PMID 33924891.
68. Kadirkamanathan, V.; Li, P.; Jaward, M. H.; Fabri, S. G. (2002). "Particle filtering-based fault detection in non-linear stochastic systems". International Journal of Systems Science. 33 (4): 259–265. doi:10.1080/00207720110102566. S2CID 28634585.
69. Bonate, P. (2011). Pharmacokinetic-Pharmacodynamic Modeling and Simulation. Berlin: Springer.
70. Dieter Fox, Wolfram Burgard, Frank Dellaert, and Sebastian Thrun, "Monte Carlo Localization: Efficient Position Estimation for Mobile Robots." Proc. of the Sixteenth National Conference on Artificial Intelligence John Wiley & Sons Ltd, 1999.
71. Sebastian Thrun, Wolfram Burgard, Dieter Fox. Probabilistic Robotics MIT Press, 2005. Ch. 8.3 ISBN 9780262201629.
72. Sebastian Thrun, Dieter Fox, Wolfram Burgard, Frank Dellaert. "Robust monte carlo localization for mobile robots." Artificial Intelligence 128.1 (2001): 99–141.
73. Abbasi, Mahdi; Khosravi, Mohammad R. (2020). "A Robust and Accurate Particle Filter-Based Pupil Detection Method for Big Datasets of Eye Video". Journal of Grid Computing. 18 (2): 305–325. doi:10.1007/s10723-019-09502-1. S2CID 209481431.
74. Pitt, M.K.; Shephard, N. (1999). "Filtering Via Simulation: Auxiliary Particle Filters". Journal of the American Statistical Association. 94 (446): 590–591. doi:10.2307/2670179. JSTOR 2670179. Retrieved 2008-05-06.
75. Zand, G.; Taherkhani, M.; Safabakhsh, R. (2015). "Exponential Natural Particle Filter". arXiv:1511.06603 [cs.LG].
76. Canton-Ferrer, C.; Casas, J.R.; Pardàs, M. (2011). "Human Motion Capture Using Scalable Body Models". Computer Vision and Image Understanding. 115 (10): 1363–1374. doi:10.1016/j.cviu.2011.06.001. hdl:2117/13393.
77. Akyildiz, Ömer Deniz; Míguez, Joaquín (2020-03-01). "Nudging the particle filter". Statistics and Computing. 30 (2): 305–330. doi:10.1007/s11222-019-09884-y. hdl:10044/1/100011. ISSN 1573-1375. S2CID 88515918.
78. Liu, J.; Wang, W.; Ma, F. (2011). "A Regularized Auxiliary Particle Filtering Approach for System State Estimation and Battery Life Prediction". Smart Materials and Structures. 20 (7): 1–9. Bibcode:2011SMaS...20g5021L. doi:10.1088/0964-1726/20/7/075021. S2CID 110670991.
79. Blanco, J.L.; Gonzalez, J.; Fernandez-Madrigal, J.A. (2008). An Optimal Filtering Algorithm for Non-Parametric Observation Models in Robot Localization. IEEE International Conference on Robotics and Automation (ICRA'08). pp. 461–466. CiteSeerX 10.1.1.190.7092.
80. Blanco, J.L.; Gonzalez, J.; Fernandez-Madrigal, J.A. (2010). "Optimal Filtering for Non-Parametric Observation Models: Applications to Localization and SLAM". The International Journal of Robotics Research. 29 (14): 1726–1742. CiteSeerX 10.1.1.1031.4931. doi:10.1177/0278364910364165. S2CID 453697.
External links
• Feynman–Kac models and interacting particle algorithms (a.k.a. Particle Filtering) Theoretical aspects and a list of application domains of particle filters
• Sequential Monte Carlo Methods (Particle Filtering) homepage on University of Cambridge
• Dieter Fox's MCL Animations
• Rob Hess' free software
• SMCTC: A Template Class for Implementing SMC algorithms in C++
• Java applet on particle filtering
• vSMC : Vectorized Sequential Monte Carlo
• Particle filter explained in the context of self driving car
Sequential time
A sequential time is one in which the numbers form a normal sequence, such as 1:02:03 4/5/06 (two minutes and three seconds past 1 am on 4 May 2006, or on April 5, 2006 in the United States, or the same time and date in the "06" year of any other century). Short sequential times such as 1:23:45 or 12:34:56 appear every day. Longer sequential times, such as 12:34:56 7/8/90 or 01:23:45 on 6/7/89, appear only rarely. These times can depend on the date format being used; the month/day format will produce different results from the day/month format.
This term, however, is not limited to simple counting. Other sequences, such as the decimal numbers of the mathematical constants π (3/14/1592), e (2/7/1828), and the square root of two (1/4/1421) are also noted. Number sequences such as the Fibonacci sequence (1/1/2358) can also be found in time stamps.
These dates are particularly popular with couples getting married who are seeking unique wedding and anniversary dates. Dates with repeating numbers such as July 7, 2007 "7/7/07" are also popular.[1]
Palindromic times can also be observed, e.g. 10:02:10 on 11/01/2001 (two minutes and ten seconds after 10 am on 11 January 2001 in the parts of the world using day/month format) was the first fully palindromic time sequence of the twenty-first century. A more recent palindromic time sequence occurred at 02:02:10 on 11/01/2020 (two minutes and ten seconds past 2 am on 11 January 2020 in most of the world).
A sequential time occurred during Pi Day on 3/14/15 at 9:26:53.58979..., continuing the sequence of the digits of π.[2]
Historical events
• Prohibition ended in Finland on April 5, 1932, at 10 am (5.4.32 at 10 o'clock)[3]
• The Chernobyl nuclear disaster occurred on April 26, 1986, at 01:23:45 MSK (UTC+3).
• The Beijing Summer Olympics started on 8 August 2008 at 8:08:08 pm (8 is a lucky number in China)
See also
• Square Root Day
• Numerology
References
1. Manchir, Michelle (11 December 2014). "Couples drawn to 12-13-14 wedding date". Chicago Tribune. Retrieved 13 December 2014.
2. Rosenthal, Jeffrey S. (October 2014). "Pi Instant". Retrieved 23 October 2014.
3. 5-4-3-2-1-0: History of Alko
• The perfect time by John Hand, BBC News
Sequence
In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed and order matters. Like a set, it contains members (also called elements, or terms). The number of elements (possibly infinite) is called the length of the sequence. Unlike a set, the same elements can appear multiple times at different positions in a sequence, and unlike a set, the order does matter. Formally, a sequence can be defined as a function from natural numbers (the positions of elements in the sequence) to the elements at each position. The notion of a sequence can be generalized to an indexed family, defined as a function from an arbitrary index set.
For example, (M, A, R, Y) is a sequence of letters with the letter 'M' first and 'Y' last. This sequence differs from (A, R, M, Y). Also, the sequence (1, 1, 2, 3, 5, 8), which contains the number 1 at two different positions, is a valid sequence. Sequences can be finite, as in these examples, or infinite, such as the sequence of all even positive integers (2, 4, 6, ...).
The position of an element in a sequence is its rank or index; it is the natural number of which the element is the image. The first element has index 0 or 1, depending on the context or a specific convention. In mathematical analysis, a sequence is often denoted by letters in the form of $a_{n}$, $b_{n}$ and $c_{n}$, where the subscript n refers to the nth element of the sequence; for example, the nth element of the Fibonacci sequence $F$ is generally denoted as $F_{n}$.
In computing and computer science, finite sequences are sometimes called strings, words or lists, the different names commonly corresponding to different ways to represent them in computer memory; infinite sequences are called streams. The empty sequence ( ) is included in most notions of sequence, but may be excluded depending on the context.
Examples and notation
A sequence can be thought of as a list of elements with a particular order.[1][2] Sequences are useful in a number of mathematical disciplines for studying functions, spaces, and other mathematical structures using the convergence properties of sequences. In particular, sequences are the basis for series, which are important in differential equations and analysis. Sequences are also of interest in their own right, and can be studied as patterns or puzzles, such as in the study of prime numbers.
There are a number of ways to denote a sequence, some of which are more useful for specific types of sequences. One way to specify a sequence is to list all its elements. For example, the first four odd numbers form the sequence (1, 3, 5, 7). This notation is used for infinite sequences as well. For instance, the infinite sequence of positive odd integers is written as (1, 3, 5, 7, ...). Because notating sequences with ellipsis leads to ambiguity, listing is most useful for customary infinite sequences which can be easily recognized from their first few elements. Other ways of denoting a sequence are discussed after the examples.
Examples
The prime numbers are the natural numbers greater than 1 that have no divisors but 1 and themselves. Taking these in their natural order gives the sequence (2, 3, 5, 7, 11, 13, 17, ...). The prime numbers are widely used in mathematics, particularly in number theory where many results related to them exist.
The Fibonacci numbers comprise the integer sequence whose elements are the sum of the previous two elements. The first two elements are either 0 and 1 or 1 and 1 so that the sequence is (0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...).[1]
Other examples of sequences include those made up of rational numbers, real numbers and complex numbers. The sequence (.9, .99, .999, .9999, ...), for instance, approaches the number 1. In fact, every real number can be written as the limit of a sequence of rational numbers (e.g. via its decimal expansion). As another example, π is the limit of the sequence (3, 3.1, 3.14, 3.141, 3.1415, ...), which is increasing. A related sequence is the sequence of decimal digits of π, that is, (3, 1, 4, 1, 5, 9, ...). Unlike the preceding sequence, this sequence does not have any pattern that is easily discernible by inspection.
Another example of sequences is a sequence of functions, where each member of the sequence is a function whose shape is determined by a natural number indexing that function.
The On-Line Encyclopedia of Integer Sequences comprises a large list of examples of integer sequences.[3]
Indexing
Other notations can be useful for sequences whose pattern cannot be easily guessed or for sequences that do not have a pattern such as the digits of π. One such notation is to write down a general formula for computing the nth term as a function of n, enclose it in parentheses, and include a subscript indicating the set of values that n can take. For example, in this notation the sequence of even numbers could be written as $(2n)_{n\in \mathbb {N} }$. The sequence of squares could be written as $(n^{2})_{n\in \mathbb {N} }$. The variable n is called an index, and the set of values that it can take is called the index set.
It is often useful to combine this notation with the technique of treating the elements of a sequence as individual variables. This yields expressions like $(a_{n})_{n\in \mathbb {N} }$, which denotes a sequence whose nth element is given by the variable $a_{n}$. For example:
${\begin{aligned}a_{1}&=1{\text{st element of }}(a_{n})_{n\in \mathbb {N} }\\a_{2}&=2{\text{nd element }}\\a_{3}&=3{\text{rd element }}\\&\;\;\vdots \\a_{n-1}&=(n-1){\text{th element}}\\a_{n}&=n{\text{th element}}\\a_{n+1}&=(n+1){\text{th element}}\\&\;\;\vdots \end{aligned}}$
One can consider multiple sequences at the same time by using different variables; e.g. $(b_{n})_{n\in \mathbb {N} }$ could be a different sequence than $(a_{n})_{n\in \mathbb {N} }$. One can even consider a sequence of sequences: $((a_{m,n})_{n\in \mathbb {N} })_{m\in \mathbb {N} }$ denotes a sequence whose mth term is the sequence $(a_{m,n})_{n\in \mathbb {N} }$.
An alternative to writing the domain of a sequence in the subscript is to indicate the range of values that the index can take by listing its highest and lowest legal values. For example, the notation $(k^{2})_{k=1}^{10}$ denotes the ten-term sequence of squares $(1,4,9,\ldots ,100)$. The limits $\infty $ and $-\infty $ are allowed, but they do not represent valid values for the index, only the supremum or infimum of such values, respectively. For example, the sequence $(a_{n})_{n=1}^{\infty }$ is the same as the sequence $(a_{n})_{n\in \mathbb {N} }$, and does not contain an additional term "at infinity". The sequence $(a_{n})_{n=-\infty }^{\infty }$ is a bi-infinite sequence, and can also be written as $(\ldots ,a_{-1},a_{0},a_{1},a_{2},\ldots )$.
In cases where the set of indexing numbers is understood, the subscripts and superscripts are often left off. That is, one simply writes $(a_{k})$ for an arbitrary sequence. Often, the index k is understood to run from 1 to ∞. However, sequences are frequently indexed starting from zero, as in
$(a_{k})_{k=0}^{\infty }=(a_{0},a_{1},a_{2},\ldots ).$
In some cases, the elements of the sequence are related naturally to a sequence of integers whose pattern can be easily inferred. In these cases, the index set may be implied by a listing of the first few abstract elements. For instance, the sequence of squares of odd numbers could be denoted in any of the following ways.
• $(1,9,25,\ldots )$
• $(a_{1},a_{3},a_{5},\ldots ),\qquad a_{k}=k^{2}$
• $(a_{2k-1})_{k=1}^{\infty },\qquad a_{k}=k^{2}$
• $(a_{k})_{k=1}^{\infty },\qquad a_{k}=(2k-1)^{2}$
• $\left((2k-1)^{2}\right)_{k=1}^{\infty }$
Moreover, the subscripts and superscripts could have been left off in the third, fourth, and fifth notations, if the indexing set was understood to be the natural numbers. In the second and third bullets, there is a well-defined sequence $(a_{k})_{k=1}^{\infty }$, but it is not the same as the sequence denoted by the expression.
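To make the indexed notation concrete, here is a minimal Python sketch; the helper name terms is illustrative, not standard library code:

```python
def terms(f, indices):
    """Yield the terms of the sequence (f(n)) for n ranging over `indices`."""
    for n in indices:
        yield f(n)

# (2n) for n = 0, 1, 2, ...: the even numbers
print(list(terms(lambda n: 2 * n, range(5))))                # [0, 2, 4, 6, 8]

# (k^2) for k = 1, ..., 10: the ten-term sequence of squares
print(list(terms(lambda k: k ** 2, range(1, 11))))           # [1, 4, 9, ..., 100]

# ((2k-1)^2) for k = 1, 2, 3, ...: the squares of odd numbers
print(list(terms(lambda k: (2 * k - 1) ** 2, range(1, 4))))  # [1, 9, 25]
```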
Defining a sequence by recursion
Main article: Recurrence relation
Sequences whose elements are related to the previous elements in a straightforward way are often defined using recursion. This is in contrast to the definition of sequences of elements as functions of their positions.
To define a sequence by recursion, one needs a rule, called recurrence relation to construct each element in terms of the ones before it. In addition, enough initial elements must be provided so that all subsequent elements of the sequence can be computed by successive applications of the recurrence relation.
The Fibonacci sequence is a simple classical example, defined by the recurrence relation
$a_{n}=a_{n-1}+a_{n-2},$
with initial terms $a_{0}=0$ and $a_{1}=1$. From this, a simple computation shows that the first ten terms of this sequence are 0, 1, 1, 2, 3, 5, 8, 13, 21, and 34.
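As a minimal Python sketch, the recurrence can be unrolled directly from the two initial terms (the function name fibonacci is ours for illustration):

```python
def fibonacci(n_terms, a0=0, a1=1):
    """First n_terms of the recurrence a_n = a_(n-1) + a_(n-2)."""
    terms = [a0, a1]
    while len(terms) < n_terms:
        terms.append(terms[-1] + terms[-2])  # apply the recurrence once
    return terms[:n_terms]

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]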
A complicated example of a sequence defined by a recurrence relation is Recamán's sequence,[4] defined by the recurrence relation
${\begin{cases}a_{n}=a_{n-1}-n,\quad {\text{if the result is positive and not already in the previous terms,}}\\a_{n}=a_{n-1}+n,\quad {\text{otherwise}},\end{cases}}$
with initial term $a_{0}=0.$
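Recamán's case analysis translates directly into code; a minimal Python sketch:

```python
def recaman(n_terms):
    """Recamán's sequence: a_0 = 0; subtract n when the result is positive
    and not already among the previous terms, otherwise add n."""
    seq = [0]
    seen = {0}
    for n in range(1, n_terms):
        candidate = seq[-1] - n
        if candidate <= 0 or candidate in seen:
            candidate = seq[-1] + n
        seq.append(candidate)
        seen.add(candidate)
    return seq

print(recaman(10))  # [0, 1, 3, 6, 2, 7, 13, 20, 12, 21]
```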
A linear recurrence with constant coefficients is a recurrence relation of the form
$a_{n}=c_{0}+c_{1}a_{n-1}+\dots +c_{k}a_{n-k},$
where $c_{0},\dots ,c_{k}$ are constants. There is a general method for expressing the general term $a_{n}$ of such a sequence as a function of n; see Linear recurrence. In the case of the Fibonacci sequence, one has $c_{0}=0,c_{1}=c_{2}=1,$ and the resulting function of n is given by Binet's formula.
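As a sketch, Binet's closed form can be checked numerically against the recurrence; rounding absorbs the floating-point error in the irrational powers:

```python
import math

def binet(n):
    """n-th Fibonacci number from Binet's closed-form formula."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2   # golden ratio
    psi = (1 - sqrt5) / 2   # its conjugate
    return round((phi ** n - psi ** n) / sqrt5)

# Compare against the recurrence a_n = a_(n-1) + a_(n-2), a_0 = 0, a_1 = 1.
a, b = 0, 1
for n in range(20):
    assert binet(n) == a, (n, binet(n), a)
    a, b = b, a + b
print("Binet's formula matches the recurrence for n = 0..19")
```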
A holonomic sequence is a sequence defined by a recurrence relation of the form
$a_{n}=c_{1}a_{n-1}+\dots +c_{k}a_{n-k},$
where $c_{1},\dots ,c_{k}$ are polynomials in n. For most holonomic sequences, there is no explicit formula for expressing $a_{n}$ as a function of n. Nevertheless, holonomic sequences play an important role in various areas of mathematics. For example, many special functions have a Taylor series whose sequence of coefficients is holonomic. The use of the recurrence relation allows a fast computation of values of such special functions.
Not all sequences can be specified by a recurrence relation. An example is the sequence of prime numbers in their natural order (2, 3, 5, 7, 11, 13, 17, ...).
Formal definition and basic properties
There are many different notions of sequences in mathematics, some of which (e.g., exact sequence) are not covered by the definitions and notations introduced below.
Definition
In this article, a sequence is formally defined as a function whose domain is an interval of integers. This definition covers several different uses of the word "sequence", including one-sided infinite sequences, bi-infinite sequences, and finite sequences (see below for definitions of these kinds of sequences). However, many authors use a narrower definition by requiring the domain of a sequence to be the set of natural numbers. This narrower definition has the disadvantage that it rules out finite sequences and bi-infinite sequences, both of which are usually called sequences in standard mathematical practice. Another disadvantage is that, if one removes the first terms of a sequence, one needs to reindex the remaining terms to fit this definition. In some contexts, to shorten exposition, the codomain of the sequence is fixed by context, for example by requiring it to be the set R of real numbers,[5] the set C of complex numbers,[6] or a topological space.[7]
Although sequences are a type of function, they are usually distinguished notationally from functions in that the input is written as a subscript rather than in parentheses, that is, an rather than a(n). There are terminological differences as well: the value of a sequence at the lowest input (often 1) is called the "first element" of the sequence, the value at the second smallest input (often 2) is called the "second element", etc. Also, while a function abstracted from its input is usually denoted by a single letter, e.g. f, a sequence abstracted from its input is usually written by a notation such as $(a_{n})_{n\in A}$, or just as $(a_{n}).$ Here A is the domain, or index set, of the sequence.
Sequences and their limits (see below) are important concepts for studying topological spaces. An important generalization of sequences is the concept of nets. A net is a function from a (possibly uncountable) directed set to a topological space. The notational conventions for sequences normally apply to nets as well.
Finite and infinite
The length of a sequence is defined as the number of terms in the sequence.
A sequence of a finite length n is also called an n-tuple. Finite sequences include the empty sequence ( ) that has no elements.
Normally, the term infinite sequence refers to a sequence that is infinite in one direction, and finite in the other—the sequence has a first element, but no final element. Such a sequence is called a singly infinite sequence or a one-sided infinite sequence when disambiguation is necessary. In contrast, a sequence that is infinite in both directions—i.e. that has neither a first nor a final element—is called a bi-infinite sequence, two-way infinite sequence, or doubly infinite sequence. A function from the set Z of all integers into a set, such as for instance the sequence of all even integers ( ..., −4, −2, 0, 2, 4, 6, 8, ... ), is bi-infinite. This sequence could be denoted $(2n)_{n=-\infty }^{\infty }$.
Increasing and decreasing
A sequence is said to be monotonically increasing if each term is greater than or equal to the one before it. For example, the sequence $(a_{n})_{n=1}^{\infty }$ is monotonically increasing if and only if $a_{n+1}\geq a_{n}$ for all $n\in \mathbb {N} $. If each consecutive term is strictly greater than (>) the previous term then the sequence is called strictly monotonically increasing. A sequence is monotonically decreasing if each consecutive term is less than or equal to the previous one, and is strictly monotonically decreasing if each is strictly less than the previous. If a sequence is either increasing or decreasing it is called a monotone sequence. This is a special case of the more general notion of a monotonic function.
The terms nondecreasing and nonincreasing are often used in place of increasing and decreasing in order to avoid any possible confusion with strictly increasing and strictly decreasing, respectively.
Bounded
If the sequence of real numbers $(a_{n})$ is such that all the terms are less than some real number M, then the sequence is said to be bounded from above. In other words, this means that there exists M such that for all n, $a_{n}\leq M$. Any such M is called an upper bound. Likewise, if, for some real m, $a_{n}\geq m$ for all n greater than some N, then the sequence is bounded from below and any such m is called a lower bound. If a sequence is both bounded from above and bounded from below, then the sequence is said to be bounded.
Subsequences
A subsequence of a given sequence is a sequence formed from the given sequence by deleting some of the elements without disturbing the relative positions of the remaining elements. For instance, the sequence of positive even integers (2, 4, 6, ...) is a subsequence of the positive integers (1, 2, 3, ...). The positions of some elements change when other elements are deleted. However, the relative positions are preserved.
Formally, a subsequence of the sequence $(a_{n})_{n\in \mathbb {N} }$ is any sequence of the form $(a_{n_{k}})_{k\in \mathbb {N} }$, where $(n_{k})_{k\in \mathbb {N} }$ is a strictly increasing sequence of positive integers.
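In code, forming a subsequence amounts to sampling at a strictly increasing index sequence; a minimal Python sketch:

```python
# A subsequence (a_(n_k)) picks out terms of (a_n) at a strictly increasing
# sequence of indices n_k; relative order is preserved.
a_n = [1, 2, 3, 4, 5, 6, 7, 8]      # the positive integers (a_n), 1-indexed
n_k = [2, 4, 6, 8]                  # strictly increasing index sequence
subsequence = [a_n[n - 1] for n in n_k]
print(subsequence)                  # [2, 4, 6, 8]: the positive even integers
```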
Other types of sequences
Some other types of sequences that are easy to define include:
• An integer sequence is a sequence whose terms are integers.
• A polynomial sequence is a sequence whose terms are polynomials.
• A positive integer sequence is sometimes called multiplicative if $a_{nm}=a_{n}a_{m}$ for all pairs n, m such that n and m are coprime.[8] In other instances, sequences are often called multiplicative if $a_{n}=na_{1}$ for all n. Moreover, a multiplicative Fibonacci sequence[9] satisfies the recursion relation $a_{n}=a_{n-1}a_{n-2}$.
• A binary sequence is a sequence whose terms have one of two discrete values, e.g. base 2 values (0,1,1,0, ...), a series of coin tosses (Heads/Tails) H,T,H,H,T, ..., the answers to a set of True or False questions (T, F, T, T, ...), and so on.
Limits and convergence
Main article: Limit of a sequence
An important property of a sequence is convergence. If a sequence converges, it converges to a particular value known as the limit. If a sequence converges to some limit, then it is convergent. A sequence that does not converge is divergent.
Informally, a sequence has a limit if the elements of the sequence become closer and closer to some value $L$ (called the limit of the sequence), and they become and remain arbitrarily close to $L$, meaning that given a real number $d$ greater than zero, all but a finite number of the elements of the sequence have a distance from $L$ less than $d$.
For example, the sequence $a_{n}={\frac {n+1}{2n^{2}}}$ converges to the value 0. On the other hand, the sequences $b_{n}=n^{3}$ (which begins 1, 8, 27, ...) and $c_{n}=(-1)^{n}$ (which begins −1, 1, −1, 1, ...) are both divergent.
If a sequence converges, then the value it converges to is unique. This value is called the limit of the sequence. The limit of a convergent sequence $(a_{n})$ is normally denoted $ \lim _{n\to \infty }a_{n}$. If $(a_{n})$ is a divergent sequence, then the expression $ \lim _{n\to \infty }a_{n}$ is meaningless.
Formal definition of convergence
A sequence of real numbers $(a_{n})$ converges to a real number $L$ if, for all $\varepsilon >0$, there exists a natural number $N$ such that for all $n\geq N$ we have[5]
$|a_{n}-L|<\varepsilon .$
If $(a_{n})$ is a sequence of complex numbers rather than a sequence of real numbers, this last formula can still be used to define convergence, with the provision that $|\cdot |$ denotes the complex modulus, i.e. $|z|={\sqrt {z^{*}z}}$. If $(a_{n})$ is a sequence of points in a metric space, then the formula can be used to define convergence, if the expression $|a_{n}-L|$ is replaced by the expression $\operatorname {dist} (a_{n},L)$, which denotes the distance between $a_{n}$ and $L$.
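The definition can be explored numerically; a minimal Python sketch using the sequence $a_{n}={\tfrac {n+1}{2n^{2}}}$ from the example above, with claimed limit $L=0$, searches for a witness $N$ for a given $\varepsilon $ and spot-checks the tail:

```python
def a(n):
    """The sequence a_n = (n + 1) / (2 n^2) from the example above."""
    return (n + 1) / (2 * n ** 2)

L, eps = 0.0, 1e-3
N = 1
while abs(a(N) - L) >= eps:   # search for a witness N for this epsilon
    N += 1
print(N)  # 501: this sequence is decreasing, so all later terms stay within eps
print(all(abs(a(n) - L) < eps for n in range(N, N + 10_000)))  # True (spot check)
```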
Applications and important results
If $(a_{n})$ and $(b_{n})$ are convergent sequences, then the following limits exist, and can be computed as follows:[5][10]
• $\lim _{n\to \infty }(a_{n}\pm b_{n})=\lim _{n\to \infty }a_{n}\pm \lim _{n\to \infty }b_{n}$
• $\lim _{n\to \infty }ca_{n}=c\lim _{n\to \infty }a_{n}$ for all real numbers $c$
• $\lim _{n\to \infty }(a_{n}b_{n})=\left(\lim _{n\to \infty }a_{n}\right)\left(\lim _{n\to \infty }b_{n}\right)$
• $\lim _{n\to \infty }{\frac {a_{n}}{b_{n}}}={\frac {\lim \limits _{n\to \infty }a_{n}}{\lim \limits _{n\to \infty }b_{n}}}$, provided that $\lim _{n\to \infty }b_{n}\neq 0$
• $\lim _{n\to \infty }a_{n}^{p}=\left(\lim _{n\to \infty }a_{n}\right)^{p}$ for all $p>0$ and $a_{n}>0$
Moreover:
• If $a_{n}\leq b_{n}$ for all $n$ greater than some $N$, then $\lim _{n\to \infty }a_{n}\leq \lim _{n\to \infty }b_{n}$.
• (Squeeze Theorem)
If $(c_{n})$ is a sequence such that $a_{n}\leq c_{n}\leq b_{n}$ for all $n>N$ and $\lim _{n\to \infty }a_{n}=\lim _{n\to \infty }b_{n}=L$,
then $(c_{n})$ is convergent, and $\lim _{n\to \infty }c_{n}=L$.
• If a sequence is bounded and monotonic then it is convergent (illustrated in the sketch after this list).
• A sequence is convergent if and only if all of its subsequences are convergent.
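A minimal Python sketch illustrating the bounded-and-monotonic result with the increasing, bounded sequence $a_{n}=(1+1/n)^{n}$, whose limit is the known constant $e$:

```python
import math

# a_n = (1 + 1/n)^n is monotonically increasing and bounded above (by 3),
# so by the result above it converges; its limit is e.
a = [(1 + 1 / n) ** n for n in range(1, 10_001)]
print(all(x < y for x, y in zip(a, a[1:])))  # True: strictly increasing
print(max(a) < 3)                            # True: bounded above
print(a[-1], math.e)                         # 2.71814... approaching 2.71828...
```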
Cauchy sequences
Main article: Cauchy sequence
A Cauchy sequence is a sequence whose terms become arbitrarily close together as n gets very large. The notion of a Cauchy sequence is important in the study of sequences in metric spaces, and, in particular, in real analysis. One particularly important result in real analysis is the Cauchy characterization of convergence for sequences:
A sequence of real numbers is convergent (in the reals) if and only if it is Cauchy.
In contrast, there are Cauchy sequences of rational numbers that are not convergent in the rationals, e.g. the sequence defined by $x_{1}=1$ and $x_{n+1}={\frac {x_{n}+2/x_{n}}{2}}$ is Cauchy, but has no rational limit (its terms approach ${\sqrt {2}}$). More generally, any sequence of rational numbers that converges to an irrational number is Cauchy, but not convergent when interpreted as a sequence in the set of rational numbers.
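A minimal Python sketch of this example, using exact rational arithmetic so that every term really is a rational number:

```python
from fractions import Fraction

# x_1 = 1, x_(n+1) = (x_n + 2/x_n)/2: every term is rational (exact Fraction
# arithmetic), yet the sequence is Cauchy with irrational limit sqrt(2).
x = Fraction(1)
terms = [x]
for _ in range(5):
    x = (x + 2 / x) / 2
    terms.append(x)

gaps = [float(abs(b - a)) for a, b in zip(terms, terms[1:])]
print(float(terms[-1]))  # 1.4142135623730951, i.e. sqrt(2) to double precision
print(gaps)              # successive gaps shrink rapidly: the Cauchy property
```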
Metric spaces that satisfy the Cauchy characterization of convergence for sequences are called complete metric spaces and are particularly nice for analysis.
Infinite limits
In calculus, it is common to define notation for sequences which do not converge in the sense discussed above, but which instead become and remain arbitrarily large, or become and remain arbitrarily negative. If $a_{n}$ becomes arbitrarily large as $n\to \infty $, we write
$\lim _{n\to \infty }a_{n}=\infty .$
In this case we say that the sequence diverges, or that it converges to infinity. An example of such a sequence is an = n.
If $a_{n}$ becomes arbitrarily negative (i.e. negative and large in magnitude) as $n\to \infty $, we write
$\lim _{n\to \infty }a_{n}=-\infty $
and say that the sequence diverges or converges to negative infinity.
Series
Main article: Series (mathematics)
A series is, informally speaking, the sum of the terms of a sequence. That is, it is an expression of the form $ \sum _{n=1}^{\infty }a_{n}$ or $a_{1}+a_{2}+\cdots $, where $(a_{n})$ is a sequence of real or complex numbers. The partial sums of a series are the expressions resulting from replacing the infinity symbol with a finite number, i.e. the Nth partial sum of the series $ \sum _{n=1}^{\infty }a_{n}$ is the number
$S_{N}=\sum _{n=1}^{N}a_{n}=a_{1}+a_{2}+\cdots +a_{N}.$
The partial sums themselves form a sequence $(S_{N})_{N\in \mathbb {N} }$, which is called the sequence of partial sums of the series $ \sum _{n=1}^{\infty }a_{n}$. If the sequence of partial sums converges, then we say that the series $ \sum _{n=1}^{\infty }a_{n}$ is convergent, and the limit $ \lim _{N\to \infty }S_{N}$ is called the value of the series. The same notation is used to denote a series and its value, i.e. we write $ \sum _{n=1}^{\infty }a_{n}=\lim _{N\to \infty }S_{N}$.
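A minimal Python sketch of partial sums, using the series $\textstyle \sum _{n\geq 1}1/n^{2}$, whose value is the known constant $\pi ^{2}/6$ (the Basel problem):

```python
import math

# Partial sums S_N of the series sum_{n>=1} 1/n^2. The sequence (S_N)
# converges, and the value of the series is pi^2/6.
def partial_sum(N):
    return sum(1 / n ** 2 for n in range(1, N + 1))

for N in (10, 100, 10_000):
    print(N, partial_sum(N))
print("limit:", math.pi ** 2 / 6)  # 1.6449340668482264
```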
Use in other fields of mathematics
Topology
Sequences play an important role in topology, especially in the study of metric spaces. For instance:
• A metric space is compact exactly when it is sequentially compact.
• A function from a metric space to another metric space is continuous exactly when it takes convergent sequences to convergent sequences.
• A metric space is a connected space if and only if, whenever the space is partitioned into two sets, one of the two sets contains a sequence converging to a point in the other set.
• A topological space is separable exactly when there is a dense sequence of points.
Sequences can be generalized to nets or filters. These generalizations allow one to extend some of the above theorems to spaces without metrics.
Product topology
The topological product of a sequence of topological spaces is the cartesian product of those spaces, equipped with a natural topology called the product topology.
More formally, given a sequence of spaces $(X_{i})_{i\in \mathbb {N} }$, the product space
$X:=\prod _{i\in \mathbb {N} }X_{i},$
is defined as the set of all sequences $(x_{i})_{i\in \mathbb {N} }$ such that for each i, $x_{i}$ is an element of $X_{i}$. The canonical projections are the maps pi : X → Xi defined by the equation $p_{i}((x_{j})_{j\in \mathbb {N} })=x_{i}$. Then the product topology on X is defined to be the coarsest topology (i.e. the topology with the fewest open sets) for which all the projections pi are continuous. The product topology is sometimes called the Tychonoff topology.
Analysis
In analysis, when talking about sequences, one will generally consider sequences of the form
$(x_{1},x_{2},x_{3},\dots ){\text{ or }}(x_{0},x_{1},x_{2},\dots )$
which is to say, infinite sequences of elements indexed by natural numbers.
A sequence may start with an index different from 1 or 0. For example, the sequence defined by $x_{n}=1/\log(n)$ would be defined only for $n\geq 2$. When talking about such infinite sequences, it is usually sufficient (and does not change much for most considerations) to assume that the members of the sequence are defined at least for all indices large enough, that is, greater than some given N.
The most elementary type of sequences are numerical ones, that is, sequences of real or complex numbers. This type can be generalized to sequences of elements of some vector space. In analysis, the vector spaces considered are often function spaces. Even more generally, one can study sequences with elements in some topological space.
Sequence spaces
Main article: Sequence space
A sequence space is a vector space whose elements are infinite sequences of real or complex numbers. Equivalently, it is a function space whose elements are functions from the natural numbers to the field K, where K is either the field of real numbers or the field of complex numbers. The set of all such functions is naturally identified with the set of all possible infinite sequences with elements in K, and can be turned into a vector space under the operations of pointwise addition of functions and pointwise scalar multiplication. All sequence spaces are linear subspaces of this space. Sequence spaces are typically equipped with a norm, or at least the structure of a topological vector space.
The most important sequence spaces in analysis are the ℓp spaces, consisting of the p-power summable sequences, with the p-norm. These are special cases of Lp spaces for the counting measure on the set of natural numbers. Other important classes of sequences like convergent sequences or null sequences form sequence spaces, respectively denoted c and c0, with the sup norm. Any sequence space can also be equipped with the topology of pointwise convergence, under which it becomes a special kind of Fréchet space called an FK-space.
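A hedged numerical sketch (truncating at N terms and the choice aₙ = 1/n are assumptions of this demo): the truncated 1-norm of (1/n) grows without bound while its truncated 2-norm stabilizes, suggesting that the sequence lies in ℓ2 but not in ℓ1.

    # Truncated p-norms of a_n = 1/n: the 1-norm grows like log N (harmonic series),
    # while the 2-norm approaches pi/sqrt(6), since sum 1/n^2 = pi^2/6.
    def lp_norm(terms, p):
        return sum(abs(a) ** p for a in terms) ** (1 / p)

    for N in (10, 1000, 100000):
        a = [1 / n for n in range(1, N + 1)]
        print(N, round(lp_norm(a, 1), 3), round(lp_norm(a, 2), 6))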
Linear algebra
Sequences over a field may also be viewed as vectors in a vector space. Specifically, the set of F-valued sequences (where F is a field) is a function space (in fact, a product space) of F-valued functions over the set of natural numbers.
Abstract algebra
Abstract algebra employs several types of sequences, including sequences of mathematical objects such as groups or rings.
Free monoid
Main article: Free monoid
If A is a set, the free monoid over A (denoted A*, also called Kleene star of A) is a monoid containing all the finite sequences (or strings) of zero or more elements of A, with the binary operation of concatenation. The free semigroup A+ is the subsemigroup of A* containing all elements except the empty sequence.
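A small sketch of the free monoid (the two-letter alphabet is an arbitrary choice of this demo): elements are finite strings, the operation is concatenation, and the empty sequence is the identity.

    from itertools import product

    def star(alphabet, max_len):
        """Enumerate the elements of A* up to a given length (A* itself is infinite)."""
        for n in range(max_len + 1):
            for letters in product(alphabet, repeat=n):
                yield "".join(letters)

    A = ("a", "b")
    print(list(star(A, 2)))             # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
    u, v, w = "ab", "ba", "a"
    assert (u + v) + w == u + (v + w)   # concatenation is associative
    assert "" + u == u + "" == u        # the empty sequence is the identity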
Exact sequences
Main article: Exact sequence
In the context of group theory, a sequence
$G_{0}\;{\xrightarrow {f_{1}}}\;G_{1}\;{\xrightarrow {f_{2}}}\;G_{2}\;{\xrightarrow {f_{3}}}\;\cdots \;{\xrightarrow {f_{n}}}\;G_{n}$
of groups and group homomorphisms is called exact, if the image (or range) of each homomorphism is equal to the kernel of the next:
$\mathrm {im} (f_{k})=\mathrm {ker} (f_{k+1})$
The sequence of groups and homomorphisms may be either finite or infinite.
A similar definition can be made for certain other algebraic structures. For example, one could have an exact sequence of vector spaces and linear maps, or of modules and module homomorphisms.
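A standard textbook example (not specific to this article) is the short exact sequence of abelian groups
$0\to \mathbb {Z} \;{\xrightarrow {\times 2}}\;\mathbb {Z} \;{\xrightarrow {\bmod 2}}\;\mathbb {Z} /2\mathbb {Z} \to 0,$
in which the image of multiplication by 2 (the even integers) is exactly the kernel of reduction modulo 2, while exactness at the two ends encodes that multiplication by 2 is injective and reduction modulo 2 is surjective.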
Spectral sequences
Main article: Spectral sequence
In homological algebra and algebraic topology, a spectral sequence is a means of computing homology groups by taking successive approximations. Spectral sequences are a generalization of exact sequences, and since their introduction by Jean Leray (1946), they have become an important research tool, particularly in homotopy theory.
Set theory
An ordinal-indexed sequence is a generalization of a sequence. If α is a limit ordinal and X is a set, an α-indexed sequence of elements of X is a function from α to X. In this terminology an ω-indexed sequence is an ordinary sequence.
Computing
In computer science, finite sequences are called lists. Potentially infinite sequences are called streams. Finite sequences of characters or digits are called strings.
Streams
Infinite sequences of digits (or characters) drawn from a finite alphabet are of particular interest in theoretical computer science. They are often referred to simply as sequences or streams, as opposed to finite strings. Infinite binary sequences, for instance, are infinite sequences of bits (characters drawn from the alphabet {0, 1}). The set C = {0, 1}∞ of all infinite binary sequences is sometimes called the Cantor space.
An infinite binary sequence can represent a formal language (a set of strings) by setting the nth bit of the sequence to 1 if and only if the nth string (in shortlex order) is in the language. This representation is useful in the diagonalization method for proofs.[11]
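A minimal sketch of this encoding (the alphabet and the example language, the strings of even length, are assumptions of this demo):

    from itertools import count, product

    def shortlex(alphabet):
        """Yield all strings over the alphabet in shortlex order."""
        for n in count(0):
            for letters in product(sorted(alphabet), repeat=n):
                yield "".join(letters)

    def characteristic_bits(in_language, alphabet, how_many):
        """First bits of the binary sequence representing the language."""
        gen = shortlex(alphabet)
        return [1 if in_language(next(gen)) else 0 for _ in range(how_many)]

    even_length = lambda s: len(s) % 2 == 0
    print(characteristic_bits(even_length, {"a", "b"}, 15))
    # -> [1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]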
See also
• Enumeration
• On-Line Encyclopedia of Integer Sequences
• Recurrence relation
• Sequence space
Operations
• Cauchy product
Examples
• Discrete-time signal
• Farey sequence
• Fibonacci sequence
• Look-and-say sequence
• Thue–Morse sequence
• List of integer sequences
Types
• ±1-sequence
• Arithmetic progression
• Automatic sequence
• Cauchy sequence
• Constant-recursive sequence
• Geometric progression
• Harmonic progression
• Holonomic sequence
• Regular sequence
• Pseudorandom binary sequence
• Random sequence
Related concepts
• List (computing)
• Net (topology) (a generalization of sequences)
• Ordinal-indexed sequence
• Recursion (computer science)
• Set (mathematics)
• Tuple
• Permutation
Notes
1. If the inequalities are replaced by strict inequalities then this is false: There are sequences such that $a_{n}<b_{n}$ for all $n$, but $\lim _{n\to \infty }a_{n}=\lim _{n\to \infty }b_{n}$.
References
1. "Sequences". www.mathsisfun.com. Archived from the original on 2020-08-12. Retrieved 2020-08-17.
2. Weisstein, Eric W. "Sequence". mathworld.wolfram.com. Archived from the original on 2020-07-25. Retrieved 2020-08-17.
3. Index to OEIS Archived 2022-10-18 at the Wayback Machine, On-Line Encyclopedia of Integer Sequences, 2020-12-03
4. Sloane, N. J. A. (ed.). "Sequence A005132 (Recamán's sequence)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 26 January 2018.
5. Gaughan, Edward (2009). "1.1 Sequences and Convergence". Introduction to Analysis. AMS (2009). ISBN 978-0-8218-4787-9.
6. Edward B. Saff & Arthur David Snider (2003). "Chapter 2.1". Fundamentals of Complex Analysis. Prentice Hall. ISBN 978-01-390-7874-3. Archived from the original on 2023-03-23. Retrieved 2015-11-15.
7. James R. Munkres (2000). "Chapters 1&2". Topology. Prentice Hall, Incorporated. ISBN 978-01-318-1629-9. Archived from the original on 2023-03-23. Retrieved 2015-11-15.
8. Lando, Sergei K. (2003-10-21). "7.4 Multiplicative sequences". Lectures on generating functions. AMS. ISBN 978-0-8218-3481-7.
9. Falcon, Sergio (2003). "Fibonacci's multiplicative sequence". International Journal of Mathematical Education in Science and Technology. 34 (2): 310–315. doi:10.1080/0020739031000158362. S2CID 121280842.
10. Dawkins, Paul. "Series and Sequences". Paul's Online Math Notes/Calc II (notes). Archived from the original on 30 November 2012. Retrieved 18 December 2012.
11. Oflazer, Kemal. "FORMAL LANGUAGES, AUTOMATA AND COMPUTATION: DECIDABILITY" (PDF). cmu.edu. Carnegie-Mellon University. Archived (PDF) from the original on 29 May 2015. Retrieved 24 April 2015.
External links
• "Sequence", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• The On-Line Encyclopedia of Integer Sequences
• Journal of Integer Sequences (free)
Sequentially compact space
In mathematics, a topological space X is sequentially compact if every sequence of points in X has a convergent subsequence converging to a point in $X$.
Every metric space is naturally a topological space, and for metric spaces, the notions of compactness and sequential compactness are equivalent (if one assumes countable choice). However, there exist sequentially compact topological spaces that are not compact, and compact topological spaces that are not sequentially compact.
Examples and properties
The space of all real numbers with the standard topology is not sequentially compact; the sequence $(s_{n})$ given by $s_{n}=n$ for all natural numbers $n$ is a sequence that has no convergent subsequence.
If a space is a metric space, then it is sequentially compact if and only if it is compact.[1] The first uncountable ordinal with the order topology is an example of a sequentially compact topological space that is not compact. The product of $2^{\aleph _{0}}={\mathfrak {c}}$ copies of the closed unit interval is an example of a compact space that is not sequentially compact.[2]
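By contrast, the closed interval [0, 1] is sequentially compact, and the classical Bolzano–Weierstrass bisection argument behind this can be sketched numerically (the sequence of fractional parts of n·φ and the truncation to finitely many terms are assumptions of this demo, which only illustrates the idea):

    import math

    def nested_half(x, steps):
        """Keep a half-interval containing at least half of the surviving terms.
        In the infinite setting one half always contains infinitely many terms,
        and choosing one term per step yields a convergent subsequence."""
        lo, hi = 0.0, 1.0
        idx = list(range(len(x)))
        for _ in range(steps):
            mid = (lo + hi) / 2
            left = [i for i in idx if x[i] <= mid]
            right = [i for i in idx if x[i] > mid]
            idx, lo, hi = (left, lo, mid) if len(left) >= len(right) else (right, mid, hi)
        return idx, (lo, hi)

    phi = (1 + 5 ** 0.5) / 2
    x = [math.modf(n * phi)[0] for n in range(10000)]    # fractional parts of n*phi
    survivors, (lo, hi) = nested_half(x, 10)
    print(len(survivors), hi - lo)   # many terms trapped in an interval of width 2**-10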
Related notions
A topological space $X$ is said to be limit point compact if every infinite subset of $X$ has a limit point in $X$, and countably compact if every countable open cover has a finite subcover. In a metric space, the notions of sequential compactness, limit point compactness, countable compactness and compactness are all equivalent (if one assumes the axiom of choice).
In a sequential (Hausdorff) space sequential compactness is equivalent to countable compactness.[3]
There is also a notion of a one-point sequential compactification; the idea is that the non-convergent sequences should all converge to the extra point.[4]
See also
• Bolzano–Weierstrass theorem – Bounded sequence in finite-dimensional Euclidean space has a convergent subsequence
• Fréchet–Urysohn space – Topological space
• Sequence covering maps
• Sequential space – Topological space characterized by sequences
Notes
1. Willard, 17G, p. 125.
2. Steen and Seebach, Example 105, pp. 125–126.
3. Engelking, General Topology, Theorem 3.10.31; K.P. Hart, Jun-iti Nagata, J.E. Vaughan (editors), Encyclopedia of General Topology, Chapter d3 (by P. Simon)
4. Brown, Ronald, "Sequentially proper maps and a sequential compactification", J. London Math Soc. (2) 7 (1973) 515-522.
References
• Munkres, James (1999). Topology (2nd ed.). Prentice Hall. ISBN 0-13-181629-2.
• Steen, Lynn A. and Seebach, J. Arthur Jr.; Counterexamples in Topology, Holt, Rinehart and Winston (1970). ISBN 0-03-079485-4.
• Willard, Stephen (2004). General Topology. Dover Publications. ISBN 0-486-43479-6.
Sequentially complete
In mathematics, specifically in topology and functional analysis, a subspace S of a uniform space X is said to be sequentially complete or semi-complete if every Cauchy sequence in S converges to an element in S. X is called sequentially complete if it is a sequentially complete subset of itself.
Sequentially complete topological vector spaces
Every topological vector space is a uniform space so the notion of sequential completeness can be applied to them.
Properties of sequentially complete topological vector spaces
1. A bounded sequentially complete disk in a Hausdorff topological vector space is a Banach disk.[1]
2. A Hausdorff locally convex space that is sequentially complete and bornological is ultrabornological.[2]
Examples and sufficient conditions
1. Every complete space is sequentially complete but not conversely.
2. A metrizable space is complete if and only if it is sequentially complete (see the sketch after this list).
3. Every complete topological vector space is quasi-complete and every quasi-complete topological vector space is sequentially complete.[3]
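The metrizable case above can be illustrated with a minimal sketch (the Newton iteration for √2 and the use of exact rational arithmetic are choices of this demo): the iterates form a Cauchy sequence of rationals whose limit is irrational, so the rationals are not sequentially complete, while the reals are.

    from fractions import Fraction

    x = Fraction(1)
    for _ in range(6):
        x = (x + 2 / x) / 2      # Newton step, in exact rational arithmetic
        print(float(x), float(x) - 2 ** 0.5)
    # Successive terms cluster ever closer together (a Cauchy sequence),
    # yet no rational number equals sqrt(2): the limit escapes Q.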
See also
• Cauchy net
• Complete space
• Complete topological vector space
• Quasi-complete space
• Topological vector space
• Uniform space
References
1. Narici & Beckenstein 2011, pp. 441–442.
2. Narici & Beckenstein 2011, p. 449.
3. Narici & Beckenstein 2011, pp. 155–176.
Bibliography
• Khaleelulla, S. M. (1982). Counterexamples in Topological Vector Spaces. Lecture Notes in Mathematics. Vol. 936. Berlin, Heidelberg, New York: Springer-Verlag. ISBN 978-3-540-11565-6. OCLC 8588370.
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Joseph Ser
Joseph Ser (1875–1954) was a French mathematician about whom little is known. He published 45 papers between 1900 and 1954, including four monographs, published in Paris by Henry Gauthier-Villars. He worked mainly on number theory and infinite series.
He obtained important results in the domain of factorial series. His representation of Euler's constant as a series of rational terms is well known; it was used in 1926 by Paul Appell (1855–1930) in an unsuccessful attempt to prove the irrationality of Euler's constant.
References
• Ser, Joseph : Sur une expression de la fonction ζ(s) de Riemann (Upon an expression for Riemann's ζ function). CRAS (Paris) vol.182(1926),1075-1077
• Ayoub, Raymond G.: Partial triumph or total failure ? The mathematical Intelligencer, vol.7, No 2(1985),55-58. This paper explains exactly Appell's mistake (4 - Appell and the irrationality of Euler's constant).
Serafim Kalliadasis
Serafim Kalliadasis is an applied mathematician and chemical engineer who has been working at Imperial College London since 2004.[1]
Serafim Kalliadasis
FIMA, FInstP, APS Fellow, FIChemE
Education: Aristotle University of Thessaloniki (Dipl.Ing.); University of Notre Dame, USA (PhD)
Known for: Mathematical modelling of falling liquid films
Fields: Interdisciplinary applied mathematics, engineering science, complex multiscale systems, classical density functional theory
Institutions: Imperial College London
Thesis: Self-similar interfacial and wetting dynamics (1994)
Doctoral advisor: Prof. H.-C. Chang
Website: Personal website; Complex Multiscale Systems
Career
Serafim Kalliadasis earned a five-year undergraduate degree in chemical engineering at the Polytechnic School of the Aristotle University of Thessaloniki, Greece, graduating in 1989. In 1990 he started his PhD studies at the University of Notre Dame, USA. His doctoral thesis, supervised by Prof. H.-C. Chang, was in the general area of fluid dynamics.
Following his PhD, in 1994 he moved to the University of Bristol, UK, as a post-doctoral fellow in applied mathematics.
In 1995 he took up his first academic position at the Chemical Engineering Department of the University of Leeds, UK. In 2004 he was appointed to a Readership in Fluid Mechanics at the Department of Chemical Engineering, Imperial College London, and in 2010 he was promoted to Professor in Engineering Science & Applied Mathematics.
Research
Serafim Kalliadasis' expertise is in the interface between Applied and Computational Mathematics, Complex Systems and Engineering, covering both fundamentals and applications. He leads the Complex Multiscale Systems Group of Imperial College London.[2]
Distinctions
• 2020, Institute of Mathematics and its Applications Fellow.[1]
• 2019, Institute of Physics Fellow.[1]
• 2014, American Physical Society Fellow. Citation reads: “For pioneering and rigorous contributions to fundamental fluid dynamics, particularly interfacial flows and dynamics of moving contact lines, statistical mechanics of inhomogeneous liquids, and coarse graining of complex multiscale systems.”[3]
• 2010–2016, ERC Frontier Research Advanced Investigator Grant holder.[4]
• 2009, Corporate Member and Fellow of IChemE.[1]
• 2004–2009, EPSRC Advanced Fellowship.[1]
Selected publications
1. Carrillo, J.A., Kalliadasis, S., Perez, S.P. & Shu, C.-W. 2020 “Well-balanced finite-volume schemes for hydrodynamic equations with general free energy,” SIAM Multiscale Model. Sim. 18 502–541[5]
2. Gomes, S.N., Kalliadasis, S., Pavliotis, G.A. & Yatsyshin, P. 2019 “Dynamics of the Desai-Zwanzig model in multiwell and random energy landscapes,” Phys. Rev. E 99 Art. No. 032109 (13 pp)[6]
3. Schmuck, M., Pavliotis, G.A. & Kalliadasis, S. 2019 “Recent advances in the evolution of interfaces: thermodynamics, upscaling, and universality,” Comp. Mater. Sci. 156 441–451 (Special issue following Euromat2017 conference)
4. Yatsyshin, P., Parry, A.O., Rascón, C. & Kalliadasis, S. 2018 “Wetting of a plane with a narrow solvophobic stripe,” Mol. Phys. 116 1990–1997 (Special issue following Thermodynamics 2017 conference)[7]
5. Yatsyshin, P., Durán-Olivencia, M.A. & Kalliadasis, S. 2018 “Microscopic aspects of wetting using classical density functional theory,” J. Phys.-Condens. Matt. 30 Art. No. 274003 (9 pp) (Invited paper—special issue on “Physics of Integrated Microfluidics”)[8]
6. Dallaston, M.C., Fontelos, M.A., Tseluiko, D. & Kalliadasis, S. 2018 “Discrete self-similarity in interfacial hydrodynamics and the formation of iterated structures,” Phys. Rev. Lett. 120 Art. No. 034505 (5 pp)[9]
7. Braga, C., Smith, E.R., Nold, A., Sibley, D.N. & Kalliadasis, S. 2018 “The pressure tensor across a liquid-vapour interface,” J. Chem. Phys. 149 Art. No. 044705 (8 pp)[10]
8. Schmuck, M. & Kalliadasis, S. 2017 “Rate of convergence of general phase field equations in strongly heterogeneous media towards their homogenized limit,” SIAM J. Appl. Math. 77 1471–1492[11]
9. Nold, A., Goddard, B.D., Yatsyshin, P., Savva, N. & Kalliadasis, S. 2017 “Pseudospectral methods for density functional theory in bounded and unbounded domains,” J. Comp. Phys. 334 639–664[12]
10. Durán-Olivencia, M.A., Yatsyshin, P., Goddard, B.D. & Kalliadasis, S. 2017 “General framework for fluctuating dynamic density functional theory,” New J. Phys. 19 Art. No. 123022 (16 pp)[13]
References
1. "Home – Professor Serafim Kalliadasis". www.imperial.ac.uk.
2. Complex Multiscale Systems Imperial College London
3. Illustrious Fellowship for Chemical Engineering Professor Imperial News – Imperial College London
4. "ERC 10th AnniversaryEvent" (PDF).
5. Carrillo, José A.; Kalliadasis, Serafim; Perez, Sergio P.; Shu, Chi-Wang (January 1, 2020). "Well-Balanced Finite-Volume Schemes for Hydrodynamic Equations with General Free Energy". Multiscale Modeling & Simulation. 18 (1): 502–541. arXiv:1812.00980. doi:10.1137/18M1230050. S2CID 89613823.
6. Gomes, Susana N.; Kalliadasis, Serafim; Pavliotis, Grigorios A.; Yatsyshin, Petr (March 6, 2019). "Dynamics of the Desai-Zwanzig model in multiwell and random energy landscapes". Physical Review E. 99 (3): 032109. arXiv:1810.06371. Bibcode:2019PhRvE..99c2109G. doi:10.1103/PhysRevE.99.032109. PMID 30999473. S2CID 53398077.
7. Yatsyshin, P.; Parry, A. O.; Rascón, C.; Kalliadasis, S. (August 18, 2018). "Wetting of a plane with a narrow solvophobic stripe". Molecular Physics. 116 (15–16): 1990–1997. Bibcode:2018MolPh.116.1990Y. doi:10.1080/00268976.2018.1473648. hdl:10016/29071. S2CID 102537449.
8. Yatsyshin, P., Durán-Olivencia, M.A. & Kalliadasis, S. 2018 “Microscopic aspects of wetting using classical density functional theory,” J. Phys.-Condens. Matt. 30 Art. No. 274003 (9 pp) (Invited paper—special issue on “Physics of Integrated Microfluidics”)
9. Dallaston, Michael C.; Fontelos, Marco A.; Tseluiko, Dmitri; Kalliadasis, Serafim (January 19, 2018). "Discrete Self-Similarity in Interfacial Hydrodynamics and the Formation of Iterated Structures". Physical Review Letters. 120 (3): 034505. Bibcode:2018PhRvL.120c4505D. doi:10.1103/PhysRevLett.120.034505. PMID 29400525.
10. Braga, Carlos; Smith, Edward R.; Nold, Andreas; Sibley, David N.; Kalliadasis, Serafim (July 28, 2018). "The pressure tensor across a liquid-vapour interface". The Journal of Chemical Physics. 149 (4): 044705. arXiv:1711.05986. Bibcode:2018JChPh.149d4705B. doi:10.1063/1.5020991. PMID 30068201. S2CID 51892025.
11. Schmuck, M.; Kalliadasis, S. (January 1, 2017). "Rate of Convergence of General Phase Field Equations in Strongly Heterogeneous Media Toward Their Homogenized Limit". SIAM Journal on Applied Mathematics. 77 (4): 1471–1492. doi:10.1137/16M1079646. hdl:10044/1/53735. S2CID 1290321.
12. Nold, Andreas; Goddard, Benjamin D.; Yatsyshin, Peter; Savva, Nikos; Kalliadasis, Serafim (April 1, 2017). "Pseudospectral methods for density functional theory in bounded and unbounded domains". Journal of Computational Physics. 334: 639–664. arXiv:1701.06182. Bibcode:2017JCoPh.334..639N. doi:10.1016/j.jcp.2016.12.023. S2CID 2175860 – via ScienceDirect.
13. Durán-Olivencia, Miguel A.; Yatsyshin, Peter; Goddard, Benjamin D.; Kalliadasis, Serafim (2017). "General framework for fluctuating dynamic density functional theory". New Journal of Physics. 19 (12): 123022. Bibcode:2017NJPh...19l3022D. doi:10.1088/1367-2630/aa9041.
Sergei Abramov (mathematician)
Sergei Mikhailovich Abramov (Russian: Сергей Михайлович Абрамов; born 25 March 1957) is a Russian mathematician, Professor, Dr.Sc., Corresponding Member of the Russian Academy of Sciences, Director of the Institute of Program Systems of the Russian Academy of Sciences, and Rector of the University of Pereslavl (2003–2017). He is a specialist in the field of system programming and information technologies (supercomputer systems, telecommunication technologies, and the theory of constructive metasystems and meta-calculations).[1][2]
Sergey Abramov
Born: 25 March 1957, Moscow
Alma mater: Moscow State University (1980)
Fields: Mathematics
Institutions: Institute of Program Systems of the RAS (Director)
Biography
He graduated from the Faculty of Computational Mathematics and Cybernetics of Moscow State University in 1980.
In 1995 he defended the thesis "Meta-calculations and their application" for the degree of Doctor of Physical and Mathematical Sciences.[3][4]
He was awarded the title of Professor in 1996 and was elected a Corresponding Member of the Russian Academy of Sciences in 2006.
References
1. Russian Academy of Sciences (in Russian)
2. Biography of Sergey Abramov (in Russian)
3. RAS Archive (in Russian)
4. Scientific works of Sergei Abramov (in English)
External links
• "Sergey Abramov". Russian Academy of Sciences (in Russian). Retrieved 2018-05-21.
• "Sergey Abramov". RAS Archive (in Russian). Retrieved 2018-05-21.
• "Biography Sergey Abramov". BOTIK.RU (in Russian). Retrieved 2018-05-21.
• Scientific works of Sergei Abramov (in English)
Sergei Bernstein
Sergei Natanovich Bernstein (Ukrainian: Сергі́й Ната́нович Бернште́йн, sometimes Romanized as Bernshtein; 5 March 1880 – 26 October 1968) was a Ukrainian and Russian mathematician of Jewish origin known for contributions to partial differential equations, differential geometry, probability theory, and approximation theory.[1][2]
Sergei Bernstein
Born: Sergei Natanovich Bernstein, 5 March 1880, Odessa, Kherson Governorate, Russian Empire
Died: 26 October 1968 (aged 88), Moscow, Soviet Union
Nationality: Soviet
Alma mater: University of Paris
Known for: Bernstein's inequality in analysis; Bernstein inequalities in probability theory; Bernstein polynomial; Bernstein's theorem (approximation theory); Bernstein's theorem on monotone functions; Bernstein problem in mathematical genetics
Fields: Mathematics
Institutions: University of Paris; University of Göttingen; University of Kharkiv; Leningrad University; Steklov Institute of Mathematics
Doctoral advisors: Charles Émile Picard; David Hilbert
Doctoral students: Yakov Geronimus; Sergey Stechkin
Bernstein was born into a Jewish family living in Odessa. After high school Bernstein went to Paris to study mathematics. He returned to Russia in 1905 and taught at Kharkiv University from 1908 to 1933. He was made an ordinary professor in 1920. Bernstein later worked at the Mathematical Institute of the USSR Academy of Sciences in Leningrad, and also taught at the University and Polytechnic Institute. From January 1939 Bernstein also worked at Moscow University. He and his wife were evacuated to Borovoe, Kazakhstan in 1941. From 1943 he worked at the Mathematical Institute in Moscow, and edited Chebyshev's complete works. In 1947 he was dismissed from the University and became Head of the Department of Constructive Function Theory at the Steklov Institute. He died in Moscow in 1968.
Work
Partial differential equations
In his doctoral dissertation, submitted in 1904 to the Sorbonne, Bernstein solved Hilbert's nineteenth problem on the analytic solution of elliptic differential equations.[3] His later work was devoted to Dirichlet's boundary problem for non-linear equations of elliptic type, where, in particular, he introduced a priori estimates.
Probability theory
In 1917, Bernstein suggested the first axiomatic foundation of probability theory, based on the underlying algebraic structure.[4] It was later superseded by the measure-theoretic approach of Kolmogorov.
In the 1920s, he introduced a method for proving limit theorems for sums of dependent random variables.
Approximation theory
Through his application of Bernstein polynomials, he laid the foundations of constructive function theory, a field studying the connection between smoothness properties of a function and its approximations by polynomials.[5] In particular, he proved the Weierstrass approximation theorem[6][7] and Bernstein's theorem (approximation theory). Bernstein polynomials also form the mathematical basis for Bézier curves, which later became important in computer graphics.
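A hedged numerical sketch of this constructive approximation (the target function |x − 1/2| and the sampling grid are assumptions of this demo): the Bernstein polynomial $B_{n}(f)(x)=\sum _{k=0}^{n}f(k/n){\binom {n}{k}}x^{k}(1-x)^{n-k}$ converges uniformly to f on [0, 1], which is the heart of Bernstein's proof of the Weierstrass theorem.

    from math import comb

    def bernstein(f, n, x):
        """Value of the n-th Bernstein polynomial of f at x."""
        return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
                   for k in range(n + 1))

    f = lambda x: abs(x - 0.5)           # continuous but not differentiable at 1/2
    for n in (10, 100, 1000):
        err = max(abs(bernstein(f, n, i / 200) - f(i / 200)) for i in range(201))
        print(n, round(err, 5))          # the maximal sampled error shrinks with n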
International Congress of Mathematicians
Bernstein was an invited speaker at the International Congress of Mathematicians (ICM) in Cambridge, England in 1912 and in Bologna in 1928, and a plenary speaker at the 1932 ICM in Zurich.[8] His plenary address Sur les liaisons entre quantités aléatoires was read by Bohuslav Hostinsky.[9]
Publications
• S. N. Bernstein, Collected Works (Russian):
• vol. 1, The Constructive Theory of Functions (1905–1930), translated: Atomic Energy Commission, Springfield, Va, 1958
• vol. 2, The Constructive Theory of Functions (1931–1953)
• vol. 3, Differential equations, calculus of variations and geometry (1903–1947)
• vol. 4, Theory of Probability. Mathematical statistics (1911–1946)
• S. N. Bernstein, The Theory of Probabilities (Russian), Moscow, Leningrad, 1946
See also
• A priori estimate
• Bernstein algebra
• Bernstein's inequality (mathematical analysis)
• Bernstein inequalities in probability theory
• Bernstein polynomial
• Bernstein's problem
• Bernstein's theorem (approximation theory)
• Bernstein's theorem on monotone functions
• Bernstein–von Mises theorem
• Stone–Weierstrass theorem
Notes
1. Youschkevitch, A. P. "BERNSTEIN, SERGEY NATANOVICH". Dictionary of Scientific Biography.
2. Lozinskii, S. M. (1983). "On the hundredth anniversary of the birth of S. N. Bernstein". Russ. Math. Surv. 38 (3): 163. Bibcode:1983RuMaS..38..163L. doi:10.1070/RM1983v038n03ABEH003497.
3. Akhiezer, N.I.; Petrovskii, I.G. (1961). "S. N. Bernshtein's contribution to the theory of partial differential equations". Russ. Math. Surv. 16 (2): 1–15. Bibcode:1961RuMaS..16....1A. doi:10.1070/RM1961v016n02ABEH004101.
4. Linnik, Ju. V. (1961). "The contribution of S. N. Bernšteĭn to the theory of probability". Russ. Math. Surv. 16 (2): 21–22. doi:10.1070/rm1961v016n02abeh004103. MR 0130818.
5. Videnskii, V. S. (1961). "Sergei Natanovich Bernshtein — founder of the constructive theory of functions". Russ. Math. Surv. 16 (2): 17. Bibcode:1961RuMaS..16...17V. doi:10.1070/RM1961v016n02ABEH004102.
6. S. Bernstein (1912–13) "Démonstration du théorème de Weierstrass, fondée sur le calcul des probabilités", Commun. Soc. Math. Kharkow (2) 13: 1–2
7. Kenneth M. Levasseur (1984) "A Probabilistic Proof of the Weierstrass Theorem", American Mathematical Monthly 91(4): 249–250
8. "Bernstein, S." ICM Plenary and Invited Speakers, International Mathematical Union.
9. "1932 ICM - Zurich". MacTutor.
References
• O'Connor, John J.; Robertson, Edmund F., "Sergei Bernstein", MacTutor History of Mathematics Archive, University of St Andrews
External links
• Sergei Bernstein at the Mathematics Genealogy Project
• Sergei Natanovich Bernstein and history of approximation theory from Technion — Israel Institute of Technology
• Author profile in the database zbMATH
Sergei Godunov
Sergei Konstantinovich Godunov (Russian: Серге́й Константи́нович Годуно́в; 17 July 1929 – 15 July 2023) was a Soviet and Russian professor at the Sobolev Institute of Mathematics of the Russian Academy of Sciences in Novosibirsk, Russia.
Sergei Godunov
Godunov in 2002
Born: Sergei Konstantinovich Godunov, 17 July 1929, Moscow, Russian SFSR, USSR
Died: 15 July 2023 (aged 93), Novosibirsk, Russia
Nationality: Russian
Alma mater: Moscow State University
Known for: Godunov's theorem; Godunov's scheme
Awards: Lenin Prize (1959)
Fields: Applied mathematics
Institutions: Sobolev Institute of Mathematics, Novosibirsk, Russia
Doctoral advisor: Ivan Petrovsky
Biography
Godunov's most influential work is in the area of applied and numerical mathematics, particularly in the development of methodologies used in computational fluid dynamics (CFD) and other computational fields. Godunov's theorem (Godunov 1959), also known as Godunov's order barrier theorem, states that linear numerical schemes for solving partial differential equations which do not generate new extrema (monotone schemes) can be at most first-order accurate. Godunov's scheme is a conservative numerical scheme for solving partial differential equations. In this method, the conservative variables are treated as piecewise constant over the mesh cells at each time step, and the time evolution is determined by the exact solution of the Riemann (shock tube) problem at the inter-cell boundaries (Hirsch, 1990).
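The following is a minimal sketch of a Godunov-type scheme, not Godunov's original code: for the linear advection equation $u_{t}+au_{x}=0$ with a > 0, the exact interface Riemann solution reduces to taking the upwind cell value (the grid, time step and square-wave initial data are assumptions of this demo).

    a, dx, dt, nx, steps = 1.0, 0.01, 0.005, 100, 100    # CFL number a*dt/dx = 0.5
    u = [1.0 if 0.25 <= i * dx <= 0.5 else 0.0 for i in range(nx)]   # square wave

    for _ in range(steps):
        # Flux at interface i-1/2 from the exact Riemann problem: for a > 0 the
        # wave moves right, so the upwind state u[i-1] is used (periodic boundary).
        flux = [a * u[i - 1] for i in range(nx)]
        u = [u[i] - dt / dx * (flux[(i + 1) % nx] - flux[i]) for i in range(nx)]

    print(round(max(u), 3), round(min(u), 3))   # monotone: no new extrema appear

Consistent with the order barrier theorem, this monotone first-order scheme smears the discontinuities rather than producing spurious oscillations.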
On 1–2 May 1997 a symposium entitled Godunov-type Numerical Methods was held at the University of Michigan to honour Godunov. These methods are widely used to compute continuum processes dominated by wave propagation. On the following day, 3 May, Godunov received an honorary degree from the University of Michigan. Godunov died on 15 July 2023, two days shy of his 94th birthday.[1]
Education
• 1946–1951 – Department of Mechanics and Mathematics, Moscow State University.
• 1951 – Diploma (M. S.), Moscow State University.
• 1954 – Candidate of Physical and Mathematical Sciences (Ph. D.).
• 1965 – Doctor of Physical and Mathematical Sciences (D. Sc.).
• 1976 – Corresponding member of the Academy of Sciences of the Soviet Union.
• 1994 – Member of the Russian Academy of Sciences (Academician).
• 1997 – Honorary professor of the University of Michigan (Ann-Arbor, USA).
Awards
• 1954 – Order of the Badge of Honour
• 1956 – Order of the Red Banner of Labour
• 1959 – Lenin Prize
• 1972 – A.N. Krylov Prize of the Academy of Sciences of the Soviet Union
• 1975 – Order of the Red Banner of Labour
• 1981 – Order of the Badge of Honour
• 1993 – M.A. Lavrentyev Prize of the Russian Academy of Sciences
• 2010 – Order of Honour
• 2020 - SAE/Ramesh Agarwal Computational Fluid Dynamics Award
• 2023 – Order of Alexander Nevsky
See also
• Riemann solver
• Total variation diminishing
• Upwind scheme
Notes
1. В Новосибирске скончался выдающийся математик, академик РАН Сергей Годунов (in Russian)
References
• Godunov, Sergei K. (1954), Ph. D. Dissertation: Difference Methods for Shock Waves, Moscow State University.
• Godunov, S. K. (1959), A Difference Scheme for Numerical Solution of Discontinuous Solution of Hydrodynamic Equations, Mat. Sbornik, 47, 271-306, translated US Joint Publ. Res. Service, JPRS 7225 November 29, 1960.
• Godunov, Sergei K. and Romenskii, Evgenii I. (2003) Elements of Continuum Mechanics and Conservation Laws, Springer, ISBN 0-306-47735-1.
• Hirsch, C. (1990), Numerical Computation of Internal and External Flows, vol 2, Wiley.
External links
• Sergei Godunov at the Mathematics Genealogy Project
• Godunov's Personal Web Page
• Sobolev Institute of Mathematics Archived 16 December 2014 at the Wayback Machine
Sergei Chernikov
Sergei Nikolaevich Chernikov (11 May 1912 – 23 January 1987; Russian: Сергей Николаевич Черников) was a Russian mathematician who contributed significantly to the development of infinite group theory and linear inequalities.
Sergei Chernikov
Сергей Николаевич Черников
Born: Sergei Nikolaevich Chernikov, 11 May 1912, Sergiyev Posad, Russia
Died: 23 January 1987 (aged 74)
Nationality: Russian
Known for: Group theory, linear programming
Awards: Krylov Prize (1973); Order of Friendship of Peoples (1982)
Fields: Mathematics
Doctoral advisor: Aleksandr Gennadievich Kurosh
Notable students: Victor Glushkov
Biography
Chernikov was born on 11 May 1912 in Sergiyev Posad, in Moscow Oblast, Russia, to Nikolai Nikolaevich, a priest, and Anna Alekseevna, a housewife.[1] After graduating from secondary school, he worked as a labourer, as a driver, as a book-keeper and as an accountant. Until November 1931 he taught mathematics in a school for workers. From 1930 he was an external student of the Pedagogic Institute of Saratov State University, where he graduated in 1933.[1] He began graduate studies at the Ural Industrial Institute under the outside tutelage of Alexandr G. Kurosh (of the University of Moscow).[2] A remarkable student, Chernikov was made head of the Ural Mathematics department (1939–1946) immediately after earning his PhD in 1938, even before defending his DSc in 1940.[3] He went on to be head of mathematical departments at Ural State University (1946–1951), Perm State University (1951–1961), the Steklov Institute of Mathematics (1961–1964), and finally the National Academy of Sciences of Ukraine from 1964 until days before his death in 1987.[3] During his career, he trained more than 40 PhD and 7 DSc students,[3] and published dozens of papers that remained influential 100 years after his birth.[4]
Contributions
Chernikov is credited with introducing a number of fundamental concepts to group theory, including the locally finite group and the nilpotent group.[3][5] As with many of his other contributions, these allow infinite groups to be partially or locally solved, establishing important early links between finite and infinite group theories. Later in his career, he was hailed as "one of the pioneers of linear programming"[3] for his breakthrough algebraic theory of linear inequalities.[6]
Published works
• Chernikov S.N. (1939) Infinite special groups. Mat. Sbornik 6, 199–214
• Chernikov S.N. (1940) Infinite locally soluble groups. Mat. Sbornik 7, 35–61
• Chernikov S.N. (1940) To theory of infinite special groups. Mat. Sbornik 7, 539–548.
• Chernikov S.N. (1940) On groups with Sylow sets. Mat. Sbornik 8, 377–394.
• Chernikov S.N. (1943) To theory of locally soluble groups. Mat. Sbornik 13, 317–333.
• Chernikov S.N. (1946) Divisible groups possessing an ascending central series. Mat. Sbornik 18, 397–422.
• Chernikov S.N. (1947) To the theory of finite p-extensions of abelian p-groups. Doklady AN USSR 58, 1287–1289.
• Kurosh A.G., Chernikov S.N. (1947) Soluble and nilpotent groups. Uspekhi Math. Nauk 2, number 3, 18–59.
• Chernikov S.N. (1948) Infinite layer – finite groups. Mat. Sbornik 22, 101–133.
• Chernikov S.N. (1948) To the theory of divisible groups. Mat. Sbornik 22, 319–348.
• Chernikov S.N. (1948) A complement to the paper “To the theory of divisible groups”. Mat. Sbornik 22, 455–456.
• Chernikov S.N. (1949) To the theory of torsion-free groups possessing an ascending central series. Uchenye zapiski Ural University 7, 3–21.
• Chernikov S.N. (1950) On divisible groups with ascending central series. Doklady AN USSR 70, 965–968.
• Chernikov S.N. (1950) On a centralizer of divisible abelian normal subgroups in infinite periodic groups. Doklady AN USSR 72, 243–246.
• Chernikov S.N. (1950) Periodic ZA – extension of divisible groups. Mat. Sbornik 27, 117 – 128.
• Chernikov S.N. (1955) On complementability of Sylow p-subgroups in some classes of infinite groups. Mat. Sbornik. – 37, 557 – 566.
• Chernikov S.N. (1957) On groups with finite conjugacy classes. Doklady AN USSR 114, 1177 – 1179
• Chernikov S.N. (1957) On a structure of groups with finite conjugate classes. Doklady AN SSSR – 115, 60 – 63.
• Chernikov S.N. (1958) On layer – finite groups. Mat. Sbornik 45, 415–416.
• Chernikov S.N. (1959) Finiteness conditions in general group theory. Uspekhi Math. Nauk 14, 45 – 96.
• Chernikov S.N. (1960) On infinite locally finite groups with finite Sylow subgroups. Mat. Sbornik 52, 647 – 652.
• Chernikov S.N. (1967) Groups with prescribed properties of a system of infinite subgroups. Ukrain. Math. Journal 19, 111 – 131.
• Chernikov S.N. (1969) Investigations of groups with prescribed properties of subgroups. Ukrain. Math. Journal 21, 193 – 209.
• Chernikov S.N. (1971) On a problem of Schmidt. Ukrain. Math. Journal 23, 598 – 603
• Chernikov S.N. (1971) On groups with the restrictions for subgroups. “Groups with the restrictions for subgroups”, NAUKOVA DUMKA: Kyiv 17 – 39.
• Chernikov S.N. (1975) Groups with dense system of complement subgroups. “Some problems of group theory”, MATH. INSTITUT: Kyiv 5 – 29.
• Chernikov S.N. (1980) The groups with prescribed properties of systems of subgroups. NAUKA : Moscow.
• Chernikov S.N. (1980) Infinite groups, defined by the properties of system of infinite subgroups. “VI Simposium on group theory”, NAUKOVA DUMKA: Kyiv 5 – 22.
References
1. Eremin, I. I.; Makhnev, A. A. (2013). "On the 100th Birthday of Sergei Nikolaevich Chernikov" (PDF). Proceedings of the Steklov Institute of Mathematics. 283: S1–S5. doi:10.1134/S0081543813090010. hdl:10995/27285. S2CID 255273815.
2. J. J. O'Connor, E. F. Robertson (January 1999). "Sergei Nikolaevich Chernikov". MacTutor History of Mathematics Archive. University of Saint Andrews School of Mathematics and Statistics. Retrieved 7 July 2016.
3. Ershov, Y.L.; et al. (1988). "Sergei Nikolaevich Chernikov (obituary)". Russian Math. Surveys. 43 (2): 153–155. Bibcode:1988RuMaS..43..153E. doi:10.1070/RM1988v043n02ABEH001714. S2CID 250872381.
4. Dixon, M. R.; Kirichenko, V. V.; Kurdachenko, L. A.; Otal, J.; Semko, N. N.; Shemetkov, L. A.; Subbotin, I. Ya. (2012). "S. N. Chernikov and the development of infinite group theory" (PDF). Algebra and Discrete Mathematics. 13 (2): 169–208.
5. Plotkin, Boris. "Sergei Nikolaevich Chernikov. Memoirs" (PDF). Algebra and Discrete Mathematics. 14 (1): C–F. Retrieved 7 July 2016.
6. Chernikov S.N. (1971) On groups with the restrictions for subgroups. “Groups with the restrictions for subgroups”, NAUKOVA DUMKA: Kyiv 17–39.
External links
• Sergei Nikolaevich Chernikov's entry on Math-Net.ru
Sergei Starchenko
Sergei Stepanovich Starchenko (Сергей Степанович Старченко) is a mathematical logician who was born and grew up in the Soviet Union and now works in the USA.
Starchenko graduated from Novosibirsk State University in 1983 with an M.S. and received his Ph.D. (Russian Candidate degree) there in 1987. His doctoral dissertation, Number of models of Horn theories, was written under the supervision of Evgenii Andreevich Palyutin. Starchenko was an assistant professor of mathematics at Vanderbilt University and is now a full professor at the University of Notre Dame.
In 2013 he and Ya’acov Peterzil received the Karp Prize, shared with two other mathematicians, for their collaborative work. With Peterzil he applied the theory of o-minimal structures to problems in algebra and in real and complex analysis.
In 2010 Starchenko, together with Peterzil, was an invited speaker at the International Congress of Mathematicians in Hyderabad, with the talk Tame complex analysis and o-minimality. Starchenko became a Fellow of the American Mathematical Society in the class of 2017.
Selected publications
• with Y. Peterzil: Geometry, Calculus and Zil'ber Conjecture, Bulletin of Symbolic Logic, vol. 2, 1996, pp. 72–83. doi:10.2307/421047
• with Y. Peterzil: A trichotomy theorem for o-minimal structures, Proc. London Math. Soc., vol. 77, 1998, pp. 481–523 doi:10.1112/S0024611598000549
• with Y. Peterzil and A. Pillay: Definably simple groups in o-minimal structures, Transactions American Mathematical Society, vol. 352, 2000, pp. 4397–4419 doi:10.1090/S0002-9947-00-02593-9
• with Y. Peterzil: Uniform definability of the Weierstrass ℘-functions and generalized tori of dimension one, Selecta Math. (N.S.), vol. 10, 2004, pp. 525–550. doi:10.1007/s00029-005-0393-y
• with Y. Peterzil: Definability of restricted theta functions and families of abelian varieties, Duke Math. J., vol. 162, 2013, pp., 731–765. doi:10.1215/00127094-2080018
• with Peterzil: Mild manifolds and a non-standard Riemann existence theorem, Selecta Math. (N.S.), vol. 14, 2009, pp. 275–298. doi:10.1007/s00029-008-0064-x
• On the tomography theorem by P. Schapira: in: Model theory with applications to algebra and analysis. vol. 1, London Math. Soc. Lecture Note Ser., 349, Cambridge Univ. Press, Cambridge, 2008, pp. 283–292 doi:10.1017/CBO9780511735226.014
• with Rahim Moosa: K-analytic versus ccm-analytic sets in nonstandard compact complex manifolds, Fund. Math., vol. 198, 2008, pp. 139–148. doi:10.4064/fm198-2-4]
External links
• Sergei Starchenko, University of Notre Dame, selected publications with online links
Sergei Tabachnikov
Sergei Tabachnikov, also spelled Serge (born in 1956), is an American mathematician who works in geometry and dynamical systems. He is currently a Professor of Mathematics at Pennsylvania State University.
Biography
He earned his Ph.D. from Moscow State University in 1987 under the supervision of Dmitry Fuchs and Anatoly Fomenko.[1] He has been living and working in the USA since 1990.
From 2013 to 2015 Tabachnikov served as Deputy Director of the Institute for Computational and Experimental Research in Mathematics (ICERM) in Providence, Rhode Island.[2] He is now Emeritus Deputy Director of ICERM.[3]
He is a fellow of the American Mathematical Society.[4] He currently serves as Editor in Chief of the journal Experimental Mathematics.[5]
A paper on the variability hypothesis by Theodore Hill and Tabachnikov was accepted and then retracted, first by The Mathematical Intelligencer and later by The New York Journal of Mathematics (NYJM). There was some controversy over the mathematical model, the peer-review process, and the lack of an official retraction notice from the NYJM.[6]
Selected publications
• Tabachnikov, Serge, ed. (1999). Differential and symplectic topology of knots and curves. Advances in the Mathematical Sciences. Vol. 42. American Mathematical Society. ISBN 978-0-8218-1354-6. MR 1738386.
• Farber, Michael; Tabachnikov, Serge (2002), "Topology of cyclic configuration spaces and periodic trajectories of multi-dimensional billiards", Topology, 41 (3): 553–589, arXiv:math/9911226, doi:10.1016/S0040-9383(01)00021-0, MR 1910041, S2CID 10350816
• Tabachnikov, Serge (2005), Geometry and Billiards, Providence, RI: American Mathematical Society, ISBN 978-0-8218-3919-5, MR 2168892
• Ovsienko, Valentin; Tabachnikov, Serge (2005), Projective differential geometry old and new. From the Schwarzian derivative to the cohomology of diffeomorphism groups, Cambridge Tracts in Mathematics, vol. 165, Cambridge University Press, ISBN 978-0-521-83186-4, MR 2177471
• Fuchs, Dmitry; Tabachnikov, Serge (2007), Mathematical Omnibus. 30 Lectures on Classical Mathematics, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4316-1, MR 2350979
References
1. "The Mathematics Genealogy Project - Serge Tabachnikov". Archived from the original on 2015-04-02. Retrieved 2015-03-13.
2. https://icerm.brown.edu/about/include/icerm%20newsletter_summer%202015.pdf ICERM Summer 2015 Newsletter
3. ICERM Emeritus Leadership
4. "American Mathematical Society".
5. Experimental Mathematics: Editorial board
6. Azvolinsky, Anna (2018-09-27). "A Twice-Retracted Paper on Sex Differences Ignites Debate". The Scientist. Retrieved 2018-11-03.
External links
• Sergei Tabachnikov at the Mathematics Genealogy Project
• Sergei Tabachnikov publications indexed by Google Scholar
• Homepage
Sergei Vasilyevich Kerov
Sergei Vasilyevich Kerov (Russian: Сергей Васильевич Керов; 21 June 1946, Leningrad – 30 July 2000) was a Russian mathematician and university professor. His research included operator algebras, combinatorics, probability and representation theory.[1][2]
Life
Kerov was born in 1946 in Leningrad (now St. Petersburg). His father Vasily Kerov was a teacher for analytical chemistry at a university in Leningrad and his mother Marianna Nikolayeva was an expert in seed physiology.[1]
Kerov studied at the Saint Petersburg State University. He obtained a PhD in 1975 under the supervision of Anatoly Vershik. He was then a professor at various universities in St. Petersburg, including the Herzen Pedagogical University and the University of Saint Petersburg. From 1993 he did research at the Steklov Institute of Mathematics in St. Petersburg. In 1994 he received a Sc.D. (Doctor of Science) from the Steklov Institute for his work Asymptotic Representation Theory of the Symmetric Group, with Applications to Analysis. From 1995 he was a professor at the University of Saint Petersburg.[1]
In 2000 he died of a brain tumor.[1]
Work
A list of Kerov's scientific articles was published in the Journal of Mathematical Sciences.[3]
In 1977 he proved, together with Anatoly Vershik, a limit theorem for the Plancherel measure of the symmetric group, with a limiting shape of the Young diagrams now called the Vershik–Kerov curve.[4][5] The same result was proved independently by Logan and Shepp, so the curve is also called the Logan–Shepp curve. Kerov later sharpened the result to a central limit theorem.[6]
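A hedged numerical companion to this result (sampling via uniform random permutations and patience sorting is a choice of this demo): under the Plancherel measure the first row of the Young diagram of a random permutation of size n, equivalently the length of its longest increasing subsequence, grows like 2√n.

    import bisect, random

    def lis_length(perm):
        """Patience sorting: length of the longest increasing subsequence,
        which equals the first row of the RSK shape of the permutation."""
        piles = []
        for v in perm:
            j = bisect.bisect_left(piles, v)
            if j == len(piles):
                piles.append(v)
            else:
                piles[j] = v
        return len(piles)

    random.seed(0)
    for n in (100, 10000, 1000000):
        perm = random.sample(range(n), n)
        print(n, lis_length(perm) / n ** 0.5)   # the ratios drift toward 2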
Publications (Selections)
Research papers
• Kerov, S. V. (1999). "Rooks on Ferrers boards and matrix integrals". Journal of Mathematical Sciences. Springer. 96 (5): 3531–3536. doi:10.1007/BF02175831.
• Kerov, S. V.; Olshanski, G. I. (1994). "Polynomial functions on the set of Young diagrams". C. R. Acad. Sci. Paris Sér. I. 319: 121–126.
• Vershik, A. M.; Kerov, S. V. (2007). "Four drafts on the representation theory of the group of infinite matrices over a finite field". Journal of Mathematical Sciences. 147 (6): 7129–7144. doi:10.1007/s10958-007-0535-1. S2CID 16877947.
Translated in English
• S. V. Kerov (2003). Asymptotic Representation Theory of the Symmetric Group and its Applications in Analysis. American Mathematical Society. ISBN 978-0-8218-3440-4.
References
1. "Brief Biography of Sergei Kerov". mathsoc.spb.ru. St. Petersburg Mathematical Society. Retrieved 2022-11-19.
2. "Научные сотрудники ЛОМИ / ПОМИ прошлых лет". www.pdmi.ras.ru. St. Petersburg Department of Steklov Mathematical Institute. Retrieved 2022-11-19.
3. "List of Publications of Sergei Kerov". Journal of Mathematical Sciences. Spriger. 121 (3): 2300–2302. 2004. doi:10.1023/B:JOTH.0000024610.98657.ac. S2CID 189866581.
4. A.M. Vershik; S.V. Kerov (1977). "Asymptotics of the Plancherel measure of the symmetric group and the limiting shape of Young tableaux". Soviet Math. Dokl. 18: 527–531.
5. Borodin, Alexei; Andrei Okounkov; Grigori Olshanski (2000). "Asymptotics of Plancherel Measures for Symmetric Groups". Journal of the American Mathematical Society. 13 (3): 481–515. doi:10.1090/S0894-0347-00-00337-4. JSTOR 2646116. S2CID 14183320.
6. V. Ivanov and G. Olshanski (2002). "Kerov's central limit theorem for the Plancherel measure on Young diagrams". In Symmetric Functions 2001: Surveys of Developments and Perspectives. Springer: 93–151. doi:10.1007/978-94-010-0524-1_3. ISBN 978-1-4020-0774-3. S2CID 9763630.
External links
• Сергей Васильевич Керов on the website of the Saint Petersburg Mathematical Society
• Publications
• Sergei Kerov on mathnet.ru
• Homepage at the Steklov Institute
Sergei Konyagin
Sergei Vladimirovich Konyagin (Russian: Серге́й Владимирович Конягин; born 25 April 1957)[1] is a Russian mathematician. He is a professor of mathematics at the Moscow State University.
Sergei Konyagin
Born: 25 April 1957
Nationality: Russian
Awards: Salem Prize
Fields: Mathematics
Institutions: Moscow State University
Doctoral advisor: Sergey Stechkin
Konyagin participated in the International Mathematical Olympiad for the Soviet Union, winning two consecutive gold medals with perfect scores in 1972 and 1973. At the age of 15, he became one of the youngest people to achieve a perfect score at the IMO.
In 1990 Konyagin was awarded the Salem Prize.
In 2012 he became a fellow of the American Mathematical Society.[2]
Selected works
• Konyagin, S.; Shparlinski, I. (1999). Character sums with exponential functions and their applications. Cambridge: Cambridge University Press. ISBN 0-521-64263-9.[3]
• Konyagin, S. V.; Schlag, W. (1999). "Lower bounds for the absolute value of random polynomials on a neighborhood of the unit circle". Transactions of the American Mathematical Society. 351 (12): 4963–4980. doi:10.1090/S0002-9947-99-02241-2. ISSN 0002-9947.
• Green, Ben; Konyagin, Sergei (2009). "On the Littlewood Problem Modulo a Prime". Can. J. Math. 61 (1): 141. arXiv:math/0601565. doi:10.4153/cjm-2009-007-4. S2CID 14997570.
• Filaseta, Michael; Ford, Kevin; Konyagin, Sergei; Pomerance, Carl; Yu, Gang (2006). "Sieving by large integers and covering systems of congruences". Journal of the American Mathematical Society. 20 (2): 495–517. doi:10.1090/S0894-0347-06-00549-2. S2CID 8529007.
• Bourgain, Jean; Konyagin, Sergei V.; Shparlinski, Igor E. (2015). "Character sums and deterministic polynomial root finding in finite fields". Mathematics of Computation. 84 (296): 2969–2977. arXiv:1308.4803. doi:10.1090/mcom/2946. S2CID 14451901.
• Konyagin, Sergei V.; Shparlinski, Igor E. (2015). "Quadratic non-residues in short intervals". Proceedings of the American Mathematical Society. 143 (10): 4261–4269. doi:10.1090/S0002-9939-2015-12584-1. S2CID 119171768.
References
1. Info at MathNet.ru
2. List of Fellows of the American Mathematical Society, retrieved 2013-01-27.
3. Duke, W. (2002). "Book Review: Character sums with exponential functions and their applications". Bulletin of the American Mathematical Society. 39 (2): 293–298. doi:10.1090/S0273-0979-02-00937-0. ISSN 0273-0979.
External links
• Sergei Konyagin at the Mathematics Genealogy Project
• Sergei Konyagin's results at International Mathematical Olympiad
Sergei Stepanov (mathematician)
Sergei Aleksandrovich Stepanov (Сергей Александрович Степанов;[1] born 24 February 1941) is a Russian mathematician, specializing in number theory. He is known for his 1969 proof, using elementary methods, of the Riemann hypothesis for zeta-functions of hyperelliptic curves over finite fields, first proved by André Weil in 1940–1941 using sophisticated, deep methods in algebraic geometry.
In 1977 Stepanov received his Russian doctorate (higher doctoral degree) from the Steklov Institute under Dmitry Konstantinovich Faddeev with a dissertation titled (in translation) An elementary method in algebraic number theory.[2] From 1987 to 2000 he was a professor at the Steklov Institute in Moscow.[3] In the 1990s he was also at Bilkent University in Ankara. He is at the Institute for Problems of Information Transmission of the Russian Academy of Sciences.
Stepanov is best known for his work in arithmetic algebraic geometry, especially on the Weil conjectures for algebraic curves. In 1969 he gave an "elementary" proof (i.e., one using elementary methods) of a result first proved by André Weil using sophisticated methods not readily understandable by mathematicians who are not specialists in algebraic geometry. Wolfgang M. Schmidt extended Stepanov's methods to prove the general result, and Enrico Bombieri succeeded in using the work of Stepanov and Schmidt to give a substantially simplified, elementary proof of the Riemann hypothesis for zeta-functions of curves over finite fields.[4][5][6] Stepanov's research also deals with applications of algebraic geometry to coding theory.
He was an invited speaker at the ICM in Vancouver in 1974.[7][8] In 1975 he received the USSR State Prize.[3] He was elected a Fellow of the American Mathematical Society in 2012.
Selected publications
• Codes on Algebraic Curves, Kluwer 1999
• Arithmetic of Algebraic Curves, New York, Plenum Publishing 1994,[9] Russian original: Moscow, Nauka, 1991.
• as editor with Cem Yildirim: Number theory and its applications, Marcel Dekker 1999
References
1. sometimes transliterated Serguei A. Stepanov, e.g. in the book he edited, Number theory and its applications, 1999
2. S. A. Stepanov, An elementary method in algebraic number theory, Translated from Matematicheskie Zametki, Vol. 24, No. 3, pp. 425–431, September 1978. doi:10.1007/BF01097766
3. Steklov Mathematical Institute
4. Rosen, Michael (2002). Number Theory in Function Fields. Springer. p. 329. ISBN 9781475760460.
5. Bombieri, Enrico. "Counting points on curves over finite fields (d´après Stepanov)". In: Seminaire Bourbaki, Nr.431, 1972/73. Lecture Notes in Mathematics, vol. 383. Springer.
6. Stepanov, S. A. (1969). "On the number of points of a hyperelliptic curve over a finite prime field". Mathematics of the USSR-Izvestiya. 3 (5): 1103. doi:10.1070/IM1969v003n05ABEH000834.
7. S. A. Stepanov, "элементарный метод в теории уравнений над конечными полями" “An elementary method in the theory of equations over finite fields,” in: Proc. Int. Cong. Mathematicians, Vancouver (1974), vol. 1, pp. 383–391.
8. Stepanov, S. A. (1977). "An elementary method in the theory of equations over finite fields". In Anosov, Dmitrij V. (ed.). 20 lectures delivered at the International Congress of Mathematicians in Vancouver, 1974. American Mathematical Society Translations, Series 2, Vol. 109. American Mathematical Soc. pp. 13–20. ISBN 9780821895467.
9. Silverman, Joseph H. (1996). "Review of Arithmetic of algebraic curves by Serguei Stepanov". Bull. Amer. Math. Soc. 33: 251–254. doi:10.1090/S0273-0979-96-00641-6.
External links
• Stepanov at Mathnet.ru
Sergey Bobkov
Sergey Bobkov (Russian: Сергей Германович Бобков; born March 15, 1961) is a mathematician. Currently Bobkov is a professor at the University of Minnesota, Twin Cities.
He was born in Vorkuta (Komi Republic, Russia) and graduated from the Department of Mathematics and Mechanics of Leningrad State University. In 1988 he earned his PhD in Mathematics and Physics (under the direction of Vladimir N. Sudakov, Steklov Institute of Mathematics), and in 1997 he earned his Doctor of Science degree. During 1998–2000 Bobkov held positions at Syktyvkar State University, Russia.[1] From 1995 to 1996 he was an Alexander von Humboldt Fellow at Bielefeld University, Germany. He spent the summers of 2001 and 2002 as an EPSRC Fellow at Imperial College London, UK.[2] Bobkov was awarded a Simons Fellowship (2012) and a Humboldt Research Award (2014).[3][4]
Bobkov is known for research in mathematics on the border of probability theory, analysis, convex geometry and information theory. He has achieved important results about isoperimetric problems, concentration of measure and other high-dimensional phenomena.
Bobkov's inequality is named after him.
References
1. "Curriculum Vitae: Sergey G. Bobkov" (PDF). University of Minnesota School of Mathematics. November 2013. Retrieved 27 May 2014.
2. "Sergey Bobkov". Current long-term visitors. Simons Institute for the Theory of Computing. Retrieved 2014-05-05.
3. "Awardees: Mathematics". Humboldt Network. Alexander von Humboldt-Stiftung/Foundation. Retrieved 2014-05-05.
4. "2012 Simons Fellows Awardees: Mathematics". Simons Foundation. Retrieved 27 May 2014.
Sergey Kislitsyn
Sergey S. Kislitsyn (Russian: Серге́й Серге́евич Кисли́цын) is a Russian mathematician, specializing in combinatorics and coding theory.
Kislitsyn was born January 5, 1935, in Ivanovo, Soviet Union. He received his M.S. in mathematics from Leningrad State University in 1957. From 1962 until 1970 he worked at the Yekaterinburg branch of the Steklov Institute of Mathematics (Krasovsky Institute of Mathematics and Mechanics). He defended his Ph.D. thesis in 1964 and subsequently worked as a lecturer at Krasnoyarsk State University.[1]
Kislitsyn is known for posing the 1/3–2/3 conjecture for linear extensions of finite posets, which he published in 1968.[2] The conjecture has been established in several special cases but remains open in full generality.[3][4]
References
1. "Кислицын Сергей Сергеевич". Биобиблиографический Указатель Научных Трудов Сотрудников Института Математики и Механики УрО РАН до 1975 г [Bio-Bibliographic Index of Scientific Works of Employees of the Institute of Mathematics and Mechanics UB RAS before 1975] (PDF) (in Russian). Vol. 1. Yekaterinburg, Russia: Ural Branch of the Russian Academy of Sciences. 2010. pp. 87–89.
2. Kislitsyn, S. S. (1968). "A finite partially ordered set and its corresponding set of permutations". Mathematical Notes. 4 (5): 798–801. doi:10.1007/BF01111312. S2CID 120228193.
3. Olson, Emily J.; Sagan, Bruce E. (2018). "On the 1/3–2/3 conjecture". Order. 35 (3): 581–596. doi:10.1007/s11083-017-9450-3. MR 3861401. S2CID 52965439.
4. Brightwell, Graham (1999-04-28). "Balanced pairs in partial orders". Discrete Mathematics. 201 (1): 25–52. doi:10.1016/S0012-365X(98)00311-2. ISSN 0012-365X.
Sergei Novikov (mathematician)
Sergei Petrovich Novikov (also Serguei) (Russian: Серге́й Петро́вич Но́виков) (born 20 March 1938) is a Soviet and Russian mathematician, noted for work in both algebraic topology and soliton theory. In 1970, he won the Fields Medal.
Born: 20 March 1938, Gorky, Russian SFSR, Soviet Union
Alma mater: Moscow State University
Known for: Adams–Novikov spectral sequence, Krichever–Novikov algebras, Morse–Novikov theory, Novikov conjecture, Novikov ring, Novikov–Shubin invariant, Novikov–Veselov equation, Novikov's compact leaf theorem, Wess–Zumino–Novikov–Witten model
Awards: Lenin Prize (1967), Fields Medal (1970), Lobachevsky Medal (1981), Wolf Prize (2005), Lomonosov Gold Medal (2020)
Fields: Mathematics
Institutions: Moscow State University, Independent University of Moscow, Steklov Institute of Mathematics, University of Maryland
Doctoral advisor: Mikhail Postnikov
Doctoral students: Victor Buchstaber, Boris Dubrovin, Sabir Gusein-Zade, Gennadi Kasparov, Alexandr Mishchenko, Iskander Taimanov, Anton Zorich, Fedor Bogomolov
Early life
Novikov was born on 20 March 1938 in Gorky, Soviet Union (now Nizhny Novgorod, Russia).[1]
He grew up in a family of talented mathematicians. His father was Pyotr Sergeyevich Novikov, who gave a negative solution to the word problem for groups. His mother, Lyudmila Vsevolodovna Keldysh, and maternal uncle, Mstislav Vsevolodovich Keldysh, were also important mathematicians.[1]
In 1955 Novikov entered Moscow State University, from which he graduated in 1960. Four years later he received the Moscow Mathematical Society Award for young mathematicians. In the same year he defended a dissertation for the Candidate of Science in Physics and Mathematics degree (equivalent to the PhD) at Moscow State University. In 1965 he defended a dissertation for the Doctor of Science in Physics and Mathematics degree there. In 1966 he became a Corresponding member of the Academy of Sciences of the Soviet Union.
Research in topology
Novikov's early work was in cobordism theory, in relative isolation. Among other advances he showed how the Adams spectral sequence, a powerful tool for proceeding from homology theory to the calculation of homotopy groups, could be adapted to the new (at that time) cohomology theory typified by cobordism and K-theory. This required the development of the idea of cohomology operations in the general setting, since the basis of the spectral sequence is the initial data of Ext functors taken with respect to a ring of such operations, generalising the Steenrod algebra. The resulting Adams–Novikov spectral sequence is now a basic tool in stable homotopy theory.[2][3]
Novikov also carried out important research in geometric topology, being one of the pioneers with William Browder, Dennis Sullivan, and C. T. C. Wall of the surgery theory method for classifying high-dimensional manifolds. He proved the topological invariance of the rational Pontryagin classes, and posed the Novikov conjecture. This work was recognised by the award in 1970 of the Fields Medal. He was not allowed to travel to Nice to accept his medal, but he received it in 1971 when the International Mathematical Union met in Moscow. From about 1971 he moved to work in the field of isospectral flows, with connections to the theory of theta functions. Novikov's conjecture about the Riemann–Schottky problem (characterizing principally polarized abelian varieties that are the Jacobian of some algebraic curve) stated, essentially, that this was the case if and only if the corresponding theta function provided a solution to the Kadomtsev–Petviashvili equation of soliton theory. This was proved by Takahiro Shiota (1986),[4] following earlier work by Enrico Arbarello and Corrado de Concini (1984),[5] and by Motohico Mulase (1984).[6]
Later career
Since 1971 Novikov has worked at the Landau Institute for Theoretical Physics of the USSR Academy of Sciences. In 1981 he was elected a Full Member of the USSR Academy of Sciences (Russian Academy of Sciences since 1991). In 1982 Novikov was also appointed the Head of the Chair in Higher Geometry and Topology at Moscow State University.
In 1984 he was elected as a member of the Serbian Academy of Sciences and Arts.
As of 2004, Novikov is the Head of the Department of Geometry and Topology at the Steklov Mathematical Institute. He is also a Distinguished University Professor at the Institute for Physical Science and Technology, part of the College of Computer, Mathematical, and Natural Sciences at the University of Maryland, College Park,[7] and is a Principal Researcher of the Landau Institute for Theoretical Physics in Moscow.
In 2005 Novikov was awarded the Wolf Prize for his contributions to algebraic topology, differential topology and to mathematical physics.[8] He is one of just eleven mathematicians who received both the Fields Medal and the Wolf Prize. In 2020 he received the Lomonosov Gold Medal of the Russian Academy of Sciences.[9]
Writings
• Novikov, S. P.; Fomenko, A. T. (1990). Basic Elements of Differential Geometry and Topology. Mathematics and Its Applications. Vol. 60. Dordrecht: Springer Netherlands. doi:10.1007/978-94-015-7895-0. ISBN 978-90-481-4080-0.
• Novikov, S. P.; Manakov, S. V.; Pitaevskii, L. P.; Zakharov, V. E. (1984). Theory of solitons: the inverse scattering method. New York: Consultants Bureau. ISBN 0-306-10977-8. OCLC 10071941.
• with Dubrovin and Fomenko: Modern geometry - methods and applications, Vol. 1-3, Springer, Graduate Texts in Mathematics (originally 1984, 1988, 1990; V.1 The geometry of surfaces and transformation groups, V.2 The geometry and topology of manifolds, V.3 Introduction to homology theory)
• Topics in Topology and mathematical physics, AMS (American Mathematical Society) 1995
• Integrable systems - selected papers, Cambridge University Press 1981 (London Math. Society Lecture notes)
• Novikov, S. P.; Taimanov, I. A. (2007). Topological Library: Part 1: Cobordisms and Their Applications. Series on Knots and Everything. Vol. 39. Translated by Manturov, V. O. World Scientific. doi:10.1142/6379. ISBN 978-981-270-559-4.
• with V. I. Arnold as editor and co-author: Dynamical systems, 1994, Encyclopedia of mathematical sciences, Springer
• Topology I: general survey, V. 12 of Topology Series of Encyclopedia of mathematical sciences, Springer 1996; 2013 edition
• Solitons and geometry, Cambridge 1994
• as editor, with Buchstaber: Solitons, geometry and topology: on the crossroads, AMS, 1997
• with Dubrovin and Krichever: Topological and Algebraic Geometry Methods in contemporary mathematical physics V.2, Cambridge
• My generation in mathematics, Russian Mathematical Surveys V.49, 1994, p. 1 doi:10.1070/RM1994v049n06ABEH002446
See also
• Novikov–Shubin invariant
• Novikov ring
• Novikov inequalities
References
1. O'Connor, John J.; Robertson, Edmund F., "Sergei Petrovich Novikov", MacTutor History of Mathematics Archive, University of St Andrews
2. Zahler, Raphael (1972). "The Adams-Novikov Spectral Sequence for the Spheres". Annals of Mathematics. 96 (3): 480–504. doi:10.2307/1970821. JSTOR 1970821.
3. Botvinnik, Boris I. (1992). Manifolds with Singularities and the Adams-Novikov Spectral Sequence. Cambridge University Press. p. xi. ISBN 9780521426084.
4. Shiota, Takahiro (1986). "Characterization of Jacobian varieties in terms of soliton equations". Inventiones Mathematicae. 83 (2): 333–382. Bibcode:1986InMat..83..333S. doi:10.1007/BF01388967. S2CID 120739493.
5. Arbarello, Enrico; De Concini, Corrado (1984). "On a set of equations characterizing Riemann matrices". Annals of Mathematics. 120 (1): 119–140. doi:10.2307/2007073. JSTOR 2007073.
6. Mulase, Motohico (1984). "Cohomological structure in soliton equations and Jacobian varieties". Journal of Differential Geometry. 19 (2): 403–430. doi:10.4310/jdg/1214438685. MR 0755232.
7. "Faculty/Staff Directory Search". University of Maryland. Retrieved 22 April 2016.
8. The Wolf Foundation – "Sergei P. Novikov Winner of Wolf Prize in Mathematics - 2005"
9. Lomonosov Gold Medal 2020
External links
• Homepage and Curriculum Vitae on the website of Steklov Mathematical Institute
• Biography (in Russian) on the website of Moscow State University
• O'Connor, John J.; Robertson, Edmund F., "Sergei Novikov (mathematician)", MacTutor History of Mathematics Archive, University of St Andrews
• Sergei Novikov at the Mathematics Genealogy Project
Sergey Solovyov (mathematician)
Sergey Solovyov (Russian: Серге́й Ю́рьевич Соловьёв; born 1955) is a Russian mathematician (Dr.Sc., Professor) and a professor at the Faculty of Computer Science of Moscow State University.[1]
Born: 3 February 1955, Kiev
Education: Doctor of Science (1996), Professor (2003)
Alma mater: Moscow State University (1977)
Fields: Mathematics
Institutions: MSU CMC
Thesis: Mathematical methods and principles of building automated knowledge engineering systems (1996)
Doctoral advisors: Nikolay Trifonov, Mikhail Malkovsky
He graduated from the MSU Faculty of Computational Mathematics and Cybernetics (CMC) in 1977.
In 1996 he defended the thesis "Mathematical methods and principles of building automated knowledge engineering systems" for the degree of Doctor of Physical and Mathematical Sciences.[2]
He was awarded the title of Professor in 2003.
His area of scientific interest is information systems. He is the project manager of the Glossary project.[3] He is the author of more than 70 scientific works on formal grammars, expert systems, experimental data processing systems, and network technologies.[4][5]
References
1. Faculty of Computational Mathematics and Cybernetics 2010, p. 413.
2. Solovyov, Sergey (1996). "Mathematical methods and principles of building automated knowledge engineering systems". Russian State Library (in Russian). Retrieved 2018-06-14.
3. Faculty of Computational Mathematics and Cybernetics 2010, p. 414.
4. Scientific works of Sergey Solovyov
5. Scientific works of Sergey Solovyov
Literature
• Faculty of Computational Mathematics and Cybernetics: History and Modernity: A Biographical Directory (1 500 экз ed.). Moscow: Publishing house of Moscow University. Author-compiler Evgeny Grigoriev. 2010. pp. 413–414. ISBN 978-5-211-05838-5.
External links
• Solovyov, Sergey (1996). "Mathematical methods and principles of building automated knowledge engineering systems". Russian State Library (in Russian). Retrieved 2018-06-14.
• Scientific works of Sergey Solovyov
• Scientific works of Sergey Solovyov
Sergey Bolotin
Sergey Vladimirovich Bolotin (Сергей Владимирович Болотин, born 1 December 1954 in Moscow) is a Russian mathematician, specializing in dynamical systems of classical mechanics.[1]
Biography
Bolotin graduated in 1976 from the Faculty of Mechanics and Mathematics of Moscow State University. There he received his Candidate of Sciences degree (PhD) in 1981[2] with the thesis Либрационные движения обратимых механических систем (Librational motions of reversible mechanical systems). In 1998[1] he received his Russian Doctor of Sciences degree (habilitation) with the thesis Двоякоасимптотические траектории и условия интегрируемости гамильтоновых систем[3] (Doubly asymptotic trajectories and integrability conditions for Hamiltonian systems).
Since 1998 Bolotin has been a professor in the Department of Theoretical Mechanics, Faculty of Mechanics and Mathematics, Moscow State University.[2] He is now the head of the Mechanics Department of the Steklov Institute of Mathematics.[4] His research deals with dynamical systems of classical mechanics, Hamiltonian systems, and variational methods.[1] He has supervised four PhD (Candidate of Sciences) students. He is the author or coauthor of over 75 scientific publications, including a textbook on theoretical mechanics (2010).[2] He has served on the editorial board of the journal Regular and Chaotic Dynamics.
In 1994 he was an invited speaker, with the talk Invariant Sets of Hamiltonian Systems and Variational Methods, at the International Congress of Mathematicians in Zurich.[5] In 2016 he was elected a corresponding member of the Russian Academy of Sciences.[2]
As a hobby, Bolotin sails in Olympic class Finn dinghies.[6]
His brother Yuri Vladimirovich Bolotin (born December 1, 1954) is a professor at Moscow State University.[7] Both brothers in 2020 became champions of Russia in the class of yachts "Carter 30".[8]
Selected publications
• Bolotin, S.V.; Kozlov, V.V. (1978). "Libration in systems with many degrees of freedom". Journal of Applied Mathematics and Mechanics. 42 (2): 256–261. doi:10.1016/0021-8928(78)90141-7.
• Bolotin, S. V. (1984). "First integrals of systems with gyroscopic forces". Moskovskii Universitet Vestnik Seriia Matematika Mekhanika: 75. Bibcode:1984MVSMM.......75B.
• Bolotin, S.V. (1984). "The effect of singularities of the potential energy on the integrability of mechanical systems". Journal of Applied Mathematics and Mechanics. 48 (3): 255–260. Bibcode:1984JApMM..48..255B. doi:10.1016/0021-8928(84)90128-X.
• Bolotin, S. V. (1988). "The Hill determinant of a periodic orbit". Moskovskii Universitet Vestnik Seriia Matematika Mekhanika: 30. Bibcode:1988MVSMM.......30B.
• Bolotin, S. V. (1992). "Integrable billiards on surfaces of constant curvature". Mathematical Notes. 51 (2): 117–123. doi:10.1007/BF02102114. S2CID 124726822.
• Bolotin, Sergey V. (1995). "Homoclinic orbits to invariant tori of Hamiltonian systems". Dynamical Systems in Classical Mechanics. Translations of the American Mathematical Society, Series 2. Vol. 168. pp. 21–90. ISBN 9780821804278.
• Bolotin, S.V.; Rabinowitz, P.H. (1998). "A Variational Construction of Chaotic Trajectories for a Reversible Hamiltonian System". Journal of Differential Equations. 148 (2): 364–387. Bibcode:1998JDE...148..364B. doi:10.1006/jdeq.1998.3470.
• Bolotin, S. V.; MacKay, R. S. (2000). "Periodic and Chaotic Trajectories of the Second Species for the n-Centre Problem". Celestial Mechanics and Dynamical Astronomy. 77 (1): 49–75. Bibcode:2000CeMDA..77...49B. doi:10.1023/A:1008393706818. S2CID 116941485.
• Bolotin, S.V.; Treschev, D.V. (2000). Regular and Chaotic Dynamics. 5 (4): 401. doi:10.1070/RD2000v005n04ABEH000156.
• Bolotin, Sergey V.; Treschev, Dmitrii V. (2010). "Hill's formula". Russian Mathematical Surveys. 65 (2): 191–257. arXiv:1006.1532. Bibcode:2010RuMaS..65..191B. doi:10.1070/RM2010v065n02ABEH004671. S2CID 119306867.
• Bolotin, S. V.; Kozlov, V. V. (2015). "Calculus of variations in the large, existence of trajectories in a domain with boundary, and Whitney's inverted pendulum problem". Izvestiya: Mathematics. 79 (5): 894–901. Bibcode:2015IzMat..79..894B. doi:10.1070/IM2015v079n05ABEH002765.
• Bolotin, S. V.; Treschev, D. V. (2015). "The anti-integrable limit". Russian Mathematical Surveys. 70 (6): 975–1030. Bibcode:2015RuMaS..70..975B. doi:10.1070/RM2015v070n06ABEH004972.
References
1. "Bolotin, Sergey Vladimirovich (with list of publications)". mathnet.ru.
2. "Болотин Сергей Владимирович". Летопись Московского университета (Annals of Moscow State University.
3. "Двоякоасимптотические траектории и условия интегрируемости гамильтоновых систем". fizmathim.com.
4. "Болотин Сергей Владимирович". istina.msu.ru.
5. Bolotin, Sergey V. (1995). "Invariant Sets of Hamiltonian Systems and Variational Methods". Proceedings of the International Congress of Mathematicians, 1994 Zürich. Birkhäuser, Basel. pp. 1169–1178. doi:10.1007/978-3-0348-9078-6_110. ISBN 978-3-0348-9897-3.
6. Результаты (Results). Регата памяти Евгения Истомина. 9 августа 2020 года. Официальный сайт Российской ассоциации класса "Финн". Болотин Сергей ... (Regatta in memory of Yevgeny Istomin. 9 August 2020. The official site of the Russian Finn Class Association. Bolotin Sergey ...)
7. "К юбилею Юрия Владимировича Болотина] (On the anniversary of Yuri Vladimirovich Bolotin)". МГУ, официальный сайт, 1 декабря 2014 года (МГУ Moscow State University, official site). 1 December 2014.
8. "Чемпионат России по парусному спорту в классе "Крейсеркая яхта "Картер 30" (Russian sailing championship in the class "Cruising yacht" Carter 30)" (PDF). Официальный сайт Федерации парусного спорта Московской области, сентябрь 2020 года (Official site of the Sailing Federation of the Moscow Region). September 2020.
Sergey Fomin
Sergey Vladimirovich Fomin (Сергей Владимирович Фомин) (born 16 February 1958 in Saint Petersburg, Russia) is a Russian American mathematician who has made important contributions in combinatorics and its relations with algebra, geometry, and representation theory. Together with Andrei Zelevinsky, he introduced cluster algebras.
Born: 16 February 1958, Saint Petersburg, Russia
Nationality: Russia, United States
Alma mater: Saint Petersburg State University
Known for: Cluster algebras
Awards: Leroy P. Steele Prize (2018)
Fields: Mathematics
Institutions: University of Michigan
Doctoral advisors: Anatoly Vershik, Leonid Osipov
Biography
Fomin received his M.Sc. in 1979 and his Ph.D. in 1982 from St. Petersburg State University under the direction of Anatoly Vershik and Leonid Osipov.[1] Prior to his appointment at the University of Michigan, he held positions at the Massachusetts Institute of Technology from 1992 to 2000, at the St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, and at the Saint Petersburg Electrotechnical University. Fomin studied at the 45th Physics-Mathematics School and later taught mathematics there.[2]
Research
Fomin's contributions include
• Discovery (with A. Zelevinsky) of cluster algebras.
• Work (jointly with A. Berenstein and A. Zelevinsky) on total positivity.
• Work (with A. Zelevinsky) on the Laurent phenomenon, including its applications to Somos sequences.
Awards and honors
• Simons Fellow (2019) [3]
• Steele Prize for Seminal Contribution to Research (2018).[4]
• Invited lecture at the International Congress of Mathematicians (Hyderabad, 2010).[5]
• Robert M. Thrall Collegiate Professor of Mathematics at the University of Michigan.
• Fellow (2012) of the American Mathematical Society.[6]
• Elected to the American Academy of Arts and Sciences, 2023.[7]
Selected publications
• Fomin, S.; Zelevinsky, A. (2003). "Y-systems and generalized associahedra". Annals of Mathematics. 158 (3): 977–1018. arXiv:math/0505518. doi:10.4007/annals.2003.158.977. S2CID 5153512.
• Fomin, S.; Zelevinsky, A. (2003). "Cluster algebras II: Finite type classification". Inventiones Mathematicae. 154 (1): 63–121. arXiv:math/0208229. Bibcode:2003InMat.154...63F. doi:10.1007/s00222-003-0302-y. S2CID 14540263.
• Fomin, S.; Zelevinsky, A. (2002). "Cluster algebras I: Foundations". Journal of the AMS. 15: 497–529.
• Fomin, S.; Gelfand, S.; Postnikov, A. (1997). "Quantum Schubert Polynomials". Journal of the AMS. 10: 565–596.
References
1. Sergey Fomin at the Mathematics Genealogy Project
2. "Буря в пустыне Математик Сергей Фомин оценил попытки властей спасти науку". No. 2 July 2010. Lenta.ru.
3. Simons Fellows in Mathematics
4. 2018 Steele Prize for Seminal Contribution to Research in Discrete Mathematics/Logic to Sergey Fomin and Andrei Zelevinsky
5. ICM'10 invited speakers
6. List of Fellows of the American Mathematical Society
7. "New members". American Academy of Arts and Sciences. 2023. Retrieved 2023-04-21.
External links
• Home page of Sergey Fomin
Sergey Yablonsky
Sergey Vsevolodovich Yablonsky (Russian: Серге́й Все́володович Ябло́нский, 6 December 1924 – 26 May 1998) was a Soviet and Russian mathematician, one of the founders of the Soviet school of mathematical cybernetics and discrete mathematics. He is the author of a number of classic results on synthesis, reliability, and classification of control systems (Russian: Управляющие системы), the term used in the USSR and Russia for a generalization of finite state automata, Boolean circuits and multi-valued logic circuits.
Born: 6 December 1924, Moscow, Russia
Died: 26 May 1998 (aged 73), Moscow, Russia
Nationality: Russian
Alma mater: Moscow State University
Awards: Lenin Prize
Fields: Mathematics and discrete mathematics
Institutions: Moscow State University, Steklov Institute of Mathematics, Institute of Applied Mathematics
Doctoral advisors: Nina Bari, Pyotr Novikov
Doctoral students: Oleg Lupanov, Rafail Krichevskii
Yablonsky is credited with helping to overcome the pressure from Soviet ideologists against the term and the discipline of cybernetics, and with establishing what in the Soviet Union was called mathematical cybernetics as a separate field of mathematics. Yablonsky and his students were among the first in the world to raise the issue of the potentially inherent unavoidability of brute-force search for some problems, a precursor of the P = NP problem, though Gödel's letter to von Neumann, dated 20 March 1956 and discovered in 1988, may have preceded them.[1]
In Russia, a group led by Yablonsky had the idea that combinatorial problems are hard in proportion to the amount of brute-force search required to find a solution. In particular, they noticed that for many problems they could not find a useful way to organize the space of potential solutions so as to avoid brute-force search. They began to suspect that these problems had an inherently unorganized solution space, and that the best method for solving them would require enumerating an exponential (in the size of the problem instance) number of potential solutions. That is, the problems seemed to require $c^{n}$ "shots in the dark" (for some constant $c$) when the length of the problem description is $n$. However, despite their "leading-edge" taste in mathematics, Yablonsky's group never quite formulated this idea precisely.[2]
Biography
Childhood
Yablonsky was born in Moscow, to the family of a professor of mechanics. His mathematical talents became apparent at an early age. In 1940 he won the sixth Moscow secondary-school mathematical olympiad.[3]
War
In August 1942, after completing his first year at Moscow State University's Faculty of Mechanics and Mathematics, Yablonsky, then 17, went to serve in the Soviet Army, fighting in the Second World War as a member of tank brigade 242. For his service he was awarded two Orders of the Patriotic War, two Orders of the Red Star, the Order of Glory of the 3rd class, and numerous medals. He returned to his studies after the war ended in 1945 and went on to graduate with distinction.
Post-war period
Yablonsky graduated from the Faculty of Mechanics and Mathematics of Moscow State University in 1950. During his student years he worked under the supervision of Nina Bari. This collaboration resulted in his first research paper, "On the converging sequences of continuous functions" (1950).
He joined the graduate program of the Faculty of Mechanics and Mathematics in 1950, where his advisor was Pyotr Novikov. There Yablonsky's research concerned issues of expressibility in mathematical logic. He approached this problem in terms of the theory of k-valued discrete functions. Among the problems addressed in his PhD thesis, "Issues of functional completeness in k-valued calculus" (1953), is a definitive answer to the question of completeness in 3-valued logic.
Starting from 1953, Yablonsky worked at the Department of Applied Mathematics of the Steklov Institute of Mathematics, which in 1966 became the separate Institute of Applied Mathematics. Over the 1950s and 1960s, together with Alexey Lyapunov, Yablonsky organized the seminar on cybernetics, showing his support for the new field of mathematics that had been the subject of significant controversy fueled by Soviet ideologists. He actively participated in the creation of the periodical publication Problems of Cybernetics, with Lyapunov as its first editor-in-chief. Yablonsky succeeded Lyapunov as the editor-in-chief of Problems of Cybernetics in 1974 (the publication changed its name to Mathematical Issues of Cybernetics in 1989). In 1966 Yablonsky (together with Yuri Zhuravlyov and Oleg Lupanov) was awarded the Lenin Prize for their work on the theory of control systems (in the discrete-mathematical sense, as explained above). In 1968 Yablonsky was elected a corresponding member of the Academy of Sciences of the Soviet Union (division of mathematics).
Yablonsky played an active role in the creation of the Faculty of Computational Mathematics and Cybernetics at Moscow State University in 1970. In 1971 he became the founding head of the department of mathematical cybernetics (initially department of automata theory and mathematical logic) at the Faculty of Computational Mathematics and Cybernetics.[4]
References
1. Sipser, M. (1992), The history and status of the P versus NP question, in ‘Proceedings of the 24th Annual ACM Symposium on the Theory of Computing’, pp. 603–618.
2. Computational Complexity Theory (2004), Steven Rudich, Avi Wigderson, Editors, American Mathematical Society, page 12.
3. История информатики в России. Ученые и их школы. Сергей Всеволодович Яблонский [History of Informatics in Russia. Scientists and Their Schools. Sergey Vsevolodovich Yablonsky] (2003), Валерий Борисович Алексеев (Valery Borisovich Alekseev), Nauka Publishers, page 241.
4. Biography of S. V. Yablonsky at the website of the Department of Mathematical Cybernetics, Moscow State University (in Russian)
Sergio Campanato
Sergio Campanato (17 February 1930 – 1 March 2005) was an Italian mathematician who studied the theory of regularity for elliptic and parabolic partial differential equations.
Born: 17 February 1930, Venice, Italy
Died: 1 March 2005 (aged 75), Pisa, Italy
Nationality: Italian
Alma mater: University of Modena
Fields: Mathematics
Institutions: University of Pisa, Scuola Normale Superiore di Pisa
Career
He graduated in mathematics and physics at the University of Modena in the academic year 1952/54 with a thesis relating to the heat equation.[1] In 1956, he became an assistant to Enrico Magenes, with whom he worked on a problem of Picone relating to the equilibrium state of an elastic body, and on other differential equations related to electrostatics.
In 1964, he moved to the University of Pisa at the invitation of Alessandro Faedo, joining a group of mathematicians which included Aldo Andreotti, Jacopo Barsotti, Enrico Bombieri, Gianfranco Capriz, Ennio De Giorgi, Giovanni Prodi, Edoardo Vesentini, and Guido Stampacchia, with whom Campanato collaborated fruitfully.[2]
From 1975 until 2000 he taught Nonlinear Analysis at the Scuola Normale Superiore di Pisa. He died in Pisa on 1 March 2005.
Honors
• In 1985, the Accademia dei Lincei awarded him the "Premio Linceo" prize for his work on the regularity of nonlinear problems, connected with his eponymous Morrey–Campanato spaces.[3][4]
• In 2000, a conference was held in honor of his 70th birthday at SNS Pisa.
• In 2006, there was a conference held to commemorate his work at Erice, Sicily.[5]
Selected works
• Sui problemi al contorno relativi al sistema di equazioni differenziali dell'elastostatica piana. Rend. Sem. Mat. Univ. Di Padova 1956 XXV pp. 307–342
• Osservazioni sul problema di trasmissione per equazioni differenziali lineari del secondo ordine, Edizioni dell'Università di Genova, 1960.
• Sergio Campanato, Guido Stampacchia, Sulle maggiorazioni in Lp nella teoria delle equazioni ellittiche, Bollettino dell'Unione Matematica Italiana, Serie 3, Vol. 20 (1965), n.3, p. 393–399. Bologna, Zanichelli, 1965.
• Lezioni di analisi matematica, Pisa, Libreria scientifica Giordano Pellegrini, 1966.
• Sistemi ellittici in forma divergenza: regolarità all'interno, Pisa, edizioni della Scuola Normale Superiore, 1980.
• Regolarità Hölderiana parziale delle soluzioni di una classe di sistemi ellittici non lineari del secondo ordine, Bari, Laterza, 1982:
• Recent regularity results for H1,q-solutions on non linear elliptic systems, Volume 186 di Conferenze del Seminario di matematica dell'Università di Bari, Bari, Laterza, 1983.
• Teoria ... [L] e sistemi parabolici non lineari, Volume 196 di Conferenze del Seminario di Matematica dell'Università di Bari, Bari Laterza, 1984.
• Non variational basic parabolic systems of second order (Sistemi parabolici base non variazionali del 2º ordine), in: Atti dell'Accademia Nazionale dei Lincei. Classe di Scienze Fisiche, Matematiche e Naturali. Rendiconti Lincei. Matematica e Applicazioni, Serie 9, 2, fasc. n.2, p. 129–136, 1991.
• Attuale formulazione della teoria degli operatori vicini e attuale definizione di operatore ellittico, Le Matematiche, Vol. LI (1996) Fasc. II, pp. 291–298, 1996.
References
1. Pagni, Mauro (1957). "Su un problema al contorno tipico per l'equazione del calore in n+1 dimensioni" (PDF). Annali della Scuola Normale Superiore di Pisa, Classe di Scienze. 11: 209–216. Retrieved 4 November 2012.
2. "Presentazione del Dipartimento di Matematica dell'Università di Pisa". Retrieved 2 November 2012.
3. "Dalla presentazione al "Convegno sulle equazioni a derivate parziali per i 70 anni di Sergio Campanato". Retrieved 3 November 2012.
4. "Accademia Nazionale dei Lincei – Premio Linceo". Retrieved 3 November 2012.
5. "44th Workshop: VARIATIONAL ANALYSIS AND PARTIAL DIFFERENTIAL EQUATIONS. In memory of Sergio Campanato. The Award of the Second Gold Medal "G. Stampacchia"" (PDF). Retrieved 2 November 2012.
• Gary M. Lieberman, Second Order Parabolic Differential Equations, World Scientific Pub Co, 1996.
• Convegno sulle Equazioni a Derivate Parziali: per i 70 anni di Sergio Campanato, Scuola Normale Superiore di Pisa, 25–26 February 2000. Pisa, Edizioni del Dipartimento di Matematica e Informatica dell'Univ. in collaborazione con Sergio Campanato. 2000.
• Wen Yuan, Winfried Sickel, Dachun Yang, Morrey and Campanato meet Besov, Lizorkin and Triebel, London-New York, Springer, 2005.
External links
• Sergio Campanato at the Mathematics Genealogy Project
Sergiu Klainerman
Sergiu Klainerman (born May 13, 1950) is a mathematician known for his contributions to the study of hyperbolic differential equations and general relativity. He is currently the Eugene Higgins Professor of Mathematics at Princeton University, where he has been teaching since 1987.
Born: May 13, 1950, Bucharest, Romanian People's Republic
Nationality: Romanian American
Alma mater: University of Bucharest; New York University
Awards: Bôcher Prize (1999)[1]
Fields: Mathematics
Institutions: University of California, Berkeley; New York University; Princeton University
Thesis: Global Existence for Nonlinear Wave Equations (1978)
Doctoral advisors: Fritz John, Louis Nirenberg
Doctoral students: Gustavo Ponce
Biography
He was born in 1950 in Bucharest, Romania, into a Jewish family. After attending the Petru Groza High School, he studied mathematics at the University of Bucharest from 1969 to 1974. For graduate studies he went to New York University, obtaining his Ph.D. in 1978.[2] His thesis, written under the direction of Fritz John and Louis Nirenberg, was titled Global Existence for Nonlinear Wave Equations.[3] From 1978 to 1980 Klainerman was a Miller Research Fellow at the University of California, Berkeley, while from 1980 to 1987 he was a faculty member at New York University's Courant Institute of Mathematical Sciences, rising in rank to Professor in 1986.[2]
Klainerman is a member of the U.S. National Academy of Sciences (elected 2005),[4] a foreign member of the French Academy of Sciences (elected 2002)[5] and a Fellow of the American Academy of Arts and Sciences (elected 1996).[6] He was elected to the 2018 class of fellows of the American Mathematical Society.[7]
He was named a MacArthur Fellow in 1991[8] and Guggenheim Fellow in 1997.[9] Klainerman was awarded the Bôcher Memorial Prize by the American Mathematical Society in 1999 "for his contributions to nonlinear hyperbolic equations".[1] He is currently a co-Editor-in-Chief of Publications Mathématiques de l'IHÉS.[10]
Major publications
• Klainerman, Sergiu (1980). "Global existence for nonlinear wave equations". Communications on Pure and Applied Mathematics. 33 (1): 43–101. doi:10.1002/cpa.3160330104. MR 0544044.
• Klainerman, Sergiu; Majda, Andrew (1981). "Singular limits of quasilinear hyperbolic systems with large parameters and the incompressible limit of compressible fluids". Communications on Pure and Applied Mathematics. 34 (4): 481–524. Bibcode:1981CPAM...34..481K. doi:10.1002/cpa.3160340405. MR 0615627.
• Klainerman, Sergiu; Majda, Andrew. Compressible and incompressible fluids. Comm. Pure Appl. Math. 35 (1982), no. 5, 629–651.
• Klainerman, Sergiu. Global existence of small amplitude solutions to nonlinear Klein-Gordon equations in four space-time dimensions. Comm. Pure Appl. Math. 38 (1985), no. 5, 631–641.
• Klainerman, Sergiu. Uniform decay estimates and the Lorentz invariance of the classical wave equation. Comm. Pure Appl. Math. 38 (1985), no. 3, 321–332.
• Klainerman, S. The null condition and global existence to nonlinear wave equations. Nonlinear systems of partial differential equations in applied mathematics, Part 1 (Santa Fe, N.M., 1984), 293–326, Lectures in Appl. Math., 23, Amer. Math. Soc., Providence, RI, 1986.
• Klainerman, S.; Machedon, M. Space-time estimates for null forms and the local existence theorem. Comm. Pure Appl. Math. 46 (1993), no. 9, 1221–1268.
• Klainerman, S.; Machedon, M. Smoothing estimates for null forms and applications. A celebration of John F. Nash, Jr., Duke Mathematical Journal 81 (1995), no. 1, 99–133 (1996).
• Klainerman, Sergiu; Sideris, Thomas C. On almost global existence for nonrelativistic wave equations in 3D. Comm. Pure Appl. Math. 49 (1996), no. 3, 307–321.
Books
• Christodoulou, Demetrios; Klainerman, Sergiu. The global nonlinear stability of the Minkowski space. Princeton Mathematical Series, 41. Princeton University Press, Princeton, NJ, 1993. x+514 pp. ISBN 0-691-08777-6
• Klainerman, Sergiu; Nicolò, Francesco. The evolution problem in general relativity. Progress in Mathematical Physics, 25. Birkhäuser Boston, Inc., Boston, MA, 2003. xiv+385 pp. ISBN 0-8176-4254-4
References
1. 1999 Bôcher Prize, Notices of the American Mathematical Society, vol. 46 (1999), no. 4, pp. 463-466
2. "Sergiu Klainerman's Curriculum Vitae" (PDF).
3. Sergiu Klainerman at the Mathematics Genealogy Project
4. National Academy of Sciences elections, Notices of the American Mathematical Society, vol. 52 (2005), no. 7, p. 764
5. Sergiu Klainerman bio page Archived 2009-11-29 at the Wayback Machine, French Academy of Sciences. Accessed January 13, 2010.
6. American Academy Elections. Notices of the American Mathematical Society, vol. 43 (1996), no. 7, p. 781
7. 2018 Class of the Fellows of the AMS, American Mathematical Society, retrieved 2017-11-03
8. MacArthur Fellows, July 1991 Archived 2007-09-29 at the Wayback Machine, MacArthur Foundation. Accessed January 13, 2010.
9. Fellows list, Archived 2011-06-03 at the Wayback Machine Guggenheim Foundation. Accessed January 13, 2010.
10. Editorial Board, Publications Mathématiques de l'IHÉS. Accessed January 13, 2010
External links
• Sergiu Klainerman personal webpage, Department of Mathematics, Princeton University
Serial concatenated convolutional codes
Serial concatenated convolutional codes (SCCC) are a class of forward error correction (FEC) codes highly suitable for turbo (iterative) decoding.[1][2] Data to be transmitted over a noisy channel may first be encoded using an SCCC. Upon reception, the coding may be used to remove any errors introduced during transmission. The decoding is performed by repeated decoding and [de]interleaving of the received symbols.
SCCCs typically include an inner code, an outer code, and a linking interleaver. A distinguishing feature of SCCCs is the use of a recursive convolutional code as the inner code. The recursive inner code provides the 'interleaver gain' for the SCCC, which is the source of the excellent performance of these codes.
The analysis of SCCCs was spurred in part by the discovery of turbo codes in 1993, and took place in the 1990s in a series of publications from NASA's Jet Propulsion Laboratory (JPL). The research offered SCCCs as a form of turbo-like serial concatenated codes that 1) were iteratively ('turbo') decodable with reasonable complexity, and 2) gave error-correction performance comparable with the turbo codes.
Prior forms of serial concatenated codes typically did not use recursive inner codes. Additionally, the constituent codes used in prior forms of serial concatenated codes were generally too complex for reasonable soft-in-soft-out (SISO) decoding. SISO decoding is considered essential for turbo decoding.
Serial concatenated convolutional codes have not found widespread commercial use, although they were proposed for communications standards such as DVB-S2. Nonetheless, the analysis of SCCCs has provided insight into the performance and bounds of all types of iterative decodable codes including turbo codes and LDPC codes.
US patent 6,023,783 covers some forms of SCCCs. The patent expired on May 15, 2016.[3]
History
Serial concatenated convolutional codes were first analyzed with a view toward turbo decoding in "Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding" by S. Benedetto, D. Divsalar, G. Montorsi and F. Pollara.[4] This analysis yielded a set of observations for designing high performance, turbo decodable serial concatenated codes that resembled turbo codes. One of these observations was that "the use of a recursive convolutional inner encoder always yields an interleaver gain." This is in contrast to the use of block codes or non-recursive convolutional codes, which do not provide comparable interleaver gain.
Additional analysis of SCCCs was done in "Coding Theorems for 'Turbo-Like' Codes" by D. Divsalar, Hui Jin, and Robert J. McEliece.[5] This paper analyzed repeat-accumulate (RA) codes which are the serial concatenation of an inner two-state recursive convolutional code (also called an 'accumulator' or parity-check code) with a simple repeat code as the outer code, with both codes linked by an interleaver. The performance of the RA codes is quite good considering the simplicity of the constituent codes themselves.
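To make the RA construction concrete, the following is a minimal Python sketch (an illustration, not code from the cited paper; the repetition factor q and the fixed pseudo-random permutation are assumptions made for the example). Each data bit is repeated q times, the repeated stream is permuted by the interleaver, and a running XOR acts as the two-state recursive inner code, the accumulator.

```python
import random

def ra_encode(bits, q=3, seed=42):
    """Repeat-accumulate encoding: outer repeat code, interleaver,
    then a two-state recursive inner code (running XOR accumulator)."""
    repeated = [b for b in bits for _ in range(q)]   # outer repeat code
    perm = list(range(len(repeated)))
    random.Random(seed).shuffle(perm)                # the interleaver
    interleaved = [repeated[i] for i in perm]
    out, acc = [], 0
    for b in interleaved:
        acc ^= b                                     # y_k = y_{k-1} XOR x_k
        out.append(acc)
    return out

print(ra_encode([1, 0, 1, 1]))  # 12 code bits for 4 data bits (rate 1/3)
```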
SCCCs were further analyzed in "Serial Turbo Trellis Coded Modulation with Rate-1 Inner Code".[6] In this paper SCCCs were designed for use with higher-order modulation schemes. Excellent-performing codes with inner and outer constituent convolutional codes of only two or four states were presented.
Example Encoder
Fig. 1 is an example of an SCCC.
The example encoder is composed of a 16-state outer convolutional code and a 2-state inner convolutional code linked by an interleaver. The natural code rate of the configuration shown is 1/4, however, the inner and/or outer codes may be punctured to achieve higher code rates as needed. For example, an overall code rate of 1/2 may be achieved by puncturing the outer convolutional code to rate 3/4 and the inner convolutional code to rate 2/3.
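As a quick check of the rate arithmetic above (a sketch of the bookkeeping only; the actual puncturing patterns are not shown), the overall rate of a serial concatenation is the product of the constituent code rates:

```python
from fractions import Fraction

outer = Fraction(3, 4)   # outer convolutional code punctured to rate 3/4
inner = Fraction(2, 3)   # inner convolutional code punctured to rate 2/3
print(outer * inner)     # 1/2, the overall SCCC code rate
```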
A recursive inner convolutional code is preferable for turbo decoding of the SCCC. The inner code may be punctured to a rate as high as 1/1 with reasonable performance.
Example Decoder
An example of an iterative SCCC decoder.
The SCCC decoder includes two soft-in-soft-out (SISO) decoders and an interleaver. While shown as separate units, the two SISO decoders may share all or part of their circuitry. The SISO decoding may be done in serial or parallel fashion, or some combination thereof. The SISO decoding is typically done using maximum a posteriori (MAP) decoders based on the BCJR algorithm.
Performance
SCCCs provide performance comparable to other iteratively decodable codes, including turbo codes and LDPC codes. They are noted for having slightly worse performance in lower-SNR environments (i.e., a worse waterfall region) but slightly better performance in higher-SNR environments (i.e., a lower error floor).
See also
• Convolutional code
• Viterbi algorithm
• Soft-decision decoding
• Interleaver
• BCJR algorithm
• Low-density parity-check code
• Repeat-accumulate code
• Turbo equalizer
References
1. Minoli, Daniel (2008-12-18). Satellite Systems Engineering in an IPv6 Environment. CRC Press. pp. 152–. ISBN 9781420078695. Retrieved 4 June 2014.
2. Ryan, William; Lin, Shu (2009-09-17). Channel Codes: Classical and Modern. Cambridge University Press. pp. 320–. ISBN 9781139483018. Retrieved 4 June 2014.
3. "Patent US6023783 - Hybrid concatenated codes and iterative decoding - Google Patents". Retrieved 2014-06-04.
4. "Archived copy" (PDF). Archived from the original (PDF) on 2017-08-13. Retrieved 2014-04-02.{{cite web}}: CS1 maint: archived copy as title (link)
5. "Allerton98.tex" (PDF). Retrieved 2014-06-04.
6. NASA.gov
External links
• "Concatenated codes", Scholarpedia
• "Concatenated Convolutional Codes and Iterative Decoding", Willian E. Ryan
Serial relation
In set theory a serial relation is a homogeneous relation expressing the connection of an element of a sequence to the following element. The successor function used by Peano to define natural numbers is the prototype for a serial relation.
Bertrand Russell used serial relations in The Principles of Mathematics[1] (1903) as he explored the foundations of order theory and its applications. The term serial relation was also used by B. A. Bernstein for an article showing that particular common axioms in order theory are nearly incompatible: connectedness, irreflexivity, and transitivity.[2]
A serial relation R is an endorelation on a set U. As stated by Russell, $\forall x\exists y\ xRy,$ where the universal and existential quantifiers refer to U. In contemporary language of relations, this property defines a total relation. But a total relation may be heterogeneous. Serial relations are of historic interest.
For a relation R, let {y: xRy } denote the "successor neighborhood" of x. A serial relation can be equivalently characterized as a relation for which every element has a non-empty successor neighborhood. Similarly, an inverse serial relation is a relation in which every element has non-empty "predecessor neighborhood".[3]
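For finite relations this characterization can be checked directly. A minimal Python sketch (the names are illustrative):

```python
def is_serial(universe, relation):
    """R is serial on U iff every x in U has a non-empty
    successor neighborhood {y : x R y}."""
    return all(any((x, y) in relation for y in universe) for x in universe)

U = {0, 1, 2}
print(is_serial(U, {(0, 1), (1, 2), (2, 0)}))  # True: cyclic successor
print(is_serial(U, {(0, 1), (1, 2)}))          # False: 2 has no successor
```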
In normal modal logic, the extension of fundamental axiom set K by the serial property results in axiom set D.[4]
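For orientation, axiom D is the schema $\Box A\rightarrow \Diamond A$; by standard modal correspondence theory it is valid on a Kripke frame exactly when the frame's accessibility relation is serial.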
Russell's series
Relations are used to develop series in The Principles of Mathematics. The prototype is Peano's successor function as a one-one relation on the natural numbers. Russell's series may be finite or generated by a relation giving cyclic order. In that case, the point-pair separation relation is used for description. To define a progression, he requires the generating relation to be a connected relation. Then ordinal numbers are derived from progressions; the finite ones are finite ordinals.[1]: Chapter 28: Progressions and ordinal numbers Distinguishing open and closed series[1]: 234 results in four total orders: finite, one end, no end and open, and no end and closed.[1]: 202
Unlike other writers, Russell admits negative ordinals. For motivation, consider the scales of measurement using scientific notation, where a power of ten represents a decade of measure. Informally, this parameter corresponds to the orders of magnitude used to quantify physical units. The parameter takes on negative as well as positive values.
Stretch
Russell adopted the term stretch from Meinong, who had contributed to the theory of distance.[5] Stretch refers to the intermediate terms between two points in a series, and the "number of terms measures the distance and divisibility of the whole."[1]: 181 To explain Meinong, Russell refers to the Cayley–Klein metric, which uses stretch coordinates in anharmonic ratios that determine distance by using the logarithm.[1]: 255 [6]
References
1. Russell, Bertrand. Principles of mathematics. ISBN 978-1-136-76573-5. OCLC 1203009858.
2. B. A. Bernstein (1926) "On the Serial Relations in Boolean Algebras", Bulletin of the American Mathematical Society 32(5): 523,4
3. Yao, Y. (2004). "Semantics of Fuzzy Sets in Rough Set Theory". Transactions on Rough Sets II. Lecture Notes in Computer Science. Vol. 3135. p. 309. doi:10.1007/978-3-540-27778-1_15. ISBN 978-3-540-23990-1.
4. James Garson (2013) Modal Logic for Philosophers, chapter 11: Relationships between modal logics, figure 11.1 page 220, Cambridge University Press doi:10.1017/CBO9781139342117.014
5. Alexius Meinong (1896) Über die Bedeutung der Weberische Gesetze
6. Russell (1897) An Essay on the Foundations of Geometry
External links
• Jing Tao Yao and Davide Ciucci and Yan Zhang (2015). "Generalized Rough Sets". In Janusz Kacprzyk and Witold Pedrycz (ed.). Handbook of Computational Intelligence. Springer. pp. 413–424. ISBN 9783662435052. Here: page 416.
• Yao, Y.Y.; Wong, S.K.M. (1995). "Generalization of rough sets using relationships between attribute values" (PDF). Proceedings of the 2nd Annual Joint Conference on Information Sciences: 30–33..
Serial subgroup
In the mathematical field of group theory, a subgroup H of a given group G is a serial subgroup of G if there is a chain C of subgroups of G extending from H to G such that for consecutive subgroups X and Y in C, X is a normal subgroup of Y.[1] The relation is written H ser G or H is serial in G.[2]
If the chain between H and G is finite, then H is a subnormal subgroup of G; thus every subnormal subgroup of G is serial. If the chain C is well-ordered and ascending, then H is an ascendant subgroup of G; if descending, then H is a descendant subgroup of G. If G is a locally finite group, then the set of all serial subgroups of G forms a complete sublattice in the lattice of all normal subgroups of G.[2]
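As a small illustration (a standard example, not taken from the cited sources): in the dihedral group $D_{4}$ of order 8, generated by a rotation $r$ of order 4 and a reflection, the finite chain $\langle r^{2}\rangle \trianglelefteq \langle r\rangle \trianglelefteq D_{4}$ witnesses that $\langle r^{2}\rangle$ is subnormal, and hence serial, in $D_{4}$: the subgroup $\langle r\rangle$ is normal because it has index 2, and $\langle r^{2}\rangle$ is normal in the cyclic group $\langle r\rangle$ because every subgroup of an abelian group is normal.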
See also
• Characteristic subgroup
• Normal closure
• Normal core
References
1. de Giovanni, F.; A. Russo; G. Vincenzi (2002). "GROUPS WITH RESTRICTED CONJUGACY CLASSES". Serdica Math. J. 28: 241–254.
2. Hartley, B. (24 October 2008) [1972]. "Serial subgroups of locally finite groups". Mathematical Proceedings of the Cambridge Philosophical Society. 71 (2): 199–201. Bibcode:1972PCPS...71..199H. doi:10.1017/S0305004100050441. S2CID 120958627.
Seriation (statistics)
In combinatorial data analysis, seriation is the process of finding a linear arrangement of all objects in a set that is optimal with respect to a given loss function.[1] The main goal is exploratory: to reveal structural information.
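As a minimal illustration (a sketch only, not the algorithms of the R package cited below), seriation with the hypothetical loss "sum of dissimilarities between adjacent objects" can be performed by exhaustive search over all orderings:

```python
# Brute-force seriation sketch: find the ordering of objects that
# minimizes the sum of dissimilarities between adjacent objects.
# Illustrative only; feasible just for very small sets (O(n!) orderings).
from itertools import permutations

def seriate(objects, dissim):
    best_order, best_loss = None, float("inf")
    for order in permutations(objects):
        loss = sum(dissim[a][b] for a, b in zip(order, order[1:]))
        if loss < best_loss:
            best_order, best_loss = order, loss
    return best_order

# Toy symmetric dissimilarity matrix for four objects 0..3.
d = [[0, 1, 4, 9],
     [1, 0, 1, 4],
     [4, 1, 0, 1],
     [9, 4, 1, 0]]
print(seriate(range(4), d))   # (0, 1, 2, 3): the "hidden" linear structure
```

Practical seriation methods replace the exhaustive search with heuristics, local optimization, or branch-and-bound.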
References
1. Hahsler, Michael; Hornik, Kurt; Buchta, Christian (2008). "Getting Things in Order: An Introduction to the R Package seriation". Journal of Statistical Software. 25 (3). doi:10.18637/jss.v025.i03.
Series-parallel partial order
In order-theoretic mathematics, a series-parallel partial order is a partially ordered set built up from smaller series-parallel partial orders by two simple composition operations.[1][2]
The series-parallel partial orders may be characterized as the N-free finite partial orders; they have order dimension at most two.[1][3] They include weak orders and the reachability relationship in directed trees and directed series–parallel graphs.[2][3] The comparability graphs of series-parallel partial orders are cographs.[2][4]
Series-parallel partial orders have been applied in job shop scheduling,[5] machine learning of event sequencing in time series data,[6] transmission sequencing of multimedia data,[7] and throughput maximization in dataflow programming.[8]
Series-parallel partial orders have also been called multitrees;[4] however, that name is ambiguous: it also refers to partial orders with no four-element diamond suborder[9] and to other structures formed from multiple trees.
Definition
Consider P and Q, two partially ordered sets. The series composition of P and Q, written P; Q,[7] P * Q,[2] or P ⧀ Q,[1] is the partially ordered set whose elements are the disjoint union of the elements of P and Q. In P; Q, two elements x and y that both belong to P or that both belong to Q have the same order relation that they do in P or Q respectively. However, for every pair x, y where x belongs to P and y belongs to Q, there is an additional order relation x ≤ y in the series composition. Series composition is an associative operation: one can write P; Q; R as the series composition of three orders, without ambiguity about how to combine them pairwise, because both of the parenthesizations (P; Q); R and P; (Q; R) describe the same partial order. However, it is not a commutative operation, because switching the roles of P and Q will produce a different partial order that reverses the order relations of pairs with one element in P and one in Q.[1]
The parallel composition of P and Q, written P || Q,[7] P + Q,[2] or P ⊕ Q,[1] is defined similarly, from the disjoint union of the elements in P and the elements in Q, with pairs of elements that both belong to P or both to Q having the same order as they do in P or Q respectively. In P || Q, a pair x, y is incomparable whenever x belongs to P and y belongs to Q. Parallel composition is both commutative and associative.[1]
The class of series-parallel partial orders is the set of partial orders that can be built up from single-element partial orders using these two operations. Equivalently, it is the smallest set of partial orders that includes the single-element partial order and is closed under the series and parallel composition operations.[1][2]
A weak order is the series-parallel partial order obtained from a sequence of composition operations in which all of the parallel compositions are performed first, and then the results of these compositions are combined using only series compositions.[2]
Forbidden suborder characterization
The partial order N with the four elements a, b, c, and d and exactly the three order relations a ≤ b ≥ c ≤ d is an example of a fence or zigzag poset; its Hasse diagram has the shape of the capital letter "N". It is not series-parallel, because there is no way of splitting it into the series or parallel composition of two smaller partial orders. A partial order P is said to be N-free if there does not exist a set of four elements in P such that the restriction of P to those elements is order-isomorphic to N. The series-parallel partial orders are exactly the nonempty finite N-free partial orders.[1][2][3]
It follows immediately from this (although it can also be proven directly) that any nonempty restriction of a series-parallel partial order is itself a series-parallel partial order.[1]
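The forbidden-suborder characterization suggests a simple (if inefficient) recognition test. The following sketch is only illustrative, not the linear-time algorithm of Valdes, Tarjan and Lawler cited below; it takes a strict partial order as a set of ordered pairs and searches every four-element restriction for a copy of N:

```python
from itertools import combinations, permutations

def is_n_free(elements, order):
    """True iff no four elements of the strict order (a set of pairs)
    induce the poset N: a < b, c < b, c < d and no other relations."""
    for quad in combinations(elements, 4):
        rels = {(x, y) for x in quad for y in quad if (x, y) in order}
        for a, b, c, d in permutations(quad):
            if rels == {(a, b), (c, b), (c, d)}:
                return False
    return True

# The poset N itself is, of course, not N-free:
n_order = {("a", "b"), ("c", "b"), ("c", "d")}
print(is_n_free("abcd", n_order))          # False
# A four-element chain is N-free:
chain = {(i, j) for i in range(4) for j in range(4) if i < j}
print(is_n_free(range(4), chain))          # True
```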
Order dimension
The order dimension of a partial order P is the minimum size of a realizer of P, a set of linear extensions of P with the property that, for every two distinct elements x and y of P, x ≤ y in P if and only if x has an earlier position than y in every linear extension of the realizer. Series-parallel partial orders have order dimension at most two. If P and Q have realizers {L1, L2} and {L3, L4}, respectively, then {L1L3, L2L4} is a realizer of the series composition P; Q, and {L1L3, L4L2} is a realizer of the parallel composition P || Q.[2][3] A partial order is series-parallel if and only if it has a realizer in which one of the two permutations is the identity and the other is a separable permutation.
It is known that a partial order P has order dimension two if and only if there exists a conjugate order Q on the same elements, with the property that any two distinct elements x and y are comparable on exactly one of these two orders. In the case of series parallel partial orders, a conjugate order that is itself series parallel may be obtained by performing a sequence of composition operations in the same order as the ones defining P on the same elements, but performing a series composition for each parallel composition in the decomposition of P and vice versa. More strongly, although a partial order may have many different conjugates, every conjugate of a series parallel partial order must itself be series parallel.[2]
Connections to graph theory
Any partial order may be represented (usually in more than one way) by a directed acyclic graph in which there is a path from x to y whenever x and y are elements of the partial order with x ≤ y. The graphs that represent series-parallel partial orders in this way have been called vertex series parallel graphs, and their transitive reductions (the graphs of the covering relations of the partial order) are called minimal vertex series parallel graphs.[3] Directed trees and (two-terminal) series parallel graphs are examples of minimal vertex series parallel graphs; therefore, series parallel partial orders may be used to represent reachability relations in directed trees and series parallel graphs.[2][3]
The comparability graph of a partial order is the undirected graph with a vertex for each element and an undirected edge for each pair of distinct elements x, y with either x ≤ y or y ≤ x. That is, it is formed from a minimal vertex series parallel graph by forgetting the orientation of each edge. The comparability graph of a series-parallel partial order is a cograph: the series and parallel composition operations of the partial order give rise to operations on the comparability graph that form the disjoint union of two subgraphs or that connect two subgraphs by all possible edges; these two operations are the basic operations from which cographs are defined. Conversely, every cograph is the comparability graph of a series-parallel partial order. If a partial order has a cograph as its comparability graph, then it must be a series-parallel partial order, because every other kind of partial order has an N suborder that would correspond to an induced four-vertex path in its comparability graph, and such paths are forbidden in cographs.[2][4]
Computational complexity
The forbidden suborder characterization of series-parallel partial orders can be used as a basis for an algorithm that tests whether a given binary relation is a series-parallel partial order, in an amount of time that is linear in the number of related pairs.[2][3] Alternatively, if a partial order is described as the reachability order of a directed acyclic graph, it is possible to test whether it is a series-parallel partial order, and if so compute its transitive closure, in time proportional to the number of vertices and edges in the transitive closure; it remains open whether the time to recognize series-parallel reachability orders can be improved to be linear in the size of the input graph.[10]
If a series-parallel partial order is represented as an expression tree describing the series and parallel composition operations that formed it, then the elements of the partial order may be represented by the leaves of the expression tree. A comparison between any two elements may be performed algorithmically by searching for the lowest common ancestor of the corresponding two leaves; if that ancestor is a parallel composition, the two elements are incomparable, and otherwise the order of the series composition operands determines the order of the elements. In this way, a series-parallel partial order on n elements may be represented in O(n) space with O(1) time to determine any comparison value.[2]
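A minimal sketch of this representation follows. It finds the lowest common ancestor by comparing root-to-leaf paths, which takes time proportional to the tree depth; achieving the O(1) query time quoted above would additionally require standard constant-time lowest-common-ancestor preprocessing.

```python
# Decomposition-tree representation of a series-parallel partial order.
# Leaves carry the elements; internal nodes are 'S' (series: left before
# right) or 'P' (parallel).
class Node:
    def __init__(self, kind, left=None, right=None, label=None):
        self.kind, self.left, self.right, self.label = kind, left, right, label

def path_to(root, label, path=()):
    if root.kind == "leaf":
        return path if root.label == label else None
    return (path_to(root.left, label, path + ("L",))
            or path_to(root.right, label, path + ("R",)))

def compare(root, x, y):
    """Return '<', '>', or 'incomparable' for distinct leaves x, y."""
    px, py = path_to(root, x), path_to(root, y)
    i = 0
    while px[i] == py[i]:              # walk down to the lowest common ancestor
        i += 1
    node = root
    for step in px[:i]:
        node = node.left if step == "L" else node.right
    if node.kind == "P":
        return "incomparable"
    return "<" if px[i] == "L" else ">"

leaf = lambda v: Node("leaf", label=v)
# (a ; b) || c : a < b, while c is incomparable to both.
order = Node("P", Node("S", leaf("a"), leaf("b")), leaf("c"))
print(compare(order, "a", "b"), compare(order, "a", "c"))  # < incomparable
```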
It is NP-complete to test, for two given series-parallel partial orders P and Q, whether P contains a restriction isomorphic to Q.[3]
Although the problem of counting the number of linear extensions of an arbitrary partial order is #P-complete,[11] it may be solved in polynomial time for series-parallel partial orders. Specifically, if L(P) denotes the number of linear extensions of a partial order P, then L(P; Q) = L(P)L(Q) and
$L(P||Q)={\frac {(|P|+|Q|)!}{|P|!|Q|!}}L(P)L(Q),$
so the number of linear extensions may be calculated using an expression tree with the same form as the decomposition tree of the given series-parallel order.[2]
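Reusing the Node class and leaf helper from the sketch above, the two formulas translate directly into a recursion over the decomposition tree (a sketch; math.comb supplies the binomial coefficient):

```python
# Count linear extensions via L(P;Q) = L(P)L(Q) and
# L(P||Q) = C(|P|+|Q|, |P|) L(P)L(Q); assumes Node/leaf from the
# previous sketch.
from math import comb

def count(node):
    """Return (number of elements, number of linear extensions)."""
    if node.kind == "leaf":
        return 1, 1
    n1, l1 = count(node.left)
    n2, l2 = count(node.right)
    if node.kind == "S":                      # series composition
        return n1 + n2, l1 * l2
    return n1 + n2, comb(n1 + n2, n1) * l1 * l2   # parallel composition

# A three-element antichain has 3! = 6 linear extensions:
antichain3 = Node("P", Node("P", leaf("a"), leaf("b")), leaf("c"))
print(count(antichain3))                      # (3, 6)
```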
Applications
Mannila & Meek (2000) use series-parallel partial orders as a model for the sequences of events in time series data. They describe machine learning algorithms for inferring models of this type, and demonstrate their effectiveness at inferring course prerequisites from student enrollment data and at modeling web browser usage patterns.[6]
Amer et al. (1994) argue that series-parallel partial orders are a good fit for modeling the transmission sequencing requirements of multimedia presentations. They use the formula for computing the number of linear extensions of a series-parallel partial order as the basis for analyzing multimedia transmission algorithms.[7]
Choudhary et al. (1994) use series-parallel partial orders to model the task dependencies in a dataflow model of massive data processing for computer vision. They show that, by using series-parallel orders for this problem, it is possible to efficiently construct an optimized schedule that assigns different tasks to different processors of a parallel computing system in order to optimize the throughput of the system.[8]
A class of orderings somewhat more general than series-parallel partial orders is provided by PQ trees, data structures that have been applied in algorithms for testing whether a graph is planar and recognizing interval graphs.[12] A P node of a PQ tree allows all possible orderings of its children, like a parallel composition of partial orders, while a Q node requires the children to occur in a fixed linear ordering, like a series composition of partial orders. However, unlike series-parallel partial orders, PQ trees allow the linear ordering of any Q node to be reversed.
See also
• Series and parallel circuits
References
1. Bechet, Denis; De Groote, Philippe; Retoré, Christian (1997), "A complete axiomatisation for the inclusion of series-parallel partial orders", Rewriting Techniques and Applications, Lecture Notes in Computer Science, vol. 1232, Springer-Verlag, pp. 230–240, doi:10.1007/3-540-62950-5_74.
2. Möhring, Rolf H. (1989), "Computationally tractable classes of ordered sets", in Rival, Ivan (ed.), Algorithms and Order: Proceedings of the NATO Advanced Study Institute on Algorithms and Order, Ottawa, Canada, May 31-June 13, 1987, NATO Science Series C, vol. 255, Springer-Verlag, pp. 105–194, ISBN 978-0-7923-0007-6.
3. Valdes, Jacobo; Tarjan, Robert E.; Lawler, Eugene L. (1982), "The recognition of series parallel digraphs", SIAM Journal on Computing, 11 (2): 298–313, doi:10.1137/0211023.
4. Jung, H. A. (1978), "On a class of posets and the corresponding comparability graphs", Journal of Combinatorial Theory, Series B, 24 (2): 125–133, doi:10.1016/0095-8956(78)90013-8, MR 0491356.
5. Lawler, Eugene L. (1978), "Sequencing jobs to minimize total weighted completion time subject to precedence constraints", Annals of Discrete Mathematics, 2: 75–90, doi:10.1016/S0167-5060(08)70323-6, ISBN 9780720410433, MR 0495156.
6. Mannila, Heikki; Meek, Christopher (2000), "Global partial orders from sequential data", Proc. 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2000), pp. 161–168, doi:10.1145/347090.347122, S2CID 14735932.
7. Amer, Paul D.; Chassot, Christophe; Connolly, Thomas J.; Diaz, Michel; Conrad, Phillip (1994), "Partial-order transport service for multimedia and other applications", IEEE/ACM Transactions on Networking, 2 (5): 440–456, doi:10.1109/90.336326, S2CID 1974607.
8. Choudhary, A. N.; Narahari, B.; Nicol, D. M.; Simha, R. (1994), "Optimal processor assignment for a class of pipelined computations", IEEE Transactions on Parallel and Distributed Systems, 5 (4): 439–445, doi:10.1109/71.273050, S2CID 5588390.
9. Furnas, George W.; Zacks, Jeff (1994), "Multitrees: enriching and reusing hierarchical structure", Proc. SIGCHI conference on Human Factors in Computing Systems (CHI '94), pp. 330–336, doi:10.1145/191666.191778, S2CID 18710118.
10. Ma, Tze-Heng; Spinrad, Jeremy (1991), "Transitive closure for restricted classes of partial orders", Order, 8 (2): 175–183, doi:10.1007/BF00383402, S2CID 120935610.
11. Brightwell, Graham R.; Winkler, Peter (1991), "Counting linear extensions", Order, 8 (3): 225–242, doi:10.1007/BF00383444, S2CID 119697949.
12. Booth, Kellogg S.; Lueker, George S. (1976), "Testing for the consecutive ones property, interval graphs, and graph planarity using PQ-tree algorithms", Journal of Computer and System Sciences, 13 (3): 335–379, doi:10.1016/S0022-0000(76)80045-1.
Series expansion
In mathematics, a series expansion is a technique that expresses a function as an infinite sum, or series, of simpler functions. It provides a method for calculating values of a function that cannot be expressed using only the elementary operations of addition, subtraction, multiplication and division.[1]
The resulting series can often be truncated to a finite number of terms, yielding an approximation of the function. The fewer terms used, the simpler the approximation, but generally the less accurate it is. Often, the resulting inaccuracy (i.e., the sum of the omitted terms) can be described by an equation involving big O notation (see also asymptotic expansion). On an open interval, a series expansion can also serve as an approximation for non-analytic functions.[2]
Types of series expansions
There are several kinds of series expansions, listed below.
Taylor series
A Taylor series is a power series based on a function's derivatives at a single point.[3] More specifically, if a function $f:U\to \mathbb {R} $ is infinitely differentiable around a point $x_{0}$, then the Taylor series of f around this point is given by
$\sum _{n=0}^{\infty }{\frac {f^{(n)}(x_{0})}{n!}}(x-x_{0})^{n}$
under the convention $0^{0}:=1$.[3][4] The Maclaurin series of f is its Taylor series about $x_{0}=0$.[5][4]
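As a small numerical illustration (a sketch using only the standard library), the partial sums of the Maclaurin series of the exponential function approach the true value as more terms are kept:

```python
# Partial sums of the Maclaurin series of exp(x), evaluated at x = 1.
import math

def maclaurin_exp(x, terms):
    return sum(x**n / math.factorial(n) for n in range(terms))

for terms in (2, 4, 8, 16):
    print(terms, maclaurin_exp(1.0, terms))   # tends to e = 2.71828...
```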
Laurent series
A Laurent series is a generalization of the Taylor series, allowing terms with negative exponents; it takes the form $ \sum _{k=-\infty }^{\infty }c_{k}(z-a)^{k}$ and converges in an annulus.[6] In particular, a Laurent series can be used to examine the behavior of a complex function near a singularity by considering the series expansion on an annulus centered at the singularity.
Dirichlet series
A general Dirichlet series is a series of the form $ \sum _{n=1}^{\infty }a_{n}e^{-\lambda _{n}s}.$ One important special case of this is the ordinary Dirichlet series $ \sum _{n=1}^{\infty }{\frac {a_{n}}{n^{s}}}.$[7] Dirichlet series are used extensively in number theory.
Fourier series
A Fourier series is an expansion of periodic functions as a sum of many sine and cosine functions.[8] More specifically, the Fourier series of a function $f(t)$ of period $2L$ is given by the expression
$a_{0}+\sum _{n=1}^{\infty }\left[a_{n}\cos \left({\frac {n\pi t}{L}}\right)+b_{n}\sin \left({\frac {n\pi t}{L}}\right)\right]$
where the coefficients are given by the formulae[8][9]
${\begin{aligned}a_{n}&:={\frac {1}{L}}\int _{-L}^{L}f(t)\cos \left({\frac {n\pi t}{L}}\right)dt,\\b_{n}&:={\frac {1}{L}}\int _{-L}^{L}f(t)\sin \left({\frac {n\pi t}{L}}\right)dt.\end{aligned}}$
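As a numerical illustration, the sketch below approximates these integrals with a simple midpoint rule for f(t) = t on (−π, π) (so L = π), for which the exact coefficients are a_n = 0 and b_n = 2(−1)^(n+1)/n; the subdivision count N is an arbitrary choice:

```python
# Fourier coefficients of f(t) = t of period 2*pi, by midpoint-rule
# quadrature of the coefficient formulas above.
import math

L = math.pi
f = lambda t: t
N = 100_000   # number of midpoint-rule subdivisions (arbitrary)

def integrate(g):
    h = 2 * L / N
    return h * sum(g(-L + (k + 0.5) * h) for k in range(N))

a = lambda n: integrate(lambda t: f(t) * math.cos(n * math.pi * t / L)) / L
b = lambda n: integrate(lambda t: f(t) * math.sin(n * math.pi * t / L)) / L

print([round(a(n), 4) for n in (1, 2, 3)])   # ~[0, 0, 0]   (f is odd)
print([round(b(n), 4) for n in (1, 2, 3)])   # ~[2.0, -1.0, 0.6667]
```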
Other series
In acoustics, for example, the fundamental tone and the overtones together form an example of a Fourier series.
• Newtonian series
• Legendre polynomials: used in physics to describe an arbitrary electrical field as a superposition of a dipole field, a quadrupole field, an octupole field, etc.
• Zernike polynomials: used in optics to calculate aberrations of optical systems. Each term in the series describes a particular type of aberration.
The Stirling series
${\text{Ln}}\Gamma \left(z\right)\sim \left(z-{\tfrac {1}{2}}\right)\ln z-z+{\tfrac {1}{2}}\ln \left(2\pi \right)+\sum _{k=1}^{\infty }{\frac {B_{2k}}{2k(2k-1)z^{2k-1}}}$
is an approximation of the log-gamma function.[10]
Examples
The following is the Taylor series of $e^{x}$:
$e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}=1+x+{\frac {x^{2}}{2}}+{\frac {x^{3}}{6}}+\cdots $[11][12]
The Dirichlet series of the Riemann zeta function is
$\zeta (s):=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}={\frac {1}{1^{s}}}+{\frac {1}{2^{s}}}+\cdots $[7]
References
1. "Series and Expansions". Mathematics LibreTexts. 2013-11-07. Retrieved 2021-12-24.
2. Gil, Amparo; Segura, Javier; Temme, Nico M. (2007-01-01). Numerical Methods for Special Functions. SIAM. ISBN 978-0-89871-782-2.
3. "Taylor series - Encyclopedia of Mathematics". encyclopediaofmath.org. 27 December 2013. Retrieved 22 March 2022.
4. Edwards, C. Henry; Penney, David E. (2008). Elementary Differential Equations with Boundary Value Problems. p. 196. ISBN 978-0-13-600613-8.
5. Weisstein, Eric W. "Maclaurin Series". mathworld.wolfram.com. Retrieved 2022-03-22.
6. "Laurent series - Encyclopedia of Mathematics". encyclopediaofmath.org. Retrieved 2022-03-22.
7. "Dirichlet series - Encyclopedia of Mathematics". encyclopediaofmath.org. 26 January 2022. Retrieved 22 March 2022.
8. "Fourier series - Encyclopedia of Mathematics". encyclopediaofmath.org. Retrieved 2022-03-22.
9. Edwards, C. Henry; Penney, David E. (2008). Elementary Differential Equations with Boundary Value Problems. pp. 558, 564. ISBN 978-0-13-600613-8.
10. "DLMF: 5.11 Asymptotic Expansions". dlmf.nist.gov. Retrieved 22 March 2022.
11. Weisstein, Eric W. "Exponential Function". mathworld.wolfram.com. Retrieved 2021-08-12.
12. "Exponential function - Encyclopedia of Mathematics". encyclopediaofmath.org. 5 June 2020. Retrieved 12 August 2021.{{cite web}}: CS1 maint: url-status (link)
Cauchy product
In mathematics, more specifically in mathematical analysis, the Cauchy product is the discrete convolution of two infinite series. It is named after the French mathematician Augustin-Louis Cauchy.
Definitions
The Cauchy product may apply to infinite series[1][2][3][4][5][6][7][8][9][10][11] or power series.[12][13] When it is applied to finite sequences[14] or finite series, this can be seen merely as a particular case of a product of series with a finite number of non-zero coefficients (see discrete convolution).
Convergence issues are discussed in the next section.
Cauchy product of two infinite series
Let $ \sum _{i=0}^{\infty }a_{i}$ and $ \sum _{j=0}^{\infty }b_{j}$ be two infinite series with complex terms. The Cauchy product of these two infinite series is defined by a discrete convolution as follows:
$\left(\sum _{i=0}^{\infty }a_{i}\right)\cdot \left(\sum _{j=0}^{\infty }b_{j}\right)=\sum _{k=0}^{\infty }c_{k}$ where $c_{k}=\sum _{l=0}^{k}a_{l}b_{k-l}$.
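In code, the defining convolution is a few lines. The sketch below works with truncated series; the example multiplies two absolutely convergent geometric series, whose Cauchy product sums to the product 2 · (3/2) = 3 of their sums (anticipating Mertens' theorem below):

```python
# Cauchy product c_k = sum_{l=0}^{k} a_l * b_{k-l} of two truncated series.
def cauchy_product(a, b):
    n = min(len(a), len(b))
    return [sum(a[l] * b[k - l] for l in range(k + 1)) for k in range(n)]

a = [1 / 2**i for i in range(40)]   # sums to 2
b = [1 / 3**j for j in range(40)]   # sums to 3/2
print(sum(cauchy_product(a, b)))    # ~3.0, the product of the two sums
```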
Cauchy product of two power series
Consider the following two power series
$\sum _{i=0}^{\infty }a_{i}x^{i}$ and $\sum _{j=0}^{\infty }b_{j}x^{j}$
with complex coefficients $\{a_{i}\}$ and $\{b_{j}\}$. The Cauchy product of these two power series is defined by a discrete convolution as follows:
$\left(\sum _{i=0}^{\infty }a_{i}x^{i}\right)\cdot \left(\sum _{j=0}^{\infty }b_{j}x^{j}\right)=\sum _{k=0}^{\infty }c_{k}x^{k}$ where $c_{k}=\sum _{l=0}^{k}a_{l}b_{k-l}$.
Convergence and Mertens' theorem
Not to be confused with Mertens' theorems concerning distribution of prime numbers.
Let (an)n≥0 and (bn)n≥0 be real or complex sequences. It was proved by Franz Mertens that, if the series $ \sum _{n=0}^{\infty }a_{n}$ converges to A and $ \sum _{n=0}^{\infty }b_{n}$ converges to B, and at least one of them converges absolutely, then their Cauchy product converges to AB.[15] The theorem is still valid in a Banach algebra (see first line of the following proof).
It is not sufficient for both series to be convergent; if both sequences are conditionally convergent, the Cauchy product does not have to converge towards the product of the two series, as the following example shows:
Example
Consider the two alternating series with
$a_{n}=b_{n}={\frac {(-1)^{n}}{\sqrt {n+1}}}\,,$
which are only conditionally convergent (the divergence of the series of the absolute values follows from the direct comparison test and the divergence of the harmonic series). The terms of their Cauchy product are given by
$c_{n}=\sum _{k=0}^{n}{\frac {(-1)^{k}}{\sqrt {k+1}}}\cdot {\frac {(-1)^{n-k}}{\sqrt {n-k+1}}}=(-1)^{n}\sum _{k=0}^{n}{\frac {1}{\sqrt {(k+1)(n-k+1)}}}$
for every integer n ≥ 0. Since for every k ∈ {0, 1, ..., n} we have the inequalities k + 1 ≤ n + 1 and n − k + 1 ≤ n + 1, it follows for the square root in the denominator that ${\sqrt {(k+1)(n-k+1)}}\leq n+1$; hence, because there are n + 1 summands,
$|c_{n}|\geq \sum _{k=0}^{n}{\frac {1}{n+1}}=1$
for every integer n ≥ 0. Therefore, cn does not converge to zero as n → ∞, hence the series of the (cn)n≥0 diverges by the term test.
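A quick numerical check (a sketch; printed values are approximate) confirms that the |c_n| of this example stay at or above 1:

```python
# Terms of the Cauchy product of the two alternating series above.
import math

def c(n):
    return (-1)**n * sum(1 / math.sqrt((k + 1) * (n - k + 1))
                         for k in range(n + 1))

print([round(abs(c(n)), 3) for n in (1, 10, 100, 1000)])
# Every value is at least 1 (the sums in fact approach pi),
# so c_n does not tend to 0 and the product series diverges.
```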
Proof of Mertens' theorem
For simplicity, we will prove it for complex numbers. However, the proof we are about to give is formally identical for an arbitrary Banach algebra (not even commutativity or associativity is required).
Assume without loss of generality that the series $ \sum _{n=0}^{\infty }a_{n}$ converges absolutely. Define the partial sums
$A_{n}=\sum _{i=0}^{n}a_{i},\quad B_{n}=\sum _{i=0}^{n}b_{i}\quad {\text{and}}\quad C_{n}=\sum _{i=0}^{n}c_{i}$
with
$c_{i}=\sum _{k=0}^{i}a_{k}b_{i-k}\,.$
Then
$C_{n}=\sum _{i=0}^{n}a_{n-i}B_{i}$
by rearrangement, hence
$C_{n}=\sum _{i=0}^{n}a_{n-i}(B_{i}-B)+A_{n}B\,.$
(1)
Fix ε > 0. Since $ \sum _{k\in \mathbb {N} }|a_{k}|<\infty $ by absolute convergence, and since Bn converges to B as n → ∞, there exists an integer N such that, for all integers n ≥ N,
$|B_{n}-B|\leq {\frac {\varepsilon /3}{\sum _{k\in \mathbb {N} }|a_{k}|+1}}$
(2)
(this is the only place where the absolute convergence is used). Since the series of the (an)n≥0 converges, the individual an must converge to 0 by the term test. Hence there exists an integer M such that, for all integers n ≥ M,
$|a_{n}|\leq {\frac {\varepsilon }{3N(\max _{i\in \{0,\dots ,N-1\}}|B_{i}-B|+1)}}\,.$
(3)
Also, since An converges to A as n → ∞, there exists an integer L such that, for all integers n ≥ L,
$|A_{n}-A|\leq {\frac {\varepsilon /3}{|B|+1}}\,.$
(4)
Then, for all integers n ≥ max{L, M + N}, use the representation (1) for Cn, split the sum in two parts, use the triangle inequality for the absolute value, and finally use the three estimates (2), (3) and (4) to show that
${\begin{aligned}|C_{n}-AB|&={\biggl |}\sum _{i=0}^{n}a_{n-i}(B_{i}-B)+(A_{n}-A)B{\biggr |}\\&\leq \sum _{i=0}^{N-1}\underbrace {|a_{\underbrace {\scriptstyle n-i} _{\scriptscriptstyle \geq M}}|\,|B_{i}-B|} _{\leq \,\varepsilon /(3N){\text{ by (3)}}}+{}\underbrace {\sum _{i=N}^{n}|a_{n-i}|\,|B_{i}-B|} _{\leq \,\varepsilon /3{\text{ by (2)}}}+{}\underbrace {|A_{n}-A|\,|B|} _{\leq \,\varepsilon /3{\text{ by (4)}}}\leq \varepsilon \,.\end{aligned}}$
By the definition of convergence of a series, Cn → AB as required.
Cesàro's theorem
In cases where the two sequences are convergent but not absolutely convergent, the Cauchy product is still Cesàro summable. Specifically:
If $ (a_{n})_{n\geq 0}$, $ (b_{n})_{n\geq 0}$ are real sequences with $ \sum a_{n}\to A$ and $ \sum b_{n}\to B$ then
${\frac {1}{N}}\left(\sum _{n=1}^{N}\sum _{i=1}^{n}\sum _{k=0}^{i}a_{k}b_{i-k}\right)\to AB.$
This can be generalised to the case where the two sequences are not convergent but just Cesàro summable:
Theorem
For $ r>-1$ and $ s>-1$, suppose the sequence $ (a_{n})_{n\geq 0}$ is $ (C,\;r)$ summable with sum A and $ (b_{n})_{n\geq 0}$ is $ (C,\;s)$ summable with sum B. Then their Cauchy product is $ (C,\;r+s+1)$ summable with sum AB.
Examples
• For some $ x,y\in \mathbb {R} $, let $ a_{n}=x^{n}/n!$ and $ b_{n}=y^{n}/n!$. Then
$c_{n}=\sum _{i=0}^{n}{\frac {x^{i}}{i!}}{\frac {y^{n-i}}{(n-i)!}}={\frac {1}{n!}}\sum _{i=0}^{n}{\binom {n}{i}}x^{i}y^{n-i}={\frac {(x+y)^{n}}{n!}}$
by definition and the binomial formula. Since, formally, $ \exp(x)=\sum a_{n}$ and $ \exp(y)=\sum b_{n}$, we have shown that $ \exp(x+y)=\sum c_{n}$. Since the limit of the Cauchy product of two absolutely convergent series is equal to the product of the limits of those series, we have proven the formula $ \exp(x+y)=\exp(x)\exp(y)$ for all $ x,y\in \mathbb {R} $.
• As a second example, let $ a_{n}=b_{n}=1$ for all $ n\in \mathbb {N} $. Then $ c_{n}=n+1$ for all $n\in \mathbb {N} $ so the Cauchy product
$\sum c_{n}=(1,1+2,1+2+3,1+2+3+4,\dots )$
does not converge.
Generalizations
All of the foregoing applies to sequences in $ \mathbb {C} $ (complex numbers). The Cauchy product can be defined for series in the $ \mathbb {R} ^{n}$ spaces (Euclidean spaces) where multiplication is the inner product. In this case, we have the result that if two series converge absolutely then their Cauchy product converges absolutely to the inner product of the limits.
Products of finitely many infinite series
Let $n\in \mathbb {N} $ such that $n\geq 2$ (actually the following is also true for $n=1$, but the statement becomes trivial in that case) and let $ \sum _{k_{1}=0}^{\infty }a_{1,k_{1}},\ldots ,\sum _{k_{n}=0}^{\infty }a_{n,k_{n}}$ be infinite series with complex coefficients, of which all except the $n$-th converge absolutely, while the $n$-th converges. Then the limit
$\lim _{N\to \infty }\sum _{k_{1}+\ldots +k_{n}\leq N}a_{1,k_{1}}\cdots a_{n,k_{n}}$
exists and we have:
$\prod _{j=1}^{n}\left(\sum _{k_{j}=0}^{\infty }a_{j,k_{j}}\right)=\lim _{N\to \infty }\sum _{k_{1}+\ldots +k_{n}\leq N}a_{1,k_{1}}\cdots a_{n,k_{n}}$
Proof
Because
$\forall N\in \mathbb {N} :\quad \sum _{k_{1}+\ldots +k_{n}\leq N}a_{1,k_{1}}\cdots a_{n,k_{n}}=\sum _{k_{1}=0}^{N}\sum _{k_{2}=0}^{k_{1}}\cdots \sum _{k_{n}=0}^{k_{n-1}}a_{1,k_{n}}a_{2,k_{n-1}-k_{n}}\cdots a_{n,k_{1}-k_{2}},$
the statement can be proven by induction over $n$: The case for $n=2$ is identical to the claim about the Cauchy product. This is our induction base.
The induction step goes as follows: Let the claim be true for an $n\in \mathbb {N} $ with $n\geq 2$, and let $ \sum _{k_{1}=0}^{\infty }a_{1,k_{1}},\ldots ,\sum _{k_{n+1}=0}^{\infty }a_{n+1,k_{n+1}}$ be infinite series with complex coefficients, of which all except the $(n+1)$-th converge absolutely, while the $(n+1)$-th converges. We first apply the induction hypothesis to the series $ \sum _{k_{1}=0}^{\infty }|a_{1,k_{1}}|,\ldots ,\sum _{k_{n}=0}^{\infty }|a_{n,k_{n}}|$. We obtain that the series
$\sum _{k_{1}=0}^{\infty }\sum _{k_{2}=0}^{k_{1}}\cdots \sum _{k_{n}=0}^{k_{n-1}}|a_{1,k_{n}}a_{2,k_{n-1}-k_{n}}\cdots a_{n,k_{1}-k_{2}}|$
converges, and hence, by the triangle inequality and the sandwich criterion, the series
$\sum _{k_{1}=0}^{\infty }\left|\sum _{k_{2}=0}^{k_{1}}\cdots \sum _{k_{n}=0}^{k_{n-1}}a_{1,k_{n}}a_{2,k_{n-1}-k_{n}}\cdots a_{n,k_{1}-k_{2}}\right|$
converges, and hence the series
$\sum _{k_{1}=0}^{\infty }\sum _{k_{2}=0}^{k_{1}}\cdots \sum _{k_{n}=0}^{k_{n-1}}a_{1,k_{n}}a_{2,k_{n-1}-k_{n}}\cdots a_{n,k_{1}-k_{2}}$
converges absolutely. Therefore, by the induction hypothesis, by what Mertens proved, and by renaming of variables, we have:
${\begin{aligned}\prod _{j=1}^{n+1}\left(\sum _{k_{j}=0}^{\infty }a_{j,k_{j}}\right)&=\left(\sum _{k_{n+1}=0}^{\infty }\overbrace {a_{n+1,k_{n+1}}} ^{=:a_{k_{n+1}}}\right)\left(\sum _{k_{1}=0}^{\infty }\overbrace {\sum _{k_{2}=0}^{k_{1}}\cdots \sum _{k_{n}=0}^{k_{n-1}}a_{1,k_{n}}a_{2,k_{n-1}-k_{n}}\cdots a_{n,k_{1}-k_{2}}} ^{=:b_{k_{1}}}\right)\\&=\left(\sum _{k_{1}=0}^{\infty }\overbrace {\sum _{k_{2}=0}^{k_{1}}\sum _{k_{3}=0}^{k_{2}}\cdots \sum _{k_{n}=0}^{k_{n-1}}a_{1,k_{n}}a_{2,k_{n-1}-k_{n}}\cdots a_{n,k_{1}-k_{2}}} ^{=:a_{k_{1}}}\right)\left(\sum _{k_{n+1}=0}^{\infty }\overbrace {a_{n+1,k_{n+1}}} ^{=:b_{k_{n+1}}}\right)\\&=\left(\sum _{k_{1}=0}^{\infty }\overbrace {\sum _{k_{3}=0}^{k_{1}}\sum _{k_{4}=0}^{k_{3}}\cdots \sum _{k_{n}+1=0}^{k_{n}}a_{1,k_{n+1}}a_{2,k_{n}-k_{n+1}}\cdots a_{n,k_{1}-k_{3}}} ^{=:a_{k_{1}}}\right)\left(\sum _{k_{2}=0}^{\infty }\overbrace {a_{n+1,k_{2}}} ^{=:b_{n+1,k_{2}}=:b_{k_{2}}}\right)\\&=\left(\sum _{k_{1}=0}^{\infty }a_{k_{1}}\right)\left(\sum _{k_{2}=0}^{\infty }b_{k_{2}}\right)\\&=\left(\sum _{k_{1}=0}^{\infty }\sum _{k_{2}=0}^{k_{1}}a_{k_{2}}b_{k_{1}-k_{2}}\right)\\&=\left(\sum _{k_{1}=0}^{\infty }\sum _{k_{2}=0}^{k_{1}}\left(\overbrace {\sum _{k_{3}=0}^{k_{2}}\cdots \sum _{k_{n}+1=0}^{k_{n}}a_{1,k_{n+1}}a_{2,k_{n}-k_{n+1}}\cdots a_{n,k_{2}-k_{3}}} ^{=:a_{k_{2}}}\right)\left(\overbrace {a_{n+1,k_{1}-k_{2}}} ^{=:b_{k_{1}-k_{2}}}\right)\right)\\&=\left(\sum _{k_{1}=0}^{\infty }\sum _{k_{2}=0}^{k_{1}}\overbrace {\sum _{k_{3}=0}^{k_{2}}\cdots \sum _{k_{n}+1=0}^{k_{n}}a_{1,k_{n+1}}a_{2,k_{n}-k_{n+1}}\cdots a_{n,k_{2}-k_{3}}} ^{=:a_{k_{2}}}\overbrace {a_{n+1,k_{1}-k_{2}}} ^{=:b_{k_{1}-k_{2}}}\right)\\&=\sum _{k_{1}=0}^{\infty }\sum _{k_{2}=0}^{k_{1}}a_{n+1,k_{1}-k_{2}}\sum _{k_{3}=0}^{k_{2}}\cdots \sum _{k_{n+1}=0}^{k_{n}}a_{1,k_{n+1}}a_{2,k_{n}-k_{n+1}}\cdots a_{n,k_{2}-k_{3}}\end{aligned}}$
Therefore, the formula also holds for $n+1$.
Relation to convolution of functions
A finite sequence can be viewed as an infinite sequence with only finitely many nonzero terms, or in other words as a function $f:\mathbb {N} \to \mathbb {C} $ with finite support. For any complex-valued functions f, g on $\mathbb {N} $ with finite support, one can take their convolution:
$(f*g)(n)=\sum _{i+j=n}f(i)g(j).$
Then $ \sum (f*g)(n)$ is the same thing as the Cauchy product of $ \sum f(n)$ and $ \sum g(n)$.
More generally, given a monoid S, one can form the semigroup algebra $\mathbb {C} [S]$ of S, with the multiplication given by convolution. If one takes, for example, $S=\mathbb {N} ^{d}$, then the multiplication on $\mathbb {C} [S]$ is a generalization of the Cauchy product to higher dimension.
Notes
1. Canuto & Tabacco 2015, p. 20.
2. Bloch 2011, p. 463.
3. Friedman & Kandel 2011, p. 204.
4. Ghorpade & Limaye 2006, p. 416.
5. Hijab 2011, p. 43.
6. Montesinos, Zizler & Zizler 2015, p. 98.
7. Oberguggenberger & Ostermann 2011, p. 322.
8. Pedersen 2015, p. 210.
9. Ponnusamy 2012, p. 200.
10. Pugh 2015, p. 210.
11. Sohrab 2014, p. 73.
12. Canuto & Tabacco 2015, p. 53.
13. Mathonline, Cauchy Product of Power Series.
14. Weisstein, Cauchy Product.
15. Rudin, Walter (1976). Principles of Mathematical Analysis. McGraw-Hill. p. 74.
References
• Apostol, Tom M. (1974), Mathematical Analysis (2nd ed.), Addison Wesley, p. 204, ISBN 978-0-201-00288-1.
• Bloch, Ethan D. (2011), The Real Numbers and Real Analysis, Springer, ISBN 9780387721767.
• Canuto, Claudio; Tabacco, Anita (2015), Mathematical Analysis II (2nd ed.), Springer.
• Friedman, Menahem; Kandel, Abraham (2011), Calculus Light, Springer, ISBN 9783642178481.
• Ghorpade, Sudhir R.; Limaye, Balmohan V. (2006), A Course in Calculus and Real Analysis, Springer.
• Hardy, G. H. (1949), Divergent Series, Oxford University Press, pp. 227–229.
• Hijab, Omar (2011), Introduction to Calculus and Classical Analysis (3rd ed.), Springer.
• Montesinos, Vicente; Zizler, Peter; Zizler, Václav (2015), An Introduction to Modern Analysis, Springer.
• Oberguggenberger, Michael; Ostermann, Alexander (2011), Analysis for Computer Scientists, Springer.
• Pedersen, Steen (2015), From Calculus to Analysis, Springer, doi:10.1007/978-3-319-13641-7, ISBN 978-3-319-13640-0.
• Ponnusamy, S. (2012), Foundations of Mathematical Analysis, Birkhäuser, ISBN 9780817682927.
• Pugh, Charles C. (2015), Real Mathematical Analysis (2nd ed.), Springer.
• Sohrab, Houshang H. (2014), Basic Real Analysis (2nd ed.), Birkhäuser.
External links
• Mathonline. "Cauchy Product of Power Series"..
• Weisstein, Eric W., "Cauchy Product", From MathWorld – A Wolfram Web Resource.
Series multisection
In mathematics, a multisection of a power series is a new power series composed of equally spaced terms extracted unaltered from the original series. Formally, if one is given a power series
$\sum _{n=-\infty }^{\infty }a_{n}\cdot z^{n}$
then its multisection is a power series of the form
$\sum _{m=-\infty }^{\infty }a_{qm+p}\cdot z^{qm+p}$
where p, q are integers, with 0 ≤ p < q. Series multisection represents one of the common transformations of generating functions.
Multisection of analytic functions
A multisection of the series of an analytic function
$f(z)=\sum _{n=0}^{\infty }a_{n}\cdot z^{n}$
has a closed-form expression in terms of the function $f(z)$:
$\sum _{m=0}^{\infty }a_{qm+p}\cdot z^{qm+p}={\frac {1}{q}}\cdot \sum _{k=0}^{q-1}\omega ^{-kp}\cdot f(\omega ^{k}\cdot z),$
where $\omega =e^{\frac {2\pi i}{q}}$ is a primitive q-th root of unity. This expression is often called a root of unity filter. This solution was first discovered by Thomas Simpson.[1] This expression is especially useful in that it can convert an infinite sum into a finite sum. It is used, for example, in a key step of a standard proof of Gauss's digamma theorem, which gives a closed-form solution to the digamma function evaluated at rational values p/q.
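The filter is easy to verify numerically. The following sketch compares a direct partial sum of the multisected Taylor series of the exponential function with the root-of-unity formula, for the arbitrary choices p = 1, q = 3, z = 0.7:

```python
# Root-of-unity filter versus direct summation, for f(z) = exp(z).
import cmath, math

def multisection(f, p, q, z):
    """(1/q) * sum_k w^(-k p) f(w^k z), with w a primitive q-th root of unity."""
    w = cmath.exp(2j * cmath.pi / q)
    return sum(w**(-k * p) * f(w**k * z) for k in range(q)) / q

p, q, z = 1, 3, 0.7
direct = sum(z**(q*m + p) / math.factorial(q*m + p) for m in range(30))
print(direct, multisection(cmath.exp, p, q, z).real)   # both ~0.7100
```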
Examples
Bisection
In general, the bisections of a series are the even and odd parts of the series.
Geometric series
Consider the geometric series
$\sum _{n=0}^{\infty }z^{n}={\frac {1}{1-z}}\quad {\text{ for }}|z|<1.$
By setting $z\rightarrow z^{q}$ in the above series, its multisections are easily seen to be
$\sum _{m=0}^{\infty }z^{qm+p}={\frac {z^{p}}{1-z^{q}}}\quad {\text{ for }}|z|<1.$
Remembering that the sum of the multisections must equal the original series, we recover the familiar identity
$\sum _{p=0}^{q-1}z^{p}={\frac {1-z^{q}}{1-z}}.$
Exponential function
The exponential function
$e^{z}=\sum _{n=0}^{\infty }{z^{n} \over n!}$
by means of the above formula for analytic functions separates into
$\sum _{m=0}^{\infty }{z^{qm+p} \over (qm+p)!}={\frac {1}{q}}\cdot \sum _{k=0}^{q-1}\omega ^{-kp}e^{\omega ^{k}z}.$
The bisections are trivially the hyperbolic functions:
$\sum _{m=0}^{\infty }{z^{2m} \over (2m)!}={\frac {1}{2}}\left(e^{z}+e^{-z}\right)=\cosh {z}$
$\sum _{m=0}^{\infty }{z^{2m+1} \over (2m+1)!}={\frac {1}{2}}\left(e^{z}-e^{-z}\right)=\sinh {z}.$
Higher order multisections are found by noting that all such series must be real-valued along the real line. By taking the real part and using standard trigonometric identities, the formulas may be written in explicitly real form as
$\sum _{m=0}^{\infty }{z^{qm+p} \over (qm+p)!}={\frac {1}{q}}\cdot \sum _{k=0}^{q-1}e^{z\cos(2\pi k/q)}\cos {\left(z\sin {\left({\frac {2\pi k}{q}}\right)}-{\frac {2\pi kp}{q}}\right)}.$
These can be seen as solutions to the linear differential equation $f^{(q)}(z)=f(z)$ with boundary conditions $f^{(k)}(0)=\delta _{k,p}$, using Kronecker delta notation. In particular, the trisections are
$\sum _{m=0}^{\infty }{z^{3m} \over (3m)!}={\frac {1}{3}}\left(e^{z}+2e^{-z/2}\cos {\frac {{\sqrt {3}}z}{2}}\right)$
$\sum _{m=0}^{\infty }{z^{3m+1} \over (3m+1)!}={\frac {1}{3}}\left(e^{z}-e^{-z/2}\left(\cos {\frac {{\sqrt {3}}z}{2}}-{\sqrt {3}}\sin {\frac {{\sqrt {3}}z}{2}}\right)\right)$
$\sum _{m=0}^{\infty }{z^{3m+2} \over (3m+2)!}={\frac {1}{3}}\left(e^{z}-e^{-z/2}\left(\cos {\frac {{\sqrt {3}}z}{2}}+{\sqrt {3}}\sin {\frac {{\sqrt {3}}z}{2}}\right)\right),$
and the quadrisections are
$\sum _{m=0}^{\infty }{z^{4m} \over (4m)!}={\frac {1}{2}}\left(\cosh {z}+\cos {z}\right)$
$\sum _{m=0}^{\infty }{z^{4m+1} \over (4m+1)!}={\frac {1}{2}}\left(\sinh {z}+\sin {z}\right)$
$\sum _{m=0}^{\infty }{z^{4m+2} \over (4m+2)!}={\frac {1}{2}}\left(\cosh {z}-\cos {z}\right)$
$\sum _{m=0}^{\infty }{z^{4m+3} \over (4m+3)!}={\frac {1}{2}}\left(\sinh {z}-\sin {z}\right).$
Binomial series
Multisection of a binomial expansion
$(1+x)^{n}={n \choose 0}x^{0}+{n \choose 1}x+{n \choose 2}x^{2}+\cdots $
at x = 1 gives the following identity for the sum of binomial coefficients with step q:
${n \choose p}+{n \choose p+q}+{n \choose p+2q}+\cdots ={\frac {1}{q}}\cdot \sum _{k=0}^{q-1}\left(2\cos {\frac {\pi k}{q}}\right)^{n}\cdot \cos {\frac {\pi (n-2p)k}{q}}.$
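A quick numerical check of this identity, for the arbitrary choices n = 10, q = 4, p = 1 (where the left side is C(10,1) + C(10,5) + C(10,9) = 272):

```python
# Both sides of the binomial multisection identity above.
import math

n, q, p = 10, 4, 1
lhs = sum(math.comb(n, r) for r in range(p, n + 1, q))
rhs = sum((2 * math.cos(math.pi * k / q))**n
          * math.cos(math.pi * (n - 2 * p) * k / q) for k in range(q)) / q
print(lhs, round(rhs))   # 272 272
```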
References
1. Simpson, Thomas (1757). "CIII. The invention of a general method for determining the sum of every 2d, 3d, 4th, or 5th, &c. term of a series, taken in order; the sum of the whole series being known". Philosophical Transactions of the Royal Society of London. 51: 757–759. doi:10.1098/rstl.1757.0104.
• Weisstein, Eric W. "Series Multisection". MathWorld.
• Somos, Michael A Multisection of q-Series, 2006.
• John Riordan (1968). Combinatorial identities. New York: John Wiley and Sons.
Basel problem
The Basel problem is a problem in mathematical analysis with relevance to number theory, concerning an infinite sum of inverse squares. It was first posed by Pietro Mengoli in 1650 and solved by Leonhard Euler in 1734,[1] and read on 5 December 1735 in The Saint Petersburg Academy of Sciences.[2] Since the problem had withstood the attacks of the leading mathematicians of the day, Euler's solution brought him immediate fame when he was twenty-eight. Euler generalised the problem considerably, and his ideas were taken up years later by Bernhard Riemann in his seminal 1859 paper "On the Number of Primes Less Than a Given Magnitude", in which he defined his zeta function and proved its basic properties. The problem is named after Basel, hometown of Euler as well as of the Bernoulli family who unsuccessfully attacked the problem.
The Basel problem asks for the precise summation of the reciprocals of the squares of the natural numbers, i.e. the precise sum of the infinite series:
$\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\cdots .$
The sum of the series is approximately equal to 1.644934.[3] The Basel problem asks for the exact sum of this series (in closed form), as well as a proof that this sum is correct. Euler found the exact sum to be $\pi ^{2}/6$ and announced this discovery in 1735. His arguments were based on manipulations that were not justified at the time, although he was later proven correct. He produced an accepted proof in 1741.
The solution to this problem can be used to estimate the probability that two large random numbers are coprime. Two random integers in the range from 1 to $n$, in the limit as $n$ goes to infinity, are relatively prime with a probability that approaches $6/\pi ^{2}$, the reciprocal of the solution to the Basel problem.[4]
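This probabilistic statement is easy to test empirically; the following Monte Carlo sketch (the range and sample size are arbitrary choices) estimates the proportion of coprime pairs and compares it with 6/π²:

```python
# Estimate the probability that two random integers are coprime.
import math, random

random.seed(0)
n, trials = 10**6, 100_000
hits = sum(math.gcd(random.randint(1, n), random.randint(1, n)) == 1
           for _ in range(trials))
print(hits / trials, 6 / math.pi**2)   # ~0.608 versus 0.6079...
```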
Euler's approach
Euler's original derivation of the value $\pi ^{2}/6$ essentially extended observations about finite polynomials and assumed that these same properties hold true for infinite series.
Of course, Euler's original reasoning requires justification (100 years later, Karl Weierstrass proved that Euler's representation of the sine function as an infinite product is valid, by the Weierstrass factorization theorem), but even without justification, by simply obtaining the correct value, he was able to verify it numerically against partial sums of the series. The agreement he observed gave him sufficient confidence to announce his result to the mathematical community.
To follow Euler's argument, recall the Taylor series expansion of the sine function
$\sin x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots $
Dividing through by $x$ gives
${\frac {\sin x}{x}}=1-{\frac {x^{2}}{3!}}+{\frac {x^{4}}{5!}}-{\frac {x^{6}}{7!}}+\cdots .$
The Weierstrass factorization theorem shows that the left-hand side is the product of linear factors given by its roots, just as for finite polynomials. Euler assumed this as a heuristic for expanding an infinite-degree polynomial in terms of its roots, but it does not in fact always hold for a general $P(x)$.[5] This factorization expands the equation into:
${\begin{aligned}{\frac {\sin x}{x}}&=\left(1-{\frac {x}{\pi }}\right)\left(1+{\frac {x}{\pi }}\right)\left(1-{\frac {x}{2\pi }}\right)\left(1+{\frac {x}{2\pi }}\right)\left(1-{\frac {x}{3\pi }}\right)\left(1+{\frac {x}{3\pi }}\right)\cdots \\&=\left(1-{\frac {x^{2}}{\pi ^{2}}}\right)\left(1-{\frac {x^{2}}{4\pi ^{2}}}\right)\left(1-{\frac {x^{2}}{9\pi ^{2}}}\right)\cdots \end{aligned}}$
If we formally multiply out this product and collect all the x2 terms (we are allowed to do so because of Newton's identities), we see by induction that the x2 coefficient of sin x/x is [6]
$-\left({\frac {1}{\pi ^{2}}}+{\frac {1}{4\pi ^{2}}}+{\frac {1}{9\pi ^{2}}}+\cdots \right)=-{\frac {1}{\pi ^{2}}}\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}.$
But from the original infinite series expansion of sin x/x, the coefficient of x2 is −1/3! = −1/6. These two coefficients must be equal; thus,
$-{\frac {1}{6}}=-{\frac {1}{\pi ^{2}}}\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}.$
Multiplying both sides of this equation by −π2 gives the sum of the reciprocals of the positive square integers.
$\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {\pi ^{2}}{6}}.$
This method of calculating $\zeta (2)$ is detailed in expository fashion most notably in Havil's Gamma book which details many zeta function and logarithm-related series and integrals, as well as a historical perspective, related to the Euler gamma constant.[7]
Generalizations of Euler's method using elementary symmetric polynomials
Using formulae obtained from elementary symmetric polynomials,[8] this same approach can be used to enumerate formulae for the even-indexed zeta constants, which have the following known formula expanded by the Bernoulli numbers:
$\zeta (2n)={\frac {(-1)^{n-1}(2\pi )^{2n}}{2\cdot (2n)!}}B_{2n}.$
For example, let the partial product for $\sin(x)$ expanded as above be defined by ${\frac {S_{n}(x)}{x}}:=\prod \limits _{k=1}^{n}\left(1-{\frac {x^{2}}{k^{2}\cdot \pi ^{2}}}\right)$. Then using known formulas for elementary symmetric polynomials (a.k.a., Newton's formulas expanded in terms of power sum identities), we can see (for example) that
${\begin{aligned}\left[x^{4}\right]{\frac {S_{n}(x)}{x}}&={\frac {1}{2\pi ^{4}}}\left(\left(H_{n}^{(2)}\right)^{2}-H_{n}^{(4)}\right)\qquad \xrightarrow {n\rightarrow \infty } \qquad {\frac {1}{2\pi ^{4}}}\left(\zeta (2)^{2}-\zeta (4)\right)\\[4pt]&\qquad \implies \zeta (4)={\frac {\pi ^{4}}{90}}=-2\pi ^{4}\cdot [x^{4}]{\frac {\sin(x)}{x}}+{\frac {\pi ^{4}}{36}}\\[8pt]\left[x^{6}\right]{\frac {S_{n}(x)}{x}}&=-{\frac {1}{6\pi ^{6}}}\left(\left(H_{n}^{(2)}\right)^{3}-2H_{n}^{(2)}H_{n}^{(4)}+2H_{n}^{(6)}\right)\qquad \xrightarrow {n\rightarrow \infty } \qquad {\frac {1}{6\pi ^{6}}}\left(\zeta (2)^{3}-3\zeta (2)\zeta (4)+2\zeta (6)\right)\\[4pt]&\qquad \implies \zeta (6)={\frac {\pi ^{6}}{945}}=-3\cdot \pi ^{6}[x^{6}]{\frac {\sin(x)}{x}}-{\frac {2}{3}}{\frac {\pi ^{2}}{6}}{\frac {\pi ^{4}}{90}}+{\frac {\pi ^{6}}{216}},\end{aligned}}$
and so on for subsequent coefficients of $[x^{2k}]{\frac {S_{n}(x)}{x}}$. There are other forms of Newton's identities expressing the (finite) power sums $H_{n}^{(2k)}$ in terms of the elementary symmetric polynomials, $e_{i}\equiv e_{i}\left(-{\frac {\pi ^{2}}{1^{2}}},-{\frac {\pi ^{2}}{2^{2}}},-{\frac {\pi ^{2}}{3^{2}}},-{\frac {\pi ^{2}}{4^{2}}},\ldots \right),$ but we can go a more direct route to expressing non-recursive formulas for $\zeta (2k)$ using the method of elementary symmetric polynomials. Namely, we have a recurrence relation between the elementary symmetric polynomials and the power sum polynomials given as on this page by
$(-1)^{k}ke_{k}(x_{1},\ldots ,x_{n})=\sum _{j=1}^{k}(-1)^{k-j-1}p_{j}(x_{1},\ldots ,x_{n})e_{k-j}(x_{1},\ldots ,x_{n}),$
which in our situation equates to the limiting recurrence relation (or generating function convolution, or product) expanded as
${\frac {\pi ^{2k}}{2}}\cdot {\frac {(2k)\cdot (-1)^{k}}{(2k+1)!}}=-[x^{2k}]{\frac {\sin(\pi x)}{\pi x}}\times \sum _{i\geq 1}\zeta (2i)x^{i}.$
Then by differentiation and rearrangement of the terms in the previous equation, we obtain that
$\zeta (2k)=[x^{2k}]{\frac {1}{2}}\left(1-\pi x\cot(\pi x)\right).$
Consequences of Euler's proof
By the above results, we can conclude that $\zeta (2k)$ is always a rational multiple of $\pi ^{2k}$. In particular, since $\pi $ and integer powers of it are transcendental, we can conclude at this point that $\zeta (2k)$ is irrational, and more precisely, transcendental for all $k\geq 1$. By contrast, the properties of the odd-indexed zeta constants, including Apéry's constant $\zeta (3)$, are almost completely unknown.
The Riemann zeta function
The Riemann zeta function ζ(s) is one of the most significant functions in mathematics because of its relationship to the distribution of the prime numbers. The zeta function is defined for any complex number s with real part greater than 1 by the following formula:
$\zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}.$
Taking s = 2, we see that ζ(2) is equal to the sum of the reciprocals of the squares of all positive integers:
$\zeta (2)=\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+{\frac {1}{4^{2}}}+\cdots ={\frac {\pi ^{2}}{6}}\approx 1.644934.$
Convergence can be proven by the integral test, or by the following inequality:
${\begin{aligned}\sum _{n=1}^{N}{\frac {1}{n^{2}}}&<1+\sum _{n=2}^{N}{\frac {1}{n(n-1)}}\\&=1+\sum _{n=2}^{N}\left({\frac {1}{n-1}}-{\frac {1}{n}}\right)\\&=1+1-{\frac {1}{N}}\;{\stackrel {N\to \infty }{\longrightarrow }}\;2.\end{aligned}}$
This gives us the upper bound 2, and because the infinite sum contains no negative terms, it must converge to a value strictly between 0 and 2. It can be shown that ζ(s) has a simple expression in terms of the Bernoulli numbers whenever s is a positive even integer. With s = 2n:[9]
$\zeta (2n)={\frac {(2\pi )^{2n}(-1)^{n+1}B_{2n}}{2\cdot (2n)!}}.$
A proof using Euler's formula and L'Hôpital's rule
The normalized sinc function ${\text{sinc}}(x)={\frac {\sin(\pi x)}{\pi x}}$ has a Weierstrass factorization representation as an infinite product:
${\frac {\sin(\pi x)}{\pi x}}=\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{n^{2}}}\right).$
The infinite product is analytic, so taking the natural logarithm of both sides and differentiating yields
${\frac {\pi \cos(\pi x)}{\sin(\pi x)}}-{\frac {1}{x}}=-\sum _{n=1}^{\infty }{\frac {2x}{n^{2}-x^{2}}}$
(by uniform convergence, the interchange of the derivative and infinite series is permissible). After dividing the equation by $2x$ and regrouping one gets
${\frac {1}{2x^{2}}}-{\frac {\pi \cot(\pi x)}{2x}}=\sum _{n=1}^{\infty }{\frac {1}{n^{2}-x^{2}}}.$
We make a change of variables ($x=-it$):
$-{\frac {1}{2t^{2}}}+{\frac {\pi \cot(-\pi it)}{2it}}=\sum _{n=1}^{\infty }{\frac {1}{n^{2}+t^{2}}}.$
Euler's formula can be used to deduce that
${\frac {\pi \cot(-\pi it)}{2it}}={\frac {\pi }{2it}}{\frac {i\left(e^{2\pi t}+1\right)}{e^{2\pi t}-1}}={\frac {\pi }{2t}}+{\frac {\pi }{t\left(e^{2\pi t}-1\right)}}.$
or using the corresponding hyperbolic function:
${\frac {\pi \cot(-\pi it)}{2it}}={\frac {\pi }{2t}}{i\cot(\pi it)}={\frac {\pi }{2t}}\coth(\pi t).$
Then
$\sum _{n=1}^{\infty }{\frac {1}{n^{2}+t^{2}}}={\frac {\pi \left(te^{2\pi t}+t\right)-e^{2\pi t}+1}{2\left(t^{2}e^{2\pi t}-t^{2}\right)}}=-{\frac {1}{2t^{2}}}+{\frac {\pi }{2t}}\coth(\pi t).$
Now we take the limit as $t$ approaches zero and use L'Hôpital's rule thrice. By Tannery's theorem applied to $ \lim _{t\to \infty }\sum _{n=1}^{\infty }1/(n^{2}+1/t^{2})$, we can interchange the limit and infinite series so that $ \lim _{t\to 0}\sum _{n=1}^{\infty }1/(n^{2}+t^{2})=\sum _{n=1}^{\infty }1/n^{2}$ and by L'Hôpital's rule
${\begin{aligned}\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}&=\lim _{t\to 0}{\frac {\pi }{4}}{\frac {2\pi te^{2\pi t}-e^{2\pi t}+1}{\pi t^{2}e^{2\pi t}+te^{2\pi t}-t}}\\[6pt]&=\lim _{t\to 0}{\frac {\pi ^{3}te^{2\pi t}}{2\pi \left(\pi t^{2}e^{2\pi t}+2te^{2\pi t}\right)+e^{2\pi t}-1}}\\[6pt]&=\lim _{t\to 0}{\frac {\pi ^{2}(2\pi t+1)}{4\pi ^{2}t^{2}+12\pi t+6}}\\[6pt]&={\frac {\pi ^{2}}{6}}.\end{aligned}}$
A proof using Fourier series
Use Parseval's identity (applied to the function f(x) = x) to obtain
$\sum _{n=-\infty }^{\infty }|c_{n}|^{2}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }x^{2}\,dx,$
where
${\begin{aligned}c_{n}&={\frac {1}{2\pi }}\int _{-\pi }^{\pi }xe^{-inx}\,dx\\[4pt]&={\frac {n\pi \cos(n\pi )-\sin(n\pi )}{\pi n^{2}}}i\\[4pt]&={\frac {\cos(n\pi )}{n}}i\\[4pt]&={\frac {(-1)^{n}}{n}}i\end{aligned}}$
for n ≠ 0, and c0 = 0. Thus,
$|c_{n}|^{2}={\begin{cases}{\dfrac {1}{n^{2}}},&{\text{for }}n\neq 0,\\0,&{\text{for }}n=0,\end{cases}}$
and
$\sum _{n=-\infty }^{\infty }|c_{n}|^{2}=2\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }x^{2}\,dx.$
Therefore,
$\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {1}{4\pi }}\int _{-\pi }^{\pi }x^{2}\,dx={\frac {\pi ^{2}}{6}}$
as required.
Another proof using Parseval's identity
Given a complete orthonormal basis in the space $L_{\operatorname {per} }^{2}(0,1)$ of L2 periodic functions over $(0,1)$ (i.e., the subspace of square-integrable functions which are also periodic), denoted by $\{e_{i}\}_{i=-\infty }^{\infty }$, Parseval's identity tells us that
$\|x\|^{2}=\sum _{i=-\infty }^{\infty }|\langle e_{i},x\rangle |^{2},$
where $\|x\|:={\sqrt {\langle x,x\rangle }}$ is defined in terms of the inner product on this Hilbert space given by
$\langle f,g\rangle =\int _{0}^{1}f(x){\overline {g(x)}}\,dx,\ f,g\in L_{\operatorname {per} }^{2}(0,1).$
We can consider the orthonormal basis on this space defined by $e_{k}\equiv e_{k}(\vartheta ):=\exp(2\pi \imath k\vartheta )$ such that $\langle e_{k},e_{j}\rangle =\int _{0}^{1}e^{2\pi \imath (k-j)\vartheta }\,d\vartheta =\delta _{k,j}$. Then if we take $f(\vartheta ):=\vartheta $, we can compute both that
${\begin{aligned}\|f\|^{2}&=\int _{0}^{1}\vartheta ^{2}\,d\vartheta ={\frac {1}{3}}\\\langle f,e_{k}\rangle &=\int _{0}^{1}\vartheta e^{-2\pi \imath k\vartheta }\,d\vartheta ={\Biggl \{}{\begin{array}{ll}{\frac {1}{2}},&k=0\\-{\frac {1}{2\pi \imath k}}&k\neq 0,\end{array}}\end{aligned}}$
by elementary calculus and integration by parts, respectively. Finally, by Parseval's identity stated in the form above, we obtain that
${\begin{aligned}\|f\|^{2}={\frac {1}{3}}&=\sum _{\stackrel {k=-\infty }{k\neq 0}}^{\infty }{\frac {1}{(2\pi k)^{2}}}+{\frac {1}{4}}=2\sum _{k=1}^{\infty }{\frac {1}{(2\pi k)^{2}}}+{\frac {1}{4}}\\&\implies {\frac {\pi ^{2}}{6}}={\frac {2\pi ^{2}}{3}}-{\frac {\pi ^{2}}{2}}=\zeta (2).\end{aligned}}$
Generalizations and recurrence relations
Note that by considering higher-order powers of $f_{j}(\vartheta ):=\vartheta ^{j}\in L_{\operatorname {per} }^{2}(0,1)$ we can use integration by parts to extend this method to enumerating formulas for $\zeta (2j)$ when $j>1$. In particular, suppose we let
$I_{j,k}:=\int _{0}^{1}\vartheta ^{j}e^{-2\pi \imath k\vartheta }\,d\vartheta ,$
so that integration by parts yields the recurrence relation that
${\begin{aligned}I_{j,k}&={\begin{cases}{\frac {1}{j+1}},&k=0;\\[4pt]-{\frac {1}{2\pi \imath \cdot k}}+{\frac {j}{2\pi \imath \cdot k}}I_{j-1,k},&k\neq 0\end{cases}}\\[6pt]&={\begin{cases}{\frac {1}{j+1}},&k=0;\\[4pt]-\sum \limits _{m=1}^{j}{\frac {j!}{(j+1-m)!}}\cdot {\frac {1}{(2\pi \imath \cdot k)^{m}}},&k\neq 0.\end{cases}}\end{aligned}}$
Then by applying Parseval's identity as we did for the first case above along with the linearity of the inner product yields that
${\begin{aligned}\|f_{j}\|^{2}={\frac {1}{2j+1}}&=2\sum _{k\geq 1}I_{j,k}{\bar {I}}_{j,k}+{\frac {1}{(j+1)^{2}}}\\[6pt]&=2\sum _{m=1}^{j}\sum _{r=1}^{j}{\frac {j!^{2}}{(j+1-m)!(j+1-r)!}}{\frac {(-1)^{r}}{\imath ^{m+r}}}{\frac {\zeta (m+r)}{(2\pi )^{m+r}}}+{\frac {1}{(j+1)^{2}}}.\end{aligned}}$
Cauchy's proof
While most proofs use results from advanced mathematics, such as Fourier analysis, complex analysis, and multivariable calculus, the following does not even require single-variable calculus (until a single limit is taken at the end).
A proof using the residue theorem is also possible.
History of this proof
The proof goes back to Augustin Louis Cauchy (Cours d'Analyse, 1821, Note VIII). In 1954, this proof appeared in the book of Akiva and Isaak Yaglom "Nonelementary Problems in an Elementary Exposition". Later, in 1982, it appeared in the journal Eureka,[10] attributed to John Scholes, but Scholes claims he learned the proof from Peter Swinnerton-Dyer, and in any case he maintains the proof was "common knowledge at Cambridge in the late 1960s".
The proof
The main idea behind the proof is to bound the partial (finite) sums
$\sum _{k=1}^{m}{\frac {1}{k^{2}}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{m^{2}}}$
between two expressions, each of which will tend to π²/6 as m approaches infinity. The two expressions are derived from identities involving the cotangent and cosecant functions. These identities are in turn derived from de Moivre's formula, and we now turn to establishing these identities.
Let x be a real number with 0 < x < π/2, and let n be a positive odd integer. Then from de Moivre's formula and the definition of the cotangent function, we have
${\begin{aligned}{\frac {\cos(nx)+i\sin(nx)}{\sin ^{n}x}}&={\frac {(\cos x+i\sin x)^{n}}{\sin ^{n}x}}\\[4pt]&=\left({\frac {\cos x+i\sin x}{\sin x}}\right)^{n}\\[4pt]&=(\cot x+i)^{n}.\end{aligned}}$
From the binomial theorem, we have
${\begin{aligned}(\cot x+i)^{n}=&{n \choose 0}\cot ^{n}x+{n \choose 1}(\cot ^{n-1}x)i+\cdots +{n \choose {n-1}}(\cot x)i^{n-1}+{n \choose n}i^{n}\\[6pt]=&{\Bigg (}{n \choose 0}\cot ^{n}x-{n \choose 2}\cot ^{n-2}x\pm \cdots {\Bigg )}\;+\;i{\Bigg (}{n \choose 1}\cot ^{n-1}x-{n \choose 3}\cot ^{n-3}x\pm \cdots {\Bigg )}.\end{aligned}}$
Combining the two equations and equating imaginary parts gives the identity
${\frac {\sin(nx)}{\sin ^{n}x}}={\Bigg (}{n \choose 1}\cot ^{n-1}x-{n \choose 3}\cot ^{n-3}x\pm \cdots {\Bigg )}.$
We take this identity, fix a positive integer m, set n = 2m + 1, and consider $x_{r}={\frac {r\pi }{2m+1}}$ for r = 1, 2, ..., m. Then $nx_{r}$ is a multiple of π and therefore $\sin(nx_{r})=0$. So,
$0={{2m+1} \choose 1}\cot ^{2m}x_{r}-{{2m+1} \choose 3}\cot ^{2m-2}x_{r}\pm \cdots +(-1)^{m}{{2m+1} \choose {2m+1}}$
for every r = 1, 2, ..., m. The values $x_{1},x_{2},\ldots ,x_{m}$ are distinct numbers in the interval $0<x_{r}<\pi /2$. Since the function $\cot ^{2}x$ is one-to-one on this interval, the numbers $t_{r}=\cot ^{2}x_{r}$ are distinct for r = 1, 2, ..., m. By the above equation, these m numbers are the roots of the mth degree polynomial
$p(t)={{2m+1} \choose 1}t^{m}-{{2m+1} \choose 3}t^{m-1}\pm \cdots +(-1)^{m}{{2m+1} \choose {2m+1}}.$
By Vieta's formulas we can calculate the sum of the roots directly by examining the first two coefficients of the polynomial, and this comparison shows that
$\cot ^{2}x_{1}+\cot ^{2}x_{2}+\cdots +\cot ^{2}x_{m}={\frac {\binom {2m+1}{3}}{\binom {2m+1}{1}}}={\frac {2m(2m-1)}{6}}.$
Substituting the identity $\csc ^{2}x=\cot ^{2}x+1$, we have
$\csc ^{2}x_{1}+\csc ^{2}x_{2}+\cdots +\csc ^{2}x_{m}={\frac {2m(2m-1)}{6}}+m={\frac {2m(2m+2)}{6}}.$
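As a quick check, for m = 1 the single angle is $x_{1}=\pi /3$, and indeed $\cot ^{2}(\pi /3)={\tfrac {1}{3}}={\tfrac {2m(2m-1)}{6}}$ and $\csc ^{2}(\pi /3)={\tfrac {4}{3}}={\tfrac {2m(2m+2)}{6}}$.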
Now consider the inequality $\cot ^{2}x<{\frac {1}{x^{2}}}<\csc ^{2}x$ (illustrated geometrically above). If we add up all these inequalities for each of the numbers $x_{r}={\frac {r\pi }{2m+1}}$, and if we use the two identities above, we get
${\frac {2m(2m-1)}{6}}<\left({\frac {2m+1}{\pi }}\right)^{2}+\left({\frac {2m+1}{2\pi }}\right)^{2}+\cdots +\left({\frac {2m+1}{m\pi }}\right)^{2}<{\frac {2m(2m+2)}{6}}.$
Multiplying through by $\left({\frac {\pi }{2m+1}}\right)^{2}$, this becomes
${\frac {\pi ^{2}}{6}}\left({\frac {2m}{2m+1}}\right)\left({\frac {2m-1}{2m+1}}\right)<{\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{m^{2}}}<{\frac {\pi ^{2}}{6}}\left({\frac {2m}{2m+1}}\right)\left({\frac {2m+2}{2m+1}}\right).$
As m approaches infinity, the left- and right-hand expressions each approach π²/6, so by the squeeze theorem,
$\zeta (2)=\sum _{k=1}^{\infty }{\frac {1}{k^{2}}}=\lim _{m\to \infty }\left({\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{m^{2}}}\right)={\frac {\pi ^{2}}{6}}$
and this completes the proof.
Proof assuming Weil's conjecture on Tamagawa numbers
A proof is also possible assuming Weil's conjecture on Tamagawa numbers.[11] The conjecture asserts, for the case of the algebraic group SL2 over the rationals, that the Tamagawa number of the group is one. That is, the quotient of the special linear group over the rational adeles by the special linear group of the rationals (a set of finite Tamagawa measure, because $SL_{2}(\mathbb {Q} )$ is a lattice in the adeles) has Tamagawa measure 1:
$\tau (SL_{2}(\mathbb {Q} )\setminus SL_{2}(A_{\mathbb {Q} }))=1.$
To determine a Tamagawa measure, note that the group $SL_{2}$ consists of matrices
${\begin{bmatrix}x&y\\z&t\end{bmatrix}}$
with $xt-yz=1$. An invariant volume form on the group is
$\omega ={\frac {1}{x}}dx\wedge dy\wedge dz.$
The measure of the quotient is the product of the measure of $SL_{2}(\mathbb {Z} )\setminus SL_{2}(\mathbb {R} )$ at the infinite place and the measures of $SL_{2}(\mathbb {Z} _{p})$ at each finite place, where $\mathbb {Z} _{p}$ is the p-adic integers.
For the local factors,
$\omega (SL_{2}(\mathbb {Z} _{p}))=|SL_{2}(F_{p})|\omega (SL_{2}(\mathbb {Z} _{p},p))$
where $F_{p}$ is the field with $p$ elements, and $SL_{2}(\mathbb {Z} _{p},p)$ is the congruence subgroup modulo $p$. Since each of the coordinates $x,y,z$ maps the latter group onto $p\mathbb {Z} _{p}$ and $\left|{\frac {1}{x}}\right|_{p}=1$, the measure of $SL_{2}(\mathbb {Z} _{p},p)$ is $\mu _{p}(p\mathbb {Z} _{p})^{3}=p^{-3}$, where $\mu _{p}$ is the normalized Haar measure on $\mathbb {Z} _{p}$. Also, a standard computation shows that $|SL_{2}(F_{p})|=p(p^{2}-1)$. Putting these together gives $\omega (SL_{2}(\mathbb {Z} _{p}))=(1-1/p^{2})$.
At the infinite place, an integral computation over the fundamental domain of $SL_{2}(\mathbb {Z} )$ shows that $\omega (SL_{2}(\mathbb {Z} )\setminus SL_{2}(\mathbb {R} ))=\pi ^{2}/6$, and therefore the Weil conjecture finally gives
$1={\frac {\pi ^{2}}{6}}\prod _{p}\left(1-{\frac {1}{p^{2}}}\right).$
On the right-hand side, we recognize the Euler product for $1/\zeta (2)$, and so this gives the solution to the Basel problem.
This approach shows the connection between (hyperbolic) geometry and arithmetic, and can be inverted to give a proof of the Weil conjecture for the special case of $SL_{2}$, contingent on an independent proof that $\zeta (2)=\pi ^{2}/6$.
Other identities
See the special cases of the identities for the Riemann zeta function when $s=2.$ Other notable special identities and representations of this constant appear in the sections below.
Series representations
The following are series representations of the constant:[12]
${\begin{aligned}\zeta (2)&=3\sum _{k=1}^{\infty }{\frac {1}{k^{2}{\binom {2k}{k}}}}\\[6pt]&=\sum _{i=1}^{\infty }\sum _{j=1}^{\infty }{\frac {(i-1)!(j-1)!}{(i+j)!}}.\end{aligned}}$
There are also BBP-type series expansions for ζ(2).[12]
Integral representations
The following are integral representations of $\zeta (2)$:[13][14][15]
${\begin{aligned}\zeta (2)&=-\int _{0}^{1}{\frac {\log x}{1-x}}\,dx\\[6pt]&=\int _{0}^{\infty }{\frac {x}{e^{x}-1}}\,dx\\[6pt]&=\int _{0}^{1}{\frac {(\log x)^{2}}{(1+x)^{2}}}\,dx\\[6pt]&=2+2\int _{1}^{\infty }{\frac {\lfloor x\rfloor -x}{x^{3}}}\,dx\\[6pt]&=\exp \left(2\int _{2}^{\infty }{\frac {\pi (x)}{x(x^{2}-1)}}\,dx\right)\\[6pt]&=\int _{0}^{1}\int _{0}^{1}{\frac {dx\,dy}{1-xy}}\\[6pt]&={\frac {4}{3}}\int _{0}^{1}\int _{0}^{1}{\frac {dx\,dy}{1-(xy)^{2}}}\\[6pt]&=\int _{0}^{1}\int _{0}^{1}{\frac {1-x}{1-xy}}\,dx\,dy+{\frac {2}{3}}.\end{aligned}}$
Continued fractions
In van der Poorten's classic article chronicling Apéry's proof of the irrationality of $\zeta (3)$,[16] the author notes several parallels in proving the irrationality of $\zeta (2)$ to Apéry's proof. In particular, he documents recurrence relations for almost integer sequences converging to the constant and continued fractions for the constant. Other continued fractions for this constant include[17]
${\frac {\zeta (2)}{2}}={\cfrac {1}{v_{1}-{\cfrac {1^{4}}{v_{2}-{\cfrac {2^{4}}{v_{3}-{\cfrac {3^{4}}{v_{4}-\ddots }}}}}}}},$
and[18]
${\frac {\zeta (2)}{5}}={\cfrac {1}{{\widetilde {v}}_{1}-{\cfrac {1^{4}}{{\widetilde {v}}_{2}-{\cfrac {2^{4}}{{\widetilde {v}}_{3}-{\cfrac {3^{4}}{{\widetilde {v}}_{4}-\ddots }}}}}}}},$
where $v_{n}=2n-1\mapsto \{1,3,5,7,9,\ldots \}$ and ${\widetilde {v}}_{n}=11n^{2}-11n+3\mapsto \{3,25,69,135,\ldots \}$.
See also
• Riemann zeta function
• Apéry's constant
• List of sums of reciprocals
Notes
1. Ayoub, Raymond (1974). "Euler and the zeta function". Amer. Math. Monthly. 81 (10): 1067–86. doi:10.2307/2319041. JSTOR 2319041.
2. E41 – De summis serierum reciprocarum
3. Sloane, N. J. A. (ed.). "Sequence A013661". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
4. Vandervelde, Sam (2009). "Chapter 9: Sneaky segments". Circle in a Box. MSRI Mathematical Circles Library. Mathematical Sciences Research Institute and American Mathematical Society. pp. 101–106.
5. A priori, since the left-hand-side is a polynomial (of infinite degree) we can write it as a product of its roots as
${\begin{aligned}\sin(x)&=x(x^{2}-\pi ^{2})(x^{2}-4\pi ^{2})(x^{2}-9\pi ^{2})\cdots \\&=Ax\left(1-{\frac {x^{2}}{\pi ^{2}}}\right)\left(1-{\frac {x^{2}}{4\pi ^{2}}}\right)\left(1-{\frac {x^{2}}{9\pi ^{2}}}\right)\cdots .\end{aligned}}$
Then since we know from elementary calculus that $\lim _{x\rightarrow 0}{\frac {\sin(x)}{x}}=1$, we conclude that the leading constant must satisfy $A=1$.
6. In particular, letting $H_{n}^{(2)}:=\sum _{k=1}^{n}k^{-2}$ denote a generalized second-order harmonic number, we can easily prove by induction that $[x^{2}]\prod _{k=1}^{n}\left(1-{\frac {x^{2}}{k^{2}\pi ^{2}}}\right)=-{\frac {H_{n}^{(2)}}{\pi ^{2}}}\rightarrow -{\frac {\zeta (2)}{\pi ^{2}}}$ as $n\rightarrow \infty $.
7. Havil, J. (2003). Gamma: Exploring Euler's Constant. Princeton, New Jersey: Princeton University Press. pp. 37–42 (Chapter 4). ISBN 0-691-09983-9.
8. Cf., the formulae for generalized Stirling numbers proved in: Schmidt, M. D. (2018). "Combinatorial Identities for Generalized Stirling Numbers Expanding f-Factorial Functions and the f-Harmonic Numbers". J. Integer Seq. 21 (Article 18.2.7).
9. Arakawa, Tsuneo; Ibukiyama, Tomoyoshi; Kaneko, Masanobu (2014). Bernoulli Numbers and Zeta Functions. Springer. p. 61. ISBN 978-4-431-54919-2.
10. Ransford, T J (Summer 1982). "An Elementary Proof of $\sum _{1}^{\infty }{\frac {1}{n^{2}}}={\frac {\pi ^{2}}{6}}$" (PDF). Eureka. 42 (1): 3–4.
11. Vladimir Platonov; Andrei Rapinchuk (1994), Algebraic groups and number theory, translated by Rachel Rowen, Academic Press
12. Weisstein, Eric W. "Riemann Zeta Function ζ(2)". MathWorld. Retrieved 29 April 2018.
13. Connon, D. F. (2007). "Some series and integrals involving the Riemann zeta function, binomial coefficients and the harmonic numbers (Volume I)". arXiv:0710.4022 [math.HO].
14. Weisstein, Eric W. "Double Integral". MathWorld. Retrieved 29 April 2018.
15. Weisstein, Eric W. "Hadjicostas's Formula". MathWorld. Retrieved 29 April 2018.
16. van der Poorten, Alfred (1979), "A proof that Euler missed ... Apéry's proof of the irrationality of ζ(3)" (PDF), The Mathematical Intelligencer, 1 (4): 195–203, doi:10.1007/BF03028234, S2CID 121589323, archived from the original (PDF) on 2011-07-06
17. Berndt, Bruce C. (1989). Ramanujan's Notebooks: Part II. Springer-Verlag. p. 150. ISBN 978-0-387-96794-3.
18. "Continued fractions for Zeta(2) and Zeta(3)". tpiezas: A COLLECTION OF ALGEBRAIC IDENTITIES. 4 May 2012. Retrieved 29 April 2018.
External links
• An infinite series of surprises by C. J. Sangwin
• From ζ(2) to Π. The Proof. step-by-step proof
• "Remarques sur un beau rapport entre les series des puissances tant directes que reciproques" (PDF)., English translation with notes of Euler's paper by Lucas Willis and Thomas J. Osler
• Ed Sandifer. "How Euler did it" (PDF).
• James A. Sellers (February 5, 2002). "Beyond Mere Convergence" (PDF). Retrieved 2004-02-27.
• Robin Chapman. "Evaluating ζ(2)" (PDF). (fourteen proofs)
• Visualization of Euler's factorization of the sine function
• Johan Wästlund (December 8, 2010). "Summing inverse squares by Euclidean geometry" (PDF).
• Why is pi here? And why is it squared? A geometric answer to the Basel problem on YouTube (animated proof based on the above)
Series–parallel graph
In graph theory, series–parallel graphs are graphs with two distinguished vertices called terminals, formed recursively by two simple composition operations. They can be used to model series and parallel electric circuits.
Definition and terminology
In this context, the term graph means multigraph.
There are several ways to define series–parallel graphs. The following definition basically follows the one used by David Eppstein.[1]
A two-terminal graph (TTG) is a graph with two distinguished vertices, s and t called source and sink, respectively.
The parallel composition Pc = Pc(X,Y) of two TTGs X and Y is a TTG created from the disjoint union of graphs X and Y by merging the sources of X and Y to create the source of Pc and merging the sinks of X and Y to create the sink of Pc.
The series composition Sc = Sc(X,Y) of two TTGs X and Y is a TTG created from the disjoint union of graphs X and Y by merging the sink of X with the source of Y. The source of X becomes the source of Sc and the sink of Y becomes the sink of Sc.
A two-terminal series–parallel graph (TTSPG) is a graph that may be constructed by a sequence of series and parallel compositions starting from a set of copies of a single-edge graph K2 with assigned terminals.
Definition 1. A graph is called series–parallel (an SP-graph) if it is a TTSPG when some two of its vertices are regarded as source and sink.
In a similar way one may define series–parallel digraphs, constructed from copies of single-arc graphs, with arcs directed from the source to the sink.
Alternative definition
The following definition specifies the same class of graphs.[2]
Definition 2. A graph is an SP-graph, if it may be turned into K2 by a sequence of the following operations:
• Replacement of a pair of parallel edges with a single edge that connects their common endpoints
• Replacement of a pair of edges incident to a vertex of degree 2 other than s or t with a single edge.
Properties
Every series–parallel graph has treewidth at most 2 and branchwidth at most 2.[3] Indeed, a graph has treewidth at most 2 if and only if it has branchwidth at most 2, if and only if every biconnected component is a series–parallel graph.[4][5] The maximal series–parallel graphs, graphs to which no additional edges can be added without destroying their series–parallel structure, are exactly the 2-trees.
2-connected series–parallel graphs are characterised by having no subgraph homeomorphic to K4.[3]
Series–parallel graphs may also be characterized by their ear decompositions.[1]
Computational complexity
SP-graphs may be recognized in linear time[6] and their series–parallel decomposition may be constructed in linear time as well.
Besides being a model of certain types of electric networks, these graphs are of interest in computational complexity theory, because a number of standard graph problems are solvable in linear time on SP-graphs,[7] including finding the maximum matching, maximum independent set, minimum dominating set and Hamiltonian completion. Some of these problems are NP-complete for general graphs. The solution capitalizes on the fact that if the answers for one of these problems are known for two SP-graphs, then one can quickly find the answer for their series and parallel compositions, as sketched below.
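For maximum independent set, for example, it suffices to track, for each TTG in the decomposition, the best solution under each of the four choices of which terminals are used. A minimal C sketch of this idea (the type and function names here are illustrative, not taken from the cited papers):
#define NEG (-1000000) /* marks an infeasible terminal state */

/* best[a][b] = largest independent set, given whether the source (a = 1)
   and the sink (b = 1) are included; the answer for a whole graph is the
   maximum of best[a][b] over the four states. */
typedef struct { int best[2][2]; } TTG;

static int max2(int x, int y) { return x > y ? x : y; }

/* single edge s-t: the endpoints are adjacent, so both cannot be chosen */
static TTG edge(void) {
    TTG g = { { { 0, 1 }, { 1, NEG } } };
    return g;
}

/* series composition: sink of X merged with source of Y into a middle vertex */
static TTG series(TTG x, TTG y) {
    TTG g;
    for (int a = 0; a < 2; a++)
        for (int b = 0; b < 2; b++)
            g.best[a][b] = max2(x.best[a][0] + y.best[0][b],      /* middle vertex excluded */
                                x.best[a][1] + y.best[1][b] - 1); /* included: counted twice */
    return g;
}

/* parallel composition: sources merged, sinks merged */
static TTG parallel(TTG x, TTG y) {
    TTG g;
    for (int a = 0; a < 2; a++)
        for (int b = 0; b < 2; b++)
            g.best[a][b] = x.best[a][b] + y.best[a][b] - a - b;   /* shared terminals counted twice */
    return g;
}
For instance, series(edge(), edge()) yields best[1][1] = 2, the set consisting of the two outer vertices of a two-edge path.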
Generalization
The generalized series–parallel graphs (GSP-graphs) are an extension of the SP-graphs[8] with the same algorithmic efficiency for the mentioned problems. The class of GSP-graphs includes the classes of SP-graphs and outerplanar graphs.
GSP graphs may be specified by Definition 2 augmented with the third operation of deletion of a dangling vertex (vertex of degree 1). Alternatively, Definition 1 may be augmented with the following operation.
• The source merge S = M(X,Y) of two TTGs X and Y is a TTG created from the disjoint union of graphs X and Y by merging the source of X with the source of Y. The source and sink of X become the source and sink of S respectively.
An SPQR tree is a tree structure that can be defined for an arbitrary 2-vertex-connected graph. It has S-nodes, which are analogous to the series composition operations in series–parallel graphs, P-nodes, which are analogous to the parallel composition operations in series–parallel graphs, and R-nodes, which do not correspond to series–parallel composition operations. A 2-connected graph is series–parallel if and only if there are no R-nodes in its SPQR tree.
See also
• Threshold graph
• Cograph
• Hanner polytope
• Series-parallel partial order
References
1. Eppstein, David (1992). "Parallel recognition of series–parallel graphs" (PDF). Information and Computation. 98 (1): 41–55. doi:10.1016/0890-5401(92)90041-D.
2. Duffin, R. J. (1965). "Topology of Series–Parallel Networks". Journal of Mathematical Analysis and Applications. 10 (2): 303–313. doi:10.1016/0022-247X(65)90125-3.
3. Brandstädt, Andreas; Le, Van Bang; Spinrad, Jeremy P. (1999). Graph classes: a survey. SIAM Monographs on Discrete Mathematics. and Applications. Vol. 3. Philadelphia, PA: Society for Industrial and Applied Mathematics. pp. 172–174. ISBN 978-0-898714-32-6. Zbl 0919.05001.
4. Bodlaender, H. (1998). "A partial k-arboretum of graphs with bounded treewidth". Theoretical Computer Science. 209 (1–2): 1–45. doi:10.1016/S0304-3975(97)00228-4. hdl:1874/18312.
5. Hall, Rhiannon; Oxley, James; Semple, Charles; Whittle, Geoff (2002). "On matroids of branch-width three". Journal of Combinatorial Theory, Series B. 86 (1): 148–171. doi:10.1006/jctb.2002.2120.
6. Valdes, Jacobo; Tarjan, Robert E.; Lawler, Eugene L. (1982). "The recognition of series parallel digraphs". SIAM Journal on Computing. 11 (2): 289–313. doi:10.1137/0211023.
7. Takamizawa, K.; Nishizeki, T.; Saito, N. (1982). "Linear-time computability of combinatorial problems on series–parallel graphs". Journal of the ACM. 29 (3): 623–641. doi:10.1145/322326.322328. S2CID 16082154.
8. Korneyenko, N. M. (1994). "Combinatorial algorithms on a class of graphs". Discrete Applied Mathematics. 54 (2–3): 215–217. doi:10.1016/0166-218X(94)90022-1. Translated from Notices of the BSSR Academy of Sciences, Ser. Phys.-Math. Sci., (1984) no. 3, pp. 109–111 (in Russian)
Convergence tests
In mathematics, convergence tests are methods of testing for the convergence, conditional convergence, absolute convergence, interval of convergence or divergence of an infinite series $\sum _{n=1}^{\infty }a_{n}$.
List of tests
Limit of the summand
If the limit of the summand is undefined or nonzero, that is $\lim _{n\to \infty }a_{n}\neq 0$, then the series must diverge. In this sense, the partial sums are Cauchy only if this limit exists and is equal to zero. The test is inconclusive if the limit of the summand is zero. This is also known as the nth-term test, test for divergence, or the divergence test.
Ratio test
This is also known as d'Alembert's criterion.
Suppose that there exists $r$ such that
$\lim _{n\to \infty }\left|{\frac {a_{n+1}}{a_{n}}}\right|=r.$
If r < 1, then the series is absolutely convergent. If r > 1, then the series diverges. If r = 1, the ratio test is inconclusive, and the series may converge or diverge.
Root test
This is also known as the nth root test or Cauchy's criterion.
Let
$r=\limsup _{n\to \infty }{\sqrt[{n}]{|a_{n}|}},$
where $\limsup $ denotes the limit superior (possibly $\infty $; if the limit exists it is the same value).
If r < 1, then the series converges absolutely. If r > 1, then the series diverges. If r = 1, the root test is inconclusive, and the series may converge or diverge.
The root test is stronger than the ratio test: whenever the ratio test determines the convergence or divergence of an infinite series, the root test does too, but not conversely.[1]
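For example, for $a_{n}=2^{-n+(-1)^{n}}$ the ratio $\left|{\tfrac {a_{n+1}}{a_{n}}}\right|$ alternates between $2$ and ${\tfrac {1}{8}}$, so the ratio-test limit does not exist, while ${\sqrt[{n}]{|a_{n}|}}=2^{-1+(-1)^{n}/n}\to {\tfrac {1}{2}}<1$, so the root test shows that the series converges.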
Integral test
The series can be compared to an integral to establish convergence or divergence. Let $f:[1,\infty )\to \mathbb {R} _{+}$ be a non-negative and monotonically decreasing function such that $f(n)=a_{n}$. If
$\int _{1}^{\infty }f(x)\,dx=\lim _{t\to \infty }\int _{1}^{t}f(x)\,dx<\infty ,$
then the series converges. But if the integral diverges, then the series does so as well. In other words, the series $\sum a_{n}$ converges if and only if the integral converges.
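For example, taking $f(x)={\tfrac {1}{x\ln ^{2}x}}$ (with the lower limit shifted to 2, which does not affect convergence), $\int _{2}^{\infty }{\frac {dx}{x\ln ^{2}x}}=\lim _{t\to \infty }\left({\frac {1}{\ln 2}}-{\frac {1}{\ln t}}\right)={\frac {1}{\ln 2}}<\infty ,$ so $\sum _{n=2}^{\infty }{\frac {1}{n\ln ^{2}n}}$ converges, even though the ratio and root tests are both inconclusive for this series.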
p-series test
A commonly used corollary of the integral test is the p-series test. Let $k$ be a positive integer. Then $\sum _{n=k}^{\infty }{\frac {1}{n^{p}}}$ converges if $p>1$ and diverges if $p\leq 1$.
The case of $p=1,k=1$ yields the harmonic series, which diverges. The case of $p=2,k=1$ is the Basel problem and the series converges to ${\frac {\pi ^{2}}{6}}$. In general, for $p>1,k=1$, the series is equal to the Riemann zeta function applied to $p$, that is $\zeta (p)$.
Direct comparison test
If the series $\sum _{n=1}^{\infty }b_{n}$ is an absolutely convergent series and $|a_{n}|\leq |b_{n}|$ for sufficiently large n , then the series $\sum _{n=1}^{\infty }a_{n}$ converges absolutely.
Limit comparison test
If $\{a_{n}\},\{b_{n}\}>0$, (that is, each element of the two sequences is positive) and the limit $\lim _{n\to \infty }{\frac {a_{n}}{b_{n}}}$ exists, is finite and non-zero, then $\sum _{n=1}^{\infty }a_{n}$ diverges if and only if $\sum _{n=1}^{\infty }b_{n}$ diverges.
Cauchy condensation test
Let $\left\{a_{n}\right\}$ be a non-negative non-increasing sequence. Then the sum $A=\sum _{n=1}^{\infty }a_{n}$ converges if and only if the sum $A^{*}=\sum _{n=0}^{\infty }2^{n}a_{2^{n}}$ converges. Moreover, if they converge, then $A\leq A^{*}\leq 2A$ holds.
Abel's test
Suppose the following statements are true:
1. $\sum a_{n}$ is a convergent series,
2. $\left\{b_{n}\right\}$ is a monotonic sequence, and
3. $\left\{b_{n}\right\}$ is bounded.
Then $\sum a_{n}b_{n}$ is also convergent.
Absolute convergence test
Every absolutely convergent series converges.
Alternating series test
Suppose the following statements are true:
• $a_{n}$ are all positive,
• $\lim _{n\to \infty }a_{n}=0$ and
• for every n, $a_{n+1}\leq a_{n}$.
Then $\sum _{n=1}^{\infty }(-1)^{n}a_{n}$ and $\sum _{n=1}^{\infty }(-1)^{n+1}a_{n}$ are convergent series. This test is also known as the Leibniz criterion.
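For example, the alternating harmonic series $\sum _{n=1}^{\infty }{\frac {(-1)^{n+1}}{n}}$ converges by this test (its sum is $\ln 2$), even though the harmonic series itself diverges.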
Dirichlet's test
If $\{a_{n}\}$ is a sequence of real numbers and $\{b_{n}\}$ a sequence of complex numbers satisfying
• $a_{n}\geq a_{n+1}$
• $\lim _{n\rightarrow \infty }a_{n}=0$
• $\left|\sum _{n=1}^{N}b_{n}\right|\leq M$ for every positive integer N
where M is some constant, then the series
$\sum _{n=1}^{\infty }a_{n}b_{n}$
converges.
Cauchy's convergence test
A series $\sum _{i=0}^{\infty }a_{i}$ is convergent if and only if for every $\varepsilon >0$ there is a natural number N such that
$|a_{n+1}+a_{n+2}+\cdots +a_{n+p}|<\varepsilon $
holds for all n > N and all p ≥ 1.
Stolz–Cesàro theorem
Let $(a_{n})_{n\geq 1}$ and $(b_{n})_{n\geq 1}$ be two sequences of real numbers. Assume that $(b_{n})_{n\geq 1}$ is a strictly monotone and divergent sequence and the following limit exists:
$\lim _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}=l.\ $
Then, the limit
$\lim _{n\to \infty }{\frac {a_{n}}{b_{n}}}=l.\ $
Weierstrass M-test
Suppose that (fn) is a sequence of real- or complex-valued functions defined on a set A, and that there is a sequence of non-negative numbers (Mn) satisfying the conditions
• $|f_{n}(x)|\leq M_{n}$ for all $n\geq 1$ and all $x\in A$, and
• $\sum _{n=1}^{\infty }M_{n}$ converges.
Then the series
$\sum _{n=1}^{\infty }f_{n}(x)$
converges absolutely and uniformly on A.
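For example, with $f_{n}(x)={\frac {\sin(nx)}{n^{2}}}$ on $A=\mathbb {R} $, one may take $M_{n}={\frac {1}{n^{2}}}$; since $\sum 1/n^{2}$ converges, the series $\sum _{n=1}^{\infty }{\frac {\sin(nx)}{n^{2}}}$ converges absolutely and uniformly on all of $\mathbb {R} $.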
Extensions to the ratio test
The ratio test may be inconclusive when the limit of the ratio is 1. Extensions to the ratio test, however, sometimes allow one to deal with this case.
Raabe–Duhamel's test
Let { an } be a sequence of positive numbers.
Define
$b_{n}=n\left({\frac {a_{n}}{a_{n+1}}}-1\right).$
If
$L=\lim _{n\to \infty }b_{n}$
exists there are three possibilities:
• if L > 1 the series converges (this includes the case L = ∞)
• if L < 1 the series diverges
• and if L = 1 the test is inconclusive.
An alternative formulation of this test is as follows. Let { an } be a sequence of real numbers. Then if b > 1 and K (a natural number) exist such that
$\left|{\frac {a_{n+1}}{a_{n}}}\right|\leq 1-{\frac {b}{n}}$
for all n > K then the series $\sum a_{n}$ is convergent.
Bertrand's test
Let { an } be a sequence of positive numbers.
Define
$b_{n}=\ln n\left(n\left({\frac {a_{n}}{a_{n+1}}}-1\right)-1\right).$
If
$L=\lim _{n\to \infty }b_{n}$
exists, there are three possibilities:[2][3]
• if L > 1 the series converges (this includes the case L = ∞)
• if L < 1 the series diverges
• and if L = 1 the test is inconclusive.
Gauss's test
Let { an } be a sequence of positive numbers. If ${\frac {a_{n}}{a_{n+1}}}=1+{\frac {\alpha }{n}}+O(1/n^{\beta })$ for some β > 1, then $\sum a_{n}$ converges if α > 1 and diverges if α ≤ 1.[4]
Kummer's test
Let { an } be a sequence of positive numbers. Then:[5][6][7]
(1) $\sum a_{n}$ converges if and only if there is a sequence $b_{n}$ of positive numbers and a real number c > 0 such that $b_{k}(a_{k}/a_{k+1})-b_{k+1}\geq c$.
(2) $\sum a_{n}$ diverges if and only if there is a sequence $b_{n}$ of positive numbers such that $b_{k}(a_{k}/a_{k+1})-b_{k+1}\leq 0$
and $\sum 1/b_{n}$ diverges.
Notes
• For some specific types of series there are more specialized convergence tests, for instance for Fourier series there is the Dini test.
Examples
Consider the series
$\sum _{n=1}^{\infty }{\frac {1}{n^{\alpha }}}.$ (i)
The Cauchy condensation test implies that (i) is convergent if
$\sum _{n=1}^{\infty }2^{n}\left({\frac {1}{2^{n}}}\right)^{\alpha }$ (ii)
is convergent. Since
$\sum _{n=1}^{\infty }2^{n}\left({\frac {1}{2^{n}}}\right)^{\alpha }=\sum _{n=1}^{\infty }2^{n-n\alpha }=\sum _{n=1}^{\infty }2^{(1-\alpha )n},$
(ii) is a geometric series with ratio $2^{(1-\alpha )}$, which converges if and only if that ratio is less than one (namely $\alpha >1$). Thus, (i) is convergent if and only if $\alpha >1$.
Convergence of products
While most of the tests deal with the convergence of infinite series, they can also be used to show the convergence or divergence of infinite products. This can be achieved using the following theorem: Let $\left\{a_{n}\right\}_{n=1}^{\infty }$ be a sequence of positive numbers. Then the infinite product $\prod _{n=1}^{\infty }(1+a_{n})$ converges if and only if the series $\sum _{n=1}^{\infty }a_{n}$ converges. Also similarly, if $0<a_{n}<1$ holds, then $\prod _{n=1}^{\infty }(1-a_{n})$ approaches a non-zero limit if and only if the series $\sum _{n=1}^{\infty }a_{n}$ converges.
This can be proved by taking the logarithm of the product and using limit comparison test.[8]
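For example, $\prod _{n=1}^{\infty }\left(1+{\frac {1}{n^{2}}}\right)$ converges because $\sum 1/n^{2}$ does, while $\prod _{n=1}^{\infty }\left(1+{\frac {1}{n}}\right)$ diverges: its partial products telescope to $n+1$, mirroring the divergence of the harmonic series.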
See also
• L'Hôpital's rule
• Shift rule
References
1. Wachsmuth, Bert G. "MathCS.org - Real Analysis: Ratio Test". www.mathcs.org.
2. František Ďuriš, Infinite series: Convergence tests, pp. 24–9. Bachelor's thesis.
3. Weisstein, Eric W. "Bertrand's Test". mathworld.wolfram.com. Retrieved 2020-04-16.
• "Gauss criterion", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
4. "Über die Convergenz und Divergenz der unendlichen Reihen". Journal für die reine und angewandte Mathematik. 1835 (13): 171–184. 1835-01-01. doi:10.1515/crll.1835.13.171. ISSN 0075-4102. S2CID 121050774.
5. Tong, Jingcheng (1994). "Kummer's Test Gives Characterizations for Convergence or Divergence of all Positive Series". The American Mathematical Monthly. 101 (5): 450–452. doi:10.2307/2974907. JSTOR 2974907.
6. Samelson, Hans (1995). "More on Kummer's Test". The American Mathematical Monthly. 102 (9): 817–818. doi:10.1080/00029890.1995.12004667. ISSN 0002-9890.
7. Belk, Jim (26 January 2008). "Convergence of Infinite Products".
Further reading
• Leithold, Louis (1972). The Calculus, with Analytic Geometry (2nd ed.). New York: Harper & Row. pp. 655–737. ISBN 0-06-043959-9.
Serpent (cipher)
Serpent is a symmetric key block cipher that was a finalist in the Advanced Encryption Standard (AES) contest, where it was ranked second to Rijndael.[2] Serpent was designed by Ross Anderson, Eli Biham, and Lars Knudsen.[3]
Serpent
Serpent's linear mixing stage
General
Designers: Ross Anderson, Eli Biham, Lars Knudsen
First published: 1998-08-21
Derived from: Square
Certification: AES finalist
Cipher detail
Key sizes: 128, 192 or 256 bits
Block sizes: 128 bits
Structure: Substitution–permutation network
Rounds: 32
Best public cryptanalysis
All publicly known attacks are computationally infeasible, and none of them affect the full 32-round Serpent. A 2011 attack breaks 11-round Serpent (all key sizes) with $2^{116}$ known plaintexts, $2^{107.5}$ time and $2^{104}$ memory.[1] The same paper also describes two attacks which break 12 rounds of Serpent-256: the first requires $2^{118}$ known plaintexts, $2^{228.8}$ time and $2^{228}$ memory; the other requires $2^{116}$ known plaintexts and $2^{121}$ memory but also requires $2^{237.5}$ time.
Like other AES submissions, Serpent has a block size of 128 bits and supports a key size of 128, 192 or 256 bits.[4] The cipher is a 32-round substitution–permutation network operating on a block of four 32-bit words. Each round applies one of eight 4-bit to 4-bit S-boxes 32 times in parallel. Serpent was designed so that all operations can be executed in parallel, using 32 bit slices. This maximizes parallelism, but also allows use of the extensive cryptanalysis work performed on DES.
Serpent took a conservative approach to security, opting for a large security margin: the designers deemed 16 rounds to be sufficient against known types of attack, but specified 32 rounds as insurance against future discoveries in cryptanalysis.[5] The official NIST report on the AES competition classified Serpent as having a high security margin, along with MARS and Twofish, in contrast to the adequate security margin of RC6 and Rijndael (currently AES).[2] In the final voting, Serpent had the fewest negative votes among the finalists, but scored second place overall because Rijndael had substantially more positive votes, the deciding factor being that Rijndael allowed for a far more efficient software implementation.
The Serpent cipher algorithm is in the public domain and has not been patented.[6] The reference code is public domain software and the optimized code is under GPL.[7] There are no restrictions or encumbrances whatsoever regarding its use. As a result, anyone is free to incorporate Serpent in their software (or hardware implementations) without paying license fees.
Key Schedule
The Serpent key schedule consists of three main stages. In the first stage the key is initialized by adding padding if necessary, so that short keys map to a full 256-bit key: a single "1" bit is appended to the end of the short key, followed by "0" bits until the 256-bit length is reached.[4]
In the next phase, the "prekeys" are derived from the initialized key: 32-bit key parts are XORed with FRAC (the fractional part of the golden ratio) and the round index, and the result of the XOR operation is rotated left by 11 bits. FRAC and the round index were added to achieve an even distribution of the key bits during the rounds.[4]
Finally, the "subkeys" are derived from the previously generated "prekeys". This results in a total of 33 128-bit "subkeys".[4]
At the end, each round key or "subkey" is passed through the "initial permutation IP" to place the key bits in the correct columns.[4]
Key Schedule pseudo code
#include <stdint.h>

#define FRAC 0x9e3779b9 // fractional part of the golden ratio
#define ROTL(A, n) (((A) << (n)) | ((A) >> (32 - (n))))

uint32_t key[8]; // k: the (padded) 256-bit user key
uint32_t words[132]; // w: the 132 32-bit prekeys
uint32_t subkey[33][4]; // sk: the 33 128-bit round subkeys

/* key schedule: get prekeys */
void w(uint32_t *w, const uint32_t *k) {
    uint32_t imm[140]; // imm[0..7] holds the key words, imm[8..139] the prekeys
    for (short i = 0; i < 8; i++) {
        imm[i] = k[i];
    }
    for (short i = 8; i < 140; i++) {
        // w_i = (w_{i-8} ^ w_{i-5} ^ w_{i-3} ^ w_{i-1} ^ FRAC ^ i) <<< 11,
        // where the key words play the role of w_{-8}..w_{-1}
        imm[i] = ROTL(imm[i - 8] ^ imm[i - 5] ^ imm[i - 3] ^ imm[i - 1] ^ FRAC ^ (uint32_t)(i - 8), 11);
        w[i - 8] = imm[i];
    }
}

/* key schedule: get subkeys; S[8][16] is the S-box table (defined elsewhere),
   and sk must be zero-initialized, since bits are ORed into it */
void k(const uint32_t *w, uint32_t (*sk)[4]) {
    uint8_t i, p, j, s, k;
    for (i = 0; i < 33; i++) {
        p = (32 + 3 - i) % 32; // selects S-box (3 - i) mod 8 for subkey i
        for (k = 0; k < 32; k++) {
            // gather bit k of four consecutive prekeys into a 4-bit S-box input
            s = S[p % 8][((w[4 * i + 0] >> k) & 0x1) << 0 |
                         ((w[4 * i + 1] >> k) & 0x1) << 1 |
                         ((w[4 * i + 2] >> k) & 0x1) << 2 |
                         ((w[4 * i + 3] >> k) & 0x1) << 3];
            for (j = 0; j < 4; j++) {
                // scatter the four S-box output bits into bit k of the subkey words
                sk[i][j] |= (uint32_t)((s >> j) & 0x1) << k;
            }
        }
    }
}
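A hypothetical calling sequence for the two routines above (to appear inside a function; memset requires <string.h>):
memset(subkey, 0, sizeof subkey); /* required: k() ORs bits into sk */
w(words, key);                    /* derive the 132 prekeys          */
k(words, subkey);                 /* derive the 33 round subkeys     */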
S-Boxes
The Serpent s-boxes are 4-bit permutations, and subject to the following properties:
• a 1-bit input difference never leads to a 1-bit output difference, and each differential characteristic has a probability of 1/4 or less.[8]
• each linear characteristic has a probability in the range 1/2 ± 1/4, and a linear relation between a single input bit and a single output bit has a probability in the range 1/2 ± 1/8.[8]
• the nonlinear order of the output bits as a function of the input bits is 3. However, output bits have been found which, as a function of the input bits, have an order of only 2.[8]
The Serpent s-boxes were constructed from the 32 rows of the DES s-boxes. These were transformed by swapping entries, and the resulting arrays with the desired properties were stored as the Serpent s-boxes. This process was repeated until a total of 8 s-boxes were found. The following key was used in this process: "sboxesforserpent".[4]
Permutations and Transformations
Initial permutation (IP)
The initial permutation works on all 128 bits at a time, moving bits around:
for i in 0 .. 127
swap( bit(i), bit((32 * i) % 127) )
Final permutation (FP)
The final permutation works on all 128 bits at a time, moving bits around:
for i in 0 .. 127
swap( bit(i), bit((2 * i) % 127) )
Linear transformation (LT)
Consists of XOR, S-box, left-shift and left-rotate operations, performed on four 32-bit words. In the sketch below (schematic, with hypothetical names), B[i] is the four-word block entering round i, K[i] is the round subkey, and the S-box layer is applied bitsliced across the 32 bit positions:
for (short j = 0; j < 4; j++) {
    X[j] = B[i][j] ^ K[i][j]; // key mixing
}
sbox_layer(i % 8, X); // 32 parallel applications of S-box (i mod 8); not shown
X[0] = ROTL(X[0], 13);
X[2] = ROTL(X[2], 3);
X[1] = X[1] ^ X[0] ^ X[2];
X[3] = X[3] ^ X[2] ^ (X[0] << 3);
X[1] = ROTL(X[1], 1);
X[3] = ROTL(X[3], 7);
X[0] = X[0] ^ X[1] ^ X[3];
X[2] = X[2] ^ X[3] ^ (X[1] << 7);
X[0] = ROTL(X[0], 5);
X[2] = ROTL(X[2], 22);
for (short j = 0; j < 4; j++) {
    B[i + 1][j] = X[j]; // the result becomes the input block of round i + 1
}
Rijndael vs. Serpent
Rijndael is a substitution-linear transformation network with ten, twelve, or fourteen rounds, depending on the key size, and with key sizes of 128 bits, 192 bits, or 256 bits, independently specified. Serpent is a substitution–permutation network which has thirty-two rounds, plus an initial and a final permutation to simplify an optimized implementation. The round function in Rijndael consists of three parts: a nonlinear layer, a linear mixing layer, and a key-mixing XOR layer. The round function in Serpent consists of key-mixing XOR, thirty-two parallel applications of the same 4×4 S-box, and a linear transformation, except in the last round, wherein another key-mixing XOR replaces the linear transformation. The nonlinear layer in Rijndael uses an 8×8 S-box whereas Serpent uses eight different 4×4 S-boxes. The 32 rounds mean that Serpent has a higher security margin than Rijndael; however, Rijndael with 10 rounds is faster and easier to implement for small blocks.[9] Hence, Rijndael was selected as the winner in the AES competition.
Serpent-0 vs. Serpent-1
The original Serpent, Serpent-0, was presented at the 5th workshop on Fast Software Encryption, but a somewhat tweaked version, Serpent-1, was submitted to the AES competition. The AES submission paper discusses the changes, which include key-scheduling differences.
Security
The XSL attack, if effective, would weaken Serpent (though not as much as it would weaken Rijndael, which became AES). However, many cryptanalysts believe that once implementation considerations are taken into account the XSL attack would be more expensive than a brute force attack.
In 2000, a paper by Kohno et al. presented a meet-in-the-middle attack against 6 of 32 rounds of Serpent and an amplified boomerang attack against 9 of 32 rounds of Serpent.[10]
A 2001 attack by Eli Biham, Orr Dunkelman and Nathan Keller presents a linear cryptanalysis attack that breaks 10 of 32 rounds of Serpent-128 with $2^{118}$ known plaintexts and $2^{89}$ time, and 11 rounds of Serpent-192/256 with $2^{118}$ known plaintexts and $2^{187}$ time.[11]
A 2009 paper noted that the nonlinear order of the Serpent S-boxes is not 3 as claimed by the designers.[8]
A 2011 attack by Hongjun Wu, Huaxiong Wang and Phuong Ha Nguyen, also using linear cryptanalysis, breaks 11 rounds of Serpent-128 with $2^{116}$ known plaintexts, $2^{107.5}$ time and $2^{104}$ memory.[1]
The same paper also describes two attacks which break 12 rounds of Serpent-256. The first requires $2^{118}$ known plaintexts, $2^{228.8}$ time and $2^{228}$ memory. The other attack requires $2^{116}$ known plaintexts and $2^{121}$ memory but also requires $2^{237.5}$ time.
See also
• Tiger – hash function by the same authors
Footnotes
1. Huaxiong Wang, Hongjun Wu & Phuong Ha Nguyen (2011). "Improving the Algorithm 2 in Multidimensional Linear Cryptanalysis" (PDF). Information Security and Privacy. Lecture Notes in Computer Science. Vol. 6812. ACISP 2011. pp. 61–74. doi:10.1007/978-3-642-22497-3_5. ISBN 978-3-642-22496-6.
2. Nechvatal, J.; Barker, E.; Bassham, L.; Burr, W.; Dworkin, M.; Foti, J.; Roback, E. (May 2001). "Report on the development of the Advanced Encryption Standard (AES)". Journal of Research of the National Institute of Standards and Technology. 106 (3): 511–577. doi:10.6028/jres.106.023. ISSN 1044-677X. PMC 4863838. PMID 27500035.
3. "Serpent Home Page".{{cite web}}: CS1 maint: url-status (link)
4. Ross J. Anderson (23 October 2006). "Serpent: A Candidate Block Cipher for the Advanced Encryption Standard". University of Cambridge Computer Laboratory. Retrieved 14 January 2013.
5. "serpent.pdf" (PDF). Retrieved 25 April 2022.
6. Serpent Holds the Key to Internet Security – Finalists in world-wide encryption competition announced (1999)
7. SERPENT – A Candidate Block Cipher for the Advanced Encryption Standard "Serpent is now completely in the public domain, and we impose no restrictions on its use. This was announced on the 21st August at the First AES Candidate Conference. The optimised implementations in the submission package are now under the General Public License (GPL), although some comments in the code still say otherwise. You are welcome to use Serpent for any application. If you do use it, we would appreciate it if you would let us know!" (1999)
8. Bhupendra Singh; Lexy Alexander; Sanjay Burman (2009). "On Algebraic Relations of Serpent S-boxes" (PDF).
9. Bruce Schneier; John Kelsey; Doug Whiting; David Wagner; Chris Hall; Niels Ferguson; Tadayoshi Kohno; Mike Stay (2000). "The Twofish Team's Final Comments on AES Selection" (PDF).
10. Tadayoshi Kohno; John Kelsey & Bruce Schneier (2000). "Preliminary Cryptanalysis of Reduced-Round Serpent".
11. Eli Biham, Orr Dunkelman & Nathan Keller (2001). "Linear Cryptanalysis of Reduced Round Serpent". FSE 2001. CiteSeerX 10.1.1.78.6148.
Further reading
• Anderson, Ross; Biham, Eli; Knudsen, Lars (1998). "Cryptography – 256 bit ciphers: Reference (AES submission) implementation".
• Biham, Eli. "Serpent – A New Block Cipher Proposal for AES".
• Halbfinger, David M (5 May 2008). "In Pellicano Case, Lessons in Wiretapping Skills". The New York Times.
• Stajano, Frank (10 February 2006). "Serpent reference implementation". University of Cambridge Computer Laboratory.
External links
• Official website
• 256 bit ciphers – SERPENT Reference implementation and derived code
Serpentine curve
This article is about the mathematical concept. For the design feature, see Serpentine shape.
A serpentine curve is a curve whose equation is of the form
$x^{2}y+a^{2}y-abx=0,\quad ab>0.$
Equivalently, it has a parametric representation
$x=a\cot(t)$, $y=b\sin(t)\cos(t),$
or functional representation
$y={\frac {abx}{x^{2}+a^{2}}}.$
The curve has an inflection point at the origin. It has local extrema at $x=\pm a$, with a maximum value of $y=b/2$ and a minimum value of $y=-b/2$.
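These extrema can be verified directly from the functional representation: $y'={\frac {ab(a^{2}-x^{2})}{(x^{2}+a^{2})^{2}}},$ which vanishes exactly at $x=\pm a$, where (for $a,b>0$) $y=\pm b/2$.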
History
Serpentine curves were studied by L'Hôpital and Huygens, and named and classified by Newton.
External links
• MathWorld – Serpentine Equation
Serpentine shape
This article is about the design feature. For the mathematical concept, see Serpentine curve.
A serpentine shape is any of certain curved shapes of an object or design, which are suggestive of the shape of a snake (the adjective "serpentine" is derived from the word serpent). Serpentine shapes occur in architecture, in furniture, and in mathematics.
In architecture and urban design
The serpentine shape is observed in many architectural settings. It may provide strength, as in serpentine walls, it may allow the facade of a building to face in multiple directions, or it may be chosen for purely aesthetic reasons.
• At the University of Virginia, serpentine walls (crinkle crankle walls) extend down the length of the main lawn at the University of Virginia and flank both sides of the rotunda. They are one of the many structures Thomas Jefferson created that combine aesthetics with utility. The sinusoidal path of the wall provides strength against toppling over, allowing the wall to be only a single brick thick.
• At the Massachusetts Institute of Technology, the Baker House dormitory has a serpentine shape which allows most rooms a view of the Charles River, and gives many of the rooms a wedge-shaped layout.[1]
• At San Carlo alle Quattro Fontane, Rome, Italy (The Church of Saint Charles at the Four Fountains), designed by Francesco Borromini, is a serpentine facade constructed towards the end of Borromini's life. The concave-convex facade of the church undulates in a non-classic way. Tall Corinthian columns stand on plinths and support the main entablatures; these define the main framework of two stories and the tripartite bay division. Between the columns, smaller columns with their entablatures weave behind the main columns and in turn they frame many architectural features of the church.
• The London parks Hyde Park and Kensington Gardens contain 'The Serpentine', a lake that spans both parks. It received the name from its snake-like, curving shape. A central bridge divides the lake into two parts and defines the boundaries between Hyde Park and Kensington Gardens.[2]
• Among Castle Howard's gardens is a large, formal path behind the building, where a serpentine path is situated on a ridge. The serpentine path serves as a connection between the formal garden and the surrounding park, seamlessly integrating with the landscape. It meanders through the site, linking various buildings and site elements along the way. This natural shape of the path facilitates its integration into the garden, creating a harmonious flow between the features and the overall landscape.
• A serpentine street is a winding roadway sometimes used to slow traffic in residential neighbourhoods, possibly bordered by landscaping features.[3]
In furniture
In furniture, serpentine-front dressers and cabinets have a convex section between two concave ones.[4] This design was common in the Rococo period.[5] Examples include Louis XV commodes and 18th-century English furniture.[6]
Furniture with a concave section between two convex ones is sometimes referred to as reverse serpentine or oxbow.[7][8]
In mathematics
Main article: Serpentine curve
The serpentine curve is a cubic curve as described by Isaac Newton, given by the Cartesian equation $y(a^{2}+x^{2})=abx$. The origin is a point of inflection, the x-axis is an asymptote, and the curve lies between the parallel lines $2y=\pm b$.[9][10]
See also
• BACH motif
• Serpent (instrument)
• S-curve (art)
• Tribhanga
• Figura serpentinata
References and footnotes
1. Lester Wertheimer (2004), Architectural History, Kaplan AEC Architecture, p. 123.
2. "Hyde Park History & Architecture". The Royal Parks. 2007. Retrieved 2012-03-29.
3. US Federal Highway Administration (2002), Pedestrian Facilities Users Guide, p. 80.
4. Popular Science, Feb 1932, p. 100.
5. Charles Boyce (2013), Dictionary of Furniture: Third Edition, Skyhorse Publishing, p. 664.
6. Holly, "Things that inspire", August 12, 2007.
7. Chuck Bender, "The oxbow, or reverse serpentine, chest", April 24, 2008.
8. Charles Boyce (2013), Dictionary of Furniture: Third Edition, Skyhorse Publishing, p. 536.
9. 1911 Encyclopædia Britannica
10. O'Connor, John J.; Robertson, Edmund F., "Serpentine", MacTutor History of Mathematics Archive, University of St Andrews
This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Serpentine (geometry)". Encyclopædia Britannica (11th ed.). Cambridge University Press.
Serre's conjecture II (algebra)
In mathematics, Jean-Pierre Serre conjectured[1][2] the following statement regarding the Galois cohomology of a simply connected semisimple algebraic group. Namely, he conjectured that if G is such a group over a perfect field F of cohomological dimension at most 2, then the Galois cohomology set H1(F, G) is zero.
A converse of the conjecture holds: if the field F is perfect and if the cohomology set H1(F, G) is zero for every semisimple simply connected algebraic group G then the p-cohomological dimension of F is at most 2 for every prime p.[3]
The conjecture holds in the case where F is a local field (such as p-adic field) or a global field with no real embeddings (such as Q(√−1)). This is a special case of the Kneser–Harder–Chernousov Hasse principle for algebraic groups over global fields. (Note that such fields do indeed have cohomological dimension at most 2.[2]) The conjecture also holds when F is finitely generated over the complex numbers and has transcendence degree at most 2.[4]
The conjecture is also known to hold for certain groups G. For special linear groups, it is a consequence of the Merkurjev–Suslin theorem.[5] Building on this result, the conjecture holds if G is a classical group.[6] The conjecture also holds if G is one of certain kinds of exceptional group.[7]
References
1. Serre, J-P. (1962). "Cohomologie galoisienne des groupes algébriques linéaires". Colloque sur la théorie des groupes algébriques: 53–68.
2. Serre, J-P. (1964). Cohomologie galoisienne. Lecture Notes in Mathematics. Vol. 5. Springer.
3. Serre, Jean-Pierre (1995). "Cohomologie galoisienne : progrès et problèmes". Astérisque. 227: 229–247. MR 1321649. Zbl 0837.12003 – via NUMDAM.
4. de Jong, A.J.; He, Xuhua; Starr, Jason Michael (2008). "Families of rationally simply connected varieties over surfaces and torsors for semisimple groups". arXiv:0809.5224 [math.AG].
5. Merkurjev, A.S.; Suslin, A.A. (1983). "K-cohomology of Severi-Brauer varieties and the norm-residue homomorphism". Math. USSR Izvestiya. 21 (2): 307–340. Bibcode:1983IzMat..21..307M. doi:10.1070/im1983v021n02abeh001793.
6. Bayer-Fluckiger, E.; Parimala, R. (1995). "Galois cohomology of the classical groups over fields of cohomological dimension ≤ 2". Inventiones Mathematicae. 122: 195–229. Bibcode:1995InMat.122..195B. doi:10.1007/BF01231443. S2CID 124673233.
7. Gille, P. (2001). "Cohomologie galoisienne des groupes algebriques quasi-déployés sur des corps de dimension cohomologique ≤ 2". Compositio Mathematica. 125 (3): 283–325. doi:10.1023/A:1002473132282. S2CID 124765999.
External links
• Philippe Gille's survey of the conjecture
Serre's criterion for normality
In algebra, Serre's criterion for normality, introduced by Jean-Pierre Serre, gives necessary and sufficient conditions for a commutative Noetherian ring A to be a normal ring. The criterion involves the following two conditions for A:
• $R_{k}$: $A_{\mathfrak {p}}$ is a regular local ring for any prime ideal ${\mathfrak {p}}$ of height ≤ k.
• $S_{k}$: $\operatorname {depth} A_{\mathfrak {p}}\geq \inf\{k,\operatorname {ht} ({\mathfrak {p}})\}$ for any prime ideal ${\mathfrak {p}}$.[1]
The statement is:
• A is a reduced ring $\Leftrightarrow R_{0},S_{1}$ hold.
• A is a normal ring $\Leftrightarrow R_{1},S_{2}$ hold.
• A is a Cohen–Macaulay ring $\Leftrightarrow S_{k}$ hold for all k.
Items 1, 3 trivially follow from the definitions. Item 2 is much deeper.
For an integral domain, the criterion is due to Krull. The general case is due to Serre.
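For example, the coordinate ring of the cuspidal cubic, $A=k[x,y]/(y^{2}-x^{3})\cong k[t^{2},t^{3}]$, is a one-dimensional Noetherian domain, so $S_{2}$ holds; but $R_{1}$ fails at the maximal ideal $(t^{2},t^{3})$, and accordingly A is not normal: the element $t=t^{3}/t^{2}$ of the fraction field satisfies the monic equation $T^{2}-t^{2}=0$ yet does not lie in A.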
Proof
Sufficiency
(After EGA IV2. Theorem 5.8.6.)
Suppose A satisfies S2 and R1. Then A in particular satisfies S1 and R0; hence, it is reduced. If ${\mathfrak {p}}_{i},\,1\leq i\leq r$ are the minimal prime ideals of A, then the total ring of fractions K of A is the direct product of the residue fields $\kappa ({\mathfrak {p}}_{i})=Q(A/{\mathfrak {p}}_{i})$: see total ring of fractions of a reduced ring. That means we can write $1=e_{1}+\dots +e_{r}$ where $e_{i}$ are idempotents in $\kappa ({\mathfrak {p}}_{i})$ and such that $e_{i}e_{j}=0,\,i\neq j$. Now, if A is integrally closed in K, then each $e_{i}$ is integral over A and so is in A; consequently, A is a direct product of integrally closed domains Aei's and we are done. Thus, it is enough to show that A is integrally closed in K.
For this end, suppose
$(f/g)^{n}+a_{1}(f/g)^{n-1}+\dots +a_{n}=0$
where all f, g, ai's are in A and g is moreover a non-zerodivisor. We want to show:
$f\in gA$.
Now, the condition S2 says that $gA$ is unmixed of height one; i.e., each associated prime ${\mathfrak {p}}$ of $A/gA$ has height one. This is because if ${\mathfrak {p}}$ had height greater than one, then ${\mathfrak {p}}$ would contain a nonzerodivisor on $A/gA$; however, ${\mathfrak {p}}$ is associated to the zero ideal in $A/gA$, so it can contain only zerodivisors. By the condition R1, the localization $A_{\mathfrak {p}}$ is integrally closed and so $\phi (f)\in \phi (g)A_{\mathfrak {p}}$, where $\phi :A\to A_{\mathfrak {p}}$ is the localization map, since the integral equation persists after localization. If $gA=\cap _{i}{\mathfrak {q}}_{i}$ is the primary decomposition, then, for any i, the radical of ${\mathfrak {q}}_{i}$ is an associated prime ${\mathfrak {p}}$ of $A/gA$ and so $f\in \phi ^{-1}({\mathfrak {q}}_{i}A_{\mathfrak {p}})={\mathfrak {q}}_{i}$; the equality here is because ${\mathfrak {q}}_{i}$ is a ${\mathfrak {p}}$-primary ideal. Hence, the assertion holds.
Necessity
Suppose A is a normal ring. For S2, let ${\mathfrak {p}}$ be an associated prime of $A/fA$ for a non-zerodivisor f; we need to show it has height one. Replacing A by a localization, we can assume A is a local ring with maximal ideal ${\mathfrak {p}}$. By definition, there is an element g in A such that ${\mathfrak {p}}=\{x\in A|xg\equiv 0{\text{ mod }}fA\}$ and $g\not \in fA$. Put y = g/f in the total ring of fractions. If $y{\mathfrak {p}}\subset {\mathfrak {p}}$, then ${\mathfrak {p}}$ is a faithful $A[y]$-module and is a finitely generated A-module; consequently, $y$ is integral over A and thus in A, a contradiction. Hence, $y{\mathfrak {p}}=A$ and so ${\mathfrak {p}}=(f/g)A$, which implies ${\mathfrak {p}}$ has height one (Krull's principal ideal theorem).
For R1, we argue in the same way: let ${\mathfrak {p}}$ be a prime ideal of height one. Localizing at ${\mathfrak {p}}$ we assume ${\mathfrak {p}}$ is a maximal ideal, and a similar argument to the above shows that ${\mathfrak {p}}$ is in fact principal. Thus, A is a regular local ring. $\square $
Notes
1. Grothendieck & Dieudonné 1961, § 5.7.
References
• Grothendieck, Alexandre; Dieudonné, Jean (1965). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Seconde partie". Publications Mathématiques de l'IHÉS. 24. doi:10.1007/bf02684322. MR 0199181.
• H. Matsumura, Commutative algebra, 1970.
Serre's inequality on height
In algebra, specifically in the theory of commutative rings, Serre's inequality on height states: given a (Noetherian) regular ring A and a pair of prime ideals ${\mathfrak {p}},{\mathfrak {q}}$ in it, for each prime ideal ${\mathfrak {r}}$ that is a minimal prime ideal over the sum ${\mathfrak {p}}+{\mathfrak {q}}$, the following inequality on heights holds:[1][2]
$\operatorname {ht} ({\mathfrak {r}})\leq \operatorname {ht} ({\mathfrak {p}})+\operatorname {ht} ({\mathfrak {q}}).$
Without the assumption on regularity, the inequality can fail; see scheme-theoretic intersection#Proper intersection.
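For instance, in the (non-regular) ring $A=k[x,y,z,w]/(xy-zw)$, the prime ideals ${\mathfrak {p}}=(x,z)$ and ${\mathfrak {q}}=(y,w)$ each have height one, but the unique minimal prime over ${\mathfrak {p}}+{\mathfrak {q}}$ is the maximal ideal $(x,y,z,w)$, which has height three.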
Sketch of Proof
Serre gives the following proof of the inequality, based on the validity of Serre's multiplicity conjectures for formal power series ring over a complete discrete valuation ring.[3]
By replacing $A$ by the localization at ${\mathfrak {r}}$, we assume $(A,{\mathfrak {r}})$ is a local ring. Then the inequality is equivalent to the following inequality: for finite $A$-modules $M,N$ such that $M\otimes _{A}N$ has finite length,
$\dim _{A}M+\dim _{A}N\leq \dim A$
where $\dim _{A}M=\dim(A/\operatorname {Ann} _{A}(M))$ = the dimension of the support of $M$, and similarly for $\dim _{A}N$. To show the above inequality, we can assume $A$ is complete. Then by Cohen's structure theorem, we can write $A=A_{1}/a_{1}A_{1}$ where $A_{1}$ is a formal power series ring over a complete discrete valuation ring and $a_{1}$ is a nonzero element in $A_{1}$. Now, an argument with the Tor spectral sequence shows that $\chi ^{A_{1}}(M,N)=0$. Then one of Serre's conjectures says $\dim _{A_{1}}M+\dim _{A_{1}}N<\dim A_{1}$, which in turn gives the asserted inequality. $\square $
References
1. Serre 2000, Ch. V, § B.6, Theorem 3.
2. Fulton 1998, § 20.4.
3. Serre 2000, Ch. V, § B. 6.
• Fulton, William (1998), Intersection theory, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge., vol. 2 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-62046-4, MR 1644323
• Serre, Jean-Pierre (2000). Local Algebra. Springer Monographs in Mathematics (in German). doi:10.1007/978-3-662-04203-8. ISBN 978-3-662-04203-8. OCLC 864077388.