Star (graph theory)

In graph theory, a star Sk is the complete bipartite graph K1,k: a tree with one internal node and k leaves (but no internal nodes and k + 1 leaves when k ≤ 1). Alternatively, some authors define Sk to be the tree of order k with maximum diameter 2; in that case a star with k > 2 has k − 1 leaves.

[Infobox] Star: the star S7 (some authors index this as S8). Vertices: k + 1. Edges: k. Diameter: 2. Girth: ∞. Chromatic number: 2. Chromatic index: k. Properties: edge-transitive, tree, unit distance, bipartite. Notation: Sk.

A star with 3 edges is called a claw.

The star Sk is edge-graceful when k is even and not when k is odd. It is an edge-transitive matchstick graph, and has diameter 2 (when k > 1), girth ∞ (it has no cycles), chromatic index k, and chromatic number 2 (when k > 0). Additionally, the star has a large automorphism group, namely the symmetric group on k letters.

Stars may also be described as the only connected graphs in which at most one vertex has degree greater than one.

Relation to other graph families

Claws are notable in the definition of claw-free graphs, graphs that do not have any claw as an induced subgraph.[1][2] They are also one of the exceptional cases of the Whitney graph isomorphism theorem: in general, graphs with isomorphic line graphs are themselves isomorphic, with the exception of the claw and the triangle K3.[3]

A star is a special kind of tree. As with any tree, stars may be encoded by a Prüfer sequence; the Prüfer sequence for a star K1,k consists of k − 1 copies of the center vertex.[4]

Several graph invariants are defined in terms of stars. Star arboricity is the minimum number of forests that a graph can be partitioned into such that each tree in each forest is a star,[5] and the star chromatic number of a graph is the minimum number of colors needed to color its vertices in such a way that every two color classes together form a subgraph in which all connected components are stars.[6] The graphs of branchwidth 1 are exactly the graphs in which each connected component is a star.[7]

Other applications

The set of distances between the vertices of a claw provides an example of a finite metric space that cannot be embedded isometrically into a Euclidean space of any dimension.[8]

The star network, a computer network modeled after the star graph, is important in distributed computing.

A geometric realization of the star graph, formed by identifying the edges with intervals of some fixed length, is used as a local model of curves in tropical geometry. A tropical curve is defined to be a metric space that is locally isomorphic to a star-shaped metric graph.
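The Prüfer encoding mentioned above is easy to check in code. Below is a minimal Python sketch (the function name and adjacency-set representation are illustrative choices, not from the cited sources): it strips the lowest-numbered leaf repeatedly, which for a star with center 0 and k leaves yields k − 1 copies of the center.

```python
def prufer_sequence(adj):
    # adj maps each vertex to the set of its neighbors; the tree must
    # have at least two vertices.  At each step, remove the leaf with
    # the smallest label and record its unique neighbor.
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    seq = []
    while len(adj) > 2:
        leaf = min(v for v, nbrs in adj.items() if len(nbrs) == 1)
        (parent,) = adj.pop(leaf)
        adj[parent].discard(leaf)
        seq.append(parent)
    return seq

k = 5
star = {0: set(range(1, k + 1)), **{i: {0} for i in range(1, k + 1)}}
print(prufer_sequence(star))  # [0, 0, 0, 0] -- k - 1 copies of the center
```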
"Prüfer numbers: A poor representation of spanning trees for evolutionary search" (PDF). GECCO-2001: Proceedings of the Genetic and Evolutionary Computation Conference. Morgan Kaufmann. pp. 343–350. ISBN 1558607749. 5. Hakimi, S. L.; Mitchem, J.; Schmeichel, E. E. (1996), "Star arboricity of graphs", Discrete Math., 149: 93–98, doi:10.1016/0012-365X(94)00313-8 6. Fertin, Guillaume; Raspaud, André; Reed, Bruce (2004), "Star coloring of graphs", Journal of Graph Theory, 47 (3): 163–182, doi:10.1002/jgt.20029. 7. Robertson, Neil; Seymour, Paul D. (1991), "Graph minors. X. Obstructions to tree-decomposition", Journal of Combinatorial Theory, 52 (2): 153–190, doi:10.1016/0095-8956(91)90061-N. 8. Linial, Nathan (2002), "Finite metric spaces–combinatorics, geometry and algorithms", Proc. International Congress of Mathematicians, Beijing, vol. 3, pp. 573–586, arXiv:math/0304466, Bibcode:2003math......4466L
Star height

In theoretical computer science, more precisely in the theory of formal languages, the star height is a measure of the structural complexity of regular expressions and regular languages. The star height of a regular expression equals the maximum nesting depth of stars appearing in that expression. The star height of a regular language is the least star height of any regular expression for that language. The concept of star height was first defined and studied by Eggan (1963).

Formal definition

More formally, the star height of a regular expression E over a finite alphabet A is inductively defined as follows:

• $h(\emptyset) = 0$, $h(\varepsilon) = 0$, and $h(a) = 0$ for all alphabet symbols a in A.
• $h(EF) = h(E \mid F) = \max(h(E), h(F))$
• $h(E^{*}) = h(E) + 1.$

Here, $\emptyset$ is the special regular expression denoting the empty set and ε the special one denoting the empty word; E and F are arbitrary regular expressions.

The star height h(L) of a regular language L is defined as the minimum star height among all regular expressions representing L. The intuition here is that if the language L has large star height, then it is in some sense inherently complex, since it cannot be described by means of an "easy" regular expression of low star height.

Examples

While computing the star height of a regular expression is easy, determining the star height of a language can sometimes be tricky. For illustration, the regular expression $(b \mid aa^{*}b)^{*}aa^{*}$ over the alphabet A = {a,b} has star height 2. However, the described language is just the set of all words ending in an a; thus the language can also be described by the expression $(a \mid b)^{*}a$, which is only of star height 1. To prove that this language indeed has star height 1, one still needs to rule out that it could be described by a regular expression of lower star height. For our example, this can be done by an indirect proof: one proves that a language of star height 0 contains only finitely many words. Since the language under consideration is infinite, it cannot be of star height 0.

The star height of a group language is computable: for example, the star height of the language over {a,b} in which the numbers of occurrences of a and b are congruent modulo 2^n is n.[1]
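The inductive definition above translates directly into a recursion. The following Python sketch assumes a small ad hoc representation of regular expressions (strings for atomic expressions, tagged tuples for the operators); it illustrates the definition rather than any standard library API.

```python
def star_height(expr):
    # Atomic expressions: "0" for the empty set, "e" for the empty word,
    # or a single alphabet symbol -- all of star height 0.
    if isinstance(expr, str):
        return 0
    op = expr[0]
    if op in ("cat", "alt"):              # h(EF) = h(E | F) = max(h(E), h(F))
        return max(star_height(expr[1]), star_height(expr[2]))
    if op == "star":                      # h(E*) = h(E) + 1
        return 1 + star_height(expr[1])
    raise ValueError(f"unknown operator: {op!r}")

# (b | aa*b)* aa* from the example above has star height 2:
e = ("cat",
     ("star", ("alt", "b", ("cat", "a", ("cat", ("star", "a"), "b")))),
     ("cat", "a", ("star", "a")))
print(star_height(e))  # 2
```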
Eggan's theorem

In his seminal study of the star height of regular languages, Eggan (1963) established a relation between the theories of regular expressions, finite automata, and directed graphs. In subsequent years, this relation became known as Eggan's theorem, cf. Sakarovitch (2009). We recall a few concepts from graph theory and automata theory.

In graph theory, the cycle rank r(G) of a directed graph (digraph) G = (V, E) is inductively defined as follows:

• If G is acyclic, then r(G) = 0. This applies in particular if G is empty.
• If G is strongly connected and E is nonempty, then $r(G) = 1 + \min_{v \in V} r(G - v)$, where $G - v$ is the digraph resulting from deletion of vertex v and all edges beginning or ending at v.
• If G is not strongly connected, then r(G) is equal to the maximum cycle rank among all strongly connected components of G.

In automata theory, a nondeterministic finite automaton with ε-transitions (ε-NFA) is defined as a 5-tuple (Q, Σ, δ, q0, F), consisting of

• a finite set of states Q,
• a finite set of input symbols Σ,
• a set of labeled edges δ ⊆ Q × (Σ ∪ {ε}) × Q, referred to as the transition relation, where ε denotes the empty word,
• an initial state q0 ∈ Q, and
• a set F ⊆ Q of states distinguished as accepting states.

A word w ∈ Σ* is accepted by the ε-NFA if there exists a directed path from the initial state q0 to some final state in F using edges from δ, such that the concatenation of all labels visited along the path yields the word w. The set of all words over Σ accepted by the automaton is the language accepted by the automaton A. When speaking of digraph properties of a nondeterministic finite automaton A with state set Q, we naturally address the digraph with vertex set Q induced by its transition relation.

Now the theorem is stated as follows.

Eggan's Theorem: The star height of a regular language L equals the minimum cycle rank among all nondeterministic finite automata with ε-transitions accepting L.

Proofs of this theorem are given by Eggan (1963), and more recently by Sakarovitch (2009).
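To make the cycle rank side of the theorem concrete, here is a naive Python sketch of the recursion (all names are illustrative; the strongly-connected-components helper is standard Tarjan, and the recursion tries every vertex deletion, so it is exponential and only suitable for small digraphs):

```python
def sccs(vertices, edges):
    # Tarjan's algorithm; returns the strongly connected components as sets.
    index, low, on_stack, stack, comps = {}, {}, set(), [], []
    def visit(v):
        index[v] = low[v] = len(index)
        stack.append(v); on_stack.add(v)
        for (x, w) in edges:
            if x != v:
                continue
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == v:
                    break
            comps.append(comp)
    for v in vertices:
        if v not in index:
            visit(v)
    return comps

def cycle_rank(vertices, edges):
    vertices, edges = set(vertices), set(edges)
    best = 0
    for c in sccs(vertices, edges):
        sub = {(x, y) for (x, y) in edges if x in c and y in c}
        if not sub:
            continue                      # acyclic component: rank 0
        if c == vertices:
            # G is strongly connected with nonempty edge set:
            # r(G) = 1 + min over v of r(G - v)
            best = max(best, 1 + min(
                cycle_rank(c - {v},
                           {(x, y) for (x, y) in sub if v not in (x, y)})
                for v in c))
        else:
            best = max(best, cycle_rank(c, sub))
    return best

# A directed 2-cycle with a tail has cycle rank 1:
print(cycle_rank({1, 2, 3}, {(1, 2), (2, 1), (2, 3)}))  # 1
```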
(1970), "General properties of star height of regular events", Journal of Computer and System Sciences, 4 (3): 260–280, doi:10.1016/S0022-0000(70)80024-1, ISSN 0022-0000, Zbl 0245.94038 • Eggan, Lawrence C. (1963), "Transition graphs and the star-height of regular events", Michigan Mathematical Journal, 10 (4): 385–397, doi:10.1307/mmj/1028998975, Zbl 0173.01504 • Sakarovitch, Jacques (2009), Elements of automata theory, Translated from the French by Reuben Thomas, Cambridge: Cambridge University Press, ISBN 978-0-521-84425-3, Zbl 1188.68177 • Salomaa, Arto (1981), Jewels of formal language theory, Rockville, Maryland: Computer Science Press, ISBN 978-0-914894-69-8, Zbl 0487.68064 • Schützenberger, M.P. (1965), "On finite monoids having only trivial subgroups", Information and Control, 8 (2): 190–194, doi:10.1016/S0019-9958(65)90108-7, ISSN 0019-9958, Zbl 0131.02001
Radar chart

("Spider chart" redirects here. For the extension of Euler diagrams, see Spider diagram.)

A radar chart is a graphical method of displaying multivariate data in the form of a two-dimensional chart of three or more quantitative variables represented on axes starting from the same point. The relative position and angle of the axes are typically uninformative, but various heuristics, such as algorithms that plot data as the maximal total area, can be applied to sort the variables (axes) into relative positions that reveal distinct correlations, trade-offs, and a multitude of other comparative measures.[1]

The radar chart is also known as web chart, spider chart, spider graph, spider web chart, star chart,[2] star plot, cobweb chart, irregular polygon, polar chart, or Kiviat diagram.[3][4] It is equivalent to a parallel coordinates plot, with the axes arranged radially.

Overview

The radar chart is a chart and/or plot that consists of a sequence of equi-angular spokes, called radii, with each spoke representing one of the variables. The data length of a spoke is proportional to the magnitude of the variable for the data point relative to the maximum magnitude of the variable across all data points. A line is drawn connecting the data values for each spoke. This gives the plot a star-like appearance and is the origin of one of the popular names for this plot.

The star plot can be used to answer the following questions:[5]

• Which observations are most similar, i.e., are there clusters of observations? (Radar charts are used to examine the relative values for a single data point (e.g., point 3 is large for variables 2 and 4, small for variables 1, 3, 5, and 6) and to locate similar or dissimilar points.)[5]
• Are there outliers?

Radar charts are a useful way to display multivariate observations with an arbitrary number of variables.[6] Each star represents a single observation. Typically, radar charts are generated in a multi-plot format with many stars on each page and each star representing one observation.[5] The star plot was first used by Georg von Mayr in 1877.[7][8] Radar charts differ from glyph plots in that all variables are used to construct the plotted star figure. There is no separation into foreground and background variables. Instead, the star-shaped figures are usually arranged in a rectangular array on the page. It is somewhat easier to see patterns in the data if the observations are arranged in some non-arbitrary order (if the variables are assigned to the rays of the star in some meaningful order).[9]

Applications

Radar charts can be used in sports to chart players' strengths and weaknesses[10] by calculating various statistics related to the player that can be tracked along the central axis of the chart. Examples include a basketball player's shots made, rebounds, assists, etc., or the batting or pitching stats of a baseball player. This creates a centralized visualization of the strengths and weaknesses of a player, and if overlapped with the statistics of other players or league averages, can display where a player excels and where they could improve.[11] These insights into player strengths and weaknesses could prove crucial to player development, as they allow coaches and trainers to adjust a player's training regimen to help improve on their weaknesses. The results of the radar chart can also be useful in situational play.
If a batter is shown to hit poorly against left-handed pitching, then his team knows to limit his plate appearances against left-handed pitchers, while the opposing team may try to force a situation where the batter has to hit against such a pitcher.

Another application of radar charts is in quality improvement, to display the performance metrics of various objects, including computer programs,[12] computers, phones, vehicles, and more. Computer programmers often use analytics to test the performance of their programs against others. An example where radar charts may be useful is the performance analysis of sorting algorithms. A programmer could gather several sorting algorithms such as selection sort, bubble sort, and quicksort, analyze the performance of these algorithms by measuring their speed, memory usage, and power usage, then graph these on a radar chart to see how each sort performs under various sizes of data. Another performance application is measuring the performance of similar cars against each other. A consumer could look at variables such as the cars' top speed, miles per gallon, horsepower, and torque. Then, after using a radar chart to visualize the data, they could decide which car is best for them based on the results.

Radar charts can be used in life sciences to display the strengths and weaknesses of drugs and other medications.[13] Using the example of two anti-depressants, a researcher can rank variables such as efficacy, side effects, cost, etc. on a scale of one to ten. They could then graph the results using a radar chart to see the spread of variables and find how they differ: for example, one anti-depressant may be cheaper and quicker acting, but not provide much relief over time, while the other provides stronger relief and holds up better over time but is more expensive. Another life science application is in patient analysis. Radar charts can be used to graph the variables of life affecting a person's wellness, which can then be analyzed to help them. A more specific example is the case of athletes, whose wellness habits such as sleep, diet, and stress are monitored to make sure they stay in peak physical condition.[14] If any areas are shown to be dipping, doctors and trainers can step in to assist the athlete and improve their wellness.

Limitations

Radar charts are primarily suited for strikingly showing outliers and commonality, or when one chart is greater in every variable than another, and are primarily used for ordinal measurements, where each variable corresponds to "better" in some respect and all variables are on the same scale. Conversely, radar charts have been criticized as poorly suited for making trade-off decisions, when one chart is greater than another on some variables but less on others.[15]

Further, it is hard to visually compare the lengths of different spokes, because radial distances are hard to judge, though concentric circles help as grid lines. Instead, one may use a simple line graph, particularly for time series.[16]

Radar charts can distort data to some extent, especially when areas are filled in, because the area contained becomes proportional to the square of the linear measures. For example, in a chart with 5 variables that range from 1 to 100, the polygon bounded by 5 points with all measures at 90 has about 20% more area than the polygon with all values at 82, even though the linear values differ by less than 10%.
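That figure can be checked directly: for $n$ equally spaced spokes with radii $r_1,\dots,r_n$, the filled radar polygon has area $\tfrac{1}{2}\sin(2\pi/n)\sum_i r_i r_{i+1}$ (indices cyclic). A quick sketch of that computation:

```python
import math

def radar_area(values):
    # Area of the polygon whose vertices lie at the given radii on
    # equally spaced spokes around the origin.
    n = len(values)
    wedge = math.sin(2 * math.pi / n) / 2
    return wedge * sum(values[i] * values[(i + 1) % n] for i in range(n))

print(radar_area([90] * 5) / radar_area([82] * 5))  # ~1.205, about 20% more area
```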
Radar charts can also become hard to compare visually between different samples when their values are close, as their lines or areas bleed into each other, as shown in Figure 5.

Artificial structure

Radar charts impose several structures on data, which are often artificial:

• Relatedness of neighbors – radar charts are often used when neighboring variables are unrelated, creating spurious connections.
• Cyclic structure – the first and last variables are placed next to each other.
• Length – variables are often most naturally ordinal: better or worse, though the degree of difference may be artificial.
• Area – area scales as the square of values, exaggerating the effect of large numbers. For example, 2, 2 takes up 4 times the area of 1, 1. This is a general issue with area graphs, and area is hard to judge – see "Cleveland's hierarchy".[17]

For example, the alternating data 9, 1, 9, 1, 9, 1 yields a spiking radar chart (which goes in and out), while reordering the data as 9, 9, 9, 1, 1, 1 instead yields two distinct wedges (sectors).

In some cases there is a natural structure, and radar charts can be well-suited. For example, for diagrams of data that vary over a 24-hour cycle, the hourly data is naturally related to its neighbor and has a cyclic structure, so it can naturally be displayed as a radar chart.[16][18][19]

One set of guidelines on the use of radar charts (or rather the closely related "polar area graph") is:[19]

• you don't mind reading stacked areas instead of position along a common scale (see Cleveland's hierarchy),
• the data set is truly cyclic, not linear, and
• there are two series to compare, one much smaller than the other.

Data set size

Radar charts are helpful for small-to-moderate-sized multivariate data sets. Their primary weakness is that their effectiveness is limited to data sets with less than a few hundred points; after that, they tend to be overwhelming.[5] Further, when using radar charts with multiple dimensions or samples, the chart may become cluttered and harder to interpret as the number of samples grows.

For example, take the batting stats table below, comparing MLB 2021 MVP Shohei Ohtani with the stats of the league's average designated hitters and some Hall of Fame players. These stats represent the percentage of hits, home runs, strike outs, etc. per at bat of a player. (For more information on what each stat represents, refer to this reference by the MLB.[20]) We will use this table to create radar charts comparing the 2021 MVP's batting stats to the league averages for designated hitters and regular batters, in an attempt to visualize performance metrics and come to the conclusion that Shohei outperformed the average player. Next, we will include additional samples in the radar chart, using Hall of Fame players Jackie Robinson, Jim Thome, and Frank Thomas, to compare Shohei to a few of the greatest batters of all time. This radar chart not only gives an intuition of how Shohei compares to top historical players, but also serves to show the limitations of having too many samples in a radar chart.
Target           BA     OBP    SLG    OPS    HR%     SO%     BB%
MLB              0.244  0.317  0.411  0.728  0.037   0.232   0.087
DH               0.239  0.316  0.434  0.750  0.047   0.256   0.093
Shohei Ohtani    0.257  0.372  0.592  0.965  0.086   0.296   0.150
Jackie Robinson  0.313  0.410  0.477  0.887  0.0282  0.0582  0.151
Jim Thome        0.276  0.402  0.554  0.956  0.072   0.302   0.207
Frank Thomas     0.301  0.419  0.555  0.974  0.063   0.170   0.203

We can see in Figure 10 how a radar chart can be easily interpreted when the number of spokes and samples is relatively small. When we compare more samples in Figure 11, even without an area fill on the radar chart, it becomes apparent how difficult it can become to interpret or make trade-off decisions.
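As an illustration, the sketch below draws such a chart from three rows of the table using matplotlib's polar axes (the row selection and styling are arbitrary choices). Note that OPS dwarfs the percentage stats, so in practice each variable would first be normalized to a common scale, per the limitations discussed above.

```python
import numpy as np
import matplotlib.pyplot as plt

labels = ["BA", "OBP", "SLG", "OPS", "HR%", "SO%", "BB%"]
rows = {
    "MLB":           [0.244, 0.317, 0.411, 0.728, 0.037, 0.232, 0.087],
    "DH":            [0.239, 0.316, 0.434, 0.750, 0.047, 0.256, 0.093],
    "Shohei Ohtani": [0.257, 0.372, 0.592, 0.965, 0.086, 0.296, 0.150],
}

# One spoke per variable; repeat the first angle/value to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
for name, values in rows.items():
    closed = values + values[:1]
    ax.plot(angles, closed, label=name)
    ax.fill(angles, closed, alpha=0.1)   # filled areas exaggerate differences
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
plt.show()
```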
Example

The chart on the right[5] contains the star plots of 15 cars. The variable list for the sample star plot is:

1. Price
2. Mileage (MPG)
3. 1978 Repair Record (1 = Worst, 5 = Best)
4. 1977 Repair Record (1 = Worst, 5 = Best)
5. Headroom
6. Rear Seat Room
7. Trunk Space
8. Weight
9. Length

We can look at these plots individually or we can use them to identify clusters of cars with similar features. For example, we can look at the star plot of the Cadillac Seville (the last one on the image) and see that it is one of the most expensive cars, gets below average (but not among the worst) gas mileage, has an average repair record, and has average-to-above-average roominess and size. We can then compare the Cadillac models (the last three plots) with the AMC models (the first three plots). This comparison shows distinct patterns. The AMC models tend to be inexpensive, have below average gas mileage, and are small in both height and weight and in roominess. The Cadillac models are expensive, have poor gas mileage, and are large in both size and roominess.[5]

Alternatives

One may use line graphs for time series and other data,[16] in the form of parallel coordinates. For graphical qualitative comparison of 2-dimensional tabular data in several variables, a common alternative is Harvey balls, which are used extensively by Consumer Reports.[21] Comparison in Harvey balls (and radar charts) may be significantly aided by ordering the variables algorithmically.[22] An excellent way of visualising structures within multivariate data is offered by principal component analysis (PCA). Another alternative is to use small, inline bar charts, which may be compared to sparklines.[22]

Although radar and polar charts are often described as the same chart types,[4] some sources make a difference between them and even consider the radar chart to be a polar chart's variation that does not display data in terms of polar coordinates.[23]

See also

• Plan position indicator
• Plot (graphics)
• Polar area diagram
• Parallel coordinates
• Radial tree

References

This article incorporates public domain material from the National Institute of Standards and Technology.

1. Porter, Michael M.; Niksiar, Pooya (2018). "Multidimensional mechanics: Performance mapping of natural biological systems using permutated radar charts". PLOS ONE. 13 (9): e0204309. doi:10.1371/journal.pone.0204309. PMC 6161877. PMID 30265707.
2. Nancy R. Tague (2005). The Quality Toolbox. p. 437.
3. Kolence, Kenneth W. (1973). "The Software Empiricist". ACM SIGMETRICS Performance Evaluation Review. 2 (2): 31–36. doi:10.1145/1113644.1113647. "Dr. Philip J. Kiviat suggested at a recent NBS/ACM workshop on performance measurement that a circular graph, using radii as the variable axes might be a useful form. […] I recommend they be called 'Kiviat Plots' or 'Kiviat Graphs' to recognize his insight as to their importance."
4. "Find Content Gaps Using Radar Charts". Content Strategy Workshops. March 3, 2015. Retrieved December 17, 2015.
5. NIST/SEMATECH (2003). "Star Plot", in: e-Handbook of Statistical Methods.
6. Chambers, John; Cleveland, William; Kleiner, Beat; Tukey, Paul (1983). Graphical Methods for Data Analysis. Wadsworth. pp. 158–162.
7. Mayr, Georg von (1877). Die Gesetzmäßigkeit im Gesellschaftsleben (in German). Munich: Oldenbourg. p. 78. (Linien-Diagramme im Kreise: line charts in circles.)
8. Michael Friendly (2008). "Milestones in the history of thematic cartography, statistical graphics, and data visualization".
9. Michael Friendly (1991). "Statistical Graphics for Multivariate Data". Paper presented at the SAS SUGI 16 Conference, April 1991.
10. "Spider Graphs: Charting Basketball Statistics".
11. Seeing Data. "Making sense of data visualizations". Seeing Data.
12. Ron Basu (2004). Implementing Quality: A Practical Guide to Tools and Techniques. p. 131.
13. Model Systems Knowledge Translation Center. "Effective Use of Radar Charts" (PDF).
14. John Maguire. "De-normalized Spider and Radar Graphs". Kitman Labs.
15. "You are NOT spider man, so why do you use radar charts?", by Chandoo, September 18, 2008.
16. Peltier, Jon (2008-08-14). "Rock Around The Clock". Peltier Tech Blog. Retrieved 2013-09-11.
17. Cleveland, William; McGill, Robert (1984). "Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods". Journal of the American Statistical Association. 79 (387): 531–554. JSTOR 2288400.
18. "Charting around the clock". The Excel Charts Blog. 2008-08-15. Retrieved 2013-09-11.
19. "Clock This".
20. "Standard Stats". www.mlb.com. Retrieved 2022-04-26.
21. "Qualitative Comparison". Support Analytics Blog. 11 December 2007.
22. "Information Ocean: Reorderable tables II: Bertin versus the Spiders". I-ocean.blogspot.com. 2008-09-24. Retrieved 2013-09-11.
23. "Polar Charts (Report Builder and SSRS)". Microsoft Developer Network. Retrieved December 17, 2015.

External links
• Star Plot – NIST/SEMATECH e-Handbook of Statistical Methods
Star polygon

(Not to be confused with star-shaped polygon.)

In geometry, a star polygon is a type of non-convex polygon. Regular star polygons have been studied in depth; star polygons in general appear not to have been formally defined, but certain notable ones can arise through truncation operations on regular simple and star polygons.

There are two types of star pentagons: a regular star pentagon, {5/2}, has five corner vertices and intersecting edges, while a concave decagon, |5/2|, has ten edges and two sets of five vertices. The first are used in definitions of star polyhedra and star uniform tilings, while the second are sometimes used in planar tilings.

Branko Grünbaum identified two primary definitions used by Johannes Kepler, one being the regular star polygons with intersecting edges that don't generate new vertices, and the second being simple isotoxal concave polygons.[1] The first usage is included in polygrams, which includes polygons like the pentagram but also compound figures like the hexagram. One definition of a star polygon, used in turtle graphics, is a polygon having 2 or more turns (turning number and density), as in spirolaterals.[2]

Names

Star polygon names combine a numeral prefix, such as penta-, with the Greek suffix -gram (in this case generating the word pentagram). The prefix is normally a Greek cardinal, but synonyms using other prefixes exist. For example, a nine-pointed polygon or enneagram is also known as a nonagram, using the ordinal nona from Latin. The -gram suffix derives from γραμμή (grammḗ), meaning a line.[3]

Regular star polygon

Further information: Regular polygon § Regular star polygons

A regular star polygon is a self-intersecting, equilateral, equiangular polygon. A regular star polygon is denoted by its Schläfli symbol {p/q}, where p (the number of vertices) and q (the density) are relatively prime (they share no factors) and q ≥ 2. The density of a polygon can also be called its turning number: the sum of the turn angles of all the vertices, divided by 360°. The symmetry group of {n/k} is the dihedral group Dn of order 2n, independent of k. Regular star polygons were first studied systematically by Thomas Bradwardine, and later Johannes Kepler.[4]

Construction via vertex connection

Regular star polygons can be created by connecting one vertex of a simple, regular, p-sided polygon to another, non-adjacent vertex and continuing the process until the original vertex is reached again.[5] Alternatively, for integers p and q, it can be considered as being constructed by connecting every qth point out of p points regularly spaced in a circular placement.[6] For instance, in a regular pentagon, a five-pointed star can be obtained by drawing a line from the first to the third vertex, from the third vertex to the fifth vertex, from the fifth vertex to the second vertex, from the second vertex to the fourth vertex, and from the fourth vertex to the first vertex. If q is greater than half of p, then the construction will result in the same polygon as {p/(p−q)}; connecting every third vertex of the pentagon will yield an identical result to that of connecting every second vertex. However, the vertices will be reached in the opposite direction, which makes a difference when retrograde polygons are incorporated in higher-dimensional polytopes. For example, an antiprism formed from a prograde pentagram {5/2} results in a pentagrammic antiprism; the analogous construction from a retrograde "crossed pentagram" {5/3} results in a pentagrammic crossed-antiprism. Another example is the tetrahemihexahedron, which can be seen as a "crossed triangle" {3/2} cuploid.
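A minimal sketch of the "every qth point" construction (names and output format are illustrative): it returns the closed path of {p/q} as coordinates on a circle.

```python
import math

def star_polygon(p, q, r=1.0):
    # {p/q} requires p and q coprime; visit every q-th of p equally
    # spaced points, closing the path back at the starting vertex.
    if math.gcd(p, q) != 1:
        raise ValueError("{p/q} requires coprime p and q")
    pts = [(r * math.sin(2 * math.pi * k / p),
            r * math.cos(2 * math.pi * k / p)) for k in range(p)]
    return [pts[(k * q) % p] for k in range(p + 1)]

# Pentagram {5/2}: vertex order 0 -> 2 -> 4 -> 1 -> 3 -> 0.
for x, y in star_polygon(5, 2):
    print(f"({x:+.3f}, {y:+.3f})")
```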
Degenerate regular star polygons

If p and q are not coprime, a degenerate polygon will result, with coinciding vertices and edges. For example, {6/2} will appear as a triangle, but can be labeled with two sets of vertices 1–6. This should be seen not as two overlapping triangles, but as a double-winding of a single unicursal hexagon.[7][8]

Construction via stellation

Alternatively, a regular star polygon can also be obtained as a sequence of stellations of a convex regular core polygon. Constructions based on stellation also allow regular polygonal compounds to be obtained in cases where the density and number of vertices are not coprime. When constructing star polygons from stellation, however, if q is greater than p/2, the lines will instead diverge infinitely, and if q is equal to p/2, the lines will be parallel, with both resulting in no further intersection in Euclidean space. However, it may be possible to construct some such polygons in spherical space, similarly to the monogon and digon; such polygons do not yet appear to have been studied in detail.

Simple isotoxal star polygons

When the intersecting lines are removed, the star polygons are no longer regular, but can be seen as simple concave isotoxal 2n-gons, alternating vertices at two different radii, which do not necessarily have to match the regular star polygon angles. Branko Grünbaum, in Tilings and Patterns, represents these stars as |n/d|, matching the geometry of the polygram {n/d}, and more generally uses the notation {n_α}, representing an n-sided star with each internal angle α < 180°(1 − 2/n).[1] For |n/d|, the inner vertices have an exterior angle β of 360°(d − 1)/n.

[Table: simple isotoxal star examples, pairing notations such as {3_30°}, {6_30°}, |5/2| = {5_36°}, {4_45°}, |8/3| = {8_45°}, |6/2| = {6_60°}, and {5_72°} with their angles (α respectively 30°, 30°, 36°, 45°, 45°, 60°, 72°; β respectively 150°, 90°, 72°, 135°, 90°, 120°, 144°) and with the related polygrams {12/5}, {5/2}, {8/3}, 2{3}, and the star figure {10/3}.]

Examples in tilings

Further information: Uniform tiling § Uniform tilings using star polygons

These polygons are often seen in tiling patterns. The parametric angle α (in degrees or radians) can be chosen to match internal angles of neighboring polygons in a tessellation pattern. In his 1619 work Harmonices Mundi, Johannes Kepler described, among other periodic tilings, nonperiodic tilings, such as one in which three regular pentagons and a regular star pentagon fit around a vertex (5.5.5.5/2); these are related to modern Penrose tilings.[9]

[Gallery: example tilings with isotoxal star polygons,[10] with vertex configurations including (3.3*_α.3.3**_α), (8.4*_{π/4}.8.4*_{π/4}), (6.6*_{π/3}.6.6*_{π/3}), (3.6*_{π/3}.6**_{π/3}), and (3.6.6*_{π/3}.6), some not edge-to-edge.]

Interiors

The interior of a star polygon may be treated in different ways. Three such treatments are illustrated for a pentagram. Branko Grünbaum and Geoffrey Shephard consider two of them, as regular star polygons and as concave isogonal 2n-gons.[9] These include:

• Where a side occurs, one side is treated as outside and the other as inside. This is shown in the left hand illustration and commonly occurs in computer vector graphics rendering.
• The number of times that the polygonal curve winds around a given region determines its density.
The exterior is given a density of 0, and any region of density > 0 is treated as internal. This is shown in the central illustration and commonly occurs in the mathematical treatment of polyhedra. (However, for non-orientable polyhedra, density can only be considered modulo 2, and hence the first treatment is sometimes used instead in those cases for consistency.)
• Where a line may be drawn between two sides, the region in which the line lies is treated as inside the figure. This is shown in the right hand illustration and commonly occurs when making a physical model.

When the area of the polygon is calculated, each of these approaches yields a different answer.

In art and culture

Star polygons feature prominently in art and culture. Such polygons may or may not be regular, but they are always highly symmetrical. Examples include:

• The {5/2} star pentagon (pentagram) is also known as a pentalpha or pentangle, and historically has been considered by many magical and religious cults to have occult significance.
• The {7/2} and {7/3} star polygons (heptagrams) also have occult significance, particularly in the Kabbalah and in Wicca.
• The {8/3} star polygon (octagram) is a frequent geometrical motif in Mughal Islamic art and architecture; it appears on the emblem of Azerbaijan.
• An eleven-pointed star called the hendecagram was used on the tomb of Shah Nemat Ollah Vali.

See also

• List of regular polytopes and compounds#Stars
• Five-pointed star
• Magic star
• Moravian star
• Pentagramma mirificum
• Regular star 4-polytope
• Rub el Hizb
• Star (glyph)
• Star polyhedron, Kepler–Poinsot polyhedron, and uniform star polyhedron
• Starfish

References

1. Grünbaum & Shephard 1987, section 2.5.
2. Abelson, Harold; diSessa, Andrea (1980), Turtle Geometry, MIT Press, p. 24.
3. γραμμή, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus.
4. Coxeter, Introduction to Geometry, second edition, 2.8 Star polygons, pp. 36–38.
5. Coxeter, Harold Scott Macdonald (1973). Regular Polytopes. Courier Dover Publications. p. 93. ISBN 978-0-486-61480-9.
6. Weisstein, Eric W. "Star Polygon". MathWorld.
7. Branko Grünbaum, "Are Your Polyhedra the Same as My Polyhedra?"
8. Coxeter, The Densities of the Regular Polytopes I, p. 43: "If d is odd, the truncation of the polygon {p/q} is naturally {2n/d}. But if not, it consists of two coincident {n/(d/2)}'s; two, because each side arises from an original side and once from an original vertex. Thus the density of a polygon is unaltered by truncation."
9. Branko Grünbaum and Geoffrey C. Shephard, "Tilings by Regular Polygons", Mathematics Magazine 50 (1977), 227–247, and 51 (1978), 205–206.
10. Joseph Myers, "Tiling with Regular Star Polygons".

• Cromwell, P.; Polyhedra, CUP, hbk. 1997, ISBN 0-521-66432-2; pbk. 1999, ISBN 0-521-66405-5; p. 175.
• Grünbaum, B. and G. C. Shephard; Tilings and Patterns, New York: W. H. Freeman & Co., 1987, ISBN 0-7167-1193-1.
• Grünbaum, B.; Polyhedra with Hollow Faces, Proc. of NATO-ASI Conference on Polytopes ... etc. (Toronto 1993), ed. T. Bisztriczky et al., Kluwer Academic (1994), pp. 43–70.
• John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things, 2008, ISBN 978-1-56881-220-5 (Chapter 26, p. 404: Regular star-polytopes, Dimension 2).
• Branko Grünbaum, "Metamorphoses of polygons", published in The Lighter Side of Mathematics: Proceedings of the Eugène Strens Memorial Conference on Recreational Mathematics and its History (1994).
Star polyhedron

In geometry, a star polyhedron is a polyhedron which has some repetitive quality of nonconvexity giving it a star-like visual quality. There are two general kinds of star polyhedron:

• Polyhedra which self-intersect in a repetitive way.
• Concave polyhedra of a particular kind which alternate convex and concave or saddle vertices in a repetitive way. Mathematically these figures are examples of star domains.

Mathematical studies of star polyhedra are usually concerned with regular or uniform polyhedra, or the duals of the uniform polyhedra. All these stars are of the self-intersecting kind.

Self-intersecting star polyhedra

Regular star polyhedra

The regular star polyhedra are self-intersecting polyhedra. They may either have self-intersecting faces or self-intersecting vertex figures. There are four regular star polyhedra, known as the Kepler–Poinsot polyhedra. The Schläfli symbol {p,q} implies faces with p sides and vertex figures with q sides. Two of them have pentagrammic {5/2} faces and two have pentagrammic vertex figures.

There are also an infinite number of regular star dihedra and hosohedra, {2,p/q} and {p/q,2}, for any star polygon {p/q}. While degenerate in Euclidean space, they can be realised spherically in nondegenerate form.

Uniform and uniform dual star polyhedra

There are many uniform star polyhedra, including two infinite series, of prisms and of antiprisms, and their duals. The uniform and dual uniform star polyhedra are also self-intersecting polyhedra. They may have self-intersecting faces, self-intersecting vertex figures, or both. The uniform star polyhedra have regular faces or regular star polygon faces. The dual uniform star polyhedra have regular faces or regular star polygon vertex figures.

Example uniform polyhedra and their duals:

• The pentagrammic prism is a prismatic star polyhedron, composed of two pentagram faces connected by five intersecting square faces. Its dual, the pentagrammic dipyramid, is also a star polyhedron; it is face-transitive, composed of ten intersecting isosceles triangles.
• The great dodecicosahedron is a star polyhedron constructed from a single vertex figure of intersecting hexagonal and decagrammic, {10/3}, faces. Its dual, the great dodecicosacron, is face-transitive, composed of 60 intersecting bow-tie-shaped quadrilateral faces.

Stellations and facettings

Beyond the forms above, there are unlimited classes of self-intersecting (star) polyhedra. Two important classes are the stellations of convex polyhedra and their duals, the facettings of the dual polyhedra. For example, the complete stellation of the icosahedron can be interpreted as a self-intersecting polyhedron composed of 20 identical faces, each a (9/4) wound polygon.

Star polytopes

A similarly self-intersecting polytope in any number of dimensions is called a star polytope. A regular polytope {p,q,r,...,s,t} is a star polytope if either its facet {p,q,...,s} or its vertex figure {q,r,...,s,t} is a star polytope. In four dimensions, the 10 regular star polychora are called the Schläfli–Hess polychora. Analogous to the regular star polyhedra, these 10 are all composed of facets which are either one of the five regular Platonic solids or one of the four regular star Kepler–Poinsot polyhedra. There are no regular star polytopes in dimensions higher than 4.

Star-domain star polyhedra

A polyhedron which does not cross itself, such that all of the interior can be seen from one interior point, is an example of a star domain. The visible exterior portions of many self-intersecting star polyhedra form the boundaries of star domains, but despite their similar appearance, as abstract polyhedra these are different structures. For instance, the small stellated dodecahedron has 12 pentagram faces, but the corresponding star domain has 60 isosceles triangle faces, and correspondingly different numbers of vertices and edges.
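The difference shows up in the Euler characteristics of the two structures. In the sketch below, the counts for the star-domain boundary are my own tally, obtained by augmenting a dodecahedron with twelve pentagonal pyramids; the counts for the abstract polyhedron are the standard ones.

```python
def euler_characteristic(vertices, edges, faces):
    return vertices - edges + faces

# Small stellated dodecahedron as an abstract polyhedron:
# 12 vertices, 30 edges, 12 pentagram faces.
print(euler_characteristic(12, 30, 12))            # -6 (not a topological sphere)

# Boundary of the corresponding star domain: a dodecahedron (20 vertices,
# 30 edges) plus 12 pyramid apexes, 60 slant edges, and 60 triangle faces.
print(euler_characteristic(20 + 12, 30 + 60, 60))  # 2 (a topological sphere)
```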
Polyhedral star domains appear in various types of architecture, usually religious in nature. For example, they are seen on many baroque churches as symbols of the Pope who built the church, on Hungarian churches, and on other religious buildings. These stars can also be used as decorations. Moravian stars are used for both purposes and can be constructed in various forms.

See also

• Star polygon
• Stellation
• Polyhedral compound
• List of uniform polyhedra
• List of uniform polyhedra by Schwarz triangle

External links

• Weisstein, Eric W. "Star Polyhedron". MathWorld.
Star number

A star number is a centered figurate number: a centered hexagram (six-pointed star), such as the Star of David or the board Chinese checkers is played on.

[Infobox] Star number: first four star numbers, by color. Total no. of terms: infinity. Formula: $S_{n}=6n(n-1)+1$. First terms: 1, 13, 37, 73, 121, 181. OEIS index: A003154 (star).

The nth star number is given by the formula Sn = 6n(n − 1) + 1. The first 43 star numbers are

1, 13, 37, 73, 121, 181, 253, 337, 433, 541, 661, 793, 937, 1093, 1261, 1441, 1633, 1837, 2053, 2281, 2521, 2773, 3037, 3313, 3601, 3901, 4213, 4537, 4873, 5221, 5581, 5953, 6337, 6733, 7141, 7561, 7993, 8437, 8893, 9361, 9841, 10333, 10837 (sequence A003154 in the OEIS).

The digital root of a star number is always 1 or 4, and progresses in the repeating sequence 1, 4, 1, 1, 4, 1, … The last two digits of a star number in base 10 are always 01, 13, 21, 33, 37, 41, 53, 61, 73, 81, or 93.

Unique among the star numbers is 35113, since its prime factors (i.e., 13, 37 and 73) are also consecutive star numbers.

Relationships to other kinds of numbers

Geometrically, the nth star number is made up of a central point and 12 copies of the (n−1)th triangular number, making it numerically equal to the nth centered dodecagonal number, but differently arranged.

Infinitely many star numbers are also triangular numbers, the first four being S1 = 1 = T1, S7 = 253 = T22, S91 = 49141 = T313, and S1261 = 9533161 = T4366 (sequence A156712 in the OEIS).

Infinitely many star numbers are also square numbers, the first four being S1 = 1 = 1², S5 = 121 = 11², S45 = 11881 = 109², and S441 = 1164241 = 1079² (sequence A054318 in the OEIS), for square stars (sequence A006061 in the OEIS).

A star prime is a star number that is prime. The first few star primes (sequence A083577 in the OEIS) are 13, 37, 73, 181, 337, 433, 541, 661, 937. A superstar prime is a star prime whose prime index is also a star number. The first two such numbers are 661 and 1750255921. A reverse superstar prime is a star number whose index is a star prime. The first few such numbers are 937, 7993, 31537, 195481, 679393, 1122337, 1752841, 2617561, 5262193.

The term "star number" or "stellate number" is occasionally used to refer to octagonal numbers.[1]

Other properties

The harmonic series of unit fractions with the star numbers as denominators is:

$\sum_{n=1}^{\infty} \frac{1}{S_n} = 1 + \frac{1}{13} + \frac{1}{37} + \frac{1}{73} + \frac{1}{121} + \frac{1}{181} + \frac{1}{253} + \frac{1}{337} + \cdots = \frac{\pi}{2\sqrt{3}} \tan\!\left(\frac{\pi}{2\sqrt{3}}\right) \approx 1.159173.$

The alternating series of unit fractions with the star numbers as denominators is:

$\sum_{n=1}^{\infty} (-1)^{n-1} \frac{1}{S_n} = 1 - \frac{1}{13} + \frac{1}{37} - \frac{1}{73} + \frac{1}{121} - \frac{1}{181} + \frac{1}{253} - \frac{1}{337} + \cdots \approx 0.941419.$
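A short sketch generating the sequence and checking the properties above (helper names are illustrative):

```python
def star(n):
    # n-th star number: S_n = 6n(n - 1) + 1
    return 6 * n * (n - 1) + 1

def digital_root(m):
    return 1 + (m - 1) % 9

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

print([star(n) for n in range(1, 7)])                # [1, 13, 37, 73, 121, 181]
print([digital_root(star(n)) for n in range(1, 7)])  # [1, 4, 1, 1, 4, 1]
print([star(n) for n in range(1, 15) if is_prime(star(n))])
# [13, 37, 73, 181, 337, 433, 541, 661, 937, 1093] -- the star primes
```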
See also

• Centered hexagonal number

References

1. Sloane, N. J. A. (ed.). "Sequence A000567". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
• Highly cototient • Highly totient • Noncototient • Nontotient • Perfect totient • Sparsely totient Aliquot sequences • Amicable • Perfect • Sociable • Untouchable Primorial • Euclid • Fortunate Other prime factor or divisor related numbers • Blum • Cyclic • Erdős–Nicolas • Erdős–Woods • Friendly • Giuga • Harmonic divisor • Jordan–Pólya • Lucas–Carmichael • Pronic • Regular • Rough • Smooth • Sphenic • Størmer • Super-Poulet • Zeisel Numeral system-dependent numbers Arithmetic functions and dynamics • Persistence • Additive • Multiplicative Digit sum • Digit sum • Digital root • Self • Sum-product Digit product • Multiplicative digital root • Sum-product Coding-related • Meertens Other • Dudeney • Factorion • Kaprekar • Kaprekar's constant • Keith • Lychrel • Narcissistic • Perfect digit-to-digit invariant • Perfect digital invariant • Happy P-adic numbers-related • Automorphic • Trimorphic Digit-composition related • Palindromic • Pandigital • Repdigit • Repunit • Self-descriptive • Smarandache–Wellin • Undulating Digit-permutation related • Cyclic • Digit-reassembly • Parasitic • Primeval • Transposable Divisor-related • Equidigital • Extravagant • Frugal • Harshad • Polydivisible • Smith • Vampire Other • Friedman Binary numbers • Evil • Odious • Pernicious Generated via a sieve • Lucky • Prime Sorting related • Pancake number • Sorting number Natural language related • Aronson's sequence • Ban Graphemics related • Strobogrammatic • Mathematics portal
Star product In mathematics, the star product is a method of combining graded posets with unique minimal and maximal elements, preserving the property that the posets are Eulerian. Not to be confused with the Moyal product, also sometimes referred to as the star product. Definition The star product of two graded posets $(P,\leq _{P})$ and $(Q,\leq _{Q})$, where $P$ has a unique maximal element ${\widehat {1}}$ and $Q$ has a unique minimal element ${\widehat {0}}$, is a poset $P*Q$ on the set $(P\setminus \{{\widehat {1}}\})\cup (Q\setminus \{{\widehat {0}}\})$. We define the partial order $\leq _{P*Q}$ by $x\leq y$ if and only if: 1. $\{x,y\}\subset P$, and $x\leq _{P}y$; 2. $\{x,y\}\subset Q$, and $x\leq _{Q}y$; or 3. $x\in P$ and $y\in Q$. In other words, we pluck out the top of $P$ and the bottom of $Q$, and require that everything in $P$ be smaller than everything in $Q$. Example For example, suppose $P$ and $Q$ are the Boolean algebra on two elements. Then $P*Q$ is the six-element poset whose Hasse diagram (not shown here) has a unique bottom element covered by two incomparable atoms, each lying below two incomparable coatoms, which in turn are covered by a unique top element. Properties The star product of Eulerian posets is Eulerian. See also • Product order, a different way of combining posets References • Stanley, R., Flag $f$-vectors and the $\mathbf {cd} $-index, Math. Z. 216 (1994), 483–499. This article incorporates material from star product on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
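To make the definition concrete, here is a minimal Python sketch; the function `star_product` and the element labels are ad-hoc choices for illustration, not a standard API. It transcribes the three clauses of $\leq _{P*Q}$ and reproduces the example above, using disjoint labels for the two copies of the Boolean algebra.

```python
def star_product(P, leq_P, top, Q, leq_Q, bottom):
    """Star product of graded posets: drop the top of P and the bottom of Q,
    keep each poset's internal order, and put all of P below all of Q.
    Posets are given as a set of elements plus a set of (x, y) pairs meaning x <= y."""
    elems = (P - {top}) | (Q - {bottom})
    leq = {(x, y) for (x, y) in leq_P if top not in (x, y)}          # clause 1
    leq |= {(x, y) for (x, y) in leq_Q if bottom not in (x, y)}      # clause 2
    leq |= {(x, y) for x in P - {top} for y in Q - {bottom}}         # clause 3
    return elems, leq

def boolean_algebra_2(z, a, b, o):
    """The Boolean algebra on two atoms as a poset: z < a, b < o."""
    E = {z, a, b, o}
    leq = {(x, y) for x in E for y in E if x == y}                   # reflexivity
    leq |= {(z, a), (z, b), (a, o), (b, o), (z, o)}
    return E, leq

P, leq_P = boolean_algebra_2("0", "a", "b", "1")
Q, leq_Q = boolean_algebra_2("0'", "c", "d", "1'")
elems, leq = star_product(P, leq_P, "1", Q, leq_Q, "0'")
print(sorted(elems))        # ["0", "1'", "a", "b", "c", "d"]: six elements
print(("a", "c") in leq)    # True: everything left in P lies below everything in Q
```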
Moyal product In mathematics, the Moyal product (after José Enrique Moyal; also called the star product or Weyl–Groenewold product, after Hermann Weyl and Hilbrand J. Groenewold) is an example of a phase-space star product. It is an associative, non-commutative product, ★, on the functions on $\mathbb {R} ^{2n}$, equipped with its Poisson bracket (with a generalization to symplectic manifolds, described below). It is a special case of the ★-product of the "algebra of symbols" of a universal enveloping algebra. This article is about the product on functions on phase space. It is not to be confused with the star product on graded posets. Historical comments The Moyal product is named after José Enrique Moyal, but is also sometimes called the Weyl–Groenewold product as it was introduced by H. J. Groenewold in his 1946 doctoral dissertation, in a trenchant appreciation[1] of the Weyl correspondence. Moyal actually appears not to have known about the product in his celebrated article[2] and was crucially lacking it in his legendary correspondence with Dirac, as illustrated in his biography.[3] The popular naming after Moyal appears to have emerged only in the 1970s, in homage to his flat phase-space quantization picture.[4] Definition The product for smooth functions f and g on $\mathbb {R} ^{2n}$ takes the form $f\star g=fg+\sum _{n=1}^{\infty }\hbar ^{n}C_{n}(f,g),$ where each $C_{n}$ is a certain bidifferential operator of order n characterized by the following properties (see below for an explicit formula): • $f\star g=fg+{\mathcal {O}}(\hbar ),$ Deformation of the pointwise product — implicit in the formula above. • $f\star g-g\star f=i\hbar \{f,g\}+{\mathcal {O}}(\hbar ^{3})\equiv i\hbar \{\{f,g\}\},$ Deformation of the Poisson bracket, called the Moyal bracket. • $f\star 1=1\star f=f,$ The 1 of the undeformed algebra is also the identity in the new algebra. • ${\overline {f\star g}}={\overline {g}}\star {\overline {f}},$ The complex conjugate is an antilinear antiautomorphism. Note that, if one wishes to take functions valued in the real numbers, then an alternative version eliminates the i in the second condition and eliminates the fourth condition. If one restricts to polynomial functions, the above algebra is isomorphic to the Weyl algebra $A_{n}$, and the two offer alternative realizations of the Weyl map of the space of polynomials in n variables (or the symmetric algebra of a vector space of dimension 2n). To provide an explicit formula, consider a constant Poisson bivector Π on $\mathbb {R} ^{2n}$: $\Pi =\sum _{i,j}\Pi ^{ij}\partial _{i}\wedge \partial _{j},$ where $\Pi ^{ij}$ is a real number for each i, j. The star product of two functions f and g can then be defined as the pseudo-differential operator acting on both of them, $f\star g=fg+{\frac {i\hbar }{2}}\sum _{i,j}\Pi ^{ij}(\partial _{i}f)(\partial _{j}g)-{\frac {\hbar ^{2}}{8}}\sum _{i,j,k,m}\Pi ^{ij}\Pi ^{km}(\partial _{i}\partial _{k}f)(\partial _{j}\partial _{m}g)+\ldots ,$ where ħ is the reduced Planck constant, treated as a formal parameter here. This is a special case of what is known as the Berezin formula[5] on the algebra of symbols and can be given a closed form[6] (which follows from the Baker–Campbell–Hausdorff formula).
The closed form can be obtained by using the exponential: $f\star g=m\circ e^{{\frac {i\hbar }{2}}\Pi }(f\otimes g),$ where m is the multiplication map, m(a ⊗ b) = ab, and the exponential is treated as a power series, $e^{A}=\sum _{n=0}^{\infty }{\frac {1}{n!}}A^{n}.$ That is, the formula for Cn is $C_{n}={\frac {i^{n}}{2^{n}n!}}m\circ \Pi ^{n}.$ As indicated, often one eliminates all occurrences of i above, and the formulas then restrict naturally to real numbers. Note that if the functions f and g are polynomials, the above infinite sums become finite (reducing to the ordinary Weyl-algebra case). The relationship of the Moyal product to the generalized ★-product used in the definition of the "algebra of symbols" of a universal enveloping algebra follows from the fact that the Weyl algebra is the universal enveloping algebra of the Heisenberg algebra (modulo that the center equals the unit). On manifolds On any symplectic manifold, one can, at least locally, choose coordinates so as to make the symplectic structure constant, by Darboux's theorem; and, using the associated Poisson bivector, one may consider the above formula. For it to work globally, as a function on the whole manifold (and not just a local formula), one must equip the symplectic manifold with a torsion-free symplectic connection. This makes it a Fedosov manifold. More general results for arbitrary Poisson manifolds (where the Darboux theorem does not apply) are given by the Kontsevich quantization formula. Examples A simple explicit example of the construction and utility of the ★-product (for the simplest case of a two-dimensional euclidean phase space) is given in the article on the Wigner–Weyl transform: two Gaussians compose with this ★-product according to a hyperbolic tangent law:[7] $\exp \left[-a\left(x^{2}+p^{2}\right)\right]\star \exp \left[-b\left(x^{2}+p^{2}\right)\right]={\frac {1}{1+\hbar ^{2}ab}}\exp \left[-{\frac {a+b}{1+\hbar ^{2}ab}}\left(x^{2}+p^{2}\right)\right].$ (Note the classical limit, ħ → 0.) Every correspondence prescription between phase space and Hilbert space, however, induces its own proper ★-product.[8][9] Similar results are seen in the Segal–Bargmann space and in the theta representation of the Heisenberg group, where the creation and annihilation operators a∗ = z and a = ∂/∂z are understood to act on the complex plane (respectively, the upper half-plane for the Heisenberg group), so that the position and momenta operators are given by x = (a + a∗)/2 and p = (a - a∗)/(2i). This situation is clearly different from the case where the positions are taken to be real-valued, but does offer insights into the overall algebraic structure of the Heisenberg algebra and its envelope, the Weyl algebra. Inside phase-space integrals Inside a phase-space integral, just one star product of the Moyal type may be dropped,[10] resulting in plain multiplication, as evident by integration by parts, $\int dx\,dp\;f\star g=\int dx\,dp~f~g,$ making the cyclicity of the phase-space trace manifest. This is a unique property of the above specific Moyal product, and does not hold for other correspondence rules' star products, such as Husimi's, etc. References 1. Groenewold, H. J. (1946). "On the Principles of elementary quantum mechanics" (PDF). Physica. 12: 405–460. 2. Moyal, J. E.; Bartlett, M. S. (1949). "Quantum mechanics as a statistical theory". Mathematical Proceedings of the Cambridge Philosophical Society. 45: 99. Bibcode:1949PCPS...45...99M. doi:10.1017/S0305004100000487. 3. Moyal, Ann (2006). 
Maverick Mathematician: The Life and Science of J. E. Moyal. ANU E-press. 4. Curtright, T. L.; Zachos, C. K. (2012). "Quantum Mechanics in Phase Space". Asia Pacific Physics Newsletter. 1: 37. arXiv:1104.5269. doi:10.1142/S2251158X12000069. 5. Berezin, Felix A. (1967). "Some remarks about the associated envelope of a Lie algebra". Functional Analysis and its Applications. 1: 91. 6. Bekaert, Xavier (June 2005). "Universal enveloping algebras and some applications in physics" (PDF) (Lecture notes). Université Libre du Bruxelles, Institut des Hautes Études Scientifiques. 7. Zachos, Cosmas; Fairlie, David; Curtright, Thomas, eds. (2005). Quantum Mechanics in Phase Space: An Overview with Selected Papers. World Scientific Series in 20th Century Physics. Vol. 34. Singapore: World Scientific. ISBN 978-981-238-384-6. 8. Cohen, L (1995). Time-Frequency Analysis. New York: Prentice-Hall. ISBN 978-0135945322. 9. Lee, H. W. (1995). "Theory and application of the quantum phase-space distribution functions". Physics Reports. 259 (3): 147. Bibcode:1995PhR...259..147L. doi:10.1016/0370-1573(95)00007-4. 10. Curtright, T. L.; Fairlie, D. B.; Zachos, C. K. (2014). A Concise Treatise on Quantum Mechanics in Phase Space. World Scientific. ISBN 9789814520430.
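The defining properties above can be checked symbolically at low order. The following Python sketch (using sympy; `moyal_star` is an ad-hoc helper for a single degree of freedom, truncating the series after the $\hbar ^{2}$ term) verifies the canonical commutator and the deformation of the Poisson bracket:

```python
import sympy as sp

x, p, hbar = sp.symbols("x p hbar")

def moyal_star(f, g):
    """Moyal star product on the (x, p) phase plane, truncated at order hbar^2,
    transcribing the explicit series above for the standard bivector Pi = dx ^ dp."""
    c0 = f * g
    c1 = sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)
    c2 = (sp.diff(f, x, 2) * sp.diff(g, p, 2)
          - 2 * sp.diff(f, x, p) * sp.diff(g, x, p)
          + sp.diff(f, p, 2) * sp.diff(g, x, 2))
    return sp.expand(c0 + (sp.I * hbar / 2) * c1 - (hbar**2 / 8) * c2)

# Canonical commutator: x * p - p * x = i hbar.
print(moyal_star(x, p) - moyal_star(p, x))             # I*hbar
# Moyal bracket for f = x**2, g = p**2: equals i*hbar*{f, g} (higher terms vanish).
f, g = x**2, p**2
print(sp.expand(moyal_star(f, g) - moyal_star(g, f)))  # 4*I*hbar*x*p
```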
Star unfolding In computational geometry, the star unfolding of a convex polyhedron is a net obtained by cutting the polyhedron along geodesics (shortest paths) through its faces. It has also been called the inward layout of the polyhedron, or the Alexandrov unfolding after Aleksandr Danilovich Aleksandrov, who first considered it.[1] Description In more detail, the star unfolding is obtained from a polyhedron $P$ by choosing a starting point $p$ on the surface of $P$, in general position, meaning that there is a unique shortest geodesic from $p$ to each vertex of $P$.[2][3][4] The star polygon is obtained by cutting the surface of $P$ along these geodesics, and unfolding the resulting cut surface onto a plane. The resulting shape forms a simple polygon in the plane.[2][3] The star unfolding may be used as the basis for polynomial time algorithms for various other problems involving geodesics on convex polyhedra.[2][3] Related unfoldings The star unfolding should be distinguished from another way of cutting a convex polyhedron into a simple polygon net, the source unfolding. The source unfolding cuts the polyhedron at points that have multiple equally short geodesics to the given base point $p$, and forms a polygon with $p$ at its center, preserving geodesics from $p$. Instead, the star unfolding cuts the polyhedron along the geodesics, and forms a polygon with multiple copies of $p$ at its vertices.[3] Despite their names, the source unfolding always produces a star-shaped polygon, but the star unfolding does not.[1] Generalizations of the star unfolding using a geodesic or quasigeodesic in place of a single base point have also been studied.[5][6] Another generalization uses a single base point, and a system of geodesics that are not necessarily shortest geodesics.[7] Neither the star unfolding nor the source unfolding restrict their cuts to the edges of the polyhedron. It is an open problem whether every polyhedron can be cut and unfolded to a simple polygon using only cuts along its edges.[3] References 1. Demaine, Erik; O'Rourke, Joseph (2007), "24.3 Star unfolding", Geometric Folding Algorithms, Cambridge University Press, pp. 366–372, ISBN 978-0-521-71522-5 2. Aronov, Boris; O'Rourke, Joseph (1992), "Nonoverlap of the star unfolding", Discrete & Computational Geometry, 8 (3): 219–250, doi:10.1007/BF02293047, MR 1174356 3. Agarwal, Pankaj K.; Aronov, Boris; O'Rourke, Joseph; Schevon, Catherine A. (1997), "Star unfolding of a polytope with applications", SIAM Journal on Computing, 26 (6): 1689–1713, doi:10.1137/S0097539793253371, MR 1484151 4. Chen, Jindong; Han, Yijie (1990), "Shortest paths on a polyhedron", Proceedings of the 6th Annual Symposium on Computational Geometry (SoCG 1990), ACM Press, doi:10.1145/98524.98601, S2CID 7498502 5. Itoh, Jin-ichi; O'Rourke, Joseph; Vîlcu, Costin (2010), "Star unfolding convex polyhedra via quasigeodesic loops", Discrete & Computational Geometry, 44 (1): 35–54, doi:10.1007/s00454-009-9223-x, MR 2639817 6. Kiazyk, Stephen; Lubiw, Anna (2016), "Star unfolding from a geodesic curve", Discrete & Computational Geometry, 56 (4): 1018–1036, doi:10.1007/s00454-016-9795-1, hdl:10012/8935, MR 3561798, S2CID 34942363 7. Alam, Md. Ashraful; Streinu, Ileana (2015), "Star-unfolding polygons", in Botana, Francisco; Quaresma, Pedro (eds.), Automated Deduction in Geometry: 10th International Workshop, ADG 2014, Coimbra, Portugal, July 9-11, 2014, Revised Selected Papers, Lecture Notes in Computer Science, vol. 9201, Springer, pp. 
1–20, doi:10.1007/978-3-319-21362-0_1, MR 3440706
Michael Starbird Michael P. Starbird (born 1948) is a Professor of Mathematics and a University of Texas Distinguished Teaching Professor in the Department of Mathematics at the University of Texas at Austin. He received his B.A from Pomona College and his Ph.D. in mathematics from the University of Wisconsin–Madison. Starbird's mathematical specialty is topology. He joined the University of Texas at Austin as a faculty member in 1974, and served as an associate dean in Natural Sciences from 1989 to 1997. He serves on the national education committees of the Mathematical Association of America and the American Mathematical Society. He directs UT's Inquiry Based Learning Project and works to promote the use of Inquiry Based Learning methods of instruction nationally. Awards He has received over fifteen teaching awards including the Mathematical Association of America's 2007 national teaching award; the Minnie Stevens Piper Professor award, which is a Texas statewide award given to professors in any subject in any college in the state of Texas; the UT System Regents’ Outstanding Teaching Award in its inaugural year; membership in the UT System Academy of Distinguished Teachers in its inaugural year; member and chair of UT Austin's Academy of Distinguished Teachers; and has received most of the UT-wide teaching awards. He is an inaugural year Fellow of the American Mathematical Society. He received an honorary Doctor of Science degree from Pomona College in 2014. Administrative work and Service Starbird served as Associate Dean for Academic and Student Affairs and as Associate Dean for Undergraduate Education in the College of Natural Sciences from 1989 to 1997. He has served on the Steering Committee of the Academy of Distinguished Teachers since 2000 and is currently chair. He has accepted visiting positions at the Institute for Advanced Study in Princeton, The University of California at San Diego, and the Jet Propulsion Laboratory. In 2012 he became a fellow of the American Mathematical Society, in his inaugural year.[1] Starbird has served on the national education committees of both the American Mathematical Society and the Mathematical Association of America. He currently serves on the MAA's Committee on the Undergraduate Program and is a member of the Steering Committee for the next CUPM Curriculum Guide. Publications He has produced DVD courses for The Teaching Company in the Great Courses Series on calculus, statistics, probability, geometry, and the joy of thinking, which have reached hundreds of thousands of people worldwide. Since 2000, he has given over 200 invited lectures and presented more than 35 workshops on effective teaching to faculty members. He has co-authored two Inquiry Based Learning textbooks published by the MAA: (with David Marshall and Edward Odell) Number Theory Through Inquiry and (with Brian Katz) Distilling Ideas: An Introduction to Mathematical Thinking in the new Mathematics Through Inquiry subseries of the MAA Textbook Series. He has written three books with co-author Edward B. Burger: The Heart of Mathematics: An invitation to effective thinking (in its 4th edition and winner of a Robert Hamilton book award); Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas (which has been translated into eight languages); and The 5 Elements of Effective Thinking (which is published by Princeton University Press, has translation contracts in 16 languages, and was a 2013 Independent Publisher Book Award Silver Medal winner). 
The book The Heart of Mathematics: An Invitation to Effective Thinking was acclaimed by the American Mathematical Monthly as possibly the best math book for nonmathematicians it had ever reviewed. It won a 2001 Robert W. Hamilton Book Award. Starbird has created several innovative courses at UT, including a mathematics course for liberal arts students. In 2014, he produced one of UT's first Massive Open Online Courses (MOOCs). His MOOC was called Effective Thinking Through Mathematics, a title that summarizes much of his goal for education. Courses for The Teaching Company Starbird has developed several courses for The Teaching Company: • Change and Motion: Calculus Made Clear • Joy of Thinking: The Beauty and Power of Classical Mathematical Ideas • Meaning from Data: Statistics Made Clear • What Are the Chances? Probability Made Clear • Mathematics from the Visual World References 1. List of Fellows of the American Mathematical Society, retrieved 2013-08-05. External links • Michael Starbird's homepage • Profile at University of Texas at Austin website • Michael Starbird at the Mathematics Genealogy Project • Lecture: To Infinity and Beyond
Stark conjectures In number theory, the Stark conjectures, introduced by Stark (1971, 1975, 1976, 1980) and later expanded by Tate (1984), give conjectural information about the coefficient of the leading term in the Taylor expansion of an Artin L-function associated with a Galois extension K/k of algebraic number fields. The conjectures generalize the analytic class number formula expressing the leading coefficient of the Taylor series for the Dedekind zeta function of a number field as the product of a regulator related to S-units of the field and a rational number. When K/k is an abelian extension and the order of vanishing of the L-function at s = 0 is one, Stark gave a refinement of his conjecture, predicting the existence of certain S-units, called Stark units. Rubin (1996) and Cristian Dumitru Popescu gave extensions of this refined conjecture to higher orders of vanishing. Formulation The Stark conjectures, in the most general form, predict that the leading coefficient of an Artin L-function is the product of a type of regulator, the Stark regulator, with an algebraic number. When the extension is abelian and the order of vanishing of an L-function at s = 0 is one, Stark's refined conjecture predicts the existence of the Stark units, whose roots generate Kummer extensions of K that are abelian over the base field k (and not just abelian over K, as Kummer theory implies). As such, this refinement of his conjecture has theoretical implications for solving Hilbert's twelfth problem. Also, it is possible to compute Stark units in specific examples, allowing verification of the veracity of his refined conjecture as well as providing an important computational tool for generating abelian extensions of number fields. In fact, some standard algorithms for computing abelian extensions of number fields involve producing Stark units that generate the extensions (see below). Computation The first order zero conjectures are used in recent versions of the PARI/GP computer algebra system to compute Hilbert class fields of totally real number fields, and the conjectures provide one solution to Hilbert's twelfth problem, which challenged mathematicians to show how class fields may be constructed over any number field by the methods of complex analysis. Progress Stark's principal conjecture has been proven in various special cases, including the case where the character defining the L-function takes on only rational values. Except when the base field is the field of rational numbers or an imaginary quadratic field, the abelian Stark conjectures are still unproved in number fields, and more progress has been made in function fields of an algebraic variety. Manin (2004) related Stark's conjectures to the noncommutative geometry of Alain Connes.[1] This provides a conceptual framework for studying the conjectures, although at the moment it is unclear whether Manin's techniques will yield the actual proof. Recent progress has been made by Dasgupta and Kakde. See also • Brumer–Stark conjecture Notes 1. Manin, Yu. I.; Panchishkin, A. A. (2007). Introduction to Modern Number Theory. Encyclopaedia of Mathematical Sciences. Vol. 49 (Second ed.). p. 171. ISBN 978-3-540-20364-3. ISSN 0938-0396. Zbl 1079.11002. References • Burns, David; Sands, Jonathan; Solomon, David, eds. (2004), Stark's conjectures: recent work and new directions, Contemporary Mathematics, vol. 
358, Providence, RI: American Mathematical Society, doi:10.1090/conm/358, ISBN 978-0-8218-3480-0, MR 2090725, archived from the original on 2012-04-26 • Manin, Yuri Ivanovich (2004), "Real multiplication and noncommutative geometry (ein Alterstraum)", in Piene, Ragni; Laudal, Olav Arnfinn (eds.), The legacy of Niels Henrik Abel, Berlin, New York: Springer-Verlag, pp. 685–727, arXiv:math/0202109, Bibcode:2002math......2109M, ISBN 978-3-540-43826-7, MR 2077591 • Popescu, Cristian D. (1999), "On a refined Stark conjecture for function fields", Compositio Mathematica, 116 (3): 321–367, doi:10.1023/A:1000833610462, ISSN 0010-437X, MR 1691163 • Rubin, Karl (1996), "A Stark conjecture over Z for abelian L-functions with multiple zeros", Annales de l'Institut Fourier, 46 (1): 33–62, doi:10.5802/aif.1505, ISSN 0373-0956, MR 1385509 • Stark, Harold M. (1971), "Values of L-functions at s = 1. I. L-functions for quadratic forms", Advances in Mathematics, 7 (3): 301–343, doi:10.1016/S0001-8708(71)80009-9, ISSN 0001-8708, MR 0289429 • Stark, Harold M. (1975), "L-functions at s = 1. II. Artin L-functions with rational characters", Advances in Mathematics, 17 (1): 60–92, doi:10.1016/0001-8708(75)90087-0, ISSN 0001-8708, MR 0382194 • Stark, H. M. (1977), "Class fields and modular forms of weight one", in Serre, Jean-Pierre; Zagier, D. B. (eds.), Modular Functions of One Variable V: Proceedings International Conference, University of Bonn, Sonderforschungsbereich Theoretische Mathematik, July 1976, Lecture Notes in Math, vol. 601, Berlin, New York: Springer-Verlag, pp. 277–287, doi:10.1007/BFb0063951, ISBN 978-3-540-08348-1, MR 0450243 • Stark, Harold M. (1976), "L-functions at s = 1. III. Totally real fields and Hilbert's twelfth problem", Advances in Mathematics, 22 (1): 64–84, doi:10.1016/0001-8708(76)90138-9, ISSN 0001-8708, MR 0437501 • Stark, Harold M. (1980), "L-functions at s = 1. IV. First derivatives at s = 0", Advances in Mathematics, 35 (3): 197–235, doi:10.1016/0001-8708(80)90049-3, ISSN 0001-8708, MR 0563924 • Tate, John (1984), Les conjectures de Stark sur les fonctions L d'Artin en s=0, Progress in Mathematics, vol. 47, Boston, MA: Birkhäuser Boston, ISBN 978-0-8176-3188-8, MR 0782485 External links • Hayes, David R. (1999), Lectures on Stark's Conjectures, archived from the original on February 4, 2012
Stark–Heegner theorem In number theory, the Baker–Heegner–Stark theorem[1] establishes the complete list of the quadratic imaginary number fields whose rings of integers are unique factorization domains. It solves a special case of Gauss's class number problem of determining the number of imaginary quadratic fields that have a given fixed class number. Let Q denote the set of rational numbers, and let d be a square-free integer. The field Q(√d) is a quadratic extension of Q. The class number of Q(√d) is one if and only if the ring of integers of Q(√d) is a principal ideal domain (or, equivalently, a unique factorization domain). The Baker–Heegner–Stark theorem can then be stated as follows: If d < 0, then the class number of Q(√d) is one if and only if $d\in \{\,-1,-2,-3,-7,-11,-19,-43,-67,-163\,\}.$ These are known as the Heegner numbers. By replacing d with the discriminant D of Q(√d) this list is often written as:[2] $D\in \{-3,-4,-7,-8,-11,-19,-43,-67,-163\}.$ History This result was first conjectured by Gauss in Section 303 of his Disquisitiones Arithmeticae (1798). It was essentially proven by Kurt Heegner in 1952, but Heegner's proof had some minor gaps and the theorem was not accepted until Harold Stark gave a complete proof in 1967, which had many commonalities to Heegner's work, but sufficiently many differences that Stark considers the proofs to be different.[3] Heegner "died before anyone really understood what he had done".[4] Stark formally filled in the gap in Heegner's proof in 1969 (other contemporary papers produced various similar proofs by modular functions, but Stark concentrated on explicitly filling Heegner's gap).[5] Alan Baker gave a completely different proof slightly earlier (1966) than Stark's work (or more precisely Baker reduced the result to a finite amount of computation, with Stark's work in his 1963/4 thesis already providing this computation), and won the Fields Medal for his methods. Stark later pointed out that Baker's proof, involving linear forms in 3 logarithms, could be reduced to only 2 logarithms, when the result was already known from 1949 by Gelfond and Linnik.[6] Stark's 1969 paper (Stark 1969a) also cited the 1895 text by Heinrich Martin Weber and noted that if Weber had "only made the observation that the reducibility of [a certain equation] would lead to a Diophantine equation, the class-number one problem would have been solved 60 years ago". Bryan Birch notes that Weber's book, and essentially the whole field of modular functions, dropped out of interest for half a century: "Unhappily, in 1952 there was no one left who was sufficiently expert in Weber's Algebra to appreciate Heegner's achievement."[7] Deuring, Siegel, and Chowla all gave slightly variant proofs by modular functions in the immediate years after Stark.[8] Other versions in this genre have also cropped up over the years. For instance, in 1985, Monsur Kenku gave a proof using the Klein quartic (though again utilizing modular functions).[9] And again, in 1999, Imin Chen gave another variant proof by modular functions (following Siegel's outline).[10] The work of Gross and Zagier (1986) (Gross & Zagier 1986) combined with that of Goldfeld (1976) also gives an alternative proof.[11] Real case On the other hand, it is unknown whether there are infinitely many d > 0 for which Q(√d) has class number 1. Computational results indicate that there are many such fields. Number Fields with class number one provides a list of some of these. Notes 1. 
Elkies (1999) calls this the Stark–Heegner theorem (cognate to Stark–Heegner points as in page xiii of Darmon (2004)) but omitting Baker's name is atypical. Chowla (1970) gratuitously adds Deuring and Siegel in his paper's title. 2. Elkies (1999), p. 93. 3. Stark (2011) page 42 4. Goldfeld (1985). 5. Stark (1969a) 6. Stark (1969b) 7. Birch (2004) 8. Chowla (1970) 9. Kenku (1985). 10. Chen (1999) 11. Goldfeld (1985) References • Birch, Bryan (2004), "Heegner Points: The Beginnings" (PDF), MSRI Publications, 49: 1–10 • Chen, Imin (1999), "On Siegel's Modular Curve of Level 5 and the Class Number One Problem", Journal of Number Theory, 74 (2): 278–297, doi:10.1006/jnth.1998.2320 • Chowla, S. (1970), "The Heegner–Stark–Baker–Deuring–Siegel Theorem", Journal für die reine und angewandte Mathematik, 241: 47–48, doi:10.1515/crll.1970.241.47 • Darmon, Henri (2004), "Preface to Heegner Points and Rankin L-Series" (PDF), MSRI Publications, 49: ix–xiii • Elkies, Noam D. (1999), "The Klein Quartic in Number Theory" (PDF), in Levy, Silvio (ed.), The Eightfold Way: The Beauty of Klein's Quartic Curve, MSRI Publications, vol. 35, Cambridge University Press, pp. 51–101, MR 1722413 • Goldfeld, Dorian (1985), "Gauss's class number problem for imaginary quadratic fields", Bulletin of the American Mathematical Society, 13: 23–37, doi:10.1090/S0273-0979-1985-15352-2, MR 0788386 • Gross, Benedict H.; Zagier, Don B. (1986), "Heegner points and derivatives of L-series", Inventiones Mathematicae, 84 (2): 225–320, Bibcode:1986InMat..84..225G, doi:10.1007/BF01388809, MR 0833192, S2CID 125716869. • Heegner, Kurt (1952), "Diophantische Analysis und Modulfunktionen" [Diophantine Analysis and Modular Functions], Mathematische Zeitschrift (in German), 56 (3): 227–253, doi:10.1007/BF01174749, MR 0053135, S2CID 120109035 • Kenku, M. Q. (1985), "A note on the integral points of a modular curve of level 7", Mathematika, 32: 45–48, doi:10.1112/S0025579300010846, MR 0817106 • Levy, Silvio, ed. (1999), The Eightfold Way: The Beauty of Klein's Quartic Curve, MSRI Publications, vol. 35, Cambridge University Press • Stark, H. M. (1969a), "On the gap in the theorem of Heegner" (PDF), Journal of Number Theory, 1 (1): 16–27, Bibcode:1969JNT.....1...16S, doi:10.1016/0022-314X(69)90023-7, hdl:2027.42/33039 • Stark, H. M. (1969b), "A historical note on complex quadratic fields with class-number one.", Proceedings of the American Mathematical Society, 21: 254–255, doi:10.1090/S0002-9939-1969-0237461-X • Stark, H. M. (2011), The Origin of the "Stark" conjectures, vol. appearing in Arithmetic of L-functions
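Returning to the theorem, the class-number-one condition can be verified by a finite computation: for a negative discriminant D, h(D) counts the reduced primitive binary quadratic forms of discriminant D. A minimal Python sketch (the helper `class_number` is ad hoc; the reduction conditions are the classical ones):

```python
from math import gcd, isqrt

def class_number(D):
    """Class number h(D) of a negative discriminant D (D = 0 or 1 mod 4),
    counted as the number of reduced primitive forms ax^2 + bxy + cy^2 with
    b^2 - 4ac = D, |b| <= a <= c, and b >= 0 whenever |b| = a or a = c."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    for a in range(1, isqrt(-D // 3) + 1):   # reduction forces 3a^2 <= -D
        for b in range(-a, a + 1):
            if (b - D) % 2:                  # b must have the parity of D
                continue
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a or gcd(gcd(a, abs(b)), c) != 1:
                continue
            if b < 0 and (-b == a or a == c):
                continue                     # skip the non-reduced twin form
            h += 1
    return h

# Each of the nine Heegner discriminants has class number one:
print([class_number(D) for D in (-3, -4, -7, -8, -11, -19, -43, -67, -163)])
print(class_number(-15), class_number(-23))  # 2 3: larger class numbers exist
```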
Starlike tree In the area of mathematics known as graph theory, a tree is said to be starlike if it has exactly one vertex of degree greater than 2. This high-degree vertex is the root, and a starlike tree is obtained by attaching at least three paths (linear graphs) to this central vertex. Properties Two finite starlike trees are isospectral, i.e. their graph Laplacians have the same spectra, if and only if they are isomorphic.[1] The graph Laplacian of a starlike tree has at most one eigenvalue greater than or equal to 4.[2] References 1. M. Lepovic, I. Gutman (2001). No starlike trees are cospectral. 2. Nakatsukasa, Yuji; Saito, Naoki; Woei, Ernest (April 2013). "Mysteries around the Graph Laplacian Eigenvalue 4". Linear Algebra and Its Applications. 438 (8): 3231–46. arXiv:1112.4526. doi:10.1016/j.laa.2012.12.012. External links • Weisstein, Eric W. "Spider Graph". MathWorld. • (sequence A004250 in the OEIS)
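Both properties can be probed numerically. A small Python sketch (numpy only; `starlike_laplacian` is an ad-hoc helper) builds the Laplacian of the starlike tree with arms of lengths 3, 2, 2, 1 and counts the eigenvalues that reach 4; per the second reference, the count can never exceed one:

```python
import numpy as np

def starlike_laplacian(arm_lengths):
    """Graph Laplacian L = D - A of the starlike tree obtained by attaching
    paths of the given lengths to a single central vertex (vertex 0)."""
    n = 1 + sum(arm_lengths)
    A = np.zeros((n, n))
    v = 1
    for length in arm_lengths:
        prev = 0                              # each arm hangs off the centre
        for _ in range(length):
            A[prev, v] = A[v, prev] = 1.0
            prev, v = v, v + 1
    return np.diag(A.sum(axis=1)) - A

eigs = np.linalg.eigvalsh(starlike_laplacian([3, 2, 2, 1]))
print(np.round(eigs, 4))
print(int((eigs >= 4).sum()))   # 1 here; never more than one, per the reference
```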
Pentagrammic antiprism In geometry, the pentagrammic antiprism is one in an infinite set of nonconvex antiprisms formed by triangle sides and two regular star polygon caps, in this case two pentagrams. Uniform pentagrammic antiprism: Type: Prismatic uniform polyhedron • Elements: F = 12, E = 20, V = 10 (χ = 2) • Faces by sides: 10{3} + 2{5/2} • Schläfli symbol: sr{2,5/2} • Wythoff symbol: | 2 2 5/2 • Symmetry: D5h, [5,2], (*552), order 20 • Rotation group: D5, [5,2]+, (55), order 10 • Index references: U79(a) • Dual: Pentagrammic trapezohedron • Properties: nonconvex • Vertex figure: 3.3.3.5/2 It has 12 faces, 20 edges and 10 vertices. This polyhedron is identified with the indexed name U79 as a uniform polyhedron.[1] Note that the pentagram face has an ambiguous interior because it is self-intersecting. The central pentagon region can be considered interior or exterior depending on how interior is defined. One definition of interior is the set of points that have a ray that crosses the boundary an odd number of times to escape the perimeter. In either case, it is best to show the pentagram boundary line to distinguish it from a concave decagon. Gallery The gallery figures (omitted here) show an alternative representation with hollow centers to the pentagrams, the dual pentagrammic trapezohedron, and a net of the antiprism. See also • Prismatic uniform polyhedron • Pentagrammic prism • Pentagrammic crossed-antiprism References 1. Maeder, Roman. "79: pentagrammic antiprism". External links • Weisstein, Eric W. "Pentagrammic antiprism". MathWorld. • http://www.mathconsult.ch/showroom/unipoly/04.html • https://web.archive.org/web/20050313233653/http://www.math.technion.ac.il/~rl/kaleido/data/04.html
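The ray-crossing definition of interior quoted above is the even-odd (ray casting) rule of computational geometry. The sketch below (ad-hoc helper names) applies it to a unit pentagram traced as a self-intersecting pentagon and shows that the central region counts as exterior under this rule, while a point inside one of the five spikes counts as interior:

```python
import math

def pentagram():
    """Unit pentagram traced as a self-intersecting pentagon: take the five
    pentagon vertices on the unit circle and join them in the order 0, 2, 4, 1, 3."""
    pts = [(math.cos(math.radians(90 + 72 * k)),
            math.sin(math.radians(90 + 72 * k))) for k in range(5)]
    return [pts[i] for i in (0, 2, 4, 1, 3)]

def even_odd_inside(poly, point):
    """Even-odd rule: a point is interior iff a rightward ray from it
    crosses the polygon's boundary an odd number of times."""
    x, y = point
    inside = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):                        # edge straddles the ray
            if x1 + (y - y1) * (x2 - x1) / (y2 - y1) > x:
                inside = not inside
    return inside

star = pentagram()
print(even_odd_inside(star, (0.0, 0.0)))   # False: the ray crosses twice (even),
                                           # so the central pentagon is "exterior"
tip = (0.9 * math.cos(math.radians(18)), 0.9 * math.sin(math.radians(18)))
print(even_odd_inside(star, tip))          # True: one crossing inside a spike
```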
Stars and bars (combinatorics) In the context of combinatorial mathematics, stars and bars (also called "sticks and stones",[1] "balls and bars",[2] and "dots and dividers"[3]) is a graphical aid for deriving certain combinatorial theorems. It was popularized by William Feller in his classic book on probability. It can be used to solve many simple counting problems, such as how many ways there are to put n indistinguishable balls into k distinguishable bins.[4] Statements of theorems The stars and bars method is often introduced specifically to prove the following two theorems of elementary combinatorics concerning the number of solutions to an equation. Theorem one For any pair of positive integers n and k, the number of k-tuples of positive integers whose sum is n is equal to the number of (k − 1)-element subsets of a set with n − 1 elements. For example, if n = 10 and k = 4, the theorem gives the number of solutions to x1 + x2 + x3 + x4 = 10 (with x1, x2, x3, x4 > 0) as the binomial coefficient ${\binom {n-1}{k-1}}={\binom {10-1}{4-1}}={\binom {9}{3}}=84.$ This corresponds to compositions of an integer. Theorem two For any pair of positive integers n and k, the number of k-tuples of non-negative integers whose sum is n is equal to the number of multisets of cardinality n taken from a set of size k, or equivalently, the number of multisets of cardinality k − 1 taken from a set of size n + 1. For example, if n = 10 and k = 4, the theorem gives the number of solutions to x1 + x2 + x3 + x4 = 10 (with x1, x2, x3, x4 $\geq 0$ ) as: $\left(\!\!{k \choose n}\!\!\right)={k+n-1 \choose n}={\binom {13}{10}}=286$ $\left(\!\!{n+1 \choose k-1}\!\!\right)={n+1+k-1-1 \choose k-1}={\binom {13}{3}}=286$ ${\binom {n+k-1}{k-1}}={\binom {10+4-1}{4-1}}={\binom {13}{3}}=286$ This corresponds to weak compositions of an integer. Proofs via the method of stars and bars Theorem one proof Suppose there are n objects (represented here by stars) to be placed into k bins, such that all bins contain at least one object. The bins are distinguishable (say they are numbered 1 to k) but the n stars are not (so configurations are only distinguished by the number of stars present in each bin). A configuration is thus represented by a k-tuple of positive integers, as in the statement of the theorem. For example, with n = 7 and k = 3, start by placing the stars in a line: ★ ★ ★ ★ ★ ★ ★ Fig. 1: Seven objects, represented by stars The configuration will be determined once it is known which is the first star going to the second bin, and the first star going to the third bin, etc.. This is indicated by placing k − 1 bars between the stars. Because no bin is allowed to be empty (all the variables are positive), there is at most one bar between any pair of stars. For example: ★ ★ ★ ★ | ★ | ★ ★ Fig. 2: These two bars give rise to three bins containing 4, 1, and 2 objects There are n − 1 gaps between stars. A configuration is obtained by choosing k − 1 of these gaps to contain a bar; therefore there are ${\tbinom {n-1}{k-1}}$ possible combinations. Theorem two proof In this case, the weakened restriction of non-negativity instead of positivity means that we can place multiple bars between stars, before the first star and after the last star. For example, when n = 7 and k = 5, the tuple (4, 0, 1, 2, 0) may be represented by the following diagram: ★ ★ ★ ★ | | ★ | ★ ★ | Fig. 
3: These four bars give rise to five bins containing 4, 0, 1, 2, and 0 objects To see that there are ${\tbinom {n+k-1}{k-1}}$ possible arrangements, observe that any arrangement of stars and bars consists of a total of n + k − 1 objects, n of which are stars and k − 1 of which are bars. Thus, we only need to choose k − 1 of the n + k − 1 positions to be bars (or, equivalently, choose n of the positions to be stars). Theorem 1 can now be restated in terms of Theorem 2, because the requirement that all the variables are positive is equivalent to pre-assigning each variable a 1, and asking for the number of solutions when each variable is non-negative. For example: $x_{1}+x_{2}+x_{3}+x_{4}=10$ with $x_{1},x_{2},x_{3},x_{4}>0$ is equivalent to: $x_{1}+x_{2}+x_{3}+x_{4}=6$ with $x_{1},x_{2},x_{3},x_{4}\geq 0$ Proofs by generating functions Both cases are very similar, we will look at the case when $x_{i}\geq 0$ first. The 'bucket' becomes ${\frac {1}{1-x}}$ This can also be written as $1+x+x^{2}+\dots $ and the exponent of x tells us how many balls are placed in the bucket. Each additional bucket is represented by another ${\frac {1}{1-x}}$, and so the final generating function is ${\frac {1}{1-x}}{\frac {1}{1-x}}\dots {\frac {1}{1-x}}={\frac {1}{(1-x)^{k}}}$ As we only have m balls, we want the coefficient of $x^{m}$ (written $[x^{m}]:$) from this $[x^{m}]:{\frac {1}{(1-x)^{k}}}$ This is a well-known generating function - it generates the diagonals in Pascal's Triangle, and the coefficient of $x^{m}$ is ${\binom {m+k-1}{k-1}}$ For the case when $x_{i}>0$, we need to add x into the numerator to indicate that at least one ball is in the bucket. ${\frac {x}{1-x}}{\frac {x}{1-x}}\dots {\frac {x}{1-x}}={\frac {x^{k}}{(1-x)^{k}}}$ and the coefficient of $x^{m}$ is ${\binom {m-1}{k-1}}$ Examples Many elementary word problems in combinatorics are resolved by the theorems above. Example 1 If one wishes to count the number of ways to distribute seven indistinguishable one dollar coins among Amber, Ben, and Curtis so that each of them receives at least one dollar, one may observe that distributions are essentially equivalent to tuples of three positive integers whose sum is 7. (Here the first entry in the tuple is the number of coins given to Amber, and so on.) Thus stars and bars theorem 1 applies, with n = 7 and k = 3, and there are ${\tbinom {7-1}{3-1}}=15$ ways to distribute the coins. Example 2 If n = 5, k = 4, and a set of size k is {a, b, c, d}, then ★|★★★||★ could represent either the multiset {a, b, b, b, d} or the 4-tuple (1, 3, 0, 1). The representation of any multiset for this example should use SAB2 with n = 5, k – 1 = 3 bars to give ${\tbinom {5+4-1}{4-1}}={\tbinom {8}{3}}=56$. Example 3 SAB2 allows for more bars than stars, which isn't permitted in SAB1. So, for example, 10 balls into 7 bins is ${\tbinom {16}{6}}$, while 7 balls into 10 bins is ${\tbinom {16}{9}}$, with 6 balls into 11 bins as ${\tbinom {16}{10}}={\tbinom {16}{6}}.$ Example 4 If we have the infinite power series $\left[\sum _{k=1}^{\infty }x^{k}\right],$ we can use this method to compute the Cauchy product of m copies of the series. For the nth term of the expansion, we are picking n powers of x from m separate locations. 
Hence there are ${\tbinom {n-1}{m-1}}$ ways to form our nth power: $\left[\sum _{k=1}^{\infty }x^{k}\right]^{m}=\sum _{n=m}^{\infty }{{n-1} \choose {m-1}}x^{n}$ Example 5 The graphical method was used by Paul Ehrenfest and Heike Kamerlingh Onnes – with symbol ε (quantum energy element) in place of a star – as a simple derivation of Max Planck's expression of "complexions".[5] Planck called "complexions" the number R of possible distributions of P energy elements ε over N resonators:[6] $R={\frac {(N+P-1)!}{P!(N-1)!}}\ $ The graphical representation would contain P times the symbol ε and N – 1 times the sign | for each possible distribution. In their demonstration, Ehrenfest and Kamerlingh Onnes took N = 4 and P = 7 (i.e., R = 120 combinations). They chose the 4-tuple (4, 2, 0, 1) as the illustrative example for this symbolic representation: εεεε|εε||ε See also • Gaussian binomial coefficient • Partition (number theory) • Twelvefold way References 1. Batterson, J. Competition Math for Middle School. Art of Problem Solving. 2. Flajolet, Philippe; Sedgewick, Robert (June 26, 2009). Analytic Combinatorics. Cambridge University Press. ISBN 978-0-521-89806-5. 3. "Art of Problem Solving". artofproblemsolving.com. Retrieved 2021-10-26. 4. Feller, William (1950). An Introduction to Probability Theory and Its Applications. Vol. 1 (3rd ed.). Wiley. p. 38. 5. Ehrenfest, Paul; Kamerlingh Onnes, Heike (1915). "Simplified deduction of the formula from the theory of combinations which Planck uses as the basis of his radiation theory". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. Series 6. 29 (170): 297–301. doi:10.1080/14786440208635308. Retrieved 5 December 2020. 6. Planck, Max (1901). "Ueber das Gesetz der Energieverteilung im Normalspectrum". Annalen der Physik. 309 (3): 553–563. Bibcode:1901AnP...309..553P. doi:10.1002/andp.19013090310. Further reading • Pitman, Jim (1993). Probability. Berlin: Springer-Verlag. ISBN 0-387-97974-3. • Weisstein, Eric W. "Multichoose". Mathworld -- A Wolfram Web Resource. Retrieved 18 November 2012.
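Both theorems are easy to sanity-check by exhaustive enumeration for small n and k; the following sketch (ad-hoc helper names) compares brute-force counts with the binomial-coefficient formulas for the n = 10, k = 4 example used above:

```python
from itertools import product
from math import comb

def count_positive(n, k):
    """Brute force: k-tuples of positive integers with sum n."""
    return sum(1 for t in product(range(1, n + 1), repeat=k) if sum(t) == n)

def count_nonnegative(n, k):
    """Brute force: k-tuples of non-negative integers with sum n."""
    return sum(1 for t in product(range(n + 1), repeat=k) if sum(t) == n)

n, k = 10, 4
print(count_positive(n, k), comb(n - 1, k - 1))         # 84 84   (Theorem one)
print(count_nonnegative(n, k), comb(n + k - 1, k - 1))  # 286 286 (Theorem two)
```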
StatXact StatXact is a statistical software package for analyzing data using exact statistics. It calculates exact p-values and confidence intervals for contingency tables and non-parametric procedures. It is marketed by Cytel Inc. StatXact: Developer(s): Cytel Inc. • Stable release: 10 / July 10, 2013 • Written in: C • Operating system: Windows • Type: Numerical analysis • License: Proprietary • Website: www.cytel.com/software/statxact References • Mehta, Cyrus R. (1991). "StatXact: A Statistical Package for Exact Nonparametric Inference". The American Statistician. 45 (1): 74–75. doi:10.2307/2685246. JSTOR 2685246. External links • StatXact homepage at Cytel Inc.
Statistics Surveys Statistics Surveys is an open-access electronic journal, founded in 2007, that is jointly sponsored by the American Statistical Association, the Bernoulli Society, the Institute of Mathematical Statistics and the Statistical Society of Canada. It publishes review articles on topics of interest in statistics. Wendy L. Martinez serves as the coordinating editor. Statistics Surveys: Discipline: Statistics • Language: English • Edited by: Wendy L. Martinez • History: begun 2007 • Publisher: American Statistical Association, Bernoulli Society, Institute of Mathematical Statistics, Statistical Society of Canada (USA, The Netherlands, USA, Canada) • Open access: yes • ISO 4 abbreviation: Stat. Surv. • ISSN: 1935-7516 Links • Journal homepage External links • Official page
Linearization In mathematics, linearization is finding the linear approximation to a function at a given point. The linear approximation of a function is the first order Taylor expansion around the point of interest. In the study of dynamical systems, linearization is a method for assessing the local stability of an equilibrium point of a system of nonlinear differential equations or discrete dynamical systems.[1] This method is used in fields such as engineering, physics, economics, and ecology. For the linearization of a partial order, see Linear extension. For the linearization in concurrent computing, see Linearizability. Linearization of a function Linearizations of a function are lines—usually lines that can be used for purposes of calculation. Linearization is an effective method for approximating the output of a function $y=f(x)$ at any $x=a$ based on the value and slope of the function at $x=b$, given that $f(x)$ is differentiable on $[a,b]$ (or $[b,a]$) and that $a$ is close to $b$. In short, linearization approximates the output of a function near $x=a$. For example, ${\sqrt {4}}=2$. However, what would be a good approximation of ${\sqrt {4.001}}={\sqrt {4+.001}}$? For any given function $y=f(x)$, $f(x)$ can be approximated if it is near a known differentiable point. The most basic requisite is that $L_{a}(a)=f(a)$, where $L_{a}(x)$ is the linearization of $f(x)$ at $x=a$. The point-slope form of an equation forms an equation of a line, given a point $(H,K)$ and slope $M$. The general form of this equation is: $y-K=M(x-H)$. Using the point $(a,f(a))$, $L_{a}(x)$ becomes $y=f(a)+M(x-a)$. Because differentiable functions are locally linear, the best slope to substitute in would be the slope of the line tangent to $f(x)$ at $x=a$. While the concept of local linearity applies the most to points arbitrarily close to $x=a$, those relatively close work relatively well for linear approximations. The slope $M$ should be, most accurately, the slope of the tangent line at $x=a$. Geometrically, the graph of the linearization is the tangent line of $f(x)$ at $x$: at $f(x+h)$, where $h$ is any small positive or negative value, $f(x+h)$ is very nearly the value of the tangent line at the point $(x+h,L(x+h))$. The final equation for the linearization of a function at $x=a$ is: $y=f(a)+f'(a)(x-a)$ For $x=a$, $f(a)=f(x)$. The derivative of $f(x)$ is $f'(x)$, and the slope of $f(x)$ at $a$ is $f'(a)$. Example To find ${\sqrt {4.001}}$, we can use the fact that ${\sqrt {4}}=2$. The linearization of $f(x)={\sqrt {x}}$ at $x=a$ is $y={\sqrt {a}}+{\frac {1}{2{\sqrt {a}}}}(x-a)$, because the function $f'(x)={\frac {1}{2{\sqrt {x}}}}$ defines the slope of the function $f(x)={\sqrt {x}}$ at $x$. Substituting in $a=4$, the linearization at 4 is $y=2+{\frac {x-4}{4}}$. In this case $x=4.001$, so ${\sqrt {4.001}}$ is approximately $2+{\frac {4.001-4}{4}}=2.00025$. The true value is close to 2.00024998, so the linearization approximation has a relative error of less than 1 millionth of a percent.
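The example translates directly into code. A minimal Python sketch (the helper `linearize` is ad hoc, and the caller supplies the derivative):

```python
import math

def linearize(f, df, a):
    """Tangent-line approximation L_a(x) = f(a) + f'(a)(x - a)."""
    return lambda x: f(a) + df(a) * (x - a)

L = linearize(math.sqrt, lambda t: 1 / (2 * math.sqrt(t)), a=4)
print(L(4.001))          # 2.00025, the linear estimate from the text
print(math.sqrt(4.001))  # 2.000249984..., the true value
```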
Linearization of a multivariable function See also: Taylor series § In several variables The equation for the linearization of a function $f(x,y)$ at a point $p(a,b)$ is: $f(x,y)\approx f(a,b)+\left.{\frac {\partial f(x,y)}{\partial x}}\right|_{a,b}(x-a)+\left.{\frac {\partial f(x,y)}{\partial y}}\right|_{a,b}(y-b)$ The general equation for the linearization of a multivariable function $f(\mathbf {x} )$ at a point $\mathbf {p} $ is: $f({\mathbf {x} })\approx f({\mathbf {p} })+\left.{\nabla f}\right|_{\mathbf {p} }\cdot ({\mathbf {x} }-{\mathbf {p} })$ where $\mathbf {x} $ is the vector of variables, ${\nabla f}$ is the gradient, and $\mathbf {p} $ is the linearization point of interest .[2] Uses of linearization Linearization makes it possible to use tools for studying linear systems to analyze the behavior of a nonlinear function near a given point. The linearization of a function is the first order term of its Taylor expansion around the point of interest. For a system defined by the equation ${\frac {d\mathbf {x} }{dt}}=\mathbf {F} (\mathbf {x} ,t)$, the linearized system can be written as ${\frac {d\mathbf {x} }{dt}}\approx \mathbf {F} (\mathbf {x_{0}} ,t)+D\mathbf {F} (\mathbf {x_{0}} ,t)\cdot (\mathbf {x} -\mathbf {x_{0}} )$ where $\mathbf {x_{0}} $ is the point of interest and $D\mathbf {F} (\mathbf {x_{0}} ,t)$ is the $\mathbf {x} $-Jacobian of $\mathbf {F} (\mathbf {x} ,t)$ evaluated at $\mathbf {x_{0}} $. Stability analysis In stability analysis of autonomous systems, one can use the eigenvalues of the Jacobian matrix evaluated at a hyperbolic equilibrium point to determine the nature of that equilibrium. This is the content of the linearization theorem. For time-varying systems, the linearization requires additional justification.[3] Microeconomics In microeconomics, decision rules may be approximated under the state-space approach to linearization.[4] Under this approach, the Euler equations of the utility maximization problem are linearized around the stationary steady state.[4] A unique solution to the resulting system of dynamic equations then is found.[4] Optimization In mathematical optimization, cost functions and non-linear components within can be linearized in order to apply a linear solving method such as the Simplex algorithm. The optimized result is reached much more efficiently and is deterministic as a global optimum. Multiphysics In multiphysics systems—systems involving multiple physical fields that interact with one another—linearization with respect to each of the physical fields may be performed. This linearization of the system with respect to each of the fields results in a linearized monolithic equation system that can be solved using monolithic iterative solution procedures such as the Newton–Raphson method. Examples of this include MRI scanner systems which results in a system of electromagnetic, mechanical and acoustic fields.[5] See also • Linear stability • Tangent stiffness matrix • Stability derivatives • Linearization theorem • Taylor approximation • Functional equation (L-function) References 1. The linearization problem in complex dimension one dynamical systems at Scholarpedia 2. Linearization. The Johns Hopkins University. Department of Electrical and Computer Engineering Archived 2010-06-07 at the Wayback Machine 3. Leonov, G. A.; Kuznetsov, N. V. (2007). "Time-Varying Linearization and the Perron effects". International Journal of Bifurcation and Chaos. 17 (4): 1079–1107. Bibcode:2007IJBC...17.1079L. doi:10.1142/S0218127407017732. 4. 
Moffatt, Mike. (2008) About.com State-Space Approach Economics Glossary; Terms Beginning with S. Accessed June 19, 2008. 5. Bagwell, S.; Ledger, P. D.; Gil, A. J.; Mallett, M.; Kruip, M. (2017). "A linearised hp–finite element framework for acousto-magneto-mechanical coupling in axisymmetric MRI scanners". International Journal for Numerical Methods in Engineering. 112 (10): 1323–1352. Bibcode:2017IJNME.112.1323B. doi:10.1002/nme.5559. External links Linearization tutorials • Linearization for Model Analysis and Control Design
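Returning to the stability analysis described above, the linearization of a nonlinear system about an equilibrium can be computed symbolically. The sketch below uses a damped pendulum purely as an illustrative system (it is not taken from the text):

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
# Damped pendulum: dx1/dt = x2, dx2/dt = -sin(x1) - x2/2.
F = sp.Matrix([x2, -sp.sin(x1) - sp.Rational(1, 2) * x2])
J = F.jacobian([x1, x2])

# Linearize about the equilibrium (0, 0) and inspect the eigenvalues:
A = J.subs({x1: 0, x2: 0})
print(A)                     # Matrix([[0, 1], [-1, -1/2]])
print(list(A.eigenvals()))   # both eigenvalues have negative real part -> stable
```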
State-transition equation The state-transition equation is defined as the solution of the linear homogeneous state equation. The linear time-invariant state equation given by ${\frac {dx(t)}{dt}}=Ax(t)+Bu(t)+Ew(t),$ with state vector x, control vector u, vector w of additive disturbances, and fixed matrices A, B, and E, can be solved by using either the classical method of solving linear differential equations or the Laplace transform method. The Laplace transform solution is presented in the following equations. The Laplace transform of the above equation yields $sX(s)-x(0)=AX(s)+BU(s)+EW(s)$ where x(0) denotes the initial-state vector evaluated at $t=0$. Solving for $X(s)$ gives $X(s)=(sI-A)^{-1}x(0)+(sI-A)^{-1}[BU(s)+EW(s)].$ So, the state-transition equation can be obtained by taking the inverse Laplace transform as $x(t)=L^{-1}[(sI-A)^{-1}]x(0)+L^{-1}[(sI-A)^{-1}[BU(s)+EW(s)]]=\phi (t)x(0)+\int _{0}^{t}\phi (t-\tau )[Bu(\tau )+Ew(\tau )]d\tau .$ The state-transition equation as derived above is useful only when the initial time is defined to be at $t=0$. In the study of control systems, especially discrete-data control systems, it is often desirable to break up a state-transition process into a sequence of transitions, so a more flexible initial time must be chosen. Let the initial time be represented by $t_{0}$ and the corresponding initial state by $x(t_{0})$, and assume that the input $u(t)$ and the disturbance $w(t)$ are applied at t≥0. Setting $t=t_{0}$ in the above equation and solving for $x(0)$, we get $x(0)=\phi (-t_{0})x(t_{0})-\phi (-t_{0})\int _{0}^{t_{0}}\phi (t_{0}-\tau )[Bu(\tau )+Ew(\tau )]d\tau .$ Once the state-transition equation is determined, the output vector can be expressed as a function of the initial state. See also • Control theory • Control engineering • Automatic control • Feedback • Process control • PID loop External links • Control System Toolbox for design and analysis of control systems. • http://web.mit.edu/2.14/www/Handouts/StateSpaceResponse.pdf • Wikibooks:Control Systems/State-Space Equations • http://planning.cs.uiuc.edu/node411.html
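As a numerical illustration of the state-transition equation above, the following Python sketch (the matrices A and B, the initial state, and the unit-step input are hypothetical choices; the disturbance term is omitted, i.e. E = 0) evaluates $x(t)=\phi (t)x(0)+\int _{0}^{t}\phi (t-\tau )Bu(\tau )d\tau $ with $\phi (t)=e^{At}$ computed by the matrix exponential and the integral approximated by the midpoint rule.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical example system (A, B, x0 and a unit-step input u; E = 0).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
x0 = np.array([[1.0],
               [0.0]])
u = lambda t: np.array([[1.0]])      # unit-step control input

def state_transition(t, n_quad=400):
    """x(t) = phi(t) x(0) + integral_0^t phi(t - tau) B u(tau) d tau,
    with phi(s) = expm(A s); the integral uses n_quad midpoint slices."""
    phi = lambda s: expm(A * s)
    x = phi(t) @ x0
    dt = t / n_quad
    for tau in (np.arange(n_quad) + 0.5) * dt:
        x = x + phi(t - tau) @ B @ u(tau) * dt
    return x

print(state_transition(1.0).ravel())  # state at t = 1
```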
State space (physics) In physics, a state space is an abstract space in which different "positions" represent, not literal locations, but rather states of some physical system. This makes it a type of phase space. Quantum mechanics Specifically, in quantum mechanics a state space is a complex Hilbert space in which each unit vector represents a different state that could come out of a measurement. Each unit vector specifies a different dimension, so the number of dimensions in this Hilbert space depends on the system we choose to describe.[1] Any state vector in this space can be written as a linear combination of unit vectors. Having a nonzero component along multiple dimensions is called a superposition. These state vectors, using Dirac's bra–ket notation, can often be treated like coordinate vectors and operated on using the rules of linear algebra. This Dirac formalism of quantum mechanics can replace calculation of complicated integrals with simpler vector operations. See also • Configuration space (physics) for the space of possible positions that a physical system may attain • Configuration space (mathematics) for the space of positions of particles in a topological space • State space (controls) for information about state space in control engineering • State space for information about discrete state space in computer science Notes 1. McIntyre, David (2012). Quantum Mechanics: A Paradigms Approach (1st ed.). Pearson. ISBN 978-0321765796. References • Claude Cohen-Tannoudji (1977). Quantum Mechanics. John Wiley & Sons. Inc. ISBN 0-471-16433-X. • David J. Griffiths (1995). Introduction to Quantum Mechanics. Prentice Hall. ISBN 0-13-124405-1. • David H. McIntyre (2012). Quantum Mechanics: A Paradigms Approach. Pearson. ISBN 978-0321765796.
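As a small illustration of the vector treatment described above (a sketch only; the basis labels follow the usual qubit convention and are not taken from the cited texts), a two-dimensional complex Hilbert space and an equal superposition can be written directly with linear algebra:

```python
import numpy as np

# Basis states |0> and |1> of a two-dimensional Hilbert space (a qubit).
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# An equal superposition, normalized so it remains a unit vector.
psi = (ket0 + ket1) / np.sqrt(2)

# <psi|psi> = 1, and |<0|psi>|^2 gives the probability of measuring |0>.
print(np.vdot(psi, psi).real)        # -> 1.0 (up to rounding)
print(abs(np.vdot(ket0, psi)) ** 2)  # -> 0.5
```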
State transition network A state transition network is a diagram that is developed from a set of data and charts the flow of data from one particular data point (called a state or node) to the next in a probabilistic manner. Use State transition networks are used in both academic and industrial fields. Examples State transition networks are a general construct, with more specific examples being augmented transition networks, recursive transition networks, and augmented recursive networks, among others. See also • State transition system • Markov network • History monoid
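A minimal sketch of the idea (the states and transition probabilities here are invented for illustration): a state transition network can be represented as a mapping from each state to its possible successors with probabilities, from which paths are sampled.

```python
import random

# Hypothetical network: each state maps to (next_state, probability) pairs.
network = {
    "A": [("B", 0.7), ("C", 0.3)],
    "B": [("A", 0.5), ("C", 0.5)],
    "C": [("A", 1.0)],
}

def walk(network, state, steps, rng=random.Random(0)):
    """Follow the network probabilistically for a fixed number of steps."""
    path = [state]
    for _ in range(steps):
        successors, weights = zip(*network[state])
        state = rng.choices(successors, weights=weights)[0]
        path.append(state)
    return path

print(walk(network, "A", 10))
```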
Static discipline In a digital circuit or system, static discipline is a guarantee on logical elements that "if inputs meet valid input thresholds, then the system guarantees outputs will meet valid output thresholds", named by Stephen A. Ward and Robert H. Halstead in 1990, but practiced for decades earlier.[1] The valid output threshold voltages VOH (output high) and VOL (output low), and valid input threshold voltages VIH (input high) and VIL (input low), satisfy a robustness principle such that VOL < VIL < VIH < VOH with sufficient noise margins in the inequalities.[2] References 1. The Integrated Circuit Data Book. Motorola Semiconductor Products, Inc. 1968. pp. 2.81–2.84. 2. Stephen A. Ward and Robert H. Halstead (1990). Computation Structures. MIT Press. pp. 5–7. ISBN 9780262231398. External links • MIT course 6002x section on static discipline
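The robustness principle above is easy to check mechanically. The following sketch (the threshold values are hypothetical, loosely TTL-like) verifies VOL < VIL < VIH < VOH and reports the low and high noise margins VIL − VOL and VOH − VIH.

```python
def check_static_discipline(VOL, VIL, VIH, VOH):
    """Verify VOL < VIL < VIH < VOH and return the two noise margins."""
    if not (VOL < VIL < VIH < VOH):
        raise ValueError("thresholds violate the static discipline")
    return {"NM_low": VIL - VOL, "NM_high": VOH - VIH}

# Hypothetical threshold voltages, in volts.
print(check_static_discipline(VOL=0.5, VIL=1.0, VIH=2.0, VOH=2.5))
# -> {'NM_low': 0.5, 'NM_high': 0.5}
```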
Stationary ergodic process In probability theory, a stationary ergodic process is a stochastic process which exhibits both stationarity and ergodicity. In essence this implies that the random process will not change its statistical properties with time and that its statistical properties (such as the theoretical mean and variance of the process) can be deduced from a single, sufficiently long sample (realization) of the process. Stationarity is the property of a random process which guarantees that its statistical properties, such as the mean value, its moments and variance, will not change over time. A stationary process is one whose probability distribution is the same at all times. For more information see stationary process. Several sub-types of stationarity are defined: first-order, second-order, nth-order, wide-sense and strict-sense. For details please see the reference above. An ergodic process is one which conforms to the ergodic theorem. The theorem allows the time average of a conforming process to equal the ensemble average. In practice this means that statistical sampling can be performed at one instant across a group of identical processes, or over time on a single process, with no change in the measured result. A simple example of a violation of ergodicity is a measured process which is the superposition of two underlying processes, each with its own statistical properties. Although the measured process may be stationary in the long term, it is not appropriate to consider the sampled distribution to be the reflection of a single (ergodic) process: the ensemble average is meaningless. Also see ergodic theory and ergodic process. See also • Measure-preserving dynamical system References • Peebles, P. Z., 2001, Probability, Random Variables and Random Signal Principles, McGraw-Hill Inc, Boston, ISBN 0-07-118181-4
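The equality of time and ensemble averages can be illustrated numerically (an invented example using i.i.d. Gaussian noise, which is trivially stationary and ergodic):

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 independent realizations of a stationary ergodic process
# (i.i.d. standard normal samples), each observed for 10000 steps.
realizations = rng.standard_normal((500, 10000))

time_average = realizations[0].mean()         # average over time, one realization
ensemble_average = realizations[:, 0].mean()  # average across realizations, one instant

print(f"time average     = {time_average:+.4f}")
print(f"ensemble average = {ensemble_average:+.4f}")
# Both estimates converge to the true mean 0, as ergodicity requires.
```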
Stationary increments In probability theory, a stochastic process is said to have stationary increments if its change only depends on the time span of observation, but not on the time when the observation was started. Many large families of stochastic processes have stationary increments either by definition (e.g. Lévy processes) or by construction (e.g. random walks). Definition A stochastic process $X=(X_{t})_{t\geq 0}$ has stationary increments if for all $t\geq 0$ and $h>0$, the distribution of the random variables $Y_{t,h}:=X_{t+h}-X_{t}$ depends only on $h$ and not on $t$.[1][2] Examples Having stationary increments is a defining property for many large families of stochastic processes such as the Lévy processes. Being special Lévy processes, both the Wiener process and the Poisson processes have stationary increments. Other families of stochastic processes such as random walks have stationary increments by construction. An example of a stochastic process with stationary increments that is not a Lévy process is given by $X=(X_{t})$, where the $X_{t}$ are independent and identically distributed random variables following a normal distribution with mean zero and variance one. Then the increments $Y_{t,h}$ are independent of $t$ as they have a normal distribution with mean zero and variance two. In this special case, the increments are even independent of the duration of observation $h$ itself. Generalized definition for complex index sets The concept of stationary increments can be generalized to stochastic processes with more complex index sets $T$. Let $X=(X_{t})_{t\in T}$ be a stochastic process whose index set $T\subset \mathbb {R} $ is closed with respect to addition. Then it has stationary increments if for any $p,q,r\in T$, the random variables $Y_{1}=X_{p+q+r}-X_{q+r}$ and $Y_{2}=X_{p+r}-X_{r}$ have identical distributions. If $0\in T$ it is sufficient to consider $r=0$.[1] References 1. Klenke, Achim (2008). Probability Theory. Berlin: Springer. p. 190. doi:10.1007/978-1-84800-048-3. ISBN 978-1-84800-047-6. 2. Kallenberg, Olav (2002). Foundations of Modern Probability (2nd ed.). New York: Springer. p. 290.
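The definition can be checked empirically for the Wiener process, whose increments are stationary. In the sketch below (the step size, span, and path count are illustrative choices of ours), the increment $X_{t+h}-X_{t}$ is sampled at several starting times $t$ and shows the same mean and variance, depending only on $h$:

```python
import numpy as np

rng = np.random.default_rng(1)

dt, n_steps, n_paths = 0.01, 1000, 20000
# Simulate Wiener process paths: cumulative sums of N(0, dt) increments.
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

h = 2.0                       # span of observation (200 steps of size 0.01)
k = int(h / dt)
for t_idx in (0, 250, 500):   # several different starting times t
    inc = W[:, t_idx + k] - W[:, t_idx]
    print(f"t = {t_idx * dt:4.1f}: mean = {inc.mean():+.3f}, var = {inc.var():.3f}")
# The variance is ~h = 2.0 at every starting time, independent of t.
```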
Stationary phase approximation In mathematics, the stationary phase approximation is a basic principle of asymptotic analysis, applying to functions given by integration against a rapidly varying complex exponential. This method originates from the 19th century, and is due to George Gabriel Stokes and Lord Kelvin.[1] It is closely related to Laplace's method and the method of steepest descent, but Laplace's contribution precedes the others. Basics The main idea of stationary phase methods relies on the cancellation of sinusoids with rapidly varying phase. If many sinusoids have the same phase and they are added together, they will add constructively. If, however, these same sinusoids have phases which change rapidly as the frequency changes, they will add incoherently, varying between constructive and destructive addition at different times. Formula Letting $\Sigma $ denote the set of critical points of the function $f$ (i.e. points where $\nabla f=0$), under the assumption that $g$ is either compactly supported or has exponential decay, and that all critical points are nondegenerate (i.e. $\det(\mathrm {Hess} (f(x_{0})))\neq 0$ for $x_{0}\in \Sigma $) we have the following asymptotic formula, as $k\to \infty $: $\int _{\mathbb {R} ^{n}}g(x)e^{ikf(x)}dx=\sum _{x_{0}\in \Sigma }e^{ikf(x_{0})}|\det({\mathrm {Hess} }(f(x_{0})))|^{-1/2}e^{{\frac {i\pi }{4}}\mathrm {sgn} (\mathrm {Hess} (f(x_{0})))}(2\pi /k)^{n/2}g(x_{0})+o(k^{-n/2})$ Here $\mathrm {Hess} (f)$ denotes the Hessian of $f$, and $\mathrm {sgn} (\mathrm {Hess} (f))$ denotes the signature of the Hessian, i.e. the number of positive eigenvalues minus the number of negative eigenvalues. For $n=1$, this reduces to: $\int _{\mathbb {R} }g(x)e^{ikf(x)}dx=\sum _{x_{0}\in \Sigma }g(x_{0})e^{ikf(x_{0})+\mathrm {sign} (f''(x_{0}))i\pi /4}\left({\frac {2\pi }{k|f''(x_{0})|}}\right)^{1/2}+o(k^{-1/2})$ In this case the assumptions on $f$ reduce to all the critical points being non-degenerate. This is just the Wick-rotated version of the formula for the method of steepest descent. An example Consider a function $f(x,t)={\frac {1}{2\pi }}\int _{\mathbb {R} }F(\omega )e^{i[k(\omega )x-\omega t]}\,d\omega $. The phase term in this function, $\phi =k(\omega )x-\omega t$, is stationary when ${\frac {d}{d\omega }}{\mathopen {}}\left(k(\omega )x-\omega t\right){\mathclose {}}=0$ or equivalently, ${\frac {dk(\omega )}{d\omega }}{\Big |}_{\omega =\omega _{0}}={\frac {t}{x}}$. Solutions to this equation yield dominant frequencies $\omega _{0}$ for some $x$ and $t$. If we expand $\phi $ as a Taylor series about $\omega _{0}$ and neglect terms of order higher than $(\omega -\omega _{0})^{2}$, we have $\phi =\left[k(\omega _{0})x-\omega _{0}t\right]+{\frac {1}{2}}xk''(\omega _{0})(\omega -\omega _{0})^{2}+\cdots $ where $k''$ denotes the second derivative of $k$. When $x$ is relatively large, even a small difference $(\omega -\omega _{0})$ will generate rapid oscillations within the integral, leading to cancellation. Therefore we can extend the limits of integration beyond the region of validity of the Taylor expansion. Using the formula $\int _{\mathbb {R} }e^{{\frac {1}{2}}icx^{2}}dx={\sqrt {\frac {2i\pi }{c}}}={\sqrt {\frac {2\pi }{|c|}}}e^{\pm i{\frac {\pi }{4}}},$ we obtain $f(x,t)\approx {\frac {1}{2\pi }}e^{i\left[k(\omega _{0})x-\omega _{0}t\right]}\left|F(\omega _{0})\right|\int _{\mathbb {R} }e^{{\frac {1}{2}}ixk''(\omega _{0})(\omega -\omega _{0})^{2}}\,d\omega $.
This integrates to $f(x,t)\approx {\frac {\left|F(\omega _{0})\right|}{2\pi }}{\sqrt {\frac {2\pi }{x\left|k''(\omega _{0})\right|}}}\cos \left[k(\omega _{0})x-\omega _{0}t\pm {\frac {\pi }{4}}\right]$. Reduction steps The first major general statement of the principle involved is that the asymptotic behaviour of the oscillatory integral $I(k)=\int g(x)e^{ikf(x)}\,dx$ considered above depends only on the critical points of f. If by choice of g the integral is localised to a region of space where f has no critical point, the resulting integral tends to 0 as the frequency of oscillations is taken to infinity. See for example Riemann–Lebesgue lemma. The second statement is that when f is a Morse function, so that the singular points of f are non-degenerate and isolated, then the question can be reduced to the case n = 1. In fact, then, a choice of g can be made to split the integral into cases with just one critical point P in each. At that point, because the Hessian determinant at P is by assumption not 0, the Morse lemma applies. By a change of co-ordinates f may be replaced by $(x_{1}^{2}+x_{2}^{2}+\cdots +x_{j}^{2})-(x_{j+1}^{2}+x_{j+2}^{2}+\cdots +x_{n}^{2})$. The value of j is given by the signature of the Hessian matrix of f at P. As for g, the essential case is that g is a product of bump functions of xi. Assuming now without loss of generality that P is the origin, take a smooth bump function h with value 1 on the interval [−1, 1] and quickly tending to 0 outside it. Take $g(x)=\prod _{i}h(x_{i})$, then Fubini's theorem reduces I(k) to a product of integrals over the real line like $J(k)=\int h(x)e^{ikf(x)}\,dx$ with f(x) = ±x2. The case with the minus sign is the complex conjugate of the case with the plus sign, so there is essentially one required asymptotic estimate. In this way asymptotics can be found for oscillatory integrals for Morse functions. The degenerate case requires further techniques (see for example Airy function). One-dimensional case The essential statement is this one: $\int _{-1}^{1}e^{ikx^{2}}\,dx={\sqrt {\frac {\pi }{k}}}e^{i\pi /4}+{\mathcal {O}}{\mathopen {}}\left({\frac {1}{k}}\right){\mathclose {}}$. In fact by contour integration it can be shown that the main term on the right hand side of the equation is the value of the integral on the left hand side, extended over the range $(-\infty ,\infty )$ (for a proof see Fresnel integral). Therefore it is the question of estimating away the integral over, say, $[1,\infty )$.[2] This is the model for all one-dimensional integrals $I(k)$ with $f$ having a single non-degenerate critical point at which $f$ has second derivative $>0$. In fact the model case has second derivative 2 at 0. In order to scale using $k$, observe that replacing $k$ by $ck$ where $c$ is constant is the same as scaling $x$ by ${\sqrt {c}}$. It follows that for general values of $f''(0)>0$, the factor ${\sqrt {\pi /k}}$ becomes ${\sqrt {\frac {2\pi }{kf''(0)}}}$. For $f''(0)<0$ one uses the complex conjugate formula, as mentioned before. Lower-order terms As can be seen from the formula, the stationary phase approximation is a first-order approximation of the asymptotic behavior of the integral. The lower-order terms can be understood as a sum over Feynman diagrams with various weighting factors, for well-behaved $f$. See also • Common integrals in quantum field theory • Laplace's method • Method of steepest descent Notes 1. Courant, Richard; Hilbert, David (1953), Methods of mathematical physics, vol. 1 (2nd revised ed.), New York: Interscience Publishers, p. 474, OCLC 505700 2.
See for example Jean Dieudonné, Infinitesimal Calculus, p. 119 or Jean Dieudonné, Calcul Infinitésimal, p. 135. References • Bleistein, N. and Handelsman, R. (1975), Asymptotic Expansions of Integrals, Dover, New York. • Victor Guillemin and Shlomo Sternberg (1990), Geometric Asymptotics, (see Chapter 1). • Hörmander, L. (1976), Linear Partial Differential Operators, Volume 1, Springer-Verlag, ISBN 978-3-540-00662-6. • Aki, Keiiti; Richards, Paul G. (2002), Quantitative Seismology (2nd ed.), pp. 255–256, University Science Books, ISBN 0-935702-96-2. • Wong, R. (2001), Asymptotic Approximations of Integrals, Classics in Applied Mathematics, Vol. 34. Corrected reprint of the 1989 original. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA. xviii+543 pages, ISBN 0-89871-497-4. • Dieudonné, J. (1980), Calcul Infinitésimal, Hermann, Paris External links • "Stationary phase, method of the", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
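The one-dimensional formula above is straightforward to verify numerically. The following sketch (an illustration with the invented choices $g(x)=e^{-x^{2}}$ and $f(x)=x^{2}/2$, which give a single non-degenerate stationary point at $x=0$) compares direct quadrature of $\int g(x)e^{ikf(x)}dx$ with the stationary phase prediction $g(0)e^{i\pi /4}{\sqrt {2\pi /k}}$:

```python
import numpy as np

g = lambda x: np.exp(-x**2)       # smooth, rapidly decaying amplitude
f = lambda x: 0.5 * x**2          # phase with a single stationary point at x = 0

# Fine grid; the integrand is negligibly small beyond |x| = 10.
x = np.linspace(-10.0, 10.0, 400001)
dx = x[1] - x[0]

for k in (10.0, 100.0, 1000.0):
    vals = g(x) * np.exp(1j * k * f(x))
    direct = np.sum((vals[:-1] + vals[1:]) / 2) * dx       # trapezoid rule
    approx = g(0.0) * np.exp(1j * np.pi / 4) * np.sqrt(2 * np.pi / k)
    print(f"k = {k:6.0f}: direct = {direct:.5f}, stationary phase = {approx:.5f}")
```

The two values agree ever more closely as k grows, with the error of relative order 1/k predicted by the formula.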
Method of steepest descent In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace's method is used with real integrals. For the optimization algorithm, see Gradient descent. The integral to be estimated is often of the form $\int _{C}f(z)e^{\lambda g(z)}\,dz,$ where C is a contour, and λ is large. One version of the method of steepest descent deforms the contour of integration C into a new path of integration C′ so that the following conditions hold: 1. C′ passes through one or more zeros of the derivative g′(z), 2. the imaginary part of g(z) is constant on C′. The method of steepest descent was first published by Debye (1909), who used it to estimate Bessel functions and pointed out that it occurred in the unpublished note by Riemann (1863) about hypergeometric functions. The contour of steepest descent has a minimax property, see Fedoryuk (2001). Siegel (1932) described some other unpublished notes of Riemann, where he used this method to derive the Riemann–Siegel formula. Basic idea The method of steepest descent is a method to approximate a complex integral of the form $I(\lambda )=\int _{C}f(z)e^{\lambda g(z)}\,\mathrm {d} z$ for large $\lambda \rightarrow \infty $, where $f(z)$ and $g(z)$ are analytic functions of $z$. Because the integrand is analytic, the contour $C$ can be deformed into a new contour $C'$ without changing the integral. In particular, one seeks a new contour on which the imaginary part of $g(z)={\text{Re}}[g(z)]+i\,{\text{Im}}[g(z)]$ is constant. Then $I(\lambda )=e^{i\lambda {\text{Im}}\{g(z)\}}\int _{C'}f(z)e^{\lambda {\text{Re}}\{g(z)\}}\,\mathrm {d} z,$ and the remaining integral can be approximated with other methods like Laplace's method.[1] Etymology The method is called the method of steepest descent because for analytic $g(z)$, constant phase contours are equivalent to steepest descent contours. If $g(z)=X(z)+iY(z)$ is an analytic function of $z=x+iy$, it satisfies the Cauchy–Riemann equations ${\frac {\partial X}{\partial x}}={\frac {\partial Y}{\partial y}}\qquad {\text{and}}\qquad {\frac {\partial X}{\partial y}}=-{\frac {\partial Y}{\partial x}}.$ Then ${\frac {\partial X}{\partial x}}{\frac {\partial Y}{\partial x}}+{\frac {\partial X}{\partial y}}{\frac {\partial Y}{\partial y}}=\nabla X\cdot \nabla Y=0,$ so contours of constant phase are also contours of steepest descent. A simple estimate Let f, S : Cn → C and C ⊂ Cn.
If $M=\sup _{x\in C}\Re (S(x))<\infty ,$ where $\Re (\cdot )$ denotes the real part, and there exists a positive real number λ0 such that $\int _{C}\left|f(x)e^{\lambda _{0}S(x)}\right|dx<\infty ,$ then the following estimate holds:[2] $\left|\int _{C}f(x)e^{\lambda S(x)}dx\right|\leqslant {\text{const}}\cdot e^{\lambda M},\qquad \forall \lambda \in \mathbb {R} ,\quad \lambda \geqslant \lambda _{0}.$ Proof of the simple estimate: ${\begin{aligned}\left|\int _{C}f(x)e^{\lambda S(x)}dx\right|&\leqslant \int _{C}|f(x)|\left|e^{\lambda S(x)}\right|dx\\&\equiv \int _{C}|f(x)|e^{\lambda M}\left|e^{\lambda _{0}(S(x)-M)}e^{(\lambda -\lambda _{0})(S(x)-M)}\right|dx\\&\leqslant \int _{C}|f(x)|e^{\lambda M}\left|e^{\lambda _{0}(S(x)-M)}\right|dx&&\left|e^{(\lambda -\lambda _{0})(S(x)-M)}\right|\leqslant 1\\&=\underbrace {e^{-\lambda _{0}M}\int _{C}\left|f(x)e^{\lambda _{0}S(x)}\right|dx} _{\text{const}}\cdot e^{\lambda M}.\end{aligned}}$ The case of a single non-degenerate saddle point Basic notions and notation Let x be a complex n-dimensional vector, and $S''_{xx}(x)\equiv \left({\frac {\partial ^{2}S(x)}{\partial x_{i}\partial x_{j}}}\right),\qquad 1\leqslant i,\,j\leqslant n,$ denote the Hessian matrix for a function S(x). If ${\boldsymbol {\varphi }}(x)=(\varphi _{1}(x),\varphi _{2}(x),\ldots ,\varphi _{k}(x))$ is a vector function, then its Jacobian matrix is defined as ${\boldsymbol {\varphi }}_{x}'(x)\equiv \left({\frac {\partial \varphi _{i}(x)}{\partial x_{j}}}\right),\qquad 1\leqslant i\leqslant k,\quad 1\leqslant j\leqslant n.$ A non-degenerate saddle point, z0 ∈ Cn, of a holomorphic function S(z) is a critical point of the function (i.e., ∇S(z0) = 0) where the function's Hessian matrix has a non-vanishing determinant (i.e., $\det S''_{zz}(z^{0})\neq 0$). The following is the main tool for constructing the asymptotics of integrals in the case of a non-degenerate saddle point: Complex Morse lemma The Morse lemma for real-valued functions generalizes as follows[3] for holomorphic functions: near a non-degenerate saddle point z0 of a holomorphic function S(z), there exist coordinates in terms of which S(z) − S(z0) is exactly quadratic. To make this precise, let S be a holomorphic function with domain W ⊂ Cn, and let z0 in W be a non-degenerate saddle point of S, that is, ∇S(z0) = 0 and $\det S''_{zz}(z^{0})\neq 0$. Then there exist neighborhoods U ⊂ W of z0 and V ⊂ Cn of w = 0, and a bijective holomorphic function φ : V → U with φ(0) = z0 such that $\forall w\in V:\qquad S({\boldsymbol {\varphi }}(w))=S(z^{0})+{\frac {1}{2}}\sum _{j=1}^{n}\mu _{j}w_{j}^{2},\quad \det {\boldsymbol {\varphi }}_{w}'(0)=1.$ Here, the μj are the eigenvalues of the matrix $S_{zz}''(z^{0})$. Proof of complex Morse lemma The following proof is a straightforward generalization of the proof of the real Morse Lemma, which can be found in [4]. We begin by demonstrating the following: Auxiliary statement. Let f : Cn → C be holomorphic in a neighborhood of the origin and f (0) = 0.
Then in some neighborhood, there exist functions gi : Cn → C such that $f(z)=\sum _{i=1}^{n}z_{i}g_{i}(z),$ where each gi is holomorphic and $g_{i}(0)=\left.{\tfrac {\partial f(z)}{\partial z_{i}}}\right|_{z=0}.$ From the identity $f(z)=\int _{0}^{1}{\frac {d}{dt}}f\left(tz_{1},\cdots ,tz_{n}\right)dt=\sum _{i=1}^{n}z_{i}\int _{0}^{1}\left.{\frac {\partial f(z)}{\partial z_{i}}}\right|_{z=(tz_{1},\ldots ,tz_{n})}dt,$ we conclude that $g_{i}(z)=\int _{0}^{1}\left.{\frac {\partial f(z)}{\partial z_{i}}}\right|_{z=(tz_{1},\ldots ,tz_{n})}dt$ and $g_{i}(0)=\left.{\frac {\partial f(z)}{\partial z_{i}}}\right|_{z=0}.$ Without loss of generality, we translate the origin to z0, such that z0 = 0 and S(0) = 0. Using the Auxiliary Statement, we have $S(z)=\sum _{i=1}^{n}z_{i}g_{i}(z).$ Since the origin is a saddle point, $\left.{\frac {\partial S(z)}{\partial z_{i}}}\right|_{z=0}=g_{i}(0)=0,$ we can also apply the Auxiliary Statement to the functions gi(z) and obtain $S(z)=\sum _{i,j=1}^{n}z_{i}z_{j}h_{ij}(z).$ (1) Recall that an arbitrary matrix A can be represented as a sum of symmetric A(s) and anti-symmetric A(a) matrices, $A_{ij}=A_{ij}^{(s)}+A_{ij}^{(a)},\qquad A_{ij}^{(s)}={\tfrac {1}{2}}\left(A_{ij}+A_{ji}\right),\qquad A_{ij}^{(a)}={\tfrac {1}{2}}\left(A_{ij}-A_{ji}\right).$ The contraction of any symmetric matrix B with an arbitrary matrix A is $\sum _{i,j}B_{ij}A_{ij}=\sum _{i,j}B_{ij}A_{ij}^{(s)},$ (2) i.e., the anti-symmetric component of A does not contribute because $\sum _{i,j}B_{ij}C_{ij}=\sum _{i,j}B_{ji}C_{ji}=-\sum _{i,j}B_{ij}C_{ij}=0.$ Thus, hij(z) in equation (1) can be assumed to be symmetric with respect to the interchange of the indices i and j. Note that $\left.{\frac {\partial ^{2}S(z)}{\partial z_{i}\partial z_{j}}}\right|_{z=0}=2h_{ij}(0);$ hence, det(hij(0)) ≠ 0 because the origin is a non-degenerate saddle point. Let us show by induction that there are local coordinates u = (u1, ... un), z = ψ(u), 0 = ψ(0), such that $S({\boldsymbol {\psi }}(u))=\sum _{i=1}^{n}u_{i}^{2}.$ (3) First, assume that there exist local coordinates y = (y1, ... yn), z = φ(y), 0 = φ(0), such that $S({\boldsymbol {\phi }}(y))=y_{1}^{2}+\cdots +y_{r-1}^{2}+\sum _{i,j=r}^{n}y_{i}y_{j}H_{ij}(y),$ (4) where Hij is symmetric due to equation (2). By a linear change of the variables (yr, ... yn), we can ensure that Hrr(0) ≠ 0. From the chain rule, we have ${\frac {\partial ^{2}S({\boldsymbol {\phi }}(y))}{\partial y_{i}\partial y_{j}}}=\sum _{l,k=1}^{n}\left.{\frac {\partial ^{2}S(z)}{\partial z_{k}\partial z_{l}}}\right|_{z={\boldsymbol {\phi }}(y)}{\frac {\partial \phi _{k}}{\partial y_{i}}}{\frac {\partial \phi _{l}}{\partial y_{j}}}+\sum _{k=1}^{n}\left.{\frac {\partial S(z)}{\partial z_{k}}}\right|_{z={\boldsymbol {\phi }}(y)}{\frac {\partial ^{2}\phi _{k}}{\partial y_{i}\partial y_{j}}}$ Therefore: $S''_{yy}({\boldsymbol {\phi }}(0))={\boldsymbol {\phi }}'_{y}(0)^{T}S''_{zz}(0){\boldsymbol {\phi }}'_{y}(0),\qquad \det {\boldsymbol {\phi }}'_{y}(0)\neq 0;$ whence, $0\neq \det S''_{yy}({\boldsymbol {\phi }}(0))=2^{r-1}\det \left(2H_{ij}(0)\right).$ The matrix (Hij(0)) can be recast in the Jordan normal form: (Hij(0)) = LJL−1, where L gives the desired non-singular linear transformation and the diagonal of J contains non-zero eigenvalues of (Hij(0)). If Hrr(0) ≠ 0 then, due to continuity of Hrr(y), it must also be non-vanishing in some neighborhood of the origin.
Having introduced ${\tilde {H}}_{ij}(y)=H_{ij}(y)/H_{rr}(y)$, we write ${\begin{aligned}S({\boldsymbol {\varphi }}(y))=&y_{1}^{2}+\cdots +y_{r-1}^{2}+H_{rr}(y)\sum _{i,j=r}^{n}y_{i}y_{j}{\tilde {H}}_{ij}(y)\\=&y_{1}^{2}+\cdots +y_{r-1}^{2}+H_{rr}(y)\left[y_{r}^{2}+2y_{r}\sum _{j=r+1}^{n}y_{j}{\tilde {H}}_{rj}(y)+\sum _{i,j=r+1}^{n}y_{i}y_{j}{\tilde {H}}_{ij}(y)\right]\\=&y_{1}^{2}+\cdots +y_{r-1}^{2}+H_{rr}(y)\left[\left(y_{r}+\sum _{j=r+1}^{n}y_{j}{\tilde {H}}_{rj}(y)\right)^{2}-\left(\sum _{j=r+1}^{n}y_{j}{\tilde {H}}_{rj}(y)\right)^{2}\right]+H_{rr}(y)\sum _{i,j=r+1}^{n}y_{i}y_{j}{\tilde {H}}_{ij}(y)\end{aligned}}$ Motivated by the last expression, we introduce new coordinates z = η(x), 0 = η(0), $x_{r}={\sqrt {H_{rr}(y)}}\left(y_{r}+\sum _{j=r+1}^{n}y_{j}{\tilde {H}}_{rj}(y)\right),\qquad x_{j}=y_{j},\quad \forall j\neq r.$ The change of the variables y ↔ x is locally invertible since the corresponding Jacobian is non-zero, $\left.{\frac {\partial x_{r}}{\partial y_{k}}}\right|_{y=0}={\sqrt {H_{rr}(0)}}\left[\delta _{r,\,k}+\sum _{j=r+1}^{n}\delta _{j,\,k}{\tilde {H}}_{jr}(0)\right].$ Therefore, $S({\boldsymbol {\eta }}(x))={x}_{1}^{2}+\cdots +{x}_{r}^{2}+\sum _{i,j=r+1}^{n}{x}_{i}{x}_{j}W_{ij}(x).$ (5) Comparing equations (4) and (5), we conclude that equation (3) is verified. Denoting the eigenvalues of $S''_{zz}(0)$ by μj, equation (3) can be rewritten as $S({\boldsymbol {\varphi }}(w))={\frac {1}{2}}\sum _{j=1}^{n}\mu _{j}w_{j}^{2}.$ (6) Therefore, $S''_{ww}({\boldsymbol {\varphi }}(0))={\boldsymbol {\varphi }}'_{w}(0)^{T}S''_{zz}(0){\boldsymbol {\varphi }}'_{w}(0),$ (7) From equation (6), it follows that $\det S''_{ww}({\boldsymbol {\varphi }}(0))=\mu _{1}\cdots \mu _{n}$. The Jordan normal form of $S''_{zz}(0)$ reads $S''_{zz}(0)=PJ_{z}P^{-1}$, where Jz is an upper diagonal matrix containing the eigenvalues and det P ≠ 0; hence, $\det S''_{zz}(0)=\mu _{1}\cdots \mu _{n}$. We obtain from equation (7) $\det S''_{ww}({\boldsymbol {\varphi }}(0))=\left[\det {\boldsymbol {\varphi }}'_{w}(0)\right]^{2}\det S''_{zz}(0)\Longrightarrow \det {\boldsymbol {\varphi }}'_{w}(0)=\pm 1.$ If $\det {\boldsymbol {\varphi }}'_{w}(0)=-1$, then interchanging two variables assures that $\det {\boldsymbol {\varphi }}'_{w}(0)=+1$. The asymptotic expansion in the case of a single non-degenerate saddle point Assume 1.  f (z) and S(z) are holomorphic functions in an open, bounded, and simply connected set Ωx ⊂ Cn such that the Ix = Ωx ∩ Rn is connected; 2. $\Re (S(z))$ has a single maximum: $\max _{z\in I_{x}}\Re (S(z))=\Re (S(x^{0}))$ for exactly one point x0 ∈ Ix; 3. x0 is a non-degenerate saddle point (i.e., ∇S(x0) = 0 and $\det S''_{xx}(x^{0})\neq 0$). Then, the following asymptotic holds $I(\lambda )\equiv \int _{I_{x}}f(x)e^{\lambda S(x)}dx=\left({\frac {2\pi }{\lambda }}\right)^{\frac {n}{2}}e^{\lambda S(x^{0})}\left(f(x^{0})+O\left(\lambda ^{-1}\right)\right)\prod _{j=1}^{n}(-\mu _{j})^{-{\frac {1}{2}}},\qquad \lambda \to \infty ,$ (8) where μj are eigenvalues of the Hessian $S''_{xx}(x^{0})$ and $(-\mu _{j})^{-{\frac {1}{2}}}$ are defined with arguments $\left|\arg {\sqrt {-\mu _{j}}}\right|<{\tfrac {\pi }{4}}.$ (9) This statement is a special case of more general results presented in Fedoryuk (1987).[5] Derivation of equation (8) First, we deform the contour Ix into a new contour $I'_{x}\subset \Omega _{x}$ passing through the saddle point x0 and sharing the boundary with Ix. This deformation does not change the value of the integral I(λ). 
We employ the Complex Morse Lemma to change the variables of integration. According to the lemma, the function φ(w) maps a neighborhood x0 ∈ U ⊂ Ωx onto a neighborhood Ωw containing the origin. The integral I(λ) can be split into two: I(λ) = I0(λ) + I1(λ), where I0(λ) is the integral over $U\cap I'_{x}$, while I1(λ) is over $I'_{x}\setminus (U\cap I'_{x})$ (i.e., the remaining part of the contour I′x). Since the latter region does not contain the saddle point x0, the value of I1(λ) is exponentially smaller than I0(λ) as λ → ∞;[6] thus, I1(λ) is ignored. Introducing the contour Iw such that $U\cap I'_{x}={\boldsymbol {\varphi }}(I_{w})$, we have $I_{0}(\lambda )=e^{\lambda S(x^{0})}\int _{I_{w}}f[{\boldsymbol {\varphi }}(w)]\exp \left(\lambda \sum _{j=1}^{n}{\tfrac {\mu _{j}}{2}}w_{j}^{2}\right)\left|\det {\boldsymbol {\varphi }}_{w}'(w)\right|dw.$ (10) Recalling that x0 = φ(0) as well as $\det {\boldsymbol {\varphi }}_{w}'(0)=1$, we expand the pre-exponential function $f[{\boldsymbol {\varphi }}(w)]$ into a Taylor series and keep just the leading zero-order term $I_{0}(\lambda )\approx f(x^{0})e^{\lambda S(x^{0})}\int _{\mathbf {R} ^{n}}\exp \left(\lambda \sum _{j=1}^{n}{\tfrac {\mu _{j}}{2}}w_{j}^{2}\right)dw=f(x^{0})e^{\lambda S(x^{0})}\prod _{j=1}^{n}\int _{-\infty }^{\infty }e^{{\frac {1}{2}}\lambda \mu _{j}y^{2}}dy.$ (11) Here, we have substituted the integration region Iw by Rn because both contain the origin, which is a saddle point, hence they are equal up to an exponentially small term.[7] The integrals in the r.h.s. of equation (11) can be expressed as ${\mathcal {I}}_{j}=\int _{-\infty }^{\infty }e^{{\frac {1}{2}}\lambda \mu _{j}y^{2}}dy=2\int _{0}^{\infty }e^{-{\frac {1}{2}}\lambda \left({\sqrt {-\mu _{j}}}y\right)^{2}}dy=2\int _{0}^{\infty }e^{-{\frac {1}{2}}\lambda \left|{\sqrt {-\mu _{j}}}\right|^{2}y^{2}\exp \left(2i\arg {\sqrt {-\mu _{j}}}\right)}dy.$ (12) From this representation, we conclude that condition (9) must be satisfied in order for the r.h.s. and l.h.s. of equation (12) to coincide. 
According to assumption 2, $\Re \left(S_{xx}''(x^{0})\right)$ is a negative-definite quadratic form (viz., $\Re (\mu _{j})<0$) implying the existence of the integral ${\mathcal {I}}_{j}$, which is readily calculated ${\mathcal {I}}_{j}={\frac {2}{{\sqrt {-\mu _{j}}}{\sqrt {\lambda }}}}\int _{0}^{\infty }e^{-{\frac {\xi ^{2}}{2}}}d\xi ={\sqrt {\frac {2\pi }{\lambda }}}(-\mu _{j})^{-{\frac {1}{2}}}.$ Equation (8) can also be written as $I(\lambda )=\left({\frac {2\pi }{\lambda }}\right)^{\frac {n}{2}}e^{\lambda S(x^{0})}\left(\det(-S_{xx}''(x^{0}))\right)^{-{\frac {1}{2}}}\left(f(x^{0})+O\left(\lambda ^{-1}\right)\right),$ (13) where the branch of ${\sqrt {\det \left(-S_{xx}''(x^{0})\right)}}$ is selected as follows ${\begin{aligned}\left(\det \left(-S_{xx}''(x^{0})\right)\right)^{-{\frac {1}{2}}}&=\exp \left(-i{\text{ Ind}}\left(-S_{xx}''(x^{0})\right)\right)\prod _{j=1}^{n}\left|\mu _{j}\right|^{-{\frac {1}{2}}},\\{\text{Ind}}\left(-S_{xx}''(x^{0})\right)&={\tfrac {1}{2}}\sum _{j=1}^{n}\arg(-\mu _{j}),&&|\arg(-\mu _{j})|<{\tfrac {\pi }{2}}.\end{aligned}}$ Consider important special cases: • If S(x) is real valued for real x and x0 in Rn (aka, the multidimensional Laplace method), then[8] ${\text{Ind}}\left(-S_{xx}''(x^{0})\right)=0.$ • If S(x) is purely imaginary for real x (i.e., $\Re (S(x))=0$ for all x in Rn) and x0 in Rn (aka, the multidimensional stationary phase method),[9] then[10] ${\text{Ind}}\left(-S_{xx}''(x^{0})\right)={\frac {\pi }{4}}{\text{sign }}S_{xx}''(x_{0}),$ where ${\text{sign }}S_{xx}''(x_{0})$ denotes the signature of matrix $S_{xx}''(x_{0})$, which equals the number of negative eigenvalues minus the number of positive ones. It is noteworthy that in applications of the stationary phase method to the multidimensional WKB approximation in quantum mechanics (as well as in optics), Ind is related to the Maslov index; see, e.g., Chaichian & Demichev (2001) and Schulman (2005). The case of multiple non-degenerate saddle points If the function S(x) has multiple isolated non-degenerate saddle points, i.e., $\nabla S\left(x^{(k)}\right)=0,\quad \det S''_{xx}\left(x^{(k)}\right)\neq 0,\quad x^{(k)}\in \Omega _{x}^{(k)},$ where $\left\{\Omega _{x}^{(k)}\right\}_{k=1}^{K}$ is an open cover of Ωx, then the calculation of the integral asymptotic is reduced to the case of a single saddle point by employing the partition of unity. The partition of unity allows us to construct a set of continuous functions ρk(x) : Ωx → [0, 1], 1 ≤ k ≤ K, such that ${\begin{aligned}\sum _{k=1}^{K}\rho _{k}(x)&=1,&&\forall x\in \Omega _{x},\\\rho _{k}(x)&=0&&\forall x\in \Omega _{x}\setminus \Omega _{x}^{(k)}.\end{aligned}}$ Whence, $\int _{I_{x}\subset \Omega _{x}}f(x)e^{\lambda S(x)}dx\equiv \sum _{k=1}^{K}\int _{I_{x}\subset \Omega _{x}}\rho _{k}(x)f(x)e^{\lambda S(x)}dx.$ Therefore, as λ → ∞ we have: $\sum _{k=1}^{K}\int _{{\text{a neighborhood of }}x^{(k)}}f(x)e^{\lambda S(x)}dx=\left({\frac {2\pi }{\lambda }}\right)^{\frac {n}{2}}\sum _{k=1}^{K}e^{\lambda S\left(x^{(k)}\right)}\left(\det \left(-S_{xx}''\left(x^{(k)}\right)\right)\right)^{-{\frac {1}{2}}}f\left(x^{(k)}\right),$ where equation (13) was utilized at the last stage, and the pre-exponential function f (x) must at least be continuous. The other cases When ∇S(z0) = 0 and $\det S''_{zz}(z^{0})=0$, the point z0 ∈ Cn is called a degenerate saddle point of a function S(z).
Calculating the asymptotic of $\int f(x)e^{\lambda S(x)}dx,$ when λ → ∞, f (x) is continuous, and S(z) has a degenerate saddle point, is a very rich problem, whose solution heavily relies on catastrophe theory. Here, catastrophe theory replaces the Morse lemma, which is valid only in the non-degenerate case, transforming the function S(z) into one of a multitude of canonical representations. For further details see, e.g., Poston & Stewart (1978) and Fedoryuk (1987). Integrals with degenerate saddle points naturally appear in many applications including optical caustics and the multidimensional WKB approximation in quantum mechanics. Other cases, such as when f (x) and/or S(x) are discontinuous or when an extremum of S(x) lies at the integration region's boundary, require special care (see, e.g., Fedoryuk (1987) and Wong (1989)). Extensions and generalizations An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems. Given a contour C in the complex sphere, a function f defined on that contour and a special point, say infinity, one seeks a function M holomorphic away from the contour C, with prescribed jump across C, and with a given normalization at infinity. If f and hence M are matrices rather than scalars, this is a problem that in general does not admit an explicit solution. An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour. The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of the Russian mathematician Alexander Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, steepest descent contours solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context back in the 1980s by Stahl, Gonchar and Rakhmanov). The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics. Another extension is the method of Chester–Friedman–Ursell for coalescing saddle points and uniform asymptotic expansions. See also • Pearcey integral • Stationary phase approximation • Laplace's method Notes 1. Bender, Carl M.; Orszag, Steven A. (1999). Advanced Mathematical Methods for Scientists and Engineers I. New York, NY: Springer New York. doi:10.1007/978-1-4757-3069-2. ISBN 978-1-4419-3187-0. 2. A modified version of Lemma 2.1.1 on page 56 in Fedoryuk (1987). 3. Lemma 3.3.2 on page 113 in Fedoryuk (1987) 4. Poston & Stewart (1978), page 54; see also the comment on page 479 in Wong (1989). 5. Fedoryuk (1987), pages 417–420. 6. This conclusion follows from a comparison between the final asymptotic for I0(λ), given by equation (8), and a simple estimate for the discarded integral I1(λ). 7. This is justified by comparing the integral asymptotic over Rn [see equation (8)] with a simple estimate for the altered part. 8. See equation (4.4.9) on page 125 in Fedoryuk (1987) 9.
Rigorously speaking, this case cannot be inferred from equation (8) because the second assumption, utilized in the derivation, is violated. To include the discussed case of a purely imaginary phase function, condition (9) should be replaced by $\left|\arg {\sqrt {-\mu _{j}}}\right|\leqslant {\tfrac {\pi }{4}}.$ 10. See equation (2.2.6') on page 186 in Fedoryuk (1987) References • Chaichian, M.; Demichev, A. (2001), Path Integrals in Physics Volume 1: Stochastic Process and Quantum Mechanics, Taylor & Francis, p. 174, ISBN 075030801X • Debye, P. (1909), "Näherungsformeln für die Zylinderfunktionen für große Werte des Arguments und unbeschränkt veränderliche Werte des Index", Mathematische Annalen, 67 (4): 535–558, doi:10.1007/BF01450097, S2CID 122219667 English translation in Debye, Peter J. W. (1954), The collected papers of Peter J. W. Debye, Interscience Publishers, Inc., New York, ISBN 978-0-918024-58-9, MR 0063975 • Deift, P.; Zhou, X. (1993), "A steepest descent method for oscillatory Riemann-Hilbert problems. Asymptotics for the MKdV equation", The Annals of Mathematics, vol. 137, no. 2, pp. 295–368, arXiv:math/9201261, doi:10.2307/2946540, JSTOR 2946540, S2CID 12699956. • Erdelyi, A. (1956), Asymptotic Expansions, Dover. • Fedoryuk, M. V. (2001) [1994], "Saddle point method", Encyclopedia of Mathematics, EMS Press. • Fedoryuk, M. V. (1987), Asymptotics: Integrals and Series, Nauka, Moscow [in Russian]. • Kamvissis, S.; McLaughlin, K. T.-R.; Miller, P. (2003), "Semiclassical Soliton Ensembles for the Focusing Nonlinear Schrödinger Equation", Annals of Mathematics Studies, Princeton University Press, vol. 154. • Riemann, B. (1863), Sullo svolgimento del quoziente di due serie ipergeometriche in frazione continua infinita (Unpublished note, reproduced in Riemann's collected papers.) • Siegel, C. L. (1932), "Über Riemanns Nachlaß zur analytischen Zahlentheorie", Quellen und Studien zur Geschichte der Mathematik, Astronomie und Physik, 2: 45–80 Reprinted in Gesammelte Abhandlungen, Vol. 1. Berlin: Springer-Verlag, 1966. • Translated in Deift, Percy; Zhou, Xin (2018), "On Riemann's Nachlass for Analytic Number Theory: A translation of Siegel's Über", arXiv:1810.05198 [math.HO]. • Poston, T.; Stewart, I. (1978), Catastrophe Theory and Its Applications, Pitman. • Schulman, L. S. (2005), "Ch. 17: The Phase of the Semiclassical Amplitude", Techniques and Applications of Path Integration, Dover, ISBN 0486445283 • Wong, R. (1989), Asymptotic approximations of integrals, Academic Press.
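In the real one-dimensional case (the multidimensional Laplace method with n = 1), formula (13) reduces to $I(\lambda )\approx {\sqrt {2\pi /\lambda }}\,e^{\lambda S(x^{0})}f(x^{0})\left(-S''(x^{0})\right)^{-1/2}$. The sketch below (illustrative only, with the invented choices $f(x)=1/(1+x^{2})$ and $S(x)=-(x-1)^{2}$, which has a single non-degenerate maximum at $x^{0}=1$) checks this against direct quadrature:

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + x**2)   # pre-exponential factor
S = lambda x: -(x - 1.0)**2        # real phase with maximum at x0 = 1
x0, S2 = 1.0, -2.0                 # saddle point and S''(x0)

x = np.linspace(-10.0, 10.0, 400001)
dx = x[1] - x[0]

for lam in (10.0, 100.0, 1000.0):
    vals = f(x) * np.exp(lam * S(x))
    direct = np.sum((vals[:-1] + vals[1:]) / 2) * dx   # trapezoid rule
    asympt = np.sqrt(2 * np.pi / lam) * np.exp(lam * S(x0)) * f(x0) / np.sqrt(-S2)
    print(f"lambda = {lam:6.0f}: direct = {direct:.6e}, saddle point = {asympt:.6e}")
```

The relative error shrinks like 1/λ, consistent with the $O(\lambda ^{-1})$ correction in equation (13).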
Stationary sequence In probability theory – specifically in the theory of stochastic processes – a stationary sequence is a random sequence whose joint probability distribution is invariant over time. If a random sequence $X_{j}$ is stationary then the following holds: ${\begin{aligned}&{}\quad F_{X_{n},X_{n+1},\dots ,X_{n+N-1}}(x_{n},x_{n+1},\dots ,x_{n+N-1})\\&=F_{X_{n+k},X_{n+k+1},\dots ,X_{n+k+N-1}}(x_{n},x_{n+1},\dots ,x_{n+N-1}),\end{aligned}}$ where F is the joint cumulative distribution function of the random variables in the subscript. If a sequence is stationary then it is wide-sense stationary. If a sequence is stationary then it has a constant mean (which may not be finite): $E(X[n])=\mu \quad {\text{for all }}n.$ See also • Stationary process References • Probability and Random Processes with Application to Signal Processing: Third Edition by Henry Stark and John W. Woods. Prentice-Hall, 2002.
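Stationarity can be probed empirically by comparing the empirical distributions of different stretches of the sequence. In the sketch below (illustrative; an i.i.d. Gaussian sequence, which is stationary), a two-sample Kolmogorov–Smirnov test finds no distributional difference between the two halves:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
x = rng.standard_normal(100_000)   # an i.i.d. (hence stationary) sequence

# Compare the empirical distributions of the first and second halves,
# i.e. of X_n and the time-shifted X_{n+50000}.
stat, p = ks_2samp(x[:50_000], x[50_000:])
print(f"KS statistic = {stat:.4f}, p-value = {p:.3f}")
# A large p-value is consistent with the shifted marginals being identical.
```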
Stationary set In mathematics, specifically set theory and model theory, a stationary set is a set that is not too small in the sense that it intersects all club sets and is analogous to a set of non-zero measure in measure theory. There are at least three closely related notions of stationary set, depending on whether one is looking at subsets of an ordinal, or subsets of something of given cardinality, or a powerset. Classical notion If $\kappa $ is a cardinal of uncountable cofinality, $S\subseteq \kappa ,$ and $S$ intersects every club set in $\kappa ,$ then $S$ is called a stationary set.[1] If a set is not stationary, then it is called a thin set. This notion should not be confused with the notion of a thin set in number theory. If $S$ is a stationary set and $C$ is a club set, then their intersection $S\cap C$ is also stationary. This is because if $D$ is any club set, then $C\cap D$ is a club set, thus $(S\cap C)\cap D=S\cap (C\cap D)$ is nonempty. Therefore, $(S\cap C)$ must be stationary. See also: Fodor's lemma The restriction to uncountable cofinality is in order to avoid trivialities: Suppose $\kappa $ has countable cofinality. Then $S\subseteq \kappa $ is stationary in $\kappa $ if and only if $\kappa \setminus S$ is bounded in $\kappa $. In particular, if the cofinality of $\kappa $ is $\omega =\aleph _{0}$, then any two stationary subsets of $\kappa $ have stationary intersection. This is no longer the case if the cofinality of $\kappa $ is uncountable. In fact, suppose $\kappa $ is moreover regular and $S\subseteq \kappa $ is stationary. Then $S$ can be partitioned into $\kappa $ many disjoint stationary sets. This result is due to Solovay. If $\kappa $ is a successor cardinal, this result is due to Ulam and is easily shown by means of what is called an Ulam matrix. H. Friedman has shown that for every countable successor ordinal $\beta $, every stationary subset of $\omega _{1}$ contains a closed subset of order type $\beta $. Jech's notion There is also a notion of stationary subset of $[X]^{\lambda }$, for $\lambda $ a cardinal and $X$ a set such that $|X|\geq \lambda $, where $[X]^{\lambda }$ is the set of subsets of $X$ of cardinality $\lambda $: $[X]^{\lambda }=\{Y\subseteq X:|Y|=\lambda \}$. This notion is due to Thomas Jech. As before, $S\subseteq [X]^{\lambda }$ is stationary if and only if it meets every club, where a club subset of $[X]^{\lambda }$ is a set unbounded under $\subseteq $ and closed under union of chains of length at most $\lambda $. These notions are in general different, although for $X=\omega _{1}$ and $\lambda =\aleph _{0}$ they coincide in the sense that $S\subseteq [\omega _{1}]^{\omega }$ is stationary if and only if $S\cap \omega _{1}$ is stationary in $\omega _{1}$. The appropriate version of Fodor's lemma also holds for this notion. Generalized notion There is yet a third notion, model theoretic in nature and sometimes referred to as generalized stationarity. This notion is probably due to Magidor, Foreman and Shelah and has also been used prominently by Woodin. Now let $X$ be a nonempty set. A set $C\subseteq {\mathcal {P}}(X)$ is club (closed and unbounded) if and only if there is a function $F:[X]^{<\omega }\to X$ such that $C=\{z:F[[z]^{<\omega }]\subseteq z\}$. Here, $[y]^{<\omega }$ is the collection of finite subsets of $y$. $S\subseteq {\mathcal {P}}(X)$ is stationary in ${\mathcal {P}}(X)$ if and only if it meets every club subset of ${\mathcal {P}}(X)$. 
To see the connection with model theory, notice that if $M$ is a structure with universe $X$ in a countable language and $F$ is a Skolem function for $M$, then a stationary $S$ must contain an elementary substructure of $M$. In fact, $S\subseteq {\mathcal {P}}(X)$ is stationary if and only if for any such structure $M$ there is an elementary substructure of $M$ that belongs to $S$. References 1. Jech (2003) p. 91 • Foreman, Matthew (2002) Stationary sets, Chang's Conjecture and partition theory, in Set Theory (The Hajnal Conference) DIMACS Ser. Discrete Math. Theoret. Comp. Sci., 58, Amer. Math. Soc., Providence, RI. pp. 73–94. • Friedman, Harvey (1974). "On closed sets of ordinals". Proc. Am. Math. Soc. 43 (1): 190–192. doi:10.2307/2039353. JSTOR 2039353. Zbl 0299.04003. • Jech, Thomas (2003). Set Theory. Springer Monographs in Mathematics (Third Millennium ed.). Berlin, New York: Springer-Verlag. ISBN 978-3-540-44085-7. Zbl 1007.03002. External links • "Stationary set". PlanetMath.
AVT Statistical filtering algorithm AVT Statistical filtering algorithm is an approach to improving the quality of raw data collected from various sources. It is most effective in cases where inband noise is present. In those cases AVT is better at filtering data than a band-pass filter or any digital filtering based on frequency discrimination. Conventional filtering is useful when the signal/data occupies a different frequency range than the noise, so that the signal/data can be separated from the noise by frequency discrimination. Frequency discrimination filtering is done using Low Pass, High Pass and Band Pass filtering, which refer to the relative frequency criteria targeted by such configurations. Those filters are created using passive and active components and sometimes are implemented using software algorithms based on the Fast Fourier transform (FFT). AVT filtering is implemented in software and its inner working is based on statistical analysis of raw data. When the signal frequency (useful data distribution frequency) coincides with the noise frequency (noisy data distribution frequency), we have inband noise. In this situation frequency discrimination filtering does not work, since the noise and the useful signal are indistinguishable by frequency; this is where AVT excels. To achieve filtering in such conditions there are several methods/algorithms available, which are briefly described below; a Python transcription of the AVT steps is given at the end of this article. Averaging algorithm 1. Collect n samples of data 2. Calculate the average value of the collected data 3. Present/record the result as the actual data Median algorithm 1. Collect n samples of data 2. Sort the data in ascending or descending order. Note that the order does not matter 3. Select the value in position n/2 and present/record it as the final result representing the data sample AVT algorithm AVT stands for Antonyan Vardan Transform; its implementation is explained below. 1. Collect n samples of data 2. Calculate the standard deviation and average value 3. Drop any data that lies outside the band average ± one standard deviation 4. Calculate the average value of the remaining data 5. Present/record the result as the actual value representing the data sample This algorithm is based on amplitude discrimination and can easily reject any noise that is not like the actual signal, i.e., statistically different from it by more than one standard deviation. Note that this type of filtering can be used in situations where the actual environmental noise is not known in advance. Note also that it is preferable to use the median rather than the average in the above steps. Originally the AVT algorithm used the average value to compare with the results of the median on the data window. Filtering algorithms comparison Using a system that has a signal value of 1 with noise added at the 0.1% and 1% levels will simplify quantification of algorithm performance. The R[1] script is used to create pseudo-random noise added to the signal and analyze the results of filtering using several algorithms. Please refer to the article "Reduce Inband Noise with the AVT Algorithm"[2] for details. These graphs show that the AVT algorithm provides the best results compared with the Median and Averaging algorithms, using data sample sizes of 32, 64 and 128 values. Note that this comparison was created by analyzing a random data array of 10000 values. A sample of this data is graphically represented below. From this comparison it is apparent that AVT outperforms the other filtering algorithms by providing 5% to 10% more accurate data when analyzing the same datasets.
Considering the random nature of the noise used in this numerical experiment, which borders on the worst-case situation where the actual signal level is below the ambient noise, the precision improvements from processing data with the AVT algorithm are significant. AVT algorithm variations Cascaded AVT In some situations better results can be obtained by cascading several stages of AVT filtering. This will produce a single constant value, which can be used for equipment that has known stable characteristics like thermometers, thermistors and other slow-acting sensors. Reverse AVT 1. Collect n samples of data 2. Calculate the standard deviation and average value 3. Drop any data that lies within the band average ± one standard deviation 4. Calculate the average value of the remaining data 5. Present/record the result as the actual data This is useful for detecting minute signals that are close to the background noise level. Possible applications and uses • Use to filter data that is near or below the noise level • Used in planet detection to filter out raw data from Kepler (spacecraft) • Filter out noise from sound sources where all other filtering methods (Low-pass filter, High-pass filter, Band-pass filter, Digital filter) fail • Pre-process scientific data for data analysis (Smoothness) before plotting (see Plot (graphics)) • Used in SETI (Search for extraterrestrial intelligence) for detecting/distinguishing extraterrestrial signals from the cosmic background • Use AVT as an image filtering algorithm to detect altered images; please see the Python program that is available for download. The image of Jupiter generated by this program detects alterations in the original picture, which was modified to be visually appealing by applying filters. Another version of this comparison is the Reverse AVT filter applied to the same original Jupiter image, where we see only the altered portion as noise that was eliminated by the AVT algorithm. • Use AVT as an image filtering algorithm to estimate data density from images; please see the Python program. The picture of the Pillars of Creation Nebula shows the data density in filtered images from Hubble and Webb. Note that the image on the left has big patches of missing data marked with simpler color patterns. References 1. "The R Project for Statistical Computing". r-project.org. Retrieved 2015-01-10. 2. "Reduce Inband Noise with the AVT Algorithm | Embedded content from Electronic Design". electronicdesign.com. Retrieved 2015-01-10. 1. Joseph, Favis; Balinadoa, C.; Paolo Dar Santos, Gerald; Escanilla, Rio; Darell C. Aguda, John; Ramona A. Alcantara, Ma.; Belen M. Roble, Mariela; F. Bueser, Jomalyn (May 5, 2020). "Design and implementation of water velocity monitoring system based on hydropower generation and antonyan vardan transform (AVT) statistics". 13th International Engineering Research Conference (13th EURECA 2019). Vol. 2233. p. 050003. doi:10.1063/5.0002323. 2. Vinicius, Cene; Mauricio, Tosin; J., Machado; A., Balbinot (April 2019). "Open Database for Accurate Upper-Limb Intent Detection Using Electromyography and Reliable Extreme Learning Machines". Sensors. 19 (8): 1864. Bibcode:2019Senso..19.1864C. doi:10.3390/s19081864. PMC 6515272. PMID 31003524. 3. HornCene, Vinicius; Balbinot, Alexandre (August 10, 2018), "Using the sEMG signal representativity improvement towards upper-limb movement classification reliability", Biomedical Signal Processing and Control, 46: 182–191, doi:10.1016/j.bspc.2018.07.014, ISSN 1746-8094, S2CID 52071917 4. Horn Cene, Vinicius; Ruschel dos Santos, Raphael; Balbinot, Alexandre (July 18, 2018).
2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Honolulu, HI, USA: IEEE. pp. 5224–5227. doi:10.1109/EMBC.2018.8513468. ISBN 978-1-5386-3646-6. 5. AVT image filtering algorithm in Python[1] 1. Antonyan, Vardan. "AVT Image Filter". GitHub.
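The following is a direct Python transcription of the AVT and Reverse AVT steps listed above (a sketch; the sample window and variable names are ours, and the average-based variant of AVT is shown):

```python
import statistics

def avt(samples):
    """AVT filter: average the samples that lie within one standard
    deviation of the window average (steps 1-5 above)."""
    avg = statistics.mean(samples)
    sd = statistics.stdev(samples)
    kept = [s for s in samples if avg - sd <= s <= avg + sd]
    return statistics.mean(kept) if kept else avg

def reverse_avt(samples):
    """Reverse AVT: keep only the samples OUTSIDE the average +/- one
    standard deviation band, for signals buried near the noise floor."""
    avg = statistics.mean(samples)
    sd = statistics.stdev(samples)
    kept = [s for s in samples if not (avg - sd <= s <= avg + sd)]
    return statistics.mean(kept) if kept else avg

# A window of noisy readings around a true value of 1.0, with one outlier.
window = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 5.00]
print(f"average: {statistics.mean(window):.3f}")   # 1.500, pulled up by the outlier
print(f"median : {statistics.median(window):.3f}") # 1.005
print(f"AVT    : {avt(window):.3f}")               # 1.000, outlier rejected
```

The demonstration also makes the amplitude-discrimination point above tangible: the outlier is more than one standard deviation from the window average, so AVT drops it, while plain averaging is badly biased by it.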
Statistical population In statistics, a population is a set of similar items or events which is of interest for some question or experiment.[1] A statistical population can be a group of existing objects (e.g. the set of all stars within the Milky Way galaxy) or a hypothetical and potentially infinite group of objects conceived as a generalization from experience (e.g. the set of all possible hands in a game of poker).[2] A common aim of statistical analysis is to produce information about some chosen population.[3] In statistical inference, a subset of the population (a statistical sample) is chosen to represent the population in a statistical analysis.[4] Moreover, the statistical sample must be unbiased and accurately model the population (every unit of the population has an equal chance of selection). The ratio of the size of this statistical sample to the size of the population is called a sampling fraction. It is then possible to estimate the population parameters using the appropriate sample statistics. Mean The population mean, or population expected value, is a measure of the central tendency either of a probability distribution or of a random variable characterized by that distribution.[5] In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving $\mu =\sum xp(x)$.[6][7] An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example). Moreover, the mean can be infinite for some distributions. For a finite population, the population mean of a property is equal to the arithmetic mean of the given property, while considering every member of the population. For example, the population mean height is equal to the sum of the heights of every individual divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples. The law of large numbers states that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean.[8] Sub population A subset of a population that shares one or more additional properties is called a sub population. For example, if the population is all Egyptian people, a sub population is all Egyptian males; if the population is all pharmacies in the world, a sub population is all pharmacies in Egypt. By contrast, a sample is a subset of a population that is not chosen to share any additional property. Descriptive statistics may yield different results for different sub populations. For instance, a particular medicine may have different effects on different sub populations, and these effects may be obscured or dismissed if such special sub populations are not identified and examined in isolation. Similarly, one can often estimate parameters more accurately if one separates out sub populations: the distribution of heights among people is better modeled by considering men and women as separate sub populations, for instance. Populations consisting of sub populations can be modeled by mixture models, which combine the distributions within sub populations into an overall population distribution.
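The discrete population-mean formula and the law of large numbers mentioned above can be illustrated in a few lines (an invented example: a fair six-sided die as the population distribution):

```python
import numpy as np

# Discrete population distribution: a fair die. mu = sum of x * p(x).
values = np.arange(1, 7)
probs = np.full(6, 1 / 6)
mu = np.sum(values * probs)          # population mean = 3.5

rng = np.random.default_rng(7)
for n in (10, 1_000, 100_000):       # growing sample sizes
    sample = rng.choice(values, size=n, p=probs)
    print(f"n = {n:6d}: sample mean = {sample.mean():.4f} (population mean = {mu})")
# The sample mean approaches the population mean as n grows,
# as the law of large numbers states.
```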
Even if subpopulations are well-modeled by given simple models, the overall population may be poorly fit by a given simple model; poor fit may be evidence for the existence of subpopulations. For example, given two equal subpopulations, both normally distributed, if they have the same standard deviation but different means, the overall distribution will exhibit low kurtosis relative to a single normal distribution: the means of the subpopulations fall on the shoulders of the overall distribution. If sufficiently separated, these form a bimodal distribution; otherwise, it simply has a wide peak. Further, the overall population will exhibit overdispersion relative to a single normal distribution with the given variance. Alternatively, given two subpopulations with the same mean but different standard deviations, the overall population will exhibit high kurtosis, with a sharper peak and heavier tails (and correspondingly shallower shoulders) than a single normal distribution.

See also
• Data collection system • Horvitz–Thompson estimator • Sample (statistics) • Sampling (statistics) • Stratum (statistics)

References
1. "Glossary of statistical terms: Population". Statistics.com. Retrieved 22 February 2016. 2. Weisstein, Eric W. "Statistical population". MathWorld. 3. Yates, Daniel S.; Moore, David S.; Starnes, Daren S. (2003). The Practice of Statistics (2nd ed.). New York: Freeman. ISBN 978-0-7167-4773-4. Archived from the original on 2005-02-09. 4. "Glossary of statistical terms: Sample". Statistics.com. Retrieved 22 February 2016. 5. Feller, William (1950). Introduction to Probability Theory and its Applications, Vol I. Wiley. p. 221. ISBN 0471257087. 6. Elementary Statistics by Robert R. Johnson and Patricia J. Kuby, p. 279. 7. Weisstein, Eric W. "Population Mean". mathworld.wolfram.com. Retrieved 2020-08-21. 8. Schaum's Outline of Theory and Problems of Probability by Seymour Lipschutz and Marc Lipson, p. 141.
External links
• Statistical Terms Made Simple
Statistical theory
The theory of statistics provides a basis for the whole range of techniques, in both study design and data analysis, that are used within applications of statistics.[1][2] The theory covers approaches to statistical-decision problems and to statistical inference, and the actions and deductions that satisfy the basic principles stated for these different approaches. Within a given approach, statistical theory gives ways of comparing statistical procedures; it can find a best possible procedure within a given context for given statistical problems, or can provide guidance on the choice between alternative procedures.[2][3] Apart from philosophical considerations about how to make statistical inferences and decisions, much of statistical theory consists of mathematical statistics, and is closely linked to probability theory, to utility theory, and to optimization.

Scope
Statistical theory provides an underlying rationale, and a consistent basis, for the choice of methodology used in applied statistics.

Modelling
Statistical models describe the sources of data and can have different types of formulation corresponding to these sources and to the problem being studied. Such problems can be of various kinds: • Sampling from a finite population • Measuring observational error and refining procedures • Studying statistical relations Statistical models, once specified, can be tested to see whether they provide useful inferences for new data sets.[4]

Data collection
Statistical theory provides a guide to comparing methods of data collection, where the problem is to generate informative data using optimization and randomization while measuring and controlling for observational error.[5][6][7] Optimization of data collection reduces the cost of data while satisfying statistical goals,[8][9] while randomization allows reliable inferences. Statistical theory provides a basis for good data collection and the structuring of investigations in the topics of: • Design of experiments to estimate treatment effects, to test hypotheses, and to optimize responses[8][10][11] • Survey sampling to describe populations[12][13][14]

Summarising data
The task of summarising statistical data in conventional forms (also known as descriptive statistics) is considered in theoretical statistics as a problem of defining what aspects of statistical samples need to be described and how well they can be described from a typically limited sample of data. Thus the problems theoretical statistics considers include: • Choosing summary statistics to describe a sample • Summarising probability distributions of sample data while making limited assumptions about the form of distribution that may be encountered • Summarising the relationships between different quantities measured on the same items within a sample

Interpreting data
Besides the philosophy underlying statistical inference, statistical theory has the task of considering the types of questions that data analysts might want to ask about the problems they are studying and of providing data analytic techniques for answering them.
Some of these tasks are: • Summarising populations in the form of a fitted distribution or probability density function • Summarising the relationship between variables using some type of regression analysis • Providing ways of predicting the outcome of a random quantity given other related variables • Examining the possibility of reducing the number of variables being considered within a problem (the task of dimension reduction)

When a statistical procedure has been specified in the study protocol, then statistical theory provides well-defined probability statements for the method when applied to all populations that could have arisen from the randomization used to generate the data. This provides an objective way of estimating parameters, constructing confidence intervals, testing hypotheses, and selecting the best procedure. Even for observational data, statistical theory provides a way of calculating a value that can be used to interpret a sample of data from a population; it can provide a means of indicating how well that value is determined by the sample, and thus a means of saying whether corresponding values derived for different populations really are as different as they might seem. However, the reliability of inferences from post-hoc observational data is often worse than for planned randomized generation of data.

Applied statistical inference
Statistical theory provides the basis for a number of data-analytic approaches that are common across scientific and social research. Interpreting data is done with one of the following approaches: • Estimating parameters • Providing a range of values instead of a point estimate • Testing statistical hypotheses Many of the standard methods for those approaches rely on certain statistical assumptions (made in the derivation of the methodology) actually holding in practice. Statistical theory studies the consequences of departures from these assumptions. In addition, it provides a range of robust statistical techniques that are less dependent on assumptions, and it provides methods for checking whether particular assumptions are reasonable for a given data set.

See also
• List of statistical topics • Foundations of statistics

References
Citations
1. Cox & Hinkley (1974, p. 1) 2. Rao, C. R. (1981). "Foreword". In Arthanari, T. S.; Dodge, Yadolah (eds.). Mathematical Programming in Statistics. New York: John Wiley & Sons. pp. vii–viii. ISBN 0-471-08073-X. MR 0607328. 3. Lehmann & Romano (2005) 4. Freedman (2009) 5. Charles Sanders Peirce and Joseph Jastrow (1885). "On Small Differences in Sensation". Memoirs of the National Academy of Sciences. 3: 73–83. http://psychclassics.yorku.ca/Peirce/small-diffs.htm 6. Hacking, Ian (September 1988). "Telepathy: Origins of Randomization in Experimental Design". Isis. 79 (3): 427–451. doi:10.1086/354775. JSTOR 234674. MR 1013489. S2CID 52201011. 7. Stephen M. Stigler (November 1992). "A Historical View of Statistical Concepts in Psychology and Educational Research". American Journal of Education. 101 (1): 60–70. doi:10.1086/444032. S2CID 143685203. 8. Atkinson et al. (2007) 9. Kiefer, Jack Carl (1985). Brown, Lawrence D.; Olkin, Ingram; Sacks, Jerome; et al. (eds.). Jack Carl Kiefer: Collected papers III—Design of experiments. Springer-Verlag and the Institute of Mathematical Statistics. pp. 718+xxv. ISBN 0-387-96004-X. 10. Hinkelmann & Kempthorne (2008) 11. Bailey (2008) 12. Kish (1965) 13. Cochran (1977) 14. Särndal et al. (1992)

Sources
• Atkinson, A. C.; Donev, A. N.; Tobias, R. D. (2007). Optimum Experimental Designs, with SAS.
Oxford University Press. pp. 511+xvi. ISBN 978-0-19-929660-6. • Bailey, R. A (2008). Design of Comparative Experiments. Cambridge University Press. ISBN 978-0-521-68357-9. Pre-publication chapters are available on-line. • Cochran, William G. (1977). Sampling Techniques (Third ed.). John Wiley & Sons. ISBN 0-471-16240-X. • Cox, D.R., Hinkley, D.V. (1974) Theoretical Statistics, Chapman & Hall. ISBN 0-412-12420-3 • Freedman, David A. (2009). Statistical Models: Theory and Practice (Second ed.). Cambridge University Press. ISBN 978-0-521-67105-7. • Hinkelmann, Klaus and Kempthorne, Oscar (2008). Design and Analysis of Experiments. Vol. I, II (Second ed.). John Wiley & Sons. ISBN 978-0-470-38551-7.{{cite book}}: CS1 maint: multiple names: authors list (link) • Kish, L. (1965), Survey Sampling, John Wiley & Sons. ISBN 0-471-48900-X • Lehmann, E. L.; Romano, J. P. (2005), Testing Statistical Hypotheses (third ed.), Springer. • Särndal, Carl-Erik, Swensson, Bengt, and Wretman, Jan (1992). Model Assisted Survey Sampling. Springer-Verlag. ISBN 0-387-40620-4.{{cite book}}: CS1 maint: multiple names: authors list (link) Further reading • Peirce, C. S. • (1876), "Note on the Theory of the Economy of Research" in Coast Survey Report, pp. 197–201 (Appendix No. 14), NOAA PDF Eprint. Reprinted 1958 in Collected Papers of Charles Sanders Peirce 7, paragraphs 139–157 and in 1967 in Operations Research 15 (4): pp. 643–648, Abstract from JSTOR. • (1967) Peirce, C. S. (1967). "Note on the Theory of the Economy of Research". Operations Research. 15 (4): 643–648. doi:10.1287/opre.15.4.643. • (1877–1878), "Illustrations of the Logic of Science" • (1883), "A Theory of Probable Inference" • and Jastrow, Joseph (1885), "On Small Differences in Sensation" in Memoirs of the National Academy of Sciences 3: pp. 73–83. Eprint. • Bickel, Peter J. & Doksum, Kjell A. (2001). Mathematical Statistics: Basic and Selected Topics. Vol. I (Second (updated printing 2007) ed.). Pearson Prentice-Hall. ISBN 0-13-850363-X. • Davison, A.C. (2003) Statistical Models. Cambridge University Press. ISBN 0-521-77339-3 • Lehmann, Erich (1983). Theory of Point Estimation. • Liese, Friedrich & Miescke, Klaus-J. (2008). Statistical Decision Theory: Estimation, Testing, and Selection. Springer. ISBN 978-0-387-73193-3. 
External links
• Media related to Statistical theory at Wikimedia Commons
Admissible decision rule
In statistical decision theory, an admissible decision rule is a rule for making a decision such that there is no other rule that is always "better" than it[1] (or at least sometimes better and never worse), in the precise sense of "better" defined below. This concept is analogous to Pareto efficiency.

Definition
Define sets $\Theta$, ${\mathcal {X}}$ and ${\mathcal {A}}$, where $\Theta$ are the states of nature, ${\mathcal {X}}$ the possible observations, and ${\mathcal {A}}$ the actions that may be taken. An observation of $x\in {\mathcal {X}}$ is distributed as $F(x\mid \theta )$ and therefore provides evidence about the state of nature $\theta \in \Theta$. A decision rule is a function $\delta :{\mathcal {X}}\rightarrow {\mathcal {A}}$, where upon observing $x\in {\mathcal {X}}$, we choose to take action $\delta (x)\in {\mathcal {A}}$. Also define a loss function $L:\Theta \times {\mathcal {A}}\rightarrow \mathbb {R}$, which specifies the loss we would incur by taking action $a\in {\mathcal {A}}$ when the true state of nature is $\theta \in \Theta$. Usually we will take this action after observing data $x\in {\mathcal {X}}$, so that the loss will be $L(\theta ,\delta (x))$. (It is possible though unconventional to recast the following definitions in terms of a utility function, which is the negative of the loss.)

Define the risk function as the expectation $R(\theta ,\delta )=\operatorname {E} _{F(x\mid \theta )}[L(\theta ,\delta (x))]$. Whether a decision rule $\delta$ has low risk depends on the true state of nature $\theta$. A decision rule $\delta ^{*}$ dominates a decision rule $\delta$ if and only if $R(\theta ,\delta ^{*})\leq R(\theta ,\delta )$ for all $\theta$, and the inequality is strict for some $\theta$. A decision rule is admissible (with respect to the loss function) if and only if no other rule dominates it; otherwise it is inadmissible. Thus an admissible decision rule is a maximal element with respect to the above partial order. An inadmissible rule is not preferred (except for reasons of simplicity or computational efficiency), since by definition there is some other rule that will achieve equal or lower risk for all $\theta$. But just because a rule $\delta$ is admissible does not mean it is a good rule to use. Being admissible means there is no other single rule that is always as good or better; other admissible rules might still achieve lower risk for most $\theta$ that occur in practice. (The Bayes risk discussed below is a way of explicitly considering which $\theta$ occur in practice.)
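A small numerical illustration (not from the article; the rules and parameter grid are invented for this sketch) can make dominance concrete. Under a normal observation model with squared-error loss, Monte Carlo estimates of the risk function show one rule dominated everywhere and two rules that are not comparable:

```python
import numpy as np

rng = np.random.default_rng(1)

# Model: observe X ~ N(theta, 1); loss L(theta, a) = (theta - a)^2.
# Three illustrative rules (invented for this sketch):
rules = {
    "delta1(x) = x": lambda x: x,
    "delta2(x) = x + 1": lambda x: x + 1,
    "delta3(x) = x / 2": lambda x: x / 2,
}

thetas = np.linspace(-3.0, 3.0, 7)
n_sims = 200_000

for name, rule in rules.items():
    # Monte Carlo estimate of the risk R(theta, delta) at each theta.
    risks = []
    for theta in thetas:
        x = rng.normal(theta, 1.0, size=n_sims)
        risks.append(np.mean((theta - rule(x)) ** 2))
    print(name, np.round(risks, 2))

# delta1 dominates delta2 (risk about 1 everywhere versus about 2
# everywhere), so delta2 is inadmissible.  delta1 and delta3 are not
# comparable: delta3 has lower risk near theta = 0 but higher risk
# for large |theta|.
```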
Bayes rules and generalized Bayes rules
See also: Bayes estimator § Admissibility

Bayes rules
Let $\pi (\theta )$ be a probability distribution on the states of nature. From a Bayesian point of view, we would regard it as a prior distribution. That is, it is our believed probability distribution on the states of nature, prior to observing data. For a frequentist, it is merely a function on $\Theta$ with no such special interpretation. The Bayes risk of the decision rule $\delta$ with respect to $\pi (\theta )$ is the expectation $r(\pi ,\delta )=\operatorname {E} _{\pi (\theta )}[R(\theta ,\delta )]$. A decision rule $\delta$ that minimizes $r(\pi ,\delta )$ is called a Bayes rule with respect to $\pi (\theta )$. There may be more than one such Bayes rule. If the Bayes risk is infinite for all $\delta$, then no Bayes rule is defined.
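Continuing the numerical sketch from the Definition section (same invented rules and model), one can approximate the Bayes risk of each rule under a standard normal prior; under this prior the rule delta3(x) = x/2 is the posterior mean E[theta | x], so it attains the smallest Bayes risk:

```python
import numpy as np

rng = np.random.default_rng(4)
n_sims = 500_000

# Prior pi(theta) = N(0, 1) (an invented choice for illustration).
theta = rng.normal(0.0, 1.0, size=n_sims)
x = rng.normal(theta, 1.0)

rules = {
    "delta1(x) = x": lambda x: x,
    "delta2(x) = x + 1": lambda x: x + 1,
    "delta3(x) = x / 2": lambda x: x / 2,
}

# Bayes risk r(pi, delta): expected squared-error loss with theta ~ pi.
for name, rule in rules.items():
    print(name, np.mean((theta - rule(x)) ** 2).round(3))

# Expected output: about 1.0, 2.0 and 0.5; delta3 is the Bayes rule here.
```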
Generalized Bayes rules
See also: Bayes estimator § Generalized Bayes estimators
In the Bayesian approach to decision theory, the observed $x$ is considered fixed. Whereas the frequentist approach (i.e., risk) averages over possible samples $x\in {\mathcal {X}}$, the Bayesian would fix the observed sample $x$ and average over hypotheses $\theta \in \Theta$. Thus, the Bayesian approach is to consider for our observed $x$ the expected loss $\rho (\pi ,\delta \mid x)=\operatorname {E} _{\pi (\theta \mid x)}[L(\theta ,\delta (x))]$, where the expectation is over the posterior of $\theta$ given $x$ (obtained from $\pi (\theta )$ and $F(x\mid \theta )$ using Bayes' theorem). Having made explicit the expected loss for each given $x$ separately, we can define a decision rule $\delta$ by specifying for each $x$ an action $\delta (x)$ that minimizes the expected loss. This is known as a generalized Bayes rule with respect to $\pi (\theta )$. There may be more than one generalized Bayes rule, since there may be multiple choices of $\delta (x)$ that achieve the same expected loss.

At first, this may appear rather different from the Bayes rule approach of the previous section, not a generalization. However, notice that the Bayes risk already averages over $\Theta$ in Bayesian fashion, and the Bayes risk may be recovered as the expectation over ${\mathcal {X}}$ of the expected loss (where $x\sim \theta$ and $\theta \sim \pi$). Roughly speaking, $\delta$ minimizes this expectation of expected loss (i.e., is a Bayes rule) if and only if it minimizes the expected loss for each $x\in {\mathcal {X}}$ separately (i.e., is a generalized Bayes rule). Then why is the notion of generalized Bayes rule an improvement? It is indeed equivalent to the notion of Bayes rule when a Bayes rule exists and all $x$ have positive probability. However, no Bayes rule exists if the Bayes risk is infinite (for all $\delta$). In this case it is still useful to define a generalized Bayes rule $\delta$, which at least chooses a minimum-expected-loss action $\delta (x)$ for those $x$ for which a finite-expected-loss action does exist. In addition, a generalized Bayes rule may be desirable because it must choose a minimum-expected-loss action $\delta (x)$ for every $x$, whereas a Bayes rule would be allowed to deviate from this policy on a set $X\subseteq {\mathcal {X}}$ of measure 0 without affecting the Bayes risk. More importantly, it is sometimes convenient to use an improper prior $\pi (\theta )$. In this case, the Bayes risk is not even well-defined, nor is there any well-defined distribution over $x$. However, the posterior $\pi (\theta \mid x)$, and hence the expected loss, may be well-defined for each $x$, so that it is still possible to define a generalized Bayes rule.

Admissibility of (generalized) Bayes rules
According to the complete class theorems, under mild conditions every admissible rule is a (generalized) Bayes rule (with respect to some prior $\pi (\theta )$, possibly an improper one, that favors distributions $\theta$ where that rule achieves low risk). Thus, in frequentist decision theory it is sufficient to consider only (generalized) Bayes rules. Conversely, while Bayes rules with respect to proper priors are virtually always admissible, generalized Bayes rules corresponding to improper priors need not yield admissible procedures. Stein's example is one such famous situation.

Examples
The James–Stein estimator is a nonlinear estimator of the mean of Gaussian random vectors which can be shown to dominate, or outperform, the ordinary least squares technique with respect to a mean-square error loss function.[2] Thus least squares estimation is not an admissible estimation procedure in this context. Some other standard estimators associated with the normal distribution are also inadmissible: for example, the sample estimate of the variance when the population mean and variance are unknown.[3]

Notes
1. Dodge, Y. (2003) The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-920613-9 (entry for admissible decision function) 2. Cox & Hinkley 1974, Section 11.8 3. Cox & Hinkley 1974, Exercise 11.7

References
• Cox, D. R.; Hinkley, D. V. (1974). Theoretical Statistics. Wiley. ISBN 0-412-12420-3. • Berger, James O. (1980). Statistical Decision Theory and Bayesian Analysis (2nd ed.). Springer-Verlag. ISBN 0-387-96098-8. • DeGroot, Morris (2004) [1st. pub. 1970]. Optimal Statistical Decisions. Wiley Classics Library. ISBN 0-471-68029-X. • Robert, Christian P. (1994). The Bayesian Choice. Springer-Verlag. ISBN 3-540-94296-3.
Computational statistics
Computational statistics, or statistical computing, is the study of statistical methods that are enabled by, and depend on, computation; it lies at the interface of statistics and computer science. It is the area of computational science (or scientific computing) specific to the mathematical science of statistics. This area is developing rapidly, leading to calls that a broader concept of computing should be taught as part of general statistical education.[1] As in traditional statistics, the goal is to transform raw data into knowledge,[2] but the focus lies on computer-intensive statistical methods, such as cases with very large sample sizes and non-homogeneous data sets.[2]

The terms 'computational statistics' and 'statistical computing' are often used interchangeably, although Carlo Lauro (a former president of the International Association for Statistical Computing) proposed making a distinction, defining 'statistical computing' as "the application of computer science to statistics", and 'computational statistics' as "aiming at the design of algorithm for implementing statistical methods on computers, including the ones unthinkable before the computer age (e.g. bootstrap, simulation), as well as to cope with analytically intractable problems" [sic].[3] The term 'computational statistics' may also be used to refer to computationally intensive statistical methods, including resampling methods, Markov chain Monte Carlo methods, local regression, kernel density estimation, artificial neural networks and generalized additive models.

History
Though computational statistics is widely used today, it has a relatively short history of acceptance in the statistics community. For the most part, the founders of the field of statistics relied on mathematics and asymptotic approximations in the development of computational statistical methodology.[4]

In statistics, the first use of the term "computer" comes in an article in the Journal of the American Statistical Association archives by Robert P. Porter in 1891. The article discusses the use of Herman Hollerith's machine in the 11th Census of the United States. Hollerith's machine, also called a tabulating machine, was an electromechanical machine designed to assist in summarizing information stored on punched cards. It was invented by Herman Hollerith (February 29, 1860 – November 17, 1929), an American businessman, inventor, and statistician. His invention of the punched card tabulating machine was patented in 1884, and the machine was later used in the 1890 Census of the United States. The advantages of the technology were immediately apparent: the 1880 Census, with about 50 million people, took over seven years to tabulate, while the 1890 Census, with over 62 million people, took less than a year. This marks the beginning of the era of mechanized computational statistics and semiautomatic data processing systems.

In 1908, William Sealy Gosset performed his now well-known Monte Carlo simulation, which led to the discovery of the Student's t-distribution.[5] With the help of computational methods, he also produced plots of the empirical distributions overlaid on the corresponding theoretical distributions.
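As the next paragraph notes, replicating Gosset's experiment is now little more than an exercise. A minimal Python sketch of such a replication follows (the sample size and replication count are arbitrary choices, not from the article): it simulates the t-statistic of many small normal samples and compares its empirical quantiles with the theoretical Student's t-distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

n, reps = 4, 100_000          # small samples, many replications (arbitrary)
samples = rng.normal(0.0, 1.0, size=(reps, n))

# t-statistic of each sample: (mean - true mean) / (s / sqrt(n))
t_stats = samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

# Compare empirical quantiles with Student's t with n - 1 degrees of freedom.
for q in (0.90, 0.95, 0.99):
    print(q, np.quantile(t_stats, q).round(3),
          stats.t.ppf(q, df=n - 1).round(3))
```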
The computer has revolutionized simulation and has made the replication of Gosset's experiment little more than an exercise.[6][7] Later, scientists put forward computational ways of generating pseudo-random deviates, developed methods to convert uniform deviates into other distributional forms using the inverse cumulative distribution function or acceptance-rejection methods, and developed state-space methodology for Markov chain Monte Carlo.[8] One of the first efforts to generate random digits in a fully automated way was undertaken by the RAND Corporation in 1947. The tables produced were published as a book in 1955, and also as a series of punch cards. By the mid-1950s, several articles and patents for devices had been proposed for random number generators.[9] The development of these devices was motivated by the need to use random digits to perform simulations and other fundamental components of statistical analysis. One of the most well known of such devices is ERNIE, which produces random numbers that determine the winners of the Premium Bond, a lottery bond issued in the United Kingdom. In 1958, John Tukey's jackknife was developed. It is a method to reduce the bias of parameter estimates in samples under nonstandard conditions,[10] and it requires computers for practical implementations. By this point, computers had made many tedious statistical studies feasible.[11]

Methods
Maximum likelihood estimation
Maximum likelihood estimation is used to estimate the parameters of an assumed probability distribution, given some observed data. It is achieved by maximizing a likelihood function so that the observed data are most probable under the assumed statistical model.

Monte Carlo method
The Monte Carlo method is a statistical method that relies on repeated random sampling to obtain numerical results. The concept is to use randomness to solve problems that might be deterministic in principle. Monte Carlo methods are often used in physical and mathematical problems and are most useful when it is difficult to use other approaches. They are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.

Markov chain Monte Carlo
The Markov chain Monte Carlo (MCMC) method creates samples from a continuous random variable, with probability density proportional to a known function. These samples can be used to evaluate an integral over that variable, such as its expected value or variance. The more steps are included, the more closely the distribution of the sample matches the actual desired distribution.
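As a concrete (and deliberately minimal) illustration of the MCMC idea just described, the following sketch uses a random-walk Metropolis algorithm to draw samples whose density is proportional to an unnormalized function; the target and tuning constants are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unnormalized target density (illustrative choice): a standard normal shape.
def unnorm_density(x):
    return np.exp(-0.5 * x * x)

def metropolis(n_steps, step_size=1.0, x0=0.0):
    """Random-walk Metropolis: accept a proposal with probability
    min(1, p(proposal) / p(current)); otherwise keep the current state."""
    x = x0
    chain = np.empty(n_steps)
    for i in range(n_steps):
        proposal = x + rng.normal(0.0, step_size)
        if rng.random() < unnorm_density(proposal) / unnorm_density(x):
            x = proposal
        chain[i] = x
    return chain

chain = metropolis(100_000)
burned = chain[1000:]                      # discard burn-in steps
print("mean ~ 0:", burned.mean().round(3))
print("variance ~ 1:", burned.var().round(3))
```

As the article says, the longer the chain runs, the more closely the sample distribution matches the target.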
"Computing in the Statistics Curricula", The American Statistician 64 (2), pp.97-107. 2. Wegman, Edward J. “Computational Statistics: A New Agenda for Statistical Theory and Practice.” Journal of the Washington Academy of Sciences, vol. 78, no. 4, 1988, pp. 310–322. JSTOR 3. Lauro, Carlo (1996), "Computational statistics or statistical computing, is that the question?", Computational Statistics & Data Analysis, 23 (1): 191–193, doi:10.1016/0167-9473(96)88920-1 4. Watnik, Mitchell (2011). "Early Computational Statistics". Journal of Computational and Graphical Statistics. 20 (4): 811–817. doi:10.1198/jcgs.2011.204b. ISSN 1061-8600. S2CID 120111510. 5. "Student" [William Sealy Gosset] (1908). "The probable error of a mean" (PDF). Biometrika. 6 (1): 1–25. doi:10.1093/biomet/6.1.1. hdl:10338.dmlcz/143545. JSTOR 2331554. 6. Trahan, Travis John (2019-10-03). "Recent Advances in Monte Carlo Methods at Los Alamos National Laboratory". doi:10.2172/1569710. OSTI 1569710. {{cite journal}}: Cite journal requires |journal= (help) 7. Metropolis, Nicholas; Ulam, S. (1949). "The Monte Carlo Method". Journal of the American Statistical Association. 44 (247): 335–341. doi:10.1080/01621459.1949.10483310. ISSN 0162-1459. PMID 18139350. 8. Robert, Christian; Casella, George (2011-02-01). "A Short History of Markov Chain Monte Carlo: Subjective Recollections from Incomplete Data". Statistical Science. 26 (1). doi:10.1214/10-sts351. ISSN 0883-4237. S2CID 2806098. 9. Pierre L'Ecuyer (2017). "History of uniform random number generation" (PDF). 2017 Winter Simulation Conference (WSC). pp. 202–230. doi:10.1109/WSC.2017.8247790. ISBN 978-1-5386-3428-8. S2CID 4567651. 10. QUENOUILLE, M. H. (1956). "Notes on Bias in Estimation". Biometrika. 43 (3–4): 353–360. doi:10.1093/biomet/43.3-4.353. ISSN 0006-3444. 11. Teichroew, Daniel (1965). "A History of Distribution Sampling Prior to the Era of the Computer and its Relevance to Simulation". Journal of the American Statistical Association. 60 (309): 27–49. doi:10.1080/01621459.1965.10480773. ISSN 0162-1459. Further reading Articles • Albert, J.H.; Gentle, J.E. (2004), Albert, James H; Gentle, James E (eds.), "Special Section: Teaching Computational Statistics", The American Statistician, 58: 1, doi:10.1198/0003130042872, S2CID 219596225 • Wilkinson, Leland (2008), "The Future of Statistical Computing (with discussion)", Technometrics, 50 (4): 418–435, doi:10.1198/004017008000000460, S2CID 3521989 Books • Drew, John H.; Evans, Diane L.; Glen, Andrew G.; Lemis, Lawrence M. (2007), Computational Probability: Algorithms and Applications in the Mathematical Sciences, Springer International Series in Operations Research & Management Science, Springer, ISBN 978-0-387-74675-3 • Gentle, James E. (2002), Elements of Computational Statistics, Springer, ISBN 0-387-95489-9 • Gentle, James E.; Härdle, Wolfgang; Mori, Yuichi, eds. (2004), Handbook of Computational Statistics: Concepts and Methods, Springer, ISBN 3-540-40464-3 • Givens, Geof H.; Hoeting, Jennifer A. (2005), Computational Statistics, Wiley Series in Probability and Statistics, Wiley-Interscience, ISBN 978-0-471-46124-1 • Klemens, Ben (2008), Modeling with Data: Tools and Techniques for Statistical Computing, Princeton University Press, ISBN 978-0-691-13314-0 • Monahan, John (2001), Numerical Methods of Statistics, Cambridge University Press, ISBN 978-0-521-79168-7 • Rose, Colin; Smith, Murray D. 
(2002), Mathematical Statistics with Mathematica, Springer Texts in Statistics, Springer, ISBN 0-387-95234-9 • Thisted, Ronald Aaron (1988), Elements of Statistical Computing: Numerical Computation, CRC Press, ISBN 0-412-01371-1 • Gharieb, Reda. R. (2017), Data Science: Scientific and Statistical Computing, Noor Publishing, ISBN 978-3-330-97256-8 External links Associations • International Association for Statistical Computing • Statistical Computing section of the American Statistical Association Journals • Computational Statistics & Data Analysis • Journal of Computational & Graphical Statistics • Statistics and Computing
Statistical and Applied Mathematical Sciences Institute
The Statistical and Applied Mathematical Sciences Institute (SAMSI) was an applied mathematics and statistics research organization based in Research Triangle Park, North Carolina. It was funded by the National Science Foundation, and was partnered with Duke University, North Carolina State University, the University of North Carolina at Chapel Hill, and the National Institute of Statistical Sciences.

Statistical and Applied Mathematical Sciences Institute SAMSI in 2014 Abbreviation SAMSI Formation 2002 Dissolved 2021 Location • Research Triangle Park, North Carolina Director David Banks Website www.samsi.info

SAMSI was founded in 2002. In 2012, the National Science Foundation renewed SAMSI's funding for an additional five years.[1] SAMSI offered programs in bioinformatics and statistical ecology in 2014–15.[2] SAMSI closed its doors in August 2021, after 19 years of work.[3]

References
1. "SAMSI renewed for another five years by NSF". North Carolina State University. 2012-09-04. Retrieved 20 February 2014. 2. "SAMSI Offers Two New Programs for 2014–2015". Amstat News. American Statistical Association. December 1, 2013. Retrieved 21 February 2014. 3. "The End of an Era: The Closing of SAMSI | Amstat News". 2021-09-01. Retrieved 2023-03-02.
Statistical assembly In statistics, for example in statistical quality control, a statistical assembly is a collection of parts or components which makes up a statistical unit. Thus a statistical unit, which would be the prime item of concern, is made of discrete components like organs or machine parts. The reliability of the statistical unit is, in part, determined by the reliability of the components in the statistical assembly, and by their interactions. Much of the observation of a statistical assembly requires special preparation of the unit, which demands that the intervention must not prejudice the observations. A simple version of this kind of research uses the stimulus-response model. In other contexts, statistical assembly refers to the process of constructing a manufactured item which must be carefully specified to contain given amounts of nonuniformity within it. External links • "1 2D Overview7/92". adcats.et.byu.edu. Retrieved 2018-08-20.
Binary classification
Binary classification is the task of classifying the elements of a set into two groups (each called a class) on the basis of a classification rule. Typical binary classification problems include: • Medical testing to determine if a patient has a certain disease or not • Quality control in industry, deciding whether a specification has been met • In information retrieval, deciding whether a page should be in the result set of a search or not

Binary classification is dichotomization applied to a practical situation. In many practical binary classification problems, the two groups are not symmetric, and rather than overall accuracy, the relative proportion of different types of errors is of interest. For example, in medical testing, detecting a disease when it is not present (a false positive) is considered differently from not detecting a disease when it is present (a false negative).

Statistical binary classification
Statistical classification is a problem studied in machine learning. It is a type of supervised learning, a method of machine learning where the categories are predefined, and is used to categorize new probabilistic observations into said categories. When there are only two categories the problem is known as statistical binary classification. Some of the methods commonly used for binary classification are: • Decision trees • Random forests • Bayesian networks • Support vector machines • Neural networks • Logistic regression • Probit model • Genetic programming • Multi expression programming • Linear genetic programming

Each classifier is best in only a select domain, based upon the number of observations, the dimensionality of the feature vector, the noise in the data and many other factors. For example, random forests perform better than SVM classifiers for 3D point clouds.[1][2]

Evaluation of binary classifiers
There are many metrics that can be used to measure the performance of a classifier or predictor; different fields have different preferences for specific metrics due to different goals. In medicine sensitivity and specificity are often used, while in information retrieval precision and recall are preferred. An important distinction is between metrics that are independent of how often each category occurs in the population (the prevalence), and metrics that depend on the prevalence; both types are useful, but they have very different properties.

Given a classification of a specific data set, there are four basic combinations of actual data category and assigned category: true positives TP (correct positive assignments), true negatives TN (correct negative assignments), false positives FP (incorrect positive assignments), and false negatives FN (incorrect negative assignments). These can be arranged into a 2×2 contingency table, with columns corresponding to actual value (condition positive or condition negative) and rows corresponding to classification value (test outcome positive or test outcome negative):

                        Condition positive   Condition negative
Test outcome positive   True positive        False positive
Test outcome negative   False negative       True negative

The eight basic ratios
There are eight basic ratios that one can compute from this table, which come in four complementary pairs (each pair summing to 1). These are obtained by dividing each of the four numbers by the sum of its row or column, yielding eight numbers, which can be referred to generically in the form "true positive row ratio" or "false negative column ratio".
There are thus two pairs of column ratios and two pairs of row ratios, and one can summarize these with four numbers by choosing one ratio from each pair; the other four numbers are the complements.

The column ratios are: • true positive rate (TPR) = TP/(TP+FN), also known as sensitivity or recall, with complement the false negative rate (FNR) = FN/(TP+FN) • true negative rate (TNR) = TN/(TN+FP), also known as specificity (SPC), with complement the false positive rate (FPR) = FP/(TN+FP) These are the proportions of the population with (or without) the condition for which the test is correct; they are independent of prevalence.

The row ratios are: • positive predictive value (PPV, also known as precision) = TP/(TP+FP), with complement the false discovery rate (FDR) = FP/(TP+FP) • negative predictive value (NPV) = TN/(TN+FN), with complement the false omission rate (FOR) = FN/(TN+FN) These are the proportions of the population with a given test result for which the test is correct; they depend on the prevalence.

In diagnostic testing, the main ratios used are the true column ratios, true positive rate and true negative rate, where they are known as sensitivity and specificity. In information retrieval, the main ratios are the true positive ratios (row and column), positive predictive value and true positive rate, where they are known as precision and recall. There is no general theory that sets out which pair should be used in which circumstances; each discipline has its own reason for the choice it has made.

One can take ratios of a complementary pair of ratios, yielding four likelihood ratios (two column ratios of ratios, two row ratios of ratios). This is primarily done for the column (condition) ratios, yielding likelihood ratios in diagnostic testing. Taking the ratio of one of these groups of ratios yields a final ratio, the diagnostic odds ratio (DOR). This can also be defined directly as (TP×TN)/(FP×FN) = (TP/FN)/(FP/TN); this has a useful interpretation, as an odds ratio, and is prevalence-independent.

There are a number of other metrics, most simply the accuracy or Fraction Correct (FC), which measures the fraction of all instances that are correctly categorized; the complement is the Fraction Incorrect (FiC). The F-score combines precision and recall into one number via a choice of weighting, most simply equal weighting, as the balanced F-score (F1 score). Some metrics come from regression coefficients: the markedness and the informedness, and their geometric mean, the Matthews correlation coefficient. Other metrics include Youden's J statistic, the uncertainty coefficient, the phi coefficient, and Cohen's kappa. A short computational sketch of the main ratios follows.
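To make the definitions above concrete, here is a minimal Python sketch (the counts are invented for illustration) that computes the main column and row ratios and the diagnostic odds ratio from a 2×2 table:

```python
# Hypothetical confusion-matrix counts (illustrative only).
TP, FN, FP, TN = 90, 10, 30, 870

# Column (condition) ratios: independent of prevalence.
tpr = TP / (TP + FN)   # sensitivity / recall
tnr = TN / (TN + FP)   # specificity

# Row (test outcome) ratios: depend on prevalence.
ppv = TP / (TP + FP)   # precision
npv = TN / (TN + FN)

dor = (TP * TN) / (FP * FN)   # diagnostic odds ratio

print(f"TPR={tpr:.3f} TNR={tnr:.3f} PPV={ppv:.3f} NPV={npv:.3f} DOR={dor:.1f}")
```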
Converting continuous values to binary
Tests whose results are of continuous values, such as most blood values, can artificially be made binary by defining a cutoff value, with test results being designated as positive or negative depending on whether the resultant value is higher or lower than the cutoff. However, such conversion causes a loss of information, as the resultant binary classification does not tell how much above or below the cutoff a value is. As a result, when converting a continuous value that is close to the cutoff to a binary one, the resultant positive or negative predictive value is generally higher than the predictive value given directly from the continuous value. In such cases, the designation of the test as either positive or negative gives the appearance of an inappropriately high certainty, while the value is in fact in an interval of uncertainty. For example, with the urine concentration of hCG as a continuous value, a urine pregnancy test that measured 52 mIU/ml of hCG may show as "positive" with 50 mIU/ml as cutoff, but it is in fact in an interval of uncertainty, which may be apparent only by knowing the original continuous value. On the other hand, a test result very far from the cutoff generally has a resultant positive or negative predictive value that is lower than the predictive value given from the continuous value. For example, a urine hCG value of 200,000 mIU/ml confers a very high probability of pregnancy, but conversion to binary values means that it shows just as "positive" as the one of 52 mIU/ml.

See also
• Examples of Bayesian inference • Classification rule • Confusion matrix • Detection theory • Kernel methods • Multiclass classification • Multi-label classification • One-class classification • Prosecutor's fallacy • Receiver operating characteristic • Thresholding (image processing) • Uncertainty coefficient, aka proficiency • Qualitative property • Precision and recall (equivalent classification schema)

References
1. Zhang, Richard & Zakhor, Avideh (2014). "Automatic Identification of Window Regions on Indoor Point Clouds Using LiDAR and Cameras". VIP Lab Publications. CiteSeerX 10.1.1.649.303. 2. Y. Lu and C. Rasmussen (2012). "Simplified markov random fields for efficient semantic labeling of 3D point clouds" (PDF). IROS.

Bibliography
• Nello Cristianini and John Shawe-Taylor. An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press, 2000. ISBN 0-521-78019-5 (SVM Book) • John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004. ISBN 0-521-81397-2 (Website for the book) • Bernhard Schölkopf and A. J. Smola: Learning with Kernels. MIT Press, Cambridge, Massachusetts, 2002. ISBN 0-262-19475-9
Calibration (statistics)

There are two main uses of the term calibration in statistics that denote special types of statistical inference problems. "Calibration" can mean
• a reverse process to regression, where instead of a future dependent variable being predicted from known explanatory variables, a known observation of the dependent variables is used to predict a corresponding explanatory variable;[1]
• procedures in statistical classification to determine class membership probabilities, which assess the uncertainty of a given new observation belonging to each of the already established classes.
In addition, "calibration" is used in statistics with the usual general meaning of calibration. For example, model calibration can also be used to refer to Bayesian inference about the value of a model's parameters, given some data set, or more generally to any type of fitting of a statistical model. As Philip Dawid puts it, "a forecaster is well calibrated if, for example, of those events to which he assigns a probability 30 percent, the long-run proportion that actually occurs turns out to be 30 percent".[2]

In classification
Calibration in classification means transforming classifier scores into class membership probabilities. An overview of calibration methods for two-class and multi-class classification tasks is given by Gebel (2009).[3] A variety of metrics exist that aim to measure the extent to which a classifier produces well-calibrated probabilities. Foundational work includes the Expected Calibration Error (ECE).[4] Recent variants include the Adaptive Calibration Error (ACE)[5] and the Test-based Calibration Error (TCE),[6] which address limitations of the ECE metric that may arise when classifier scores concentrate on a narrow subset of the [0,1] range.
The following univariate calibration methods exist for transforming classifier scores into class membership probabilities in the two-class case:
• Assignment value approach, see Garczarek (2002)[7]
• Bayes approach, see Bennett (2002)[8]
• Isotonic regression, see Zadrozny and Elkan (2002)[9]
• Platt scaling (a form of logistic regression), see Lewis and Gale (1994)[10] and Platt (1999)[11]
• Bayesian Binning into Quantiles (BBQ) calibration, see Naeini, Cooper, Hauskrecht (2015)[12]
• Beta calibration, see Kull, Filho, Flach (2017)[13]

In regression
The calibration problem in regression is the use of known data on the observed relationship between a dependent variable and an independent variable to make estimates of other values of the independent variable from new observations of the dependent variable.[14][15][16] This is sometimes known as "inverse regression";[17] see also sliced inverse regression. One example is that of dating objects, using observable evidence such as tree rings for dendrochronology or carbon-14 for radiometric dating. The observation is caused by the age of the object being dated, rather than the reverse, and the aim is to use the method for estimating dates based on new observations. The problem is whether the model used for relating known ages with observations should aim to minimise the error in the observation, or minimise the error in the date. The two approaches will produce different results, and the difference will increase if the model is then used for extrapolation at some distance from the known results.
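A minimal sketch of this classical-calibration idea, assuming simulated data and a straight-line model; all names and numbers here are illustrative, not part of any standard API:

```python
import numpy as np

# Illustrative (assumed) data: known ages x and an observable signal y,
# e.g. a dating proxy measured on objects of known age.
rng = np.random.default_rng(0)
x = np.linspace(10, 100, 30)                       # known ages
y = 2.0 + 0.05 * x + rng.normal(0, 0.1, x.size)    # noisy observed signal

# Classical calibration: regress the observation y on the known age x ...
b, a = np.polyfit(x, y, 1)                         # y ~ a + b*x

# ... then invert the fitted line to estimate the age of a new object
# from its observed signal.
y_new = 4.5
print(f"calibration estimate: {(y_new - a) / b:.1f}")

# The alternative fits x on y directly ("inverse regression"); the two
# approaches generally disagree, and the gap grows under extrapolation.
b2, a2 = np.polyfit(y, x, 1)
print(f"inverse-regression estimate: {a2 + b2 * y_new:.1f}")
```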
The following multivariate calibration methods exist for transforming classifier scores into class membership probabilities in the case with more than two classes:
• Reduction to binary tasks and subsequent pairwise coupling, see Hastie and Tibshirani (1998)[18]
• Dirichlet calibration, see Gebel (2009)[3]

In prediction and forecasting
In prediction and forecasting, a Brier score is sometimes used to assess prediction accuracy of a set of predictions, specifically whether the magnitudes of the assigned probabilities track the relative frequency of the observed outcomes. Philip E. Tetlock employs the term "calibration" in this sense[19] in his 2015 book Superforecasting. This differs from accuracy and precision. For example, as expressed by Daniel Kahneman, "if you give all events that happen a probability of .6 and all the events that don't happen a probability of .4, your discrimination is perfect but your calibration is miserable".[19]
Aggregative Contingent Estimation was a program of the Office of Incisive Analysis (OIA) at the Intelligence Advanced Research Projects Activity (IARPA) that sponsored research and forecasting tournaments in partnership with The Good Judgment Project, co-created by Philip E. Tetlock, Barbara Mellers, and Don Moore. In meteorology, in particular, as concerns weather forecasting, a related mode of assessment is known as forecast skill.

See also
• Calibration – Check on the accuracy of measurement devices
• Calibrated probability assessment – Subjective probabilities assigned in a way that historically represents their uncertainty
• Conformal prediction

References
1. Upton, G., Cook, I. (2006) Oxford Dictionary of Statistics, OUP. ISBN 978-0-19-954145-4
2. Dawid, A. P. (1982). "The Well-Calibrated Bayesian". Journal of the American Statistical Association. 77 (379): 605–610. doi:10.1080/01621459.1982.10477856.
3. Gebel, Martin (2009). Multivariate calibration of classifier scores into the probability space (PDF) (PhD thesis). University of Dortmund.
4. M.P. Naeini, G. Cooper, and M. Hauskrecht, Obtaining well calibrated probabilities using Bayesian binning. In: Proceedings of the AAAI Conference on Artificial Intelligence, 2015.
5. J. Nixon, M.W. Dusenberry, L. Zhang, G. Jerfel, & D. Tran. Measuring Calibration in Deep Learning. In: CVPR workshops (Vol. 2, No. 7), 2019.
6. T. Matsubara, N. Tax, R. Mudd, & I. Guy. TCE: A Test-Based Approach to Measuring Calibration Error. In: Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI), PMLR, 2023.
7. U. M. Garczarek, Classification Rules in Standardized Partition Spaces, Dissertation, Universität Dortmund, 2002. Archived 2004-11-23 at the Wayback Machine.
8. P. N. Bennett, Using asymmetric distributions to improve text classifier probability estimates: A comparison of new and standard parametric methods, Technical Report CMU-CS-02-126, Carnegie Mellon, School of Computer Science, 2002.
9. B. Zadrozny and C. Elkan, Transforming classifier scores into accurate multiclass probability estimates. In: Proceedings of the Eighth International Conference on Knowledge Discovery and Data Mining, 694–699, Edmonton, ACM Press, 2002.
10. D. D. Lewis and W. A. Gale, A Sequential Algorithm for Training Text Classifiers. In: W. B. Croft and C. J. van Rijsbergen (eds.), Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '94), 3–12. New York, Springer-Verlag, 1994.
11. J. C. Platt, Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In: A. J. Smola, P. Bartlett, B. Schölkopf and D. Schuurmans (eds.), Advances in Large Margin Classifiers, 61–74. Cambridge, MIT Press, 1999.
12. Naeini, M.P.; Cooper, G.F.; Hauskrecht, M. (2015). "Obtaining Well Calibrated Probabilities Using Bayesian Binning". Proceedings of the AAAI Conference on Artificial Intelligence. 2015: 2901–2907.
13. Meelis Kull, Telmo Silva Filho, Peter Flach; Beta calibration. In: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54: 623–631, 2017.
14. Brown, P.J. (1994) Measurement, Regression and Calibration, OUP. ISBN 0-19-852245-2
15. Ng, K. H., Pooi, A. H. (2008) "Calibration Intervals in Linear Regression Models", Communications in Statistics - Theory and Methods, 37 (11), 1688–1696.
16. Hardin, J. W., Schmiediche, H., Carroll, R. J. (2003) "The regression-calibration method for fitting generalized linear models with additive measurement error", Stata Journal, 3 (4), 361–372.
17. Draper, N.L., Smith, H. (1998) Applied Regression Analysis, 3rd Edition, Wiley. ISBN 0-471-17082-8
18. T. Hastie and R. Tibshirani, Classification by pairwise coupling. In: M. I. Jordan, M. J. Kearns and S. A. Solla (eds.), Advances in Neural Information Processing Systems, volume 10, Cambridge, MIT Press, 1998.
19. "Edge Master Class 2015: A Short Course in Superforecasting, Class II". edge.org. Edge Foundation. 24 August 2015. Retrieved 13 April 2018. Calibration is when I say there's a 70 percent likelihood of something happening, things happen 70 percent of time.
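To tie the forecasting sense of calibration above to something executable, a hedged sketch follows: it bins simulated probability forecasts and compares each bin's mean forecast with the observed outcome frequency (the ingredient of ECE-style metrics mentioned earlier), then computes a Brier score. All data here are simulated assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.uniform(0, 1, 10_000)        # assumed probability forecasts
y = rng.uniform(0, 1, p.size) < p    # outcomes drawn so forecasts are calibrated

# Reliability check: within each probability bin, a well-calibrated
# forecaster's mean forecast matches the observed frequency.
bins = np.linspace(0, 1, 11)
idx = np.digitize(p, bins[1:-1])
for k in range(10):
    mask = idx == k
    if mask.any():
        print(f"forecast ~{p[mask].mean():.2f}  observed {y[mask].mean():.2f}")

# Brier score: mean squared difference between forecast and outcome.
print("Brier score:", np.mean((p - y) ** 2))
```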
Statistical conclusion validity

Statistical conclusion validity is the degree to which conclusions about the relationship among variables based on the data are correct or "reasonable". Originally the term concerned solely whether the statistical conclusion about the relationship of the variables was correct, but there is now a movement towards "reasonable" conclusions that draw on quantitative, statistical, and qualitative data.[1] Fundamentally, two types of errors can occur: type I (finding a difference or correlation when none exists) and type II (finding no difference or correlation when one exists). Statistical conclusion validity concerns the qualities of the study that make these types of errors more likely. Statistical conclusion validity involves ensuring the use of adequate sampling procedures, appropriate statistical tests, and reliable measurement procedures.[2][3][4]

Common threats
The most common threats to statistical conclusion validity are:

Low statistical power
Power is the probability of correctly rejecting the null hypothesis when it is false (the complement of the type II error rate). Experiments with low power have a higher probability of incorrectly accepting the null hypothesis, that is, of committing a type II error and concluding that there is no effect when there actually is one (i.e., there is real covariation between the cause and effect). Low power occurs when the sample size of the study is too small given other factors (small effect sizes, large group variability, unreliable measures, etc.).

Violated assumptions of the test statistics
Most statistical tests (particularly inferential statistics) involve assumptions about the data that make the analysis suitable for testing a hypothesis. Violating the assumptions of statistical tests can lead to incorrect inferences about the cause–effect relationship. The robustness of a test indicates how insensitive it is to such violations. Violations of assumptions may make tests more or less likely to make type I or II errors.

Dredging and the error rate problem
Each hypothesis test involves a set risk of a type I error (the alpha rate). If a researcher searches or "dredges" through their data, testing many different hypotheses to find a significant effect, they inflate their type I error rate. The more the researcher repeatedly tests the data, the higher the chance of observing a type I error and making an incorrect inference about the existence of a relationship (a simulation of this inflation is sketched at the end of this section).

Unreliability of measures
If the dependent and/or independent variable(s) are not measured reliably (i.e. with large amounts of measurement error), incorrect conclusions can be drawn.

Restriction of range
Restriction of range, such as floor and ceiling effects or selection effects, reduces the power of the experiment and increases the chance of a type II error.[5] This is because correlations are attenuated (weakened) by reduced variability (see, for example, the equation for the Pearson product-moment correlation coefficient, which uses score variance in its estimation).

Heterogeneity of the units under study
Greater heterogeneity of individuals participating in the study can also impact interpretations of results by increasing the variance of results or obscuring true relationships (see also sampling error). This obscures possible interactions between the characteristics of the units and the cause–effect relationship.
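As a numerical illustration of the dredging threat above, a small simulation sketch; the setup (m independent two-sample t-tests on pure noise at alpha = 0.05) is an illustrative assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, m, reps = 0.05, 20, 2_000

false_positives = 0
for _ in range(reps):
    # m independent two-sample t-tests on pure noise: any "significant"
    # result is, by construction, a type I error.
    a = rng.normal(size=(m, 30))
    b = rng.normal(size=(m, 30))
    p = stats.ttest_ind(a, b, axis=1).pvalue
    false_positives += (p < alpha).any()

print("empirical family-wise error:", false_positives / reps)
print("theoretical 1 - (1 - alpha)^m:", 1 - (1 - alpha) ** m)
```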
Threats to internal validity
Any effect that can impact the internal validity of a research study may bias the results and affect the validity of the statistical conclusions reached. These threats to internal validity include unreliability of treatment implementation (lack of standardization) or failing to control for extraneous variables.

See also
• Internal validity
• Statistical model validation
• Test validity
• Validity (statistics)

References
1. Cozby, Paul C. (2009). Methods in behavioral research (10th ed.). Boston: McGraw-Hill Higher Education.
2. Cohen, R. J.; Swerdlik, M. E. (2004). Psychological testing and assessment (6th ed.). Sydney: McGraw-Hill.
3. Cook, T. D.; Campbell, D. T.; Day, A. (1979). Quasi-experimentation: Design & analysis issues for field settings. Houghton Mifflin.
4. Shadish, W.; Cook, T. D.; Campbell, D. T. (2006). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
5. Sackett, P.R.; Lievens, F.; Berry, C.M.; Landers, R.N. (2007). "A Cautionary Note on the Effects of Range Restriction on Predictor Intercorrelations" (PDF). Journal of Applied Psychology. 92 (2): 538–544. doi:10.1037/0021-9010.92.2.538. PMID 17371098.
Deviance (statistics)

In statistics, deviance is a goodness-of-fit statistic for a statistical model; it is often used for statistical hypothesis testing. It is a generalization of the idea of using the sum of squares of residuals (SSR) in ordinary least squares to cases where model-fitting is achieved by maximum likelihood. It plays an important role in exponential dispersion models and generalized linear models. Deviance can be related to the Kullback–Leibler divergence.[1]
Not to be confused with Deviate (statistics), Deviation (statistics), Discrepancy (statistics), or Divergence (statistics).

Definition
The unit deviance[2][3] $d(y,\mu )$ is a bivariate function that satisfies the following conditions:
• $d(y,y)=0$
• $d(y,\mu )>0\quad \forall y\neq \mu $
The total deviance $D(\mathbf {y} ,{\hat {\boldsymbol {\mu }}})$ of a model with predictions ${\hat {\boldsymbol {\mu }}}$ of the observation $\mathbf {y} $ is the sum of its unit deviances: $D(\mathbf {y} ,{\hat {\boldsymbol {\mu }}})=\sum _{i}d(y_{i},{\hat {\mu }}_{i})$.
The (total) deviance for a model M0 with estimates ${\hat {\mu }}=E[Y|{\hat {\theta }}_{0}]$, based on a dataset y, may be constructed from its likelihood as:[4][5] $D(y,{\hat {\mu }})=2\left(\log \left[p(y\mid {\hat {\theta }}_{s})\right]-\log \left[p(y\mid {\hat {\theta }}_{0})\right]\right).$ Here ${\hat {\theta }}_{0}$ denotes the fitted values of the parameters in the model M0, while ${\hat {\theta }}_{s}$ denotes the fitted parameters for the saturated model: both sets of fitted values are implicitly functions of the observations y. Here, the saturated model is a model with a parameter for every observation, so that the data are fitted exactly. This expression is simply 2 times the log-likelihood ratio of the saturated model compared to the reduced model M0.
The deviance is used to compare two models – in particular in the case of generalized linear models (GLMs), where it has a similar role to the residual sum of squares (RSS) from ANOVA in linear models. Suppose in the framework of the GLM we have two nested models, M1 and M2. In particular, suppose that M1 contains the parameters in M2, and k additional parameters. Then, under the null hypothesis that M2 is the true model, the difference between the deviances for the two models follows, based on Wilks' theorem, an approximate chi-squared distribution with k degrees of freedom.[5] This can be used for hypothesis testing on the deviance.
Some usage of the term "deviance" can be confusing. According to Collett:[6] "the quantity $-2\log {\big [}p(y\mid {\hat {\theta }}_{0}){\big ]}$ is sometimes referred to as a deviance. This is [...] inappropriate, since unlike the deviance used in the context of generalized linear modelling, $-2\log {\big [}p(y\mid {\hat {\theta }}_{0}){\big ]}$ does not measure deviation from a model that is a perfect fit to the data." However, since the principal use is in the form of the difference of the deviances of two models, this confusion in definition is unimportant.

Examples
The unit deviance for the Poisson distribution is $d(y,\mu )=2\left(y\log {\frac {y}{\mu }}-y+\mu \right)$, and the unit deviance for the normal distribution is $d(y,\mu )=\left(y-\mu \right)^{2}$.

See also
• Akaike information criterion
• Deviance information criterion
• Hosmer–Lemeshow test, a quality of fit statistic that can be used for binary data
• Pearson's chi-squared test, an alternative quality of fit statistic for generalized linear models for count data
• Peirce's criterion

Notes
1. Hastie, Trevor.
"A closer look at the deviance." The American Statistician 41.1 (1987): 16-20. 2. Jørgensen, B. (1997). The Theory of Dispersion Models. Chapman & Hall. 3. Song, Peter X. -K. (2007). Correlated Data Analysis: Modeling, Analytics, and Applications. Springer Series in Statistics. Springer Series in Statistics. doi:10.1007/978-0-387-71393-9. ISBN 978-0-387-71392-2. 4. Nelder, J.A.; Wedderburn, R.W.M. (1972). "Generalized Linear Models". Journal of the Royal Statistical Society. Series A (General). 135 (3): 370–384. doi:10.2307/2344614. JSTOR 2344614. S2CID 14154576. 5. McCullagh and Nelder (1989): page 17 6. Collett (2003): page 76 References • McCullagh, Peter; Nelder, John (1989). Generalized Linear Models, Second Edition. Chapman & Hall/CRC. ISBN 0-412-31760-5. • Collett, David (2003). Modelling Survival Data in Medical Research, Second Edition. Chapman & Hall/CRC. ISBN 1-58488-325-1. External links • Generalized Linear Models - Edward F. Connor • Lectures notes on Deviance Statistics • Outline • Index Descriptive statistics Continuous data Center • Mean • Arithmetic • Arithmetic-Geometric • Cubic • Generalized/power • Geometric • Harmonic • Heronian • Heinz • Lehmer • Median • Mode Dispersion • Average absolute deviation • Coefficient of variation • Interquartile range • Percentile • Range • Standard deviation • Variance Shape • Central limit theorem • Moments • Kurtosis • L-moments • Skewness Count data • Index of dispersion Summary tables • Contingency table • Frequency distribution • Grouped data Dependence • Partial correlation • Pearson product-moment correlation • Rank correlation • Kendall's τ • Spearman's ρ • Scatter plot Graphics • Bar chart • Biplot • Box plot • Control chart • Correlogram • Fan chart • Forest plot • Histogram • Pie chart • Q–Q plot • Radar chart • Run chart • Scatter plot • Stem-and-leaf display • Violin plot Data collection Study design • Effect size • Missing data • Optimal design • Population • Replication • Sample size determination • Statistic • Statistical power Survey methodology • Sampling • Cluster • Stratified • Opinion poll • Questionnaire • Standard error Controlled experiments • Blocking • Factorial experiment • Interaction • Random assignment • Randomized controlled trial • Randomized experiment • Scientific control Adaptive designs • Adaptive clinical trial • Stochastic approximation • Up-and-down designs Observational studies • Cohort study • Cross-sectional study • Natural experiment • Quasi-experiment Statistical inference Statistical theory • Population • Statistic • Probability distribution • Sampling distribution • Order statistic • Empirical distribution • Density estimation • Statistical model • Model specification • Lp space • Parameter • location • scale • shape • Parametric family • Likelihood (monotone) • Location–scale family • Exponential family • Completeness • Sufficiency • Statistical functional • Bootstrap • U • V • Optimal decision • loss function • Efficiency • Statistical distance • divergence • Asymptotics • Robustness Frequentist inference Point estimation • Estimating equations • Maximum likelihood • Method of moments • M-estimator • Minimum distance • Unbiased estimators • Mean-unbiased minimum-variance • Rao–Blackwellization • Lehmann–Scheffé theorem • Median unbiased • Plug-in Interval estimation • Confidence interval • Pivot • Likelihood interval • Prediction interval • Tolerance interval • Resampling • Bootstrap • Jackknife Testing hypotheses • 1- & 2-tails • Power • Uniformly most powerful test • Permutation test • 
Randomization test • Multiple comparisons Parametric tests • Likelihood-ratio • Score/Lagrange multiplier • Wald Specific tests • Z-test (normal) • Student's t-test • F-test Goodness of fit • Chi-squared • G-test • Kolmogorov–Smirnov • Anderson–Darling • Lilliefors • Jarque–Bera • Normality (Shapiro–Wilk) • Likelihood-ratio test • Model selection • Cross validation • AIC • BIC Rank statistics • Sign • Sample median • Signed rank (Wilcoxon) • Hodges–Lehmann estimator • Rank sum (Mann–Whitney) • Nonparametric anova • 1-way (Kruskal–Wallis) • 2-way (Friedman) • Ordered alternative (Jonckheere–Terpstra) • Van der Waerden test Bayesian inference • Bayesian probability • prior • posterior • Credible interval • Bayes factor • Bayesian estimator • Maximum posterior estimator • Correlation • Regression analysis Correlation • Pearson product-moment • Partial correlation • Confounding variable • Coefficient of determination Regression analysis • Errors and residuals • Regression validation • Mixed effects models • Simultaneous equations models • Multivariate adaptive regression splines (MARS) Linear regression • Simple linear regression • Ordinary least squares • General linear model • Bayesian regression Non-standard predictors • Nonlinear regression • Nonparametric • Semiparametric • Isotonic • Robust • Heteroscedasticity • Homoscedasticity Generalized linear model • Exponential families • Logistic (Bernoulli) / Binomial / Poisson regressions Partition of variance • Analysis of variance (ANOVA, anova) • Analysis of covariance • Multivariate ANOVA • Degrees of freedom Categorical / Multivariate / Time-series / Survival analysis Categorical • Cohen's kappa • Contingency table • Graphical model • Log-linear model • McNemar's test • Cochran–Mantel–Haenszel statistics Multivariate • Regression • Manova • Principal components • Canonical correlation • Discriminant analysis • Cluster analysis • Classification • Structural equation model • Factor analysis • Multivariate distributions • Elliptical distributions • Normal Time-series General • Decomposition • Trend • Stationarity • Seasonal adjustment • Exponential smoothing • Cointegration • Structural break • Granger causality Specific tests • Dickey–Fuller • Johansen • Q-statistic (Ljung–Box) • Durbin–Watson • Breusch–Godfrey Time domain • Autocorrelation (ACF) • partial (PACF) • Cross-correlation (XCF) • ARMA model • ARIMA model (Box–Jenkins) • Autoregressive conditional heteroskedasticity (ARCH) • Vector autoregression (VAR) Frequency domain • Spectral density estimation • Fourier analysis • Least-squares spectral analysis • Wavelet • Whittle likelihood Survival Survival function • Kaplan–Meier estimator (product limit) • Proportional hazards models • Accelerated failure time (AFT) model • First hitting time Hazard function • Nelson–Aalen estimator Test • Log-rank test Applications Biostatistics • Bioinformatics • Clinical trials / studies • Epidemiology • Medical statistics Engineering statistics • Chemometrics • Methods engineering • Probabilistic design • Process / quality control • Reliability • System identification Social statistics • Actuarial science • Census • Crime statistics • Demography • Econometrics • Jurimetrics • National accounts • Official statistics • Population statistics • Psychometrics Spatial statistics • Cartography • Environmental statistics • Geographic information system • Geostatistics • Kriging • Category •  Mathematics portal • Commons • WikiProject Least squares and regression analysis Computational statistics • Least 
squares • Linear least squares • Non-linear least squares • Iteratively reweighted least squares Correlation and dependence • Pearson product-moment correlation • Rank correlation (Spearman's rho • Kendall's tau) • Partial correlation • Confounding variable Regression analysis • Ordinary least squares • Partial least squares • Total least squares • Ridge regression Regression as a statistical model Linear regression • Simple linear regression • Ordinary least squares • Generalized least squares • Weighted least squares • General linear model Predictor structure • Polynomial regression • Growth curve (statistics) • Segmented regression • Local regression Non-standard • Nonlinear regression • Nonparametric • Semiparametric • Robust • Quantile • Isotonic Non-normal errors • Generalized linear model • Binomial • Poisson • Logistic Decomposition of variance • Analysis of variance • Analysis of covariance • Multivariate AOV Model exploration • Stepwise regression • Model selection • Mallows's Cp • AIC • BIC • Model specification • Regression validation Background • Mean and predicted response • Gauss–Markov theorem • Errors and residuals • Goodness of fit • Studentized residual • Minimum mean-square error • Frisch–Waugh–Lovell theorem Design of experiments • Response surface methodology • Optimal design • Bayesian design Numerical approximation • Numerical analysis • Approximation theory • Numerical integration • Gaussian quadrature • Orthogonal polynomials • Chebyshev polynomials • Chebyshev nodes Applications • Curve fitting • Calibration curve • Numerical smoothing and differentiation • System identification • Moving least squares • Regression analysis category • Statistics category •  Mathematics portal • Statistics outline • Statistics topics
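To make the Poisson unit deviance from the Examples section concrete, a minimal sketch; the counts and the intercept-only model are illustrative assumptions:

```python
import numpy as np
from scipy import stats

y = np.array([2, 4, 3, 7, 5, 6])                   # assumed Poisson counts
mu0 = np.full_like(y, y.mean(), dtype=float)       # intercept-only fitted means

def poisson_deviance(y, mu):
    # Sum of unit deviances d(y, mu) = 2*(y*log(y/mu) - y + mu),
    # with the convention y*log(y/mu) = 0 when y = 0.
    t = np.where(y > 0, y * np.log(y / mu), 0.0)
    return 2 * np.sum(t - y + mu)

D0 = poisson_deviance(y, mu0)
print("deviance of intercept-only model:", D0)

# Compared against the saturated model (mu = y, deviance 0): under the
# null, D0 is approximately chi-squared with n - 1 degrees of freedom.
print("p-value:", stats.chi2.sf(D0, df=len(y) - 1))
```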
Empirical distribution function

In statistics, an empirical distribution function (commonly also called an empirical cumulative distribution function, eCDF) is the distribution function associated with the empirical measure of a sample.[1] This cumulative distribution function is a step function that jumps up by 1/n at each of the n data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value.
See also: Frequency distribution
The green curve, which asymptotically approaches heights of 0 and 1 without reaching them, is the true cumulative distribution function of the standard normal distribution. The grey hash marks represent the observations in a particular sample drawn from that distribution, and the horizontal steps of the blue step function (including the leftmost point in each step but not including the rightmost point) form the empirical distribution function of that sample.
The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution, according to the Glivenko–Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function.

Definition
Let (X1, …, Xn) be independent, identically distributed real random variables with the common cumulative distribution function F(t). Then the empirical distribution function is defined as[2] ${\widehat {F}}_{n}(t)={\frac {{\mbox{number of elements in the sample}}\leq t}{n}}={\frac {1}{n}}\sum _{i=1}^{n}\mathbf {1} _{X_{i}\leq t},$ where $\mathbf {1} _{A}$ is the indicator of event A. For a fixed t, the indicator $\mathbf {1} _{X_{i}\leq t}$ is a Bernoulli random variable with parameter p = F(t); hence $n{\widehat {F}}_{n}(t)$ is a binomial random variable with mean nF(t) and variance nF(t)(1 − F(t)). This implies that ${\widehat {F}}_{n}(t)$ is an unbiased estimator for F(t).
However, in some textbooks, the definition is given as ${\widehat {F}}_{n}(t)={\frac {1}{n+1}}\sum _{i=1}^{n}\mathbf {1} _{X_{i}\leq t}.$[3][4]

Mean
The mean of the empirical distribution is an unbiased estimator of the mean of the population distribution: $E_{n}(X)={\frac {1}{n}}\left(\sum _{i=1}^{n}{x_{i}}\right),$ which is more commonly denoted ${\bar {x}}$.

Variance
The variance of the empirical distribution times ${\tfrac {n}{n-1}}$ is an unbiased estimator of the variance of the population distribution, for any distribution of X that has a finite variance: ${\begin{aligned}\operatorname {Var} (X)&=\operatorname {E} \left[(X-\operatorname {E} [X])^{2}\right]\\[4pt]&=\operatorname {E} \left[(X-{\bar {x}})^{2}\right]\\[4pt]&={\frac {1}{n}}\left(\sum _{i=1}^{n}{(x_{i}-{\bar {x}})^{2}}\right).\end{aligned}}$

Mean squared error
The mean squared error for the empirical distribution is as follows: ${\begin{aligned}\operatorname {MSE} &={\frac {1}{n}}\sum _{i=1}^{n}(Y_{i}-{\hat {Y_{i}}})^{2}\\[4pt]&=\operatorname {Var} _{\hat {\theta }}({\hat {\theta }})+\operatorname {Bias} ({\hat {\theta }},\theta )^{2},\end{aligned}}$ where ${\hat {\theta }}$ is an estimator and $\theta $ an unknown parameter.

Quantiles
For any real number $a$, the notation $\lceil {a}\rceil $ (read "ceiling of a") denotes the least integer greater than or equal to $a$, and the notation $\lfloor {a}\rfloor $ (read "floor of a") denotes the greatest integer less than or equal to $a$.
If $nq$ is not an integer, then the $q$-th quantile is unique and is equal to $x_{(\lceil {nq}\rceil )}$.
If $nq$ is an integer, then the $q$-th quantile is not unique and is any real number $x$ such that $x_{({nq})}<x<x_{({nq+1})}$.

Empirical median
If $n$ is odd, then the empirical median is the number ${\tilde {x}}=x_{(\lceil {n/2}\rceil )}$.
If $n$ is even, then the empirical median is the number ${\tilde {x}}={\frac {x_{(n/2)}+x_{(n/2+1)}}{2}}.$

Asymptotic properties
Since the ratio (n + 1)/n approaches 1 as n goes to infinity, the asymptotic properties of the two definitions that are given above are the same. By the strong law of large numbers, the estimator ${\widehat {F}}_{n}(t)$ converges to F(t) as n → ∞ almost surely, for every value of t:[2] ${\widehat {F}}_{n}(t)\ {\xrightarrow {\text{a.s.}}}\ F(t);$ thus the estimator ${\widehat {F}}_{n}(t)$ is consistent. This expression asserts the pointwise convergence of the empirical distribution function to the true cumulative distribution function. There is a stronger result, called the Glivenko–Cantelli theorem, which states that the convergence in fact happens uniformly over t:[5] $\|{\widehat {F}}_{n}-F\|_{\infty }\equiv \sup _{t\in \mathbb {R} }{\big |}{\widehat {F}}_{n}(t)-F(t){\big |}\ {\xrightarrow {\text{a.s.}}}\ 0.$ The sup-norm in this expression is called the Kolmogorov–Smirnov statistic for testing the goodness-of-fit between the empirical distribution ${\widehat {F}}_{n}(t)$ and the assumed true cumulative distribution function F. Other norm functions may be reasonably used here instead of the sup-norm. For example, the L2-norm gives rise to the Cramér–von Mises statistic.
The asymptotic distribution can be further characterized in several different ways. First, the central limit theorem states that pointwise, ${\widehat {F}}_{n}(t)$ has an asymptotically normal distribution with the standard ${\sqrt {n}}$ rate of convergence:[2] ${\sqrt {n}}{\big (}{\widehat {F}}_{n}(t)-F(t){\big )}\ \ {\xrightarrow {d}}\ \ {\mathcal {N}}{\Big (}0,F(t){\big (}1-F(t){\big )}{\Big )}.$ This result is extended by Donsker's theorem, which asserts that the empirical process ${\sqrt {n}}({\widehat {F}}_{n}-F)$, viewed as a function indexed by $t\in \mathbb {R} $, converges in distribution in the Skorokhod space $D[-\infty ,+\infty ]$ to the mean-zero Gaussian process $G_{F}=B\circ F$, where B is the standard Brownian bridge.[5] The covariance structure of this Gaussian process is $\operatorname {E} [\,G_{F}(t_{1})G_{F}(t_{2})\,]=F(t_{1}\wedge t_{2})-F(t_{1})F(t_{2}).$ The uniform rate of convergence in Donsker's theorem can be quantified by the result known as the Hungarian embedding:[6] $\limsup _{n\to \infty }{\frac {\sqrt {n}}{\ln ^{2}n}}{\big \|}{\sqrt {n}}({\widehat {F}}_{n}-F)-G_{F,n}{\big \|}_{\infty }<\infty ,\quad {\text{a.s.}}$ Alternatively, the rate of convergence of ${\sqrt {n}}({\widehat {F}}_{n}-F)$ can also be quantified in terms of the asymptotic behavior of the sup-norm of this expression.
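The definitions above translate almost line for line into code. A minimal NumPy sketch, with an assumed sample (scipy.stats.ecdf and statsmodels' ECDF, listed among the implementations below, offer ready-made equivalents):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.normal(size=25))        # assumed sample, sorted once

def ecdf(t):
    # Fraction of observations <= t: the step function F_hat_n(t).
    return np.searchsorted(x, t, side="right") / x.size

print("F_hat(0) =", ecdf(0.0))
print("mean     =", x.mean())
print("variance (times n/(n-1), for unbiasedness) =", x.var(ddof=1))

# q-th empirical quantile: the order statistic x_(ceil(n*q)) when n*q
# is not an integer (1-based order statistic, hence the -1 below).
n, q = x.size, 0.3
print("0.3-quantile =", x[int(np.ceil(n * q)) - 1])
print("median       =", np.median(x))
```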
A number of results exist in this direction; for example, the Dvoretzky–Kiefer–Wolfowitz inequality provides a bound on the tail probabilities of ${\sqrt {n}}\|{\widehat {F}}_{n}-F\|_{\infty }$:[6] $\Pr \!{\Big (}{\sqrt {n}}\|{\widehat {F}}_{n}-F\|_{\infty }>z{\Big )}\leq 2e^{-2z^{2}}.$ In fact, Kolmogorov has shown that if the cumulative distribution function F is continuous, then the expression ${\sqrt {n}}\|{\widehat {F}}_{n}-F\|_{\infty }$ converges in distribution to $\|B\|_{\infty }$, which has the Kolmogorov distribution that does not depend on the form of F. Another result, which follows from the law of the iterated logarithm, is that[6] $\limsup _{n\to \infty }{\frac {{\sqrt {n}}\|{\widehat {F}}_{n}-F\|_{\infty }}{\sqrt {2\ln \ln n}}}\leq {\frac {1}{2}},\quad {\text{a.s.}}$ and $\liminf _{n\to \infty }{\sqrt {\frac {2n}{\ln \ln n}}}\,\|{\widehat {F}}_{n}-F\|_{\infty }={\frac {\pi }{2}},\quad {\text{a.s.}}$

Confidence intervals
By the Dvoretzky–Kiefer–Wolfowitz inequality, an interval that contains the true CDF, $F(x)$, with probability $1-\alpha $ is specified as $F_{n}(x)-\varepsilon \leq F(x)\leq F_{n}(x)+\varepsilon \;{\text{ where }}\varepsilon ={\sqrt {\frac {\ln {\frac {2}{\alpha }}}{2n}}}.$ Using these bounds, one can plot the empirical CDF, the true CDF, and the confidence band for different distributions with any of the statistical implementations below.

Statistical implementation
A non-exhaustive list of software implementations of the empirical distribution function includes:
• In R, the ecdf function computes an empirical cumulative distribution function, with several methods for plotting, printing and computing with such an "ecdf" object.
• In MATLAB, the Empirical cumulative distribution function (cdf) plot can be used.
• JMP from SAS: the CDF plot creates a plot of the empirical cumulative distribution function.
• Minitab: create an Empirical CDF.
• Mathwave: fit a probability distribution to data.
• Dataplot: plot an Empirical CDF.
• SciPy: scipy.stats.ecdf.
• Statsmodels: statsmodels.distributions.empirical_distribution.ECDF.
• Matplotlib: histograms can be used to plot a cumulative distribution.
• Seaborn: the seaborn.ecdfplot function.
• Plotly: the plotly.express.ecdf function.
• Excel: plot an Empirical CDF.

See also
• Càdlàg functions
• Count data
• Distribution fitting
• Dvoretzky–Kiefer–Wolfowitz inequality
• Empirical probability
• Empirical process
• Estimating quantiles from a sample
• Frequency (statistics)
• Empirical likelihood
• Kaplan–Meier estimator for censored processes
• Survival function
• Q–Q plot

References
1. A Modern Introduction to Probability and Statistics: Understanding Why and How. Michel Dekking. London: Springer. 2005. p. 219. ISBN 978-1-85233-896-1. OCLC 262680588.
2. van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. p. 265. ISBN 0-521-78450-6.
3. Coles, S. (2001) An Introduction to Statistical Modeling of Extreme Values. Springer, p. 36, Definition 2.4. ISBN 978-1-4471-3675-0.
4. Madsen, H.O., Krenk, S., Lind, S.C. (2006) Methods of Structural Safety. Dover Publications. pp. 148–149. ISBN 0486445976
5. van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. p. 266. ISBN 0-521-78450-6.
6. van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. p. 268. ISBN 0-521-78450-6.

Further reading
• Shorack, G.R.; Wellner, J.A. (1986).
Empirical Processes with Applications to Statistics. New York: Wiley. ISBN 0-471-86725-X.

External links
• Media related to Empirical distribution functions at Wikimedia Commons
Stochastic dominance

Stochastic dominance is a partial order between random variables.[1][2] It is a form of stochastic ordering. The concept arises in decision theory and decision analysis in situations where one gamble (a probability distribution over possible outcomes, also known as prospects) can be ranked as superior to another gamble for a broad class of decision-makers. It is based on shared preferences regarding sets of possible outcomes and their associated probabilities. Only limited knowledge of preferences is required for determining dominance. Risk aversion is a factor only in second-order stochastic dominance.
Stochastic dominance does not give a total order, but rather only a partial order: for some pairs of gambles, neither one stochastically dominates the other, because different members of the broad class of decision-makers will disagree about which gamble is preferable, without the two generally being considered equally attractive.
Throughout the article, $\rho ,\nu $ stand for probability distributions on $\mathbb {R} $, while $A,B,X,Y,Z$ stand for particular random variables on $\mathbb {R} $. The notation $X\sim \rho $ means that $X$ has distribution $\rho $.
There is a sequence of stochastic dominance orderings, from first $\succeq _{1}$, to second $\succeq _{2}$, to higher orders $\succeq _{n}$. The sequence is increasingly inclusive. That is, if $\rho \succeq _{n}\nu $, then $\rho \succeq _{k}\nu $ for all $k\geq n$. Further, there exist $\rho ,\nu $ such that $\rho \succeq _{n+1}\nu $ but not $\rho \succeq _{n}\nu $.
Stochastic dominance can be traced back to Blackwell (1953),[3] but it was not developed until 1969–1970.[4]

Statewise dominance
The simplest case of stochastic dominance is statewise dominance (also known as state-by-state dominance), defined as follows: Random variable A is statewise dominant over random variable B if A gives at least as good a result in every state (every possible set of outcomes), and a strictly better result in at least one state.
For example, if a dollar is added to one or more prizes in a lottery, the new lottery statewise dominates the old one because it yields a better payout regardless of the specific numbers realized by the lottery. Similarly, if an insurance policy has a lower premium and better coverage than another policy, then with or without damage, the outcome is better. Anyone who prefers more to less (in the standard terminology, anyone who has monotonically increasing preferences) will always prefer a statewise dominant gamble.

First-order
Statewise dominance implies first-order stochastic dominance (FSD),[5] which is defined as: Random variable A has first-order stochastic dominance over random variable B if for any outcome x, A gives at least as high a probability of receiving at least x as does B, and for some x, A gives a higher probability of receiving at least x. In notation form, $P[A\geq x]\geq P[B\geq x]$ for all x, and for some x, $P[A\geq x]>P[B\geq x]$.
In terms of the cumulative distribution functions of the two random variables, A dominating B means that $F_{A}(x)\leq F_{B}(x)$ for all x, with strict inequality at some x.
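The CDF characterization just stated is easy to check numerically for discrete gambles; a minimal sketch, where the payoff distributions are illustrative assumptions:

```python
import numpy as np

# Two assumed gambles on the common support {1, 2, 3}:
support = np.array([1, 2, 3])
pA = np.array([0.2, 0.3, 0.5])   # P(A = value)
pB = np.array([0.4, 0.4, 0.2])

FA, FB = np.cumsum(pA), np.cumsum(pB)

# A first-order dominates B iff F_A(x) <= F_B(x) everywhere,
# with strict inequality somewhere.
fsd = np.all(FA <= FB) and np.any(FA < FB)
print("A >=_1 B:", fsd)
```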
Equivalent definitions
Let $\rho ,\nu $ be two probability distributions on $\mathbb {R} $, such that $\mathbb {E} _{X\sim \rho }[|X|],\mathbb {E} _{X\sim \nu }[|X|]$ are both finite. Then the following conditions are equivalent, and thus may all serve as the definition of first-order stochastic dominance:[6]
• For any $u:\mathbb {R} \to \mathbb {R} $ that is non-decreasing, $\mathbb {E} _{X\sim \rho }[u(X)]\geq \mathbb {E} _{X\sim \nu }[u(X)]$.
• $F_{\rho }(t)\leq F_{\nu }(t),\quad \forall t\in \mathbb {R} .$
• There exist two random variables $X\sim \rho ,Y\sim \nu $ such that $X=Y+\delta $, where $\delta \geq 0$.
The first definition states that a gamble $\rho $ first-order stochastically dominates gamble $\nu $ if and only if every expected utility maximizer with an increasing utility function prefers gamble $\rho $ over gamble $\nu $. The third definition states that we can construct a pair of gambles $X,Y$ with distributions $\rho ,\nu $ such that gamble $X$ always pays at least as much as gamble $Y$. More concretely, construct first a uniformly distributed $Z\sim \mathrm{Uniform}(0,1)$, then use inverse transform sampling to get $X=F_{X}^{-1}(Z),Y=F_{Y}^{-1}(Z)$; then $X\geq Y$ for any $Z$. Pictorially, the second and third definitions are equivalent, because we can go from the graphed density function of A to that of B both by pushing it upwards and pushing it leftwards.

Extended example
Consider three gambles over a single toss of a fair six-sided die:
${\begin{array}{rcccccc}{\text{State (die result)}}&1&2&3&4&5&6\\\hline {\text{gamble A wins }}\$&1&1&2&2&2&2\\{\text{gamble B wins }}\$&1&1&1&2&2&2\\{\text{gamble C wins }}\$&3&3&3&1&1&1\\\hline \end{array}}$
Gamble A statewise dominates gamble B because A gives at least as good a yield in all possible states (outcomes of the die roll) and gives a strictly better yield in one of them (state 3). Since A statewise dominates B, it also first-order dominates B.
Gamble C does not statewise dominate B because B gives a better yield in states 4 through 6, but C first-order stochastically dominates B because Pr(B ≥ 1) = Pr(C ≥ 1) = 1, Pr(B ≥ 2) = Pr(C ≥ 2) = 3/6, and Pr(B ≥ 3) = 0 while Pr(C ≥ 3) = 3/6 > Pr(B ≥ 3).
Gambles A and C cannot be ordered relative to each other on the basis of first-order stochastic dominance because Pr(A ≥ 2) = 4/6 > Pr(C ≥ 2) = 3/6 while on the other hand Pr(C ≥ 3) = 3/6 > Pr(A ≥ 3) = 0. These claims are verified numerically in the sketch below.
Although when one gamble first-order stochastically dominates a second gamble the expected value of the payoff under the first will be greater than the expected value of the payoff under the second, the converse is not true: one cannot order lotteries with regard to stochastic dominance simply by comparing the means of their probability distributions. For instance, in the above example C has a higher mean (2) than does A (5/3), yet C does not first-order dominate A.

Second-order
The other commonly used type of stochastic dominance is second-order stochastic dominance.[1][7][8] Roughly speaking, for two gambles $\rho $ and $\nu $, gamble $\rho $ has second-order stochastic dominance over gamble $\nu $ if the former is more predictable (i.e. involves less risk) and has at least as high a mean. All risk-averse expected-utility maximizers (that is, those with increasing and concave utility functions) prefer a second-order stochastically dominant gamble to a dominated one.
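A hedged numerical check of the dominance claims in the extended example above, with the payoff probabilities read off the table:

```python
import numpy as np

vals = np.arange(1, 4)               # possible payoffs 1, 2, 3
# Payoff probabilities over the fair die, from the table above:
pA = np.array([2, 4, 0]) / 6         # A pays 1 twice and 2 four times
pB = np.array([3, 3, 0]) / 6
pC = np.array([3, 0, 3]) / 6

def fsd(p, q):
    # First-order dominance via the CDF characterization.
    Fp, Fq = np.cumsum(p), np.cumsum(q)
    return np.all(Fp <= Fq) and np.any(Fp < Fq)

print("A >=_1 B:", fsd(pA, pB))      # True: statewise implies first-order
print("C >=_1 B:", fsd(pC, pB))      # True
print("A >=_1 C:", fsd(pA, pC), " C >=_1 A:", fsd(pC, pA))  # both False
print("means:", vals @ pA, vals @ pB, vals @ pC)  # 5/3, 3/2, 2
```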
Second-order dominance describes the shared preferences of a smaller class of decision-makers (those for whom more is better and who are averse to risk, rather than all those for whom more is better) than does first-order dominance.
In terms of cumulative distribution functions $F_{\rho }$ and $F_{\nu }$, $\rho $ is second-order stochastically dominant over $\nu $ if and only if $\int _{-\infty }^{x}[F_{\nu }(t)-F_{\rho }(t)]\,dt\geq 0$ for all $x$, with strict inequality at some $x$. Equivalently, $\rho $ dominates $\nu $ in the second order if and only if $\mathbb {E} _{X\sim \rho }[u(X)]\geq \mathbb {E} _{X\sim \nu }[u(X)]$ for all nondecreasing and concave utility functions $u(x)$.
Second-order stochastic dominance can also be expressed as follows: Gamble $\rho $ second-order stochastically dominates $\nu $ if and only if there exist some gambles $y$ and $z$ such that $x_{\nu }{\overset {d}{=}}(x_{\rho }+y+z)$, with $y$ always less than or equal to zero, and with $\mathbb {E} (z\mid x_{\rho }+y)=0$ for all values of $x_{\rho }+y$. Here the introduction of random variable $y$ makes $\nu $ first-order stochastically dominated by $\rho $ (making $\nu $ disliked by those with an increasing utility function), and the introduction of random variable $z$ introduces a mean-preserving spread in $\nu $, which is disliked by those with concave utility. Note that if $\rho $ and $\nu $ have the same mean (so that the random variable $y$ degenerates to the fixed number 0), then $\nu $ is a mean-preserving spread of $\rho $.

Equivalent definitions
Let $\rho ,\nu $ be two probability distributions on $\mathbb {R} $, such that $\mathbb {E} _{X\sim \rho }[|X|],\mathbb {E} _{X\sim \nu }[|X|]$ are both finite. Then the following conditions are equivalent, and thus may all serve as the definition of second-order stochastic dominance:[6]
• For any $u:\mathbb {R} \to \mathbb {R} $ that is non-decreasing and (not necessarily strictly) concave, $\mathbb {E} _{X\sim \rho }[u(X)]\geq \mathbb {E} _{X\sim \nu }[u(X)]$.
• $\int _{-\infty }^{t}F_{\rho }(x)dx\leq \int _{-\infty }^{t}F_{\nu }(x)dx,\quad \forall t\in \mathbb {R} .$
• There exist two random variables $X\sim \rho ,Y\sim \nu $ such that $X=Y+\delta +\epsilon $, where $\delta \geq 0$ and $\mathbb {E} [\epsilon |Y+\delta ]=0$.
These are analogous to the equivalent definitions of first-order stochastic dominance given above.

Sufficient conditions
• First-order stochastic dominance of A over B is a sufficient condition for second-order dominance of A over B.
• If B is a mean-preserving spread of A, then A second-order stochastically dominates B.

Necessary conditions
• $\mathbb {E} _{\rho }(x)\geq \mathbb {E} _{\nu }(x)$ is a necessary condition for A to second-order stochastically dominate B.
• $\min _{\rho }(x)\geq \min _{\nu }(x)$ is a necessary condition for A to second-order dominate B. The condition implies that the left tail of $F_{\nu }$ must be thicker than the left tail of $F_{\rho }$.

Third-order
Let $F_{\rho }$ and $F_{\nu }$ be the cumulative distribution functions of two distinct investments $\rho $ and $\nu $. $\rho $ dominates $\nu $ in the third order if and only if both
• $\int _{-\infty }^{x}\left(\int _{-\infty }^{z}[F_{\nu }(t)-F_{\rho }(t)]\,dt\right)dz\geq 0{\text{ for all }}x,$
• $\mathbb {E} _{\rho }(x)\geq \mathbb {E} _{\nu }(x)$.
Equivalently, $\rho $ dominates $\nu $ in the third order if and only if $\mathbb {E} _{\rho }U(x)\geq \mathbb {E} _{\nu }U(x)$ for all $U\in D_{3}$.
The set $D_{3}$ has two equivalent definitions:
• the set of nondecreasing, concave utility functions that are positively skewed (that is, have a nonnegative third derivative throughout);[9]
• the set of nondecreasing, concave utility functions such that for any random variable $Z$, the risk-premium function $\pi _{u}(x,Z)$ is a monotonically nonincreasing function of $x$.[10]
Here, $\pi _{u}(x,Z)$ is defined as the solution to the problem $u(x+\mathbb {E} [Z]-\pi )=\mathbb {E} [u(x+Z)].$ See more details at the risk premium page.

Sufficient condition
• Second-order dominance is a sufficient condition.

Necessary conditions
• $\mathbb {E} _{\rho }(\log(x))\geq \mathbb {E} _{\nu }(\log(x))$ is a necessary condition. The condition implies that the geometric mean of $\rho $ must be greater than or equal to the geometric mean of $\nu $.
• $\min _{\rho }(x)\geq \min _{\nu }(x)$ is a necessary condition. The condition implies that the left tail of $F_{\nu }$ must be thicker than the left tail of $F_{\rho }$.

Higher-order
Higher orders of stochastic dominance have also been analyzed, as have generalizations of the dual relationship between stochastic dominance orderings and classes of preference functions.[11] Arguably the most powerful dominance criterion relies on the accepted economic assumption of decreasing absolute risk aversion.[12][13] This involves several analytical challenges, and a research effort is under way to address them.[14]
Formally, the n-th-order stochastic dominance is defined as follows:[15]
• For any probability distribution $\rho $ on $[0,\infty )$, define the functions inductively: $F_{\rho }^{1}(t)=F_{\rho }(t),\quad F_{\rho }^{2}(t)=\int _{0}^{t}F_{\rho }^{1}(x)dx,\quad \cdots $
• For any two probability distributions $\rho ,\nu $ on $[0,\infty )$, non-strict and strict n-th-order stochastic dominance are defined as $\rho \succeq _{n}\nu \quad {\text{ iff }}\quad F_{\rho }^{n}\leq F_{\nu }^{n}{\text{ on }}[0,\infty )$ and $\rho \succ _{n}\nu \quad {\text{ iff }}\quad \rho \succeq _{n}\nu {\text{ and }}\rho \neq \nu .$
These relations are transitive and increasingly more inclusive. That is, if $\rho \succeq _{n}\nu $, then $\rho \succeq _{k}\nu $ for all $k\geq n$. Further, there exist $\rho ,\nu $ such that $\rho \succeq _{n+1}\nu $ but not $\rho \succeq _{n}\nu $.
Define the k-th moment by $\mu _{k}(\rho )=\mathbb {E} _{X\sim \rho }[X^{k}]=\int x^{k}dF_{\rho }(x)$; then
Theorem — If $\rho \succ _{n}\nu $ are on $[0,\infty )$ with finite moments $\mu _{k}(\rho ),\mu _{k}(\nu )$ for all $k=1,2,...,n$, then $(\mu _{1}(\rho ),...,\mu _{n}(\rho ))\succ _{n}^{*}(\mu _{1}(\nu ),...,\mu _{n}(\nu ))$.
Here, the partial ordering $\succ _{n}^{*}$ is defined on $\mathbb {R} ^{n}$ by $v\succ _{n}^{*}w$ iff $v\neq w$ and, letting $k$ be the smallest index such that $v_{k}\neq w_{k}$, we have $(-1)^{k-1}v_{k}>(-1)^{k-1}w_{k}$.

Constraints
Stochastic dominance relations may be used as constraints in problems of mathematical optimization, in particular stochastic programming.[16][17][18] In a problem of maximizing a real functional $f(X)$ over random variables $X$ in a set $X_{0}$, we may additionally require that $X$ stochastically dominates a fixed random benchmark $B$. In these problems, utility functions play the role of Lagrange multipliers associated with stochastic dominance constraints. Under appropriate conditions, the solution of the problem is also a (possibly local) solution of the problem to maximize $f(X)+\mathbb {E} [u(X)-u(B)]$ over $X$ in $X_{0}$, where $u(x)$ is a certain utility function.
If the first-order stochastic dominance constraint is employed, the utility function $u(x)$ is nondecreasing; if the second-order stochastic dominance constraint is used, $u(x)$ is nondecreasing and concave. A system of linear equations can test whether a given solution is efficient for any such utility function.[19] Third-order stochastic dominance constraints can be dealt with using convex quadratically constrained programming (QCP).[20]

See also

• Modern portfolio theory
• Marginal conditional stochastic dominance
• Responsive set extension - equivalent to stochastic dominance in the context of preference relations.
• Quantum catalyst
• Ordinal Pareto efficiency
• Lexicographic dominance

References

1. Hadar, J.; Russell, W. (1969). "Rules for Ordering Uncertain Prospects". American Economic Review. 59 (1): 25–34. JSTOR 1811090. 2. Bawa, Vijay S. (1975). "Optimal Rules for Ordering Uncertain Prospects". Journal of Financial Economics. 2 (1): 95–121. doi:10.1016/0304-405X(75)90025-2. 3. Blackwell, David (June 1953). "Equivalent Comparisons of Experiments". The Annals of Mathematical Statistics. 24 (2): 265–272. doi:10.1214/aoms/1177729032. ISSN 0003-4851. 4. Levy, Haim (1990), Eatwell, John; Milgate, Murray; Newman, Peter (eds.), "Stochastic Dominance", Utility and Probability, London: Palgrave Macmillan UK, pp. 251–254, doi:10.1007/978-1-349-20568-4_34, ISBN 978-1-349-20568-4, retrieved 2022-12-23 5. Quirk, J. P.; Saposnik, R. (1962). "Admissibility and Measurable Utility Functions". Review of Economic Studies. 29 (2): 140–146. doi:10.2307/2295819. JSTOR 2295819. 6. Mas-Colell, Andreu (1995). Microeconomic Theory. Michael Dennis Whinston, Jerry R. Green. New York. Proposition 6.D.1. ISBN 0-19-507340-1. OCLC 32430901. 7. Hanoch, G.; Levy, H. (1969). "The Efficiency Analysis of Choices Involving Risk". Review of Economic Studies. 36 (3): 335–346. doi:10.2307/2296431. JSTOR 2296431. 8. Rothschild, M.; Stiglitz, J. E. (1970). "Increasing Risk: I. A Definition". Journal of Economic Theory. 2 (3): 225–243. doi:10.1016/0022-0531(70)90038-4. 9. Chan, Raymond H.; Clark, Ephraim; Wong, Wing-Keung (2012-11-16). "On the Third Order Stochastic Dominance for Risk-Averse and Risk-Seeking Investors". mpra.ub.uni-muenchen.de. Retrieved 2022-12-25. 10. Whitmore, G. A. (1970). "Third-Degree Stochastic Dominance". The American Economic Review. 60 (3): 457–459. ISSN 0002-8282. JSTOR 1817999. 11. Ekern, Steinar (1980). "Increasing Nth Degree Risk". Economics Letters. 6 (4): 329–333. doi:10.1016/0165-1765(80)90005-1. 12. Vickson, R.G. (1975). "Stochastic Dominance Tests for Decreasing Absolute Risk Aversion. I. Discrete Random Variables". Management Science. 21 (12): 1438–1446. doi:10.1287/mnsc.21.12.1438. 13. Vickson, R.G. (1977). "Stochastic Dominance Tests for Decreasing Absolute Risk Aversion. II. General Random Variables". Management Science. 23 (5): 478–489. doi:10.1287/mnsc.23.5.478. 14. See, e.g. Post, Th.; Fang, Y.; Kopa, M. (2015). "Linear Tests for DARA Stochastic Dominance". Management Science. 61 (7): 1615–1629. doi:10.1287/mnsc.2014.1960. 15. Fishburn, Peter C. (1980-02-01). "Stochastic Dominance and Moments of Distributions". Mathematics of Operations Research. 5 (1): 94–100. doi:10.1287/moor.5.1.94. ISSN 0364-765X. 16. Dentcheva, D.; Ruszczyński, A. (2003). "Optimization with Stochastic Dominance Constraints". SIAM Journal on Optimization. 14 (2): 548–566. CiteSeerX 10.1.1.201.7815. doi:10.1137/S1052623402420528. S2CID 12502544. 17.
Kuosmanen, T (2004). "Efficient diversification according to stochastic dominance criteria". Management Science. 50 (10): 1390–1406. doi:10.1287/mnsc.1040.0284. 18. Dentcheva, D.; Ruszczyński, A. (2004). "Semi-Infinite Probabilistic Optimization: First Order Stochastic Dominance Constraints". Optimization. 53 (5–6): 583–601. doi:10.1080/02331930412331327148. S2CID 122168294. 19. Post, Th (2003). "Empirical tests for stochastic dominance efficiency". Journal of Finance. 58 (5): 1905–1932. doi:10.1111/1540-6261.00592. 20. Post, Thierry; Kopa, Milos (2016). "Portfolio Choice Based on Third-Degree Stochastic Dominance". Management Science. 63 (10): 3381–3392. doi:10.1287/mnsc.2016.2506. SSRN 2687104.
Mathematical economics

Mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. Often, these applied methods are beyond simple geometry, and may include differential and integral calculus, difference and differential equations, matrix algebra, mathematical programming, or other computational methods.[1][2] Proponents of this approach claim that it allows the formulation of theoretical relationships with rigor, generality, and simplicity.[3]

Mathematics allows economists to form meaningful, testable propositions about wide-ranging and complex subjects which could less easily be expressed informally. Further, the language of mathematics allows economists to make specific, positive claims about controversial or contentious subjects that would be impossible without mathematics.[4] Much of economic theory is currently presented in terms of mathematical economic models, a set of stylized and simplified mathematical relationships asserted to clarify assumptions and implications.[5]

Broad applications include:

• optimization problems as to goal equilibrium, whether of a household, business firm, or policy maker
• static (or equilibrium) analysis in which the economic unit (such as a household) or economic system (such as a market or the economy) is modeled as not changing
• comparative statics as to a change from one equilibrium to another induced by a change in one or more factors
• dynamic analysis, tracing changes in an economic system over time, for example from economic growth.[2][6][7]

Formal economic modeling began in the 19th century with the use of differential calculus to represent and explain economic behavior, such as utility maximization, an early economic application of mathematical optimization.
Economics became more mathematical as a discipline throughout the first half of the 20th century, but the introduction of new and generalized techniques in the period around the Second World War, as in game theory, would greatly broaden the use of mathematical formulations in economics.[8][7] This rapid systematizing of economics alarmed critics of the discipline as well as some noted economists. John Maynard Keynes, Robert Heilbroner, Friedrich Hayek and others have criticized the broad use of mathematical models for human behavior, arguing that some human choices are irreducible to mathematics.

History

The use of mathematics in the service of social and economic analysis dates back to the 17th century. Then, mainly in German universities, a style of instruction emerged which dealt specifically with detailed presentation of data as it related to public administration. Gottfried Achenwall lectured in this fashion, coining the term statistics. At the same time, a small group of professors in England established a method of "reasoning by figures upon things relating to government" and referred to this practice as Political Arithmetick.[9] Sir William Petty wrote at length on issues that would later concern economists, such as taxation, velocity of money and national income, but while his analysis was numerical, he rejected abstract mathematical methodology. Petty's use of detailed numerical data (along with John Graunt) would influence statisticians and economists for some time, even though Petty's works were largely ignored by English scholars.[10]

The mathematization of economics began in earnest in the 19th century. Most of the economic analysis of the time was what would later be called classical economics. Subjects were discussed and dispensed with through algebraic means, but calculus was not used. More importantly, until Johann Heinrich von Thünen's The Isolated State in 1826, economists did not develop explicit and abstract models for behavior in order to apply the tools of mathematics. Thünen's model of farmland use represents the first example of marginal analysis.[11] Thünen's work was largely theoretical, but he also mined empirical data in order to attempt to support his generalizations. In comparison to his contemporaries, Thünen built economic models and tools, rather than applying previous tools to new problems.[12]

Meanwhile, a new cohort of scholars trained in the mathematical methods of the physical sciences gravitated to economics, advocating and applying those methods to their subject,[13] in a shift described today as moving from geometry to mechanics.[14] These included W.S. Jevons, who presented a paper on a "general mathematical theory of political economy" in 1862, providing an outline for the use of the theory of marginal utility in political economy.[15] In 1871, he published The Theory of Political Economy, declaring that the subject as science "must be mathematical simply because it deals with quantities". Jevons expected that only the collection of statistics for price and quantities would permit the subject as presented to become an exact science.[16] Others preceded and followed in expanding mathematical representations of economic problems.[17]
Marginalists and the roots of neoclassical economics

Augustin Cournot and Léon Walras built the tools of the discipline axiomatically around utility, arguing that individuals sought to maximize their utility across choices in a way that could be described mathematically.[18] At the time, it was thought that utility was quantifiable, in units known as utils.[19] Cournot, Walras and Francis Ysidro Edgeworth are considered the precursors to modern mathematical economics.[20]

Augustin Cournot

Cournot, a professor of mathematics, developed a mathematical treatment in 1838 for duopoly, a market condition defined by competition between two sellers.[20] This treatment of competition, first published in Researches into the Mathematical Principles of Wealth,[21] is referred to as Cournot duopoly. He assumed that both sellers had equal access to the market and could produce their goods without cost, and that both goods were homogeneous. Each seller would vary her output based on the output of the other, and the market price would be determined by the total quantity supplied. The profit for each firm would be determined by multiplying their output by the per unit market price. Differentiating the profit function with respect to quantity supplied for each firm left a system of linear equations, the simultaneous solution of which gave the equilibrium quantity, price and profits.[22] Cournot's contributions to the mathematization of economics would be neglected for decades, but eventually influenced many of the marginalists.[22][23] Cournot's models of duopoly and oligopoly also represent one of the first formulations of non-cooperative games. Today the solution can be given as a Nash equilibrium, but Cournot's work preceded modern game theory by over 100 years.[24]

Léon Walras

While Cournot provided a solution for what would later be called partial equilibrium, Léon Walras attempted to formalize discussion of the economy as a whole through a theory of general competitive equilibrium. The behavior of every economic actor would be considered on both the production and consumption side. Walras originally presented four separate models of exchange, each recursively included in the next. The solution of the resulting system of equations (both linear and non-linear) is the general equilibrium.[25] At the time, no general solution could be expressed for a system of arbitrarily many equations, but Walras's attempts produced two famous results in economics. The first is Walras' law and the second is the principle of tâtonnement. Walras' method was considered highly mathematical for the time, and Edgeworth commented at length about this fact in his review of Éléments d'économie politique pure (Elements of Pure Economics).[26]

Walras' law was introduced as a theoretical answer to the problem of determining the solutions in general equilibrium. His notation is different from modern notation but can be constructed using more modern summation notation. Walras assumed that in equilibrium, all money would be spent on all goods: every good would be sold at the market price for that good and every buyer would expend their last dollar on a basket of goods. Starting from this assumption, Walras could then show that if there were n markets and n − 1 markets cleared (reached equilibrium conditions), then the nth market would clear as well. This is easiest to visualize with two markets (considered in most texts as a market for goods and a market for money).
If one of two markets has reached an equilibrium state, no additional goods (or conversely, money) can enter or exit the second market, so it must be in a state of equilibrium as well. Walras used this statement to move toward a proof of existence of solutions to general equilibrium, but it is commonly used today to illustrate market clearing in money markets at the undergraduate level.[27]

Tâtonnement (roughly, French for groping toward) was meant to serve as the practical expression of Walrasian general equilibrium. Walras abstracted the marketplace as an auction of goods where the auctioneer would call out prices and market participants would wait until they could each satisfy their personal reservation prices for the quantity desired (remembering here that this is an auction on all goods, so everyone has a reservation price for their desired basket of goods).[28] Only when all buyers are satisfied with the given market price would transactions occur. The market would "clear" at that price: no surplus or shortage would exist. The word tâtonnement is used to describe the directions the market takes in groping toward equilibrium, settling high or low prices on different goods until a price is agreed upon for all goods. While the process appears dynamic, Walras only presented a static model, as no transactions would occur until all markets were in equilibrium. In practice, very few markets operate in this manner.[29]

Francis Ysidro Edgeworth

Edgeworth introduced mathematical elements to economics explicitly in Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences, published in 1881.[30] He applied Jeremy Bentham's felicific calculus to economic behavior, allowing the outcome of each decision to be converted into a change in utility.[31] Using this assumption, Edgeworth built a model of exchange on three assumptions: individuals are self-interested, individuals act to maximize utility, and individuals are "free to recontract with another independently of...any third party".[32] Given two individuals, the set of solutions where both individuals can maximize utility is described by the contract curve on what is now known as an Edgeworth box. Technically, the construction of the two-person solution to Edgeworth's problem was not developed graphically until 1924 by Arthur Lyon Bowley.[34] The contract curve of the Edgeworth box (or more generally on any set of solutions to Edgeworth's problem for more actors) is referred to as the core of an economy.[35]

Edgeworth devoted considerable effort to insisting that mathematical proofs were appropriate for all schools of thought in economics. While at the helm of The Economic Journal, he published several articles criticizing the mathematical rigor of rival researchers, including Edwin Robert Anderson Seligman, a noted skeptic of mathematical economics.[36] The articles focused on a back and forth over tax incidence and responses by producers. Edgeworth noticed that a monopoly producing a good that had jointness of supply but not jointness of demand (such as first class and economy on an airplane: if the plane flies, both sets of seats fly with it) might actually lower the price seen by the consumer for one of the two commodities if a tax were applied. Common sense and more traditional, numerical analysis seemed to indicate that this was preposterous. Seligman insisted that the results Edgeworth achieved were a quirk of his mathematical formulation.
He suggested that the assumption of a continuous demand function and an infinitesimal change in the tax resulted in the paradoxical predictions. Harold Hotelling later showed that Edgeworth was correct and that the same result (a "diminution of price as a result of the tax") could occur with a discontinuous demand function and large changes in the tax rate.[37]

Modern mathematical economics

From the late 1930s, an array of new mathematical tools from differential calculus and differential equations, convex sets, and graph theory were deployed to advance economic theory in a way similar to new mathematical methods earlier applied to physics.[8][38] The process was later described as moving from mechanics to axiomatics.[39]

Differential calculus

Main articles: Foundations of Economic Analysis and Differential calculus

Vilfredo Pareto analyzed microeconomics by treating decisions by economic actors as attempts to change a given allotment of goods to another, more preferred allotment. Sets of allocations could then be treated as Pareto efficient (Pareto optimal is an equivalent term) when no exchanges could occur between actors that could make at least one individual better off without making any other individual worse off.[40] Pareto's proof is commonly conflated with Walrasian equilibrium or informally ascribed to Adam Smith's Invisible hand hypothesis.[41] Rather, Pareto's statement was the first formal assertion of what would be known as the first fundamental theorem of welfare economics.[42] These models lacked the inequalities of the next generation of mathematical economics.

In the landmark treatise Foundations of Economic Analysis (1947), Paul Samuelson identified a common paradigm and mathematical structure across multiple fields in the subject, building on previous work by Alfred Marshall. Foundations took mathematical concepts from physics and applied them to economic problems. This broad view (for example, comparing Le Chatelier's principle to tâtonnement) drives the fundamental premise of mathematical economics: systems of economic actors may be modeled and their behavior described much like any other system. This extension followed on the work of the marginalists in the previous century and extended it significantly. Samuelson approached the problems of applying individual utility maximization over aggregate groups with comparative statics, which compares two different equilibrium states after an exogenous change in a variable. This and other methods in the book provided the foundation for mathematical economics in the 20th century.[7][43]

Linear models

See also: Linear algebra, Linear programming, and Perron–Frobenius theorem

Restricted models of general equilibrium were formulated by John von Neumann in 1937.[44] Unlike earlier versions, the models of von Neumann had inequality constraints. For his model of an expanding economy, von Neumann proved the existence and uniqueness of an equilibrium using his generalization of Brouwer's fixed point theorem. Von Neumann's model of an expanding economy considered the matrix pencil A − λB with nonnegative matrices A and B; von Neumann sought probability vectors p and q and a positive number λ that would solve the complementarity equation pT(A − λB)q = 0, along with two inequality systems expressing economic efficiency. In this model, the (transposed) probability vector p represents the prices of the goods while the probability vector q represents the "intensity" at which the production process would run.
The unique solution λ represents the rate of growth of the economy, which equals the interest rate. Proving the existence of a positive growth rate and proving that the growth rate equals the interest rate were remarkable achievements, even for von Neumann.[45][46][47] Von Neumann's results have been viewed as a special case of linear programming, where his model uses only nonnegative matrices.[48] The study of von Neumann's model of an expanding economy continues to interest mathematical economists with interests in computational economics.[49][50][51]

Input-output economics

In 1936, the Russian-born economist Wassily Leontief built his model of input-output analysis from the 'material balance' tables constructed by Soviet economists, which themselves followed earlier work by the physiocrats. With his model, which described a system of production and demand processes, Leontief described how changes in demand in one economic sector would influence production in another.[52] In practice, Leontief estimated the coefficients of his simple models to address economically interesting questions. In production economics, "Leontief technologies" produce outputs using constant proportions of inputs, regardless of the price of inputs, reducing the value of Leontief models for understanding economies but allowing their parameters to be estimated relatively easily. In contrast, the von Neumann model of an expanding economy allows for choice of techniques, but the coefficients must be estimated for each technology.[53][54]

Mathematical optimization

Main articles: Mathematical optimization and Dual problem
See also: Convexity in economics and Non-convexity (economics)

In mathematics, mathematical optimization (or optimization or mathematical programming) refers to the selection of a best element from some set of available alternatives.[55] In the simplest case, an optimization problem involves maximizing or minimizing a real function by selecting input values of the function and computing the corresponding values of the function. The solution process includes satisfying general necessary and sufficient conditions for optimality. For optimization problems, specialized notation may be used as to the function and its input(s). More generally, optimization includes finding the best available element of some function given a defined domain and may use a variety of different computational optimization techniques.[56]

Economics is closely enough linked to optimization by agents in an economy that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses.[57] Optimization problems run through modern economics, many with explicit economic or technical constraints.
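A canonical instance is the consumer's problem of maximizing utility subject to a budget constraint, which often admits a closed-form solution. The following is a minimal symbolic sketch, assuming (for illustration only) Cobb-Douglas preferences in log form:

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam", positive=True)
alpha, p_x, p_y, m = sp.symbols("alpha p_x p_y m", positive=True)

# Log transform of Cobb-Douglas utility; a monotone transformation,
# so it yields the same demands as u = x**alpha * y**(1 - alpha).
u = alpha*sp.log(x) + (1 - alpha)*sp.log(y)
L = u + lam*(m - p_x*x - p_y*y)        # Lagrangian of the consumer problem

foc = [sp.diff(L, v) for v in (x, y, lam)]
sol = sp.solve(foc, [x, y, lam], dict=True)[0]
print(sol[x])   # alpha*m/p_x       (Marshallian demand for x)
print(sol[y])   # m*(1 - alpha)/p_y (Marshallian demand for y)
```

The resulting demands are an example of the analytical (formulaic) solutions discussed below; more complicated preferences or constraints typically require numerical methods instead.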
In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem for a given level of utility, are economic optimization problems.[58] Theory posits that consumers maximize their utility, subject to their budget constraints, and that firms maximize their profits, subject to their production functions, input costs, and market demand.[59]

Economic equilibrium is studied in optimization theory as a key ingredient of economic theorems that in principle could be tested against empirical data.[7][60] Newer developments have occurred in dynamic programming and modeling optimization with risk and uncertainty, including applications to portfolio theory, the economics of information, and search theory.[59]

Optimality properties for an entire market system may be stated in mathematical terms, as in formulation of the two fundamental theorems of welfare economics[61] and in the Arrow–Debreu model of general equilibrium (also discussed below).[62] More concretely, many problems are amenable to analytical (formulaic) solution. Many others may be sufficiently complex to require numerical methods of solution, aided by software.[56] Still others are complex but tractable enough to allow computable methods of solution, in particular computable general equilibrium models for the entire economy.[63]

Linear and nonlinear programming have profoundly affected microeconomics, which had earlier considered only equality constraints.[64] Many of the mathematical economists who received Nobel Prizes in Economics had conducted notable research using linear programming: Leonid Kantorovich, Leonid Hurwicz, Tjalling Koopmans, Kenneth J. Arrow, Robert Dorfman, Paul Samuelson and Robert Solow.[65] Both Kantorovich and Koopmans acknowledged that George B. Dantzig deserved to share their Nobel Prize for linear programming. Economists who conducted research in nonlinear programming also have won the Nobel prize, notably Ragnar Frisch in addition to Kantorovich, Hurwicz, Koopmans, Arrow, and Samuelson.

Linear optimization

Main articles: Linear programming and Simplex algorithm

Linear programming was developed to aid the allocation of resources in firms and in industries during the 1930s in Russia and during the 1940s in the United States. During the Berlin airlift (1948), linear programming was used to plan the shipment of supplies to prevent Berlin from starving after the Soviet blockade.[66][67]

Nonlinear programming

See also: Nonlinear programming, Lagrangian multiplier, Karush–Kuhn–Tucker conditions, and Shadow price

Extensions to nonlinear optimization with inequality constraints were achieved in 1951 by Albert W. Tucker and Harold Kuhn, who considered the nonlinear optimization problem:

Minimize $f(x)$ subject to $g_{i}(x)\leq 0$ and $h_{j}(x)=0$,

where
$f(\cdot )$ is the function to be minimized,
$g_{i}(\cdot )$ are the functions of the $m$ inequality constraints, $i=1,\dots ,m$, and
$h_{j}(\cdot )$ are the functions of the $l$ equality constraints, $j=1,\dots ,l$.
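Problems in exactly this form can be handed to off-the-shelf numerical solvers. A minimal sketch (the objective and constraints are illustrative inventions, not an example from Kuhn and Tucker):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2       # objective to minimize

# SciPy's "ineq" convention is fun(x) >= 0, so g(x) <= 0 is entered as -g(x) >= 0.
constraints = [
    {"type": "ineq", "fun": lambda x: 1 - x[0] - x[1]},  # g(x) = x0 + x1 - 1 <= 0
    {"type": "eq",   "fun": lambda x: x[0] - 2*x[1]},    # h(x) = x0 - 2*x1 = 0
]

res = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print(res.x)   # approximately [2/3, 1/3]: both constraints bind at the optimum
```

Here the unconstrained minimum (1, 2) is infeasible, so the solver settles where the constraints bind, exactly the situation the Kuhn–Tucker conditions characterize.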
In allowing inequality constraints, the Kuhn–Tucker approach generalized the classic method of Lagrange multipliers, which (until then) had allowed only equality constraints.[68] The Kuhn–Tucker approach inspired further research on Lagrangian duality, including the treatment of inequality constraints.[69][70] The duality theory of nonlinear programming is particularly satisfactory when applied to convex minimization problems, which enjoy the convex-analytic duality theory of Fenchel and Rockafellar; this convex duality is particularly strong for polyhedral convex functions, such as those arising in linear programming. Lagrangian duality and convex analysis are used daily in operations research, in the scheduling of power plants, the planning of production schedules for factories, and the routing of airlines (routes, flights, planes, crews).[70]

Variational calculus and optimal control

See also: Calculus of variations, Optimal control, and Dynamic programming

Economic dynamics allows for changes in economic variables over time, including in dynamic systems. The problem of finding optimal functions for such changes is studied in variational calculus and in optimal control theory. Before the Second World War, Frank Ramsey and Harold Hotelling used the calculus of variations to that end. Following Richard Bellman's work on dynamic programming and the 1962 English translation of L. Pontryagin et al.'s earlier work,[71] optimal control theory was used more extensively in economics in addressing dynamic problems, especially as to economic growth equilibrium and stability of economic systems,[72] of which a textbook example is optimal consumption and saving.[73] A crucial distinction is between deterministic and stochastic control models.[74] Other applications of optimal control theory include those in finance, inventories, and production, for example.[75]

Functional analysis

See also: Functional analysis, Convex set, Supporting hyperplane, Hahn–Banach theorem, Fixed point theorem, and Dual space

It was in the course of proving the existence of an optimal equilibrium in his 1937 model of economic growth that John von Neumann introduced functional analytic methods to include topology in economic theory, in particular, fixed-point theory through his generalization of Brouwer's fixed-point theorem.[8][44][76] Following von Neumann's program, Kenneth Arrow and Gérard Debreu formulated abstract models of economic equilibria using convex sets and fixed-point theory. In introducing the Arrow–Debreu model in 1954, they proved the existence (but not the uniqueness) of an equilibrium and also proved that every Walras equilibrium is Pareto efficient; in general, equilibria need not be unique.[77] In their models, the ("primal") vector space represented quantities while the "dual" vector space represented prices.[78]

In Russia, the mathematician Leonid Kantorovich developed economic models in partially ordered vector spaces that emphasized the duality between quantities and prices.[79] Kantorovich renamed prices as "objectively determined valuations", which were abbreviated in Russian as "o. o. o.", alluding to the difficulty of discussing prices in the Soviet Union.[78][80][81]

Even in finite dimensions, the concepts of functional analysis have illuminated economic theory, particularly in clarifying the role of prices as normal vectors to a hyperplane supporting a convex set, representing production or consumption possibilities.
However, problems of describing optimization over time or under uncertainty require the use of infinite-dimensional function spaces, because agents are choosing among functions or stochastic processes.[78][82][83][84]

Differential decline and rise

See also: Global analysis, Baire category, and Sard's lemma

John von Neumann's work on functional analysis and topology broke new ground in mathematics and economic theory.[44][85] It also left advanced mathematical economics with fewer applications of differential calculus. In particular, general equilibrium theorists used general topology, convex geometry, and optimization theory more than differential calculus, because the approach of differential calculus had failed to establish the existence of an equilibrium. However, the decline of differential calculus should not be exaggerated, because differential calculus has always been used in graduate training and in applications. Moreover, differential calculus has returned to the highest levels of mathematical economics, general equilibrium theory (GET), as practiced by the "GET-set" (the humorous designation due to Jacques H. Drèze). In the 1960s and 1970s, Gérard Debreu and Stephen Smale led a revival of the use of differential calculus in mathematical economics. In particular, they were able to prove the existence of a general equilibrium, where earlier writers had failed, because of their novel mathematics: Baire category from general topology and Sard's lemma from differential topology. Other economists associated with the use of differential analysis include Egbert Dierker, Andreu Mas-Colell, and Yves Balasko.[86][87] These advances have changed the traditional narrative of the history of mathematical economics, following von Neumann, which celebrated the abandonment of differential calculus.

Game theory

Main article: Game theory
See also: Cooperative game theory; Non-cooperative game theory; John von Neumann; Theory of Games and Economic Behavior; and John Forbes Nash, Jr.

John von Neumann, working with Oskar Morgenstern on the theory of games, broke new mathematical ground in 1944 by extending functional analytic methods related to convex sets and topological fixed-point theory to economic analysis.[8][85] Their work thereby avoided the traditional differential calculus, for which the maximum operator did not apply to non-differentiable functions. Continuing von Neumann's work in cooperative game theory, the game theorists Lloyd S. Shapley, Martin Shubik, Hervé Moulin, Nimrod Megiddo, and Bezalel Peleg influenced economic research in politics and economics. For example, research on fair prices in cooperative games and fair values for voting games led to changed rules for voting in legislatures and for accounting for the costs in public-works projects. Concretely, cooperative game theory was used in designing the water distribution system of Southern Sweden and for setting rates for dedicated telephone lines in the USA.

Earlier neoclassical theory had bounded only the range of bargaining outcomes, and then only in special cases, for example bilateral monopoly or along the contract curve of the Edgeworth box.[88] Von Neumann and Morgenstern's results were similarly weak.
Following von Neumann's program, however, John Nash used fixed-point theory to prove conditions under which the bargaining problem and noncooperative games can generate a unique equilibrium solution.[89] Noncooperative game theory has been adopted as a fundamental aspect of experimental economics,[90] behavioral economics,[91] information economics,[92] industrial organization,[93] and political economy.[94] It has also given rise to the subject of mechanism design (sometimes called reverse game theory), which has private and public-policy applications as to ways of improving economic efficiency through incentives for information sharing.[95]

In 1994, Nash, John Harsanyi, and Reinhard Selten received the Nobel Memorial Prize in Economic Sciences for their work on non-cooperative games. Harsanyi and Selten were awarded for their work on repeated games. Later work extended their results to computational methods of modeling.[96]

Agent-based computational economics

Agent-based computational economics (ACE) as a named field is relatively recent, dating from about the 1990s as to published work. It studies economic processes, including whole economies, as dynamic systems of interacting agents over time. As such, it falls in the paradigm of complex adaptive systems.[97] In corresponding agent-based models, agents are not real people but "computational objects modeled as interacting according to rules" ... "whose micro-level interactions create emergent patterns" in space and time.[98] The rules are formulated to predict behavior and social interactions based on incentives and information. The theoretical assumption of mathematical optimization by agents in markets is replaced by the less restrictive postulate of agents with bounded rationality adapting to market forces.[99]

ACE models apply numerical methods of analysis to computer-based simulations of complex dynamic problems for which more conventional methods, such as theorem formulation, may not find ready use.[100] Starting from specified initial conditions, the computational economic system is modeled as evolving over time as its constituent agents repeatedly interact with each other. In these respects, ACE has been characterized as a bottom-up culture-dish approach to the study of the economy.[101] In contrast to other standard modeling methods, ACE events are driven solely by initial conditions, whether or not equilibria exist or are computationally tractable. ACE modeling, however, includes agent adaptation, autonomy, and learning.[102] It has a similarity to, and overlap with, game theory as an agent-based method for modeling social interactions.[96] Other dimensions of the approach include such standard economic subjects as competition and collaboration,[103] market structure and industrial organization,[104] transaction costs,[105] welfare economics[106] and mechanism design,[95] information and uncertainty,[107] and macroeconomics.[108][109] The method is said to benefit from continuing improvements in modeling techniques of computer science and increased computer capabilities.
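To convey the flavor of the approach, here is a deliberately minimal sketch (a toy model invented for illustration, not any published ACE model): boundedly rational sellers adjust posted prices adaptively in response to realized excess demand, and clustering near the market-clearing price emerges from the micro-level rules rather than from solving an equilibrium condition:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sellers, n_periods = 50, 200
prices = rng.uniform(0.5, 2.0, n_sellers)   # heterogeneous initial prices

def demand(p):                               # toy demand curve facing each seller
    return np.maximum(2.0 - p, 0.0)

supply = 1.0                                 # fixed supply per seller
for _ in range(n_periods):
    excess = demand(prices) - supply         # realized excess demand
    # Bounded rationality: adaptive price updating, not optimization.
    prices += 0.1 * excess + rng.normal(0.0, 0.01, n_sellers)

print(prices.mean(), prices.std())  # prices cluster near the clearing price 1.0
```

No agent here knows the demand curve or computes an equilibrium; convergence is an emergent property of the adjustment rule, which is the characteristic ACE result.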
Issues include those common to experimental economics in general[110] and by comparison,[111] as well as the development of a common framework for empirical validation and the resolution of open questions in agent-based modeling.[112] The ultimate scientific objective of the method has been described as "test[ing] theoretical findings against real-world data in ways that permit empirically supported theories to cumulate over time, with each researcher's work building appropriately on the work that has gone before".[113]

Mathematicization of economics

Over the course of the 20th century, articles in "core journals"[115] in economics have been almost exclusively written by economists in academia. As a result, much of the material transmitted in those journals relates to economic theory, and "economic theory itself has been continuously more abstract and mathematical."[116] A subjective assessment of mathematical techniques[117] employed in these core journals showed a decrease in articles that use neither geometric representations nor mathematical notation from 95% in 1892 to 5.3% in 1990.[118] A 2007 survey of ten of the top economic journals found that only 5.8% of the articles published in 2003 and 2004 both lacked statistical analysis of data and lacked displayed mathematical expressions that were indexed with numbers at the margin of the page.[119]

Econometrics

Main article: Econometrics

Between the world wars, advances in mathematical statistics and a cadre of mathematically trained economists led to econometrics, which was the name proposed for the discipline of advancing economics by using mathematics and statistics. Within economics, "econometrics" has often been used for statistical methods in economics, rather than mathematical economics. Statistical econometrics features the application of linear regression and time series analysis to economic data.

Ragnar Frisch coined the word "econometrics" and helped to found both the Econometric Society in 1930 and the journal Econometrica in 1933.[120][121] A student of Frisch's, Trygve Haavelmo, published The Probability Approach in Econometrics in 1944, where he asserted that precise statistical analysis could be used as a tool to validate mathematical theories about economic actors with data from complex sources.[122] This linking of statistical analysis of systems to economic theory was also promulgated by the Cowles Commission (now the Cowles Foundation) throughout the 1930s and 1940s.[123]

The roots of modern econometrics can be traced to the American economist Henry L. Moore. Moore studied agricultural productivity and attempted to fit changing values of productivity for plots of corn and other crops to a curve using different values of elasticity. Moore made several errors in his work, some from his choice of models and some from limitations in his use of mathematics. The accuracy of Moore's models also was limited by the poor data for national accounts in the United States at the time. While his first models of production were static, in 1925 he published a dynamic "moving equilibrium" model designed to explain business cycles; this periodic variation from over-correction in supply and demand curves is now known as the cobweb model. A more formal derivation of this model was made later by Nicholas Kaldor, who is largely credited for its exposition.[124]

Application

Much of classical economics can be presented in simple geometric terms or elementary mathematical notation.
Mathematical economics, however, conventionally makes use of calculus and matrix algebra in economic analysis in order to make powerful claims that would be more difficult without such mathematical tools. These tools are prerequisites for formal study, not only in mathematical economics but in contemporary economic theory in general. Economic problems often involve so many variables that mathematics is the only practical way of attacking and solving them. Alfred Marshall argued that every economic problem which can be quantified, analytically expressed and solved, should be treated by means of mathematical work.[126]

Economics has become increasingly dependent upon mathematical methods, and the mathematical tools it employs have become more sophisticated. As a result, mathematics has become considerably more important to professionals in economics and finance. Graduate programs in both economics and finance require strong undergraduate preparation in mathematics for admission and, for this reason, attract an increasingly high number of mathematicians. Applied mathematicians apply mathematical principles to practical problems, such as economic analysis and other economics-related issues, and many economic problems are often defined as integrated into the scope of applied mathematics.[18]

This integration results from the formulation of economic problems as stylized models with clear assumptions and falsifiable predictions. This modeling may be informal or prosaic, as it was in Adam Smith's The Wealth of Nations, or it may be formal, rigorous and mathematical.

Broadly speaking, formal economic models may be classified as stochastic or deterministic and as discrete or continuous. At a practical level, quantitative modeling is applied to many areas of economics and several methodologies have evolved more or less independently of each other.[127]

• Stochastic models are formulated using stochastic processes. They model economically observable values over time. Most of econometrics is based on statistics to formulate and test hypotheses about these processes or estimate parameters for them. Between the World Wars, Herman Wold developed a representation of stationary stochastic processes in terms of autoregressive models and a deterministic trend. Wold and Jan Tinbergen applied time-series analysis to economic data. Contemporary research on time series statistics considers additional formulations of stationary processes, such as autoregressive moving average models. More general models include autoregressive conditional heteroskedasticity (ARCH) models and generalized ARCH (GARCH) models. (A minimal simulation sketch follows this list.)
• Non-stochastic mathematical models may be purely qualitative (for example, models involved in some aspect of social choice theory) or quantitative (involving rationalization of financial variables, for example with hyperbolic coordinates, and/or specific forms of functional relationships between variables). In some cases economic predictions of a model merely assert the direction of movement of economic variables, and so the functional relationships are used only in a qualitative sense: for example, if the price of an item increases, then the demand for that item will decrease. For such models, economists often use two-dimensional graphs instead of functions.
• Qualitative models are occasionally used. One example is qualitative scenario planning in which possible future events are played out. Another example is non-numerical decision tree analysis. Qualitative models often suffer from lack of precision.
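As a small illustration of the first category, the sketch below simulates a stationary AR(1) process around a deterministic linear trend, in the spirit of Wold's representation, and recovers the autoregressive parameter by ordinary least squares (all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
T, phi, slope = 500, 0.8, 0.01
trend = slope * np.arange(T)            # deterministic linear trend
eps = rng.normal(0.0, 1.0, T)           # white-noise innovations

y = np.zeros(T)
for t in range(1, T):
    # AR(1) dynamics in the deviations from trend
    y[t] = trend[t] + phi * (y[t-1] - trend[t-1]) + eps[t]

# OLS through the origin recovers the autoregressive parameter
dev = y - trend
phi_hat = np.dot(dev[:-1], dev[1:]) / np.dot(dev[:-1], dev[:-1])
print(round(phi_hat, 3))   # close to the true value 0.8
```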
Example: The effect of a corporate tax cut on wages

The great appeal of mathematical economics is that it brings a degree of rigor to economic thinking, particularly around charged political topics. For example, during the discussion of the efficacy of a corporate tax cut for increasing the wages of workers, a simple mathematical model proved beneficial to understanding the issues at hand. As an intellectual exercise, the following problem was posed by Prof. Greg Mankiw of Harvard University:[128]

An open economy has the production function $ y=f(k)$, where $ y$ is output per worker and $ k$ is capital per worker. The capital stock adjusts so that the after-tax marginal product of capital equals the exogenously given world interest rate $ r$...How much will the tax cut increase wages?

To answer this question, we follow John H. Cochrane of the Hoover Institution.[129] Suppose an open economy has the production function:

$Y=F(K,L)=f(k)L,\quad k=K/L$

where the variables in this equation are:

• $ Y$ is the total output
• $ F(K,L)$ is the production function
• $ K$ is the total capital stock
• $ L$ is the total labor stock

The standard choice for the production function is the Cobb-Douglas production function:

$Y=AK^{\alpha }L^{1-\alpha }=Ak^{\alpha }L,\quad \alpha \in [0,1]$

where $ A$ is the factor of productivity, assumed to be constant. A corporate tax cut in this model is equivalent to a tax on capital. With taxes, firms look to maximize:

$J=\max _{K,L}\;(1-\tau )\left[F(K,L)-wL\right]-rK\equiv \max _{K,L}\;(1-\tau )\left[f(k)-w\right]L-rK$

where $ \tau $ is the capital tax rate, $ w$ is wages per worker, and $ r$ is the exogenous interest rate. Then the first-order optimality conditions become:

${\begin{aligned}{\frac {\partial J}{\partial K}}&=(1-\tau )f'(k)-r\\{\frac {\partial J}{\partial L}}&=(1-\tau )\left[f(k)-f'(k)k-w\right]\end{aligned}}$

Therefore, the optimality conditions imply that:

$r=(1-\tau )f'(k),\quad w=f(k)-f'(k)k$

Define total taxes $ X=\tau [F(K,L)-wL]$. This implies that taxes per worker $ x$ are:

$x=\tau [f(k)-w]=\tau f'(k)k$

Then the change in taxes per worker, given the tax rate, is:

${dx \over {d\tau }}=\underbrace {f'(k)k} _{\text{Static}}+\underbrace {\tau \left[f'(k)+f''(k)k\right]{dk \over {d\tau }}} _{\text{Dynamic}}$

To find the change in wages, we differentiate the second optimality condition for the per worker wages to obtain:

${\frac {dw}{d\tau }}=\left[f'(k)-f'(k)-f''(k)k\right]{\frac {dk}{d\tau }}=-f''(k)k{\frac {dk}{d\tau }}$

Assuming that the interest rate is fixed at $ r$, so that $ dr/d\tau =0$, we may differentiate the first optimality condition for the interest rate to find:

${dk \over {d\tau }}={f'(k) \over {(1-\tau )f''(k)}}$

For the moment, let's focus only on the static effect of a capital tax cut, so that $ dx/d\tau =f'(k)k$. If we substitute this equation into the equation for wage changes with respect to the tax rate, then we find that:

${dw \over {d\tau }}=-{\frac {f'(k)k}{1-\tau }}=-{1 \over {1-\tau }}{\frac {dx}{d\tau }}$

Therefore, the static effect of a capital tax cut on wages is:

${dw \over {dx}}=-{1 \over {1-\tau }}$

Based on the model, it seems possible that we may achieve a rise in the wage of a worker greater than the amount of the tax cut. But that only considers the static effect, and we know that the dynamic effect must be accounted for.
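Before turning to the dynamic case, the static algebra above can be checked symbolically; a minimal sketch, assuming the Cobb-Douglas form already introduced:

```python
import sympy as sp

k, tau, alpha, A = sp.symbols("k tau alpha A", positive=True)
f = A * k**alpha                             # Cobb-Douglas output per worker

fp, fpp = sp.diff(f, k), sp.diff(f, k, 2)
dk_dtau = fp / ((1 - tau) * fpp)             # from differentiating r = (1-tau) f'(k)
dw_dtau = -fpp * k * dk_dtau                 # from differentiating w = f(k) - f'(k) k
dx_dtau_static = fp * k                      # static piece of dx/dtau

print(sp.simplify(dw_dtau / dx_dtau_static))  # 1/(tau - 1), i.e. -1/(1 - tau)
```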
In the dynamic model, we may rewrite the equation for changes in taxes per worker with respect to the tax rate as:

${\begin{aligned}{dx \over {d\tau }}&=f'(k)k+\tau \left[f'(k)+f''(k)k\right]{dk \over {d\tau }}\\&=f'(k)k+{\tau \over {1-\tau }}{[f'(k)]^{2}+f'(k)f''(k)k \over {f''(k)}}\\&={\tau \over {1-\tau }}{f'(k)^{2} \over {f''(k)}}+{1 \over {1-\tau }}f'(k)k\\&={f'(k) \over {1-\tau }}\left[\tau {f'(k) \over {f''(k)}}+k\right]\end{aligned}}$

Recalling that $ dw/d\tau =-f'(k)k/(1-\tau )$, we have that:

${\frac {dw}{dx}}=-{{\frac {f'(k)k}{1-\tau }} \over {{\frac {f'(k)}{1-\tau }}\left[\tau {\frac {f'(k)}{f''(k)}}+k\right]}}=-{\frac {1}{\tau {\frac {f'(k)}{kf''(k)}}+1}}$

Using the Cobb-Douglas production function, we have that:

${f'(k) \over {kf''(k)}}=-{1 \over {1-\alpha }}$

Therefore, the dynamic effect of a capital tax cut on wages is:

${dw \over {dx}}=-{1-\alpha \over {1-\tau -\alpha }}$

If we take $ \alpha =\tau =1/3$, then the dynamic effect of lowering capital taxes on wages will be even larger than the static effect. Moreover, if there are positive externalities to capital accumulation, the effect of the tax cut on wages would be larger than in the model we just derived. It is important to note that the result is a combination of:

1. The standard result that in a small open economy labor bears 100% of a small capital income tax
2. The fact that, starting at a positive tax rate, the burden of a tax increase exceeds revenue collection due to the first-order deadweight loss

This result, showing that, under certain assumptions, a corporate tax cut can boost the wages of workers by more than the lost revenue, does not imply that the magnitude is correct. Rather, it suggests a basis for policy analysis that is not grounded in handwaving. If the assumptions are reasonable, then the model is an acceptable approximation of reality; if they are not, then better models should be developed.

CES production function

Now let's assume that instead of the Cobb-Douglas production function we have a more general constant elasticity of substitution (CES) production function:

$f(k)=A\left[\alpha k^{\rho }+(1-\alpha )\right]^{1/\rho }$

where $ \rho =1-\sigma ^{-1}$ and $ \sigma $ is the elasticity of substitution between capital and labor. The relevant quantity we want to calculate is $ f'/kf''$, which may be derived as:

${f' \over {kf''}}=-{1 \over {1-\rho -{\alpha (1-\rho ) \over {\alpha +(1-\alpha )k^{-\rho }}}}}$

Therefore, we may use this to find that:

${\begin{aligned}1+\tau {f' \over {kf''}}&=1-{\tau [\alpha +(1-\alpha )k^{-\rho }] \over {(1-\rho )[\alpha +(1-\alpha )k^{-\rho }]-\alpha (1-\rho )}}\\[6pt]&={(1-\rho -\tau )[\alpha +(1-\alpha )k^{-\rho }]-\alpha (1-\rho ) \over {(1-\rho )[\alpha +(1-\alpha )k^{-\rho }]-\alpha (1-\rho )}}\end{aligned}}$

Therefore, under a general CES model, the dynamic effect of a capital tax cut on wages is:

${dw \over {dx}}=-{(1-\rho )[\alpha +(1-\alpha )k^{-\rho }]-\alpha (1-\rho ) \over {(1-\rho -\tau )[\alpha +(1-\alpha )k^{-\rho }]-\alpha (1-\rho )}}$

We recover the Cobb-Douglas solution when $ \rho =0$. When $ \rho =1$, the case of perfect substitutes, we find that $ dw/dx=0$: there is no effect of changes in capital taxes on wages. And when $ \rho =-\infty $, the case of perfect complements, we find that $ dw/dx=-1$: a cut in capital taxes increases wages by exactly one dollar.
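These closed forms are easy to sanity-check numerically. A minimal sketch evaluating the Cobb-Douglas and CES expressions derived above, assuming (as an arbitrary normalization) the evaluation point k = 1:

```python
alpha = tau = 1/3

static = -1 / (1 - tau)                       # = -1.5
dynamic = -(1 - alpha) / (1 - tau - alpha)    # = -2.0
print(static, dynamic)

def dw_dx_ces(rho, k=1.0):
    """Dynamic effect dw/dx under CES, evaluated at the point k."""
    s = alpha + (1 - alpha) * k**(-rho)
    num = (1 - rho) * s - alpha * (1 - rho)
    den = (1 - rho - tau) * s - alpha * (1 - rho)
    return -num / den

print(dw_dx_ces(1e-9))   # ~ -2.0: the Cobb-Douglas limit (rho -> 0)
print(dw_dx_ces(1.0))    # 0: perfect substitutes
print(dw_dx_ces(-1e6))   # ~ -1.0: the perfect-complements limit
```

With $ \alpha =\tau =1/3$ the static effect is -1.5 and the dynamic effect -2.0 dollars of wages per dollar of revenue given up, confirming that the dynamic effect is the larger of the two.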
Criticisms and defences

Adequacy of mathematics for qualitative and complicated economics

The Austrian school, while making many of the same normative economic arguments as mainstream economists from marginalist traditions such as the Chicago school, differs methodologically from mainstream neoclassical schools of economics, in particular in its sharp critiques of the mathematization of economics.[130] Friedrich Hayek contended that the use of formal techniques projects a scientific exactness that does not appropriately account for informational limitations faced by real economic agents.[131]

In an interview in 1999, the economic historian Robert Heilbroner stated:[132]

I guess the scientific approach began to penetrate and soon dominate the profession in the past twenty to thirty years. This came about in part because of the "invention" of mathematical analysis of various kinds and, indeed, considerable improvements in it. This is the age in which we have not only more data but more sophisticated use of data. So there is a strong feeling that this is a data-laden science and a data-laden undertaking, which, by virtue of the sheer numerics, the sheer equations, and the sheer look of a journal page, bears a certain resemblance to science . . . That one central activity looks scientific. I understand that. I think that is genuine. It approaches being a universal law. But resembling a science is different from being a science.

Heilbroner stated that "some/much of economics is not naturally quantitative and therefore does not lend itself to mathematical exposition."[133]

Testing predictions of mathematical economics

Philosopher Karl Popper discussed the scientific standing of economics in the 1940s and 1950s. He argued that mathematical economics suffered from being tautological. In other words, insofar as economics became a mathematical theory, mathematical economics ceased to rely on empirical refutation but rather relied on mathematical proofs and disproof.[134] According to Popper, falsifiable assumptions can be tested by experiment and observation while unfalsifiable assumptions can be explored mathematically for their consequences and for their consistency with other assumptions.[135]

Sharing Popper's concerns about assumptions in economics generally, and not just mathematical economics, Milton Friedman declared that "all assumptions are unrealistic". Friedman proposed judging economic models by their predictive performance rather than by the match between their assumptions and reality.[136]

Mathematical economics as a form of pure mathematics

See also: Pure mathematics, Applied mathematics, and Engineering

Considering mathematical economics, J.M. Keynes wrote in The General Theory:[137]

It is a great fault of symbolic pseudo-mathematical methods of formalising a system of economic analysis ... that they expressly assume strict independence between the factors involved and lose their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating and know all the time what we are doing and what the words mean, we can keep ‘at the back of our heads' the necessary reserves and qualifications and the adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials ‘at the back' of several pages of algebra which assume they all vanish.
Too large a proportion of recent ‘mathematical' economics are merely concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.

Defense of mathematical economics

In response to these criticisms, Paul Samuelson argued that mathematics is a language, repeating a thesis of Josiah Willard Gibbs. In economics, the language of mathematics is sometimes necessary for representing substantive problems. Moreover, mathematical economics has led to conceptual advances in economics.[138] In particular, Samuelson gave the example of microeconomics, writing that "few people are ingenious enough to grasp [its] more complex parts... without resorting to the language of mathematics, while most ordinary individuals can do so fairly easily with the aid of mathematics."[139]

Some economists state that mathematical economics deserves support just like other forms of mathematics, particularly its neighbors in mathematical optimization and mathematical statistics and increasingly in theoretical computer science. Mathematical economics and other mathematical sciences have a history in which theoretical advances have regularly contributed to the reform of the more applied branches of economics. In particular, following the program of John von Neumann, game theory now provides the foundations for describing much of applied economics, from statistical decision theory (as "games against nature") and econometrics to general equilibrium theory and industrial organization. In the last decade, with the rise of the internet, mathematical economists, optimization experts, and computer scientists have worked on problems of pricing for on-line services; their contributions use mathematics from cooperative game theory, nondifferentiable optimization, and combinatorial games.

Robert M. Solow concluded that mathematical economics was the core "infrastructure" of contemporary economics:

Economics is no longer a fit conversation piece for ladies and gentlemen. It has become a technical subject. Like any technical subject it attracts some people who are more interested in the technique than the subject. That is too bad, but it may be inevitable. In any case, do not kid yourself: the technical core of economics is indispensable infrastructure for the political economy. That is why, if you consult [a reference in contemporary economics] looking for enlightenment about the world today, you will be led to technical economics, or history, or nothing at all.[140]

Mathematical economists

Prominent mathematical economists include the following.

19th century

• Enrico Barone • Antoine Augustin Cournot • Francis Ysidro Edgeworth • Irving Fisher • William Stanley Jevons • Vilfredo Pareto • Léon Walras

20th century

• Charalambos D. Aliprantis • R. G. D. Allen • Maurice Allais • Kenneth J. Arrow • Robert J. Aumann • Yves Balasko • David Blackwell • Lawrence E. Blume • Graciela Chichilnisky • George B. Dantzig • Gérard Debreu • Mario Draghi • Jacques H. Drèze • David Gale • Nicholas Georgescu-Roegen • Roger Guesnerie • Frank Hahn • John C. Harsanyi • John R. Hicks • Werner Hildenbrand • Harold Hotelling • Leonid Hurwicz • Leonid Kantorovich • Tjalling Koopmans • David M. Kreps • Harold W. Kuhn • Edmond Malinvaud • Andreu Mas-Colell • Eric Maskin • Nimrod Megiddo • Jean-François Mertens • James Mirrlees • Roger Myerson • John Forbes Nash, Jr. • John von Neumann • Edward C. Prescott
Prescott • Roy Radner • Frank Ramsey • Donald John Roberts • Paul Samuelson • Yuliy Sannikov • Thomas Sargent • Leonard J. Savage • Herbert Scarf • Reinhard Selten • Amartya Sen • Lloyd S. Shapley • Stephen Smale • Robert Solow • Hugo F. Sonnenschein • Nancy L. Stokey • Albert W. Tucker • Hirofumi Uzawa • Robert B. Wilson • Abraham Wald • Hermann Wold • Nicholas C. Yannelis See also • Econophysics • Mathematical finance References 1. Elaborated at the JEL classification codes, Mathematical and quantitative methods JEL: C Subcategories. 2. Chiang, Alpha C.; Kevin Wainwright (2005). Fundamental Methods of Mathematical Economics. McGraw-Hill Irwin. pp. 3–4. ISBN 978-0-07-010910-0. TOC. Archived 2012-03-08 at the Wayback Machine 3. Debreu, Gérard ([1987] 2008). "mathematical economics", section II, The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. Archived 2013-05-16 at the Wayback Machine Republished with revisions from 1986, "Theoretic Models: Mathematical Form and Economic Content", Econometrica, 54(6), pp. 1259–1270. Archived 2017-08-05 at the Wayback Machine. 4. Varian, Hal (1997). "What Use Is Economic Theory?" in A. D'Autume and J. Cartelier, ed., Is Economics Becoming a Hard Science?, Edward Elgar. Pre-publication PDF. Archived 2006-06-25 at the Wayback Machine Retrieved 2008-04-01. • As in Handbook of Mathematical Economics, 1st-page chapter links: Arrow, Kenneth J., and Michael D. Intriligator, ed., (1981), v. 1 _____ (1982). v. 2 _____ (1986). v. 3 Hildenbrand, Werner, and Hugo Sonnenschein, ed. (1991). v. 4. Archived 2013-04-15 at the Wayback Machine • Debreu, Gérard (1983). Mathematical Economics: Twenty Papers of Gérard Debreu, Contents Archived 2023-07-01 at the Wayback Machine. • Glaister, Stephen (1984). Mathematical Methods for Economists, 3rd ed., Blackwell. Contents. Archived 2023-07-01 at the Wayback Machine • Takayama, Akira (1985). Mathematical Economics, 2nd ed. Cambridge. Description Archived 2023-07-01 at the Wayback Machine and Contents Archived 2023-07-01 at the Wayback Machine. • Michael Carter (2001). Foundations of Mathematical Economics, MIT Press. Description and Contents Archived 2023-07-01 at the Wayback Machine. 5. Chiang, Alpha C. (1992). Elements of Dynamic Optimization, Waveland. TOC & Amazon.com link Archived 2016-03-03 at the Wayback Machine to inside, first pp. 6. Samuelson, Paul (1947) [1983]. Foundations of Economic Analysis. Harvard University Press. ISBN 978-0-674-31301-9. • Debreu, Gérard ([1987] 2008). "mathematical economics", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. Archived 2013-05-16 at the Wayback Machine Republished with revisions from 1986, "Theoretic Models: Mathematical Form and Economic Content", Econometrica, 54(6), pp. 1259–1270. Archived 2017-08-05 at the Wayback Machine. • von Neumann, John, and Oskar Morgenstern (1944). Theory of Games and Economic Behavior. Princeton University Press. 7. Schumpeter, J.A. (1954). Elizabeth B. Schumpeter (ed.). History of Economic Analysis. New York: Oxford University Press. pp. 209–212. ISBN 978-0-04-330086-2. OCLC 13498913. Archived from the original on 2023-07-01. Retrieved 2020-05-28. 8. Schumpeter (1954) pp. 212–215 9. Schneider, Erich (1934). "Johann Heinrich von Thünen". Econometrica. 2 (1): 1–12. doi:10.2307/1907947. ISSN 0012-9682. JSTOR 1907947. OCLC 35705710. 10. Schumpeter (1954) pp. 465–468 11. Philip Mirowski, 1991.
"The When, the How and the Why of Mathematical Expression in the History of Economics Analysis", Journal of Economic Perspectives, 5(1) pp. 145-157. 12. Weintraub, E. Roy (2008). "mathematics and economics", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2013-05-16 at the Wayback Machine. 13. Jevons, W.S. (1866). "Brief Account of a General Mathematical Theory of Political Economy", Journal of the Royal Statistical Society, XXIX (June) pp. 282–87. Read in Section F of the British Association, 1862. PDF. 14. Jevons, W. Stanley (1871). The Principles of Political Economy, pp. 4, 25. Macmillan. The Theory of Political Economy, jevons 1871. 15. See the preface Archived 2023-07-01 at the Wayback Machine to Irving Fisher's 1897 work, A brief introduction to the infinitesimal calculus: designed especially to aid in reading mathematical economics and statistics. 16. Sheila C., Dow (1999-05-21). "The Use of Mathematics in Economics". ESRC Public Understanding of Mathematics Seminar. Birmingham: Economic and Social Research Council. Retrieved 2008-07-06. 17. While the concept of cardinality has fallen out of favor in neoclassical economics, the differences between cardinal utility and ordinal utility are minor for most applications. 18. Nicola, PierCarlo (2000). Mainstream Mathermatical Economics in the 20th Century. Springer. p. 4. ISBN 978-3-540-67084-1. Archived from the original on 2023-07-01. Retrieved 2008-08-21. 19. Augustin Cournot (1838, tr. 1897) Researches into the Mathematical Principles of Wealth. Links to description Archived 2023-07-01 at the Wayback Machine and chapters. Archived 2023-07-01 at the Wayback Machine 20. Hotelling, Harold (1990). "Stability in Competition". In Darnell, Adrian C. (ed.). The Collected Economics Articles of Harold Hotelling. Springer. pp. 51, 52. ISBN 978-3-540-97011-8. OCLC 20217006. Archived from the original on 2023-07-01. Retrieved 2008-08-21. 21. "Antoine Augustin Cournot, 1801-1877". The History of Economic Thought Website. The New School for Social Research. Archived from the original on 2000-07-09. Retrieved 2008-08-21. 22. Gibbons, Robert (1992). Game Theory for Applied Economists. Princeton, New Jersey: Princeton University Press. pp. 14, 15. ISBN 978-0-691-00395-5. 23. Nicola, p. 9-12 24. Edgeworth, Francis Ysidro (September 5, 1889). "The Mathematical Theory of Political Economy: Review of Léon Walras, Éléments d'économie politique pure" (PDF). Nature. 40 (1036): 434–436. doi:10.1038/040434a0. ISSN 0028-0836. S2CID 21004543. Archived from the original (PDF) on April 11, 2003. Retrieved 2008-08-21. 25. Nicholson, Walter; Snyder, Christopher, p. 350-353. 26. Dixon, Robert. "Walras Law and Macroeconomics". Walras Law Guide. Department of Economics, University of Melbourne. Archived from the original on April 17, 2008. Retrieved 2008-09-28. 27. Dixon, Robert. "A Formal Proof of Walras Law". Walras Law Guide. Department of Economics, University of Melbourne. Archived from the original on April 30, 2008. Retrieved 2008-09-28. 28. Rima, Ingrid H. (1977). "Neoclassicism and Dissent 1890-1930". In Weintraub, Sidney (ed.). Modern Economic Thought. University of Pennsylvania Press. pp. 10, 11. ISBN 978-0-8122-7712-8. Archived from the original on 2023-07-01. Retrieved 2021-05-31. 29. Heilbroner, Robert L. (1999) [1953]. The Worldly Philosophers (Seventh ed.). New York: Simon and Schuster. pp. 172–175, 313. ISBN 978-0-684-86214-9. Archived from the original on 2023-07-01. Retrieved 2020-05-28. 30. 
Edgeworth, Francis Ysidro (1961) [1881]. Mathematical Psychics. London: Kegan Paul [A. M. Kelley]. pp. 15–19. Archived from the original on 2023-07-01. Retrieved 2020-05-28. 31. Nicola, pp. 14–15, 258–261 32. Bowley, Arthur Lyon (1960) [1924]. The Mathematical Groundwork of Economics: an Introductory Treatise. Oxford: Clarendon Press [Kelly]. Archived from the original on 2023-07-01. Retrieved 2020-05-28. 33. Gillies, D. B. (1969). "Solutions to general non-zero-sum games". In Tucker, A. W.; Luce, R. D. (eds.). Contributions to the Theory of Games. Annals of Mathematics. Vol. 40. Princeton, New Jersey: Princeton University Press. pp. 47–85. ISBN 978-0-691-07937-0. Archived from the original on 2023-07-01. Retrieved 2020-05-28. 34. Moss, Lawrence S. (2003). "The Seligman-Edgeworth Debate about the Analysis of Tax Incidence: The Advent of Mathematical Economics, 1892–1910". History of Political Economy. 35 (2): 207, 212, 219, 234–237. doi:10.1215/00182702-35-2-205. ISSN 0018-2702. 35. Hotelling, Harold (1990). "Note on Edgeworth's Taxation Phenomenon and Professor Garver's Additional Condition on Demand Functions". In Darnell, Adrian C. (ed.). The Collected Economics Articles of Harold Hotelling. Springer. pp. 94–122. ISBN 978-3-540-97011-8. OCLC 20217006. Archived from the original on 2023-07-01. Retrieved 2008-08-26. 36. Herstein, I.N. (October 1953). "Some Mathematical Methods and Techniques in Economics". Quarterly of Applied Mathematics. 11 (3): 249–262. doi:10.1090/qam/60205. ISSN 1552-4485. • Weintraub, E. Roy (2008). "mathematics and economics", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2013-05-16 at the Wayback Machine. • _____ (2002). How Economics Became a Mathematical Science. Duke University Press. Description and preview Archived 2023-06-04 at the Wayback Machine. 37. Nicholson, Walter; Snyder, Christopher (2007). "General Equilibrium and Welfare". Intermediate Microeconomics and Its Applications (10th ed.). Thompson. pp. 364, 365. ISBN 978-0-324-31968-2. • Jolink, Albert (2006). "What Went Wrong with Walras?". In Backhaus, Juergen G.; Maks, J.A. Hans (eds.). From Walras to Pareto. The European Heritage in Economics and the Social Sciences. Vol. IV. Springer. pp. 69–80. doi:10.1007/978-0-387-33757-9_6. ISBN 978-0-387-33756-2. • Blaug, Mark (2007). "The Fundamental Theorems of Modern Welfare Economics, Historically Contemplated". History of Political Economy. 39 (2): 186–188. doi:10.1215/00182702-2007-001. ISSN 0018-2702. S2CID 154074343. Archived from the original on 2023-07-01. Retrieved 2020-01-27. 38. Blaug (2007), pp. 185, 187 39. Metzler, Lloyd (1948). "Review of Foundations of Economic Analysis". American Economic Review. 38 (5): 905–910. ISSN 0002-8282. JSTOR 1811704. 40. Neumann, J. von (1937). "Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes", Ergebnisse eines Mathematischen Kolloquiums, 8, pp. 73–83, translated and published in 1945-46, as "A Model of General Equilibrium", Review of Economic Studies, 13, pp. 1–9. 41. For this problem to have a unique solution, it suffices that the nonnegative matrices A and B satisfy an irreducibility condition, generalizing that of the Perron–Frobenius theorem of nonnegative matrices, which considers the (simplified) eigenvalue problem (A − λI)q = 0, where the nonnegative matrix A must be square and where the diagonal matrix I is the identity matrix.
Von Neumann's irreducibility condition was called the "whales and wranglers" hypothesis by David Champernowne, who provided a verbal and economic commentary on the English translation of von Neumann's article. Von Neumann's hypothesis implied that every economic process used a positive amount of every economic good. Weaker "irreducibility" conditions were given by David Gale and by John Kemeny, Oskar Morgenstern, and Gerald L. Thompson in the 1950s and then by Stephen M. Robinson in the 1970s. 42. David Gale. The theory of linear economic models. McGraw-Hill, New York, 1960. 43. Morgenstern, Oskar; Thompson, Gerald L. (1976). Mathematical theory of expanding and contracting economies. Lexington Books. Lexington, Massachusetts: D. C. Heath and Company. pp. xviii+277. 44. Alexander Schrijver, Theory of Linear and Integer Programming. John Wiley & Sons, 1998, ISBN 0-471-98232-6. • Rockafellar, R. Tyrrell (1967). Monotone processes of convex and concave type. Memoirs of the American Mathematical Society. Providence, R.I.: American Mathematical Society. pp. i+74. • Rockafellar, R. T. (1974). "Convex algebra and duality in dynamic models of production". In Josef Loz; Maria Loz (eds.). Mathematical models in economics (Proc. Sympos. and Conf. von Neumann Models, Warsaw, 1972). Amsterdam: North-Holland and Polish Academy of Sciences (PAN). pp. 351–378. • Rockafellar, R. T. (1997) [1970]. Convex analysis. Princeton, New Jersey: Princeton University Press. 45. Kenneth Arrow, Paul Samuelson, John Harsanyi, Sidney Afriat, Gerald L. Thompson, and Nicholas Kaldor. (1989). Mohammed Dore; Sukhamoy Chakravarty; Richard Goodwin (eds.). John Von Neumann and modern economics. Oxford: Clarendon. p. 261. 46. Chapter 9.1 "The von Neumann growth model" (pages 277–299): Yinyu Ye. Interior point algorithms: Theory and analysis. Wiley. 1997. 47. Screpanti, Ernesto; Zamagni, Stefano (1993). An Outline of the History of Economic Thought. New York: Oxford University Press. pp. 288–290. ISBN 978-0-19-828370-6. OCLC 57281275. 48. David Gale. The theory of linear economic models. McGraw-Hill, New York, 1960. 49. Morgenstern, Oskar; Thompson, Gerald L. (1976). Mathematical theory of expanding and contracting economies. Lexington Books. Lexington, Massachusetts: D. C. Heath and Company. pp. xviii+277. 50. "The Nature of Mathematical Programming", Mathematical Programming Glossary, INFORMS Computing Society. 51. Schmedders, Karl (2008). "numerical optimization methods in economics", The New Palgrave Dictionary of Economics, 2nd Edition, v. 6, pp. 138–57. Abstract. Archived 2017-08-11 at the Wayback Machine 52. Robbins, Lionel (1935, 2nd ed.). An Essay on the Nature and Significance of Economic Science, Macmillan, p. 16. 53. Blume, Lawrence E. (2008). "duality", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. Archived 2017-02-02 at the Wayback Machine 54. Dixit, A. K. ([1976] 1990). Optimization in Economic Theory, 2nd ed., Oxford. Description Archived 2023-07-01 at the Wayback Machine and contents preview Archived 2023-07-01 at the Wayback Machine. • Samuelson, Paul A., 1998. "How Foundations Came to Be", Journal of Economic Literature, 36(3), pp. 1375–1386. • _____ (1970). "Maximum Principles in Analytical Economics" Archived 2012-10-11 at the Wayback Machine, Nobel Prize lecture. • Allan M. Feldman (2008). "welfare economics", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-08-11 at the Wayback Machine.
• Mas-Colell, Andreu, Michael D. Whinston, and Jerry R. Green (1995), Microeconomic Theory, Chapter 16. Oxford University Press, ISBN 0-19-510268-1. Description Archived 2012-01-26 at the Wayback Machine and contents Archived 2012-01-26 at the Wayback Machine . • Geanakoplos, John ([1987] 2008). "Arrow–Debreu model of general equilibrium", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-08-11 at the Wayback Machine. • Arrow, Kenneth J., and Gérard Debreu (1954). "Existence of an Equilibrium for a Competitive Economy", Econometrica 22(3), pp. 265-290. • Scarf, Herbert E. (2008). "computation of general equilibria", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. Archived 2009-05-23 at the Wayback Machine • Kubler, Felix (2008). "computation of general equilibria (new developments)", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. Archived 2017-08-11 at the Wayback Machine 55. Nicola, p. 133 56. Dorfman, Robert, Paul A. Samuelson, and Robert M. Solow (1958). Linear Programming and Economic Analysis. McGraw–Hill. Chapter-preview links. Archived 2023-07-01 at the Wayback Machine 57. M. Padberg, Linear Optimization and Extensions, Second Edition, Springer-Verlag, 1999. 58. Dantzig, George B. ([1987] 2008). "linear programming", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-08-11 at the Wayback Machine. • Intriligator, Michael D. (2008). "nonlinear programming", The New Palgrave Dictionary of Economics, 2nd Edition. TOC Archived 2016-03-04 at the Wayback Machine. • Blume, Lawrence E. (2008). "convex programming", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-10-18 at the Wayback Machine. • Kuhn, H. W.; Tucker, A. W. (1951). "Nonlinear programming". Proceedings of 2nd Berkeley Symposium. Berkeley: University of California Press. pp. 481–492. • Bertsekas, Dimitri P. (1999). Nonlinear Programming (Second ed.). Cambridge, Massachusetts.: Athena Scientific. ISBN 978-1-886529-00-7. • Vapnyarskii, I.B. (2001) [1994], "Lagrange multipliers", Encyclopedia of Mathematics, EMS Press. • Lasdon, Leon S. (1970). Optimization theory for large systems. Macmillan series in operations research. New York: The Macmillan Company. pp. xi+523. MR 0337317. • Lasdon, Leon S. (2002). Optimization theory for large systems (reprint of the 1970 Macmillan ed.). Mineola, New York: Dover Publications, Inc. pp. xiii+523. MR 1888251. • Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1993). "XII Abstract duality for practitioners". Convex analysis and minimization algorithms, Volume II: Advanced theory and bundle methods. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Vol. 306. Berlin: Springer-Verlag. pp. 136–193 (and Bibliographical comments on pp. 334–335). ISBN 978-3-540-56852-0. MR 1295240. 59. Lemaréchal, Claude (2001). "Lagrangian relaxation". In Michael Jünger; Denis Naddef (eds.). Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15–19, 2000. Lecture Notes in Computer Science. Vol. 2241. Berlin: Springer-Verlag. pp. 112–156. doi:10.1007/3-540-45586-8_4. ISBN 978-3-540-42877-0. MR 1900016. S2CID 9048698. 60. Pontryagin, L. S.; Boltyanski, V. G., Gamkrelidze, R. V., Mischenko, E. F. (1962). The Mathematical Theory of Optimal Processes. New York: Wiley. ISBN 9782881240775. Archived from the original on 2023-07-01. 
Retrieved 2015-06-27. • Zelikin, M. I. ([1987] 2008). "Pontryagin's principle of optimality", The New Palgrave Dictionary of Economics, 2nd Edition. Preview link Archived 2017-08-11 at the Wayback Machine. • Martos, Béla (1987). "control and coordination of economic activity", The New Palgrave: A Dictionary of Economics. Description link Archived 2016-03-06 at the Wayback Machine. • Brock, W. A. (1987). "optimal control and economic dynamics", The New Palgrave: A Dictionary of Economics. Outline Archived 2017-08-11 at the Wayback Machine. • Shell, K., ed. (1967). Essays on the Theory of Optimal Economic Growth. Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-19036-7. 61. Stokey, Nancy L. and Robert E. Lucas with Edward Prescott (1989). Recursive Methods in Economic Dynamics, Harvard University Press, chapter 5. Description Archived 2017-08-11 at the Wayback Machine and chapter-preview links Archived 2023-07-01 at the Wayback Machine. 62. Malliaris, A.G. (2008). "stochastic optimal control", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-10-18 at the Wayback Machine. • Arrow, K. J.; Kurz, M. (1970). Public Investment, the Rate of Return, and Optimal Fiscal Policy. Baltimore, Maryland: The Johns Hopkins Press. ISBN 978-0-8018-1124-1. Abstract. Archived 2013-03-09 at the Wayback Machine • Sethi, S. P.; Thompson, G. L. (2000). Optimal Control Theory: Applications to Management Science and Economics, Second Edition. New York: Springer. ISBN 978-0-7923-8608-7. Scroll to chapter-preview links. Archived 2023-07-01 at the Wayback Machine 63. Andrew McLennan, 2008. "fixed point theorems", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2016-03-06 at the Wayback Machine. 64. Weintraub, E. Roy (1977). "General Equilibrium Theory". In Weintraub, Sidney (ed.). Modern Economic Thought. University of Pennsylvania Press. pp. 107–109. ISBN 978-0-8122-7712-8. Archived from the original on 2023-07-01. Retrieved 2020-05-28. • Arrow, Kenneth J.; Debreu, Gérard (1954). "Existence of an equilibrium for a competitive economy". Econometrica. 22 (3): 265–290. doi:10.2307/1907353. ISSN 0012-9682. JSTOR 1907353. 65. Kantorovich, Leonid, and Victor Polterovich (2008). "Functional analysis", in S. Durlauf and L. Blume, ed., The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. Archived 2016-03-03 at the Wayback Machine. Palgrave Macmillan. 66. Kantorovich, L. V. (1990). "My journey in science (supposed report to the Moscow Mathematical Society)" [expanding Russian Math. Surveys 42 (1987), no. 2, pp. 233–270]. In Lev J. Leifman (ed.). Functional analysis, optimization, and mathematical economics: A collection of papers dedicated to the memory of Leonid Vitalʹevich Kantorovich. New York: The Clarendon Press, Oxford University Press. pp. 8–45. ISBN 978-0-19-505729-4. MR 0898626. 67. Page 406: Polyak, B. T. (2002). "History of mathematical programming in the USSR: Analyzing the phenomenon (Chapter 3 The pioneer: L. V. Kantorovich, 1912–1986, pp. 405–407)". Mathematical Programming. Series B. 91 (ISMP 2000, Part 1 (Atlanta, GA), number 3): 401–416. doi:10.1007/s101070100258. MR 1888984. S2CID 13089965. 68. "Leonid Vitaliyevich Kantorovich — Prize Lecture ("Mathematics in economics: Achievements, difficulties, perspectives")". Nobelprize.org. Archived from the original on 14 December 2010. Retrieved 12 Dec 2010. 69.
Aliprantis, Charalambos D.; Brown, Donald J.; Burkinshaw, Owen (1990). Existence and optimality of competitive equilibria. Berlin: Springer–Verlag. pp. xii+284. ISBN 978-3-540-52866-1. MR 1075992. 70. Rockafellar, R. Tyrrell. Conjugate duality and optimization. Lectures given at the Johns Hopkins University, Baltimore, Maryland, June, 1973. Conference Board of the Mathematical Sciences Regional Conference Series in Applied Mathematics, No. 16. Society for Industrial and Applied Mathematics, Philadelphia, Pa., 1974. vi+74 pp. 71. Lester G. Telser and Robert L. Graves (1972). Functional Analysis in Mathematical Economics: Optimization Over Infinite Horizons. University of Chicago Press, ISBN 978-0-226-79190-6. 72. Neumann, John von, and Oskar Morgenstern (1944) Theory of Games and Economic Behavior, Princeton. 73. Mas-Colell, Andreu (1985). The Theory of general economic equilibrium: A differentiable approach. Econometric Society monographs. Cambridge UP. ISBN 978-0-521-26514-0. MR 1113262. 74. Yves Balasko. Foundations of the Theory of General Equilibrium, 1988, ISBN 0-12-076975-1. 75. Creedy, John (2008). "Edgeworth, Francis Ysidro (1845–1926)", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-08-11 at the Wayback Machine. • Nash, John F., Jr. (1950). "The Bargaining Problem", Econometrica, 18(2), pp. 155-162 Archived 2016-03-04 at the Wayback Machine. • Serrano, Roberto (2008). "bargaining", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-08-11 at the Wayback Machine. • Smith, Vernon L. (1992). "Game Theory and Experimental Economics: Beginnings and Early Influences", in E. R. Weintraub, ed., Towards a History of Game Theory, pp. 241–282. Archived 2023-07-01 at the Wayback Machine. • _____ (2001). "Experimental Economics", International Encyclopedia of the Social & Behavioral Sciences, pp. 5100–5108. Abstract Archived 2018-10-14 at the Wayback Machine per sect. 1.1 & 2.1. • Plott, Charles R., and Vernon L. Smith, ed. (2008). Handbook of Experimental Economics Results, v. 1, Elsevier, Part 4, Games, ch. 45-66 preview links. • Shubik, Martin (2002). "Game Theory and Experimental Gaming", in R. Aumann and S. Hart, ed., Handbook of Game Theory with Economic Applications, Elsevier, v. 3, pp. 2327–2351. Abstract Archived 2018-11-07 at the Wayback Machine. 76. From The New Palgrave Dictionary of Economics (2008), 2nd Edition: • Gul, Faruk. "behavioural economics and game theory." Abstract. Archived 2017-08-07 at the Wayback Machine • Camerer, Colin F. "behavioral game theory." Abstract. Archived 2011-11-23 at the Wayback Machine • Rasmusen, Eric (2007). Games and Information, 4th ed. Description Archived 2017-06-24 at the Wayback Machine and chapter-preview links. Archived 2023-07-01 at the Wayback Machine • Aumann, R., and S. Hart, ed. (1992, 2002). Handbook of Game Theory with Economic Applications v. 1, links at ch. 3-6 Archived 2017-08-16 at the Wayback Machine and v. 3, ch. 43 Archived 2018-10-14 at the Wayback Machine. • Tirole, Jean (1988). The Theory of Industrial Organization, MIT Press. Description and chapter-preview links, pp. vii-ix, "General Organization", pp. 5-6, and "Non-Cooperative Game Theory: A User's Guide Manual", ch. 11, pp. 423-59. • Bagwell, Kyle, and Asher Wolinsky (2002). "Game theory and Industrial Organization", ch. 49, Handbook of Game Theory with Economic Applications, v. 3, pp. 1851–1895. Archived 2016-01-02 at the Wayback Machine. • Shubik, Martin (1981).
"Game Theory Models and Methods in Political Economy", in Handbook of Mathematical Economics,, v. 1, pp. 285-330. • The New Palgrave Dictionary of Economics (2008), 2nd Edition:      Myerson, Roger B. "mechanism design." Abstract. Archived 2011-11-23 at the Wayback Machine      _____. "revelation principle." Abstract. Archived 2013-05-16 at the Wayback Machine      Sandholm, Tuomas. "computing in mechanism design." Abstract. Archived 2011-11-23 at the Wayback Machine • Nisan, Noam, and Amir Ronen (2001). "Algorithmic Mechanism Design", Games and Economic Behavior, 35(1-2), pp. 166–196 Archived 2018-10-14 at the Wayback Machine. • Nisan, Noam, et al., ed. (2007). Algorithmic Game Theory, Cambridge University Press. Description Archived 2012-05-05 at the Wayback Machine. • Halpern, Joseph Y. (2008). "computer science and game theory", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2017-11-05 at the Wayback Machine.          • Shoham, Yoav (2008). "Computer Science and Game Theory", Communications of the ACM, 51(8), pp. 75-79 Archived 2012-04-26 at the Wayback Machine.          • Roth, Alvin E. (2002). "The Economist as Engineer: Game Theory, Experimentation, and Computation as Tools for Design Economics", Econometrica, 70(4), pp. 1341–1378. • Kirman, Alan (2008). "economy as a complex system", The New Palgrave Dictionary of Economics , 2nd Edition. Abstract Archived 2017-08-11 at the Wayback Machine. • Tesfatsion, Leigh (2003). "Agent-based Computational Economics: Modeling Economies as Complex Adaptive Systems", Information Sciences, 149(4), pp. 262-268. 77. Scott E. Page (2008), "agent-based models", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2018-02-10 at the Wayback Machine. • Holland, John H., and John H. Miller (1991). "Artificial Adaptive Agents in Economic Theory", American Economic Review, 81(2), pp. 365-370 Archived 2011-01-05 at the Wayback Machine p. 366. • Arthur, W. Brian, 1994. "Inductive Reasoning and Bounded Rationality", American Economic Review, 84(2), pp. 406-411. • Schelling, Thomas C. (1978 [2006]). Micromotives and Macrobehavior, Norton. Description Archived 2017-11-02 at the Wayback Machine, preview Archived 2023-07-01 at the Wayback Machine. • Sargent, Thomas J. (1994). Bounded Rationality in Macroeconomics, Oxford. Description and chapter-preview 1st-page links Archived 2023-07-01 at the Wayback Machine. • Judd, Kenneth L. (2006). "Computationally Intensive Analyses in Economics", Handbook of Computational Economics, v. 2, ch. 17, Introduction, p. 883. Pp. 881- 893. Pre-pub PDF Archived 2022-01-21 at the Wayback Machine.   • _____ (1998). Numerical Methods in Economics, MIT Press. Links to description and chapter previews. • Tesfatsion, Leigh (2002). "Agent-Based Computational Economics: Growing Economies from the Bottom Up", Artificial Life, 8(1), pp.55-82. Abstract Archived 2020-03-06 at the Wayback Machine and pre-pub PDF.   • _____ (1997). "How Economists Can Get Alife", in W. B. Arthur, S. Durlauf, and D. Lane, eds., The Economy as an Evolving Complex System, II, pp. 533–564. Addison-Wesley. Pre-pub PDF Archived 2012-04-15 at the Wayback Machine. 78. Tesfatsion, Leigh (2006), "Agent-Based Computational Economics: A Constructive Approach to Economic Theory", ch. 16, Handbook of Computational Economics, v. 2, part 2, ACE study of economic system. Abstract Archived 2018-08-09 at the Wayback Machine and pre-pub PDF Archived 2017-08-11 at the Wayback Machine. 79. Axelrod, Robert (1997). 
The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration, Princeton. Description Archived 2018-01-02 at the Wayback Machine, contents Archived 2018-01-02 at the Wayback Machine, and preview Archived 2023-07-01 at the Wayback Machine. • Leombruni, Roberto, and Matteo Richiardi, ed. (2004), Industry and Labor Dynamics: The Agent-Based Computational Economics Approach. World Scientific Publishing ISBN 981-256-100-5. Description Archived 2010-07-27 at the Wayback Machine and chapter-preview links Archived 2023-07-01 at the Wayback Machine. • Epstein, Joshua M. (2006). "Growing Adaptive Organizations: An Agent-Based Computational Approach", in Generative Social Science: Studies in Agent-Based Computational Modeling, pp. 309–344. Archived 2023-07-01 at the Wayback Machine. Description Archived 2012-01-26 at the Wayback Machine and abstract Archived 2016-10-19 at the Wayback Machine. 80. Klos, Tomas B., and Bart Nooteboom, 2001. "Agent-based Computational Transaction Cost Economics", Journal of Economic Dynamics and Control 25(3–4), pp. 503–52. Abstract. Archived 2020-06-22 at the Wayback Machine 81. Axtell, Robert (2005). "The Complexity of Exchange", Economic Journal, 115(504, Features), pp. F193-F210 Archived 2017-08-08 at the Wayback Machine. 82. Sandholm, Tuomas W., and Victor R. Lesser (2001). "Leveled Commitment Contracts and Strategic Breach", Games and Economic Behavior, 35(1-2), pp. 212-270 Archived 2020-12-04 at the Wayback Machine. • Colander, David, Peter Howitt, Alan Kirman, Axel Leijonhufvud, and Perry Mehrling (2008). "Beyond DSGE Models: Toward an Empirically Based Macroeconomics", American Economic Review, 98(2), pp. 236-240. Pre-pub PDF. • Sargent, Thomas J. (1994). Bounded Rationality in Macroeconomics, Oxford. Description and chapter-preview 1st-page links Archived 2023-07-01 at the Wayback Machine. 83. Tesfatsion, Leigh (2006), "Agent-Based Computational Economics: A Constructive Approach to Economic Theory", ch. 16, Handbook of Computational Economics, v. 2, pp. 832–865. Abstract Archived 2018-08-09 at the Wayback Machine and pre-pub PDF Archived 2017-08-11 at the Wayback Machine. 84. Smith, Vernon L. (2008). "experimental economics", The New Palgrave Dictionary of Economics, 2nd Edition. Abstract Archived 2012-01-19 at the Wayback Machine. 85. Duffy, John (2006). "Agent-Based Models and Human Subject Experiments", ch. 19, Handbook of Computational Economics, v. 2, pp. 949–101. Abstract Archived 2015-09-24 at the Wayback Machine. • Namatame, Akira, and Takao Terano (2002). "The Hare and the Tortoise: Cumulative Progress in Agent-based Simulation", in Agent-based Approaches in Economic and Social Complex Systems. pp. 3–14, IOS Press. Description Archived 2012-04-05 at the Wayback Machine. • Fagiolo, Giorgio, Alessio Moneta, and Paul Windrum (2007). "A Critical Guide to Empirical Validation of Agent-Based Models in Economics: Methodologies, Procedures, and Open Problems", Computational Economics, 30, pp. 195–226. Archived 2023-07-01 at the Wayback Machine. • Tesfatsion, Leigh (2006). "Agent-Based Computational Economics: A Constructive Approach to Economic Theory", ch. 16, Handbook of Computational Economics, v. 2, [pp. 831–880] sect. 5. Abstract Archived 2018-08-09 at the Wayback Machine and pre-pub PDF Archived 2017-08-11 at the Wayback Machine. • Judd, Kenneth L. (2006). "Computationally Intensive Analyses in Economics", Handbook of Computational Economics, v. 2, ch. 17, pp. 881–893. Pre-pub PDF Archived 2022-01-21 at the Wayback Machine.
• Tesfatsion, Leigh, and Kenneth L. Judd, ed. (2006). Handbook of Computational Economics, v. 2. Description Archived 2012-03-06 at the Wayback Machine and chapter-preview links. 86. Brockhaus, Oliver; Farkas, Michael; Ferraris, Andrew; Long, Douglas; Overhaus, Marcus (2000). Equity Derivatives and Market Risk Models. Risk Books. pp. 13–17. ISBN 978-1-899332-87-8. Archived from the original on 2023-07-01. Retrieved 2008-08-17. 87. Liner, Gaines H. (2002). "Core Journals in Economics". Economic Inquiry. 40 (1): 140. doi:10.1093/ei/40.1.138. 88. Stigler, George J.; Stigler, Steven J.; Friedland, Claire (April 1995). "The Journals of Economics". The Journal of Political Economy. 103 (2): 331–359. doi:10.1086/261986. ISSN 0022-3808. JSTOR 2138643. S2CID 154780520. Archived from the original on 2023-07-01. Retrieved 2020-01-27. 89. Stigler et al. reviewed journal articles in core economic journals (as defined by the authors but meaning generally non-specialist journals) throughout the 20th century. Journal articles which at any point used geometric representation or mathematical notation were noted as using that level of mathematics as their "highest level of mathematical technique". The authors refer to "verbal techniques" as those which conveyed the subject of the piece without notation from geometry, algebra or calculus. 90. Stigler et al., p. 342 91. Sutter, Daniel and Rex Pjesky. "Where Would Adam Smith Publish Today?: The Near Absence of Math-free Research in Top Journals" (May 2007). Archived 2017-10-10 at the Wayback Machine 92. Arrow, Kenneth J. (April 1960). "The Work of Ragnar Frisch, Econometrician". Econometrica. 28 (2): 175–192. doi:10.2307/1907716. ISSN 0012-9682. JSTOR 1907716. 93. Bjerkholt, Olav (July 1995). "Ragnar Frisch, Editor of Econometrica 1933-1954". Econometrica. 63 (4): 755–765. doi:10.2307/2171799. ISSN 0012-9682. JSTOR 1906940. 94. Lange, Oskar (1945). "The Scope and Method of Economics". Review of Economic Studies. 13 (1): 19–32. doi:10.2307/2296113. ISSN 0034-6527. JSTOR 2296113. S2CID 4140287. 95. Aldrich, John (January 1989). "Autonomy". Oxford Economic Papers. 41 (1, History and Methodology of Econometrics): 15–34. doi:10.1093/oxfordjournals.oep.a041889. ISSN 0030-7653. JSTOR 2663180. 96. Epstein, Roy J. (1987). A History of Econometrics. Contributions to Economic Analysis. North-Holland. pp. 13–19. ISBN 978-0-444-70267-8. OCLC 230844893. 97. Colander, David C. (2004). "The Strange Persistence of the IS-LM Model". History of Political Economy. 36 (Annual Supplement): 305–322. CiteSeerX 10.1.1.692.6446. doi:10.1215/00182702-36-Suppl_1-305. ISSN 0018-2702. S2CID 6705939. 98. Brems, Hans (October 1975). "Marshall on Mathematics". Journal of Law and Economics. 18 (2): 583–585. doi:10.1086/466825. ISSN 0022-2186. JSTOR 725308. S2CID 154881432. 99. Frigg, R.; Hartmann, S. (February 27, 2006). Edward N. Zalta (ed.). Models in Science. Stanford Encyclopedia of Philosophy. Stanford, California: The Metaphysics Research Lab. ISSN 1095-5054. Archived from the original on 2007-06-09. Retrieved 2008-08-16. 100. "Greg Mankiw's Blog: An Exercise for My Readers". Archived from the original on 2019-08-07. Retrieved 2019-08-07. 101. Cochrane, John H. (2017-10-21). "The Grumpy Economist: Greg's algebra". The Grumpy Economist. Archived from the original on 2023-07-01. Retrieved 2019-08-07. 102. Ekelund, Robert; Hébert, Robert (2014). A History of Economic Theory & Method (6th ed.). Long Grove, IL: Waveland Press. pp.
574–575. 103. Hayek, Friedrich (September 1945). "The Use of Knowledge in Society". American Economic Review. 35 (4): 519–530. JSTOR 1809376. 104. Heilbroner, Robert (May–June 1999). "The end of the Dismal Science?". Challenge Magazine. Archived from the original on 2008-12-10. 105. Beed & Kane, p. 584 106. Boland, L. A. (2007). "Seven Decades of Economic Methodology". In I. C. Jarvie; K. Milford; D.W. Miller (eds.). Karl Popper: A Centenary Assessment. London: Ashgate Publishing. p. 219. ISBN 978-0-7546-5375-2. Retrieved 2008-06-10. 107. Beed, Clive; Kane, Owen (1991). "What Is the Critique of the Mathematization of Economics?". Kyklos. 44 (4): 581–612. doi:10.1111/j.1467-6435.1991.tb01798.x. 108. Friedman, Milton (1953). Essays in Positive Economics. Chicago: University of Chicago Press. pp. 30, 33, 41. ISBN 978-0-226-26403-5. 109. Keynes, John Maynard (1936). The General Theory of Employment, Interest and Money. Cambridge: Macmillan. p. 297. ISBN 978-0-333-10729-4. Archived from the original on 2019-05-28. Retrieved 2009-04-30. 110. Paul A. Samuelson (1952). "Economic Theory and Mathematics — An Appraisal", American Economic Review, 42(2), pp. 56, 64–65. 111. D.W. Bushaw and R.W. Clower (1957). Introduction to Mathematical Economics, p. vii. Archived 2022-03-18 at the Wayback Machine 112. Solow, Robert M. (20 March 1988). "The Wide, Wide World Of Wealth (The New Palgrave: A Dictionary of Economics. Edited by John Eatwell, Murray Milgate and Peter Newman. Four volumes. 4,103 pp. New York: Stockton Press. $650)". New York Times. Archived from the original on 1 August 2017. Retrieved 11 February 2017. Further reading • Alpha C. Chiang and Kevin Wainwright, [1967] 2005. Fundamental Methods of Mathematical Economics, McGraw-Hill Irwin. Contents. • E. Roy Weintraub, 1982. Mathematics for Economists, Cambridge. Contents. • Stephen Glaister, 1984. Mathematical Methods for Economists, 3rd ed., Blackwell. Contents. • Akira Takayama, 1985. Mathematical Economics, 2nd ed. Cambridge. Contents. • Nancy L. Stokey and Robert E. Lucas with Edward Prescott, 1989. Recursive Methods in Economic Dynamics, Harvard University Press. Description and chapter-preview links. • A. K. Dixit, [1976] 1990. Optimization in Economic Theory, 2nd ed., Oxford. Description and contents preview. • Kenneth L. Judd, 1998. Numerical Methods in Economics, MIT Press. Description and chapter-preview links. • Michael Carter, 2001. Foundations of Mathematical Economics, MIT Press. Contents. • Ferenc Szidarovszky and Sándor Molnár, 2002. Introduction to Matrix Theory: With Applications to Business and Economics, World Scientific Publishing. Description and preview. • D. Wade Hands, 2004. Introductory Mathematical Economics, 2nd ed. Oxford. Contents. • Giancarlo Gandolfo, [1997] 2009. Economic Dynamics, 4th ed., Springer. Description and preview. • John Stachurski, 2009. Economic Dynamics: Theory and Computation, MIT Press. Description and preview. External links
• Journal of Mathematical Economics Aims & Scope • Mathematical Economics and Financial Mathematics at Curlie • Erasmus Mundus Master QEM - Models and Methods of Quantitative Economics
Statistics education Statistics education is the practice of teaching and learning statistics, along with the associated scholarly research. Statistics is both a formal science and a practical theory of scientific inquiry, and both aspects are considered in statistics education. Education in statistics has concerns similar to those of education in other mathematical sciences, such as logic, mathematics, and computer science. At the same time, statistics is concerned with evidence-based reasoning, particularly with the analysis of data. Therefore, education in statistics has strong similarities to education in empirical disciplines like psychology and chemistry, in which education is closely tied to "hands-on" experimentation. Mathematicians and statisticians often work in a department of mathematical sciences (particularly at colleges and small universities). Statistics courses have sometimes been taught by non-statisticians, against the recommendations of some professional organizations of statisticians and of mathematicians. Statistics education research is an emerging field that grew out of different disciplines and is currently establishing itself as a unique field that is devoted to the improvement of teaching and learning statistics at all educational levels. Goals of statistics education Statistics educators have cognitive and noncognitive goals for students. For example, former American Statistical Association (ASA) President Katherine Wallman defined statistical literacy as including the cognitive abilities of understanding and critically evaluating statistical results as well as appreciating the contributions statistical thinking can make.[1][2] Cognitive goals In the text arising from the 2008 joint conference of the International Commission on Mathematical Instruction and the International Association for Statistical Education, editors Carmen Batanero, Gail Burrill, and Chris Reading (Universidad de Granada, Spain, Michigan State University, USA, and University of New England, Australia, respectively) note worldwide trends in curricula which reflect data-oriented goals. In particular, educators currently seek to have students: "design investigations; formulate research questions; collect data using observations, surveys, and experiments; describe and compare data sets; and propose and justify conclusions and predictions based on data."[3] The authors note the importance of developing statistical thinking and reasoning in addition to statistical knowledge. Although cognitive goals for statistics education increasingly focus on statistical literacy, statistical reasoning, and statistical thinking rather than on skills, computations, and procedures alone, there is no agreement about what these terms mean or how to assess these outcomes. A first attempt to define and distinguish between these three terms appears in the ARTIST website,[4] which was created by Garfield, delMas and Chance and has since been included in several publications.[5][6] Brief definitions of these terms are as follows: 1. Statistical literacy is being able to read and use basic statistical language and graphical representations to understand statistical information in the media and in daily life. 2. Statistical reasoning is being able to reason about and connect different statistical concepts and ideas, such as knowing how and why outliers affect statistical measures of center and variability. 3. Statistical thinking is the type of thinking used by statisticians when they encounter a statistical problem.
This involves thinking about the nature and quality of the data and where the data came from; choosing appropriate analyses and models; and interpreting the results in the context of the problem, given the constraints of the data. Further cognitive goals of statistics education vary across students' educational level and the contexts in which they expect to encounter statistics. Statisticians have proposed what they consider the most important statistical concepts for educated citizens. For example, Utts (2003) published seven areas of what every educated citizen should know, including understanding that "variability is normal" and how "coincidences… are not uncommon because there are so many possibilities."[7] Gal (2002) suggests adults in industrialized societies are expected to exercise statistical literacy, "the ability to interpret and critically evaluate statistical information… in diverse contexts, and the ability to… communicate understandings and concerns regarding the… conclusions."[8] Non-cognitive goals Non-cognitive outcomes include affective constructs such as attitudes, beliefs, emotions, dispositions, and motivation.[9] According to prominent researchers Gal & Ginsburg,[10] statistics educators should make it a priority to be aware of students' ideas, reactions, and feelings towards statistics and how these affect their learning. Beliefs Beliefs are defined as one's individually held ideas about statistics, about oneself as a learner of statistics, and about the social context of learning statistics.[11] Beliefs are distinct from attitudes in the sense that attitudes are relatively stable and intense feelings that develop over time in the context of experiences learning statistics. Students' web of beliefs provides a context for their approach towards their classroom experiences in statistics. Many students enter a statistics course with apprehension towards learning the subject, which works against the learning environment that the instructor is trying to establish. Therefore, it is important for instructors to have access to assessment instruments that can give an initial diagnosis of student beliefs and monitor beliefs during a course.[10] Frequently, assessment instruments have monitored beliefs and attitudes together. For examples of such instruments, see the attitudes section below. Dispositions Disposition has to do with the ways students question the data and approach a statistical problem. Disposition is one of the four dimensions in Wild and Pfannkuch's[12] framework for statistical thinking and includes the following elements: • Curiosity and Awareness: These traits are a part of the process of generating questions and generating ideas to explore and analyze data. • Engagement: Students will be most observant and aware in the areas they find most interesting. • Imagination: This trait is important for viewing a problem from different perspectives and coming up with possible explanations. • Scepticism: Critical thinking is important for receiving new ideas and information and evaluating the appropriateness of study design and analysis. • Being logical: The ability to detect when one idea follows from another is important for arriving at valid conclusions. • A propensity to seek deeper meaning: This means not taking everything at face value and being open to consider new ideas and dig deeper for information. Scheaffer states that a goal of statistics education is to have students see statistics broadly.
He developed a list of ways of viewing statistics that can lead to this broad view, describing them as follows:[13] • Statistics as number sense: Do I understand what the numbers mean? (seeing data as numbers in context, reading charts, graphs and tables, understanding numerical and graphical summaries of data, etc.) • Statistics as a way of understanding the world: Can I use existing data to help make decisions? (using census data, birth and death rates, disease rates, CPI, ratings, rankings, etc., to describe, decide and defend) • Statistics as organized problem solving: Can I design and carry out a study to answer specific questions? (pose problem, collect data according to a plan, analyze data, and draw conclusions from data) Attitudes Since students often experience math anxiety and negative opinions about statistics courses, various researchers have addressed attitudes and anxiety towards statistics. Some instruments have been developed to measure college students' attitudes towards statistics, and have been shown to have appropriate psychometric properties. Examples of such instruments include: • Survey of Attitudes Towards Statistics (SATS), developed by Schau, Stevens, Dauphinee, and Del Vecchio[14] • Attitude Toward Statistics Scale, developed by Wise[15] • Statistics Attitude Survey (SAS), developed by Roberts and Bilderback[16] Careful use of instruments such as these can help statistics instructors to learn about students' perception of statistics, including their anxiety towards learning statistics, the perceived difficulty of learning statistics, and their perceived usefulness of the subject.[17] Some studies have shown modest success at improving student attitudes in individual courses,[18][19] but no generalizable studies showing improvement in student attitudes have appeared. Nevertheless, one of the goals of statistics education is to make the study of statistics a positive experience for students and to bring in interesting and engaging examples and data that will motivate students. According to a fairly recent literature review,[17] improved student attitudes towards statistics can lead to better motivation and engagement, which also improves cognitive learning outcomes. Primary–secondary education level New Zealand In New Zealand, a new curriculum for statistics has been developed by Chris Wild and colleagues at the University of Auckland. Arguing that reasoning under the null hypothesis and the restrictions of normal theory are contrived and, given modern computing power, no longer necessary, they instead use comparative box plots and bootstrap resampling to introduce concepts of sampling variability and inference; a short code sketch illustrating the bootstrap idea appears below.[20] The developing curriculum also contains aspects of statistical literacy. United Kingdom In the United Kingdom, at least some statistics has been taught in schools since the 1930s.[21][22] At present, A-level qualifications (typically taken by 17- to 18-year-olds) are being developed in "Statistics" and "Further Statistics". The coverage of the former includes: Probability; Data Collection; Descriptive Statistics; Discrete Probability Distributions; Binomial Distribution; Poisson Distributions; Continuous Probability Distributions; The Normal Distribution; Estimation; Hypothesis Testing; Chi-Squared; Correlation and Regression.
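To make the resampling idea from the New Zealand curriculum concrete, the following minimal sketch computes a bootstrap interval for the difference in medians of two groups, the kind of comparison students would read off comparative box plots. It is an illustration only, not taken from the actual teaching materials: the data values, group names, and the bootstrap_diff_medians helper are all invented for the example.

import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical data: a measurement (say, reaction time in ms) for two
# groups of students. These numbers are invented purely for illustration.
group_a = [312, 298, 345, 301, 289, 330, 317, 295, 322, 308]
group_b = [285, 301, 276, 290, 268, 295, 282, 279, 304, 288]

observed_diff = statistics.median(group_a) - statistics.median(group_b)

def bootstrap_diff_medians(a, b, reps=10_000):
    """Resample each group with replacement and recompute the difference
    in medians, building an empirical sampling distribution for it."""
    diffs = []
    for _ in range(reps):
        resample_a = random.choices(a, k=len(a))  # sample with replacement
        resample_b = random.choices(b, k=len(b))
        diffs.append(statistics.median(resample_a) - statistics.median(resample_b))
    return sorted(diffs)

diffs = bootstrap_diff_medians(group_a, group_b)

# Percentile bootstrap interval: the middle 95% of resampled differences.
lower = diffs[int(0.025 * len(diffs))]
upper = diffs[int(0.975 * len(diffs))]

print(f"Observed difference in medians: {observed_diff:.1f} ms")
print(f"95% bootstrap interval: ({lower:.1f}, {upper:.1f}) ms")

If the resulting interval excludes zero, students can conclude that the gap visible between the two box plots is unlikely to be an artifact of sampling variability; this is the inferential step that normal-theory formulas would otherwise carry.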
The coverage of "Further Statistics" includes: Continuous Probability Distributions; Estimation; Hypothesis Testing; One Sample Tests; Hypothesis Testing; Two Sample Tests; Goodness of Fit Tests; Experimental Design; Analysis of Variance (Anova); Statistical Process Control; Acceptance Sampling. The Centre for Innovation in Mathematics Teaching (CIMT)[23] has online course notes for these sets of topics.[24] Revision notes for an existing qualification[25] indicate a similar coverage. At an earlier age (typically 15–16 years) GCSE qualifications in mathematics contain "Statistics and Probability" topics on: Probability; Averages; Standard Deviation; Sampling; Cumumulative Frequency Graphs (including median and quantiles); Representing Data; Histograms.[26] The UK's Office for National Statistics has a webpage[27] leading to material suitable for both teachers and students at school level. In 2004 the Smith inquiry made the following statement: "There is much concern and debate about the positioning of Statistics and Data Handling within the current mathematics GCSE, where it occupies some 25 per cent of the timetable allocation. On the one hand, there is widespread agreement that the Key Stage 4 curriculum is over-crowded and that the introduction of Statistics and Data Handling may have been at the expense of time needed for practising and acquiring fluency in core mathematical manipulations. Many in higher education mathematics and engineering departments take this view. On the other hand, there is overwhelming recognition, shared by the Inquiry, of the vital importance of Statistics and Data Handling skills both for a number of other academic disciplines and in the workplace. The Inquiry recommends that there be a radical re-look at this issue and that much of the teaching and learning of Statistics and Data Handling would be better removed from the mathematics timetable and integrated with the teaching and learning of other disciplines (e.g. biology or geography). The time restored to the mathematics timetable should be used for acquiring greater mastery of core mathematical concepts and operations."[28] United States In the United States, schooling has increased the use of probability and statistics, especially since the 1990s.[29] Summary statistics and graphs are taught in elementary school in many states. Topics in probability and statistical reasoning are taught in high school algebra (or mathematical science) courses; statistical reasoning has been examined in the SAT test since 1994. The College Board has developed an Advanced Placement course in statistics, which has provided a college-level course in statistics to hundreds of thousands of high school students, with the first examination happening in May 1997.[30] In 2007, the ASA endorsed the Guidelines for Assessment and Instruction in Statistics Education (GAISE), a two-dimensional framework for the conceptual understanding of statistics in Pre-K-12 students. The framework contains learning objectives for students at each conceptual level and provides pedagogical examples that are consistent with the conceptual levels. Estonia Estonia is piloting a new statistics curriculum developed by the Computer-Based Math foundation based around its principles of using computers as the primary tool of education.[31][32][33] in cooperation with the University of Tartu.[34] University level General Statistics is often taught in departments of mathematics or in departments of mathematical sciences. 
At the undergraduate level, statistics is often taught as a service course.
United Kingdom
By tradition in the UK, most professional statisticians are trained at the Master's level. A difficulty of recruiting strong undergraduates has been noted: "Very few undergraduates positively choose to study statistics degrees; most choose some statistics options within a mathematics programme, often to avoid the advanced pure and applied mathematics courses. My view is that statistics as a theoretical discipline is better taught late rather than early, whereas statistics as part of scientific methodology should be taught as part of science."[35]
In the United Kingdom, the teaching of statistics at university level was originally done within science departments that needed the topic to accompany the teaching of their own subjects, and departments of mathematics had limited coverage before the 1930s.[21] For the twenty years subsequent to this, while departments of mathematics had started to teach statistics, there was little realisation that essentially the same basic statistical methodology was being applied across a variety of sciences.[21] Statistics departments have had difficulty when they have been separated from mathematics departments.[35]
Psychologist Andy Field (recipient of the British Psychological Society Teaching and Book Award) has created an approach to statistics teaching and textbooks that goes beyond the printed page.[36]
United States
Enrollments in statistics have increased at community colleges and at four-year colleges and universities in the United States. At community colleges in the United States, mathematics has experienced increased enrollment since 1990, and the ratio of students enrolled in statistics to those enrolled in calculus rose from 56% in 1990 to 82% in 1995.[37] One of the ASA-endorsed GAISE reports focused on statistics education at the introductory college level. The report includes a brief history of the introductory statistics course and recommendations for how it should be taught.
In many colleges, a basic course in "statistics for non-statisticians" has required only algebra (and not calculus); for future statisticians, in contrast, the undergraduate exposure to statistics is highly mathematical.[nb 1] As undergraduates, future statisticians should have completed courses in multivariate calculus, linear algebra, computer programming, and a year of calculus-based probability and statistics. Students wanting to obtain a doctorate in statistics from "any of the better graduate programs in statistics" should also take "real analysis".[38] Laboratory courses in physics, chemistry and psychology also provide useful experiences with planning and conducting experiments and with analyzing data. The ASA recommends that undergraduate students consider obtaining a bachelor's degree in applied mathematics as preparation for entering a master's program in statistics.[nb 2]
Historically, professional degrees in statistics have been at the Master's level, although some students may qualify to work with a bachelor's degree and job-related experience or further self-study.[nb 3] Professional competence requires a background in mathematics, including at least multivariate calculus, linear algebra, and a year of calculus-based probability and statistics.[39] In the United States, a master's program in statistics requires courses in probability, mathematical statistics, and applied statistics (e.g., design of experiments, survey sampling, etc.).
For a doctoral degree in statistics, it has been traditional that students complete a course in measure-theoretic probability as well as courses in mathematical statistics. Such courses require a good course in real analysis, covering the proofs of the theory of calculus and topics like the uniform convergence of functions.[38][40] In recent decades, some departments have discussed allowing doctoral students to waive the course in measure-theoretic probability by demonstrating advanced skills in computer programming or scientific computing.[nb 4]
Who should teach statistics?
The question of what qualities are needed to teach statistics has been much discussed, and the discussion sometimes concentrates on the qualifications necessary for those undertaking such teaching. The question arises separately for teaching at school and at university level, partly because many more such teachers are needed at school level and partly because school teachers must cover a broad range of other topics within their overall duties. Given that "statistics" is often taught to non-scientists, opinions can range all the way from "statistics should be taught by statisticians", through "teaching of statistics is too mathematical", to the extreme that "statistics should not be taught by statisticians".[41]
Teaching at university level
In the United States especially, statisticians have long complained that many mathematics departments have assigned mathematicians (without statistical competence) to teach statistics courses, effectively giving "double blind" courses. The principle that college instructors should have qualifications and engagement with their academic discipline has long been violated in United States colleges and universities, according to generations of statisticians. For example, the journal Statistical Science reprinted "classic" articles on the teaching of statistics by non-statisticians by Harold Hotelling;[42][43][44] Hotelling's articles are followed by the comments of Kenneth J. Arrow, W. Edwards Deming, Ingram Olkin, David S. Moore, James V. Zidek, Shanti S. Gupta, Robert V. Hogg, Ralph A. Bradley, and Harold Hotelling, Jr. (an economist and son of Harold Hotelling).
Data on the teaching of statistics in the United States have been collected on behalf of the Conference Board of the Mathematical Sciences (CBMS). Examining data from 2000, Scheaffer and Stasny[45] reported:
By far the majority of instructors within statistics departments have at least a master’s degree in statistics or biostatistics (about 89% for doctoral departments and about 79% for master’s departments). In doctoral mathematics departments, however, only about 58% of statistics course instructors had at least a master’s degree in statistics or biostatistics as their highest degree earned. In master’s-level mathematics departments, the corresponding percentage was near 44%, and in bachelor’s-level departments only 19% of statistics course instructors had at least a master’s degree in statistics or biostatistics as their highest degree earned. As we expected, a large majority of instructors in statistics departments (83% for doctoral departments and 62% for master’s departments) held doctoral degrees in either statistics or biostatistics. The comparable percentages for instructors of statistics in mathematics departments were about 52% and 38%.
The principle that statistics instructors should have statistical competence has been affirmed by the guidelines of the Mathematical Association of America, which have been endorsed by the ASA. The unprofessional teaching of statistics by mathematicians (without qualifications in statistics) has been addressed in many articles.[46][47]
Teaching methods
See also: Mathematics education, Pedagogy, and Didactics
The literature on methods of teaching statistics is closely related to the literature on the teaching of mathematics for two reasons. First, statistics is often taught as part of the mathematics curriculum, by instructors trained in mathematics and working in a mathematics department. Second, statistical theory has often been taught as a mathematical theory rather than as the practical logic of science (the science that "puts chance to work", in Rao's phrase), and this has entailed an emphasis on formal and manipulative training, such as solving combinatorial problems involving red and green jelly beans. Statisticians have complained that mathematicians are prone to over-emphasize mathematical manipulations and probability theory and to under-emphasize questions of experimentation, survey methodology, exploratory data analysis, and statistical inference.[48]
In recent decades, there has been an increased emphasis on data analysis and scientific inquiry in statistics education. In the United Kingdom, the Smith inquiry Making Mathematics Count suggests teaching basic statistical concepts as part of the science curriculum, rather than as part of mathematics.[49] In the United States, the ASA's guidelines for undergraduate statistics specify that introductory statistics should emphasize the scientific methods of data collection, particularly randomized experiments and random samples;[39][50] further, the first course should review these topics when the theory of "statistical inference" is studied.[50] Similar recommendations occur for the Advanced Placement (AP) course in Statistics. The ASA and AP guidelines are followed by contemporary textbooks in the US, such as those by Freedman, Pisani & Purves (Statistics)[51] and by David S. Moore (Introduction to the Practice of Statistics with McCabe[52] and Statistics: Concepts and Controversies with Notz[53]) and by Watkins, Scheaffer & Cobb (Statistics: From Data to Decisions[54] and Statistics in Action[55]). Besides the emphasis on scientific inquiry in the content of beginning statistics, there has also been increased use of active learning in the conduct of the statistics classroom.[56]
Professional community
Associations
The International Statistical Institute (ISI) now has one section devoted to education, the International Association for Statistical Education (IASE), which runs the International Conference on Teaching Statistics every four years as well as IASE satellite conferences around ISI and ICMI meetings. The UK established the Royal Statistical Society Centre for Statistical Education, and the ASA now also has a Section on Statistical Education, focused mostly on statistics teaching at the elementary and secondary levels.
Conferences
In addition to the international gatherings of statistics educators at ICOTS every four years, the US hosts a US Conference on Teaching Statistics (USCOTS) every two years and has recently started an Electronic Conference on Teaching Statistics (eCOTS) to alternate with USCOTS.
Sessions on statistics education are also offered at many conferences in mathematics education, such as the International Congress on Mathematical Education, the National Council of Teachers of Mathematics, the Conference of the International Group for the Psychology of Mathematics Education, and the Mathematics Education Research Group of Australasia. The annual Joint Statistical Meetings (offered by the ASA and the Statistical Society of Canada) offer many sessions and roundtables on statistics education. The International Research Forums on Statistical Reasoning, Thinking, and Literacy offer scientific gatherings every two years and related publications in journals, CD-ROMs, and books on research in statistics education.
Graduate coursework and programs
Only three universities currently offer graduate programs in statistics education: the University of Granada,[57] the University of Minnesota,[58][59] and the University of Florida.[60] However, graduate students in a variety of disciplines (e.g., mathematics education, psychology, educational psychology) have been finding ways to complete dissertations on topics related to teaching and learning statistics. These dissertations are archived on the IASE web site.[61] Two main courses in statistics education that have been taught in a variety of settings and departments are a course on teaching statistics[62] and a course on statistics education research.[63] An ASA-sponsored workshop has established recommendations for additional graduate programs and courses.[64]
Software for learning
• Fathom: Dynamic Data Software
• TinkerPlots
• StatCrunch
Trends in Statistics Education
Teachers of statistics have been encouraged to explore new directions in curriculum content, pedagogy, and assessment. In an influential talk at USCOTS, researcher George Cobb presented an innovative approach to teaching statistics that put simulation, randomization, and bootstrapping techniques at the core of the college-level introductory course, in place of traditional content such as probability theory and the t-test.[65] Several teachers and curriculum developers have been exploring ways to introduce simulation, randomization, and bootstrapping as teaching tools for the secondary and postsecondary levels. Courses such as the University of Minnesota's CATALST,[66] Nathan Tintle and collaborators' Introduction to Statistical Investigations,[67] and the Lock team's Unlocking the Power of Data[68] are curriculum projects based on Cobb's ideas.
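To make the randomization approach concrete, here is a hedged sketch of a permutation test of the sort these curricula use in place of the t-test (NumPy is assumed; the two groups' scores are invented). The test asks directly: how often would chance assignment alone produce a difference as large as the one observed?

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented scores for a treatment group and a control group.
treatment = np.array([24.0, 27.5, 30.1, 28.4, 26.9, 31.2, 29.0])
control = np.array([22.1, 25.0, 24.3, 26.8, 23.5, 25.9, 24.7])

observed = treatment.mean() - control.mean()
pooled = np.concatenate([treatment, control])

# Re-randomize the group labels many times; under the null hypothesis of
# no treatment effect, every relabeling is equally likely.
diffs = np.empty(10_000)
for i in range(diffs.size):
    perm = rng.permutation(pooled)
    diffs[i] = perm[:treatment.size].mean() - perm[treatment.size:].mean()

# Two-sided p-value: the share of relabelings at least as extreme as the
# observed difference.
p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed difference = {observed:.2f}, p = {p_value:.3f}")
```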
Other researchers have been exploring the development of informal inferential reasoning as a way to use these methods to build a better understanding of statistical inference.[69][70][71]
Another recent direction is addressing the big data sets that increasingly affect, and are contributed to by, our daily lives. Statistician Rob Gould, creator of the dinner-and-theatre spectacular Data Cycle, The Musical, outlines many of these types of data and encourages teachers to find ways to use the data and address issues around big data.[72] According to Gould, curricula focused on big data will address issues of sampling, prediction, visualization, data cleaning, and the underlying processes that generate data, rather than traditionally emphasized methods of making statistical inferences such as hypothesis testing.
Driving both of these changes is the increased role of computing in teaching and learning statistics.[73] Some researchers argue that as the use of modeling and simulation increases, and as data sets become larger and more complex, students will need better and more technical computing skills.[74] Projects such as MOSAIC have been creating courses that blend computer science, modeling, and statistics.[75][76]
See also
• Mathematics education
• Science education
• Statistical literacy
Footnotes
1. "Undergraduate major programs should include study of probability and statistical theory, along with the prerequisite mathematics, especially calculus and linear algebra. Programs for nonmajors may require less study of mathematics. Programs preparing for graduate work may require additional mathematics." American Statistical Association. "Curriculum Guidelines for Undergraduate Programs in Statistical Science". Retrieved 14 May 2010.
2. The ASA makes the following recommendations for undergraduates wishing to become statisticians: "Major in applied mathematics, or a closely related field. If you do major in a nonstatistical field, minor in mathematics or statistics. Develop a background in mathematics, science, and computers and gain knowledge in a specific field of interest. A master's degree or PhD is very helpful and often recommended or required for higher-level positions." American Statistical Association. "How Do I Become a Statistician?". Retrieved 14 May 2010.
3. "A master's degree or PhD is very helpful and often recommended or required for higher-level positions." American Statistical Association. "How Do I Become a Statistician?". Retrieved 14 May 2010.
4. Stanford University statistician Persi Diaconis wrote that "I see a strong trend against measure theory in modern statistics departments: I had to fight to keep the measure theory requirement in Stanford's statistics graduate program. The fight was lost at Berkeley." Diaconis, Persi (March 2004). "A Frequentist does this, a Bayesian that (Review of Probability Theory: The Logic of Science by E.T. Jaynes)". SIAM News. Archived from the original on 7 October 2007. Retrieved 14 May 2010.
References
1. Wallman, K.S. (1993). "Enhancing statistical literacy: Enriching our society". Journal of the American Statistical Association. 88 (421): 1–8. doi:10.1080/01621459.1993.10594283. JSTOR 2290686.
2. Bond, M.E.; Perkins, S.M.; Ramirez, C. (2012). "Students' Perceptions of Statistics: An Exploration of Attitudes, Conceptualizations, and Content Knowledge of Statistics" (PDF). Statistics Education Research Journal. 11 (2): 6–25. doi:10.52041/serj.v11i2.325. S2CID 140436759.
3. Batanero, Carmen; Burrill, Gail F.; Reading, Chris, eds. (2011). Teaching Statistics in School Mathematics—Challenges for Teaching and Teacher Education: A Joint ICMI/IASE Study: The 18th ICMI Study. Springer. ISBN 978-94-007-1131-0.
4. "Assessment Resource Tools for Improving Statistical Thinking – Home Page". Retrieved 28 February 2013.
5. Garfield, J., & Ben-Zvi, D. (2008). Developing students' statistical reasoning: Connecting research and teaching practice. Springer.
6. Garfield, J., & Ben-Zvi, D. (2008). Preparing school teachers to develop students' statistical reasoning. In C. Batanero, G. Burrill, C. Reading, & A. Rossman. Teaching Statistics in School Mathematics-Challenges for Teaching and Teacher Education: A Joint ICMI/IASE Study: The 18th ICMI Study. 299–310. Dordrecht: Springer.
7. Utts, J. (2003).
"What educated citizens should know about statistics and probability". The American Statistician. 57 (2): 74–79. CiteSeerX 10.1.1.193.2420. doi:10.1198/0003130031630. S2CID 14289727. 8. Gal, I. (2002). "Adults' statistical literacy: Meanings, components, responsibilities". International Statistical Review. 70 (1): 1–25. doi:10.1111/j.1751-5823.2002.tb00336.x. S2CID 122781003. 9. Bloom, Benjamin Samuel (1956). Taxonomy of educational objectives: the classification of educational goals. Handbook I, Cognitive domain. David McKay. ISBN 9780582323865. OCLC 220283628. 10. Gal, I.; Ginsburg, L. (November 1994). "The role of beliefs and attitudes in learning statistics: towards an assessment framework". Journal of Statistics Education. 2 (2). doi:10.1080/10691898.1994.11910471. 11. Gal, Iddo; Garfield, Joan B.; Gal, Y., eds. (1997). "Monitoring attitudes and beliefs in statistics education". The Assessment Challenge in Statistics Education. IOS Press. pp. 37–51. ISBN 978-90-5199-333-2. 12. Wild, C.J.; Pfannkuch, M. (1999). "Statistical thinking in empirical enquiry". International Statistical Review. 67 (3): 223–265. doi:10.1111/j.1751-5823.1999.tb00442.x. S2CID 17076878. 13. Scheaffer, R. (2001). "Statistics education: perusing the past, embracing the present, and charting the future". Newsletter for the Section on Statistical Education. 7 (1). 14. Schau, C.; Stevens, J.; Dauphinee, T.; Del Vecchio, A. (1995). "The development and validation of the Survey of Attitudes Toward Statistics". Educational and Psychological Measurement. 55 (5): 868–876. doi:10.1177/0013164495055005022. S2CID 145141281. 15. Wise, S.L. (1985). "The development and validation of a scale measuring attitudes toward statistics". Educational and Psychological Measurement. 45 (2): 401–5. doi:10.1177/001316448504500226. S2CID 142923582. 16. Roberts, D.; Bilderback, W. (April 1980). "Reliability and Validity of a Statistics Attitude Survey". Educational and Psychological Measurement. 40 (1): 235–8. doi:10.1177/001316448004000138. S2CID 145497772. 17. Zieffler, A.; Garfield, J.; Alt, S.; Dupuis, D.; Holleque, K.; Chang, B. (2008). "What does research suggest about the teaching and learning of introductory statistics at the college level? A review of the literature" (PDF). Journal of Statistics Education. 16 (2). doi:10.1080/10691898.2008.11889566. S2CID 118200782. 18. Harlow, L.L.; Burkholder, G.J.; Morrow, J.A. (2002). "Evaluating attitudes, skill and performance in a learning-enhanced quantitative methods course: A structural modelling approach". Structural Equation Modeling. 9 (3): 413–430. doi:10.1207/S15328007SEM0903_6. S2CID 143847777. 19. Carlson, K.A.; Winquist, J.R. (2011). "Evaluating an active learning approach to teaching introductory statistics: A classroom workbook approach" (PDF). Journal of Statistics Education. 19 (1). doi:10.1080/10691898.2011.11889596. S2CID 122759663. 20. Wild, C.J.; Pfannkuch, M.; Regan, M.; Horton, N.J. (2011). "Towards more accessible conceptions of statistical inference". Journal of the Royal Statistical Society, Series A. 174 (2): 247–295. doi:10.1111/j.1467-985X.2010.00678.x. 21. Conway, F. (1986). "Statistics in schools". Journal of the Royal Statistical Society, Series A. 149 (1): 60–64. doi:10.2307/2981885. JSTOR 2981885. 22. Holmes, P. (2003). "50 years of statistics teaching in English schools: some milestones (with discussion)". Journal of the Royal Statistical Society, Series D. 52 (4): 439–474. doi:10.1046/j.1467-9884.2003.372_1.x. 23. 
"CIMT – Page no longer available at Plymouth University servers". 24. CIMT A-level course notes 25. mathsrevision.net A-level notes 26. matherevision.net GCSE maths notes 27. ONS stats4schools teacher/student resources 28. Smith, Adrian (2004). Making Mathematics Count: The Report of Professor Adrian Smith's Inquiry into Post-14 Mathematics Education. London, England: The Stationery Office. 29. In the United States, there was a "wide growth of statistical training in grades K-12, led by the implementation of an Advanced Placement (AP) course in statistics." p. 403 in Lindsay, Bruce G.; Kettenring, Jon; Siegmund, David O. (August 2004). "A Report on the Future of Statistics". Statistical Science. 19 (3): 387–407. doi:10.1214/088342304000000404. JSTOR 4144386. MR 2185624. 30. Page 403 in Lindsay, Bruce G.; Kettenring, Jon; Siegmund, David O. (August 2004). "A Report on the Future of Statistics". Statistical Science. 19 (3): 387–407. doi:10.1214/088342304000000404. JSTOR 4144386. MR 2185624. 31. Estonian Schools to Teach Computer-Based Math Wall Street Journal, 11 February 2013 32. Math Rebels Invade Estonia With Computerized Education Wired, 12 February 2013 33. Estonia Chosen as Testing Ground for Math Education Experiment Estonian Public Broadcasting News 34. Estonian, British experts team up to develop computer-based math education Postimees, 13 February 2013. 35. Smith, T. M. F.; Staetsky, L. (2007). "The teaching of statistics in UK universities". Journal of the Royal Statistical Society, Series A. 170 (3): 581–622. doi:10.1111/j.1467-985X.2007.00482.x. MR 2380589. 36. SAGE Strikes Gold with Andy Field’s New Statistics Textbook/Ebook – Nancy K. Herther 2013 37. Page 616 in Moore, David S.; Cobb, George W. (August 2000). "Statistics and Mathematics: Tension and Cooperation". The American Mathematical Monthly. 107 (7–September): 615–630. CiteSeerX 10.1.1.422.4356. doi:10.2307/2589117. JSTOR 2589117. MR 1543690. 38. Page 622 in Moore, David S.; Cobb, George W. (August 2000). "Statistics and Mathematics: Tension and Cooperation". The American Mathematical Monthly. 107 (7): 615–630. CiteSeerX 10.1.1.422.4356. doi:10.2307/2589117. JSTOR 2589117. MR 1543690. 39. American Statistical Association. "Curriculum Guidelines for Undergraduate Programs in Statistical Science". Retrieved 14 May 2010. 40. Speed, Terry (November 2009). "A Dialogue (Terence's Stuff)". IMS Bulletin. 38 (9): 14. ISSN 1544-1881. 41. Tanur 1988 42. Harold Hotelling (December 1940). "The Teaching of Statistics". The Annals of Mathematical Statistics. 11 (4): 457–470. doi:10.1214/aoms/1177731833. JSTOR 2235726. 43. Harold Hotelling (1988). "Golden Oldies: Classic Articles from the World of Statistics and Probability: 'The Teaching of Statistics'". Statistical Science. 3 (1): 63–71. doi:10.1214/ss/1177013001. 44. Harold Hotelling (1988). "Golden Oldies: Classic Articles from the World of Statistics and Probability: 'The Place of Statistics in the University'". Statistical Science. 3 (1): 72–83. doi:10.1214/ss/1177013002. 45. Scheaffer, Richard L. Scheaffer & Stasny, Elizabeth A (November 2004). "The State of Undergraduate Education in Statistics: A Report from the CBMS". The American Statistician. 58 (4): 265–271. doi:10.1198/000313004X5770. S2CID 123312251. 46. Moore, David S (January 1988). "Should Mathematicians Teach Statistics?". The College Mathematics Journal. 19 (1): 3–7. doi:10.2307/2686686. JSTOR 2686686. 47. Cobb, George W.; Moore, David S. (November 1997). "Mathematics, Statistics, and Teaching". 
48. Hotelling. Cobb and Moore.
49. Adrian Smith (primary source). T.M.F. Smith et alia.
50. Joan Garfield and Bob Hogg and Candace Schau and Dex Whittinghill (9 June 2000). First Courses in Statistical Science Working Group (ed.). Best Practices in Introductory Statistics (Draft 2000.06.09) (PDF). Undergraduate Statistics Education Initiative Position Paper. American Statistical Association.
51. Freedman, David; Robert Pisani; Roger Purves (1998). Statistics (4th ed.). New York: W.W. Norton. ISBN 978-0393929720.
52. Moore, David; George P. McCabe; Bruce Craig (2012). Introduction to the practice of statistics (7th ed.). New York: W.H. Freeman. ISBN 978-1429240321.
53. Moore, David; Notz, William I. (2014). Statistics : concepts and controversies (8th ed.). New York: W.H. Freeman and Company. ISBN 978-1464125669.
54. Watkins, A. E.; Richard L. Scheaffer; George W. Cobb (2011). Statistics from data to decision (2nd ed.). Hoboken, N.J: Wiley. ISBN 978-0470458518.
55. Watkins, A. E.; Richard L. Scheaffer; George W. Cobb (2008). Statistics in action : understanding a world of data (2nd ed.). Emeryville, CA: Key Curriculum Press. ISBN 978-1559539098.
56. Moore and Cobb.
57. Batanero, Carmen (2002). "Training future researchers in statistics education. Reflections from the Spanish experience" (PDF). Statistics Education Research Journal. 1 (1): 16–18.
58. Cynkar, Amy (July 2007). "Honoring innovation". Monitor on Psychology. 38 (7): 48.
59. "Curriculum for PhD Statistics Education Concentration – Univ. of Minn". Retrieved 12 April 2013.
60. "Statistics Education » College of Education, University of Florida". Retrieved 12 April 2013.
61. Garfield, Joan. "IASE – Publications: Dissertations". Retrieved 12 April 2013.
62. Garfield, Joan; Michelle Everson (2009). "Preparing teachers of statistics: A graduate course for future teachers". Journal of Statistics Education. 17 (2): 223–237. doi:10.1080/10691898.2009.11889516.
63. "Educational Psychology courses at University of Minnesota—Twin Cities". Retrieved 12 April 2013. See EPSY 8271.
64. Garfield, Joan; Pantula, Sastry; Pearl, Dennis; Utts, Jessica (March 2009). "Statistics Education Graduate Programs: Report on a Workshop Funded by an ASA Member Initiative Grant" (PDF). American Statistical Association. Retrieved 12 April 2013.
65. Cobb, George W (2007). "The Introductory Statistics Course: A Ptolemaic Curriculum?" (PDF). Technology Innovations in Statistics Education. 1 (1). doi:10.5070/T511000028. ISSN 1933-4214.
66. Garfield, Joan; delMas, Robert; Zieffler, Andrew (1 November 2012). "Developing statistical modelers and thinkers in an introductory, tertiary-level statistics course". ZDM. 44 (7): 883–898. doi:10.1007/s11858-012-0447-5. ISSN 1863-9690. S2CID 145588037.
67. Tintle, Nathan; VanderStoep, Jill; Holmes, Vicki-Lynn; Quisenberry, Brooke; Swanson, Todd (2011). "Development and assessment of a preliminary randomization-based introductory statistics curriculum" (PDF). Journal of Statistics Education. 19 (1): n1. doi:10.1080/10691898.2011.11889599. S2CID 30333809.
68. Lock, R. H.; Lock, P. F.; Lock Morgan, K.; Lock, E. F.; Lock, D. F. (2012). Statistics: Unlocking the power of data. Hoboken, NJ: John Wiley & Sons.
69. Nikoletseas, Michael (2010). Statistics for College Students and Researchers: Grasping the Concepts. ISBN 978-1453604533.
70. Arnold, P.; Pfannkuch, M.; Wild, C.J.; Regan, M.; Budgett, S. (2011). "Enhancing Students' Inferential Reasoning: From Hands-On To "Movies"". Journal of Statistics Education. 19 (2). doi:10.1080/10691898.2011.11889609.
"Enhancing Students' Inferential Reasoning: From Hands-On To "Movies"". Journal of Statistics Education. 19 (2). doi:10.1080/10691898.2011.11889609. 71. Rossman, A. (2008). "Reasoning about informal statistical inference: One statistician's view" (PDF). Statistics Education Research Journal. 7 (2): 5–19. doi:10.52041/serj.v7i2.467. S2CID 18885739. (8–22 in PDF.) 72. Gould, Robert (2010). "Statistics and the Modern Student" (PDF). International Statistical Review. 78 (2): 297–315. doi:10.1111/j.1751-5823.2010.00117.x. ISSN 1751-5823. S2CID 62346843. 73. Chance, Beth; Dani Ben-Zvi; Joan Garfield; Elsa Medina (12 October 2007). "The Role of Technology in Improving Student Learning of Statistics". Technology Innovations in Statistics Education. 1 (1). doi:10.5070/T511000026. Retrieved 15 October 2012. 74. Nolan, Deborah; Temple Lang, Duncan (1 May 2010). "Computing in the Statistics Curricula" (PDF). The American Statistician. 64 (2): 97–107. CiteSeerX 10.1.1.724.797. doi:10.1198/tast.2010.09132. ISSN 0003-1305. S2CID 121050486. 75. Pruim, Randall (2011). Foundations and Applications of Statistics: An Introduction Using R. American Mathematical Society. ISBN 978-0-8218-5233-0. 76. Kaplan, Danny (2012). Statistical Modeling: A Fresh Approach (2nd ed.). Project MOSAIC. ISBN 978-0-9839658-7-9. Further reading • Barnett, Vic (Editor) (1982) Teaching statistics in schools throughout the world, International Statistical Institute. • Boland, Philip J.; Nicholson, James (1996). "The Statistics and Probability Curriculum at the Secondary School Level in the USA, Ireland and the UK". Journal of the Royal Statistical Society, Series D. 45 (4): 437–446. JSTOR 2988544. • Conway, F. (1976). "What is Statistics and Who Should Teach it?". Journal of the Royal Statistical Society, Series A. 139 (3): 385–8. doi:10.2307/2344843. JSTOR 2344843. • Cobb, George W. (2007). "The Introductory Statistics Course: A Ptolemaic Curriculum?" (PDF). Technology Innovations in Statistics Education. 1 (1). doi:10.5070/T511000028. • Cook, Thomas D. (2002). "Randomized Experiments in Educational Policy Research: A Critical Examination of the Reasons the Educational Evaluation Community has Offered for Not Doing Them". Educational Evaluation and Policy Analysis. 24 (3): 175–199. doi:10.3102/01623737024003175. S2CID 144583638. As PDF. • Daniels, H.E. (1975). "Statistics in Universities—A Personal View". Journal of the Royal Statistical Society, Series A. 138 (1): 1–17. doi:10.2307/2345246. JSTOR 2345246. • Franklin, C., Kader, G., Mewborn, D., Moreno, J., Peck, R., Perry, M., and Scheaffer, R. (2007) Guidelines for Assessment and Instruction in Statistics Education (GAISE) Report: A Pre-K–12 Curriculum Framework. American Statistical Association. • Garfield, Joan; Hogg, Bob; Schau, Candace; Whittinghill, Dex (9 June 2000). First Courses in Statistical Science Working Group (ed.). Best Practices in Introductory Statistics (Draft 2000.06.09) (PDF). Undergraduate Statistics Education Initiative Position Paper. American Statistical Association. • Hogg, Robert V.; Hogg, Mary C. (1995). "Continuous Quality Improvement in Higher Education". International Statistical Review. 63 (1): 35–48. doi:10.2307/1403776. JSTOR 1403776. • Moore, David S. (1995). "The Craft of teaching" (PDF). MAA Focus. 15: 5–8. • Moore, D.S.; Cobb, G.W.; Garfeld, J.; Meeker, W.Q. (1995). "Statistics Education Fin de Siécle". The American Statistician. 49 (3): 250–260. CiteSeerX 10.1.1.422.5088. doi:10.1080/00031305.1995.10476159. • Rao, C.R. (1997). 
"A cross disciplinary approach to teaching statistics". Report for US Army Research Office. ARO 35518.11-MA. Archived from the original on 1 October 2012. • Working Group on Statistics in Mathematics Education Research (2007). Scheaffer, Richard (ed.). "Using Statistics Effectively in Mathematics Education Research: A report from a series of workshops organized by the American Statistical Association with funding from the National Science Foundation" (PDF). The American Statistical Association. • Smith, Adrian (2004). Making Mathematics Count: The Report of Professor Adrian Smith's Inquiry into Post-14 Mathematics Education (PDF). London, England: The Stationery Office. • Tanur, Judith (1988). "No! But Who Should Teach Statistics?". The College Mathematics Journal. 19 (1): 11–12. doi:10.2307/2686688. JSTOR 2686688. External links Journals • Journal of Statistics and Data Science Education (formerly the Journal of Statistics Education) published by the American Statistical Association and Taylor & Francis • Statistics Education Research Journal published by the International Association for Statistical Education (IASE) • Teaching Statistics: An International Journal for Teachers • Technology Innovations in Statistics Education (TISE) "reports on studies of the use of technology to improve statistics learning at all levels, from kindergarten to graduate school and professional development". Associations and Centers • IASE: newsletters, conference proceedings, recent dissertations, and links to statistics education conferences • CAUSEweb: many resources aimed at teaching undergraduate statistics classes, including activities, webinars, and a literature database • SRTL: forums and publications for The International Statistical Reasoning, Thinking, and Literacy Research Forums • Web pages of the Royal Statistical Society Centre for Statistical Education • Maths, Stats & OR Network: supports lecturers in Mathematics, Statistics and Operational Research in the UK Other Links • Quotes on "Who should teach statistics?" 
Statistical learning theory
Statistical learning theory is a framework for machine learning drawing from the fields of statistics and functional analysis.[1][2][3] It deals with the statistical inference problem of finding a predictive function based on data, and it has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics.
Introduction
The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood.[4] Supervised learning involves learning from a training set of data. Every point in the training set is an input–output pair, where the input maps to an output. The learning problem consists of inferring the function that maps between the input and the output, such that the learned function can be used to predict the output from future input. Depending on the type of output, supervised learning problems are either problems of regression or problems of classification.
If the output takes a continuous range of values, it is a regression problem. Using Ohm's law as an example, a regression could be performed with voltage as input and current as output. The regression would find the functional relationship between voltage and current to be characterized by the resistance $R$, such that
$V=IR$
Classification problems are those for which the output will be an element from a discrete set of labels. Classification is very common for machine learning applications. In facial recognition, for instance, a picture of a person's face would be the input, and the output label would be that person's name. The input would be represented by a large multidimensional vector whose elements represent pixels in the picture.
After learning a function based on the training set data, that function is validated on a test set of data, data that did not appear in the training set.
Formal description
Take $X$ to be the vector space of all possible inputs, and $Y$ to be the vector space of all possible outputs. Statistical learning theory takes the perspective that there is some unknown probability distribution over the product space $Z=X\times Y$, i.e. there exists some unknown $p(z)=p({\vec {x}},y)$. The training set is made up of $n$ samples from this probability distribution, and is notated
$S=\{({\vec {x}}_{1},y_{1}),\dots ,({\vec {x}}_{n},y_{n})\}=\{{\vec {z}}_{1},\dots ,{\vec {z}}_{n}\}$
Every ${\vec {x}}_{i}$ is an input vector from the training data, and $y_{i}$ is the output that corresponds to it.
In this formalism, the inference problem consists of finding a function $f:X\to Y$ such that $f({\vec {x}})\sim y$. Let ${\mathcal {H}}$ be a space of functions $f:X\to Y$ called the hypothesis space. The hypothesis space is the space of functions the algorithm will search through. Let $V(f({\vec {x}}),y)$ be the loss function, a metric for the difference between the predicted value $f({\vec {x}})$ and the actual value $y$. The expected risk is defined to be
$I[f]=\int _{X\times Y}V(f({\vec {x}}),y)\,p({\vec {x}},y)\,d{\vec {x}}\,dy$
The target function, the best possible function $f$ that can be chosen, is given by the $f$ that satisfies
$f=\operatorname {argmin} _{h\in {\mathcal {H}}}I[h]$
Because the probability distribution $p({\vec {x}},y)$ is unknown, a proxy measure for the expected risk must be used. This measure is based on the training set, a sample from this unknown probability distribution. It is called the empirical risk:
$I_{S}[f]={\frac {1}{n}}\sum _{i=1}^{n}V(f({\vec {x}}_{i}),y_{i})$
A learning algorithm that chooses the function $f_{S}$ that minimizes the empirical risk is called empirical risk minimization.
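As a minimal sketch of empirical risk minimization in this setting (not from the source: NumPy is assumed, the data are simulated, and the grid search is purely illustrative), the Ohm's-law example above can be fit by taking the hypothesis space to be the linear functions $f(x)=Rx$ and choosing the $R$ that minimizes the empirical risk under the square loss:

```python
import numpy as np

# Hypothetical training set: noisy (current, voltage) measurements from
# a resistor with true R = 5 ohms (invented for illustration).
rng = np.random.default_rng(0)
current = rng.uniform(0.1, 2.0, size=50)            # inputs x_i
voltage = 5.0 * current + rng.normal(0, 0.1, 50)    # outputs y_i

# Empirical risk of the hypothesis f(x) = R * x under the square loss
# V(f(x), y) = (y - f(x))^2.
def empirical_risk(R):
    return np.mean((voltage - R * current) ** 2)

# Empirical risk minimization over a grid of candidate hypotheses.
candidates = np.linspace(0.0, 10.0, 1001)
R_hat = candidates[np.argmin([empirical_risk(R) for R in candidates])]
print(f"estimated resistance: {R_hat:.2f} ohms")
```

For this hypothesis space and loss the minimizer is also available in closed form as the least-squares solution; the grid search merely makes the step "choose $f_{S}$ to minimize $I_{S}[f]$ over ${\mathcal {H}}$" explicit.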
Loss functions
The choice of loss function is a determining factor on the function $f_{S}$ that will be chosen by the learning algorithm. The loss function also affects the convergence rate for an algorithm. It is important for the loss function to be convex.[5] Different loss functions are used depending on whether the problem is one of regression or one of classification.
Regression
The most common loss function for regression is the square loss function (also known as the L2-norm). This familiar loss function is used in ordinary least squares regression. The form is:
$V(f({\vec {x}}),y)=(y-f({\vec {x}}))^{2}$
The absolute value loss (also known as the L1-norm) is also sometimes used:
$V(f({\vec {x}}),y)=|y-f({\vec {x}})|$
Classification
In some sense the 0-1 indicator function is the most natural loss function for classification. It takes the value 0 if the predicted output is the same as the actual output, and it takes the value 1 if the predicted output is different from the actual output. For binary classification with $Y=\{-1,1\}$, this is:
$V(f({\vec {x}}),y)=\theta (-yf({\vec {x}}))$
where $\theta $ is the Heaviside step function.
Regularization
In machine learning problems, a major problem that arises is that of overfitting. Because learning is a prediction problem, the goal is not to find a function that most closely fits the (previously observed) data, but to find one that will most accurately predict output from future input. Empirical risk minimization runs this risk of overfitting: finding a function that matches the data exactly but does not predict future output well. Overfitting is symptomatic of unstable solutions; a small perturbation in the training set data would cause a large variation in the learned function. It can be shown that if the stability of the solution can be guaranteed, generalization and consistency are guaranteed as well.[6][7] Regularization can solve the overfitting problem and give the problem stability.
Regularization can be accomplished by restricting the hypothesis space ${\mathcal {H}}$. A common example would be restricting ${\mathcal {H}}$ to linear functions: this can be seen as a reduction to the standard problem of linear regression. ${\mathcal {H}}$ could also be restricted to polynomials of degree $p$, exponentials, or bounded functions on $L^{1}$. Restriction of the hypothesis space avoids overfitting because the form of the potential functions is limited, and so does not allow for the choice of a function that gives empirical risk arbitrarily close to zero.
One example of regularization is Tikhonov regularization. This consists of minimizing
${\frac {1}{n}}\sum _{i=1}^{n}V(f({\vec {x}}_{i}),y_{i})+\gamma \|f\|_{\mathcal {H}}^{2}$
where $\gamma $ is a fixed and positive parameter, the regularization parameter. Tikhonov regularization ensures existence, uniqueness, and stability of the solution.[8]
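To make this concrete, consider the special case of a linear hypothesis space with the square loss, where $\|f\|_{\mathcal {H}}$ is the Euclidean norm of the weight vector; the Tikhonov objective then has the closed-form ridge-regression minimizer. The sketch below is illustrative only (NumPy is assumed; the data and the value of $\gamma$ are invented):

```python
import numpy as np

def ridge_fit(X, y, gamma):
    """Minimize (1/n) * sum_i (y_i - w @ x_i)**2 + gamma * ||w||^2.

    Setting the gradient to zero gives the normal equations
    (X^T X / n + gamma * I) w = X^T y / n.
    """
    n, d = X.shape
    A = X.T @ X / n + gamma * np.eye(d)
    return np.linalg.solve(A, X.T @ y / n)

# Invented data: 100 samples, 5 features, known weights plus noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
w_true = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ w_true + rng.normal(scale=0.3, size=100)

w_hat = ridge_fit(X, y, gamma=0.1)
print(np.round(w_hat, 2))
```

Increasing $\gamma$ shrinks the learned weights toward zero, trading fidelity to the training data for the stability that underwrites generalization.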
See also
• Reproducing kernel Hilbert spaces are a useful choice for ${\mathcal {H}}$.
• Proximal gradient methods for learning
References
1. Vladimir Vapnik (1995) The Nature of Statistical Learning Theory, Springer New York ISBN 978-1-475-72440-0.
2. Trevor Hastie, Robert Tibshirani, Jerome Friedman (2009) The Elements of Statistical Learning, Springer-Verlag ISBN 978-0-387-84857-0.
3. Mohri, Mehryar; Rostamizadeh, Afshin; Talwalkar, Ameet (2012). Foundations of Machine Learning. US, Massachusetts: MIT Press. ISBN 9780262018258.
4. Tomaso Poggio, Lorenzo Rosasco, et al. Statistical Learning Theory and Applications, 2012, Class 1
5. Rosasco, L., De Vito, E., Caponnetto, A., Piana, M., and Verri, A. 2004. "Are Loss Functions All the Same?" Neural Computation Vol 16, pp 1063–1076.
6. Vapnik, V.N. and Chervonenkis, A.Y. 1971. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications Vol 16, pp 264–280.
7. Mukherjee, S., Niyogi, P., Poggio, T., and Rifkin, R. 2006. Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization. Advances in Computational Mathematics. Vol 25, pp 161–193.
8. Tomaso Poggio, Lorenzo Rosasco, et al. Statistical Learning Theory and Applications, 2012, Class 2
Statistical manifold

In mathematics, a statistical manifold is a Riemannian manifold, each of whose points is a probability distribution. Statistical manifolds provide a setting for the field of information geometry. The Fisher information metric provides a metric on these manifolds. Following this definition, the log-likelihood function is a differentiable map and the score is an inclusion.[1]

Examples

The family of all normal distributions can be thought of as a 2-dimensional parametric space parametrized by the expected value μ and the variance σ2 > 0. Equipped with the Riemannian metric given by the Fisher information matrix, it is a statistical manifold with a geometry modeled on hyperbolic space. One way of picturing the manifold is to infer the parametric equations via the Fisher information rather than starting from the likelihood function.

A simple example of a statistical manifold, taken from physics, would be the canonical ensemble: it is a one-dimensional manifold, with the temperature T serving as the coordinate on the manifold. For any fixed temperature T, one has a probability space: for a gas of atoms, it would be the probability distribution of the velocities of the atoms. As one varies the temperature T, the probability distribution varies.

Another simple example, taken from medicine, would be the probability distribution of patient outcomes in response to the quantity of medicine administered. That is, for a fixed dose, some patients improve and some do not: this is the base probability space. If the dosage is varied, then the probability of outcomes changes. Thus, the dosage is the coordinate on the manifold. To be a smooth manifold, one would have to measure outcomes in response to arbitrarily small changes in dosage; this is not a practically realizable example, unless one has a pre-existing mathematical model of dose-response where the dose can be arbitrarily varied.

Definition

Let X be an orientable manifold, and let $(X,\Sigma ,\mu )$ be a measure space on X. Equivalently, let $(\Omega ,{\mathcal {F}},P)$ be a probability space on $\Omega =X$, with sigma algebra ${\mathcal {F}}=\Sigma $ and probability $P=\mu $.

The statistical manifold S(X) of X is defined as the space of all measures $\mu $ on X (with the sigma-algebra $\Sigma $ held fixed). Note that this space is infinite-dimensional; it is commonly taken to be a Fréchet space. The points of S(X) are measures.

Rather than dealing with an infinite-dimensional space S(X), it is common to work with a finite-dimensional submanifold, defined by considering a set of probability distributions parameterized by some smooth, continuously-varying parameter $\theta $. That is, one considers only those measures that are selected by the parameter. If the parameter $\theta $ is n-dimensional, then, in general, the submanifold will be as well. All finite-dimensional statistical manifolds can be understood in this way.

References

1. Murray, Michael K.; Rice, John W. (1993). "The definition of a statistical manifold". Differential Geometry and Statistics. Chapman & Hall. pp. 76–77. ISBN 0-412-39860-5.
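The Fisher information metric on the normal family can be checked numerically. The sketch below is a plain NumPy illustration (the parametrization by (μ, σ) rather than (μ, σ2), and all names, are choices made here, not from the reference): it estimates the Fisher information matrix as a Monte Carlo average of outer products of the score and compares it with the standard closed form diag(1/σ2, 2/σ2), which is the hyperbolic (Poincaré half-plane, up to scale) metric.

```python
import numpy as np

def score(x, mu, sigma):
    """Gradient of log N(x; mu, sigma^2) with respect to (mu, sigma)."""
    d_mu = (x - mu) / sigma**2
    d_sigma = (x - mu) ** 2 / sigma**3 - 1.0 / sigma
    return np.stack([d_mu, d_sigma])

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.7
x = rng.normal(mu, sigma, size=1_000_000)

s = score(x, mu, sigma)            # 2 x N array of score vectors
fim_mc = s @ s.T / x.size          # Monte Carlo estimate of E[score score^T]
fim_exact = np.diag([1 / sigma**2, 2 / sigma**2])  # known closed form in (mu, sigma)

print(np.round(fim_mc, 3))
print(np.round(fim_exact, 3))
```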
Statistical risk

Statistical risk is a quantification of a situation's risk using statistical methods. These methods can be used to estimate a probability distribution for the outcome of a specific variable, or at least one or more key parameters of that distribution, and from that estimated distribution a risk function can be used to obtain a single non-negative number representing a particular conception of the risk of the situation.

Statistical risk is taken into account in a variety of contexts, including finance and economics, and there are many risk functions that can be used depending on the context. One measure of the statistical risk of a continuous variable, such as the return on an investment, is simply the estimated variance of the variable, or equivalently the square root of the variance, called the standard deviation. Another measure in finance, one which views upside risk as unimportant compared to downside risk, is the downside beta. In the context of a binary variable, a simple statistical measure of risk is the probability that the variable will take on the lower of its two values.

There is a sense in which one risk A can be said to be unambiguously greater than another risk B (that is, greater for any reasonable risk function): namely, if A is a mean-preserving spread of B. This means that the probability density function of A can be formed, roughly speaking, by "spreading out" that of B. However, this is only a partial ordering: most pairs of risks cannot be unambiguously ranked in this way, and different risk functions applied to the estimated distributions of two such unordered risky variables will give different answers as to which is riskier.

In the context of statistical estimation itself, the risk involved in estimating a particular parameter is a measure of the degree to which the estimate is likely to be inaccurate.

See also

• Loss function – Mathematical relation assigning a probability event to a cost
• Risk assessment – Estimation of risk associated with exposure to a given set of hazards
• Risk aversion – Economics theory
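A short NumPy sketch of the ordering described above (the numbers and names are hypothetical): A is built as a mean-preserving spread of B by adding independent zero-mean noise, so it has the same mean but is riskier under every common risk function tried here.

```python
import numpy as np

rng = np.random.default_rng(0)

# B: a baseline risky return; A: a mean-preserving spread of B,
# formed by adding independent zero-mean noise.
B = rng.normal(0.05, 0.10, size=100_000)
A = B + rng.normal(0.0, 0.08, size=B.size)

for name, x in (("B", B), ("A", A)):
    var = x.var()             # risk as estimated variance
    sd = x.std()              # equivalently, standard deviation
    p_loss = (x < 0).mean()   # risk as probability of a loss
    print(f"{name}: mean={x.mean():+.4f}  var={var:.4f}  sd={sd:.4f}  P(loss)={p_loss:.3f}")
```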
Statistical significance

In statistical hypothesis testing,[1][2] a result has statistical significance when a result at least as "extreme" would be very infrequent if the null hypothesis were true.[3] More precisely, a study's defined significance level, denoted by $\alpha $, is the probability of the study rejecting the null hypothesis, given that the null hypothesis is true;[4] and the p-value of a result, $p$, is the probability of obtaining a result at least as extreme, given that the null hypothesis is true.[5] The result is statistically significant, by the standards of the study, when $p\leq \alpha $.[6][7][8][9][10][11][12] The significance level for a study is chosen before data collection, and is typically set to 5%[13] or much lower, depending on the field of study.[14]

In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone.[15][16] But if the p-value of an observed effect is less than (or equal to) the significance level, an investigator may conclude that the effect reflects the characteristics of the whole population,[1] thereby rejecting the null hypothesis.[17]

This technique for testing the statistical significance of results was developed in the early 20th century. The term significance does not imply importance here, and the term statistical significance is not the same as research significance, theoretical significance, or practical significance.[1][2][18][19] For example, the term clinical significance refers to the practical importance of a treatment effect.[20]

History

Main article: History of statistics

Statistical significance dates to the 18th century, in the work of John Arbuthnot and Pierre-Simon Laplace, who computed the p-value for the human sex ratio at birth, assuming a null hypothesis of equal probability of male and female births; see p-value § History for details.[21][22][23][24][25][26][27]

In 1925, Ronald Fisher advanced the idea of statistical hypothesis testing, which he called "tests of significance", in his publication Statistical Methods for Research Workers.[28][29][30] Fisher suggested a probability of one in twenty (0.05) as a convenient cutoff level to reject the null hypothesis.[31] In a 1933 paper, Jerzy Neyman and Egon Pearson called this cutoff the significance level, which they named $\alpha $. They recommended that $\alpha $ be set ahead of time, prior to any data collection.[31][32]

Despite his initial suggestion of 0.05 as a significance level, Fisher did not intend this cutoff value to be fixed. In his 1956 publication Statistical Methods and Scientific Inference, he recommended that significance levels be set according to specific circumstances.[31]

Related concepts

The significance level $\alpha $ is the threshold for $p$ below which the null hypothesis is rejected even though it is assumed to be true, in which case something else is taken to be going on. Thus $\alpha $ is also the probability of mistakenly rejecting the null hypothesis, if the null hypothesis is true.[4] This is also called a false positive and a type I error. Sometimes researchers talk about the confidence level γ = (1 − α) instead.
This is the probability of not rejecting the null hypothesis given that it is true.[33][34] Confidence levels and confidence intervals were introduced by Neyman in 1937.[35]

Role in statistical hypothesis testing

Main articles: Statistical hypothesis testing, Null hypothesis, Alternative hypothesis, p-value, and Type I and type II errors

Statistical significance plays a pivotal role in statistical hypothesis testing. It is used to determine whether the null hypothesis should be rejected or retained. The null hypothesis is the default assumption that nothing happened or changed.[36] For the null hypothesis to be rejected, an observed result has to be statistically significant, i.e. the observed p-value is less than the pre-specified significance level $\alpha $.

To determine whether a result is statistically significant, a researcher calculates a p-value, which is the probability of observing an effect of the same magnitude or more extreme given that the null hypothesis is true.[5][12] The null hypothesis is rejected if the p-value is less than (or equal to) a predetermined level, $\alpha $. $\alpha $ is also called the significance level, and is the probability of rejecting the null hypothesis given that it is true (a type I error). It is usually set at or below 5%.

For example, when $\alpha $ is set to 5%, the conditional probability of a type I error, given that the null hypothesis is true, is 5%,[37] and a statistically significant result is one where the observed p-value is less than (or equal to) 5%.[38] When drawing data from a sample, this means that the rejection region comprises 5% of the sampling distribution.[39] These 5% can be allocated to one side of the sampling distribution, as in a one-tailed test, or partitioned to both sides of the distribution, as in a two-tailed test, with each tail (or rejection region) containing 2.5% of the distribution.

The use of a one-tailed test depends on whether the research question or alternative hypothesis specifies a direction, such as whether a group of objects is heavier or the performance of students on an assessment is better.[3] A two-tailed test may still be used, but it will be less powerful than a one-tailed test, because the rejection region for a one-tailed test is concentrated on one end of the null distribution and is twice the size (5% vs. 2.5%) of each rejection region for a two-tailed test. As a result, the null hypothesis can be rejected with a less extreme result if a one-tailed test is used.[40] The one-tailed test is only more powerful than a two-tailed test if the specified direction of the alternative hypothesis is correct. If it is wrong, however, then the one-tailed test has no power.

Significance thresholds in specific fields

Further information: Standard deviation and Normal distribution

In specific fields such as particle physics and manufacturing, statistical significance is often expressed in multiples of the standard deviation or sigma (σ) of a normal distribution, with significance thresholds set at a much stricter level (for example 5σ).[41][42] For instance, the certainty of the Higgs boson particle's existence was based on the 5σ criterion, which corresponds to a p-value of about 1 in 3.5 million.[42][43] In other fields of scientific research, such as genome-wide association studies, significance levels as low as 5×10−8 are not uncommon,[44][45] as the number of tests performed is extremely large.
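The correspondence between tail areas, critical values, and sigma thresholds is easy to reproduce numerically. The following Python sketch (using SciPy; an illustration written for this article, not taken from its sources) converts the conventional 5% level into one- and two-tailed critical values, and translates n-sigma thresholds into one-sided p-values; 5σ works out to roughly 1 in 3.5 million.

```python
from scipy.stats import norm

alpha = 0.05
# Critical z values at the conventional 5% level:
print(norm.isf(alpha))       # one-tailed critical value, about 1.645
print(norm.isf(alpha / 2))   # two-tailed critical value, about 1.960

# "n-sigma" significance as used in particle physics (one-sided):
for n in (3, 5):
    p = norm.sf(n)           # upper-tail area beyond n standard deviations
    print(f"{n} sigma -> p = {p:.3g}  (about 1 in {1 / p:,.0f})")
```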
Limitations

Researchers focusing solely on whether their results are statistically significant might report findings that are not substantive[46] and not replicable.[47][48] There is also a difference between statistical significance and practical significance. A study that is found to be statistically significant may not necessarily be practically significant.[49][19]

Effect size

Main article: Effect size

Effect size is a measure of a study's practical significance.[49] A statistically significant result may have a weak effect. To gauge the research significance of their result, researchers are encouraged to always report an effect size along with p-values. An effect size measure quantifies the strength of an effect, such as the distance between two means in units of standard deviation (cf. Cohen's d), the correlation coefficient between two variables or its square, and other measures.[50]

Reproducibility

A statistically significant result may not be easy to reproduce.[48] In particular, some statistically significant results will in fact be false positives. Each failed attempt to reproduce a result increases the likelihood that the result was a false positive.[51]

Challenges

Overuse in some journals

Starting in the 2010s, some journals began questioning whether significance testing, and particularly using a threshold of α=5%, was being relied on too heavily as the primary measure of validity of a hypothesis.[52] Some journals encouraged authors to do more detailed analysis than just a statistical significance test. In social psychology, the journal Basic and Applied Social Psychology banned the use of significance testing altogether from papers it published,[53] requiring authors to use other measures to evaluate hypotheses and impact.[54][55] Other editors, commenting on this ban, have noted: "Banning the reporting of p-values, as Basic and Applied Social Psychology recently did, is not going to solve the problem because it is merely treating a symptom of the problem.
There is nothing wrong with hypothesis testing and p-values per se as long as authors, reviewers, and action editors use them correctly."[56]

Some statisticians prefer to use alternative measures of evidence, such as likelihood ratios or Bayes factors.[57] Using Bayesian statistics can avoid confidence levels, but also requires making additional assumptions,[57] and may not necessarily improve practice regarding statistical testing.[58] The widespread abuse of statistical significance represents an important topic of research in metascience.[59]

Redefining significance

In 2016, the American Statistical Association (ASA) published a statement on p-values, saying that "the widespread use of 'statistical significance' (generally interpreted as 'p ≤ 0.05') as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process".[57]

In 2017, a group of 72 authors proposed to enhance reproducibility by changing the p-value threshold for statistical significance from 0.05 to 0.005.[60] Other researchers responded that imposing a more stringent significance threshold would aggravate problems such as data dredging; alternative propositions are thus to select and justify flexible p-value thresholds before collecting data,[61] or to interpret p-values as continuous indices, thereby discarding thresholds and statistical significance.[62] Additionally, the change to 0.005 would increase the likelihood of false negatives, whereby the effect being studied is real, but the test fails to show it.[63]

In 2019, over 800 statisticians and scientists signed a message calling for the abandonment of the term "statistical significance" in science,[64] and the ASA published a further official statement[65] declaring (page 2):

We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term "statistically significant" entirely. Nor should variants such as "significantly different," "$p\leq 0.05$," and "nonsignificant" survive, whether expressed in words, by asterisks in a table, or in some other way.

See also

• A/B testing, ABX test
• Estimation statistics
• Fisher's method for combining independent tests of significance
• Look-elsewhere effect
• Multiple comparisons problem
• Sample size
• Texas sharpshooter fallacy (gives examples of tests where the significance level was set too high)

References

1. Sirkin, R. Mark (2005). "Two-sample t tests". Statistics for the Social Sciences (3rd ed.). Thousand Oaks, CA: SAGE Publications, Inc. pp. 271–316. ISBN 978-1-412-90546-6. 2. Borror, Connie M. (2009). "Statistical decision making". The Certified Quality Engineer Handbook (3rd ed.). Milwaukee, WI: ASQ Quality Press. pp. 418–472. ISBN 978-0-873-89745-7. 3. Myers, Jerome L.; Well, Arnold D.; Lorch, Robert F. Jr. (2010). "Developing fundamentals of hypothesis testing using the binomial distribution". Research design and statistical analysis (3rd ed.). New York, NY: Routledge. pp. 65–90. ISBN 978-0-805-86431-1. 4. Dalgaard, Peter (2008). "Power and the computation of sample size". Introductory Statistics with R. Statistics and Computing. New York: Springer. pp. 155–56. doi:10.1007/978-0-387-79054-1_9. ISBN 978-0-387-79053-4. 5. "Statistical Hypothesis Testing". www.dartmouth.edu. Archived from the original on 2020-08-02. Retrieved 2019-11-11. 6. Johnson, Valen E. (October 9, 2013). "Revised standards for statistical evidence". Proceedings of the National Academy of Sciences.
110 (48): 19313–19317. Bibcode:2013PNAS..11019313J. doi:10.1073/pnas.1313476110. PMC 3845140. PMID 24218581. 7. Redmond, Carol; Colton, Theodore (2001). "Clinical significance versus statistical significance". Biostatistics in Clinical Trials. Wiley Reference Series in Biostatistics (3rd ed.). West Sussex, United Kingdom: John Wiley & Sons Ltd. pp. 35–36. ISBN 978-0-471-82211-0. 8. Cumming, Geoff (2012). Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. New York, USA: Routledge. pp. 27–28. 9. Krzywinski, Martin; Altman, Naomi (30 October 2013). "Points of significance: Significance, P values and t-tests". Nature Methods. 10 (11): 1041–1042. doi:10.1038/nmeth.2698. PMID 24344377. 10. Sham, Pak C.; Purcell, Shaun M (17 April 2014). "Statistical power and significance testing in large-scale genetic studies". Nature Reviews Genetics. 15 (5): 335–346. doi:10.1038/nrg3706. PMID 24739678. S2CID 10961123. 11. Altman, Douglas G. (1999). Practical Statistics for Medical Research. New York, USA: Chapman & Hall/CRC. pp. 167. ISBN 978-0412276309. 12. Devore, Jay L. (2011). Probability and Statistics for Engineering and the Sciences (8th ed.). Boston, MA: Cengage Learning. pp. 300–344. ISBN 978-0-538-73352-6. 13. Craparo, Robert M. (2007). "Significance level". In Salkind, Neil J. (ed.). Encyclopedia of Measurement and Statistics. Vol. 3. Thousand Oaks, CA: SAGE Publications. pp. 889–891. ISBN 978-1-412-91611-0. 14. Sproull, Natalie L. (2002). "Hypothesis testing". Handbook of Research Methods: A Guide for Practitioners and Students in the Social Science (2nd ed.). Lanham, MD: Scarecrow Press, Inc. pp. 49–64. ISBN 978-0-810-84486-5. 15. Babbie, Earl R. (2013). "The logic of sampling". The Practice of Social Research (13th ed.). Belmont, CA: Cengage Learning. pp. 185–226. ISBN 978-1-133-04979-1. 16. Faherty, Vincent (2008). "Probability and statistical significance". Compassionate Statistics: Applied Quantitative Analysis for Social Services (With exercises and instructions in SPSS) (1st ed.). Thousand Oaks, CA: SAGE Publications, Inc. pp. 127–138. ISBN 978-1-412-93982-9. 17. McKillup, Steve (2006). "Probability helps you make a decision about your results". Statistics Explained: An Introductory Guide for Life Scientists (1st ed.). Cambridge, United Kingdom: Cambridge University Press. pp. 44–56. ISBN 978-0-521-54316-3. 18. Myers, Jerome L.; Well, Arnold D.; Lorch, Robert F. Jr. (2010). "The t distribution and its applications". Research Design and Statistical Analysis (3rd ed.). New York, NY: Routledge. pp. 124–153. ISBN 978-0-805-86431-1. 19. Hooper, Peter. "What is P-value?" (PDF). University of Alberta, Department of Mathematical and Statistical Sciences. Retrieved November 10, 2019. 20. Leung, W.-C. (2001-03-01). "Balancing statistical and clinical significance in evaluating treatment effects". Postgraduate Medical Journal. 77 (905): 201–204. doi:10.1136/pmj.77.905.201. ISSN 0032-5473. PMC 1741942. PMID 11222834. 21. Brian, Éric; Jaisson, Marie (2007). "Physico-Theology and Mathematics (1710–1794)". The Descent of Human Sex Ratio at Birth. Springer Science & Business Media. pp. 1–25. ISBN 978-1-4020-6036-6. 22. John Arbuthnot (1710). "An argument for Divine Providence, taken from the constant regularity observed in the births of both sexes" (PDF). Philosophical Transactions of the Royal Society of London. 27 (325–336): 186–190. doi:10.1098/rstl.1710.0011. 23. Conover, W.J. 
(1999), "Chapter 3.4: The Sign Test", Practical Nonparametric Statistics (Third ed.), Wiley, pp. 157–176, ISBN 978-0-471-16068-7 24. Sprent, P. (1989), Applied Nonparametric Statistical Methods (Second ed.), Chapman & Hall, ISBN 978-0-412-44980-2 25. Stigler, Stephen M. (1986). The History of Statistics: The Measurement of Uncertainty Before 1900. Harvard University Press. pp. 225–226. ISBN 978-0-674-40341-3. 26. Bellhouse, P. (2001), "John Arbuthnot", in Statisticians of the Centuries by C.C. Heyde and E. Seneta, Springer, pp. 39–42, ISBN 978-0-387-95329-8 27. Hald, Anders (1998), "Chapter 4. Chance or Design: Tests of Significance", A History of Mathematical Statistics from 1750 to 1930, Wiley, p. 65 28. Cumming, Geoff (2011). "From null hypothesis significance to testing effect sizes". Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. Multivariate Applications Series. East Sussex, United Kingdom: Routledge. pp. 21–52. ISBN 978-0-415-87968-2. 29. Fisher, Ronald A. (1925). Statistical Methods for Research Workers. Edinburgh, UK: Oliver and Boyd. p. 43. ISBN 978-0-050-02170-5. 30. Poletiek, Fenna H. (2001). "Formal theories of testing". Hypothesis-testing Behaviour. Essays in Cognitive Psychology (1st ed.). East Sussex, United Kingdom: Psychology Press. pp. 29–48. ISBN 978-1-841-69159-6. 31. Quinn, Geoffrey R.; Keough, Michael J. (2002). Experimental Design and Data Analysis for Biologists (1st ed.). Cambridge, UK: Cambridge University Press. pp. 46–69. ISBN 978-0-521-00976-8. 32. Neyman, J.; Pearson, E.S. (1933). "The testing of statistical hypotheses in relation to probabilities a priori". Mathematical Proceedings of the Cambridge Philosophical Society. 29 (4): 492–510. Bibcode:1933PCPS...29..492N. doi:10.1017/S030500410001152X. S2CID 119855116. 33. "Conclusions about statistical significance are possible with the help of the confidence interval. If the confidence interval does not include the value of zero effect, it can be assumed that there is a statistically significant result." Prel, Jean-Baptist du; Hommel, Gerhard; Röhrig, Bernd; Blettner, Maria (2009). "Confidence Interval or P-Value?". Deutsches Ärzteblatt Online. 106 (19): 335–9. doi:10.3238/arztebl.2009.0335. PMC 2689604. PMID 19547734. 34. StatNews #73: Overlapping Confidence Intervals and Statistical Significance 35. Neyman, J. (1937). "Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability". Philosophical Transactions of the Royal Society A. 236 (767): 333–380. Bibcode:1937RSPTA.236..333N. doi:10.1098/rsta.1937.0005. JSTOR 91337. 36. Meier, Kenneth J.; Brudney, Jeffrey L.; Bohte, John (2011). Applied Statistics for Public and Nonprofit Administration (3rd ed.). Boston, MA: Cengage Learning. pp. 189–209. ISBN 978-1-111-34280-7. 37. Healy, Joseph F. (2009). The Essentials of Statistics: A Tool for Social Research (2nd ed.). Belmont, CA: Cengage Learning. pp. 177–205. ISBN 978-0-495-60143-2. 38. McKillup, Steve (2006). Statistics Explained: An Introductory Guide for Life Scientists (1st ed.). Cambridge, UK: Cambridge University Press. pp. 32–38. ISBN 978-0-521-54316-3. 39. Heath, David (1995). An Introduction To Experimental Design And Statistics For Biology (1st ed.). Boston, MA: CRC Press. pp. 123–154. ISBN 978-1-857-28132-3. 40. Hinton, Perry R. (2010). "Significance, error, and power". Statistics explained (3rd ed.). New York, NY: Routledge. pp. 79–90. ISBN 978-1-848-72312-2. 41. Vaughan, Simon (2013).
Scientific Inference: Learning from Data (1st ed.). Cambridge, UK: Cambridge University Press. pp. 146–152. ISBN 978-1-107-02482-3. 42. Bracken, Michael B. (2013). Risk, Chance, and Causation: Investigating the Origins and Treatment of Disease (1st ed.). New Haven, CT: Yale University Press. pp. 260–276. ISBN 978-0-300-18884-4. 43. Franklin, Allan (2013). "Prologue: The rise of the sigmas". Shifting Standards: Experiments in Particle Physics in the Twentieth Century (1st ed.). Pittsburgh, PA: University of Pittsburgh Press. pp. ii–iii. ISBN 978-0-822-94430-0. 44. Clarke, GM; Anderson, CA; Pettersson, FH; Cardon, LR; Morris, AP; Zondervan, KT (February 6, 2011). "Basic statistical analysis in genetic case-control studies". Nature Protocols. 6 (2): 121–33. doi:10.1038/nprot.2010.182. PMC 3154648. PMID 21293453. 45. Barsh, GS; Copenhaver, GP; Gibson, G; Williams, SM (July 5, 2012). "Guidelines for Genome-Wide Association Studies". PLOS Genetics. 8 (7): e1002812. doi:10.1371/journal.pgen.1002812. PMC 3390399. PMID 22792080. 46. Carver, Ronald P. (1978). "The Case Against Statistical Significance Testing". Harvard Educational Review. 48 (3): 378–399. doi:10.17763/haer.48.3.t490261645281841. S2CID 16355113. 47. Ioannidis, John P. A. (2005). "Why most published research findings are false". PLOS Medicine. 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMC 1182327. PMID 16060722. 48. Amrhein, Valentin; Korner-Nievergelt, Fränzi; Roth, Tobias (2017). "The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research". PeerJ. 5: e3544. doi:10.7717/peerj.3544. PMC 5502092. PMID 28698825. 49. Hojat, Mohammadreza; Xu, Gang (2004). "A Visitor's Guide to Effect Sizes". Advances in Health Sciences Education. 9 (3): 241–9. doi:10.1023/B:AHSE.0000038173.00909.f6. PMID 15316274. S2CID 8045624. 50. Pedhazur, Elazar J.; Schmelkin, Liora P. (1991). Measurement, Design, and Analysis: An Integrated Approach (Student ed.). New York, NY: Psychology Press. pp. 180–210. ISBN 978-0-805-81063-9. 51. Stahel, Werner (2016). "Statistical Issues in Reproducibility". Reproducibility: Principles, Problems, Practices, and Prospects: 87–114. doi:10.1002/9781118865064.ch5. ISBN 9781118864975. 52. "CSSME Seminar Series: The argument over p-values and the Null Hypothesis Significance Testing (NHST) paradigm". www.education.leeds.ac.uk. School of Education, University of Leeds. Retrieved 2016-12-01. 53. Novella, Steven (February 25, 2015). "Psychology Journal Bans Significance Testing". Science-Based Medicine. 54. Woolston, Chris (2015-03-05). "Psychology journal bans P values". Nature. 519 (7541): 9. Bibcode:2015Natur.519....9W. doi:10.1038/519009f. 55. Siegfried, Tom (2015-03-17). "P value ban: small step for a journal, giant leap for science". Science News. Retrieved 2016-12-01. 56. Antonakis, John (February 2017). "On doing better science: From thrill of discovery to policy implications" (PDF). The Leadership Quarterly. 28 (1): 5–21. doi:10.1016/j.leaqua.2017.01.006. 57. Wasserstein, Ronald L.; Lazar, Nicole A. (2016-04-02). "The ASA's Statement on p-Values: Context, Process, and Purpose". The American Statistician. 70 (2): 129–133. doi:10.1080/00031305.2016.1154108. 58. García-Pérez, Miguel A. (2016-10-05). "Thou Shalt Not Bear False Witness Against Null Hypothesis Significance Testing". Educational and Psychological Measurement. 77 (4): 631–662. doi:10.1177/0013164416668232. ISSN 0013-1644. PMC 5991793. PMID 30034024. 59. Ioannidis, John P.
A.; Ware, Jennifer J.; Wagenmakers, Eric-Jan; Simonsohn, Uri; Chambers, Christopher D.; Button, Katherine S.; Bishop, Dorothy V. M.; Nosek, Brian A.; Munafò, Marcus R. (January 2017). "A manifesto for reproducible science". Nature Human Behaviour. 1: 0021. doi:10.1038/s41562-016-0021. PMC 7610724. PMID 33954258. 60. Benjamin, Daniel; et al. (2018). "Redefine statistical significance". Nature Human Behaviour. 1 (1): 6–10. doi:10.1038/s41562-017-0189-z. PMID 30980045. 61. Chawla, Dalmeet (2017). "'One-size-fits-all' threshold for P values under fire". Nature. doi:10.1038/nature.2017.22625. 62. Amrhein, Valentin; Greenland, Sander (2017). "Remove, rather than redefine, statistical significance". Nature Human Behaviour. 2 (1): 0224. doi:10.1038/s41562-017-0224-0. PMID 30980046. S2CID 46814177. 63. Vyse, Stuart (November 2017). "Moving Science's Statistical Goalposts". csicop.org. CSI. Retrieved 10 July 2018. 64. McShane, Blake; Greenland, Sander; Amrhein, Valentin (March 2019). "Scientists rise up against statistical significance". Nature. 567 (7748): 305–307. Bibcode:2019Natur.567..305A. doi:10.1038/d41586-019-00857-9. PMID 30894741. 65. Wasserstein, Ronald L.; Schirm, Allen L.; Lazar, Nicole A. (2019-03-20). "Moving to a World Beyond "p < 0.05"". The American Statistician. 73 (sup1): 1–19. doi:10.1080/00031305.2019.1583913.

Further reading

• Lydia Denworth, "A Significant Problem: Standard scientific methods are under fire. Will anything change?", Scientific American, vol. 321, no. 4 (October 2019), pp. 62–67. "The use of p values for nearly a century [since 1925] to determine statistical significance of experimental results has contributed to an illusion of certainty and [to] reproducibility crises in many scientific fields. There is growing determination to reform statistical analysis... Some [researchers] suggest changing statistical methods, whereas others would do away with a threshold for defining "significant" results." (p. 63.)
• Ziliak, Stephen and Deirdre McCloskey (2008), The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor, University of Michigan Press, 2009. ISBN 978-0-472-07007-7. Reviews and reception: (compiled by Ziliak)
• Thompson, Bruce (2004). "The "significance" crisis in psychology and education". Journal of Socio-Economics. 33 (5): 607–613. doi:10.1016/j.socec.2004.09.034.
• Chow, Siu L., (1996). Statistical Significance: Rationale, Validity and Utility, Volume 1 of series Introducing Statistical Methods, Sage Publications Ltd, ISBN 978-0-7619-5205-3 – argues that statistical significance is useful in certain circumstances.
• Kline, Rex, (2004). Beyond Significance Testing: Reforming Data Analysis Methods in Behavioral Research. Washington, DC: American Psychological Association.
• Nuzzo, Regina (2014). Scientific method: Statistical errors. Nature Vol. 506, p. 150-152 (open access). Highlights common misunderstandings about the p value.
• Cohen, Jacob (1994). The earth is round (p < .05). American Psychologist. Vol 49, p. 997-1003. Archived 2017-07-13 at the Wayback Machine. Reviews problems with null hypothesis statistical testing.
• Amrhein, Valentin; Greenland, Sander; McShane, Blake (2019-03-20). "Scientists rise up against statistical significance". Nature. 567 (7748): 305–307. Bibcode:2019Natur.567..305A. doi:10.1038/d41586-019-00857-9. PMID 30894741.
External links

Wikiversity has learning resources about Statistical significance

• The article "Earliest Known Uses of Some of the Words of Mathematics (S)" contains an entry on Significance that provides some historical information.
• "The Concept of Statistical Significance Testing" (February 1994): article by Bruce Thompson hosted by the ERIC Clearinghouse on Assessment and Evaluation, Washington, D.C.
• "What does it mean for a result to be "statistically significant"?" (no date): an article from the Statistical Assessment Service at George Mason University, Washington, D.C.
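To tie together the article's points about thresholds and effect sizes, here is a compact Python sketch (using NumPy and SciPy; the data are simulated and every name is illustrative) of the gap between statistical and practical significance: with a very large sample, a negligible true difference still produces a tiny p-value, while Cohen's d stays near zero.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Two very large groups with a tiny true difference in means:
a = rng.normal(0.00, 1.0, size=200_000)
b = rng.normal(0.02, 1.0, size=200_000)

t, p = ttest_ind(a, b)
alpha = 0.05
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd   # standardized mean difference

print(f"p = {p:.3g} -> {'reject' if p <= alpha else 'retain'} H0 at alpha = {alpha}")
print(f"Cohen's d = {cohens_d:.3f}  (a negligible effect size even though p is tiny)")
```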
Statistician

A statistician is a person who works with theoretical or applied statistics. The profession exists in both the private and public sectors. It is common to combine statistical knowledge with expertise in other subjects, and statisticians may work as employees or as statistical consultants.[1][2]

Nature of the work

According to the United States Bureau of Labor Statistics, as of 2014, 26,970 jobs were classified as statistician in the United States. Of these people, approximately 30 percent worked for governments (federal, state, or local).[3] As of October 2021, the median pay for statisticians in the United States was $92,270.[4] Additionally, there is a substantial number of people who use statistics and data analysis in their work but have job titles other than statistician,[5] such as actuaries, applied mathematicians, economists, data scientists, data analysts (predictive analytics), financial analysts, psychometricians, sociologists, epidemiologists, and quantitative psychologists.[6] Statisticians are included with the professions in various national and international occupational classifications.[7][8] In many countries, including the United States, employment in the field requires either a master's degree in statistics or a related field, or a PhD.[1]

According to one industry professional, "Typical work includes collaborating with scientists, providing mathematical modeling, simulations, designing randomized experiments and randomized sampling plans, analyzing experimental or survey results, and forecasting future events (such as sales of a product)."[9]

According to the BLS, "Overall employment is projected to grow 33% from 2016 to 2026, much faster than average for all occupations. Businesses will need these workers to analyze the increasing volume of digital and electronic data."[10] In October 2021, CNBC rated it the fastest-growing job in science and technology of the next decade, with a projected growth rate of 35.40%.[11]

See also

• List of statisticians
• History of statistics
• Data science

References

1. "O*NET OnLine: 15-2041.00 - Statisticians". Retrieved 29 January 2017. 2. "Royal Statistical Society StatsLife Types of Job". Retrieved 29 January 2017. 3. Bureau of Labor Statistics, US Department of Labor. "Statisticians". Occupational Outlook Handbook (2016-17 ed.). Retrieved 30 May 2017. 4. Smith, Morgan (2021-10-11). "The 10 fastest-growing science and technology jobs of the next decade". CNBC. Retrieved 2021-10-13. 5. Bureau of Labor Statistics, US Department of Labor. "Statisticians". Occupational Outlook Handbook (2010-11 ed.). Archived from the original on 14 May 2011. 6. Bureau of Labor Statistics, US Department of Labor. "Statisticians". Occupational Outlook Handbook (2006-07 ed.). Archived from the original on 1 October 2007. Retrieved 3 October 2007. 7. "International Labour Organisation (ILO) International Standard Classification of Occupations (ISCO) ISCO-08 classification structure" (PDF). Archived (PDF) from the original on 2008-10-07. Retrieved 29 January 2017. 8. "Canadian National Occupational Classification (NOC) 2011 21 - Professional occupations in natural and applied sciences". 6 January 2012. Retrieved 29 January 2017. 9. "IMS Presidential Address: Let us own Data Science, 1 October 2014, News of the Institute of Mathematical Statistics". Retrieved 29 January 2017. 10. "Mathematicians and Statisticians : Occupational Outlook Handbook: : U.S. Bureau of Labor Statistics". www.bls.gov. Retrieved 2018-02-28. 11.
Smith, Morgan (2021-10-11). "The 10 fastest-growing science and technology jobs of the next decade". CNBC. Retrieved 2021-10-13.

External links

Wikimedia Commons has media related to Statisticians.

• Statistician entry, Occupational Outlook Handbook, U.S. Bureau of Labor Statistics
• Careers Center, American Statistical Association
• Careers information, Royal Statistical Society (UK)
• Listing of tasks and duties - The International Standard Classification of Occupations (ISCO)
• Listings of nature of work etc - O*NET
Journal of Official Statistics

The Journal of Official Statistics is a peer-reviewed scientific journal that publishes papers related to official statistics. It is published by Statistics Sweden, the national statistical office of Sweden. The journal was established in 1985, when it replaced the Swedish-language journal Statistisk Tidskrift (Statistical Review). It publishes four issues each year.

Discipline: Statistics. Language: English. History: 1985–present. Publisher: Statistics Sweden (Sweden). Frequency: Quarterly. Impact factor: 0.993 (2013). ISO 4 abbreviation: J. Off. Stat. ISSN: 0282-423X. OCLC no.: 11960555.

Abstracting and indexing

The Journal of Official Statistics is indexed in the Current Index to Statistics.

External links

• Official website, 2013–present.
• Archive, 1985–2012.
StatsDirect

StatsDirect is a statistical software package designed for biomedical, public health, and general health science uses. The second generation of the software was reviewed in general medical[1] and public health journals.[2]

Developer(s): StatsDirect Ltd. Stable release: 3.2.7 (May 30, 2019). Operating system: Windows. Type: Statistical analysis. License: Proprietary. Website: www.statsdirect.com.

Features and use

StatsDirect's interface is menu driven and has editors for spreadsheet-like data and reports. The function library includes common medical statistical methods that can be extended by users via an XML-based description that can embed calls to native StatsDirect numerical libraries, R scripts, or algorithms in any of the .NET languages (such as C#, VB.Net, J#, or F#).[3]

Common statistical misconceptions are challenged by the interface. For example, users can perform a chi-square test on a two-by-two table, but they are asked whether the data are from a cohort (prospective) or case-control (retrospective) study before delivering the result. Both processes produce a chi-square test result, but more emphasis is put on the appropriate statistic for the inference, which is the odds ratio for retrospective studies and the relative risk for prospective studies.[4]

Origins

Professor Iain Buchan, formerly of the University of Manchester, wrote a doctoral thesis on the foundational work and is credited as the creator of the software.[5] Buchan said he wished to address the problem of clinicians lacking the statistical knowledge to select and interpret statistical functions correctly, and often misusing software written by and for statisticians as a result. The software debuted in 1989 as Arcus, then Arcus ProStat in 1993, both written for the DOS platform.[6] Arcus Quickstat for Windows followed in 1999.[7] In 2000, an expanded version, StatsDirect, was released for Microsoft Windows.[1] In 2013, the third generation of this software was released, written in C# for the .NET platform.[3]

StatsDirect reports embed the metadata necessary to replay calculations, which may be needed if the original data is ever updated. The reproducible report technology follows the research object approach for replaying in "eLabs".[8]

References

1. Freemantle, Nick (16 December 2000). "CD: StatsDirect---Statistical Software for Medical Research in the 21st Century". BMJ. 321 (7275): 1536. doi:10.1136/bmj.321.7275.1536. S2CID 62184308. 2. Lipp, Alastair (1 September 2002). "Stats Direct". Journal of Public Health. 24 (3): 242. doi:10.1093/pubmed/24.3.242. 3. "StatsDirect Technologies". StatsDirect Ltd. Retrieved 27 July 2014. 4. Davis, Cole (2013). Statistical Testing in Practice with StatsDirect. ISBN 978-1605944500. 5. Buchan, Iain Edward. "The Development of a Statistical Computer Software Resource for Medical Research (MD Thesis)" (PDF). University of Liverpool. Retrieved 3 May 2014. 6. Adler, Eric (March 1994). "Arcus Pro-Stat 3 (Short Reviews)". Personal Computer World: 306. 7. Sachdev, Harshi (October 1999). "Arcus Quickstat (Multimedia Review)". Indian Pediatrics. 36 (10): 1075–1076. 8. Bechhofer, Sean; Buchan, Iain; De Roure, David; Missier, Paolo; Ainsworth, John; Bhagat, Jiten; Couch, Philip; Cruickshank, Don; Delderfield, Mark; Dunlop, Ian; Gamble, Matthew; Michaelides, Danius; Owen, Stuart; Newman, David; Sufi, Shoaib; Goble, Carole (2013). "Why linked data is not enough for scientists" (PDF). Future Generation Computer Systems. 29 (2): 599–611. doi:10.1016/j.future.2011.08.004.
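The prospective/retrospective distinction described under Features and use can be sketched generically in Python (this is an illustration with made-up counts, not StatsDirect's own code; SciPy's chi2_contingency applies Yates' continuity correction to 2×2 tables by default):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = exposed / unexposed, columns = disease / no disease.
table = np.array([[30, 70],
                  [15, 85]])

chi2, p, dof, expected = chi2_contingency(table)

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)                  # emphasized for case-control (retrospective) designs
relative_risk = (a / (a + b)) / (c / (c + d))   # emphasized for cohort (prospective) designs

print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
print(f"odds ratio = {odds_ratio:.2f}, relative risk = {relative_risk:.2f}")
```

Both study types yield the same chi-square statistic here; only the accompanying measure of association changes, which is the point the interface makes.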
External links

• StatsDirect Home page
Steady state

In systems theory, a system or a process is in a steady state if the variables (called state variables) which define the behavior of the system or the process are unchanging in time.[1] In continuous time, this means that for those properties p of the system, the partial derivative with respect to time is zero and remains so: ${\frac {\partial p}{\partial t}}=0\quad {\text{for all present and future }}t.$ In discrete time, it means that the first difference of each property is zero and remains so: $p_{t}-p_{t-1}=0\quad {\text{for all present and future }}t.$

The concept of a steady state has relevance in many fields, in particular thermodynamics, economics, and engineering. If a system is in a steady state, then the recently observed behavior of the system will continue into the future.[1] In stochastic systems, the probabilities that various states will be repeated will remain constant. See for example Linear difference equation#Conversion to homogeneous form for the derivation of the steady state.

In many systems, a steady state is not achieved until some time after the system is started or initiated. This initial situation is often identified as a transient state, start-up or warm-up period.[1] For example, while the flow of fluid through a tube or electricity through a network could be in a steady state because there is a constant flow of fluid or electricity, a tank or capacitor being drained or filled with fluid is a system in a transient state, because its volume of fluid changes with time. Often, a steady state is approached asymptotically. An unstable system is one that diverges from the steady state. See for example Linear difference equation#Stability.

In chemistry, a steady state is a more general situation than dynamic equilibrium. A dynamic equilibrium occurs when two or more reversible processes occur at the same rate, and such a system can be said to be in a steady state; a system in a steady state, however, need not be in dynamic equilibrium, because some of the processes involved are not reversible.

Applications

Economics

A steady state economy is an economy (especially a national economy but possibly that of a city, a region, or the world) of stable size featuring a stable population and stable consumption that remain at or below carrying capacity. In the economic growth model of Robert Solow and Trevor Swan, the steady state occurs when gross investment in physical capital equals depreciation and the economy reaches economic equilibrium, which may occur during a period of growth.

Electrical engineering

In electrical engineering and electronic engineering, steady state is an equilibrium condition of a circuit or network that occurs as the effects of transients are no longer important. Steady state is also used as an approximation in systems with ongoing transient signals, such as audio systems, to allow simplified analysis of first-order performance. Sinusoidal steady-state analysis is a method for analyzing alternating current circuits using the same techniques as for solving DC circuits.[2] The ability of an electrical machine or power system to regain its original operating state after a disturbance is called steady-state stability.[3] The stability of a system refers to its ability to return to its steady state when subjected to a disturbance. In a power system, power is generated by synchronous generators that operate in synchronism with the rest of the system.
A generator is synchronized with a bus when both of them have the same frequency, voltage, and phase sequence. We can thus define power system stability as the ability of the power system to return to steady state without losing synchronism. Usually power system stability is categorized into steady-state, transient, and dynamic stability.

Steady-state stability studies are restricted to small and gradual changes in the system operating conditions. These studies concentrate on keeping bus voltages close to their nominal values, ensuring that the phase angles between two buses are not too large, and checking for the overloading of the power equipment and transmission lines. These checks are usually done using power flow studies.

Transient stability involves the study of the power system following a major disturbance. Following a large disturbance in the synchronous alternator, the machine power (load) angle changes due to sudden acceleration of the rotor shaft. The objective of the transient stability study is to ascertain whether the load angle returns to a steady value following the clearance of the disturbance.

The ability of a power system to maintain stability under continuous small disturbances is investigated under the name of dynamic stability (also known as small-signal stability). These small disturbances occur due to random fluctuations in loads and generation levels. In an interconnected power system, these random variations can lead to catastrophic failure, as they may force the rotor angle to increase steadily.

Steady state determination is an important topic, because many design specifications of electronic systems are given in terms of the steady-state characteristics. A periodic steady-state solution is also a prerequisite for small-signal dynamic modeling. Steady-state analysis is therefore an indispensable component of the design process. In some cases, it is useful to consider constant envelope vibration, that is, vibration that never settles down to motionlessness but continues to move at constant amplitude, a kind of steady-state condition.

Chemical engineering

In chemistry, thermodynamics, and chemical engineering, a steady state is a situation in which all state variables are constant in spite of ongoing processes that strive to change them. For an entire system to be at steady state, i.e. for all state variables of a system to be constant, there must be a flow through the system (compare mass balance). One of the simplest examples of such a system is the case of a bathtub with the tap open but without the bottom plug: after a certain time the water flows in and out at the same rate, so the water level (the state variable being volume) stabilizes and the system is at steady state. Of course the volume stabilizing inside the tub depends on the size of the tub, the diameter of the exit hole and the inflow rate of water. Since the tub can overflow, eventually a steady state can be reached where the water flowing in equals the overflow plus the water out through the drain.

A steady-state flow process requires that conditions at all points in an apparatus remain constant as time changes. There must be no accumulation of mass or energy over the time period of interest.
The same mass flow rate will remain constant in the flow path through each element of the system.[4] Thermodynamic properties may vary from point to point, but will remain unchanged at any given point.[5]

Mechanical engineering

When a periodic force is applied to a mechanical system, it will typically reach a steady state after going through some transient behavior. This is often observed in vibrating systems, such as a clock pendulum, but can happen with any type of stable or semi-stable dynamic system. The length of the transient state will depend on the initial conditions of the system. Given certain initial conditions, a system may be in steady state from the beginning.

Biochemistry

In biochemistry, the study of biochemical pathways is an important topic. Such pathways will often display steady-state behavior in which the concentrations of the chemical species are unchanging even though there is a continuous flux through the pathway. Many, but not all, biochemical pathways evolve to stable, steady states. As a result, the steady state represents an important reference state to study. This is related to the concept of homeostasis; in biochemistry, however, a steady state can be stable or unstable, as in the case of sustained oscillations or bistable behavior.

Physiology

Homeostasis (from Greek ὅμοιος, hómoios, "similar" and στάσις, stásis, "standing still") is the property of a system that regulates its internal environment and tends to maintain a stable, constant condition. Typically used to refer to a living organism, the concept came from that of the milieu intérieur created by Claude Bernard and published in 1865. Multiple dynamic equilibrium adjustment and regulation mechanisms make homeostasis possible.

Fiber optics

In fiber optics, "steady state" is a synonym for equilibrium mode distribution.[6]

Pharmacokinetics

In pharmacokinetics, steady state is a dynamic equilibrium in the body where drug concentrations consistently stay within a therapeutic limit over time.[7]

See also • Attractor • Carrying capacity • Control theory • Dynamical system • Ecological footprint • Economic growth • Engine test stand • Equilibrium point • List of types of equilibrium • Evolutionary economics • Growth curve • Herman Daly • Homeostasis • Limit cycle • Limits to Growth • Population dynamics • Simulation • State function • Steady state economy • Steady State theory • Systems theory • Thermodynamic equilibrium • Transient state

References 1. Gagniuc, Paul A. (2017). Markov Chains: From Theory to Implementation and Experimentation. USA, NJ: John Wiley & Sons. pp. 46–59. ISBN 978-1-119-38755-8. 2. "AC analysis intro 1 (Video)". 3. Power System Analysis 4. Smith, J. M.; Van Ness, H. C. (1959). Introduction to Chemical Engineering Thermodynamics (2nd ed.). McGraw-Hill. p. 34. ISBN 0-070-49486-X. 5. Zemansky, M. W.; Van Ness, H. C. (1966). Basic Engineering Thermodynamics. McGraw-Hill. p. 244. ISBN 0-070-72805-4. 6. This article incorporates public domain material from Federal Standard 1037C. General Services Administration. (in support of MIL-STD-188). 7. Wadhwa, Raoul R.; Cascella, Marco (2021), "Steady State Concentration", StatPearls, Treasure Island (FL): StatPearls Publishing, PMID 31985925, retrieved 2021-06-17
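A minimal numerical sketch, in Python, of the transient-to-steady-state behaviour defined above. The recurrence and its coefficients are illustrative assumptions, not taken from the references; read a as the fraction of drug remaining between doses and u as the dose for a toy version of the pharmacokinetic accumulation just described.

```python
# A discrete-time linear system x_t = a*x_{t-1} + u settles to the steady
# state x* = u/(1 - a) when |a| < 1: the transient decays and the first
# difference x_t - x_{t-1} tends to zero, matching the definition above.
a, u = 0.8, 1.0                   # illustrative values
x_star = u / (1 - a)              # analytical steady state
x = 0.0                           # start-up (transient) initial condition
for t in range(1, 31):
    prev, x = x, a * x + u
    if t % 10 == 0:
        print(f"t={t:2d}  x={x:.5f}  first diff={x - prev:.2e}  x*={x_star:.5f}")
```

With these values the iterates approach x* = 5 and the first difference shrinks by the factor a at every step, which is the asymptotic approach to steady state mentioned in the article.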
Steane code

The Steane code is a tool in quantum error correction introduced by Andrew Steane in 1996. It is a CSS code (Calderbank-Shor-Steane), using the classical binary [7,4,3] Hamming code to correct for qubit flip errors (X errors) and the dual of the Hamming code, the [7,3,4] code, to correct for phase flip errors (Z errors). The Steane code encodes one logical qubit in 7 physical qubits and is able to correct arbitrary single qubit errors. Its check matrix in standard form is ${\begin{bmatrix}H&0\\0&H\end{bmatrix}}$ where H is the parity-check matrix of the Hamming code and is given by $H={\begin{bmatrix}1&0&0&1&0&1&1\\0&1&0&1&1&0&1\\0&0&1&0&1&1&1\end{bmatrix}}.$ The $[[7,1,3]]$ Steane code is the first in the family of quantum Hamming codes, codes with parameters $[[2^{r}-1,2^{r}-1-2r,3]]$ for integers $r\geq 3$. It is also a quantum color code.

Expression in the stabilizer formalism

Main article: stabilizer formalism

In a quantum error correcting code, the codespace is the subspace of the overall Hilbert space where all logical states live. In an $n$-qubit stabilizer code, we can describe this subspace by its Pauli stabilizing group, the set of all $n$-qubit Pauli operators which stabilize every logical state. The stabilizer formalism allows us to define the codespace of a stabilizer code by specifying its Pauli stabilizing group. We can efficiently describe this exponentially large group by listing its generators. Since the Steane code encodes one logical qubit in 7 physical qubits, the codespace for the Steane code is a $2$-dimensional subspace of its $2^{7}$-dimensional Hilbert space. In the stabilizer formalism, the Steane code has 6 generators: ${\begin{aligned}&IIIXXXX\\&IXXIIXX\\&XIXIXIX\\&IIIZZZZ\\&IZZIIZZ\\&ZIZIZIZ.\end{aligned}}$ Note that each of the above generators is the tensor product of 7 single-qubit Pauli operations. For instance, $IIIXXXX$ is just shorthand for $I\otimes I\otimes I\otimes X\otimes X\otimes X\otimes X$, that is, an identity on the first three qubits and an $X$ gate on each of the last four qubits. The tensor products are often omitted in notation for brevity. The logical $X$ and $Z$ gates are ${\begin{aligned}X_{L}&=XXXXXXX\\Z_{L}&=ZZZZZZZ.\end{aligned}}$ The logical $|0\rangle $ and $|1\rangle $ states of the Steane code are ${\begin{aligned}|0\rangle _{L}=&{\frac {1}{\sqrt {8}}}[|0000000\rangle +|1010101\rangle +|0110011\rangle +|1100110\rangle \\&+|0001111\rangle +|1011010\rangle +|0111100\rangle +|1101001\rangle ]\\|1\rangle _{L}=&X_{L}|0\rangle _{L}.\end{aligned}}$ Arbitrary codestates are of the form $|\psi \rangle =\alpha |0\rangle _{L}+\beta |1\rangle _{L}$.

References • Steane, Andrew (1996). "Multiple-Particle Interference and Quantum Error Correction". Proc. R. Soc. Lond. A. 452 (1954): 2551–2577. arXiv:quant-ph/9601029. Bibcode:1996RSPSA.452.2551S. doi:10.1098/rspa.1996.0136. S2CID 8246615.
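The stabilizer relations above can be checked mechanically. The Python sketch below is an illustration, not code from Steane's paper: it assumes the standard binary symplectic representation of Pauli strings, under which two Pauli operators commute exactly when their symplectic inner product is even.

```python
# Verify the commutation structure of the Steane code generators.
import itertools

def pauli(s):
    # Binary symplectic form (x | z) of a Pauli string: X -> (1|0),
    # Z -> (0|1), Y -> (1|1), I -> (0|0), one pair of bits per qubit.
    x = [1 if c in "XY" else 0 for c in s]
    z = [1 if c in "ZY" else 0 for c in s]
    return (x, z)

def commute(p, q):
    (x1, z1), (x2, z2) = p, q
    sp = sum(a * b for a, b in zip(x1, z2)) + sum(a * b for a, b in zip(z1, x2))
    return sp % 2 == 0          # even symplectic product <=> commuting Paulis

gens = [pauli(s) for s in (
    "IIIXXXX", "IXXIIXX", "XIXIXIX",
    "IIIZZZZ", "IZZIIZZ", "ZIZIZIZ")]

# The six generators must commute pairwise to define a stabilizer group.
assert all(commute(p, q) for p, q in itertools.combinations(gens, 2))

XL, ZL = pauli("XXXXXXX"), pauli("ZZZZZZZ")
# Logical operators commute with every stabilizer ...
assert all(commute(g, XL) and commute(g, ZL) for g in gens)
# ... but anticommute with each other, as a logical X/Z pair must.
assert not commute(XL, ZL)
print("Steane code stabilizer checks passed")
```

The checks pass because any two of the listed generators overlap on an even number of qubits, while the weight-7 logical operators overlap each other on all seven.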
Stechkin's lemma

In mathematics – more specifically, in functional analysis and numerical analysis – Stechkin's lemma is a result about the ℓq norm of the tail of a sequence, when the whole sequence is known to have finite ℓp norm. Here, the term "tail" means those terms in the sequence that are not among the N largest terms, for an arbitrary natural number N. Stechkin's lemma is often useful when analysing best-N-term approximations to functions in a given basis of a function space. The result was originally proved by Stechkin in the case $q=2$.

Statement of the lemma

Let $0<p<q<\infty $ and let $I$ be a countable index set. Let $(a_{i})_{i\in I}$ be any sequence indexed by $I$, and for $N\in \mathbb {N} $ let $I_{N}\subset I$ be the indices of the $N$ largest terms of the sequence $(a_{i})_{i\in I}$ in absolute value. Then $\left(\sum _{i\in I\setminus I_{N}}|a_{i}|^{q}\right)^{1/q}\leq \left(\sum _{i\in I}|a_{i}|^{p}\right)^{1/p}{\frac {1}{N^{r}}}$ where $r={\frac {1}{p}}-{\frac {1}{q}}>0$. Thus, Stechkin's lemma controls the ℓq norm of the tail of the sequence $(a_{i})_{i\in I}$ (and hence the ℓq norm of the difference between the sequence and its approximation using its $N$ largest terms) in terms of the ℓp norm of the full sequence and a rate of decay $N^{-r}$.

References • Schneider, Reinhold; Uschmajew, André (2014). "Approximation rates for the hierarchical tensor format in periodic Sobolev spaces". Journal of Complexity. 30 (2): 56–71. CiteSeerX 10.1.1.690.6952. doi:10.1016/j.jco.2013.10.001. ISSN 0885-064X. See Section 2.1 and Footnote 5.
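The inequality is easy to probe numerically. The Python sketch below is an illustration under assumed parameters (p = 1, q = 2, so r = 1/2, and a random exponentially distributed sequence); it is not taken from the reference.

```python
# Numerical check of Stechkin's lemma: the l^q norm of the tail left after
# removing the N largest terms is bounded by the l^p norm times N^(-r).
import random

p, q = 1.0, 2.0
r = 1 / p - 1 / q                      # here r = 1/2
a = sorted((random.expovariate(1.0) for _ in range(2000)), reverse=True)

lp = sum(abs(x) ** p for x in a) ** (1 / p)
for N in (10, 100, 1000):
    tail = sum(abs(x) ** q for x in a[N:]) ** (1 / q)   # N largest removed
    bound = lp / N ** r
    assert tail <= bound
    print(f"N={N:5d}  tail l^q norm={tail:.4f}  bound={bound:.4f}")
```

In best-N-term approximation this is exactly the quantity of interest: the tail norm is the error left after keeping the N largest coefficients.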
Jackie Stedall

Jacqueline Anne "Jackie" Stedall (4 August 1950 – 27 September 2014)[1] was a British mathematics historian. She wrote nine books[1] and appeared on BBC Radio 4's In Our Time programme.

Jackie Stedall. Born: Jacqueline Anne Stedall, 4 August 1950, Romford, Essex, England, UK. Died: 27 September 2014 (aged 64). Nationality: British. Occupation: Mathematics historian. Education: Queen Mary's High School. Alma mater: Girton College, Cambridge; University of Kent; Bristol Polytechnic; Open University. Discipline: Mathematics. Sub-discipline: History of mathematics. Institutions: University of Bristol; The Queen's College, Oxford. Notable works: The History of Mathematics: A Very Short Introduction (2012).

Early life

Stedall was born in Romford, Essex, and attended Queen Mary's High School in Walsall.[1] Her academic achievements included a BA in mathematics from Girton College, Cambridge, an MSc in statistics from the University of Kent, a PGCE from Bristol Polytechnic (now the University of the West of England), and a PhD in the history of mathematics from the Open University.[1][2] Her PhD focused on John Wallis's 1685 work Treatise of Algebra.[3]

Career

After her MSc degree, Stedall worked for three years as a statistician at the University of Bristol, and four years as an administrator for War on Want. Subsequently, she worked as a teacher for eight years.[2] Stedall's academic career began in 2000, when she became a Clifford Norton student at The Queen's College, Oxford, studying the history of science.[1][2] She later became a fellow of the college, and created a third-year module on the history of mathematics at the University of Oxford.[1][2] In 2002, Stedall became the managing editor of the British Society for the History of Mathematics's newsletter, which later became the BSHM Bulletin journal. She worked alongside fellow mathematical historian Eleanor Robson.[3] Stedall appeared multiple times on the BBC Radio 4 programme In Our Time. Topics that she discussed on the programme included Archimedes, whether Isaac Newton or Gottfried Wilhelm Leibniz was the founder of calculus, the Fibonacci sequence, prime numbers in finance, and Renaissance-era mathematics.[4][5]

Books

Stedall wrote the 2008 book Mathematics Emerging,[6] which was used as the primary textbook for her course.[1][2] She also co-edited and published the Oxford Handbook of the History of Mathematics.[7] With Janet Beery, she co-edited Thomas Harriot's Doctrine of Triangular Numbers: the 'Magisteria Magna' (European Mathematical Society, 2009).[8] In 2012, Stedall wrote the book The History of Mathematics: A Very Short Introduction, part of Oxford University Press's Very Short Introduction series of books.
The book focused on "what mathematical historians do and how they do it".[9] It won the 2013 Neumann Prize for the best English-language book on the history of mathematics.[10]

Personal life

Stedall was married and had two children.[7] Whilst suffering from cancer, Stedall joined the Painswick Friends' meeting house, which "helped her find peace with her illness".[1][7] In March 2014, she was robbed by a Romanian fraud gang, who stole her bank card.[7] Stedall died of cancer on 27 September 2014.[1][2] In her will, she donated money to Queen's College Library for the preservation of mathematical history books.[11] In 2015, the Canadian Society for History and Philosophy of Mathematics held a special session to remember Stedall, and in 2016, the British Society for the History of Mathematics held a two-day meeting at Queen's College on sixteenth- and seventeenth-century algebra, which they dedicated to Stedall.[12][13]

References 1. Neumann, Peter (24 October 2014). "Jacqueline Stedall obituary". The Guardian. Retrieved 23 October 2016. 2. "Jacqueline Anne Stedall (4 August 1950 – 27 September 2014)". University of Oxford. 1 October 2014. Retrieved 23 October 2016. 3. Robson, Eleanor (27 October 2015). "Subverting expectations: memories of editing with Jackie" (PDF). BSHM Bulletin: Journal of the British Society for the History of Mathematics. 30 (3): 178–182. doi:10.1080/17498430.2015.1055902. S2CID 123529423. 4. Collins, Julia; Docherty, Pamela. "Mathematical ideas that shaped the world". maths.ed.ac.uk. Archived from the original on 21 July 2015. Retrieved 23 October 2016. 5. Bragg, Melvyn (December 2011). In Our Time: A companion to the Radio 4 series. Hachette UK. ISBN 978-1-4447-4285-5. Retrieved 23 October 2016. 6. Grattan-Guinness, I. (2010). "Review of Mathematics Emerging. A Sourcebook 1540–1900". Annals of Science. 68 (1): 133–134. doi:10.1080/00033790802657848. ISSN 0003-3790. S2CID 121561644. 7. "Terminally ill Oxford research fellow targeted by Romanian fraudster". The Telegraph. 19 June 2015. Retrieved 23 October 2016. 8. Reviews of Thomas Harriot's Doctrine of Triangular Numbers: • Gouvêa, Fernando Q. (March 2009). "Review". MAA Reviews. Mathematical Association of America. • Schemmel, Matthias (September 2010). "Before calculus". Notes and Records of the Royal Society of London. 64 (3): 303–304. doi:10.1098/rsnr.2010.0016. JSTOR 20753908. • Shea, William R. (2010). Mathematical Reviews. MR 2516550. • "Review". European Mathematical Society Reviews. May 2011. 9. Reviews of The History of Mathematics: A Very Short Introduction: • Blanco, Mònica (2015). "Review". Actes d'História de la Ciència i de la Técnica (in Catalan). 8: 161–163. • Ferguson, Wallace A. (June 2013). "Review". IMA Reviews. Institute of Mathematics and its Applications. • Gouvêa, Fernando Q. (21 December 2012). "Review". MAA Reviews. Mathematical Association of America. Retrieved 23 October 2016. • Harkleroad, Leon. "Review of The History of Mathematics: A Very Short Introduction". MathSciNet. MR 3137003. • Lemmermeyer, Franz. "Review of The History of Mathematics: A Very Short Introduction". zbMATH. Zbl 1244.00001. • Leversha, Gerry (March 2014). The Mathematical Gazette. 98 (541): 155–156. doi:10.1017/s0025557200000917. JSTOR 24496613. S2CID 233360256. • Schneebeli, H. R. (2013). "Review of The History of Mathematics: A Very Short Introduction". Elemente der Mathematik (in German). 68 (3): 136. doi:10.4171/EM/231.
• Sonar, Thomas (September 2014). BSHM Bulletin: Journal of the British Society for the History of Mathematics. 29 (3): 217–219. doi:10.1080/17498430.2014.920217. S2CID 120118503. 10. "Neumann Prize". British Society for the History of Mathematics. Retrieved 2 November 2016. 11. "From the Librarian" (PDF). Insight. Queen's College Oxford Library (5): 1, 2, 6. 2015. Retrieved 2 November 2016. 12. "Call for Papers" (PDF). Canadian Society for History and Philosophy of Mathematics. 2015. Retrieved 2 November 2016. 13. "Mathematics emerging: A tribute to Jackie Stedall and her influence on the history of mathematics". British Society for the History of Mathematics. Retrieved 23 October 2016.
Steenrod homology In algebraic topology, Steenrod homology is a homology theory for compact metric spaces introduced by Norman Steenrod (1940, 1941), based on regular cycles. It is similar to the homology theory introduced rather sketchily by Andrey Kolmogorov in 1936. References • Milnor, John Willard (1995) [1961], "On the Steenrod homology theory", Novikov conjectures, index theorems and rigidity, Vol. 1 (Oberwolfach, 1993), London Math. Soc. Lecture Note Ser., vol. 226, Cambridge University Press, pp. 79–96, doi:10.1017/CBO9780511662676.005, MR 1388297 • Steenrod, Norman E. (1940), "Regular cycles of compact metric spaces", Annals of Mathematics, Second Series, 41 (4): 833–851, doi:10.2307/1968863, ISSN 0003-486X, JSTOR 1968863, MR 0002544 • Steenrod, Norman E. (1941), "Regular cycles of compact metric spaces", Lectures in Topology, Ann Arbor: University of Michigan Press, pp. 43–55, MR 0005298
Steenrod problem

In mathematics, and particularly homology theory, Steenrod's Problem (named after mathematician Norman Steenrod) is a problem concerning the realisation of homology classes by singular manifolds.[1]

Formulation

Let $M$ be a closed, oriented manifold of dimension $n$, and let $[M]\in H_{n}(M)$ be its orientation class. Here $H_{n}(M)$ denotes the integral, $n$-dimensional homology group of $M$. Any continuous map $f\colon M\to X$ defines an induced homomorphism $f_{*}\colon H_{n}(M)\to H_{n}(X)$.[2] A homology class of $H_{n}(X)$ is called realisable if it is of the form $f_{*}[M]$ where $[M]\in H_{n}(M)$. The Steenrod problem is concerned with describing the realisable homology classes of $H_{n}(X)$.[3]

Results

All elements of $H_{k}(X)$ are realisable by smooth manifolds provided $k\leq 6$. Moreover, any cycle can be realized by the mapping of a pseudo-manifold.[3] The assumption that M be orientable can be relaxed. In the case of non-orientable manifolds, every homology class of $H_{n}(X,\mathbb {Z} _{2})$, where $\mathbb {Z} _{2}$ denotes the integers modulo 2, can be realized by a non-oriented manifold, $f\colon M^{n}\to X$.[3]

Conclusions

For smooth manifolds M the problem reduces to finding the form of the homomorphism $\Omega _{n}(X)\to H_{n}(X)$, where $\Omega _{n}(X)$ is the oriented bordism group of X.[4] The connection between the bordism groups $\Omega _{*}$ and the Thom spaces MSO(k) clarified the Steenrod problem by reducing it to the study of the homomorphisms $H_{*}(\operatorname {MSO} (k))\to H_{*}(X)$.[3][5] In his landmark paper from 1954,[5] René Thom produced an example of a non-realisable class in $H_{7}(X)$, where $X$ is the Eilenberg–MacLane space $K(\mathbb {Z} _{3}\oplus \mathbb {Z} _{3},1)$.

See also • Singular homology • Pontryagin-Thom construction • Cobordism

References 1. Eilenberg, Samuel (1949). "On the problems of topology". Annals of Mathematics. 50 (2): 247–260. doi:10.2307/1969448. JSTOR 1969448. 2. Hatcher, Allen (2001), Algebraic Topology, Cambridge University Press, ISBN 0-521-79540-0 3. Encyclopedia of Mathematics. "Steenrod Problem". Retrieved October 29, 2020. 4. Rudyak, Yuli B. (1987). "Realization of homology classes of PL-manifolds with singularities". Mathematical Notes. 41 (5): 417–421. doi:10.1007/bf01159869. S2CID 122228542. 5. Thom, René (1954). "Quelques propriétés globales des variétés différentiables". Commentarii Mathematici Helvetici (in French). 28: 17–86. doi:10.1007/bf02566923. S2CID 120243638.

External links • Thom construction and the Steenrod problem on MathOverflow • Explanation for the Pontryagin-Thom construction
Stefan Bergman

Stefan Bergman (5 May 1895 – 6 June 1977) was a Congress Poland-born American mathematician whose primary work was in complex analysis. His name is also written Bergmann; he dropped the second "n" when he came to the U.S. He is best known for the kernel function he discovered while at the University of Berlin in 1922. This function is known today as the Bergman kernel. Bergman taught for many years at Stanford University, and served as an advisor to several students.[1]

Stefan Bergman. Bergman in Zürich, 1932. Born: May 5, 1895, Częstochowa, Congress Poland, Russian Empire. Died: June 6, 1977, Palo Alto, California, USA (aged 82). Education: University of Vienna; University of Berlin. Known for: Bergman kernel; Bergman metric; Bergman space. Spouse: Adele Adlersberg. Institutions: University of Berlin; Tomsk State University; MIT; Yeshiva University; Brown University; Stanford University. Thesis: Über die Entwicklung der harmonischen Funktionen der Ebene und des Raumes nach Orthogonalfunktionen (1922). Doctoral advisor: Richard von Mises. Doctoral students: Michael Maschler.

Biography

Born in Częstochowa, Congress Poland, Russian Empire, to a German Jewish family,[2] Bergman received his Ph.D. at the University of Berlin in 1921 for a dissertation on Fourier analysis. His advisor, Richard von Mises, had a strong influence on him, lasting for the rest of his career.[3] In 1933, Bergman was forced to leave his post at the University of Berlin because he was a Jew. He fled first to Russia, where he stayed until 1939, and then to Paris. In 1939, he emigrated to the United States, where he would remain for the rest of his life.[3] He was elected a Fellow of the American Academy of Arts and Sciences in 1951.[4] He was a professor at Stanford University from 1952 until his retirement in 1972.[5] He was an invited speaker at the International Congress of Mathematicians in 1950 in Cambridge, Massachusetts[6] and in 1962 in Stockholm (On meromorphic functions of several complex variables).[7] He died in Palo Alto, California, aged 82.

The Bergman Prize

The Stefan Bergman Prize in mathematics was initiated by Bergman's wife in her will, in memory of her husband's work. The American Mathematical Society supports the prize and selects the committee of judges.[8] The prize is awarded for:[8] 1. the theory of the kernel function and its applications in real and complex analysis; or 2. function-theoretic methods in the theory of partial differential equations of elliptic type with a special attention to Bergman's and related operator methods.

Selected publications

• Bergmann, Stefan (1933), "Über die Kernfunktion eines Bereiches und ihr Verhalten am Rande. I", Journal für die reine und angewandte Mathematik (in German), 1933 (169): 1–42, doi:10.1515/crll.1933.169.1, JFM 60.1025.01, S2CID 119758238. • Bergmann, Stefan (1934), "Über eine in gewissen Bereichen mit Maximumfläche gültige Integraldarstellung der Funktionen zweier komplexer Variabler: I", Mathematische Zeitschrift (in German), 39: 76–94, doi:10.1007/BF01201345, S2CID 120856598, Zbl 0009.26202. • Bergmann, Stefan (1935), "Über die Kernfunktion eines Bereiches und ihr Verhalten am Rande. II", Journal für die reine und angewandte Mathematik (in German), 1935 (172): 89–128, doi:10.1515/crll.1935.172.89, JFM 60.1025.01, S2CID 199546391. • Bergmann, S.
(1935), "Über eine in gewissen Bereichen mit Maximumfläche gültige Integraldarstellung der Funktionen zweier komplexer Variabler: II", Mathematische Zeitschrift (in German), 39: 605–608, doi:10.1007/BF01201376, JFM 61.0372.01, S2CID 186228345 • Bergmann, S. (1936), "Über eine Integraldarstellung von Funktionen zweier komplexer Veränderlichen", Recueil Mathématique (Matematicheskii Sbornik), New Series (in German), 1 (43) (6): 851–862, Zbl 0016.17001 • Bergmann, Stefan (1947) [1941], Sur les fonctions orthogonales de plusieurs variables complexes avec les applications à la théorie des fonctions analytiques., Mémorial des sciences mathématiques (in French), vol. 106 (2nd ed.), Paris: Gauthier-Villars, p. 61, JFM 67.0299.03, MR 0032776, Zbl 0036.05101. The original edition was published in 1941 by Interscience Publishers.[9] • Bergmann, Stefan (1948), Sur la fonction-noyau d'un domaine et ses applications dans la théorie des transformations pseudo-conformes., Mémorial des sciences mathématiques (in French), vol. 108, Paris: Gauthier-Villars, p. 80, MR 0032777, Zbl 0036.05201.[10] • The Kernel Function and Conformal Mapping, American Mathematical Society 1950,[11] 2nd edn. 1970 • with Menahem Max Schiffer: Kernel Functions and elliptic differential equations in mathematical physics, Academic Press 1953[12] • with John G. Herriot: Application of the method of the kernel function for solving boundary value problems, Numerische Mathematik 3, 1961 • Integral operators in the theory of linear partial differential equations, Springer 1961,[13] 2nd edn. 1969 See also • Bergman kernel • Bergman metric • Bergman space • Bergman–Weil integral representation References 1. Stefan Bergman at the Mathematics Genealogy Project 2. O'Connor & Robertson, Stefan Bergman. 3. O'Connor, John J.; Robertson, Edmund F., "Stefan Bergman", MacTutor History of Mathematics Archive, University of St Andrews. 4. "Book of Members, 1780–2010: Chapter B" (PDF). American Academy of Arts and Sciences. Retrieved June 16, 2011. 5. Stefan Bergman papers, circa 1940–1972 in SearchWorks, Stanford University Libraries 6. Bergman, Stefan. "On visualization of domains in the theory of functions of two complex variables." Archived 2016-10-03 at the Wayback Machine In Proceedings of the International Congress of Mathematicians, vol. 1, pp. 363–373. 1950. 7. Bergman, S. "On meromorphic functions of several complex variables, Abstract of short communications." Internat. Congr. Math., Stockholm (1962): 63. 8. Other Prizes and Awards Supported by the AMS 9. See the review by Gelbart, Abe (1942). "Review: Stefan Bergman, Sur les fonctions orthogonales de plusieurs variables complexes avec les applications à la théorie des fonctions analytiques". Bulletin of the American Mathematical Society. 48 (1): 15–18. doi:10.1090/s0002-9904-1942-07606-3.. 10. See the review by Behnke, H. (1951). "Review: Stefan Bergman, Sur la fonction-noyau d'un domaine et ses applications dans la théorie du transformations pseudo-conformes". Bulletin of the American Mathematical Society. 57 (3): 186–188. doi:10.1090/s0002-9904-1951-09483-5.. 11. Behnke, H. (1952). "Review: Stefan Bergman, The kernel function and conformal mapping". Bull. Amer. Math. Soc. 58 (1): 76–78. doi:10.1090/s0002-9904-1952-09553-7. 12. Henrici, Peter (1955). "Review: S. Bergman and M. Schiffer, Kernel Functions and elliptic differential equations in mathematical physics". Bull. Amer. Math. Soc. 61 (6): 596–600. doi:10.1090/s0002-9904-1955-10005-5. 13. Kreyszig, Erwin (1962). 
"Review: Stefan Bergman, Integral operators in the theory of linear partial differential equations". Bull. Amer. Math. Soc. 68 (3): 161–162. doi:10.1090/s0002-9904-1962-10724-1. External links • Author profile in the database zbMATH Authority control International • ISNI • VIAF National • France • 2 • BnF data • 2 • Germany • Israel • United States • Sweden • Czech Republic • Netherlands • Poland Academics • MathSciNet • Mathematics Genealogy Project • zbMATH People • Deutsche Biographie Other • SNAC • IdRef
Stefan Cohn-Vossen

Stefan Cohn-Vossen (28 May 1902 – 25 June 1936) was a mathematician responsible for Cohn-Vossen's inequality; the Cohn-Vossen transformation is also named after him.[1] He proved the first version of the splitting theorem.

Stefan Cohn-Vossen in Moscow, probably 1936. Born: May 28, 1902, Breslau, Silesia. Died: June 25, 1936, Moscow, Soviet Union (aged 34). Alma mater: Wrocław University. Known for: Cohn-Vossen's inequality. Fields: Mathematics. Thesis: Singuläre Punkte reeller, schlichter Kurvenscharen, deren Differentialgleichung gegeben ist (1924). Doctoral advisor: Adolf Kneser.

He was also known for his collaboration with David Hilbert on the 1932 book Anschauliche Geometrie, translated into English as Geometry and the Imagination.[2] He was born in Breslau (then a city in the Kingdom of Prussia; now Wrocław in Poland). He wrote a 1924 doctoral dissertation at the University of Breslau (now the University of Wrocław) under the supervision of Adolf Kneser.[3] He became a professor at the University of Cologne in 1930. He was barred from lecturing in 1933 under Nazi racial legislation, because he was Jewish.[4] In 1934 he emigrated to the USSR, with some help from Herman Müntz.[5] While there, he taught at Leningrad University. He died in Moscow from pneumonia.[6]

See also • Cohn-Vossen's inequality

References 1. Voitsekhovskii, M.I. (2001) [1994], "Cohn-Vossen transformation", Encyclopedia of Mathematics, EMS Press 2. Hilbert, David; Cohn-Vossen, Stephan (1952). Geometry and the Imagination (2nd ed.). Chelsea. ISBN 0-8284-1087-9. 3. Stefan Cohn-Vossen at the Mathematics Genealogy Project 4. Siegmund-Schultze, Reinhard (2009), Mathematicians Fleeing from Nazi Germany: Individual Fates and Global Impact, Princeton University Press, pp. 132, 133, 346, 370, 373, 399, ISBN 9780691140414. 5. Siegmund-Schultze 2009 (p.133) quotes from a 1937 letter by Müntz: "The appointments of Cohn-Vossen, Walfisz, Pollaczek (the latter was not allowed to slip in again) were immediately influenced by myself, the ones for Plessner and Bergmann indirectly." 6. Cohn-Vossen's Obituary (in Russian)

External links • Anschauliche Geometrie at Göttinger Digitalisierungszentrum • Cohn-Vossen transformation at Encyclopedia of Mathematics
Stefan Güttel

Stefan Dietrich Güttel (born 27 November 1981) is a German numerical analyst. He is Professor of Applied Mathematics in the Department of Mathematics at the University of Manchester.

Stefan Güttel. Born: Stefan Dietrich Güttel, 27 November 1981,[1] Dresden, Germany. Nationality: German. Citizenship: Germany. Alma mater: Freiberg University of Mining and Technology. Awards: James H. Wilkinson Prize in Numerical Analysis and Scientific Computing (2021); ILAS Taussky–Todd Prize (2023). Fields: Mathematics; Numerical analysis; Numerical linear algebra; Mathematical software.[2] Institutions: University of Manchester. Thesis: Rational Krylov Methods for Operator Functions (2010). Doctoral advisor: Michael Eiermann.[3] Website: www.guettel.com

Güttel was born in Dresden, and was educated at the Freiberg University of Mining and Technology, from which he gained his MSc in Applied Mathematics (2006) and PhD in Applied Mathematics (2010). His PhD thesis Rational Krylov Methods for Operator Functions was supervised by Michael Eiermann.[3] He worked as a postdoctoral researcher at the University of Geneva (2010–2011) and the University of Oxford (2011–2012). In 2012 he was appointed lecturer in mathematics at the University of Manchester, and later promoted to Senior Lecturer and Reader. In 2021 he was promoted to Professor of Applied Mathematics.[1]

Güttel is best known for his work on numerical algorithms for large-scale problems arising from differential equations and in data science, in particular Krylov subspace methods. He has worked with companies such as Intel, Schlumberger, and Arup. Since 2018 Güttel has been a Fellow of the Alan Turing Institute,[4] the United Kingdom's national institute for data science and artificial intelligence. In 2018 he received a Teaching Excellence Award of the University of Manchester. In 2021 he was awarded the James H. Wilkinson Prize in Numerical Analysis and Scientific Computing by the Society for Industrial and Applied Mathematics (SIAM) for his contributions to the analysis, implementation, and application of rational and block Krylov methods (laudatio),[5] which have become popular for the efficient numerical solution of large eigenvalue problems, matrix equations, and in model order reduction. In 2023 he received the Taussky–Todd Prize of the International Linear Algebra Society.[6]

Güttel has served as the elected Secretary and Treasurer of the UK and Republic of Ireland section of SIAM (2016–2018) and is a member of SIAM's membership committee (since 2020). He served on the editorial boards of the SIAM Journal on Scientific Computing (2015–2021) and Electronic Transactions on Numerical Analysis (since 2020).[1]

References 1. "Web CV of Stefan Güttel" (PDF). personalpages.manchester.ac.uk. Retrieved 2022-12-08. 2. Stefan Güttel publications indexed by Google Scholar 3. Stefan Güttel at the Mathematics Genealogy Project 4. Stefan Güttel's Turing Fellow page 5. Dr Stefan Güttel awarded 2021 SIAM James H Wilkinson Prize, University of Manchester, 11 December 2020 6. "ILAS Taussky–Todd Prize Recipients". International Linear Algebra Society. 31 December 2020.

External links • Stefan Güttel's Homepage • Stefan Güttel at the Mathematics Genealogy Project • Publications by Stefan Güttel at ResearchGate
Spidron

In geometry, a spidron is a continuous flat geometric figure composed entirely of triangles, where, for every pair of joining triangles, each has a leg of the other as one of its legs, and neither has any point inside the interior of the other. A deformed spidron is a three-dimensional figure sharing the other properties of a specific spidron, as if that spidron were drawn on paper, cut out in a single piece, and folded along a number of legs. This article discusses the geometric figure; for the science-fiction character see Spidron (character).

Origin and development

A standard spidron consists of two alternating, adjoining sequences of equilateral and isosceles triangles.[1] It was first modelled in 1979 by Dániel Erdély, as homework presented to Ernő Rubik for Rubik's design class at the Hungarian University of Arts and Design (now: Moholy-Nagy University of Art and Design). Erdély had given the name "Spidron" to the figure when he discovered it in the early 1970s.[1] The name originates from the English words spider and spiral, because the shape is reminiscent of a spider web.[2] The term ends with the affix "-on" as in polygon.[1]

A spidron is a plane figure consisting of an alternating sequence of equilateral and isosceles (30°, 30°, 120°) triangles. Within the figure, one side of a regular triangle coincides with one of the sides of an isosceles triangle, while another side coincides with the hypotenuse of another, smaller isosceles triangle. The sequence can be repeated any number of times in the direction of the smaller and smaller triangles, and the entire figure is centrally projected through the mid-point of the base of the largest equilateral triangle.[3]

In his initial work Erdély started with a hexagon. He connected every corner to the one after next. In his mathematical analysis of spidrons, Stefan Stenzhorn demonstrated that it is possible to create a spidron from every regular polygon with more than four sides, and that the number of vertices skipped in each connection step can also be varied. Stenzhorn reasoned that the initial hexagon spidron is just a special case of a general spidron.[4]

In a two-dimensional plane a tessellation with hexagon spidrons is possible. The form is known from many works by M.C. Escher, who devoted himself to such bodies of high symmetry. Due to their symmetry, spidrons are also an interesting object of study for mathematicians.

The spidrons can appear in a very large number of versions, and the different formations make possible the development of a great variety of plane, spatial and mobile applications. These developments are suitable to perform aesthetic and practical functions that are defined in advance by the consciously selected arrangements of all the possible characteristics of symmetry. The spidron system is under the protection of several know-how and industrial design patents; Spidron is a registered trademark. It was awarded a gold medal at the exhibition Genius Europe in 2005. It has been presented in a number of art magazines, conferences and international exhibitions. During the last two years it has also appeared, in several versions, as a public area work. Although the spidron system is the personal work of Dániel Erdély, he developed many of the individual formations together with Hungarian, Dutch, Canadian and American colleagues; in that sense the exhibition is a collective product, and several of the works and developments are the result of international teamwork.
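The arithmetic of the triangle sequence just described can be checked with a short computation. The Python sketch below is an assumption-laden illustration, not Erdély's construction: it assumes each 30°-30°-120° isosceles triangle takes the preceding equilateral triangle's side as its base, so that its legs, and hence the next equilateral side, shrink by a factor of 1/√3 per stage.

```python
# Side lengths shrink by 1/sqrt(3) per stage, so stage areas shrink by 1/3
# and the total area of one spidron arm converges geometrically.
import math

s = 1.0                      # side of the first equilateral triangle (assumed)
total = 0.0
for _ in range(60):          # 60 stages are plenty for machine precision
    eq_area = math.sqrt(3) / 4 * s * s                           # equilateral
    iso_area = 0.5 * (s / math.sqrt(3)) ** 2 * math.sin(math.radians(120))
    total += eq_area + iso_area
    s /= math.sqrt(3)        # next stage is smaller by 1/sqrt(3)

# Closed form of the geometric series (ratio 1/3) for comparison:
first_stage = math.sqrt(3) / 4 + 0.5 / 3 * math.sin(math.radians(120))
print(total, first_stage / (1 - 1 / 3))   # both approximately 0.8660
```

Under the stated base-sharing assumption the infinite arm has finite total area, which is why the nest of ever-smaller triangles can spiral inward without the figure growing without bound.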
The spidron is constructed from two semi-spidrons sharing a long side, with one rotated 180 degrees relative to the other. If the second semi-spidron is reflected in the long side instead of rotated, the result is a "hornflake". Deformed spidrons or hornflakes can be used to construct polyhedra called spidrohedra or hornhedra. Some of these polyhedra are space fillers.[5] A semi-spidron may have infinitely many triangles. Such spidronized polyhedra have infinitely many faces and are examples of apeirohedra.

Practical use

Considering the use of spidrons, Dániel Erdély enumerated several possible applications: It has been raised repeatedly that several layers of spidron reliefs could be used as shock dampers or crumple zones in vehicles. Its space-filling properties make it suitable for the construction of building blocks or toys. The surface could be used to create an adjustable acoustic wall or a system of solar cells that follow the sun in a simple manner. Various folding buildings and static structures could also be developed on the basis of my geometric investigation, which may have utility in space travel.[3]

See also • Triangle mesh • Triangle strip

References 1. Peterson, Ivars (2006). "Swirling Seas, Crystal Balls". ScienceNews.org. Archived from the original on February 28, 2007. Retrieved 2007-02-14. 2. "Spidrons", Jugend-forscht.de (in German). 3. Erdély, Daniel (2004). "Concept of the Spidron System" (PDF). Archived from the original (PDF) on 2011-12-15. Retrieved 2011-12-28. In: Proceedings of Sprout-Selecting Conference: Computer Algebra Systems and Dynamic Geometry Systems in Mathematics Teaching. C. Sárvári, ed. University of Pécs, Pécs, Hungary. 4. Mathematical description of spidrons by Stefan Stenzhorn (in German). 5. Erdély, Dániel (2000). "Spidron System". Symmetry: Culture and Science. Vol. 11, Nos. 1-4. pp. 307–316.

External links • 'Spidron 3D' Google image search • "Edanet", SpaceCollective.org • "Spidron Geometric Systems". Archived from the original on 3 May 2007. Retrieved 9 June 2005. • "New Developments". Archived from the original on 28 January 2007. Retrieved 16 July 2006. • The Pécs Exhibition on Spidron homepage • Peterson, Ivars (21 Oct 2006). "Swirling Seas, Crystal Balls". Science News. Society for Science & the Public. 170 (17): 266. doi:10.2307/4017499. JSTOR 4017499. Archived from the original on February 28, 2007. Retrieved 2006-10-21. • Spidrons as playable art: Tulips, GamePuzzles.com
Stefan problem

In mathematics and its applications, particularly to phase transitions in matter, a Stefan problem is a particular kind of boundary value problem for a system of partial differential equations (PDE), in which the boundary between the phases can move with time. The classical Stefan problem aims to describe the evolution of the boundary between two phases of a material undergoing a phase change, for example the melting of a solid, such as ice to water. This is accomplished by solving heat equations in both regions, subject to given boundary and initial conditions. At the interface between the phases (in the classical problem) the temperature is set to the phase change temperature. To close the mathematical system a further equation, the Stefan condition, is required. This is an energy balance which defines the position of the moving interface. Note that this evolving boundary is an unknown (hyper-)surface; hence, Stefan problems are examples of free boundary problems. Analogous problems occur, for example, in the study of porous media flow, mathematical finance and crystal growth from monomer solutions.[1]

Historical note

The problem is named after Josef Stefan (Jožef Stefan), the Slovenian physicist who introduced the general class of such problems around 1890 in a series of four papers concerning the freezing of the ground and the formation of sea ice.[2] However, some 60 years earlier, in 1831, an equivalent problem, concerning the formation of the Earth's crust, had been studied by Lamé and Clapeyron. Stefan's problem admits a similarity solution, often termed the Neumann solution, which was allegedly presented in a series of lectures in the early 1860s. A comprehensive description of the history of Stefan problems may be found in Rubinstein.[3]

Premises to the mathematical description

From a mathematical point of view, the phases are merely regions in which the solutions of the underlying PDE are continuous and differentiable up to the order of the PDE. In physical problems such solutions represent properties of the medium for each phase. The moving boundaries (or interfaces) are infinitesimally thin surfaces that separate adjacent phases; therefore, the solutions of the underlying PDE and its derivatives may suffer discontinuities across interfaces. The underlying PDEs are not valid at the phase change interfaces; therefore, an additional condition—the Stefan condition—is needed to obtain closure. The Stefan condition expresses the local velocity of a moving boundary, as a function of quantities evaluated at either side of the phase boundary, and is usually derived from a physical constraint. In problems of heat transfer with phase change, for instance, conservation of energy dictates that the discontinuity of heat flux at the boundary must be accounted for by the rate of latent heat release (which is proportional to the local velocity of the interface). The regularity of the equation has been studied mainly by Luis Caffarelli[4][5] and further refined by the work of Alessio Figalli, Xavier Ros-Oton and Joaquim Serra.[6][7]

Mathematical formulation

The one-dimensional one-phase Stefan problem

The one-phase Stefan problem is based on an assumption that one of the material phases may be neglected. Typically this is achieved by assuming that a phase is at the phase change temperature and hence any variation from this leads to a change of phase.
This is a mathematically convenient approximation, which simplifies analysis whilst still demonstrating the essential ideas behind the process. A further standard simplification is to work in non-dimensional format, such that the temperature at the interface may be set to zero and far-field values to $+1$ or $-1$.

Consider a semi-infinite one-dimensional block of ice initially at melting temperature $u=0$ for $x\in [0;+\infty )$. The most well-known form of Stefan problem involves melting via an imposed constant temperature at the left hand boundary, leaving a region $[0;s(t)]$ occupied by water. The melted depth, denoted by $s(t)$, is an unknown function of time. The Stefan problem is defined by

• The heat equation: ${\frac {\partial u}{\partial t}}={\frac {\partial ^{2}u}{\partial x^{2}}},\quad \forall (x,t)\in [0;s(t)]\times [0;+\infty )$

• A fixed temperature, above the melt temperature, on the left boundary: $u(0,t)=1,\quad \forall t>0$

• The interface at the melting temperature is set to $u\left(s(t),t\right)=0$

• The Stefan condition: $\beta {\frac {\mathrm {d} }{\mathrm {d} t}}s(t)=-{\frac {\partial }{\partial x}}u\left(s(t),t\right)$ where $\beta $ is the Stefan number, the ratio of latent to specific sensible heat (where specific indicates it is divided by the mass). Note this definition follows naturally from the nondimensionalisation and is used in many texts,[8][9] although it may also be defined as the inverse of this.

• The initial temperature distribution: $u(x,0)=0,\;\forall x\geq 0$

• The initial depth of the melted ice block: $s(0)=0$

The Neumann solution, obtained by using self-similar variables, indicates that the position of the boundary is given by $s(t)=2\lambda {\sqrt {t}}$ where $\lambda $ satisfies the transcendental equation $\beta \lambda ={\frac {1}{\sqrt {\pi }}}{\frac {\mathrm {e} ^{-\lambda ^{2}}}{{\text{erf}}(\lambda )}}.$ The temperature in the liquid is then given by $u=1-{\frac {{\text{erf}}\left({\frac {x}{2{\sqrt {t}}}}\right)}{{\text{erf}}(\lambda )}}.$

Applications

Apart from modelling melting of solids, the Stefan problem is also used as a model for the asymptotic behaviour (in time) of more complex problems. For example, Pego[10] uses matched asymptotic expansions to prove that Cahn-Hilliard solutions for phase separation problems behave as solutions to a non-linear Stefan problem at an intermediate time scale. Additionally, the solution of the Cahn–Hilliard equation for a binary mixture is reasonably comparable with the solution of a Stefan problem.[11] In this comparison, the Stefan problem was solved using a front-tracking, moving-mesh method with homogeneous Neumann boundary conditions at the outer boundary. Also, Stefan problems can be applied to describe phase transformations other than solid-fluid or fluid-fluid.[12]

The Stefan problem also has a rich inverse theory; in such problems, the melting depth (or curve or hyper-surface) s is the known datum and the problem is to find u or f.[13]

Advanced forms of Stefan problem

The classical Stefan problem deals with stationary materials with constant thermophysical properties (usually irrespective of phase), a constant phase change temperature and, in the example above, an instantaneous switch from the initial temperature to a distinct value at the boundary. In practice thermal properties may vary and specifically always do when the phase changes. The jump in density at phase change induces a fluid motion: the resultant kinetic energy does not figure in the standard energy balance.
With an instantaneous temperature switch the initial fluid velocity is infinite, resulting in an initial infinite kinetic energy. In fact the liquid layer is often in motion, thus requiring advection or convection terms in the heat equation. The melt temperature may vary with size, curvature or speed of the interface. It is impossible to instantaneously switch temperatures and then difficult to maintain an exact fixed boundary temperature. Further, at the nanoscale the temperature may not even follow Fourier's law.

A number of these issues have been tackled in recent years for a variety of physical applications. In the solidification of supercooled melts, an analysis where the phase change temperature depends on the interface velocity may be found in Font et al.[14] Nanoscale solidification, with variable phase change temperature and energy/density effects, is modelled in.[15][16] Solidification with flow in a channel has been studied, in the context of lava[17] and microchannels,[18] or with a free surface in the context of water freezing over an ice layer.[19][20] A general model including different properties in each phase, variable phase change temperature and heat equations based on either Fourier's law or the Guyer-Krumhansl equation is analysed in.[21]

See also • Free boundary problem • Olga Oleinik • Shoshana Kamin • Stefan's equation

Notes 1. Applied partial differential equations. Ockendon, J. R. (Rev. ed.). Oxford: Oxford University Press. 2003. ISBN 0-19-852770-5. OCLC 52486357. 2. (Vuik 1993, p. 157). 3. Rubinstein, L. I. (2016). Stefan Problem. [Place of publication not identified]: American Mathematical Society. ISBN 978-1-4704-2850-1. OCLC 973324855. 4. Caffarelli, Luis A. (1977). "The regularity of free boundaries in higher dimensions". Acta Mathematica. 139: 155–184. doi:10.1007/BF02392236. ISSN 0001-5962. S2CID 123660704. 5. Caffarelli, Luis A. (1978). "Some Aspects of the One-Phase Stefan Problem". Indiana University Mathematics Journal. 27 (1): 73–77. doi:10.1512/iumj.1978.27.27006. ISSN 0022-2518. JSTOR 24891579. 6. Figalli, Alessio; Ros-Oton, Xavier; Serra, Joaquim (2021-03-24). "The singular set in the Stefan problem". arXiv:2103.13379v1. 7. Rorvig, Mordechai (2021-10-06). "Mathematicians Prove Melting Ice Stays Smooth". Quanta Magazine. Retrieved 2021-10-08. 8. Davis, Stephen H. Theory of solidification. Cambridge. ISBN 978-0-511-01924-1. OCLC 232161077. 9. Fowler, A. C. (1997). Mathematical models in the applied sciences. Cambridge: Cambridge University Press. ISBN 0-521-46140-5. OCLC 36621805. 10. R. L. Pego. (1989). Front Migration in the Nonlinear Cahn-Hilliard Equation. Proc. R. Soc. Lond. A., 422:261–278. 11. Vermolen, F. J.; Gharasoo, M. G.; Zitha, P. L. J.; Bruining, J. (2009). "Numerical Solutions of Some Diffuse Interface Problems: The Cahn–Hilliard Equation and the Model of Thomas and Windle". International Journal for Multiscale Computational Engineering. 7 (6): 523–543. doi:10.1615/IntJMultCompEng.v7.i6.40. 12. Alvarenga HD, Van de Putter T, Van Steenberge N, Sietsma J, Terryn H (Apr 2009). "Influence of Carbide Morphology and Microstructure on the Kinetics of Superficial Decarburization of C-Mn Steels". Metallurgical and Materials Transactions A. 46 (1): 123–133. Bibcode:2015MMTA...46..123A.
doi:10.1007/s11661-014-2600-y. S2CID 136871961. 13. (Kirsch 1996). 14. Font, F.; Mitchell, S. L.; Myers, T. G. (2013-07-01). "One-dimensional solidification of supercooled melts". International Journal of Heat and Mass Transfer. 62: 411–421. doi:10.1016/j.ijheatmasstransfer.2013.02.070. hdl:2072/205484. ISSN 0017-9310. 15. Myers, T. G. (2016-08-01). "Mathematical modelling of phase change at the nanoscale". International Communications in Heat and Mass Transfer. 76: 59–62. doi:10.1016/j.icheatmasstransfer.2016.05.005. ISSN 0735-1933. 16. Font, F.; Myers, T. G.; Mitchell, S. L. (February 2015). "A mathematical model for nanoparticle melting with density change". Microfluidics and Nanofluidics. 18 (2): 233–243. doi:10.1007/s10404-014-1423-x. ISSN 1613-4982. S2CID 54087370. 17. Lister, J.R. (1994). "The solidification of buoyancy-driven flow in a flexible-walled channel. Part 1. Constant-volume release". Journal of Fluid Mechanics. 272: 21–44. Bibcode:1994JFM...272...21L. doi:10.1017/S0022112094004362. S2CID 124068245. 18. Myers, T. G.; Low, J. (October 2011). "An approximate mathematical model for solidification of a flowing liquid in a microchannel". Microfluidics and Nanofluidics. 11 (4): 417–428. doi:10.1007/s10404-011-0807-4. hdl:2072/169268. ISSN 1613-4982. S2CID 97060677. 19. Myers, T. G.; Charpin, J. P. F.; Chapman, S. J. (August 2002). "The flow and solidification of a thin fluid film on an arbitrary three-dimensional surface". Physics of Fluids. 14 (8): 2788–2803. Bibcode:2002PhFl...14.2788M. doi:10.1063/1.1488599. hdl:2117/102903. ISSN 1070-6631. 20. Myers, T.G.; Charpin, J.P.F. (December 2004). "A mathematical model for atmospheric ice accretion and water flow on a cold surface". International Journal of Heat and Mass Transfer. 47 (25): 5483–5500. doi:10.1016/j.ijheatmasstransfer.2004.06.037. 21. Myers, T. G.; Hennessy, M. G.; Calvo-Schwarzwälder, M. (2020-03-01). "The Stefan problem with variable thermophysical properties and phase change temperature". International Journal of Heat and Mass Transfer. 149: 118975. arXiv:1904.05698. doi:10.1016/j.ijheatmasstransfer.2019.118975. hdl:2072/445741. ISSN 0017-9310. S2CID 115147121. References Historical references • Vuik, C. (1993), "Some historical notes about the Stefan problem", Nieuw Archief voor Wiskunde, 4e serie, 11 (2): 157–167, Bibcode:1993STIN...9332397V, MR 1239620, Zbl 0801.35002. An interesting historical paper on the early days of the theory; a preprint version (in PDF format) is available here . Scientific and general references • Cannon, John Rozier (1984), The One-Dimensional Heat Equation, Encyclopedia of Mathematics and Its Applications, vol. 23 (1st ed.), Reading–Menlo Park–London–Don Mills–Sydney–Tokyo/ Cambridge–New York City–New Rochelle–Melbourne–Sydney: Addison-Wesley Publishing Company/Cambridge University Press, pp. XXV+483, ISBN 978-0-521-30243-2, MR 0747979, Zbl 0567.35001. Contains an extensive bibliography, 460 items of which deal with the Stefan and other free boundary problems, updated to 1982. • Kirsch, Andreas (1996), Introduction to the Mathematical Theory of Inverse Problems, Applied Mathematical Sciences series, vol. 120, Berlin–Heidelberg–New York: Springer Verlag, pp. x+282, ISBN 0-387-94530-X, MR 1479408, Zbl 0865.35004 • Meirmanov, Anvarbek M. (1992), The Stefan Problem, De Gruyter Expositions in Mathematics, vol. 3, Berlin – New York: Walter de Gruyter, pp. x+245, doi:10.1515/9783110846720, ISBN 3-11-011479-8, MR 1154310, Zbl 0751.35052.  
– via De Gruyter (subscription required) An important monograph from one of the leading contributors to the field, describing his proof of the existence of a classical solution to the multidimensional Stefan problem and surveying its historical development. • Oleinik, O. A. (1960), "A method of solution of the general Stefan problem", Doklady Akademii Nauk SSSR (in Russian), 135: 1050–1057, MR 0125341, Zbl 0131.09202. The paper containing Olga Oleinik's proof of the existence and uniqueness of a generalized solution for the three-dimensional Stefan problem, based on previous research of her pupil S. L. Kamenomostskaya. • Kamenomostskaya, S. L. (1958), "On Stefan Problem", Nauchnye Doklady Vysshey Shkoly, Fiziko-Matematicheskie Nauki (in Russian), 1 (1): 60–62, Zbl 0143.13901. The earlier account of the author's research on the Stefan problem. • Kamenomostskaya, S. L. (1961), "On Stefan's problem", Matematicheskii Sbornik (in Russian), 53(95) (4): 489–514, MR 0141895, Zbl 0102.09301. In this paper the author proves the existence and uniqueness of a generalized solution for the three-dimensional Stefan problem, later improved upon by her teacher Olga Oleinik. • Rodrigues, J. F. (1989), "The Stefan problem revisited", Mathematical Models for Phase Change Problems, Birkhäuser, pp. 129–190, ISBN 0-8176-2309-4 • Rubinstein, L. I. (1971), The Stefan Problem, Translations of Mathematical Monographs, vol. 27, Providence, R.I.: American Mathematical Society, pp. viii+419, ISBN 0-8218-1577-6, MR 0351348, Zbl 0219.35043. A comprehensive reference, written by one of the leading contributors to the theory, updated up to 1962–1963 and containing a bibliography of 201 items. • Tarzia, Domingo Alberto (July 2000), "A Bibliography on Moving-Free Boundary Problems for the Heat-Diffusion Equation. The Stefan and Related Problems", MAT. Serie A: Conferencias, Seminarios y Trabajos de Matemática, 2: 1–297, doi:10.26422/MAT.A.2000.2.tar, ISSN 1515-4904, MR 1802028, Zbl 0963.35207. The impressive personal bibliography of the author on moving and free boundary problems (M–FBP) for the heat-diffusion equation (H–DE), containing about 5900 references to works that appeared in approximately 884 different kinds of publications. Its declared objective is to give a comprehensive account of the existing western mathematical–physical–engineering literature on this research field. Almost all the material on the subject, published after the historical first paper of Lamé and Clapeyron (1831), has been collected. Sources include scientific journals, symposium or conference proceedings, technical reports and books. External links • Vasil'ev, F. P. (2001) [1994], "Stefan condition", Encyclopedia of Mathematics, EMS Press • Vasil'ev, F. P. (2001) [1994], "Stefan problem", Encyclopedia of Mathematics, EMS Press • Vasil'ev, F. P. (2001) [1994], "Stefan problem, inverse", Encyclopedia of Mathematics, EMS Press
Stefano degli Angeli Stefano degli Angeli (Venice, September 23, 1623 – Padova, October 11, 1697) was an Italian mathematician, philosopher, and Jesuate. He was a member of the Catholic Order of the Jesuats (Jesuati). In 1668 the order was suppressed by Pope Clement IX. Angeli was a student of Bonaventura Cavalieri. From 1662 until his death he taught at the University of Padua. From 1654 to 1667 he devoted himself to the study of geometry, continuing the research of Cavalieri and Evangelista Torricelli based on the method of indivisibles. He then moved on to mechanics, where he often found himself in conflict with Giovanni Alfonso Borelli and Giovanni Riccioli. Jean-Étienne Montucla, in his monumental Histoire des mathématiques (Paris, 1758), lavishes praise on Angeli (II, p. 69). Move to Venice and defense of indivisibles Angeli moved from Rome to his native city of Venice in 1652 and began publishing on the method of indivisibles. The method had been under attack by the Jesuits Paul Guldin, Mario Bettini, and André Tacquet. Angeli's first response appeared in an "Appendix pro indivisibilibus," attached to his 1658 book Problemata geometrica sexaginta, and was aimed at Bettini.[1] Alexander (2014) shows how indivisibles and infinitesimals were perceived as a theological threat and opposed on doctrinal grounds in the 17th century. The opposition was spearheaded by clerics and more specifically by the Jesuits. In 1632 (the year Galileo was summoned to stand trial over heliocentrism) the Society's Revisors General, led by Father Jacob Bidermann, banned teaching indivisibles in their schools.[1]: 17  Cavalieri's indivisibles and Galileo Galilei's heliocentrism were systematically opposed by the Jesuits and attacked through a spectrum of means, be it mathematical, academic, political, or religious.[1]: Part I  Bettini called the method of indivisibles "counterfeit philosophizing" and sought to discredit it through a discussion of a paradox presented in Galileo's Discorsi. Angeli analyzes Bettini's position and proves it untenable.[1]: 168  De infinitis parabolis In the preface to his work De infinitis parabolis (1659), Angeli examines the criticisms of indivisibles penned by the Jesuit Tacquet, who claimed in his 1651 book Cylindricorum et annularium libri IV that "[the method of indivisibles] makes war upon geometry to such an extent, that if it is not to destroy it, it must itself be destroyed."[1]: 119  Angeli writes that Tacquet's criticisms were already raised by Guldin and satisfactorily answered by Cavalieri. In his work, Tacquet asked rhetorically: "Who does this reasoning convince?" Angeli responds incredulously: "Whom does it convince? Everyone except the Jesuits." Angeli proceeds to give an impressive list of European mathematicians that have accepted the method of indivisibles, including Jean Beaugrand, Ismael Boulliau, Richard White, and Frans van Schooten.[1]: 169  Angeli is trying to portray the Jesuits as the lone holdouts against a method that is being universally accepted. However, as Alexander points out, the mathematicians cited reside north of the Alps. Of the three Italians that Angeli cites, Torricelli, Rocca, and Raffaello Magiotti, only the first had in fact published on indivisibles, and in any case by 1659 all three were dead. Despite his protestations to the contrary, Angeli was, in his own land, alone. James Gregory studied under Angeli from 1664 until 1668 in Padua.
Andersen[2] notes that Angeli, who was a Jesuat like Cavalieri, remarked that the circles opposed to the method of indivisibles mainly contained Jesuit mathematicians. Composition of the continuum Tacquet warned that unless the method of indivisibles is destroyed first, it will destroy geometry.[1]: 169  Tacquet's concern reflected the Jesuits' commitment to geometry as practiced by Euclid, as well as their commitment to Aristotelian philosophy that rejected the notion that the continuum is made up of indivisibles. Angeli followed his teacher Cavalieri and argued that the composition of the continuum has no bearing on the method of indivisibles, and "even if the continuum is not composed of indivisibles, the method of indivisibles nevertheless remains unshaken." Angeli then went beyond his teacher's cautious advocacy of the method by declaring that the effectiveness of the method of indivisibles provides evidence that the continuum is in fact composed of indivisibles, contrary to the Jesuit position. Fall of the Jesuats On 6 December 1668 Pope Clement IX issued a brief suppressing the Jesuati order, counting Angeli among its members, on the grounds that "no advantage or utility to the Christian people was to be anticipated from their survival." Alexander writes: "It was a stunningly violent and unexpected end to an old and venerable order. Founded by the Blessed John Colombini in 1361 to tend for the poor and the sick, it had survived for [over] three centuries."[1]: 171–172  While Angeli had previously published no fewer than nine books promoting and using the method of indivisibles, he did not publish a word on the topic ever again.[1]: 174  Works • Problemata geometrica sexaginta (in Latin). Venetiis. apud Iohannem La Noù. 1658. • De infinitis parabolis, de infinitisque solidis ex variis rotationibus ipsarum, partiumque earundem genitis (in Latin). Venetiis. apud Ioannem La Noù. 1659. • Miscellaneum geometricum (in Latin). Venetijs. apud Ioannem La Noù. 1660. • De infinitorum spiralium spatiorum mensura (in Latin). Venetijs. apud Ioannem La Noù. 1660. • Accessionis ad steriometriam, et mecanicam, pars prima (in Latin). Venetijs. apud Ioannem La Noù. 1662. • Della gravità dell'aria e fluidi, esercitata principalmente nei loro omogenei (in Italian). Padova: Matteo Cadorino. 1671. Notes 1. Amir Alexander (2014). Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World. Scientific American / Farrar, Straus and Giroux. ISBN 978-0374176815. 2. Kirsti Andersen (1985) "Cavalieri's method of indivisibles", Archive for History of Exact Sciences 31(4): 291-367, especially 355 References • Mario Gliozzi (1961) "Stefano degli Angeli", Biographical Dictionary of Italians, volume 3, Rome: Istituto dell'Enciclopedia Italiana. External links • Media related to Stefano degli Angeli at Wikimedia Commons • O'Connor, John J.; Robertson, Edmund F., "Stefano degli Angeli", MacTutor History of Mathematics Archive, University of St Andrews
Stefano Montaldo Stefano Montaldo (born 1969) is an Italian mathematician working at the University of Cagliari[1] in the fields of differential geometry and global analysis. Montaldo is well known for his research on biharmonic maps. Montaldo earned his Ph.D. from the University of Leeds in 1996, under the supervision of John C. Wood.[2] References 1. Faculty profile, University of Cagliari, retrieved 2017-01-22. 2. Stefano Montaldo at the Mathematics Genealogy Project External links • Home Page at University of Cagliari • Profile at Zentralblatt MATH • Profile at Google Scholar
Stefano da San Gregorio Stefano da San Gregorio was a 17th-century Italian mathematician and theologian from the Order of Discalced Augustinians.[1] Works • Aritmetica pratica (in Italian). Ferrara: Francesco Suzzi. 1642. • De praecipuis iuris et iustitiae partibus (in Latin). Ferrara: Franciscum Succium. 1643. • De divinae pietatis vinculis (in Latin). Milano: Francisci Vigoni. 1668. References 1. Origlia, Giangiuseppe (1757). "SANGREGORIO (Stefano)". Dizionario storico (in Italian). Vol. 2. Napoli: Benedetto Gessari. p. 235.
Steffensen's inequality Steffensen's inequality is a result in mathematics named after Johan Frederik Steffensen.[1] It is an integral inequality in real analysis, stating: If ƒ : [a, b] → R is a non-negative, monotonically decreasing, integrable function and g : [a, b] → [0, 1] is another integrable function, then $\int _{b-k}^{b}f(x)\,dx\leq \int _{a}^{b}f(x)g(x)\,dx\leq \int _{a}^{a+k}f(x)\,dx,$ where $k=\int _{a}^{b}g(x)\,dx.$
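Intuitively, the two outer integrals correspond to pushing all of the mass of g to the right or left end of the interval, which is where the decreasing weight f makes the weighted integral smallest or largest. A minimal numerical spot check, not taken from the cited reference (it assumes Python with NumPy and SciPy):

import numpy as np
from scipy.integrate import quad

# f(x) = e^{-x} is non-negative and decreasing; g(x) = x/3 takes values in [0, 1].
a, b = 0.0, 3.0
f = lambda x: np.exp(-x)
g = lambda x: x / 3.0

k, _ = quad(g, a, b)                           # k = 1.5
lower, _ = quad(f, b - k, b)                   # integral of f over [b-k, b]
middle, _ = quad(lambda x: f(x) * g(x), a, b)  # the weighted integral
upper, _ = quad(f, a, a + k)                   # integral of f over [a, a+k]

assert lower <= middle <= upper
print(lower, middle, upper)

Here k = 1.5 and the three values come out as roughly 0.17 ≤ 0.27 ≤ 0.78, consistent with the inequality.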
References 1. Rabier, Patrick J. (2012). "Steffensen's inequality and $L^{1}$–$L^{\infty }$ estimates of weighted integrals". Proceedings of the American Mathematical Society. 140 (2): 665–675. doi:10.1090/S0002-9939-2011-10939-0. ISSN 0002-9939. External links • Weisstein, Eric W. "Steffensen's Inequality". MathWorld.
Stein–Rosenberg theorem The Stein–Rosenberg theorem, proved in 1948, states that under certain premises, the Jacobi method and the Gauss–Seidel method are either both convergent, or both divergent. If they are convergent, then the Gauss–Seidel method is asymptotically faster than the Jacobi method. Let $A=(a_{ij})\in \mathbb {R} ^{n\times n}$ and write $A=D-L-U$, where $D$ is the diagonal part of $A$ and $-L$ and $-U$ are its strictly lower and strictly upper triangular parts. Let $\rho (X)$ be the spectral radius of a matrix $X$, and let $T_{J}=D^{-1}(L+U)$ and $T_{1}=(D-L)^{-1}U$ be the iteration matrices for the Jacobi method and the Gauss–Seidel method respectively. Theorem: If $a_{ij}\leq 0$ for $i\neq j$ and $a_{ii}>0$ for $i=1,\ldots ,n$, then one and only one of the following mutually exclusive relations is valid: 1. $\rho (T_{J})=\rho (T_{1})=0$. 2. $0<\rho (T_{1})<\rho (T_{J})<1$. 3. $1=\rho (T_{J})=\rho (T_{1})$. 4. $1<\rho (T_{J})<\rho (T_{1})$. The proof uses the Perron–Frobenius theorem for non-negative matrices and can be found in Richard S. Varga's book.[1] In the words of Richard Varga: "the Stein–Rosenberg theorem gives us our first comparison theorem for two different iterative methods. Interpreted in a more practical way, not only is the point Gauss–Seidel iterative method computationally more convenient to use (because of storage requirements) than the point Jacobi iterative method, but it is also asymptotically faster when the Jacobi matrix $T_{J}$ is non-negative." Employing more premises on the matrix $A$, one can even give quantitative results. For example, under certain conditions one can state that the Gauss–Seidel method is twice as fast as the Jacobi iteration.[2]
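The dichotomy is easy to observe numerically. Below is a minimal sketch, not taken from the cited references; it assumes Python with NumPy, and the 3×3 matrix is an arbitrary example satisfying the sign hypotheses of the theorem.

import numpy as np

# Example matrix with positive diagonal and non-positive off-diagonal entries.
A = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  4.0, -1.0],
              [-1.0, -1.0,  4.0]])

D = np.diag(np.diag(A))
L = -np.tril(A, -1)            # A = D - L - U
U = -np.triu(A, 1)

T_J = np.linalg.solve(D, L + U)       # Jacobi iteration matrix D^{-1}(L+U)
T_1 = np.linalg.solve(D - L, U)       # Gauss-Seidel iteration matrix (D-L)^{-1}U

rho = lambda T: max(abs(np.linalg.eigvals(T)))
print(rho(T_1), rho(T_J))             # roughly 0.26 and 0.5 for this matrix

The output is an instance of the second case of the theorem: $0<\rho (T_{1})<\rho (T_{J})<1$, so both methods converge and Gauss–Seidel converges faster.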
References 1. Varga, Richard S. (1962). Matrix Iterative Analysis. ISBN 978-3-540-66321-8. OL 5858659M. 2. "Theorem of Stein and Rosenberg". 2023-06-06.
Stein's method Stein's method is a general method in probability theory to obtain bounds on the distance between two probability distributions with respect to a probability metric. It was introduced by Charles Stein, who first published it in 1972,[1] to obtain a bound between the distribution of a sum of an $m$-dependent sequence of random variables and a standard normal distribution in the Kolmogorov (uniform) metric, and hence to prove not only a central limit theorem, but also bounds on the rates of convergence for the given metric. History At the end of the 1960s, unsatisfied with the by-then known proofs of a specific central limit theorem, Charles Stein developed a new way of proving the theorem for his statistics lecture.[2] His seminal paper was presented in 1970 at the sixth Berkeley Symposium and published in the corresponding proceedings.[1] Later, his Ph.D. student Louis Chen Hsiao Yun modified the method so as to obtain approximation results for the Poisson distribution;[3] therefore the Stein method applied to the problem of Poisson approximation is often referred to as the Stein–Chen method. Probably the most important contributions are the monograph by Stein (1986), where he presents his view of the method and the concept of auxiliary randomisation, in particular using exchangeable pairs, and the articles by Barbour (1988) and Götze (1991), who introduced the so-called generator interpretation, which made it possible to easily adapt the method to many other probability distributions. An important contribution was also an article by Bolthausen (1984) on the so-called combinatorial central limit theorem. In the 1990s the method was adapted to a variety of distributions, such as Gaussian processes by Barbour (1990), the binomial distribution by Ehm (1991), Poisson processes by Barbour and Brown (1992), the Gamma distribution by Luk (1994), and many others. The method gained further popularity in the machine learning community in the mid-2010s, following the development of computable Stein discrepancies and the diverse applications and algorithms based on them. The basic approach Probability metrics Stein's method is a way to bound the distance between two probability distributions using a specific probability metric. Let the metric be given in the form $(1.1)\quad d(P,Q)=\sup _{h\in {\mathcal {H}}}\left|\int h\,dP-\int h\,dQ\right|=\sup _{h\in {\mathcal {H}}}\left|Eh(W)-Eh(Y)\right|$ Here, $P$ and $Q$ are probability measures on a measurable space ${\mathcal {X}}$, $W$ and $Y$ are random variables with distribution $P$ and $Q$ respectively, $E$ is the usual expectation operator and ${\mathcal {H}}$ is a set of functions from ${\mathcal {X}}$ to the set of real numbers. The set ${\mathcal {H}}$ has to be large enough so that the above definition indeed yields a metric. Important examples are the total variation metric, where we let ${\mathcal {H}}$ consist of all the indicator functions of measurable sets, the Kolmogorov (uniform) metric for probability measures on the real numbers, where we consider all the half-line indicator functions, and the Lipschitz (first order Wasserstein; Kantorovich) metric, where the underlying space is itself a metric space and we take the set ${\mathcal {H}}$ to be all Lipschitz-continuous functions with Lipschitz-constant 1. However, note that not every metric can be represented in the form (1.1).
In what follows $P$ is a complicated distribution (e.g., the distribution of a sum of dependent random variables), which we want to approximate by a much simpler and tractable distribution $Q$ (e.g., the standard normal distribution). The Stein operator We assume now that the distribution $Q$ is a fixed distribution; in what follows we shall in particular consider the case where $Q$ is the standard normal distribution, which serves as a classical example. First of all, we need an operator ${\mathcal {A}}$, which acts on functions $f$ from ${\mathcal {X}}$ to the set of real numbers and 'characterizes' distribution $Q$ in the sense that the following equivalence holds: $(2.1)\quad E(({\mathcal {A}}f)(Y))=0{\text{ for all }}f\quad \iff \quad Y{\text{ has distribution }}Q.$ We call such an operator the Stein operator. For the standard normal distribution, Stein's lemma yields such an operator: $(2.2)\quad E\left(f'(Y)-Yf(Y)\right)=0{\text{ for all }}f\in C_{b}^{1}\quad \iff \quad Y{\text{ has standard normal distribution.}}$ Thus, we can take $(2.3)\quad ({\mathcal {A}}f)(x)=f'(x)-xf(x).$ There are in general infinitely many such operators and it still remains an open question, which one to choose. However, it seems that for many distributions there is a particular good one, like (2.3) for the normal distribution. There are different ways to find Stein operators.[4] The Stein equation $P$ is close to $Q$ with respect to $d$ if the difference of expectations in (1.1) is close to 0. We hope now that the operator ${\mathcal {A}}$ exhibits the same behavior: if $P=Q$ then $E({\mathcal {A}}f)(W)=0$, and hopefully if $P\approx Q$ we have $E({\mathcal {A}}f)(W)\approx 0$. It is usually possible to define a function $f=f_{h}$ such that $(3.1)\quad ({\mathcal {A}}f)(x)=h(x)-E[h(Y)]\qquad {\text{ for all }}x.$ We call (3.1) the Stein equation. Replacing $x$ by $W$ and taking expectation with respect to $W$, we get $(3.2)\quad E({\mathcal {A}}f)(W)=E[h(W)]-E[h(Y)].$ Now all the effort is worthwhile only if the left-hand side of (3.2) is easier to bound than the right hand side. This is, surprisingly, often the case. If $Q$ is the standard normal distribution and we use (2.3), then the corresponding Stein equation is $(3.3)\quad f'(x)-xf(x)=h(x)-E[h(Y)]\qquad {\text{for all }}x.$ If probability distribution Q has an absolutely continuous (with respect to the Lebesgue measure) density q, then[4] $(3.4)\quad ({\mathcal {A}}f)(x)=f'(x)+f(x)q'(x)/q(x).$ Solving the Stein equation Analytic methods. Equation (3.3) can be easily solved explicitly: $(4.1)\quad f(x)=e^{x^{2}/2}\int _{-\infty }^{x}[h(s)-Eh(Y)]e^{-s^{2}/2}\,ds.$ Generator method. If ${\mathcal {A}}$ is the generator of a Markov process $(Z_{t})_{t\geq 0}$ (see Barbour (1988), Götze (1991)), then the solution to (3.2) is $(4.2)\quad f(x)=-\int _{0}^{\infty }[E^{x}h(Z_{t})-Eh(Y)]\,dt,$ where $E^{x}$ denotes expectation with respect to the process $Z$ being started in $x$. However, one still has to prove that the solution (4.2) exists for all desired functions $h\in {\mathcal {H}}$. Properties of the solution to the Stein equation Usually, one tries to give bounds on $f$ and its derivatives (or differences) in terms of $h$ and its derivatives (or differences), that is, inequalities of the form $(5.1)\quad \|D^{k}f\|\leq C_{k,l}\|D^{l}h\|,$ for some specific $k,l=0,1,2,\dots $ (typically $k\geq l$ or $k\geq l-1$, respectively, depending on the form of the Stein operator), where often $\|\cdot \|$ is the supremum norm. 
Here, $D^{k}$ denotes the differential operator, but in discrete settings it usually refers to a difference operator. The constants $C_{k,l}$ may contain the parameters of the distribution $Q$. If there are any, they are often referred to as Stein factors. In the case of (4.1) one can prove for the supremum norm that $(5.2)\quad \|f\|_{\infty }\leq \min \left\{{\sqrt {\pi /2}}\|h\|_{\infty },2\|h'\|_{\infty }\right\},\quad \|f'\|_{\infty }\leq \min\{2\|h\|_{\infty },4\|h'\|_{\infty }\},\quad \|f''\|_{\infty }\leq 2\|h'\|_{\infty },$ where the last bound is of course only applicable if $h$ is differentiable (or at least Lipschitz-continuous, which, for example, is not the case if we regard the total variation metric or the Kolmogorov metric!). As the standard normal distribution has no extra parameters, in this specific case the constants are free of additional parameters. If we have bounds in the general form (5.1), we usually are able to treat many probability metrics together. One can often start with the next step below, if bounds of the form (5.1) are already available (which is the case for many distributions). An abstract approximation theorem We are now in a position to bound the left-hand side of (3.2). As this step heavily depends on the form of the Stein operator, we directly regard the case of the standard normal distribution. At this point we could directly plug in the random variable $W$, which we want to approximate, and try to find upper bounds. However, it is often fruitful to formulate a more general theorem. Consider here the case of local dependence. Assume that $W=\sum _{i=1}^{n}X_{i}$ is a sum of random variables such that $E[W]=0$ and $\operatorname {var} [W]=1$. Assume that, for every $i=1,\dots ,n$, there is a set $A_{i}\subset \{1,2,\dots ,n\}$, such that $X_{i}$ is independent of all the random variables $X_{j}$ with $j\not \in A_{i}$. We call this set the 'neighborhood' of $X_{i}$. Likewise let $B_{i}\subset \{1,2,\dots ,n\}$ be a set such that all $X_{j}$ with $j\in A_{i}$ are independent of all $X_{k}$, $k\not \in B_{i}$. We can think of $B_{i}$ as the neighbors in the neighborhood of $X_{i}$, a second-order neighborhood, so to speak. For a set $A\subset \{1,2,\dots ,n\}$ define now the sum $X_{A}:=\sum _{j\in A}X_{j}$. Using Taylor expansion, it is possible to prove that $(6.1)\quad \left|E(f'(W)-Wf(W))\right|\leq \|f''\|_{\infty }\sum _{i=1}^{n}\left({\frac {1}{2}}E|X_{i}X_{A_{i}}^{2}|+E|X_{i}X_{A_{i}}X_{B_{i}\setminus A_{i}}|+E|X_{i}X_{A_{i}}|E|X_{B_{i}}|\right)$ Note that, if we follow this line of argument, we can bound (1.1) only for functions where $\|h'\|_{\infty }$ is bounded because of the third inequality of (5.2) (and in fact, if $h$ has discontinuities, so will $f''$). To obtain a bound similar to (6.1) which contains only the expressions $\|f\|_{\infty }$ and $\|f'\|_{\infty }$, the argument is much more involved and the result is not as simple as (6.1); however, it can be done. Theorem A. If $W$ is as described above, we have for the Lipschitz metric $d_{W}$ that $(6.2)\quad d_{W}({\mathcal {L}}(W),N(0,1))\leq 2\sum _{i=1}^{n}\left({\frac {1}{2}}E|X_{i}X_{A_{i}}^{2}|+E|X_{i}X_{A_{i}}X_{B_{i}\setminus A_{i}}|+E|X_{i}X_{A_{i}}|E|X_{B_{i}}|\right).$ Proof. Recall that the Lipschitz metric is of the form (1.1) where the functions $h$ are Lipschitz-continuous with Lipschitz-constant 1, thus $\|h'\|\leq 1$. Combining this with (6.1) and the last bound in (5.2) proves the theorem.
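The quantity $E(f'(W)-Wf(W))$ controlled in (6.1) is also easy to probe by simulation. The following minimal Monte Carlo sketch is not from the sources cited here; it assumes Python with NumPy, and the test function $\tanh $ and the uniform comparison variable are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

# For the normal Stein operator (2.3), E[f'(Y) - Y f(Y)] = 0 characterizes
# the standard normal distribution, cf. (2.2).
f = np.tanh
f_prime = lambda x: 1.0 / np.cosh(x) ** 2

def stein_expectation(sample):
    # Monte Carlo estimate of E[f'(W) - W f(W)]
    return np.mean(f_prime(sample) - sample * f(sample))

m = 200_000
Y = rng.standard_normal(m)                        # standard normal
W = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), m)   # mean 0, variance 1, not normal

print(stein_expectation(Y))   # close to 0, as (2.2) predicts
print(stein_expectation(W))   # visibly bounded away from 0

The first estimate is near zero, in line with the characterization (2.2); the second is clearly nonzero, reflecting the fact that a uniform variable is far from normal.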
Thus, roughly speaking, we have proved that, to calculate the Lipschitz-distance between a $W$ with local dependence structure and a standard normal distribution, we only need to know the third moments of $X_{i}$ and the size of the neighborhoods $A_{i}$ and $B_{i}$. Application of the theorem We can treat the case of sums of independent and identically distributed random variables with Theorem A. Assume that $EX_{i}=0$, $\operatorname {var} X_{i}=1$ and $W=n^{-1/2}\sum X_{i}$. We can take $A_{i}=B_{i}=\{i\}$. From Theorem A we obtain that $(7.1)\quad d_{W}({\mathcal {L}}(W),N(0,1))\leq {\frac {5E|X_{1}|^{3}}{n^{1/2}}}.$ For sums of random variables, another approach related to Stein's method is known as the zero bias transform. Connections to other methods • Lindeberg's device. Lindeberg (1922) introduced a device in which the difference $Eh(X_{1}+\cdots +X_{n})-Eh(Y_{1}+\cdots +Y_{n})$ is represented as a sum of step-by-step differences. • Tikhomirov's method. Clearly the approach via (1.1) and (3.1) does not involve characteristic functions. However, Tikhomirov (1980) presented a proof of a central limit theorem based on characteristic functions and a differential operator similar to (2.3). The basic observation is that the characteristic function $\psi (t)$ of the standard normal distribution satisfies the differential equation $\psi '(t)+t\psi (t)=0$ for all $t$. Thus, if the characteristic function $\psi _{W}(t)$ of $W$ is such that $\psi '_{W}(t)+t\psi _{W}(t)\approx 0$ we expect that $\psi _{W}(t)\approx \psi (t)$ and hence that $W$ is close to the normal distribution. Tikhomirov states in his paper that he was inspired by Stein's seminal paper. See also • Stein's lemma • Stein discrepancy Notes 1. Stein, C. (1972). "A bound for the error in the normal approximation to the distribution of a sum of dependent random variables". Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2. Vol. 6. University of California Press. pp. 583–602. MR 0402873. Zbl 0278.60026. 2. Charles Stein: The Invariant, the Direct and the "Pretentious" Archived 2007-07-05 at the Wayback Machine. Interview given in 2003 in Singapore. 3. Chen, L.H.Y. (1975). "Poisson approximation for dependent trials". Annals of Probability. 3 (3): 534–545. doi:10.1214/aop/1176996359. JSTOR 2959474. MR 0428387. Zbl 0335.60016. 4. Novak, S.Y. (2011). Extreme Value Methods with Applications to Finance. Monographs on Statistics and Applied Probability. Vol. 122. CRC Press. Ch. 12. ISBN 978-1-43983-574-6. References • Barbour, A. D. (1988). "Stein's method and Poisson process convergence". Journal of Applied Probability. 25: 175–184. doi:10.2307/3214155. JSTOR 3214155. S2CID 121759039. • Barbour, A. D. (1990). "Stein's method for diffusion approximations". Probability Theory and Related Fields. 84 (3): 297–322. doi:10.1007/BF01197887. S2CID 123057547. • Barbour, A. D. & Brown, T. C. (1992). "Stein's method and point process approximation". Stochastic Processes and Their Applications. 43 (1): 9–31. doi:10.1016/0304-4149(92)90073-Y. • Bolthausen, E. (1984). "An estimate of the remainder in a combinatorial central limit theorem". Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete. 66 (3): 379–386. doi:10.1007/BF00533704. S2CID 121725342. • Ehm, W. (1991). "Binomial approximation to the Poisson binomial distribution". Statistics & Probability Letters. 11 (1): 7–16. doi:10.1016/0167-7152(91)90170-V. • Götze, F. (1991). "On the rate of convergence in the multivariate CLT".
The Annals of Probability. 19 (2): 724–739. doi:10.1214/aop/1176990448. • Lindeberg, J. W. (1922). "Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung". Mathematische Zeitschrift. 15 (1): 211–225. doi:10.1007/BF01494395. S2CID 119730242. • Luk, H. M. (1994). Stein's method for the gamma distribution and related statistical applications. Dissertation. • Novak, S. Y. (2011). Extreme value methods with applications to finance. Monographs on Statistics and Applied Probability. Vol. 122. CRC Press. ISBN 978-1-43983-574-6. • Stein, C. (1986). Approximate computation of expectations. Lecture Notes-Monograph Series. Vol. 7. Institute of Mathematical Statistics. ISBN 0-940600-08-0. • Tikhomirov, A. N. (1980). "Convergence rate in the central limit theorem for weakly dependent random variables". Teoriya Veroyatnostei i ee Primeneniya. 25: 800–818. English translation in Tikhomirov, A. N. (1981). "On the Convergence Rate in the Central Limit Theorem for Weakly Dependent Random Variables". Theory of Probability & Its Applications. 25 (4): 790–809. doi:10.1137/1125092. Literature The following text is advanced, and gives a comprehensive overview of the normal case: • Chen, L. H. Y.; Goldstein, L.; Shao, Q. M. (2011). Normal approximation by Stein's method. Springer. ISBN 978-3-642-15006-7. Another advanced book, but having some introductory character, is • Barbour, A. D.; Chen, L. H. Y., eds. (2005). An introduction to Stein's method. Lecture Notes Series, Institute for Mathematical Sciences, National University of Singapore. Vol. 4. Singapore University Press. ISBN 981-256-280-X. A standard reference is the book by Stein, • Stein, C. (1986). Approximate computation of expectations. Institute of Mathematical Statistics Lecture Notes, Monograph Series, 7. Hayward, Calif.: Institute of Mathematical Statistics. ISBN 0-940600-08-0. which contains a lot of interesting material, but may be a little hard to understand at first reading. Despite the method's age, there are few standard introductory books about Stein's method available. The following recent textbook has a chapter (Chapter 2) devoted to introducing Stein's method: • Ross, Sheldon & Peköz, Erol (2007). A second course in probability. ISBN 978-0-9795704-0-7. Although the book • Barbour, A. D.; Holst, L.; Janson, S. (1992). Poisson approximation. Oxford Studies in Probability. Vol. 2. The Clarendon Press Oxford University Press. ISBN 0-19-852235-5. is largely about Poisson approximation, it nevertheless contains a lot of information about the generator approach, in particular in the context of Poisson process approximation. The following textbook has a chapter (Chapter 10) devoted to introducing Stein's method of Poisson approximation: • Sheldon M. Ross (1995). Stochastic Processes. Wiley. ISBN 978-0471120629.
Complementary series representation In mathematics, complementary series representations of a reductive real or p-adic Lie group are certain irreducible unitary representations that are not tempered and do not appear in the decomposition of the regular representation into irreducible representations. They are rather mysterious: they do not turn up very often, and seem to exist by accident. They were sometimes overlooked, in fact, in some earlier claims to have classified the irreducible unitary representations of certain groups. Several conjectures in mathematics, such as the Selberg conjecture, are equivalent to saying that certain representations are not complementary. For examples see the representation theory of SL2(R). Elias M. Stein (1970) constructed some families of them for higher rank groups using analytic continuation, sometimes called the Stein complementary series. References • A.I. Shtern (2001) [1994], "Complementary series (of representations)", Encyclopedia of Mathematics, EMS Press • Stein, Elias M. (April 1970), "Analytic Continuation of Group Representations", Advances in Mathematics, 4 (2): 172–207, doi:10.1016/0001-8708(70)90022-8, also reprinted as ISBN 0-300-01428-7
Stein factorization In algebraic geometry, the Stein factorization, introduced by Karl Stein (1956) for the case of complex spaces, states that a proper morphism can be factorized as a composition of a finite mapping and a proper morphism with connected fibers. Roughly speaking, Stein factorization contracts the connected components of the fibers of a mapping to points. Statement One version for schemes states the following (EGA, III.4.3.1): Let X be a scheme, S a locally noetherian scheme and $f:X\to S$ a proper morphism. Then one can write $f=g\circ f'$ where $g\colon S'\to S$ is a finite morphism and $f'\colon X\to S'$ is a proper morphism so that $f'_{*}{\mathcal {O}}_{X}={\mathcal {O}}_{S'}.$ The existence of this decomposition itself is not difficult; see the proof below. But, by Zariski's connectedness theorem, the last part in the above says that the fiber $f'^{-1}(s')$ is connected for any $s'\in S'$. It follows: Corollary: For any $s\in S$, the set of connected components of the fiber $f^{-1}(s)$ is in bijection with the set of points in the fiber $g^{-1}(s)$. Proof Set: $S'=\operatorname {Spec} _{S}f_{*}{\mathcal {O}}_{X}$ where $\operatorname {Spec} _{S}$ is the relative Spec. The construction gives the natural map $g\colon S'\to S$, which is finite since $f_{*}{\mathcal {O}}_{X}$ is coherent and f is proper. The morphism f factors through g and one gets $f'\colon X\to S'$, which is proper. By construction, $f'_{*}{\mathcal {O}}_{X}={\mathcal {O}}_{S'}$. One then uses the theorem on formal functions to show that the last equality implies $f'$ has connected fibers. (This part is sometimes referred to as Zariski's connectedness theorem.) See also • Contraction morphism References • Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157 • Grothendieck, Alexandre; Dieudonné, Jean (1961). "Eléments de géométrie algébrique: III. Étude cohomologique des faisceaux cohérents, Première partie". Publications Mathématiques de l'IHÉS. 11. doi:10.1007/bf02684274. MR 0217085. • Stein, Karl (1956), "Analytische Zerlegungen komplexer Räume", Mathematische Annalen, 132: 63–93, doi:10.1007/BF01343331, ISSN 0025-5831, MR 0083045
Stein manifold In mathematics, in the theory of several complex variables and complex manifolds, a Stein manifold is a complex submanifold of the vector space of n complex dimensions. They were introduced by and named after Karl Stein (1951). A Stein space is similar to a Stein manifold but is allowed to have singularities. Stein spaces are the analogues of affine varieties or affine schemes in algebraic geometry. Definition Suppose $X$ is a complex manifold of complex dimension $n$ and let ${\mathcal {O}}(X)$ denote the ring of holomorphic functions on $X.$ We call $X$ a Stein manifold if the following conditions hold: • $X$ is holomorphically convex, i.e. for every compact subset $K\subset X$, the so-called holomorphically convex hull, ${\bar {K}}=\left\{z\in X\,\left|\,|f(z)|\leq \sup _{w\in K}|f(w)|\ \forall f\in {\mathcal {O}}(X)\right.\right\},$ is also a compact subset of $X$. • $X$ is holomorphically separable, i.e. if $x\neq y$ are two points in $X$, then there exists $f\in {\mathcal {O}}(X)$ such that $f(x)\neq f(y).$ Non-compact Riemann surfaces are Stein manifolds Let X be a connected, non-compact Riemann surface. A deep theorem of Heinrich Behnke and Stein (1948) asserts that X is a Stein manifold. Another result, attributed to Hans Grauert and Helmut Röhrl (1956), states moreover that every holomorphic vector bundle on X is trivial. In particular, every line bundle is trivial, so $H^{1}(X,{\mathcal {O}}_{X}^{*})=0$. The exponential sheaf sequence leads to the following exact sequence: $H^{1}(X,{\mathcal {O}}_{X})\longrightarrow H^{1}(X,{\mathcal {O}}_{X}^{*})\longrightarrow H^{2}(X,\mathbb {Z} )\longrightarrow H^{2}(X,{\mathcal {O}}_{X})$ Now Cartan's theorem B shows that $H^{1}(X,{\mathcal {O}}_{X})=H^{2}(X,{\mathcal {O}}_{X})=0$, therefore $H^{2}(X,\mathbb {Z} )=0$. This is related to the solution of the second Cousin problem. Properties and examples of Stein manifolds • The standard complex space $\mathbb {C} ^{n}$ is a Stein manifold. • Every domain of holomorphy in $\mathbb {C} ^{n}$ is a Stein manifold. • It can be shown quite easily that every closed complex submanifold of a Stein manifold is a Stein manifold, too. • The embedding theorem for Stein manifolds states the following: Every Stein manifold $X$ of complex dimension $n$ can be embedded into $\mathbb {C} ^{2n+1}$ by a biholomorphic proper map. These facts imply that a Stein manifold is a closed complex submanifold of complex space, whose complex structure is that of the ambient space (because the embedding is biholomorphic). • Every Stein manifold of (complex) dimension n has the homotopy type of an n-dimensional CW-complex. • In one complex dimension the Stein condition can be simplified: a connected Riemann surface is a Stein manifold if and only if it is not compact. This can be proved using a version of the Runge theorem for Riemann surfaces, due to Behnke and Stein. • Every Stein manifold $X$ is holomorphically spreadable, i.e. for every point $x\in X$, there are $n$ holomorphic functions defined on all of $X$ which form a local coordinate system when restricted to some open neighborhood of $x$. • Being a Stein manifold is equivalent to being a (complex) strongly pseudoconvex manifold. The latter means that it has a strongly pseudoconvex (or plurisubharmonic) exhaustive function, i.e. 
a smooth real function $\psi $ on $X$ (which can be assumed to be a Morse function) with $i\partial {\bar {\partial }}\psi >0$, such that the subsets $\{z\in X\mid \psi (z)\leq c\}$ are compact in $X$ for every real number $c$. This is a solution to the so-called Levi problem,[1] named after Eugenio Levi (1911). The function $\psi $ invites a generalization of Stein manifold to the idea of a corresponding class of compact complex manifolds with boundary called Stein domains. A Stein domain is the preimage $\{z\mid -\infty \leq \psi (z)\leq c\}$. Some authors call such manifolds therefore strictly pseudoconvex manifolds. • Related to the previous item, another equivalent and more topological definition in complex dimension 2 is the following: a Stein surface is a complex surface X with a real-valued Morse function f on X such that, away from the critical points of f, the field of complex tangencies to the preimage $X_{c}=f^{-1}(c)$ is a contact structure that induces an orientation on Xc agreeing with the usual orientation as the boundary of $f^{-1}(-\infty ,c).$ That is, $f^{-1}(-\infty ,c)$ is a Stein filling of Xc. Numerous further characterizations of such manifolds exist, in particular capturing the property of their having "many" holomorphic functions taking values in the complex numbers. See for example Cartan's theorems A and B, relating to sheaf cohomology. The initial impetus was to have a description of the properties of the domain of definition of the (maximal) analytic continuation of an analytic function. In the GAGA set of analogies, Stein manifolds correspond to affine varieties. Stein manifolds are in some sense dual to the elliptic manifolds in complex analysis which admit "many" holomorphic functions from the complex numbers into themselves. It is known that a Stein manifold is elliptic if and only if it is fibrant in the sense of so-called "holomorphic homotopy theory". Relation to smooth manifolds Every compact smooth manifold of dimension 2n, which has only handles of index ≤ n, has a Stein structure provided n > 2, and when n = 2 the same holds provided the 2-handles are attached with certain framings (framing less than the Thurston–Bennequin framing).[2][3] Every closed smooth 4-manifold is a union of two Stein 4-manifolds glued along their common boundary.[4] Notes 1. Onishchik, A.L. (2001) [1994], "Levi problem", Encyclopedia of Mathematics, EMS Press 2. Yakov Eliashberg, Topological characterization of Stein manifolds of dimension > 2, International Journal of Mathematics vol. 1, no 1 (1990) 29–46. 3. Robert Gompf, Handlebody construction of Stein surfaces, Annals of Mathematics 148, (1998) 619–693. 4. Selman Akbulut and Rostislav Matveyev, A convex decomposition for four-manifolds, International Mathematics Research Notices (1998), no.7, 371–381. MR1623402 References • Andrist, Rafael (2010). "Stein spaces characterized by their endomorphisms". Transactions of the American Mathematical Society. 363 (5): 2341–2355. doi:10.1090/S0002-9947-2010-05104-9. S2CID 14903691. • Forster, Otto (1981), Lectures on Riemann surfaces, Graduate Text in Mathematics, vol. 81, New-York: Springer Verlag, ISBN 0-387-90617-7 (including a proof of Behnke-Stein and Grauert–Röhrl theorems) • Forstnerič, Franc (2011). Stein Manifolds and Holomorphic Mappings. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge / A Series of Modern Surveys in Mathematics. Vol. 56. doi:10.1007/978-3-642-22250-4. ISBN 978-3-642-22249-8. 
• Hörmander, Lars (1990), An introduction to complex analysis in several variables, North-Holland Mathematical Library, vol. 7, Amsterdam: North-Holland Publishing Co., ISBN 978-0-444-88446-6, MR 1045639 (including a proof of the embedding theorem) • Gompf, Robert E. (1998), "Handlebody construction of Stein surfaces", Annals of Mathematics, Second Series, 148 (2): 619–693, arXiv:math/9803019, doi:10.2307/121005, ISSN 0003-486X, JSTOR 121005, MR 1668563, S2CID 17709531 (definitions and constructions of Stein domains and manifolds in dimension 4) • Grauert, Hans; Remmert, Reinhold (1979), Theory of Stein spaces, Grundlehren der Mathematischen Wissenschaften, vol. 236, Berlin-New York: Springer-Verlag, ISBN 3-540-90388-7, MR 0580152 • Ornea, Liviu; Verbitsky, Misha (2010). "Locally conformal Kähler manifolds with potential". Mathematische Annalen. 348: 25–33. doi:10.1007/s00208-009-0463-0. S2CID 10734808. • Iss'Sa, Hej (1966). "On the Meromorphic Function Field of a Stein Variety". Annals of Mathematics. 83 (1): 34–46. doi:10.2307/1970468. JSTOR 1970468. • Stein, Karl (1951), "Analytische Funktionen mehrerer komplexer Veränderlichen zu vorgegebenen Periodizitätsmoduln und das zweite Cousinsche Problem", Math. Ann. (in German), 123: 201–222, doi:10.1007/bf02054949, MR 0043219, S2CID 122647212 • Zhang, Jing (2006). "Algebraic Stein Varieties". arXiv:math/0610886. Bibcode:2006math.....10886Z.
3D4 In mathematics, the Steinberg triality groups of type 3D4 form a family of Steinberg or twisted Chevalley groups. They are quasi-split forms of D4, depending on a cubic Galois extension of fields K ⊂ L, and using the triality automorphism of the Dynkin diagram D4. Unfortunately the notation for the group is not standardized, as some authors write it as 3D4(K) (thinking of 3D4 as an algebraic group taking values in K) and some as 3D4(L) (thinking of the group as a subgroup of D4(L) fixed by an outer automorphism of order 3). The group 3D4 is very similar to an orthogonal or spin group in dimension 8. Over finite fields these groups form one of the 18 infinite families of finite simple groups, and were introduced by Steinberg (1959). They were independently discovered by Jacques Tits in Tits (1958) and Tits (1959). Construction The simply connected split algebraic group of type D4 has a triality automorphism σ of order 3 coming from an order 3 automorphism of its Dynkin diagram. If L is a field with an automorphism τ of order 3, then this induces an order 3 automorphism τ of the group D4(L). The group 3D4(L) is the subgroup of D4(L) of points fixed by στ. It has three 8-dimensional representations over the field L, permuted by the outer automorphism τ of order 3. Over finite fields The group 3D4(q^3) has order q^12 (q^8 + q^4 + 1) (q^6 − 1) (q^2 − 1). For comparison, the split spin group D4(q) in dimension 8 has order q^12 (q^8 − 2q^4 + 1) (q^6 − 1) (q^2 − 1) and the quasisplit spin group 2D4(q^2) in dimension 8 has order q^12 (q^8 − 1) (q^6 − 1) (q^2 − 1). The group 3D4(q^3) is always simple. The Schur multiplier is always trivial. The outer automorphism group is cyclic of order f where q^3 = p^f and p is prime. This group is also sometimes called 3D4(q), D4^2(q^3), or a twisted Chevalley group. 3D4(2^3) The smallest member of this family of groups has several exceptional properties not shared by other members of the family. It has order 211341312 = 2^12⋅3^4⋅7^2⋅13 and outer automorphism group of order 3. The automorphism group of 3D4(2^3) is a maximal subgroup of the Thompson sporadic group, and is also a subgroup of the compact Lie group of type F4 of dimension 52. In particular it acts on the 26-dimensional representation of F4. In this representation it fixes a 26-dimensional lattice that is the unique 26-dimensional even lattice of determinant 3 with no norm 2 vectors, studied by Elkies & Gross (1996). The dual of this lattice has 819 pairs of vectors of norm 8/3, on which 3D4(2^3) acts as a rank 4 permutation group. The group 3D4(2^3) has 9 classes of maximal subgroups, of structure 2^{1+8}:L2(8) (fixing a point of the rank 4 permutation representation on 819 points), [2^{11}]:(7 × S3), U3(3):2, S3 × L2(8), (7 × L2(7)):2, 3^{1+2}.2S4, 7^2:2A4, 3^2:2A4, and 13:4.
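The order formula is easy to sanity-check against the stated factorization. A minimal sketch in plain Python (an illustration, not taken from the references):

def order_3D4(q):
    # |3D4(q^3)| = q^12 (q^8 + q^4 + 1)(q^6 - 1)(q^2 - 1)
    return q**12 * (q**8 + q**4 + 1) * (q**6 - 1) * (q**2 - 1)

# The smallest member: q = 2 gives 211341312 = 2^12 * 3^4 * 7^2 * 13.
assert order_3D4(2) == 211341312 == 2**12 * 3**4 * 7**2 * 13
print(order_3D4(2), order_3D4(3))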
(1996), "The exceptional cone and the Leech lattice", International Mathematics Research Notices, 1996 (14): 665–698, doi:10.1155/S1073792896000426, ISSN 1073-7928, MR 1411589 • Steinberg, Robert (1959), "Variations on a theme of Chevalley", Pacific Journal of Mathematics, 9 (3): 875–891, doi:10.2140/pjm.1959.9.875, ISSN 0030-8730, MR 0109191 • Steinberg, Robert (1968), Lectures on Chevalley groups, Yale University, New Haven, Conn., MR 0466335, archived from the original on 2012-09-10 • Tits, Jacques (1958), Les "formes réelles" des groupes de type E6, Séminaire Bourbaki; 10e année: 1957/1958. Textes des conférences; Exposés 152 à 168; 2e èd. corrigée, Exposé 162, vol. 15, Paris: Secrétariat math'ematique, MR 0106247 • Tits, Jacques (1959), "Sur la trialité et certains groupes qui s'en déduisent", Inst. Hautes Études Sci. Publ. Math., 2: 13–60, doi:10.1007/BF02684706, S2CID 120426125 External links • 3D4(23) at the atlas of finite groups • 3D4(33) at the atlas of finite groups
Steinberg formula In mathematical representation theory, Steinberg's formula, introduced by Steinberg (1961), describes the multiplicity of an irreducible representation of a semisimple complex Lie algebra in a tensor product of two irreducible representations. It is a consequence of the Weyl character formula, and for the Lie algebra sl2 it is essentially the Clebsch–Gordan formula. Steinberg's formula states that the multiplicity of the irreducible representation of highest weight ν in the tensor product of the irreducible representations with highest weights λ and μ is given by $\sum _{w,w^{\prime }\in W}\epsilon (ww^{\prime })P(w(\lambda +\rho )+w^{\prime }(\mu +\rho )-(\nu +2\rho ))$ where W is the Weyl group, ε is the determinant of an element of the Weyl group, ρ is the Weyl vector, and P is the Kostant partition function giving the number of ways of writing a vector as a sum of positive roots.
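For sl2 every ingredient of the formula is explicit, which makes a direct check possible: identifying weights with integers, W = {±1} with ε(w) = w, ρ = 1, and the single positive root is α = 2, so P(v) = 1 exactly when v is a non-negative even integer and P(v) = 0 otherwise. The following minimal sketch in plain Python (an illustration, not from the cited sources) verifies that the formula reproduces the classical Clebsch–Gordan rule.

from itertools import product

def P(v):
    # Kostant partition function for sl2 (single positive root alpha = 2)
    return 1 if v >= 0 and v % 2 == 0 else 0

def multiplicity(lam, mu, nu):
    # Steinberg's formula with W = {+1, -1}, eps(w) = w, rho = 1
    return sum(w * wp * P(w * (lam + 1) + wp * (mu + 1) - (nu + 2))
               for w, wp in product((1, -1), repeat=2))

# Clebsch-Gordan: V(nu) occurs in V(lam) (x) V(mu) exactly once when
# |lam - mu| <= nu <= lam + mu and nu has the same parity as lam + mu.
for lam, mu in product(range(6), repeat=2):
    for nu in range(lam + mu + 3):
        expected = int(abs(lam - mu) <= nu <= lam + mu
                       and (lam + mu - nu) % 2 == 0)
        assert multiplicity(lam, mu, nu) == expected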
References • Bourbaki, Nicolas (2005) [1975], Lie groups and Lie algebras. Chapters 7–9, Elements of Mathematics (Berlin), Berlin, New York: Springer-Verlag, ISBN 978-3-540-43405-4, MR 2109105 • Steinberg, Robert (1961), "A general Clebsch–Gordan theorem", Bulletin of the American Mathematical Society, 67 (4): 406–407, doi:10.1090/S0002-9904-1961-10644-7, ISSN 0002-9904, MR 0126508
Steinberg group (K-theory) In algebraic K-theory, a field of mathematics, the Steinberg group $\operatorname {St} (A)$ of a ring $A$ is the universal central extension of the commutator subgroup of the stable general linear group of $A$. It is named after Robert Steinberg, and it is connected with lower $K$-groups, notably $K_{2}$ and $K_{3}$. Definition Abstractly, given a ring $A$, the Steinberg group $\operatorname {St} (A)$ is the universal central extension of the commutator subgroup of the stable general linear group (the commutator subgroup is perfect and so has a universal central extension). Presentation using generators and relations A concrete presentation using generators and relations is as follows. Elementary matrices — i.e. matrices of the form ${e_{pq}}(\lambda ):=\mathbf {1} +{a_{pq}}(\lambda )$, where $\mathbf {1} $ is the identity matrix, ${a_{pq}}(\lambda )$ is the matrix with $\lambda $ in the $(p,q)$-entry and zeros elsewhere, and $p\neq q$ — satisfy the following relations, called the Steinberg relations: ${\begin{aligned}e_{ij}(\lambda )e_{ij}(\mu )&=e_{ij}(\lambda +\mu );&&\\\left[e_{ij}(\lambda ),e_{jk}(\mu )\right]&=e_{ik}(\lambda \mu ),&&{\text{for }}i\neq k;\\\left[e_{ij}(\lambda ),e_{kl}(\mu )\right]&=\mathbf {1} ,&&{\text{for }}i\neq l{\text{ and }}j\neq k.\end{aligned}}$ The unstable Steinberg group of order $r$ over $A$, denoted by ${\operatorname {St} _{r}}(A)$, is defined by the generators ${x_{ij}}(\lambda )$, where $1\leq i\neq j\leq r$ and $\lambda \in A$, these generators being subject to the Steinberg relations. The stable Steinberg group, denoted by $\operatorname {St} (A)$, is the direct limit of the system ${\operatorname {St} _{r}}(A)\to {\operatorname {St} _{r+1}}(A)$. It can also be thought of as the Steinberg group of infinite order. Mapping ${x_{ij}}(\lambda )\mapsto {e_{ij}}(\lambda )$ yields a group homomorphism $\varphi \colon \operatorname {St} (A)\to {\operatorname {GL} _{\infty }}(A)$. As the elementary matrices generate the commutator subgroup, this mapping is surjective onto the commutator subgroup. Interpretation as a fundamental group The Steinberg group is the fundamental group of the Volodin space, which is the union of classifying spaces of the unipotent subgroups of $\operatorname {GL} (A)$. Relation to K-theory K1 ${K_{1}}(A)$ is the cokernel of the map $\varphi \colon \operatorname {St} (A)\to {\operatorname {GL} _{\infty }}(A)$, as $K_{1}$ is the abelianization of ${\operatorname {GL} _{\infty }}(A)$ and the mapping $\varphi $ is surjective onto the commutator subgroup. K2 ${K_{2}}(A)$ is the center of the Steinberg group. This was Milnor's definition, and it also follows from more general definitions of higher $K$-groups. It is also the kernel of the mapping $\varphi \colon \operatorname {St} (A)\to {\operatorname {GL} _{\infty }}(A)$. Indeed, there is an exact sequence $1\to {K_{2}}(A)\to \operatorname {St} (A)\to {\operatorname {GL} _{\infty }}(A)\to {K_{1}}(A)\to 1.$ Equivalently, it is the Schur multiplier of the group of elementary matrices, so it is also a homology group: ${K_{2}}(A)={H_{2}}(E(A);\mathbb {Z} )$. K3 Gersten (1973) showed that ${K_{3}}(A)={H_{3}}(\operatorname {St} (A);\mathbb {Z} )$.
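The Steinberg relations above can be checked directly on elementary matrices. A minimal numerical sketch (an illustration, not from the references; it assumes Python with NumPy, and the size n = 4 and the entries λ, μ are arbitrary):

import numpy as np

n = 4  # any n >= 4 exhibits all three relations with the indices below

def e(i, j, lam):
    # elementary matrix 1 + lam * a_ij (0-based indices, i != j)
    m = np.eye(n)
    m[i, j] = lam
    return m

def comm(a, b):
    # group commutator [a, b] = a b a^{-1} b^{-1}
    return a @ b @ np.linalg.inv(a) @ np.linalg.inv(b)

lam, mu = 2.0, 5.0
assert np.allclose(e(0, 1, lam) @ e(0, 1, mu), e(0, 1, lam + mu))
assert np.allclose(comm(e(0, 1, lam), e(1, 2, mu)), e(0, 2, lam * mu))   # i != k
assert np.allclose(comm(e(0, 1, lam), e(2, 3, mu)), np.eye(n))           # i != l, j != k
print("Steinberg relations verified for elementary matrices")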
(1973), "$K_{3}$ of a Ring is $H_{3}$ of the Steinberg Group", Proceedings of the American Mathematical Society, American Mathematical Society, 37 (2): 366–368, doi:10.2307/2039440, JSTOR 2039440 • Milnor, John Willard (1971), Introduction to Algebraic $K$-theory, Annals of Mathematics Studies, vol. 72, Princeton University Press, MR 0349811 • Steinberg, Robert (1968), Lectures on Chevalley Groups, Yale University, New Haven, Conn., MR 0466335, archived from the original on 2012-09-10
Steinberg representation In mathematics, the Steinberg representation, or Steinberg module or Steinberg character, denoted by St, is a particular linear representation of a reductive algebraic group over a finite field or local field, or a group with a BN-pair. It is analogous to the 1-dimensional sign representation ε of a Coxeter or Weyl group that takes all reflections to –1. For groups over finite fields, these representations were introduced by Robert Steinberg (1951, 1956, 1957), first for the general linear groups, then for classical groups, and then for all Chevalley groups, with a construction that immediately generalized to the other groups of Lie type that were discovered soon after by Steinberg, Suzuki and Ree. Over a finite field of characteristic p, the Steinberg representation has degree equal to the largest power of p dividing the order of the group. The Steinberg representation is the Alvis–Curtis dual of the trivial 1-dimensional representation. Matsumoto (1969), Shalika (1970), and Harish-Chandra (1973) defined analogous Steinberg representations (sometimes called special representations) for algebraic groups over local fields. For the general linear group GL(2), the dimension of the Jacquet module of a special representation is always one. The Steinberg representation of a finite group • The character value of St on an element g equals, up to sign, the order of a Sylow subgroup of the centralizer of g if g has order prime to p, and is zero if the order of g is divisible by p. • The Steinberg representation is equal to an alternating sum over all parabolic subgroups containing a Borel subgroup, of the representation induced from the identity representation of the parabolic subgroup.[1] • The Steinberg representation is both regular and unipotent, and is the only irreducible regular unipotent representation (for the given prime p). • The Steinberg representation is used in the proof of Haboush's theorem (the Mumford conjecture). Most finite simple groups have exactly one Steinberg representation. A few have more than one because they are groups of Lie type in more than one way. For symmetric groups (and other Coxeter groups) the sign representation is analogous to the Steinberg representation. Some of the sporadic simple groups act as doubly transitive permutation groups so have a BN-pair for which one can define a Steinberg representation, but for most of the sporadic groups there is no known analogue of it. The Steinberg representation of a p-adic group Matsumoto (1969), Shalika (1970), and Harish-Chandra (1973) introduced Steinberg representations for algebraic groups over local fields. Casselman (1973) showed that the different ways of defining Steinberg representations are equivalent. Borel & Serre (1976) and Borel (1976) showed how to realize the Steinberg representation in the compactly supported cohomology group $H_{c}^{l}(X)$ of the Bruhat–Tits building of the group. References 1. (Cotner 2021). • Borel, Armand (1976), "Admissible representations of a semi-simple group over a local field with vectors fixed under an Iwahori subgroup", Inventiones Mathematicae, 35: 233–259, doi:10.1007/BF01390139, ISSN 0020-9910, MR 0444849 • Borel, Armand; Serre, Jean-Pierre (1976), "Cohomologie d'immeubles et de groupes S-arithmétiques", Topology, 15 (3): 211–232, doi:10.1016/0040-9383(76)90037-9, ISSN 0040-9383, MR 0447474 • Bump, Daniel (1997), Automorphic forms and representations, Cambridge Studies in Advanced Mathematics, vol.
55, Cambridge University Press, doi:10.1017/CBO9780511609572, ISBN 978-0-521-55098-7, MR 1431508 • Finite Groups of Lie Type: Conjugacy Classes and Complex Characters (Wiley Classics Library) by Roger W. Carter, John Wiley & Sons Inc; New Ed edition (August 1993) ISBN 0-471-94109-3 • Casselman, W. (1973), "The Steinberg character as a true character", in Moore, Calvin C. (ed.), Harmonic analysis on homogeneous spaces (Williams Coll., Williamstown, Mass., 1972), Proc. Sympos. Pure Math., vol. XXVI, Providence, R.I.: American Mathematical Society, pp. 413–417, ISBN 978-0-8218-1426-0, MR 0338273 • Harish-Chandra (1973), "Harmonic analysis on reductive p-adic groups", in Moore, Calvin C. (ed.), Harmonic analysis on homogeneous spaces (Proc. Sympos. Pure Math., Vol. XXVI, Williams Coll., Williamstown, Mass., 1972), Proc. Sympos. Pure Math., vol. XXVI, Providence, R.I.: American Mathematical Society, pp. 167–192, ISBN 978-0-8218-1426-0, MR 0340486 • Matsumoto, Hideya (1969), "Fonctions sphériques sur un groupe semi-simple p-adique", Comptes Rendus de l'Académie des Sciences, Série A et B, 269: A829––A832, ISSN 0151-0509, MR 0263977 • Shalika, J. A. (1970), "On the space of cusp forms of a P-adic Chevalley group", Annals of Mathematics, Second Series, 92 (2): 262–278, doi:10.2307/1970837, ISSN 0003-486X, JSTOR 1970837, MR 0265514 • Steinberg, Robert (2001) [1994], "Steinberg module", Encyclopedia of Mathematics, EMS Press • Steinberg, Robert (1951), "A geometric approach to the representations of the full linear group over a Galois field", Transactions of the American Mathematical Society, 71 (2): 274–282, doi:10.1090/S0002-9947-1951-0043784-0, ISSN 0002-9947, JSTOR 1990691, MR 0043784 • Steinberg, Robert (1956), "Prime power representations of finite linear groups", Canadian Journal of Mathematics, 8: 580–591, doi:10.4153/CJM-1956-063-3, ISSN 0008-414X, MR 0080669 • Steinberg, R. (1957), "Prime power representations of finite linear groups II", Can. J. Math., 9: 347–351, doi:10.4153/CJM-1957-041-1 • R. Steinberg, Collected Papers, Amer. Math. Soc. (1997) ISBN 0-8218-0576-2 pp. 580–586 • Humphreys, J.E. (1987), "The Steinberg representation", Bull. Amer. Math. Soc. (N.S.), 16 (2): 237–263, doi:10.1090/S0273-0979-1987-15512-1, MR 0876960
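The degree statement above can be made concrete: for $GL_{n}(\mathbb {F} _{q})$ with $q=p^{e}$, the group order is $q^{n(n-1)/2}\prod _{i=1}^{n}(q^{i}-1)$, so the Steinberg representation has degree $q^{n(n-1)/2}$, the largest power of p dividing the order. A minimal computational check of this (a sketch; the helper function names are mine, not standard):

```python
from math import prod

def gl_order(n: int, q: int) -> int:
    # |GL_n(F_q)| = prod_{i=0}^{n-1} (q^n - q^i)
    return prod(q**n - q**i for i in range(n))

def p_part(N: int, p: int) -> int:
    # Largest power of p dividing N.
    part = 1
    while N % p == 0:
        N //= p
        part *= p
    return part

p, e, n = 3, 2, 4
q = p**e
# Degree of the Steinberg representation = p-part of the group order,
# which should equal q^(n(n-1)/2).
assert p_part(gl_order(n, q), p) == q**(n * (n - 1) // 2)
print(p_part(gl_order(n, q), p))  # 531441 = 9^6
```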
Steinberg symbol

In mathematics, a Steinberg symbol is a pairing function which generalises the Hilbert symbol and plays a role in the algebraic K-theory of fields. It is named after the mathematician Robert Steinberg.

For a field F we define a Steinberg symbol (or simply a symbol) to be a function $(\cdot ,\cdot ):F^{*}\times F^{*}\rightarrow G$, where G is an abelian group, written multiplicatively, such that
• $(\cdot ,\cdot )$ is bimultiplicative;
• if $a+b=1$ then $(a,b)=1$.

The symbols on F derive from a "universal" symbol, which may be regarded as taking values in $F^{*}\otimes F^{*}/\langle a\otimes 1-a\rangle $. By a theorem of Matsumoto, this group is $K_{2}F$ and is part of the Milnor K-theory for a field.

Properties

If (⋅,⋅) is a symbol then (assuming all terms are defined)
• $(a,-a)=1$;
• $(b,a)=(a,b)^{-1}$;
• $(a,a)=(a,-1)$, an element of order 1 or 2;
• $(a,b)=(a+b,-b/a)$.

Examples
• The trivial symbol, which is identically 1.
• The Hilbert symbol on F with values in {±1}, defined by[1][2]
$(a,b)={\begin{cases}1,&{\mbox{ if }}z^{2}=ax^{2}+by^{2}{\mbox{ has a non-zero solution }}(x,y,z)\in F^{3};\\-1,&{\mbox{ if not.}}\end{cases}}$
• The Contou-Carrère symbol, a symbol for the ring of Laurent power series over an Artinian ring.

Continuous symbols

If F is a topological field then a symbol c is weakly continuous if for each y in F∗ the set of x in F∗ such that c(x,y) = 1 is closed in F∗. This makes no reference to a topology on the codomain G. If G is a topological group, then one may speak of a continuous symbol, and when G is Hausdorff then a continuous symbol is weakly continuous.[3]

The only weakly continuous symbols on R are the trivial symbol and the Hilbert symbol; the only weakly continuous symbol on C is the trivial symbol.[4] The characterisation of weakly continuous symbols on a non-Archimedean local field F was obtained by Moore. The group $K_{2}(F)$ is the direct sum of a cyclic group of order m and a divisible group $K_{2}(F)^{m}$. A symbol on F lifts to a homomorphism on $K_{2}(F)$ and is weakly continuous precisely when it annihilates the divisible component $K_{2}(F)^{m}$. It follows that every weakly continuous symbol factors through the norm residue symbol.[5]

See also
• Steinberg group (K-theory)

References
1. Serre, Jean-Pierre (1996). A Course in Arithmetic. Graduate Texts in Mathematics. Vol. 7. Berlin, New York: Springer-Verlag. ISBN 978-3-540-90040-5.
2. Milnor (1971) p. 94
3. Milnor (1971) p. 165
4. Milnor (1971) p. 166
5. Milnor (1971) p. 175
• Conner, P. E.; Perlis, R. (1984). A Survey of Trace Forms of Algebraic Number Fields. Series in Pure Mathematics. Vol. 2. World Scientific. ISBN 9971-966-05-0. Zbl 0551.10017.
• Lam, Tsit-Yuen (2005). Introduction to Quadratic Forms over Fields. Graduate Studies in Mathematics. Vol. 67. American Mathematical Society. pp. 132–142. ISBN 0-8218-1095-2. Zbl 1068.11023.
• Milnor, John Willard (1971). Introduction to Algebraic K-theory. Annals of Mathematics Studies. Vol. 72. Princeton, NJ: Princeton University Press. MR 0349811. Zbl 0237.18005.
• Steinberg, Robert (1962). "Générateurs, relations et revêtements de groupes algébriques". Colloq. Théorie des Groupes Algébriques (in French). Bruxelles: Gauthier-Villars: 113–127. MR 0153677. Zbl 0272.20036.

External links
• Steinberg symbol at the Encyclopaedia of Mathematics
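Over the reals the Hilbert symbol above has a one-line description: $z^{2}=ax^{2}+by^{2}$ has a non-zero real solution unless a and b are both negative. A small sketch (the function name is mine) checking the two defining symbol axioms numerically on that description:

```python
import random

def hilbert_R(a: float, b: float) -> int:
    # Hilbert symbol over the reals: z^2 = a x^2 + b y^2 has a nonzero
    # solution unless both a and b are negative.
    return -1 if (a < 0 and b < 0) else 1

random.seed(0)
nonzero = [x for x in (random.uniform(-5, 5) for _ in range(100)) if x != 0]

# Bimultiplicativity: (a1*a2, b) = (a1, b)(a2, b).
for a1, a2, b in zip(nonzero, nonzero[1:], nonzero[2:]):
    assert hilbert_R(a1 * a2, b) == hilbert_R(a1, b) * hilbert_R(a2, b)

# Steinberg relation: a + b = 1 implies (a, b) = 1
# (if a < 0 then b = 1 - a > 0, so the arguments are never both negative).
for a in nonzero:
    if a != 1:
        assert hilbert_R(a, 1 - a) == 1
print("symbol axioms hold on the sample")
```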
Minkowski–Steiner formula

In mathematics, the Minkowski–Steiner formula is a formula relating the surface area and volume of compact subsets of Euclidean space. More precisely, it defines the surface area as the "derivative" of enclosed volume in an appropriate sense. The Minkowski–Steiner formula is used, together with the Brunn–Minkowski theorem, to prove the isoperimetric inequality. It is named after Hermann Minkowski and Jakob Steiner.

Statement of the Minkowski–Steiner formula

Let $n\geq 2$, and let $A\subsetneq \mathbb {R} ^{n}$ be a compact set. Let $\mu (A)$ denote the Lebesgue measure (volume) of $A$. Define the quantity $\lambda (\partial A)$ by the Minkowski–Steiner formula
$\lambda (\partial A):=\liminf _{\delta \to 0}{\frac {\mu \left(A+{\overline {B_{\delta }}}\right)-\mu (A)}{\delta }},$
where
${\overline {B_{\delta }}}:=\left\{x=(x_{1},\dots ,x_{n})\in \mathbb {R} ^{n}\left||x|:={\sqrt {x_{1}^{2}+\dots +x_{n}^{2}}}\leq \delta \right.\right\}$
denotes the closed ball of radius $\delta >0$, and
$A+{\overline {B_{\delta }}}:=\left\{a+b\in \mathbb {R} ^{n}\left|a\in A,b\in {\overline {B_{\delta }}}\right.\right\}$
is the Minkowski sum of $A$ and ${\overline {B_{\delta }}}$, so that
$A+{\overline {B_{\delta }}}=\left\{x\in \mathbb {R} ^{n}{\mathrel {|}}\ {\mathopen {|}}x-a{\mathclose {|}}\leq \delta {\mbox{ for some }}a\in A\right\}.$

Remarks

Surface measure

For "sufficiently regular" sets $A$, the quantity $\lambda (\partial A)$ does indeed correspond with the $(n-1)$-dimensional measure of the boundary $\partial A$ of $A$. See Federer (1969) for a full treatment of this problem.

Convex sets

When the set $A$ is a convex set, the lim inf above is a true limit, and one can show that
$\mu \left(A+{\overline {B_{\delta }}}\right)=\mu (A)+\lambda (\partial A)\delta +\sum _{i=2}^{n-1}\lambda _{i}(A)\delta ^{i}+\omega _{n}\delta ^{n},$
where the $\lambda _{i}$ are certain continuous functions of $A$ (see quermassintegrals) and $\omega _{n}$ denotes the measure (volume) of the unit ball in $\mathbb {R} ^{n}$:
$\omega _{n}={\frac {2\pi ^{n/2}}{n\Gamma (n/2)}},$
where $\Gamma $ denotes the Gamma function.

Example: volume and surface area of a ball

Taking $A={\overline {B_{R}}}$ gives the following well-known formula for the surface area of the sphere of radius $R$, $S_{R}:=\partial B_{R}$:
$\lambda (S_{R})=\lim _{\delta \to 0}{\frac {\mu \left({\overline {B_{R}}}+{\overline {B_{\delta }}}\right)-\mu \left({\overline {B_{R}}}\right)}{\delta }}=\lim _{\delta \to 0}{\frac {[(R+\delta )^{n}-R^{n}]\omega _{n}}{\delta }}=nR^{n-1}\omega _{n},$
where $\omega _{n}$ is as above.

References
• Dacorogna, Bernard (2004). Introduction to the Calculus of Variations. London: Imperial College Press. ISBN 1-86094-508-2.
• Federer, Herbert (1969). Geometric Measure Theory. New York: Springer-Verlag.
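The ball computation above is easy to sanity-check numerically: since $\mu ({\overline {B_{r}}})=\omega _{n}r^{n}$, the difference quotient should approach $nR^{n-1}\omega _{n}$ as $\delta \to 0$. A small sketch (variable names are mine):

```python
import math

def omega(n: int) -> float:
    # Volume of the unit ball in R^n: 2*pi^(n/2) / (n*Gamma(n/2)).
    return 2 * math.pi ** (n / 2) / (n * math.gamma(n / 2))

R = 2.0
for n in (2, 3, 4):
    exact = n * R ** (n - 1) * omega(n)   # n R^(n-1) omega_n
    for delta in (1e-2, 1e-4, 1e-6):
        quotient = ((R + delta) ** n - R ** n) * omega(n) / delta
        print(n, delta, quotient, exact)
# e.g. for n = 3 the quotient tends to 4*pi*R^2 = 50.265...
```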
Steiner chain

In geometry, a Steiner chain is a set of n circles, all of which are tangent to two given non-intersecting circles (blue and red in Figure 1), where n is finite and each circle in the chain is tangent to the previous and next circles in the chain. In the usual closed Steiner chains, the first and last (n-th) circles are also tangent to each other; by contrast, in open Steiner chains, they need not be. The given circles α and β do not intersect, but otherwise are unconstrained; the smaller circle may lie completely inside or outside of the larger circle. In these cases, the centers of the Steiner-chain circles lie on an ellipse or a hyperbola, respectively.

Steiner chains are named after Jakob Steiner, who defined them in the 19th century and discovered many of their properties. A fundamental result is Steiner's porism, which states: If at least one closed Steiner chain of n circles exists for two given circles α and β, then there is an infinite number of closed Steiner chains of n circles; and any circle tangent to α and β in the same way[note 1] is a member of such a chain.

The method of circle inversion is helpful in treating Steiner chains. Since it preserves tangencies, angles and circles, inversion transforms one Steiner chain into another of the same number of circles. One particular choice of inversion transforms the given circles α and β into concentric circles; in this case, all the circles of the Steiner chain have the same size and can "roll" around in the annulus between the circles, similar to ball bearings. This standard configuration allows several properties of Steiner chains to be derived, e.g., its points of tangency always lie on a circle. Several generalizations of Steiner chains exist, most notably Soddy's hexlet and Pappus chains.[1]

Definitions and types of tangency

• Figure captions, Steiner chains with different internal/external tangencies:
• The 7 circles of this Steiner chain (black) are externally tangent to the inner given circle (red) but internally tangent to the outer given circle (blue).
• The 7 circles of this Steiner chain (black) are externally tangent to both given circles (red and blue), which lie outside one another.
• Seven of the 8 circles of this Steiner chain (black) are externally tangent to both given circles (red and blue); the 8th circle is internally tangent to both.

The two given circles α and β cannot intersect; hence, the smaller given circle must lie inside or outside the larger. The circles are usually shown as an annulus, i.e., with the smaller given circle inside the larger one. In this configuration, the Steiner-chain circles are externally tangent to the inner given circle and internally tangent to the outer circle. However, the smaller circle may also lie completely outside the larger one (Figure 2). The black circles of Figure 2 satisfy the conditions for a closed Steiner chain: they are all tangent to the two given circles and each is tangent to its neighbors in the chain. In this configuration, the Steiner-chain circles have the same type of tangency to both given circles, either externally or internally tangent to both. If the two given circles are tangent at a point, the Steiner chain becomes an infinite Pappus chain, which is often discussed in the context of the arbelos (shoemaker's knife), a geometric figure made from three circles. There is no general name for a sequence of circles tangent to two given circles that intersect at two points.
Closed, open and multi-cyclic

• Figure captions, closed, open and multi-cyclic Steiner chains:
• Closed Steiner chain of nine circles. The 1st and 9th circles are tangent.
• Open Steiner chain of nine circles. The 1st and 9th circles overlap.
• Multicyclic Steiner chain of 17 circles in 2 wraps. The 1st and 17th circles touch.

The two given circles α and β touch the n circles of the Steiner chain, but each circle Ck of a Steiner chain touches only four circles: α, β, and its two neighbors, Ck−1 and Ck+1. By default, Steiner chains are assumed to be closed, i.e., the first and last circles are tangent to one another. By contrast, an open Steiner chain is one in which the first and last circles, C1 and Cn, are not tangent to one another; these circles are tangent only to three circles. Multicyclic Steiner chains wrap around the inner circle more than once before closing, i.e., before being tangent to the initial circle. Closed Steiner chains are the systems of circles obtained as the circle packing theorem representation of a bipyramid.

Annular case and feasibility criterion

• Figure captions: annular Steiner chains for n = 3, 6, 9, 12, and 20.

The simplest type of Steiner chain is a closed chain of n circles of equal size surrounding an inscribed circle of radius r; the chain of circles is itself surrounded by a circumscribed circle of radius R. The inscribed and circumscribed given circles are concentric, and the Steiner-chain circles lie in the annulus between them. By symmetry, the angle 2θ between the centers of the Steiner-chain circles is 360°/n. Because the Steiner-chain circles are tangent to one another, the distance between their centers equals the sum of their radii, here twice their radius ρ. The bisector (green in the figure) creates two right triangles, with a central angle of θ = 180°/n. The sine of this angle can be written as the length of its opposite segment, divided by the hypotenuse of the right triangle:
$\sin \theta ={\frac {\rho }{r+\rho }}$
Since θ is known from n, this provides an equation for the unknown radius ρ of the Steiner-chain circles:
$\rho ={\frac {r\sin \theta }{1-\sin \theta }}$
The tangent points of a Steiner-chain circle with the inner and outer given circles lie on a line that passes through their common center; hence, the outer radius R = r + 2ρ.

These equations provide a criterion for the feasibility of a Steiner chain for two given concentric circles. A closed Steiner chain of n circles requires that the ratio of radii R/r of the given circles equal exactly
${\frac {R}{r}}=1+{\frac {2\sin \theta }{1-\sin \theta }}={\frac {1+\sin \theta }{1-\sin \theta }}=\left[\sec \theta +\tan \theta \right]^{2}$
As shown below, this ratio-of-radii criterion for concentric given circles can be extended to all types of given circles by the inversive distance δ of the two given circles. For concentric circles, this distance is defined as the logarithm of their ratio of radii:
$\delta =\ln {\frac {R}{r}}$
Using the solution for concentric circles, the general criterion for a Steiner chain of n circles can be written
$\delta =2\ln \left(\sec \theta +\tan \theta \right).$
If a multicyclic annular Steiner chain has n total circles and wraps around m times before closing, the angle between Steiner-chain circles equals
$\theta ={\frac {m}{n}}180^{\circ }$
In other respects, the feasibility criterion is unchanged.
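These relations are easy to evaluate. The classic case n = 6, m = 1 gives θ = 30°, ρ = r and R/r = 3: six unit circles fit exactly around a unit circle inside a circle of radius 3. A short sketch (function names are mine):

```python
import math

def chain_radius(r: float, n: int, m: int = 1) -> float:
    # rho = r sin(theta) / (1 - sin(theta)), with theta = 180*m/n degrees.
    s = math.sin(math.pi * m / n)
    return r * s / (1 - s)

def radius_ratio(n: int, m: int = 1) -> float:
    # Required R/r = (1 + sin(theta)) / (1 - sin(theta)) = (sec + tan)^2.
    s = math.sin(math.pi * m / n)
    return (1 + s) / (1 - s)

def inversive_distance(n: int, m: int = 1) -> float:
    # delta = 2 ln(sec(theta) + tan(theta)).
    t = math.pi * m / n
    return 2 * math.log(1 / math.cos(t) + math.tan(t))

print(chain_radius(1.0, 6))   # 1.0: six unit circles around a unit circle
print(radius_ratio(6))        # 3.0
print(math.isclose(inversive_distance(6), math.log(radius_ratio(6))))  # True
```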
Properties under inversion

• Figure captions, inversive properties of Steiner chains:
• Two circles (pink and cyan) that are internally tangent to both given circles and whose centers are collinear with the center of the given circles intersect at the angle 2θ.
• Under inversion, these lines and circles become circles with the same intersection angle, 2θ. The gold circles intersect the two given circles at right angles, i.e., orthogonally.
• The circles passing through the mutual tangent points of the Steiner-chain circles are orthogonal to the two given circles and intersect one another at multiples of the angle 2θ.
• The circles passing through the tangent points of the Steiner-chain circles with the two given circles are orthogonal to the latter and intersect at multiples of the angle 2θ.

Circle inversion transforms one Steiner chain into another with the same number of circles. In the transformed chain, the tangent points between adjacent circles of the Steiner chain all lie on a circle, namely the concentric circle midway between the two fixed concentric circles. Since tangencies and circles are preserved under inversion, this property of all tangencies lying on a circle is also true in the original chain. This property is also shared with the Pappus chain of circles, which can be construed as a special limiting case of the Steiner chain.

In the transformed chain, the tangent lines from O to the Steiner-chain circles are separated by equal angles. In the original chain, this corresponds to equal angles between the tangent circles that pass through the center of inversion used to transform the original circles into a concentric pair. In the transformed chain, the n lines connecting the pairs of tangent points of the Steiner circles with the concentric circles all pass through O, the common center. Similarly, the n lines tangent to each pair of adjacent circles in the Steiner chain also pass through O. Since lines through the center of inversion are invariant under inversion, and since tangency and concurrence are preserved under inversion, the 2n lines connecting the corresponding points in the original chain also pass through a single point, O.

Infinite family

A Steiner chain between two non-intersecting circles can always be transformed into another Steiner chain of equally sized circles sandwiched between two concentric circles. Therefore, any such Steiner chain belongs to an infinite family of Steiner chains related by rotation of the transformed chain about O, the common center of the transformed bounding circles.

Elliptical/hyperbolic locus of centers

The centers of the circles of a Steiner chain lie on a conic section. For example, if the smaller given circle lies within the larger, the centers lie on an ellipse. This is true for any set of circles that are internally tangent to one given circle and externally tangent to the other; such systems of circles appear in the Pappus chain, the problem of Apollonius, and the three-dimensional Soddy's hexlet. Similarly, if some circles of the Steiner chain are externally tangent to both given circles, their centers must lie on a hyperbola, whereas those that are internally tangent to both lie on a different hyperbola.

The circles of the Steiner chain are tangent to two fixed circles, denoted here as α and β, where β is enclosed by α. Let the radii of these two circles be denoted as rα and rβ, respectively, and let their respective centers be the points A and B.
Let the radius, diameter and center point of the kth circle of the Steiner chain be denoted as rk, dk and Pk, respectively.

All the centers of the circles in the Steiner chain are located on a common ellipse, for the following reason.[2] The sum of the distances from the center point of the kth circle of the Steiner chain to the two centers A and B of the fixed circles equals a constant:
${\overline {\mathbf {P} _{k}\mathbf {A} }}+{\overline {\mathbf {P} _{k}\mathbf {B} }}=(r_{\alpha }-r_{k})+\left(r_{\beta }+r_{k}\right)=r_{\alpha }+r_{\beta }$
Thus, for all the centers of the circles of the Steiner chain, the sum of distances to A and B equals the same constant, rα + rβ. This defines an ellipse, whose two foci are the points A and B, the centers of the circles α and β that sandwich the Steiner chain of circles.

The sum of distances to the foci equals twice the semi-major axis a of an ellipse; hence,
$2a=r_{\alpha }+r_{\beta }$
Let p equal the distance between the foci, A and B. Then, the eccentricity e is defined by 2ae = p, or
$e={\frac {p}{2a}}={\frac {p}{r_{\alpha }+r_{\beta }}}$
From these parameters, the semi-minor axis b and the semi-latus rectum L can be determined:
$b^{2}=a^{2}\left(1-e^{2}\right)=a^{2}-{\frac {p^{2}}{4}}$
$L={\frac {b^{2}}{a}}=a-{\frac {p^{2}}{4a}}$
Therefore, the ellipse can be described by an equation in terms of its distance d to one focus
$d={\frac {L}{1-e\cos \theta }}$
where θ is the angle with the line joining the two foci.

Conjugate chains

• Figure captions, conjugate Steiner chains with n = 4:
• Steiner chain with the two given circles shown in red and blue.
• Same set of circles, but with a different choice of given circles.
• Same set of circles, but with yet another choice of given circles.

If a Steiner chain has an even number of circles, then any two diametrically opposite circles in the chain can be taken as the two given circles of a new Steiner chain to which the original circles belong. If the original Steiner chain has n circles in m wraps, and the new chain has p circles in q wraps, then the equation holds
${\frac {m}{n}}+{\frac {p}{q}}={\frac {1}{2}}.$
A simple example occurs for Steiner chains of four circles (n = 4) and one wrap (m = 1). In this case, the given circles and the Steiner-chain circles are equivalent in that both types of circles are tangent to four others; more generally, Steiner-chain circles are tangent to four circles, but the two given circles are tangent to n circles. In this case, any pair of opposite members of the Steiner chain may be selected as the given circles of another Steiner chain that involves the original given circles. Since m = p = 1 and n = q = 4, Steiner's equation is satisfied:
${\frac {1}{4}}+{\frac {1}{4}}={\frac {1}{2}}.$

Generalizations

The simplest generalization of a Steiner chain is to allow the given circles to touch or intersect one another. In the former case, this corresponds to a Pappus chain, which has an infinite number of circles. Soddy's hexlet is a three-dimensional generalization of a Steiner chain of six circles. The centers of the six spheres (the hexlet) travel along the same ellipse as do the centers of the corresponding Steiner chain. The envelope of the hexlet spheres is a Dupin cyclide, the inversion of a torus. The six spheres are not only tangent to the inner and outer sphere, but also to two other spheres, centered above and below the plane of the hexlet centers.

Multiple rings of Steiner chains are another generalization.
An ordinary Steiner chain is obtained by inverting an annular chain of tangent circles bounded by two concentric circles. This may be generalized to inverting three or more concentric circles that sandwich annular chains of tangent circles.

Hierarchical Steiner chains are yet another generalization. If the two given circles of an ordinary Steiner chain are nested, i.e., if one lies entirely within the other, then the larger given circle circumscribes the Steiner-chain circles. In a hierarchical Steiner chain, each circle of a Steiner chain is itself the circumscribing given circle of another Steiner chain within it; this process may be repeated indefinitely, forming a fractal.

See also
• Poncelet porism
• Ford circles
• Apollonian gasket

Notes
1. Meaning that the arbitrary circle is internally or externally tangent in the same way as a circle of the original Steiner chain.

References
1. Ogilvy, p. 60.
2. Ogilvy, p. 57.

Bibliography
• Ogilvy, C. S. (1990). Excursions in Geometry. Dover. pp. 51–54. ISBN 0-486-26530-7.
• Coxeter, H. S. M.; Greitzer, S. L. (1967). Geometry Revisited. New Mathematical Library. Vol. 19. Washington: MAA. pp. 123–126, 175–176, 180. ISBN 978-0-88385-619-2. Zbl 0166.16402.
• Johnson, R. A. (1960). Advanced Euclidean Geometry: An elementary treatise on the geometry of the triangle and the circle (reprint of 1929 edition by Houghton Mifflin). New York: Dover Publications. pp. 113–115. ISBN 978-0-486-46237-0.
• Wells, D. (1991). The Penguin Dictionary of Curious and Interesting Geometry. New York: Penguin Books. pp. 244–245. ISBN 0-14-011813-6.

Further reading
• Eves, H. (1972). A Survey of Geometry (revised ed.). Boston: Allyn and Bacon. pp. 134–135. ISBN 978-0-205-03226-6.
• Pedoe, D. (1970). A Course of Geometry for Colleges and Universities. Cambridge University Press. pp. 97–101. ISBN 978-0-521-07638-8.
• Coolidge, J. L. (1916). A Treatise on the Circle and the Sphere. Oxford: Clarendon Press. pp. 31–37.

External links
Wikimedia Commons has media related to Steiner chains.
• Weisstein, Eric W. "Steiner Chain". MathWorld.
• Interactive animation of a Steiner chain, CodePen
• Interactive applet by Michael Borcherds showing an animation of Steiner's chain with a variable number of circles, made with GeoGebra.
Poncelet–Steiner theorem

In the branch of mathematics known as Euclidean geometry, the Poncelet–Steiner theorem is one of several results concerning compass and straightedge constructions with additional restrictions imposed on the traditional rules. This result states that whatever can be constructed by straightedge and compass together can be constructed by straightedge alone, provided that a single circle and its centre are given. The theorem is related to the rusty compass equivalence: any Euclidean construction, insofar as the given and required elements are points (or lines), if it can be completed with both the compass and the straightedge together, may be completed with the straightedge alone provided that at least one circle with its center exists in the plane.

Though a compass can make constructions significantly easier, the compass serves no functional purpose once the first circle has been drawn: all constructions remain possible, with the understanding that circles and their arcs cannot be drawn without the compass, so the compass may thereafter be used for aesthetic purposes rather than for the purposes of construction. All points that uniquely define a construction, which can be determined with the use of the compass, are equally determinable without it.

Constructions carried out in adherence with this theorem - relying solely on the use of a straightedge tool without the aid of a compass - are known as Steiner constructions. Steiner constructions may involve any number of circles, including none, already drawn in the plane, with or without their centers.

History

In the tenth century, the Persian mathematician Abu al-Wafa' Buzjani (940−998) considered geometric constructions using a straightedge and a compass with a fixed opening, a so-called rusty compass. Constructions of this type appeared to have some practical significance, as they were used by the artists Leonardo da Vinci and Albrecht Dürer in Europe in the late fifteenth century. A new viewpoint developed in the mid sixteenth century when the size of the opening was considered fixed but arbitrary, and the question of how many of Euclid's constructions could be obtained became paramount.[1]

The Renaissance mathematician Lodovico Ferrari, a student of Gerolamo Cardano, in a "mathematical challenge" against Niccolò Fontana Tartaglia, was able to show that "all of Euclid" (that is, the straightedge and compass constructions in the first six books of Euclid's Elements) could be accomplished with a straightedge and rusty compass. Within ten years additional sets of solutions were obtained by Cardano, Tartaglia and Tartaglia's student Benedetti. During the next century these solutions were generally forgotten until, in 1673, Georg Mohr published (anonymously and in Dutch) Euclidis Curiosi, containing his own solutions. Mohr had only heard about the existence of the earlier results, and this led him to work on the problem.[2]

Showing that "all of Euclid" could be performed with straightedge and rusty compass is not the same as proving that all straightedge and compass constructions could be done with a straightedge and just a rusty compass. Such a proof would require the formalization of what a straightedge and compass could construct. This groundwork was provided by Jean Victor Poncelet in 1822, having been motivated by Mohr's work on the Mohr–Mascheroni theorem.
He also conjectured, and suggested a possible proof, that a straightedge and rusty compass would be equivalent to a straightedge and compass, and moreover, that the rusty compass need only be used once. The result that a straightedge and a single circle with given centre is equivalent to a straightedge and compass was proved by Jakob Steiner in 1833.[3][1]

Relationships to other constructs

Various other notions, tools, terminology, etc., are often associated (sometimes loosely) with the Poncelet–Steiner theorem. Some are listed here.

Rusty compass

The rusty compass describes a compass whose hinge is so rusted as to be fused, such that its legs - the needle and pencil - are unable to adjust width. In essence, it is a compass whose opening is fixed, and which draws circles of a predetermined and constant, but arbitrary, radius. Circles may be drawn centered at any arbitrary point, but the radius is unchangeable.

As a restricted construction paradigm, rusty compass constructions allow the use of a straightedge and the fixed-width compass. In some sense, the rusty compass is a generalization and simplification of the Poncelet–Steiner theorem: though not more powerful, it is certainly more convenient. The Poncelet–Steiner theorem requires a single circle, with arbitrary radius and center point, to be placed in the plane. As it is the only drawn circle, whether or not it was drawn by a rusty compass is immaterial and equivalent. The benefit of general rusty compass constructions, however, is that the compass may be used repeatedly to redraw circles centered at any desired point, albeit with the same radius, thus simplifying many constructions.

Naturally, if all constructions are possible with a single circle arbitrarily placed in the plane, then the same can surely be said of a straightedge and rusty compass. It is known that a straightedge and a rusty compass suffice to construct all that is possible with straightedge and standard compass - with the understanding that circular arcs of arbitrary radii cannot themselves be drawn, and need be drawn only for aesthetic purposes rather than constructive ones. Historically this was proven when the Poncelet–Steiner theorem was proven, which is a stronger result. The rusty compass, therefore, is no weaker than the Poncelet–Steiner theorem. The rusty compass is also no stronger: the Poncelet–Steiner theorem reduces Ferrari's rusty compass equivalence, a claim at the time, to a single-use compass - all points necessary to uniquely describe any compass-and-straightedge construction may be achieved with only a straightedge, once the first circle has been drawn. The Poncelet–Steiner theorem takes the rusty compass scenario and breaks the compass completely after its first use.

Steiner constructions

The term Steiner construction typically refers to any geometric construction that utilizes the straightedge tool only, and is sometimes simply called a straightedge-only construction. No stipulations are made about what geometric objects already exist in the plane, and no implications are made about what is or is not possible to construct. Thus, all constructions adhering to the Poncelet–Steiner theorem are Steiner constructions, though not all Steiner constructions adhere to the standard of providing only one circle with its center. The Poncelet–Steiner theorem does not require an actual compass - it is presumed that the circle preexists in the plane - therefore all of its constructions are Steiner constructions.
Steiner's theorem, a lemma

If only one circle is to be given and no other special information, Steiner's theorem implies that the center of the circle must be provided along with the circle. This is done by proving that it is impossible to construct the circle's center by straightedge alone, using only a single circle in the plane without its center. An argument using projective transformations and Steiner's conic sections is used.

A naïve summary of the proof is as follows. With the straightedge tool alone, only linear projective transformations are available, and linear projective transformations are reversible operations. Lines project onto lines under any linear projective transformation, and conic sections project onto conic sections, but the latter are skewed in such a way that eccentricities, foci, and centers of circles are not preserved. Under different mappings the center does not map uniquely and reversibly. This would not be the case if the center could be determined using lines alone: since linear transformations are reversible and would thus produce unique results, the fact that unique results are not possible implies the impossibility of a center-point construction; the uniqueness of a constructed center would depend on additional information making the construction reversible. Thus it is not possible to construct by straightedge alone everything that can be constructed with straightedge and compass.

Consequently, the requirements of the Poncelet–Steiner theorem cannot be weakened with respect to the circle center. If the centre of the only given circle is not provided, it cannot be obtained by straightedge alone. Many constructions are impossible with straightedge alone. Something more is necessary, and a circle with its center identified is sufficient.

Alternatively, the center may be omitted if sufficient additional information is given. This is not a weakening of the Poncelet–Steiner theorem, merely an alternative framework; nor is it a contradiction of Steiner's theorem, which hypothesizes only a single circle. The inclusion of this sufficient alternative information disambiguates the mappings under the projective transformations, thus allowing various Steiner constructions to recover the circle center. Some alternatives include two concentric or two intersecting circles, or three circles, or other variations wherein the provided circle(s) are devoid of their centers but some other unique and sufficient criterion is met. In any of these cases, the center of a circle can be constructed, thereby reducing the problem to the Poncelet–Steiner theorem hypothesis (with the added convenience of having additional circles in the plane).

Constructive proof approach

To prove the theorem, each of the basic constructions of compass and straightedge needs to be proven possible using a straightedge alone (provided that a circle and its center exist in the plane), as these are the foundations of, or elementary steps for, all other constructions. That is to say, all constructions can be written as a series of steps involving these five basic constructions:

1. Creating the line through two existing points
2. Creating the circle through one point with centre another point
3. Creating the point which is the intersection of two existing, non-parallel lines
4. Creating the one or two points in the intersection of a line and a circle (if they intersect)
5. Creating the one or two points in the intersection of two circles (if they intersect).

(A code sketch of the two straightedge primitives, #1 and #3, follows; it is also used to check the first preliminary construction described below.)
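The following sketch is not part of the classical proof; it merely models constructions #1 and #3 in coordinates and uses them to verify the straightedge-only parallel construction given below under "Parallel of a line having a collinear bisected segment". All function names and coordinates are mine:

```python
from fractions import Fraction

# Construction #1: the line through two points, stored as (a, b, c) with ax + by = c.
def line(P, Q):
    (x1, y1), (x2, y2) = P, Q
    a, b = y2 - y1, x1 - x2
    return (a, b, a * x1 + b * y1)

# Construction #3: the intersection point of two non-parallel lines.
def meet(l1, l2):
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("the lines are parallel")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Straightedge-only parallel through P, given collinear points A, M, B
# with M the midpoint of AB (steps 1-9 of the construction below).
def parallel_through(P, A, M, B):
    R = (2 * P[0] - A[0], 2 * P[1] - A[1])   # step 3: an arbitrary point on line AP
    X = meet(line(M, R), line(B, P))          # steps 5-6
    Q = meet(line(B, R), line(A, X))          # steps 7-8
    return line(P, Q)                         # step 9

F = Fraction
A, M, B = (F(0), F(0)), (F(1), F(0)), (F(2), F(0))
P = (F(1, 2), F(1))
a, b, c = parallel_through(P, A, M, B)
aAB, bAB, _ = line(A, B)
assert a * bAB - aAB * b == 0   # exact rational arithmetic: PQ is parallel to AB
print("parallel through P:", (a, b, c))
```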
#1 - A line through two points

This can be done with a straightedge alone. Neither a compass nor a circle is required.

#2 - A circle through one point with defined center

It is understood that the arc of a circle cannot be drawn without a compass. A circle is considered to be given by any two points, one defining the center and one existing on the circumference at radius. Any such pair defines a unique circle. In keeping with the intent of the theorem which we aim to prove, the actual circle need not be drawn except for aesthetic reasons.

#3 - Intersection of two lines

This construction can also be done directly with a straightedge.

#4, #5 - The other constructions

Thus, to prove the theorem, only constructions #4 and #5 need be proven possible using only a straightedge and a given circle with its center.

Notes

Any doubts about constructions #1 or #3 would apply equally to the traditional construction paradigm involving the compass, and thus are not concerns unique to the Poncelet–Steiner theorem. Construction #2 should not be of concern either: the arc of a circle is used in traditional construction paradigms only for the purposes of circle-circle and circle-line intersections, so if constructions #4 and #5 are satisfiable without the arc of the circle, this proves that drawing the arc is unnecessary. Construction #2 is therefore satisfied by simply labeling the two points that uniquely identify the circle.

Constructive proof

In general constructions there are often several variations that will produce the same result. The choices made in such a variant can be made without loss of generality. However, when a construction is being used to prove that something can be done, it is not necessary to describe all these various choices and, for the sake of clarity of exposition, only one variant will be given below. The variants below are chosen for their ubiquity in application rather than simplicity under any particular set of special conditions.

In the constructions below, a circle defined by a center point P and a point on its circumference, Q, through which the arc of the circle passes, is denoted P(Q). As most circles are not compass-drawn, center and circumference points are named explicitly, and usually separately. Per the theorem, when a compass-drawn circle is provided it is simply referred to as the given circle or the provided circle. The provided circle should always be assumed to be placed arbitrarily in the plane with an arbitrary radius (i.e. in general position).

The intersection points between any line and the given circle may be found directly. The Poncelet–Steiner theorem does not prohibit the normal treatment of circles already drawn in the plane; normal construction rules apply. The theorem only prohibits the construction of new circular arcs with a compass.

Steiner constructions, and the constructions herein proving the Poncelet–Steiner theorem, require the arbitrary placement of points in space. In some construction paradigms - such as in the geometric definition of the constructible number - this may be prohibited.

Some preliminary constructions

To prove the above constructions #4 and #5, which are included below, a few necessary intermediary constructions are explained first, since they are used and referenced frequently. These are also straightedge-only constructions. All constructions below rely on basic constructions #1, #2, #3, and any other construction listed prior to it.
Parallel of a line having a collinear bisected segment

This construction does not require the use of the given circle. Naturally, any line that passes through the center of the given circle implicitly has a bisected segment: the diameter is bisected by the center. The animated GIF file embedded at the introduction to this article demonstrates this construction, reiterated here without the circle and with enumerated steps.

Given an arbitrary line n (in black), on which there exist two points A and B with midpoint M between them, and an arbitrary point P in the plane (assumed not to be on line n) through which a parallel of line n is to be made:

1. Construct a line AP (in red).
2. Construct a line BP (in orange).
3. Define an arbitrary point R on line AP.
4. Construct a line BR (in green).
5. Construct a line MR (in light blue).
6. Lines MR and BP intersect at point X.
7. Construct a line AX (in purple).
8. Lines BR and AX intersect at point Q.
9. Construct a line PQ (in dark blue), the desired parallel.

In some literature the bisected line segment is viewed as a one-dimensional "circle" existing on the line. Alternatively, some literature views the bisected line segment as a two-dimensional circle in three-dimensional space, with the line passing through a diameter but not parallel to the plane, thus intersecting the plane of construction at two points on the circumference, with the midpoint simply being the prescribed circle center.

Creating a bisected segment on a line

If the line passes through the center of a circle, the segment defined by the diameter through the circle is bisected by the center of the circle. In the general case, however, any other line in the plane may have a bisected segment constructed onto it. This construction does require the use of the given circle.

Given a line m (in black) and the given circle centered at A, we wish to create points E, B, and H on the line such that B is the midpoint of segment EH:

1. Draw an arbitrary line (in red) passing through the given circle's center, A, and the desired midpoint B (chosen arbitrarily) on the line m.
• Notice that the red line, AB, passes through the center of the circle and highlights a diameter, bisected by the circle center. Any parallel may be made from this line according to the previous construction.
2. Choose an arbitrary point C on the given circle (which does not lie on the perpendicular of AB through the circle center).
3. Construct a line (in orange), passing through C, that is parallel to the red line AB.
• This parallel intersects the given circle at D.
• This parallel also intersects the black line m at E, defining one end of the line segment.
4. Create two lines (in green), AC and AD, that each pass through the given circle's center.
• These green lines intersect the given circle at points G and F, respectively.
5. Line FG (in blue) intersects the line m at H, defining the other endpoint of the line segment.

Constructing a parallel of any line

This construction does require the use of the given circle. In order to generalize the parallel line construction to all possible lines, not just the ones with a collinear bisected line segment, it becomes necessary to have additional information. In keeping with the Poncelet–Steiner theorem, a circle (with center) is the object of choice for this construction. To construct a parallel line of any given line, through any point in the plane, we trivially combine two constructions:
1. Any line from which a parallel is to be made must have a bisected segment constructed onto it, if one does not already exist.
2. A parallel is then constructed according to the previous parallel construction involving the collinear bisected segment.

In general, however, a parallel may be constructed from any pair of lines which are already parallel to one another; thus a third parallel may be produced from any two, without the use of a circle. Additionally, a parallel of any line may be constructed whenever there exists in the plane any parallelogram, also without the use of a given circle.

Constructing a perpendicular line

This construction does require the use of the given circle and takes advantage of Thales's theorem. From a given line m and a given point A in the plane, a perpendicular to the line is to be constructed through the point. Provided is the given circle O(r).

1. If the desired line from which a perpendicular is to be made, m, does not pass through the given circle (or if it passes through the given circle's center), then a new parallel line (in red) may be constructed arbitrarily such that it does pass through the given circle but not its center, and the perpendicular is to be made from this line instead.
2. This red line, which passes through the given circle but not its center, intersects the given circle in two points, B and C.
3. Draw a line BO, through the circle center.
• This line intersects the given circle at point D.
4. Draw a line DC.
• This line is perpendicular to the red (and black) lines, BC and m.
5. Construct a parallel of line DC through point A using previous constructions.
• A perpendicular of the original black line, m, now exists in the plane, and a parallel of it may be constructed through any point in the plane.

An alternative construction allows a perpendicular to be constructed without the given circle, provided there exists in the plane any square.

Constructing the midpoint of any segment

Given is a line segment AB, which is to be bisected. Optionally, a parallel line m exists in the plane.

1. If the line m, which is parallel to line segment AB, does not exist in the plane, then it must be constructed according to earlier constructions using the given circle in the plane (not depicted).
• A given circle in the plane is not required for this construction if the parallel already exists.
• The parallel may be placed in the plane arbitrarily, so long as it is not collinear with the line segment.
2. Arbitrarily choose a point C in the plane which is not collinear with the line or the line segment.
3. Draw a line AC (in red), intersecting line m at point D.
4. Draw a line BC (in orange), intersecting line m at point E.
5. Draw two lines, AE and BD (each in light green), intersecting each other at point X.
6. Draw a line CX (in blue), intersecting segment AB at point M.
• Point M is the desired midpoint of segment AB.
• Line CX also bisects segment DE.

For added perspective, this construction is in some sense a variant of the earlier construction of a parallel from a bisected line segment: it is the same set of lines when taken as a whole, but constructed in a different order, from a different initial set of conditions, and arriving at a different end goal.

Constructing the radical axis between circles

This construction does require the use of the given circle (which is not depicted) for the referenced sub-constructions.
Suppose two circles A(B) and C(D) are implicitly given, defined only by the points A, B, C, and D in the plane, with their centers defined, but not compass-constructed. The radical axis, line m, between the two circles may be constructed:

1. Draw a line AC (in orange) through the circle centers.
2. Draw a line segment BD (in red) between the points on the circumferences of the circles.
3. Find the midpoint, M, of segment BD.
4. Draw lines AM and CM (both in light green), connecting the segment midpoint with each of the circle centers.
5. Construct a line j (in purple) passing through point B, and perpendicular to AM.
6. Construct a line k (in dark green) passing through point D, and perpendicular to CM.
7. Lines j and k intersect at point X.
• If the lines j and k are parallel then the segment midpoint M is on the line AC, and the construction will fail. An alternative approach is required (see below).
8. Construct a line m (in dark blue) perpendicular to line AC and passing through point X.
9. Line m is the desired radical axis.

Resolution of the failed construction

In the event that the construction of the radical axis fails, due to there being no intersection point X between the parallel lines j and k - which results from the coincidental placement of the midpoint M on the line AC - an alternative approach is required. One such alternative is given below, with the arbitrarily chosen circle A(B) used for demonstration, along with the provided circle O(r). The circle C(D) of the radical axis construction is not depicted.

To define a circle, only the center and one point - any point - on the circumference are required. In principle, a new point B' is constructed such that circle A(B' ) equals circle A(B), but point B' is not equal to point B. In essence, segment AB is rotated to AB' , giving a different set of defining points for the same circle. The construction of the radical axis is then begun anew with circle A(B' ) standing in for circle A(B). In this way the coincidental placement of the midpoint M (now of segment B'D ) on the line AC is avoided.

One way of going about this, which satisfies most conditions, is to construct a point B' diametrically opposite B, collinear with line AB:

1. Draw the line AB (in red).
2. Construct a parallel (in orange) of line AB through the center, point O, of the given circle.
• The parallel intersects the given circle at points E and F.
3. Draw a line AO (in green), connecting the center of circle A(B) with the center of the given circle.
4. Draw a line BE (in pink), connecting the points on the circle circumferences.
• In the general case, points E and F may be switched without loss of generality.
5. Lines AO and BE intersect in a point Z.
• If point Z does not exist due to lines AO and BE being parallel - caused by circles A(B) and O(r) having equal radii - then refer to step 4 and switch the roles of points E and F.
6. Draw a line FZ (in blue).
7. Lines AB and FZ intersect at a point B' .
• Point B' is the desired point.

In the general case it is now possible to construct the radical axis between the circles A(B' ) = A(B) and C(D). This specific construction of a diametrically opposite point can itself fail under the right conditions - when points A, B, and O are collinear. If the final goal is to construct a diametrically opposite point, an alternative approach is required. If the goal is to resolve the limitation in the radical axis construction, one option is to attempt a similar construction on circle C(D) instead.
This too may fail, if all five points are collinear. Alternatively, an entirely different point B' may be determined, not necessarily a diametrically opposite one, requiring a small variation on the above construction.

Intersecting a line with a circle (Construction #4)

This construction does require the use of the provided circle, O(r). Given is the line m (in black) and the circle P(Q), which is not compass-constructed. The intersection points of the circle P(Q) and the line m, which are points A and B, may be constructed:

1. Draw a line PQ (in red) through the points defining the circle.
2. Construct a parallel (in orange) of line PQ through the center O of the provided circle.
• The parallel intersects the provided circle at two points, one of which is arbitrarily chosen: R.
3. Draw a line PO (in light green), through the centers of the two circles (i.e. the one provided by compass construction and the one which is to be intersected).
4. Draw a line QR (in light blue), connecting the two points on the circumferences of the two circles.
5. Intersect the lines PO and QR at point X.
• If point X does not exist due to lines PO and QR being parallel - which results from circles P(Q) and O(r) having equal radii - then refer back to step 2 and choose the alternate point of intersection, R.
6. Choosing a point M arbitrarily on line m, such that it is not on line PO, draw a line PM (in pink).
• For construction simplicity, and only if line PQ is not parallel to line m, lines PM and PQ may be coincident.
7. Draw a line MX (in brown).
8. Construct a parallel (in dark purple) of line PM through the center O of the provided circle.
• The parallel intersects the line MX at a point N.
9. Construct a parallel (in yellow) of line m through the point N.
• The parallel intersects the provided circle at points C and D.
• If the parallel does not intersect the provided circle then neither does the line m intersect circle P(Q).
10. Draw lines CX and DX (both in dark blue).
• These lines intersect line m at points A and B, respectively.
11. Points A and B are the desired points of intersection between the line m and the circle P(Q).

Intersecting two circles (Construction #5)

The intersection of two circles becomes a trivial combination of two earlier constructions:

1. Construct the radical axis between the two circles.
2. Construct the intersection points between the radical axis (which is a line) and either one of the two circles, arbitrarily chosen, using basic construction #4.
3. These points are the desired points of intersection of the circles.
• The two circles and the radical axis all intersect at the same locus of points: two points, one point if tangential, or none if they do not intersect.
• If the radical axis does not intersect one circle then it intersects neither, and neither do the two circles intersect.

Conclusion

The second basic construction - defining a circle with two points - never needed an arc to be constructed with the compass in order for the circle to be utilized in constructions, namely the intersections with circles and with lines which, together, are the essence of all constructions involving a circle. Thus, defining a circle by its center and by any arbitrary point on its circumference is sufficient to fully describe the entire circle and construct with it. Basic construction #2 is satisfied.
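For readers who want to sanity-check constructions #4 and #5 numerically, the intersections can also be computed analytically in coordinates; construction #5 is modeled exactly as above, by first forming the radical axis and then applying construction #4 to it. This is a verification sketch, not a Steiner construction, and all names in it are mine:

```python
import math

def line_circle_intersections(P, Q, center, r):
    # Construction #4 checked analytically: solve |P + t(Q-P) - center|^2 = r^2.
    dx, dy = Q[0] - P[0], Q[1] - P[1]
    fx, fy = P[0] - center[0], P[1] - center[1]
    a = dx * dx + dy * dy
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    ts = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
    return [(P[0] + t * dx, P[1] + t * dy) for t in ts]

def radical_axis_points(c1, r1, c2, r2):
    # Two points spanning the radical axis: the locus of equal power with
    # respect to both circles, perpendicular to the line of centers.
    (x1, y1), (x2, y2) = c1, c2
    d2 = (x2 - x1) ** 2 + (y2 - y1) ** 2
    k = (r1 * r1 - r2 * r2 + d2) / (2 * d2)  # foot of the axis along c1 -> c2
    foot = (x1 + k * (x2 - x1), y1 + k * (y2 - y1))
    n = (-(y2 - y1), x2 - x1)                # direction of the axis
    return foot, (foot[0] + n[0], foot[1] + n[1])

def circle_circle_intersections(c1, r1, c2, r2):
    # Construction #5 as in the article: radical axis, then construction #4.
    P, Q = radical_axis_points(c1, r1, c2, r2)
    return line_circle_intersections(P, Q, c1, r1)

print(circle_circle_intersections((0, 0), 1.0, (1, 0), 1.0))
# -> the two points (0.5, +/- sqrt(3)/2)
```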
Since all five basic constructions have been shown to be achievable with only a straightedge, provided that a single circle with its center is placed in the plane, the Poncelet–Steiner theorem is proved.

Other types of restricted construction

The Poncelet–Steiner theorem can be contrasted with the Mohr–Mascheroni theorem, which states that any compass and straightedge construction can be performed with only a compass.

The rusty compass restriction allows the use of a compass, provided that it produces circles of fixed radius. Although rusty compass constructions were explored from the 10th century onward, and all of Euclid was shown to be constructible with a rusty compass by the 17th century, the Poncelet–Steiner theorem proves that the rusty compass and straightedge together are more than sufficient for any and all Euclidean constructions. Indeed, the rusty compass becomes a tool simplifying constructions over merely the straightedge and a single circle. Viewed the other way, the Poncelet–Steiner theorem not only fixes the width of the rusty compass, but ensures that the compass breaks after its first use.

The requirement that one circle with its center be provided has since been generalized to include alternative but equally restrictive conditions. In one such alternative, the entire circle is not required at all. In 1904, Francesco Severi proved that any small arc (of the circle), together with the centre, will suffice.[4] This construction breaks the rusty compass at any point before the first circle is completed, but after it has begun, and still all constructions remain possible. Thus, the conditions hypothesizing the Poncelet–Steiner theorem may indeed be weakened, but only with respect to the completeness of the circular arc, and not, per Steiner's theorem, with respect to the center.

In two other alternatives, the centre may be omitted entirely, provided that given are either two concentric circles, or two distinct intersecting circles, of which there are two cases: two intersection points and one intersection point (tangential circles). From any of these scenarios, centres can be constructed, reducing the scenario to the original hypothesis. Still other variations exist. It suffices to have two non-intersecting circles (without their centres), provided that at least one point is given on either the centerline or the radical axis of the two circles, or alternatively to have three non-intersecting circles.[5] Once a single center is constructed, the scenario again reduces to the original hypothesis of the Poncelet–Steiner theorem.

Liberated, or neusis, constructions

Instead of restricting the rules of construction, it is of equal interest to study alleviating the rules. Just as geometers have studied what remains possible to construct (and how) when additional restrictions are placed on traditional construction rules - such as compass only, straightedge only, rusty compass, etc. - they have also studied what constructions become possible, that weren't already, when the natural restrictions inherent to traditional construction rules are alleviated. Questions such as "what becomes constructible", "how might it be constructed", "what are the fewest traditional rules to be broken", "what are the simplest tools needed", and "which seemingly different tools are equivalent" are asked. The arbitrary angle is not trisectable under traditional compass and straightedge rules, for example, but the trisection becomes constructible when the additional tool of an ellipse in the plane is allowed.
Some of the traditional problems such as angle trisection, doubling the cube, squaring the circle, finding cube roots, etc., become solvable using an expanded set of tools. In general, the objects studied to expand the scope of what is constructible have included: • Non-constructible "auxiliary" curves in the plane - including any of the conic sections, cycloids, lemniscates, limaçons, the Archimedean spiral, any of the trisectrices or quadratrices, and others. • Physical tools other than the compass and straightedge - generally called neuseis - which include specific tools such as the tomahawk, markable straightedges and graduated rulers, right triangular rulers, linkages, ellipsographs, and others. • Origami, or paper-folding techniques. The ancient geometers preferred unusual curves over the use of neuseis (alternative physical tools). They also would have preferred the conic sections over any other curve. The term neusis or neusis construction may also refer to a specific tool or method employed by the ancient geometers. Approximations Although not a true and rigorous construction (nor considered a neusis construction by normal definitions), it is possible to approximate a construction to a predetermined level of precision using only compass and straightedge, by an iterative approach. Although each point, line or circle produced is a valid construction, what the process aims to approximate is never truly achieved. Indeed, using a compass and straightedge alone, if an infinite number of constructive steps are allowed, many points beyond what is normally constructible become possible as limits of convergent processes. For example, an angle trisection may be performed exactly using an infinite sequence of angle bisections, since θ/2 − θ/4 + θ/8 − ⋯ = θ/3. If terminated at some finite step, an accurate approximation of a trisection can be achieved (a numeric sketch of this appears at the end of this article). In traditional construction rules this is not allowed: a construction must terminate in a finite number of applications of the compass and straightedge, and must produce the desired object exactly. See also • Steel square • Constructible polygon • Projective geometry • Inversive geometry • Geometrography Notes 1. Eves 1963, p.205 2. Retz & Keihn 1989, p.195 3. Jacob Steiner (1833). Die geometrischen Konstructionen, ausgeführt mittelst der geraden Linie und eines festen Kreises, als Lehrgegenstand auf höheren Unterrichts-Anstalten und zur praktischen Benutzung (in German). Berlin: Ferdinand Dümmler. Retrieved 2 April 2013. 4. Retz & Keihn 1989, p. 196 5. Wolfram's Math World References • Eves, Howard (1963), A Survey of Geometry / Volume one, Allyn and Bacon • Retz, Merlyn; Keihn, Meta Darlene (1989), "Compass and Straightedge Constructions", Historical Topics for the Mathematics Classroom, National Council of Teachers of Mathematics (NCTM), pp. 192–196, ISBN 9780873532815 Further reading • Eves, Howard Whitley (1995), "3.6 The Poncelet–Steiner Construction Theorem", College Geometry, Jones & Bartlett Learning, pp. 180–186, ISBN 9780867204759 External links • Jacob Steiner's theorem at cut-the-knot (It is impossible to find the center of a given circle with the straightedge alone) • Straightedge alone Basic constructions of straightedge-only constructions. • Two circles and only a straightedge, an article by Arseniy Akopyan and Roman Fedorov. • A remark on the construction of the centre of a circle by means of the ruler, by Christian Gram. • Poncelet–Steiner Theorem, a page primarily about Steiner's Theorem
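The numeric sketch promised in the Approximations section above (the angle and loop bound are arbitrary choices): truncating the alternating series θ/2 − θ/4 + θ/8 − ⋯ = θ/3 after n bisections leaves an error of θ/(3·2^n).

```python
import math

# Approximate trisection of an angle theta using angle bisections only:
# the alternating series theta/2 - theta/4 + theta/8 - ... sums to theta/3,
# and each term is obtained from the previous one by a single bisection.
theta = math.radians(60.0)
approx, term = 0.0, theta
for n in range(1, 21):
    term /= 2.0                          # one angle bisection
    approx += term if n % 2 else -term   # alternately add and subtract
print(approx, theta / 3, abs(approx - theta / 3))  # error is theta/(3*2**20)
```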
Steiner conic The Steiner conic or more precisely Steiner's generation of a conic, named after the Swiss mathematician Jakob Steiner, is an alternative method to define a non-degenerate projective conic section in a projective plane over a field. The usual definition of a conic uses a quadratic form (see Quadric (projective geometry)). Another alternative definition of a conic uses a hyperbolic polarity. It is due to K. G. C. von Staudt and sometimes called a von Staudt conic. The disadvantage of von Staudt's definition is that it only works when the underlying field does not have characteristic 2 (i.e., $\operatorname {Char} \neq 2$). Definition of a Steiner conic • Given two pencils $B(U),B(V)$ of lines at two points $U,V$ (all lines containing $U$ and $V$ resp.) and a projective but not perspective mapping $\pi $ of $B(U)$ onto $B(V)$, the intersection points of corresponding lines form a non-degenerate projective conic section (figure 1).[1][2][3][4] A perspective mapping $\pi $ of a pencil $B(U)$ onto a pencil $B(V)$ is a bijection (1-1 correspondence) such that corresponding lines intersect on a fixed line $a$, which is called the axis of the perspectivity $\pi $ (figure 2). A projective mapping is a finite product of perspective mappings. Simple example: If one shifts point $U$ of the first diagram, together with its pencil of lines, onto $V$, and rotates the shifted pencil around $V$ by a fixed angle $\varphi $, then the shift (translation) and the rotation generate a projective mapping $\pi $ of the pencil at point $U$ onto the pencil at $V$. From the inscribed angle theorem one gets: The intersection points of corresponding lines form a circle. Examples of commonly used fields are the real numbers $\mathbb {R} $, the rational numbers $\mathbb {Q} $ or the complex numbers $\mathbb {C} $. The construction also works over finite fields, providing examples in finite projective planes. Remark: The fundamental theorem for projective planes states[5] that a projective mapping in a projective plane over a field (pappian plane) is uniquely determined by prescribing the images of three lines. That means that, for the Steiner generation of a conic section, besides the two points $U,V$ only the images of 3 lines have to be given. These 5 items (2 points, 3 lines) uniquely determine the conic section. Remark: The term "perspective" is due to the dual statement: The projection of the points on a line $a$ from a center $Z$ onto a line $b$ is called a perspectivity (see below).[5] Example For the following example the images of the lines $a,u,w$ (see picture) are given: $\pi (a)=b,\pi (u)=w,\pi (w)=v$. The projective mapping $\pi $ is the product of the following perspective mappings $\pi _{b},\pi _{a}$: 1) $\pi _{b}$ is the perspective mapping of the pencil at point $U$ onto the pencil at point $O$ with axis $b$. 2) $\pi _{a}$ is the perspective mapping of the pencil at point $O$ onto the pencil at point $V$ with axis $a$. First one should check that $\pi =\pi _{a}\pi _{b}$ has the properties: $\pi (a)=b,\pi (u)=w,\pi (w)=v$. Hence for any line $g$ the image $\pi (g)=\pi _{a}\pi _{b}(g)$ can be constructed, and therefore the images of an arbitrary set of points. The lines $u$ and $v$ contain only the conic points $U$ and $V$ respectively. Hence $u$ and $v$ are tangent lines of the generated conic section.
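The shift-and-rotate example lends itself to a quick numeric check. The following sketch (coordinates, the angle value and all names are ad hoc choices, not from the text) intersects corresponding lines of the two pencils for several parameters and verifies that all intersection points determine the same circle through $U$ and $V$:

```python
import math

# Steiner generation, shift-and-rotate example: each line of the pencil at U
# corresponds to the line through V rotated by the fixed angle phi; by the
# inscribed angle theorem the intersection points lie on a circle.
U, V, phi = (-1.0, 0.0), (1.0, 0.0), 0.7

def meet(p, a1, q, a2):
    """Intersection of the line through p with direction angle a1 and the
    line through q with direction angle a2 (assumed non-parallel)."""
    det = math.cos(a1) * math.sin(a2) - math.sin(a1) * math.cos(a2)
    s = ((q[0] - p[0]) * math.sin(a2) - (q[1] - p[1]) * math.cos(a2)) / det
    return (p[0] + s * math.cos(a1), p[1] + s * math.sin(a1))

# A circle through U=(-1,0) and V=(1,0) has center (0,k); a point (x,y) on
# it satisfies x^2 + y^2 - 2*k*y = 1.  The printed k is the same for every
# parameter t, so all generated points lie on one such circle.
for t in (0.3, 0.9, 1.5, 2.1):
    x, y = meet(U, t, V, t + phi)
    print(round((x * x + y * y - 1) / (2 * y), 9))
```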
A proof that this method generates a conic section follows from switching to the affine restriction with line $w$ as the line at infinity, point $O$ as the origin of a coordinate system with points $U,V$ as the points at infinity of the x- and y-axis resp., and point $E=(1,1)$. The affine part of the generated curve turns out to be the hyperbola $y=1/x$.[2] Remark: 1. The Steiner generation of a conic section provides simple methods for the construction of ellipses, parabolas and hyperbolas which are commonly called the parallelogram methods. 2. The figure that appears while constructing a point (figure 3) is the 4-point degeneration of Pascal's theorem.[6] Steiner generation of a dual conic Definitions and the dual generation Dualizing (see duality (projective geometry)) a projective plane means exchanging the points with the lines and the operations intersection and connecting. The dual structure of a projective plane is also a projective plane. The dual plane of a pappian plane is pappian and can also be coordinatized by homogeneous coordinates. A non-degenerate dual conic section is analogously defined by a quadratic form. A dual conic can be generated by Steiner's dual method: • Given the point sets of two lines $u,v$ and a projective but not perspective mapping $\pi $ of $u$ onto $v$. Then the lines connecting corresponding points form a dual non-degenerate projective conic section. A perspective mapping $\pi $ of the point set of a line $u$ onto the point set of a line $v$ is a bijection (1-1 correspondence) such that the connecting lines of corresponding points intersect at a fixed point $Z$, which is called the centre of the perspectivity $\pi $ (see figure). A projective mapping is a finite sequence of perspective mappings. It is usual, when dealing with dual and common conic sections, to call the common conic section a point conic and the dual conic a line conic. In the case that the underlying field has $\operatorname {Char} =2$ all the tangents of a point conic intersect in a point, called the knot (or nucleus) of the conic. In that case, the dual of a non-degenerate point conic is a subset of points of a dual line and not an oval curve (in the dual plane). So, only in the case that $\operatorname {Char} \neq 2$ is the dual of a non-degenerate point conic a non-degenerate line conic. Examples (1) Projectivity given by two perspectivities: Two lines $u,v$ with intersection point $W$ are given and a projectivity $\pi $ from $u$ onto $v$ by two perspectivities $\pi _{A},\pi _{B}$ with centers $A,B$. $\pi _{A}$ maps line $u$ onto a third line $o$, $\pi _{B}$ maps line $o$ onto line $v$ (see diagram). Point $W$ must not lie on the lines ${\overline {AB}},o$. Projectivity $\pi $ is the composition of the two perspectivities: $\ \pi =\pi _{B}\pi _{A}$. Hence a point $X$ is mapped onto $\pi (X)=\pi _{B}\pi _{A}(X)$ and the line $x={\overline {X\pi (X)}}$ is an element of the dual conic defined by $\pi $. (If $W$ were a fixed point, $\pi $ would be perspective.[7]) (2) Three points and their images are given: The following example is the dual of the one given above for a Steiner conic. The images of the points $A,U,W$ are given: $\pi (A)=B,\,\pi (U)=W,\,\pi (W)=V$. The projective mapping $\pi $ can be represented by the product of the following perspectivities $\pi _{B},\pi _{A}$: 1. $\pi _{B}$ is the perspectivity of the point set of line $u$ onto the point set of line $o$ with centre $B$. 2. $\pi _{A}$ is the perspectivity of the point set of line $o$ onto the point set of line $v$ with centre $A$.
One easily checks that the projective mapping $\pi =\pi _{A}\pi _{B}$ fulfills $\pi (A)=B,\,\pi (U)=W,\,\pi (W)=V$. Hence for any arbitrary point $G$ the image $\pi (G)=\pi _{A}\pi _{B}(G)$ can be constructed, and the line ${\overline {G\pi (G)}}$ is an element of a non-degenerate dual conic section. Because the points $U$ and $V$ are contained in the lines $u$, $v$ respectively, the points $U$ and $V$ are points of the conic and the lines $u,v$ are tangents at $U,V$. Intrinsic conics in a linear incidence geometry The Steiner construction defines the conics in a planar linear incidence geometry (two points determine at most one line and two lines intersect in at most one point) intrinsically, that is, using only the collineation group. Specifically, $E(T,P)$ is the conic at point $P$ afforded by the collineation $T$, consisting of the intersections of $L$ and $T(L)$ for all lines $L$ through $P$. If $T(P)=P$ or $T(L)=L$ for some $L$ then the conic is degenerate. For example, in the real coordinate plane, the affine type (ellipse, parabola, hyperbola) of $E(T,P)$ is determined by the trace and determinant of the matrix component of $T$, independent of $P$. By contrast, the collineation group of the real hyperbolic plane $\mathbb {H} ^{2}$ consists of isometries. Consequently, the intrinsic conics comprise a small but varied subset of the general conics, curves obtained from the intersections of projective conics with a hyperbolic domain. Further, unlike the Euclidean plane, there is no overlap between the direct conics $E(T,P)$, where $T$ preserves orientation, and the opposite conics $E(T,P)$, where $T$ reverses orientation. The direct case includes central conics (those with two perpendicular lines of symmetry) and non-central conics, whereas every opposite conic is central. Even though direct and opposite central conics cannot be congruent, they are related by a quasi-symmetry defined in terms of complementary angles of parallelism. Thus, in any inversive model of $\mathbb {H} ^{2}$, each direct central conic is birationally equivalent to an opposite central conic.[8] In fact, the central conics represent all genus 1 curves with real shape invariant $j\geq 1$. A minimal set of representatives is obtained from the central direct conics with common center and axis of symmetry, whereby the shape invariant is a function of the eccentricity, defined in terms of the distance between $P$ and $T(P)$. The orthogonal trajectories of these curves represent all genus 1 curves with $j\leq 1$, which manifest as either irreducible cubics or bi-circular quartics. Using the elliptic curve addition law on each trajectory, every general central conic in $\mathbb {H} ^{2}$ decomposes uniquely as the sum of two intrinsic conics by adding pairs of points where the conics intersect each trajectory.[9] Notes 1. Coxeter 1993, p. 80 2. Hartmann, p. 38 3. Merserve 1983, p. 65 4. Jacob Steiner's Vorlesungen über synthetische Geometrie, B. G. Teubner, Leipzig 1867 (from Google Books: (German) Part II follows Part I) Part II, pg. 96 5. Hartmann, p. 19 6. Hartmann, p. 32 7. H. Lenz: Vorlesungen über projektive Geometrie, BI, Mannheim, 1965, p. 49. 8. Sarli, John (April 2012). "Conics in the hyperbolic plane intrinsic to the collineation group". Journal of Geometry. 103 (1): 131–148. doi:10.1007/s00022-012-0115-5. ISSN 0047-2468. S2CID 119588289. 9. Sarli, John (2021-10-22). "The Elliptic Curve Decomposition of Central Conics in the Real Hyperbolic Plane". doi:10.21203/rs.3.rs-936116/v1.
References Wikimedia Commons has media related to Steiner conic. • Coxeter, H. S. M. (1993), The Real Projective Plane, Springer Science & Business Media • Hartmann, Erich, Planar Circle Geometries, an Introduction to Moebius-, Laguerre- and Minkowski Planes (PDF), retrieved 20 September 2014 (PDF; 891 kB). • Merserve, Bruce E. (1983) [1959], Fundamental Concepts of Geometry, Dover, ISBN 0-486-63415-9
Steiner ellipse In geometry, the Steiner ellipse of a triangle, also called the Steiner circumellipse to distinguish it from the Steiner inellipse, is the unique circumellipse (an ellipse that passes through the triangle's vertices) whose center is the triangle's centroid.[1] Named after Jakob Steiner, it is an example of a circumconic. By comparison, the circumcircle of a triangle is another circumconic passing through the vertices, but it is not centered at the triangle's centroid unless the triangle is equilateral. Not to be confused with Steiner conic. The area of the Steiner ellipse equals the area of the triangle times ${\frac {4\pi }{3{\sqrt {3}}}},$ and hence is 4 times the area of the Steiner inellipse. The Steiner ellipse is the Steiner inellipse scaled by a factor 2 about the centroid. Hence the two ellipses are similar (have the same eccentricity). Properties • The Steiner ellipse is the only ellipse whose center is the centroid $S$ of a triangle $ABC$ and which contains the points $A,B,C$. The area of the Steiner ellipse is ${\tfrac {4\pi }{3{\sqrt {3}}}}$ times the triangle's area. Proof A) For an equilateral triangle the Steiner ellipse is the circumcircle, which is the only ellipse that fulfills the preconditions. Indeed, the desired ellipse has to contain the triangle reflected at the center of the ellipse; this is true for the circumcircle, and a conic is uniquely determined by 5 points. Hence the circumcircle is the only Steiner ellipse in this case. B) Because an arbitrary triangle is the affine image of an equilateral triangle, an ellipse is the affine image of the unit circle, and the centroid of a triangle is mapped onto the centroid of the image triangle, the property (a unique circumellipse with the centroid as center) holds for any triangle. The area of the circumcircle of an equilateral triangle is ${\tfrac {4\pi }{3{\sqrt {3}}}}$ times the area of the triangle, and an affine map preserves the ratio of areas. Hence the statement on the ratio is true for any triangle and its Steiner ellipse. Determination of conjugate points An ellipse can be drawn (by computer or by hand) if, besides the center, at least two conjugate points on conjugate diameters are known. In this case • either one determines, by Rytz's construction, the vertices of the ellipse and draws the ellipse with a suitable ellipse compass, • or one uses a parametric representation for drawing the ellipse. Let $ABC$ be a triangle with centroid $S$. The shear mapping with axis $d$ through $S$ and parallel to $AB$ transforms the triangle into the isosceles triangle $A'B'C'$ (see diagram). Point $C'$ is a vertex of the Steiner ellipse of triangle $A'B'C'$. A second vertex $D$ of this ellipse lies on $d$, because $d$ is perpendicular to $SC'$ (symmetry reasons). This vertex can be determined from the data (ellipse with center $S$ through $C'$ and $B'$, $|A'B'|=c$) by calculation. It turns out that $|SD|={\frac {c}{\sqrt {3}}}\ .$ Or by drawing: using de la Hire's method (see center diagram), vertex $D$ of the Steiner ellipse of the isosceles triangle $A'B'C'$ is determined. The inverse shear mapping maps $C'$ back to $C$, and point $D$ is fixed, because it is a point on the shear axis. Hence the semidiameter $SD$ is conjugate to $SC$. With the help of this pair of conjugate semidiameters the ellipse can be drawn, by hand or by computer.
Parametric representation and equation Given: Triangle $\ A=(a_{1},a_{2}),\;B=(b_{1},b_{2}),\;C=(c_{1},c_{2})$ Wanted: Parametric representation and equation of its Steiner ellipse The centroid of the triangle is $\ S=({\tfrac {a_{1}+b_{1}+c_{1}}{3}},{\tfrac {a_{2}+b_{2}+c_{2}}{3}})\ .$ Parametric representation: From the investigation of the previous section one gets the following parametric representation of the Steiner ellipse: • $\ {\vec {x}}={\vec {p}}(t)={\overrightarrow {OS}}\;+\;{\overrightarrow {SC}}\;\cos t\;+\;{\frac {1}{\sqrt {3}}}{\overrightarrow {AB}}\;\sin t\;,\quad 0\leq t<2\pi \;.$ • The four vertices of the ellipse are $\quad {\vec {p}}(t_{0}),\;{\vec {p}}(t_{0}\pm {\frac {\pi }{2}}),\;{\vec {p}}(t_{0}+\pi ),\ $ where $t_{0}$ comes from $\cot(2t_{0})={\frac {{\vec {f}}_{1}^{\,2}-{\vec {f}}_{2}^{\,2}}{2{\vec {f}}_{1}\cdot {\vec {f}}_{2}}}\quad $ with $\quad {\vec {f}}_{1}={\vec {SC}},\quad {\vec {f}}_{2}={\frac {1}{\sqrt {3}}}{\vec {AB}}\quad $ (see ellipse). The roles of the points in determining the parametric representation can be changed. Example (see diagram): $A=(-5,-5),B=(0,25),C=(20,0)$. Equation: If the origin is the centroid of the triangle (center of the Steiner ellipse), the equation corresponding to the parametric representation $ {\vec {x}}={\vec {f}}_{1}\cos t+{\vec {f}}_{2}\sin t$ is • $\ (xf_{2y}-yf_{2x})^{2}+(yf_{1x}-xf_{1y})^{2}-(f_{1x}f_{2y}-f_{1y}f_{2x})^{2}=0\ ,$ with $\ {\vec {f}}_{i}=(f_{ix},f_{iy})^{T}\ $.[2] Example: The centroid of triangle $\quad A=(-{\tfrac {3}{2}}{\sqrt {3}},-{\tfrac {3}{2}}),\ B=({\tfrac {\sqrt {3}}{2}},-{\tfrac {3}{2}}),\ C=({\sqrt {3}},3)\quad $ is the origin. From the vectors ${\vec {f}}_{1}=({\sqrt {3}},3)^{T},\ {\vec {f}}_{2}=(2,0)^{T}\ $ one gets the equation of the Steiner ellipse: $9x^{2}+7y^{2}-6{\sqrt {3}}xy-36=0\ .$ Determination of the semi-axes and linear eccentricity If the vertices are already known (see above), the semi-axes can be determined. If one is interested in the axes and eccentricity only, the following method is more appropriate: Let $a,b$ with $a>b$ be the semi-axes of the Steiner ellipse. From Apollonius's theorem on properties of conjugate semidiameters of ellipses one gets: $a^{2}+b^{2}={\vec {SC}}^{2}+{\vec {SD}}^{2}\ ,\quad a\cdot b=\left|\det({\vec {SC}},{\vec {SD}})\right|\ .$ Denoting the right hand sides of the equations by $M$ and $N$ respectively and transforming the non-linear system (respecting $a>b>0$) leads to: $a^{2}+b^{2}=M,\ ab=N\quad \rightarrow \quad a^{2}+2ab+b^{2}=M+2N,\ a^{2}-2ab+b^{2}=M-2N$ $\rightarrow \quad (a+b)^{2}=M+2N,\ (a-b)^{2}=M-2N\quad \rightarrow \quad a+b={\sqrt {M+2N}},\ a-b={\sqrt {M-2N}}\ .$ Solving for $a$ and $b$ one gets the semi-axes: • $\ a={\frac {1}{2}}({\sqrt {M+2N}}+{\sqrt {M-2N}})\ ,\qquad b={\frac {1}{2}}({\sqrt {M+2N}}-{\sqrt {M-2N}})\ ,$ with $\qquad M={\vec {SC}}^{2}+{\frac {1}{3}}{\vec {AB}}^{2}\ ,\quad N={\frac {1}{\sqrt {3}}}|\det({\vec {SC}},{\vec {AB}})|\qquad $. The linear eccentricity of the Steiner ellipse is • $c={\sqrt {a^{2}-b^{2}}}=\cdots ={\sqrt {\sqrt {M^{2}-4N^{2}}}}\ .$ and the area • $F=\pi ab=\pi N={\frac {\pi }{\sqrt {3}}}\left|\det({\vec {SC}},{\vec {AB}})\right|$ One should not confuse $a,b$ in this section with the other meanings in this article! Trilinear equation The equation of the Steiner circumellipse in trilinear coordinates is[1] $bcyz+cazx+abxy=0$ for side lengths a, b, c.
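With the formulas for $M$ and $N$ above, the semi-axes are mechanical to compute. A minimal sketch, using the example triangle $A=(-5,-5),B=(0,25),C=(20,0)$ from the parametric-representation section (variable names are ad hoc):

```python
import math

# Semi-axes, linear eccentricity and area of the Steiner ellipse from the
# formulas above, for the example triangle A=(-5,-5), B=(0,25), C=(20,0).
A, B, C = (-5.0, -5.0), (0.0, 25.0), (20.0, 0.0)
S = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)   # centroid
SC = (C[0] - S[0], C[1] - S[1])
AB = (B[0] - A[0], B[1] - A[1])

M = SC[0] ** 2 + SC[1] ** 2 + (AB[0] ** 2 + AB[1] ** 2) / 3
N = abs(SC[0] * AB[1] - SC[1] * AB[0]) / math.sqrt(3)

a = (math.sqrt(M + 2 * N) + math.sqrt(M - 2 * N)) / 2      # semi-major axis
b = (math.sqrt(M + 2 * N) - math.sqrt(M - 2 * N)) / 2      # semi-minor axis
print(a, b, math.sqrt(a * a - b * b))   # axes and linear eccentricity
print(math.pi * a * b, math.pi * N)     # area computed two ways; they agree
```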
Alternative calculation of the semi-axes and linear eccentricity The semi-major and semi-minor axes of the Steiner ellipse of a triangle with side lengths a, b, c have lengths[1] ${\frac {1}{3}}{\sqrt {a^{2}+b^{2}+c^{2}\pm 2Z}},$ and the linear eccentricity is ${\frac {2}{3}}{\sqrt {Z}},$ where $Z={\sqrt {a^{4}+b^{4}+c^{4}-a^{2}b^{2}-b^{2}c^{2}-c^{2}a^{2}}}.$ The foci are called the Bickart points of the triangle. See also • Triangle conic References 1. Weisstein, Eric W. "Steiner Circumellipse." From MathWorld—A Wolfram Web Resource. http://mathworld.wolfram.com/SteinerCircumellipse.html 2. CDKG: Computerunterstützte Darstellende und Konstruktive Geometrie (TU Darmstadt) (PDF; 3,4 MB), p. 65. • Georg Glaeser, Hellmuth Stachel, Boris Odehnal: The Universe of Conics, Springer 2016, ISBN 978-3-662-45449-7, p. 383
Steiner point (triangle) In triangle geometry, the Steiner point is a particular point associated with a triangle.[1] It is a triangle center[2] and is designated as the center X(99) in Clark Kimberling's Encyclopedia of Triangle Centers. Jakob Steiner (1796–1863), Swiss mathematician, described this point in 1826. The point was given Steiner's name by Joseph Neuberg in 1886.[2][3] Definition The Steiner point is defined as follows. (This is not the way in which Steiner defined it.[2]) Let ABC be any given triangle. Let O be the circumcenter and K be the symmedian point of triangle ABC. The circle with OK as diameter is the Brocard circle of triangle ABC. The line through O perpendicular to the line BC intersects the Brocard circle at another point A'. The line through O perpendicular to the line CA intersects the Brocard circle at another point B'. The line through O perpendicular to the line AB intersects the Brocard circle at another point C'. (The triangle A'B'C' is the Brocard triangle of triangle ABC.) Let LA be the line through A parallel to the line B'C', LB be the line through B parallel to the line C'A' and LC be the line through C parallel to the line A'B'. Then the three lines LA, LB and LC are concurrent. The point of concurrency is the Steiner point of triangle ABC. In the Encyclopedia of Triangle Centers the Steiner point is defined as follows: Let ABC be any given triangle. Let O be the circumcenter and K be the symmedian point of triangle ABC. Let lA be the reflection of the line OK in the line BC, lB be the reflection of the line OK in the line CA and lC be the reflection of the line OK in the line AB. Let the lines lB and lC intersect at A″, the lines lC and lA intersect at B″ and the lines lA and lB intersect at C″. Then the lines AA″, BB″ and CC″ are concurrent. The point of concurrency is the Steiner point of triangle ABC. Trilinear coordinates The trilinear coordinates of the Steiner point are given below. $bc/(b^{2}-c^{2}):ca/(c^{2}-a^{2}):ab/(a^{2}-b^{2})$ $=b^{2}c^{2}\csc(B-C):c^{2}a^{2}\csc(C-A):a^{2}b^{2}\csc(A-B)$ Properties 1. The Steiner circumellipse of triangle ABC, also called the Steiner ellipse, is the ellipse of least area that passes through the vertices A, B and C. The Steiner point of triangle ABC lies on the Steiner circumellipse of triangle ABC. 2. Canadian mathematician Ross Honsberger stated the following as a property of the Steiner point: The Steiner point of a triangle is the center of mass of the system obtained by suspending at each vertex a mass equal to the magnitude of the exterior angle at that vertex.[4] The center of mass of such a system is in fact not the Steiner point, but the Steiner curvature centroid, which has the trilinear coordinates $\left({\frac {\pi -A}{a}}:{\frac {\pi -B}{b}}:{\frac {\pi -C}{c}}\right)$.[5] It is the triangle center designated as X(1115) in the Encyclopedia of Triangle Centers. 3. The Simson line of the Steiner point of a triangle ABC is parallel to the line OK, where O is the circumcenter and K is the symmedian point of triangle ABC. Tarry point The Tarry point of a triangle is closely related to the Steiner point of the triangle. Let ABC be any given triangle. The point on the circumcircle of triangle ABC diametrically opposite to the Steiner point of triangle ABC is called the Tarry point of triangle ABC. The Tarry point is a triangle center and it is designated as the center X(98) in the Encyclopedia of Triangle Centers.
The trilinear coordinates of the Tarry point are given below: $\sec(A+\omega ):\sec(B+\omega ):\sec(C+\omega )=f(a,b,c):f(b,c,a):f(c,a,b)$ where ω is the Brocard angle of triangle ABC and $f(a,b,c)={\frac {bc}{b^{4}+c^{4}-a^{2}b^{2}-a^{2}c^{2}}}$ Similar to the definition of the Steiner point, the Tarry point can be defined as follows: Let ABC be any given triangle. Let A'B'C' be the Brocard triangle of triangle ABC. Let LA be the line through A perpendicular to the line B'C', LB be the line through B perpendicular to the line C'A' and LC be the line through C perpendicular to the line A'B'. Then the three lines LA, LB and LC are concurrent. The point of concurrency is the Tarry point of triangle ABC. References 1. Paul E. Black. "Steiner point". Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology. Retrieved 17 May 2012. 2. Kimberling, Clark. "Steiner point". Retrieved 17 May 2012. 3. J. Neuberg (1886). "Sur le point de Steiner". Journal de mathématiques spéciales: 29. 4. Honsberger, Ross (1965). Episodes in nineteenth and twentieth century Euclidean geometry. The Mathematical Association of America. pp. 119–124. 5. Eric W., Weisstein. "Steiner Curvature Centroid". MathWorld—A Wolfram Web Resource. Retrieved 17 May 2012.
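The trilinears above can be turned into Cartesian coordinates by the standard conversion from trilinear to barycentric coordinates. A small numeric sketch (arbitrary scalene triangle; all names ad hoc) computes the Steiner point this way and checks that it lies on the circumcircle, consistent with the Tarry point being its antipode:

```python
import math

# Steiner point from its trilinears bc/(b^2-c^2) : ca/(c^2-a^2) : ab/(a^2-b^2),
# converted via barycentrics (trilinear x:y:z -> barycentric ax:by:cz).
A, B, C = (0.0, 0.0), (6.0, 0.0), (1.0, 4.0)   # an arbitrary scalene triangle
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)

t = (b * c / (b * b - c * c), c * a / (c * c - a * a), a * b / (a * a - b * b))
w = (a * t[0], b * t[1], c * t[2])             # barycentric weights
s = sum(w)
x = (w[0] * A[0] + w[1] * B[0] + w[2] * C[0]) / s
y = (w[0] * A[1] + w[1] * B[1] + w[2] * C[1]) / s

# Circumcenter, to check that the Steiner point lies on the circumcircle.
d = 2 * (A[0] * (B[1] - C[1]) + B[0] * (C[1] - A[1]) + C[0] * (A[1] - B[1]))
ux = ((A[0]**2 + A[1]**2) * (B[1] - C[1]) + (B[0]**2 + B[1]**2) * (C[1] - A[1])
      + (C[0]**2 + C[1]**2) * (A[1] - B[1])) / d
uy = ((A[0]**2 + A[1]**2) * (C[0] - B[0]) + (B[0]**2 + B[1]**2) * (A[0] - C[0])
      + (C[0]**2 + C[1]**2) * (B[0] - A[0])) / d
print((x, y))
print(math.dist((x, y), (ux, uy)), math.dist(A, (ux, uy)))   # equal radii
```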
Steiner travelling salesman problem The Steiner traveling salesman problem (Steiner TSP, or STSP) is an extension of the traveling salesman problem. Given a list of cities, some of which are required, and the lengths of the roads between them, the goal is to find the shortest possible walk that visits each required city and then returns to the origin city.[1] During a walk, vertices can be visited more than once, and edges may be traversed more than once.[2] References 1. Interian, Ruben; Ribeiro, Celso C. (15 July 2017). "A GRASP heuristic using path-relinking and restarts for the Steiner traveling salesman problem". International Transactions in Operational Research. 24 (6): 1307–1323. doi:10.1111/itor.12419. 2. Álvarez-Miranda, Eduardo; Sinnl, Markus (2019-09-05). "A note on computational aspects of the Steiner traveling salesman problem". International Transactions in Operational Research. 26 (4): 1396–1401. doi:10.1111/itor.12592. S2CID 71717255. • M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, 1979. • Huili Zhang, Weitian Tong, Yinfeng Xu, and Guohui Lin. The Steiner traveling salesman problem with online edge blockages. European Journal of Operational Research, 243(1):30–40, 2015. • Gerard Cornuejols, Jean Fonlupt, and Denis Naddef. The traveling salesman problem on a graph and some related integer polyhedra. Mathematical Programming, 33(1):1–27, 1985. • S. Borne, A.R. Mahjoub, and R. Taktak. A branch-and-cut algorithm for the multiple Steiner TSP with order constraints. Electronic Notes in Discrete Mathematics, 41:487–494, 2013. • Huili Zhang, Weitian Tong, Yinfeng Xu, and Guohui Lin. The Steiner traveling salesman problem with online advanced edge blockages. Computers & Operations Research, 70:26–38, 2016. • Adam N. Letchford, Saeideh D. Nasiri, and Dirk Oliver Theis. Compact formulations of the Steiner traveling salesman problem and related problems. European Journal of Operational Research, 228(1):83–92, 2013. • Adam N. Letchford and Saeideh D. Nasiri. The Steiner travelling salesman problem with correlated costs. European Journal of Operational Research, 245(1):62–69, 2015. • Juan-José Salazar-González. The Steiner cycle polytope. European Journal of Operational Research, 147(3):671–679, 2003.
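Because vertices and edges may be revisited, an optimal walk travels between required cities along shortest paths, so the Steiner TSP reduces to an ordinary TSP on the shortest-path distances among the required cities. A brute-force sketch of that standard reduction (the graph, weights and all names are invented for illustration):

```python
from itertools import permutations

# Steiner TSP on a tiny graph: required cities {0, 2, 4}, optional {1, 3}.
INF = float("inf")
n = 5
dist = [[INF] * n for _ in range(n)]
for i in range(n):
    dist[i][i] = 0.0
for u, v, w in [(0, 1, 2), (1, 2, 2), (2, 3, 1), (3, 4, 2), (4, 0, 3), (1, 3, 5)]:
    dist[u][v] = dist[v][u] = min(dist[u][v], w)

# Floyd-Warshall all-pairs shortest paths (the metric closure).
for k in range(n):
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

required = [0, 2, 4]
start = required[0]
best = min(
    sum(dist[a][b] for a, b in zip((start,) + p, p + (start,)))
    for p in permutations(required[1:])
)
print(best)   # length of the shortest closed walk visiting all required cities
```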
Steiner system In combinatorial mathematics, a Steiner system (named after Jakob Steiner) is a type of block design, specifically a t-design with λ = 1 and t = 2, or (in recent usage) t ≥ 2. A Steiner system with parameters t, k, n, written S(t,k,n), is an n-element set S together with a set of k-element subsets of S (called blocks) with the property that each t-element subset of S is contained in exactly one block. In an alternate notation for block designs, an S(t,k,n) would be a t-(n,k,1) design. This definition is relatively new. The classical definition of Steiner systems also required that k = t + 1. An S(2,3,n) was (and still is) called a Steiner triple (or triad) system, while an S(3,4,n) is called a Steiner quadruple system, and so on. With the generalization of the definition, this naming system is no longer strictly adhered to. Long-standing problems in design theory were whether there exist any nontrivial Steiner systems (nontrivial meaning t < k < n) with t ≥ 6, and whether infinitely many have t = 4 or 5.[1] Both existences were proved by Peter Keevash in 2014. His proof is non-constructive and, as of 2019, no actual Steiner systems are known for large values of t.[2][3][4] Types of Steiner systems A finite projective plane of order q, with the lines as blocks, is an S(2, q + 1, q² + q + 1), since it has q² + q + 1 points, each line passes through q + 1 points, and each pair of distinct points lies on exactly one line. A finite affine plane of order q, with the lines as blocks, is an S(2, q, q²). An affine plane of order q can be obtained from a projective plane of the same order by removing one block and all of the points in that block from the projective plane. Choosing different blocks to remove in this way can lead to non-isomorphic affine planes. An S(3,4,n) is called a Steiner quadruple system. A necessary and sufficient condition for the existence of an S(3,4,n) is that n $\equiv $ 2 or 4 (mod 6). The abbreviation SQS(n) is often used for these systems. Up to isomorphism, SQS(8) and SQS(10) are unique, there are 4 SQS(14)s and 1,054,163 SQS(16)s.[5] An S(4,5,n) is called a Steiner quintuple system. A necessary condition for the existence of such a system is that n $\equiv $ 3 or 5 (mod 6), which comes from considerations that apply to all the classical Steiner systems. An additional necessary condition is that n $\not \equiv $ 4 (mod 5), which comes from the fact that the number of blocks must be an integer. Sufficient conditions are not known. There is a unique Steiner quintuple system of order 11, but none of order 15 or order 17.[6] Systems are known for orders 23, 35, 47, 71, 83, 107, 131, 167 and 243. The smallest order for which the existence is not known (as of 2011) is 21. Steiner triple systems An S(2,3,n) is called a Steiner triple system, and its blocks are called triples. It is common to see the abbreviation STS(n) for a Steiner triple system of order n. The total number of pairs is n(n−1)/2, each triple contains three pairs, and so the total number of triples is n(n−1)/6. This shows that n must be of the form 6k + 1 or 6k + 3 for some k. The fact that this condition on n is sufficient for the existence of an S(2,3,n) was proved by Raj Chandra Bose[7] and T. Skolem.[8] The projective plane of order 2 (the Fano plane) is an STS(7) and the affine plane of order 3 is an STS(9).
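The Fano plane claim is easy to check by machine. A minimal sketch, taking the blocks as the cyclic shifts of the difference set {1, 2, 4} modulo 7 (one standard presentation of the Fano plane, assumed here):

```python
from itertools import combinations

# Verify that the Fano plane is an S(2,3,7): every pair of the 7 points
# lies in exactly one of the 7 blocks {1+i, 2+i, 4+i} (mod 7).
blocks = [frozenset(((1 + i) % 7, (2 + i) % 7, (4 + i) % 7)) for i in range(7)]
for pair in combinations(range(7), 2):
    containing = [blk for blk in blocks if set(pair) <= blk]
    assert len(containing) == 1
print(len(blocks), "blocks; every pair lies in exactly one block")
```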
Up to isomorphism, the STS(7) and STS(9) are unique, there are two STS(13)s, 80 STS(15)s, and 11,084,874,829 STS(19)s.[9] We can define a multiplication on the set S using the Steiner triple system by setting aa = a for all a in S, and ab = c if {a,b,c} is a triple. This makes S an idempotent, commutative quasigroup. It has the additional property that ab = c implies bc = a and ca = b.[note 1] Conversely, any (finite) quasigroup with these properties arises from a Steiner triple system. Commutative idempotent quasigroups satisfying this additional property are called Steiner quasigroups.[10] Resolvable Steiner systems Some of the S(2,3,n) systems can have their triples partitioned into (n−1)/2 sets, each consisting of n/3 pairwise disjoint triples. Such systems are called resolvable, and are known as Kirkman triple systems after Thomas Kirkman, who studied such resolvable systems before Steiner. Dale Mesner, Earl Kramer, and others investigated collections of Steiner triple systems that are mutually disjoint (i.e., no two Steiner systems in such a collection share a common triple). It is known (Bays 1917, Kramer & Mesner 1974) that seven mutually disjoint S(2,3,9) systems can be constructed that together cover all 84 triples on a 9-set; it was also known by them that there are 15360 different ways to find such 7-sets of solutions, which reduce to two non-isomorphic solutions under relabeling, with multiplicities 6720 and 8640 respectively. The corresponding question of finding thirteen mutually disjoint S(2,3,15) systems was asked by James Sylvester in 1860 as an extension of Kirkman's schoolgirl problem, namely whether Kirkman's schoolgirls could march for an entire term of 13 weeks with no triple of girls being repeated over the whole term. The question was solved by R. H. F. Denniston in 1974,[11] who constructed Week 1 as follows: Day 1 ABJ CEM FKL HIN DGO Day 2 ACH DEI FGM JLN BKO Day 3 ADL BHM GIK CFN EJO Day 4 AEG BIL CJK DMN FHO Day 5 AFI BCD GHJ EKN LMO Day 6 AKM DFJ EHL BGN CIO Day 7 BEF CGL DHK IJM ANO for girls labeled A to O, and constructed each subsequent week's solution from its immediate predecessor by changing A to B, B to C, ... L to M and M back to A, all while leaving N and O unchanged. The Week 13 solution, upon undergoing that relabeling, returns to the Week 1 solution. Denniston reported in his paper that the search he employed took 7 hours on an Elliott 4130 computer at the University of Leicester, and he immediately ended the search on finding the solution above, not looking to establish uniqueness. The number of non-isomorphic solutions to Sylvester's problem remains unknown as of 2021. Properties It is clear from the definition of S(t, k, n) that $1<t<k<n$. (Equalities, while technically possible, lead to trivial systems.) If S(t, k, n) exists, then taking all blocks containing a specific element and discarding that element gives a derived system S(t−1, k−1, n−1). Therefore, the existence of S(t−1, k−1, n−1) is a necessary condition for the existence of S(t, k, n). The number of t-element subsets in S is ${\tbinom {n}{t}}$, while the number of t-element subsets in each block is ${\tbinom {k}{t}}$. Since every t-element subset is contained in exactly one block, we have ${\tbinom {n}{t}}=b{\tbinom {k}{t}}$, or $b={\frac {\tbinom {n}{t}}{\tbinom {k}{t}}}={\frac {n(n-1)\cdots (n-t+1)}{k(k-1)\cdots (k-t+1)}},$ where b is the number of blocks.
Similar reasoning about t-element subsets containing a particular element gives us ${\tbinom {n-1}{t-1}}=r{\tbinom {k-1}{t-1}}$, or $r={\frac {\tbinom {n-1}{t-1}}{\tbinom {k-1}{t-1}}}={\frac {(n-t+1)\cdots (n-2)(n-1)}{(k-t+1)\cdots (k-2)(k-1)}},$ where r is the number of blocks containing any given element. From these definitions follows the equation $bk=rn$. It is a necessary condition for the existence of S(t, k, n) that b and r be integers. As with any block design, Fisher's inequality $b\geq n$ is true in Steiner systems. Given the parameters of a Steiner system S(t, k, n) and a subset of size $t'\leq t$, contained in at least one block, one can compute the number of blocks intersecting that subset in a fixed number of elements by constructing a Pascal triangle.[12] In particular, the number of blocks intersecting a fixed block in any number of elements is independent of the chosen block. The number of blocks that contain any i-element set of points is $\lambda _{i}=\left.{\binom {n-i}{t-i}}\right/{\binom {k-i}{t-i}}{\text{ for }}i=0,1,\ldots ,t.$ It can be shown that if there is a Steiner system S(2, k, n), where k is a prime power greater than 1, then n $\equiv $ 1 or k (mod k(k−1)). In particular, a Steiner triple system S(2, 3, n) must have n = 6m + 1 or 6m + 3. And as we have already mentioned, this is the only restriction on Steiner triple systems; that is, for each natural number m, systems S(2, 3, 6m + 1) and S(2, 3, 6m + 3) exist. History Steiner triple systems were defined for the first time by Wesley S. B. Woolhouse in 1844 in the Prize question #1733 of the Lady's and Gentleman's Diary.[13] The posed problem was solved by Thomas Kirkman (1847). In 1850 Kirkman posed a variation of the problem known as Kirkman's schoolgirl problem, which asks for triple systems having an additional property (resolvability). Unaware of Kirkman's work, Jakob Steiner (1853) reintroduced triple systems, and as this work was more widely known, the systems were named in his honor. Mathieu groups Several examples of Steiner systems are closely related to group theory. In particular, the finite simple groups called Mathieu groups arise as automorphism groups of Steiner systems: • The Mathieu group M11 is the automorphism group of an S(4,5,11) Steiner system • The Mathieu group M12 is the automorphism group of an S(5,6,12) Steiner system • The Mathieu group M22 is the unique index 2 subgroup of the automorphism group of an S(3,6,22) Steiner system • The Mathieu group M23 is the automorphism group of an S(4,7,23) Steiner system • The Mathieu group M24 is the automorphism group of an S(5,8,24) Steiner system. The Steiner system S(5, 6, 12) There is a unique S(5,6,12) Steiner system; its automorphism group is the Mathieu group M12, and in that context it is denoted by W12. Projective line construction This construction is due to Carmichael (1937).[14] Add a new element, call it ∞, to the 11 elements of the finite field F11 (that is, the integers mod 11). This set, S, of 12 elements can be formally identified with the points of the projective line over F11. Call the following specific subset of size 6, $\{\infty ,1,3,4,5,9\},$ a "block" (it contains ∞ together with the 5 nonzero squares in F11). From this block, we obtain the other blocks of the S(5,6,12) system by repeatedly applying the linear fractional transformations: $z'=f(z)={\frac {az+b}{cz+d}},$ where a,b,c,d are in F11 and ad − bc = 1. With the usual conventions of defining f (−d/c) = ∞ and f (∞) = a/c, these functions map the set S onto itself.
In geometric language, they are projectivities of the projective line. They form a group under composition which is the projective special linear group PSL(2,11) of order 660. There are exactly five elements of this group that leave the starting block fixed setwise,[15] namely those such that b = c = 0 and ad = 1, so that f(z) = a²z. So there will be 660/5 = 132 images of that block. As a consequence of the multiply transitive property of this group acting on this set, any subset of five elements of S will appear in exactly one of these 132 images of size six. Kitten construction An alternative construction of W12 is obtained by use of the 'kitten' of R. T. Curtis,[16] which was intended as a "hand calculator" to write down blocks one at a time. The kitten method is based on completing patterns in a 3×3 grid of numbers, which represent an affine geometry on the vector space F3×F3, an S(2,3,9) system. Construction from K6 graph factorization The relations between the graph factors of the complete graph K6 generate an S(5,6,12).[17] A K6 graph has 6 vertices, 15 edges, 15 perfect matchings, and 6 different 1-factorizations (ways to partition the edges into disjoint perfect matchings). The set of vertices (labeled 123456) and the set of factorizations (labeled ABCDEF) provide one block each. Every pair of factorizations has exactly one perfect matching in common. Suppose factorizations A and B have the common matching with edges 12, 34 and 56. Add three new blocks AB3456, 12AB56, and 1234AB, replacing each edge in the common matching with the factorization labels in turn. Similarly add three more blocks 12CDEF, 34CDEF, and 56CDEF, replacing the factorization labels by the corresponding edge labels of the common matching. Do this for all 15 pairs of factorizations to add 90 new blocks. Finally, take the full set of ${\tbinom {12}{6}}=924$ combinations of 6 objects out of 12, and discard any combination that has 5 or more objects in common with any of the 92 blocks generated so far. Exactly 40 blocks remain, resulting in 2 + 90 + 40 = 132 blocks of the S(5,6,12). This method works because there is an outer automorphism on the symmetric group S6, which maps the vertices to factorizations and the edges to partitions. Permuting the vertices causes the factorizations to permute differently, in accordance with the outer automorphism. The Steiner system S(5, 8, 24) The Steiner system S(5, 8, 24), also known as the Witt design or Witt geometry, was first described by Carmichael (1931) and rediscovered by Witt (1938). This system is connected with many of the sporadic simple groups and with the exceptional 24-dimensional lattice known as the Leech lattice. The automorphism group of S(5, 8, 24) is the Mathieu group M24, and in that context the design is denoted W24 ("W" for "Witt"). Direct lexicographic generation All 8-element subsets of a 24-element set are generated in lexicographic order, and any such subset which differs from some subset already found in fewer than four positions is discarded. The list of octads for the elements 01, 02, 03, ..., 22, 23, 24 is then: 01 02 03 04 05 06 07 08 01 02 03 04 09 10 11 12 01 02 03 04 13 14 15 16 . . (next 753 octads omitted) . 13 14 15 16 17 18 19 20 13 14 15 16 21 22 23 24 17 18 19 20 21 22 23 24 Each single element occurs 253 times somewhere in some octad. Each pair occurs 77 times. Each triple occurs 21 times. Each quadruple (tetrad) occurs 5 times. Each quintuple (pentad) occurs once. Not every hexad, heptad or octad occurs.
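The lexicographic generation just described is short enough to sketch directly. Two 8-sets differ in fewer than four positions exactly when they share five or more elements, which gives the test below (a straightforward run takes a few minutes in CPython):

```python
from itertools import combinations

# Direct lexicographic generation of S(5,8,24): keep an 8-subset of the 24
# points iff it meets every previously kept octad in at most 4 points
# (i.e. differs from each in at least 4 positions).  Octads as bitmasks.
octads = []
for cand in combinations(range(24), 8):
    m = sum(1 << i for i in cand)
    if all(bin(m & o).count("1") <= 4 for o in octads):
        octads.append(m)
print(len(octads))   # 759
```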
Construction from the binary Golay code The 4096 codewords of the 24-bit binary Golay code are generated, and the 759 codewords with a Hamming weight of 8 correspond to the S(5,8,24) system. The Golay code can be constructed by many methods, such as generating all 24-bit binary strings in lexicographic order and discarding those that differ from some earlier one in fewer than 8 positions. The result looks like this: 000000000000000000000000 000000000000000011111111 000000000000111100001111 . . (next 4090 24-bit strings omitted) . 111111111111000011110000 111111111111111100000000 111111111111111111111111 The codewords form a group under the XOR operation. Projective line construction This construction is due to Carmichael (1931).[18] Add a new element, call it ∞, to the 23 elements of the finite field F23 (that is, the integers mod 23). This set, S, of 24 elements can be formally identified with the points of the projective line over F23. Call the following specific subset of size 8, $\{\infty ,0,1,3,12,15,21,22\},$ a "block". (We can take any octad of the extended binary Golay code, seen as a quadratic residue code.) From this block, we obtain the other blocks of the S(5,8,24) system by repeatedly applying the linear fractional transformations: $z'=f(z)={\frac {az+b}{cz+d}},$ where a,b,c,d are in F23 and ad − bc = 1. With the usual conventions of defining f (−d/c) = ∞ and f (∞) = a/c, these functions map the set S onto itself. In geometric language, they are projectivities of the projective line. They form a group under composition which is the projective special linear group PSL(2,23) of order 6072. There are exactly 8 elements of this group that leave the initial block fixed setwise. So there will be 6072/8 = 759 images of that block. These form the octads of S(5,8,24). Construction from the Miracle Octad Generator The Miracle Octad Generator (MOG) is a tool to generate octads, such as those containing specified subsets. It consists of a 4×6 array with certain weights assigned to the rows. In particular, an 8-subset should obey three rules in order to be an octad of S(5,8,24). First, each of the 6 columns should have the same parity, that is, they should all have an odd number of cells or they should all have an even number of cells. Second, the top row should have the same parity as each of the columns. Third, the rows are respectively multiplied by the weights 0, 1, 2, and 3 over the finite field of order 4, and column sums are calculated for the 6 columns, with multiplication and addition using the finite field arithmetic definitions. The resulting column sums should form a valid hexacodeword of the form (a, b, c, a + b + c, 3a + 2b + c, 2a + 3b + c) where a, b, c are also from the finite field of order 4. If the parities of the column sums do not match the parity of the row sum, or each other, or if there do not exist a, b, c such that the column sums form a valid hexacodeword, then that subset of 8 is not an octad of S(5,8,24). The MOG is based on creating a bijection (Conwell 1910, "The three-space PG(3,2) and its group") between the 35 ways to partition an 8-set into two different 4-sets, and the 35 lines of the Fano 3-space PG(3,2). It is also geometrically related (Cullinane, "Symmetry Invariance in a Diamond Ring", Notices of the AMS, pp. A193–194, Feb 1979) to the 35 different ways to partition a 4×4 array into 4 different groups of 4 cells each, such that if the 4×4 array represents a four-dimensional finite affine space, then the groups form a set of parallel subspaces.
See also • Constant weight code • Kirkman's schoolgirl problem • Sylvester–Gallai configuration Notes 1. This property is equivalent to saying that (xy)y = x for all x and y in the idempotent commutative quasigroup. References 1. "Encyclopaedia of Design Theory: t-Designs". Designtheory.org. 2004-10-04. Retrieved 2012-08-17. 2. Keevash, Peter (2014). "The existence of designs". arXiv:1401.3665 [math.CO]. 3. "A Design Dilemma Solved, Minus Designs". Quanta Magazine. 2015-06-09. Retrieved 2015-06-27. 4. Kalai, Gil. "Designs exist!" (PDF). Séminaire Bourbaki. 5. Colbourn & Dinitz 2007, pg.106 6. Östergård & Pottonen 2008 7. Bose, R. C. (1939). "On the Construction of Balanced Incomplete Block Designs". Annals of Eugenics. 9 (4): 353–399. doi:10.1111/j.1469-1809.1939.tb02219.x. 8. T. Skolem. Some remarks on the triple systems of Steiner. Math. Scand. 6 (1958), 273–280. 9. Colbourn & Dinitz 2007, pg.60 10. Colbourn & Dinitz 2007, pg. 497, definition 28.12 11. Denniston, R. H. F. (September 1974). "Sylvester's problem of the 15 schoolgirls". Discrete Mathematics. 9 (3): 229–233. doi:10.1016/0012-365X(74)90004-1. 12. Assmus & Key 1994, pg. 8 13. Lindner & Rodger 1997, pg.3 14. Carmichael 1956, p. 431 15. Beth, Jungnickel & Lenz 1986, p. 196 16. Curtis 1984 17. "EAGTS textbook". 18. Carmichael 1931 References • Assmus, E. F. Jr.; Key, J. D. (1994), "8. Steiner Systems", Designs and Their Codes, Cambridge University Press, pp. 295–316, ISBN 978-0-521-45839-9. • Assmus, E.F.; Key, J.D. (1992), Designs and Their Codes, Cambridge: Cambridge University Press, ISBN 978-0-521-41361-9 • Beth, Thomas; Jungnickel, Dieter; Lenz, Hanfried (1986), Design Theory, Cambridge: Cambridge University Press. 2nd ed. (1999) ISBN 978-0-521-44432-3. • Carmichael, Robert (1931), "Tactical Configurations of Rank Two", American Journal of Mathematics, 53 (1): 217–240, doi:10.2307/2370885, JSTOR 2370885 • Carmichael, Robert D. (1956) [1937], Introduction to the theory of Groups of Finite Order, Dover, ISBN 978-0-486-60300-1 • Colbourn, Charles J.; Dinitz, Jeffrey H. (1996), Handbook of Combinatorial Designs, Boca Raton: Chapman & Hall/ CRC, ISBN 978-0-8493-8948-1, Zbl 0836.00010 • Colbourn, Charles J.; Dinitz, Jeffrey H. (2007), Handbook of Combinatorial Designs (2nd ed.), Boca Raton: Chapman & Hall/ CRC, ISBN 978-1-58488-506-1, Zbl 1101.05001 • Curtis, R.T. (1984), "The Steiner system S(5,6,12), the Mathieu group M12 and the "kitten"", in Atkinson, Michael D. (ed.), Computational group theory (Durham, 1982), London: Academic Press, pp. 353–358, ISBN 978-0-12-066270-8, MR 0760669 • Hughes, D. R.; Piper, F. C. (1985), Design Theory, Cambridge University Press, pp. 173–176, ISBN 978-0-521-35872-9. • Kirkman, Thomas P. (1847), "On a Problem in Combinations", The Cambridge and Dublin Mathematical Journal, II: 191–204. • Lindner, C.C.; Rodger, C.A. (1997), Design Theory, Boca Raton: CRC Press, ISBN 978-0-8493-3986-8 • Östergård, Patric R.J.; Pottonen, Olli (2008), "There exists no Steiner system S(4,5,17)", Journal of Combinatorial Theory, Series A, 115 (8): 1570–1573, doi:10.1016/j.jcta.2008.04.005 • Steiner, J. (1853), "Combinatorische Aufgabe", Journal für die reine und angewandte Mathematik, 1853 (45): 181–182, doi:10.1515/crll.1853.45.181, S2CID 199547187. • Witt, Ernst (1938), "Die 5-Fach transitiven Gruppen von Mathieu", Abh. Math. Sem. Univ. Hamburg, 12: 256–264, doi:10.1007/BF02948947, S2CID 123658601 External links Wikimedia Commons has media related to Steiner systems. • Rowland, Todd & Weisstein, Eric W. "Steiner System".
MathWorld. • Rumov, B.T. (2001) [1994], "Steiner system", Encyclopedia of Mathematics, EMS Press • Steiner systems by Andries E. Brouwer • Implementation of S(5,8,24) by Dr. Alberto Delgado, Gabe Hart, and Michael Kolkebeck • S(5, 8, 24) Software and Listing by Johan E. Mebius
Steinerian In algebraic geometry, a Steinerian of a hypersurface, introduced by Steiner (1854), is the locus of the singular points of its polar quadrics. References • Coolidge, Julian Lowell (1959) [1931], A treatise on algebraic plane curves, New York: Dover Publications, ISBN 978-0-486-49576-7, MR 0120551 • Dolgachev, Igor V. (2012), Classical Algebraic Geometry: a modern view (PDF), Cambridge University Press, ISBN 978-1-107-01765-8 • Steiner, Jakob (1854), "Allgemeine Eigenschaften der algebraischen Curven.", Journal für die Reine und Angewandte Mathematik (in German), 47: 1–6, doi:10.1515/crll.1854.47.1, ISSN 0075-4102
Cayleyan In algebraic geometry, the Cayleyan is a variety associated to a hypersurface, introduced by Arthur Cayley (1844), who named it the pippian in (Cayley 1857) and also called it the Steiner–Hessian. See also • Quippian References • Cayley, Arthur (1844), "Mémoire sur les courbes du troisième ordre", Journal de Mathématiques Pures et Appliquées, 9: 285–293, Collected Papers, I, 183–189 • Cayley, Arthur (1857), "A Memoir on Curves of the Third Order", Philosophical Transactions of the Royal Society of London, The Royal Society, 147: 415–446, doi:10.1098/rstl.1857.0021, ISSN 0080-4614, JSTOR 108626 • Dolgachev, Igor V. (2012), Classical Algebraic Geometry: a modern view (PDF), Cambridge University Press, ISBN 978-1-107-01765-8, archived from the original (PDF) on 2014-05-31, retrieved 2012-04-06
Steiner–Lehmus theorem The Steiner–Lehmus theorem, a theorem in elementary geometry, was formulated by C. L. Lehmus and subsequently proved by Jakob Steiner. It states: Every triangle with two angle bisectors of equal lengths is isosceles. The theorem was first mentioned in 1840 in a letter by C. L. Lehmus to C. Sturm, in which he asked for a purely geometric proof. Sturm passed the request on to other mathematicians and Steiner was among the first to provide a solution. The theorem has been a rather popular topic in elementary geometry ever since, with articles on it published somewhat regularly.[1][2][3] Direct proofs The Steiner–Lehmus theorem can be proved using elementary geometry by proving the contrapositive statement: if a triangle is not isosceles, then it does not have two angle bisectors of equal length. There is some controversy over whether a "direct" proof is possible; allegedly "direct" proofs have been published, but not everyone agrees that these proofs are "direct." For example, there exist simple algebraic expressions for angle bisectors in terms of the sides of the triangle. Equating two of these expressions and algebraically manipulating the equation results in a product of two factors which equals 0; one of the factors is a − b, and the other must be positive, so a − b = 0 and thus a = b. But this may not be considered direct, as one must first argue about why the other factor cannot be 0. John Conway[4] has argued that there can be no "equality-chasing" proof because the theorem (stated algebraically) does not hold over an arbitrary field, or even when negative real numbers are allowed as parameters. A precise definition of a "direct proof" inside both classical and intuitionistic logic has been provided by Victor Pambuccian,[5] who proved, without presenting the direct proofs, that direct proofs must exist in both the classical logic and the intuitionistic logic setting. Ariel Kellison later gave a direct proof.[6] Notes 1. Coxeter, H. S. M. and Greitzer, S. L. "The Steiner–Lehmus Theorem." §1.5 in Geometry Revisited. Washington, DC: Math. Assoc. Amer., pp. 14–16, 1967. 2. Diane and Roy Dowling: The Lasting Legacy of Ludolph Lehmus. Manitoba Math Links – Volume II – Issue 3, Spring 2002 3. Barbara, Roy (2007). "91.66 Steiner-Lehmus, Revisited". The Mathematical Gazette. 91 (522): 528–529. doi:10.1017/S0025557200182233. JSTOR 40378432. S2CID 125997695. 4. Alleged impossibility of "direct" proof of Steiner–Lehmus theorem 5. Pambuccian, Victor (2018), "Negation-free and contradiction-free proof of the Steiner-Lehmus theorem", Notre Dame Journal of Formal Logic, 59: 75–90, doi:10.1215/00294527-2017-0019. 6. Kellison, Ariel (2021), "A Machine-Checked Direct Proof of the Steiner-Lehmus Theorem", arXiv:2112.11182 [cs.LO]. References & further reading • John Horton Conway, Alex Ryba: The Steiner-Lehmus Angle Bisector Theorem. In: Mircea Pitici (ed.): The Best Writing on Mathematics 2015. Princeton University Press, 2016, ISBN 9781400873371, pp. 154–166 • Alexander Ostermann, Gerhard Wanner: Geometry by Its History. Springer, 2012, pp. 224–225 • Beran, David (1992). "SSA and the Steiner-Lehmus Theorem". The Mathematics Teacher. 85 (5): 381–383. doi:10.5951/MT.85.5.0381. JSTOR 27967647. • Parry, C. F. (1978). "A Variation on the Steiner-Lehmus Theme". The Mathematical Gazette. 62 (420): 89–94. doi:10.2307/3617662. JSTOR 3617662. S2CID 125461255. • Lewin, Mordechai (1974). "On the Steiner-Lehmus Theorem". Mathematics Magazine. 47 (2): 87–89. doi:10.1080/0025570X.1974.11976361.
JSTOR 2688873. • S. Abu-Saymeh, M. Hajja, H. A. ShahAli: Another Variation on the Steiner-Lehmus Theme. Forum Geometricorum 8, 2008, pp. 131–140 • Pambuccian, Victor; Struve, Horst; Struve, Rolf (2016). "The Steiner–Lehmus theorem and "triangles with congruent medians are isosceles" hold in weak geometries". Beiträge zur Algebra und Geometrie. 57 (2): 483–497. arXiv:1501.01857. doi:10.1007/s13366-015-0278-y. S2CID 256110198. External links • Weisstein, Eric W. "Steiner–Lehmus theorem". MathWorld. • Paul Yiu: Euclidean Geometry Notes, Lecture Notes, Florida Atlantic University, pp. 16–17 • Torsten Sillke: Steiner–Lehmus Theorem, extensive compilation of proofs on a website of the University of Bielefeld
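As a numeric companion to the algebraic approach mentioned in the Direct proofs section, the following sketch uses the standard angle-bisector length formula $t_a^2 = bc\,(1 - (a/(b+c))^2)$ (the formula is assumed here, not stated in the article):

```python
import math

# Length of the internal bisector from the vertex opposite side a,
# in a triangle with side lengths a, b, c.
def bisector(a, b, c):
    return math.sqrt(b * c * (1.0 - (a / (b + c)) ** 2))

# Scalene triangle: all three bisector lengths are pairwise distinct.
print(bisector(4.0, 5.0, 6.0), bisector(5.0, 6.0, 4.0), bisector(6.0, 4.0, 5.0))
# Isosceles triangle (two equal sides): two bisector lengths coincide.
print(bisector(3.0, 3.0, 4.0), bisector(3.0, 4.0, 3.0))
```

For the scalene triangle the three printed lengths differ, and for the isosceles one two of them coincide, which is exactly the contrapositive content of the theorem.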
Steinhaus theorem In the mathematical field of real analysis, the Steinhaus theorem states that the difference set of a set of positive measure contains an open neighbourhood of zero. It was first proved by Hugo Steinhaus.[1] Statement Let A be a Lebesgue-measurable set on the real line such that the Lebesgue measure of A is not zero. Then the difference set $A-A=\{a-b\mid a,b\in A\}$ contains an open neighbourhood of the origin. The general version of the theorem, first proved by André Weil,[2] states that if G is a locally compact group, and A ⊂ G a subset of positive (left) Haar measure, then $AA^{-1}=\{ab^{-1}\mid a,b\in A\}$ contains an open neighbourhood of unity. The theorem can also be extended to nonmeagre sets with the Baire property. The proof of these extensions, sometimes also called the Steinhaus theorem, is almost identical to the one below. Proof The following simple proof can be found in a collection of problems by the late professor H. M. Martirosian of Yerevan State University, Armenia (in Russian). Recall that for any $\varepsilon >0$ there exists an open set $\,{\cal {U}}$ such that $A\subset {\cal {U}}$ and $\mu ({\cal {U}})<\mu (A)+\varepsilon $. As a consequence, for a given $\alpha \in (1/2,1)$, we can find an appropriate interval $\Delta =(a,b)$ such that, after passing to a suitable subset of $A$ of positive measure, we may assume that $A\subset \Delta $ and $\mu (A)>\alpha (b-a)$. Now assume that $|x|<\delta $, where $\delta =(2\alpha -1)(b-a)$. We will show that the sets $x+A$ and $A$ have common points. Otherwise they would be disjoint, and so $2\mu (A)=\mu \{(x+A)\cup A\}\leq \mu \{(x+\Delta )\cup \Delta \}$. But since $\delta <b-a$, we have $\mu \{(x+\Delta )\cup \Delta \}=b-a+|x|<b-a+\delta $, and we would get $2\mu (A)<b-a+\delta =2\alpha (b-a)$, which contradicts $\mu (A)>\alpha (b-a)$. Hence $(x+A)\cap A\neq \varnothing $ whenever $|x|<\delta $, and it follows immediately that $\{x;|x|<\delta \}\subset A-A$, which is what we needed to establish. Corollary A corollary of this theorem is that any measurable proper subgroup of $(\mathbb {R} ,+)$ is of measure zero. See also • Falconer's conjecture Notes 1. Steinhaus (1920); Väth (2002) 2. Weil (1940) p. 50 References • Steinhaus, Hugo (1920). "Sur les distances des points dans les ensembles de mesure positive" (PDF). Fund. Math. (in French). 1: 93–104. doi:10.4064/fm-1-1-93-104. • Weil, André (1940). L'intégration dans les groupes topologiques et ses applications. Hermann. • Stromberg, K. (1972). "An Elementary Proof of Steinhaus's Theorem". Proceedings of the American Mathematical Society. 36 (1): 308. doi:10.2307/2039082. JSTOR 2039082. • Sadhukhan, Arpan (2020). "An Alternative Proof of Steinhaus's Theorem". American Mathematical Monthly. 127 (4): 330. arXiv:1903.07139. doi:10.1080/00029890.2020.1711693. S2CID 84845966. • Väth, Martin (2002). Integration theory: a second course. World Scientific. ISBN 981-238-115-5.
Wikipedia
Steinhaus longimeter The Steinhaus longimeter, patented by Hugo Steinhaus, is an instrument used to measure the lengths of curves on maps. Description It is a transparent sheet bearing three grids, rotated against one another by 30 degrees, each consisting of parallel lines spaced at equal distances of 3.82 mm. The measurement is done by counting the crossings of the curve with the grid lines. The number of crossings is the approximate length of the curve in millimetres. The design of the Steinhaus longimeter can be seen as an application of the Crofton formula, according to which the length of a curve equals the expected number of times it is crossed by a random line.[1] See also • Opisometer, a mechanical device for measuring curve length by rolling a small wheel along the curve • Dot planimeter, a similar transparency-based device for estimating area, based on Pick's theorem References 1. Maling, D. H. (2016), Measurements from Maps: Principles and Methods of Cartometry, Butterworth-Heinemann, p. 48, ISBN 9780080984124 Bibliography • Hugo Steinhaus: Zur Praxis der Rectification und zum Längenbegriff, Berichte der Sächsischen Akademie der Wissenschaften 82, 120–130, 1930. • Hugo Steinhaus: Przeglad Geogr. 21, 1947. • Hugo Steinhaus: Comptes Rendus Soc. des Sciences et des Lettres de Wrocław, Sér. B, 1949. • Hugo Steinhaus: Length, shape and area, Colloquium Mathematicum 3(1), 1–13, 1954. • Hugo Steinhaus: Mathematical Snapshots, 3rd ed. New York: Dover, pp. 105–110, 1999. External links • Weisstein, Eric W. "Longimeter." From MathWorld — A Wolfram Web Resource. • Information about patent (DRGM 1241513) • Download a PDF recreation of the Longimeter
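The principle behind the instrument can be tested by simulation. The sketch below (illustrative parameters, not from the article) drops a needle of length L at a uniformly random position and orientation onto a single family of parallel lines with spacing d, and compares the average number of crossings with the classical Buffon–Crofton value 2L/(πd); the expected count for the longimeter's three rotated grids is simply three times this:

```python
import math, random

d, L, trials = 1.0, 0.7, 200_000     # line spacing, needle length, drops
random.seed(0)
crossings = 0
for _ in range(trials):
    y0 = random.uniform(0.0, d)          # height of one endpoint
    phi = random.uniform(0.0, math.pi)   # orientation of the needle
    y1 = y0 + L * math.sin(phi)
    # grid lines y = k*d crossed between the two endpoints
    crossings += abs(math.floor(y1 / d) - math.floor(y0 / d))

print(crossings / trials)            # observed mean number of crossings
print(2 * L / (math.pi * d))         # Buffon-Crofton prediction 2L/(pi d)
```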
Wikipedia
Steinitz exchange lemma The Steinitz exchange lemma is a basic theorem in linear algebra used, for example, to show that any two bases for a finite-dimensional vector space have the same number of elements. The result is named after the German mathematician Ernst Steinitz. The result is often called the Steinitz–Mac Lane exchange lemma, also recognizing the generalization[1] by Saunders Mac Lane of Steinitz's lemma to matroids.[2] Statement Let $U$ and $W$ be finite subsets of a vector space $V$. If $U$ is a set of linearly independent vectors, and $W$ spans $V$, then: 1. $|U|\leq |W|$; 2. There is a set $W'\subseteq W$ with $|W'|=|W|-|U|$ such that $U\cup W'$ spans $V$. Proof Suppose $U=\{u_{1},\dots ,u_{m}\}$ and $W=\{w_{1},\dots ,w_{n}\}$. We wish to show that $m\leq n$, and that after rearranging the $w_{j}$ if necessary, the set $\{u_{1},\dotsc ,u_{m},w_{m+1},\dotsc ,w_{n}\}$ spans $V$. We proceed by induction on $m$. For the base case, suppose $m$ is zero. In this case, the claim holds because there are no vectors $u_{i}$, and the set $\{w_{1},\dotsc ,w_{n}\}$ spans $V$ by hypothesis. For the inductive step, assume the proposition is true for $m-1$. By the inductive hypothesis we may reorder the $w_{i}$ so that $\{u_{1},\ldots ,u_{m-1},w_{m},\ldots ,w_{n}\}$ spans $V$. Since $u_{m}\in V$, there exist coefficients $\mu _{1},\ldots ,\mu _{n}$ such that $u_{m}=\sum _{j=1}^{m-1}\mu _{j}u_{j}+\sum _{j=m}^{n}\mu _{j}w_{j}$. At least one of $\{\mu _{m},\ldots ,\mu _{n}\}$ must be non-zero, since otherwise this equality would contradict the linear independence of $\{u_{1},\ldots ,u_{m}\}$; it follows that $m\leq n$. By reordering the $\mu _{m}w_{m},\ldots ,\mu _{n}w_{n}$ if necessary, we may assume that $\mu _{m}$ is nonzero. Therefore, we have $w_{m}={\frac {1}{\mu _{m}}}\left(u_{m}-\sum _{j=1}^{m-1}\mu _{j}u_{j}-\sum _{j=m+1}^{n}\mu _{j}w_{j}\right)$. In other words, $w_{m}$ is in the span of $\{u_{1},\ldots ,u_{m},w_{m+1},\ldots ,w_{n}\}$. Since this span contains each of the vectors $u_{1},\ldots ,u_{m-1},w_{m},w_{m+1},\ldots ,w_{n}$, by the inductive hypothesis it contains $V$. Applications The Steinitz exchange lemma is a basic result in computational mathematics, especially in linear algebra and in combinatorial algorithms.[3] References 1. Mac Lane, Saunders (1936), "Some interpretations of abstract linear dependence in terms of projective geometry", American Journal of Mathematics, The Johns Hopkins University Press, 58 (1): 236–240, doi:10.2307/2371070, JSTOR 2371070. 2. Kung, Joseph P. S., ed. (1986), A Source Book in Matroid Theory, Boston: Birkhäuser, doi:10.1007/978-1-4684-9199-9, ISBN 0-8176-3173-9, MR 0890330. 3. Page v in Stiefel: Stiefel, Eduard L. (1963). An introduction to numerical mathematics (Translated by Werner C. Rheinboldt & Cornelie J. Rheinboldt from the second German ed.). New York: Academic Press. pp. x+286. MR 0181077. • Julio R. Bastida, Field extensions and Galois Theory, Addison–Wesley Publishing Company (1984). External links • Mizar system proof: http://mizar.org/version/current/html/vectsp_9.html#T19
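The proof above is effectively an algorithm. The sketch below (an illustration; numerical rank computations stand in for the coefficient argument, and all names and inputs are arbitrary choices) greedily keeps the elements of W that enlarge the span of U; when W is a basis of R^d, the kept set W′ has exactly |W| − |U| elements and U ∪ W′ spans, as the lemma asserts:

```python
import numpy as np

def exchange(U, W, tol=1e-10):
    """Greedily keep rows of W that enlarge the span of U; the result W'
    together with U spans the same space as W (a sketch of the lemma's
    exchange step, with numerical rank tests in place of coefficients)."""
    basis = [u for u in U]
    kept = []
    for w in W:
        if np.linalg.matrix_rank(np.vstack(basis + [w]), tol=tol) > len(basis):
            basis.append(w)          # w is outside the current span
            kept.append(w)
    return np.array(kept)

U = np.array([[1.0, 1.0, 0.0]])      # linearly independent set
W = np.eye(3)                        # spans R^3
Wp = exchange(U, W)
print(Wp)                                         # |W'| = |W| - |U| = 2
print(np.linalg.matrix_rank(np.vstack([U, Wp])))  # 3, so U with W' spans
```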
Wikipedia
Steinmetz curve A Steinmetz curve is the curve of intersection of two right circular cylinders of radii $a$ and $b,$ whose axes intersect perpendicularly. In the case $a=b$ the Steinmetz curves are the edges of a Steinmetz solid. If the cylinder axes are the x- and y-axes and $a\leq b$, then the Steinmetz curves are given by the parametric equations: ${\begin{aligned}x(t)&=a\cos t\\y(t)&=\pm {\sqrt {b^{2}-a^{2}\sin ^{2}t}}\\z(t)&=a\sin t\end{aligned}}$ It is named after the mathematician Charles Proteus Steinmetz, along with Steinmetz's equation, Steinmetz solids, and Steinmetz equivalent circuit theory. In the case when the two cylinders have equal radii the curve degenerates to two intersecting ellipses. See also • Cylinder References 1. Abbena, Elsa; Salamon, Simon; Gray, Alfred (2006). Modern Differential Geometry of Curves and Surfaces with Mathematica (3rd ed.). Chapman and Hall/CRC. ISBN 978-1584884484. 2. Gray, A. Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed. Boca Raton, FL: CRC Press, 1997. 3. Weisstein, Eric W. "Steinmetz Curve". Wolfram MathWorld. Retrieved October 28, 2018.
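Under this parametrization the points satisfy x² + z² = a², the equation of the radius-a cylinder about the y-axis, and y² + z² = b², the equation of the radius-b cylinder about the x-axis. A short numerical check (radii chosen arbitrarily for illustration):

```python
import numpy as np

a, b = 1.0, 2.0                              # radii with a <= b
t = np.linspace(0.0, 2.0 * np.pi, 1000)
x = a * np.cos(t)
y = np.sqrt(b**2 - a**2 * np.sin(t)**2)      # the '+' branch
z = a * np.sin(t)

assert np.allclose(x**2 + z**2, a**2)        # cylinder about the y-axis
assert np.allclose(y**2 + z**2, b**2)        # cylinder about the x-axis
print("both cylinder equations hold along the curve")
```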
Wikipedia
Stein–Strömberg theorem In mathematics, the Stein–Strömberg theorem or Stein–Strömberg inequality is a result in measure theory concerning the Hardy–Littlewood maximal operator. The result is foundational in the study of the problem of differentiation of integrals. The result is named after the mathematicians Elias M. Stein and Jan-Olov Strömberg. Statement of the theorem Let λn denote n-dimensional Lebesgue measure on n-dimensional Euclidean space Rn and let M denote the Hardy–Littlewood maximal operator: for a function f : Rn → R, Mf : Rn → R is defined by $Mf(x)=\sup _{r>0}{\frac {1}{\lambda ^{n}{\big (}B_{r}(x){\big )}}}\int _{B_{r}(x)}|f(y)|\,\mathrm {d} \lambda ^{n}(y),$ where Br(x) denotes the open ball of radius r with center x. Then, for each p > 1, there is a constant Cp > 0 such that, for all natural numbers n and functions f ∈ Lp(Rn; R), $\|Mf\|_{L^{p}}\leq C_{p}\|f\|_{L^{p}}.$ In general, a maximal operator M is said to be of strong type (p, p) if $\|Mf\|_{L^{p}}\leq C_{p,n}\|f\|_{L^{p}}$ for all f ∈ Lp(Rn; R). Thus, the Stein–Strömberg theorem is the statement that the Hardy–Littlewood maximal operator is of strong type (p, p) uniformly with respect to the dimension n. References • Stein, Elias M.; Strömberg, Jan-Olov (1983). "Behavior of maximal functions in Rn for large n". Ark. Mat. 21 (2): 259–269. doi:10.1007/BF02384314. MR 727348 • Tišer, Jaroslav (1988). "Differentiation theorem for Gaussian measures on Hilbert space". Trans. Amer. Math. Soc. 308 (2): 655–666. doi:10.2307/2001096. MR 951621
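To make the operator concrete, the following one-dimensional discrete sketch (an illustration only: windows are clipped at the ends of the grid, and a single fixed dimension can say nothing about the uniformity in n that the theorem provides) computes a discrete analogue of Mf and the resulting ratio of L^p norms:

```python
import numpy as np

def maximal(f):
    """Discrete centered maximal function: the largest average of |f|
    over symmetric windows around each grid point (windows are clipped
    at the ends of the grid)."""
    n = len(f)
    Mf = np.abs(f).astype(float)
    for i in range(n):
        for r in range(1, n):
            lo, hi = max(0, i - r), min(n, i + r + 1)
            Mf[i] = max(Mf[i], np.abs(f[lo:hi]).mean())
    return Mf

f = np.zeros(200)
f[90:110] = 1.0                     # indicator of an interval
Mf = maximal(f)
p = 2.0
ratio = (np.sum(Mf ** p) / np.sum(np.abs(f) ** p)) ** (1 / p)
print(ratio)                        # a finite ||Mf||_p / ||f||_p ratio
```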
Wikipedia
Stein's example In decision theory and estimation theory, Stein's example (also known as Stein's phenomenon or Stein's paradox) is the observation that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower expected mean squared error) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University, who discovered the phenomenon in 1955.[1] An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent. If one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse. Formal statement The following is the simplest form of the paradox, the special case in which the number of observations is equal to the number of parameters to be estimated. Let ${\boldsymbol {\theta }}$ be a vector consisting of $n\geq 3$ unknown parameters. To estimate these parameters, a single measurement $X_{i}$ is performed for each parameter $\theta _{i}$, resulting in a vector $\mathbf {X} $ of length $n$. Suppose the measurements are known to be independent, Gaussian random variables, with mean ${\boldsymbol {\theta }}$ and variance 1, i.e., $\mathbf {X} \sim {\mathcal {N}}({\boldsymbol {\theta }},\mathbf {I} _{n})$. Thus, each parameter is estimated using a single noisy measurement, and each measurement is equally inaccurate. Under these conditions, it is intuitive and common to use each measurement as an estimate of its corresponding parameter. This so-called "ordinary" decision rule can be written as ${\hat {\boldsymbol {\theta }}}=\mathbf {X} $, which is the maximum likelihood estimator (MLE). The quality of such an estimator is measured by its risk function. A commonly used risk function is the mean squared error, defined as $\mathbb {E} [\|{\boldsymbol {\theta }}-{\hat {\boldsymbol {\theta }}}\|^{2}]$. Surprisingly, it turns out that the "ordinary" decision rule is suboptimal (inadmissible) in terms of mean squared error when $n\geq 3$. In other words, in the setting discussed here, there exist alternative estimators which always achieve lower mean squared error, no matter what the value of ${\boldsymbol {\theta }}$ is. For a given ${\boldsymbol {\theta }}$ one could obviously define a perfect "estimator" which is always just ${\boldsymbol {\theta }}$, but this estimator would be bad for other values of ${\boldsymbol {\theta }}$. The estimators of Stein's paradox are, for a given ${\boldsymbol {\theta }}$, better than the "ordinary" decision rule $\mathbf {X} $ for some $\mathbf {X} $ but necessarily worse for others. It is only on average that they are better. More accurately, an estimator ${\hat {\boldsymbol {\theta }}}_{1}$ is said to dominate another estimator ${\hat {\boldsymbol {\theta }}}_{2}$ if, for all values of ${\boldsymbol {\theta }}$, the risk of ${\hat {\boldsymbol {\theta }}}_{1}$ is lower than, or equal to, the risk of ${\hat {\boldsymbol {\theta }}}_{2}$, and if the inequality is strict for some ${\boldsymbol {\theta }}$. An estimator is said to be admissible if no other estimator dominates it, otherwise it is inadmissible. 
Thus, Stein's example can be simply stated as follows: The "ordinary" decision rule for estimating the mean of a multivariate Gaussian distribution is inadmissible under mean squared error risk. Many simple, practical estimators achieve better performance than the "ordinary" decision rule. The best-known example is the James–Stein estimator, which shrinks $\mathbf {X} $ towards a particular point (such as the origin) by an amount inversely proportional to the distance of $\mathbf {X} $ from that point. For a sketch of the proof of this result, see the sketched proof below. An alternative proof is due to Larry Brown: he proved that the ordinary estimator for an $n$-dimensional multivariate normal mean vector is admissible if and only if the $n$-dimensional Brownian motion is recurrent.[2] Since the Brownian motion is not recurrent for $n\geq 3$, the MLE is not admissible for $n\geq 3$. An intuitive explanation For any particular value of ${\boldsymbol {\theta }}$ the new estimator will improve at least one of the individual mean square errors $\mathbb {E} [(\theta _{i}-{\hat {\theta }}_{i})^{2}].$ This is not hard: for instance, if ${\boldsymbol {\theta }}$ is between −1 and 1, and $\sigma =1$, then an estimator that linearly shrinks $\mathbf {X} $ towards 0 by 0.5 (i.e., $\operatorname {sign} (X_{i})\max(|X_{i}|-0.5,0)$, soft thresholding with threshold $0.5$) will have a lower mean square error than $\mathbf {X} $ itself. But there are other values of ${\boldsymbol {\theta }}$ for which this estimator is worse than $\mathbf {X} $ itself. The trick of the Stein estimator, and others that yield the Stein paradox, is that they adjust the shift in such a way that there is always (for any ${\boldsymbol {\theta }}$ vector) at least one $X_{i}$ whose mean square error is improved, and its improvement more than compensates for any degradation in mean square error that might occur for another ${\hat {\theta }}_{i}$. The trouble is that, without knowing ${\boldsymbol {\theta }}$, you don't know which of the $n$ mean square errors are improved, so you can't use the Stein estimator only for those parameters. An example of the above setting occurs in channel estimation in telecommunications, where several distinct factors affect overall channel performance. Implications Stein's example is surprising, since the "ordinary" decision rule is intuitive and commonly used. In fact, numerous methods for estimator construction, including maximum likelihood estimation, best linear unbiased estimation, least squares estimation and optimal equivariant estimation, all result in the "ordinary" estimator. Yet, as discussed above, this estimator is suboptimal. Example To demonstrate the unintuitive nature of Stein's example, consider the following real-world example. Suppose we are to estimate three unrelated parameters, such as the US wheat yield for 1993, the number of spectators at the Wimbledon tennis tournament in 2001, and the weight of a randomly chosen candy bar from the supermarket. Suppose we have independent Gaussian measurements of each of these quantities. Stein's example now tells us that we can get a better estimate (on average) for the vector of three parameters by simultaneously using the three unrelated measurements. At first sight it appears that somehow we get a better estimator for US wheat yield by measuring some other unrelated statistics such as the number of spectators at Wimbledon and the weight of a candy bar.
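The average gain is easy to reproduce by simulation. The following minimal Monte Carlo sketch (the true means, trial count, and seed are arbitrary illustrative choices, not from the article) compares the total squared error of X with that of the James–Stein estimator shrinking towards the origin:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 3, 200_000
theta = np.array([1.0, -0.5, 2.0])           # arbitrary true means
X = theta + rng.standard_normal((trials, n))

sq = np.sum(X**2, axis=1, keepdims=True)
js = (1.0 - (n - 2) / sq) * X                # shrink X towards the origin

risk_mle = np.mean(np.sum((X - theta)**2, axis=1))
risk_js = np.mean(np.sum((js - theta)**2, axis=1))
print(risk_mle)   # close to n = 3
print(risk_js)    # strictly smaller, whatever theta is chosen
```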
However, we have not obtained a better estimator for US wheat yield by itself, but we have produced an estimator for the vector of the means of all three random variables, which has a reduced total risk. This occurs because the cost of a bad estimate in one component of the vector is compensated by a better estimate in another component. Also, a specific set of the three estimated mean values obtained with the new estimator will not necessarily be better than the ordinary set (the measured values). It is only on average that the new estimator is better. Sketched proof The risk function of the decision rule $d(\mathbf {x} )=\mathbf {x} $ is $R(\theta ,d)=\operatorname {E} _{\theta }[|{\boldsymbol {\theta }}-\mathbf {X} |^{2}]$ $=\int ({\boldsymbol {\theta }}-\mathbf {x} )^{T}({\boldsymbol {\theta }}-\mathbf {x} )\left({\frac {1}{2\pi }}\right)^{n/2}e^{(-1/2)({\boldsymbol {\theta }}-\mathbf {x} )^{T}({\boldsymbol {\theta }}-\mathbf {x} )}dx$ $=n.$ Now consider the decision rule $d'(\mathbf {x} )=\mathbf {x} -{\frac {\alpha }{|\mathbf {x} |^{2}}}\mathbf {x} ,$ where $\alpha =n-2$. We will show that $d'$ is a better decision rule than $d$. The risk function is $R(\theta ,d')=\operatorname {E} _{\theta }\left[\left|\mathbf {\theta -X} +{\frac {\alpha }{|\mathbf {X} |^{2}}}\mathbf {X} \right|^{2}\right]$ $=\operatorname {E} _{\theta }\left[|\mathbf {\theta -X} |^{2}+2(\mathbf {\theta -X} )^{T}{\frac {\alpha }{|\mathbf {X} |^{2}}}\mathbf {X} +{\frac {\alpha ^{2}}{|\mathbf {X} |^{4}}}|\mathbf {X} |^{2}\right]$ $=\operatorname {E} _{\theta }\left[|\mathbf {\theta -X} |^{2}\right]+2\alpha \operatorname {E} _{\theta }\left[{\frac {\mathbf {(\theta -X)} ^{T}\mathbf {X} }{|\mathbf {X} |^{2}}}\right]+\alpha ^{2}\operatorname {E} _{\theta }\left[{\frac {1}{|\mathbf {X} |^{2}}}\right]$ — a quadratic in $\alpha $. We may simplify the middle term by considering a general "well-behaved" function $h:\mathbf {x} \mapsto h(\mathbf {x} )\in \mathbb {R} $ and using integration by parts. For $1\leq i\leq n$, for any continuously differentiable $h$ growing sufficiently slowly for large $x_{i}$ we have: $\operatorname {E} _{\theta }[(\theta _{i}-X_{i})h(\mathbf {X} )\mid X_{j}=x_{j}(j\neq i)]=\int (\theta _{i}-x_{i})h(\mathbf {x} )\left({\frac {1}{2\pi }}\right)^{n/2}e^{-(1/2)({\boldsymbol {\theta }}-\mathbf {x} )^{T}({\boldsymbol {\theta }}-\mathbf {x} )}dx_{i}$ $=\left[h(\mathbf {x} )\left({\frac {1}{2\pi }}\right)^{n/2}e^{-(1/2)({\boldsymbol {\theta }}-\mathbf {x} )^{T}({\boldsymbol {\theta }}-\mathbf {x} )}\right]_{x_{i}=-\infty }^{\infty }-\int {\frac {\partial h}{\partial x_{i}}}(\mathbf {x} )\left({\frac {1}{2\pi }}\right)^{n/2}e^{-(1/2)({\boldsymbol {\theta }}-\mathbf {x} )^{T}({\boldsymbol {\theta }}-\mathbf {x} )}dx_{i}$ $=-\operatorname {E} _{\theta }\left[{\frac {\partial h}{\partial x_{i}}}(\mathbf {X} )\mid X_{j}=x_{j}(j\neq i)\right].$ Therefore, $\operatorname {E} _{\theta }[(\theta _{i}-X_{i})h(\mathbf {X} )]=-\operatorname {E} _{\theta }\left[{\frac {\partial h}{\partial x_{i}}}(\mathbf {X} )\right].$ (This result is known as Stein's lemma.) 
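The identity lends itself to a direct numerical check. The sketch below (illustrative parameters; the smoothing ε anticipates the regularization used at the end of the proof) compares Monte Carlo estimates of the two sides of Stein's lemma for h(x) = x_i/(ε + |x|²):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, i = 3, 0.5, 0
theta = np.array([0.3, -1.0, 0.8])           # arbitrary true means
X = theta + rng.standard_normal((1_000_000, n))

sq = np.sum(X**2, axis=1)
h = X[:, i] / (eps + sq)                     # regularised h
dh = 1.0 / (eps + sq) - 2.0 * X[:, i]**2 / (eps + sq)**2

lhs = np.mean((theta[i] - X[:, i]) * h)      # E[(theta_i - X_i) h(X)]
rhs = -np.mean(dh)                           # -E[dh/dx_i (X)]
print(lhs, rhs)   # the two averages agree up to Monte Carlo error
```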
Now, we choose $h(\mathbf {x} )={\frac {x_{i}}{|\mathbf {x} |^{2}}}.$ If $h$ met the "well-behaved" condition (it does not, but this can be remedied, as discussed below), we would have ${\frac {\partial h}{\partial x_{i}}}={\frac {1}{|\mathbf {x} |^{2}}}-{\frac {2x_{i}^{2}}{|\mathbf {x} |^{4}}}$ and so $\operatorname {E} _{\theta }\left[{\frac {({\boldsymbol {\theta }}-\mathbf {X} )^{T}\mathbf {X} }{|\mathbf {X} |^{2}}}\right]=\sum _{i=1}^{n}\operatorname {E} _{\theta }\left[(\theta _{i}-X_{i}){\frac {X_{i}}{|\mathbf {X} |^{2}}}\right]$ $=-\sum _{i=1}^{n}\operatorname {E} _{\theta }\left[{\frac {1}{|\mathbf {X} |^{2}}}-{\frac {2X_{i}^{2}}{|\mathbf {X} |^{4}}}\right]$ $=-(n-2)\operatorname {E} _{\theta }\left[{\frac {1}{|\mathbf {X} |^{2}}}\right].$ Then returning to the risk function of $d'$: $R(\theta ,d')=n-2\alpha (n-2)\operatorname {E} _{\theta }\left[{\frac {1}{|\mathbf {X} |^{2}}}\right]+\alpha ^{2}\operatorname {E} _{\theta }\left[{\frac {1}{|\mathbf {X} |^{2}}}\right].$ This quadratic in $\alpha $ is minimized at $\alpha =n-2$, giving $R(\theta ,d')=R(\theta ,d)-(n-2)^{2}\operatorname {E} _{\theta }\left[{\frac {1}{|\mathbf {X} |^{2}}}\right]$ which of course satisfies $R(\theta ,d')<R(\theta ,d)$, making $d$ an inadmissible decision rule. It remains to justify the use of $h(\mathbf {X} )={\frac {\mathbf {X} }{|\mathbf {X} |^{2}}}.$ This function is not continuously differentiable, since it is singular at $\mathbf {x} =0$. However, the function $h(\mathbf {X} )={\frac {\mathbf {X} }{\varepsilon +|\mathbf {X} |^{2}}}$ is continuously differentiable, and after following the algebra through and letting $\varepsilon \to 0$, one obtains the same result. See also • James–Stein estimator Notes 1. Efron, B.; Morris, C. (1977), "Stein's paradox in statistics" (PDF), Scientific American, 236 (5): 119–127, Bibcode:1977SciAm.236e.119E, doi:10.1038/scientificamerican0577-119 2. Brown, L. D. (1971). "Admissible Estimators, Recurrent Diffusions, and Insoluble Boundary Value Problems". The Annals of Mathematical Statistics. 42 (3): 855–903. doi:10.1214/aoms/1177693318. ISSN 0003-4851. References • Lehmann, E. L.; Casella, G. (1998), "ch. 5", Theory of Point Estimation (2nd ed.), ISBN 0-471-05849-1 • Stein, C. (1956). "Inadmissibility of the usual estimator for the mean of a multivariate distribution". Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability. Vol. 1. pp. 197–206. MR 0084922. • Samworth, R. J. (2012), "Stein's Paradox" (PDF), Eureka, 62: 38–41
Wikipedia
Steklov Institute of Mathematics Steklov Institute of Mathematics or Steklov Mathematical Institute (Russian: Математический институт имени В.А.Стеклова) is a premier research institute based in Moscow, specialized in mathematics, and a part of the Russian Academy of Sciences. The institute is named after Vladimir Andreevich Steklov, who in 1919 founded the Institute of Physics and Mathematics in Leningrad. In 1934, this institute was split into separate parts for physics and mathematics, and the mathematical part became the Steklov Institute.[1] At the same time, it was moved to Moscow.[2] The first director of the Steklov Institute was Ivan Matveyevich Vinogradov.[3] From 1961 to 1964, the institute's director was the notable mathematician Sergei Chernikov.[4] The old building of the Institute in Leningrad became its Department in Leningrad. Today, that department has become a separate institute, called the St. Petersburg Department of Steklov Institute of Mathematics of Russian Academy of Sciences or PDMI RAS, located in Saint Petersburg, Russia. The name St. Petersburg Department is misleading, however, because the St. Petersburg Department is now an independent institute. In 1966, the Moscow-based Keldysh Institute of Applied Mathematics (Russian: Институт прикладной математики им. М.В.Келдыша) split off from the Steklov Institute. References 1. Sinai, Yakov (2003), Russian Mathematicians in the 20th Century, World Scientific, p. 38, ISBN 9789812383853. 2. Sinai (2003), pp. 530 and 697. 3. Sinai (2003), p. 189. 4. Ershov, Y.L.; et al. (1988). "Sergei Nikolaevich Chernikov (obituary)". Russian Math. Surveys. 43 (2): 153–155. Bibcode:1988RuMaS..43..153E. doi:10.1070/RM1988v043n02ABEH001714. External links • Steklov Mathematical Institute • Petersburg Department of Steklov Institute of Mathematics 55°41′41″N 37°33′52″E
Wikipedia
Stellated octahedron The stellated octahedron is the only stellation of the octahedron. It is also called the stella octangula (Latin for "eight-pointed star"), a name given to it by Johannes Kepler in 1609, though it was known to earlier geometers. It was depicted in Pacioli's De Divina Proportione, 1509.[2] Stellated octahedron Seen as a compound of two regular tetrahedra (red and yellow) TypeRegular compound Coxeter symbol{4,3}[2{3,3}]{3,4}[1] Schläfli symbols{{3,3}} a{4,3} ß{2,4} ßr{2,2} Coxeter diagrams ∪ Stellation coreOctahedron Convex hullCube IndexUC4, W19 Polyhedra2 tetrahedra Faces8 triangles Edges12 Vertices8 DualSelf-dual Symmetry group Coxeter group Oh, [4,3], order 48 D4h, [4,2], order 16 D2h, [2,2], order 8 D3d, [2+,6], order 12 Subgroup restricting to one constituent Td, [3,3], order 24 D2d, [2+,4], order 8 D2, [2,2]+, order 4 C3v, [3], order 6 It is the simplest of the five regular polyhedral compounds, and the only regular compound of two tetrahedra. It is also the least dense of the regular polyhedral compounds, having a density of 2. It can be seen as a 3D extension of the hexagram: the hexagram is a two-dimensional shape formed from two overlapping equilateral triangles, centrally symmetric to each other, and in the same way the stellated octahedron can be formed from two centrally symmetric overlapping tetrahedra. This can be generalized to any number of higher dimensions; the four-dimensional equivalent construction is the compound of two 5-cells. It can also be seen as one of the stages in the construction of a 3D Koch snowflake, a fractal shape formed by repeated attachment of smaller tetrahedra to each triangular face of a larger figure. The first stage of the construction of the Koch snowflake is a single central tetrahedron, and the second stage, formed by adding four smaller tetrahedra to the faces of the central tetrahedron, is the stellated octahedron. Construction The Cartesian coordinates of the stellated octahedron are as follows: (±1/2, ±1/2, 0) (0, 0, ±1/√2) (±1, 0, ±1/√2) (0, ±1, ±1/√2) The stellated octahedron can be constructed in several ways: • It is a stellation of the regular octahedron, sharing the same face planes. (See Wenninger model W19.) In perspective Stellation plane The only stellation of a regular octahedron, with one stellation plane in yellow. • It is also a regular polyhedron compound, when constructed as the union of two regular tetrahedra (a regular tetrahedron and its dual tetrahedron). • It can be obtained as an augmentation of the regular octahedron, by adding tetrahedral pyramids on each face. In this construction it has the same topology as the convex Catalan solid, the triakis octahedron, which has much shorter pyramids. • It is a facetting of the cube, sharing the vertex arrangement. • It can be seen as a {4/2} antiprism; with {4/2} being a tetragram, a compound of two dual digons, and the tetrahedron seen as a digonal antiprism, this can be seen as a compound of two digonal antiprisms. • It can be seen as a net of a four-dimensional octahedral pyramid, consisting of a central octahedron surrounded by eight tetrahedra. Facetting of a cube A single diagonal triangle facetting in red Related concepts A compound of two spherical tetrahedra can be constructed, as illustrated. The two tetrahedra of the compound view of the stellated octahedron are "desmic", meaning that (when interpreted as a line in projective space) each edge of one tetrahedron crosses two opposite edges of the other tetrahedron.
One of these two crossings is visible in the stellated octahedron; the other crossing occurs at a point at infinity of the projective space, where each edge of one tetrahedron crosses the parallel edge of the other tetrahedron. These two tetrahedra can be completed to a desmic system of three tetrahedra, where the third tetrahedron has as its four vertices the three crossing points at infinity and the centroid of the two finite tetrahedra. The same twelve tetrahedron vertices also form the points of Reye's configuration. The stella octangula numbers are figurate numbers that count the number of balls that can be arranged into the shape of a stellated octahedron. They are 0, 1, 14, 51, 124, 245, 426, 679, 1016, 1449, 1990, .... (sequence A007588 in the OEIS) In popular culture The stellated octahedron appears with several other polyhedra and polyhedral compounds in M. C. Escher's print "Stars",[3] and provides the central form in Escher's Double Planetoid (1949).[4] One of the stellated octahedra in the Plaza de Europa, Zaragoza The obelisk in the center of the Plaza de Europa in Zaragoza, Spain, is surrounded by twelve stellated octahedral lampposts, shaped to form a three-dimensional version of the Flag of Europe.[5] Some modern mystics have associated this shape with the "merkaba",[6] which according to them is a "counter-rotating energy field" named from an ancient Egyptian word.[7] However, the word "merkaba" is actually Hebrew, and more properly refers to a chariot in the visions of Ezekiel.[8] The resemblance between this shape and the two-dimensional star of David has also been frequently noted.[9] References 1. H.S.M. Coxeter, Regular Polytopes, (3rd edition, 1973), Dover edition, ISBN 0-486-61480-8, 3.6 The five regular compounds, pp.47-50, 6.2 Stellating the Platonic solids, pp.96-104 2. Barnes, John (2009), "Shapes and Solids", Gems of Geometry, Springer, pp. 25–56, doi:10.1007/978-3-642-05092-3_2, ISBN 978-3-642-05091-6. 3. Hart, George W. (1996), "The Polyhedra of M.C. Escher", Virtual Polyhedra. 4. Coxeter, H. S. M. (1985), "A special book review: M. C. Escher: His life and complete graphic work", The Mathematical Intelligencer, 7 (1): 59–69, doi:10.1007/BF03023010, S2CID 189887063. See in particular p. 61. 5. "Obelisco" [Obelisk], Zaragoza es Cultura (in Spanish), Ayuntamiento de Zaragoza, retrieved 2021-10-19 6. Dannelley, Richard (1995), Sedona: Beyond the Vortex: Activating the Planetary Ascension Program with Sacred Geometry, the Vortex, and the Merkaba, Light Technology Publishing, p. 14, ISBN 9781622336708 7. Melchizedek, Drunvalo (2000), The Ancient Secret of the Flower of Life: An Edited Transcript of the Flower of Life Workshop Presented Live to Mother Earth from 1985 to 1994 -, Volume 1, Light Technology Publishing, p. 4, ISBN 9781891824173 8. Patzia, Arthur G.; Petrotta, Anthony J. (2010), Pocket Dictionary of Biblical Studies: Over 300 Terms Clearly & Concisely Defined, The IVP Pocket Reference Series, InterVarsity Press, p. 78, ISBN 9780830867028 9. Brisson, David W. (1978), Hypergraphics: visualizing complex relationships in art, science, and technology, Westview Press for the American Association for the Advancement of Science, p. 220, The Stella octangula is the 3-d analog of the Star of David External links Wikimedia Commons has media related to Stellated octahedron. • Eric W. Weisstein, Stella Octangula (Compound of two tetrahedra) at MathWorld. • Klitzing, Richard, "3D compound"
Wikipedia
Stella octangula number In mathematics, a stella octangula number is a figurate number based on the stella octangula, of the form n(2n² − 1).[1][2] The sequence of stella octangula numbers is 0, 1, 14, 51, 124, 245, 426, 679, 1016, 1449, 1990, ... (sequence A007588 in the OEIS)[1] Only two of these numbers are square. Ljunggren's equation There are only two positive square stella octangula numbers, 1 and 9653449 = 3107² = (13 × 239)², corresponding to n = 1 and n = 169 respectively.[1][3] The elliptic curve describing the square stella octangula numbers, $m^{2}=n(2n^{2}-1)$ may be placed in the equivalent Weierstrass form $x^{2}=y^{3}-2y$ by the change of variables x = 2m, y = 2n. Because the two factors n and 2n² − 1 of the square number m² are relatively prime, they must each be squares themselves, and the second change of variables $X=m/{\sqrt {n}}$ and $Y={\sqrt {n}}$ leads to Ljunggren's equation $X^{2}=2Y^{4}-1$[3] A theorem of Siegel states that every elliptic curve has only finitely many integer solutions, and Wilhelm Ljunggren (1942) found a difficult proof that the only integer solutions to his equation were (1,1) and (239,13), corresponding to the two square stella octangula numbers.[4] Louis J. Mordell conjectured that the proof could be simplified, and several later authors published simplifications.[3][5][6] Additional applications The stella octangula numbers arise in a parametric family of instances of the crossed ladders problem in which the lengths and heights of the ladders and the height of their crossing point are all integers. In these instances, the ratio between the heights of the two ladders is a stella octangula number.[7] References 1. Sloane, N. J. A. (ed.), "Sequence A007588 (Stella octangula numbers: n*(2*n^2 - 1))", The On-Line Encyclopedia of Integer Sequences, OEIS Foundation. 2. Conway, John; Guy, Richard (1996), The Book of Numbers, Springer, p. 51, ISBN 978-0-387-97993-9. 3. Siksek, Samir (1995), Descents on Curves of Genus I (PDF), Ph.D. thesis, University of Exeter, pp. 16–17. 4. Ljunggren, Wilhelm (1942), "Zur Theorie der Gleichung x² + 1 = Dy⁴", Avh. Norske Vid. Akad. Oslo. I., 1942 (5): 27, MR 0016375. 5. Steiner, Ray; Tzanakis, Nikos (1991), "Simplifying the solution of Ljunggren's equation X² + 1 = 2Y⁴" (PDF), Journal of Number Theory, 37 (2): 123–132, doi:10.1016/S0022-314X(05)80029-0, MR 1092598. 6. Draziotis, Konstantinos A. (2007), "The Ljunggren equation revisited", Colloquium Mathematicum, 109 (1): 9–11, doi:10.4064/cm109-1-2, MR 2308822. 7. Bremner, A.; Høibakk, R.; Lukkassen, D. (2009), "Crossed ladders and Euler's quartic" (PDF), Annales Mathematicae et Informaticae, 36: 29–41, MR 2580898.
External links • Weisstein, Eric W., "Stella Octangula Number", MathWorld
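Ljunggren's result is easy to probe by brute force. The sketch below (the search bound is an arbitrary choice) generates the values n(2n² − 1) and tests them for being perfect squares, recovering exactly n = 1 and n = 169 within the range:

```python
import math

hits = []
for n in range(1, 1_000_000):
    v = n * (2 * n * n - 1)      # the n-th stella octangula number
    r = math.isqrt(v)
    if r * r == v:
        hits.append((n, v))
print(hits)   # [(1, 1), (169, 9653449)]; note 9653449 = 3107**2
```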
Wikipedia
Barycentric subdivision In mathematics, the barycentric subdivision is a standard way to subdivide a given simplex into smaller ones. Its extension to simplicial complexes is a canonical method for refining them. Therefore, the barycentric subdivision is an important tool in algebraic topology. Motivation The barycentric subdivision is an operation on simplicial complexes. In algebraic topology it is sometimes useful to replace the original spaces with simplicial complexes via triangulations: the substitution makes it possible to assign combinatorial invariants such as the Euler characteristic to the spaces. One can ask if there is an analogous way to replace the continuous functions defined on the topological spaces by functions that are linear on the simplices and which are homotopic to the original maps (see also simplicial approximation). In general, such an assignment requires a refinement of the given complex, meaning that one replaces bigger simplices by unions of smaller simplices. A standard way to carry out such a refinement is the barycentric subdivision. Moreover, barycentric subdivision induces maps on homology groups and is helpful for computational concerns; see excision and the Mayer–Vietoris sequence. Definition Subdivision of simplicial complexes Let ${\mathcal {S}}\subset \mathbb {R} ^{n}$ be a geometric simplicial complex. A complex ${\mathcal {S'}}$ is said to be a subdivision of ${\mathcal {S}}$ if • each simplex of ${\mathcal {S'}}$ is contained in a simplex of ${\mathcal {S}}$ • each simplex of ${\mathcal {S}}$ is a finite union of simplices of ${\mathcal {S'}}$ These conditions imply that ${\mathcal {S}}$ and ${\mathcal {S'}}$ are equal as sets and as topological spaces; only the simplicial structure changes.[1] Barycentric subdivision of a simplex For a simplex $\Delta $ spanned by points $p_{0},...,p_{n}$, the barycenter is defined to be the point $b_{\Delta }={\frac {1}{n+1}}(p_{0}+p_{1}+...+p_{n})$. To define the subdivision, we will consider a simplex as a simplicial complex that contains only one simplex of maximal dimension, namely the simplex itself. The barycentric subdivision of a simplex can be defined inductively by its dimension. For points, i.e. simplices of dimension 0, the barycentric subdivision is defined as the point itself. Suppose then for a simplex $\Delta $ of dimension $n$ that its faces $\Delta _{i}$ of dimension $n-1$ are already divided. Therefore, there exist simplices $\Delta _{i,1},\;\Delta _{i,2},...,\Delta _{i,n!}$ covering $\Delta _{i}$. The barycentric subdivision is then defined to be the geometric simplicial complex whose maximal simplices of dimension $n$ are the convex hulls of $\Delta _{i,j}\cup \{b_{\Delta }\}$, one for each pair $i\in \{0,...,n\},\;j\in \{1,...,n!\}$, so there will be $(n+1)!$ simplices covering $\Delta $. One can generalize the subdivision to simplicial complexes whose simplices are not all contained in a single simplex of maximal dimension, i.e. simplicial complexes that do not correspond geometrically to one simplex. This can be done by performing the steps described above simultaneously for every simplex of maximal dimension. The induction will then be based on the $n$-th skeleton of the simplicial complex.
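The inductive construction is easy to carry out in coordinates. The following sketch (an illustration; the triangle and helper names are arbitrary choices) builds the barycentric subdivision of a simplex from chains of barycenters, one small simplex per ordering of the vertices, and reports the largest resulting diameter, anticipating the mesh bound given under Properties below:

```python
import itertools
import numpy as np

def subdivide(simplex):
    """Barycentric subdivision: one small simplex per ordering of the
    vertices, whose corners are the barycenters of the increasing chain
    of faces spanned by initial segments of the ordering."""
    pieces = []
    for perm in itertools.permutations(simplex):
        chain = [np.mean(perm[:k + 1], axis=0) for k in range(len(perm))]
        pieces.append(chain)
    return pieces

def diam(simplex):
    return max(np.linalg.norm(p - q)
               for p, q in itertools.combinations(simplex, 2))

n = 2
Delta = [np.array(v, float) for v in [(0, 0), (1, 0), (0, 1)]]
pieces = subdivide(Delta)
print(len(pieces))                       # (n+1)! = 6 small simplices
worst = max(diam(s) for s in pieces)
print(worst, n / (n + 1) * diam(Delta))  # worst <= (n/(n+1)) * diam(Delta)
```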
The construction allows the subdivision to be applied more than once.[2] Barycentric subdivision of a convex polytope See also: Schläfli orthoscheme § Characteristic simplex of the general regular polytope The operation of barycentric subdivision can be applied to any convex polytope of any dimension, producing another convex polytope of the same dimension.[3] In this version of barycentric subdivision, it is not necessary for the polytope to form a simplicial complex: it can have faces that are not simplices. This is the dual operation to omnitruncation.[4] The vertices of the barycentric subdivision correspond to the faces of all dimensions of the original polytope. Two vertices are adjacent in the barycentric subdivision when they correspond to two faces of different dimensions with the lower-dimensional face included in the higher-dimensional face. The facets of the barycentric subdivision are simplices, corresponding to the flags of the original polytope. For instance, the barycentric subdivision of a cube, or of a regular octahedron, is the disdyakis dodecahedron.[5] The degree-6, degree-4, and degree-8 vertices of the disdyakis dodecahedron correspond to the vertices, edges, and square facets of the cube, respectively. Properties Mesh Let $\Delta \subset \mathbb {R} ^{n}$ be a simplex and define $\operatorname {diam} (\Delta )=\operatorname {max} {\Bigl \{}\|a-b\|_{\mathbb {R} ^{n}}\;{\Big |}\;a,b\in \Delta {\Bigr \}}$. One way to measure the mesh of a geometric simplicial complex is to take the maximal diameter of the simplices contained in the complex. Let $\Delta '$ be an $n$-dimensional simplex that comes from the covering of $\Delta $ obtained by the barycentric subdivision. Then the following estimate holds: $\operatorname {diam} (\Delta ')\leq \left({\frac {n}{n+1}}\right)\;\operatorname {diam} (\Delta )$. Therefore, by applying barycentric subdivision sufficiently often, the largest edge can be made as small as desired.[6] Homology For some statements in homology theory one wishes to replace simplicial complexes by a subdivision. On the level of simplicial homology groups one requires a map from the homology group of the original simplicial complex to the groups of the subdivided complex. Indeed it can be shown that for any subdivision ${\mathcal {K'}}$ of a finite simplicial complex ${\mathcal {K}}$ there is a unique sequence of chain maps $\lambda _{n}:C_{n}({\mathcal {K}})\rightarrow C_{n}({\mathcal {K'}})$ such that for each $\Delta $ in ${\mathcal {K}}$ the maps satisfy $\lambda (\Delta )\subset \Delta $, and such that together they form a morphism of chain complexes. Moreover, the induced map on homology is an isomorphism: subdivision does not change the homology of the complex.[1] To compute the singular homology groups of a topological space $X$ one considers continuous functions $\sigma :\Delta ^{n}\rightarrow X$ where $\Delta ^{n}$ denotes the $n$-dimensional standard simplex. In an analogous way as described for simplicial homology groups, barycentric subdivision can be interpreted as an endomorphism of singular chain complexes.
Here again, there exists a subdivision operator $\lambda _{n}:C_{n}(X)\rightarrow C_{n}(X)$ sending a chain $\sigma :\Delta \rightarrow X$ to a linear combination $\sum \varepsilon _{B_{\Delta }}\sigma \vert _{B_{\Delta }}$, where the sum runs over all simplices $B_{\Delta }$ that appear in the covering of $\Delta $ by barycentric subdivision, and $\varepsilon _{B_{\Delta }}\in \{1,-1\}$ for all such $B_{\Delta }$. This map also induces an endomorphism of chain complexes.[7] Applications The barycentric subdivision can be applied to whole simplicial complexes, as in the simplicial approximation theorem, or it can be used to subdivide geometric simplices. Therefore it is crucial for statements in singular homology theory; see the Mayer–Vietoris sequence and excision. Simplicial approximation Let ${\mathcal {K}}$, ${\mathcal {L}}$ be abstract simplicial complexes over the vertex sets $V_{K}$, $V_{L}$. A simplicial map is a function $f:V_{K}\rightarrow V_{L}$ which maps each simplex in ${\mathcal {K}}$ onto a simplex in ${\mathcal {L}}$. By affine-linear extension on the simplices, $f$ induces a map between the geometric realizations of the complexes. Each point in a geometric complex lies in the interior of exactly one simplex, its support. Consider now a continuous map $f:{\mathcal {K}}\rightarrow {\mathcal {L}}$. A simplicial map $g:{\mathcal {K}}\rightarrow {\mathcal {L}}$ is said to be a simplicial approximation of $f$ if and only if each $x\in {\mathcal {K}}$ is mapped by $g$ into the support of $f(x)$ in ${\mathcal {L}}$. If such an approximation exists, one can construct a homotopy $H$ transforming $f$ into $g$ by defining it simplex by simplex; it always exists there, because simplices are contractible. The simplicial approximation theorem guarantees for every continuous function $f:{\mathcal {K}}\rightarrow {\mathcal {L}}$ the existence of a simplicial approximation, at least after refinement of ${\mathcal {K}}$, for instance by replacing ${\mathcal {K}}$ by its iterated barycentric subdivision.[8] The theorem plays an important role for certain statements in algebraic topology, in that it reduces the behavior of continuous maps to that of simplicial maps, as for instance in Lefschetz's fixed-point theorem. Lefschetz's fixed-point theorem The Lefschetz number is a useful tool to find out whether a continuous function admits fixed points. This number is computed as follows: suppose that $X$ is a topological space that admits a finite triangulation. A continuous map $f:X\rightarrow X$ induces homomorphisms $f_{i}:H_{i}(X,K)\rightarrow H_{i}(X,K)$ between its simplicial homology groups with coefficients in a field $K$. These are linear maps between $K$-vector spaces, so their traces $tr_{i}$ can be determined and their alternating sum $L_{K}(f)=\sum _{i}(-1)^{i}tr_{i}(f)\in K$ is called the Lefschetz number of $f$. If $f=id$, this number is the Euler characteristic of $X$. The fixed-point theorem states that whenever $L_{K}(f)\neq 0$, $f$ has a fixed point. In the proof this is first shown only for simplicial maps and then generalized to arbitrary continuous functions via the approximation theorem. Brouwer's fixed-point theorem is a special case of this statement: let $f:\mathbb {D} ^{n}\rightarrow \mathbb {D} ^{n}$ be a continuous map of the unit ball.
For $k\geq 1$ all its homology groups $H_{k}(\mathbb {D} ^{n})$ vanish, and $f_{0}$ is always the identity, so $L_{K}(f)=tr_{0}(f)=1\neq 0$; hence $f$ has a fixed point.[9] Mayer–Vietoris sequence The Mayer–Vietoris sequence is often used to compute singular homology groups and gives rise to inductive arguments in topology. The related statement can be formulated as follows: Let $X=A\cup B$ be an open cover of the topological space $X$. There is an exact sequence $\cdots \to H_{n+1}(X)\,{\xrightarrow {\partial _{*}}}\,H_{n}(A\cap B)\,{\xrightarrow {(i_{*},j_{*})}}\,H_{n}(A)\oplus H_{n}(B)\,{\xrightarrow {k_{*}-l_{*}}}\,H_{n}(X)\,{\xrightarrow {\partial _{*}}}\,H_{n-1}(A\cap B)\to \cdots \to H_{0}(A)\oplus H_{0}(B)\,{\xrightarrow {k_{*}-l_{*}}}\,H_{0}(X)\to 0,$ where we consider singular homology groups, $i:A\cap B\hookrightarrow A,\;j:A\cap B\hookrightarrow B,\;k:A\hookrightarrow X,\;l:B\hookrightarrow X$ are embeddings and $\oplus $ denotes the direct sum of abelian groups. For the construction of singular homology groups one considers continuous maps defined on the standard simplex $\sigma :\Delta \rightarrow X$. An obstacle in the proof of the theorem are maps $\sigma $ whose image is contained neither in $A$ nor in $B$. This can be fixed using the subdivision operator: by considering the images of such maps as sums of images of smaller simplices lying in $A$ or in $B$, one can show that the inclusion $C_{n}(A)\oplus C_{n}(B)\hookrightarrow C_{n}(X)$ induces an isomorphism on homology, which is needed to compare the homology groups.[10] Excision Excision can be used to determine relative homology groups. In certain cases it allows one to forget about subsets of topological spaces without changing the homology groups, and therefore simplifies their computation: Let $X$ be a topological space and let $Z\subset A\subset X$ be subsets, where $Z$ is closed such that $Z\subset A^{\circ }$. Then the inclusion $i:(X\setminus Z,A\setminus Z)\hookrightarrow (X,A)$ induces an isomorphism $H_{k}(X\setminus Z,A\setminus Z)\rightarrow H_{k}(X,A)$ for all $k\geq 0.$ Again, in singular homology, maps $\sigma :\Delta \rightarrow X$ may appear such that their image is not part of the subsets mentioned in the theorem. Analogously, these can be understood as sums of images of smaller simplices obtained by the barycentric subdivision.[11] References 1. James R. Munkres, Elements of Algebraic Topology, Menlo Park, Calif., p. 96, ISBN 0-201-04586-9 2. James R. Munkres, Elements of Algebraic Topology, Menlo Park, Calif., pp. 85 f, ISBN 0-201-04586-9 3. Ewald, G.; Shephard, G. C. (1974), "Stellar subdivisions of boundary complexes of convex polytopes", Mathematische Annalen, 210: 7–16, doi:10.1007/BF01344542, MR 0350623 4. Matteo, Nicholas (2015), Convex Polytopes and Tilings with Few Flag Orbits (Doctoral dissertation), Northeastern University, ProQuest 1680014879 See p. 22, where the omnitruncation is described as a "flag graph". 5. Langer, Joel C.; Singer, David A. (2010), "Reflections on the lemniscate of Bernoulli: the forty-eight faces of a mathematical gem", Milan Journal of Mathematics, 78 (2): 643–682, doi:10.1007/s00032-010-0124-5, MR 2781856 6. Hatcher, Allen (2001), Algebraic Topology (PDF), p. 120 7. Hatcher (2001), pp. 122 f. 8. Ralph Stöcker, Heiner Zieschang, Algebraische Topologie (in German) (2nd revised ed.), Stuttgart: B.G. Teubner, p. 81, ISBN 3-519-12226-X 9. Bredon, Glen E., Topology and Geometry, Springer-Verlag, Berlin/Heidelberg/New York, pp. 254 f, ISBN 3-540-97926-3 10. Hatcher (2001), p. 149. 11. Hatcher (2001), p. 119.
Wikipedia
Stellated rhombic dodecahedral honeycomb The stellated rhombic dodecahedral honeycomb is a space-filling tessellation (or honeycomb) in Euclidean 3-space made up of copies of stellated rhombic dodecahedron cells.[1] Six stellated rhombic dodecahedra meet at each vertex. This honeycomb is cell-transitive, edge-transitive and vertex-transitive. See also • Yoshimoto Cube References 1. Ioana Mihăilă. "Tessellations from Group Actions and the Mystery of Escher's Solid". Retrieved 2013-05-09. Alt URL • George Hart, Stellations • Exploring a complex space-filling shape, Exploratorium • Ellery B. Golos; Daniel D. Joseph (1981). Patterns in mathematics. Prindle, Weber & Schmidt. ISBN 978-0-87150-301-5.
Wikipedia
Stellation diagram In geometry, a stellation diagram or stellation pattern is a two-dimensional diagram in the plane of some face of a polyhedron, showing lines where other face planes intersect with this one. The lines cause 2D space to be divided up into regions. Regions not intersected by any further lines are called elementary regions. Usually unbounded regions are excluded from the diagram, along with any portions of the lines extending to infinity. Each elementary region represents a top face of one cell, and a bottom face of another. A collection of these diagrams, one for each face type, can be used to represent any stellation of the polyhedron, by shading the regions which should appear in that stellation. A stellation diagram exists for every face of a given polyhedron. In face transitive polyhedra, symmetry can be used to require all faces have the same diagram shading. Semiregular polyhedra like the Archimedean solids will have different stellation diagrams for different kinds of faces. See also • List of Wenninger polyhedron models • The fifty nine icosahedra References • M Wenninger, Polyhedron models; Cambridge University Press, 1st Edn (1983), Ppbk (2003). • Coxeter, Harold Scott MacDonald; Du Val, P.; Flather, H. T.; Petrie, J. F. (1999), The fifty-nine icosahedra (3rd ed.), Tarquin, ISBN 978-1-899618-32-3, MR 0676126 (1st Edn University of Toronto (1938)) External links Wikimedia Commons has media related to Stellation diagrams. • Stellation diagram • Polyhedra Stellations Applet Vladimir Bulatov, 1998 • http://bulatov.org/polyhedra/stellation/index.html Polyhedra Stellation (VRML) • http://bulatov.org/polyhedra/icosahedron/index_vrml.html 59 stellations of icosahedron • http://www.queenhill.demon.co.uk/polyhedra/FacetingDiagrams/FacetingDiags.htm facetting diagrams • http://fortran.orpheusweb.co.uk/Poly/Ex/dodstl.htm Stellating the Dodecahedron • http://www.queenhill.demon.co.uk/polyhedra/icosa/stelfacet/StelFacet.htm Towards stellating the icosahedron and faceting the dodecahedron • http://www.mathconsult.ch/showroom/icosahedra/index.html 59 stellations of the icosahedron • http://www.uwgb.edu/dutchs/symmetry/stellate.htm Stellations of Polyhedra • http://www.uwgb.edu/dutchs/symmetry/stelicos.htm Coxeter's Classification and Notation • http://www.georgehart.com/virtual-polyhedra/stellations-icosahedron-index.html
Wikipedia
Great truncated icosidodecahedron In geometry, the great truncated icosidodecahedron (or great quasitruncated icosidodecahedron or stellatruncated icosidodecahedron) is a nonconvex uniform polyhedron, indexed as U68. It has 62 faces (30 squares, 20 hexagons, and 12 decagrams), 180 edges, and 120 vertices.[1] It is given a Schläfli symbol t0,1,2{5⁄3,3} and a Coxeter–Dynkin diagram. Great truncated icosidodecahedron TypeUniform star polyhedron ElementsF = 62, E = 180 V = 120 (χ = 2) Faces by sides30{4}+20{6}+12{10/3} Coxeter diagram Wythoff symbol2 3 5/3 | Symmetry groupIh, [5,3], *532 Index referencesU68, C87, W108 Dual polyhedronGreat disdyakis triacontahedron Vertex figure 4.6.10/3 Bowers acronymGaquatid Cartesian coordinates Cartesian coordinates for the vertices of a great truncated icosidodecahedron centered at the origin are all the even permutations of (±τ, ±τ, ±(3−1/τ)), (±2τ, ±1/τ, ±1/τ^3), (±τ, ±1/τ^2, ±(1+3/τ)), (±√5, ±2, ±√5/τ) and (±1/τ, ±3, ±2/τ), where τ = (1+√5)/2 is the golden ratio. Related polyhedra Great disdyakis triacontahedron Great disdyakis triacontahedron TypeStar polyhedron Face ElementsF = 120, E = 180 V = 62 (χ = 2) Symmetry groupIh, [5,3], *532 Index referencesDU68 dual polyhedronGreat truncated icosidodecahedron The great disdyakis triacontahedron (or trisdyakis icosahedron) is a nonconvex isohedral polyhedron. It is the dual of the great truncated icosidodecahedron. Its faces are triangles. Proportions The triangles have one angle of $\arccos({\frac {1}{6}}+{\frac {1}{15}}{\sqrt {5}})\approx 71.594\,636\,220\,88^{\circ }$, one of $\arccos({\frac {3}{4}}+{\frac {1}{10}}{\sqrt {5}})\approx 13.192\,999\,040\,74^{\circ }$ and one of $\arccos({\frac {3}{8}}-{\frac {5}{24}}{\sqrt {5}})\approx 95.212\,364\,738\,38^{\circ }$. The dihedral angle equals $\arccos({\frac {-179+24{\sqrt {5}}}{241}})\approx 121.336\,250\,807\,39^{\circ }$. Part of each triangle lies within the solid, hence is invisible in solid models. See also • List of uniform polyhedra References 1. Maeder, Roman. "68: great truncated icosidodecahedron". MathConsult. • Wenninger, Magnus (1983), Dual Models, Cambridge University Press, doi:10.1017/CBO9780511569371, ISBN 978-0-521-54325-5, MR 0730208 p. 96 External links • Weisstein, Eric W. "Great truncated icosidodecahedron". MathWorld. • Weisstein, Eric W. "Great disdyakis triacontahedron". MathWorld.
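The vertex data above can be checked numerically: the five coordinate families, under even permutations and all sign changes, should give 120 distinct points at a common distance from the origin. A short sketch (names and tolerances are arbitrary illustrative choices):

```python
import itertools

tau = (1 + 5 ** 0.5) / 2
families = [
    (tau, tau, 3 - 1 / tau),
    (2 * tau, 1 / tau, tau ** -3),
    (tau, tau ** -2, 1 + 3 / tau),
    (5 ** 0.5, 2.0, 5 ** 0.5 / tau),
    (1 / tau, 3.0, 2 / tau),
]
even_perms = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

verts = set()
for fam in families:
    for p in even_perms:
        base = tuple(fam[k] for k in p)
        for signs in itertools.product((1, -1), repeat=3):
            verts.add(tuple(round(s * c, 9) for s, c in zip(signs, base)))

print(len(verts))                                        # 120 vertices
print({round(sum(c * c for c in v), 6) for v in verts})  # one common radius^2
```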
Wikipedia
Stemm Stemm may refer to: • STEMM, American metal band • STEMM, abbreviation for Science, technology, engineering, mathematics, and medicine • Stemm, Indiana, a community in the US See also • Stem (disambiguation) • Stemme, a German light aircraft manufacturer
Wikipedia