Time hierarchy theorem In computational complexity theory, the time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with n² time but not n time. The time hierarchy theorem for deterministic multi-tape Turing machines was first proven by Richard E. Stearns and Juris Hartmanis in 1965.[1] It was strengthened a year later when F. C. Hennie and Richard E. Stearns improved the efficiency of the Universal Turing machine.[2] Consequent to the theorem, for every deterministic time-bounded complexity class, there is a strictly larger time-bounded complexity class, and so the time-bounded hierarchy of complexity classes does not completely collapse. More precisely, the time hierarchy theorem for deterministic Turing machines states that for all time-constructible functions f(n), ${\mathsf {DTIME}}\left(o\left(f(n)\right)\right)\subsetneq {\mathsf {DTIME}}\left(f(n)\log f(n)\right)$, where DTIME(f(n)) denotes the complexity class of decision problems solvable in time O(f(n)). Note that the left-hand class involves little o notation, referring to the set of decision problems solvable in asymptotically less than f(n) time. The time hierarchy theorem for nondeterministic Turing machines was originally proven by Stephen Cook in 1972.[3] It was improved to its current form via a complex proof by Joel Seiferas, Michael Fischer, and Albert Meyer in 1978.[4] Finally in 1983, Stanislav Žák achieved the same result with the simple proof taught today.[5] The time hierarchy theorem for nondeterministic Turing machines states that if g(n) is a time-constructible function, and f(n+1) = o(g(n)), then ${\mathsf {NTIME}}(f(n))\subsetneq {\mathsf {NTIME}}(g(n))$. The analogous theorems for space are the space hierarchy theorems. A similar theorem is not known for time-bounded probabilistic complexity classes, unless the class also has one bit of advice.[6] Background Both theorems use the notion of a time-constructible function. A function $f:\mathbb {N} \rightarrow \mathbb {N} $ is time-constructible if there exists a deterministic Turing machine such that for every $n\in \mathbb {N} $, if the machine is started with an input of n ones, it will halt after precisely f(n) steps. All polynomials with non-negative integer coefficients are time-constructible, as are exponential functions such as 2ⁿ. Proof overview We need to prove that some time class TIME(g(n)) is strictly larger than some time class TIME(f(n)). We do this by constructing a machine which cannot be in TIME(f(n)), by diagonalization. We then show that the machine is in TIME(g(n)), using a simulator machine. Deterministic time hierarchy theorem Statement Time Hierarchy Theorem. If f(n) is a time-constructible function, then there exists a decision problem which cannot be solved in worst-case deterministic time o(f(n)) but can be solved in worst-case deterministic time O(f(n)log f(n)). Thus ${\mathsf {DTIME}}(o(f(n)))\subsetneq {\mathsf {DTIME}}\left(f(n)\log f(n)\right).$ Note 1. f(n) is at least n, since smaller functions are never time-constructible. Example. There are problems solvable in time n log²n but not in time n. This follows by setting $f(n)=n\log n$, since n is in $o\left(n\log n\right).$ Proof We include here a proof of a weaker result, namely that DTIME(f(n)) is a strict subset of DTIME(f(2n + 1)³), as it is simpler but illustrates the proof idea. 
See the bottom of this section for information on how to extend the proof to f(n)log f(n). To prove this, we first define the language of the encodings of machines and their inputs which cause them to accept within f(|x|) steps: $H_{f}=\left\{([M],x)\ |\ M\ {\text{accepts}}\ x\ {\text{in}}\ f(|x|)\ {\text{steps}}\right\}.$ Notice here that this is a time-class. It is the set of pairs of machines and inputs to those machines (M,x) so that the machine M accepts within f(|x|) steps. Here, M is a deterministic Turing machine, and x is its input (the initial contents of its tape). [M] denotes an input that encodes the Turing machine M. Let m be the size of the tuple ([M], x). We know that we can decide membership of Hf by way of a deterministic Turing machine R that simulates M for f(|x|) steps by first calculating f(|x|) and then writing out a row of 0s of that length, and then using this row of 0s as a "clock" or "counter" to simulate M for at most that many steps. At each step, the simulating machine needs to look through the definition of M to decide what the next action would be. It is safe to say that this takes at most f(m)³ operations, since it is known that a simulation of a machine of time complexity T(n) can be achieved in time $O(T(n)\cdot |M|)$ on a multitape machine, where |M| is the length of the encoding of M. We thus have that: $H_{f}\in {\mathsf {TIME}}\left(f(m)^{3}\right).$ The rest of the proof will show that $H_{f}\notin {\mathsf {TIME}}\left(f\left(\left\lfloor {\frac {m}{2}}\right\rfloor \right)\right)$ so that if we substitute 2n + 1 for m, we get the desired result. Let us assume that Hf is in this time complexity class, and we will reach a contradiction. If Hf is in this time complexity class, then there exists a machine K which, given some machine description [M] and input x, decides whether the tuple ([M], x) is in Hf within ${\mathsf {TIME}}\left(f\left(\left\lfloor {\frac {m}{2}}\right\rfloor \right)\right).$ We use this K to construct another machine, N, which takes a machine description [M] and runs K on the tuple ([M], [M]), i.e. M is simulated on its own code by K, and then N accepts if K rejects, and rejects if K accepts. If n is the length of the input to N, then m (the length of the input to K) is twice n plus some delimiter symbol, so m = 2n + 1. N's running time is thus ${\mathsf {TIME}}\left(f\left(\left\lfloor {\frac {m}{2}}\right\rfloor \right)\right)={\mathsf {TIME}}\left(f\left(\left\lfloor {\frac {2n+1}{2}}\right\rfloor \right)\right)={\mathsf {TIME}}\left(f(n)\right).$ Now if we feed [N] as input into N itself (which makes n the length of [N]) and ask the question whether N accepts its own description as input, we get: • If N accepts [N] (which we know it does in at most f(n) operations since K halts on ([N], [N]) in f(n) steps), this means that K rejects ([N], [N]), so ([N], [N]) is not in Hf, and so by the definition of Hf, this implies that N does not accept [N] in f(n) steps. Contradiction. • If N rejects [N] (which we know it does in at most f(n) operations), this means that K accepts ([N], [N]), so ([N], [N]) is in Hf, and thus N does accept [N] in f(n) steps. Contradiction. 
We thus conclude that the machine K does not exist, and so $H_{f}\notin {\mathsf {TIME}}\left(f\left(\left\lfloor {\frac {m}{2}}\right\rfloor \right)\right).$ Extension The reader may have realised that the proof gives the weaker result because we have chosen a simple Turing machine simulation for which we know that $H_{f}\in {\mathsf {TIME}}(f(m)^{3}).$ It is known[7] that a more efficient simulation exists which establishes that $H_{f}\in {\mathsf {TIME}}(f(m)\log f(m))$. Non-deterministic time hierarchy theorem If g(n) is a time-constructible function, and f(n+1) = o(g(n)), then there exists a decision problem which cannot be solved in non-deterministic time f(n) but can be solved in non-deterministic time g(n). In other words, the complexity class NTIME(f(n)) is a strict subset of NTIME(g(n)). Consequences The time hierarchy theorems guarantee that the deterministic and non-deterministic versions of the exponential hierarchy are genuine hierarchies: in other words P ⊊ EXPTIME ⊊ 2-EXP ⊊ ... and NP ⊊ NEXPTIME ⊊ 2-NEXP ⊊ .... For example, ${\mathsf {P}}\subsetneq {\mathsf {EXPTIME}}$ since ${\mathsf {P}}\subseteq {\mathsf {DTIME}}(2^{n})\subsetneq {\mathsf {DTIME}}(2^{2n})\subseteq {\mathsf {EXPTIME}}$. Indeed, ${\mathsf {DTIME}}\left(2^{n}\right)\subseteq {\mathsf {DTIME}}\left(o\left({\frac {2^{2n}}{2n}}\right)\right)\subsetneq {\mathsf {DTIME}}(2^{2n})$ from the time hierarchy theorem. The theorem also guarantees that there are problems in P requiring arbitrarily large exponents to solve; in other words, P does not collapse to DTIME(nk) for any fixed k. For example, there are problems solvable in n5000 time but not n4999 time. This is one argument against Cobham's thesis, the convention that P is a practical class of algorithms. If such a collapse did occur, we could deduce that P ≠ PSPACE, since it is a well-known theorem that DTIME(f(n)) is strictly contained in DSPACE(f(n)). However, the time hierarchy theorems provide no means to relate deterministic and non-deterministic complexity, or time and space complexity, so they cast no light on the great unsolved questions of computational complexity theory: whether P and NP, NP and PSPACE, PSPACE and EXPTIME, or EXPTIME and NEXPTIME are equal or not. Sharper hierarchy theorems The gap of approximately $\log f(n)$ between the lower and upper time bound in the hierarchy theorem can be traced to the efficiency of the device used in the proof, namely a universal program that maintains a step-count. This can be done more efficiently on certain computational models. The sharpest results, presented below, have been proved for: • The unit-cost random access machine[8] • A programming language model whose programs operate on a binary tree that is always accessed via its root. This model, introduced by Neil D. Jones[9] is stronger than a deterministic Turing machine but weaker than a random access machine. For these models, the theorem has the following form: If f(n) is a time-constructible function, then there exists a decision problem which cannot be solved in worst-case deterministic time f(n) but can be solved in worst-case time af(n) for some constant a (dependent on f). Thus, a constant-factor increase in the time bound allows for solving more problems, in contrast with the situation for Turing machines (see Linear speedup theorem). 
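The simulator R in the proof above, like the step-counting universal program just mentioned, is essentially a clocked simulation: run the encoded machine, charging one tick per simulated step, and cut it off once the budget f(|x|) is spent. The following Python sketch illustrates that idea on toy "machines" modelled as generators; all names here (clocked_accepts, fast_member, slow_member) are illustrative inventions, not part of the theorem's formal machinery.

```python
def clocked_accepts(program, x, f):
    """Toy analogue of the simulator R: run `program` on input x, but charge one
    tick per yielded step and cut it off after f(len(x)) steps.  `program(x)` must
    be a generator that yields once per simulated step and finally returns True
    (accept) or False (reject)."""
    budget = f(len(x))              # the "row of 0s" used as a clock
    steps = program(x)
    try:
        for _ in range(budget):
            next(steps)             # consume one simulated step
    except StopIteration as halt:
        return bool(halt.value)     # the program halted within the budget
    return False                    # budget exhausted: does not accept in time

# Two toy "machines": one accepts quickly, one runs far too long.
def fast_member(x):
    for _ in range(len(x)):         # about n steps
        yield
    return True

def slow_member(x):
    for _ in range(2 ** len(x)):    # exponentially many steps
        yield
    return False

if __name__ == "__main__":
    f = lambda n: n * n             # a time-constructible bound such as n^2
    print(clocked_accepts(fast_member, "11111", f))   # True: halts within 25 steps
    print(clocked_accepts(slow_member, "11111", f))   # False: exceeds the 25-step clock
```

Diagonalization then amounts to feeding such a clocked decider its own description and flipping the answer, which is exactly what the machine N does in the proof above.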
Moreover, Ben-Amram proved[10] that, in the above models, for f of polynomial growth rate (but more than linear), it is the case that for all $\varepsilon >0$, there exists a decision problem which cannot be solved in worst-case deterministic time f(n) but can be solved in worst-case time $(1+\varepsilon )f(n)$. See also • Space hierarchy theorem References 1. Hartmanis, J.; Stearns, R. E. (1 May 1965). "On the computational complexity of algorithms". Transactions of the American Mathematical Society. American Mathematical Society. 117: 285–306. doi:10.2307/1994208. ISSN 0002-9947. JSTOR 1994208. MR 0170805. 2. Hennie, F. C.; Stearns, R. E. (October 1966). "Two-Tape Simulation of Multitape Turing Machines". J. ACM. New York, NY, USA: ACM. 13 (4): 533–546. doi:10.1145/321356.321362. ISSN 0004-5411. S2CID 2347143. 3. Cook, Stephen A. (1972). "A hierarchy for nondeterministic time complexity". Proceedings of the fourth annual ACM symposium on Theory of computing. STOC '72. Denver, Colorado, United States: ACM. pp. 187–192. doi:10.1145/800152.804913. 4. Seiferas, Joel I.; Fischer, Michael J.; Meyer, Albert R. (January 1978). "Separating Nondeterministic Time Complexity Classes". J. ACM. New York, NY, USA: ACM. 25 (1): 146–167. doi:10.1145/322047.322061. ISSN 0004-5411. S2CID 13561149. 5. Žák, Stanislav (October 1983). "A Turing machine time hierarchy". Theoretical Computer Science. Elsevier Science B.V. 26 (3): 327–333. doi:10.1016/0304-3975(83)90015-4. 6. Fortnow, L.; Santhanam, R. (2004). "Hierarchy Theorems for Probabilistic Polynomial Time". 45th Annual IEEE Symposium on Foundations of Computer Science. p. 316. doi:10.1109/FOCS.2004.33. ISBN 0-7695-2228-9. S2CID 5555450. 7. Sipser, Michael (27 June 2012). Introduction to the Theory of Computation (3rd ed.). Cengage Learning. ISBN 978-1-133-18779-0. 8. Sudborough, Ivan H.; Zalcberg, A. (1976). "On Families of Languages Defined by Time-Bounded Random Access Machines". SIAM Journal on Computing. 5 (2): 217–230. doi:10.1137/0205018. 9. Jones, Neil D. (1993). "Constant factors do matter". 25th Symposium on the Theory of Computing: 602–611. doi:10.1145/167088.167244. S2CID 7527905. 10. Ben-Amram, Amir M. (2003). "Tighter constant-factor time hierarchies". Information Processing Letters. 87 (1): 39–44. doi:10.1016/S0020-0190(03)00253-9. Further reading • Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Pages 310–313 of section 9.1: Hierarchy theorems. • Christos Papadimitriou (1993). Computational Complexity (1st ed.). Addison Wesley. ISBN 0-201-53082-1. Section 7.2: The Hierarchy Theorem, pp. 143–146.
Seasonality In time series data, seasonality is the presence of variations that occur at specific regular intervals less than a year, such as weekly, monthly, or quarterly. Seasonality may be caused by various factors, such as weather, vacation, and holidays[1] and consists of periodic, repetitive, and generally regular and predictable patterns in the levels[2] of a time series. Seasonal fluctuations in a time series can be contrasted with cyclical patterns. The latter occur when the data exhibits rises and falls that are not of a fixed period. Such non-seasonal fluctuations are usually due to economic conditions and are often related to the "business cycle"; their period usually extends beyond a single year, and the fluctuations are usually of at least two years.[3] Organisations facing seasonal variations, such as ice-cream vendors, are often interested in knowing their performance relative to the normal seasonal variation. Seasonal variations in the labour market can be attributed to the entrance of school leavers into the job market as they aim to contribute to the workforce upon the completion of their schooling. These regular changes are of less interest to those who study employment data than the variations that occur due to the underlying state of the economy; their focus is on how unemployment in the workforce has changed, despite the impact of the regular seasonal variations.[3] It is necessary for organisations to identify and measure seasonal variations within their market to help them plan for the future. This can prepare them for the temporary increases or decreases in labour requirements and inventory as demand for their product or service fluctuates over certain periods. This may require training, periodic maintenance, and so forth that can be organized in advance. Apart from these considerations, the organisations need to know if variation they have experienced has been more or less than the expected amount, beyond what the usual seasonal variations account for. Motivation There are several main reasons for studying seasonal variation: • The description of the seasonal effect provides a better understanding of the impact this component has upon a particular series. • After establishing the seasonal pattern, methods can be implemented to eliminate it from the time-series to study the effect of other components such as cyclical and irregular variations. This elimination of the seasonal effect is referred to as de-seasonalizing or seasonal adjustment of data. • To use the past patterns of the seasonal variations to contribute to forecasting and the prediction of the future trends, such as in climate normals. Detection The following graphical techniques can be used to detect seasonality: • A run sequence plot will often show seasonality • A seasonal plot will show the data from each season overlapped[4] • A seasonal subseries plot is a specialized technique for showing seasonality • Multiple box plots can be used as an alternative to the seasonal subseries plot to detect seasonality • An autocorrelation plot (ACF) and a spectral plot can help identify seasonality. A really good way to find periodicity, including seasonality, in any regular series of data is to remove any overall trend first and then to inspect time periodicity.[5] The run sequence plot is a recommended first step for analyzing any time series. Although seasonality can sometimes be indicated by this plot, seasonality is shown more clearly by the seasonal subseries plot or the box plot. 
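As a rough illustration of the autocorrelation check described above, the following Python sketch (using only numpy and pandas; the synthetic monthly series and all variable names are invented for the example) detrends a series by differencing and then looks for spikes in the sample autocorrelation at candidate seasonal lags.

```python
import numpy as np
import pandas as pd

# Synthetic monthly series: trend + 12-month seasonal cycle + noise (illustrative only).
rng = np.random.default_rng(0)
months = pd.date_range("2010-01", periods=120, freq="MS")
t = np.arange(120)
y = pd.Series(0.05 * t + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120),
              index=months)

# Remove the overall trend first (here: simple differencing), then inspect autocorrelations.
detrended = y.diff().dropna()
for lag in (3, 6, 12, 24):
    print(f"lag {lag:2d}: autocorrelation = {detrended.autocorr(lag=lag):+.2f}")
# A pronounced positive spike at lags 12, 24, ... relative to nearby lags is the
# signature of monthly (period-12) seasonality.
```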
The seasonal subseries plot does an excellent job of showing both the seasonal differences (between group patterns) and also the within-group patterns. The box plot shows the seasonal difference (between group patterns) quite well, but it does not show within group patterns. However, for large data sets, the box plot is usually easier to read than the seasonal subseries plot. The seasonal plot, seasonal subseries plot, and the box plot all assume that the seasonal periods are known. In most cases, the analyst will in fact, know this. For example, for monthly data, the period is 12 since there are 12 months in a year. However, if the period is not known, the autocorrelation plot can help. If there is significant seasonality, the autocorrelation plot should show spikes at lags equal to the period. For example, for monthly data, if there is a seasonality effect, we would expect to see significant peaks at lag 12, 24, 36, and so on (although the intensity may decrease the further out we go). An autocorrelation plot (ACF) can be used to identify seasonality, as it calculates the difference (residual amount) between a Y value and a lagged value of Y. The result gives some points where the two values are close together ( no seasonality ), but other points where there is a large discrepancy. These points indicate a level of seasonality in the data. Semiregular cyclic variations might be dealt with by spectral density estimation. Calculation Seasonal variation is measured in terms of an index, called a seasonal index. It is an average that can be used to compare an actual observation relative to what it would be if there were no seasonal variation. An index value is attached to each period of the time series within a year. This implies that if monthly data are considered there are 12 separate seasonal indices, one for each month. The following methods use seasonal indices to measure seasonal variations of a time-series data. • Method of simple averages • Ratio to trend method • Ratio-to-moving-average method • Link relatives method Method of simple averages The measurement of seasonal variation by using the ratio-to-moving-average method provides an index to measure the degree of the seasonal variation in a time series. The index is based on a mean of 100, with the degree of seasonality measured by variations away from the base. For example, if we observe the hotel rentals in a winter resort, we find that the winter quarter index is 124. The value 124 indicates that 124 percent of the average quarterly rental occur in winter. If the hotel management records 1436 rentals for the whole of last year, then the average quarterly rental would be 359= (1436/4). As the winter-quarter index is 124, we estimate the number of winter rentals as follows: 359*(124/100)=445; Here, 359 is the average quarterly rental. 124 is the winter-quarter index. 445 the seasonalized winter-quarter rental. This method is also called the percentage moving average method. In this method, the original data values in the time-series are expressed as percentages of moving averages. The steps and the tabulations are given below. Ratio to trend method 1. Find the centered 12 monthly (or 4 quarterly) moving averages of the original data values in the time-series. 2. Express each original data value of the time-series as a percentage of the corresponding centered moving average values obtained in step(1). 
In other words, in a multiplicative time-series model, we get (Original data values) / (Trend values) × 100 = (T × C × S × I) / (T × C) × 100 = (S × I ) × 100. This implies that the ratio-to-moving average represents the seasonal and irregular components. 3. Arrange these percentages according to months or quarter of given years. Find the averages over all months or quarters of the given years. 4. If the sum of these indices is not 1200 (or 400 for quarterly figures), multiply then by a correction factor = 1200 / (sum of monthly indices). Otherwise, the 12 monthly averages will be considered as seasonal indices. Ratio-to-moving-average method Let us calculate the seasonal index by the ratio-to-moving-average method from the following data: Sample Data Year/Quarters 1 2 3 4 1996 75 60 54 59 1997 86 65 63 80 1998 90 72 66 85 1999 100 78 72 93 Now calculations for 4 quarterly moving averages and ratio-to-moving-averages are shown in the below table. Moving Averages Year Quarter Original Values(Y) 4 Figures Moving Total 4 Figures Moving Average 2 Figures Moving Total 2 Figures Moving Average(T) Ratio-to-Moving-Average(%)(Y)/ (T)*100 1996 1 75 — —  — — — 2 60 — —  — 248 62.00 3 54 126.75 63.375  85.21 259 64.75 4 59 130.75 65.375  90.25 264 66.00 1997 1 86 134.25 67.125 128.12 273 68.25 2 65 141.75 70.875  91.71 294 73.50 3 63 148.00 74.00  85.13 298 74.50 4 80 150.75 75.375 106.14 305 76.25 1998 1 90 153.25 76.625 117.45 308 77.00 2 72 155.25 77.625  92.75 313 78.25 3 66 159.00 79.50  83.02 323 80.75 4 85 163.00 81.50 104.29 329 82.25 1999 1 100 166.00 83.00 120.48 335 83.75 2 78 169.50 84.75  92.03 343 85.75 3 72 — —  — — — 4 93 — —  — Calculation of Seasonal Index Years/Quarters 1 2 3 4 Total 1996  —  —  85.21  90.25 1997 128.12  91.71  85.13 106.14 1998 117.45  92.75  83.02 104.29 1999 120.48  92.04  —  — Total 366.05 276.49 253.36 300.68 Seasonal Average 122.01  92.16  84.45 100.23 398.85 Adjusted Seasonal Average 122.36  92.43  84.69 100.52 400 Now the total of seasonal averages is 398.85. Therefore, the corresponding correction factor would be 400/398.85 = 1.00288. Each seasonal average is multiplied by the correction factor 1.00288 to get the adjusted seasonal indices as shown in the above table. Link relatives method 1. In an additive time-series model, the seasonal component is estimated as: S = Y – (T + C + I ) where S : Seasonal values Y : Actual data values of the time-series T : Trend values C : Cyclical values I : Irregular values. 2. In a multiplicative time-series model, the seasonal component is expressed in terms of ratio and percentage as Seasonal effect $={\frac {T\cdot S\cdot C\cdot I}{T\cdot C\cdot I}}\times 100\ ={\frac {Y}{T\cdot C\cdot I}}\times 100$; However, in practice the detrending of time-series is done to arrive at $S\cdot C\cdot I$. This is done by dividing both sides of $Y=T\cdot S\cdot C\cdot I$ by trend values T so that ${\frac {Y}{T}}=S\cdot C\cdot I$. 3. The deseasonalized time-series data will have only trend (T ), cyclical (C ) and irregular (I ) components and is expressed as: • Multiplicative model : ${\frac {Y}{S}}\times 100={\frac {T\cdot S\cdot C\cdot I}{S}}\times 100=(T\cdot C\cdot I)\times 100$ • Additive model: Y – S = (T + S + C + I ) – S = T + C + I Modeling A completely regular cyclic variation in a time series might be dealt with in time series analysis by using a sinusoidal model with one or more sinusoids whose period-lengths may be known or unknown depending on the context. 
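The ratio-to-moving-average computation in the table above can be reproduced in a few lines. The following Python/pandas sketch (the variable names and the rolling-mean construction are our own choices, not a standard library routine) computes the centred four-quarter moving averages, the ratio-to-moving-average percentages, and the adjusted seasonal indices for the sample quarterly data.

```python
import pandas as pd

# Sample quarterly data from the worked example above (1996-1999, four quarters per year).
y = pd.Series([75, 60, 54, 59,     # 1996
               86, 65, 63, 80,     # 1997
               90, 72, 66, 85,     # 1998
               100, 78, 72, 93])   # 1999

# Centred 4-quarter moving average (the "2 Figures Moving Average" column of the table):
# a 4-quarter mean, followed by the mean of two successive 4-quarter means.
centred = y.rolling(4).mean().rolling(2).mean().shift(-2)

# Ratio-to-moving-average, expressed as a percentage of the centred moving average.
ratio = 100 * y / centred

# Average the ratios quarter by quarter, then rescale so the four indices sum to 400.
quarter = pd.Series(y.index % 4 + 1)           # 1, 2, 3, 4, 1, 2, ...
seasonal_avg = ratio.groupby(quarter).mean()
adjusted_index = seasonal_avg * 400 / seasonal_avg.sum()

print(adjusted_index.round(2))
# Expected output (matching the table): roughly 122.4, 92.4, 84.7, 100.5 for quarters 1-4.
```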
A less completely regular cyclic variation might be dealt with by using a special form of an ARIMA model which can be structured so as to treat cyclic variations semi-explicitly. Such models represent cyclostationary processes. Another method of modelling periodic seasonality is the use of pairs of Fourier terms. Similar to using the sinusoidal model, Fourier terms added into regression models utilize sine and cosine terms in order to simulate seasonality. However, the seasonality of such a regression would be represented as the sum of sine or cosine terms, instead of a single sine or cosine term in a sinusoidal model. Every periodic function can be approximated with the inclusion of Fourier terms. The difference between a sinusoidal model and a regression with Fourier terms can be simplified as below: Sinusoidal Model: $Y_{i}=a+bt+\alpha \sin(2\pi \omega T_{i}+\phi )+E_{i}$ Regression With Fourier Terms: $Y_{i}=a+bt+(\sum _{k=1}^{K}\alpha _{k}\cdot \sin({\tfrac {2\pi kt}{m}})+\beta _{k}\cdot \cos({\tfrac {2\pi kt}{m}}))+E_{i}$ Seasonal adjustment Seasonal adjustment or deseasonalization is any method for removing the seasonal component of a time series. The resulting seasonally adjusted data are used, for example, when analyzing or reporting non-seasonal trends over durations rather longer than the seasonal period. An appropriate method for seasonal adjustment is chosen on the basis of a particular view taken of the decomposition of time series into components designated with names such as "trend", "cyclic", "seasonal" and "irregular", including how these interact with each other. For example, such components might act additively or multiplicatively. Thus, if a seasonal component acts additively, the adjustment method has two stages: • estimate the seasonal component of variation in the time series, usually in a form that has a zero mean across series; • subtract the estimated seasonal component from the original time series, leaving the seasonally adjusted series: $Y_{t}-S_{t}=T_{t}+E_{t}$.[3] If it is a multiplicative model, the magnitude of the seasonal fluctuations will vary with the level, which is more likely to occur with economic series.[3] When taking seasonality into account, the seasonally adjusted multiplicative decomposition can be written as $Y_{t}/S_{t}=T_{t}*E_{t}$; whereby the original time series is divided by the estimated seasonal component. The multiplicative model can be transformed into an additive model by taking the log of the time series; SA Multiplicative decomposition: $Y_{t}=S_{t}*T_{t}*E_{t}$ Taking log of the time series of the multiplicative model: $logY_{t}=logS_{t}+logT_{t}+logE_{t}$[3] One particular implementation of seasonal adjustment is provided by X-12-ARIMA. In regression analysis In regression analysis such as ordinary least squares, with a seasonally varying dependent variable being influenced by one or more independent variables, the seasonality can be accounted for and measured by including n-1 dummy variables, one for each of the seasons except for an arbitrarily chosen reference season, where n is the number of seasons (e.g., 4 in the case of meteorological seasons, 12 in the case of months, etc.). Each dummy variable is set to 1 if the data point is drawn from the dummy's specified season and 0 otherwise. 
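Below is a minimal sketch of the dummy-variable regression just described, using pandas and statsmodels on synthetic quarterly data (the data, coefficients, and column names are illustrative assumptions): three dummies are created for a four-season series, with quarter 1 left out as the reference season.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic quarterly data: linear trend plus a fixed effect per quarter (illustrative only).
rng = np.random.default_rng(1)
n = 40
quarter = pd.Categorical((np.arange(n) % 4) + 1, categories=[1, 2, 3, 4])
trend = np.arange(n)
seasonal_effect = np.array([5.0, -2.0, -4.0, 1.0])[np.arange(n) % 4]
y = 10 + 0.3 * trend + seasonal_effect + rng.normal(0, 0.5, n)

# n - 1 = 3 dummy variables; quarter 1 is the (arbitrarily chosen) reference season.
X = pd.get_dummies(pd.DataFrame({"trend": trend, "quarter": quarter}),
                   columns=["quarter"], drop_first=True, dtype=float)
X = sm.add_constant(X)

fit = sm.OLS(y, X).fit()
print(fit.params.round(2))
# quarter_2, quarter_3, quarter_4 estimate each season's shift relative to quarter 1;
# the reference season's prediction uses only the constant and the trend term.
```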
Then the predicted value of the dependent variable for the reference season is computed from the rest of the regression, while for any other season it is computed using the rest of the regression and by inserting the value 1 for the dummy variable for that season. Related patterns It is important to distinguish seasonal patterns from related patterns. While a seasonal pattern occurs when a time series is affected by the season or the time of the year (annual, semiannual, quarterly, etc.), a cyclic pattern, or simply a cycle, occurs when the data exhibit rises and falls in other periods, i.e., much longer (e.g., decadal) or much shorter (e.g., weekly) than seasonal. A quasiperiodicity is a more general, irregular periodicity. See also • Oscillation • Periodic function • Periodicity (disambiguation) • Photoperiodism References 1. "Seasonality". 2. "Archived copy". Archived from the original on 2015-05-18. Retrieved 2015-05-13. 3. 6.1 Time series components - OTexts. 4. 2.1 Graphics - OTexts. 5. "time series - What method can be used to detect seasonality in data?". Cross Validated. • Barnett, A.G.; Dobson, A.J. (2010). Analysing Seasonal Health Data. Springer. ISBN 978-3-642-10747-4. • Complete Business Statistics (Chapter 12) by Amir D. Aczel. • Business Statistics: Why and When (Chapter 15) by Larry E. Richards and Jerry J. Lacava. • Business Statistics (Chapter 16) by J.K. Sharma. • Business Statistics, a decision making approach (Chapter 18) by David F. Groebner and Patric W. Shannon. • Statistics for Management (Chapter 15) by Richard I. Levin and David S. Rubin. • Forecasting: Principles and Practice by Rob J. Hyndman and George Athanasopoulos. External links • Media related to Seasonality at Wikimedia Commons • Seasonality at NIST/SEMATECH e-Handbook of Statistical Methods. This article incorporates public domain material from NIST/SEMATECH e-Handbook of Statistical Methods. National Institute of Standards and Technology.
Time-varying covariate A time-varying covariate (also called time-dependent covariate) is a term used in statistics, particularly in survival analysis.[1] It reflects the phenomenon that a covariate is not necessarily constant through the whole study. Time-varying covariates are included to represent time-dependent within-individual variation to predict individual responses.[2] For instance, if one wishes to examine the link between area of residence and cancer, this would be complicated by the fact that study subjects move from one area to another. The area of residence could then be introduced in the statistical model as a time-varying covariate. In survival analysis, this would be done by splitting each study subject into several observations, one for each area of residence. For example, if a person is born at time 0 in area A, moves to area B at time 5, and is diagnosed with cancer at time 8, two observations would be made: one with a length of 5 (5 − 0) in area A, and one with a length of 3 (8 − 5) in area B. References 1. Austin PC, Latouche A, Fine JP (January 2020). "A review of the use of time-varying covariates in the Fine-Gray subdistribution hazard competing risk regression model". Statistics in Medicine. 39 (2): 103–113. doi:10.1002/sim.8399. PMC 6916372. PMID 31660633. 2. Little TD (2013-02-01). The Oxford Handbook of Quantitative Methods, Vol. 2: Statistical Analysis. Oxford University Press. p. 397. ISBN 978-0-19-993490-4.
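The episode-splitting described in the example can be written down directly. The following Python/pandas sketch (the column names start, stop, and area are our own choices rather than a fixed standard) turns the single subject of the example into the two rows used by survival models with time-varying covariates.

```python
import pandas as pd

# One subject: born at time 0 in area A, moves to area B at time 5, cancer diagnosed at time 8.
subject = {"id": 1, "moves": [(0, "A"), (5, "B")], "event_time": 8, "event": 1}

def split_episodes(subj):
    """Split a subject into one row per area of residence, with (start, stop] intervals.
    Only the final interval carries the event indicator."""
    rows = []
    change_points = [t for t, _ in subj["moves"]] + [subj["event_time"]]
    for (start, area), stop in zip(subj["moves"], change_points[1:]):
        rows.append({"id": subj["id"], "start": start, "stop": stop,
                     "area": area, "event": 0})
    rows[-1]["event"] = subj["event"]   # the event occurs at the end of the last episode
    return pd.DataFrame(rows)

print(split_episodes(subject))
#    id  start  stop area  event
# 0   1      0     5    A      0   <- length 5 in area A
# 1   1      5     8    B      1   <- length 3 in area B, ends with the diagnosis
```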
Congruence (general relativity) In general relativity, a congruence (more properly, a congruence of curves) is the set of integral curves of a (nowhere vanishing) vector field in a four-dimensional Lorentzian manifold which is interpreted physically as a model of spacetime. Often this manifold will be taken to be an exact or approximate solution to the Einstein field equation. Types of congruences Congruences generated by nowhere vanishing timelike, null, or spacelike vector fields are called timelike, null, or spacelike respectively. A congruence is called a geodesic congruence if it admits a tangent vector field ${\vec {X}}$ with vanishing covariant derivative, $\nabla _{\vec {X}}{\vec {X}}=0$. Relation with vector fields The integral curves of the vector field are a family of non-intersecting parameterized curves which fill up the spacetime. The congruence consists of the curves themselves, without reference to a particular parameterization. Many distinct vector fields can give rise to the same congruence of curves, since if $f$ is a nowhere vanishing scalar function, then ${\vec {X}}$ and ${\vec {Y}}=\,f\,{\vec {X}}$ give rise to the same congruence. However, in a Lorentzian manifold, we have a metric tensor, which picks out a preferred vector field among the vector fields which are everywhere parallel to a given timelike or spacelike vector field, namely the field of tangent vectors to the curves. These are respectively timelike or spacelike unit vector fields. Physical interpretation In general relativity, a timelike congruence in a four-dimensional Lorentzian manifold can be interpreted as a family of world lines of certain ideal observers in our spacetime. In particular, a timelike geodesic congruence can be interpreted as a family of free-falling test particles. Null congruences are also important, particularly null geodesic congruences, which can be interpreted as a family of freely propagating light rays. Warning: the world line of a pulse of light moving in a fiber optic cable would not in general be a null geodesic, and light in the very early universe (the radiation-dominated epoch) was not freely propagating. The world line of a radar pulse sent from Earth past the Sun to Venus would however be modeled as a null geodesic arc. In dimensions other than four, the relationship between null geodesics and "light" no longer holds: If "light" is defined as the solution to the Laplacian wave equation, then the propagator has both null and time-like components in odd space-time dimensions, and is no longer a pure Dirac delta function in even space-time dimensions greater than four. Kinematical description Describing the mutual motion of the test particles in a null geodesic congruence in a spacetime such as the Schwarzschild vacuum or FRW dust is a very important problem in general relativity. It is solved by defining certain kinematical quantities which completely describe how the integral curves in a congruence may converge (diverge) or twist about one another. It should be stressed that the kinematical decomposition we are about to describe is pure mathematics valid for any Lorentzian manifold. However, the physical interpretation in terms of test particles and tidal accelerations (for timelike geodesic congruences) or pencils of light rays (for null geodesic congruences) is valid only for general relativity (similar interpretations may be valid in closely related theories). 
The kinematical decomposition of a timelike congruence Consider the timelike congruence generated by some timelike unit vector field X, which we should think of as a first order linear partial differential operator. Then the components of our vector field are now scalar functions given in tensor notation by writing ${\vec {X}}f=f_{,a}\,X^{a}$, where f is an arbitrary smooth function. The acceleration vector is the covariant derivative $\nabla _{\vec {X}}{\vec {X}}$; we can write its components in tensor notation as ${\dot {X}}^{a}={X^{a}}_{;b}X^{b}$ Next, observe that the equation $\left({\dot {X}}^{a}\,X_{b}+{X^{a}}_{;b}\right)\,X^{b}={X^{a}}_{;b}\,X^{b}-{\dot {X}}^{a}=0$ means that the term in parentheses at left is the transverse part of ${X^{a}}_{;b}$. This orthogonality relation holds only when X is a timelike unit vector of a Lorentzian Manifold. It does not hold in more general setting. Write $h_{ab}=g_{ab}+X_{a}\,X_{b}$ for the projection tensor which projects tensors into their transverse parts; for example, the transverse part of a vector is the part orthogonal to ${\vec {X}}$. This tensor can be seen as the metric tensor of the hypersurface whose tangent vectors are orthogonal to X. Thus we have shown that ${\dot {X}}_{a}\,X_{b}+X_{a;b}={h^{m}}_{a}\,{h^{n}}_{b}X_{m;n}$ Next, we decompose this into its symmetric and antisymmetric parts, ${\dot {X}}_{a}\,X_{b}+X_{a;b}=\theta _{ab}+\omega _{ab}$ Here, $\theta _{ab}={h^{m}}_{a}\,{h^{n}}_{b}X_{(m;n)}$ $\omega _{ab}={h^{m}}_{a}\,{h^{n}}_{b}X_{[m;n]}$ are known as the expansion tensor and vorticity tensor respectively. Because these tensors live in the spatial hyperplane elements orthogonal to ${\vec {X}}$, we may think of them as three-dimensional second rank tensors. This can be expressed more rigorously using the notion of Fermi Derivative. Therefore, we can decompose the expansion tensor into its traceless part plus a trace part. Writing the trace as $\theta $, we have $\theta _{ab}=\sigma _{ab}+{\frac {1}{3}}\,\theta \,h_{ab}$ Because the vorticity tensor is antisymmetric, its diagonal components vanish, so it is automatically traceless (and we can replace it with a three-dimensional vector, although we shall not do this). Therefore, we now have $X_{a;b}=\sigma _{ab}+\omega _{ab}+{\frac {1}{3}}\,\theta \,h_{ab}-{\dot {X}}_{a}\,X_{b}$ This is the desired kinematical decomposition. In the case of a timelike geodesic congruence, the last term vanishes identically. The expansion scalar, shear tensor ($\sigma _{ab}$), and vorticity tensor of a timelike geodesic congruence have the following intuitive meaning: 1. the expansion scalar represents the fractional rate at which the volume of a small initially spherical cloud of test particles changes with respect to proper time of the particle at the center of the cloud, 2. the shear tensor represents any tendency of the initial sphere to become distorted into an ellipsoidal shape, 3. the vorticity tensor represents any tendency of the initial sphere to rotate; the vorticity vanishes if and only if the world lines in the congruence are everywhere orthogonal to the spatial hypersurfaces in some foliation of the spacetime, in which case, for a suitable coordinate chart, each hyperslice can be considered as a surface of 'constant time'. See the citations and links below for justification of these claims. 
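As a concrete check of the decomposition above, the following sympy sketch (our own helper code, assuming only the standard sympy API) computes $X_{a;b}$, the expansion scalar, the shear, and the vorticity for the comoving congruence ${\vec {X}}=\partial _{t}$ in a spatially flat FLRW spacetime; the expected result is $\theta =3{\dot {a}}/a$ with vanishing shear, vorticity, and acceleration.

```python
import sympy as sp

t, x, y, z = sp.symbols("t x y z")
coords = (t, x, y, z)
a = sp.Function("a", positive=True)(t)

# Spatially flat FLRW metric, signature (-,+,+,+).
g = sp.diag(-1, a**2, a**2, a**2)
ginv = g.inv()

# Christoffel symbols Gamma^l_{mn} = (1/2) g^{lr} (d_m g_{rn} + d_n g_{rm} - d_r g_{mn}).
def Gamma(l, m, n):
    return sp.Rational(1, 2) * sum(
        ginv[l, r] * (sp.diff(g[r, n], coords[m])
                      + sp.diff(g[r, m], coords[n])
                      - sp.diff(g[m, n], coords[r]))
        for r in range(4))

# Comoving timelike unit vector field X^a = (1, 0, 0, 0), i.e. X_a = (-1, 0, 0, 0).
X_up = [1, 0, 0, 0]
X_dn = [sum(g[i, j] * X_up[j] for j in range(4)) for i in range(4)]

# Covariant derivative of the covector: X_{a;b} = d_b X_a - Gamma^l_{ab} X_l.
nablaX = sp.Matrix(4, 4, lambda i, j: sp.simplify(
    sp.diff(X_dn[i], coords[j]) - sum(Gamma(l, i, j) * X_dn[l] for l in range(4))))

# Projection tensor h_{ab} = g_{ab} + X_a X_b and the kinematical pieces.
h = sp.Matrix(4, 4, lambda i, j: g[i, j] + X_dn[i] * X_dn[j])
theta = sp.simplify(sum(ginv[i, j] * nablaX[i, j] for i in range(4) for j in range(4)))
# Here the acceleration vanishes and X_{a;b} is already spatial, so no extra term is needed.
expansion_tensor = (nablaX + nablaX.T) / 2
shear = (expansion_tensor - theta / 3 * h).applyfunc(sp.simplify)
vorticity = ((nablaX - nablaX.T) / 2).applyfunc(sp.simplify)

print("theta     =", theta)        # 3*Derivative(a(t), t)/a(t)
print("shear     =", shear)        # zero matrix
print("vorticity =", vorticity)    # zero matrix
```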
Curvature and timelike congruences By the Ricci identity (which is often used as the definition of the Riemann tensor), we can write $X_{a;bn}-X_{a;nb}=R_{ambn}\,X^{m}$ By plugging the kinematical decomposition into the left hand side, we can establish relations between the curvature tensor and the kinematical behavior of timelike congruences (geodesic or not). These relations can be used in two ways, both very important: 1. we can (in principle) experimentally determine the curvature tensor of a spacetime from detailed observations of the kinematical behavior of any timelike congruence (geodesic or not), 2. we can obtain evolution equations for the pieces of the kinematical decomposition (expansion scalar, shear tensor, and vorticity tensor) which exhibit direct curvature coupling. In the famous slogan of John Archibald Wheeler, Spacetime tells matter how to move; matter tells spacetime how to curve. We now see how to precisely quantify the first part of this assertion; the Einstein field equation quantifies the second part. In particular, according to the Bel decomposition of the Riemann tensor, taken with respect to our timelike unit vector field, the electrogravitic tensor (or tidal tensor) is defined by $E[{\vec {X}}]_{ab}=R_{ambn}\,X^{m}\,X^{n}$ The Ricci identity now gives $\left(X_{a:bn}-X_{a:nb}\right)\,X^{n}=E[{\vec {X}}]_{ab}$ Plugging in the kinematical decomposition we can eventually obtain ${\begin{aligned}E[{\vec {X}}]_{ab}&={\frac {2}{3}}\,\theta \,\sigma _{ab}-\sigma _{am}\,{\sigma ^{m}}_{b}-\omega _{am}\,{\omega ^{m}}_{b}\\&-{\frac {1}{3}}\left({\dot {\theta }}+{\frac {\theta ^{2}}{3}}\right)\,h_{ab}-{h^{m}}_{a}\,{h^{n}}_{b}\,\left({\dot {\sigma }}_{mn}-{\dot {X}}_{(m;n)}\right)-{\dot {X}}_{a}\,{\dot {X}}_{b}\\\end{aligned}}$ Here, overdots denote differentiation with respect to proper time, counted off along our timelike congruence (i.e. we take the covariant derivative with respect to the vector field X). This can be regarded as a description of how one can determine the tidal tensor from observations of a single timelike congruence. Evolution equations In this section, we turn to the problem of obtaining evolution equations (also called propagation equations or propagation formulae). It will be convenient to write the acceleration vector as ${\dot {X}}^{a}=W^{a}$ and also to set $J_{ab}=X_{a:b}={\frac {\theta }{3}}\,h_{ab}+\sigma _{ab}+\omega _{ab}-{\dot {X}}_{a}\,X_{b}$ Now from the Ricci identity for the tidal tensor we have ${\dot {J}}_{ab}=J_{an;b}\,X^{n}-E[{\vec {X}}]_{ab}$ But $\left(J_{an}\,X^{n}\right)_{;b}=J_{an;b}\,X^{n}+J_{an}\,{X^{n}}_{;b}=J_{an;b}\,X^{n}+J_{am}\,{J^{m}}_{b}$ so we have ${\dot {J}}_{ab}=-J_{am}\,{J^{m}}_{b}-{E[{\vec {X}}]}_{ab}+W_{a;b}$ By plugging in the definition of $J_{ab}$ and taking respectively the diagonal part, the traceless symmetric part, and the antisymmetric part of this equation, we obtain the desired evolution equations for the expansion scalar, the shear tensor, and the vorticity tensor. Consider first the easier case when the acceleration vector vanishes. 
Then (observing that the projection tensor can be used to lower indices of purely spatial quantities), we have $J_{am}\,{J^{m}}_{b}={\frac {\theta ^{2}}{9}}\,h_{ab}+{\frac {2\theta }{3}}\,\left(\sigma _{ab}+\omega _{ab}\right)+\left(\sigma _{am}\,{\sigma ^{m}}_{b}+\omega _{am}\,{\omega ^{m}}_{b}\right)+\left(\sigma _{am}\,{\omega ^{m}}_{b}+\omega _{am}\,{\sigma ^{m}}_{b}\right)$ or ${\dot {J}}_{ab}=-{\frac {\theta ^{2}}{9}}\,h_{ab}-{\frac {2\theta }{3}}\,\left(\sigma _{ab}+\omega _{ab}\right)-\left(\sigma _{am}\,{\sigma ^{m}}_{b}+\omega _{am}\,{\omega ^{m}}_{b}\right)-\left(\sigma _{am}\,{\omega ^{m}}_{b}+\omega _{am}\,{\sigma ^{m}}_{b}\right)-{E[{\vec {X}}]}_{ab}$ By elementary linear algebra, it is easily verified that if $\Sigma ,\Omega $ are respectively three dimensional symmetric and antisymmetric linear operators, then $\Sigma ^{2}+\Omega ^{2}$ is symmetric while $\Sigma \,\Omega +\Omega \,\Sigma $ is antisymmetric, so by lowering an index, the corresponding combinations in parentheses above are symmetric and antisymmetric respectively. Therefore, taking the trace gives Raychaudhuri's equation (for timelike geodesics): ${\dot {\theta }}=\omega ^{2}-\sigma ^{2}-{\frac {\theta ^{2}}{3}}-{E[{\vec {X}}]^{m}}_{m}$ Taking the traceless symmetric part gives ${\dot {\sigma }}_{ab}=-{\frac {2\theta }{3}}\,\sigma _{ab}-\left(\sigma _{am}\,{\sigma ^{m}}_{b}+\omega _{am}\,{\omega ^{m}}_{b}\right)-{E[{\vec {X}}]}_{ab}+{\frac {\sigma ^{2}-\omega ^{2}+{E[{\vec {X}}]^{m}}_{m}}{3}}\,h_{ab}$ and taking the antisymmetric part gives ${\dot {\omega }}_{ab}=-{\frac {2\theta }{3}}\,\omega _{ab}-\left(\sigma _{am}\,{\omega ^{m}}_{b}+\omega _{am}\,{\sigma ^{m}}_{b}\right)$ Here, $\sigma ^{2}=\sigma _{mn}\,\sigma ^{mn},\;\omega ^{2}=\omega _{mn}\,\omega ^{mn}$ are quadratic invariants which are never negative, so that $\sigma ,\omega $ are well-defined real invariants. The trace of the tidal tensor can also be written ${E[{\vec {X}}]^{a}}_{a}=R_{mn}\,X^{m}\,X^{n}$ It is sometimes called the Raychaudhuri scalar; needless to say, it vanishes identically in the case of a vacuum solution. See also • congruence (manifolds) • expansion scalar • expansion tensor • shear tensor • vorticity tensor • Raychaudhuri equation References • Poisson, Eric (2004). A Relativist's Toolkit: The Mathematics of Black Hole Mechanics. Cambridge: Cambridge University Press. Bibcode:2004rtmb.book.....P. ISBN 978-0-521-83091-1. See chapter 2 for an excellent and detailed introduction to geodesic congruences. Poisson's discussion of null geodesic congruences is particularly valuable. • Carroll, Sean M. (2004). Spacetime and Geometry: An Introduction to General Relativity. San Francisco: Addison-Wesley. ISBN 978-0-8053-8732-2. See appendix F for a good elementary discussion of geodesic congruences. (Carroll's notation is somewhat nonstandard.) • Stephani, Hans; Kramer, Dietrich; MacCallum, Malcolm; Hoenselaers, Cornelius; Herlt, Eduard (2003). Exact Solutions to Einstein's Field Equations (2nd ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-46136-8. See chapter 6 for a very detailed introduction to timelike and null congruences. • Wald, Robert M. (1984). General Relativity. Chicago: University of Chicago Press. ISBN 978-0-226-87033-5. See section 9.2 for the kinematics of timelike geodesic congruences. • Hawking, Stephen; Ellis, G. F. R. (1973). The Large Scale Structure of Space-Time. Cambridge: Cambridge University Press. ISBN 978-0-521-09906-6. See section 4.1 for the kinematics of timelike and null congruences. 
• Dasgupta, Anirvan; Nandan, Hemwati; Kar, Sayan (2009). "Kinematics of flows on curved, deformable media". International Journal of Geometric Methods in Modern Physics. 6 (4): 645–666. arXiv:0804.4089. Bibcode:2009IJGMM..06..645D. doi:10.1142/S0219887809003746. See for a detailed introduction to the kinematics of geodesic flows on specific, two dimensional curved surfaces (viz. sphere, hyperbolic space and torus).
Timelike homotopy On a Lorentzian manifold, certain curves are distinguished as timelike. A timelike homotopy between two timelike curves is a homotopy such that each intermediate curve is timelike. No closed timelike curve (CTC) on a Lorentzian manifold is timelike homotopic to a point (that is, null timelike homotopic); a Lorentzian manifold containing a CTC is therefore said to be multiply connected by timelike curves (or timelike multiply connected). A manifold such as the 3-sphere can be simply connected (by any type of curve), and at the same time be timelike multiply connected. Equivalence classes of timelike homotopic curves define their own fundamental group, as noted by Smith (1960). A smooth topological feature which prevents a CTC from being deformed to a point may be called a timelike topological feature. References • J. Wolfgang Smith (1960). "Fundamental groups on a Lorentz manifold". Amer. J. Math. The Johns Hopkins University Press. 82 (4): 873–890. doi:10.2307/2372946. hdl:2027/mdp.39015095257625. JSTOR 2372946. • André Avez (1963). "Essais de géométrie riemannienne hyperbolique globale. Applications à la relativité générale". Annales de l'Institut Fourier. 13 (2): 105–190. doi:10.5802/aif.144.
Timeline of algebra The following is a timeline of key developments of algebra : YearEvent c. 1800 BCThe Old Babylonian Strassburg tablet seeks the solution of a quadratic elliptic equation. c. 1800 BCThe Plimpton 322 tablet gives a table of Pythagorean triples in Babylonian Cuneiform script.[1] 1800 BCBerlin Papyrus 6619 (19th dynasty) contains a quadratic equation and its solution.[2][3] 800 BCBaudhayana, author of the Baudhayana Sulba Sutra, a Vedic Sanskrit geometric text, contains quadratic equations, and calculates the square root of 2 correct to five decimal places c. 300 BCEuclid's Elements gives a geometric construction with Euclidean tools for the solution of the quadratic equation for positive real roots.[4] The construction is due to the Pythagorean School of geometry. c. 300 BCA geometric construction for the solution of the cubic is sought (doubling the cube problem). It is now well known that the general cubic has no such solution using Euclidean tools. 150 BCJain mathematicians in India write the “Sthananga Sutra”, which contains work on the theory of numbers, arithmetical operations, geometry, operations with fractions, simple equations, cubic equations, quartic equations, and permutations and combinations. 250 BCAlgebraic equations are treated in the Chinese mathematics book Jiuzhang suanshu (The Nine Chapters on the Mathematical Art), which contains solutions of linear equations solved using the rule of double false position, geometric solutions of quadratic equations, and the solutions of matrices equivalent to the modern method, to solve systems of simultaneous linear equations.[5] 1st century ADHero of Alexandria gives the earliest fleeting reference to square roots of negative numbers. c. 150Greek mathematician Hero of Alexandria, treats algebraic equations in three volumes of mathematics. c. 200Hellenistic mathematician Diophantus, who lived in Alexandria and is often considered to be the "father of algebra", writes his famous Arithmetica, a work featuring solutions of algebraic equations and on the theory of numbers. 499Indian mathematician Aryabhata, in his treatise Aryabhatiya, obtains whole-number solutions to linear equations by a method equivalent to the modern one, describes the general integral solution of the indeterminate linear equation, gives integral solutions of simultaneous indeterminate linear equations, and describes a differential equation. c. 625Chinese mathematician Wang Xiaotong finds numerical solutions to certain cubic equations.[6] c. 7th century Dates vary from the 3rd to the 12th centuries.[7] The Bakhshali Manuscript written in ancient India uses a form of algebraic notation using letters of the alphabet and other signs, and contains cubic and quartic equations, algebraic solutions of linear equations with up to five unknowns, the general algebraic formula for the quadratic equation, and solutions of indeterminate quadratic equations and simultaneous equations. 7th centuryBrahmagupta invents the method of solving indeterminate equations of the second degree and is the first to use algebra to solve astronomical problems. He also develops methods for calculations of the motions and places of various planets, their rising and setting, conjunctions, and the calculation of eclipses of the sun and the moon 628Brahmagupta writes the Brahmasphuta-siddhanta, where zero is clearly explained, and where the modern place-value Indian numeral system is fully developed. 
It also gives rules for manipulating both negative and positive numbers, methods for computing square roots, methods of solving linear and quadratic equations, and rules for summing series, Brahmagupta's identity, and the Brahmagupta theorem 8th centuryVirasena gives explicit rules for the Fibonacci sequence, gives the derivation of the volume of a frustum using an infinite procedure, and also deals with the logarithm to base 2 and knows its laws c. 800The Abbasid patrons of learning, al-Mansur, Haroun al-Raschid, and al-Mamun, has Greek, Babylonian, and Indian mathematical and scientific works translated into Arabic and begins a cultural, scientific and mathematical awakening after a century devoid of mathematical achievements.[8] 820The word algebra is derived from operations described in the treatise written by the Persian mathematician, Muḥammad ibn Mūsā al-Ḵhwārizmī, titled Al-Kitab al-Jabr wa-l-Muqabala (meaning "The Compendious Book on Calculation by Completion and Balancing") on the systematic solution of linear and quadratic equations. Al-Khwarizmi is often considered the "father of algebra", for founding algebra as an independent discipline and for introducing the methods of "reduction" and "balancing" (the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation) which was what he originally used the term al-jabr to refer to.[9] His algebra was also no longer concerned "with a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study." He also studied an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems."[10] c. 850Persian mathematician al-Mahani conceives the idea of reducing geometrical problems such as duplicating the cube to problems in algebra. c. 990Persian mathematician Al-Karaji (also known as al-Karkhi), in his treatise Al-Fakhri, further develops algebra by extending Al-Khwarizmi's methodology to incorporate integral powers and integral roots of unknown quantities. He replaces geometrical operations of algebra with modern arithmetical operations, and defines the monomials x, x2, x3, .. and 1/x, 1/x2, 1/x3, .. and gives rules for the products of any two of these.[11] He also discovers the first numerical solution to equations of the form ax2n + bxn = c.[12] Al-Karaji is also regarded as the first person to free algebra from geometrical operations and replace them with the type of arithmetic operations which are at the core of algebra today. His work on algebra and polynomials, gave the rules for arithmetic operations to manipulate polynomials. The historian of mathematics F. Woepcke, in Extrait du Fakhri, traité d'Algèbre par Abou Bekr Mohammed Ben Alhacan Alkarkhi (Paris, 1853), praised Al-Karaji for being "the first who introduced the theory of algebraic calculus". Stemming from this, Al-Karaji investigated binomial coefficients and Pascal's triangle.[11] 895Thabit ibn Qurra: the only surviving fragment of his original work contains a chapter on the solution and properties of cubic equations. 
He also generalized the Pythagorean theorem, and discovered the theorem by which pairs of amicable numbers can be found, (i.e., two numbers such that each is the sum of the proper divisors of the other). 953Al-Karaji is the “first person to completely free algebra from geometrical operations and to replace them with the arithmetical type of operations which are at the core of algebra today. He [is] first to define the monomials $x$, $x^{2}$, $x^{3}$, … and $1/x$, $1/x^{2}$, $1/x^{3}$, … and to give rules for products of any two of these. He start[s] a school of algebra which flourished for several hundreds of years”. He also discovers the binomial theorem for integer exponents, which “was a major factor in the development of numerical analysis based on the decimal system.” c. 1000Abū Sahl al-Qūhī (Kuhi) solves equations higher than the second degree. c. 1050Chinese mathematician Jia Xian finds numerical solutions of polynomial equations of arbitrary degree.[13] 1070Omar Khayyám begins to write Treatise on Demonstration of Problems of Algebra and classifies cubic equations. 1072Persian mathematician Omar Khayyam gives a complete classification of cubic equations with positive roots and gives general geometric solutions to these equations found by means of intersecting conic sections.[14] 12th centuryBhaskara Acharya writes the “Bijaganita” (“Algebra”), which is the first text that recognizes that a positive number has two square roots 1130Al-Samawal gives a definition of algebra: “[it is concerned] with operating on unknowns using all the arithmetical tools, in the same way as the arithmetician operates on the known.”[15] 1135Sharafeddin Tusi follows al-Khayyam's application of algebra to geometry, and writes a treatise on cubic equations which “represents an essential contribution to another algebra which aimed to study curves by means of equations, thus inaugurating the beginning of algebraic geometry.”[15] c. 1200Sharaf al-Dīn al-Tūsī (1135–1213) writes the Al-Mu'adalat (Treatise on Equations), which deals with eight types of cubic equations with positive solutions and five types of cubic equations which may not have positive solutions. He uses what would later be known as the "Ruffini-Horner method" to numerically approximate the root of a cubic equation. He also develops the concepts of the maxima and minima of curves in order to solve cubic equations which may not have positive solutions.[16] He understands the importance of the discriminant of the cubic equation and uses an early version of Cardano's formula[17] to find algebraic solutions to certain types of cubic equations. Some scholars, such as Roshdi Rashed, argue that Sharaf al-Din discovered the derivative of cubic polynomials and realized its significance, while other scholars connect his solution to the ideas of Euclid and Archimedes.[18] 1202Leonardo Fibonacci of Pisa publishes his Liber Abaci, a work on algebra that introduces Arabic numerals to Europe.[19] c. 1300Chinese mathematician Zhu Shijie deals with polynomial algebra, solves quadratic equations, simultaneous equations and equations with up to four unknowns, and numerically solves some quartic, quintic and higher-order polynomial equations.[20] c. 1400Jamshīd al-Kāshī develops an early form of Newton's method to numerically solve the equation $x^{P}-N=0$ to find roots of N.[21] c. 
1400Indian mathematician Madhava of Sangamagrama finds the solution of transcendental equations by iteration, iterative methods for the solution of non-linear equations, and solutions of differential equations. 15th centuryNilakantha Somayaji, a Kerala school mathematician, writes the “Aryabhatiya Bhasya”, which contains work on infinite-series expansions, problems of algebra, and spherical geometry 1412–1482Arab mathematician Abū al-Hasan ibn Alī al-Qalasādī takes "the first steps toward the introduction of algebraic symbolism." He uses "short Arabic words, or just their initial letters, as mathematical symbols."[22] 1535Scipione del Ferro and Niccolò Fontana Tartaglia, in Italy, independently solve the general cubic equation.[23] 1545Girolamo Cardano publishes Ars magna -The great art which gives del Ferro's solution to the cubic equation[23] and Lodovico Ferrari's solution to the quartic equation. 1572Rafael Bombelli recognizes the complex roots of the cubic and improves current notation.[24] 1591Franciscus Vieta develops improved symbolic notation for various powers of an unknown and uses vowels for unknowns and consonants for constants in In artem analyticam isagoge. 1608Christopher Clavius publishes his Algebra 1619René Descartes discovers analytic geometry. (Pierre de Fermat claimed that he also discovered it independently.) 1631Thomas Harriot in a posthumous publication is the first to use symbols < and > to indicate "less than" and "greater than", respectively.[25] 1637Pierre de Fermat claims to have proven Fermat's Last Theorem in his copy of Diophantus' Arithmetica, 1637René Descartes introduces the use of the letters z, y, and x for unknown quantities.[26][27] 1637The term imaginary number is first used by René Descartes; it is meant to be derogatory. 1682Gottfried Wilhelm Leibniz develops his notion of symbolic manipulation with formal rules which he calls characteristica generalis.[28] 1683Japanese mathematician Kowa Seki, in his Method of solving the dissimulated problems, discovers the determinant,[29] discriminant, and Bernoulli numbers.[29] 1693Leibniz solves systems of simultaneous linear equations using matrices and determinants. 1722Abraham de Moivre states de Moivre's formula connecting trigonometric functions and complex numbers, 1750Gabriel Cramer, in his treatise Introduction to the analysis of algebraic curves, states Cramer's rule and studies algebraic curves, matrices and determinants.[30] 1797Caspar Wessel associates vectors with complex numbers and studies complex number operations in geometrical terms, 1799Carl Friedrich Gauss proves the fundamental theorem of algebra (every polynomial equation has a solution among the complex numbers), 1799Paolo Ruffini partially proves the Abel–Ruffini theorem that quintic or higher equations cannot be solved by a general formula, 1806Jean-Robert Argand publishes proof of the Fundamental theorem of algebra and the Argand diagram, 1824Niels Henrik Abel proves that the general quintic equation is insoluble by radicals.[23] 1832Galois theory is developed by Évariste Galois in his work on abstract algebra.[23] 1843William Rowan Hamilton discovers quaternions. 1853Arthur Cayley provides a modern definition of groups. 1847George Boole formalizes symbolic logic in The Mathematical Analysis of Logic, defining what now is called Boolean algebra. 1873Charles Hermite proves that e is transcendental. 1878Charles Hermite solves the general quintic equation by means of elliptic and modular functions. 
• 1926 – Emmy Noether extends Hilbert's theorem on the finite basis problem to representations of a finite group over any field.
• 1929 – Emmy Noether combines work on structure theory of associative algebras and the representation theory of groups into a single arithmetic theory of modules and ideals in rings satisfying ascending chain conditions, providing the foundation for modern algebra.
• 1981 – Mikhail Gromov develops the theory of hyperbolic groups, revolutionizing both infinite group theory and global differential geometry.
See also • Mathematics portal
References 1. Anglin, W.S. (1994). Mathematics: A Concise History and Philosophy. Springer. p. 8. ISBN 978-0-387-94280-3. 2. Smith, David Eugene (1958). History of Mathematics. Courier Dover Publications. p. 443. ISBN 978-0-486-20430-7. 3. "Egyptian Mathematics Papyri". Mathematicians and Scientists of the African Diaspora. The State University of New York at Buffalo. 4. Euclid (January 1956). Euclid's Elements. Courier Dover Publications. p. 258. ISBN 978-0-486-60089-5. 5. Crossley, John; W.-C. Lun, Anthony (1999). The Nine Chapters on the Mathematical Art: Companion and Commentary. Oxford University Press. p. 349. ISBN 978-0-19-853936-0. 6. O'Connor, John J.; Robertson, Edmund F., "Wang Xiaotong", MacTutor History of Mathematics Archive, University of St Andrews 7. Hayashi (2005), p. 371. "The dates so far proposed for the Bakhshali work vary from the third to the twelfth centuries AD, but a recently made comparative study has shown many similarities, particularly in the style of exposition and terminology, between Bakhshalī work and Bhāskara I's commentary on the Āryabhatīya. This seems to indicate that both works belong to nearly the same period, although this does not deny the possibility that some of the rules and examples in the Bakhshālī work date from anterior periods." 8. Boyer (1991), "The Arabic Hegemony" p. 227. "The first century of the Muslim empire had been devoid of scientific achievement. This period (from about 650 to 750) had been, in fact, perhaps the nadir in the development of mathematics, for the Arabs had not yet achieved intellectual drive, and concern for learning in other parts of the world had faded. Had it not been for the sudden cultural awakening in Islam during the second half of the eighth century, considerably more of ancient science and mathematics would have been lost. To Baghdad at that time were called scholars from Syria, Iran, and Mesopotamia, including Jews and Nestorian Christians; under three great Abbasid patrons of learning - al Mansur, Haroun al-Raschid, and al-Mamun - The city became a new Alexandria. It was during the caliphate of al-Mamun (809-833), however, that the Arabs fully indulged their passion for translation. The caliph is said to have had a dream in which Aristotle appeared, and as a consequence al-Mamun determined to have Arabic versions made of all the Greek works that he could lay his hands on, including Ptolemy's Almagest and a complete version of Euclid's Elements. From the Byzantine Empire, with which the Arabs maintained an uneasy peace, Greek manuscripts were obtained through peace treaties. Al-Mamun established at Baghdad a "House of Wisdom" (Bait al-hikma) comparable to the ancient Museum at Alexandria." 9. Boyer (1991), "The Arabic Hegemony" p. 229. "It is not certain just what the terms al-jabr and muqabalah mean, but the usual interpretation is similar to that implied in the translation above.
The word al-jabr presumably meant something like "restoration" or "completion" and seems to refer to the transposition of subtracted terms to the other side of an equation; the word muqabalah is said to refer to "reduction" or "balancing" - that is, the cancellation of like terms on opposite sides of the equation." 10. Rashed, R.; Armstrong, Angela (1994). The Development of Arabic Mathematics. Springer. pp. 11–2. ISBN 0-7923-2565-6. OCLC 29181926. 11. O'Connor, John J.; Robertson, Edmund F., "Abu Bekr ibn Muhammad ibn al-Husayn Al-Karaji", MacTutor History of Mathematics Archive, University of St Andrews 12. Boyer (1991), "The Arabic Hegemony" p. 239. "Abu'l Wefa was a capable algebraist as well as a trigonometer. [..] His successor al-Karkhi evidently used this translation to become an Arabic disciple of Diophantus - but without Diophantine analysis! [..] In particular, to al-Karaji is attributed the first numerical solution of equations of the form $ax^{2n}+bx^{n}=c$ (only equations with positive roots were considered)." 13. O'Connor, John J.; Robertson, Edmund F., "Jia Xian", MacTutor History of Mathematics Archive, University of St Andrews 14. Boyer (1991), "The Arabic Hegemony" pp. 241–242. "Omar Khayyam (ca. 1050-1123), the "tent-maker," wrote an Algebra that went beyond that of al-Khwarizmi to include equations of third degree. Like his Arab predecessors, Omar Khayyam provided for quadratic equations both arithmetic and geometric solutions; for general cubic equations, he believed (mistakenly, as the sixteenth century later showed), arithmetic solutions were impossible; hence he gave only geometric solutions. The scheme of using intersecting conics to solve cubics had been used earlier by Menaechmus, Archimedes, and Alhazan, but Omar Khayyam took the praiseworthy step of generalizing the method to cover all third-degree equations (having positive roots)." 15. Arabic mathematics, MacTutor History of Mathematics archive, University of St Andrews, Scotland 16. O'Connor, John J.; Robertson, Edmund F., "Sharaf al-Din al-Muzaffar al-Tusi", MacTutor History of Mathematics Archive, University of St Andrews 17. Rashed, Roshdi; Armstrong, Angela (1994). The Development of Arabic Mathematics. Springer. pp. 342–3. ISBN 0-7923-2565-6. 18. Berggren, J. L.; Al-Tūsī, Sharaf Al-Dīn; Rashed, Roshdi; Al-Tusi, Sharaf Al-Din (1990). "Innovation and Tradition in Sharaf al-Din al-Tusi's Muadalat". Journal of the American Oriental Society. 110 (2): 304–9. doi:10.2307/604533. JSTOR 604533. Rashed has argued that Sharaf al-Din discovered the derivative of cubic polynomials and realized its significance for investigating conditions under which cubic equations were solvable; however, other scholars have suggested quite different explanations of Sharaf al-Din's thinking, which connect it with mathematics found in Euclid or Archimedes. 19. Ball, W. W. Rouse (1960). A Short Account of the History of Mathematics. Courier Dover Publications. p. 167. ISBN 978-0-486-15784-9. 20. Grattan-Guinness, Ivor (1997). The Norton History of the Mathematical Sciences. W.W. Norton. p. 108. ISBN 978-0-393-04650-2. 21. Ypma, Tjalling J. (1995). "Historical development of the Newton-Raphson method". SIAM Review. 37 (4): 531–51. doi:10.1137/1037125. 22. O'Connor, John J.; Robertson, Edmund F., "Abu'l Hasan ibn Ali al Qalasadi", MacTutor History of Mathematics Archive, University of St Andrews 23. Stewart, Ian (2004). Galois Theory (Third ed.). Chapman & Hall/CRC Mathematics. ISBN 9781584883937. 24. Cooke, Roger (16 May 2008).
Classical Algebra: Its Nature, Origins, and Uses. John Wiley & Sons. p. 70. ISBN 978-0-470-27797-3. 25. Boyer (1991), "Prelude to Modern Mathematics" p. 306. "Harriot knew of relationships between roots and coefficients and between roots and factors, but like Viète he was hampered by failure to take note of negative and imaginary roots. In notation, however, he advanced the use of symbolism, being responsible for the signs > and < for 'greater than' and 'less than.'" 26. Cajori, Florian (1919). "How x Came to Stand for Unknown Quantity". School Science and Mathematics. 19 (8): 698–699. doi:10.1111/j.1949-8594.1919.tb07713.x. 27. Cajori, Florian (1928). A History of Mathematical Notations. Vol. 1. Chicago: Open Court Publishing. p. 381. ISBN 9780486677668. 28. Struik, D. J. A Source Book in Mathematics, 1200-1800. Harvard University Press. p. 123. ISBN 978-0-674-82355-6. 29. O'Connor, John J.; Robertson, Edmund F., "Takakazu Shinsuke Seki", MacTutor History of Mathematics Archive, University of St Andrews 30. O'Connor, John J.; Robertson, Edmund F., "Gabriel Cramer", MacTutor History of Mathematics Archive, University of St Andrews • Boyer, Carl B. (1991). A History of Mathematics (Second ed.). John Wiley & Sons, Inc. ISBN 0-471-54397-7. • Hayashi, Takao (2005). "Indian Mathematics". In Flood, Gavin (ed.). The Blackwell Companion to Hinduism. Oxford: Basil Blackwell. pp. 360–375. ISBN 978-1-4051-3251-0.
Timeline of women in mathematics in the United States There is a long history of women in mathematics in the United States. All women mentioned here are American unless otherwise noted. Timeline 19th Century • 1829: The first public examination of an American girl in geometry was held.[1] • 1886: Winifred Edgerton Merrill became the first American woman to earn a PhD in mathematics, which she earned from Columbia University.[2] 20th Century • 1913: Mildred Sanderson earned her PhD for a thesis that included an important theorem about modular invariants.[3] • 1927: Anna Pell-Wheeler became the first woman to present a lecture at the American Mathematical Society Colloquium.[4] • 1943: Euphemia Haynes became the first African-American woman to earn a Ph.D. in mathematics, which she earned from Catholic University of America.[5] • 1949: Gertrude Mary Cox became the first woman elected into the International Statistical Institute.[6] • 1956: Gladys West began collecting data from satellites at the Naval Surface Warfare Center Dahlgren Division. Her calculations directly impacted the development of accurate GPS systems.[7] • 1962: Mina Rees became the first woman to win the Mathematical Association of America's highest honor, the Yueh-Gin Gung and Dr. Charles Y. Hu Award for Distinguished Service to Mathematics.[4] • 1966: Mary L. Boas published Mathematical Methods in the Physical Sciences, which was still widely used in college classrooms as of 1999.[8][9] 1970s • 1970: Mina Rees became the first female president of the American Association for the Advancement of Science.[10] • 1971: • Mary Ellen Rudin constructed the first Dowker space.[11] • The Association for Women in Mathematics (AWM) was founded. It is a professional society whose mission is to encourage women and girls to study and to have active careers in the mathematical sciences, and to promote equal opportunity for and the equal treatment of women and girls in the mathematical sciences. It is incorporated in the state of Massachusetts.[12] • The American Mathematical Society established its Joint Committee on Women in the Mathematical Sciences (JCW), which later became a joint committee of multiple scholarly societies.[13] • 1973: Jean Taylor published her dissertation on "Regularity of the Singular Set of Two-Dimensional Area-Minimizing Flat Chains Modulo 3 in R3" which solved a long-standing problem about length and smoothness of soap-film triple function curves.[14] • 1974: Joan Birman published the book Braids, Links, and Mapping Class Groups. 
It has become a standard introduction, with many of today's researchers having learned the subject through it.[15] • 1975–1977: Marjorie Rice, who had no formal training in mathematics beyond high school, discovered three new types of tessellating pentagons and more than sixty distinct tessellations by pentagons.[16] • 1975: Julia Robinson became the first female mathematician elected to the National Academy of Sciences.[17] • 1979: • Dorothy Lewis Bernstein became the first female president of the Mathematical Association of America.[18] • Mary Ellen Rudin became the first woman to present the MAA's Earle Raymond Hedrick Lectures, intended to showcase skilled expositors and enrich the understanding of instructors of college-level mathematics.[4] 1980s • 1981: Doris Schattschneider became the first female editor of Mathematics Magazine, a refereed bimonthly publication of the Mathematical Association of America.[19][20] • 1983: Julia Robinson became the first female president of the American Mathematical Society,[17] and the first female mathematician to be awarded a MacArthur Fellowship.[4] • 1988: Doris Schattschneider became the first woman to present the MAA's J. Sutherland Frame Lectures.[4] 1990s • 1992: Gloria Gilmer became the first woman to deliver a major National Association of Mathematicians lecture (it was the Cox-Talbot address).[21] • 1995: Margaret Wright became the first female president of the Society for Industrial and Applied Mathematics.[4] • 1996: Joan Birman became the first woman to receive the MAA's Chauvenet Prize, an annual award for expository articles.[4] • 1998: Melanie Wood became the first female American to make the U.S. International Math Olympiad Team. She won silver medals in the 1998 and 1999 International Mathematical Olympiads.[22] 21st Century • 2002: Melanie Wood became the first American woman and second woman overall to be named a Putnam Fellow in 2002. Putnam Fellows are the top five (or six, in case of a tie) scorers on William Lowell Putnam Mathematical Competition.[23][24] • 2004: • Melanie Wood became the first woman to win the Frank and Brennie Morgan Prize for Outstanding Research in Mathematics by an Undergraduate Student. It is an annual award given to an undergraduate student in the US, Canada, or Mexico who demonstrates superior mathematics research.[25] • Alison Miller became the first female gold medal winner on the U.S. International Mathematical Olympiad Team.[26] • 2006: Stefanie Petermichl, a German mathematical analyst then at the University of Texas at Austin, became the first woman to win the Salem Prize, an annual award given to young mathematicians who have worked in Raphael Salem's field of interest, chiefly topics in analysis related to Fourier series.[27][4] She shared the prize with Artur Avila.[28] • 2007: Kaumudi Joshipura, an Indian dentist-scientist, biostatistician, and epidemiologist, became the NIH endowed chair and director of the center for clinical research and health promotion at University of Puerto Rico, Medical Sciences Campus.[29][30] • 2019: Karen Uhlenbeck became the first woman to win the Abel Prize, with the award committee citing "the fundamental impact of her work on analysis, geometry and mathematical physics."[31] See also Timeline of women in mathematics References 1. Elizabeth Cady Stanton; Susan B. Anthony; Matilda Joslyn Gage; Ida Husted Harper, eds. (1889). History of Woman Suffrage: 1848–1861, Volume 1. Susan B. Anthony. p. 36. Retrieved 2011-04-18. 2. Susan E. Kelly & Sarah A. 
Rozner (28 February 2012). "Winifred Edgerton Merrill:"She Opened the Door"" (PDF). Notices of the AMS. 59 (4). Retrieved 25 January 2014. 3. "Mildred Leonora Sanderson". agnesscott.edu. Retrieved 2014-01-25. 4. "Prizes, Awards, and Honors for Women Mathematicians". agnesscott.edu. Retrieved 2014-01-25. 5. "Euphemia Lofton Haynes, first African American woman mathematician". math.buffalo.edu. Retrieved 2014-01-25. 6. "Gertrude Mary Cox". agnesscott.edu. Retrieved 2014-01-25. 7. "How Gladys West uncovered the 'Hidden Figures' of GPS". GPS World. 2018-03-19. Retrieved 2018-09-22. 8. Mary L. Boas (1966). Mathematical methods in the physical sciences. Wiley. ISBN 9780471084174. 9. Spector, Donald (1999). "Book Reviews". American Journal of Physics. 67 (2): 165–169. doi:10.1119/1.19216. 10. "Mina Rees". agnesscott.edu. Retrieved 2014-01-25. 11. "New Zealand Mathematical Societu Newsletter Number 84, April 2002". Massey.ac.nz. Retrieved 2017-06-20. 12. "About AWM - AWM Association for Women in Mathematics". Retrieved 2014-01-25. 13. "JCW-Math | Joint Committee on Women in the Mathematical Sciences". jcwmath.wordpress.com. Retrieved 2014-01-25. 14. "Jean Taylor". agnesscott.edu. Retrieved 2014-01-25. 15. "Interview with Joan Birman" (PDF). Notices of the AMS. 54 (1). 4 December 2006. Retrieved 25 January 2014. 16. Doris Schattschneider. "Perplexing Pentagons". britton.disted.camosun.bc.ca. Archived from the original on 2016-08-13. Retrieved 2014-01-25. 17. "Profiles of Women in Mathematics: Julia Robinson". awm-math.org. Retrieved 2014-01-25. 18. Oakes, E.H. (2007). Encyclopedia of World Scientists. Facts On File, Incorporated. ISBN 9781438118826. 19. "2005 Parson Lecturer - Dr. Doris Schattschneider". University of North Carolina at Asheville, Department of Mathematics. Archived from the original on 2014-01-11. Retrieved 2013-07-13.. 20. Riddle, Larry (April 5, 2013). "Biographies of Women Mathematicians | Doris Schattschneider". Agnes Scott College. Retrieved 2013-07-13. 21. "Gloria Ford Gilmer". math.buffalo.edu. Retrieved 2014-01-25. 22. Rimer, Sara (10 October 2008). "Math Skills Suffer in U.S., Study Finds". The New York Times. Retrieved 2019-11-20. 23. "Duke Magazine-Where Are They Now?-January/February 2010". dukemagazine.duke.edu. Retrieved 2014-01-25. 24. "Melanie Wood: The Making of a Mathematician - Cogito". cogito.cty.jhu.edu. Retrieved 2014-01-25. 25. "2003 Morgan Prize" (PDF). Notices of the AMS. 51 (4). 26 February 2004. Retrieved 25 January 2014. 26. "Math Forum @ Drexel: Congratulations, Alison!". mathforum.org. Retrieved 2014-01-25. 27. Short vita, retrieved 2016-07-04. 28. "UZH - Fields Medal Winner Artur Avila Appointed Full Professor at the University of Zurich". Media.uzh.ch. 2018-07-24. Retrieved 2018-10-09. 29. Joshipura, Kaumudi Jinraj (February 2017). "CV" (PDF). Harvard School of Public Health. 30. "Kaumudi Joshipura". Harvard School of Public Health. Retrieved 2019-07-29. 31. Change, Kenneth (March 19, 2019). "Karen Uhlenbeck Is First Woman to Receive Abel Prize in Mathematics". New York Times. Retrieved 19 March 2019. Further reading • A Brief History of the Association for Women in Mathematics: The Presidents' Perspectives, by Lenore Blum (1991)
Timeline of abelian varieties This is a timeline of the theory of abelian varieties in algebraic geometry, including elliptic curves. Early history • c. 1000 Al-Karaji writes on congruent numbers[1] Seventeenth century • Fermat studies descent for elliptic curves • 1643 Fermat poses an elliptic curve Diophantine equation[2] • 1670 Fermat's son published his Diophantus with notes Eighteenth century • 1718 Giulio Carlo Fagnano dei Toschi, studies the rectification of the lemniscate, addition results for elliptic integrals.[3] • 1736 Leonhard Euler writes on the pendulum equation without the small-angle approximation.[4] • 1738 Euler writes on curves of genus 1 considered by Fermat and Frenicle • 1750 Euler writes on elliptic integrals • 23 December 1751 – 27 January 1752: Birth of the theory of elliptic functions, according to later remarks of Jacobi, as Euler writes on Fagnano's work.[5] • 1775 John Landen publishes Landen's transformation,[6] an isogeny formula. • 1786 Adrien-Marie Legendre begins to write on elliptic integrals • 1797 Carl Friedrich Gauss discovers double periodicity of the lemniscate function[7] • 1799 Gauss finds the connection of the length of a lemniscate and a case of the arithmetic-geometric mean, giving a numerical method for a complete elliptic integral.[8] Nineteenth century • 1826 Niels Henrik Abel, Abel-Jacobi map • 1827 Inversion of elliptic integrals independently by Abel and Carl Gustav Jacob Jacobi • 1829 Jacobi, Fundamenta nova theoriae functionum ellipticarum, introduces four theta functions of one variable • 1835 Jacobi points out the use of the group law for diophantine geometry, in De usu Theoriae Integralium Ellipticorum et Integralium Abelianorum in Analysi Diophantea[9] • 1836-7 Friedrich Julius Richelot, the Richelot isogeny.[10] • 1847 Adolph Göpel gives the equation of the Kummer surface[11] • 1851 Johann Georg Rosenhain writes a prize essay on the inversion problem in genus 2.[12] • c. 1850 Thomas Weddle - Weddle surface • 1856 Weierstrass elliptic functions • 1857 Bernhard Riemann[13] lays the foundations for further work on abelian varieties in dimension > 1, introducing the Riemann bilinear relations and Riemann theta function. • 1865 Carl Johannes Thomae, Theorie der ultraelliptischen Funktionen und Integrale erster und zweiter Ordnung[14] • 1866 Alfred Clebsch and Paul Gordan, Theorie der Abel'schen Functionen • 1869 Karl Weierstrass proves an abelian function satisfies an algebraic addition theorem • 1879, Charles Auguste Briot, Théorie des fonctions abéliennes • 1880 In a letter to Richard Dedekind, Leopold Kronecker describes his Jugendtraum,[15] to use complex multiplication theory to generate abelian extensions of imaginary quadratic fields • 1884 Sofia Kovalevskaya writes on the reduction of abelian functions to elliptic functions[16] • 1888 Friedrich Schottky finds a non-trivial condition on the theta constants for curves of genus $g=4$, launching the Schottky problem. • 1891 Appell–Humbert theorem of Paul Émile Appell and Georges Humbert, classifies the holomorphic line bundles on an abelian surface by cocycle data. • 1894 Die Entwicklung der Theorie der algebräischen Functionen in älterer und neuerer Zeit, report by Alexander von Brill and Max Noether • 1895 Wilhelm Wirtinger, Untersuchungen über Thetafunktionen, studies Prym varieties • 1897 H. F. 
Baker, Abelian Functions: Abel's Theorem and the Allied Theory of Theta Functions Twentieth century • c.1910 The theory of Poincaré normal functions implies that the Picard variety and Albanese variety are isogenous.[17] • 1913 Torelli's theorem[18] • 1916 Gaetano Scorza[19] applies the term "abelian variety" to complex tori. • 1921 Solomon Lefschetz shows that any complex torus with Riemann matrix satisfying the necessary conditions can be embedded in some complex projective space using theta-functions • 1922 Louis Mordell proves Mordell's theorem: the rational points on an elliptic curve over the rational numbers form a finitely-generated abelian group • 1929 Arthur B. Coble, Algebraic Geometry and Theta Functions • 1939 Siegel modular forms[20] • c. 1940 André Weil defines "abelian variety" • 1952 Weil defines an intermediate Jacobian • Theorem of the cube • Selmer group • Michael Atiyah classifies holomorphic vector bundles on an elliptic curve • 1961 Goro Shimura and Yutaka Taniyama, Complex Multiplication of Abelian Varieties and its Applications to Number Theory • Néron model • Birch–Swinnerton–Dyer conjecture • Moduli space for abelian varieties • Duality of abelian varieties • c.1967 David Mumford develops a new theory of the equations defining abelian varieties • 1968 Serre–Tate theorem on good reduction extends the results of Max Deuring on elliptic curves to the abelian variety case.[21] • c. 1980 Mukai–Fourier transform: the Poincaré line bundle as Mukai–Fourier kernel induces an equivalence of the derived categories of coherent sheaves for an abelian variety and its dual.[22] • 1983 Takahiro Shiota proves Novikov's conjecture on the Schottky problem • 1985 Jean-Marc Fontaine shows that any positive-dimensional abelian variety over the rationals has bad reduction somewhere.[23] Twenty-first century • 2001 Proof of the modularity theorem for elliptic curves is completed. Notes 1. PDF 2. Miscellaneous Diophantine Equations at MathPages 3. Fagnano_Giulio biography 4. E. T. Whittaker, A Treatise on the Analytical Dynamics of Particles and Rigid Bodies (fourth edition 1937), p. 72. 5. André Weil, Number Theory: An approach through history (1984), p. 1. 6. Landen biography 7. Chronology of the Life of Carl F. Gauss 8. Semen Grigorʹevich Gindikin, Tales of Physicists and Mathematicians (1988 translation), p. 143. 9. Dale Husemoller, Elliptic Curves. 10. Richelot, Essai sur une méthode générale pour déterminer les valeurs des intégrales ultra-elliptiques, fondée sur des transformations remarquables de ces transcendantes, C. R. Acad. Sci. Paris. 2 (1836), 622-627; De transformatione integralium Abelianorum primi ordinis commentatio, J. Reine Angew. Math. 16 (1837), 221-341. 11. Gopel biography 12. "Rosenhain biography". www.gap-system.org. Archived from the original on 2008-09-07. 13. Theorie der Abel'schen Funktionen, J. Reine Angew. Math. 54 (1857), 115-180 14. "Thomae biography". www.gap-system.org. Archived from the original on 2006-09-28. 15. Some Contemporary Problems with Origins in the Jugendtraum, Robert Langlands 16. Über die Reduction einer bestimmten Klasse Abel'scher Integrale Ranges auf elliptische Integrale, Acta Mathematica 4, 392–414 (1884). 17. PDF, p. 168. 18. Ruggiero Torelli, Sulle varietà di Jacobi, Rend. della R. Acc. Nazionale dei Lincei (5), 22, 1913, 98–103. 19. Gaetano Scorza, Intorno alla teoria generale delle matrici di Riemann e ad alcune sue applicazioni, Rend. del Circolo Mat. di Palermo 41 (1916) 20. 
Carl Ludwig Siegel, Einführung in die Theorie der Modulfunktionen n-ten Grades, Mathematische Annalen 116 (1939), 617–657 21. Jean-Pierre Serre and John Tate, Good Reduction of Abelian Varieties, Annals of Mathematics, Second Series, Vol. 88, No. 3 (Nov., 1968), pp. 492–517. 22. Daniel Huybrechts, Fourier–Mukai transforms in algebraic geometry (2006), Ch. 9. 23. Jean-Marc Fontaine, Il n'y a pas de variété abélienne sur Z, Inventiones Mathematicae (1985) no. 3, 515–538.
Timeline of algorithms The following timeline of algorithms outlines the development of algorithms (mainly "mathematical recipes") since their inception. Medieval Period • Before – writing about "recipes" (on cooking, rituals, agriculture and other themes) • c. 1700–2000 BC – Egyptians develop earliest known algorithms for multiplying two numbers • c. 1600 BC – Babylonians develop earliest known algorithms for factorization and finding square roots • c. 300 BC – Euclid's algorithm • c. 200 BC – the Sieve of Eratosthenes • 263 AD – Gaussian elimination described by Liu Hui • 628 – Chakravala method described by Brahmagupta • c. 820 – Al-Khawarizmi described algorithms for solving linear equations and quadratic equations in his Algebra; the word algorithm comes from his name • 825 – Al-Khawarizmi described the algorism, algorithms for using the Hindu–Arabic numeral system, in his treatise On the Calculation with Hindu Numerals, which was translated into Latin as Algoritmi de numero Indorum, where "Algoritmi", the translator's rendition of the author's name gave rise to the word algorithm (Latin algorithmus) with a meaning "calculation method" • c. 850 – cryptanalysis and frequency analysis algorithms developed by Al-Kindi (Alkindus) in A Manuscript on Deciphering Cryptographic Messages, which contains algorithms on breaking encryptions and ciphers[1] • c. 1025 – Ibn al-Haytham (Alhazen), was the first mathematician to derive the formula for the sum of the fourth powers, and in turn, he develops an algorithm for determining the general formula for the sum of any integral powers, which was fundamental to the development of integral calculus[2] • c. 1400 – Ahmad al-Qalqashandi gives a list of ciphers in his Subh al-a'sha which include both substitution and transposition, and for the first time, a cipher with multiple substitutions for each plaintext letter; he also gives an exposition on and worked example of cryptanalysis, including the use of tables of letter frequencies and sets of letters which can not occur together in one word Before 1940 • 1540 – Lodovico Ferrari discovered a method to find the roots of a quartic polynomial • 1545 – Gerolamo Cardano published Cardano's method for finding the roots of a cubic polynomial • 1614 – John Napier develops method for performing calculations using logarithms • 1671 – Newton–Raphson method developed by Isaac Newton • 1690 – Newton–Raphson method independently developed by Joseph Raphson • 1706 – John Machin develops a quickly converging inverse-tangent series for π and computes π to 100 decimal places • 1768 – Leonard Euler publishes his method for numerical integration of ordinary differential equations in problem 85 of Institutiones calculi integralis[3] • 1789 – Jurij Vega improves Machin's formula and computes π to 140 decimal places, • 1805 – FFT-like algorithm known by Carl Friedrich Gauss • 1842 – Ada Lovelace writes the first algorithm for a computing engine • 1903 – A fast Fourier transform algorithm presented by Carle David Tolmé Runge • 1918 - Soundex • 1926 – Borůvka's algorithm • 1926 – Primary decomposition algorithm presented by Grete Hermann[4] • 1927 – Hartree–Fock method developed for simulating a quantum many-body system in a stationary state. • 1934 – Delaunay triangulation developed by Boris Delaunay • 1936 – Turing machine, an abstract machine developed by Alan Turing, with others developed the modern notion of algorithm. 1940s • 1942 – A fast Fourier transform algorithm developed by G.C. 
Danielson and Cornelius Lanczos • 1945 – Merge sort developed by John von Neumann • 1947 – Simplex algorithm developed by George Dantzig 1950s • 1952 – Huffman coding developed by David A. Huffman • 1953 – Simulated annealing introduced by Nicholas Metropolis • 1954 – Radix sort computer algorithm developed by Harold H. Seward • 1964 – Box–Muller transform for fast generation of normally distributed numbers published by George Edward Pelham Box and Mervin Edgar Muller. Independently pre-discovered by Raymond E. A. C. Paley and Norbert Wiener in 1934. • 1956 – Kruskal's algorithm developed by Joseph Kruskal • 1956 – Ford–Fulkerson algorithm developed and published by R. Ford Jr. and D. R. Fulkerson • 1957 – Prim's algorithm developed by Robert Prim • 1957 – Bellman–Ford algorithm developed by Richard E. Bellman and L. R. Ford, Jr. • 1959 – Dijkstra's algorithm developed by Edsger Dijkstra • 1959 – Shell sort developed by Donald L. Shell • 1959 – De Casteljau's algorithm developed by Paul de Casteljau • 1959 – QR factorization algorithm developed independently by John G.F. Francis and Vera Kublanovskaya[5][6] • 1959 – Rabin–Scott powerset construction for converting NFA into DFA published by Michael O. Rabin and Dana Scott 1960s • 1960 – Karatsuba multiplication • 1961 – CRC (Cyclic redundancy check) invented by W. Wesley Peterson • 1962 – AVL trees • 1962 – Quicksort developed by C. A. R. Hoare • 1962 – Bresenham's line algorithm developed by Jack E. Bresenham • 1962 – Gale–Shapley 'stable-marriage' algorithm developed by David Gale and Lloyd Shapley • 1964 – Heapsort developed by J. W. J. Williams • 1964 – multigrid methods first proposed by R. P. Fedorenko • 1965 – Cooley–Tukey algorithm rediscovered by James Cooley and John Tukey • 1965 – Levenshtein distance developed by Vladimir Levenshtein • 1965 – Cocke–Younger–Kasami (CYK) algorithm independently developed by Tadao Kasami • 1965 – Buchberger's algorithm for computing Gröbner bases developed by Bruno Buchberger • 1965 – LR parsers invented by Donald Knuth • 1966 – Dantzig algorithm for shortest path in a graph with negative edges • 1967 – Viterbi algorithm proposed by Andrew Viterbi • 1967 – Cocke–Younger–Kasami (CYK) algorithm independently developed by Daniel H. Younger • 1968 – A* graph search algorithm described by Peter Hart, Nils Nilsson, and Bertram Raphael • 1968 – Risch algorithm for indefinite integration developed by Robert Henry Risch • 1969 – Strassen algorithm for matrix multiplication developed by Volker Strassen 1970s • 1970 – Dinic's algorithm for computing maximum flow in a flow network by Yefim (Chaim) A. Dinitz • 1970 – Knuth–Bendix completion algorithm developed by Donald Knuth and Peter B. Bendix • 1970 – BFGS method of the quasi-Newton class • 1970 – Needleman–Wunsch algorithm published by Saul B. Needleman and Christian D. Wunsch • 1972 – Edmonds–Karp algorithm published by Jack Edmonds and Richard Karp, essentially identical to Dinic's algorithm from 1970 • 1972 – Graham scan developed by Ronald Graham • 1972 – Red–black trees and B-trees discovered • 1973 – RSA encryption algorithm discovered by Clifford Cocks • 1973 – Jarvis march algorithm developed by R. A. Jarvis • 1973 – Hopcroft–Karp algorithm developed by John Hopcroft and Richard Karp • 1974 – Pollard's p − 1 algorithm developed by John Pollard • 1974 – Quadtree developed by Raphael Finkel and J.L. 
Bentley • 1975 – Genetic algorithms popularized by John Holland • 1975 – Pollard's rho algorithm developed by John Pollard • 1975 – Aho–Corasick string matching algorithm developed by Alfred V. Aho and Margaret J. Corasick • 1975 – Cylindrical algebraic decomposition developed by George E. Collins • 1976 – Salamin–Brent algorithm independently discovered by Eugene Salamin and Richard Brent • 1976 – Knuth–Morris–Pratt algorithm developed by Donald Knuth and Vaughan Pratt and independently by J. H. Morris • 1977 – Boyer–Moore string-search algorithm for searching the occurrence of a string into another string. • 1977 – RSA encryption algorithm rediscovered by Ron Rivest, Adi Shamir, and Len Adleman • 1977 – LZ77 algorithm developed by Abraham Lempel and Jacob Ziv • 1977 – multigrid methods developed independently by Achi Brandt and Wolfgang Hackbusch • 1978 – LZ78 algorithm developed from LZ77 by Abraham Lempel and Jacob Ziv • 1978 – Bruun's algorithm proposed for powers of two by Georg Bruun • 1979 – Khachiyan's ellipsoid method developed by Leonid Khachiyan • 1979 – ID3 decision tree algorithm developed by Ross Quinlan 1980s • 1980 – Brent's Algorithm for cycle detection Richard P. Brendt • 1981 – Quadratic sieve developed by Carl Pomerance • 1981 – Smith–Waterman algorithm developed by Temple F. Smith and Michael S. Waterman • 1983 – Simulated annealing developed by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi • 1983 – Classification and regression tree (CART) algorithm developed by Leo Breiman, et al. • 1984 – LZW algorithm developed from LZ78 by Terry Welch • 1984 – Karmarkar's interior-point algorithm developed by Narendra Karmarkar • 1984 - ACORN PRNG discovered by Roy Wikramaratna and used privately • 1985 – Simulated annealing independently developed by V. Cerny • 1985 - Car–Parrinello molecular dynamics developed by Roberto Car and Michele Parrinello • 1985 – Splay trees discovered by Sleator and Tarjan • 1986 – Blum Blum Shub proposed by L. Blum, M. Blum, and M. Shub • 1986 – Push relabel maximum flow algorithm by Andrew Goldberg and Robert Tarjan • 1986 - Barnes–Hut tree method developed by Josh Barnes and Piet Hut for fast approximate simulation of n-body problems • 1987 – Fast multipole method developed by Leslie Greengard and Vladimir Rokhlin • 1988 – Special number field sieve developed by John Pollard • 1989 - ACORN PRNG published by Roy Wikramaratna • 1989 – Paxos protocol developed by Leslie Lamport • 1989 – Skip list discovered by William Pugh 1990s • 1990 – General number field sieve developed from SNFS by Carl Pomerance, Joe Buhler, Hendrik Lenstra, and Leonard Adleman • 1990 – Coppersmith–Winograd algorithm developed by Don Coppersmith and Shmuel Winograd • 1990 – BLAST algorithm developed by Stephen Altschul, Warren Gish, Webb Miller, Eugene Myers, and David J. Lipman from National Institutes of Health • 1991 – Wait-free synchronization developed by Maurice Herlihy • 1992 – Deutsch–Jozsa algorithm proposed by D. 
Deutsch and Richard Jozsa • 1992 – C4.5 algorithm, a descendant of ID3 decision tree algorithm, was developed by Ross Quinlan • 1993 – Apriori algorithm developed by Rakesh Agrawal and Ramakrishnan Srikant • 1993 – Karger's algorithm to compute the minimum cut of a connected graph by David Karger • 1994 – Shor's algorithm developed by Peter Shor • 1994 – Burrows–Wheeler transform developed by Michael Burrows and David Wheeler • 1994 – Bootstrap aggregating (bagging) developed by Leo Breiman • 1995 – AdaBoost algorithm, the first practical boosting algorithm, was introduced by Yoav Freund and Robert Schapire • 1995 – soft-margin support vector machine algorithm was published by Vladimir Vapnik and Corinna Cortes. It adds a soft-margin idea to the 1992 algorithm by Boser, Nguyon, Vapnik, and is the algorithm that people usually refer to when saying SVM • 1995 – Ukkonen's algorithm for construction of suffix trees • 1996 – Bruun's algorithm generalized to arbitrary even composite sizes by H. Murakami • 1996 – Grover's algorithm developed by Lov K. Grover • 1996 – RIPEMD-160 developed by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel • 1997 – Mersenne Twister a pseudo random number generator developed by Makoto Matsumoto and Tajuki Nishimura • 1998 – PageRank algorithm was published by Larry Page • 1998 – rsync algorithm developed by Andrew Tridgell • 1999 – gradient boosting algorithm developed by Jerome H. Friedman • 1999 – Yarrow algorithm designed by Bruce Schneier, John Kelsey, and Niels Ferguson 2000s • 2000 – Hyperlink-induced topic search a hyperlink analysis algorithm developed by Jon Kleinberg • 2001 – Lempel–Ziv–Markov chain algorithm for compression developed by Igor Pavlov • 2001 – Viola–Jones algorithm for real-time face detection was developed by Paul Viola and Michael Jones. • 2001 – DHT (Distributed hash table) is invented by multiple people from academia and application systems • 2001 – BitTorrent a first fully decentralized peer-to-peer file distribution system is published • 2001 – LOBPCG Locally Optimal Block Preconditioned Conjugate Gradient method finding extreme eigenvalues of symmetric eigenvalue problems by Andrew Knyazev • 2002 – AKS primality test developed by Manindra Agrawal, Neeraj Kayal and Nitin Saxena • 2002 – Girvan–Newman algorithm to detect communities in complex systems • 2002 – Packrat parser developed for generating a parser that parses PEG (Parsing expression grammar) in linear time parsing developed by Bryan Ford • 2009 – Bitcoin a first trust-less decentralized cryptocurrency system is published 2010s • 2013 – Raft consensus protocol published by Diego Ongaro and John Ousterhout • 2015 – YOLO (“You Only Look Once”) is an effective real-time object recognition algorithm, first described by Joseph Redmon et al. References 1. Simon Singh, The Code Book, pp. 14–20 2. Victor J. Katz (1995). "Ideas of Calculus in Islam and India", Mathematics Magazine 68 (3), pp. 163–174. 3. Bruce, Ian (June 29, 2010). "Euler's Institutionum Calculi Integralis". www.17centurymaths.com. Archived from the original on February 1, 2011. Retrieved 17 May 2023. 4. Ciliberto, Ciro; Hirzebruch, Friedrich; Miranda, Rick; Teicher, Mina, eds. (2001). Applications of Algebraic Geometry to Coding Theory, Physics and Computation. Dordrecht: Springer Netherlands. ISBN 978-94-010-1011-5. 5. Francis, J.G.F. (1961). "The QR Transformation, I". The Computer Journal. 4 (3): 265–271. doi:10.1093/comjnl/4.3.265. 6. Kublanovskaya, Vera N. (1961). 
"On some algorithms for the solution of the complete eigenvalue problem". USSR Computational Mathematics and Mathematical Physics. 1 (3): 637–657. doi:10.1016/0041-5553(63)90168-X. Also published in: Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki [Journal of Computational Mathematics and Mathematical Physics], 1(4), pages 555–570 (1961). Timelines of computing Computing • Before 1950 • 1950–1979 • 1980s • 1990s • 2000s • 2010s • 2020s • Scientific • Women in computing Computer science • Algorithms • Artificial intelligence • Binary prefixes • Cryptography • Machine learning • Quantum computing Software • Free and open-source software • Hypertext technology • Operating systems • DOS family • Windows • Linux • Programming languages • Virtualization development • Malware Internet • Internet conflicts • Web browsers • Web search engines Notable people • Kathleen Antonelli • John Vincent Atanasoff • Charles Babbage • John Backus • Jean Bartik • George Boole • Vint Cerf • John Cocke • Stephen Cook • Edsger W. Dijkstra • J. Presper Eckert • Adele Goldstine • Lois Haibt • Betty Holberton • Margaret Hamilton • Grace Hopper • David A. Huffman • Bob Kahn • Brian Kernighan • Andrew Koenig • Semyon Korsakov • Nancy Leveson • Ada Lovelace • Donald Knuth • Joseph Kruskal • Douglas McIlroy • Marlyn Meltzer • John von Neumann • Klára Dán von Neumann • Dennis Ritchie • Guido van Rossum • Claude Shannon • Frances Spence • Bjarne Stroustrup • Ruth Teitelbaum • Ken Thompson • Linus Torvalds • Alan Turing • Paul Vixie • Larry Wall • Stephen Wolfram • Niklaus Wirth • Steve Wozniak • Konrad Zuse
Timeline of ancient Greek mathematicians This is a timeline of mathematicians in ancient Greece. See also: List of Greek mathematicians and Chronology of ancient Greek mathematicians Timeline Historians traditionally place the beginning of Greek mathematics proper to the age of Thales of Miletus (ca. 624–548 BC), which is indicated by the green line at 600 BC. The orange line at 300 BC indicates the approximate year in which Euclid's Elements was first published. The red line at 300 AD passes through Pappus of Alexandria (c. 290 – c. 350 AD), who was one of the last great Greek mathematicians of late antiquity. Note that the solid thick black line is at year zero, which is a year that does not exist in the Anno Domini (AD) calendar year system. The mathematician Heliodorus of Larissa is not listed due to the uncertainty of when he lived, which was possibly during the 3rd century AD, after Ptolemy. Overview of the most important mathematicians and discoveries Of these mathematicians, those whose work stands out include: • Thales of Miletus (c. 624/623 – c. 548/545 BC) is the first known individual to use deductive reasoning applied to geometry, by deriving four corollaries to Thales' theorem. He is the first known individual to whom a mathematical discovery has been attributed.[1] • Pythagoras (c. 570 – c. 495 BC) was credited with many mathematical and scientific discoveries, including the Pythagorean theorem, Pythagorean tuning, the five regular solids, the Theory of Proportions, the sphericity of the Earth, and the identity of the morning and evening stars as the planet Venus. • Theaetetus (c. 417 – c. 369 BC) proved that there are exactly five regular convex polyhedra (it is emphasized that it was, in particular, proved that there does not exist any regular convex polyhedra other than these five). This fact led these five solids, now called the Platonic solids, to play a prominent role in the philosophy of Plato (and consequently, also influenced later Western Philosophy) who associated each of the four classical elements with a regular solid: earth with the cube, air with the octahedron, water with the icosahedron, and fire with the tetrahedron (of the fifth Platonic solid, the dodecahedron, Plato obscurely remarked, "...the god used [it] for arranging the constellations on the whole heaven").
The last book (Book XIII) of Euclid's Elements, which is probably derived from the work of Theaetetus, is devoted to constructing the Platonic solids and describing their properties; Andreas Speiser has advocated the view that the construction of the 5 regular solids is the chief goal of the deductive system canonized in the Elements.[2] Astronomer Johannes Kepler proposed a model of the Solar System in which the five solids were set inside one another and separated by a series of inscribed and circumscribed spheres. • Eudoxus of Cnidus (c. 408 – c. 355 BC) is considered by some to be the greatest of classical Greek mathematicians, and in all antiquity second only to Archimedes.[3] Book V of Euclid's Elements is thought to be largely due to Eudoxus. • Aristarchus of Samos (c. 310 – c. 230 BC) presented the first known heliocentric model that placed the Sun at the center of the known universe with the Earth revolving around it. Aristarchus identified the "central fire" with the Sun, and he put the other planets in their correct order of distance around the Sun.[4] In On the Sizes and Distances, he calculates the sizes of the Sun and Moon, as well as their distances from the Earth in terms of Earth's radius. However, Eratosthenes (c. 276 – c. 194/195 BC) was the first person to calculate the circumference of the Earth. Posidonius (c. 135 – c. 51 BC) also measured the diameters and distances of the Sun and the Moon as well as the Earth's diameter; his measurement of the diameter of the Sun was more accurate than Aristarchus', differing from the modern value by about half. • Euclid (fl. 300 BC) is often referred to as the "founder of geometry"[5] or the "father of geometry" because of his incredibly influential treatise called the Elements, which was the first, or at least one of the first, axiomatized deductive systems. • Archimedes (c. 287 – c. 212 BC) is considered to be the greatest mathematician of ancient history, and one of the greatest of all time.[6][7] Archimedes anticipated modern calculus and analysis by applying concepts of infinitesimals and the method of exhaustion to derive and rigorously prove a range of geometrical theorems, including: the area of a circle; the surface area and volume of a sphere; the area of an ellipse; the area under a parabola; the volume of a segment of a paraboloid of revolution; the volume of a segment of a hyperboloid of revolution; and the area of a spiral.[8] He was also one of the first to apply mathematics to physical phenomena, founding hydrostatics and statics, including an explanation of the principle of the lever. In a lost work, he discovered and enumerated the 13 Archimedean solids, which were later rediscovered by Johannes Kepler around 1620 A.D. • Apollonius of Perga (c. 240 – c. 190 BC) is known for his work on conic sections and his study of geometry in 3-dimensional space. He is considered one of the greatest ancient Greek mathematicians. • Hipparchus (c. 190 – c. 120 BC) is considered the founder of trigonometry[9] and also solved several problems of spherical trigonometry. He was the first whose quantitative and accurate models for the motion of the Sun and Moon survive. In his work On Sizes and Distances, he measured the apparent diameters of the Sun and Moon and their distances from Earth. He is also reputed to have measured the Earth's precession. • Diophantus (c. 201–215 – c.
285–299 AD) wrote Arithmetica, which dealt with solving algebraic equations and also introduced syncopated algebra, which was a precursor to modern symbolic algebra. Because of this, Diophantus is sometimes known as "the father of algebra," which is a title he shares with Muhammad ibn Musa al-Khwarizmi. In contrast to Diophantus, al-Khwarizmi wasn't primarily interested in integers and he gave an exhaustive and systematic description of solving quadratic equations and some higher order algebraic equations. However, al-Khwarizmi did not use symbolic or syncopated algebra but rather "rhetorical algebra" or ancient Greek "geometric algebra" (the ancient Greeks had expressed and solved some particular instances of algebraic equations in terms of geometric properties such as length and area but they did not solve such problems in general; only particular instances). An example of "geometric algebra" is: given a triangle (or rectangle, etc.) with a certain area and also given the length of some of its sides (or some other properties), find the length of the remaining side (and justify/prove the answer with geometry). Solving such a problem is often equivalent to finding the roots of a polynomial. Hellenic mathematicians The conquests of Alexander the Great around c. 330 BC led to Greek culture being spread around much of the Mediterranean region, especially in Alexandria, Egypt. This is why the Hellenistic period of Greek mathematics is typically considered as beginning in the 4th century BC. During the Hellenistic period, many people living in those parts of the Mediterranean region subject to Greek influence ended up adopting the Greek language and sometimes also Greek culture. Consequently, some of the Greek mathematicians from this period may not have been "ethnically Greek" with respect to the modern Western notion of ethnicity, which is much more rigid than most other notions of ethnicity that existed in the Mediterranean region at the time. Ptolemy, for example, was said to have originated from Upper Egypt, which is far south of Alexandria, Egypt. Regardless, their contemporaries considered them Greek. Straightedge and compass constructions Main article: Straightedge and compass construction For the most part, straightedge and compass constructions dominated ancient Greek mathematics and most theorems and results were stated and proved in terms of geometry. These proofs involved a straightedge (such as that formed by a taut rope), which was used to construct lines, and a compass, which was used to construct circles. The straightedge is an idealized ruler that can draw arbitrarily long lines but (unlike modern rulers) it has no markings on it. A compass can draw a circle starting from two given points: the center and a point on the circle. A taut rope can be used to physically construct both lines (since it forms a straightedge) and circles (by rotating the taut rope around a point). Geometric constructions using lines and circles were also used outside of the Mediterranean region. The Shulba Sutras from the Vedic period of Indian mathematics, for instance, contain geometric instructions on how to physically construct a (quality) fire-altar by using a taut rope as a straightedge. These altars could have various shapes but for theological reasons, they were all required to have the same area.
This consequently required a high-precision construction along with (written) instructions on how to geometrically construct such altars with the tools that were most widely available throughout the Indian subcontinent (and elsewhere) at the time. Ancient Greek mathematicians went one step further by axiomatizing plane geometry in such a way that straightedge and compass constructions became mathematical proofs. Euclid's Elements was the culmination of this effort and for over two thousand years, even as late as the 19th century, it remained the "standard text" on mathematics throughout the Mediterranean region (including Europe and the Middle East), and later also in North and South America after European colonization. Algebra Ancient Greek mathematicians are known to have solved specific instances of polynomial equations with the use of straightedge and compass constructions, which simultaneously gave a geometric proof of the solution's correctness. Once a construction was completed, the answer could be found by measuring the length of a certain line segment (or possibly some other quantity). A quantity multiplied by itself, such as $5\cdot 5$ for example, would often be constructed as a literal square with sides of length $5,$ which is why the second power "$x^{2}=x\cdot x$" is referred to as "$x$ squared" in ordinary spoken language. Thus problems that would today be considered "algebra problems" were also solved by ancient Greek mathematicians, although not in full generality. A complete guide to systematically solving low-order polynomial equations for an unknown quantity (instead of just specific instances of such problems) would not appear until The Compendious Book on Calculation by Completion and Balancing by Muhammad ibn Musa al-Khwarizmi, who used Greek geometry to "prove the correctness" of the solutions that were given in the treatise. However, this treatise was entirely rhetorical (meaning that everything, including numbers, was written using words structured in ordinary sentences) and did not have any "algebraic symbols" that are today associated with algebra problems – not even the syncopated algebra that appeared in Arithmetica.
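A standard worked example of this kind of rhetorical, geometrically justified solution, traditionally cited from al-Khwarizmi's treatise, makes the point concrete (the short derivation below is an illustration added for clarity rather than part of the timeline). To solve a problem of the type "a square plus roots equals a number", say
$x^{2}+10x=39,$
one draws a square of side $x$, attaches two rectangles of dimensions $5\times x$ (splitting the term $10x$ in half), and "completes the square" by filling in the missing $5\times 5$ corner:
$x^{2}+10x+25=39+25=64,\qquad (x+5)^{2}=64,\qquad x+5=8,\qquad x=3.$
The completed figure is itself the proof of correctness, which is exactly the role straightedge and compass constructions played in Greek "geometric algebra"; only the positive root appears, since lengths and areas were taken to be positive.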
See also • History of mathematics • History of calculus – Aspect of history • History of geometry – Historical development of geometry • Geometry and topology • Greek mathematics – Mathematics of Ancient Greeks • List of Greek mathematicians • Relationship between mathematics and physics – Study of how mathematics and physics relate to each other • Timeline of mathematics • Timeline of algebra – Notable events in the history of algebra • Timeline of calculus and mathematical analysis – Summary of advancements in Calculus • Timeline of geometry • Timeline of mathematical logic References 1. (Boyer 1991, "Ionia and the Pythagoreans" p. 43) 2. Weyl 1952, p. 74. 3. Calinger, Ronald (1982). Classics of Mathematics. Oak Park, Illinois: Moore Publishing Company, Inc. p. 75. ISBN 0-935610-13-8. 4. Draper, John William (2007) [1874]. "History of the Conflict Between Religion and Science". In Joshi, S. T. (ed.). The Agnostic Reader. Prometheus. pp. 172–173. ISBN 978-1-59102-533-7. 5. Bruno, Leonard C. (2003) [1999]. Math and Mathematicians: The History of Math Discoveries Around the World. Baker, Lawrence W. Detroit, Mich.: U X L. p. 125. ISBN 978-0-7876-3813-9. OCLC 41497065. 6. John M. Henshaw (10 September 2014). An Equation for Every Occasion: Fifty-Two Formulas and Why They Matter. JHU Press. p. 68. ISBN 978-1-4214-1492-8. Archimedes is on most lists of the greatest mathematicians of all time and is considered the greatest mathematician of antiquity. 7. Hans Niels Jahnke. A History of Analysis. American Mathematical Soc. p. 21. ISBN 978-0-8218-9050-9. Archimedes was the greatest mathematician of antiquity and one of the greatest of all times. 8. O'Connor, J.J.; Robertson, E.F. (February 1996). "A history of calculus". University of St Andrews. Archived from the original on 15 July 2007. Retrieved 7 August 2007. 9. C. M. Linton (2004). From Eudoxus to Einstein: a history of mathematical astronomy. Cambridge University Press. p. 52. ISBN 978-0-521-82750-8. • Boyer, C.B. (1989), A History of Mathematics (2nd ed.), New York: Wiley, ISBN 978-0-471-09763-1 (1991 pbk ed. ISBN 0-471-54397-7) • Weyl, Hermann (1952). Symmetry. Princeton, NJ: Princeton University Press. ISBN 0-691-02374-3.
of Arithmetica • Ptolemy's inequality • Ptolemy's table of chords • Ptolemy's theorem • Spiral of Theodorus Centers • Cyrene • Mouseion of Alexandria • Platonic Academy Related • Ancient Greek astronomy • Attic numerals • Greek numerals • Latin translations of the 12th century • Non-Euclidean geometry • Philosophy of mathematics • Neusis construction History of • A History of Greek Mathematics • by Thomas Heath • algebra • timeline • arithmetic • timeline • calculus • timeline • geometry • timeline • logic • timeline • mathematics • timeline • numbers • prehistoric counting • numeral systems • list Other cultures • Arabian/Islamic • Babylonian • Chinese • Egyptian • Incan • Indian • Japanese  Ancient Greece portal •  Mathematics portal
Timeline of information theory A timeline of events related to information theory, quantum information theory and statistical physics, data compression, error correcting codes and related subjects. • 1872 – Ludwig Boltzmann presents his H-theorem, and with it the formula $\sum p_{i}\log p_{i}$ for the entropy of a single gas particle • 1878 – J. Willard Gibbs defines the Gibbs entropy: the probabilities in the entropy formula are now taken as probabilities of the state of the whole system • 1924 – Harry Nyquist discusses quantifying "intelligence" and the speed at which it can be transmitted by a communication system • 1927 – John von Neumann defines the von Neumann entropy, extending the Gibbs entropy to quantum mechanics • 1928 – Ralph Hartley introduces Hartley information as the logarithm of the number of possible messages, with information being communicated when the receiver can distinguish one sequence of symbols from any other (regardless of any associated meaning) • 1929 – Leó Szilárd analyses Maxwell's Demon, showing how a Szilard engine can sometimes transform information into the extraction of useful work • 1940 – Alan Turing introduces the deciban as a measure of information inferred about the German Enigma machine cypher settings by the Banburismus process • 1944 – Claude Shannon's theory of information is substantially complete • 1947 – Richard W. Hamming invents Hamming codes for error detection and correction (to protect patent rights, the result is not published until 1950) • 1948 – Claude E. Shannon publishes A Mathematical Theory of Communication • 1949 – Claude E. Shannon publishes Communication in the Presence of Noise – Nyquist–Shannon sampling theorem and Shannon–Hartley law • 1949 – Claude E. Shannon's Communication Theory of Secrecy Systems is declassified • 1949 – Robert M. Fano publishes Transmission of Information. M.I.T. Press, Cambridge, Massachusetts – Shannon–Fano coding • 1949 – Leon G. Kraft discovers Kraft's inequality, which shows the limits of prefix codes • 1949 – Marcel J. E. Golay introduces Golay codes for forward error correction • 1951 – Solomon Kullback and Richard Leibler introduce the Kullback–Leibler divergence • 1951 – David A. Huffman invents Huffman encoding, a method of finding optimal prefix codes for lossless data compression • 1953 – August Albert Sardinas and George W. Patterson devise the Sardinas–Patterson algorithm, a procedure to decide whether a given variable-length code is uniquely decodable • 1954 – Irving S. Reed and David E. Muller propose Reed–Muller codes • 1955 – Peter Elias introduces convolutional codes • 1957 – Eugene Prange first discusses cyclic codes • 1959 – Alexis Hocquenghem, and independently the next year Raj Chandra Bose and Dwijendra Kumar Ray-Chaudhuri, discover BCH codes • 1960 – Irving S. Reed and Gustave Solomon propose Reed–Solomon codes • 1962 – Robert G. Gallager proposes low-density parity-check codes; they are unused for 30 years due to technical limitations • 1965 – Dave Forney discusses concatenated codes • 1966 – Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) develop linear predictive coding (LPC), a form of speech coding[1] • 1967 – Andrew Viterbi reveals the Viterbi algorithm, making decoding of convolutional codes practicable • 1968 – Elwyn Berlekamp invents the Berlekamp–Massey algorithm; its application to decoding BCH and Reed–Solomon codes is pointed out by James L. Massey the following year • 1968 – Chris Wallace and David M. Boulton publish the first of many papers on Minimum Message Length (MML) statistical and inductive inference • 1970 – Valerii Denisovich Goppa introduces Goppa codes • 1972 – Jørn Justesen proposes Justesen codes, an improvement of Reed–Solomon codes • 1972 – Nasir Ahmed proposes the discrete cosine transform (DCT), which he develops with T. Natarajan and K. R. Rao in 1973;[2] the DCT later became the most widely used lossy compression algorithm, the basis for multimedia formats such as JPEG, MPEG and MP3 • 1973 – David Slepian and Jack Wolf discover and prove the Slepian–Wolf coding limits for distributed source coding[3] • 1976 – Gottfried Ungerboeck gives the first paper on trellis modulation; a more detailed exposition in 1982 leads to a raising of analogue modem POTS speeds from 9.6 kbit/s to 33.6 kbit/s • 1976 – Richard Pasco and Jorma J. Rissanen develop effective arithmetic coding techniques • 1977 – Abraham Lempel and Jacob Ziv develop Lempel–Ziv compression (LZ77) • 1982 – Valerii Denisovich Goppa introduces algebraic geometry codes • 1989 – Phil Katz publishes the .zip format including DEFLATE (LZ77 + Huffman coding); later to become the most widely used archive container • 1993 – Claude Berrou, Alain Glavieux and Punya Thitimajshima introduce Turbo codes • 1994 – Michael Burrows and David Wheeler publish the Burrows–Wheeler transform, later to find use in bzip2 • 1995 – Benjamin Schumacher coins the term qubit and proves the quantum noiseless coding theorem • 2003 – David J. C. MacKay shows the connection between information theory, inference and machine learning in his book Information Theory, Inference, and Learning Algorithms. • 2006 – Jarosław Duda introduces asymmetric numeral systems (ANS) entropy coding; since 2014 it has become a popular replacement for Huffman and arithmetic coding in compressors such as Facebook's Zstandard, Apple's LZFSE, CRAM and JPEG XL • 2008 – Erdal Arıkan introduces polar codes, the first practical construction of codes that achieves capacity for a wide array of channels References 1. Gray, Robert M. (2010). "A History of Realtime Digital Speech on Packet Networks: Part II of Linear Predictive Coding and the Internet Protocol" (PDF). Found. Trends Signal Process. 3 (4): 203–303. doi:10.1561/2000000036. ISSN 1932-8346. 2. Nasir Ahmed. "How I Came Up With the Discrete Cosine Transform". Digital Signal Processing, Vol. 1, Iss. 1, 1991, pp. 4-5. 3. Slepian, David S.; Wolf, Jack K. (July 1973). "Noiseless coding of correlated information sources". IEEE Transactions on Information Theory. IEEE. 19 (4): 471–480. doi:10.1109/TIT.1973.1055037. ISSN 0018-9448.
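Several of the entries above reduce to short calculations: Shannon's 1948 entropy, Kraft's 1949 inequality for prefix codes, and Huffman's 1951 greedy construction of optimal prefix codes. The sketch below is illustrative only; the four-symbol distribution is made up, and huffman_lengths is my own minimal rendering of the merge procedure rather than code from any source cited above.

```python
import heapq
from math import log2

def shannon_entropy(p):
    """H(p) = -sum p_i log2 p_i, in bits per symbol (0 log 0 treated as 0)."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def huffman_lengths(p):
    """Code-word lengths from Huffman's greedy merging of the two least
    probable subtrees; each merge adds one bit to every symbol it contains."""
    heap = [(pi, i, [i]) for i, pi in enumerate(p)]   # (probability, tie-breaker, symbols)
    heapq.heapify(heap)
    lengths = [0] * len(p)
    counter = len(p)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

p = [0.4, 0.3, 0.2, 0.1]                       # hypothetical source distribution
lengths = huffman_lengths(p)                   # -> [1, 2, 3, 3]
kraft_sum = sum(2 ** -l for l in lengths)      # Kraft's inequality: must be <= 1
avg_length = sum(pi * l for pi, l in zip(p, lengths))
print(shannon_entropy(p), lengths, kraft_sum, avg_length)
```

For this distribution the entropy is about 1.85 bits, the Kraft sum is exactly 1, and the average code length of 1.9 bits lies between the entropy and the entropy plus one bit, as Shannon's source coding theorem requires of an optimal prefix code.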
Timeline of mathematical innovation in South and West Asia South and West Asia consists of a wide region extending from the present-day country of Turkey in the west to Bangladesh and India in the east. Timeline • 3rd millennium BCE Sexagesimal system of the Sumerians: • 2nd millennium BCE Babylonian Pythagorean triples. According to mathematician S. G. Dani, the Babylonian cuneiform tablet Plimpton 322 written ca. 1850 BCE[1] "contains fifteen Pythagorean triples with quite large entries, including (13500, 12709, 18541) which is a primitive triple,[2] indicating, in particular, that there was sophisticated understanding on the topic" in Mesopotamia. • 1st millennium BCE Baudhayana Śulba Sūtras Earliest statement of Pythagorean Theorem: According to (Hayashi 2005, p. 363), the Śulba Sūtras contain "the earliest extant verbal expression of the Pythagorean Theorem in the world, although it had already been known to the Old Babylonians." The diagonal rope (akṣṇayā-rajju) of an oblong (rectangle) produces both which the flank (pārśvamāni) and the horizontal (tiryaṇmānī) <ropes> produce separately."[3] Since the statement is a sūtra, it is necessarily compressed and what the ropes produce is not elaborated on, but the context clearly implies the square areas constructed on their lengths, and would have been explained so by the teacher to the student.[3] See also • Timeline of cultivation and domestication in South and West Asia • Timeline of mathematics Notes 1. Mathematics Department, University of British Columbia, The Babylonian tablet Plimpton 322. 2. Three positive integers $(a,b,c)$ form a primitive Pythagorean triple if $c^{2}=a^{2}+b^{2}$ and if the highest common factor of $a,b,c$ is 1. In the particular Plimpton322 example, this means that $13500^{2}+12709^{2}=18541^{2}$ and that the three numbers do not have any common factors. However some scholars have disputed the Pythagorean interpretation of this tablet; see Plimpton 322 for details. 3. (Hayashi 2005, p. 363) References • Bourbaki, Nicolas (1998), Elements of the History of Mathematics, Berlin, Heidelberg, and New York: Springer-Verlag, 301 pages, ISBN 3-540-64767-8. • Boyer, C. B.; Merzback (fwd. by Isaac Asimov), U. C. (1991), History of Mathematics, New York: John Wiley and Sons, 736 pages, ISBN 0-471-54397-7. • Bressoud, David (2002), "Was Calculus Invented in India?", The College Mathematics Journal, 33 (1): 2–13, doi:10.2307/1558972, ISSN 0746-8342, JSTOR 1558972. • Bronkhorst, Johannes (2001), "Panini and Euclid: Reflections on Indian Geometry", Journal of Indian Philosophy, Springer Netherlands, 29 (1–2): 43–80, doi:10.1023/A:1017506118885, S2CID 115779583. • Burnett, Charles (2006), "The Semantics of Indian Numerals in Arabic, Greek and Latin", Journal of Indian Philosophy, Springer-Netherlands, 34 (1–2): 15–30, doi:10.1007/s10781-005-8153-z, S2CID 170783929. • Burton, David M. (1997), The History of Mathematics: An Introduction, The McGraw-Hill Companies, Inc., pp. 193–220. • Cooke, Roger (2005), The History of Mathematics: A Brief Course, New York: Wiley-Interscience, 632 pages, ISBN 0-471-44459-6. • Dani, S. G. (July 25, 2003), "On the Pythagorean triples in the Śulvasūtras" (PDF), Current Science, 85 (2): 219–224. • Datta, Bibhutibhusan (Dec 1931), "Early Literary Evidence of the Use of the Zero in India", The American Mathematical Monthly, 38 (10): 566–572, doi:10.2307/2301384, ISSN 0002-9890, JSTOR 2301384. 
• Datta, Bibhutibhusan; Singh, Avadesh Narayan (1962), History of Hindu Mathematics: A Source Book, Bombay: Asia Publishing House. • De Young, Gregg (1995), "Euclidean Geometry in the Mathematical Tradition of Islamic India", Historia Mathematica, 22 (2): 138–153, doi:10.1006/hmat.1995.1014. • Encyclopædia Britannica (Kim Plofker) (2007), "mathematics, South Asian", Encyclopædia Britannica Online: 1–12, retrieved May 18, 2007. • Filliozat, Pierre-Sylvain (2004), "Ancient Sanskrit Mathematics: An Oral Tradition and a Written Literature", in Chemla, Karine; Cohen, Robert S.; Renn, Jürgen; Gavroglu, Kostas (eds.), History of Science, History of Text (Boston Series in the Philosophy of Science), Dordrecht: Springer Netherlands, 254 pages, pp. 137-157, pp. 360–375, doi:10.1007/1-4020-2321-9_7, ISBN 978-1-4020-2320-0. • Fowler, David (1996), "Binomial Coefficient Function", The American Mathematical Monthly, 103 (1): 1–17, doi:10.2307/2975209, ISSN 0002-9890, JSTOR 2975209. • Hayashi, Takao (1995), The Bakhshali Manuscript, An ancient Indian mathematical treatise, Groningen: Egbert Forsten, 596 pages, ISBN 90-6980-087-X. • Hayashi, Takao (1997), "Aryabhata's Rule and Table of Sine-Differences", Historia Mathematica, 24 (4): 396–406, doi:10.1006/hmat.1997.2160. • Hayashi, Takao (2003), "Indian Mathematics", in Grattan-Guinness, Ivor (ed.), Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences, vol. 1, pp. 118-130, Baltimore, MD: The Johns Hopkins University Press, 976 pages, ISBN 0-8018-7396-7. • Hayashi, Takao (2005), "Indian Mathematics", in Flood, Gavin (ed.), The Blackwell Companion to Hinduism, Oxford: Basil Blackwell, 616 pages, pp. 360-375, pp. 360–375, ISBN 978-1-4051-3251-0. • Henderson, David W. (2000), "Square roots in the Sulba Sutras", in Gorini, Catherine A. (ed.), Geometry at Work: Papers in Applied Geometry, vol. 53, pp. 39-45, Washington DC: Mathematical Association of America Notes, 236 pages, pp. 39–45, ISBN 0-88385-164-4. • Ifrah, Georges (2000), A Universal History of Numbers: From Prehistory to Computers, New York: Wiley, 658 pages, ISBN 0-471-39340-1. • Joseph, G. G. (2000), The Crest of the Peacock: The Non-European Roots of Mathematics, Princeton, NJ: Princeton University Press, 416 pages, ISBN 0-691-00659-8. • Katz, Victor J. (1995), "Ideas of Calculus in Islam and India", Mathematics Magazine, 68 (3): 163–174, doi:10.2307/2691411, JSTOR 2691411. • Katz, Victor J., ed. (2007), The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook, Princeton, NJ: Princeton University Press, 685 pages, pp 385-514, ISBN 978-0-691-11485-9. • Keller, Agathe (2005), "Making diagrams speak, in Bhāskara I's commentary on the Aryabhaṭīya" (PDF), Historia Mathematica, 32 (3): 275–302, doi:10.1016/j.hm.2004.09.001. • Kichenassamy, Satynad (2006), "Baudhāyana's rule for the quadrature of the circle", Historia Mathematica, 33 (2): 149–183, doi:10.1016/j.hm.2005.05.001. • Pingree, David (1971), "On the Greek Origin of the Indian Planetary Model Employing a Double Epicycle", Journal of Historical Astronomy, 2 (1): 80–85, Bibcode:1971JHA.....2...80P, doi:10.1177/002182867100200202, S2CID 118053453. • Pingree, David (1973), "The Mesopotamian Origin of Early Indian Mathematical Astronomy", Journal of Historical Astronomy, 4 (1): 1–12, Bibcode:1973JHA.....4....1P, doi:10.1177/002182867300400102, S2CID 125228353. 
• Pingree, David; Staal, Frits (1988), "Reviewed Work(s): The Fidelity of Oral Tradition and the Origins of Science by Frits Staal", Journal of the American Oriental Society, 108 (4): 637–638, doi:10.2307/603154, JSTOR 603154. • Pingree, David (1992), "Hellenophilia versus the History of Science", Isis, 83 (4): 554–563, Bibcode:1992Isis...83..554P, doi:10.1086/356288, JSTOR 234257, S2CID 68570164 • Pingree, David (2003), "The logic of non-Western science: mathematical discoveries in medieval India", Daedalus, 132 (4): 45–54, doi:10.1162/001152603771338779, S2CID 57559157. • Plofker, Kim (1996), "An Example of the Secant Method of Iterative Approximation in a Fifteenth-Century Sanskrit Text", Historia Mathematica, 23 (3): 246–256, doi:10.1006/hmat.1996.0026. • Plofker, Kim (2001), "The "Error" in the Indian "Taylor Series Approximation" to the Sine", Historia Mathematica, 28 (4): 283–295, doi:10.1006/hmat.2001.2331. • Plofker, K. (2007), "Mathematics of India", in Katz, Victor J. (ed.), The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook, Princeton, NJ: Princeton University Press, 685 pages, pp 385-514, pp. 385–514, ISBN 978-0-691-11485-9. • Plofker, Kim (2009), Mathematics in India: 500 BCE–1800 CE, Princeton, NJ: Princeton University Press. Pp. 384., ISBN 978-0-691-12067-6. • Price, John F. (2000), "Applied geometry of the Sulba Sutras" (PDF), in Gorini, Catherine A. (ed.), Geometry at Work: Papers in Applied Geometry, vol. 53, pp. 46-58, Washington DC: Mathematical Association of America Notes, 236 pages, pp. 46–58, ISBN 0-88385-164-4. • Roy, Ranjan (1990), "Discovery of the Series Formula for $\pi $ by Leibniz, Gregory, and Nilakantha", Mathematics Magazine, 63 (5): 291–306, doi:10.2307/2690896, JSTOR 2690896. • Singh, A. N. (1936), "On the Use of Series in Hindu Mathematics", Osiris, 1 (1): 606–628, doi:10.1086/368443, ISSN 0369-7827, JSTOR 301627, S2CID 144760421 • Staal, Frits (1986), The Fidelity of Oral Tradition and the Origins of Science, Mededelingen der Koninklijke Nederlandse Akademie von Wetenschappen, Afd. Letterkunde, NS 49, 8. Amsterdam: North Holland Publishing Company, 40 pages. • Staal, Frits (1995), "The Sanskrit of science", Journal of Indian Philosophy, Springer Netherlands, 23 (1): 73–127, doi:10.1007/BF01062067, S2CID 170755274. • Staal, Frits (1999), "Greek and Vedic Geometry", Journal of Indian Philosophy, 27 (1–2): 105–127, doi:10.1023/A:1004364417713, S2CID 170894641. • Staal, Frits (2001), "Squares and oblongs in the Veda", Journal of Indian Philosophy, Springer Netherlands, 29 (1–2): 256–272, doi:10.1023/A:1017527129520, S2CID 170403804. • Staal, Frits (2006), "Artificial Languages Across Sciences and Civilizations", Journal of Indian Philosophy, Springer Netherlands, 34 (1): 89–141, doi:10.1007/s10781-005-8189-0, S2CID 170968871. • Stillwell, John (2004), Berlin and New York: Mathematics and its History (2 ed.), Springer, 568 pages, ISBN 0-387-95336-1. • Thibaut, George (1984) [1875], Mathematics in the Making in Ancient India: reprints of 'On the Sulvasutras' and 'Baudhyayana Sulva-sutra', Calcutta and Delhi: K. P. Bagchi and Company (orig. Journal of the Asiatic Society of Bengal), 133 pages. • van der Waerden, B. L. (1983), Geometry and Algebra in Ancient Civilizations, Berlin and New York: Springer, 223 pages, ISBN 0-387-12159-5 • van der Waerden, B. L. (1988), "On the Romaka-Siddhānta", Archive for History of Exact Sciences, 38 (1): 1–11, doi:10.1007/BF00329976, S2CID 189788738 • van der Waerden, B. L. 
(1988), "Reconstruction of a Greek table of chords", Archive for History of Exact Sciences, 38 (1): 23–38, doi:10.1007/BF00329978, S2CID 189793547 • Van Nooten, B. (1993), "Binary numbers in Indian antiquity", Journal of Indian Philosophy, Springer Netherlands, 21 (1): 31–50, doi:10.1007/BF01092744, S2CID 171039636 • Whish, Charles (1835), "On the Hindú Quadrature of the Circle, and the infinite Series of the proportion of the circumference to the diameter exhibited in the four S'ástras, the Tantra Sangraham, Yucti Bháshá, Carana Padhati, and Sadratnamála", Transactions of the Royal Asiatic Society of Great Britain and Ireland, 3 (3): 509–523, doi:10.1017/S0950473700001221, JSTOR 25581775 • Yano, Michio (2006), "Oral and Written Transmission of the Exact Sciences in Sanskrit", Journal of Indian Philosophy, Springer Netherlands, 34 (1–2): 143–160, doi:10.1007/s10781-005-8175-6, S2CID 170679879 Inventions and discoveries Lists of inventions or discoveries by country/region • Australia • Austria • Azerbaijan • Bangladesh • Brazil • Britain • England • Scotland • Wales • Canada • China • inventions • discoveries • Croatia • Egypt • France • Germany • Greece • India • Indonesia • Ireland • Israel • Italy • Jamaica • Japan • Korea • Malaysia • Mexico • Netherlands • Inventions • discoveries • explorations • Pakistan • Philippines • Poland • Portugal • inventions • discoveries • Russia • Serbia • South Africa • Spain • Switzerland • Taiwan • Thailand • Vietnam • United States • inventions • before 1890 • 1890–1945 • 1946–1991 • after 1991 • discoveries by topic • chemistry • cosmology • multiple discoveries • science • Historic inventions • Analog-to-digital • Byzantine Empire • Indus Valley • Medieval Islamic • Military • Native American • Lost inventions Lists of inventors or discoverers by country/region • Worldwide • African • American • African-American • Puerto Rican • Armenian • Austrian • British • English • Welsh • Bulgarian • German • Italian • New Zealand • Polish • Romanian • Russian • Serbian • Spanish • Swedish • Swiss
Timeline of mathematical logic A timeline of mathematical logic; see also history of logic. 19th century • 1847 – George Boole proposes symbolic logic in The Mathematical Analysis of Logic, defining what is now called Boolean algebra. • 1854 – George Boole perfects his ideas, with the publication of An Investigation of the Laws of Thought. • 1874 – Georg Cantor proves that the set of all real numbers is uncountably infinite but the set of all real algebraic numbers is countably infinite. His proof does not use his famous diagonal argument, which he published in 1891. • 1895 – Georg Cantor publishes a book about set theory containing the arithmetic of infinite cardinal numbers and the continuum hypothesis. • 1899 – Georg Cantor discovers a contradiction in his set theory. 20th century • 1904 – Edward Vermilye Huntington develops the back-and-forth method to prove Cantor's result that countable dense linear orders (without endpoints) are isomorphic. • 1908 – Ernst Zermelo axiomatizes set theory, thus avoiding Cantor's contradictions. • 1915 – Leopold Löwenheim publishes a proof of the (downward) Löwenheim-Skolem theorem, implicitly using the axiom of choice. • 1918 – C. I. Lewis writes A Survey of Symbolic Logic, introducing the modal logic system later called S3. • 1920 – Thoralf Skolem proves the (downward) Löwenheim-Skolem theorem using the axiom of choice explicitly. • 1922 – Thoralf Skolem proves a weaker version of the Löwenheim-Skolem theorem without the axiom of choice. • 1928 – Hilbert and Wilhelm Ackermann propose the Entscheidungsproblem: to determine, for a statement of first-order logic, whether it is universally valid (in all models). • 1929 – Mojżesz Presburger introduces Presburger arithmetic and proves its decidability and completeness. • 1930 – Kurt Gödel proves the completeness and countable compactness of first-order logic for countable languages. • 1930 – Oskar Becker introduces the modal logic systems now called S4 and S5 as variations of Lewis's system. • 1930 – Arend Heyting develops an intuitionistic propositional calculus. • 1931 – Kurt Gödel proves his incompleteness theorem, which shows that every sufficiently strong axiomatic system for mathematics is either incomplete or inconsistent. • 1932 – C. I. Lewis and C. H. Langford's Symbolic Logic contains descriptions of the modal logic systems S1–S5. • 1933 – Kurt Gödel develops two interpretations of intuitionistic logic in terms of a provability logic, which would become the standard axiomatization of S4. • 1934 – Thoralf Skolem constructs a non-standard model of arithmetic. • 1936 – Alonzo Church develops the lambda calculus. Alan Turing introduces the Turing machine model, proves the existence of universal Turing machines, and uses these results to settle the Entscheidungsproblem by proving it equivalent to (what is now called) the halting problem. • 1936 – Anatoly Maltsev proves the full compactness theorem for first-order logic, and the "upwards" version of the Löwenheim–Skolem theorem. • 1940 – Kurt Gödel shows that neither the continuum hypothesis nor the axiom of choice can be disproven from the standard axioms of set theory. • 1943 – Stephen Kleene introduces the assertion he calls "Church's Thesis", asserting the identity of general recursive functions with effectively calculable ones. • 1944 – McKinsey and Alfred Tarski study the relationship between topological closure and Boolean closure algebras.
• 1944 – Emil Leon Post introduces the partial order of the Turing degrees, and also introduces Post's problem: to determine if there are computably enumerable degrees lying in between the degree of computable functions and the degree of the halting problem. • 1947 – Andrey Markov Jr. and Emil Post independently prove the undecidability of the word problem for semigroups. • 1948 – McKinsey and Alfred Tarski study closure algebras for S4 and intuitionistic logic. 1950–1999 • 1950 – Boris Trakhtenbrot proves that validity in all finite models (the finite-model version of the Entscheidungsproblem) is also undecidable; here validity corresponds to non-halting, rather than halting as in the usual case. • 1952 – Kleene presents "Turing's Thesis", asserting the identity of computability in general with computability by Turing machines, as an equivalent form of Church's Thesis. • 1954 – Jerzy Łoś and Robert Lawson Vaught independently prove that a first-order theory which has only infinite models and is categorical in any infinite cardinal at least equal to the language cardinality is complete. Łoś further conjectures that, in the case where the language is countable, if the theory is categorical in an uncountable cardinal, it is categorical in all uncountable cardinals. • 1955 – Jerzy Łoś uses the ultraproduct construction to construct the hyperreals and prove the transfer principle. • 1955 – Pyotr Novikov finds a (finitely presented) group whose word problem is undecidable. • 1955 – Evert Willem Beth develops semantic tableaux. • 1958 – William Boone independently proves the undecidability of the uniform word problem for groups. • 1959 – Saul Kripke develops a semantics for quantified S5 based on multiple models. • 1959 – Stanley Tennenbaum proves that all countable nonstandard models of Peano arithmetic are nonrecursive. • 1960 – Ray Solomonoff develops the concept of what would come to be called Kolmogorov complexity as part of his theory of Solomonoff induction. • 1961 – Abraham Robinson creates non-standard analysis. • 1963 – Paul Cohen uses his technique of forcing to show that neither the continuum hypothesis nor the axiom of choice can be proven from the standard axioms of set theory. • 1963 – Saul Kripke extends his possible-world semantics to normal modal logics. • 1965 – Michael D. Morley introduces the beginnings of stability theory in order to prove Morley's categoricity theorem, confirming Łoś' conjecture. • 1965 – Andrei Kolmogorov independently develops the theory of Kolmogorov complexity and uses it to analyze the concept of randomness. • 1966 – Grothendieck proves the Ax-Grothendieck theorem: any injective polynomial self-map of algebraic varieties over algebraically closed fields is bijective. • 1968 – James Ax independently proves the Ax-Grothendieck theorem. • 1969 – Saharon Shelah introduces the concept of stable and superstable theories. • 1970 – Yuri Matiyasevich proves that the existence of solutions to Diophantine equations is undecidable, thereby resolving Hilbert's tenth problem. • 1975 – Harvey Friedman introduces the Reverse Mathematics program.
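The 1874 and 1891 entries above hinge on Cantor's diagonal construction: given any list of infinite binary sequences, flipping the n-th digit of the n-th sequence yields a sequence that differs from every sequence in the list. The toy sketch below works on finite prefixes of a made-up table, so it only illustrates the mechanics, not the full infinitary argument.

```python
def diagonal_complement(rows):
    """Return a sequence that differs from the k-th row at position k,
    the key step of Cantor's 1891 diagonal argument."""
    return [1 - rows[k][k] for k in range(len(rows))]

# A made-up finite table standing in for an enumeration of binary sequences.
table = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
d = diagonal_complement(table)
print(d)                                            # [1, 0, 1, 1]
print(all(d[k] != table[k][k] for k in range(4)))   # True: differs from every row
```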
See also • History of logic • History of mathematics • Philosophy of mathematics • Timeline of ancient Greek mathematicians – Timeline and summary of ancient Greek mathematicians and their discoveries • Timeline of mathematics
Timeline of mathematics This is a timeline of pure and applied mathematics history. It is divided here into three stages, corresponding to stages in the development of mathematical notation: a "rhetorical" stage in which calculations are described purely by words, a "syncopated" stage in which quantities and common algebraic operations are beginning to be represented by symbolic abbreviations, and finally a "symbolic" stage, in which comprehensive notational systems for formulas are the norm. Rhetorical stage Before 1000 BC • ca. 70,000 BC – South Africa, ochre rocks adorned with scratched geometric patterns (see Blombos Cave).[1] • ca. 35,000 BC to 20,000 BC – Africa and France, earliest known prehistoric attempts to quantify time (see Lebombo bone).[2][3][4] • c. 20,000 BC – Nile Valley, Ishango bone: possibly the earliest reference to prime numbers and Egyptian multiplication. • c. 3400 BC – Mesopotamia, the Sumerians invent the first numeral system, and a system of weights and measures. • c. 3100 BC – Egypt, earliest known decimal system allows indefinite counting by way of introducing new symbols.[5] • c. 2800 BC – Indus Valley Civilisation on the Indian subcontinent, earliest use of decimal ratios in a uniform system of ancient weights and measures, the smallest unit of measurement used is 1.704 millimetres and the smallest unit of mass used is 28 grams. • 2700 BC – Egypt, precision surveying. • 2400 BC – Egypt, precise astronomical calendar, used even in the Middle Ages for its mathematical regularity. • c. 2000 BC – Mesopotamia, the Babylonians use a base-60 positional numeral system, and compute the first known approximate value of π at 3.125. • c. 2000 BC – Scotland, carved stone balls exhibit a variety of symmetries including all of the symmetries of Platonic solids, though it is not known if this was deliberate. • 1800 BC – Egypt, Moscow Mathematical Papyrus, finding the volume of a frustum. • c. 1800 BC – Berlin Papyrus 6619 (Egypt, 19th dynasty) contains a quadratic equation and its solution.[5] • 1650 BC – Rhind Mathematical Papyrus, copy of a lost scroll from around 1850 BC, the scribe Ahmes presents one of the first known approximate values of π at 3.16, the first attempt at squaring the circle, earliest known use of a sort of cotangent, and knowledge of solving first order linear equations. • The earliest recorded use of combinatorial techniques comes from problem 79 of the Rhind papyrus which dates to the 16th century BCE.[6] Syncopated stage 1st millennium BC • c. 1000 BC – Simple fractions used by the Egyptians. However, only unit fractions are used (i.e., those with 1 as the numerator) and interpolation tables are used to approximate the values of the other fractions.[7] • first half of 1st millennium BC – Vedic India – Yajnavalkya, in his Shatapatha Brahmana, describes the motions of the Sun and the Moon, and advances a 95-year cycle to synchronize the motions of the Sun and the Moon. • 800 BC – Baudhayana, author of the Baudhayana Shulba Sutra, a Vedic Sanskrit geometric text, contains quadratic equations, and calculates the square root of two correctly to five decimal places. • c. 8th century BC – the Yajurveda, one of the four Hindu Vedas, contains the earliest concept of infinity, and states "if you remove a part from infinity or add a part to infinity, still what remains is infinity." • 1046 BC to 256 BC – China, Zhoubi Suanjing, arithmetic, geometric algorithms, and proofs. • 624 BC – 546 BC – Greece, Thales of Miletus has various theorems attributed to him. 
• c. 600 BC – India, the other Vedic "Sulba Sutras" ("rule of chords" in Sanskrit) use Pythagorean triples, contain a number of geometrical proofs, and approximate π at 3.16. • second half of 1st millennium BC – The Luoshu Square, the unique normal magic square of order three, was discovered in China. • 530 BC – Greece, Pythagoras studies propositional geometry and vibrating lyre strings; his group also discovers the irrationality of the square root of two. • c. 510 BC – Greece, Anaxagoras • c. 500 BC – Indian grammarian Pānini writes the Astadhyayi, which contains the use of metarules, transformations and recursions, originally for the purpose of systematizing the grammar of Sanskrit. • c. 500 BC – Greece, Oenopides of Chios • 470 BC – 410 BC – Greece, Hippocrates of Chios utilizes lunes in an attempt to square the circle. • 490 BC – 430 BC – Greece, Zeno of Elea (Zeno's paradoxes) • 5th century BC – India, Apastamba, author of the Apastamba Sulba Sutra, another Vedic Sanskrit geometric text, makes an attempt at squaring the circle and also calculates the square root of 2 correct to five decimal places. • 5th c. BC – Greece, Theodorus of Cyrene • 5th century – Greece, Antiphon the Sophist • 460 BC – 370 BC – Greece, Democritus • 460 BC – 399 BC – Greece, Hippias • 5th century (late) – Greece, Bryson of Heraclea • 428 BC – 347 BC – Greece, Archytas • 423 BC – 347 BC – Greece, Plato • 417 BC – 317 BC – Greece, Theaetetus • c. 400 BC – India, Jaina mathematicians write the Surya Prajnapti, a mathematical text classifying all numbers into three sets: enumerable, innumerable and infinite. It also recognises five different types of infinity: infinite in one and two directions, infinite in area, infinite everywhere, and infinite perpetually. • 408 BC – 355 BC – Greece, Eudoxus of Cnidus • 400 BC – 350 BC – Greece, Thymaridas • 395 BC – 313 BC – Greece, Xenocrates • 390 BC – 320 BC – Greece, Dinostratus • 380–290 – Greece, Autolycus of Pitane • 370 BC – Greece, Eudoxus states the method of exhaustion for area determination. • 370 BC – 300 BC – Greece, Aristaeus the Elder • 370 BC – 300 BC – Greece, Callippus • 350 BC – Greece, Aristotle discusses logical reasoning in Organon. • 4th century BC – Indian texts use the Sanskrit word "Shunya" to refer to the concept of "void" (zero). • 4th century BC – China, Counting rods • 330 BC – China, the earliest known work on Chinese geometry, the Mo Jing, is compiled. • 310 BC – 230 BC – Greece, Aristarchus of Samos • 390 BC – 310 BC – Greece, Heraclides Ponticus • 380 BC – 320 BC – Greece, Menaechmus • 300 BC – India, Bhagabati Sutra, which contains the earliest information on combinations. • 300 BC – Greece, Euclid in his Elements studies geometry as an axiomatic system, proves the infinitude of prime numbers and presents the Euclidean algorithm; he states the law of reflection in Catoptrics, and he proves the fundamental theorem of arithmetic. • c. 300 BC – India, Brahmi numerals (ancestor of the common modern base 10 numeral system) • 370 BC – 300 BC – Greece, Eudemus of Rhodes works on histories of arithmetic, geometry and astronomy now lost.[8] • 300 BC – Mesopotamia, the Babylonians invent the earliest calculator, the abacus. • c. 300 BC – Indian mathematician Pingala writes the Chhandah-shastra, which contains the first Indian use of zero as a digit (indicated by a dot) and also presents a description of a binary numeral system, along with the first use of Fibonacci numbers and Pascal's triangle.
• 280 BC – 210 BC – Greece, Nicomedes (mathematician) • 280 BC – 220BC – Greece, Philo of Byzantium • 280 BC – 220 BC – Greece, Conon of Samos • 279 BC – 206 BC – Greece, Chrysippus • c. 3rd century BC – India, Kātyāyana • 250 BC – 190 BC – Greece, Dionysodorus • 262 -198 BC – Greece, Apollonius of Perga • 260 BC – Greece, Archimedes proved that the value of π lies between 3 + 1/7 (approx. 3.1429) and 3 + 10/71 (approx. 3.1408), that the area of a circle was equal to π multiplied by the square of the radius of the circle and that the area enclosed by a parabola and a straight line is 4/3 multiplied by the area of a triangle with equal base and height. He also gave a very accurate estimate of the value of the square root of 3. • c. 250 BC – late Olmecs had already begun to use a true zero (a shell glyph) several centuries before Ptolemy in the New World. See 0 (number). • 240 BC – Greece, Eratosthenes uses his sieve algorithm to quickly isolate prime numbers. • 240 BC 190 BC– Greece, Diocles (mathematician) • 225 BC – Greece, Apollonius of Perga writes On Conic Sections and names the ellipse, parabola, and hyperbola. • 202 BC to 186 BC –China, Book on Numbers and Computation, a mathematical treatise, is written in Han dynasty. • 200 BC – 140 BC – Greece, Zenodorus (mathematician) • 150 BC – India, Jain mathematicians in India write the Sthananga Sutra, which contains work on the theory of numbers, arithmetical operations, geometry, operations with fractions, simple equations, cubic equations, quartic equations, and permutations and combinations. • c. 150 BC – Greece, Perseus (geometer) • 150 BC – China, A method of Gaussian elimination appears in the Chinese text The Nine Chapters on the Mathematical Art. • 150 BC – China, Horner's method appears in the Chinese text The Nine Chapters on the Mathematical Art. • 150 BC – China, Negative numbers appear in the Chinese text The Nine Chapters on the Mathematical Art. • 150 BC – 75 BC – Phoenician, Zeno of Sidon • 190 BC – 120 BC – Greece, Hipparchus develops the bases of trigonometry. • 190 BC – 120 BC – Greece, Hypsicles • 160 BC – 100 BC – Greece, Theodosius of Bithynia • 135 BC – 51 BC – Greece, Posidonius • 78 BC – 37 BC – China, Jing Fang • 50 BC – Indian numerals, a descendant of the Brahmi numerals (the first positional notation base-10 numeral system), begins development in India. • mid 1st century Cleomedes (as late as 400 AD) • final centuries BC – Indian astronomer Lagadha writes the Vedanga Jyotisha, a Vedic text on astronomy that describes rules for tracking the motions of the Sun and the Moon, and uses geometry and trigonometry for astronomy. • 1st C. BC – Greece, Geminus • 50 BC – 23 AD – China, Liu Xin 1st millennium AD • 1st century – Greece, Heron of Alexandria, Hero, the earliest, fleeting reference to square roots of negative numbers. • c 100 – Greece, Theon of Smyrna • 60 – 120 – Greece, Nicomachus • 70 – 140 – Greece, Menelaus of Alexandria Spherical trigonometry • 78 – 139 – China, Zhang Heng • c. 2nd century – Greece, Ptolemy of Alexandria wrote the Almagest. • 132 – 192 – China, Cai Yong • 240 – 300 – Greece, Sporus of Nicaea • 250 – Greece, Diophantus uses symbols for unknown numbers in terms of syncopated algebra, and writes Arithmetica, one of the earliest treatises on algebra. • 263 – China, Liu Hui computes π using Liu Hui's π algorithm. • 300 – the earliest known use of zero as a decimal digit is introduced by Indian mathematicians. 
• 234 – 305 – Greece, Porphyry (philosopher) • 300 – 360 – Greece, Serenus of Antinoöpolis • 335 – 405– Greece, Theon of Alexandria • c. 340 – Greece, Pappus of Alexandria states his hexagon theorem and his centroid theorem. • 350 – 415 – Byzantine Empire, Hypatia • c. 400 – India, the Bakhshali manuscript , which describes a theory of the infinite containing different levels of infinity, shows an understanding of indices, as well as logarithms to base 2, and computes square roots of numbers as large as a million correct to at least 11 decimal places. • 300 to 500 – the Chinese remainder theorem is developed by Sun Tzu. • 300 to 500 – China, a description of rod calculus is written by Sun Tzu. • 412 – 485 – Greece, Proclus • 420 – 480 – Greece, Domninus of Larissa • b 440 – Greece, Marinus of Neapolis "I wish everything was mathematics." • 450 – China, Zu Chongzhi computes π to seven decimal places. This calculation remains the most accurate calculation for π for close to a thousand years. • c. 474 – 558 – Greece, Anthemius of Tralles • 500 – India, Aryabhata writes the Aryabhata-Siddhanta, which first introduces the trigonometric functions and methods of calculating their approximate numerical values. It defines the concepts of sine and cosine, and also contains the earliest tables of sine and cosine values (in 3.75-degree intervals from 0 to 90 degrees). • 480 – 540 – Greece, Eutocius of Ascalon • 490 – 560 – Greece, Simplicius of Cilicia • 6th century – Aryabhata gives accurate calculations for astronomical constants, such as the solar eclipse and lunar eclipse, computes π to four decimal places, and obtains whole number solutions to linear equations by a method equivalent to the modern method. • 505 – 587 – India, Varāhamihira • 6th century – India, Yativṛṣabha • 535 – 566 – China, Zhen Luan • 550 – Hindu mathematicians give zero a numeral representation in the positional notation Indian numeral system. • 600 – China, Liu Zhuo uses quadratic interpolation. • 602 – 670 – China, Li Chunfeng • 625 China, Wang Xiaotong writes the Jigu Suanjing, where cubic and quartic equations are solved. • 7th century – India, Bhāskara I gives a rational approximation of the sine function. • 7th century – India, Brahmagupta invents the method of solving indeterminate equations of the second degree and is the first to use algebra to solve astronomical problems. He also develops methods for calculations of the motions and places of various planets, their rising and setting, conjunctions, and the calculation of eclipses of the sun and the moon. • 628 – Brahmagupta writes the Brahma-sphuta-siddhanta, where zero is clearly explained, and where the modern place-value Indian numeral system is fully developed. It also gives rules for manipulating both negative and positive numbers, methods for computing square roots, methods of solving linear and quadratic equations, and rules for summing series, Brahmagupta's identity, and the Brahmagupta theorem. • 721 – China, Zhang Sui (Yi Xing) computes the first tangent table. • 8th century – India, Virasena gives explicit rules for the Fibonacci sequence, gives the derivation of the volume of a frustum using an infinite procedure, and also deals with the logarithm to base 2 and knows its laws. • 8th century – India, Sridhara gives the rule for finding the volume of a sphere and also the formula for solving quadratic equations. 
• 773 – Iraq, Kanka brings Brahmagupta's Brahma-sphuta-siddhanta to Baghdad to explain the Indian system of arithmetic astronomy and the Indian numeral system. • 773 – Muḥammad ibn Ibrāhīm al-Fazārī translates the Brahma-sphuta-siddhanta into Arabic upon the request of King Khalif Abbasid Al Mansoor. • 9th century – India, Govindasvāmi discovers the Newton-Gauss interpolation formula, and gives the fractional parts of Aryabhata's tabular sines. • 810 – The House of Wisdom is built in Baghdad for the translation of Greek and Sanskrit mathematical works into Arabic. • 820 – Al-Khwarizmi – Persian mathematician, father of algebra, writes the Al-Jabr, later transliterated as Algebra, which introduces systematic algebraic techniques for solving linear and quadratic equations. Translations of his book on arithmetic will introduce the Hindu–Arabic decimal number system to the Western world in the 12th century. The term algorithm is also named after him. • 820 – Iran, Al-Mahani conceived the idea of reducing geometrical problems such as doubling the cube to problems in algebra. • c. 850 – Iraq, al-Kindi pioneers cryptanalysis and frequency analysis in his book on cryptography. • c. 850 – India, Mahāvīra writes the Gaṇitasārasan̄graha otherwise known as the Ganita Sara Samgraha which gives systematic rules for expressing a fraction as the sum of unit fractions. • 895 – Syria, Thābit ibn Qurra: the only surviving fragment of his original work contains a chapter on the solution and properties of cubic equations. He also generalized the Pythagorean theorem, and discovered the theorem by which pairs of amicable numbers can be found, (i.e., two numbers such that each is the sum of the proper divisors of the other). • c. 900 – Egypt, Abu Kamil had begun to understand what we would write in symbols as $x^{n}\cdot x^{m}=x^{m+n}$ • 940 – Iran, Abu al-Wafa' al-Buzjani extracts roots using the Indian numeral system. • 953 – The arithmetic of the Hindu–Arabic numeral system at first required the use of a dust board (a sort of handheld blackboard) because "the methods required moving the numbers around in the calculation and rubbing some out as the calculation proceeded." Al-Uqlidisi modified these methods for pen and paper use. Eventually the advances enabled by the decimal system led to its standard use throughout the region and the world. • 953 – Persia, Al-Karaji is the "first person to completely free algebra from geometrical operations and to replace them with the arithmetical type of operations which are at the core of algebra today. He was first to define the monomials $x$, $x^{2}$, $x^{3}$, ... and $1/x$, $1/x^{2}$, $1/x^{3}$, ... and to give rules for products of any two of these. He started a school of algebra which flourished for several hundreds of years". He also discovered the binomial theorem for integer exponents, which "was a major factor in the development of numerical analysis based on the decimal system". • 975 – Mesopotamia, al-Battani extended the Indian concepts of sine and cosine to other trigonometrical ratios, like tangent, secant and their inverse functions. Derived the formulae: $\sin \alpha =\tan \alpha /{\sqrt {1+\tan ^{2}\alpha }}$ and $\cos \alpha =1/{\sqrt {1+\tan ^{2}\alpha }}$. Symbolic stage 1000–1500 • c. 1000 – Abu Sahl al-Quhi (Kuhi) solves equations higher than the second degree. • c. 1000 – Abu-Mahmud Khujandi first states a special case of Fermat's Last Theorem. • c. 
1000 – Law of sines is discovered by Muslim mathematicians, but it is uncertain who discovers it first between Abu-Mahmud al-Khujandi, Abu Nasr Mansur, and Abu al-Wafa' al-Buzjani. • c. 1000 – Pope Sylvester II introduces the abacus using the Hindu–Arabic numeral system to Europe. • 1000 – Al-Karaji writes a book containing the first known proofs by mathematical induction. He used it to prove the binomial theorem, Pascal's triangle, and the sum of integral cubes.[9] He was "the first who introduced the theory of algebraic calculus".[10] • c. 1000 – Abu Mansur al-Baghdadi studied a slight variant of Thābit ibn Qurra's theorem on amicable numbers, and he also made improvements on the decimal system. • 1020 – Abu al-Wafa' al-Buzjani gave the formula: sin (α + β) = sin α cos β + sin β cos α. Also discussed the quadrature of the parabola and the volume of the paraboloid. • 1021 – Ibn al-Haytham formulated and solved Alhazen's problem geometrically. • 1030 – Alī ibn Ahmad al-Nasawī writes a treatise on the decimal and sexagesimal number systems. His arithmetic explains the division of fractions and the extraction of square and cubic roots (square root of 57,342; cubic root of 3, 652, 296) in an almost modern manner.[11] • 1070 – Omar Khayyam begins to write Treatise on Demonstration of Problems of Algebra and classifies cubic equations. • c. 1100 – Omar Khayyám "gave a complete classification of cubic equations with geometric solutions found by means of intersecting conic sections". He became the first to find general geometric solutions of cubic equations and laid the foundations for the development of analytic geometry and non-Euclidean geometry. He also extracted roots using the decimal system (Hindu–Arabic numeral system). • 12th century – Indian numerals have been modified by Arab mathematicians to form the modern Arabic numeral system . • 12th century – the Arabic numeral system reaches Europe through the Arabs. • 12th century – Bhaskara Acharya writes the Lilavati, which covers the topics of definitions, arithmetical terms, interest computation, arithmetical and geometrical progressions, plane geometry, solid geometry, the shadow of the gnomon, methods to solve indeterminate equations, and combinations. • 12th century – Bhāskara II (Bhaskara Acharya) writes the Bijaganita (Algebra), which is the first text to recognize that a positive number has two square roots. Furthermore, it also gives the Chakravala method which was the first generalized solution of so called Pell's equation • 12th century – Bhaskara Acharya conceives differential calculus, and also develops Rolle's theorem, Pell's equation, a proof for the Pythagorean theorem, proves that division by zero is infinity, computes π to 5 decimal places, and calculates the time taken for the Earth to orbit the Sun to 9 decimal places. • 1130 – Al-Samawal al-Maghribi gave a definition of algebra: "[it is concerned] with operating on unknowns using all the arithmetical tools, in the same way as the arithmetician operates on the known."[12] • 1135 – Sharaf al-Din al-Tusi followed al-Khayyam's application of algebra to geometry, and wrote a treatise on cubic equations that "represents an essential contribution to another algebra which aimed to study curves by means of equations, thus inaugurating the beginning of algebraic geometry".[12] • 1202 – Leonardo Fibonacci demonstrates the utility of Hindu–Arabic numerals in his Liber Abaci (Book of the Abacus). • 1247 – Qin Jiushao publishes Shùshū Jiǔzhāng (Mathematical Treatise in Nine Sections). 
• 1248 – Li Ye writes Ceyuan haijing, a 12 volume mathematical treatise containing 170 formulas and 696 problems mostly solved by polynomial equations using the method tian yuan shu. • 1260 – Al-Farisi gave a new proof of Thabit ibn Qurra's theorem, introducing important new ideas concerning factorization and combinatorial methods. He also gave the pair of amicable numbers 17296 and 18416 that have also been jointly attributed to Fermat as well as Thabit ibn Qurra.[13] • c. 1250 – Nasir al-Din al-Tusi attempts to develop a form of non-Euclidean geometry. • 1280 – Guo Shoujing and Wang Xun introduce cubic interpolation. • 1303 – Zhu Shijie publishes Precious Mirror of the Four Elements, which contains an ancient method of arranging binomial coefficients in a triangle. • 1356- Narayana Pandita completes his treatise Ganita Kaumudi, which for the first time contains Fermat's factorization method, generalized fibonacci sequence, and the first ever algorithm to systematically generate all permutations as well as many new magic figure techniques. • 14th century – Madhava is considered the father of mathematical analysis, who also worked on the power series for π and for sine and cosine functions, and along with other Kerala school mathematicians, founded the important concepts of calculus. • 14th century – Parameshvara Nambudiri, a Kerala school mathematician, presents a series form of the sine function that is equivalent to its Taylor series expansion, states the mean value theorem of differential calculus, and is also the first mathematician to give the radius of circle with inscribed cyclic quadrilateral. 15th century • 1400 – Madhava discovers the series expansion for the inverse-tangent function, the infinite series for arctan and sin, and many methods for calculating the circumference of the circle, and uses them to compute π correct to 11 decimal places. • c. 1400 – Jamshid al-Kashi "contributed to the development of decimal fractions not only for approximating algebraic numbers, but also for real numbers such as π. His contribution to decimal fractions is so major that for many years he was considered as their inventor. Although not the first to do so, al-Kashi gave an algorithm for calculating nth roots, which is a special case of the methods given many centuries later by [Paolo] Ruffini and [William George] Horner." He is also the first to use the decimal point notation in arithmetic and Arabic numerals. His works include The Key of arithmetics, Discoveries in mathematics, The Decimal point, and The benefits of the zero. The contents of the Benefits of the Zero are an introduction followed by five essays: "On whole number arithmetic", "On fractional arithmetic", "On astrology", "On areas", and "On finding the unknowns [unknown variables]". He also wrote the Thesis on the sine and the chord and Thesis on finding the first degree sine. • 15th century – Ibn al-Banna' al-Marrakushi and Abu'l-Hasan ibn Ali al-Qalasadi introduced symbolic notation for algebra and for mathematics in general.[12] • 15th century – Nilakantha Somayaji, a Kerala school mathematician, writes the Aryabhatiya Bhasya, which contains work on infinite-series expansions, problems of algebra, and spherical geometry. • 1424 – Ghiyath al-Kashi computes π to sixteen decimal places using inscribed and circumscribed polygons. • 1427 – Jamshid al-Kashi completes The Key to Arithmetic containing work of great depth on decimal fractions. 
It applies arithmetical and algebraic methods to the solution of various problems, including several geometric ones. • 1464 – Regiomontanus writes De Triangulis omnimodus which is one of the earliest texts to treat trigonometry as a separate branch of mathematics. • 1478 – An anonymous author writes the Treviso Arithmetic. • 1494 – Luca Pacioli writes Summa de arithmetica, geometria, proportioni et proportionalità; introduces primitive symbolic algebra using "co" (cosa) for the unknown. 16th century • 1501 – Nilakantha Somayaji writes the Tantrasamgraha. • 1520 – Scipione del Ferro develops a method for solving "depressed" cubic equations (cubic equations without an x2 term), but does not publish. • 1522 – Adam Ries explained the use of Arabic digits and their advantages over Roman numerals. • 1535 – Nicolo Tartaglia independently develops a method for solving depressed cubic equations but also does not publish. • 1539 – Gerolamo Cardano learns Tartaglia's method for solving depressed cubics and discovers a method for depressing cubics, thereby creating a method for solving all cubics. • 1540 – Lodovico Ferrari solves the quartic equation. • 1544 – Michael Stifel publishes Arithmetica integra. • 1545 – Gerolamo Cardano conceives the idea of complex numbers. • 1550 – Jyeṣṭhadeva, a Kerala school mathematician, writes the Yuktibhāṣā, the world's first calculus text, which gives detailed derivations of many calculus theorems and formulae. • 1572 – Rafael Bombelli writes Algebra treatise and uses imaginary numbers to solve cubic equations. • 1584 – Zhu Zaiyu calculates equal temperament. • 1596 – Ludolph van Ceulen computes π to twenty decimal places using inscribed and circumscribed polygons. 17th century • 1614 – John Napier publishes a table of Napierian logarithms in Mirifici Logarithmorum Canonis Descriptio. • 1617 – Henry Briggs discusses decimal logarithms in Logarithmorum Chilias Prima. • 1618 – John Napier publishes the first references to e in a work on logarithms. • 1619 – René Descartes discovers analytic geometry (Pierre de Fermat claimed that he also discovered it independently). • 1619 – Johannes Kepler discovers two of the Kepler-Poinsot polyhedra. • 1629 – Pierre de Fermat develops a rudimentary differential calculus. • 1634 – Gilles de Roberval shows that the area under a cycloid is three times the area of its generating circle. • 1636 – Muhammad Baqir Yazdi jointly discovered the pair of amicable numbers 9,363,584 and 9,437,056 along with Descartes (1636).[13] • 1637 – Pierre de Fermat claims to have proven Fermat's Last Theorem in his copy of Diophantus' Arithmetica. • 1637 – First use of the term imaginary number by René Descartes; it was meant to be derogatory. • 1643 – René Descartes develops Descartes' theorem. • 1654 – Blaise Pascal and Pierre de Fermat create the theory of probability. • 1655 – John Wallis writes Arithmetica Infinitorum. • 1658 – Christopher Wren shows that the length of a cycloid is four times the diameter of its generating circle. • 1665 – Isaac Newton works on the fundamental theorem of calculus and develops his version of infinitesimal calculus. • 1668 – Nicholas Mercator and William Brouncker discover an infinite series for the logarithm while attempting to calculate the area under a hyperbolic segment. • 1671 – James Gregory develops a series expansion for the inverse-tangent function (originally discovered by Madhava). • 1671 – James Gregory discovers Taylor's theorem. 
• 1673 – Gottfried Leibniz also develops his version of infinitesimal calculus. • 1675 – Isaac Newton invents an algorithm for the computation of functional roots. • 1680s – Gottfried Leibniz works on symbolic logic. • 1683 – Seki Takakazu discovers the resultant and determinant. • 1683 – Seki Takakazu develops elimination theory. • 1691 – Gottfried Leibniz discovers the technique of separation of variables for ordinary differential equations. • 1693 – Edmund Halley prepares the first mortality tables statistically relating death rate to age. • 1696 – Guillaume de l'Hôpital states his rule for the computation of certain limits. • 1696 – Jakob Bernoulli and Johann Bernoulli solve brachistochrone problem, the first result in the calculus of variations. • 1699 – Abraham Sharp calculates π to 72 digits but only 71 are correct. 18th century • 1706 – John Machin develops a quickly converging inverse-tangent series for π and computes π to 100 decimal places. • 1708 – Seki Takakazu discovers Bernoulli numbers. Jacob Bernoulli whom the numbers are named after is believed to have independently discovered the numbers shortly after Takakazu. • 1712 – Brook Taylor develops Taylor series. • 1722 – Abraham de Moivre states de Moivre's formula connecting trigonometric functions and complex numbers. • 1722 – Takebe Kenko introduces Richardson extrapolation. • 1724 – Abraham De Moivre studies mortality statistics and the foundation of the theory of annuities in Annuities on Lives. • 1730 – James Stirling publishes The Differential Method. • 1733 – Giovanni Gerolamo Saccheri studies what geometry would be like if Euclid's fifth postulate were false. • 1733 – Abraham de Moivre introduces the normal distribution to approximate the binomial distribution in probability. • 1734 – Leonhard Euler introduces the integrating factor technique for solving first-order ordinary differential equations. • 1735 – Leonhard Euler solves the Basel problem, relating an infinite series to π. • 1736 – Leonhard Euler solves the problem of the Seven bridges of Königsberg, in effect creating graph theory. • 1739 – Leonhard Euler solves the general homogeneous linear ordinary differential equation with constant coefficients. • 1742 – Christian Goldbach conjectures that every even number greater than two can be expressed as the sum of two primes, now known as Goldbach's conjecture. • 1747 – Jean le Rond d'Alembert solves the vibrating string problem (one-dimensional wave equation).[14] • 1748 – Maria Gaetana Agnesi discusses analysis in Instituzioni Analitiche ad Uso della Gioventu Italiana. • 1761 – Thomas Bayes proves Bayes' theorem. • 1761 – Johann Heinrich Lambert proves that π is irrational. • 1762 – Joseph-Louis Lagrange discovers the divergence theorem. • 1789 – Jurij Vega improves Machin's formula and computes π to 140 decimal places, 136 of which were correct. • 1794 – Jurij Vega publishes Thesaurus Logarithmorum Completus. • 1796 – Carl Friedrich Gauss proves that the regular 17-gon can be constructed using only a compass and straightedge. • 1796 – Adrien-Marie Legendre conjectures the prime number theorem. • 1797 – Caspar Wessel associates vectors with complex numbers and studies complex number operations in geometrical terms. • 1799 – Carl Friedrich Gauss proves the fundamental theorem of algebra (every polynomial equation has a solution among the complex numbers). • 1799 – Paolo Ruffini partially proves the Abel–Ruffini theorem that quintic or higher equations cannot be solved by a general formula. 
19th century • 1801 – Disquisitiones Arithmeticae, Carl Friedrich Gauss's number theory treatise, is published in Latin. • 1805 – Adrien-Marie Legendre introduces the method of least squares for fitting a curve to a given set of observations. • 1806 – Louis Poinsot discovers the two remaining Kepler-Poinsot polyhedra. • 1806 – Jean-Robert Argand publishes a proof of the fundamental theorem of algebra and the Argand diagram. • 1807 – Joseph Fourier announces his discoveries about the trigonometric decomposition of functions. • 1811 – Carl Friedrich Gauss discusses the meaning of integrals with complex limits and briefly examines the dependence of such integrals on the chosen path of integration. • 1815 – Siméon Denis Poisson carries out integrations along paths in the complex plane. • 1817 – Bernard Bolzano presents the intermediate value theorem—a continuous function that is negative at one point and positive at another point must be zero for at least one point in between. Bolzano gives a first formal (ε, δ)-definition of limit. • 1821 – Augustin-Louis Cauchy publishes Cours d'Analyse, which contains an erroneous “proof” that the pointwise limit of continuous functions is continuous. • 1822 – Augustin-Louis Cauchy presents Cauchy's integral theorem for integration around the boundary of a rectangle in the complex plane. • 1822 – Irisawa Shintarō Hiroatsu analyzes Soddy's hexlet in a Sangaku. • 1823 – Sophie Germain's theorem is published in the second edition of Adrien-Marie Legendre's Essai sur la théorie des nombres.[15] • 1824 – Niels Henrik Abel partially proves the Abel–Ruffini theorem that the general quintic or higher equations cannot be solved by a general formula involving only arithmetical operations and roots. • 1825 – Augustin-Louis Cauchy presents the Cauchy integral theorem for general integration paths—he assumes the function being integrated has a continuous derivative, and he introduces the theory of residues in complex analysis. • 1825 – Peter Gustav Lejeune Dirichlet and Adrien-Marie Legendre prove Fermat's Last Theorem for n = 5. • 1825 – André-Marie Ampère discovers Stokes' theorem. • 1826 – Niels Henrik Abel gives counterexamples to Augustin-Louis Cauchy's erroneous “proof” that the pointwise limit of continuous functions is continuous. • 1828 – George Green proves Green's theorem. • 1829 – János Bolyai, Gauss, and Lobachevsky invent hyperbolic non-Euclidean geometry. • 1831 – Mikhail Vasilievich Ostrogradsky rediscovers and gives the first proof of the divergence theorem earlier described by Lagrange, Gauss and Green. • 1832 – Évariste Galois presents a general condition for the solvability of algebraic equations, thereby essentially founding group theory and Galois theory. • 1832 – Lejeune Dirichlet proves Fermat's Last Theorem for n = 14. • 1835 – Lejeune Dirichlet proves Dirichlet's theorem about prime numbers in arithmetical progressions. • 1837 – Pierre Wantzel proves that doubling the cube and trisecting the angle are impossible with only a compass and straightedge, and completes the solution of the problem of the constructibility of regular polygons. • 1837 – Peter Gustav Lejeune Dirichlet develops analytic number theory. • 1838 – First mention of uniform convergence in a paper by Christoph Gudermann; later formalized by Karl Weierstrass. Uniform convergence is required to fix Augustin-Louis Cauchy's erroneous “proof”, from his 1821 Cours d'Analyse, that the pointwise limit of continuous functions is continuous. 
• 1841 – Karl Weierstrass discovers but does not publish the Laurent expansion theorem. • 1843 – Pierre-Alphonse Laurent discovers and presents the Laurent expansion theorem. • 1843 – William Hamilton discovers the calculus of quaternions and deduces that they are non-commutative. • 1844 - Hermann Grassmann publishes his Ausdehnungslehre, from which linear algebra is later developed. • 1847 – George Boole formalizes symbolic logic in The Mathematical Analysis of Logic, defining what is now called Boolean algebra. • 1849 – George Gabriel Stokes shows that solitary waves can arise from a combination of periodic waves. • 1850 – Victor Alexandre Puiseux distinguishes between poles and branch points and introduces the concept of essential singular points. • 1850 – George Gabriel Stokes rediscovers and proves Stokes' theorem. • 1854 – Bernhard Riemann introduces Riemannian geometry. • 1854 – Arthur Cayley shows that quaternions can be used to represent rotations in four-dimensional space. • 1858 – August Ferdinand Möbius invents the Möbius strip. • 1858 – Charles Hermite solves the general quintic equation by means of elliptic and modular functions. • 1859 – Bernhard Riemann formulates the Riemann hypothesis, which has strong implications about the distribution of prime numbers. • 1868 – Eugenio Beltrami demonstrates independence of Euclid’s parallel postulate from the other axioms of Euclidean geometry. • 1870 – Felix Klein constructs an analytic geometry for Lobachevski's geometry thereby establishing its self-consistency and the logical independence of Euclid's fifth postulate. • 1872 – Richard Dedekind invents what is now called the Dedekind Cut for defining irrational numbers, and now used for defining surreal numbers. • 1873 – Charles Hermite proves that e is transcendental. • 1873 – Georg Frobenius presents his method for finding series solutions to linear differential equations with regular singular points. • 1874 – Georg Cantor proves that the set of all real numbers is uncountably infinite but the set of all real algebraic numbers is countably infinite. His proof does not use his diagonal argument, which he published in 1891. • 1882 – Ferdinand von Lindemann proves that π is transcendental and that therefore the circle cannot be squared with a compass and straightedge. • 1882 – Felix Klein invents the Klein bottle. • 1895 – Diederik Korteweg and Gustav de Vries derive the Korteweg–de Vries equation to describe the development of long solitary water waves in a canal of rectangular cross section. • 1895 – Georg Cantor publishes a book about set theory containing the arithmetic of infinite cardinal numbers and the continuum hypothesis. • 1895 – Henri Poincaré publishes paper "Analysis Situs" which started modern topology. • 1896 – Jacques Hadamard and Charles Jean de la Vallée-Poussin independently prove the prime number theorem. • 1896 – Hermann Minkowski presents Geometry of numbers. • 1899 – Georg Cantor discovers a contradiction in his set theory. • 1899 – David Hilbert presents a set of self-consistent geometric axioms in Foundations of Geometry. • 1900 – David Hilbert states his list of 23 problems, which show where some further mathematical work is needed. 20th century [16] • 1901 – Élie Cartan develops the exterior derivative. • 1901 – Henri Lebesgue publishes on Lebesgue integration. • 1903 – Carle David Tolmé Runge presents a fast Fourier transform algorithm • 1903 – Edmund Georg Hermann Landau gives considerably simpler proof of the prime number theorem. 
• 1908 – Ernst Zermelo axiomatizes set theory, thus avoiding Cantor's contradictions. • 1908 – Josip Plemelj solves the Riemann problem about the existence of a differential equation with a given monodromic group and uses the Sokhotski–Plemelj formulae. • 1912 – Luitzen Egbertus Jan Brouwer presents the Brouwer fixed-point theorem. • 1912 – Josip Plemelj publishes a simplified proof of Fermat's Last Theorem for exponent n = 5. • 1915 – Emmy Noether proves her symmetry theorem, which shows that every continuous symmetry in physics has a corresponding conservation law. • 1916 – Srinivasa Ramanujan introduces the Ramanujan conjecture. This conjecture is later generalized by Hans Petersson. • 1919 – Viggo Brun defines Brun's constant B2 for twin primes. • 1921 – Emmy Noether introduces the first general definition of a commutative ring. • 1928 – John von Neumann begins devising the principles of game theory and proves the minimax theorem. • 1929 – Emmy Noether introduces the first general representation theory of groups and algebras. • 1930 – Casimir Kuratowski shows that the three-cottage problem has no solution. • 1931 – Kurt Gödel proves his incompleteness theorem, which shows that every sufficiently powerful axiomatic system for mathematics is either incomplete or inconsistent. • 1931 – Georges de Rham develops theorems in cohomology and characteristic classes. • 1933 – Karol Borsuk and Stanislaw Ulam present the Borsuk–Ulam antipodal-point theorem. • 1933 – Andrey Nikolaevich Kolmogorov publishes his book Basic notions of the calculus of probability (Grundbegriffe der Wahrscheinlichkeitsrechnung), which contains an axiomatization of probability based on measure theory. • 1936 – Alonzo Church and Alan Turing create, respectively, the λ-calculus and the Turing machine, formalizing the notion of computation and computability. • 1938 – Tadeusz Banachiewicz introduces LU decomposition. • 1940 – Kurt Gödel shows that neither the continuum hypothesis nor the axiom of choice can be disproven from the standard axioms of set theory. • 1942 – G.C. Danielson and Cornelius Lanczos develop a fast Fourier transform algorithm. • 1943 – Kenneth Levenberg proposes a method for nonlinear least squares fitting. • 1945 – Stephen Cole Kleene introduces realizability. • 1945 – Saunders Mac Lane and Samuel Eilenberg start category theory. • 1945 – Norman Steenrod and Samuel Eilenberg give the Eilenberg–Steenrod axioms for (co-)homology. • 1946 – Jean Leray introduces the spectral sequence. • 1947 – George Dantzig publishes the simplex method for linear programming. • 1948 – John von Neumann mathematically studies self-reproducing machines. • 1948 – Atle Selberg and Paul Erdős independently give elementary proofs of the prime number theorem. • 1949 – John Wrench and L.R. Smith compute π to 2,037 decimal places using ENIAC. • 1949 – Claude Shannon develops the notion of information theory. • 1950 – Stanisław Ulam and John von Neumann present cellular automata dynamical systems. • 1953 – Nicholas Metropolis introduces the idea of thermodynamic simulated annealing algorithms. • 1955 – H. S. M. Coxeter et al. publish the complete list of uniform polyhedra. • 1955 – Enrico Fermi, John Pasta, Stanisław Ulam, and Mary Tsingou numerically study a nonlinear spring model of heat conduction and discover solitary wave type behavior. • 1956 – Noam Chomsky describes a hierarchy of formal languages. • 1956 – John Milnor discovers the existence of an exotic sphere in seven dimensions, inaugurating the field of differential topology. 
• 1957 – Kiyosi Itô develops Itô calculus. • 1957 – Stephen Smale provides an existence proof for crease-free sphere eversion. • 1958 – Alexander Grothendieck's proof of the Grothendieck–Riemann–Roch theorem is published. • 1959 – Kenkichi Iwasawa creates Iwasawa theory. • 1960 – Tony Hoare invents the quicksort algorithm. • 1960 – Irving S. Reed and Gustave Solomon present the Reed–Solomon error-correcting code. • 1961 – Daniel Shanks and John Wrench compute π to 100,000 decimal places using an inverse-tangent identity and an IBM-7090 computer. • 1961 – John G. F. Francis and Vera Kublanovskaya independently develop the QR algorithm to calculate the eigenvalues and eigenvectors of a matrix. • 1961 – Stephen Smale proves the Poincaré conjecture for all dimensions greater than or equal to 5. • 1962 – Donald Marquardt proposes the Levenberg–Marquardt nonlinear least squares fitting algorithm. • 1963 – Paul Cohen uses his technique of forcing to show that neither the continuum hypothesis nor the axiom of choice can be proven from the standard axioms of set theory. • 1963 – Martin Kruskal and Norman Zabusky analytically study the Fermi–Pasta–Ulam–Tsingou heat conduction problem in the continuum limit and find that the KdV equation governs this system. • 1963 – Meteorologist and mathematician Edward Norton Lorenz publishes solutions of a simplified mathematical model of atmospheric turbulence, whose chaotic behaviour and strange attractor (the Lorenz attractor) popularize the notion of the butterfly effect. • 1965 – Iranian mathematician Lotfi Asker Zadeh founds fuzzy set theory as an extension of the classical notion of a set, thereby founding the field of fuzzy mathematics. • 1965 – Martin Kruskal and Norman Zabusky numerically study colliding solitary waves in plasmas and find that they do not disperse after collisions. • 1965 – James Cooley and John Tukey present an influential fast Fourier transform algorithm. • 1966 – E. J. Putzer presents two methods for computing the exponential of a matrix in terms of a polynomial in that matrix. • 1966 – Abraham Robinson presents non-standard analysis. • 1967 – Robert Langlands formulates the influential Langlands program of conjectures relating number theory and representation theory. • 1968 – Michael Atiyah and Isadore Singer prove the Atiyah–Singer index theorem about the index of elliptic operators. • 1973 – Lotfi Zadeh founds the field of fuzzy logic. • 1974 – Pierre Deligne solves the last and deepest of the Weil conjectures, completing the program of Grothendieck. • 1975 – Benoit Mandelbrot publishes Les objets fractals, forme, hasard et dimension. • 1976 – Kenneth Appel and Wolfgang Haken use a computer to prove the Four color theorem. • 1981 – Richard Feynman gives an influential talk "Simulating Physics with Computers" (in 1980 Yuri Manin proposed the same idea about quantum computations in "Computable and Uncomputable" (in Russian)). • 1983 – Gerd Faltings proves the Mordell conjecture and thereby shows that there are only finitely many whole number solutions for each exponent of Fermat's Last Theorem. • 1984 – Vaughan Jones discovers the Jones polynomial in knot theory, which leads to other new knot polynomials as well as connections between knot theory and other fields. • 1985 – Louis de Branges de Bourcia proves the Bieberbach conjecture. • 1986 – Ken Ribet proves Ribet's theorem. 
• 1987 – Yasumasa Kanada, David Bailey, Jonathan Borwein, and Peter Borwein use iterative modular equation approximations to elliptic integrals and a NEC SX-2 supercomputer to compute π to 134 million decimal places. • 1991 – Alain Connes and John W. Lott develop non-commutative geometry. • 1992 – David Deutsch and Richard Jozsa develop the Deutsch–Jozsa algorithm, one of the first examples of a quantum algorithm that is exponentially faster than any possible deterministic classical algorithm. • 1994 – Andrew Wiles proves part of the Taniyama–Shimura conjecture and thereby proves Fermat's Last Theorem. • 1994 – Peter Shor formulates Shor's algorithm, a quantum algorithm for integer factorization. • 1995 – Simon Plouffe discovers the Bailey–Borwein–Plouffe formula capable of finding the nth binary digit of π. • 1998 – Thomas Callister Hales (almost certainly) proves the Kepler conjecture. • 1999 – the full Taniyama–Shimura conjecture is proven. • 2000 – the Clay Mathematics Institute proposes the seven Millennium Prize Problems, a set of important unsolved classic mathematical questions. 21st century • 2002 – Manindra Agrawal, Nitin Saxena, and Neeraj Kayal of IIT Kanpur present an unconditional deterministic polynomial time algorithm to determine whether a given number is prime (the AKS primality test). • 2002 – Preda Mihăilescu proves Catalan's conjecture. • 2003 – Grigori Perelman proves the Poincaré conjecture. • 2004 – the classification of finite simple groups, a collaborative work involving some hundred mathematicians and spanning fifty years, is completed. • 2004 – Ben Green and Terence Tao prove the Green–Tao theorem. • 2009 – the fundamental lemma (Langlands program) is proved by Ngô Bảo Châu.[17] • 2010 – Larry Guth and Nets Hawk Katz solve the Erdős distinct distances problem. • 2013 – Yitang Zhang proves the first finite bound on gaps between prime numbers.[18] • 2014 – Project Flyspeck[19] announces that it completed a proof of Kepler's conjecture.[20][21][22][23] • 2015 – Terence Tao solves the Erdős discrepancy problem. • 2015 – László Babai presents a quasipolynomial-time algorithm for the graph isomorphism problem. • 2016 – Maryna Viazovska solves the sphere packing problem in dimension 8. Subsequent work building on this leads to a solution for dimension 24. See also • History of mathematical notation – covers rhetorical, syncopated and symbolic notation • Timeline of ancient Greek mathematicians – Timeline and summary of ancient Greek mathematicians and their discoveries • Timeline of mathematical innovation in South and West Asia • Timeline of mathematical logic • Timeline of women in mathematics • Timeline of women in mathematics in the United States References 1. Art Prehistory, Sean Henahan, January 10, 2002. Archived July 19, 2008, at the Wayback Machine 2. How Menstruation Created Mathematics, Tacoma Community College, (archive link). 3. "OLDEST Mathematical Object is in Swaziland". Retrieved March 15, 2015. 4. "an old Mathematical Object". Retrieved March 15, 2015. 5. "Egyptian Mathematical Papyri - Mathematicians of the African Diaspora". Retrieved March 15, 2015. 6. Biggs, Norman; Keith Lloyd; Robin Wilson (1995). "44". In Ronald Graham; Martin Grötschel; László Lovász (eds.). Handbook of Combinatorics (Google book). MIT Press. pp. 2163–2188. ISBN 0-262-57172-2. Retrieved March 8, 2008. 7. Carl B. Boyer, A History of Mathematics, 2nd Ed. 8. Corsi, Pietro; Weindling, Paul (1983). Information sources in the history of science and medicine. Butterworth Scientific. 
ISBN 9780408107648. Retrieved July 6, 2014. 9. Victor J. Katz (1998). History of Mathematics: An Introduction, p. 255–259. Addison-Wesley. ISBN 0-321-01618-1. 10. F. Woepcke (1853). Extrait du Fakhri, traité d'Algèbre par Abou Bekr Mohammed Ben Alhacan Alkarkhi. Paris. 11. O'Connor, John J.; Robertson, Edmund F., "Abu l'Hasan Ali ibn Ahmad Al-Nasawi", MacTutor History of Mathematics Archive, University of St Andrews 12. Arabic mathematics, MacTutor History of Mathematics archive, University of St Andrews, Scotland 13. Various AP Lists and Statistics Archived July 28, 2012, at the Wayback Machine 14. D'Alembert (1747) "Recherches sur la courbe que forme une corde tenduë mise en vibration" (Researches on the curve that a tense cord [string] forms [when] set into vibration), Histoire de l'académie royale des sciences et belles lettres de Berlin, vol. 3, pages 214-219. 15. "Sophie Germain and FLT". 16. Paul Benacerraf and Hilary Putnam, Cambridge University Press, Philosophy of Mathematics: Selected Readings, ISBN 0-521-29648-X 17. Laumon, G.; Ngô, B. C. (2004), Le lemme fondamental pour les groupes unitaires, arXiv:math/0404454, Bibcode:2004math......4454L 18. "UNH Mathematician's Proof Is Breakthrough Toward Centuries-Old Problem". University of New Hampshire. May 1, 2013. Retrieved May 20, 2013. 19. Announcement of Completion. Project Flyspeck, Google Code. 20. Team announces construction of a formal computer-verified proof of the Kepler conjecture. August 13, 2014 by Bob Yirk. 21. Proof confirmed of 400-year-old fruit-stacking problem, 12 August 2014; New Scientist. 22. A formal proof of the Kepler conjecture, arXiv. 23. Solved: 400-Year-Old Maths Theory Finally Proven. Sky News, 16:39, UK, Tuesday 12 August 2014. • David Eugene Smith, 1929 and 1959, A Source Book in Mathematics, Dover Publications. ISBN 0-486-64690-4. External links • O'Connor, John J.; Robertson, Edmund F., "A Mathematical Chronology", MacTutor History of Mathematics Archive, University of St Andrews
Timeline of probability and statistics The following is a timeline of probability and statistics. Before 1600 • 8th century – Al-Khalil, an Arab mathematician studying cryptology, wrote the Book of Cryptographic Messages. The work has been lost, but based on the reports of later authors, it contained the first use of permutations and combinations to list all possible Arabic words with and without vowels.[1] • 9th century – Al-Kindi was the first to use frequency analysis to decipher encrypted messages and developed the first code-breaking algorithm. He wrote a book entitled Manuscript on Deciphering Cryptographic Messages, containing detailed discussions on statistics and cryptanalysis.[2][3][4] Al-Kindi also made the earliest known use of statistical inference.[1] • 13th century – An important contribution of Ibn Adlan was on sample size for the use of frequency analysis.[1] • 13th century – the first known calculation of the probability for throwing 3 dice is published in the Latin poem De vetula. • 1560s (published 1663) – Cardano's Liber de ludo aleae attempts to calculate probabilities of dice throws. He demonstrates the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes[5]). • 1577 – Bartolomé de Medina defends probabilism, the view that in ethics one may follow a probable opinion even if the opposite is more probable. 17th century • 1654 – Pascal and Fermat create the mathematical theory of probability, • 1657 – Huygens's De ratiociniis in ludo aleae is the first book on mathematical probability, • 1662 – Graunt's Natural and Political Observations Made upon the Bills of Mortality makes inferences from statistical data on deaths in London, • 1666 – In Le Journal des Sçavans xxxi, 2 August 1666 (359–370(=364)) appears a review of the third edition (1665) of John Graunt's Observations on the Bills of Mortality. This review gives a summary of 'plusieurs reflexions curieuses' ('several curious reflections'), the second of which concerns Graunt's data on life expectancy. This review is used by Nicolaus Bernoulli in his De Usu Artis Conjectandi in Jure (1709). • 1669 – Christiaan Huygens and his brother Lodewijk discuss Graunt's mortality table between August and December of that year (Graunt 1662, p. 
62) in letters #1755 • 1693 – Halley prepares the first mortality tables statistically relating death rate to age, 18th century • 1710 – Arbuthnot argues that the constancy of the ratio of male to female births is a sign of divine providence, • 1713 – Posthumous publication of Jacob Bernoulli's Ars Conjectandi, containing the first derivation of a law of large numbers, • 1724 – Abraham de Moivre studies mortality statistics and the foundation of the theory of annuities in Annuities upon Lives, • 1733 – Abraham de Moivre introduces the normal distribution to approximate the binomial distribution in probability, • 1739 – Hume's Treatise of Human Nature argues that inductive reasoning is unjustified, • 1761 – Thomas Bayes proves Bayes' theorem, • 1786 – Playfair's Commercial and Political Atlas introduces graphs and bar charts of data, 19th century • 1801 – Gauss predicts the orbit of Ceres using a line of best fit • 1805 – Adrien-Marie Legendre introduces the method of least squares for fitting a curve to a given set of observations, • 1814 – Laplace's Essai philosophique sur les probabilités defends a definition of probabilities in terms of equally possible cases, introduces generating functions and Laplace transforms, uses conjugate priors for exponential families, proves an early version of the Bernstein–von Mises theorem on the asymptotic irrelevance of prior distributions on the limiting posterior distribution and the role of the Fisher information on asymptotically normal posterior modes. • 1835 – Quetelet's Treatise on Man introduces social science statistics and the concept of the "average man", • 1866 – Venn's Logic of Chance defends the frequency interpretation of probability. • 1877–1883 – Charles Sanders Peirce outlines frequentist statistics, emphasizing the use of objective randomization in experiments and in sampling. Peirce also invented an optimally designed experiment for regression. • 1880 – Thiele gives a mathematical analysis of Brownian motion, introduces the likelihood function, and invents cumulants. • 1888 – Galton introduces the concept of correlation, • 1900 – Bachelier analyzes stock price movements as a stochastic process, 20th century • 1908 – Student's t-distribution for the mean of small samples published in English (following earlier derivations in German). • 1913 – Michel Plancherel states fundamental results in Ergodic theory. • 1920 – The central limit theorem in its modern form was formally stated. • 1921 – Keynes' Treatise on Probability defends a logical interpretation of probability. Wright develops path analysis.[6] • 1928 – Tippett and Fisher introduce extreme value theory, • 1933 – Andrey Nikolaevich Kolmogorov publishes his book Basic notions of the calculus of probability (Grundbegriffe der Wahrscheinlichkeitsrechnung) which contains an axiomatization of probability based on measure theory, • 1935 – R. A. 
Fisher's Design of Experiments (1st ed), • 1937 – Neyman introduces the concept of confidence interval in statistical testing, • 1941 – Due to World War II, research on detection theory starts, leading to the receiver operating characteristic, • 1946 – Cox's theorem derives the axioms of probability from simple logical assumptions, • 1948 – Shannon's Mathematical Theory of Communication defines the capacity of communication channels in terms of probabilities, • 1953 – Nicholas Metropolis introduces the idea of thermodynamic simulated annealing methods See also • Founders of statistics • List of important publications in statistics • History of probability • History of statistics References 1. Broemeling, Lyle D. (1 November 2011). "An Account of Early Statistical Inference in Arab Cryptology". The American Statistician. 65 (4): 255–257. doi:10.1198/tas.2011.10191. 2. Singh, Simon (2000). The code book: the science of secrecy from ancient Egypt to quantum cryptography (1st Anchor Books ed.). New York: Anchor Books. ISBN 0-385-49532-3. 3. Singh, Simon (2000). The code book: the science of secrecy from ancient Egypt to quantum cryptography (1st Anchor Books ed.). New York: Anchor Books. ISBN 978-0-385-49532-5. 4. Ibrahim A. Al-Kadi, "The origins of cryptology: The Arab contributions", Cryptologia, 16(2) (April 1992) pp. 97–126. 5. Gorrochum, P. (2012), "Some laws and problems in classical probability and how Cardano anticipated them", Chance magazine. 6. Wright, Sewall (1921). "Correlation and causation". Journal of Agricultural Research. 20 (7): 557–585. Further reading • Kees Verduin (2007), A Short History of Probability and Statistics • John Aldrich (2008), Figures from the History of Probability and Statistics • John Aldrich (2008), Probability and Statistics on the Earliest Uses Pages • Michael Friendly and Daniel J. Denis (2008). "Milestones in the History of Thematic Cartography, Statistical Graphics, and Data Visualization: An illustrated chronology of innovations". 
Inverse second The inverse second or reciprocal second (s−1), also called per second, is a unit defined as the multiplicative inverse of the second (a unit of time). It is applicable to physical quantities of dimension reciprocal time, such as frequency and strain rate. It is dimensionally equivalent to: • hertz (Hz), historically known as cycles per second – the SI unit for frequency and rotational frequency • becquerel (Bq) – the SI unit for the rate of occurrence of aperiodic or stochastic radionuclide events • baud (Bd) – the unit for symbol rate over a communication link • bits per second (bit/s) – the unit of bit rate However, the special names and symbols above for s−1 are recommended for clarity.[note 1][note 2] Reciprocal second should not be confused with radian per second (rad⋅s−1), the SI unit for angular frequency and angular velocity. As the radian is a dimensionless unit, radian per second is dimensionally consistent with reciprocal second. However, the two are used for different kinds of quantity (frequency and angular frequency), whose numerical values differ by a factor of 2π. The inverse minute or reciprocal minute (min−1), also called per minute, is 60−1 s−1, as 1 min = 60 s; it is used in quantities of type "counts per minute", such as: • Actions per minute • Beats per minute • Counts per minute • Revolutions per minute (rpm) • Words per minute See also • Aperiodic frequency • Inverse metre • Reciprocal length • Unit of time Notes 1. "The SI unit of frequency is given as the hertz, implying the unit cycles per second; the SI unit of angular velocity is given as the radian per second; and the SI unit of activity is designated the becquerel, implying the unit counts per second. Although it would be formally correct to write all three of these units as the reciprocal second, the use of the different names emphasises the different nature of the quantities concerned."[1] 2. "(d) The hertz is used only for periodic phenomena, and the becquerel (Bq) is used only for stochastic processes in activity referred to a radionuclide."[2] References 1. "Units with special names and symbols; units that incorporate special names and symbols". 2. "BIPM - Table 3". BIPM. Retrieved 2012-10-24.
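As a minimal illustrative sketch of the numerical relationships described above (assuming Python; the function names and the 3000 rpm example are purely illustrative), ordinary frequency in s−1 relates to angular frequency by a factor of 2π, and per-minute rates relate to per-second rates by a factor of 60:

```python
import math

def angular_frequency(frequency_hz: float) -> float:
    """Convert an ordinary frequency in Hz (s^-1) to angular frequency in rad/s."""
    return 2 * math.pi * frequency_hz

def per_minute_to_per_second(rate_per_minute: float) -> float:
    """Convert a rate expressed per minute (min^-1) to per second (s^-1), since 1 min = 60 s."""
    return rate_per_minute / 60.0

# Example: a rotation rate of 3000 rpm expressed in s^-1 and in rad/s.
rpm = 3000.0
rotations_per_second = per_minute_to_per_second(rpm)   # 50.0 s^-1
omega = angular_frequency(rotations_per_second)        # about 314.16 rad/s
print(rotations_per_second, omega)
```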
Multiplication sign The multiplication sign, also known as the times sign or the dimension sign, is the symbol ×, used in mathematics to denote the multiplication operation and its resulting product.[1] While similar to a lowercase X (x), the form is properly a four-fold rotationally symmetric saltire.[2] (Unicode: U+00D7 × MULTIPLICATION SIGN, &times;; different from U+0078 x LATIN SMALL LETTER X; related: U+22C5 ⋅ DOT OPERATOR and U+00F7 ÷ DIVISION SIGN.) History The earliest known use of the × symbol to represent multiplication appears in an anonymous appendix to the 1618 edition of John Napier's Mirifici Logarithmorum Canonis Descriptio.[3] This appendix has been attributed to William Oughtred,[3] who used the same symbol in his 1631 algebra text, Clavis Mathematicae, stating: "Multiplication of species [i.e. unknowns] connects both proposed magnitudes with the symbol 'in' or ×: or ordinarily without the symbol if the magnitudes be denoted with one letter."[4] Two earlier uses of a ✕ notation have been identified, but do not stand critical examination.[3] Uses In mathematics, the symbol × has a number of uses, including • Multiplication of two numbers, where it is read as "times" or "multiplied by"[1] • Cross product of two vectors, where it is usually read as "cross" • Cartesian product of two sets, where it is usually read as "cross"[5] • Geometric dimension of an object, such as noting that a room is 10 feet × 12 feet in area, where it is usually read as "by" (e.g., "10 feet by 12 feet") • Screen resolution in pixels, such as 1920 pixels across × 1080 pixels down, where it is read as "by" • Dimensions of a matrix, where it is usually read as "by" • A statistical interaction between two explanatory variables, where it is usually read as "by" In biology, the multiplication sign is used in a botanical hybrid name, for instance Ceanothus papillosus × impressus (a hybrid between C. papillosus and C. impressus) or Crocosmia × crocosmiiflora (a hybrid between two other species of Crocosmia). However, these hybrid names are commonly written with the Latin letter "x" when the actual "×" symbol is not readily available. The multiplication sign is also used by historians for an event between two dates. When employed between two dates – for example 1225 and 1232 – the expression "1225×1232" means "no earlier than 1225 and no later than 1232".[6] A monadic × symbol is used by the APL programming language to denote the sign function. Similar notations Main article: Multiplication: Notation and terminology The lower-case Latin letter x is sometimes used in place of the multiplication sign. This is considered incorrect in mathematical writing. In algebraic notation, widely used in mathematics, a multiplication symbol is usually omitted wherever it would not cause confusion: "a multiplied by b" can be written as ab or a b.[1] Other symbols can also be used to denote multiplication, often to reduce confusion between the multiplication sign × and the common variable x. In some countries, such as Germany, the primary symbol for multiplication is the "dot operator" ⋅ (as in a⋅b). This symbol is also used in compound units of measurement, e.g., N⋅m (see International System of Units#Lexicographic conventions). In algebra, it is a notation to resolve ambiguity (for instance, "b times 2" may be written as b⋅2, to avoid being confused with a value called b2). 
This notation is used wherever multiplication should be written explicitly, such as in "ab = a⋅2 for b = 2"; this usage is also seen in English-language texts. In some languages, the use of a full stop as a multiplication symbol, such as a.b, is common when the symbol for the decimal point is a comma. Historically, computer language syntax was restricted to the ASCII character set, and the asterisk * became the de facto symbol for the multiplication operator. This selection is reflected in the numeric keypad on English-language keyboards, where the arithmetic operations of addition, subtraction, multiplication and division are represented by the keys +, -, * and /, respectively. Typing the character
• HTML, SGML, XML: &times; or &#215;
• macOS: in the Character Palette, by searching for MULTIPLICATION SIGN[7]
• Microsoft Windows: via the Emoji and Symbol input panel, invoked with the ⊞ Win+. key combination (Windows 10 version 1803 and later); via the Touch Keyboard component of the Taskbar (Windows 10 and later); as an explicit keytop on some non-English keyboard layouts, such as the Arabic keyboard; with Alt+= on the US International keyboard layout; via the Character Map utility (in the eighth row, or by searching); or with the Alt+0215 key combination on the numeric keypad[8]
• OpenOffice.org: times
• TeX: \times
• Unix-like (Linux, ChromeOS): Ctrl+⇧ Shift+U D7; Compose X X; or AltGr+⇧ Shift+, (UK extended layout)
Unicode and HTML entities • U+00D7 × MULTIPLICATION SIGN (&times;) Other variants and related characters: • U+002A * ASTERISK (&ast;, &midast;) • U+2062 INVISIBLE TIMES (&InvisibleTimes;, &it;) (a zero-width space indicating multiplication) • U+00B7 · MIDDLE DOT (&middot;, &CenterDot;, &centerdot;) (the interpunct, may be easier to type than the dot operator) • U+2297 ⊗ CIRCLED TIMES (&CircleTimes;, &otimes;) • U+22C5 ⋅ DOT OPERATOR (&sdot;) • U+2715 ✕ MULTIPLICATION X • U+2716 ✖ HEAVY MULTIPLICATION X • U+2A09 ⨉ N-ARY TIMES OPERATOR • U+2A2F ⨯ VECTOR OR CROSS PRODUCT (&Cross;) (intended to explicitly denote the cross product of two vectors) • U+2A30 ⨰ MULTIPLICATION SIGN WITH DOT ABOVE (&timesd;) • U+2A31 ⨱ MULTIPLICATION SIGN WITH UNDERBAR (&timesbar;) • U+2A34 ⨴ MULTIPLICATION SIGN IN LEFT HALF CIRCLE (&lotimes;) • U+2A35 ⨵ MULTIPLICATION SIGN IN RIGHT HALF CIRCLE (&rotimes;) • U+2A36 ⨶ CIRCLED MULTIPLICATION SIGN WITH CIRCUMFLEX ACCENT (&otimesas;) • U+2A37 ⨷ MULTIPLICATION SIGN IN DOUBLE CIRCLE (&Otimes;) • U+2A3B ⨻ MULTIPLICATION SIGN IN TRIANGLE (&tritime;) • U+2AC1 ⫁ SUBSET WITH MULTIPLICATION SIGN BELOW (&submult;) • U+2AC2 ⫂ SUPERSET WITH MULTIPLICATION SIGN BELOW (&supmult;) See also • Division sign • List of mathematical symbols • Plus and minus signs • Reference mark • X mark References 1. Weisstein, Eric W. "Multiplication". mathworld.wolfram.com. Retrieved 2020-08-26. 2. Stallings, L. (2000). "A Brief History of Algebraic Notation". School Science and Mathematics. 100 (5): 230–235. doi:10.1111/j.1949-8594.2000.tb17262.x. ISSN 0036-6803. 3. Cajori, Florian (1928). A History of Mathematical Notations, Volume I: Notations in Elementary Mathematics. Open Court. pp. 251–252. 4. William Oughtred (1667). Clavis Mathematicae. p. 10. Multiplicatio speciosa connectit utramque magintudinem propositam cum notâ in vel ×: vel plerumque absque notâ, si magnitudines denotentur unica litera 5. Nykamp, Duane. "Cartesian product definition". Math Insight. Retrieved August 26, 2020. 6. New Hart's rules: the handbook of style for writers and editors, Oxford University Press, 2005, p. 183, ISBN 978-0-19-861041-0 7. "Mac Zeichenpalette" (in German). TypoWiki. Archived from the original on 2007-10-25. Retrieved 2009-10-09. 8. "Unicode Character 'MULTIPLICATION SIGN' (U+00D7)". Fileformat.info. Retrieved 2017-01-13. External links • "Letter Database". Eki.ee. Retrieved 2017-01-13. • "Unicode Character 'MULTIPLICATION SIGN' (U+00D7)". Fileformat.info. Retrieved 2017-01-13. • "Unicode Character 'VECTOR OR CROSS PRODUCT' (U+2A2F)". Fileformat.info. Retrieved 2017-01-13.
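Since the multiplication sign is an ordinary Unicode code point (U+00D7), it can also be produced programmatically rather than typed. A minimal sketch (assuming Python; only the standard html and unicodedata modules are used) showing the character obtained from the code point and from the HTML entities listed above:

```python
import html
import unicodedata

# The multiplication sign by its Unicode code point, U+00D7.
times = "\u00d7"
print(times)                          # ×
print(unicodedata.name(times))        # MULTIPLICATION SIGN

# The same character obtained from its HTML entities.
print(html.unescape("&times;"))       # ×
print(html.unescape("&#215;"))        # ×
```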
Multiplication table In mathematics, a multiplication table (sometimes, less formally, a times table) is a mathematical table used to define a multiplication operation for an algebraic system. The decimal multiplication table was traditionally taught as an essential part of elementary arithmetic around the world, as it lays the foundation for arithmetic operations with base-ten numbers. Many educators believe it is necessary to memorize the table up to 9 × 9.[1] History Pre-modern times The oldest known multiplication tables were used by the Babylonians about 4000 years ago.[2] However, they used a base of 60.[2] The oldest known tables using a base of 10 are the Chinese decimal multiplication table on bamboo strips dating to about 305 BC, during China's Warring States period.[2] The multiplication table is sometimes attributed to the ancient Greek mathematician Pythagoras (570–495 BC). It is also called the Table of Pythagoras in many languages (for example French, Italian and Russian), sometimes in English.[4] The Greco-Roman mathematician Nicomachus (60–120 AD), a follower of Neopythagoreanism, included a multiplication table in his Introduction to Arithmetic, whereas the oldest surviving Greek multiplication table is on a wax tablet dated to the 1st century AD and currently housed in the British Museum.[5] In 493 AD, Victorius of Aquitaine wrote a 98-column multiplication table which gave (in Roman numerals) the product of every number from 2 to 50 times the rows, which were "a list of numbers starting with one thousand, descending by hundreds to one hundred, then descending by tens to ten, then by ones to one, and then the fractions down to 1/144."[6] Modern times In his 1820 book The Philosophy of Arithmetic,[7] mathematician John Leslie published a multiplication table up to 99 × 99, which allows numbers to be multiplied in pairs of digits at a time. Leslie also recommended that young pupils memorize the multiplication table up to 50 × 50. The illustration below shows a table up to 12 × 12, which is a size commonly used nowadays in English-speaking schools.
×  | 0  1  2  3  4  5  6  7  8  9  10  11  12
0  | 0  0  0  0  0  0  0  0  0  0  0  0  0
1  | 0  1  2  3  4  5  6  7  8  9  10  11  12
2  | 0  2  4  6  8  10  12  14  16  18  20  22  24
3  | 0  3  6  9  12  15  18  21  24  27  30  33  36
4  | 0  4  8  12  16  20  24  28  32  36  40  44  48
5  | 0  5  10  15  20  25  30  35  40  45  50  55  60
6  | 0  6  12  18  24  30  36  42  48  54  60  66  72
7  | 0  7  14  21  28  35  42  49  56  63  70  77  84
8  | 0  8  16  24  32  40  48  56  64  72  80  88  96
9  | 0  9  18  27  36  45  54  63  72  81  90  99  108
10 | 0  10  20  30  40  50  60  70  80  90  100  110  120
11 | 0  11  22  33  44  55  66  77  88  99  110  121  132
12 | 0  12  24  36  48  60  72  84  96  108  120  132  144
In China, however, because multiplication of integers is commutative, many schools use a smaller table as below. Some schools even remove the first column since 1 is the multiplicative identity.
1 | 1
2 | 2  4
3 | 3  6  9
4 | 4  8  12  16
5 | 5  10  15  20  25
6 | 6  12  18  24  30  36
7 | 7  14  21  28  35  42  49
8 | 8  16  24  32  40  48  56  64
9 | 9  18  27  36  45  54  63  72  81
× | 1  2  3  4  5  6  7  8  9
The traditional rote learning of multiplication was based on memorization of columns in the table, in a form like
1 × 10 = 10
2 × 10 = 20
3 × 10 = 30
4 × 10 = 40
5 × 10 = 50
6 × 10 = 60
7 × 10 = 70
8 × 10 = 80
9 × 10 = 90
This form of writing the multiplication table in columns with complete number sentences is still used in some countries, such as Bosnia and Herzegovina, instead of the modern grids above. Patterns in the tables There is a pattern in the multiplication table that can help people to memorize the table more easily. It uses the figures below:
[Figure 1 ("Odd"): the digits 1–9 arranged in a 3 × 3 grid with 0 beneath, linked by arrows. Figure 2 ("Even"): the digits 2, 4, 6 and 8 with 5 and 0, linked by arrows.]
Figure 1 is used for multiples of 1, 3, 7, and 9. 
Figure 2 is used for the multiples of 2, 4, 6, and 8. These patterns can be used to memorize the multiples of any number from 0 to 10, except 5. Since you start on the number you are multiplying, when you multiply by 0 you stay on 0 (0 is external, so the arrows have no effect on it; otherwise 0 is used as a link to create a perpetual cycle). The pattern also works with multiples of 10, by starting at 1 and simply adding 0, giving you 10, then applying every number in the pattern to the "tens" digit just as you normally would to the "ones" digit. For example, to recall all the multiples of 7: 1. Look at the 7 in the first picture and follow the arrow. 2. The next number in the direction of the arrow is 4. So think of the next number after 7 that ends with 4, which is 14. 3. The next number in the direction of the arrow is 1. So think of the next number after 14 that ends with 1, which is 21. 4. After coming to the top of this column, start with the bottom of the next column, and travel in the same direction. The number is 8. So think of the next number after 21 that ends with 8, which is 28. 5. Proceed in the same way until the last number, 3, corresponding to 63. 6. Next, use the 0 at the bottom. It corresponds to 70. 7. Then, start again with the 7. This time it will correspond to 77. 8. Continue like this. In abstract algebra Tables can also define binary operations on groups, fields, rings, and other algebraic systems. In such contexts they are called Cayley tables. Here are the addition and multiplication tables for the finite field Z5:
+ | 0  1  2  3  4
0 | 0  1  2  3  4
1 | 1  2  3  4  0
2 | 2  3  4  0  1
3 | 3  4  0  1  2
4 | 4  0  1  2  3
× | 0  1  2  3  4
0 | 0  0  0  0  0
1 | 0  1  2  3  4
2 | 0  2  4  1  3
3 | 0  3  1  4  2
4 | 0  4  3  2  1
For every natural number n, there are also addition and multiplication tables for the ring Zn. For other examples, see group and octonion. Chinese and Japanese multiplication tables Main article: Chinese multiplication table Mokkan (wooden tablets) discovered at Heijō Palace suggest that the multiplication table may have been introduced to Japan through Chinese mathematical treatises such as the Sunzi Suanjing, because their expression of the multiplication table shares the character 如 in products less than ten.[8] Chinese and Japanese share a similar system of eighty-one short, easily memorable sentences taught to students to help them learn the multiplication table up to 9 × 9. In current usage, the sentences that express products less than ten include an additional particle in both languages. In the case of modern Chinese, this is 得 (dé); and in Japanese, this is が (ga). This is useful for those who practice calculation with a suanpan or a soroban, because the sentences remind them to shift one column to the right when inputting a product that does not begin with a tens digit. In particular, the Japanese multiplication table uses non-standard pronunciations for numbers in some specific instances (such as the replacement of san roku with saburoku). 
The Japanese multiplication table × 1 ichi 2 ni 3 san 4 shi 5 go 6 roku 7 shichi 8 ha 9 ku 1 in in'ichi ga ichi inni ga ni insan ga san inshi ga shi ingo ga go inroku ga roku inshichi ga shichi inhachi ga hachi inku ga ku 2 ni ni ichi ga ni ni nin ga shi ni san ga roku ni shi ga hachi ni go jū ni roku jūni ni shichi jūshi ni hachi jūroku ni ku jūhachi 3 san san ichi ga san san ni ga roku sazan ga ku san shi jūni san go jūgo saburoku jūhachi san shichi nijūichi sanpa nijūshi san ku nijūshichi 4 shi shi ichi ga shi shi ni ga hachi shi san jūni shi shi jūroku shi go nijū shi roku nijūshi shi shichi nijūhachi shi ha sanjūni shi ku sanjūroku 5 go go ichi ga go go ni jū go san jūgo go shi nijū go go nijūgo go roku sanjū go shichi sanjūgo go ha shijū gokku shijūgo 6 roku roku ichi ga roku roku ni nijū roku san jūhachi roku shi nijūshi roku go sanjū roku roku sanjūroku roku shichi shijūni roku ha shijūhachi rokku gojūshi 7 shichi shichi ichi ga shichi shichi ni jūshi shichi san nijūichi shichi shi nijūhachi shichi go sanjūgo shichi roku shijūni shichi shichi shijūku shichi ha gojūroku shichi ku rokujūsan 8 hachi hachi ichi ga hachi hachi ni jūroku hachi san nijūshi hachi shi sanjūni hachi go shijū hachi roku shijūhachi hachi shichi gojūroku happa rokujūshi hakku shichijūni 9 ku ku ichi ga ku ku ni jūhachi ku san nijūshichi ku shi sanjūroku ku go shijūgo ku roku gojūshi ku shichi rokujūsan ku ha shichijūni ku ku hachijūichi Warring States decimal multiplication bamboo slips A bundle of 21 bamboo slips dated 305 BC in the Warring States period in the Tsinghua Bamboo Slips (清華簡) collection is the world's earliest known example of a decimal multiplication table.[9] A modern representation of the Warring States decimal multiplication table used to calculate 12 × 34.5 Standards-based mathematics reform in the US In 1989, the National Council of Teachers of Mathematics (NCTM) developed new standards which were based on the belief that all students should learn higher-order thinking skills, which recommended reduced emphasis on the teaching of traditional methods that relied on rote memorization, such as multiplication tables. Widely adopted texts such as Investigations in Numbers, Data, and Space (widely known as TERC after its producer, Technical Education Research Centers) omitted aids such as multiplication tables in early editions. NCTM made it clear in their 2006 Focal Points that basic mathematics facts must be learned, though there is no consensus on whether rote memorization is the best method. In recent years, a number of nontraditional methods have been devised to help children learn multiplication facts, including video-game style apps and books that aim to teach times tables through character-based stories. See also • Vedic square • IBM 1620, an early computer that used tables stored in memory to perform addition and multiplication References 1. Trivett, John (1980), "The Multiplication Table: To Be Memorized or Mastered!", For the Learning of Mathematics, 1 (1): 21–25, JSTOR 40247697. 2. Qiu, Jane (January 7, 2014). "Ancient times table hidden in Chinese bamboo strips". Nature News. doi:10.1038/nature.2014.14482. S2CID 130132289. 3. Wikisource:Page:Popular Science Monthly Volume 26.djvu/467 4. for example in An Elementary Treatise on Arithmetic by John Farrar 5. David E. Smith (1958), History of Mathematics, Volume I: General Survey of the History of Elementary Mathematics. New York: Dover Publications (a reprint of the 1951 publication), ISBN 0-486-20429-4, pp. 58, 129. 6. David W. 
7. Leslie, John (1820). The Philosophy of Arithmetic; Exhibiting a Progressive View of the Theory and Practice of Calculation, with Tables for the Multiplication of Numbers as Far as One Thousand. Edinburgh: Abernethy & Walker.
8. "「九九」は中国伝来…平城宮跡から木簡出土". Yomiuri Shimbun. December 4, 2010. Archived from the original on December 7, 2010.
9. Nature article: "The 2,300-year-old matrix is the world's oldest decimal multiplication table" (1 × 1 to 23 × 23).
Time series

In mathematics, a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average. A time series is very frequently plotted via a run chart (which is a temporal line chart). Time series are used in statistics, signal processing, pattern recognition, econometrics, mathematical finance, weather forecasting, earthquake prediction, electroencephalography, control engineering, astronomy, communications engineering, and largely in any domain of applied science and engineering which involves temporal measurements.

Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called "time series analysis", which refers in particular to relationships between different points in time within a single series. Time series data have a natural temporal ordering. This makes time series analysis distinct from cross-sectional studies, in which there is no natural ordering of the observations (e.g. explaining people's wages by reference to their respective education levels, where the individuals' data could be entered in any order). Time series analysis is also distinct from spatial data analysis, where the observations typically relate to geographical locations (e.g. accounting for house prices by the location as well as the intrinsic characteristics of the houses). A stochastic model for a time series will generally reflect the fact that observations close together in time will be more closely related than observations further apart. In addition, time series models will often make use of the natural one-way ordering of time, so that values for a given period will be expressed as deriving in some way from past values rather than from future values (see time reversibility). Time series analysis can be applied to real-valued, continuous data, discrete numeric data, or discrete symbolic data (i.e. sequences of characters, such as letters and words in the English language[1]).

Methods for analysis

Methods for time series analysis may be divided into two classes: frequency-domain methods and time-domain methods. The former include spectral analysis and wavelet analysis; the latter include auto-correlation and cross-correlation analysis. In the time domain, correlation analysis can be performed in a filter-like manner using scaled correlation, thereby mitigating the need to operate in the frequency domain. Additionally, time series analysis techniques may be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary stochastic process has a certain structure which can be described using a small number of parameters (for example, using an autoregressive or moving average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process.
By contrast, non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure. Methods of time series analysis may also be divided into linear and non-linear, and univariate and multivariate.

Panel data

A time series is one type of panel data. Panel data is the general class, a multidimensional data set, whereas a time series data set is a one-dimensional panel (as is a cross-sectional data set). A data set may exhibit characteristics of both panel data and time series data. One way to tell is to ask what makes one data record unique from the other records. If the answer is the time data field, then this is a time series data set candidate. If determining a unique record requires a time data field and an additional identifier which is unrelated to time (e.g. student ID, stock symbol, country code), then it is a panel data candidate. If the differentiation lies in the non-time identifier, then the data set is a cross-sectional data set candidate.

Analysis

There are several types of motivation and data analysis available for time series which are appropriate for different purposes.

Motivation

In the context of statistics, econometrics, quantitative finance, seismology, meteorology, and geophysics the primary goal of time series analysis is forecasting. In the context of signal processing, control engineering and communication engineering it is used for signal detection. Other applications are in data mining, pattern recognition and machine learning, where time series analysis can be used for clustering,[2][3] classification,[4] query by content,[5] anomaly detection as well as forecasting.[6]

Exploratory analysis

A straightforward way to examine a regular time series is manually with a line chart. An example chart is shown on the right for tuberculosis incidence in the United States, made with a spreadsheet program. The number of cases was standardized to a rate per 100,000 and the percent change per year in this rate was calculated. The nearly steadily dropping line shows that the TB incidence was decreasing in most years, but the percent change in this rate varied by as much as ±10%, with 'surges' in 1975 and around the early 1990s. The use of both vertical axes allows the comparison of two time series in one graphic. A study of corporate data analysts found two challenges to exploratory time series analysis: discovering the shape of interesting patterns, and finding an explanation for these patterns.[7] Visual tools that represent time series data as heat map matrices can help overcome these challenges.

Other techniques include:
• Autocorrelation analysis to examine serial dependence (a short computational sketch follows this list)
• Spectral analysis to examine cyclic behavior which need not be related to seasonality. For example, sunspot activity varies over 11-year cycles.[8][9] Other common examples include celestial phenomena, weather patterns, neural activity, commodity prices, and economic activity.
• Separation into components representing trend, seasonality, slow and fast variation, and cyclical irregularity: see trend estimation and decomposition of time series
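As a minimal illustration of the autocorrelation technique listed above, the sample autocorrelation function can be computed directly from the series. This sketch is not part of the original article; the toy monthly series and the helper name sample_acf are invented for the example, and NumPy is assumed to be available.

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation of a 1-D series for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom for k in range(max_lag + 1)])

# Toy example: a noisy annual cycle observed monthly for 20 years.
rng = np.random.default_rng(0)
t = np.arange(240)
series = np.sin(2 * np.pi * t / 12) + 0.5 * rng.standard_normal(t.size)

print(np.round(sample_acf(series, 24), 2))  # pronounced peaks near lags 12 and 24 flag the cycle
```

A slowly decaying autocorrelation function would instead point to trend or strong persistence rather than seasonality.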
Curve fitting

Main article: Curve fitting

Curve fitting[10][11] is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points,[12] possibly subject to constraints.[13][14] Curve fitting can involve either interpolation,[15][16] where an exact fit to the data is required, or smoothing,[17][18] in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis,[19][20] which focuses more on questions of statistical inference, such as how much uncertainty is present in a curve that is fit to data observed with random errors. Fitted curves can be used as an aid for data visualization,[21][22] to infer values of a function where no data are available,[23] and to summarize the relationships among two or more variables.[24] Extrapolation refers to the use of a fitted curve beyond the range of the observed data,[25] and is subject to a degree of uncertainty[26] since it may reflect the method used to construct the curve as much as it reflects the observed data. For processes that are expected to grow in magnitude, one of the curves in the graphic at right (and many others) can be fitted by estimating their parameters.

The construction of economic time series involves the estimation of some components for some dates by interpolation between values ("benchmarks") for earlier and later dates. Interpolation is estimation of an unknown quantity between two known quantities (historical data), or drawing conclusions about missing information from the available information ("reading between the lines").[27] Interpolation is useful where the data surrounding the missing data are available and their trend, seasonality, and longer-term cycles are known. This is often done by using a related series known for all relevant dates.[28] Alternatively, polynomial interpolation or spline interpolation is used, where piecewise polynomial functions are fitted to time intervals such that they fit smoothly together. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression). The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set, whereas spline interpolation yields a piecewise continuous function composed of many polynomials that model the data set. Extrapolation is the process of estimating, beyond the original observation range, the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results.
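The contrast drawn above between interpolation (exact at the observed benchmarks) and regression (a single smooth approximation) can be made concrete with a short sketch. This example is purely illustrative and not from the article: the benchmark values are invented, and SciPy's CubicSpline together with NumPy's polyfit are assumed to be available.

```python
import numpy as np
from scipy.interpolate import CubicSpline  # piecewise-polynomial (spline) interpolation

# Irregularly observed "benchmarks" of a series.
x_obs = np.array([0.0, 1.0, 2.5, 4.0, 6.0, 9.0])
y_obs = np.array([1.2, 1.8, 2.9, 3.1, 4.8, 7.5])

# Interpolation: the spline passes exactly through every observed point.
spline = CubicSpline(x_obs, y_obs)

# Regression: one low-degree polynomial fitted to all points at once,
# generally passing through none of them exactly.
poly = np.poly1d(np.polyfit(x_obs, y_obs, 2))

x_new = np.linspace(0.0, 9.0, 7)
print(np.round(spline(x_new), 2))  # exact at the benchmarks, smooth in between
print(np.round(poly(x_new), 2))    # a single global trend curve
```

Evaluating either function outside the interval [0, 9] is extrapolation and carries the extra uncertainty described above.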
Function approximation

Main article: Function approximation

In general, a function approximation problem asks us to select a function among a well-defined class that closely matches ("approximates") a target function in a task-specific way. One can distinguish two major classes of function approximation problems: First, for known target functions, approximation theory is the branch of numerical analysis that investigates how certain known functions (for example, special functions) can be approximated by a specific class of functions (for example, polynomials or rational functions) that often have desirable properties (inexpensive computation, continuity, integral and limit values, etc.). Second, the target function, call it g, may be unknown; instead of an explicit formula, only a set of points (a time series) of the form (x, g(x)) is provided. Depending on the structure of the domain and codomain of g, several techniques for approximating g may be applicable. For example, if g is an operation on the real numbers, techniques of interpolation, extrapolation, regression analysis, and curve fitting can be used. If the codomain (range or target set) of g is a finite set, one is dealing with a classification problem instead. A related problem of online time series approximation[29] is to summarize the data in one pass and construct an approximate representation that can support a variety of time series queries with bounds on worst-case error. To some extent, the different problems (regression, classification, fitness approximation) have received a unified treatment in statistical learning theory, where they are viewed as supervised learning problems.

Prediction and forecasting

In statistics, prediction is a part of statistical inference. One particular approach to such inference is known as predictive inference, but the prediction can be undertaken within any of the several approaches to statistical inference. Indeed, one description of statistics is that it provides a means of transferring knowledge about a sample of a population to the whole population, and to other related populations, which is not necessarily the same as prediction over time. When information is transferred across time, often to specific points in time, the process is known as forecasting. Approaches include:
• Fully formed statistical models for stochastic simulation purposes, so as to generate alternative versions of the time series, representing what might happen over non-specific time periods in the future
• Simple or fully formed statistical models to describe the likely outcome of the time series in the immediate future, given knowledge of the most recent outcomes (forecasting)
• Forecasting on time series, usually done using automated statistical software packages and programming languages such as Julia, Python, R, SAS, SPSS and many others
• Forecasting on large-scale data, which can be done with Apache Spark using the Spark-TS library, a third-party package[30]

Classification

Assigning time series patterns to a specific category, for example identifying a word based on a series of hand movements in sign language.

Signal estimation

See also: Signal processing and Estimation theory

This approach is based on harmonic analysis and filtering of signals in the frequency domain using the Fourier transform, and spectral density estimation, the development of which was significantly accelerated during World War II by mathematician Norbert Wiener, electrical engineers Rudolf E. Kálmán, Dennis Gabor and others for filtering signals from noise and predicting signal values at a certain point in time. See Kalman filter, Estimation theory, and Digital signal processing.
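Since the signal estimation paragraph above points to the Kalman filter, here is a minimal scalar sketch of the idea. It is an illustration only, not a method described in the article; the helper name kalman_1d and the noise variances q and r are assumptions of the example.

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a slowly varying level observed in white noise.

    q is the process-noise variance and r the measurement-noise variance,
    both assumed known for this sketch.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q               # predict: the level may drift a little
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update the estimate with the new measurement
        p = (1.0 - k) * p       # update the estimate's uncertainty
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
z = 2.0 + 0.5 * rng.standard_normal(200)   # noisy observations of a level of 2.0
print(np.round(kalman_1d(z)[-5:], 3))      # the estimates settle near 2.0
```

Each step blends the prediction with the newest measurement according to the gain k, which is large while the estimate is still uncertain and shrinks as it converges.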
Segmentation

Splitting a time-series into a sequence of segments. It is often the case that a time-series can be represented as a sequence of individual segments, each with its own characteristic properties. For example, the audio signal from a conference call can be partitioned into pieces corresponding to the times during which each person was speaking. In time-series segmentation, the goal is to identify the segment boundary points in the time-series, and to characterize the dynamical properties associated with each segment. One can approach this problem using change-point detection, or by modeling the time-series as a more sophisticated system, such as a Markov jump linear system.

Models

Models for time series data can have many forms and represent different stochastic processes. When modeling variations in the level of a process, three broad classes of practical importance are the autoregressive (AR) models, the integrated (I) models, and the moving average (MA) models. These three classes depend linearly on previous data points.[31] Combinations of these ideas produce autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the former three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models, and sometimes the preceding acronyms are extended by including an initial "V" for "vector", as in VAR for vector autoregression. An additional set of extensions of these models is available for use where the observed time-series is driven by some "forcing" time-series (which may not have a causal effect on the observed series): the distinction from the multivariate case is that the forcing series may be deterministic or under the experimenter's control. For these models, the acronyms are extended with a final "X" for "exogenous".

Non-linear dependence of the level of a series on previous data points is of interest, partly because of the possibility of producing a chaotic time series. However, more importantly, empirical investigations can indicate the advantage of using predictions derived from non-linear models, over those from linear models, as for example in nonlinear autoregressive exogenous models. Further references on nonlinear time series analysis: (Kantz and Schreiber)[32] and (Abarbanel).[33] Among other types of non-linear time series models, there are models to represent the changes of variance over time (heteroskedasticity). These models represent autoregressive conditional heteroskedasticity (ARCH), and the collection comprises a wide variety of representations (GARCH, TARCH, EGARCH, FIGARCH, CGARCH, etc.). Here, changes in variability are related to, or predicted by, recent past values of the observed series. This is in contrast to other possible representations of locally varying variability, where the variability might be modelled as being driven by a separate time-varying process, as in a doubly stochastic model.

In recent work on model-free analyses, wavelet transform based methods (for example locally stationary wavelets and wavelet decomposed neural networks) have gained favor. Multiscale (often referred to as multiresolution) techniques decompose a given time series, attempting to illustrate time dependence at multiple scales. See also Markov switching multifractal (MSMF) techniques for modeling volatility evolution.
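As a concrete illustration of the AR class introduced at the start of this section, the following sketch simulates an AR(1) process and recovers its coefficient by least squares. The example is invented for illustration and is not from the article; in practice, dedicated libraries (for example statsmodels) are normally used to fit AR, MA and ARMA models.

```python
import numpy as np

rng = np.random.default_rng(42)
phi, n = 0.8, 5000

# Simulate an AR(1) process: x_t = phi * x_{t-1} + eps_t
eps = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

# Estimate phi by regressing x_t on x_{t-1}; for a zero-mean AR(1) process this
# is essentially the lag-1 sample autocorrelation.
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
print(round(phi_hat, 3))  # close to the true value 0.8
```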
A Hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered as the simplest dynamic Bayesian network. HMM models are widely used in speech recognition, for translating a time series of spoken words into text. Notation A number of different notations are in use for time-series analysis. A common notation specifying a time series X that is indexed by the natural numbers is written X = (X1, X2, ...). Another common notation is Y = (Yt: t ∈ T), where T is the index set. Conditions There are two sets of conditions under which much of the theory is built: • Stationary process • Ergodic process Ergodicity implies stationarity, but the converse is not necessarily the case. Stationarity is usually classified into strict stationarity and wide-sense or second-order stationarity. Both models and applications can be developed under each of these conditions, although the models in the latter case might be considered as only partly specified. In addition, time-series analysis can be applied where the series are seasonally stationary or non-stationary. Situations where the amplitudes of frequency components change with time can be dealt with in time-frequency analysis which makes use of a time–frequency representation of a time-series or signal.[34] Tools Tools for investigating time-series data include: • Consideration of the autocorrelation function and the spectral density function (also cross-correlation functions and cross-spectral density functions) • Scaled cross- and auto-correlation functions to remove contributions of slow components[35] • Performing a Fourier transform to investigate the series in the frequency domain • Discrete, continuous or mixed spectra of time series, depending on whether the time series contains a (generalized) harmonic signal or not • Use of a filter to remove unwanted noise • Principal component analysis (or empirical orthogonal function analysis) • Singular spectrum analysis • "Structural" models: • General State Space Models • Unobserved Components Models • Machine Learning • Artificial neural networks • Support vector machine • Fuzzy logic • Gaussian process • Genetic Programming • Gene expression programming • Hidden Markov model • Multi expression programming • Queueing theory analysis • Control chart • Shewhart individuals control chart • CUSUM chart • EWMA chart • Detrended fluctuation analysis • Nonlinear mixed-effects modeling • Dynamic time warping[36] • Dynamic Bayesian network • Time-frequency analysis techniques: • Fast Fourier transform • Continuous wavelet transform • Short-time Fourier transform • Chirplet transform • Fractional Fourier transform • Chaotic analysis • Correlation dimension • Recurrence plots • Recurrence quantification analysis • Lyapunov exponents • Entropy encoding Measures Time series metrics or features that can be used for time series classification or regression analysis:[37] • Univariate linear measures • Moment (mathematics) • Spectral band power • Spectral edge frequency • Accumulated Energy (signal processing) • Characteristics of the autocorrelation function • Hjorth parameters • FFT parameters • Autoregressive model parameters • Mann–Kendall test • Univariate non-linear measures • Measures based on the correlation sum • Correlation dimension • Correlation integral • Correlation density • Correlation entropy • Approximate entropy[38] • Sample entropy • Fourier entropyuk • Wavelet entropy • Dispersion 
entropy • Fluctuation dispersion entropy • Rényi entropy • Higher-order methods • Marginal predictability • Dynamical similarity index • State space dissimilarity measures • Lyapunov exponent • Permutation methods • Local flow • Other univariate measures • Algorithmic complexity • Kolmogorov complexity estimates • Hidden Markov Model states • Rough path signature[39] • Surrogate time series and surrogate correction • Loss of recurrence (degree of non-stationarity) • Bivariate linear measures • Maximum linear cross-correlation • Linear Coherence (signal processing) • Bivariate non-linear measures • Non-linear interdependence • Dynamical Entrainment (physics) • Measures for Phase synchronization • Measures for Phase locking • Similarity measures:[40] • Cross-correlation • Dynamic Time Warping[36] • Hidden Markov Models • Edit distance • Total correlation • Newey–West estimator • Prais–Winsten transformation • Data as Vectors in a Metrizable Space • Minkowski distance • Mahalanobis distance • Data as time series with envelopes • Global standard deviation • Local standard deviation • Windowed standard deviation • Data interpreted as stochastic series • Pearson product-moment correlation coefficient • Spearman's rank correlation coefficient • Data interpreted as a probability distribution function • Kolmogorov–Smirnov test • Cramér–von Mises criterion Visualization Time series can be visualized with two categories of chart: Overlapping Charts and Separated Charts. Overlapping Charts display all-time series on the same layout while Separated Charts presents them on different layouts (but aligned for comparison purpose)[41] Overlapping charts • Braided graphs • Line charts • Slope graphs • GapChartfr Separated charts • Horizon graphs • Reduced line chart (small multiples) • Silhouette graph • Circular silhouette graph See also • Anomaly time series • Chirp • Decomposition of time series • Detrended fluctuation analysis • Digital signal processing • Distributed lag • Estimation theory • Forecasting • Frequency spectrum • Hurst exponent • Least-squares spectral analysis • Monte Carlo method • Panel analysis • Random walk • Scaled correlation • Seasonal adjustment • Sequence analysis • Signal processing • Time series database (TSDB) • Trend estimation • Unevenly spaced time series References 1. Lin, Jessica; Keogh, Eamonn; Lonardi, Stefano; Chiu, Bill (2003). "A symbolic representation of time series, with implications for streaming algorithms". Proceedings of the 8th ACM SIGMOD workshop on Research issues in data mining and knowledge discovery. New York: ACM Press. pp. 2–11. CiteSeerX 10.1.1.14.5597. doi:10.1145/882082.882086. ISBN 9781450374224. S2CID 6084733. 2. Liao, T. Warren (2005). "Clustering of time series data - a survey". Pattern Recognition. Elsevier. 38 (11): 1857–1874. Bibcode:2005PatRe..38.1857W. doi:10.1016/j.patcog.2005.01.025. S2CID 8973749. – via ScienceDirect (subscription required) 3. Aghabozorgi, Saeed; Shirkhorshidi, Ali S.; Wah, Teh Y. (2015). "Time-series clustering – A decade review". Information Systems. Elsevier. 53: 16–38. doi:10.1016/j.is.2015.04.007. S2CID 158707. – via ScienceDirect (subscription required) 4. Keogh, Eamonn J. (2003). "On the need for time series data mining benchmarks". Data Mining and Knowledge Discovery. Kluwer. 7: 349–371. doi:10.1145/775047.775062. ISBN 158113567X. S2CID 41617550. – via ACM Digital Library (subscription required) 5. Agrawal, Rakesh; Faloutsos, Christos; Swami, Arun (October 1993). "Foundations of Data Organization and Algorithms". 
Proceedings of the 4th International Conference on Foundations of Data Organization and Algorithms. International Conference on Foundations of Data Organization and Algorithms. Lecture Notes in Computer Science. Vol. 730. pp. 69–84. doi:10.1007/3-540-57301-1_5. ISBN 978-3-540-57301-2. – via SpringerLink (subscription required) 6. Chen, Cathy W. S.; Chiu, L. M. (September 2021). "Ordinal Time Series Forecasting of the Air Quality Index". Entropy. 23 (9): 1167. Bibcode:2021Entrp..23.1167C. doi:10.3390/e23091167. PMC 8469594. PMID 34573792. 7. Sarkar, Advait; Spott, Martin; Blackwell, Alan F.; Jamnik, Mateja (2016). "Visual discovery and model-driven explanation of time series patterns". 2016 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE. pp. 78–86. doi:10.1109/vlhcc.2016.7739668. ISBN 978-1-5090-0252-8. S2CID 9787931. 8. Bloomfield, P. (1976). Fourier analysis of time series: An introduction. New York: Wiley. ISBN 978-0471082569. 9. Shumway, R. H. (1988). Applied statistical time series analysis. Englewood Cliffs, NJ: Prentice Hall. ISBN 978-0130415004. 10. Sandra Lach Arlinghaus, PHB Practical Handbook of Curve Fitting. CRC Press, 1994. 11. William M. Kolb. Curve Fitting for Programmable Calculators. Syntec, Incorporated, 1984. 12. S.S. Halli, K.V. Rao. 1992. Advanced Techniques of Population Analysis. ISBN 0306439972 Page 165 (cf. ... functions are fulfilled if we have a good to moderate fit for the observed data.) 13. The Signal and the Noise: Why So Many Predictions Fail-but Some Don't. By Nate Silver 14. Data Preparation for Data Mining: Text. By Dorian Pyle. 15. Numerical Methods in Engineering with MATLAB®. By Jaan Kiusalaas. Page 24. 16. Numerical Methods in Engineering with Python 3. By Jaan Kiusalaas. Page 21. 17. Numerical Methods of Curve Fitting. By P. G. Guest, Philip George Guest. Page 349. 18. See also: Mollifier 19. Fitting Models to Biological Data Using Linear and Nonlinear Regression. By Harvey Motulsky, Arthur Christopoulos. 20. Regression Analysis By Rudolf J. Freund, William J. Wilson, Ping Sa. Page 269. 21. Visual Informatics. Edited by Halimah Badioze Zaman, Peter Robinson, Maria Petrou, Patrick Olivier, Heiko Schröder. Page 689. 22. Numerical Methods for Nonlinear Engineering Models. By John R. Hauser. Page 227. 23. Methods of Experimental Physics: Spectroscopy, Volume 13, Part 1. By Claire Marton. Page 150. 24. Encyclopedia of Research Design, Volume 1. Edited by Neil J. Salkind. Page 266. 25. Community Analysis and Planning Techniques. By Richard E. Klosterman. Page 1. 26. An Introduction to Risk and Uncertainty in the Evaluation of Environmental Investments. DIANE Publishing. Pg 69 27. Hamming, Richard. Numerical methods for scientists and engineers. Courier Corporation, 2012. 28. Friedman, Milton. "The interpolation of time series by related series." Journal of the American Statistical Association 57.300 (1962): 729–757. 29. Gandhi, Sorabh, Luca Foschini, and Subhash Suri. "Space-efficient online approximation of time series data: Streams, amnesia, and out-of-order." Data Engineering (ICDE), 2010 IEEE 26th International Conference on. IEEE, 2010. 30. Sandy Ryza (2020-03-18). "Time Series Analysis with Spark" (slides of a talk at Spark Summit East 2016). Databricks. Retrieved 2021-01-12. 31. Gershenfeld, N. (1999). The Nature of Mathematical Modeling. New York: Cambridge University Press. pp. 205–208. ISBN 978-0521570954. 32. Kantz, Holger; Thomas, Schreiber (2004). Nonlinear Time Series Analysis. 
London: Cambridge University Press. ISBN 978-0521529020. 33. Abarbanel, Henry (Nov 25, 1997). Analysis of Observed Chaotic Data. New York: Springer. ISBN 978-0387983721. 34. Boashash, B. (ed.), (2003) Time-Frequency Signal Analysis and Processing: A Comprehensive Reference, Elsevier Science, Oxford, 2003 ISBN 0-08-044335-4 35. Nikolić, D.; Muresan, R. C.; Feng, W.; Singer, W. (2012). "Scaled correlation analysis: a better way to compute a cross-correlogram". European Journal of Neuroscience. 35 (5): 742–762. doi:10.1111/j.1460-9568.2011.07987.x. PMID 22324876. S2CID 4694570. 36. Sakoe, Hiroaki; Chiba, Seibi (1978). "Dynamic programming algorithm optimization for spoken word recognition". pp. 43–49. doi:10.1109/TASSP.1978.1163055. S2CID 17900407. {{cite book}}: |journal= ignored (help); Missing or empty |title= (help) 37. Mormann, Florian; Andrzejak, Ralph G.; Elger, Christian E.; Lehnertz, Klaus (2007). "Seizure prediction: the long and winding road". Brain. 130 (2): 314–333. doi:10.1093/brain/awl241. PMID 17008335. 38. Land, Bruce; Elias, Damian. "Measuring the 'Complexity' of a time series". 39. [1] Chevyrev, I., Kormilitzin, A. (2016) "A Primer on the Signature Method in Machine Learning, arXiv:1603.03788v1" 40. Ropella, G. E. P.; Nag, D. A.; Hunt, C. A. (2003). "Similarity measures for automated comparison of in silico and in vitro experimental results". Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE Cat. No.03CH37439). pp. 2933–2936. doi:10.1109/IEMBS.2003.1280532. ISBN 978-0-7803-7789-9. S2CID 17798157. {{cite book}}: |journal= ignored (help) 41. Tominski, Christian; Aigner, Wolfgang. "The TimeViz Browser:A Visual Survey of Visualization Techniques for Time-Oriented Data". Retrieved 1 June 2014. Further reading • De Gooijer, Jan G.; Hyndman, Rob J. (2006). "25 Tears of Time Series Forecasting". International Journal of Forecasting. Twenty Five Years of Forecasting. 22 (3): 443–473. CiteSeerX 10.1.1.154.9227. doi:10.1016/j.ijforecast.2006.01.001. S2CID 14996235. • Box, George; Jenkins, Gwilym (1976), Time Series Analysis: forecasting and control, rev. ed., Oakland, California: Holden-Day • Durbin J., Koopman S.J. (2001), Time Series Analysis by State Space Methods, Oxford University Press. • Gershenfeld, Neil (2000), The Nature of Mathematical Modeling, Cambridge University Press, ISBN 978-0-521-57095-4, OCLC 174825352 • Hamilton, James (1994), Time Series Analysis, Princeton University Press, ISBN 978-0-691-04289-3 • Priestley, M. B. (1981), Spectral Analysis and Time Series, Academic Press. ISBN 978-0-12-564901-8 • Shasha, D. (2004), High Performance Discovery in Time Series, Springer, ISBN 978-0-387-00857-8 • Shumway R. H., Stoffer D. S. (2017), Time Series Analysis and its Applications: With R Examples (ed. 4), Springer, ISBN 978-3-319-52451-1 • Weigend A. S., Gershenfeld N. A. (Eds.) (1994), Time Series Prediction: Forecasting the Future and Understanding the Past. Proceedings of the NATO Advanced Research Workshop on Comparative Time Series Analysis (Santa Fe, May 1992), Addison-Wesley. • Wiener, N. (1949), Extrapolation, Interpolation, and Smoothing of Stationary Time Series, MIT Press. • Woodward, W. A., Gray, H. L. & Elliott, A. C. (2012), Applied Time Series Analysis, CRC Press. • Auffarth, Ben (2021). Machine Learning for Time-Series with Python: Forecast, predict, and detect anomalies with state-of-the-art machine learning methods (1st ed.). Packt Publishing. ISBN 978-1801819626. 
Retrieved 5 November 2021.

External links

Wikimedia Commons has media related to Time series.
• Introduction to Time series Analysis (Engineering Statistics Handbook) — A practical guide to Time series analysis.
Time–frequency analysis

In signal processing, time–frequency analysis comprises those techniques that study a signal in both the time and frequency domains simultaneously, using various time–frequency representations. Rather than viewing a 1-dimensional signal (a function, real or complex-valued, whose domain is the real line) and some transform (another function whose domain is the real line, obtained from the original via some transform), time–frequency analysis studies a two-dimensional signal – a function whose domain is the two-dimensional real plane, obtained from the signal via a time–frequency transform.[1][2]

See also: Time–frequency representation

The mathematical motivation for this study is that functions and their transform representation are tightly connected, and they can be understood better by studying them jointly, as a two-dimensional object, rather than separately. A simple example is that the 4-fold periodicity of the Fourier transform – and the fact that the two-fold Fourier transform reverses direction – can be interpreted by considering the Fourier transform as a 90° rotation in the associated time–frequency plane: 4 such rotations yield the identity, and 2 such rotations simply reverse direction (reflection through the origin). The practical motivation for time–frequency analysis is that classical Fourier analysis assumes that signals are infinite in time or periodic, while many signals in practice are of short duration and change substantially over their duration. For example, traditional musical instruments do not produce infinite-duration sinusoids, but instead begin with an attack and then gradually decay. This is poorly represented by traditional methods, which motivates time–frequency analysis. One of the most basic forms of time–frequency analysis is the short-time Fourier transform (STFT), but more sophisticated techniques have been developed, notably wavelets and least-squares spectral analysis methods for unevenly spaced data.

Motivation

In signal processing, time–frequency analysis[3] is a body of techniques and methods used for characterizing and manipulating signals whose statistics vary in time, such as transient signals. It is a generalization and refinement of Fourier analysis for the case when the signal frequency characteristics are varying with time. Since many signals of interest – such as speech, music, images, and medical signals – have changing frequency characteristics, time–frequency analysis has a broad scope of applications. Whereas the technique of the Fourier transform can be extended to obtain the frequency spectrum of any slowly growing locally integrable signal, this approach requires a complete description of the signal's behavior over all time. Indeed, one can think of points in the (spectral) frequency domain as smearing together information from across the entire time domain. While mathematically elegant, such a technique is not appropriate for analyzing a signal with indeterminate future behavior. For instance, one must presuppose some degree of indeterminate future behavior in any telecommunications system to achieve non-zero entropy (if one already knows what the other person will say, one cannot learn anything). To harness the power of a frequency representation without the need for a complete characterization in the time domain, one first obtains a time–frequency distribution of the signal, which represents the signal in both the time and frequency domains simultaneously.
In such a representation the frequency domain will only reflect the behavior of a temporally localized version of the signal. This enables one to talk sensibly about signals whose component frequencies vary in time. For instance, rather than using tempered distributions to globally transform the following function into the frequency domain, one could instead use these methods to describe it as a signal with a time-varying frequency:
$x(t)={\begin{cases}\cos(\pi t);&t<10\\\cos(3\pi t);&10\leq t<20\\\cos(2\pi t);&t>20\end{cases}}$
Once such a representation has been generated, other techniques in time–frequency analysis may then be applied to the signal in order to extract information from the signal, to separate the signal from noise or interfering signals, etc.

Time–frequency distribution functions

Main article: Time–frequency distribution

Formulations

There are several different ways to formulate a valid time–frequency distribution function, resulting in several well-known time–frequency distributions, such as:
• Short-time Fourier transform (including the Gabor transform),
• Wavelet transform,
• Bilinear time–frequency distribution function (Wigner distribution function, or WDF),
• Modified Wigner distribution function, Gabor–Wigner distribution function, and so on (see Gabor–Wigner transform),
• Hilbert–Huang transform.
More information about the history and the motivation for the development of time–frequency distributions can be found in the entry Time–frequency representation.

Ideal TF distribution function

A time–frequency distribution function ideally has the following properties:
1. High resolution in both time and frequency, to make it easier to analyze and interpret.
2. No cross-terms, to avoid confusing real components with artifacts or noise.
3. A list of desirable mathematical properties, to ensure such methods benefit real-life applications.
4. Low computational complexity, so that the time needed to represent and process a signal on a time–frequency plane allows real-time implementations.
Below is a brief comparison of some selected time–frequency distribution functions:[4]
• Gabor transform: worst clarity; no cross-terms; worst mathematical properties; low computational complexity.
• Wigner distribution function: best clarity; has cross-terms; best mathematical properties; high computational complexity.
• Gabor–Wigner distribution function: good clarity; cross-terms almost eliminated; good mathematical properties; high computational complexity.
• Cone-shape distribution function: good clarity; no cross-terms (eliminated, in time); good mathematical properties; medium computational complexity (if recursively defined).
To analyze signals well, choosing an appropriate time–frequency distribution function is important. Which time–frequency distribution function should be used depends on the application being considered, as shown by reviewing a list of applications.[5] The high clarity of the Wigner distribution function (WDF) obtained for some signals is due to the auto-correlation function inherent in its formulation; however, the latter also causes the cross-term problem. Therefore, if we want to analyze a single-term signal, using the WDF may be the best approach; if the signal is composed of multiple components, some other methods like the Gabor transform, Gabor–Wigner distribution or Modified B-Distribution functions may be better choices. As an illustration, magnitudes from non-localized Fourier analysis cannot distinguish the signals:
$x_{1}(t)={\begin{cases}\cos(\pi t);&t<10\\\cos(3\pi t);&10\leq t<20\\\cos(2\pi t);&t>20\end{cases}}$
$x_{2}(t)={\begin{cases}\cos(\pi t);&t<10\\\cos(2\pi t);&10\leq t<20\\\cos(3\pi t);&t>20\end{cases}}$
But time–frequency analysis can.
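A small numerical sketch of this last point (illustrative only, not part of the article): the two signals contain the same three tones, namely 0.5 Hz, 1 Hz and 1.5 Hz, so their overall Fourier magnitude content is essentially identical, but a short-time Fourier transform shows the tones appearing in different orders. The sampling rate, the window length and the use of SciPy's signal.stft are assumptions of this example.

```python
import numpy as np
from scipy import signal

fs = 20.0                                  # sampling rate in Hz (assumed)
t = np.arange(0.0, 30.0, 1.0 / fs)

def piecewise(segments):
    """Build a signal from (frequency_in_Hz, boolean_mask) segments."""
    x = np.zeros_like(t)
    for f0, mask in segments:
        x[mask] = np.cos(2 * np.pi * f0 * t[mask])
    return x

# cos(pi t), cos(2 pi t) and cos(3 pi t) are 0.5 Hz, 1.0 Hz and 1.5 Hz tones.
x1 = piecewise([(0.5, t < 10), (1.5, (t >= 10) & (t < 20)), (1.0, t >= 20)])
x2 = piecewise([(0.5, t < 10), (1.0, (t >= 10) & (t < 20)), (1.5, t >= 20)])

for x in (x1, x2):
    f, tau, Z = signal.stft(x, fs=fs, nperseg=64)   # short-time Fourier transform
    dominant = f[np.abs(Z).argmax(axis=0)]          # strongest frequency in each time slice
    print(np.round(dominant, 2))                    # the two sequences differ
```

Plotting |Z| as an image over (tau, f) gives the familiar spectrogram view; the differing dominant-frequency sequences are exactly the information that a global Fourier magnitude discards.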
TF analysis and random processes[6]

For a random process x(t), we cannot find an explicit value of x(t); its value is described by a probability function.

• Auto-covariance function $R_{x}(t,\tau )$:
$R_{x}(t,\tau )=E[x(t+\tau /2)x^{*}(t-\tau /2)]$
Usually we assume that $E[x(t)]=0$ for every t, so that
$E[x(t+\tau /2)x^{*}(t-\tau /2)]=\iint x(t+\tau /2,\xi _{1})x^{*}(t-\tau /2,\xi _{2})P(\xi _{1},\xi _{2})d\xi _{1}d\xi _{2}.$
An alternative definition of the auto-covariance function is ${\hat {R}}_{x}(t,\tau )=E[x(t)x(t+\tau )].$

• Power spectral density (PSD) $S_{x}(t,f)$:
$S_{x}(t,f)=\int _{-\infty }^{\infty }R_{x}(t,\tau )e^{-j2\pi f\tau }d\tau $

• Relation between the Wigner distribution function (WDF) and the random process:
$E[W_{x}(t,f)]=\int _{-\infty }^{\infty }E[x(t+\tau /2)x^{*}(t-\tau /2)]e^{-j2\pi f\tau }d\tau =\int _{-\infty }^{\infty }R_{x}(t,\tau )e^{-j2\pi f\tau }d\tau =S_{x}(t,f)$

• Relation between the ambiguity function and the random process:
$E[A_{x}(\eta ,\tau )]=\int _{-\infty }^{\infty }E[x(t+\tau /2)x^{*}(t-\tau /2)]e^{-j2\pi t\eta }dt=\int _{-\infty }^{\infty }R_{x}(t,\tau )e^{-j2\pi t\eta }dt$

• Stationary random process: the statistical properties do not change with t. Its auto-covariance function satisfies $R_{x}(t_{1},\tau )=R_{x}(t_{2},\tau )=R_{x}(\tau )$ for any $t_{1},t_{2}$; therefore
$R_{x}(\tau )=E[x(\tau /2)x^{*}(-\tau /2)]=\iint x(\tau /2,\xi _{1})x^{*}(-\tau /2,\xi _{2})P(\xi _{1},\xi _{2})d\xi _{1}d\xi _{2},$
and the PSD becomes $S_{x}(f)=\int _{-\infty }^{\infty }R_{x}(\tau )e^{-j2\pi f\tau }d\tau .$ White noise has $S_{x}(f)=\sigma $, where $\sigma $ is some constant.

• When x(t) is stationary, $E[W_{x}(t,f)]=S_{x}(f)$ (invariant with $t$), and
$E[A_{x}(\eta ,\tau )]=\int _{-\infty }^{\infty }R_{x}(\tau )e^{-j2\pi t\eta }dt=R_{x}(\tau )\int _{-\infty }^{\infty }e^{-j2\pi t\eta }dt=R_{x}(\tau )\delta (\eta ),$
which is nonzero only when $\eta =0$.

• For white noise, $E[W_{x}(t,f)]=\sigma $ and $E[A_{x}(\eta ,\tau )]=\sigma \delta (\tau )\delta (\eta )$.

Filter design for white noise

Let $E_{x}$ be the energy of the signal and $A$ the area of the time–frequency distribution of the signal. The PSD of the white noise is $S_{n}(f)=\sigma $, so
$SNR\approx 10\log _{10}{\frac {E_{x}}{\iint \limits _{(t,f)\in {\text{signal part}}}S_{x}(t,f)dtdf}}\approx 10\log _{10}{\frac {E_{x}}{\sigma A}}.$

• If $E[W_{x}(t,f)]$ varies with $t$ and $E[A_{x}(\eta ,\tau )]$ is nonzero for some $\eta \neq 0$, then $x(t)$ is a non-stationary random process.

• If
1. $h(t)=x_{1}(t)+x_{2}(t)+x_{3}(t)+\cdots +x_{k}(t)$,
2. the $x_{n}(t)$ have zero mean for all $t$, and
3. the $x_{n}(t)$ are mutually independent for all $t$ and $\tau $, so that
$E[x_{m}(t+\tau /2)x_{n}^{*}(t-\tau /2)]=E[x_{m}(t+\tau /2)]E[x_{n}^{*}(t-\tau /2)]=0$ for $m\neq n$,
then
$E[W_{h}(t,f)]=\sum _{n=1}^{k}E[W_{x_{n}}(t,f)]$ and $E[A_{h}(\eta ,\tau )]=\sum _{n=1}^{k}E[A_{x_{n}}(\eta ,\tau )].$

Random processes and the STFT (short-time Fourier transform)

For the STFT, $E[x(t)]\neq 0$ should be satisfied; otherwise, for a zero-mean random process,
$E[X(t,f)]=E\left[\int _{t-B}^{t+B}x(\tau )w(t-\tau )e^{-j2\pi f\tau }d\tau \right]=\int _{t-B}^{t+B}E[x(\tau )]w(t-\tau )e^{-j2\pi f\tau }d\tau =0.$

Decomposition by the AF and the FRFT

Any non-stationary random process can be expressed as a sum of fractional Fourier transforms (or chirp multiplications) of stationary random processes.
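The white-noise statement above, that the expected time–frequency energy density is flat at level σ, can be checked numerically. The following sketch is only an illustration and is not from the notes: it uses an averaged discrete periodogram as a stand-in for E[W_x(t, f)], and the variance σ and the number of realizations are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0              # target PSD level of the white noise
n, trials = 256, 2000

# Periodogram of one realization: |FFT|^2 / n; averaging many realizations
# approximates the expected spectral energy density of the process.
psd = np.zeros(n)
for _ in range(trials):
    x = rng.normal(scale=np.sqrt(sigma), size=n)   # white noise with variance sigma
    psd += np.abs(np.fft.fft(x)) ** 2 / n
psd /= trials

print(round(psd.mean(), 2), round(psd.std(), 2))   # mean near sigma, small spread: flat spectrum
```

The averaged spectrum is flat to within sampling error, in line with $S_{x}(f)=\sigma $ for white noise.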
Applications

The following applications need not only the time–frequency distribution functions but also some operations on the signal. The linear canonical transform (LCT) is particularly useful here: with LCTs, the shape and location of a signal's distribution on the time–frequency plane can be put into whatever form we want. For example, LCTs can shift the time–frequency distribution to any location, dilate it in the horizontal and vertical directions without changing its area on the plane, shear (or twist) it, and rotate it (fractional Fourier transform). This makes the LCT a flexible tool for analyzing and applying time–frequency distributions.

Instantaneous frequency estimation

The instantaneous frequency is defined as the time rate of change of phase, ${\frac {1}{2\pi }}{\frac {d}{dt}}\phi (t),$ where $\phi (t)$ is the instantaneous phase of the signal. If the image is clear enough, the instantaneous frequency can be read from the time–frequency plane directly. Because high clarity is critical, the WDF is often used for this purpose.

TF filtering and signal decomposition

The goal of filter design is to remove the undesired components of a signal. Conventionally, one can filter in the time domain or in the frequency domain individually, as shown below. However, such filtering may not work well for signals whose components overlap in the time domain or in the frequency domain. Using a time–frequency distribution function, we can filter in the Euclidean time–frequency domain or in a fractional domain by employing the fractional Fourier transform. An example is shown below. Filter design in time–frequency analysis always deals with signals composed of multiple components, so one cannot use the WDF because of its cross-terms; the Gabor transform, the Gabor–Wigner distribution function, or Cohen's class distribution functions may be better choices. The concept of signal decomposition relates to the need to separate one component from the others in a signal; this can be achieved through a filtering operation which requires a filter design stage. Such filtering is traditionally done in the time domain or in the frequency domain; however, this may not be possible for non-stationary multicomponent signals, since their components can overlap in both the time domain and the frequency domain. As a consequence, the only way to achieve component separation, and therefore signal decomposition, is to implement a time–frequency filter.

Sampling theory

By the Nyquist–Shannon sampling theorem, the minimum number of sampling points needed to avoid aliasing is equivalent to the area of the time–frequency distribution of the signal. (This is only an approximation, because the TF area of any signal is infinite.) Below is an example before and after combining sampling theory with the time–frequency distribution: the number of sampling points decreases after the time–frequency distribution is applied. With the WDF there may be a cross-term problem (also called interference); using the Gabor transform instead improves the clarity and readability of the representation, and therefore its interpretation and application to practical problems. Consequently, when the signal to be sampled is composed of a single component, the WDF is used; if the signal consists of more than one component, the Gabor transform, the Gabor–Wigner distribution function, or other reduced-interference TFDs may achieve better results.
The Balian–Low theorem formalizes this, and provides a bound on the minimum number of time–frequency samples needed.

Modulation and multiplexing

Conventionally, the operations of modulation and multiplexing concentrate in time or in frequency, separately. By taking advantage of the time–frequency distribution, modulation and multiplexing can be made more efficient: all we have to do is fill up the time–frequency plane. An example is presented below. As illustrated in the upper example, using the WDF is not advisable, since the serious cross-term problem makes it difficult to multiplex and modulate.

Electromagnetic wave propagation

We can represent an electromagnetic wave in the form of a 2-by-1 matrix ${\begin{bmatrix}x\\y\end{bmatrix}},$ which is similar to a point in the time–frequency plane. When an electromagnetic wave propagates through free space, Fresnel diffraction occurs. We can operate on the 2-by-1 matrix ${\begin{bmatrix}x\\y\end{bmatrix}}$ by an LCT with parameter matrix ${\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&\lambda z\\0&1\end{bmatrix}},$ where z is the propagation distance and $\lambda $ is the wavelength. When an electromagnetic wave passes through a spherical lens or is reflected by a disk, the parameter matrix should be ${\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\-{\frac {1}{\lambda f}}&1\end{bmatrix}}$ and ${\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\{\frac {1}{\lambda R}}&1\end{bmatrix}}$ respectively, where f is the focal length of the lens and R is the radius of the disk. The corresponding results can be obtained from ${\begin{bmatrix}a&b\\c&d\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}.$

Optics, acoustics, and biomedicine

Light is an electromagnetic wave, so time–frequency analysis applies to optics in the same way as for general electromagnetic wave propagation. Similarly, it is characteristic of acoustic signals that their frequency components undergo abrupt variations in time, and hence they are not well represented by a single frequency-component analysis covering their entire duration. As acoustic signals are used as speech in communication between a human sender and receiver, their transmission without delay in technical communication systems is crucial, which makes simpler TFDs, such as the Gabor transform, suitable for analyzing these signals in real time by reducing computational complexity. If the speed of frequency analysis is not a limitation, a detailed feature comparison with well-defined criteria should be made before selecting a particular TFD. Another approach is to define a signal-dependent TFD that is adapted to the data. In biomedicine, one can use time–frequency distributions to analyze electromyography (EMG), electroencephalography (EEG), the electrocardiogram (ECG) or otoacoustic emissions (OAEs).

History

See also: History of wavelets

Early work in time–frequency analysis can be seen in the Haar wavelets (1909) of Alfréd Haar, though these were not significantly applied to signal processing. More substantial work was undertaken by Dennis Gabor, such as Gabor atoms (1947), an early form of wavelets, and the Gabor transform, a modified short-time Fourier transform. The Wigner–Ville distribution (Ville 1948, in a signal processing context) was another foundational step.
Particularly in the 1930s and 1940s, early time–frequency analysis developed in concert with quantum mechanics (Wigner developed the Wigner–Ville distribution in 1932 in quantum mechanics, and Gabor was influenced by quantum mechanics – see Gabor atom); this is reflected in the shared mathematics of the position-momentum plane and the time–frequency plane – as in the Heisenberg uncertainty principle (quantum mechanics) and the Gabor limit (time–frequency analysis), ultimately both reflecting a symplectic structure. An early practical motivation for time–frequency analysis was the development of radar – see ambiguity function. See also • Motions in the time-frequency distribution • Multiresolution analysis • Spectral density estimation • Time–frequency analysis for music signals • Wavelet analysis References 1. L. Cohen, "Time–Frequency Analysis," Prentice-Hall, New York, 1995. ISBN 978-0135945322 2. E. Sejdić, I. Djurović, J. Jiang, “Time-frequency feature representation using energy concentration: An overview of recent advances,” Digital Signal Processing, vol. 19, no. 1, pp. 153-183, January 2009. 3. P. Flandrin, "Time–frequency/Time–Scale Analysis," Wavelet Analysis and its Applications, Vol. 10 Academic Press, San Diego, 1999. 4. Shafi, Imran; Ahmad, Jamil; Shah, Syed Ismail; Kashif, F. M. (2009-06-09). "Techniques to Obtain Good Resolution and Concentrated Time-Frequency Distributions: A Review". EURASIP Journal on Advances in Signal Processing. 2009 (1): 673539. Bibcode:2009EJASP2009..109S. doi:10.1155/2009/673539. ISSN 1687-6180. 5. A. Papandreou-Suppappola, Applications in Time–Frequency Signal Processing (CRC Press, Boca Raton, Fla., 2002) 6. Ding, Jian-Jiun (2022). Time frequency analysis and wavelet transform class notes. Taipei, Taiwan: Graduate Institute of Communication Engineering, National Taiwan University (NTU).
Tim Pedley Timothy John Pedley FRS (born 23 March 1942) is a British mathematician and a former G. I. Taylor Professor of Fluid Mechanics at the University of Cambridge.[1] His principal research interest is the application of fluid mechanics to biology and medicine.[3] Tim Pedley. Born: 23 March 1942,[1] Leicester. Alma mater: Trinity College, Cambridge. Awards: Mayhew Prize (1963); Adams Prize (1977);[1] IMA Gold Medal (2008). Scientific career. Institutions: Johns Hopkins University; Imperial College London; University of Cambridge; University of Leeds. Thesis: Plumes, Bubbles and Vortices (1966). Doctoral advisor: George Batchelor.[2] Doctoral students: Sarah L. Waters. Early life and education Pedley is the son of Richard Rodman Pedley and Jeanie Mary Mudie Pedley. He was educated at Rugby School and Trinity College, Cambridge. Academic career Pedley spent three years at Johns Hopkins University as a post-doctoral fellow.[1] From 1968 to 1973 he was a lecturer at Imperial College London, after which he moved to the Department of Applied Mathematics and Theoretical Physics (DAMTP) at the University of Cambridge. Pedley remained at Cambridge until 1990, when he was appointed Professor of Applied Mathematics at the University of Leeds. In 1996 he returned to Cambridge and from 2000 to 2005 he was head of DAMTP.[4] Research Pedley has pioneered the application of fluid mechanics to understanding biological phenomena. His best-known work includes the study of blood flow in arteries, flow–structure interactions in elastic tubes, flow and pressure drop in the lung, and the collective behaviour of swimming microorganisms. His research has touched on issues of medical importance, including arterial bypass grafts, urine flow from kidneys to bladder, and the ventilation of premature infants. His work on microorganisms has application to plankton ecology.[5] Honours Pedley is a fellow of Gonville and Caius College, Cambridge[6] and was elected a Fellow of the Royal Society (FRS) in 1995.[7] Pedley was elected a member of the National Academy of Engineering (1999) for research on biofluid dynamics, collapsible tube flow, and the theory of swimming of fish and microorganisms. In 2008 Pedley and Professor James Murray FRS were jointly awarded the Gold Medal of the Institute of Mathematics and its Applications in recognition of their "outstanding contributions to mathematics and its applications over a period of years".[8] Marriage and children In 1965 Pedley married Avril Jennifer Martin Uden, with whom he has two sons. He enjoys birdwatching, running and reading.[1] References 1. Sleeman, Elizabeth (2003). The International Who's Who 2004. Routledge. ISBN 1-85743-217-7. 2. Tim Pedley at the Mathematics Genealogy Project 3. "Timothy Pedley". EPSRC. 9 June 2009. Retrieved 22 July 2009. 4. "Professor T.J. Pedley". University of Cambridge Department of Applied Mathematics and Theoretical Physics. Retrieved 22 July 2009. 5. "Timothy Pedley". London: Royal Society. One or more of the preceding sentences may incorporate text from the royalsociety.org website where "all text published under the heading 'Biography' on Fellow profile pages is available under Creative Commons Attribution 4.0 International License." "Royal Society Terms, conditions and policies". Archived from the original on 20 February 2016. Retrieved 9 March 2016. 6. "Fellows of Gonville and Caius College". Cambridge University Reporter. Retrieved 22 July 2009. 7. "List of Fellows of the Royal Society 1660 – 2007" (PDF). Royal Society. Retrieved 3 March 2012. 8. "IMA Gold Medal". Retrieved 16 May 2018.
Tina Pizzardo Battistina Pizzardo, known as Tina (5 February 1903, Turin – 15 February 1989, Turin), was an Italian mathematician and anti-fascist. Tina Pizzardo. Born: 5 February 1903, Turin. Died: 17 February 1989 (aged 86), Turin. Occupation: Researcher. Life She graduated from the University of Turin in 1925. In 1926, she became a member of the "Academia pro interlingua". She was in Rome in March 1926 to take part in the qualification competition for teaching in secondary schools, and on this occasion she met Altiero Spinelli and other anti-fascists. In July she joined the Communist Party and in October she began teaching mathematics and physics at the Liceo classico Carducci-Ricasoli in Grosseto. The police traced her through her letters and arrested her in September for "subversive activity"; she was sentenced to one year in prison and three years of probation.[1][2] She was transferred to the prison in Turin, then to that of Ancona and finally to the women's prison in Rome, where she organized protests with other inmates. She was released from prison on 13 September 1928. She lost her job and was unable to continue teaching in state schools; she had to earn a precarious living by giving private mathematics lessons. In Turin she maintained relations with the anti-fascists: her regular friends were Mario Carrara, his wife Paola Lombroso, Giuseppe Levi, Adriano Olivetti, and Barbara Allason. In this period she was drawn to three men: Altiero Spinelli, Henek Rieser, and Cesare Pavese.[3][4] On 15 May 1935, she was again arrested by the police. The raid targeted the editorial staff of the magazine «Cultura», and Pavese, Bruno Maffi, Carlo Levi, Franco Antonicelli and others ended up in prison. Tina was released at the end of June, being, in the opinion of the police, merely a poor teacher who gave private lessons to the high bourgeoisie and to well-known intellectuals. Her marriage to Henek Rieser took place on 19 April of the following year.[1] Tina Pizzardo saw Altiero Spinelli again at the fall of fascism. She joined the European Federalist Movement he founded in 1943. She was a candidate for the Chamber of Deputies in the 1948 general elections with the Action Party. In 1962 she wrote her memoirs, which circulated in typescript and were published posthumously in 1996 under the title Senza pensarci due volte (Without Thinking Twice). In her memoirs, Pizzardo described herself as "a free and uninhibited woman, full of life and sociability, even fickle, who needed ties with several men at the same time".[5] Works • Pizzardo, Tina (1996). Senza pensarci due volte. Bologna: Il Mulino. ISBN 88-15-05615-7. OCLC 36262238. References 1. "Pizzardo Rieser Battistina (Tina) — Scienza a due voci". scienzaa2voci.unibo.it. Retrieved 2022-05-09. 2. Marrone, Gaetana; Puppa, Paolo (2006-12-26). Encyclopedia of Italian Literary Studies. Routledge. ISBN 978-1-135-45530-9. 3. Stille, Alexander (April 2003). Benevolence and Betrayal: Five Italian Jewish Families Under Fascism. Macmillan. ISBN 978-0-312-42153-3. 4. "Tutti con il naso all'insù: dopo due anni torna a Poggio Rusco il lancio dei paracadutisti". Gazzetta di Mantova (in Italian). 2022-04-19. Retrieved 2022-05-09. 5. "Tina Pizzardo la mia storia con Pavese". 2016-03-05. Archived from the original on 5 March 2016. Retrieved 2022-05-09.
Tits alternative In mathematics, the Tits alternative, named after Jacques Tits, is an important theorem about the structure of finitely generated linear groups. Statement The theorem, proven by Tits,[1] is stated as follows. Theorem —  Let $G$ be a finitely generated linear group over a field. Then one of the following two possibilities occurs: • either $G$ is virtually solvable (i.e., has a solvable subgroup of finite index) • or it contains a nonabelian free group (i.e., it has a subgroup isomorphic to the free group on two generators). Consequences A linear group is not amenable if and only if it contains a non-abelian free group (thus the von Neumann conjecture, while not true in general, holds for linear groups). The Tits alternative is an important ingredient[2] in the proof of Gromov's theorem on groups of polynomial growth. In fact the alternative essentially establishes the result for linear groups (it reduces it to the case of solvable groups, which can be dealt with by elementary means). Generalizations In geometric group theory, a group G is said to satisfy the Tits alternative if for every subgroup H of G either H is virtually solvable or H contains a nonabelian free subgroup (in some versions of the definition this condition is only required to be satisfied for all finitely generated subgroups of G). Examples of groups satisfying the Tits alternative which are either not linear, or at least not known to be linear, are: • Hyperbolic groups • Mapping class groups;[3][4] • Out(Fn);[5] • Certain groups of birational transformations of algebraic surfaces.[6] Examples of groups not satisfying the Tits alternative are: • the Grigorchuk group; • Thompson's group F. Proof The proof of the original Tits alternative[1] proceeds by looking at the Zariski closure of $G$ in $\mathrm {GL} _{n}(k)$. If it is solvable then the group is solvable. Otherwise one looks at the image of $G$ in the Levi component. If it is noncompact then a ping-pong argument finishes the proof. If it is compact then either all eigenvalues of elements in the image of $G$ are roots of unity, in which case the image is finite, or one can find an embedding of $k$ for which one can apply the ping-pong strategy. Note that the proofs of all the generalisations above also rest on a ping-pong argument. References 1. Tits, J. (1972). "Free subgroups in linear groups". Journal of Algebra. 20 (2): 250–270. doi:10.1016/0021-8693(72)90058-0. 2. Tits, Jacques (1981). "Groupes à croissance polynomiale". Séminaire Bourbaki (in French). 1980/1981. 3. Ivanov, Nikolai (1984). "Algebraic properties of the Teichmüller modular group". Dokl. Akad. Nauk SSSR. 275: 786–789. 4. McCarthy, John (1985). "A "Tits-alternative" for subgroups of surface mapping class groups". Trans. Amer. Math. Soc. 291: 583–612. doi:10.1090/s0002-9947-1985-0800253-8. 5. Bestvina, Mladen; Feighn, Mark; Handel, Michael (2000). "The Tits alternative for Out(Fn) I: Dynamics of exponentially-growing automorphisms". Annals of Mathematics. 151 (2): 517–623. arXiv:math/9712217. doi:10.2307/121043. JSTOR 121043. 6. Cantat, Serge (2011). "Sur les groupes de transformations birationnelles des surfaces". Ann. Math. (in French). 174: 299–340. doi:10.4007/annals.2011.174.1.8.
Coxeter complex In mathematics, the Coxeter complex, named after H. S. M. Coxeter, is a geometrical structure (a simplicial complex) associated to a Coxeter group. Coxeter complexes are the basic objects that allow the construction of buildings; they form the apartments of a building. Construction The canonical linear representation The first ingredient in the construction of the Coxeter complex associated to a Coxeter system $(W,S)$ is a certain representation of $W$, called the canonical representation of $W$. Let $(W,S)$ be a Coxeter system with Coxeter matrix $M=(m(s,t))_{s,t\in S}$. The canonical representation is given by a vector space $V$ with basis of formal symbols $(e_{s})_{s\in S}$, which is equipped with the symmetric bilinear form $B(e_{s},e_{t})=-\cos \left({\frac {\pi }{m(s,t)}}\right)$. In particular, $B(e_{s},e_{s})=1$. The action of $W$ on $V$ is then given by $s(v)=v-2B(e_{s},v)e_{s}$. This representation has several foundational properties in the theory of Coxeter groups; for instance, $B$ is positive definite if and only if $W$ is finite. It is a faithful representation of $W$. Chambers and the Tits cone This representation describes $W$ as a reflection group, with the caveat that $B$ might not be positive definite. It becomes important then to distinguish the representation $V$ from its dual $V^{*}$. The vectors $e_{s}$ lie in $V$ and have corresponding dual vectors $e_{s}^{\vee }$ in $V^{*}$ given by $\langle e_{s}^{\vee },v\rangle =2B(e_{s},v),$ where the angled brackets indicate the natural pairing between $V^{*}$ and $V$. Now $W$ acts on $V^{*}$ and the action is given by $s(f)=f-\langle f,e_{s}\rangle e_{s}^{\vee },$ for $s\in S$ and any $f\in V^{*}$. Then $s$ is a reflection in the hyperplane $H_{s}=\{f\in V^{*}:\langle f,e_{s}\rangle =0\}$. One has the fundamental chamber ${\mathcal {C}}=\{f\in V^{*}:\langle f,e_{s}\rangle >0\ \forall s\in S\}$; this has faces the so-called walls, $H_{s}$. The other chambers can be obtained from ${\mathcal {C}}$ by translation: they are the $w{\mathcal {C}}$ for $w\in W$. The Tits cone is $X=\bigcup _{w\in W}w{\overline {\mathcal {C}}}$. This need not be the whole of $V^{*}$. Of major importance is the fact that $X$ is convex. The closure ${\overline {\mathcal {C}}}$ of ${\mathcal {C}}$ is a fundamental domain for the action of $W$ on $X$. The Coxeter complex The Coxeter complex $\Sigma (W,S)$ of $W$ with respect to $S$ is $\Sigma (W,S)=(X\setminus \{0\})/\mathbb {R} _{+}$, where $\mathbb {R} _{+}$ is the multiplicative group of positive reals. Examples Finite dihedral groups The dihedral groups $D_{n}$ (of order 2n) are Coxeter groups, of corresponding type $\mathrm {I} _{2}(n)$. These have the presentation $\left\langle s,t\,\left|\,s^{2},t^{2},(st)^{n}\right\rangle \right.$. The canonical linear representation of $\mathrm {I} _{2}(n)$ is the usual reflection representation of the dihedral group, as acting on an $n$-gon in the plane (so $V=\mathbb {R} ^{2}$ in this case). For instance, in the case $n=3$ we get the Coxeter group of type $\mathrm {I} _{2}(3)=\mathrm {A} _{2}$, acting on an equilateral triangle in the plane. Each reflection $s$ has an associated hyperplane $H_{s}$ in the dual vector space (which can be canonically identified with the vector space itself using the bilinear form $B$, which is an inner product in this case as remarked above); these are the walls. They cut out chambers, as seen below: The Coxeter complex is then the corresponding $2n$-gon, as in the image above. 
This is a simplicial complex of dimension 1, and it can be colored by cotype. The infinite dihedral group Another motivating example is the infinite dihedral group $D_{\infty }$. This can be seen as the group of symmetries of the real line that preserves the set of points with integer coordinates; it is generated by the reflections in $x=0$ and $x={1 \over 2}$. This group has the Coxeter presentation $\left\langle s,t\,\left|\,s^{2},t^{2}\right\rangle \right.$. In this case, it is no longer possible to identify $V$ with its dual space $V^{*}$, as $B$ is degenerate. It is then better to work solely with $V^{*}$, which is where the hyperplanes are defined. In this case, the Tits cone is not the whole plane, but only the upper half plane. Taking the quotient by the positive reals then yields another copy of the real line, with marked points at the integers. This is the Coxeter complex of the infinite dihedral group. Alternative construction of the Coxeter complex Another description of the Coxeter complex uses standard cosets of the Coxeter group $W$. A standard coset is a coset of the form $wW_{J}$, where $W_{J}=\langle J\rangle $ for some subset $J$ of $S$. For instance, $W_{S}=W$ and $W_{\emptyset }=\{1\}$. The Coxeter complex $\Sigma (W,S)$ is then the poset of standard cosets, ordered by reverse inclusion. This has a canonical structure of a simplicial complex, as do all posets that satisfy: • Any two elements have a greatest lower bound. • The poset of elements less than or equal to any given element is isomorphic to the poset of subsets of $\{1,2,\ldots ,n\}$ for some integer n. Properties The Coxeter complex associated to $(W,S)$ has dimension $|S|-1$. It is homeomorphic to a $(|S|-1)$-sphere if W is finite and is contractible if W is infinite. Every apartment of a spherical Tits building is a Coxeter complex.[1] See also • Buildings • Weyl group • Root system References 1. https://dept.math.lsa.umich.edu/~lji/building-curve-complex-handbook.pdf, p. 8, Definition 2.5 Sources • Peter Abramenko and Kenneth S. Brown, Buildings, Theory and Applications. Springer, 2008.
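As an illustration of the canonical representation described above, the following Python sketch (with an arbitrary choice of m) builds the bilinear form $B$ for a dihedral Coxeter system $\mathrm {I} _{2}(m)$, forms the two generating reflections from $s(v)=v-2B(e_{s},v)e_{s}$, and checks the Coxeter relations numerically.

```python
import numpy as np

m = 5  # an arbitrary choice; m = 3 corresponds to the equilateral-triangle example above

# Bilinear form B(e_s, e_t) = -cos(pi / m(s, t)), with B(e_s, e_s) = 1.
B = np.array([[1.0, -np.cos(np.pi / m)],
              [-np.cos(np.pi / m), 1.0]])

def reflection(i):
    # Column j of the matrix is s(e_j) = e_j - 2 B(e_i, e_j) e_i.
    R = np.zeros((2, 2))
    for j in range(2):
        R[:, j] = np.eye(2)[:, j] - 2 * B[i, j] * np.eye(2)[:, i]
    return R

S, T = reflection(0), reflection(1)

# The generators are involutions and their product has order m,
# as the presentation <s, t | s^2, t^2, (st)^m> requires.
assert np.allclose(S @ S, np.eye(2))
assert np.allclose(T @ T, np.eye(2))
assert np.allclose(np.linalg.matrix_power(S @ T, m), np.eye(2))
print("Coxeter relations for I_2(%d) verified" % m)
```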
Tits group In group theory, the Tits group ${}^{2}F_{4}(2)'$, named for Jacques Tits (French: [tits]), is a finite simple group of order $2^{11}\cdot 3^{3}\cdot 5^{2}\cdot 13$ = 17,971,200. It is sometimes considered a 27th sporadic group. History and properties The Ree groups ${}^{2}F_{4}(2^{2n+1})$ were constructed by Ree (1961), who showed that they are simple if n ≥ 1. The first member of this series ${}^{2}F_{4}(2)$ is not simple. It was studied by Jacques Tits (1964) who showed that it is almost simple, its derived subgroup ${}^{2}F_{4}(2)'$ of index 2 being a new simple group, now called the Tits group. The group ${}^{2}F_{4}(2)$ is a group of Lie type and has a BN pair, but the Tits group itself does not have a BN pair. The Tits group is a member of the infinite family ${}^{2}F_{4}(2^{2n+1})'$ of commutator groups of the Ree groups, and thus by definition not sporadic. But because it is also not strictly a group of Lie type, it is sometimes regarded as a 27th sporadic group.[1] The Schur multiplier of the Tits group is trivial and its outer automorphism group has order 2, with the full automorphism group being the group ${}^{2}F_{4}(2)$. The Tits group occurs as a maximal subgroup of the Fischer group $\mathrm {Fi} _{22}$. The group ${}^{2}F_{4}(2)$ also occurs as a maximal subgroup of the Rudvalis group, as the point stabilizer of the rank-3 permutation action on 4060 = 1 + 1755 + 2304 points. The Tits group is one of the simple N-groups, and was overlooked in John G. Thompson's first announcement of the classification of simple N-groups, as it had not been discovered at the time. It is also one of the thin finite groups. The Tits group was characterized in various ways by Parrott (1972, 1973) and Stroth (1980). Maximal subgroups Wilson (1984) and Tchakerian (1986) independently found the 8 classes of maximal subgroups of the Tits group as follows: • $L_{3}(3){:}2$ (two classes, fused by an outer automorphism; these subgroups fix points of rank 4 permutation representations) • $2.[2^{8}].5.4$ (centralizer of an involution) 
• $L_{2}(25)$ • $2^{2}.[2^{8}].S_{3}$ • $A_{6}.2^{2}$ (two classes, fused by an outer automorphism) • $5^{2}{:}4A_{4}$ Presentation The Tits group can be defined in terms of generators and relations by $a^{2}=b^{3}=(ab)^{13}=[a,b]^{5}=[a,bab]^{4}=((ab)^{4}ab^{-1})^{6}=1,\,$ where $[a,b]$ is the commutator $a^{-1}b^{-1}ab$. It has an outer automorphism obtained by sending (a, b) to $(a,\,b(ba)^{5}b(ba)^{5})$. Notes 1. For instance, by the ATLAS of Finite Groups and its web-based descendant References • Parrott, David (1972), "A characterization of the Tits' simple group", Canadian Journal of Mathematics, 24 (4): 672–685, doi:10.4153/cjm-1972-063-0, ISSN 0008-414X, MR 0325757 • Parrott, David (1973), "A characterization of the Ree groups 2F4(q)", Journal of Algebra, 27 (2): 341–357, doi:10.1016/0021-8693(73)90109-9, ISSN 0021-8693, MR 0347965 • Ree, Rimhak (1961), "A family of simple groups associated with the simple Lie algebra of type (F4)", Bulletin of the American Mathematical Society, 67: 115–116, doi:10.1090/S0002-9904-1961-10527-2, ISSN 0002-9904, MR 0125155 • Stroth, Gernot (1980), "A general characterization of the Tits simple group", Journal of Algebra, 64 (1): 140–147, doi:10.1016/0021-8693(80)90138-6, ISSN 0021-8693, MR 0575787 • Tchakerian, Kerope B. (1986), "The maximal subgroups of the Tits simple group", Pliska Studia Mathematica Bulgarica, 8: 85–93, ISSN 0204-9805, MR 0866648 • Tits, Jacques (1964), "Algebraic and abstract simple groups", Annals of Mathematics, Second Series, 80 (2): 313–329, doi:10.2307/1970394, ISSN 0003-486X, JSTOR 1970394, MR 0164968 • Wilson, Robert A. (1984), "The geometry and maximal subgroups of the simple groups of A. Rudvalis and J. Tits", Proceedings of the London Mathematical Society, Third Series, 48 (3): 533–563, doi:10.1112/plms/s3-48.3.533, ISSN 0024-6115, MR 0735227 External links • ATLAS of Group Representations — The Tits Group
Satake diagram In the mathematical study of Lie algebras and Lie groups, a Satake diagram is a generalization of a Dynkin diagram introduced by Satake (1960, p.109) whose configurations classify simple Lie algebras over the field of real numbers. The Satake diagrams associated to a Dynkin diagram classify real forms of the complex Lie algebra corresponding to the Dynkin diagram. More generally, the Tits index or Satake–Tits diagram of a reductive algebraic group over a field is a generalization of the Satake diagram to arbitrary fields, introduced by Tits (1966), that reduces the classification of reductive algebraic groups to that of anisotropic reductive algebraic groups. Satake diagrams are not the same as Vogan diagrams of a Lie group, although they look similar. Definition A Satake diagram is obtained from a Dynkin diagram by blackening some vertices, and connecting other vertices in pairs by arrows, according to certain rules. Suppose that G is an algebraic group defined over a field k, such as the reals. We let S be a maximal split torus in G, and take T to be a maximal torus containing S defined over the separable algebraic closure K of k. Then G(K) has a Dynkin diagram with respect to some choice of positive roots of T. This Dynkin diagram has a natural action of the Galois group of K/k. Also some of the simple roots vanish on S. The Satake–Tits diagram is given by the Dynkin diagram D, together with the action of the Galois group, with the simple roots vanishing on S colored black. In the case when k is the field of real numbers, the absolute Galois group has order 2, and its action on D is represented by drawing conjugate points of the Dynkin diagram near each other, and the Satake–Tits diagram is called a Satake diagram. Examples • Compact Lie algebras correspond to the Satake diagram with all vertices blackened. • Split Lie algebras correspond to the Satake diagram with only white (i.e., non blackened) and unpaired vertices. • A table can be found at (Onishchik & Vinberg 1994, Table 4, pp. 229–230). Differences between Satake and Vogan diagrams Both Satake and Vogan diagrams are used to classify semisimple Lie groups or algebras (or algebraic groups) over the reals and both consist of Dynkin diagrams enriched by blackening a subset of the nodes and connecting some pairs of vertices by arrows. Satake diagrams, however, can be generalized to any field (see above) and fall under the general paradigm of Galois cohomology, whereas Vogan diagrams are defined specifically over the reals. Generally speaking, the structure of a real semisimple Lie algebra is encoded in a more transparent way in its Satake diagram, but Vogan diagrams are simpler to classify. The essential difference is that the Satake diagram of a real semisimple Lie algebra ${\mathfrak {g}}$ with Cartan involution θ and associated Cartan pair ${\mathfrak {g}}={\mathfrak {k}}\oplus {\mathfrak {p}}$ (the +1 and −1 eigenspaces of θ) is defined by starting from a maximally noncompact θ-stable Cartan subalgebra ${\mathfrak {h}}$, that is, one for which $\theta ({\mathfrak {h}})={\mathfrak {h}}$ and ${\mathfrak {h}}\cap {\mathfrak {k}}$ is as small as possible (in the presentation above, ${\mathfrak {h}}$ appears as the Lie algebra of the maximal split torus S), whereas Vogan diagrams are defined starting from a maximally compact θ-stable Cartan subalgebra, that is, one for which $\theta ({\mathfrak {h}})={\mathfrak {h}}$ and ${\mathfrak {h}}\cap {\mathfrak {k}}$ is as large as possible. 
The unadorned Dynkin diagram (i.e., that with only white nodes and no arrows), when interpreted as a Satake diagram, represents the split real form of the Lie algebra, whereas it represents the compact form when interpreted as a Vogan diagram. See also • Relative root system • List of irreducible Tits indices References • Bump, Daniel (2004), Lie groups, Graduate Texts in Mathematics, vol. 225, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4757-4094-3, ISBN 978-0-387-21154-1, MR 2062813 • Helgason, Sigurdur (2001), Differential geometry, Lie groups, and symmetric spaces, Graduate Studies in Mathematics, vol. 34, Providence, R.I.: American Mathematical Society, doi:10.1090/gsm/034, ISBN 978-0-8218-2848-9, MR 1834454 • Onishchik, A. L.; Vinberg, Ėrnest Borisovich (1994), Lie groups and Lie algebras III: structure of Lie groups and Lie algebras, Springer, ISBN 978-3-540-54683-2 • Satake, Ichirô (1960), "On representations and compactifications of symmetric Riemannian spaces", Annals of Mathematics, Second Series, 71 (1): 77–110, doi:10.2307/1969880, ISSN 0003-486X, JSTOR 1969880, MR 0118775 • Satake, Ichiro (1971), Classification theory of semi-simple algebraic groups, Lecture Notes in Pure and Applied Mathematics, vol. 3, New York: Marcel Dekker Inc., ISBN 978-0-8247-1607-3, MR 0316588 • Spindel, Philippe; Persson, Daniel; Henneaux, Marc (2008), "Spacelike Singularities and Hidden Symmetries of Gravity", Living Reviews in Relativity, 11 (1): 1, arXiv:0710.1818, doi:10.12942/lrr-2008-1, PMC 5255974, PMID 28179821 • Tits, Jacques (1966), "Classification of algebraic semisimple groups", Algebraic Groups and Discontinuous Subgroups (Proc. Sympos. Pure Math., Boulder, Colo., 1965), Providence, R.I.: American Mathematical Society, pp. 33–62, MR 0224710 • Tits, Jacques (1971), "Représentations linéaires irréductibles d'un groupe réductif sur un corps quelconque", Journal für die reine und angewandte Mathematik, 1971 (247): 196–220, doi:10.1515/crll.1971.247.196, ISSN 0075-4102, MR 0277536, S2CID 116999784
Tits metric In mathematics, the Tits metric is a metric defined on the ideal boundary of an Hadamard space (also called a complete CAT(0) space). It is named after Jacques Tits. Ideal boundary of an Hadamard space Let (X, d) be an Hadamard space. Two geodesic rays c1, c2 : [0, ∞] → X are called asymptotic if they stay within a certain distance when traveling, i.e. $\sup _{t\geq 0}d(c_{1}(t),c_{2}(t))<\infty .$ Equivalently, the Hausdorff distance between the two rays is finite. The asymptotic property defines an equivalence relation on the set of geodesic rays, and the set of equivalence classes is called the ideal boundary ∂X of X. An equivalence class of geodesic rays is called a boundary point of X. For any equivalence class of rays and any point p in X, there is a unique ray in the class that issues from p. Definition of the Tits metric First we define an angle between boundary points with respect to a point p in X. For any two boundary points $\xi _{1},\xi _{2}$ in ∂X, take the two geodesic rays c1, c2 issuing from p corresponding to the two boundary points respectively. One can define an angle of the two rays at p called the Alexandrov angle. Intuitively, take the triangle with vertices p, c1(t), c2(t) for a small t, and construct a triangle in the flat plane with the same side lengths as this triangle. Consider the angle at the vertex of the flat triangle corresponding to p. The limit of this angle when t goes to zero is defined as the Alexandrov angle of the two rays at p. (By definition of a CAT(0) space, the angle monotonically decreases as t decreases, so the limit exists.) Now we define $\angle _{p}(\xi _{1},\xi _{2})$ to be this angle. To define the angular metric on the boundary ∂X that does not depend on the choice of p, we take the supremum over all points in X $\angle (\xi _{1},\xi _{2}):=\sup _{p\in X}\angle _{p}(\xi _{1},\xi _{2}).$ The Tits metric dT is the length metric associated to the angular metric, that is for any two boundary points, the Tits distance between them is the infimum of lengths of all the curves on the boundary that connect them measured in the angular metric. If there is no such curve with finite length, the Tits distance between the two points is defined as infinity. The ideal boundary of X equipped with the Tits metric is called the Tits boundary, denoted as ∂TX. For a complete CAT(0) space, it can be shown that its ideal boundary with the angular metric is a complete CAT(1) space, and its Tits boundary is also a complete CAT(1) space. Thus for any two boundary points $\xi _{1},\xi _{2}$ such that $\angle (\xi _{1},\xi _{2})<\pi $, we have $d_{\mathrm {T} }(\xi _{1},\xi _{2})=\angle (\xi _{1},\xi _{2}),$ and the points can be joined by a unique geodesic segment on the boundary. If the space is proper, then any two boundary points at finite Tits distance apart can be joined by a geodesic segment on the boundary. Examples • For a Euclidean space En, its Tits boundary is the unit sphere Sn - 1. • An Hadamard space X is called a visibility space if any two distinct boundary points are the endpoints of a geodesic line in X. For such a space, the angular distance between any two boundary points is equal to π, so there is no curve with finite length on the ideal boundary that connects any two distinct boundary points, which means that the Tits distance between any two of them is infinity. References • Bridson, Martin R.; Haefliger, André (1999). Metric spaces of non-positive curvature. 
Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 319. Berlin: Springer-Verlag. pp. xxii+643. ISBN 3-540-64324-9. MR 1744486.
(B, N) pair In mathematics, a (B, N) pair is a structure on groups of Lie type that allows one to give uniform proofs of many results, instead of giving a large number of case-by-case proofs. Roughly speaking, it shows that all such groups are similar to the general linear group over a field. They were introduced by the mathematician Jacques Tits, and are also sometimes known as Tits systems. Definition A (B, N) pair is a pair of subgroups B and N of a group G such that the following axioms hold: • G is generated by B and N. • The intersection, T, of B and N is a normal subgroup of N. • The group W = N/T is generated by a set S of elements of order 2 such that • If s is an element of S and w is an element of W then sBw is contained in the union of BswB and BwB. • No element of S normalizes B. The set S is uniquely determined by B and N and the pair (W,S) is a Coxeter system.[1] Terminology BN pairs are closely related to reductive groups and the terminology in both subjects overlaps. The size of S is called the rank. We call • B the (standard) Borel subgroup, • T the (standard) Cartan subgroup, and • W the Weyl group. A subgroup of G is called • parabolic if it contains a conjugate of B, • standard parabolic if, in fact, it contains B itself, and • a Borel (or minimal parabolic) if it is a conjugate of B. Examples Abstract examples of BN pairs arise from certain group actions. • Suppose that G is any doubly transitive permutation group on a set E with more than 2 elements. We let B be the subgroup of G fixing a point x, and we let N be the subgroup fixing or exchanging 2 points x and y. The subgroup T is then the set of elements fixing both x and y, and W has order 2 and its nontrivial element is represented by anything exchanging x and y. • Conversely, if G has a (B, N) pair of rank 1, then the action of G on the cosets of B is doubly transitive. So BN pairs of rank 1 are more or less the same as doubly transitive actions on sets with more than 2 elements. More concrete examples of BN pairs can be found in reductive groups. • Suppose that G is the general linear group GLn(K) over a field K. We take B to be the upper triangular matrices, T to be the diagonal matrices, and N to be the monomial matrices, i.e. matrices with exactly one non-zero element in each row and column. There are n − 1 generators, represented by the matrices obtained by swapping two adjacent rows of a diagonal matrix. The Weyl group is the symmetric group on n letters. • More generally, if G is a reductive group over a field K then the group G=G(K) has a BN pair in which • B=P(K), where P is a minimal parabolic subgroup of G, and • N=N(K), where N is the normalizer of a split maximal torus contained in P.[2] • In particular, any finite group of Lie type has the structure of a BN-pair. • Over the field of two elements, the Cartan subgroup is trivial in this example. • A semisimple simply-connected algebraic group over a local field has a BN-pair where B is an Iwahori subgroup. Properties Bruhat decomposition The Bruhat decomposition states that G = BWB. More precisely, the double cosets B\G/B are represented by a set of lifts of W to N.[3] Parabolic subgroups Every parabolic subgroup equals its normalizer in G.[4] Every standard parabolic is of the form BW(X)B for some subset X of S, where W(X) denotes the Coxeter subgroup generated by X. Moreover, two standard parabolics are conjugate if and only if their sets X are the same. 
Hence there is a bijection between subsets of S and standard parabolics.[5] More generally, this bijection extends to conjugacy classes of parabolic subgroups.[6] Tits's simplicity theorem BN-pairs can be used to prove that many groups of Lie type are simple modulo their centers. More precisely, if G has a BN-pair such that B is a solvable group, the intersection of all conjugates of B is trivial, and the set of generators of W cannot be decomposed into two non-empty commuting sets, then G is simple whenever it is a perfect group. In practice all of these conditions except for G being perfect are easy to check. Checking that G is perfect needs some slightly messy calculations (and in fact there are a few small groups of Lie type which are not perfect). But showing that a group is perfect is usually far easier than showing it is simple. Citations 1. Abramenko & Brown 2008, p. 319, Theorem 6.5.6(1). 2. Borel 1991, p. 236, Theorem 21.15. 3. Bourbaki 1981, p. 25, Théorème 1. 4. Bourbaki 1981, p. 29, Théorème 4(iv). 5. Bourbaki 1981, p. 27, Théorème 3. 6. Bourbaki 1981, p. 29, Théorème 4. References • Abramenko, Peter; Brown, Kenneth S. (2008). Buildings. Theory and Applications. Springer. ISBN 978-0-387-78834-0. MR 2439729. Zbl 1214.20033. Section 6.2.6 discusses BN pairs. • Borel, Armand (1991) [1969], Linear Algebraic Groups, Graduate Texts in Mathematics, vol. 126 (2nd ed.), New York: Springer Nature, doi:10.1007/978-1-4612-0941-6, ISBN 0-387-97370-2, MR 1102012 • Bourbaki, Nicolas (1981). Lie Groups and Lie Algebras: Chapters 4–6. Elements of Mathematics (in French). Hermann. ISBN 2-225-76076-4. MR 0240238. Zbl 0483.22001. Chapitre IV, § 2 is the standard reference for BN pairs. • Bourbaki, Nicolas (2002). Lie Groups and Lie Algebras: Chapters 4–6. Elements of Mathematics. Springer. ISBN 3-540-42650-7. MR 1890629. Zbl 0983.17001. • Serre, Jean-Pierre (2003). Trees. Springer. ISBN 3-540-44237-5. Zbl 1013.20001.
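The $\mathrm {GL} _{n}(K)$ example above can be checked directly in the smallest case. The Python sketch below (illustrative, not part of the original exposition) enumerates $\mathrm {GL} _{2}$ over the field with two elements, takes B to be the upper triangular matrices and W of order 2, and verifies the Bruhat decomposition G = BWB; as noted above, the Cartan subgroup T is trivial over this field.

```python
import itertools
import numpy as np

def gl2_f2():
    """All invertible 2x2 matrices over the field with two elements, as flattened 4-tuples."""
    for bits in itertools.product([0, 1], repeat=4):
        m = np.array(bits).reshape(2, 2)
        if (m[0, 0] * m[1, 1] - m[0, 1] * m[1, 0]) % 2 == 1:  # determinant nonzero mod 2
            yield tuple(bits)

def mul(a, b):
    """Multiply two matrices stored as flattened 4-tuples, reducing entries mod 2."""
    prod = np.array(a).reshape(2, 2) @ np.array(b).reshape(2, 2) % 2
    return tuple(prod.flatten())

G = list(gl2_f2())
B = [g for g in G if g[2] == 0]   # upper triangular: lower-left entry is 0
w0 = (1, 0, 0, 1)                 # identity, representing the trivial Weyl group element
w1 = (0, 1, 1, 0)                 # the nontrivial Weyl group element (row swap)

# Bruhat decomposition: every element of G lies in B w B for w = w0 or w1.
cosets = {w: {mul(mul(b1, w), b2) for b1 in B for b2 in B} for w in (w0, w1)}
assert set(G) == cosets[w0] | cosets[w1]
print(len(G), "elements of GL_2(F_2);", len(cosets[w0]), "and", len(cosets[w1]),
      "elements in the two double cosets")
```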
Sedrakyan's inequality The following inequality is known as Sedrakyan's inequality, Bergström's inequality, Engel's form or Titu's lemma, respectively, referring to the article About the applications of one useful inequality of Nairi Sedrakyan published in 1997,[1] to the book Problem-solving strategies of Arthur Engel published in 1998 and to the book Mathematical Olympiad Treasures of Titu Andreescu published in 2003.[2][3] It is a direct consequence of the Cauchy–Bunyakovsky–Schwarz inequality. Nevertheless, in his 1997 article Sedrakyan observed that, written in this form, the inequality can be used as a mathematical proof technique and has many useful new applications. Several generalizations of this inequality are provided in the book Algebraic Inequalities (Sedrakyan).[4] Statement of the inequality For any reals $a_{1},a_{2},a_{3},\ldots ,a_{n}$ and positive reals $b_{1},b_{2},b_{3},\ldots ,b_{n},$ we have ${\frac {a_{1}^{2}}{b_{1}}}+{\frac {a_{2}^{2}}{b_{2}}}+\cdots +{\frac {a_{n}^{2}}{b_{n}}}\geq {\frac {\left(a_{1}+a_{2}+\cdots +a_{n}\right)^{2}}{b_{1}+b_{2}+\cdots +b_{n}}}.$ (Nairi Sedrakyan (1997), Arthur Engel (1998), Titu Andreescu (2003)) Probabilistic statement Similarly to the Cauchy–Schwarz inequality, one can generalize Sedrakyan's inequality to random variables. In this formulation let $X$ be a real random variable, and let $Y$ be a positive random variable. X and Y need not be independent, but we assume $E[|X|]$ and $E[Y]$ are both defined. Then $\operatorname {E} [X^{2}/Y]\geq \operatorname {E} [|X|]^{2}/\operatorname {E} [Y]\geq \operatorname {E} [X]^{2}/\operatorname {E} [Y].$ Direct applications Example 1. Nesbitt's inequality. For positive real numbers $a,b,c:$ ${\frac {a}{b+c}}+{\frac {b}{a+c}}+{\frac {c}{a+b}}\geq {\frac {3}{2}}.$ Example 2. International Mathematical Olympiad (IMO) 1995. For positive real numbers $a,b,c$, where $abc=1$ we have that ${\frac {1}{a^{3}(b+c)}}+{\frac {1}{b^{3}(a+c)}}+{\frac {1}{c^{3}(a+b)}}\geq {\frac {3}{2}}.$ Example 3. For positive real numbers $a,b$ we have that $8(a^{4}+b^{4})\geq (a+b)^{4}.$ Example 4. For positive real numbers $a,b,c$ we have that ${\frac {1}{a+b}}+{\frac {1}{b+c}}+{\frac {1}{a+c}}\geq {\frac {9}{2(a+b+c)}}.$ Proofs Example 1. Proof: Use $n=3,$ $\left(a_{1},a_{2},a_{3}\right):=(a,b,c),$ and $\left(b_{1},b_{2},b_{3}\right):=(a(b+c),b(c+a),c(a+b))$ to conclude: ${\frac {a^{2}}{a(b+c)}}+{\frac {b^{2}}{b(c+a)}}+{\frac {c^{2}}{c(a+b)}}\geq {\frac {(a+b+c)^{2}}{a(b+c)+b(c+a)+c(a+b)}}={\frac {a^{2}+b^{2}+c^{2}+2(ab+bc+ca)}{2(ab+bc+ca)}}={\frac {a^{2}+b^{2}+c^{2}}{2(ab+bc+ca)}}+1\geq {\frac {1}{2}}(1)+1={\frac {3}{2}}.\blacksquare $ Example 2. We have that ${\frac {{\Big (}{\frac {1}{a}}{\Big )}^{2}}{a(b+c)}}+{\frac {{\Big (}{\frac {1}{b}}{\Big )}^{2}}{b(a+c)}}+{\frac {{\Big (}{\frac {1}{c}}{\Big )}^{2}}{c(a+b)}}\geq {\frac {{\Big (}{\frac {1}{a}}+{\frac {1}{b}}+{\frac {1}{c}}{\Big )}^{2}}{2(ab+bc+ac)}}={\frac {ab+bc+ac}{2a^{2}b^{2}c^{2}}}\geq {\frac {3{\sqrt[{3}]{a^{2}b^{2}c^{2}}}}{2a^{2}b^{2}c^{2}}}={\frac {3}{2}}.$ Example 3. We have ${\frac {a^{2}}{1}}+{\frac {b^{2}}{1}}\geq {\frac {(a+b)^{2}}{2}}$ so that $a^{4}+b^{4}={\frac {\left(a^{2}\right)^{2}}{1}}+{\frac {\left(b^{2}\right)^{2}}{1}}\geq {\frac {\left(a^{2}+b^{2}\right)^{2}}{2}}\geq {\frac {\left({\frac {(a+b)^{2}}{2}}\right)^{2}}{2}}={\frac {(a+b)^{4}}{8}}.$ Example 4. We have that ${\frac {1}{a+b}}+{\frac {1}{b+c}}+{\frac {1}{a+c}}\geq {\frac {(1+1+1)^{2}}{2(a+b+c)}}={\frac {9}{2(a+b+c)}}.$ References 1. Sedrakyan, Nairi (1997). "About the applications of one useful inequality". Kvant Journal. pp. 42–44, 97(2), Moscow. 2. Sedrakyan, Nairi (1997). A useful inequality. Springer International publishing. p. 107. ISBN 9783319778365. 3. "Statement of the inequality". Brilliant Math & Science. 2018. 4. Sedrakyan, Nairi (2018). Algebraic inequalities. Springer International publishing. pp. 107–109.
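Since Sedrakyan's inequality and Example 1 above lend themselves to quick numerical experiments, here is a short Python sanity check on random inputs (illustrative only; it is of course not a proof).

```python
import random

def sedrakyan_holds(a, b, tol=1e-9):
    """Check a_1^2/b_1 + ... + a_n^2/b_n >= (a_1 + ... + a_n)^2 / (b_1 + ... + b_n)."""
    lhs = sum(ai ** 2 / bi for ai, bi in zip(a, b))
    rhs = sum(a) ** 2 / sum(b)
    return lhs >= rhs - tol * max(1.0, abs(rhs))

for _ in range(1000):
    n = random.randint(1, 6)
    a = [random.uniform(-10.0, 10.0) for _ in range(n)]   # arbitrary reals
    b = [random.uniform(0.1, 10.0) for _ in range(n)]     # positive reals
    assert sedrakyan_holds(a, b)

# Example 1 (Nesbitt's inequality): a/(b+c) + b/(a+c) + c/(a+b) >= 3/2.
for _ in range(1000):
    a, b, c = (random.uniform(0.1, 10.0) for _ in range(3))
    assert a / (b + c) + b / (a + c) + c / (a + b) >= 1.5 - 1e-9

print("all random checks passed")
```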
To Mock a Mockingbird To Mock a Mockingbird and Other Logic Puzzles: Including an Amazing Adventure in Combinatory Logic (1985, ISBN 0-19-280142-2) is a book by the mathematician and logician Raymond Smullyan. It contains many nontrivial recreational puzzles of the sort for which Smullyan is well known. It is also a gentle and humorous introduction to combinatory logic and the associated metamathematics, built on an elaborate ornithological metaphor. To Mock a Mockingbird and Other Logic Puzzles: Including an Amazing Adventure in Combinatory Logic AuthorRaymond Smullyan CountryUnited States LanguageEnglish PublisherKnopf Publication date 1985 Media typePrint (Paperback) Pages246 ISBN0-19-280142-2 OCLC248314322 Combinatory logic, functionally equivalent to the lambda calculus, is a branch of symbolic logic having the expressive power of set theory, and with deep connections to questions of computability and provability. Smullyan's exposition takes the form of an imaginary account of two men going into a forest and discussing the unusual "birds" (combinators) they find there (bird watching was a hobby of one of the founders of combinatory logic, Haskell Curry, and another founder Moses Schönfinkel's name means beautiful bird). Each species of bird in Smullyan's forest stands for a particular kind of combinator appearing in the conventional treatment of combinatory logic. Each bird has a distinctive call, which it emits when it hears the call of another bird. Hence an initial call by certain "birds" gives rise to a cascading sequence of calls by a succession of birds. Deep inside the forest dwells the Mockingbird, which imitates other birds hearing themselves. The resulting cascade of calls and responses analogizes to abstract models of computing. With this analogy in hand, one can explore advanced topics in the mathematical theory of computability, such as Church–Turing computability and Gödel's theorem. While the book starts off with simple riddles, it eventually shifts to a tale of Inspector Craig of Scotland Yard, who appears in Smullyan's other books; traveling from forest to forest, learning from different professors about all the different kinds of birds. He starts off in a certain enchanted forest, then goes to an unnamed forest, then to Curry's Forest (named after Haskell Curry), then to Russell's Forest, then to The Forest Without a Name, then to Gödel's Forest and finally to The Master Forest where he also answers The Grand Question. See also • SKI combinator calculus • B, C, K, W system • Fixed-point combinator • Lambda calculus • Logic puzzle • Brain teaser • Paradox External links • Keenan, David C. (2001) "To Dissect a Mockingbird." • Rathman, Chris, "Combinator Birds."
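To make the ornithological metaphor concrete, here is a small Python sketch, not taken from the book, in which each bird is a function and hearing a call is function application.

```python
# A few of the standard "birds" (combinators) behind the book's metaphor.
I = lambda x: x                                 # Identity bird: I x = x
K = lambda x: lambda y: x                       # Kestrel:       K x y = x
S = lambda x: lambda y: lambda z: x(z)(y(z))    # Starling:      S x y z = x z (y z)
M = lambda x: x(x)                              # Mockingbird:   M x = x x

# The identity bird can be derived from S and K: S K K x = K x (K x) = x.
derived_I = S(K)(K)
assert derived_I(42) == 42 and I(42) == 42

# The Mockingbird answers any bird x the way x answers itself, so M I = I I = I.
assert M(I) is I
# By contrast, M(M) = M M = M M = ... never settles; evaluating it in Python
# recurses until the interpreter's recursion limit is hit, mirroring the
# fixed-point phenomena discussed later in the book.

print("I, K, S and M behave as expected on these examples")
```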
Toads and Frogs The combinatorial game Toads and Frogs is a partisan game invented by Richard Guy. This mathematical game was used as an introductory game in the book Winning Ways for your Mathematical Plays.[1] Known for its simplicity and the elegance of its rules, Toads-and-Frogs is useful to illustrate the main concepts of combinatorial game theory. In particular, it is not difficult to evaluate simple games involving only one toad and one frog, by constructing the game tree of the starting position.[1] However, the general case of evaluating an arbitrary position is known to be NP-hard. There are some open conjectures on the value of some remarkable positions. A one-player puzzle version of the game has also been considered. Rules Toads and Frogs is played on a 1 × n strip of squares. At any time, each square is either empty or occupied by a single toad or frog. Although the game may start at any configuration, it is customary to begin with toads occupying consecutive squares on the leftmost end and frogs occupying consecutive squares on the rightmost end of the strip. When it is the Left player's turn to move, they may either move a toad one square to the right, into an empty square, or "hop" a toad two squares to the right, over a frog, into an empty square. Hops over an empty square, a toad, or more than one square are not allowed. Analogous rules apply for Right: on a turn, the Right player may move a frog left into a neighboring empty space, or hop a frog over a single toad into an empty square immediately to the toad's left. Under the normal play rule conventional for combinatorial game theory, the first player to be unable to move on their turn loses. Notation A position of Toads-and-Frogs may be represented with a string of three characters : $T$ for a toad, $F$ for a frog, and $\square $ for an empty space. For example, the string $T\square \square F$ represents a strip of four squares with a toad on the first one, and a frog on the last one. In combinatorial game theory, a position can be described recursively in terms of its options, i.e. the positions that the Left player and the Right player can move to. If Left can move from a position $P$ to the positions $L_{1}$, $L_{2}$, ... and Right to the positions $R_{1}$, $R_{2}$, ..., then the position $P$ is written conventionally $P=\{L_{1},L_{2},\dots |R_{1},R_{2},\dots \}.$ In this notation, for example, $T\square \square F=\{\square T\square F|T\square F\square \}$. This means that Left can move a toad one square to the right, and Right can move a frog one square to the left. Game-theoretic values Most of the research around Toads-and-Frogs has been around determining the game-theoretic values of some particular Toads-and-Frogs positions, or determining whether some particular values can arise in the game. Winning Ways for your Mathematical Plays showed first numerous possible values. For example, : $T\square \square F=0$ $TF\square \square =1$ $T\square F\square ={\frac {1}{2}}$ $TFT\square F=\{0|0\}=\star $ $T\square TFF=\{0|\star \}=\uparrow $ In 1996, Jeff Erickson proved that for any dyadic rational number q (which are the only numbers that can arise in finite games), there exists a Toads-and-Frogs positions with value q. He also found an explicit formula for some remarkable positions, like $T^{a}\square ^{b}F$, and formulated six conjectures on the values of other positions and the hardness of the game.[2] These conjectures fueled further research. 
Jesse Hull proved conjecture 6 in 2000,[3] which states that determining the value of an arbitrary Toads-and-Frogs position is NP-hard. Doron Zeilberger and Thotsaporn Aek Thanatipanonda proved conjecture 1, 2 and 3 and found a counter-example to conjecture 4 in 2008.[4] Conjecture 5, the last one still open, states that $T^{a}\square ^{b}F^{a}$ is an infinitesimal value, for all (a, b) except (3, 2). Single-player puzzle It is possible for a game of Toads and Frogs to end early. A one-player puzzle version of the Toads and Frogs game, published in 1883 by Édouard Lucas, asks for a sequence of moves beginning in the standard starting position that lasts as long as possible, ending with all of the toads on the right and all of the frogs on the left. The moves are not required to alternate between toads and frogs.[5] References 1. Berlekamp, Elwyn R.; Conway, John H.; Guy, Richard K. (2001), "Toads-and-Frogs", Winning Ways for your Mathematical Plays, vol. 1 (2nd ed.), A K Peters, pp. 12–13 2. Erickson, Jeff (1996), "New Toads and Frogs results", in Nowakowski, Richard J. (ed.), Games of No Chance, Mathematical Sciences Research Institute Publications, vol. 29, Cambridge University Press, pp. 299–310 3. As mentioned both by Erickson on his website and Thanatipanonda in his paper. 4. Thanatipanonda, Thotsaporn (2011), "Further hopping with Toads and Frogs", Electronic Journal of Combinatorics, 18 (1): P67:1–P67:12, arXiv:0804.0640, doi:10.37236/554, MR 2788684, S2CID 35020735 5. Levitin, Anany (2011). "Toads and Frogs". Algorithmic Puzzles. Oxford University Press. p. 53.
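The move rules above are easy to mechanise. The following Python sketch, not taken from the cited papers, lists the Left and Right options of a position written as a string of 'T', 'F' and '.' for toad, frog and empty square; on the position T..F it reproduces the options given in the notation section.

```python
def left_options(pos):
    """Left moves a toad one square right into a space, or hops it right over one frog."""
    opts = []
    for i, c in enumerate(pos):
        if c != 'T':
            continue
        if i + 1 < len(pos) and pos[i + 1] == '.':
            opts.append(pos[:i] + '.T' + pos[i + 2:])
        elif i + 2 < len(pos) and pos[i + 1] == 'F' and pos[i + 2] == '.':
            opts.append(pos[:i] + '.FT' + pos[i + 3:])
    return opts

def right_options(pos):
    """Right's moves are the mirror image: reverse the strip and swap the roles of T and F."""
    swap = str.maketrans('TF', 'FT')
    mirrored = pos[::-1].translate(swap)
    return [o[::-1].translate(swap) for o in left_options(mirrored)]

# The example from the notation section: T..F = { .T.F | T.F. }.
print(left_options('T..F'), right_options('T..F'))
```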
Tobler's hiking function Tobler's hiking function is an exponential function determining the hiking speed, taking into account the slope angle.[1][2][3] It was formulated by Waldo Tobler. This function was estimated from empirical data of Eduard Imhof.[4] Formula Walking velocity: $W=6e^{-3.5\left\vert {\frac {dh}{dx}}+0.05\right\vert }$ with ${\frac {dh}{dx}}=S=\tan \theta ,$ where W = walking velocity [km/h],[2] dh = elevation difference, dx = distance, S = slope, and θ = angle of slope (inclination). On flat terrain the formula gives a speed of about 5 km/h; the maximum speed of 6 km/h is achieved on a slight downhill slope of roughly -2.86°.[5] For off-path travel, this value should be multiplied by 3/5, for horseback by 5/4.[1] Pace Pace is the reciprocal of speed.[6][7] For Tobler's hiking function it can be calculated from the following conversion:[7] $p=0.6e^{3.5\left\vert m+0.05\right\vert }$ where p = pace [s/m] and m = gradient uphill or downhill (dh/dx = S in Tobler's formula). Sample values
Slope (deg) | Gradient (dh/dx) | Speed (km/h) | Speed (mi/h) | Pace (min/km) | Pace (min/mi) | Pace (s/m)
-60 | -1.73 | 0.02 | 0.01 | 3603.9 | 5799.9 | 216.23
-50 | -1.19 | 0.11 | 0.07 | 543.9 | 875.3 | 32.63
-40 | -0.84 | 0.38 | 0.24 | 158.3 | 254.7 | 9.50
-30 | -0.58 | 0.95 | 0.59 | 63.3 | 101.9 | 3.80
-25 | -0.47 | 1.40 | 0.87 | 42.9 | 69.1 | 2.58
-20 | -0.36 | 2.00 | 1.24 | 30.0 | 48.3 | 1.80
-15 | -0.27 | 2.80 | 1.74 | 21.4 | 34.5 | 1.29
-10 | -0.18 | 3.86 | 2.40 | 15.6 | 25.0 | 0.93
-5 | -0.09 | 5.26 | 3.27 | 11.4 | 18.3 | 0.68
-2.8624 | -0.05 | 6.00 | 3.73 | 10.0 | 16.1 | 0.60
0 | 0 | 5.04 | 3.13 | 11.9 | 19.2 | 0.71
1 | 0.02 | 4.74 | 2.94 | 12.7 | 20.4 | 0.76
5 | 0.09 | 3.71 | 2.30 | 16.2 | 26.0 | 0.97
10 | 0.18 | 2.72 | 1.69 | 22.1 | 35.5 | 1.32
15 | 0.27 | 1.97 | 1.23 | 30.4 | 49.0 | 1.83
20 | 0.36 | 1.41 | 0.88 | 42.6 | 68.5 | 2.56
25 | 0.47 | 0.98 | 0.61 | 60.9 | 98.1 | 3.66
30 | 0.58 | 0.67 | 0.41 | 89.9 | 144.6 | 5.39
40 | 0.84 | 0.27 | 0.17 | 224.6 | 361.5 | 13.48
50 | 1.19 | 0.08 | 0.05 | 771.8 | 1242.1 | 46.31
See also • Naismith's rule • Preferred walking speed References 1. Tobler, Waldo (February 1993). "Three presentations on geographical analysis and modeling: Non-isotropic geographic modeling speculations on the geometry of geography global spatial analysis" (PDF). Technical Report. National center for geographic information and analysis. 93 (1). Retrieved 21 March 2013. Available also in HTML format. 2. Magyari-Sáska, Zsolt; Dombay, Ştefan (2012). "Determining minimum hiking time using DEM" (PDF). Geographia Napocensis. Academia Romana − Filiala Cluj Colectivul de Geografie. Anul VI (2): 124–129. Retrieved 21 March 2013. 3. Kondo, Yasuhisa; Seino, Yoichi (2010). "GPS-aided Walking Experiments and Data-driven Travel Cost Modeling on the Historical Road of Nakasendō-Kisoji (Central Highland Japan)". In Frischer, Bernard (ed.). Making history interactive: computer applications and quantitative methods in archaeology (CAA); proceedings of the 37th international conference, Williamsburg, Virginia, United States of America, March 22−26, 2009. BAR International Series. Oxford u.a.: Archaeopress. pp. 158–165. Retrieved 21 March 2013. 4. Imhof, Eduard (1950). Gelaende und Karte. Rentsch, Zurich. 5. Analyzing Tobler's Hiking Function and Naismith's Rule Using Crowd-Sourced GPS Data. Erik Irtenkauf. The Pennsylvania State University. May 2014 6. Kay, A. (2012). "Route Choice in Hilly Terrain" (PDF). Geogr Anal. 44 (2): 87–108. CiteSeerX 10.1.1.391.1203. doi:10.1111/j.1538-4632.2012.00838.x. Archived from the original (PDF) on 2012-11-14. Retrieved 19 January 2017. 7. Kay, A. (November 2012). "Pace and critical gradient for hill runners: an analysis of race records" (PDF). Journal of Quantitative Analysis in Sports. 8 (4). doi:10.1515/1559-0410.1456. ISSN 1559-0410. Retrieved 19 January 2017.
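The walking-speed formula and the pace conversion above translate directly into code. A minimal Python sketch follows; the function names and the optional off-path and horseback arguments are just a convenient packaging of the multipliers mentioned in the article.

```python
import math

def tobler_speed(gradient, off_path=False, horseback=False):
    """Walking speed in km/h for a gradient dh/dx = tan(slope angle)."""
    w = 6.0 * math.exp(-3.5 * abs(gradient + 0.05))
    if off_path:
        w *= 3.0 / 5.0      # off-path factor from the article
    if horseback:
        w *= 5.0 / 4.0      # horseback factor from the article
    return w

def tobler_pace(gradient):
    """Pace in seconds per metre: p = 0.6 * exp(3.5 * |m + 0.05|)."""
    return 0.6 * math.exp(3.5 * abs(gradient + 0.05))

# Reproduce a few rows of the sample-value table: flat ground, a 10 degree climb
# and a 10 degree descent.
for slope_deg in (0.0, 10.0, -10.0):
    m = math.tan(math.radians(slope_deg))
    print(f"{slope_deg:+6.1f} deg  {tobler_speed(m):5.2f} km/h  {tobler_pace(m):5.2f} s/m")
```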
Toby Gee Toby Stephen Gee (born 2 January 1980) is a British mathematician working in number theory and arithmetic aspects of the Langlands Program. He specialises in algebraic number theory. Toby Stephen Gee. Born: 2 January 1980. Alma mater: Trinity College, Cambridge. Awards: Whitehead Prize (2012); Leverhulme Prize (2012); Fellow of the American Mathematical Society (2013). Scientific career. Fields: Mathematics. Institutions: Imperial College London; Harvard University. Thesis: Companion Forms Over Totally Real Fields (2004). Doctoral advisor: Kevin Buzzard. Gee was awarded the Whitehead Prize in 2012,[1] the Leverhulme Prize in 2012,[2] and was elected as a Fellow of the American Mathematical Society in 2014.[3] Career Gee read mathematics at Trinity College, Cambridge, where he was Senior Wrangler in 2000. After completing his PhD with Kevin Buzzard at Imperial College in 2004, he was a Benjamin Peirce Assistant Professor at Harvard University until 2010. From 2010 to 2011 Gee was an assistant professor at Northwestern University, at which point he moved to Imperial College London, where he has been a professor since 2013. With Mark Kisin, he proved the Breuil–Mézard conjecture for potentially Barsotti–Tate representations,[4] and with Thomas Barnet-Lamb and David Geraghty, he proved the Sato–Tate conjecture for Hilbert modular forms.[5] One of his most influential ideas has been the introduction of a general 'philosophy of weights', which has immensely clarified some aspects of the emerging mod p Langlands philosophy.[6] References 1. "Prizes 2012" (PDF). London Mathematical Society. 2. "Philip Leverhulme Prize Winners 2012" (PDF). The Leverhulme Trust. 3. "2014 Class of the Fellows of the AMS" (PDF). American Mathematical Society. 4. Gee, Toby; Kisin, Mark (December 2014). "The Breuil–Mézard Conjecture For Potentially Barsotti–Tate Representations". Forum of Mathematics, Pi. 2. arXiv:1208.3179. doi:10.1017/fmp.2014.1. ISSN 2050-5086. S2CID 16351884. 5. Barnet-Lamb, Thomas; Gee, Toby; Geraghty, David (2011). "The Sato–Tate conjecture for Hilbert modular forms". Journal of the American Mathematical Society. 24 (2): 411–469. arXiv:0912.1054. doi:10.1090/S0894-0347-2010-00689-3. ISSN 0894-0347. S2CID 12534084. 6. "Prizewinners 2012". Bulletin of the London Mathematical Society. 45 (2): 421–428. 1 April 2013. doi:10.1112/blms/bdt015. ISSN 0024-6093. S2CID 247674468. External links • Toby Gee's Professional Webpage • Toby Gee's Curriculum Vitae • Toby Gee's results at International Mathematical Olympiad
Toda's theorem Toda's theorem is a result in computational complexity theory that was proven by Seinosuke Toda in his paper "PP is as Hard as the Polynomial-Time Hierarchy"[1] and was given the 1998 Gödel Prize. Statement The theorem states that the entire polynomial hierarchy PH is contained in PPP; this implies a closely related statement, that PH is contained in P#P. Definitions #P is the problem of exactly counting the number of solutions to a polynomially-verifiable question (that is, to a question in NP), while loosely speaking, PP is the problem of giving an answer that is correct more than half the time. The class P#P consists of all the problems that can be solved in polynomial time if you have access to instantaneous answers to any counting problem in #P (polynomial time relative to a #P oracle). Thus Toda's theorem implies that for any problem in the polynomial hierarchy there is a deterministic polynomial-time Turing reduction to a counting problem.[2] An analogous result in the complexity theory over the reals (in the sense of Blum–Shub–Smale real Turing machines) was proved by Saugata Basu and Thierry Zell in 2009[3] and a complex analogue of Toda's theorem was proved by Saugata Basu in 2011.[4] Proof The proof is broken into two parts. • First, it is established that $\Sigma ^{P}\cdot {\mathsf {BP}}\cdot \oplus {\mathsf {P}}\subseteq {\mathsf {BP}}\cdot \oplus {\mathsf {P}}$ The proof uses a variation of Valiant–Vazirani theorem. Because ${\mathsf {BP}}\cdot \oplus {\mathsf {P}}$ contains ${\mathsf {P}}$ and is closed under complement, it follows by induction that ${\mathsf {PH}}\subseteq {\mathsf {BP}}\cdot \oplus {\mathsf {P}}$. • Second, it is established that ${\mathsf {BP}}\cdot \oplus {\mathsf {P}}\subseteq {\mathsf {P}}^{\sharp P}$ Together, the two parts imply ${\mathsf {PH}}\subseteq {\mathsf {BP}}\cdot \oplus {\mathsf {P}}\subseteq {\mathsf {P}}\cdot \oplus {\mathsf {P}}\subseteq {\mathsf {P}}^{\sharp P}$ References 1. Toda, Seinosuke (October 1991). "PP is as Hard as the Polynomial-Time Hierarchy". SIAM Journal on Computing. 20 (5): 865–877. CiteSeerX 10.1.1.121.1246. doi:10.1137/0220053. ISSN 0097-5397. 2. 1998 Gödel Prize. Seinosuke Toda 3. Saugata Basu and Thierry Zell (2009); Polynomial Hierarchy, Betti Numbers and a Real Analogue of Toda's Theorem, in Foundations of Computational Mathematics 4. Saugata Basu (2011); A Complex Analogue of Toda's Theorem, in Foundations of Computational Mathematics
Toda bracket In mathematics, the Toda bracket is an operation on homotopy classes of maps, in particular on homotopy groups of spheres, named after Hiroshi Toda, who defined them and used them to compute homotopy groups of spheres in (Toda 1962). Definition See (Kochman 1990) or (Toda 1962) for more information. Suppose that $W{\stackrel {f}{\ \to \ }}X{\stackrel {g}{\ \to \ }}Y{\stackrel {h}{\ \to \ }}Z$ is a sequence of maps between spaces, such that the compositions $g\circ f$ and $h\circ g$ are both nullhomotopic. Given a space $A$, let $CA$ denote the cone of $A$. Then we get a (non-unique) map $F\colon CW\to Y$ induced by a homotopy from $g\circ f$ to a trivial map, which when post-composed with $h$ gives a map $h\circ F\colon CW\to Z$. Similarly we get a non-unique map $G\colon CX\to Z$ induced by a homotopy from $h\circ g$ to a trivial map, which when composed with $C_{f}\colon CW\to CX$, the cone of the map $f$, gives another map, $G\circ C_{f}\colon CW\to Z$. By joining these two cones on $W$ and the maps from them to $Z$, we get a map $\langle f,g,h\rangle \colon SW\to Z$ representing an element in the group $[SW,Z]$ of homotopy classes of maps from the suspension $SW$ to $Z$, called the Toda bracket of $f$, $g$, and $h$. The map $\langle f,g,h\rangle $ is not uniquely defined up to homotopy, because there was some choice in choosing the maps from the cones. Changing these maps changes the Toda bracket by adding elements of $h[SW,Y]$ and $[SX,Z]f$. There are also higher Toda brackets of several elements, defined when suitable lower Toda brackets vanish. This parallels the theory of Massey products in cohomology. The Toda bracket for stable homotopy groups of spheres The direct sum $\pi _{\ast }^{S}=\bigoplus _{k\geq 0}\pi _{k}^{S}$ of the stable homotopy groups of spheres is a supercommutative graded ring, where multiplication (called composition product) is given by composition of representing maps, and any element of non-zero degree is nilpotent (Nishida 1973). If f, g, and h are elements of $\pi _{\ast }^{S}$ with $f\cdot g=0$ and $g\cdot h=0$, there is a Toda bracket $\langle f,g,h\rangle $ of these elements. The Toda bracket is not quite an element of a stable homotopy group, because it is only defined up to addition of composition products of certain other elements. Hiroshi Toda used the composition product and Toda brackets to label many of the elements of homotopy groups. Cohen (1968) showed that every element of the stable homotopy groups of spheres can be expressed using composition products and higher Toda brackets in terms of certain well known elements, called Hopf elements. The Toda bracket for general triangulated categories In the case of a general triangulated category the Toda bracket can be defined as follows. Again, suppose that $W{\stackrel {f}{\ \to \ }}X{\stackrel {g}{\ \to \ }}Y{\stackrel {h}{\ \to \ }}Z$ is a sequence of morphisms in a triangulated category such that $g\circ f=0$ and $h\circ g=0$. Let $C_{f}$ denote the cone of f so we obtain an exact triangle $W{\stackrel {f}{\ \to \ }}X{\stackrel {i}{\ \to \ }}C_{f}{\stackrel {q}{\ \to \ }}W[1]$ The relation $g\circ f=0$ implies that g factors (non-uniquely) through $C_{f}$ as $X{\stackrel {i}{\ \to \ }}C_{f}{\stackrel {a}{\ \to \ }}Y$ for some $a$. Then, the relation $h\circ a\circ i=h\circ g=0$ implies that $h\circ a$ factors (non-uniquely) through W[1] as $C_{f}{\stackrel {q}{\ \to \ }}W[1]{\stackrel {b}{\ \to \ }}Z$ for some b. 
This b is (a choice of) the Toda bracket $\langle f,g,h\rangle $ in the group $\operatorname {hom} (W[1],Z)$. Convergence theorem There is a convergence theorem originally due to Moss[1] which states that special Massey products $\langle a,b,c\rangle $ of elements in the $E_{r}$-page of the Adams spectral sequence contain a permanent cycle, meaning it has an associated element in $\pi _{*}^{s}(\mathbb {S} )$, assuming the elements $a,b,c$ are permanent cycles.[2] (pp. 18–19) Moreover, these Massey products have a lift to a motivic Adams spectral sequence giving an element in the Toda bracket $\langle \alpha ,\beta ,\gamma \rangle $ in $\pi _{*,*}$ for elements $\alpha ,\beta ,\gamma $ lifting $a,b,c$. References 1. Moss, R. Michael F. (1970-08-01). "Secondary compositions and the Adams spectral sequence". Mathematische Zeitschrift. 115 (4): 283–310. doi:10.1007/BF01129978. ISSN 1432-1823. S2CID 122909581. 2. Isaksen, Daniel C.; Wang, Guozhen; Xu, Zhouli (2020-06-17). "More stable stems". arXiv:2001.04511 [math.AT]. • Cohen, Joel M. (1968), "The decomposition of stable homotopy.", Annals of Mathematics, Second Series, 87 (2): 305–320, doi:10.2307/1970586, JSTOR 1970586, MR 0231377, PMC 224450, PMID 16591550. • Kochman, Stanley O. (1990), "Toda brackets", Stable homotopy groups of spheres. A computer-assisted approach, Lecture Notes in Mathematics, vol. 1423, Berlin: Springer-Verlag, pp. 12–34, doi:10.1007/BFb0083797, ISBN 978-3-540-52468-7, MR 1052407. • Nishida, Goro (1973), "The nilpotency of elements of the stable homotopy groups of spheres", Journal of the Mathematical Society of Japan, 25 (4): 707–732, doi:10.2969/jmsj/02540707, ISSN 0025-5645, MR 0341485. • Toda, Hiroshi (1962), Composition methods in homotopy groups of spheres, Annals of Mathematics Studies, vol. 49, Princeton University Press, ISBN 978-0-691-09586-8, MR 0143217.
Toda lattice The Toda lattice, introduced by Morikazu Toda (1967), is a simple model for a one-dimensional crystal in solid state physics. It is famous because it is one of the earliest examples of a non-linear completely integrable system. It is given by a chain of particles with nearest neighbor interaction, described by the Hamiltonian ${\begin{aligned}H(p,q)&=\sum _{n\in \mathbb {Z} }\left({\frac {p(n,t)^{2}}{2}}+V(q(n+1,t)-q(n,t))\right)\end{aligned}}$ and the equations of motion ${\begin{aligned}{\frac {d}{dt}}p(n,t)&=-{\frac {\partial H(p,q)}{\partial q(n,t)}}=e^{-(q(n,t)-q(n-1,t))}-e^{-(q(n+1,t)-q(n,t))},\\{\frac {d}{dt}}q(n,t)&={\frac {\partial H(p,q)}{\partial p(n,t)}}=p(n,t),\end{aligned}}$ where $q(n,t)$ is the displacement of the $n$-th particle from its equilibrium position, and $p(n,t)$ is its momentum (mass $m=1$), and the Toda potential $V(r)=e^{-r}+r-1$. Soliton solutions Soliton solutions are solitary waves spreading in time with no change to their shape and size and interacting with each other in a particle-like way. The general N-soliton solution of the equation is ${\begin{aligned}q_{N}(n,t)=q_{+}+\log {\frac {\det(\mathbb {I} +C_{N}(n,t))}{\det(\mathbb {I} +C_{N}(n+1,t))}},\end{aligned}}$ where $C_{N}(n,t)={\Bigg (}{\frac {\sqrt {\gamma _{i}(n,t)\gamma _{j}(n,t)}}{1-e^{\kappa _{i}+\kappa _{j}}}}{\Bigg )}_{1<i,j<N},$ with $\gamma _{j}(n,t)=\gamma _{j}\,e^{-2\kappa _{j}n-2\sigma _{j}\sinh(\kappa _{j})t}$ where $\kappa _{j},\gamma _{j}>0$ and $\sigma _{j}\in \{\pm 1\}$. Integrability The Toda lattice is a prototypical example of a completely integrable system. To see this one uses Flaschka's variables $a(n,t)={\frac {1}{2}}{\rm {e}}^{-(q(n+1,t)-q(n,t))/2},\qquad b(n,t)=-{\frac {1}{2}}p(n,t)$ such that the Toda lattice reads ${\begin{aligned}{\dot {a}}(n,t)&=a(n,t){\Big (}b(n+1,t)-b(n,t){\Big )},\\{\dot {b}}(n,t)&=2{\Big (}a(n,t)^{2}-a(n-1,t)^{2}{\Big )}.\end{aligned}}$ To show that the system is completely integrable, it suffices to find a Lax pair, that is, two operators L(t) and P(t) in the Hilbert space of square summable sequences $\ell ^{2}(\mathbb {Z} )$ such that the Lax equation ${\frac {d}{dt}}L(t)=[P(t),L(t)]$ (where [L, P] = LP - PL is the Lie commutator of the two operators) is equivalent to the time derivative of Flaschka's variables. The choice ${\begin{aligned}L(t)f(n)&=a(n,t)f(n+1)+a(n-1,t)f(n-1)+b(n,t)f(n),\\P(t)f(n)&=a(n,t)f(n+1)-a(n-1,t)f(n-1).\end{aligned}}$ where f(n+1) and f(n-1) are the shift operators, implies that the operators L(t) for different t are unitarily equivalent. The matrix $L(t)$ has the property that its eigenvalues are invariant in time. These eigenvalues constitute independent integrals of motion, therefore the Toda lattice is completely integrable. In particular, the Toda lattice can be solved by virtue of the inverse scattering transform for the Jacobi operator L. The main result implies that arbitrary (sufficiently fast) decaying initial conditions asymptotically for large t split into a sum of solitons and a decaying dispersive part. See also • Lax pair • Lie bialgebra • Poisson–Lie group References • Krüger, Helge; Teschl, Gerald (2009), "Long-time asymptotics of the Toda lattice for decaying initial data revisited", Rev. Math. Phys., 21 (1): 61–109, arXiv:0804.4693, Bibcode:2009RvMaP..21...61K, doi:10.1142/S0129055X0900358X, MR 2493113, S2CID 14214460 • Teschl, Gerald (2000), Jacobi Operators and Completely Integrable Nonlinear Lattices, Providence: Amer. Math. 
Soc., ISBN 978-0-8218-1940-1, MR 1711536 • Teschl, Gerald (2001), "Almost everything you always wanted to know about the Toda equation", Jahresbericht der Deutschen Mathematiker-Vereinigung, 103 (4): 149–162, MR 1879178 • Eugene Gutkin, Integrable Hamiltonians with Exponential Potential, Physica 16D (1985) 398-404. doi:10.1016/0167-2789(85)90017-X • Toda, Morikazu (1967), "Vibration of a chain with a non-linear interaction", J. Phys. Soc. Jpn., 22 (2): 431–436, Bibcode:1967JPSJ...22..431T, doi:10.1143/JPSJ.22.431 • Toda, Morikazu (1989), Theory of Nonlinear Lattices, Springer Series in Solid-State Sciences, vol. 20 (2 ed.), Berlin: Springer, doi:10.1007/978-3-642-83219-2, ISBN 978-0-387-10224-5, MR 0971987 External links • E. W. Weisstein, Toda Lattice at ScienceWorld • G. Teschl, The Toda Lattice • J Phys A Special issue on fifty years of the Toda lattice
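A small numerical sketch of the integrability claim above, under stated assumptions: it integrates a finite open Toda chain (the boundary terms e^{-(q_1 - q_0)} and e^{-(q_{N+1} - q_N)} are simply set to zero, i.e. a truncation of the doubly infinite lattice, not the lattice itself) and checks that the eigenvalues of Flaschka's tridiagonal Lax matrix stay numerically constant along the flow. The chain length, random initial data, and the use of SciPy's solve_ivp are illustrative choices.

```python
# Open N-particle Toda chain: dq_n/dt = p_n, dp_n/dt = e^{-(q_n - q_{n-1})} - e^{-(q_{n+1} - q_n)},
# with the missing boundary terms taken to be zero.  The spectrum of the Flaschka/Lax matrix
# L (b_n on the diagonal, a_n on the off-diagonals) should be conserved.
import numpy as np
from scipy.integrate import solve_ivp

N = 6
rng = np.random.default_rng(0)
q0, p0 = rng.normal(size=N), rng.normal(size=N)

def rhs(t, y):
    q, p = y[:N], y[N:]
    gap = np.exp(-np.diff(q))            # e^{-(q_{n+1} - q_n)} for neighbouring pairs
    f = np.zeros(N)
    f[1:] += gap                          # + e^{-(q_n - q_{n-1})}
    f[:-1] -= gap                         # - e^{-(q_{n+1} - q_n)}
    return np.concatenate([p, f])

def lax_matrix(q, p):
    a = 0.5 * np.exp(-np.diff(q) / 2)     # Flaschka's a_n
    b = -0.5 * p                          # Flaschka's b_n
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate([q0, p0]), rtol=1e-10, atol=1e-12)
q1, p1 = sol.y[:N, -1], sol.y[N:, -1]

ev0 = np.sort(np.linalg.eigvalsh(lax_matrix(q0, p0)))
ev1 = np.sort(np.linalg.eigvalsh(lax_matrix(q1, p1)))
print(np.max(np.abs(ev0 - ev1)))          # small (set by the integration tolerance): the spectrum is conserved
```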
Toda–Smith complex In mathematics, Toda–Smith complexes are spectra characterized by having a particularly simple BP-homology, and are useful objects in stable homotopy theory. Toda–Smith complexes provide examples of periodic self maps. These self maps were originally exploited in order to construct infinite families of elements in the homotopy groups of spheres. Their existence pointed the way towards the nilpotence and periodicity theorems.[1] Mathematical context The story begins with the degree $p$ map on $S^{1}$ (as a circle in the complex plane): $S^{1}\to S^{1}\,$ $z\mapsto z^{p}\,$ The degree $p$ map is well defined for $S^{k}$ in general, where $k\in \mathbb {N} $. If we apply the infinite suspension functor to this map, $\Sigma ^{\infty }S^{1}\to \Sigma ^{\infty }S^{1}=:\mathbb {S} ^{1}\to \mathbb {S} ^{1}$ and we take the cofiber of the resulting map: $S{\xrightarrow {p}}S\to S/p$ We find that $S/p$ has the remarkable property of coming from a Moore space (i.e., a designer (co)homology space: $H^{n}(X)\simeq Z/p$, and ${\tilde {H}}^{*}(X)$ is trivial for all $*\neq n$). It is also of note that the periodic maps, $\alpha _{t}$, $\beta _{t}$, and $\gamma _{t}$, come from degree maps between the Toda–Smith complexes, $V(0)_{k}$, $V(1)_{k}$, and $V(2)_{k}$ respectively. Formal definition The $n$th Toda–Smith complex, $V(n)$ where $n\in \{-1,0,1,2,3,\ldots \}$, is a finite spectrum which satisfies the property that its BP-homology, $BP_{*}(V(n)):=[\mathbb {S} ^{0},BP\wedge V(n)]$, is isomorphic to $BP_{*}/(p,\ldots ,v_{n})$. That is, Toda–Smith complexes are completely characterized by their $BP$-local properties, and are defined as any object $V(n)$ satisfying one of the following equations: ${\begin{aligned}BP_{*}(V(-1))&\simeq BP_{*}\\[6pt]BP_{*}(V(0))&\simeq BP_{*}/p\\[6pt]BP_{*}(V(1))&\simeq BP_{*}/(p,v_{1})\\[2pt]&{}\,\,\,\vdots \end{aligned}}$ It may help the reader to recall that $BP_{*}=\mathbb {Z} _{p}[v_{1},v_{2},...]$, $\deg v_{i}$ = $2(p^{i}-1)$. Examples of Toda–Smith complexes • the sphere spectrum, $BP_{*}(S^{0})\simeq BP_{*}$, which is $V(-1)$. • the mod p Moore spectrum, $BP_{*}(S/p)\simeq BP_{*}/p$, which is $V(0)$ References 1. James, I. M. (1995-07-18). Handbook of Algebraic Topology. Elsevier. ISBN 9780080532981.
Todd–Coxeter algorithm In group theory, the Todd–Coxeter algorithm, created by J. A. Todd and H. S. M. Coxeter in 1936, is an algorithm for solving the coset enumeration problem. Given a presentation of a group G by generators and relations and a subgroup H of G, the algorithm enumerates the cosets of H in G and describes the permutation representation of G on the space of the cosets (given by the left multiplication action). If the order of a group G is relatively small and the subgroup H is known to be uncomplicated (for example, a cyclic group), then the algorithm can be carried out by hand and gives a reasonable description of the group G. Using their algorithm, Coxeter and Todd showed that certain systems of relations between generators of known groups are complete, i.e. constitute systems of defining relations. The Todd–Coxeter algorithm can be applied to infinite groups and is known to terminate in a finite number of steps, provided that the index of H in G is finite. On the other hand, for a general pair consisting of a group presentation and a subgroup, its running time is not bounded by any computable function of the index of the subgroup and the size of the input data. Description of the algorithm One implementation of the algorithm proceeds as follows. Suppose that $G=\langle X\mid R\rangle $, where $X$ is a set of generators and $R$ is a set of relations and denote by $X'$ the set of generators $X$ and their inverses. Let $H=\langle h_{1},h_{2},\ldots ,h_{s}\rangle $ where the $h_{i}$ are words of elements of $X'$. There are three types of tables that will be used: a coset table, a relation table for each relation in $R$, and a subgroup table for each generator $h_{i}$ of $H$. Information is gradually added to these tables, and once they are filled in, all cosets have been enumerated and the algorithm terminates. The coset table is used to store the relationships between the known cosets when multiplying by a generator. It has rows representing cosets of $H$ and a column for each element of $X'$. Let $C_{i}$ denote the coset of the ith row of the coset table, and let $g_{j}\in X'$ denote the generator of the jth column. The entry of the coset table in row i, column j is defined to be (if known) k, where k is such that $C_{k}=C_{i}g_{j}$. The relation tables are used to detect when some of the cosets we have found are actually equivalent. One relation table for each relation in $R$ is maintained. Let $1=g_{n_{1}}g_{n_{2}}\cdots g_{n_{t}}$ be a relation in $R$, where $g_{n_{i}}\in X'$. The relation table has rows representing the cosets of $H$, as in the coset table. It has t columns, and the entry in the ith row and jth column is defined to be (if known) k, where $C_{k}=C_{i}g_{n_{1}}g_{n_{2}}\cdots g_{n_{j}}$. In particular, the $(i,t)$'th entry is initially i, since $g_{n_{1}}g_{n_{2}}\cdots g_{n_{t}}=1$. Finally, the subgroup tables are similar to the relation tables, except that they keep track of possible relations of the generators of $H$. For each generator $h_{n}=g_{n_{1}}g_{n_{2}}\cdots g_{n_{t}}$ of $H$, with $g_{n_{i}}\in X'$, we create a subgroup table. It has only one row, corresponding to the coset of $H$ itself. It has t columns, and the entry in the jth column is defined (if known) to be k, where $C_{k}=Hg_{n_{1}}g_{n_{2}}\cdots g_{n_{j}}$. When a row of a relation or subgroup table is completed, a new piece of information $C_{i}=C_{j}g$, $g\in X'$, is found. This is known as a deduction. 
From the deduction, we may be able to fill in additional entries of the relation and subgroup tables, resulting in possible additional deductions. We can fill in the entries of the coset table corresponding to the equations $C_{i}=C_{j}g$ and $C_{j}=C_{i}g^{-1}$. However, when filling in the coset table, it is possible that we may already have an entry for the equation, but the entry has a different value. In this case, we have discovered that two of our cosets are actually the same, known as a coincidence. Suppose $C_{i}=C_{j}$, with $i<j$. We replace all instances of j in the tables with i. Then, we fill in all possible entries of the tables, possibly leading to more deductions and coincidences. If there are empty entries in the table after all deductions and coincidences have been taken care of, add a new coset to the tables and repeat the process. We make sure that when adding cosets, if Hx is a known coset, then Hxg will be added at some point for all $g\in X'$. (This is needed to guarantee that the algorithm will terminate provided $|G:H|$ is finite.) When all the tables are filled, the algorithm terminates. We then have all needed information on the action of $G$ on the cosets of $H$. See also • Coxeter group References • Todd, J. A.; Coxeter, H. S. M. (1936). "A practical method for enumerating cosets of a finite abstract group". Proceedings of the Edinburgh Mathematical Society. Series II. 5: 26–34. doi:10.1017/S0013091500008221. JFM 62.1094.02. Zbl 0015.10103. • Coxeter, H. S. M.; Moser, W. O. J. (1980). Generators and Relations for Discrete Groups. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 14 (4th ed.). Springer-Verlag 1980. ISBN 3-540-09212-9. MR 0562913. • Seress, Ákos (1997). "An introduction to computational group theory" (PDF). Notices of the American Mathematical Society. 44 (6): 671–679. MR 1452069.
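The description above translates fairly directly into code. What follows is a hedged, minimal sketch of one HLT-style ("scan and fill") coset enumeration: it maintains the coset table, derives deductions by scanning relator and subgroup-generator words, and handles coincidences with a union-find merge, in the spirit of the deduction/coincidence discussion above. The function names (todd_coxeter, scan_and_fill, coincidence), the integer encoding of generators, and the particular filling strategy are illustrative choices, not notation or prescriptions from the article.

```python
def todd_coxeter(ngens, relators, subgens):
    """Enumerate cosets of H = <subgens> in G = <generators | relators>.
    Generator g is encoded as 2*g and its inverse as 2*g + 1; words are lists of
    such symbols.  Returns the live cosets (coset 0 is H itself)."""
    nsyms = 2 * ngens
    inv = lambda x: x ^ 1                 # inverse symbol
    table = [[None] * nsyms]              # coset table: table[coset][symbol] -> coset
    p = [0]                               # union-find array used for coincidences
    queue = []                            # dead cosets waiting to be processed

    def rep(c):                           # representative of a coset, with path compression
        while p[c] != c:
            p[c] = p[p[c]]
            c = p[c]
        return c

    def define(a, x):                     # introduce a new coset b with a^x = b
        b = len(table)
        table.append([None] * nsyms)
        p.append(b)
        table[a][x] = b
        table[b][inv(x)] = a

    def merge(a, b):                      # record that two cosets coincide
        a, b = rep(a), rep(b)
        if a != b:
            a, b = min(a, b), max(a, b)
            p[b] = a
            queue.append(b)

    def coincidence(a, b):                # process a coincidence and its consequences
        merge(a, b)
        while queue:
            d = queue.pop(0)              # a dead coset: move its row to its representative
            for x in range(nsyms):
                e = table[d][x]
                if e is None:
                    continue
                table[e][inv(x)] = None
                a1, e1 = rep(d), rep(e)
                if table[a1][x] is not None:
                    merge(e1, table[a1][x])
                elif table[e1][inv(x)] is not None:
                    merge(a1, table[e1][inv(x)])
                else:
                    table[a1][x] = e1
                    table[e1][inv(x)] = a1

    def scan_and_fill(a, word):           # scan a relator/subgroup word at coset a, filling gaps
        f, b = a, a
        i, j = 0, len(word) - 1
        while True:
            while i <= j and table[f][word[i]] is not None:
                f = table[f][word[i]]; i += 1
            if i > j:
                if f != b:
                    coincidence(f, b)
                return
            while j >= i and table[b][inv(word[j])] is not None:
                b = table[b][inv(word[j])]; j -= 1
            if j < i:
                coincidence(f, b)
                return
            if i == j:                    # a deduction closes the scan
                table[f][word[i]] = b
                table[b][inv(word[i])] = f
                return
            define(f, word[i])

    for w in subgens:                     # one "subgroup table" row, for coset 0 = H
        scan_and_fill(0, w)
    a = 0
    while a < len(table):                 # "relation tables": scan every relator at every live coset
        if rep(a) == a:
            for w in relators:
                scan_and_fill(a, w)
                if rep(a) != a:
                    break
            if rep(a) == a:
                for x in range(nsyms):    # keep the coset table row complete
                    if table[a][x] is None:
                        define(a, x)
        a += 1
    return [c for c in range(len(table)) if rep(c) == c]


# Example: G = <a, b | a^3 = b^2 = (ab)^2 = 1> (the symmetric group S3), H = <a>.
# Encoding: a -> 0, a^-1 -> 1, b -> 2, b^-1 -> 3.
relators = [[0, 0, 0], [2, 2], [0, 2, 0, 2]]
subgens = [[0]]
print(len(todd_coxeter(2, relators, subgens)))   # 2, the index of <a> in S3
```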
Todd class In mathematics, the Todd class is a certain construction now considered a part of the theory in algebraic topology of characteristic classes. The Todd class of a vector bundle can be defined by means of the theory of Chern classes, and is encountered where Chern classes exist — most notably in differential topology, the theory of complex manifolds and algebraic geometry. In rough terms, a Todd class acts like a reciprocal of a Chern class, or stands in relation to it as a conormal bundle does to a normal bundle. The Todd class plays a fundamental role in generalising the classical Riemann–Roch theorem to higher dimensions, in the Hirzebruch–Riemann–Roch theorem and the Grothendieck–Hirzebruch–Riemann–Roch theorem. History It is named for J. A. Todd, who introduced a special case of the concept in algebraic geometry in 1937, before the Chern classes were defined. The geometric idea involved is sometimes called the Todd-Eger class. The general definition in higher dimensions is due to Friedrich Hirzebruch. Definition To define the Todd class $\operatorname {td} (E)$ where $E$ is a complex vector bundle on a topological space $X$, it is usually possible to limit the definition to the case of a Whitney sum of line bundles, by means of a general device of characteristic class theory, the use of Chern roots (aka, the splitting principle). For the definition, let $Q(x)={\frac {x}{1-e^{-x}}}=1+{\dfrac {x}{2}}+\sum _{i=1}^{\infty }{\frac {B_{2i}}{(2i)!}}x^{2i}=1+{\dfrac {x}{2}}+{\dfrac {x^{2}}{12}}-{\dfrac {x^{4}}{720}}+\cdots $ be the formal power series with the property that the coefficient of $x^{n}$ in $Q(x)^{n+1}$ is 1, where $B_{i}$ denotes the $i$-th Bernoulli number. Consider the coefficient of $x^{j}$ in the product $\prod _{i=1}^{m}Q(\beta _{i}x)\ $ for any $m>j$. This is symmetric in the $\beta _{i}$s and homogeneous of weight $j$: so can be expressed as a polynomial $\operatorname {td} _{j}(p_{1},\ldots ,p_{j})$ in the elementary symmetric functions $p$ of the $\beta _{i}$s. Then $\operatorname {td} _{j}$ defines the Todd polynomials: they form a multiplicative sequence with $Q$ as characteristic power series. If $E$ has the $\alpha _{i}$ as its Chern roots, then the Todd class $\operatorname {td} (E)=\prod Q(\alpha _{i})$ which is to be computed in the cohomology ring of $X$ (or in its completion if one wants to consider infinite-dimensional manifolds). The Todd class can be given explicitly as a formal power series in the Chern classes as follows: $\operatorname {td} (E)=1+{\frac {c_{1}}{2}}+{\frac {c_{1}^{2}+c_{2}}{12}}+{\frac {c_{1}c_{2}}{24}}+{\frac {-c_{1}^{4}+4c_{1}^{2}c_{2}+c_{1}c_{3}+3c_{2}^{2}-c_{4}}{720}}+\cdots $ where the cohomology classes $c_{i}$ are the Chern classes of $E$, and lie in the cohomology group $H^{2i}(X)$. If $X$ is finite-dimensional then most terms vanish and $\operatorname {td} (E)$ is a polynomial in the Chern classes. Properties of the Todd class The Todd class is multiplicative: $\operatorname {td} (E\oplus F)=\operatorname {td} (E)\cdot \operatorname {td} (F).$ Let $\xi \in H^{2}({\mathbb {C} }P^{n})$ be the fundamental class of the hyperplane section. 
From multiplicativity and the Euler exact sequence for the tangent bundle of ${\mathbb {C} }P^{n}$ $0\to {\mathcal {O}}\to {\mathcal {O}}(1)^{n+1}\to T{\mathbb {C} }P^{n}\to 0,$ one obtains [1] $\operatorname {td} (T{\mathbb {C} }P^{n})=\left({\dfrac {\xi }{1-e^{-\xi }}}\right)^{n+1}.$ Computations of the Todd class For any algebraic curve $C$ the Todd class is just $\operatorname {td} (C)=1+{\tfrac {1}{2}}c_{1}(T_{C})$. Since $C$ is projective, it can be embedded into some $\mathbb {P} ^{n}$ and we can find $c_{1}(T_{C})$ using the normal sequence $0\to T_{C}\to T_{\mathbb {P} ^{n}}|_{C}\to N_{C/\mathbb {P} ^{n}}\to 0$ and properties of Chern classes. For example, if we have a degree $d$ plane curve in $\mathbb {P} ^{2}$, we find the total Chern class is ${\begin{aligned}c(T_{C})&={\frac {c(T_{\mathbb {P} ^{2}}|_{C})}{c(N_{C/\mathbb {P} ^{2}})}}\\&={\frac {1+3[H]}{1+d[H]}}\\&=(1+3[H])(1-d[H])\\&=1+(3-d)[H]\end{aligned}}$ where $[H]$ is the hyperplane class in $\mathbb {P} ^{2}$ restricted to $C$. Hirzebruch–Riemann–Roch formula Main article: Hirzebruch–Riemann–Roch theorem For any coherent sheaf F on a smooth compact complex manifold M, one has $\chi (F)=\int _{M}\operatorname {ch} (F)\wedge \operatorname {td} (TM),$ where $\chi (F)$ is its holomorphic Euler characteristic, $\chi (F):=\sum _{i=0}^{{\text{dim}}_{\mathbb {C} }M}(-1)^{i}{\text{dim}}_{\mathbb {C} }H^{i}(M,F),$ and $\operatorname {ch} (F)$ its Chern character. See also • Genus of a multiplicative sequence Notes 1. Intersection Theory Class 18, by Ravi Vakil References • Todd, J. A. (1937), "The Arithmetical Invariants of Algebraic Loci", Proceedings of the London Mathematical Society, 43 (1): 190–225, doi:10.1112/plms/s2-43.3.190, Zbl 0017.18504 • Friedrich Hirzebruch, Topological methods in algebraic geometry, Springer (1978) • M.I. Voitsekhovskii (2001) [1994], "Todd class", Encyclopedia of Mathematics, EMS Press
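As a quick symbolic sanity check of the definition, the following sketch (assuming SymPy is available) expands the characteristic power series Q(x) = x/(1 − e^{−x}) and verifies that, for a rank-2 bundle with Chern roots a and b, the product Q(a)Q(b) agrees with the stated expansion 1 + c1/2 + (c1² + c2)/12 up to degree 2. The auxiliary truncation variable t is only an implementation convenience.

```python
import sympy as sp

x, a, b, t = sp.symbols('x a b t')

# The characteristic power series Q(x) = x / (1 - e^{-x}).
Q = x / (1 - sp.exp(-x))
print(sp.series(Q, x, 0, 7))
# expected: 1 + x/2 + x**2/12 - x**4/720 + x**6/30240 + O(x**7)

# Todd class of a rank-2 bundle with Chern roots a, b, kept up to total degree 2.
Qa = sp.series(Q.subs(x, a), a, 0, 3).removeO()
Qb = sp.series(Q.subs(x, b), b, 0, 3).removeO()
prod = sp.expand(Qa * Qb)
truncated = sp.series(prod.subs({a: t * a, b: t * b}), t, 0, 3).removeO().subs(t, 1)

c1, c2 = a + b, a * b                       # elementary symmetric functions of the Chern roots
claimed = 1 + c1 / 2 + (c1**2 + c2) / 12    # low-degree terms of td(E) quoted in the article
print(sp.simplify(sp.expand(truncated - claimed)))   # 0
```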
Todorov surface In algebraic geometry, a Todorov surface is one of a class of surfaces of general type introduced by Todorov (1981) for which the conclusion of the Torelli theorem does not hold. References • Morrison, David R. (1988), "On the moduli of Todorov surfaces", Algebraic geometry and commutative algebra, vol. I, Tokyo: Kinokuniya, pp. 313–355, MR 0977767 • Todorov, Andrei N. (1981), "A construction of surfaces with pg = 1, q = 0 and 2 ≤ (K2) ≤ 8. Counterexamples of the global Torelli theorem.", Invent. Math., 63 (2): 287–304, doi:10.1007/BF01393879, MR 0610540
Inscribed square problem The inscribed square problem, also known as the square peg problem or Toeplitz' conjecture, is an unsolved question in geometry: Does every plane simple closed curve contain all four vertices of some square? This is true if the curve is convex or piecewise smooth and in other special cases. The problem was proposed by Otto Toeplitz in 1911.[1] Some early positive results were obtained by Arnold Emch[2] and Lev Schnirelmann.[3] As of 2020, the general case remains open.[4] Unsolved problem in mathematics: Does every Jordan curve have an inscribed square? Problem statement Let C be a Jordan curve. A polygon P is inscribed in C if all vertices of P belong to C. The inscribed square problem asks: Does every Jordan curve admit an inscribed square? It is not required that the vertices of the square appear along the curve in any particular order. Examples Some figures, such as circles and squares, admit infinitely many inscribed squares. If C is an obtuse triangle then it admits exactly one inscribed square; right triangles admit exactly two, and acute triangles admit exactly three.[5] Resolved cases It is tempting to attempt to solve the inscribed square problem by proving that a special class of well-behaved curves always contains an inscribed square, and then to approximate an arbitrary curve by a sequence of well-behaved curves and infer that there still exists an inscribed square as a limit of squares inscribed in the curves of the sequence. One reason this argument has not been carried out to completion is that the limit of a sequence of squares may be a single point rather than itself being a square. Nevertheless, many special cases of curves are now known to have an inscribed square.[6] Piecewise analytic curves Arnold Emch (1916) showed that piecewise analytic curves always have inscribed squares. In particular this is true for polygons. Emch's proof considers the curves traced out by the midpoints of secant line segments to the curve, parallel to a given line. He shows that, when these curves are intersected with the curves generated in the same way for a perpendicular family of secants, there are an odd number of crossings. Therefore, there always exists at least one crossing, which forms the center of a rhombus inscribed in the given curve. By rotating the two perpendicular lines continuously through a right angle, and applying the intermediate value theorem, he shows that at least one of these rhombi is a square.[6] Locally monotone curves Stromquist has proved that every locally monotone plane simple curve admits an inscribed square.[7] The local monotonicity condition requires that, near any point p, the curve C can be locally represented as the graph of a function y = f(x). In more precise terms, for any given point p on C, there is a neighborhood U(p) and a fixed direction n(p) (the direction of the “y-axis”) such that no chord of C in this neighborhood is parallel to n(p). Locally monotone curves include all types of polygons, all closed convex curves, and all piecewise C1 curves without any cusps. Curves without special trapezoids An even weaker condition on the curve than local monotonicity is that, for some ε > 0, the curve does not have any inscribed special trapezoids of size ε. A special trapezoid is an isosceles trapezoid with three equal sides, each longer than the fourth side, inscribed in the curve with a vertex ordering consistent with the clockwise ordering of the curve itself. 
Its size is the length of the part of the curve that extends around the three equal sides. Here, this length is measured in the domain of a fixed parametrization of C, as C may not be rectifiable. Instead of a limit argument, the proof is based on relative obstruction theory. This condition is open and dense in the space of all Jordan curves with respect to the compact-open topology. In this sense, the inscribed square problem is solved for generic curves.[6] Curves in annuli If a Jordan curve is inscribed in an annulus whose outer radius is at most 1 + √2 times its inner radius, and it is drawn in such a way that it separates the inner circle of the annulus from the outer circle, then it contains an inscribed square. In this case, if the given curve is approximated by some well-behaved curve, then any large squares that contain the center of the annulus and are inscribed in the approximation are topologically separated from smaller inscribed squares that do not contain the center. The limit of a sequence of large squares must again be a large square, rather than a degenerate point, so the limiting argument may be used.[6] Symmetric curves The affirmative answer is also known for centrally symmetric curves, even fractals such as the Koch snowflake, and curves with reflective symmetry across a line.[8] Lipschitz graphs In 2017, Terence Tao published a proof of the existence of a square in curves formed by the union of the graphs of two functions, both of which have the same value at the endpoints of the curves and both of which obey a Lipschitz continuity condition with Lipschitz constant less than one. Tao also formulated several related conjectures.[9] Jordan curves close to a C2 Jordan curve In March 2022, Gregory R. Chambers showed that if γ is a Jordan curve which is close to a C2 Jordan curve β in R2, then γ contains an inscribed square. He showed that, if κ > 0 is the maximum unsigned curvature of β and there is a map f from the image of γ to the image of β with ||f(x) − x|| < 1/10κ and f∘γ having winding number 1, then γ has an inscribed square of positive sidelength.[10] Variants and generalizations One may ask whether other shapes can be inscribed into an arbitrary Jordan curve. It is known that for any triangle T and Jordan curve C, there is a triangle similar to T and inscribed in C.[11][12] Moreover, the set of the vertices of such triangles is dense in C.[13] In particular, there is always an inscribed equilateral triangle. It is also known that any Jordan curve admits an inscribed rectangle. This was proved by Vaughan by reducing the problem to the non-embeddability of the projective plane in R3; his proof is published in Meyerson.[11] In 2020, Morales and Villanueva characterized locally connected plane continua that admit at least one inscribed rectangle.[14] In 2020, Joshua Evan Greene and Andrew Lobb proved that for every smooth Jordan curve C and rectangle R in the Euclidean plane there exists a rectangle similar to R whose vertices lie on C.[4][15][16] This generalizes both the existence of rectangles (of arbitrary shape) and the existence of squares on smooth curves, which has been known since the work of Šnirel'man (1944).[3] Some generalizations of the inscribed square problem consider inscribed polygons for curves and even more general continua in higher dimensional Euclidean spaces. 
For example, Stromquist proved that every continuous closed curve C in Rn satisfying "Condition A" that no two chords of C in a suitable neighborhood of any point are perpendicular admits an inscribed quadrilateral with equal sides and equal diagonals.[7] This class of curves includes all C2 curves. Nielsen and Wright proved that any symmetric continuum K in Rn contains many inscribed rectangles.[8] References 1. Toeplitz, O. (1911), "Über einige Aufgaben der Analysis situs", Verhandlungen der Schweizerischen Naturforschenden Gesellschaft (in German), 94: 197 2. Emch, Arnold (1916), "On some properties of the medians of closed continuous curves formed by analytic arcs", American Journal of Mathematics, 38 (1): 6–18, doi:10.2307/2370541, JSTOR 2370541, MR 1506274 3. Šnirel'man, L. G. (1944), "On certain geometrical properties of closed curves", Akademiya Nauk SSSR I Moskovskoe Matematicheskoe Obshchestvo. Uspekhi Matematicheskikh Nauk, 10: 34–44, MR 0012531 4. Hartnett, Kevin (June 25, 2020), "New geometric perspective cracks old problem about rectangles", Quanta Magazine, retrieved 2020-06-26 5. Bailey, Herbert; DeTemple, Duane (1998), "Squares inscribed in angles and triangles", Mathematics Magazine, 71 (4): 278–284, doi:10.2307/2690699, JSTOR 2690699 6. Matschke, Benjamin (2014), "A survey on the square peg problem", Notices of the American Mathematical Society, 61 (4): 346–352, doi:10.1090/noti1100 7. Stromquist, Walter (1989), "Inscribed squares and square-like quadrilaterals in closed curves", Mathematika, 36 (2): 187–197, doi:10.1112/S0025579300013061, MR 1045781 8. Nielsen, Mark J.; Wright, S. E. (1995), "Rectangles inscribed in symmetric continua", Geometriae Dedicata, 56 (3): 285–297, doi:10.1007/BF01263570, MR 1340790 9. Tao, Terence (2017), "An integration approach to the Toeplitz square peg problem", Forum of Mathematics, 5: e30, doi:10.1017/fms.2017.23, MR 3731730; see also Tao's blog post on the same set of results 10. Chambers, Gregory (March 2022). "On the square peg problem". arXiv:2203.02613 [math.GT]. 11. Meyerson, Mark D. (1980), "Equilateral triangles and continuous curves", Fundamenta Mathematicae, 110 (1): 1–9, doi:10.4064/fm-110-1-1-9, MR 0600575 12. Kronheimer, E. H.; Kronheimer, P. B. (1981), "The tripos problem", Journal of the London Mathematical Society, Second Series, 24 (1): 182–192, doi:10.1112/jlms/s2-24.1.182, MR 0623685 13. Nielsen, Mark J. (1992), "Triangles inscribed in simple closed curves", Geometriae Dedicata, 43 (3): 291–297, doi:10.1007/BF00151519, MR 1181760 14. Morales-Fuentes, Ulises; Villanueva-Segovia, Cristina (2021), "Rectangles Inscribed in Locally Connected Plane Continua", Topology Proceedings, 58: 37–43 15. Greene, Joshua Evan; Lobb, Andrew (September 2021), "The rectangular peg problem", Annals of Mathematics, 194 (2): 509–517, arXiv:2005.09193, doi:10.4007/annals.2021.194.2.4, S2CID 218684701 16. Schwartz, Richard Evan (2021-09-13). "Rectangles, curves, and Klein bottles". Bulletin of the American Mathematical Society. 59 (1): 1–17. doi:10.1090/bull/1755. ISSN 0273-0979. Further reading • Klee, Victor; Wagon, Stan (1991), "Inscribed squares", Old and New Unsolved Problems in Plane Geometry and Number Theory, The Dolciani Mathematical Expositions, vol. 11, Cambridge University Press, pp. 58–65, 137–144, ISBN 978-0-88385-315-3 External links • Mark J. Nielsen, Figures Inscribed in Curves. A short tour of an old problem • Inscribed squares: Denne speaks at Jordan Ellenberg's blog • Grant Sanderson, Who cares about topology? 
(Inscribed rectangle problem), 3Blue1Brown, YouTube – a video showing a topological solution to a simplified version of the problem.
Toeplitz matrix In linear algebra, a Toeplitz matrix or diagonal-constant matrix, named after Otto Toeplitz, is a matrix in which each descending diagonal from left to right is constant. For instance, the following matrix is a Toeplitz matrix: $\qquad {\begin{bmatrix}a&b&c&d&e\\f&a&b&c&d\\g&f&a&b&c\\h&g&f&a&b\\i&h&g&f&a\end{bmatrix}}.$ Any $n\times n$ matrix $A$ of the form $A={\begin{bmatrix}a_{0}&a_{-1}&a_{-2}&\cdots &\cdots &a_{-(n-1)}\\a_{1}&a_{0}&a_{-1}&\ddots &&\vdots \\a_{2}&a_{1}&\ddots &\ddots &\ddots &\vdots \\\vdots &\ddots &\ddots &\ddots &a_{-1}&a_{-2}\\\vdots &&\ddots &a_{1}&a_{0}&a_{-1}\\a_{n-1}&\cdots &\cdots &a_{2}&a_{1}&a_{0}\end{bmatrix}}$ is a Toeplitz matrix. If the $i,j$ element of $A$ is denoted $A_{i,j}$ then we have $A_{i,j}=A_{i+1,j+1}=a_{i-j}.$ A Toeplitz matrix is not necessarily square. Solving a Toeplitz system A matrix equation of the form $Ax=b$ is called a Toeplitz system if $A$ is a Toeplitz matrix. If $A$ is an $n\times n$ Toeplitz matrix, then the system has at-most only $2n-1$ unique values, rather than $n^{2}$. We might therefore expect that the solution of a Toeplitz system would be easier, and indeed that is the case. Toeplitz systems can be solved by the Levinson algorithm in $O(n^{2})$ time.[1] Variants of this algorithm have been shown to be weakly stable (i.e. they exhibit numerical stability for well-conditioned linear systems).[2] The algorithm can also be used to find the determinant of a Toeplitz matrix in $O(n^{2})$ time.[3] A Toeplitz matrix can also be decomposed (i.e. factored) in $O(n^{2})$ time.[4] The Bareiss algorithm for an LU decomposition is stable.[5] An LU decomposition gives a quick method for solving a Toeplitz system, and also for computing the determinant. Algorithms that are asymptotically faster than those of Bareiss and Levinson have been described in the literature, but their accuracy cannot be relied upon.[6][7][8][9] General properties • An $n\times n$ Toeplitz matrix may be defined as a matrix $A$ where $A_{i,j}=c_{i-j}$, for constants $c_{1-n},\ldots ,c_{n-1}$. The set of $n\times n$ Toeplitz matrices is a subspace of the vector space of $n\times n$ matrices (under matrix addition and scalar multiplication). • Two Toeplitz matrices may be added in $O(n)$ time (by storing only one value of each diagonal) and multiplied in $O(n^{2})$ time. • Toeplitz matrices are persymmetric. Symmetric Toeplitz matrices are both centrosymmetric and bisymmetric. • Toeplitz matrices are also closely connected with Fourier series, because the multiplication operator by a trigonometric polynomial, compressed to a finite-dimensional space, can be represented by such a matrix. Similarly, one can represent linear convolution as multiplication by a Toeplitz matrix. • Toeplitz matrices commute asymptotically. This means they diagonalize in the same basis when the row and column dimension tends to infinity. • For symmetric Toeplitz matrices, there is the decomposition ${\frac {1}{a_{0}}}A=GG^{\operatorname {T} }-(G-I)(G-I)^{\operatorname {T} }$ where $G$ is the lower triangular part of ${\frac {1}{a_{0}}}A$. • The inverse of a nonsingular symmetric Toeplitz matrix has the representation $A^{-1}={\frac {1}{\alpha _{0}}}(BB^{\operatorname {T} }-CC^{\operatorname {T} })$ where $B$ and $C$ are lower triangular Toeplitz matrices and $C$ is a strictly lower triangular matrix.[10] Discrete convolution The convolution operation can be constructed as a matrix multiplication, where one of the inputs is converted into a Toeplitz matrix. 
For example, the convolution of $h$ and $x$ can be formulated as: $y=h\ast x={\begin{bmatrix}h_{1}&0&\cdots &0&0\\h_{2}&h_{1}&&\vdots &\vdots \\h_{3}&h_{2}&\cdots &0&0\\\vdots &h_{3}&\cdots &h_{1}&0\\h_{m-1}&\vdots &\ddots &h_{2}&h_{1}\\h_{m}&h_{m-1}&&\vdots &h_{2}\\0&h_{m}&\ddots &h_{m-2}&\vdots \\0&0&\cdots &h_{m-1}&h_{m-2}\\\vdots &\vdots &&h_{m}&h_{m-1}\\0&0&0&\cdots &h_{m}\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\\\vdots \\x_{n}\end{bmatrix}}$ $y^{T}={\begin{bmatrix}h_{1}&h_{2}&h_{3}&\cdots &h_{m-1}&h_{m}\end{bmatrix}}{\begin{bmatrix}x_{1}&x_{2}&x_{3}&\cdots &x_{n}&0&0&0&\cdots &0\\0&x_{1}&x_{2}&x_{3}&\cdots &x_{n}&0&0&\cdots &0\\0&0&x_{1}&x_{2}&x_{3}&\ldots &x_{n}&0&\cdots &0\\\vdots &&\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\cdots &0&0&x_{1}&\cdots &x_{n-2}&x_{n-1}&x_{n}&0\\0&\cdots &0&0&0&x_{1}&\cdots &x_{n-2}&x_{n-1}&x_{n}\end{bmatrix}}.$ This approach can be extended to compute autocorrelation, cross-correlation, moving average etc. Infinite Toeplitz matrix Main article: Toeplitz operator A bi-infinite Toeplitz matrix (i.e. entries indexed by $\mathbb {Z} \times \mathbb {Z} $) $A$ induces a linear operator on $\ell ^{2}$. $A={\begin{bmatrix}&\vdots &\vdots &\vdots &\vdots \\\cdots &a_{0}&a_{-1}&a_{-2}&a_{-3}&\cdots \\\cdots &a_{1}&a_{0}&a_{-1}&a_{-2}&\cdots \\\cdots &a_{2}&a_{1}&a_{0}&a_{-1}&\cdots \\\cdots &a_{3}&a_{2}&a_{1}&a_{0}&\cdots \\&\vdots &\vdots &\vdots &\vdots \end{bmatrix}}.$ The induced operator is bounded if and only if the coefficients of the Toeplitz matrix $A$ are the Fourier coefficients of some essentially bounded function $f$. In such cases, $f$ is called the symbol of the Toeplitz matrix $A$, and the spectral norm of the Toeplitz matrix $A$ coincides with the $L^{\infty }$ norm of its symbol. The proof is easy to establish and can be found as Theorem 1.1 of: [11] See also • Circulant matrix, a square Toeplitz matrix with the additional property that $a_{i}=a_{i+n}$ • Hankel matrix, an "upside down" (i.e., row-reversed) Toeplitz matrix • Szegő limit theorems Notes 1. Press et al. 2007, §2.8.2—Toeplitz matrices 2. Krishna & Wang 1993 3. Monahan 2011, §4.5—Toeplitz systems 4. Brent 1999 5. Bojanczyk et al. 1995 6. Stewart 2003 7. Chen, Hurvich & Lu 2006 8. Chan & Jin 2007 9. Chandrasekeran et al. 2007 10. Mukherjee & Maiti 1988 11. Böttcher & Grudsky 2012 References • Bojanczyk, A. W.; Brent, R. P.; de Hoog, F. R.; Sweet, D. R. (1995), "On the stability of the Bareiss and related Toeplitz factorization algorithms", SIAM Journal on Matrix Analysis and Applications, 16: 40–57, arXiv:1004.5510, doi:10.1137/S0895479891221563, S2CID 367586 • Böttcher, Albrecht; Grudsky, Sergei M. (2012), Toeplitz Matrices, Asymptotic Linear Algebra, and Functional Analysis, Birkhäuser, ISBN 978-3-0348-8395-5 • Brent, R. P. (1999), "Stability of fast algorithms for structured linear systems", in Kailath, T.; Sayed, A. H. (eds.), Fast Reliable Algorithms for Matrices with Structure, SIAM, pp. 103–116, doi:10.1137/1.9781611971354.ch4, hdl:1885/40746, S2CID 13905858 • Chan, R. H.-F.; Jin, X.-Q. (2007), An Introduction to Iterative Toeplitz Solvers, SIAM, doi:10.1137/1.9780898718850, ISBN 978-0-89871-636-8 • Chandrasekeran, S.; Gu, M.; Sun, X.; Xia, J.; Zhu, J. (2007), "A superfast algorithm for Toeplitz systems of linear equations", SIAM Journal on Matrix Analysis and Applications, 29 (4): 1247–1266, CiteSeerX 10.1.1.116.3297, doi:10.1137/040617200 • Chen, W. W.; Hurvich, C. M.; Lu, Y. 
(2006), "On the correlation matrix of the discrete Fourier transform and the fast solution of large Toeplitz systems for long-memory time series", Journal of the American Statistical Association, 101 (474): 812–822, CiteSeerX 10.1.1.574.4394, doi:10.1198/016214505000001069, S2CID 55893963 • Krishna, H.; Wang, Y. (1993), "The Split Levinson Algorithm is weakly stable", SIAM Journal on Numerical Analysis, 30 (5): 1498–1508, doi:10.1137/0730078 • Monahan, J. F. (2011), Numerical Methods of Statistics, Cambridge University Press • Mukherjee, Bishwa Nath; Maiti, Sadhan Samar (1988), "On some properties of positive definite Toeplitz matrices and their possible applications" (PDF), Linear Algebra and Its Applications, 102: 211–240, doi:10.1016/0024-3795(88)90326-6 • Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007), Numerical Recipes: The Art of Scientific Computing (Third ed.), Cambridge University Press, ISBN 978-0-521-88068-8 • Stewart, M. (2003), "A superfast Toeplitz solver with improved numerical stability", SIAM Journal on Matrix Analysis and Applications, 25 (3): 669–693, doi:10.1137/S089547980241791X, S2CID 15717371 • Yang, Zai; Xie, Lihua; Stoica, Petre (2016), "Vandermonde decomposition of multilevel Toeplitz matrices with application to multidimensional super-resolution", IEEE Transactions on Information Theory, 62 (6): 3685–3701, arXiv:1505.02510, doi:10.1109/TIT.2016.2553041, S2CID 6291005 Further reading • Bareiss, E. H. (1969), "Numerical solution of linear equations with Toeplitz and vector Toeplitz matrices", Numerische Mathematik, 13 (5): 404–424, doi:10.1007/BF02163269, S2CID 121761517 • Goldreich, O.; Tal, A. (2018), "Matrix rigidity of random Toeplitz matrices", Computational Complexity, 27 (2): 305–350, doi:10.1007/s00037-016-0144-9, S2CID 253641700 • Golub G. H., van Loan C. F. (1996), Matrix Computations (Johns Hopkins University Press) §4.7—Toeplitz and Related Systems • Gray R. M., Toeplitz and Circulant Matrices: A Review (Now Publishers) doi:10.1561/0100000006 • Noor, F.; Morgera, S. D. (1992), "Construction of a Hermitian Toeplitz matrix from an arbitrary set of eigenvalues", IEEE Transactions on Signal Processing, 40 (8): 2093–2094, Bibcode:1992ITSP...40.2093N, doi:10.1109/78.149978 • Pan, Victor Y. 
(2001), Structured Matrices and Polynomials: unified superfast algorithms, Birkhäuser, ISBN 978-0817642402 • Ye, Ke; Lim, Lek-Heng (2016), "Every matrix is a product of Toeplitz matrices", Foundations of Computational Mathematics, 16 (3): 577–598, arXiv:1307.5132, doi:10.1007/s10208-015-9254-z, S2CID 254166943
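The Toeplitz system and convolution discussions above can be tried out directly with SciPy's Toeplitz helpers; the following is a small illustrative sketch with arbitrary array values. scipy.linalg.toeplitz builds the matrix from its first column and first row, and scipy.linalg.solve_toeplitz solves a Toeplitz system with a Levinson-type recursion, in line with the O(n²) solvers discussed above.

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

# A 4x4 Toeplitz system A x = b, solved both by Levinson-type recursion and generically.
c = np.array([1.0, 0.5, 0.25, 0.125])   # first column (a_0, a_1, a_2, ...)
r = np.array([1.0, 0.3, 0.2, 0.1])      # first row    (a_0, a_-1, a_-2, ...)
A = toeplitz(c, r)
b = np.arange(1.0, 5.0)
x_fast = solve_toeplitz((c, r), b)      # structured O(n^2) solve
x_ref = np.linalg.solve(A, b)           # dense O(n^3) solve, for comparison
assert np.allclose(x_fast, x_ref)

# Discrete convolution as multiplication by a tall Toeplitz matrix,
# matching the h * x construction shown above.
h = np.array([1.0, -2.0, 3.0])
x = np.array([4.0, 5.0, 6.0, 7.0])
m, n = len(h), len(x)
col = np.concatenate([h, np.zeros(n - 1)])        # first column: h padded with zeros
row = np.concatenate([[h[0]], np.zeros(n - 1)])   # first row: h_1, 0, ..., 0
H = toeplitz(col, row)                            # shape (m + n - 1, n)
assert np.allclose(H @ x, np.convolve(h, x))
```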
Toeplitz operator In operator theory, a Toeplitz operator is the compression of a multiplication operator on the circle to the Hardy space. Details Let S1 be the circle, with the standard Lebesgue measure, and L2(S1) be the Hilbert space of square-integrable functions. A bounded measurable function g on S1 defines a multiplication operator Mg on L2(S1). Let P be the projection from L2(S1) onto the Hardy space H2. The Toeplitz operator with symbol g is defined by $T_{g}=PM_{g}\vert _{H^{2}},$ where " | " means restriction. A bounded operator on H2 is Toeplitz if and only if its matrix representation, in the basis {zn, n ≥ 0}, has constant diagonals. Theorems • Theorem: If $g$ is continuous, then $T_{g}-\lambda $ is Fredholm if and only if $\lambda $ is not in the set $g(S^{1})$. If it is Fredholm, its index is minus the winding number of the curve traced out by $g$ with respect to the origin. For a proof, see Douglas (1972, p.185). He attributes the theorem to Mark Krein, Harold Widom, and Allen Devinatz. This can be thought of as an important special case of the Atiyah-Singer index theorem. • Axler-Chang-Sarason Theorem: The operator $T_{f}T_{g}-T_{fg}$ is compact if and only if $H^{\infty }[{\bar {f}}]\cap H^{\infty }[g]\subseteq H^{\infty }+C^{0}(S^{1})$. Here, $H^{\infty }$ denotes the closed subalgebra of $L^{\infty }(S^{1})$ of analytic functions (functions with vanishing negative Fourier coefficients), $H^{\infty }[f]$ is the closed subalgebra of $L^{\infty }(S^{1})$ generated by $f$ and $H^{\infty }$, and $C^{0}(S^{1})$ is the space (as an algebraic set) of continuous functions on the circle. See S.Axler, S-Y. Chang, D. Sarason (1978). See also • Toeplitz matrix – Matrix with equal values along diagonals References • S.Axler, S-Y. Chang, D. Sarason (1978), "Products of Toeplitz operators", Integral Equations and Operator Theory, 1 (3): 285–309, doi:10.1007/BF01682841, S2CID 120610368{{citation}}: CS1 maint: multiple names: authors list (link) • Böttcher, Albrecht; Grudsky, Sergei M. (2000), Toeplitz Matrices, Asymptotic Linear Algebra, and Functional Analysis, Birkhäuser, ISBN 978-3-0348-8395-5. • Böttcher, A.; Silbermann, B. (2006), Analysis of Toeplitz Operators, Springer Monographs in Mathematics (2nd ed.), Springer-Verlag, ISBN 978-3-540-32434-8. • Douglas, Ronald (1972), Banach Algebra techniques in Operator theory, Academic Press. • Rosenblum, Marvin; Rovnyak, James (1985), Hardy Classes and Operator Theory, Oxford University Press. Reprinted by Dover Publications, 1997, ISBN 978-0-486-69536-5.
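As a small numerical illustration of the matrix description above (constant diagonals, with entries given by the Fourier coefficients of the symbol), the following sketch builds a finite section of T_g for the sample symbol g(e^{it}) = 1 + 2 cos t; the symbol, grid size, and truncation size are arbitrary choices for the example.

```python
import numpy as np

n, m = 6, 512                             # section size and sampling grid on the circle
t = 2 * np.pi * np.arange(m) / m
g = 1 + 2 * np.cos(t)                     # symbol g(e^{it}) = 1 + e^{it} + e^{-it}
ghat = np.fft.fft(g) / m                  # approximate Fourier coefficients of g

def coeff(k):
    # \hat{g}(k); negative indices wrap around to the end of the FFT output
    return ghat[k % m]

# Matrix of T_g in the basis {z^0, z^1, ...}: entry (j, k) is \hat{g}(j - k).
T = np.array([[coeff(j - k) for k in range(n)] for j in range(n)])
print(np.round(T.real, 6))                # tridiagonal with 1 on the three central diagonals
```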
Toeplitz algebra In operator algebras, the Toeplitz algebra is the C*-algebra generated by the unilateral shift on the Hilbert space l2(N).[1] Taking l2(N) to be the Hardy space H2, the Toeplitz algebra consists of elements of the form $T_{f}+K\;$ where Tf is a Toeplitz operator with continuous symbol and K is a compact operator. Toeplitz operators with continuous symbols commute modulo the compact operators. So the Toeplitz algebra can be viewed as the C*-algebra extension of continuous functions on the circle by the compact operators. This extension is called the Toeplitz extension. By Atkinson's theorem, an element of the Toeplitz algebra Tf + K is a Fredholm operator if and only if the symbol f of Tf is invertible. In that case, the Fredholm index of Tf + K is precisely minus the winding number of f, the equivalence class of f in the fundamental group of the circle. This is a special case of the Atiyah-Singer index theorem. The Wold decomposition characterizes proper isometries acting on a Hilbert space. From this, together with properties of Toeplitz operators, one can conclude that the Toeplitz algebra is the universal C*-algebra generated by a proper isometry; this is Coburn's theorem. References 1. Arveson, William, A Short Course in Spectral Theory, Graduate Texts in Mathematics, vol. 209, Springer, ISBN 0387953000
Togliatti surface In algebraic geometry, a Togliatti surface is a nodal surface of degree five with 31 nodes. The first examples were constructed by Eugenio G. Togliatti (1940). Arnaud Beauville (1980) proved that 31 is the maximum possible number of nodes for a surface of this degree, showing this example to be optimal. See also • Barth surface • Endrass surface • Sarti surface • List of algebraic surfaces References • Beauville, Arnaud (1980), "Sur le nombre maximum de points doubles d'une surface dans $\mathbf {P} ^{3}(\mu (5)=31)$", Journées de Géometrie Algébrique d'Angers, Juillet 1979/Algebraic Geometry, Angers, 1979 (PDF) (in French), Alphen aan den Rijn—Germantown, Md.: Sijthoff & Noordhoff, pp. 207–215, MR 0605342. • Togliatti, Eugenio G. (1940), "Una notevole superficie di 5o ordine con soli punti doppi isolati", Beiblatt (Festschrift Rudolf Fueter) (PDF), Vierteljschr. Naturforsch. Ges. Zürich (in Italian), vol. 85, pp. 127–132, MR 0004492. External links • Endraß, Stephan (2003). "Togliatti surfaces". • Weisstein, Eric W. "Togliatti surface". MathWorld.
Toida's conjecture In combinatorial mathematics, Toida's conjecture, due to Shunichi Toida in 1977,[1] is a refinement of the disproven Ádám's conjecture from 1967. Statement Both conjectures concern circulant graphs. These are graphs defined from a positive integer $n$ and a set $S$ of positive integers. Their vertices can be identified with the numbers from 0 to $n-1$, and two vertices $i$ and $j$ are connected by an edge whenever their difference modulo $n$ belongs to the set $S$. Every symmetry of the cyclic group of addition modulo $n$ gives rise to a symmetry of the $n$-vertex circulant graphs, and Ádám conjectured (incorrectly) that these are the only symmetries of the circulant graphs. However, the known counterexamples to Ádám's conjecture involve sets $S$ in which some elements share non-trivial divisors with $n$. Toida's conjecture states that, when every member of $S$ is relatively prime to $n$, the only symmetries of the circulant graph for $n$ and $S$ are symmetries coming from the underlying cyclic group. Proofs The conjecture was proven in the special case where n is a prime power by Klin and Poschel in 1978,[2] and by Golfand, Najmark, and Poschel in 1984.[3] The conjecture was then fully proven by Muzychuk, Klin, and Poschel in 2001 by using the theory of Schur rings,[4] and simultaneously by Dobson and Morris in 2002 by using the classification of finite simple groups.[5] Notes 1. S. Toida: "A note on Adam's conjecture", Journal of Combinatorial Theory (B), pp. 239–246, October–December 1977 2. Klin, M.H. and R. Poschel: The Konig problem, the isomorphism problem for cyclic graphs and the method of Schur rings, Algebraic methods in graph theory, Vol. I, II., Szeged, 1978, pp. 405–434. 3. Golfand, J.J., N.L. Najmark and R. Poschel: The structure of S-rings over Z2m, preprint (1984). 4. Klin, M.H., M. Muzychuk and R. Poschel: The isomorphism problem for circulant graphs via Schur ring theory, Codes and Association Schemes, American Math. Society, 2001. 5. Dobson, Edward; Morris, Joy (2002), "Toida's conjecture is true", Electronic Journal of Combinatorics, 9 (1): R35:1–R35:14, MR 1928787
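The objects in the statement are easy to experiment with. Here is a small illustrative sketch; the choices n = 10 and S = {1, 3}, whose members are coprime to n, are mine. It builds the circulant graph as an edge set and checks two symmetries coming from the cyclic group: the rotation x → x + 1 (mod n), which is always a graph symmetry, and the multiplier x → 3x (mod n), a group automorphism of Z/10 that preserves the symmetrised connection set and is therefore also a graph symmetry.

```python
# Circulant graph for n = 10, S = {1, 3}: vertices 0..9, edge i ~ j iff (i - j) mod n
# lies in the symmetrised connection set {1, 3, 7, 9}.
n, S = 10, {1, 3}
conn = {s % n for s in S} | {(-s) % n for s in S}
edges = {(i, j) for i in range(n) for j in range(n) if (i - j) % n in conn}

def is_automorphism(f):
    # A vertex map is a symmetry exactly when it maps the edge set onto itself.
    return {(f(i) % n, f(j) % n) for (i, j) in edges} == edges

print(is_automorphism(lambda x: x + 1))   # True: the cyclic rotation
print(is_automorphism(lambda x: 3 * x))   # True: 3 is a unit mod 10 and 3*conn = conn
```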
Tolerance relation In universal algebra and lattice theory, a tolerance relation on an algebraic structure is a reflexive symmetric relation that is compatible with all operations of the structure. Thus a tolerance is like a congruence, except that the assumption of transitivity is dropped.[1] On a set, an algebraic structure with empty family of operations, tolerance relations are simply reflexive symmetric relations. A set that possesses a tolerance relation can be described as a tolerance space.[2] Tolerance relations provide a convenient general tool for studying indiscernibility/indistinguishability phenomena. The importance of those for mathematics had been first recognized by Poincaré.[3] Definitions A tolerance relation on an algebraic structure $(A,F)$ is usually defined to be a reflexive symmetric relation on $A$ that is compatible with every operation in $F$. A tolerance relation can also be seen as a cover of $A$ that satisfies certain conditions. The two definitions are equivalent, since for a fixed algebraic structure, the tolerance relations in the two definitions are in one-to-one correspondence. The tolerance relations on an algebraic structure $(A,F)$ form an algebraic lattice $\operatorname {Tolr} (A)$ under inclusion. Since every congruence relation is a tolerance relation, the congruence lattice $\operatorname {Cong} (A)$ is a subset of the tolerance lattice $\operatorname {Tolr} (A)$, but $\operatorname {Cong} (A)$ is not necessarily a sublattice of $\operatorname {Tolr} (A)$.[4] As binary relations A tolerance relation on an algebraic structure $(A,F)$ is a binary relation $\sim $ on $A$ that satisfies the following conditions. • (Reflexivity) $a\sim a$ for all $a\in A$ • (Symmetry) if $a\sim b$ then $b\sim a$ for all $a,b\in A$ • (Compatibility) for each $n$-ary operation $f\in F$ and $a_{1},\dots ,a_{n},b_{1},\dots ,b_{n}\in A$, if $a_{i}\sim b_{i}$ for each $i=1,\dots ,n$ then $f(a_{1},\dots ,a_{n})\sim f(b_{1},\dots ,b_{n})$. That is, the set $\{(a,b)\colon a\sim b\}$ is a subalgebra of the direct product $A^{2}$ of two $A$. A congruence relation is a tolerance relation that is also transitive. As covers A tolerance relation on an algebraic structure $(A,F)$ is a cover ${\mathcal {C}}$ of $A$ that satisfies the following three conditions.[5]: 307, Theorem 3  • For every $C\in {\mathcal {C}}$ and ${\mathcal {S}}\subseteq {\mathcal {C}}$, if $\textstyle C\subseteq \bigcup {\mathcal {S}}$, then $\textstyle \bigcap {\mathcal {S}}\subseteq C$. • In particular, no two distinct elements of ${\mathcal {C}}$ are comparable. (To see this, take ${\mathcal {S}}=\{D\}$.) • For every $S\subseteq A$, if $S$ is not contained in any set in ${\mathcal {C}}$, then there is a two-element subset $\{s,t\}\subseteq S$ such that $\{s,t\}$ is not contained in any set in ${\mathcal {C}}$. • For every $n$-ary $f\in F$ and $C_{1},\dots ,C_{n}\in {\mathcal {C}}$, there is a $(f/{\sim })(C_{1},\dots ,C_{n})\in {\mathcal {C}}$ such that $\{f(c_{1},\dots ,c_{n})\colon c_{i}\in C_{i}\}\subseteq (f/{\sim })(C_{1},\dots ,C_{n})$. (Such a $(f/{\sim })(C_{1},\dots ,C_{n})$ need not be unique.) Every partition of $A$ satisfies the first two conditions, but not conversely. A congruence relation is a tolerance relation that also forms a set partition. Equivalence of the two definitions Let $\sim $ be a tolerance binary relation on an algebraic structure $(A,F)$. Let $A/{\sim }$ be the family of maximal subsets $C\subseteq A$ such that $c\sim d$ for every $c,d\in C$. 
Using graph theoretical terms, $A/{\sim }$ is the set of all maximal cliques of the graph $(A,\sim )$. If $\sim $ is a congruence relation, $A/{\sim }$ is just the quotient set of equivalence classes. Then $A/{\sim }$ is a cover of $A$ and satisfies all the three conditions in the cover definition. (The last condition is shown using Zorn's lemma.) Conversely, let ${\mathcal {C}}$ be a cover of $A$ and suppose that ${\mathcal {C}}$ forms a tolerance on $A$. Consider a binary relation $\sim _{\mathcal {C}}$ on $A$ for which $a\sim _{\mathcal {C}}b$ if and only if $a,b\in C$ for some $C\in {\mathcal {C}}$. Then $\sim _{\mathcal {C}}$ is a tolerance on $A$ as a binary relation. The map ${\sim }\mapsto A/{\sim }$ is a one-to-one correspondence between the tolerances as binary relations and as covers whose inverse is ${\mathcal {C}}\mapsto {\sim _{\mathcal {C}}}$. Therefore, the two definitions are equivalent. A tolerance is transitive as a binary relation if and only if it is a partition as a cover. Thus the two characterizations of congruence relations also agree. Quotient algebras over tolerance relations Let $(A,F)$ be an algebraic structure and let $\sim $ be a tolerance relation on $A$. Suppose that, for each $n$-ary operation $f\in F$ and $C_{1},\dots ,C_{n}\in A/{\sim }$, there is a unique $(f/{\sim })(C_{1},\dots ,C_{n})\in A/{\sim }$ such that $\{f(c_{1},\dots ,c_{n})\colon c_{i}\in C_{i}\}\subseteq (f/{\sim })(C_{1},\dots ,C_{n})$ Then this provides a natural definition of the quotient algebra $(A/{\sim },F/{\sim })$ of $(A,F)$ over $\sim $. In the case of congruence relations, the uniqueness condition always holds true and the quotient algebra defined here coincides with the usual one. A main difference from congruence relations is that for a tolerance relation the uniqueness condition may fail, and even if it does not, the quotient algebra may not inherit the identities defining the variety that $(A,F)$ belongs to, so that the quotient algebra may fail to be a member of the variety again. Therefore, for a variety ${\mathcal {V}}$ of algebraic structures, we may consider the following two conditions.[4] • (Tolerance factorability) for any $(A,F)\in {\mathcal {V}}$ and any tolerance relation $\sim $ on $(A,F)$, the uniqueness condition is true, so that the quotient algebra $(A/{\sim },F/{\sim })$ is defined. • (Strong tolerance factorability) for any $(A,F)\in {\mathcal {V}}$ and any tolerance relation $\sim $ on $(A,F)$, the uniqueness condition is true, and $(A/{\sim },F/{\sim })\in {\mathcal {V}}$. Every strongly tolerance factorable variety is tolerance factorable, but not vice versa. Examples Sets A set is an algebraic structure with no operations at all. In this case, tolerance relations are simply reflexive symmetric relations and it is trivial that the variety of sets is strongly tolerance factorable. Groups On a group, every tolerance relation is a congruence relation. In particular, this is true for all algebraic structures that are groups when some of their operations are forgot, e.g. rings, vector spaces, modules, Boolean algebras, etc.[6]: 261–262  Therefore, the varieties of groups, rings, vector spaces, modules and Boolean algebras are also strongly tolerance factorable trivially. Lattices For a tolerance relation $\sim $ on a lattice $L$, every set in $L/{\sim }$ is a convex sublattice of $L$. Thus, for all $A\in L/{\sim }$, we have $A=\mathop {\uparrow } A\cap \mathop {\downarrow } A$ In particular, the following results hold. 
• $a\sim b$ if and only if $a\vee b\sim a\wedge b$. • If $a\sim b$ and $a\leq c,d\leq b$, then $c\sim d$. The variety of lattices is strongly tolerance factorable. That is, given any lattice $(L,\vee _{L},\wedge _{L})$ and any tolerance relation $\sim $ on $L$, for each $A,B\in L/{\sim }$ there exist unique $A\vee _{L/{\sim }}B,A\wedge _{L/{\sim }}B\in L/{\sim }$ such that $\{a\vee _{L}b\colon a\in A,\;b\in B\}\subseteq A\vee _{L/{\sim }}B$ $\{a\wedge _{L}b\colon a\in A,\;b\in B\}\subseteq A\wedge _{L/{\sim }}B$ and the quotient algebra $(L/{\sim },\vee _{L/{\sim }},\wedge _{L/{\sim }})$ is a lattice again.[7][8][9]: 44, Theorem 22  In particular, we can form quotient lattices of distributive lattices and modular lattices over tolerance relations. However, unlike in the case of congruence relations, the quotient lattices need not be distributive or modular again. In other words, the varieties of distributive lattices and modular lattices are tolerance factorable, but not strongly tolerance factorable.[7]: 40 [4] Actually, every subvariety of the variety of lattices is tolerance factorable, and the only strongly tolerance factorable subvariety other than itself is the trivial subvariety (consisting of one-element lattices).[7]: 40  This is because every lattice is isomorphic to a sublattice of the quotient lattice over a tolerance relation of a sublattice of a direct product of two-element lattices.[7]: 40, Theorem 3  See also • Dependency relation • Quasitransitive relation—a generalization to formalize indifference in social choice theory • Rough set References 1. Kearnes, Keith; Kiss, Emil W. (2013). The Shape of Congruence Lattices. American Mathematical Soc. p. 20. ISBN 978-0-8218-8323-5. 2. Sossinsky, Alexey (1986-02-01). "Tolerance space theory and some applications". Acta Applicandae Mathematicae. 5 (2): 137–167. doi:10.1007/BF00046585. S2CID 119731847. 3. Poincare, H. (1905). Science and Hypothesis (with a preface by J.Larmor ed.). New York: 3 East 14th Street: The Walter Scott Publishing Co., Ltd. pp. 22-23.{{cite book}}: CS1 maint: location (link) 4. Chajda, Ivan; Radeleczki, Sándor (2014). "Notes on tolerance factorable classes of algebras". Acta Scientiarum Mathematicarum. 80 (3–4): 389–397. doi:10.14232/actasm-012-861-x. ISSN 0001-6969. MR 3307031. S2CID 85560830. Zbl 1321.08002. 5. Chajda, Ivan; Niederle, Josef; Zelinka, Bohdan (1976). "On existence conditions for compatible tolerances". Czechoslovak Mathematical Journal. 26 (101): 304–311. doi:10.21136/CMJ.1976.101403. ISSN 0011-4642. MR 0401561. Zbl 0333.08006. 6. Schein, Boris M. (1987). "Semigroups of tolerance relations". Discrete Mathematics. 64: 253–262. doi:10.1016/0012-365X(87)90194-4. ISSN 0012-365X. MR 0887364. Zbl 0615.20045. 7. Czédli, Gábor (1982). "Factor lattices by tolerances". Acta Scientiarum Mathematicarum. 44: 35–42. ISSN 0001-6969. MR 0660510. Zbl 0484.06010. 8. Grätzer, George; Wenzel, G. H. (1990). "Notes on tolerance relations of lattices". Acta Scientiarum Mathematicarum. 54 (3–4): 229–240. ISSN 0001-6969. MR 1096802. Zbl 0727.06011. 9. Grätzer, George (2011). Lattice Theory: Foundation. Basel: Springer. doi:10.1007/978-3-0348-0018-1. ISBN 978-3-0348-0017-4. LCCN 2011921250. MR 2768581. Zbl 1233.06001. Further reading • Gerasin, S. N., Shlyakhov, V. V., and Yakovlev, S. V. 2008. Set coverings and tolerance relations. Cybernetics and Sys. Anal. 44, 3 (May 2008), 333–340. doi:10.1007/s10559-008-9007-y • Hryniewiecki, K. 1991, Relations of Tolerance, FORMALIZED MATHEMATICS, Vol. 2, No. 
1, January–February 1991.
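The correspondence between the two definitions is easy to experiment with on a bare set (an algebra with no operations). The following Python sketch is not drawn from the sources above; the helper name maximal_blocks and the small example are ours, and the brute-force search is only suitable for very small sets. It computes the cover A/~ as the maximal cliques of the graph (A, ~) and then recovers the original relation from that cover.

from itertools import combinations

def maximal_blocks(elements, related):
    """Compute A/~ : all maximal subsets C with c ~ d for every c, d in C.

    `related(a, b)` is assumed to be a reflexive, symmetric predicate
    (a tolerance on a bare set).  Brute force, for tiny examples only.
    """
    blocks = []
    universe = list(elements)
    # Enumerate subsets from largest to smallest; keep the maximal compatible ones.
    for size in range(len(universe), 0, -1):
        for subset in combinations(universe, size):
            if all(related(a, b) for a, b in combinations(subset, 2)):
                if not any(set(subset) <= block for block in blocks):
                    blocks.append(set(subset))
    return blocks

# Example tolerance on {0, 1, 2, 3}: 0 ~ 1, 1 ~ 2, 2 ~ 3 (plus reflexivity
# and symmetry), but 0 is not related to 2, so the relation is not transitive.
pairs = {(0, 1), (1, 2), (2, 3)}

def related(a, b):
    return a == b or (a, b) in pairs or (b, a) in pairs

cover = maximal_blocks(range(4), related)
print(cover)  # [{0, 1}, {1, 2}, {2, 3}] -- overlapping blocks, not a partition

# Recovering the relation from the cover: a ~ b iff some block contains both.
def recovered(a, b):
    return any(a in block and b in block for block in cover)

assert all(recovered(a, b) == related(a, b) for a in range(4) for b in range(4))

Because the example relation is not transitive, the maximal blocks overlap instead of forming a partition, matching the remark above that a tolerance is transitive as a binary relation exactly when it is a partition as a cover.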
Tolerant sequence In mathematical logic, a tolerant sequence is a sequence $T_{1}$,...,$T_{n}$ of formal theories such that there are consistent extensions $S_{1}$,...,$S_{n}$ of these theories with each $S_{i+1}$ interpretable in $S_{i}$. Tolerance naturally generalizes from sequences of theories to trees of theories. Weak interpretability can be shown to be a special, binary case of tolerance. This concept, together with its dual concept of cotolerance, was introduced by Japaridze in 1992, who also proved that, for Peano arithmetic and any stronger theories with effective axiomatizations, tolerance is equivalent to $\Pi _{1}$-consistency. See also • Interpretability • Cointerpretability • Interpretability logic References • G. Japaridze, The logic of linear tolerance. Studia Logica 51 (1992), pp. 249–277. • G. Japaridze, A generalized notion of weak interpretability and the corresponding logic. Annals of Pure and Applied Logic 61 (1993), pp. 113–160. • G. Japaridze and D. de Jongh, The logic of provability. Handbook of Proof Theory. S. Buss, ed. Elsevier, 1998, pp. 476–546.
Toom–Cook multiplication

Toom–Cook, sometimes known as Toom-3, named after Andrei Toom, who introduced the new algorithm with its low complexity, and Stephen Cook, who cleaned the description of it, is a multiplication algorithm for large integers. Given two large integers, a and b, Toom–Cook splits up a and b into k smaller parts each of length l, and performs operations on the parts. As k grows, one may combine many of the multiplication sub-operations, thus reducing the overall computational complexity of the algorithm. The multiplication sub-operations can then be computed recursively using Toom–Cook multiplication again, and so on. Although the terms "Toom-3" and "Toom–Cook" are sometimes incorrectly used interchangeably, Toom-3 is only a single instance of the Toom–Cook algorithm, where k = 3. Toom-3 reduces 9 multiplications to 5, and runs in Θ(n^(log 5/log 3)) ≈ Θ(n^1.46). In general, Toom-k runs in Θ(c(k) n^e), where e = log(2k − 1)/log(k), n^e is the time spent on sub-multiplications, and c is the time spent on additions and multiplication by small constants.[1] The Karatsuba algorithm is equivalent to Toom-2, where the number is split into two smaller ones. It reduces 4 multiplications to 3 and so operates at Θ(n^(log 3/log 2)) ≈ Θ(n^1.58). Ordinary long multiplication is equivalent to Toom-1, with complexity Θ(n^2). Although the exponent e can be set arbitrarily close to 1 by increasing k, the function c grows very rapidly.[1][2] The growth rate for mixed-level Toom–Cook schemes was still an open research problem in 2005.[3] An implementation described by Donald Knuth achieves the time complexity Θ(n · 2^(√(2 log n)) · log n).[4] Due to its overhead, Toom–Cook is slower than long multiplication with small numbers, and it is therefore typically used for intermediate-size multiplications, before the asymptotically faster Schönhage–Strassen algorithm (with complexity Θ(n log n log log n)) becomes practical. Toom first described this algorithm in 1963, and Cook published an improved (asymptotically equivalent) algorithm in his PhD thesis in 1966.[5]

Details

This section discusses exactly how to perform Toom-k for any given value of k, and is a simplification of a description of Toom–Cook polynomial multiplication described by Marco Bodrato.[6] The algorithm has five main steps:
1. Splitting
2. Evaluation
3. Pointwise multiplication
4. Interpolation
5. Recomposition
In a typical large integer implementation, each integer is represented as a sequence of digits in positional notation, with the base or radix set to some (typically large) value b; for this example we use b = 10000, so that each digit corresponds to a group of four decimal digits (in a computer implementation, b would typically be a power of 2 instead). Say the two integers being multiplied are:
m = 1234567890123456789012
n = 987654321987654321098.
These are much smaller than would normally be processed with Toom–Cook (grade-school multiplication would be faster) but they will serve to illustrate the algorithm.

Splitting

In Toom-k, we want to split the factors into k parts. The first step is to select the base B = b^i, such that the number of digits of both m and n in base B is at most k (e.g., 3 in Toom-3). A typical choice for i is given by:
$i=\max \left\{\left\lfloor {\frac {\left\lfloor \log _{b}m\right\rfloor }{k}}\right\rfloor ,\left\lfloor {\frac {\left\lfloor \log _{b}n\right\rfloor }{k}}\right\rfloor \right\}+1.$
In our example we'll be doing Toom-3, so we choose B = b^2 = 10^8.
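As an illustration, the base-selection formula above can be written out directly in Python; this is only a sketch (the helper name choose_base is ours), applied to the running example's values.

import math

def choose_base(m, n, k, b=10000):
    """Pick B = b**i so that m and n each have at most k base-B digits,
    following the formula given above."""
    i = max(math.floor(math.log(m, b)) // k,
            math.floor(math.log(n, b)) // k) + 1
    return b ** i

m = 1234567890123456789012
n = 987654321987654321098
B = choose_base(m, n, k=3)
print(B)  # 100000000, i.e. b**2 = 10**8, as in the running example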
We then separate m and n into their base B digits mi, ni: ${\begin{aligned}m_{2}&{}=123456\\m_{1}&{}=78901234\\m_{0}&{}=56789012\\n_{2}&{}=98765\\n_{1}&{}=43219876\\n_{0}&{}=54321098\end{aligned}}$ We then use these digits as coefficients in degree-(k − 1) polynomials p and q, with the property that p(B) = m and q(B) = n: $p(x)=m_{2}x^{2}+m_{1}x+m_{0}=123456x^{2}+78901234x+56789012\,$ $q(x)=n_{2}x^{2}+n_{1}x+n_{0}=98765x^{2}+43219876x+54321098\,$ The purpose of defining these polynomials is that if we can compute their product r(x) = p(x)q(x), our answer will be r(B) = m × n. In the case where the numbers being multiplied are of different sizes, it's useful to use different values of k for m and n, which we'll call km and kn. For example, the algorithm "Toom-2.5" refers to Toom–Cook with km = 3 and kn = 2. In this case the i in B = bi is typically chosen by: $i=\max \left\{\left\lfloor {\frac {\left\lceil \log _{b}m\right\rceil }{k_{m}}}\right\rfloor ,\left\lfloor {\frac {\left\lceil \log _{b}n\right\rceil }{k_{n}}}\right\rfloor \right\}.$ Evaluation The Toom–Cook approach to computing the polynomial product p(x)q(x) is a commonly used one. Note that a polynomial of degree d is uniquely determined by d + 1 points (for example, a line - polynomial of degree one is specified by two points). The idea is to evaluate p(·) and q(·) at various points. Then multiply their values at these points to get points on the product polynomial. Finally interpolate to find its coefficients. Since deg(pq) = deg(p) + deg(q), we will need deg(p) + deg(q) + 1 = km + kn − 1 points to determine the final result. Call this d. In the case of Toom-3, d = 5. The algorithm will work no matter what points are chosen (with a few small exceptions, see matrix invertibility requirement in Interpolation), but in the interest of simplifying the algorithm it's better to choose small integer values like 0, 1, −1, and −2. One unusual point value that is frequently used is infinity, written ∞ or 1/0. To "evaluate" a polynomial p at infinity actually means to take the limit of p(x)/xdeg p as x goes to infinity. Consequently, p(∞) is always the value of its highest-degree coefficient (in the example above coefficient m2). In our Toom-3 example, we will use the points 0, 1, −1, −2, and ∞. These choices simplify evaluation, producing the formulas: ${\begin{array}{lrlrl}p(0)&=&m_{0}+m_{1}(0)+m_{2}(0)^{2}&=&m_{0}\\p(1)&=&m_{0}+m_{1}(1)+m_{2}(1)^{2}&=&m_{0}+m_{1}+m_{2}\\p(-1)&=&m_{0}+m_{1}(-1)+m_{2}(-1)^{2}&=&m_{0}-m_{1}+m_{2}\\p(-2)&=&m_{0}+m_{1}(-2)+m_{2}(-2)^{2}&=&m_{0}-2m_{1}+4m_{2}\\p(\infty )&=&m_{2}&&\end{array}}$ and analogously for q. In our example, the values we get are: p(0)=m0=56789012=56789012 p(1)=m0 + m1 + m2=56789012 + 78901234 + 123456=135813702 p(−1)=m0 − m1 + m2=56789012 − 78901234 + 123456=−21988766 p(−2)=m0 − 2m1 + 4m2=56789012 − 2 × 78901234 + 4 × 123456=−100519632 p(∞)=m2=123456=123456 q(0)=n0=54321098=54321098 q(1)=n0 + n1 + n2=54321098 + 43219876 + 98765=97639739 q(−1)=n0 − n1 + n2=54321098 − 43219876 + 98765=11199987 q(−2)=n0 − 2n1 + 4n2=54321098 − 2 × 43219876 + 4 × 98765=−31723594 q(∞)=n2=98765=98765. As shown, these values may be negative. 
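The splitting and evaluation steps for the running example can be sketched in a few lines of Python. This is only an illustration, not a production implementation; split and evaluate are our own helper names, and the string 'inf' stands for the point at infinity. The printed values reproduce the tables above.

def split(x, B, k):
    """Return the k base-B digits of x, least significant first."""
    digits = []
    for _ in range(k):
        x, d = divmod(x, B)
        digits.append(d)
    return digits  # [x0, x1, ..., x_{k-1}]

def evaluate(coeffs, points):
    """Evaluate the polynomial with the given coefficients (lowest degree
    first) at each point; 'inf' returns the leading coefficient."""
    values = {}
    for pt in points:
        if pt == 'inf':
            values[pt] = coeffs[-1]
        else:
            values[pt] = sum(c * pt**j for j, c in enumerate(coeffs))
    return values

B = 10**8
m = 1234567890123456789012
n = 987654321987654321098
p = split(m, B, 3)   # [56789012, 78901234, 123456]
q = split(n, B, 3)   # [54321098, 43219876, 98765]

points = [0, 1, -1, -2, 'inf']
print(evaluate(p, points))
# {0: 56789012, 1: 135813702, -1: -21988766, -2: -100519632, 'inf': 123456}
print(evaluate(q, points))
# {0: 54321098, 1: 97639739, -1: 11199987, -2: -31723594, 'inf': 98765}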
For the purpose of later explanation, it will be useful to view this evaluation process as a matrix-vector multiplication, where each row of the matrix contains powers of one of the evaluation points, and the vector contains the coefficients of the polynomial: $\left({\begin{matrix}p(0)\\p(1)\\p(-1)\\p(-2)\\p(\infty )\end{matrix}}\right)=\left({\begin{matrix}0^{0}&0^{1}&0^{2}\\1^{0}&1^{1}&1^{2}\\(-1)^{0}&(-1)^{1}&(-1)^{2}\\(-2)^{0}&(-2)^{1}&(-2)^{2}\\0&0&1\end{matrix}}\right)\left({\begin{matrix}m_{0}\\m_{1}\\m_{2}\end{matrix}}\right)=\left({\begin{matrix}1&0&0\\1&1&1\\1&-1&1\\1&-2&4\\0&0&1\end{matrix}}\right)\left({\begin{matrix}m_{0}\\m_{1}\\m_{2}\end{matrix}}\right).$ The dimensions of the matrix are d by km for p and d by kn for q. The row for infinity is always all zero except for a 1 in the last column. Faster evaluation Multipoint evaluation can be obtained faster than with the above formulas. The number of elementary operations (addition/subtraction) can be reduced. The sequence given by Bodrato[6] for Toom-3, executed here over the first operand (polynomial p) of the running example is the following: p0←m0 + m2=56789012 + 123456=56912468 p(0)=m0=56789012=56789012 p(1)=p0 + m1=56912468 + 78901234=135813702 p(−1)=p0 − m1=56912468 − 78901234=−21988766 p(−2)=(p(−1) + m2) × 2 − m0=(− 21988766 + 123456 ) × 2 − 56789012=− 100519632 p(∞)=m2=123456=123456. This sequence requires five addition/subtraction operations, one less than the straightforward evaluation. Moreover the multiplication by 4 in the calculation of p(−2) was saved. Pointwise multiplication Unlike multiplying the polynomials p(·) and q(·), multiplying the evaluated values p(a) and q(a) just involves multiplying integers — a smaller instance of the original problem. We recursively invoke our multiplication procedure to multiply each pair of evaluated points. In practical implementations, as the operands become smaller, the algorithm will switch to schoolbook long multiplication. Letting r be the product polynomial, in our example we have: r(0)=p(0)q(0)=56789012 × 54321098=3084841486175176 r(1)=p(1)q(1)=135813702 × 97639739=13260814415903778 r(−1)=p(−1)q(−1)=−21988766 × 11199987=−246273893346042 r(−2)=p(−2)q(−2)=−100519632 × −31723594=3188843994597408 r(∞)=p(∞)q(∞)=123456 × 98765=12193131840. As shown, these can also be negative. For large enough numbers, this is the most expensive step, the only step that is not linear in the sizes of m and n. Interpolation This is the most complex step, the reverse of the evaluation step: given our d points on the product polynomial r(·), we need to determine its coefficients. In other words, we want to solve this matrix equation for the vector on the right-hand side: ${\begin{aligned}\left({\begin{matrix}r(0)\\r(1)\\r(-1)\\r(-2)\\r(\infty )\end{matrix}}\right)&{}=\left({\begin{matrix}0^{0}&0^{1}&0^{2}&0^{3}&0^{4}\\1^{0}&1^{1}&1^{2}&1^{3}&1^{4}\\(-1)^{0}&(-1)^{1}&(-1)^{2}&(-1)^{3}&(-1)^{4}\\(-2)^{0}&(-2)^{1}&(-2)^{2}&(-2)^{3}&(-2)^{4}\\0&0&0&0&1\end{matrix}}\right)\left({\begin{matrix}r_{0}\\r_{1}\\r_{2}\\r_{3}\\r_{4}\end{matrix}}\right)\\&{}=\left({\begin{matrix}1&0&0&0&0\\1&1&1&1&1\\1&-1&1&-1&1\\1&-2&4&-8&16\\0&0&0&0&1\end{matrix}}\right)\left({\begin{matrix}r_{0}\\r_{1}\\r_{2}\\r_{3}\\r_{4}\end{matrix}}\right).\end{aligned}}$ This matrix is constructed the same way as the one in the evaluation step, except that it's d × d. We could solve this equation with a technique like Gaussian elimination, but this is too expensive. 
Instead, we use the fact that, provided the evaluation points were chosen suitably, this matrix is invertible (see also Vandermonde matrix), and so: ${\begin{aligned}\left({\begin{matrix}r_{0}\\r_{1}\\r_{2}\\r_{3}\\r_{4}\end{matrix}}\right)&{}=\left({\begin{matrix}1&0&0&0&0\\1&1&1&1&1\\1&-1&1&-1&1\\1&-2&4&-8&16\\0&0&0&0&1\end{matrix}}\right)^{-1}\left({\begin{matrix}r(0)\\r(1)\\r(-1)\\r(-2)\\r(\infty )\end{matrix}}\right)\\&{}=\left({\begin{matrix}1&0&0&0&0\\{\tfrac {1}{2}}&{\tfrac {1}{3}}&-1&{\tfrac {1}{6}}&-2\\-1&{\tfrac {1}{2}}&{\tfrac {1}{2}}&0&-1\\-{\tfrac {1}{2}}&{\tfrac {1}{6}}&{\tfrac {1}{2}}&-{\tfrac {1}{6}}&2\\0&0&0&0&1\end{matrix}}\right)\left({\begin{matrix}r(0)\\r(1)\\r(-1)\\r(-2)\\r(\infty )\end{matrix}}\right).\end{aligned}}$ All that remains is to compute this matrix-vector product. Although the matrix contains fractions, the resulting coefficients will be integers — so this can all be done with integer arithmetic, just additions, subtractions, and multiplication/division by small constants. A difficult design challenge in Toom–Cook is to find an efficient sequence of operations to compute this product; one sequence given by Bodrato[6] for Toom-3 is the following, executed here over the running example: r0←r(0)=3084841486175176 r4←r(∞)=12193131840 r3←(r(−2) − r(1))/3=(3188843994597408 − 13260814415903778)/3 =−3357323473768790 r1←(r(1) − r(−1))/2=(13260814415903778 − (−246273893346042))/2 =6753544154624910 r2←r(−1) − r(0)=−246273893346042 − 3084841486175176 =−3331115379521218 r3←(r2 − r3)/2 + 2r(∞)=(−3331115379521218 − (−3357323473768790))/2 + 2 × 12193131840 =13128433387466 r2←r2 + r1 − r4=−3331115379521218 + 6753544154624910 − 12193131840 =3422416581971852 r1←r1 − r3=6753544154624910 − 13128433387466 =6740415721237444. We now know our product polynomial r: ${\begin{array}{rrr}r(x)=&{}&3084841486175176\\&+&6740415721237444x\\&+&3422416581971852x^{2}\\&+&13128433387466x^{3}\\&+&12193131840x^{4}\end{array}}$ If we were using different km, kn, or evaluation points, the matrix and so our interpolation strategy would change; but it does not depend on the inputs and so can be hard-coded for any given set of parameters. Recomposition Finally, we evaluate r(B) to obtain our final answer. This is straightforward since B is a power of b and so the multiplications by powers of B are all shifts by a whole number of digits in base b. In the running example b = 104 and B = b2 = 108. 3084841486175176 6740415721237444 3422416581971852 13128433387466 +12193131840 1219326312467611632493760095208585886175176 And this is in fact the product of 1234567890123456789012 and 987654321987654321098. Interpolation matrices for various k Here we give common interpolation matrices for a few different common small values of km and kn. Toom-1 Toom-1 (km = kn = 1) requires 1 evaluation point, here chosen to be 0. It degenerates to long multiplication, with an interpolation matrix of the identity matrix: $\left({\begin{matrix}1\end{matrix}}\right)^{-1}=\left({\begin{matrix}1\end{matrix}}\right).$ Toom-1.5 Toom-1.5 (km = 2, kn = 1) requires 2 evaluation points, here chosen to be 0 and ∞. Its interpolation matrix is then the identity matrix: $\left({\begin{matrix}1&0\\0&1\end{matrix}}\right)^{-1}=\left({\begin{matrix}1&0\\0&1\end{matrix}}\right).$ This also degenerates to long multiplication: both coefficients of one factor are multipled by the sole coefficient of the other factor. Toom-2 Toom-2 (km = 2, kn = 2) requires 3 evaluation points, here chosen to be 0, 1, and ∞. 
It is the same as Karatsuba multiplication, with an interpolation matrix of: $\left({\begin{matrix}1&0&0\\1&1&1\\0&0&1\end{matrix}}\right)^{-1}=\left({\begin{matrix}1&0&0\\-1&1&-1\\0&0&1\end{matrix}}\right).$ Toom-2.5 Toom-2.5 (km = 3, kn = 2) requires 4 evaluation points, here chosen to be 0, 1, −1, and ∞. It then has an interpolation matrix of: $\left({\begin{matrix}1&0&0&0\\1&1&1&1\\1&-1&1&-1\\0&0&0&1\end{matrix}}\right)^{-1}=\left({\begin{matrix}1&0&0&0\\0&{\tfrac {1}{2}}&-{\tfrac {1}{2}}&-1\\-1&{\tfrac {1}{2}}&{\tfrac {1}{2}}&0\\0&0&0&1\end{matrix}}\right).$ Notes 1. Knuth, p. 296 2. Crandall & Pomerance, p. 474 3. Crandall & Pomerance, p. 536 4. Knuth, p. 302 5. Positive Results, chapter III of Stephen A. Cook: On the Minimum Computation Time of Functions. 6. Marco Bodrato. Towards Optimal Toom–Cook Multiplication for Univariate and Multivariate Polynomials in Characteristic 2 and 0. In WAIFI'07 proceedings, volume 4547 of LNCS, pages 116–133. June 21–22, 2007. author website References • D. Knuth. The Art of Computer Programming, Volume 2. Third Edition, Addison-Wesley, 1997. Section 4.3.3.A: Digital methods, pg.294. • R. Crandall & C. Pomerance. Prime Numbers – A Computational Perspective. Second Edition, Springer, 2005. Section 9.5.1: Karatsuba and Toom–Cook methods, pg.473. • M. Bodrato. Toward Optimal Toom–Cook Multiplication.... In WAIFI'07, Springer, 2007. External links • Toom–Cook 3-way multiplication from GMP documentation Number-theoretic algorithms Primality tests • AKS • APR • Baillie–PSW • Elliptic curve • Pocklington • Fermat • Lucas • Lucas–Lehmer • Lucas–Lehmer–Riesel • Proth's theorem • Pépin's • Quadratic Frobenius • Solovay–Strassen • Miller–Rabin Prime-generating • Sieve of Atkin • Sieve of Eratosthenes • Sieve of Pritchard • Sieve of Sundaram • Wheel factorization Integer factorization • Continued fraction (CFRAC) • Dixon's • Lenstra elliptic curve (ECM) • Euler's • Pollard's rho • p − 1 • p + 1 • Quadratic sieve (QS) • General number field sieve (GNFS) • Special number field sieve (SNFS) • Rational sieve • Fermat's • Shanks's square forms • Trial division • Shor's Multiplication • Ancient Egyptian • Long • Karatsuba • Toom–Cook • Schönhage–Strassen • Fürer's Euclidean division • Binary • Chunking • Fourier • Goldschmidt • Newton-Raphson • Long • Short • SRT Discrete logarithm • Baby-step giant-step • Pollard rho • Pollard kangaroo • Pohlig–Hellman • Index calculus • Function field sieve Greatest common divisor • Binary • Euclidean • Extended Euclidean • Lehmer's Modular square root • Cipolla • Pocklington's • Tonelli–Shanks • Berlekamp • Kunerth Other algorithms • Chakravala • Cornacchia • Exponentiation by squaring • Integer square root • Integer relation (LLL; KZ) • Modular exponentiation • Montgomery reduction • Schoof • Trachtenberg system • Italics indicate that algorithm is for numbers of special forms
Tom Brown (mathematician)

Thomas Craig Brown (born 1938) is an American-Canadian mathematician, Ramsey theorist, and Professor Emeritus at Simon Fraser University.[1]

Tom Brown
Born: Thomas Craig Brown, 1938, Portland, Oregon, U.S.
Alma mater: • Reed College (BS) • Washington University in St. Louis (Ph.D.)
Known for: • Brown's Lemma
Scientific career
Fields: • Mathematics • Ramsey Theory
Institutions: Simon Fraser University
Thesis: On Semigroups which are Unions of Periodic Groups (1964)
Doctoral advisor: Earl Edwin Lazerson

Collaborations

As a mathematician, Brown's primary research focus is the field of Ramsey theory. His Ph.D. thesis was On Semigroups which are Unions of Periodic Groups.[2] In 1963, as a graduate student, he showed that if the positive integers are finitely colored, then some color class is piecewise syndetic.[3]
In A Density Version of a Geometric Ramsey Theorem,[4] he and Joe P. Buhler show that "for every $\varepsilon >0$ there is an $n(\varepsilon )$ such that if $n=dim(V)\geq n(\varepsilon )$ then any subset of $V$ with more than $\varepsilon |V|$ elements must contain 3 collinear points", where $V$ is an $n$-dimensional affine space over the field with $p^{d}$ elements and $p\neq 2$.
In Descriptions of the characteristic sequence of an irrational,[5] Brown discusses the following idea: let $\alpha $ be a positive irrational real number; the characteristic sequence of $\alpha $ is $f(\alpha )=f_{1}f_{2}\ldots $, where $f_{n}=[(n+1)\alpha ]-[n\alpha ]$. From here he discusses "the various descriptions of the characteristic sequence of α which have appeared in the literature" and refines this description to "obtain a very simple derivation of an arithmetic expression for $[n\alpha ]$." He then gives some conclusions regarding the conditions on $[n\alpha ]$ that are equivalent to $f_{n}=1$.
He has collaborated with Paul Erdős, including Quasi-Progressions and Descending Waves[6] and Quantitative Forms of a Theorem of Hilbert.[7]

References
1. "Tom Brown Professor Emeritus at SFU". Retrieved 10 November 2020.
2. Jensen, Gary R.; Krantz, Steven G. (2006). 150 Years of Mathematics at Washington University in St. Louis. American Mathematical Society. p. 15. ISBN 978-0-8218-3603-3.
3. Brown, T. C. (1971). "An interesting combinatorial method in the theory of locally finite semigroups" (PDF). Pacific Journal of Mathematics. 36 (2): 285–289. doi:10.2140/pjm.1971.36.285.
4. Brown, T. C.; Buhler, J. P. (1982). "A Density version of a Geometric Ramsey Theorem" (PDF). Journal of Combinatorial Theory. Series A. 32: 20–34. doi:10.1016/0097-3165(82)90062-0.
5. Brown, T. C. (1993). "Descriptions of the Characteristic Sequence of an Irrational" (PDF). Canadian Mathematical Bulletin. 36: 15–21. doi:10.4153/CMB-1993-003-6.
6. Brown, T. C.; Erdős, P.; Freedman, A. R. (1990). "Quasi-Progressions and Descending Waves". Journal of Combinatorial Theory. Series A. 53: 81–95. doi:10.1016/0097-3165(90)90021-N.
7. Brown, T. C.; Chung, F. R. K.; Erdős, P. (1985). "Quantitative Forms of a Theorem of Hilbert" (PDF). Journal of Combinatorial Theory. Series A. 38 (2): 210–216. doi:10.1016/0097-3165(85)90071-8.

External links
• Archive of papers published by Tom Brown
• 2003 INTEGERS conference dedicated to Tom Brown
Thomas J. Laffey Thomas J. Laffey (born December 1943) is an Irish mathematician known for his contributions to group theory and matrix theory. His entire career has been spent at University College Dublin (UCD), where he served two terms as head of the school of mathematics. While he formally retired in 2009, he remains active in research and publishing. The journal Linear Algebra and Its Applications had a special issue (April 2009) to mark his 65th birthday.[1] He received the Hans Schneider Prize in 2013. In May 2019 at UCD, the International Conference on Linear Algebra and Matrix Theory held a celebration to honor Professor Laffey on his 75th birthday.[2] Education and career Tom Laffey was born in Cross, County Mayo.[3] His parents were farmers, and the family had no tradition of education. His own early schooling was entirely through the Irish language, and in mathematics and physics he was more or less self-taught. The technical books he had to study were in English, which at first he found challenging. Nobody at his school had attempted honours Leaving Cert maths before. However, he got one of the highest marks in the country in the 1961 Leaving Certificate mathematics examination, thereby earning a state scholarship to university.[3] He attended University College Galway, earning bachelor's (1964) and master's (1965) degrees in mathematical science and also winning a National University of Ireland Traveling Studentship Prize. In 1968 he was awarded the D.Phil. by the University of Sussex for a thesis on "Structure Theorems for Linear Groups" done under advisor Walter Ledermann.[4] He immediately joined the staff at University College Dublin, from which he officially retired in 2009, but he has continued to publish regularly. His research has focussed on group theory, and later linear algebra too, and he has supervised five Ph.D. students. He has also played a significant role in the establishment of the Irish Mathematical Olympiad, and had frequently served the BT Young Scientists Exhibition as a judge and reviewer. Early in his career, he developed a strong interest in matrix theory, due to the influence of Olga Taussky-Todd, with whom he often corresponded. This was cemented by a 1972–3 sabbatical spent at Northern Illinois University.[5] He received the Hans Schneider Prize in 2013 in recognition of his constructive solution to the NIEP (Non-negative inverse eigenvalue problem) for non-zero spectra.[6] Selected publications • 2018   "The Diagonalizable Nonnegative Inverse Eigenvalue Problem" (with Cronin, A.). Special Matrices, 6 (1) 273–281. • 2015   "On a conjecture of Deveci and Karaduman". Linear Algebra and its Applications, Vol 471, 15 April 2015, Pages 569–574. • 2012   "A constructive version of the Boyle–Handelman theorem on the spectra of nonnegative matrices". Linear Algebra Appl., Volume 436, Issue 6, 15 March 2012, pages 1701–1709. • 2006   "Nonnegative realization of spectra having negative real parts" (with H. Šmigoc). Linear Algebra Appl., 416 (1) 148–159. • 2004/2005   "Perturbing non-real eigenvalues of nonnegative real matrices". J. Linear Algebra, 12 73–76 (electronic). • 1999   "A characterization of trace zero nonnegative 5 × 5 matrices". (with E. Meehan) Linear Algebra Appl., 302–3. • 1996   "The real and the symmetric nonnegative inverse eigenvalue problems are different" (with C.R. Johnson, R. Loewy). Proc. Amer. Math. Soc, 124 (12) 3647–3651. References 1. Special Issue in Honor of Thomas J. 
Laffey Linear Algebra and its Applications Vol 430, Issue 7, pp. 1725–1876, 1 April 2009 2. International Conference on Linear Algebra and Matrix Theory: In honour of Professor Thomas J Laffey on the occasion of his 75th birthday University College Dublin 3. An Interview with Professor Thomas J. Laffey by Gary McGuire, Irish Math. Soc. Bulletin 63 (2009), 47–61 4. Thomas J. Laffey at the Mathematics Genealogy Project 5. An Interview with Thomas J. Laffey by J. F. Queiró, Bulletin of International Center for Mathematical Sciences 13 (2002), 17–23 6. Laffey T.J. A constructive version of the Boyle–Handelman theorem on the spectra of nonnegative matrices Linear Algebra and its Applications, Volume 436, Issue 6, 15 March 2012, pages 1701–1709 External links • Thomas J. Laffey at the Mathematics Genealogy Project
Tomahawk (geometry) The tomahawk is a tool in geometry for angle trisection, the problem of splitting an angle into three equal parts. The boundaries of its shape include a semicircle and two line segments, arranged in a way that resembles a tomahawk, a Native American axe.[1][2] The same tool has also been called the shoemaker's knife,[3] but that name is more commonly used in geometry to refer to a different shape, the arbelos (a curvilinear triangle bounded by three mutually tangent semicircles).[4] Description The basic shape of a tomahawk consists of a semicircle (the "blade" of the tomahawk), with a line segment the length of the radius extending along the same line as the diameter of the semicircle (the tip of which is the "spike" of the tomahawk), and with another line segment of arbitrary length (the "handle" of the tomahawk) perpendicular to the diameter. In order to make it into a physical tool, its handle and spike may be thickened, as long as the line segment along the handle continues to be part of the boundary of the shape. Unlike a related trisection using a carpenter's square, the other side of the thickened handle does not need to be made parallel to this line segment.[1] In some sources a full circle rather than a semicircle is used,[5] or the tomahawk is also thickened along the diameter of its semicircle,[6] but these modifications make no difference to the action of the tomahawk as a trisector. Trisection To use the tomahawk to trisect an angle, it is placed with its handle line touching the apex of the angle, with the blade inside the angle, tangent to one of the two rays forming the angle, and with the spike touching the other ray of the angle. One of the two trisecting lines then lies on the handle segment, and the other passes through the center point of the semicircle.[1][6] If the angle to be trisected is too sharp relative to the length of the tomahawk's handle, it may not be possible to fit the tomahawk into the angle in this way, but this difficulty may be worked around by repeatedly doubling the angle until it is large enough for the tomahawk to trisect it, and then repeatedly bisecting the trisected angle the same number of times as the original angle was doubled.[2] If the apex of the angle is labeled A, the point of tangency of the blade is B, the center of the semicircle is C, the top of the handle is D, and the spike is E, then triangles △ACD and △ADE are both right triangles with a shared base and equal height, so they are congruent triangles. Because the sides AB and BC of triangle △ABC are respectively a tangent and a radius of the semicircle, they are at right angles to each other and △ABC is also a right triangle; it has the same hypotenuse as △ACD and the same side lengths BC = CD, so again it is congruent to the other two triangles, showing that the three angles formed at the apex are equal.[5][6] Although the tomahawk may itself be constructed using a compass and straightedge,[7] and may be used to trisect an angle, it does not contradict Pierre Wantzel's 1837 theorem that arbitrary angles cannot be trisected by compass and unmarked straightedge alone.[8] The reason for this is that placing the constructed tomahawk into the required position is a form of neusis that is not allowed in compass and straightedge constructions.[9] History The inventor of the tomahawk is unknown,[1][10] but the earliest references to it come from 19th-century France. 
It dates back at least as far as 1835, when it appeared in a book by Claude Lucien Bergery, Géométrie appliquée à l'industrie, à l'usage des artistes et des ouvriers (3rd edition).[1] Another early publication of the same trisection was made by Henri Brocard in 1877;[11] Brocard in turn attributes its invention to an 1863 memoir by French naval officer Pierre-Joseph Glotin.[12][13][14] References 1. Yates, Robert C. (1941), "The Trisection Problem, Chapter III: Mechanical trisectors", National Mathematics Magazine, 15 (6): 278–293, doi:10.2307/3028413, JSTOR 3028413, MR 1569903. 2. Gardner, Martin (1975), Mathematical Carnival: from penny puzzles, card shuffles and tricks of lightning calculators to roller coaster rides into the fourth dimension, Knopf, pp. 262–263. 3. Dudley, Underwood (1996), The Trisectors, MAA Spectrum (2nd ed.), Cambridge University Press, pp. 14–16, ISBN 9780883855140. 4. Alsina, Claudi; Nelsen, Roger B. (2010), "9.4 The shoemaker's knife and the salt cellar", Charming Proofs: A Journey Into Elegant Mathematics, Dolciani Mathematical Expositions, vol. 42, Mathematical Association of America, pp. 147–148, ISBN 9780883853481. 5. Meserve, Bruce E. (1982), Fundamental Concepts of Algebra, Courier Dover Publications, p. 244, ISBN 9780486614700. 6. Isaacs, I. Martin (2009), Geometry for College Students, Pure and Applied Undergraduate Texts, vol. 8, American Mathematical Society, pp. 209–210, ISBN 9780821847947. 7. Eves, Howard Whitley (1995), College Geometry, Jones & Bartlett Learning, p. 191, ISBN 9780867204759. 8. Wantzel, L. (1837), "Recherches sur les moyens de reconnaître si un Problème de Géométrie peut se résoudre avec la règle et le compas", Journal de Mathématiques Pures et Appliquées (in French), 1 (2): 366–372. 9. The word "neusis" is described by La Nave, Federica; Mazur, Barry (2002), "Reading Bombelli", The Mathematical Intelligencer, 24 (1): 12–21, doi:10.1007/BF03025306, MR 1889932, S2CID 189888034 as meaning "a family of constructions dependent upon a single parameter" in which, as the parameter varies, some combinatorial change in the construction occurs at the desired parameter value. La Nave and Mazur describe other trisections than the tomahawk, but the same description applies here: a tomahawk placed with its handle on the apex, parameterized by the position of the spike on its ray, gives a family of constructions in which the relative positions of the blade and its ray change as the spike is placed at the correct point. 10. Aaboe, Asger (1997), Episodes from the Early History of Mathematics, New Mathematical Library, vol. 13, Mathematical Association of America, p. 87, ISBN 9780883856130. 11. Brocard, H. (1877), "Note sur la division mécanique de l'angle", Bulletin de la Société Mathématique de France (in French), 5: 43–47. 12. Glotin (1863), "De quelques moyens pratiques de diviser les angles en parties égales", Mémoires de la Société des Sciences physiques et naturelles de Bordeaux (in French), 2: 253–278. 13. George E. Martin (1998), "Preface", Geometric Constructions, Springer 14. Dudley (1996) incorrectly writes these names as Bricard and Glatin. External links • Trisection using special tools: "Tomahawk", Takaya Iwamoto, 2006, featuring a tomahawk tool made from transparent vinyl and comparisons for accuracy against other trisectors • Weisstein, Eric W., "Tomahawk", MathWorld • Construction heptagon with tomahawk, animation
Tomasz Mrowka Tomasz Mrowka (born September 8, 1961) is an American mathematician specializing in differential geometry and gauge theory. He is the Singer Professor of Mathematics and former head of the Department of Mathematics at the Massachusetts Institute of Technology. Tomasz Mrowka Mrowka at Aarhus University, 2011. Born (1961-09-08) September 8, 1961 State College, Pennsylvania, US NationalityAmerican Alma mater • MIT • University of California, Berkeley Awards • Fellow, American Academy of Arts and Sciences (2007) • Veblen Prize (2007) • Doob Prize (2011) • Member, National Academy of Sciences (2015) • Leroy P. Steele Prize for Seminal Contribution to Research (2023) Scientific career FieldsMathematics InstitutionsMIT ThesisA local Mayer-Vietoris principle for Yang-Mills moduli spaces (1988) Doctoral advisorClifford Taubes Robion Kirby Doctoral studentsLarry Guth Lenhard Ng Sherry Gong Mrowka is the son of Polish mathematician Stanisław Mrówka[1] and is married to MIT mathematics professor Gigliola Staffilani.[2] Career A 1983 graduate of the Massachusetts Institute of Technology, he received the PhD from the University of California, Berkeley in 1988 under the direction of Clifford Taubes and Robion Kirby. He joined the MIT mathematics faculty as professor in 1996, following faculty appointments at Stanford University and at the California Institute of Technology (professor 1994–96).[3] At MIT, he was the Simons Professor of Mathematics from 2007–2010. Upon Isadore Singer's retirement in 2010 the name of the chair became the Singer Professor of Mathematics which Mrowka held until 2017. He was named head of the Department of Mathematics in 2014 and held that position for 3 years.[4] A prior Sloan fellow and Young Presidential Investigator, in 1994 he was an invited speaker at the International Congress of Mathematicians (ICM) in Zurich. In 2007, he received the Oswald Veblen Prize in Geometry from the AMS jointly with Peter Kronheimer, "for their joint contributions to both three- and four-dimensional topology through the development of deep analytical techniques and applications."[5] He was named a Guggenheim Fellow in 2010, and in 2011 he received the Doob Prize with Peter B. Kronheimer for their book Monopoles and Three-Manifolds (Cambridge University Press, 2007).[6][7] In 2018 he gave a plenary lecture at the ICM in Rio de Janeiro, together with Peter Kronheimer. In 2023 he was awarded the Leroy P. Steele Prize for Seminal Contribution to Research (with Peter Kronheimer).[8] He became a fellow of the American Academy of Arts & Sciences in 2007,[9] and a member of the National Academy of Sciences in 2015.[10] Research Mrowka's work combines analysis, geometry, and topology, specializing in the use of partial differential equations, such as the Yang-Mills equations from particle physics to analyze low-dimensional mathematical objects.[4] Jointly with Robert Gompf, he discovered four-dimensional models of space-time topology.[11] In joint work with Peter Kronheimer, Mrowka settled many long-standing conjectures, three of which earned them the 2007 Veblen Prize. The award citation mentions three papers that Mrowka and Kronheimer wrote together. 
The first paper in 1995 deals with Donaldson's polynomial invariants and introduced Kronheimer–Mrowka basic class, which have been used to prove a variety of results about the topology and geometry of 4-manifolds, and partly motivated Witten's introduction of the Seiberg–Witten invariants.[12] The second paper proves the so-called Thom conjecture and was one of the first deep applications of the then brand new Seiberg–Witten equations to four-dimensional topology.[13] In the third paper in 2004, Mrowka and Kronheimer used their earlier development of Seiberg–Witten monopole Floer homology to prove the Property P conjecture for knots.[14] The citation says: "The proof is a beautiful work of synthesis which draws upon advances made in the fields of gauge theory, symplectic and contact geometry, and foliations over the past 20 years."[5] In further recent work with Kronheimer, Mrowka showed that a certain subtle combinatorially-defined knot invariant introduced by Mikhail Khovanov can detect “unknottedness.”[15] References 1. W. Piotrowski, Stanisław G. Mrówka (1933–2010), Wiadom. Mat. 51 (2015), 347–348 . 2. Baker, Billy (April 28, 2008), "A life of unexpected twists takes her from farm to math department", Boston Globe. Archived by the Indian Academy of Sciences, Women in Science initiative. 3. "Tomasz Mrowka | MIT Mathematics". math.mit.edu. Retrieved September 18, 2015. 4. "Tomasz Mrowka named head of the Department of Mathematics". Retrieved September 18, 2015. 5. "2007 Veblen Prize" (PDF). American Mathematical Society. April 2007. 6. Kronheimer and Mrowka Receive 2011 Doob Prize 7. Taubes, Clifford Henry (2009). "Review of Monopoles and three-manifolds by Peter Kronheimer and Tomasz Mrowka". Bull. Amer. Math. Soc. (N.S.). 46 (3): 505–509. doi:10.1090/S0273-0979-09-01250-6. 8. Leroy P. Steele Prize for Seminal Contribution 2023 9. "Tomasz Stanislaw Mrowka". Member Directory. American Academy of Arts & Sciences. Retrieved March 9, 2020. 10. "Tomasz S. Mrowka". Member Directory. National Academy of Sciences. Retrieved March 9, 2020. 11. Gompf, Robert E.; Mrowka, Tomasz S. (July 1, 1993). "Irreducible 4-Manifolds Need not be Complex". Annals of Mathematics. Second Series. 138 (1): 61–111. doi:10.2307/2946635. JSTOR 2946635. 12. Kronheimer, Peter; Mrowka, Tomasz (1995). "Embedded surfaces and the structure of Donaldson's polynomial invariants" (PDF). J. Differential Geom. 41 (3): 573–34. doi:10.4310/jdg/1214456482. 13. Kronheimer, P. B.; Mrowka, T. S. (January 1, 1994). "The Genus of Embedded Surfaces in the Projective Plane". Mathematical Research Letters. 1 (6): 797–808. doi:10.4310/mrl.1994.v1.n6.a14. 14. Kronheimer, Peter B; Mrowka, Tomasz S (January 1, 2004). "Witten's conjecture and Property P". Geometry & Topology. 8 (1): 295–310. arXiv:math/0311489. doi:10.2140/gt.2004.8.295. S2CID 10764084. 15. Kronheimer, P. B.; Mrowka, T. S. (February 11, 2011). "Khovanov homology is an unknot-detector". Publications Mathématiques de l'IHÉS. 113 (1): 97–208. arXiv:1005.4346. doi:10.1007/s10240-010-0030-y. ISSN 0073-8301. S2CID 119586228. 
External links • Mrowka's website at MIT • Tomasz Mrowka at the Mathematics Genealogy Project
Tomasz Łuczak Tomasz Łuczak (born 13 March 1963 in Poznań)[1] is a Polish mathematician and professor at Adam Mickiewicz University in Poznań and Emory University. His main field of research is combinatorics, specifically discrete structures, such as random graphs, and their chromatic number.[2] Under the supervision of Michał Karoński, Łuczak earned his doctorate at Adam Mickiewicz University in Poznań in 1987.[3] In 1992, he was awarded the EMS Prize and in 1997 he received the prestigious Prize of the Foundation for Polish Science for his work on the theory of random discrete structures. References 1. Joseph A., Mignot F., Murat F., Prum B., Rentschler R. (1994). First European Congress of Mathematics Paris, July 6–10, 1992: Vol. 1: Invited Lectures, Nelson Thornes. 2. Łuczak, Tomasz (1991). "The chromatic number of random graphs". Combinatorica. 11 (1): 45–54. doi:10.1007/BF01375472. S2CID 189917450. 3. Tomasz Łuczak at the Mathematics Genealogy Project External links • Website at Emory University Archived 2011-05-25 at the Wayback Machine • Website at Adam Mickiewicz University
Tomoyuki Arakawa Tomoyuki Arakawa (Japanese: 荒川 知幸; born 22 May 1968) is a Japanese mathematician, mathematical physicist, and a professor at the RIMS of the Kyoto University. His research interests are representation theory and vertex algebras and he is known especially for the work in W-algebras. He obtained his PhD from the University of Nagoya in 1999. In 2018 he was invited speaker at the International Congress of Mathematicians in Rio de Janeiro.[1] He won the MSJ Autumn Prize in 2017 for his work on representation theory of W-algebras.[2] Selected publications • Arakawa, Tomoyuki (1 August 2007). "Representation theory of W-algebras". Inventiones Mathematicae. 169 (2): 219–320. doi:10.1007/s00222-007-0046-1. ISSN 1432-1297. S2CID 122047968. • Arakawa, Tomoyuki (2015). "Rationality of W-algebras: principal nilpotent cases". Annals of Mathematics. 182 (2): 565–604. arXiv:1211.7124. doi:10.4007/annals.2015.182.2.4. ISSN 0003-486X. JSTOR 24523343. S2CID 53477362. • Arakawa, Tomoyuki; Creutzig, Thomas; Linshaw, Andrew R. (1 October 2019). "W-algebras as coset vertex algebras". Inventiones Mathematicae. 218 (1): 145–195. arXiv:1801.03822. Bibcode:2019InMat.218..145A. doi:10.1007/s00222-019-00884-3. ISSN 1432-1297. S2CID 253745764. References 1. "ICM 2018 INVITED LECTURES ToC". impa.br. Retrieved 13 August 2022. 2. "The 2017 MSJ Autumn Prize". The Mathematical Society of Japan. Retrieved 13 August 2022. External links • Personal website Authority control: Academics • Google Scholar • MathSciNet • zbMATH
Tompkins–Paige algorithm The Tompkins–Paige algorithm[1] is a computer algorithm for generating all permutations of a finite set of objects. The method Let P and c be arrays of length n with 1-based indexing (i.e. the first entry of an array has index 1). The algorithm for generating all n! permutations of the set {1, 2, ..., n} is given by the following pseudocode: P ← [1, 2, ..., n]; yield P; c ← [*, 1, ..., 1]; (the first entry of c is not used) i ← 2; while i ≤ n do left-rotate the first i entries of P; (e.g. left-rotating the first 4 entries of [4, 2, 5, 3, 1] would give [2, 5, 3, 4, 1]) if c[i] < i then c[i] ← c[i] + 1; i ← 2; yield P; else c[i] ← 1; i ← i+1; In the above pseudocode, the statement "yield P" means to output or record the set of permuted indices P. If the algorithm is implemented correctly, P will be yielded exactly n! times, each with a different set of permuted indices. This algorithm is not the most efficient one among all existing permutation generation methods.[2] Not only does it have to keep track of an auxiliary counting array (c), redundant permutations are also produced and ignored (because P is not yielded after left-rotation if c[i] ≥ i) in the course of generation. For instance, when n = 4, the algorithm will first yield P = [1,2,3,4] and then generate the other 23 permutations in 40 iterations (i.e. in 17 iterations, there are redundant permutations and P is not yielded). The following lists, in the order of generation, all 41 values of P, where the parenthesized ones are redundant: P = 1234 c = *111 i=2 P = 2134 c = *211 i=2 P = (1234) c = *111 i=3 P = 2314 c = *121 i=2 P = 3214 c = *221 i=2 P = (2314) c = *121 i=3 P = 3124 c = *131 i=2 P = 1324 c = *231 i=2 P = (3124) c = *131 i=3 P = (1234) c = *111 i=4 P = 2341 c = *112 i=2 P = 3241 c = *212 i=2 P = (2341) c = *112 i=3 P = 3421 c = *122 i=2 P = 4321 c = *222 i=2 P = (3421) c = *122 i=3 P = 4231 c = *132 i=2 P = 2431 c = *232 i=2 P = (4231) c = *132 i=3 P = (2341) c = *112 i=4 P = 3412 c = *113 i=2 P = 4312 c = *213 i=2 P = (3412) c = *113 i=3 P = 4132 c = *123 i=2 P = 1432 c = *223 i=2 P = (4132) c = *123 i=3 P = 1342 c = *133 i=2 P = 3142 c = *233 i=2 P = (1342) c = *133 i=3 P = (3412) c = *113 i=4 P = 4123 c = *114 i=2 P = 1423 c = *214 i=2 P = (4123) c = *114 i=3 P = 1243 c = *124 i=2 P = 2143 c = *224 i=2 P = (1243) c = *124 i=3 P = 2413 c = *134 i=2 P = 4213 c = *234 i=2 P = (2413) c = *134 i=3 P = (4123) c = *114 i=4 P = (1234) c = *111 i=5 References 1. Tompkins, C. (1956). "Machine attacks on problems whose variables are permutations". Proc. Symposium in Appl. Math., Numerical Analysis. Vol. 6. McGraw–Hill, Inc., N.Y. pp. 195–211. 2. Sedgewick, Robert (1977). "Permutation Generation Methods". Computing Surveys. 9 (2): 137–164. doi:10.1145/356689.356692.
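For concreteness, the pseudocode above can be transcribed almost line for line into a Python generator. This is only an illustrative sketch (names such as tompkins_paige are ours, and the auxiliary array c is padded so that its indexing matches the 1-based pseudocode); it reproduces the order of permutations shown in the n = 4 listing above.

def tompkins_paige(n):
    """Yield all n! permutations of 1..n, following the pseudocode above."""
    P = list(range(1, n + 1))
    yield tuple(P)
    c = [None] + [1] * n          # index 0 is padding; c[1] is never used
    i = 2
    while i <= n:
        P[:i] = P[1:i] + P[:1]    # left-rotate the first i entries of P
        if c[i] < i:
            c[i] += 1
            i = 2
            yield tuple(P)        # only non-redundant rotations are yielded
        else:
            c[i] = 1
            i += 1

perms = list(tompkins_paige(4))
print(len(perms))   # 24
print(perms[:4])    # [(1, 2, 3, 4), (2, 1, 3, 4), (2, 3, 1, 4), (3, 2, 1, 4)]
assert len(set(perms)) == 24   # each permutation appears exactly once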
Tonnetz In musical tuning and harmony, the Tonnetz (German for 'tone network') is a conceptual lattice diagram representing tonal space first described by Leonhard Euler in 1739.[1] Various visual representations of the Tonnetz can be used to show traditional harmonic relationships in European classical music. History through 1900 The Tonnetz originally appeared in Leonhard Euler's 1739 Tentamen novae theoriae musicae ex certissismis harmoniae principiis dilucide expositae. Euler's Tonnetz, pictured at left, shows the triadic relationships of the perfect fifth and the major third: at the top of the image is the note F, and to the left underneath is C (a perfect fifth above F), and to the right is A (a major third above F). The Tonnetz was rediscovered in 1858 by Ernst Naumann, and was disseminated in an 1866 treatise of Arthur von Oettingen. Oettingen and the influential musicologist Hugo Riemann (not to be confused with the mathematician Bernhard Riemann) explored the capacity of the space to chart harmonic motion between chords and modulation between keys. Similar understandings of the Tonnetz appeared in the work of many late-19th century German music theorists.[2] Oettingen and Riemann both conceived of the relationships in the chart being defined through just intonation, which uses pure intervals. One can extend out one of the horizontal rows of the Tonnetz indefinitely, to form a never-ending sequence of perfect fifths: F-C-G-D-A-E-B-F♯-C♯-G♯-D♯-A♯-E♯-B♯-F𝄪-C𝄪-G𝄪- (etc.) Starting with F, after 12 perfect fifths, one reaches E♯. Perfect fifths in just intonation are slightly larger than the compromised fifths used in equal temperament tuning systems more common in the present. This means that when one stacks 12 fifths starting from F, the E♯ we arrive at will not be seven octaves above the F we started with. Oettingen and Riemann's Tonnetz thus extended on infinitely in every direction without actually repeating any pitches. In the twentieth century, composer-theorists such as Ben Johnston and James Tenney continued to developed theories and applications involving just-intoned Tonnetze. The appeal of the Tonnetz to 19th-century German theorists was that it allows spatial representations of tonal distance and tonal relationships. For example, looking at the dark blue A minor triad in the graphic at the beginning of the article, its parallel major triad (A-C♯-E) is the triangle right below, sharing the vertices A and E. The relative major of A minor, C major (C-E-G) is the upper-right adjacent triangle, sharing the C and the E vertices. The dominant triad of A minor, E major (E-G♯-B) is diagonally across the E vertex, and shares no other vertices. One important point is that every shared vertex between a pair of triangles is a shared pitch between chords - the more shared vertices, the more shared pitches the chord will have. This provides a visualization of the principle of parsimonious voice-leading, in which motions between chords are considered smoother when fewer pitches change. 
This principle is especially important in analyzing the music of late-19th century composers like Wagner, who frequently avoided traditional tonal relationships.[2] Twentieth-century reinterpretation Recent research by Neo-Riemannian music theorists David Lewin, Brian Hyer, and others, have revived the Tonnetz to further explore properties of pitch structures.[2] Modern music theorists generally construct the Tonnetz using equal temperament,[2] and using pitch-classes, which make no distinction between octave transpositions of a pitch. Under equal temperament, the never-ending series of ascending fifths mentioned earlier becomes a cycle. Neo-Riemannian theorists typically assume enharmonic equivalence (in other words, A♭ = G♯), and so the two-dimensional plane of the 19th-century Tonnetz cycles in on itself in two different directions, and is mathematically isomorphic to a torus. Theorists have studied the structure of this new cyclical version using mathematical group theory. Neo-Riemannian theorists have also used the Tonnetz to visualize non-tonal triadic relationships. For example, the diagonal going up and to the left from C in the diagram at the beginning of the article forms a division of the octave in three major thirds: C-A♭-E-C (the E is actually an F♭, and the final C a D♭♭). Richard Cohn argues that while a sequence of triads built on these three pitches (C major, A♭ major, and E major) cannot be adequately described using traditional concepts of functional harmony, this cycle has smooth voice leading and other important group properties which can be easily observed on the Tonnetz.[3] Similarities to other graphical systems The harmonic table note layout is a note layout that is topologically equivalent to the Tonnetz, and is used on several music instruments that allow playing major and minor chords with a single finger. The Tonnetz can be overlayed on the Wicki–Hayden note layout, where the major second can be found half way the major third. The Tonnetz is the dual graph of Schoenberg's chart of the regions,[4] and of course vice versa. Research into music cognition has demonstrated that the human brain uses a "chart of the regions" to process tonal relationships.[5] See also • Neo-Riemannian theory • Musical set-theory • Riemannian theory • Transformational theory • Tuning theory • Treatise on Harmony References 1. Euler, Leonhard (1739). Tentamen novae theoriae musicae ex certissismis harmoniae principiis dilucide expositae (in Latin). Saint Petersburg Academy. p. 147. 2. Cohn, Richard (1998). "Introduction to Neo-Riemannian Theory: A Survey and a Historical Perspective". Journal of Music Theory. 42 (2 Autumn): 167–180. doi:10.2307/843871. JSTOR 843871. 3. Cohn, Richard (March 1996). "Maximally Smooth Cycles, Hexatonic Systems, and the Analysis of Late-Romantic Triadic Progressions". Music Analysis. 15 (1): 9–40. doi:10.2307/854168. JSTOR 854168. 4. Schoenberg, Arnold; Stein, L. (1969). Structural Functions of Harmony. New York: Norton. ISBN 978-0-393-00478-6. 5. Janata, Petr; Jeffrey L. Birk; John D. Van Horn; Marc Leman; Barbara Tillmann; Jamshed J. Bharucha (December 2002). "The Cortical Topography of Tonal Structures Underlying Western Music". Science. 298 (5601): 2167–2170. Bibcode:2002Sci...298.2167J. doi:10.1126/science.1076262. PMID 12481131. S2CID 3031759. Further reading • Johnston, Ben (2006). "Rational Structure in Music", "Maximum Clarity" and Other Writings on Music, edited by Bob Gilmore. Urbana: University of Illinois Press. ISBN 0-252-03098-2. 
• Wannamaker, Robert, The Music of James Tenney, Volume 1: Contexts and Paradigms (University of Illinois Press, 2021), 155-65. External links • Music harmony and donuts by Paul Dysart • Charting Enharmonicism on the Just-Intonation Tonnetz by Robert T. Kelley • Midi-Instrument based on Tonnetz (Melodic Table) by The Shape of Music • Midi-Instrument based on Tonnetz (Harmonic Table) by C-Thru-Music • TonnetzViz (interactive visualization) by Ondřej Cífka; a modified version by Anton Salikhmetov
Tonal system The tonal system is a base 16 system of notation (predating the widespread use of hexadecimal in computing), arithmetic, and metrology proposed in 1859 by John W. Nystrom.[1] In addition to new weights and measures, his proposal included a new calendar with sixteen months, a new system of coinage, and a clock with sixteen major divisions of the day (called tims). Nystrom advocated his system thus: I am not afraid, or do not hesitate, to advocate a binary system of arithmetic and metrology. I know I have nature on my side; if I do not succeed to impress upon you its utility and great importance to mankind, it will reflect that much less credit upon our generation, upon scientific men and philosophers.[2] Names for the numbers He proposed names for the digits, calling zero "noll" and counting (from one to sixteen): "An,  de,  ti,  go,  su,  by,  ra,  me,  ni,  ko,  hu,  vy,  la,  po,  fy,  ton." (Therefore, tonal system.) Because hexadecimal requires sixteen digits, Nystrom supplemented the existing decimal digits 0 through 9 with his own invented characters (shown on his clockface above) and changed the value of 9 to ten. Later, the hexadecimal notation overcame this same obstacle by using the digits 0 through 9 followed by the letters A through F. The numbers 1116 and 1216 would be said "tonan", "tonde", etc. The table below shows Nystrom's names for successive powers of 1016. Base 16 Number Tonal Name Base 10 Equivalent 10 ton 16 100 san 256 1000 mill 4,096 1,0000 bong 65,536 10,0000 tonbong 1,048,576 100,0000 sanbong 16,777,216 1000,0000 millbong 268,435,456 1,0000,0000 tam 4,294,967,296 1,0000,0000,0000 song 16^12 1,0000,0000,0000,0000 tran 16^16 1,0000,0000,0000,0000,0000 bongtran 16^20 Thus, the hexadecimal number 1510,0000 would be "mill-susanton-bong". This first hexadecimal system, proposed in the 19th century, has thus far not achieved widespread usage. Although Nystrom did not propose a language for tonal fractions, his nomenclature for units of measure does provide one: the name of a power of sixteen before the base unit's name multiplies it by that number, but a power of sixteen after the base unit's name divides it by that number. Thus, de timtons means 1⁄8 tim. Geography For latitudes he put 0 at the North Pole, 4 at the equator and 8 at the South Pole. The units were called tims. They are the same as the colatitudes measured in turns times 16. Tonal (in tims) ISO 6709 Colatitude (in degrees) Colatitude (in turns) 0 090 0° 0 1 67.5 2 045 45° 0.125 3 022.5 4 000 90° 0.25 5 −22.5 6 −045 135° 0.375 7 −67.5 8 −090 180° 0.5 Music In his book he made a reference to music notation, where binary division is already in use for time. He also discussed the problem of pitch inflation, which he proposed to solve by setting the A below middle C to a frequency of san per timmill (194 Hz). References 1. Nystrom, John W. Project of a New System of Arithmetic, Weight, Measure and Coins, Proposed to be Called the Tonal System, with Sixteen to the Base 2. (Quotation: John W. Nystrom, ca. 1863) The Art of Computer Programming section 4.1, Donald Knuth.
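The digit names and the tim convention lend themselves to a small illustration. The following Python sketch is not part of Nystrom's proposal itself: it spells an integer digit by digit with his names (his full positional scheme with ton, san, mill, bong is more elaborate) and converts geographic latitude to tims according to the table above; the helper names are ours.

# Nystrom's sixteen digit names, with 0 as "noll" (16 itself would be "ton").
TONAL_NAMES = ["noll", "an", "de", "ti", "go", "su", "by", "ra",
               "me", "ni", "ko", "hu", "vy", "la", "po", "fy"]

def tonal_digit_names(x):
    """Spell out the base-16 digits of a non-negative integer, digit by digit,
    with Nystrom's digit names."""
    names = []
    while True:
        x, d = divmod(x, 16)
        names.append(TONAL_NAMES[d])
        if x == 0:
            break
    return "-".join(reversed(names))

print(tonal_digit_names(0x9A))   # "ni-ko"  (hexadecimal 9A, decimal 154)

def degrees_to_tims(latitude_deg):
    """Convert latitude in degrees to tims: 0 at the North Pole,
    4 at the equator, 8 at the South Pole (colatitude in turns, times 16)."""
    colatitude = 90.0 - latitude_deg      # 0..180 degrees
    return colatitude / 360.0 * 16

print(degrees_to_tims(90))     # 0.0  (North Pole)
print(degrees_to_tims(0))      # 4.0  (equator)
print(degrees_to_tims(-22.5))  # 5.0  (matches the table above)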
Tonelli–Shanks algorithm The Tonelli–Shanks algorithm (referred to by Shanks as the RESSOL algorithm) is used in modular arithmetic to solve for r in a congruence of the form r2 ≡ n (mod p), where p is a prime: that is, to find a square root of n modulo p. Tonelli–Shanks cannot be used for composite moduli: finding square roots modulo composite numbers is a computational problem equivalent to integer factorization.[1] An equivalent, but slightly more redundant version of this algorithm was developed by Alberto Tonelli[2][3] in 1891. The version discussed here was developed independently by Daniel Shanks in 1973, who explained: My tardiness in learning of these historical references was because I had lent Volume 1 of Dickson's History to a friend and it was never returned.[4] According to Dickson,[3] Tonelli's algorithm can take square roots of x modulo prime powers pλ apart from primes. Core ideas Given a non-zero $n$ and an odd prime $p$, Euler's criterion tells us that $n$ has a square root (i.e., $n$ is a quadratic residue) if and only if: $n^{\frac {p-1}{2}}\equiv 1{\pmod {p}}$. In contrast, if a number $z$ has no square root (is a non-residue), Euler's criterion tells us that: $z^{\frac {p-1}{2}}\equiv -1{\pmod {p}}$. It is not hard to find such $z$, because half of the integers between 1 and $p-1$ have this property. So we assume that we have access to such a non-residue. By (normally) dividing by 2 repeatedly, we can write $p-1$ as $Q2^{S}$, where $Q$ is odd. Note that if we try $R\equiv n^{\frac {Q+1}{2}}{\pmod {p}}$, then $R^{2}\equiv n^{Q+1}=(n)(n^{Q}){\pmod {p}}$. If $t\equiv n^{Q}\equiv 1{\pmod {p}}$, then $R$ is a square root of $n$. Otherwise, for $M=S$, we have $R$ and $t$ satisfying: • $R^{2}\equiv nt{\pmod {p}}$; and • $t$ is a $2^{M-1}$-th root of 1 (because $t^{2^{M-1}}=t^{2^{S-1}}\equiv n^{Q2^{S-1}}=n^{\frac {p-1}{2}}$). If, given a choice of $R$ and $t$ for a particular $M$ satisfying the above (where $R$ is not a square root of $n$), we can easily calculate another $R$ and $t$ for $M-1$ such that the above relations hold, then we can repeat this until $t$ becomes a $2^{0}$-th root of 1, i.e., $t=1$. At that point $R$ is a square root of $n$. We can check whether $t$ is a $2^{M-2}$-th root of 1 by squaring it $M-2$ times and check whether it is 1. If it is, then we do not need to do anything, the same choice of $R$ and $t$ works. But if it is not, $t^{2^{M-2}}$ must be -1 (because squaring it gives 1, and there can only be two square roots 1 and -1 of 1 modulo $p$). To find a new pair of $R$ and $t$, we can multiply $R$ by a factor $b$, to be determined. Then $t$ must be multiplied by a factor $b^{2}$ to keep $R^{2}\equiv nt{\pmod {p}}$. So we need to find a factor $b^{2}$ so that $tb^{2}$ is a $2^{M-2}$-th root of 1, or equivalently $b^{2}$ is a $2^{M-2}$-th root of -1. The trick here is to make use of $z$, the known non-residue. The Euler's criterion applied to $z$ shown above says that $z^{Q}$ is a $2^{S-1}$-th root of -1. So by squaring $z^{Q}$ repeatedly, we have access to a sequence of $2^{i}$-th root of -1. We can select the right one to serve as $b$. With a little bit of variable maintenance and trivial case compression, the algorithm below emerges naturally. The algorithm Operations and comparisons on elements of the multiplicative group of integers modulo p $\mathbb {Z} /p\mathbb {Z} $ are implicitly mod p. 
Inputs: • p, a prime • n, an element of $\mathbb {Z} /p\mathbb {Z} $ such that solutions to the congruence r2 = n exist; when this is so we say that n is a quadratic residue mod p. Outputs: • r in $\mathbb {Z} /p\mathbb {Z} $ such that r2 = n Algorithm: 1. By factoring out powers of 2, find Q and S such that $p-1=Q2^{S}$ with Q odd 2. Search for a z in $\mathbb {Z} /p\mathbb {Z} $ which is a quadratic non-residue • Half of the elements in the set will be quadratic non-residues • Candidates can be tested with Euler's criterion or by finding the Jacobi symbol 3. Let ${\begin{aligned}M&\leftarrow S\\c&\leftarrow z^{Q}\\t&\leftarrow n^{Q}\\R&\leftarrow n^{\frac {Q+1}{2}}\end{aligned}}$ 4. Loop: • If t = 0, return r = 0 • If t = 1, return r = R • Otherwise, use repeated squaring to find the least i, 0 < i < M, such that $t^{2^{i}}=1$ • Let $b\leftarrow c^{2^{M-i-1}}$, and set ${\begin{aligned}M&\leftarrow i\\c&\leftarrow b^{2}\\t&\leftarrow tb^{2}\\R&\leftarrow Rb\end{aligned}}$ Once you have solved the congruence with r the second solution is $-r{\pmod {p}}$. If the least i such that $t^{2^{i}}=1$ is M, then no solution to the congruence exists, i.e. n is not a quadratic residue. This is most useful when p ≡ 1 (mod 4). For primes such that p ≡ 3 (mod 4), this problem has possible solutions $r=\pm n^{\frac {p+1}{4}}{\pmod {p}}$. If these satisfy $r^{2}\equiv n{\pmod {p}}$, they are the only solutions. If not, $r^{2}\equiv -n{\pmod {p}}$, n is a quadratic non-residue, and there are no solutions. Proof We can show that at the start of each iteration of the loop the following loop invariants hold: • $c^{2^{M-1}}=-1$ • $t^{2^{M-1}}=1$ • $R^{2}=tn$ Initially: • $c^{2^{M-1}}=z^{Q2^{S-1}}=z^{\frac {p-1}{2}}=-1$ (since z is a quadratic nonresidue, per Euler's criterion) • $t^{2^{M-1}}=n^{Q2^{S-1}}=n^{\frac {p-1}{2}}=1$ (since n is a quadratic residue) • $R^{2}=n^{Q+1}=tn$ At each iteration, with M' , c' , t' , R' the new values replacing M, c, t, R: • $c'^{2^{M'-1}}=(b^{2})^{2^{i-1}}=c^{2^{M-i}2^{i-1}}=c^{2^{M-1}}=-1$ • $t'^{2^{M'-1}}=(tb^{2})^{2^{i-1}}=t^{2^{i-1}}b^{2^{i}}=-1\cdot -1=1$ • $t^{2^{i-1}}=-1$ since we have that $t^{2^{i}}=1$ but $t^{2^{i-1}}\neq 1$ (i is the least value such that $t^{2^{i}}=1$) • $b^{2^{i}}=c^{2^{M-i-1}2^{i}}=c^{2^{M-1}}=-1$ • $R'^{2}=R^{2}b^{2}=tnb^{2}=t'n$ From $t^{2^{M-1}}=1$ and the test against t = 1 at the start of the loop, we see that we will always find an i in 0 < i < M such that $t^{2^{i}}=1$. M is strictly smaller on each iteration, and thus the algorithm is guaranteed to halt. When we hit the condition t = 1 and halt, the last loop invariant implies that R2 = n. Order of t We can alternately express the loop invariants using the order of the elements: • $\operatorname {ord} (c)=2^{M}$ • $\operatorname {ord} (t)|2^{M-1}$ • $R^{2}=tn$ as before Each step of the algorithm moves t into a smaller subgroup by measuring the exact order of t and multiplying it by an element of the same order. Example Solving the congruence r2 ≡ 5 (mod 41). 41 is prime as required and 41 ≡ 1 (mod 4). 5 is a quadratic residue by Euler's criterion: $5^{\frac {41-1}{2}}=5^{20}=1$ (as before, operations in $(\mathbb {Z} /41\mathbb {Z} )^{\times }$ are implicitly mod 41). 1. $p-1=40=5\cdot 2^{3}$ so $Q\leftarrow 5$, $S\leftarrow 3$ 2. Find a value for z: • $2^{\frac {41-1}{2}}=1$, so 2 is a quadratic residue by Euler's criterion. • $3^{\frac {41-1}{2}}=40=-1$, so 3 is a quadratic nonresidue: set $z\leftarrow 3$ 3. 
Set • $M\leftarrow S=3$ • $c\leftarrow z^{Q}=3^{5}=38$ • $t\leftarrow n^{Q}=5^{5}=9$ • $R\leftarrow n^{\frac {Q+1}{2}}=5^{\frac {5+1}{2}}=2$ 4. Loop: • First iteration: • $t\neq 1$, so we're not finished • $t^{2^{1}}=40$, $t^{2^{2}}=1$ so $i\leftarrow 2$ • $b\leftarrow c^{2^{M-i-1}}=38^{2^{3-2-1}}=38$ • $M\leftarrow i=2$ • $c\leftarrow b^{2}=38^{2}=9$ • $t\leftarrow tb^{2}=9\cdot 9=40$ • $R\leftarrow Rb=2\cdot 38=35$ • Second iteration: • $t\neq 1$, so we're still not finished • $t^{2^{1}}=1$ so $i\leftarrow 1$ • $b\leftarrow c^{2^{M-i-1}}=9^{2^{2-1-1}}=9$ • $M\leftarrow i=1$ • $c\leftarrow b^{2}=9^{2}=40$ • $t\leftarrow tb^{2}=40\cdot 40=1$ • $R\leftarrow Rb=35\cdot 9=28$ • Third iteration: • $t=1$, and we are finished; return $r=R=28$ Indeed, 282 ≡ 5 (mod 41) and (−28)2 ≡ 132 ≡ 5 (mod 41). So the algorithm yields the two solutions to our congruence. Speed of the algorithm The Tonelli–Shanks algorithm requires (on average over all possible input (quadratic residues and quadratic nonresidues)) $2m+2k+{\frac {S(S-1)}{4}}+{\frac {1}{2^{S-1}}}-9$ modular multiplications, where $m$ is the number of digits in the binary representation of $p$ and $k$ is the number of ones in the binary representation of $p$. If the required quadratic nonresidue $z$ is to be found by checking if a randomly taken number $y$ is a quadratic nonresidue, it requires (on average) $2$ computations of the Legendre symbol.[5] The average of two computations of the Legendre symbol are explained as follows: $y$ is a quadratic residue with chance ${\tfrac {\tfrac {p+1}{2}}{p}}={\tfrac {1+{\tfrac {1}{p}}}{2}}$, which is smaller than $1$ but $\geq {\tfrac {1}{2}}$, so we will on average need to check if a $y$ is a quadratic residue two times. This shows essentially that the Tonelli–Shanks algorithm works very well if the modulus $p$ is random, that is, if $S$ is not particularly large with respect to the number of digits in the binary representation of $p$. As written above, Cipolla's algorithm works better than Tonelli–Shanks if (and only if) $S(S-1)>8m+20$. However, if one instead uses Sutherland's algorithm to perform the discrete logarithm computation in the 2-Sylow subgroup of $\mathbb {F} _{p}$, one may replace $S(S-1)$ with an expression that is asymptotically bounded by $O(S\log S/\log \log S)$.[6] Explicitly, one computes $e$ such that $c^{e}\equiv n^{Q}$ and then $R\equiv c^{-e/2}n^{(Q+1)/2}$ satisfies $R^{2}\equiv n$ (note that $e$ is a multiple of 2 because $n$ is a quadratic residue). The algorithm requires us to find a quadratic nonresidue $z$. There is no known deterministic algorithm that runs in polynomial time for finding such a $z$. However, if the generalized Riemann hypothesis is true, there exists a quadratic nonresidue $z<2\ln ^{2}{p}$,[7] making it possible to check every $z$ up to that limit and find a suitable $z$ within polynomial time. Keep in mind, however, that this is a worst-case scenario; in general, $z$ is found in on average 2 trials as stated above. Uses The Tonelli–Shanks algorithm can (naturally) be used for any process in which square roots modulo a prime are necessary. For example, it can be used for finding points on elliptic curves. It is also useful for the computations in the Rabin cryptosystem and in the sieving step of the quadratic sieve. 
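The procedure above translates almost directly into code. The following minimal Python sketch is ours (the function name, the argument order, and the up-front Euler-criterion check are implementation choices, not part of the original presentations); it assumes p is an odd prime and returns one square root, the other being p − r as noted above. Run on the worked example it returns 28.

def tonelli_shanks(n, p):
    """Return r with r*r % p == n % p for an odd prime p, or None if n is a non-residue."""
    n %= p
    if n == 0:
        return 0
    if pow(n, (p - 1) // 2, p) != 1:        # Euler's criterion: no square root exists
        return None
    # Step 1: write p - 1 = Q * 2^S with Q odd.
    Q, S = p - 1, 0
    while Q % 2 == 0:
        Q //= 2
        S += 1
    # Step 2: find a quadratic non-residue z.
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    # Step 3: initialise M, c, t, R.
    M, c, t, R = S, pow(z, Q, p), pow(n, Q, p), pow(n, (Q + 1) // 2, p)
    # Step 4: loop, shrinking the order of t until t = 1.
    while t != 1:
        i, t2 = 0, t
        while t2 != 1:                      # least i with t^(2^i) = 1
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (M - i - 1), p)
        M, c, t, R = i, b * b % p, t * b * b % p, R * b % p
    return R

print(tonelli_shanks(5, 41))   # 28, matching the example; the other root is 41 - 28 = 13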
Generalizations Tonelli–Shanks can be generalized to any cyclic group (instead of $(\mathbb {Z} /p\mathbb {Z} )^{\times }$) and to kth roots for arbitrary integer k, in particular to taking the kth root of an element of a finite field.[8] If many square-roots must be done in the same cyclic group and S is not too large, a table of square-roots of the elements of 2-power order can be prepared in advance and the algorithm simplified and sped up as follows. 1. Factor out powers of 2 from p − 1, defining Q and S as: $p-1=Q2^{S}$ with Q odd. 2. Let $R\leftarrow n^{\frac {Q+1}{2}},t\leftarrow n^{Q}\equiv R^{2}/n$ 3. Find $b$ from the table such that $b^{2}\equiv t$ and set $R\equiv R/b$ 4. return R. Tonelli's algorithm will work on mod p^k According to Dickson's "Theory of Numbers"[3] A. Tonelli[9] gave an explicit formula for the roots of $x^{2}=c{\pmod {p^{\lambda }}}$[3] The Dickson reference shows the following formula for the square root of $x^{2}{\bmod {p^{\lambda }}}$. when $p=4\cdot 7+1$, or $s=2$ (s must be 2 for this equation) and $A=7$ such that $29=2^{2}\cdot 7+1$ for $x^{2}{\bmod {p^{\lambda }}}\equiv c$ then $x{\bmod {p^{\lambda }}}\equiv \pm (c^{A}+3)^{\beta }\cdot c^{(\beta +1)/2}$ where $\beta \equiv a\cdot p^{\lambda -1}$ Noting that $23^{2}{\bmod {29^{3}}}\equiv 529$ and noting that $\beta =7\cdot 29^{2}$ then $(529^{7}+3)^{7\cdot 29^{2}}\cdot 529^{(7\cdot 29^{2}+1)/2}{\bmod {29^{3}}}\equiv 24366\equiv -23$ To take another example: $2333^{2}{\bmod {29^{3}}}\equiv 4142$ and $(4142^{7}+3)^{7\cdot 29^{2}}\cdot 4142^{(7\cdot 29^{2}+1)/2}{\bmod {29^{3}}}\equiv 2333$ Dickson also attributes the following equation to Tonelli: $X{\bmod {p^{\lambda }}}\equiv x^{p^{\lambda -1}}\cdot c^{(p^{\lambda }-2p^{\lambda -1}+1)/2}$ where $X^{2}{\bmod {p^{\lambda }}}\equiv c$ and $x^{2}{\bmod {p}}\equiv c$; Using $p=23$ and using the modulus of $p^{3}$ the math follows: $1115^{2}{\bmod {23^{3}}}=2191$ First, find the modular square root mod $p$ which can be done by the regular Tonelli algorithm: $1115^{2}{\bmod {23}}\equiv 6$ and thus ${\sqrt {6}}{\bmod {23}}\equiv 11$ And applying Tonelli's equation (see above): $11^{23^{2}}\cdot 2191^{(23^{3}-2\cdot 23^{2}+1)/2}{\bmod {23^{3}}}\equiv 1115$ Dickson's reference[3] clearly shows that Tonelli's algorithm works on moduli of $p^{\lambda }$. Notes 1. Oded Goldreich, Computational complexity: a conceptual perspective, Cambridge University Press, 2008, p. 588. 2. Volker Diekert; Manfred Kufleitner; Gerhard Rosenberger; Ulrich Hertrampf (24 May 2016). Discrete Algebraic Methods: Arithmetic, Cryptography, Automata and Groups. De Gruyter. pp. 163–165. ISBN 978-3-11-041632-9. 3. Leonard Eugene Dickson (1919). History of the Theory of Numbers. Vol. 1. Washington, Carnegie Institution of Washington. pp. 215–216. 4. Daniel Shanks. Five Number-theoretic Algorithms. Proceedings of the Second Manitoba Conference on Numerical Mathematics. Pp. 51–70. 1973. 5. Gonzalo Tornaria - Square roots modulo p, page 2 https://doi.org/10.1007%2F3-540-45995-2_38 6. Sutherland, Andrew V. (2011), "Structure computation and discrete logarithms in finite abelian p-groups", Mathematics of Computation, 80 (273): 477–500, arXiv:0809.3413, doi:10.1090/s0025-5718-10-02356-2, S2CID 13940949 7. Bach, Eric (1990), "Explicit bounds for primality testing and related problems", Mathematics of Computation, 55 (191): 355–380, doi:10.2307/2008811, JSTOR 2008811 8. Adleman, L. M., K. Manders, and G. Miller: 1977, `On taking roots in finite fields'. 
In: 18th IEEE Symposium on Foundations of Computer Science. pp. 175-177 9. "Accademia nazionale dei Lincei, Rome. Rendiconti, (5), 1, 1892, 116-120." References • Ivan Niven; Herbert S. Zuckerman; Hugh L. Montgomery (1991). An Introduction to the Theory of Numbers (5th ed.). Wiley. pp. 110–115. ISBN 0-471-62546-9. • Daniel Shanks. Five Number Theoretic Algorithms. Proceedings of the Second Manitoba Conference on Numerical Mathematics. Pp. 51–70. 1973. • Alberto Tonelli, Bemerkung über die Auflösung quadratischer Congruenzen. Nachrichten von der Königlichen Gesellschaft der Wissenschaften und der Georg-Augusts-Universität zu Göttingen. Pp. 344–346. 1891. • Gagan Tara Nanda - Mathematics 115: The RESSOL Algorithm • Gonzalo Tornaria Number-theoretic algorithms Primality tests • AKS • APR • Baillie–PSW • Elliptic curve • Pocklington • Fermat • Lucas • Lucas–Lehmer • Lucas–Lehmer–Riesel • Proth's theorem • Pépin's • Quadratic Frobenius • Solovay–Strassen • Miller–Rabin Prime-generating • Sieve of Atkin • Sieve of Eratosthenes • Sieve of Pritchard • Sieve of Sundaram • Wheel factorization Integer factorization • Continued fraction (CFRAC) • Dixon's • Lenstra elliptic curve (ECM) • Euler's • Pollard's rho • p − 1 • p + 1 • Quadratic sieve (QS) • General number field sieve (GNFS) • Special number field sieve (SNFS) • Rational sieve • Fermat's • Shanks's square forms • Trial division • Shor's Multiplication • Ancient Egyptian • Long • Karatsuba • Toom–Cook • Schönhage–Strassen • Fürer's Euclidean division • Binary • Chunking • Fourier • Goldschmidt • Newton-Raphson • Long • Short • SRT Discrete logarithm • Baby-step giant-step • Pollard rho • Pollard kangaroo • Pohlig–Hellman • Index calculus • Function field sieve Greatest common divisor • Binary • Euclidean • Extended Euclidean • Lehmer's Modular square root • Cipolla • Pocklington's • Tonelli–Shanks • Berlekamp • Kunerth Other algorithms • Chakravala • Cornacchia • Exponentiation by squaring • Integer square root • Integer relation (LLL; KZ) • Modular exponentiation • Montgomery reduction • Schoof • Trachtenberg system • Italics indicate that algorithm is for numbers of special forms
Tonelli's theorem (functional analysis) In mathematics, Tonelli's theorem in functional analysis is a fundamental result on the weak lower semicontinuity of nonlinear functionals on Lp spaces. As such, it has major implications for functional analysis and the calculus of variations. Roughly, it shows that weak lower semicontinuity for integral functionals is equivalent to convexity of the integral kernel. The result is attributed to the Italian mathematician Leonida Tonelli. Statement of the theorem Let $\Omega $ be a bounded domain in $n$-dimensional Euclidean space $\mathbb {R} ^{n}$ and let $f:\mathbb {R} ^{m}\to \mathbb {R} \cup \{\pm \infty \}$ be a continuous extended real-valued function. Define a nonlinear functional $F$ on functions $u:\Omega \to \mathbb {R} ^{m}$by $F[u]=\int _{\Omega }f(u(x))\,\mathrm {d} x.$ Then $F$ is sequentially weakly lower semicontinuous on the $L^{p}$ space $L^{p}(\Omega ,\mathbb {R} ^{m})$ for $1<p<+\infty $ and weakly-∗ lower semicontinuous on $L^{\infty }(\Omega ,\mathbb {R} ^{m})$ if and only if the function $f:\mathbb {R} ^{m}\to \mathbb {R} \cup \{\pm \infty \}$ defined by $\mathbb {R} ^{m}\ni u\mapsto f(u)\in \mathbb {R} \cup \{\pm \infty \}\ $ is convex. See also • Discontinuous linear functional References • Renardy, Michael & Rogers, Robert C. (2004). An introduction to partial differential equations. Texts in Applied Mathematics 13 (Second ed.). New York: Springer-Verlag. p. 347. ISBN 0-387-00444-0. (Theorem 10.16) Convex analysis and variational analysis Basic concepts • Convex combination • Convex function • Convex set Topics (list) • Choquet theory • Convex geometry • Convex metric space • Convex optimization • Duality • Lagrange multiplier • Legendre transformation • Locally convex topological vector space • Simplex Maps • Convex conjugate • Concave • (Closed • K- • Logarithmically • Proper • Pseudo- • Quasi-) Convex function • Invex function • Legendre transformation • Semi-continuity • Subderivative Main results (list) • Carathéodory's theorem • Ekeland's variational principle • Fenchel–Moreau theorem • Fenchel-Young inequality • Jensen's inequality • Hermite–Hadamard inequality • Krein–Milman theorem • Mazur's lemma • Shapley–Folkman lemma • Robinson-Ursescu • Simons • Ursescu Sets • Convex hull • (Orthogonally, Pseudo-) Convex set • Effective domain • Epigraph • Hypograph • John ellipsoid • Lens • Radial set/Algebraic interior • Zonotope Series • Convex series related ((cs, lcs)-closed, (cs, bcs)-complete, (lower) ideally convex, (Hx), and (Hwx)) Duality • Dual system • Duality gap • Strong duality • Weak duality Applications and related • Convexity in economics Measure theory Basic concepts • Absolute continuity of measures • Lebesgue integration • Lp spaces • Measure • Measure space • Probability space • Measurable space/function Sets • Almost everywhere • Atom • Baire set • Borel set • equivalence relation • Borel space • Carathéodory's criterion • Cylindrical σ-algebra • Cylinder set • 𝜆-system • Essential range • infimum/supremum • Locally measurable • π-system • σ-algebra • Non-measurable set • Vitali set • Null set • Support • Transverse measure • Universally measurable Types of Measures • Atomic • Baire • Banach • Besov • Borel • Brown • Complex • Complete • Content • (Logarithmically) Convex • Decomposable • Discrete • Equivalent • Finite • Inner • (Quasi-) Invariant • Locally finite • Maximising • Metric outer • Outer • Perfect • Pre-measure • (Sub-) Probability • Projection-valued • Radon • Random • Regular • Borel regular • 
Inner regular • Outer regular • Saturated • Set function • σ-finite • s-finite • Signed • Singular • Spectral • Strictly positive • Tight • Vector Particular measures • Counting • Dirac • Euler • Gaussian • Haar • Harmonic • Hausdorff • Intensity • Lebesgue • Infinite-dimensional • Logarithmic • Product • Projections • Pushforward • Spherical measure • Tangent • Trivial • Young Maps • Measurable function • Bochner • Strongly • Weakly • Convergence: almost everywhere • of measures • in measure • of random variables • in distribution • in probability • Cylinder set measure • Random: compact set • element • measure • process • variable • vector • Projection-valued measure Main results • Carathéodory's extension theorem • Convergence theorems • Dominated • Monotone • Vitali • Decomposition theorems • Hahn • Jordan • Maharam's • Egorov's • Fatou's lemma • Fubini's • Fubini–Tonelli • Hölder's inequality • Minkowski inequality • Radon–Nikodym • Riesz–Markov–Kakutani representation theorem Other results • Disintegration theorem • Lifting theory • Lebesgue's density theorem • Lebesgue differentiation theorem • Sard's theorem For Lebesgue measure • Isoperimetric inequality • Brunn–Minkowski theorem • Milman's reverse • Minkowski–Steiner formula • Prékopa–Leindler inequality • Vitale's random Brunn–Minkowski inequality Applications & related • Convex analysis • Descriptive set theory • Probability theory • Real analysis • Spectral theory Banach space topics Types of Banach spaces • Asplund • Banach • list • Banach lattice • Grothendieck • Hilbert • Inner product space • Polarization identity • (Polynomially) Reflexive • Riesz • L-semi-inner product • (B • Strictly • Uniformly) convex • Uniformly smooth • (Injective • Projective) Tensor product (of Hilbert spaces) Banach spaces are: • Barrelled • Complete • F-space • Fréchet • tame • Locally convex • Seminorms/Minkowski functionals • Mackey • Metrizable • Normed • norm • Quasinormed • Stereotype Function space Topologies • Banach–Mazur compactum • Dual • Dual space • Dual norm • Operator • Ultraweak • Weak • polar • operator • Strong • polar • operator • Ultrastrong • Uniform convergence Linear operators • Adjoint • Bilinear • form • operator • sesquilinear • (Un)Bounded • Closed • Compact • on Hilbert spaces • (Dis)Continuous • Densely defined • Fredholm • kernel • operator • Hilbert–Schmidt • Functionals • positive • Pseudo-monotone • Normal • Nuclear • Self-adjoint • Strictly singular • Trace class • Transpose • Unitary Operator theory • Banach algebras • C*-algebras • Operator space • Spectrum • C*-algebra • radius • Spectral theory • of ODEs • Spectral theorem • Polar decomposition • Singular value decomposition Theorems • Anderson–Kadec • Banach–Alaoglu • Banach–Mazur • Banach–Saks • Banach–Schauder (open mapping) • Banach–Steinhaus (Uniform boundedness) • Bessel's inequality • Cauchy–Schwarz inequality • Closed graph • Closed range • Eberlein–Šmulian • Freudenthal spectral • Gelfand–Mazur • Gelfand–Naimark • Goldstine • Hahn–Banach • hyperplane separation • Kakutani fixed-point • Krein–Milman • Lomonosov's invariant subspace • Mackey–Arens • Mazur's lemma • M. 
Riesz extension • Parseval's identity • Riesz's lemma • Riesz representation • Robinson-Ursescu • Schauder fixed-point Analysis • Abstract Wiener space • Banach manifold • bundle • Bochner space • Convex series • Differentiation in Fréchet spaces • Derivatives • Fréchet • Gateaux • functional • holomorphic • quasi • Integrals • Bochner • Dunford • Gelfand–Pettis • regulated • Paley–Wiener • weak • Functional calculus • Borel • continuous • holomorphic • Measures • Lebesgue • Projection-valued • Vector • Weakly / Strongly measurable function Types of sets • Absolutely convex • Absorbing • Affine • Balanced/Circled • Bounded • Convex • Convex cone (subset) • Convex series related ((cs, lcs)-closed, (cs, bcs)-complete, (lower) ideally convex, (Hx), and (Hwx)) • Linear cone (subset) • Radial • Radially convex/Star-shaped • Symmetric • Zonotope Subsets / set operations • Affine hull • (Relative) Algebraic interior (core) • Bounding points • Convex hull • Extreme point • Interior • Linear span • Minkowski addition • Polar • (Quasi) Relative interior Examples • Absolute continuity AC • $ba(\Sigma )$ • c space • Banach coordinate BK • Besov $B_{p,q}^{s}(\mathbb {R} )$ • Birnbaum–Orlicz • Bounded variation BV • Bs space • Continuous C(K) with K compact Hausdorff • Hardy Hp • Hilbert H • Morrey–Campanato $L^{\lambda ,p}(\Omega )$ • ℓp • $\ell ^{\infty }$ • Lp • $L^{\infty }$ • weighted • Schwartz $S\left(\mathbb {R} ^{n}\right)$ • Segal–Bargmann F • Sequence space • Sobolev Wk,p • Sobolev inequality • Triebel–Lizorkin • Wiener amalgam $W(X,L^{p})$ Applications • Differential operator • Finite element method • Mathematical formulation of quantum mechanics • Ordinary Differential Equations (ODEs) • Validated numerics
Tonelli–Hobson test In mathematics, the Tonelli–Hobson test gives sufficient criteria for a function ƒ on R2 to be an integrable function. It is often used to establish that Fubini's theorem may be applied to ƒ. It is named for Leonida Tonelli and E. W. Hobson. More precisely, the Tonelli–Hobson test states that if ƒ is a real-valued measurable function on R2, and either of the two iterated integrals $\int _{\mathbb {R} }\left(\int _{\mathbb {R} }|f(x,y)|\,dx\right)\,dy$ or $\int _{\mathbb {R} }\left(\int _{\mathbb {R} }|f(x,y)|\,dy\right)\,dx$ is finite, then ƒ is Lebesgue-integrable on R2.
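As a simple illustration of the test (an example of ours, not taken from the sources), take f(x, y) = exp(-(x^2 + y^2)). Since f is positive, |f| = f, and one iterated integral is finite, so the Tonelli–Hobson test guarantees that f is Lebesgue-integrable on R2 and Fubini's theorem may be applied. A sympy sketch:

import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.exp(-(x**2 + y**2))                       # f > 0, so |f| = f
inner = sp.integrate(f, (x, -sp.oo, sp.oo))      # sqrt(pi)*exp(-y**2)
outer = sp.integrate(inner, (y, -sp.oo, sp.oo))  # pi, which is finite
print(inner, outer)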
Tony Lévy Tony Lévy is an historian of mathematics, born in Egypt in 1943, specializing particularly in Hebrew mathematics. His family left Egypt in 1957 for Belgium and France after the Suez Crisis, but his elder brother Eddy Levy remained in Egypt. A political activist, Eddy converted to Islam and took the name Adel Rifaat. He moved to France in the 1980s and, together with Bahgat Elnadi, formed the duo of political scientists and scholars of Islam who publish under the pseudonym Mahmoud Hussein. His other brother is the activist, philosopher and writer Benny Levy. Like his younger brother Benny, Tony was a far-left activist in the 1960s and 1970s. Publications • L'Infini et le nombre chez Rabbi Hasdai Crescas : XVIe siècle, 1983 • Mathématiques de l'infini chez Hasdai Crescas (1340–1410) : un chapitre de l'histoire de l'infini d'Aristote à la Renaissance, 1985 • Figures de l'infini : les mathématiques au miroir des cultures, 1987 • Le Chapitre I, 73 du "Guide des égarés" et la tradition mathématique hébraïque au moyen âge : Un commentaire inédit de Salomon b. Isaac, 1989 • L'Étude des sections coniques dans la tradition médiévale hébraïque, ses relations avec les tradictions arabe et latine, 1989 • Éléments d'Euclide, 1991 • Gersonide, commentateur d'Euclide : traduction annotée de ses gloses sur les Eléments, 1992 • Gersonide, le Pseudo-Tusi, et le postulat des paralleles : Les mathématiques en hébreu et leurs sources arabes, 1992 • L'histoire des nombres amiables : le témoignage des textes hébreux médiévaux, 1996 • La littérature mathématique hébraïque en Europe du XIe au XVIe siècle, 1996 • La matematica hebraica, 2002 • A Newly-Discovered Partial Hebrew Version of al-Khārizmī's Algebra, 2002 • L'algèbre arabe dans les textes hébraïques (I) : un ouvrage inédit d'Isaac ben Salomon Al-Aḥdab (XVIe siècle), 2003 • Maïmonide philosophe et savant, 1138–1204, 2004 (in collaboration) • Sefer ha-middot : a mid-twelfth-century text on arithmetic and geometry attributed to Abraham ibn Ezra, 2006 (in collaboration) • L'algèbre arabe dans les textes hébraïques (II) : dans l'Italie des XVe et XVIe siècles : sources arabes et sources vernaculaires, 2007 External links • Maïmonide philosophe et savant (1138–1204) • L'étude des sections coniques dans la tradition médiévale hébraïque. Ses relations avec les traditions arabe et latine • L'ESPACE, LE LIEU, L'INFINI on YouTube Authority control International • ISNI • VIAF National • France • BnF data • Israel • Belgium • United States • Netherlands Academics • zbMATH Other • IdRef
Limitation of size In the philosophy of mathematics, specifically the philosophical foundations of set theory, limitation of size is a concept developed by Philip Jourdain and/or Georg Cantor to avoid Cantor's paradox. It identifies certain "inconsistent multiplicities", in Cantor's terminology, that cannot be sets because they are "too large". In modern terminology these are called proper classes. Use The axiom of limitation of size is an axiom in some versions of von Neumann–Bernays–Gödel set theory or Morse–Kelley set theory. This axiom says that any class that is not "too large" is a set, and a set cannot be "too large". "Too large" is defined as being large enough that the class of all sets can be mapped one-to-one into it. References • Hallett, Michael (1986). Cantorian Set Theory and Limitation of Size. Oxford University Press. ISBN 0-19-853283-0.
Toothpick sequence In geometry, the toothpick sequence is a sequence of 2-dimensional patterns which can be formed by repeatedly adding line segments ("toothpicks") to the previous pattern in the sequence. The first stage of the design is a single "toothpick", or line segment. Each stage after the first is formed by taking the previous design and, for every exposed toothpick end, placing another toothpick centered at a right angle on that end.[1] This process results in a pattern of growth in which the number of segments at stage n oscillates with a fractal pattern between 0.45n² and 0.67n². If T(n) denotes the number of segments at stage n, then values of n for which T(n)/n² is near its maximum occur when n is near a power of two, while the values for which it is near its minimum occur near numbers that are approximately 1.43 times a power of two.[2] The structure of stages in the toothpick sequence often resembles the T-square fractal, or the arrangement of cells in the Ulam–Warburton cellular automaton.[1] All of the bounded regions surrounded by toothpicks in the pattern, but not themselves crossed by toothpicks, must be squares or rectangles.[1] It has been conjectured that every open rectangle in the toothpick pattern (that is, a rectangle that is completely surrounded by toothpicks, but has no toothpick crossing its interior) has side lengths and areas that are powers of two, with one of the side lengths being at most two.[3] References 1. Applegate, David; Pol, Omar E.; Sloane, N. J. A. (2010). "The toothpick sequence and other sequences from cellular automata". Proceedings of the Forty-First Southeastern International Conference on Combinatorics, Graph Theory and Computing. Congressus Numerantium. Vol. 206. pp. 157–191. arXiv:1004.3036. Bibcode:2010arXiv1004.3036A. MR 2762248. 2. Cipra, Barry A. (2010). "What Comes Next?". Science. AAAS. 327: 943. doi:10.1126/science.327.5968.943. 3. Sloane, N. J. A. (ed.). "Sequence A139250 (Toothpick sequence)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. External links • A list of integer sequences related to the Toothpick Sequence from the On-line Encyclopedia of Integer Sequences. (note: IDs such as A139250 are IDs within the OEIS, and descriptions of the sequences can be located by entering these IDs in the OEIS search page.) • Joshua Trees and Toothpicks, Brian Hayes, 8 February 2013
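The growth rule described above can be simulated directly. The following Python sketch is our own illustration (representing a toothpick by its centre and orientation, and the helper names, are choices made here): each stage adds a perpendicular toothpick at every exposed end, and the printed counts reproduce the opening terms 1, 3, 7, 11, 15, 23, 35 of OEIS sequence A139250.

def toothpick_counts(stages):
    """Simulate the toothpick sequence and return [T(1), ..., T(stages)]."""
    def covered(t):   # lattice points covered by a length-2 toothpick (x, y, orientation)
        x, y, o = t
        return [(x - 1, y), (x, y), (x + 1, y)] if o == 'H' else [(x, y - 1), (x, y), (x, y + 1)]

    def ends(t):      # the two endpoints of a toothpick
        x, y, o = t
        return [(x - 1, y), (x + 1, y)] if o == 'H' else [(x, y - 1), (x, y + 1)]

    picks = {(0, 0, 'V')}               # stage 1: a single toothpick
    touch = {}                          # lattice point -> number of toothpicks through it
    for pt in covered((0, 0, 'V')):
        touch[pt] = touch.get(pt, 0) + 1
    counts = [len(picks)]
    for _ in range(stages - 1):
        new = set()
        for t in picks:
            for e in ends(t):
                if touch.get(e, 0) == 1:                                # an exposed end
                    new.add((e[0], e[1], 'H' if t[2] == 'V' else 'V'))  # perpendicular toothpick
        for t in new:
            picks.add(t)
            for pt in covered(t):
                touch[pt] = touch.get(pt, 0) + 1
        counts.append(len(picks))
    return counts

print(toothpick_counts(7))   # [1, 3, 7, 11, 15, 23, 35]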
Volume form In mathematics, a volume form or top-dimensional form is a differential form of degree equal to the differentiable manifold dimension. Thus on a manifold $M$ of dimension $n$, a volume form is an $n$-form. It is an element of the space of sections of the line bundle $\textstyle {\bigwedge }^{n}(T^{*}M)$, denoted as $\Omega ^{n}(M)$. A manifold admits a nowhere-vanishing volume form if and only if it is orientable. An orientable manifold has infinitely many volume forms, since multiplying a volume form by a nowhere-vanishing real valued function yields another volume form. On non-orientable manifolds, one may instead define the weaker notion of a density. A volume form provides a means to define the integral of a function on a differentiable manifold. In other words, a volume form gives rise to a measure with respect to which functions can be integrated by the appropriate Lebesgue integral. The absolute value of a volume form is a volume element, which is also known variously as a twisted volume form or pseudo-volume form. It also defines a measure, but exists on any differentiable manifold, orientable or not. Kähler manifolds, being complex manifolds, are naturally oriented, and so possess a volume form. More generally, the $n$th exterior power of the symplectic form on a symplectic manifold is a volume form. Many classes of manifolds have canonical volume forms: they have extra structure which allows the choice of a preferred volume form. Oriented pseudo-Riemannian manifolds have an associated canonical volume form. Orientation The following will only be about orientability of differentiable manifolds (it's a more general notion defined on any topological manifold). A manifold is orientable if it has a coordinate atlas all of whose transition functions have positive Jacobian determinants. A selection of a maximal such atlas is an orientation on $M.$ A volume form $\omega $ on $M$ gives rise to an orientation in a natural way as the atlas of coordinate charts on $M$ that send $\omega $ to a positive multiple of the Euclidean volume form $dx^{1}\wedge \cdots \wedge dx^{n}.$ A volume form also allows for the specification of a preferred class of frames on $M.$ Call a basis of tangent vectors $(X_{1},\ldots ,X_{n})$ right-handed if $\omega \left(X_{1},X_{2},\ldots ,X_{n}\right)>0.$ The collection of all right-handed frames is acted upon by the group $\mathrm {GL} ^{+}(n)$ of general linear mappings in $n$ dimensions with positive determinant. They form a principal $\mathrm {GL} ^{+}(n)$ sub-bundle of the linear frame bundle of $M,$ and so the orientation associated to a volume form gives a canonical reduction of the frame bundle of $M$ to a sub-bundle with structure group $\mathrm {GL} ^{+}(n).$ That is to say that a volume form gives rise to $\mathrm {GL} ^{+}(n)$-structure on $M.$ More reduction is clearly possible by considering frames that have $\omega \left(X_{1},X_{2},\ldots ,X_{n}\right)=1.$ (1) Thus a volume form gives rise to an $\mathrm {SL} (n)$-structure as well. Conversely, given an $\mathrm {SL} (n)$-structure, one can recover a volume form by imposing (1) for the special linear frames and then solving for the required $n$-form $\omega $ by requiring homogeneity in its arguments. A manifold is orientable if and only if it has a nowhere-vanishing volume form. Indeed, $\mathrm {SL} (n)\to \mathrm {GL} ^{+}(n)$ is a deformation retract since $\mathrm {GL} ^{+}=\mathrm {SL} \times \mathbb {R} ^{+},$ where the positive reals are embedded as scalar matrices. 
Thus every $\mathrm {GL} ^{+}(n)$-structure is reducible to an $\mathrm {SL} (n)$-structure, and $\mathrm {GL} ^{+}(n)$-structures coincide with orientations on $M.$ More concretely, triviality of the determinant bundle $\Omega ^{n}(M)$ is equivalent to orientability, and a line bundle is trivial if and only if it has a nowhere-vanishing section. Thus, the existence of a volume form is equivalent to orientability. Relation to measures See also: Density on a manifold Given a volume form $\omega $ on an oriented manifold, the density $|\omega |$ is a volume pseudo-form on the nonoriented manifold obtained by forgetting the orientation. Densities may also be defined more generally on non-orientable manifolds. Any volume pseudo-form $\omega $ (and therefore also any volume form) defines a measure on the Borel sets by $\mu _{\omega }(U)=\int _{U}\omega .$ The difference is that while a measure can be integrated over a (Borel) subset, a volume form can only be integrated over an oriented cell. In single variable calculus, writing $ \int _{b}^{a}f\,dx=-\int _{a}^{b}f\,dx$ considers $dx$ as a volume form, not simply a measure, and $ \int _{b}^{a}$ indicates "integrate over the cell $[a,b]$ with the opposite orientation, sometimes denoted ${\overline {[a,b]}}$". Further, general measures need not be continuous or smooth: they need not be defined by a volume form, or more formally, their Radon–Nikodym derivative with respect to a given volume form need not be absolutely continuous. Divergence Given a volume form $\omega $ on $M,$ one can define the divergence of a vector field $X$ as the unique scalar-valued function, denoted by $\operatorname {div} X,$ satisfying $(\operatorname {div} X)\omega =L_{X}\omega =d(X\mathbin {\!\rfloor } \omega ),$ where $L_{X}$ denotes the Lie derivative along $X$ and $X\mathbin {\!\rfloor } \omega $ denotes the interior product or the left contraction of $\omega $ along $X.$ If $X$ is a compactly supported vector field and $M$ is a manifold with boundary, then Stokes' theorem implies $\int _{M}(\operatorname {div} X)\omega =\int _{\partial M}X\mathbin {\!\rfloor } \omega ,$ which is a generalization of the divergence theorem. The solenoidal vector fields are those with $\operatorname {div} X=0.$ It follows from the definition of the Lie derivative that the volume form is preserved under the flow of a solenoidal vector field. Thus solenoidal vector fields are precisely those that have volume-preserving flows. This fact is well-known, for instance, in fluid mechanics where the divergence of a velocity field measures the compressibility of a fluid, which in turn represents the extent to which volume is preserved along flows of the fluid. Special cases Lie groups For any Lie group, a natural volume form may be defined by translation. That is, if $\omega _{e}$ is an element of $ \bigwedge }^{n}T_{e}^{*}G,$ then a left-invariant form may be defined by $\omega _{g}=L_{g^{-1}}^{*}\omega _{e},$ where $L_{g}$ is left-translation. As a corollary, every Lie group is orientable. This volume form is unique up to a scalar, and the corresponding measure is known as the Haar measure. Symplectic manifolds Any symplectic manifold (or indeed any almost symplectic manifold) has a natural volume form. If $M$ is a $2n$-dimensional manifold with symplectic form $\omega ,$ then $\omega ^{n}$ is nowhere zero as a consequence of the nondegeneracy of the symplectic form. As a corollary, any symplectic manifold is orientable (indeed, oriented). 
If the manifold is both symplectic and Riemannian, then the two volume forms agree if the manifold is Kähler. Riemannian volume form Any oriented pseudo-Riemannian (including Riemannian) manifold has a natural volume form. In local coordinates, it can be expressed as $\omega ={\sqrt {|g|}}dx^{1}\wedge \dots \wedge dx^{n}$ where the $dx^{i}$ are 1-forms that form a positively oriented basis for the cotangent bundle of the manifold. Here, $|g|$ is the absolute value of the determinant of the matrix representation of the metric tensor on the manifold. The volume form is denoted variously by $\omega =\mathrm {vol} _{n}=\varepsilon ={\star }(1).$ Here, the ${\star }$ is the Hodge star, thus the last form, ${\star }(1),$ emphasizes that the volume form is the Hodge dual of the constant map on the manifold, which equals the Levi-Civita tensor $\varepsilon .$ Although the Greek letter $\omega $ is frequently used to denote the volume form, this notation is not universal; the symbol $\omega $ often carries many other meanings in differential geometry (such as a symplectic form). Invariants of a volume form Volume forms are not unique; they form a torsor over non-vanishing functions on the manifold, as follows. Given a non-vanishing function $f$ on $M,$ and a volume form $\omega ,$ $f\omega $ is a volume form on $M.$ Conversely, given two volume forms $\omega ,\omega ',$ their ratio is a non-vanishing function (positive if they define the same orientation, negative if they define opposite orientations). In coordinates, they are both simply a non-zero function times Lebesgue measure, and their ratio is the ratio of the functions, which is independent of choice of coordinates. Intrinsically, it is the Radon–Nikodym derivative of $\omega '$ with respect to $\omega .$ On an oriented manifold, the proportionality of any two volume forms can be thought of as a geometric form of the Radon–Nikodym theorem. No local structure A volume form on a manifold has no local structure in the sense that it is not possible on small open sets to distinguish between the given volume form and the volume form on Euclidean space (Kobayashi 1972). That is, for every point $p$ in $M,$ there is an open neighborhood $U$ of $p$ and a diffeomorphism $\varphi $ of $U$ onto an open set in $\mathbb {R} ^{n}$ such that the volume form on $U$ is the pullback of $dx^{1}\wedge \cdots \wedge dx^{n}$ along $\varphi .$ As a corollary, if $M$ and $N$ are two manifolds, each with volume forms $\omega _{M},\omega _{N},$ then for any points $m\in M,n\in N,$ there are open neighborhoods $U$ of $m$ and $V$ of $n$ and a map $f:U\to V$ such that the volume form on $N$ restricted to the neighborhood $V$ pulls back to volume form on $M$ restricted to the neighborhood $U$: $f^{*}\omega _{N}\vert _{V}=\omega _{M}\vert _{U}.$ In one dimension, one can prove it thus: given a volume form $\omega $ on $\mathbb {R} ,$ define $f(x):=\int _{0}^{x}\omega .$ Then the standard Lebesgue measure $dx$ pulls back to $\omega $ under $f$: $\omega =f^{*}dx.$ Concretely, $\omega =f'\,dx.$ In higher dimensions, given any point $m\in M,$ it has a neighborhood locally homeomorphic to $\mathbb {R} \times \mathbb {R} ^{n-1},$ and one can apply the same procedure. 
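As a concrete check of the coordinate formula $\omega ={\sqrt {|g|}}dx^{1}\wedge \dots \wedge dx^{n}$ from the Riemannian volume form section above, the following sympy sketch (an illustrative example of ours) computes the volume form of the round 2-sphere of radius R and integrates it, recovering the total area $4\pi R^{2}$.

import sympy as sp

R, theta, phi = sp.symbols('R theta phi', positive=True)
# Metric of the round radius-R 2-sphere in coordinates (theta, phi).
g = sp.Matrix([[R**2, 0],
               [0, (R * sp.sin(theta))**2]])
density = sp.sqrt(g.det())   # sqrt(|g|), which equals R**2 * sin(theta) for 0 < theta < pi
print(density)
# Integrating the volume form over the whole sphere recovers the area.
area = sp.integrate(R**2 * sp.sin(theta), (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))
print(area)                  # 4*pi*R**2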
Global structure: volume A volume form on a connected manifold $M$ has a single global invariant, namely the (overall) volume, denoted $\mu (M),$ which is invariant under volume-form preserving maps; this may be infinite, such as for Lebesgue measure on $\mathbb {R} ^{n}.$ On a disconnected manifold, the volume of each connected component is the invariant. In symbols, if $f:M\to N$ is a homeomorphism of manifolds that pulls back $\omega _{N}$ to $\omega _{M},$ then $\mu (N)=\int _{N}\omega _{N}=\int _{f(M)}\omega _{N}=\int _{M}f^{*}\omega _{N}=\int _{M}\omega _{M}=\mu (M)\,$ and the manifolds have the same volume. Volume forms can also be pulled back under covering maps, in which case they multiply volume by the cardinality of the fiber (formally, by integration along the fiber). In the case of an infinite sheeted cover (such as $\mathbb {R} \to S^{1}$), a volume form on a finite volume manifold pulls back to a volume form on an infinite volume manifold. See also • Cylindrical coordinate system § Line and volume elements • Measure (mathematics) – Generalization of mass, length, area and volume • Poincaré metric provides a review of the volume form on the complex plane • Spherical coordinate system § Integration and differentiation in spherical coordinates References • Kobayashi, S. (1972), Transformation Groups in Differential Geometry, Classics in Mathematics, Springer, ISBN 3-540-58659-8, OCLC 31374337. • Spivak, Michael (1965), Calculus on Manifolds, Reading, Massachusetts: W.A. Benjamin, Inc., ISBN 0-8053-9021-9. Manifolds (Glossary) Basic concepts • Topological manifold • Atlas • Differentiable/Smooth manifold • Differential structure • Smooth atlas • Submanifold • Riemannian manifold • Smooth map • Submersion • Pushforward • Tangent space • Differential form • Vector field Main results (list) • Atiyah–Singer index • Darboux's • De Rham's • Frobenius • Generalized Stokes • Hopf–Rinow • Noether's • Sard's • Whitney embedding Maps • Curve • Diffeomorphism • Local • Geodesic • Exponential map • in Lie theory • Foliation • Immersion • Integral curve • Lie derivative • Section • Submersion Types of manifolds • Closed • (Almost) Complex • (Almost) Contact • Fibered • Finsler • Flat • G-structure • Hadamard • Hermitian • Hyperbolic • Kähler • Kenmotsu • Lie group • Lie algebra • Manifold with boundary • Oriented • Parallelizable • Poisson • Prime • Quaternionic • Hypercomplex • (Pseudo−, Sub−) Riemannian • Rizza • (Almost) Symplectic • Tame Tensors Vectors • Distribution • Lie bracket • Pushforward • Tangent space • bundle • Torsion • Vector field • Vector flow Covectors • Closed/Exact • Covariant derivative • Cotangent space • bundle • De Rham cohomology • Differential form • Vector-valued • Exterior derivative • Interior product • Pullback • Ricci curvature • flow • Riemann curvature tensor • Tensor field • density • Volume form • Wedge product Bundles • Adjoint • Affine • Associated • Cotangent • Dual • Fiber • (Co) Fibration • Jet • Lie algebra • (Stable) Normal • Principal • Spinor • Subbundle • Tangent • Tensor • Vector Connections • Affine • Cartan • Ehresmann • Form • Generalized • Koszul • Levi-Civita • Principal • Vector • Parallel transport Related • Classification of manifolds • Gauge theory • History • Morse theory • Moving frame • Singularity theory Generalizations • Banach manifold • Diffeology • Diffiety • Fréchet manifold • K-theory • Orbifold • Secondary calculus • over commutative algebras • Sheaf • Stratifold • Supermanifold • Stratified space Riemannian geometry 
(Glossary) Basic concepts • Curvature • tensor • Scalar • Ricci • Sectional • Exponential map • Geodesic • Inner product • Metric tensor • Levi-Civita connection • Covariant derivative • Signature • Raising and lowering indices/Musical isomorphism • Parallel transport • Riemannian manifold • Pseudo-Riemannian manifold • Riemannian volume form Types of manifolds • Hermitian • Hyperbolic • Kähler • Kenmotsu Main results • Fundamental theorem of Riemannian geometry • Gauss's lemma • Gauss–Bonnet theorem • Hopf–Rinow theorem • Nash embedding theorem • Ricci flow • Schur's lemma Generalizations • Finsler • Hilbert • Sub-Riemannian Applications • General relativity • Geometrization conjecture • Poincaré conjecture • Uniformization theorem Tensors Glossary of tensor theory Scope Mathematics • Coordinate system • Differential geometry • Dyadic algebra • Euclidean geometry • Exterior calculus • Multilinear algebra • Tensor algebra • Tensor calculus • Physics • Engineering • Computer vision • Continuum mechanics • Electromagnetism • General relativity • Transport phenomena Notation • Abstract index notation • Einstein notation • Index notation • Multi-index notation • Penrose graphical notation • Ricci calculus • Tetrad (index notation) • Van der Waerden notation • Voigt notation Tensor definitions • Tensor (intrinsic definition) • Tensor field • Tensor density • Tensors in curvilinear coordinates • Mixed tensor • Antisymmetric tensor • Symmetric tensor • Tensor operator • Tensor bundle • Two-point tensor Operations • Covariant derivative • Exterior covariant derivative • Exterior derivative • Exterior product • Hodge star operator • Lie derivative • Raising and lowering indices • Symmetrization • Tensor contraction • Tensor product • Transpose (2nd-order tensors) Related abstractions • Affine connection • Basis • Cartan formalism (physics) • Connection form • Covariance and contravariance of vectors • Differential form • Dimension • Exterior form • Fiber bundle • Geodesic • Levi-Civita connection • Linear map • Manifold • Matrix • Multivector • Pseudotensor • Spinor • Vector • Vector space Notable tensors Mathematics • Kronecker delta • Levi-Civita symbol • Metric tensor • Nonmetricity tensor • Ricci curvature • Riemann curvature tensor • Torsion tensor • Weyl tensor Physics • Moment of inertia • Angular momentum tensor • Spin tensor • Cauchy stress tensor • stress–energy tensor • Einstein tensor • EM tensor • Gluon field strength tensor • Metric tensor (GR) Mathematicians • Élie Cartan • Augustin-Louis Cauchy • Elwin Bruno Christoffel • Albert Einstein • Leonhard Euler • Carl Friedrich Gauss • Hermann Grassmann • Tullio Levi-Civita • Gregorio Ricci-Curbastro • Bernhard Riemann • Jan Arnoldus Schouten • Woldemar Voigt • Hermann Weyl
Top-hat filter The name top-hat filter refers to several real-space or Fourier-space filtering techniques (not to be confused with the top-hat transform). The name top-hat originates from the shape of the filter, which is a rectangle function when viewed in the domain in which the filter is constructed. Real space In real space the filter performs nearest-neighbour filtering, incorporating components from neighbouring function values. However, despite their ease of implementation, their practical use is limited, as the Fourier-space representation of a real-space top-hat filter is the sinc function, which has the often undesirable effect of incorporating non-local frequencies. Analogue implementations Exact non-digital implementations are only theoretically possible. Top-hat filters can be constructed by chaining ideal low-pass and high-pass filters; in practice, an approximate top-hat filter can be constructed in analogue hardware using approximate low-pass and high-pass filters. Fourier space In Fourier space, a top-hat filter selects a band of signal of the desired frequency by specifying lower and upper bounding frequencies. Top-hat filters are particularly easy to implement digitally. Related functions The top-hat function can be generated by differentiating a linear ramp function of width $\epsilon $; in the limit $\epsilon \to 0$ it becomes the Dirac delta function. Its real-space form is the same as the moving average, except that it does not introduce a shift in the output function. See also • Boxcar averager • Rectangular function • Step function • Boxcar function
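Because the Fourier-space version is just a rectangular window on the spectrum, it is straightforward to implement digitally. The following NumPy sketch is an illustrative example of ours (the function name, cutoff frequencies, and test signal are made up): it keeps only the Fourier components between two bounding frequencies.

import numpy as np

def top_hat_bandpass(signal, dt, f_low, f_high):
    """Zero every Fourier component outside f_low <= f <= f_high (a rectangular window in frequency)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    window = (freqs >= f_low) & (freqs <= f_high)   # the "top hat" in Fourier space
    return np.fft.irfft(spectrum * window, n=len(signal))

# Example: isolate the 50 Hz component of a two-tone signal sampled at 1 kHz.
t = np.arange(0, 1, 1e-3)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
y = top_hat_bandpass(x, dt=1e-3, f_low=40, f_high=60)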
Topology In mathematics, topology (from the Greek words τόπος, 'place, location', and λόγος, 'study') is concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling, and bending; that is, without closing holes, opening holes, tearing, gluing, or passing through itself. A topological space is a set endowed with a structure, called a topology, which allows defining continuous deformation of subspaces, and, more generally, all kinds of continuity. Euclidean spaces, and, more generally, metric spaces are examples of a topological space, as any distance or metric defines a topology. The deformations that are considered in topology are homeomorphisms and homotopies. A property that is invariant under such deformations is a topological property. The following are basic examples of topological properties: the dimension, which allows distinguishing between a line and a surface; compactness, which allows distinguishing between a line and a circle; connectedness, which allows distinguishing a circle from two non-intersecting circles. The ideas underlying topology go back to Gottfried Leibniz, who in the 17th century envisioned the geometria situs and analysis situs. Leonhard Euler's Seven Bridges of Königsberg problem and polyhedron formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century; although, it was not until the first decades of the 20th century that the idea of a topological space was developed. Motivation The motivating insight behind topology is that some geometric problems depend not on the exact shape of the objects involved, but rather on the way they are put together. For example, the square and the circle have many properties in common: they are both one dimensional objects (from a topological point of view) and both separate the plane into two parts, the part inside and the part outside. In one of the first papers in topology, Leonhard Euler demonstrated that it was impossible to find a route through the town of Königsberg (now Kaliningrad) that would cross each of its seven bridges exactly once. This result did not depend on the lengths of the bridges or on their distance from one another, but only on connectivity properties: which bridges connect to which islands or riverbanks. This Seven Bridges of Königsberg problem led to the branch of mathematics known as graph theory. Similarly, the hairy ball theorem of algebraic topology says that "one cannot comb the hair flat on a hairy ball without creating a cowlick." This fact is immediately convincing to most people, even though they might not recognize the more formal statement of the theorem, that there is no nonvanishing continuous tangent vector field on the sphere. As with the Bridges of Königsberg, the result does not depend on the shape of the sphere; it applies to any kind of smooth blob, as long as it has no holes. To deal with these problems that do not rely on the exact shape of the objects, one must be clear about just what properties these problems do rely on. From this need arises the notion of homeomorphism. The impossibility of crossing each bridge just once applies to any arrangement of bridges homeomorphic to those in Königsberg, and the hairy ball theorem applies to any space homeomorphic to a sphere. Intuitively, two spaces are homeomorphic if one can be deformed into the other without cutting or gluing. 
A traditional joke is that a topologist cannot distinguish a coffee mug from a doughnut, since a sufficiently pliable doughnut could be reshaped to a coffee cup by creating a dimple and progressively enlarging it, while shrinking the hole into a handle.[1] Homeomorphism can be considered the most basic topological equivalence. Another is homotopy equivalence. This is harder to describe without getting technical, but the essential notion is that two objects are homotopy equivalent if they both result from "squishing" some larger object. History See also: History of the separation axioms Topology, as a well-defined mathematical discipline, originates in the early part of the twentieth century, but some isolated results can be traced back several centuries.[2] Among these are certain questions in geometry investigated by Leonhard Euler. His 1736 paper on the Seven Bridges of Königsberg is regarded as one of the first practical applications of topology.[2] On 14 November 1750, Euler wrote to a friend that he had realized the importance of the edges of a polyhedron. This led to his polyhedron formula, V − E + F = 2 (where V, E, and F respectively indicate the number of vertices, edges, and faces of the polyhedron). Some authorities regard this analysis as the first theorem, signaling the birth of topology.[3] Further contributions were made by Augustin-Louis Cauchy, Ludwig Schläfli, Johann Benedict Listing, Bernhard Riemann and Enrico Betti.[4] Listing introduced the term "Topologie" in Vorstudien zur Topologie, written in his native German, in 1847, having used the word for ten years in correspondence before its first appearance in print.[5] The English form "topology" was used in 1883 in Listing's obituary in the journal Nature to distinguish "qualitative geometry from the ordinary geometry in which quantitative relations chiefly are treated".[6] Their work was corrected, consolidated and greatly extended by Henri Poincaré. In 1895, he published his ground-breaking paper on Analysis Situs, which introduced the concepts now known as homotopy and homology, which are now considered part of algebraic topology.[4]
Topological characteristics of closed 2-manifolds[4]
Manifold | Euler number | Orientability | Betti numbers (b0, b1, b2) | Torsion coefficient (1-dim)
Sphere | 2 | Orientable | 1, 0, 1 | none
Torus | 0 | Orientable | 1, 2, 1 | none
2-holed torus | −2 | Orientable | 1, 4, 1 | none
g-holed torus (genus g) | 2 − 2g | Orientable | 1, 2g, 1 | none
Projective plane | 1 | Non-orientable | 1, 0, 0 | 2
Klein bottle | 0 | Non-orientable | 1, 1, 0 | 2
Sphere with c cross-caps (c > 0) | 2 − c | Non-orientable | 1, c − 1, 0 | 2
2-Manifold with g holes and c cross-caps (c > 0) | 2 − (2g + c) | Non-orientable | 1, (2g + c) − 1, 0 | 2
Unifying the work on function spaces of Georg Cantor, Vito Volterra, Cesare Arzelà, Jacques Hadamard, Giulio Ascoli and others, Maurice Fréchet introduced the metric space in 1906.[7] A metric space is now considered a special case of a general topological space, with any given topological space potentially giving rise to many distinct metric spaces. In 1914, Felix Hausdorff coined the term "topological space" and gave the definition for what is now called a Hausdorff space.[8] Currently, a topological space is a slight generalization of Hausdorff spaces, given in 1922 by Kazimierz Kuratowski.[9] Modern topology depends strongly on the ideas of set theory, developed by Georg Cantor in the later part of the 19th century. In addition to establishing the basic ideas of set theory, Cantor considered point sets in Euclidean space as part of his study of Fourier series.
For further developments, see point-set topology and algebraic topology. The 2022 Abel Prize was awarded to Dennis Sullivan "for his groundbreaking contributions to topology in its broadest sense, and in particular its algebraic, geometric and dynamical aspects".[10] Concepts Topologies on sets Main article: Topological space The term topology also refers to a specific mathematical idea central to the area of mathematics called topology. Informally, a topology describes how elements of a set relate spatially to each other. The same set can have different topologies. For instance, the real line, the complex plane, and the Cantor set can be thought of as the same set with different topologies. Formally, let X be a set and let τ be a family of subsets of X. Then τ is called a topology on X if: 1. Both the empty set and X are elements of τ. 2. Any union of elements of τ is an element of τ. 3. Any intersection of finitely many elements of τ is an element of τ. If τ is a topology on X, then the pair (X, τ) is called a topological space. The notation Xτ may be used to denote a set X endowed with the particular topology τ. By definition, every topology is a π-system. The members of τ are called open sets in X. A subset of X is said to be closed if its complement is in τ (that is, its complement is open). A subset of X may be open, closed, both (a clopen set), or neither. The empty set and X itself are always both closed and open. An open subset of X which contains a point x is called a neighborhood of x. Continuous functions and homeomorphisms Main articles: Continuous function and homeomorphism A function or map from one topological space to another is called continuous if the inverse image of any open set is open. If the function maps the real numbers to the real numbers (both spaces with the standard topology), then this definition of continuous is equivalent to the definition of continuous in calculus. If a continuous function is one-to-one and onto, and if the inverse of the function is also continuous, then the function is called a homeomorphism and the domain of the function is said to be homeomorphic to the range. Another way of saying this is that the function has a natural extension to the topology. If two spaces are homeomorphic, they have identical topological properties, and are considered topologically the same. The cube and the sphere are homeomorphic, as are the coffee cup and the doughnut. However, the sphere is not homeomorphic to the doughnut. Manifolds Main article: Manifold While topological spaces can be extremely varied and exotic, many areas of topology focus on the more familiar class of spaces known as manifolds. A manifold is a topological space that resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighborhood that is homeomorphic to the Euclidean space of dimension n. Lines and circles, but not figure eights, are one-dimensional manifolds. Two-dimensional manifolds are also called surfaces, although not all surfaces are manifolds. Examples include the plane, the sphere, and the torus, which can all be realized without self-intersection in three dimensions, and the Klein bottle and real projective plane, which cannot (that is, all their realizations are surfaces that are not manifolds). 
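For a finite set, the three axioms listed under "Topologies on sets" above can be checked by brute force. The following Python sketch is a toy illustration of ours (not a standard library routine); because the family is finite, checking all finite unions and intersections suffices.

from itertools import combinations

def is_topology(X, tau):
    """Brute-force check of the topology axioms for a finite set X and family tau of subsets."""
    X = frozenset(X)
    tau = {frozenset(s) for s in tau}
    if frozenset() not in tau or X not in tau:           # axiom 1: empty set and X are in tau
        return False
    for k in range(2, len(tau) + 1):
        for members in combinations(tau, k):
            if frozenset().union(*members) not in tau:   # axiom 2: unions stay in tau
                return False
            if X.intersection(*members) not in tau:      # axiom 3: finite intersections stay in tau
                return False
    return True

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))   # True
print(is_topology(X, [set(), {1}, {2}, X]))      # False: {1} ∪ {2} = {1, 2} is missing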
Topics General topology Main article: General topology General topology is the branch of topology dealing with the basic set-theoretic definitions and constructions used in topology.[11][12] It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. Another name for general topology is point-set topology. The basic object of study is topological spaces, which are sets equipped with a topology, that is, a family of subsets, called open sets, which is closed under finite intersections and (finite or infinite) unions. The fundamental concepts of topology, such as continuity, compactness, and connectedness, can be defined in terms of open sets. Intuitively, continuous functions take nearby points to nearby points. Compact sets are those that can be covered by finitely many sets of arbitrarily small size. Connected sets are sets that cannot be divided into two pieces that are far apart. The words nearby, arbitrarily small, and far apart can all be made precise by using open sets. Several topologies can be defined on a given space. Changing a topology consists of changing the collection of open sets. This changes which functions are continuous and which subsets are compact or connected. Metric spaces are an important class of topological spaces where the distance between any two points is defined by a function called a metric. In a metric space, an open set is a union of open disks, where an open disk of radius r centered at x is the set of all points whose distance to x is less than r. Many common spaces are topological spaces whose topology can be defined by a metric. This is the case of the real line, the complex plane, real and complex vector spaces and Euclidean spaces. Having a metric simplifies many proofs. Algebraic topology Main article: Algebraic topology Algebraic topology is a branch of mathematics that uses tools from algebra to study topological spaces.[13] The basic goal is to find algebraic invariants that classify topological spaces up to homeomorphism, though usually most classify up to homotopy equivalence. The most important of these invariants are homotopy groups, homology, and cohomology. Although algebraic topology primarily uses algebra to study topological problems, using topology to solve algebraic problems is sometimes also possible. Algebraic topology, for example, allows for a convenient proof that any subgroup of a free group is again a free group. Differential topology Main article: Differential topology Differential topology is the field dealing with differentiable functions on differentiable manifolds.[14] It is closely related to differential geometry and together they make up the geometric theory of differentiable manifolds. More specifically, differential topology considers the properties and structures that require only a smooth structure on a manifold to be defined. Smooth manifolds are "softer" than manifolds with extra geometric structures, which can act as obstructions to certain types of equivalences and deformations that exist in differential topology. For instance, volume and Riemannian curvature are invariants that can distinguish different geometric structures on the same smooth manifold – that is, one can smoothly "flatten out" certain manifolds, but it might require distorting the space and affecting the curvature or volume. 
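To make the metric-space description in the general topology section above concrete, here is a small numerical sketch; the helper names and the sampling scheme are invented for this illustration. It checks that around any point of an open disk there is a smaller open disk contained in it, which is the triangle-inequality reasoning behind the statement that an open set is a union of open disks.

import math, random

def dist(p, q):
    """Euclidean metric on the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def in_open_ball(p, center, radius):
    """Open disk of radius r centered at x: all points at distance less than r from x."""
    return dist(p, center) < radius

random.seed(0)
center, r = (0.0, 0.0), 1.0
for _ in range(1000):
    p = (random.uniform(-1, 1), random.uniform(-1, 1))
    if not in_open_ball(p, center, r):
        continue
    small_r = r - dist(p, center)   # radius guaranteed by the triangle inequality
    for _ in range(100):
        a = random.uniform(0, 2 * math.pi)
        rho = random.uniform(0, small_r * 0.999)
        q = (p[0] + rho * math.cos(a), p[1] + rho * math.sin(a))
        assert in_open_ball(q, center, r)
print("every sampled point of the disk has a smaller disk around it inside the disk")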
Geometric topology Main article: Geometric topology Geometric topology is a branch of topology that primarily focuses on low-dimensional manifolds (that is, spaces of dimensions 2, 3, and 4) and their interaction with geometry, but it also includes some higher-dimensional topology.[15] Some examples of topics in geometric topology are orientability, handle decompositions, local flatness, crumpling and the planar and higher-dimensional Schönflies theorem. In high-dimensional topology, characteristic classes are a basic invariant, and surgery theory is a key theory. Low-dimensional topology is strongly geometric, as reflected in the uniformization theorem in 2 dimensions – every surface admits a constant curvature metric; geometrically, it has one of 3 possible geometries: positive curvature/spherical, zero curvature/flat, and negative curvature/hyperbolic – and the geometrization conjecture (now theorem) in 3 dimensions – every 3-manifold can be cut into pieces, each of which has one of eight possible geometries. 2-dimensional topology can be studied as complex geometry in one variable (Riemann surfaces are complex curves) – by the uniformization theorem every conformal class of metrics is equivalent to a unique complex one, and 4-dimensional topology can be studied from the point of view of complex geometry in two variables (complex surfaces), though not every 4-manifold admits a complex structure. Generalizations Occasionally, one needs to use the tools of topology but a "set of points" is not available. In pointless topology one considers instead the lattice of open sets as the basic notion of the theory,[16] while Grothendieck topologies are structures defined on arbitrary categories that allow the definition of sheaves on those categories, and with that the definition of general cohomology theories.[17] Applications Biology Topology has been used to study various biological systems including molecules and nanostructure (e.g., membraneous objects[18]). In particular, circuit topology and knot theory have been extensively applied to classify and compare the topology of folded proteins and nucleic acids. Circuit topology classifies folded molecular chains based on the pairwise arrangement of their intra-chain contacts and chain crossings. Knot theory, a branch of topology, is used in biology to study the effects of certain enzymes on DNA. These enzymes cut, twist, and reconnect the DNA, causing knotting with observable effects such as slower electrophoresis.[19] Topology is also used in evolutionary biology to represent the relationship between phenotype and genotype.[20] Phenotypic forms that appear quite different can be separated by only a few mutations depending on how genetic changes map to phenotypic changes during development. In neuroscience, topological quantities like the Euler characteristic and Betti number have been used to measure the complexity of patterns of activity in neural networks. Computer science Topological data analysis uses techniques from algebraic topology to determine the large scale structure of a set (for instance, determining if a cloud of points is spherical or toroidal). The main method used by topological data analysis is to: 1. Replace a set of data points with a family of simplicial complexes, indexed by a proximity parameter. 2. Analyse these topological complexes via algebraic topology – specifically, via the theory of persistent homology.[21] 3. 
Encode the persistent homology of a data set in the form of a parameterized version of a Betti number, which is called a barcode.[21] Several branches of programming language semantics, such as domain theory, are formalized using topology. In this context, Steve Vickers, building on work by Samson Abramsky and Michael B. Smyth, characterizes topological spaces as Boolean or Heyting algebras over open sets, which are characterized as semidecidable (equivalently, finitely observable) properties.[22] Physics Topology is relevant to physics in areas such as condensed matter physics,[23] quantum field theory and physical cosmology. The topological dependence of mechanical properties in solids is of interest in disciplines of mechanical engineering and materials science. Electrical and mechanical properties depend on the arrangement and network structures of molecules and elementary units in materials.[24] The compressive strength of crumpled topologies is studied in attempts to understand the high strength to weight of such structures that are mostly empty space.[25] Topology is of further significance in Contact mechanics where the dependence of stiffness and friction on the dimensionality of surface structures is the subject of interest with applications in multi-body physics. A topological quantum field theory (or topological field theory or TQFT) is a quantum field theory that computes topological invariants. Although TQFTs were invented by physicists, they are also of mathematical interest, being related to, among other things, knot theory, the theory of four-manifolds in algebraic topology, and to the theory of moduli spaces in algebraic geometry. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for work related to topological field theory. The topological classification of Calabi–Yau manifolds has important implications in string theory, as different manifolds can sustain different kinds of strings.[26] In cosmology, topology can be used to describe the overall shape of the universe.[27] This area of research is commonly known as spacetime topology. In condensed matter a relevant application to topological physics comes from the possibility to obtain one-way current, which is a current protected from backscattering. It was first discovered in electronics with the famous quantum Hall effect, and then generalized in other areas of physics, for instance in photonics[28] by F.D.M Haldane. Robotics The possible positions of a robot can be described by a manifold called configuration space.[29] In the area of motion planning, one finds paths between two points in configuration space. These paths represent a motion of the robot's joints and other parts into the desired pose.[30] Games and puzzles Disentanglement puzzles are based on topological aspects of the puzzle's shapes and components.[31][32][33] Fiber art In order to create a continuous join of pieces in a modular construction, it is necessary to create an unbroken path in an order which surrounds each piece and traverses each edge only once. This process is an application of the Eulerian path.[34] See also • Characterizations of the category of topological spaces • Equivariant topology • List of algebraic topology topics • List of examples in general topology • List of general topology topics • List of geometric topology topics • List of topology topics • Publications in topology • Topoisomer • Topology glossary • Topological Galois theory • Topological geometry • Topological order References Citations 1. 
Hubbard, John H.; West, Beverly H. (1995). Differential Equations: A Dynamical Systems Approach. Part II: Higher-Dimensional Systems. Texts in Applied Mathematics. Vol. 18. Springer. p. 204. ISBN 978-0-387-94377-0. 2. Croom 1989, p. 7 3. Richeson 2008, p. 63; Aleksandrov 1969, p. 204 4. Richeson (2008) 5. Listing, Johann Benedict, "Vorstudien zur Topologie", Vandenhoeck und Ruprecht, Göttingen, p. 67, 1848 6. Tait, Peter Guthrie (1 February 1883). "Johann Benedict Listing (obituary)". Nature. 27 (692): 316–317. Bibcode:1883Natur..27..316P. doi:10.1038/027316a0. 7. Fréchet, Maurice (1906). Sur quelques points du calcul fonctionnel. OCLC 8897542. {{cite book}}: |work= ignored (help) 8. Hausdorff, Felix, "Grundzüge der Mengenlehre", Leipzig: Veit. In (Hausdorff Werke, II (2002), 91–576) 9. Croom 1989, p. 129 10. "Prize winner 2022". The Norwegian Academy of Science and Letters. Retrieved 23 March 2022. 11. Munkres, James R. Topology. Vol. 2. Upper Saddle River: Prentice Hall, 2000. 12. Adams, Colin Conrad, and Robert David Franzosa. Introduction to topology: pure and applied. Pearson Prentice Hall, 2008. 13. Allen Hatcher, Algebraic topology. Archived 6 February 2012 at the Wayback Machine (2002) Cambridge University Press, xii+544 pp. ISBN 0-521-79160-X, 0-521-79540-0. 14. Lee, John M. (2006). Introduction to Smooth Manifolds. Springer-Verlag. ISBN 978-0-387-95448-6. 15. R. B. Sher and R. J. Daverman (2002), Handbook of Geometric Topology, North-Holland. ISBN 0-444-82432-4 16. Johnstone, Peter T. (1983). "The point of pointless topology". Bulletin of the American Mathematical Society. 8 (1): 41–53. doi:10.1090/s0273-0979-1983-15080-2. 17. Artin, Michael (1962). Grothendieck topologies. Cambridge, MA: Harvard University, Dept. of Mathematics. Zbl 0208.48701. 18. Mashaghi, Samaneh; Jadidi, Tayebeh; Koenderink, Gijsje; Mashaghi, Alireza (2013). "Lipid Nanotechnology". International Journal of Molecular Sciences. 14 (2): 4242–4282. doi:10.3390/ijms14024242. PMC 3588097. PMID 23429269. 19. Adams, Colin (2004). The Knot Book: An Elementary Introduction to the Mathematical Theory of Knots. American Mathematical Society. ISBN 978-0-8218-3678-1. 20. Stadler, Bärbel M.R.; Stadler, Peter F.; Wagner, Günter P.; Fontana, Walter (2001). "The Topology of the Possible: Formal Spaces Underlying Patterns of Evolutionary Change". Journal of Theoretical Biology. 213 (2): 241–274. Bibcode:2001JThBi.213..241S. CiteSeerX 10.1.1.63.7808. doi:10.1006/jtbi.2001.2423. PMID 11894994. 21. Gunnar Carlsson (April 2009). "Topology and data" (PDF). Bulletin of the American Mathematical Society. New Series. 46 (2): 255–308. doi:10.1090/S0273-0979-09-01249-X. 22. Vickers, Steve (1996). Topology via Logic. Cambridge Tracts in Theoretical Computer Science. Cambridge University Press. ISBN 978-0521576512. 23. "The Nobel Prize in Physics 2016". Nobel Foundation. 4 October 2016. Retrieved 12 October 2016. 24. Stephenson, C.; et., al. (2017). "Topological properties of a self-assembled electrical network via ab initio calculation". Sci. Rep. 7: 41621. Bibcode:2017NatSR...741621S. doi:10.1038/srep41621. PMC 5290745. PMID 28155863. 25. Cambou, Anne Dominique; Narayanan, Menon (2011). "Three-dimensional structure of a sheet crumpled into a ball". Proceedings of the National Academy of Sciences of the United States of America. 108 (36): 14741–14745. arXiv:1203.5826. Bibcode:2011PNAS..10814741C. doi:10.1073/pnas.1019192108. PMC 3169141. PMID 21873249. 26. Yau, S. & Nadis, S.; The Shape of Inner Space, Basic Books, 2010. 27. 
The Shape of Space: How to Visualize Surfaces and Three-dimensional Manifolds 2nd ed (Marcel Dekker, 1985, ISBN 0-8247-7437-X) 28. Haldane, F. D. M.; Raghu, S. (10 January 2008). "Possible Realization of Directional Optical Waveguides in Photonic Crystals with Broken Time-Reversal Symmetry". Physical Review Letters. 100 (1): 013904. arXiv:cond-mat/0503588. Bibcode:2008PhRvL.100a3904H. doi:10.1103/PhysRevLett.100.013904. ISSN 0031-9007. PMID 18232766. S2CID 44745453. 29. John J. Craig, Introduction to Robotics: Mechanics and Control, 3rd Ed. Prentice-Hall, 2004 30. Farber, Michael (2008). Invitation to Topological Robotics. European Mathematical Society. ISBN 978-3037190548. 31. Horak, Mathew (2006). "Disentangling Topological Puzzles by Using Knot Theory". Mathematics Magazine. 79 (5): 368–375. doi:10.2307/27642974. JSTOR 27642974.. 32. http://sma.epfl.ch/Notes.pdf Archived 1 November 2022 at the Wayback Machine A Topological Puzzle, Inta Bertuccioni, December 2003. 33. https://www.futilitycloset.com/the-figure-8-puzzle Archived 25 May 2017 at the Wayback Machine The Figure Eight Puzzle, Science and Math, June 2012. 34. Eckman, Edie (2012). Connect the shapes crochet motifs: creative techniques for joining motifs of all shapes. Storey Publishing. ISBN 978-1603429733. Bibliography • Aleksandrov, P.S. (1969) [1956], "Chapter XVIII Topology", in Aleksandrov, A.D.; Kolmogorov, A.N.; Lavrent'ev, M.A. (eds.), Mathematics / Its Content, Methods and Meaning (2nd ed.), The M.I.T. Press • Croom, Fred H. (1989), Principles of Topology, Saunders College Publishing, ISBN 978-0-03-029804-2 • Richeson, D. (2008), Euler's Gem: The Polyhedron Formula and the Birth of Topology, Princeton University Press Further reading • Ryszard Engelking, General Topology, Heldermann Verlag, Sigma Series in Pure Mathematics, December 1989, ISBN 3-88538-006-4. • Bourbaki; Elements of Mathematics: General Topology, Addison–Wesley (1966). • Breitenberger, E. (2006). "Johann Benedict Listing". In James, I.M. (ed.). History of Topology. North Holland. ISBN 978-0-444-82375-5. • Kelley, John L. (1975). General Topology. Springer-Verlag. ISBN 978-0-387-90125-1. • Brown, Ronald (2006). Topology and Groupoids. Booksurge. ISBN 978-1-4196-2722-4. (Provides a well motivated, geometric account of general topology, and shows the use of groupoids in discussing van Kampen's theorem, covering spaces, and orbit spaces.) • Wacław Sierpiński, General Topology, Dover Publications, 2000, ISBN 0-486-41148-6 • Pickover, Clifford A. (2006). The Möbius Strip: Dr. August Möbius's Marvelous Band in Mathematics, Games, Literature, Art, Technology, and Cosmology. Thunder's Mouth Press. ISBN 978-1-56025-826-1. (Provides a popular introduction to topology and geometry) • Gemignani, Michael C. (1990) [1967], Elementary Topology (2nd ed.), Dover Publications Inc., ISBN 978-0-486-66522-1 External links Wikimedia Commons has media related to Topology. Wikiquote has quotations related to Topology. Wikibooks has more on the topic of: Topology • "Topology, general", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Elementary Topology: A First Course Viro, Ivanov, Netsvetaev, Kharlamov. • Topology at Curlie • The Topological Zoo at The Geometry Center. • Topology Atlas • Topology Course Lecture Notes Aisling McCluskey and Brian McMaster, Topology Atlas. • Topology Glossary • Moscow 1935: Topology moving towards America, a historical essay by Hassler Whitney. 
Top (algebra) In the context of a module M over a ring R, the top of M is the largest semisimple quotient module of M if it exists. For finite-dimensional k-algebras (k a field) R, if rad(M) denotes the intersection of all proper maximal submodules of M (the radical of the module), then the top of M is M/rad(M). In the case of local rings with maximal ideal P, the top of M is M/PM. In general if R is a semilocal ring (=semi-artinian ring), that is, if R/Rad(R) is an Artinian ring, where Rad(R) is the Jacobson radical of R, then M/rad(M) is a semisimple module and is the top of M. This includes the cases of local rings and finite dimensional algebras over fields. See also • Projective cover • Radical of a module • Socle (mathematics) References • David Eisenbud, Commutative algebra with a view toward Algebraic Geometry ISBN 0-387-94269-6
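As a concrete sanity check of the definition above (an illustration added here, not part of the article): take M = Z/nZ as a module over the Artinian, hence semilocal, ring R = Z/nZ. The maximal submodules are pZ/nZ for the primes p dividing n, so rad(M) is generated by the product of the distinct prime divisors of n, and the top M/rad(M) is Z/(p1···pk)Z, a semisimple module. The helper names below are invented for the sketch.

def distinct_prime_divisors(n):
    """Distinct primes dividing n (simple trial division, fine for small n)."""
    primes, d = [], 2
    while d * d <= n:
        if n % d == 0:
            primes.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        primes.append(n)
    return primes

def top_of_Z_mod_n(n):
    """For M = Z/nZ over R = Z/nZ, rad(M) is generated by the product of the
    distinct primes dividing n, so the top M/rad(M) is Z/rZ with r as returned."""
    r = 1
    for p in distinct_prime_divisors(n):
        r *= p
    return r

print(top_of_Z_mod_n(12))  # 6  -> top(Z/12) = Z/6 = Z/2 x Z/3, a semisimple module
print(top_of_Z_mod_n(8))   # 2  -> top(Z/8) = Z/2
print(top_of_Z_mod_n(30))  # 30 -> Z/30 is already semisimple, so it equals its own top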
Category of topological spaces In mathematics, the category of topological spaces, often denoted Top, is the category whose objects are topological spaces and whose morphisms are continuous maps. This is a category because the composition of two continuous maps is again continuous, and the identity function is continuous. The study of Top and of properties of topological spaces using the techniques of category theory is known as categorical topology. N.B. Some authors use the name Top for the categories with topological manifolds, with compactly generated spaces as objects and continuous maps as morphisms or with the category of compactly generated weak Hausdorff spaces. As a concrete category Like many categories, the category Top is a concrete category, meaning its objects are sets with additional structure (i.e. topologies) and its morphisms are functions preserving this structure. There is a natural forgetful functor U : Top → Set to the category of sets which assigns to each topological space the underlying set and to each continuous map the underlying function. The forgetful functor U has both a left adjoint D : Set → Top which equips a given set with the discrete topology, and a right adjoint I : Set → Top which equips a given set with the indiscrete topology. Both of these functors are, in fact, right inverses to U (meaning that UD and UI are equal to the identity functor on Set). Moreover, since any function between discrete or between indiscrete spaces is continuous, both of these functors give full embeddings of Set into Top. Top is also fiber-complete meaning that the category of all topologies on a given set X (called the fiber of U above X) forms a complete lattice when ordered by inclusion. The greatest element in this fiber is the discrete topology on X, while the least element is the indiscrete topology. Top is the model of what is called a topological category. These categories are characterized by the fact that every structured source $(X\to UA_{i})_{I}$ has a unique initial lift $(A\to A_{i})_{I}$. In Top the initial lift is obtained by placing the initial topology on the source. Topological categories have many properties in common with Top (such as fiber-completeness, discrete and indiscrete functors, and unique lifting of limits). Limits and colimits The category Top is both complete and cocomplete, which means that all small limits and colimits exist in Top. In fact, the forgetful functor U : Top → Set uniquely lifts both limits and colimits and preserves them as well. Therefore, (co)limits in Top are given by placing topologies on the corresponding (co)limits in Set. Specifically, if F is a diagram in Top and (L, φ : L → F) is a limit of UF in Set, the corresponding limit of F in Top is obtained by placing the initial topology on (L, φ : L → F). Dually, colimits in Top are obtained by placing the final topology on the corresponding colimits in Set. Unlike many algebraic categories, the forgetful functor U : Top → Set does not create or reflect limits since there will typically be non-universal cones in Top covering universal cones in Set. Examples of limits and colimits in Top include: • The empty set (considered as a topological space) is the initial object of Top; any singleton topological space is a terminal object. There are thus no zero objects in Top. • The product in Top is given by the product topology on the Cartesian product. The coproduct is given by the disjoint union of topological spaces. 
• The equalizer of a pair of morphisms is given by placing the subspace topology on the set-theoretic equalizer. Dually, the coequalizer is given by placing the quotient topology on the set-theoretic coequalizer. • Direct limits and inverse limits are the set-theoretic limits with the final topology and initial topology respectively. • Adjunction spaces are an example of pushouts in Top. Other properties • The monomorphisms in Top are the injective continuous maps, the epimorphisms are the surjective continuous maps, and the isomorphisms are the homeomorphisms. • The extremal monomorphisms are (up to isomorphism) the subspace embeddings. In fact, in Top all extremal monomorphisms happen to satisfy the stronger property of being regular. • The extremal epimorphisms are (essentially) the quotient maps. Every extremal epimorphism is regular. • The split monomorphisms are (essentially) the inclusions of retracts into their ambient space. • The split epimorphisms are (up to isomorphism) the continuous surjective maps of a space onto one of its retracts. • There are no zero morphisms in Top, and in particular the category is not preadditive. • Top is not cartesian closed (and therefore also not a topos) since it does not have exponential objects for all spaces. When this feature is desired, one often restricts to the full subcategory of compactly generated Hausdorff spaces CGHaus or the category of compactly generated weak Hausdorff spaces. However, Top is contained in the exponential category of pseudotopologies, which is itself a subcategory of the (also exponential) category of convergence spaces.[1] Relationships to other categories • The category of pointed topological spaces Top• is a coslice category over Top. • The homotopy category hTop has topological spaces for objects and homotopy equivalence classes of continuous maps for morphisms. This is a quotient category of Top. One can likewise form the pointed homotopy category hTop•. • Top contains the important category Haus of Hausdorff spaces as a full subcategory. The added structure of this subcategory allows for more epimorphisms: in fact, the epimorphisms in this subcategory are precisely those morphisms with dense images in their codomains, so that epimorphisms need not be surjective. • Top contains the full subcategory CGHaus of compactly generated Hausdorff spaces, which has the important property of being a Cartesian closed category while still containing all of the typical spaces of interest. This makes CGHaus a particularly convenient category of topological spaces that is often used in place of Top. • The forgetful functor to Set has both a left and a right adjoint, as described above in the concrete category section. • There is a functor to the category of locales Loc sending a topological space to its locale of open sets. This functor has a right adjoint that sends each locale to its topological space of points. This adjunction restricts to an equivalence between the category of sober spaces and spatial locales. • The homotopy hypothesis relates Top with ∞Grpd, the category of ∞-groupoids. The conjecture states that ∞-groupoids are equivalent to topological spaces modulo weak homotopy equivalence. 
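The discrete and indiscrete functors described in the concrete-category section above can be made tangible on finite sets. The following sketch is illustrative only; the helper names are invented. It builds both topologies on a small set and checks the criterion that preimages of open sets are open, confirming in particular that any function between discrete spaces is continuous, as stated above.

from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return {frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)}

def discrete(X):
    """D(X): the discrete topology, in which every subset is open."""
    return powerset(X)

def indiscrete(X):
    """I(X): the indiscrete topology, with only the empty set and X open."""
    return {frozenset(), frozenset(X)}

def is_continuous(f, X, tau_X, tau_Y):
    """Continuity criterion: the preimage of every open set is open."""
    return all(frozenset(x for x in X if f(x) in V) in tau_X for V in tau_Y)

X, Y = {1, 2, 3}, {"a", "b"}
f = lambda x: "a" if x % 2 else "b"                      # an arbitrary function X -> Y
print(is_continuous(f, X, discrete(X), discrete(Y)))     # True: discrete to discrete
print(is_continuous(f, X, discrete(X), indiscrete(Y)))   # True: anything into an indiscrete space
print(is_continuous(f, X, indiscrete(X), discrete(Y)))   # False: {x : f(x) == "a"} is not open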
See also • Category of groups – category in mathematics • Category of metric spaces – mathematical category with metric spaces as its objects and distance-non-increasing maps as its morphisms • Category of sets – category in mathematics where the objects are sets • Category of topological spaces with base point – topological space with a distinguished point • Category of topological vector spaces – topological category Citations 1. Dolecki 2009, pp. 1–51 References • Adámek, Jiří, Herrlich, Horst, & Strecker, George E.; (1990). Abstract and Concrete Categories (4.2MB PDF). Originally publ. John Wiley & Sons. ISBN 0-471-60922-6. (now free on-line edition). • Dolecki, Szymon; Mynard, Frederic (2016). Convergence Foundations Of Topology. New Jersey: World Scientific Publishing Company. ISBN 978-981-4571-52-4. OCLC 945169917. • Dolecki, Szymon (2009). "An initiation into convergence theory" (PDF). In Mynard, Frédéric; Pearl, Elliott (eds.). Beyond Topology. Contemporary Mathematics. Vol. 486. pp. 115–162. doi:10.1090/conm/486/09509. ISBN 9780821842799. Retrieved 14 January 2021. • Dolecki, Szymon; Mynard, Frédéric (2014). "A unified theory of function spaces and hyperspaces: local properties" (PDF). Houston J. Math. 40 (1): 285–318. Retrieved 14 January 2021. • Herrlich, Horst: Topologische Reflexionen und Coreflexionen. Springer Lecture Notes in Mathematics 78 (1968). • Herrlich, Horst: Categorical topology 1971–1981. In: General Topology and its Relations to Modern Analysis and Algebra 5, Heldermann Verlag 1983, pp. 279–383. • Herrlich, Horst & Strecker, George E.: Categorical Topology – its origins, as exemplified by the unfolding of the theory of topological reflections and coreflections before 1971. In: Handbook of the History of General Topology (eds. C.E.Aull & R. Lowen), Kluwer Acad. Publ. vol 1 (1997) pp. 255–341.
Greatest element and least element In mathematics, especially in order theory, the greatest element of a subset $S$ of a partially ordered set (poset) is an element of $S$ that is greater than every other element of $S$. The term least element is defined dually, that is, it is an element of $S$ that is smaller than every other element of $S.$ Definitions Let $(P,\leq )$ be a preordered set and let $S\subseteq P.$ An element $g\in P$ is said to be a greatest element of $S$ if $g\in S$ and if it also satisfies: $s\leq g$ for all $s\in S.$ By switching the side of the relation that $s$ is on in the above definition, the definition of a least element of $S$ is obtained. Explicitly, an element $l\in P$ is said to be a least element of $S$ if $l\in S$ and if it also satisfies: $l\leq s$ for all $s\in S.$ If $(P,\leq )$ is also a partially ordered set then $S$ can have at most one greatest element and it can have at most one least element. Whenever a greatest element of $S$ exists and is unique then this element is called the greatest element of $S$. The terminology the least element of $S$ is defined similarly. If $(P,\leq )$ has a greatest element (resp. a least element) then this element is also called a top (resp. a bottom) of $(P,\leq ).$ Relationship to upper/lower bounds Greatest elements are closely related to upper bounds. Let $(P,\leq )$ be a preordered set and let $S\subseteq P.$ An upper bound of $S$ in $(P,\leq )$ is an element $u$ such that $u\in P$ and $s\leq u$ for all $s\in S.$ Importantly, an upper bound of $S$ in $P$ is not required to be an element of $S.$ If $g\in P$ then $g$ is a greatest element of $S$ if and only if $g$ is an upper bound of $S$ in $(P,\leq )$ and $g\in S.$ In particular, any greatest element of $S$ is also an upper bound of $S$ (in $P$) but an upper bound of $S$ in $P$ is a greatest element of $S$ if and only if it belongs to $S.$ In the particular case where $P=S,$ the definition of "$u$ is an upper bound of $S$ in $S$" becomes: $u$ is an element such that $u\in S$ and $s\leq u$ for all $s\in S,$ which is completely identical to the definition of a greatest element given before. Thus $g$ is a greatest element of $S$ if and only if $g$ is an upper bound of $S$ in $S$. If $u$ is an upper bound of $S$ in $P$ that is not an upper bound of $S$ in $S$ (which can happen if and only if $u\not \in S$) then $u$ can not be a greatest element of $S$ (however, it may be possible that some other element is a greatest element of $S$). In particular, it is possible for $S$ to simultaneously not have a greatest element and for there to exist some upper bound of $S$ in $P$. Even if a set has some upper bounds, it need not have a greatest element, as shown by the example of the negative real numbers. This example also demonstrates that the existence of a least upper bound (the number 0 in this case) does not imply the existence of a greatest element either. Contrast to maximal elements and local/absolute maximums A greatest element of a subset of a preordered set should not be confused with a maximal element of the set, which are elements that are not strictly smaller than any other element in the set. 
Let $(P,\leq )$ be a preordered set and let $S\subseteq P.$ An element $m\in S$ is said to be a maximal element of $S$ if the following condition is satisfied: whenever $s\in S$ satisfies $m\leq s,$ then necessarily $s\leq m.$ If $(P,\leq )$ is a partially ordered set then $m\in S$ is a maximal element of $S$ if and only if there does not exist any $s\in S$ such that $m\leq s$ and $s\neq m.$ A maximal element of $(P,\leq )$ is defined to mean a maximal element of the subset $S:=P.$ A set can have several maximal elements without having a greatest element. Like upper bounds and maximal elements, greatest elements may fail to exist. In a totally ordered set the maximal element and the greatest element coincide; and it is also called maximum; in the case of function values it is also called the absolute maximum, to avoid confusion with a local maximum.[1] The dual terms are minimum and absolute minimum. Together they are called the absolute extrema. Similar conclusions hold for least elements. Role of (in)comparability in distinguishing greatest vs. maximal elements One of the most important differences between a greatest element $g$ and a maximal element $m$ of a preordered set $(P,\leq )$ has to do with what elements they are comparable to. Two elements $x,y\in P$ are said to be comparable if $x\leq y$ or $y\leq x$; they are called incomparable if they are not comparable. Because preorders are reflexive (which means that $x\leq x$ is true for all elements $x$), every element $x$ is always comparable to itself. Consequently, the only pairs of elements that could possibly be incomparable are distinct pairs. In general, however, preordered sets (and even directed partially ordered sets) may have elements that are incomparable. By definition, an element $g\in P$ is a greatest element of $(P,\leq )$ if $s\leq g,$ for every $s\in P$; so by its very definition, a greatest element of $(P,\leq )$ must, in particular, be comparable to every element in $P.$ This is not required of maximal elements. Maximal elements of $(P,\leq )$ are not required to be comparable to every element in $P.$ This is because unlike the definition of "greatest element", the definition of "maximal element" includes an important if statement. The defining condition for $m\in P$ to be a maximal element of $(P,\leq )$ can be reworded as: For all $s\in P,$ IF $m\leq s$ (so elements that are incomparable to $m$ are ignored) then $s\leq m.$ Example where all elements are maximal but none are greatest Suppose that $S$ is a set containing at least two (distinct) elements and define a partial order $\,\leq \,$ on $S$ by declaring that $i\leq j$ if and only if $i=j.$ If $i\neq j$ belong to $S$ then neither $i\leq j$ nor $j\leq i$ holds, which shows that all pairs of distinct (i.e. non-equal) elements in $S$ are incomparable. Consequently, $(S,\leq )$ can not possibly have a greatest element (because a greatest element of $S$ would, in particular, have to be comparable to every element of $S$ but $S$ has no such element). 
However, every element $m\in S$ is a maximal element of $(S,\leq )$ because there is exactly one element in $S$ that is both comparable to $m$ and $\geq m,$ that element being $m$ itself (which of course, is $\leq m$).[note 1] In contrast, if a preordered set $(P,\leq )$ does happen to have a greatest element $g$ then $g$ will necessarily be a maximal element of $(P,\leq )$ and moreover, as a consequence of the greatest element $g$ being comparable to every element of $P,$ if $(P,\leq )$ is also partially ordered then it is possible to conclude that $g$ is the only maximal element of $(P,\leq ).$ However, the uniqueness conclusion is no longer guaranteed if the preordered set $(P,\leq )$ is not also partially ordered. For example, suppose that $R$ is a non-empty set and define a preorder $\,\leq \,$ on $R$ by declaring that $i\leq j$ always holds for all $i,j\in R.$ The directed preordered set $(R,\leq )$ is partially ordered if and only if $R$ has exactly one element. All pairs of elements from $R$ are comparable and every element of $R$ is a greatest element (and thus also a maximal element) of $(R,\leq ).$ So in particular, if $R$ has at least two elements then $(R,\leq )$ has multiple distinct greatest elements. Properties Throughout, let $(P,\leq )$ be a partially ordered set and let $S\subseteq P.$ • A set $S$ can have at most one greatest element.[note 2] Thus if a set has a greatest element then it is necessarily unique. • If it exists, then the greatest element of $S$ is an upper bound of $S$ that is also contained in $S.$ • If $g$ is the greatest element of $S$ then $g$ is also a maximal element of $S$[note 3] and moreover, any other maximal element of $S$ will necessarily be equal to $g.$[note 4] • Thus if a set $S$ has several maximal elements then it cannot have a greatest element. • If $P$ satisfies the ascending chain condition, a subset $S$ of $P$ has a greatest element if, and only if, it has one maximal element.[note 5] • When the restriction of $\,\leq \,$ to $S$ is a total order ($S=\{1,2,4\}$ in the topmost picture is an example), then the notions of maximal element and greatest element coincide.[note 6] • However, this is not a necessary condition for whenever $S$ has a greatest element, the notions coincide, too, as stated above. • If the notions of maximal element and greatest element coincide on every two-element subset $S$ of $P,$ then $\,\leq \,$ is a total order on $P.$[note 7] Sufficient conditions • A finite chain always has a greatest and a least element. Top and bottom The least and greatest element of the whole partially ordered set play a special role and are also called bottom (⊥) and top (⊤), or zero (0) and unit (1), respectively. If both exist, the poset is called a bounded poset. The notation of 0 and 1 is used preferably when the poset is a complemented lattice, and when no confusion is likely, i.e. when one is not talking about partial orders of numbers that already contain elements 0 and 1 different from bottom and top. The existence of least and greatest elements is a special completeness property of a partial order. Further introductory information is found in the article on order theory. Examples • The subset of integers has no upper bound in the set $\mathbb {R} $ of real numbers. • Let the relation $\,\leq \,$ on $\{a,b,c,d\}$ be given by $a\leq c,$ $a\leq d,$ $b\leq c,$ $b\leq d.$ The set $\{a,b\}$ has upper bounds $c$ and $d,$ but no least upper bound, and no greatest element (cf. picture). 
• In the rational numbers, the set of numbers with their square less than 2 has upper bounds but no greatest element and no least upper bound. • In $\mathbb {R} ,$ the set of numbers less than 1 has a least upper bound, viz. 1, but no greatest element. • In $\mathbb {R} ,$ the set of numbers less than or equal to 1 has a greatest element, viz. 1, which is also its least upper bound. • In $\mathbb {R} ^{2}$ with the product order, the set of pairs $(x,y)$ with $0<x<1$ has no upper bound. • In $\mathbb {R} ^{2}$ with the lexicographical order, this set has upper bounds, e.g. $(1,0).$ It has no least upper bound. See also • Essential supremum and essential infimum • Initial and terminal objects • Maximal and minimal elements • Limit superior and limit inferior (infimum limit) • Upper and lower bounds Notes 1. Of course, in this particular example, there exists only one element in $S$ that is comparable to $m,$ which is necessarily $m$ itself, so the second condition "and $\geq m,$" was redundant. 2. If $g_{1}$ and $g_{2}$ are both greatest, then $g_{1}\leq g_{2}$ and $g_{2}\leq g_{1},$ and hence $g_{1}=g_{2}$ by antisymmetry. 3. If $g$ is the greatest element of $S$ and $s\in S,$ then $s\leq g.$ By antisymmetry, this renders ($g\leq s$ and $g\neq s$) impossible. 4. If $M$ is a maximal element, then $M\leq g$ since $g$ is greatest, hence $M=g$ since $M$ is maximal. 5. Only if: see above. — If: Assume for contradiction that $S$ has just one maximal element, $m,$ but no greatest element. Since $m$ is not greatest, some $s_{1}\in S$ must exist that is incomparable to $m.$ Hence $s_{1}\in S$ cannot be maximal, that is, $s_{1}<s_{2}$ must hold for some $s_{2}\in S.$ The latter must be incomparable to $m,$ too, since $m<s_{2}$ contradicts $m$'s maximality while $s_{2}\leq m$ contradicts the incomparability of $m$ and $s_{1}.$ Repeating this argument, an infinite ascending chain $s_{1}<s_{2}<\cdots <s_{n}<\cdots $ can be found (such that each $s_{i}$ is incomparable to $m$ and not maximal). This contradicts the ascending chain condition. 6. Let $m\in S$ be a maximal element, for any $s\in S$ either $s\leq m$ or $m\leq s.$ In the second case, the definition of maximal element requires that $m=s,$ so it follows that $s\leq m.$ In other words, $m$ is a greatest element. 7. If $a,b\in P$ were incomparable, then $S=\{a,b\}$ would have two maximal, but no greatest element, contradicting the coincidence. References 1. The notion of locality requires the function's domain to be at least a topological space. • Davey, B. A.; Priestley, H. A. (2002). Introduction to Lattices and Order (2nd ed.). Cambridge University Press. ISBN 978-0-521-78451-1.
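The distinction drawn above between greatest and maximal elements is easy to probe computationally. The sketch below is an illustration; the divisibility order and the helper names are chosen for the demonstration and are not taken from the article. In {2, 3, 4, 6} ordered by divisibility there are two maximal elements and no greatest element, while in the chain {1, 2, 4} mentioned above the two notions coincide.

def divides(a, b):
    return b % a == 0

def maximal_elements(S, le):
    """Elements m of S with no s in S such that m <= s and s != m."""
    return [m for m in S if not any(le(m, s) and s != m for s in S)]

def greatest_elements(S, le):
    """Elements g of S with s <= g for every s in S (at most one in a poset)."""
    return [g for g in S if all(le(s, g) for s in S)]

S = [2, 3, 4, 6]                      # partially ordered by divisibility
print(maximal_elements(S, divides))   # [4, 6]  -> several maximal elements
print(greatest_elements(S, divides))  # []      -> hence no greatest element

T = [1, 2, 4]                         # a chain under divisibility
print(maximal_elements(T, divides))   # [4]
print(greatest_elements(T, divides))  # [4]     -> the notions coincide on a total order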
Top type In mathematical logic and computer science, some type theories and type systems include a top type that is commonly denoted with top or the symbol ⊤. The top type is also sometimes called the universal type, or universal supertype, as all other types in the type system of interest are subtypes of it, and in most cases it contains every possible object of the type system. It contrasts with the bottom type, or universal subtype, of which every other type is a supertype and which in most cases contains no members at all.
Support in programming languages
Several typed programming languages provide explicit support for the top type. In statically-typed languages, there are two different, often confused, concepts when discussing the top type.
1. A universal base class or other item at the top of a run time class hierarchy (often relevant in object-oriented programming) or type hierarchy; it is often possible to create objects with this (run time) type, or it could be found when one examines the type hierarchy programmatically, in languages that support it
2. A (compile time) static type in the code whose variables can be assigned any value (or a subset thereof, like any object pointer value), similar to dynamic typing
The first concept often implies the second, i.e., if a universal base class exists, then a variable that can point to an object of this class can also point to an object of any class. However, several languages have types in the second regard above (e.g., void * in C++, id in Objective-C, interface {} in Go), static types which variables can accept any object value, but which do not reflect real run time types that an object can have in the type system, so are not top types in the first regard. In dynamically-typed languages, the second concept does not exist (any value can be assigned to any variable anyway), so only the first (class hierarchy) is discussed. This article tries to stay with the first concept when discussing top types, but also mentions the second concept in languages where it is significant.
Most object-oriented programming languages include a universal base class:
Name | Languages
Object | Smalltalk, JavaScript, Ruby (pre-1.9.2),[1] and some others.
java.lang.Object | Java. Often written without the package prefix, as Object. Also, it is not a supertype of the primitive types; however, since Java 1.5, autoboxing allows implicit or explicit type conversion of a primitive value to Object, e.g., ((Object)42).toString()
System.Object[2] | C#, Visual Basic .NET, and other .NET Framework languages
std::any | C++ since C++17
object | Python since the type/class unification[3] in version 2.2 (new-style objects only; old-style objects in 2.x lack this as a base class). A new typing module introduces the type Any, which is compatible with any type and vice versa
TObject | Object Pascal
t | Lisp, many dialects such as Common Lisp
Any? | Kotlin[4]
Any | Scala,[5] Swift,[6] Julia,[7] Python[8]
ANY | Eiffel[9]
UNIVERSAL | Perl 5
Variant | Visual Basic up to version 6, D[10]
interface{} | Go
BasicObject | Ruby (version 1.9.2 and beyond)
any and unknown[11] | TypeScript (with unknown having been introduced in version 3.0[12])
mixed | PHP (as of version 8.0)
The following object-oriented languages have no universal base class:
• C++. The pointer to void type can accept any non-function pointer, even though the void type itself is not the universal type but the unit type. Since C++17, the standard library provides the top type std::any.
• Objective-C.
It is possible to create a new base class by not specifying a parent class for a class, although this is highly unusual. Object is conventionally used as the base class in the original Objective-C run times. In the OpenStep and Cocoa Objective-C libraries, NSObject is conventionally the universal base class. The top type for pointers to objects is id. • Swift. It is possible to create a new base class by not specifying a parent class for a class. The protocol Any can accept any type. Other languages Languages that are not object-oriented usually have no universal supertype, or subtype polymorphism support. While Haskell purposefully lacks subtyping, it has several other forms of polymorphism including parametric polymorphism. The most generic type class parameter is an unconstrained parameter a (without a type class constraint). In Rust, <T: ?Sized> is the most generic parameter (<T> is not, as it implies the Sized trait by default). The top type is used as a generic type, more so in languages without parametric polymorphism. For example, before introducing generics in Java 5, collection classes in the Java library (excluding Java arrays) held references of type Object. In this way, any non-intrinsic type could be inserted into a collection. The top type is also often used to hold objects of unknown type. The top type may also be seen as the implied type of non-statically typed languages. Languages with run time typing often provide downcasting (or type refinement) to allow discovering a more specific type for an object at run time. In C++, downcasting from void * cannot be done in a safe way, where failed downcasts are detected by the language run time. In languages with a structural type system, the empty structure serves as a top type. For example, objects in OCaml are structurally typed; the empty object type (the type of objects with no methods), < >, is the top type of object types. Any OCaml object can be explicitly upcasted to this type, although the result would be of no use. Go also uses structural typing; and all types implement the empty interface: interface {}, which has no methods, but may still be downcast back to a more specific type. In logic The notion of top is also found in propositional calculus, corresponding to a formula which is true in every possible interpretation. It has a similar meaning in predicate calculus. In description logic, top is used to refer to the set of all concepts. This is intuitively like the use of the top type in programming languages. For example, in the Web Ontology Language (OWL), which supports various description logics, top corresponds to the class owl:Thing, where all classes are subclasses of owl:Thing. (the bottom type or empty set corresponds to owl:Nothing). See also • Singly rooted hierarchy Notes 1. "Class: BasicObject (Ruby 1.9.2)". Retrieved April 7, 2014. 2. System.Object 3. Python type/class unification 4. Matilla, Hugo (2019-02-27). "Kotlin basics: types. Any, Unit and Nothing". Medium. Retrieved September 16, 2019. 5. "An Overview of the Scala Programming Language" (PDF). 2006. Retrieved April 7, 2014. 6. "Types — The Swift Programming Language (Swift 5.3)". docs.swift.org. Retrieved November 2, 2020. 7. "Types · The Julia Language". Retrieved May 15, 2021. 8. "The Any type". 2022. Retrieved October 26, 2022. 9. "Standard ECMA-367. Eiffel: Analysis, Design and Programming Language" (PDF). 2006. Retrieved March 10, 2016. 10. "std.variant - D Programming Language". dlang.org. Retrieved 2022-10-29. 11. 
"The top types 'any' and 'unknown' in TypeScript". 12. "The unknown Type in TypeScript". 15 May 2019. References • Pierce, Benjamin C. (2002). Types and Programming Languages. MIT Press. ISBN 0-262-16209-1. External links • c2.com: Top type Data types Uninterpreted • Bit • Byte • Trit • Tryte • Word • Bit array Numeric • Arbitrary-precision or bignum • Complex • Decimal • Fixed point • Floating point • Reduced precision • Minifloat • Half precision • bfloat16 • Single precision • Double precision • Quadruple precision • Octuple precision • Extended precision • Long double • Integer • signedness • Interval • Rational Pointer • Address • physical • virtual • Reference Text • Character • String • null-terminated Composite • Algebraic data type • generalized • Array • Associative array • Class • Dependent • Equality • Inductive • Intersection • List • Object • metaobject • Option type • Product • Record or Struct • Refinement • Set • Union • tagged Other • Boolean • Bottom type • Collection • Enumerated type • Exception • Function type • Opaque data type • Recursive data type • Semaphore • Stream • Strongly typed identifier • Top type • Type class • Empty type • Unit type • Void Related topics • Abstract data type • Boxing • Data structure • Generic • Kind • metaclass • Parametric polymorphism • Primitive data type • Interface • Subtyping • Type constructor • Type conversion • Type system • Type theory • Variable
Outline of discrete mathematics Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the objects studied in discrete mathematics – such as integers, graphs, and statements in logic[1] – do not vary smoothly in this way, but have distinct, separated values.[2] Discrete mathematics, therefore, excludes topics in "continuous mathematics" such as calculus and analysis. Included below are many of the standard terms used routinely in university-level courses and in research papers. This is not, however, intended as a complete list of mathematical terms; just a selection of typical terms of art that may be encountered. Subjects in discrete mathematics • Logic – a study of reasoning • Modal logic – a type of logic for the study of necessity and possibility • Set theory – a study of collections of elements • Number theory – a study of integers and integer-valued functions • Combinatorics – a study of counting • Finite mathematics – a course title • Graph theory – a study of graphs • Digital geometry and digital topology • Algorithmics – a study of methods of calculation • Information theory – a mathematical representation of the conditions and parameters affecting the transmission and processing of information • Computability and complexity theories – deal with theoretical and practical limitations of algorithms • Elementary probability theory and Markov chains • Linear algebra – a study of systems of linear equations • Functions – an expression, rule, or law that defines a relationship between one variable (the independent variable) and another variable (the dependent variable) • Partially ordered set – • Probability – concerned with numerical descriptions of the chances that an event will occur • Proofs – • Relation – a collection of ordered pairs containing one object from each set Discrete mathematical disciplines For further reading in discrete mathematics, beyond a basic level, see these pages. Many of these disciplines are closely related to computer science.
• Automata theory – • Coding theory – • Combinatorics – • Computational geometry – • Digital geometry – • Discrete geometry – • Graph theory – a study of graphs • Mathematical logic – • Discrete optimization – • Set theory – • Number theory – • Information theory – • Game theory – Concepts in discrete mathematics Sets • Set (mathematics) – • Element (mathematics) – • Venn diagram – • Empty set – • Subset – • Union (set theory) – • Disjoint union – • Intersection (set theory) – • Disjoint sets – • Complement (set theory) – • Symmetric difference – • Ordered pair – • Cartesian product – • Power set – • Simple theorems in the algebra of sets – • Naive set theory – • Multiset – Functions • Function – • Domain of a function – • Codomain – • Range of a function – • Image (mathematics) – • Injective function – • Surjection – • Bijection – • Function composition – • Partial function – • Multivalued function – • Binary function – • Floor function – • Sign function – • Inclusion map – • Pigeonhole principle – • Relation composition – • Permutations – • Symmetry – Arithmetic • Decimal – • Binary numeral system – • Divisor – • Division by zero – • Indeterminate form – • Empty product – • Euclidean algorithm – a short code sketch appears after this outline • Fundamental theorem of arithmetic – • Modular arithmetic – • Successor function Elementary algebra • Left-hand side and right-hand side of an equation – • Linear equation – • Quadratic equation – • Solution point – • Arithmetic progression – • Recurrence relation – • Finite difference – • Difference operator – • Groups – • Group isomorphism – • Subgroups – • Fermat's little theorem – • Cryptography – • Faulhaber's formula – Mathematical relations • Binary relation – • Heterogeneous relation – • Reflexive relation – • Reflexive property of equality – • Symmetric relation – • Symmetric property of equality – • Antisymmetric relation – • Transitivity (mathematics) – • Transitive closure – • Transitive property of equality – • Equivalence and identity • Equivalence relation – • Equivalence class – • Equality (mathematics) – • Inequation – • Inequality (mathematics) – • Similarity (geometry) – • Congruence (geometry) – • Equation – • Identity (mathematics) – • Identity element – • Identity function – • Substitution property of equality – • Graphing equivalence – • Extensionality – • Uniqueness quantification – Mathematical phraseology • If and only if – iff • Necessary and sufficient (Sufficient condition) – $P\Rightarrow Q$ means that if P is true, then Q is also true. • Distinct – • Difference – • Absolute value – $\left\vert A\right\vert $ gives the absolute value of the number A • Up to – • Modular arithmetic – • Characterization (mathematics) – • Normal form – • Canonical form – • Without loss of generality – • Vacuous truth – • Contradiction, Reductio ad absurdum – • Counterexample – • Sufficiently large – • Pons asinorum – • Table of mathematical symbols – • Contrapositive – the contrapositive of $P\Rightarrow Q$ is $\lnot Q\Rightarrow \lnot P$ 
• Mathematical induction – Combinatorics • Permutations and combinations – • Permutation – • Combination – • Factorial – • Empty product – • Pascal's triangle – • Combinatorial proof – • Bijective proof – • Double counting (proof technique) – Probability • Average – • Expected value – • Discrete random variable – • Sample space – • Event – • Conditional probability – • Independence – • Random variables – Propositional logic • Logical operator – • Truth table – • De Morgan's laws – • Open sentence – • List of topics in logic – Mathematicians associated with discrete mathematics • Paul Erdős • Ronald Graham • George Szekeres • Aristotle References 1. Richard Johnsonbaugh, Discrete Mathematics, Prentice Hall, 2008; James Franklin, Discrete and continuous: a fundamental dichotomy in mathematics, Journal of Humanistic Mathematics 7 (2017), 355-378. 2. Weisstein, Eric W. "Discrete mathematics". MathWorld. External links • Archives • Jonathan Arbib & John Dwyer, Discrete Mathematics for Cryptography, 1st Edition ISBN 978-1-907934-01-8. • John Dwyer & Suzy Jagger, Discrete Mathematics for Business & Computing, 1st Edition 2010 ISBN 978-1-907934-00-1.
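Two entries from the outline above, the Euclidean algorithm and Fermat's little theorem, are easy to illustrate in a few lines. This sketch is an illustration appended to the outline, not part of it.

def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))   # 21

# Fermat's little theorem: for a prime p and gcd(a, p) = 1, a**(p-1) is 1 mod p.
for p in (5, 7, 13):
    assert all(pow(a, p - 1, p) == 1 for a in range(1, p))
print("Fermat's little theorem holds for the sampled primes")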
Outline of probability Probability is a measure of the likeliness that an event will occur. Probability is used to quantify an attitude of mind towards some proposition whose truth is not certain. The proposition of interest is usually of the form "A specific event will occur." The attitude of mind is of the form "How certain is it that the event will occur?" The certainty that is adopted can be described in terms of a numerical measure, and this number, between 0 and 1 (where 0 indicates impossibility and 1 indicates certainty) is called the probability. Probability theory is used extensively in statistics, mathematics, science and philosophy to draw conclusions about the likelihood of potential events and the underlying mechanics of complex systems. Probability • Outline • Catalog of articles • Probabilists • Glossary • Notation • Journals • Category •  Mathematics portal Introduction • Probability and randomness. Basic probability (Related topics: set theory, simple theorems in the algebra of sets) Events • Events in probability theory • Elementary events, sample spaces, Venn diagrams • Mutual exclusivity Elementary probability • The axioms of probability • Boole's inequality Meaning of probability • Probability interpretations • Bayesian probability • Frequency probability Calculating with probabilities • Conditional probability • The law of total probability • Bayes' theorem Independence • Independence (probability theory) Probability theory (Related topics: measure theory) Measure-theoretic probability • Sample spaces, σ-algebras and probability measures • Probability space • Sample space • Standard probability space • Random element • Random compact set • Dynkin system • Probability axioms • Event (probability theory) • Complementary event • Elementary event • "Almost surely" Independence • Independence (probability theory) • The Borel–Cantelli lemmas and Kolmogorov's zero–one law Conditional probability • Conditional probability • Conditioning (probability) • Conditional expectation • Conditional probability distribution • Regular conditional probability • Disintegration theorem • Bayes' theorem • Rule of succession • Conditional independence • Conditional event algebra • Goodman–Nguyen–van Fraassen algebra Random variables Discrete and continuous random variables • Discrete random variables: Probability mass functions • Continuous random variables: Probability density functions • Normalizing constants • Cumulative distribution functions • Joint, marginal and conditional distributions Expectation • Expectation (or mean), variance and covariance • Jensen's inequality • General moments about the mean • Correlated and uncorrelated random variables • Conditional expectation: • law of total expectation, law of total variance • Fatou's lemma and the monotone and dominated convergence theorems • Markov's inequality and Chebyshev's inequality Independence • Independent random variables Some common distributions • Discrete: • constant (see also degenerate distribution), • Bernoulli and binomial, • negative binomial, • (discrete) uniform, • geometric, • Poisson, and • hypergeometric. • Continuous: • (continuous) uniform, • exponential, • gamma, • beta, • normal (or Gaussian) and multivariate normal, • χ-squared (or chi-squared), • F-distribution, • Student's t-distribution, and • Cauchy. 
Some other distributions • Cantor • Fisher–Tippett (or Gumbel) • Pareto • Benford's law Functions of random variables • Sum of normally distributed random variables • Borel's paradox Generating functions (Related topics: integral transforms) Common generating functions • Probability-generating functions • Moment-generating functions • Laplace transforms and Laplace–Stieltjes transforms • Characteristic functions Applications • A proof of the central limit theorem Convergence of random variables (Related topics: convergence) Modes of convergence • Convergence in distribution and convergence in probability, • Convergence in mean, mean square and rth mean • Almost sure convergence • Skorokhod's representation theorem Applications • Central limit theorem and Laws of large numbers • Illustration of the central limit theorem and a 'concrete' illustration • Berry–Esséen theorem • Law of the iterated logarithm Stochastic processes Some common stochastic processes • Random walk • Poisson process • Compound Poisson process • Wiener process • Geometric Brownian motion • Fractional Brownian motion • Brownian bridge • Ornstein–Uhlenbeck process • Gamma process Markov processes • Markov property • Branching process • Galton–Watson process • Markov chain • Examples of Markov chains • Population processes • Applications to queueing theory • Erlang distribution Stochastic differential equations • Stochastic calculus • Diffusions • Brownian motion • Wiener equation • Wiener process Time series • Moving-average and autoregressive processes • Correlation function and autocorrelation Martingales • Martingale central limit theorem • Azuma's inequality See also • Catalog of articles in probability theory • Glossary of probability and statistics • Notation in probability and statistics • List of mathematical probabilists • List of probability distributions • List of probability topics • List of scientific journals in probability • Timeline of probability and statistics • Topic outline of statistics
Outline of linear algebra This is an outline of topics related to linear algebra, the branch of mathematics concerning linear equations and linear maps and their representations in vector spaces and through matrices. Linear equations Linear equation • System of linear equations • Determinant • Minor • Cauchy–Binet formula • Cramer's rule • Gaussian elimination • Gauss–Jordan elimination • Overcompleteness • Strassen algorithm Matrices Matrix • Matrix addition • Matrix multiplication • Basis transformation matrix • Characteristic polynomial • Trace • Eigenvalue, eigenvector and eigenspace • Cayley–Hamilton theorem • Spread of a matrix • Jordan normal form • Weyr canonical form • Rank • Matrix inversion, invertible matrix • Pseudoinverse • Adjugate • Transpose • Dot product • Symmetric matrix • Orthogonal matrix • Skew-symmetric matrix • Conjugate transpose • Unitary matrix • Hermitian matrix, Antihermitian matrix • Positive-definite, positive-semidefinite matrix • Pfaffian • Projection • Spectral theorem • Perron–Frobenius theorem • List of matrices • Diagonal matrix, main diagonal • Diagonalizable matrix • Triangular matrix • Tridiagonal matrix • Block matrix • Sparse matrix • Hessenberg matrix • Hessian matrix • Vandermonde matrix • Stochastic matrix • Toeplitz matrix • Circulant matrix • Hankel matrix • (0,1)-matrix Matrix decompositions Matrix decomposition • Cholesky decomposition • LU decomposition • QR decomposition • Polar decomposition • Reducing subspace • Spectral theorem • Singular value decomposition • Higher-order singular value decomposition • Schur decomposition • Schur complement • Haynsworth inertia additivity formula Relations • Matrix equivalence • Matrix congruence • Matrix similarity • Matrix consimilarity • Row equivalence Computations • Elementary row operations • Householder transformation • Least squares, linear least squares • Gram–Schmidt process • Woodbury matrix identity Vector spaces Vector space • Linear combination • Linear span • Linear independence • Scalar multiplication • Basis • Change of basis • Hamel basis • Cyclic decomposition theorem • Dimension theorem for vector spaces • Hamel dimension • Examples of vector spaces • Linear map • Shear mapping or Galilean transformation • Squeeze mapping or Lorentz transformation • Linear subspace • Row and column spaces • Column space • Row space • Cyclic subspace • Null space, nullity • Rank–nullity theorem • Nullity theorem • Dual space • Linear function • Linear functional • Category of vector spaces Structures • Topological vector space • Normed vector space • Inner product space • Euclidean space • Orthogonality • Orthogonal complement • Orthogonal projection • Orthogonal group • Pseudo-Euclidean space • Null vector • Indefinite orthogonal group • Orientation (geometry) • Improper rotation • Symplectic structure Multilinear algebra Multilinear algebra • Tensor • Classical treatment of tensors • Component-free treatment of tensors • Gamas's Theorem • Outer product • Tensor algebra • Exterior algebra • Symmetric algebra • Clifford algebra • Geometric algebra Topics related to affine spaces Affine space • Affine transformation • Affine group • Affine geometry • Affine coordinate system • Flat (geometry) • Cartesian coordinate system • Euclidean group • Poincaré group • Galilean group Projective space Projective space • Projective transformation • Projective geometry • Projective linear group • Quadric and conic section See also • Glossary of linear algebra • Glossary of tensor theory Linear algebra • Outline • 
Glossary Basic concepts • Scalar • Vector • Vector space • Scalar multiplication • Vector projection • Linear span • Linear map • Linear projection • Linear independence • Linear combination • Basis • Change of basis • Row and column vectors • Row and column spaces • Kernel • Eigenvalues and eigenvectors • Transpose • Linear equations Matrices • Block • Decomposition • Invertible • Minor • Multiplication • Rank • Transformation • Cramer's rule • Gaussian elimination Bilinear • Orthogonality • Dot product • Hadamard product • Inner product space • Outer product • Kronecker product • Gram–Schmidt process Multilinear algebra • Determinant • Cross product • Triple product • Seven-dimensional cross product • Geometric algebra • Exterior algebra • Bivector • Multivector • Tensor • Outermorphism Vector space constructions • Dual • Direct sum • Function space • Quotient • Subspace • Tensor product Numerical • Floating-point • Numerical stability • Basic Linear Algebra Subprograms • Sparse matrix • Comparison of linear algebra libraries • Category • Mathematics portal • Commons • Wikibooks • Wikiversity
Topkis's theorem In mathematical economics, Topkis's theorem is a result that is useful for establishing comparative statics. The theorem allows researchers to understand how the optimal value for a choice variable changes when a feature of the environment changes. The result states that if f is supermodular in (x,θ), and D is a lattice, then $x^{*}(\theta )=\arg \max _{x\in D}f(x,\theta )$ is nondecreasing in θ. The result is especially helpful for establishing comparative static results when the objective function is not differentiable. The result is named after Donald M. Topkis. An example This example will show how using Topkis's theorem gives the same result as using more standard tools. The advantage of using Topkis's theorem is that it can be applied to a wider class of problems than can be studied with standard economics tools. A driver is driving down a highway and must choose a speed, s. Going faster is desirable, but is more likely to result in a crash. There is some prevalence of potholes, p. The presence of potholes increases the probability of crashing. Note that s is a choice variable and p is a parameter of the environment that is fixed from the perspective of the driver. The driver seeks to $\max _{s}U(s,p)$. We would like to understand how the driver's speed (a choice variable) changes with the amount of potholes: ${\frac {\partial s^{\ast }(p)}{\partial p}}.$ If one wanted to solve the problem with standard tools such as the implicit function theorem, one would have to assume that the problem is well behaved: that U(.) is twice continuously differentiable and concave in s, that the domain over which s is defined is convex, that there is a unique maximizer $s^{\ast }(p)$ for every value of p, and that $s^{\ast }(p)$ is in the interior of the set over which s is defined. Note that the optimal speed is a function of the amount of potholes. Taking the first order condition, we know that at the optimum, $U_{s}(s^{\ast }(p),p)=0$. Differentiating the first order condition with respect to p and using the implicit function theorem, we find that $U_{ss}(s^{\ast }(p),p)(\partial s^{\ast }(p)/(\partial p))+U_{sp}(s^{\ast }(p),p)=0$ or that ${\frac {\partial s^{\ast }(p)}{\partial p}}={\underset {{\text{negative since we assumed }}U(.){\text{ was concave in }}s}{\underbrace {\frac {-U_{sp}(s^{\ast }(p),p)}{U_{ss}(s^{\ast }(p),p)}} }}.$ So, ${\frac {\partial s^{\ast }(p)}{\partial p}}{\overset {\text{sign}}{=}}U_{sp}(s^{\ast }(p),p).$ If s and p are substitutes, $U_{sp}(s^{\ast }(p),p)<0$ and hence ${\frac {\partial s^{\ast }(p)}{\partial p}}<0$: more potholes cause less speeding. Clearly it is more reasonable to assume that they are substitutes. The problem with the above approach is that it relies on the differentiability of the objective function and on concavity. We could get at the same answer using Topkis's theorem in the following way. We want to show that $U(s,p)$ is submodular (the opposite of supermodular) in $\left(s,p\right)$. Note that the choice set is clearly a lattice. The cross partial of U being negative, ${\frac {\partial ^{2}U}{\partial s\,\partial p}}<0$, is a sufficient condition. Hence if ${\frac {\partial ^{2}U}{\partial s\,\partial p}}<0,$ we know that ${\frac {\partial s^{\ast }(p)}{\partial p}}<0$. Hence using the implicit function theorem and Topkis's theorem gives the same result, but the latter does so with fewer assumptions. (A short numerical sketch of this example follows the references below.) Notes and references • Amir, Rabah (2005). "Supermodularity and Complementarity in Economics: An Elementary Survey". 
Southern Economic Journal. 71 (3): 636–660. doi:10.2307/20062066. JSTOR 20062066. • Topkis, Donald M. (1978). "Minimizing a Submodular Function on a Lattice". Operations Research. 26 (2): 305–321. CiteSeerX 10.1.1.557.5908. doi:10.1287/opre.26.2.305. • Topkis, Donald M. (1998). Supermodularity and Complementarity. Princeton University Press. ISBN 978-0-691-03244-3.
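A brief numerical sketch of the pothole example above may help; the payoff function is a made-up illustration (not from the article). It is submodular in (s, p), and the optimizer is taken over a finite grid of speeds, a chain and hence a lattice, with no differentiability or concavity used in the argmax step, which is exactly the setting Topkis's theorem is designed for. For this particular smooth payoff the first-order condition gives s*(p) = 10/p, so the grid optimum should track 1/p.

    import numpy as np

    # Hypothetical payoff U(s, p) = s - 0.05 * p * s**2; its cross partial in (s, p) is
    # negative, so U is submodular in (s, p) and Topkis's theorem predicts that the
    # optimal speed is nonincreasing in the prevalence of potholes p.
    speeds = np.linspace(10, 130, 121)      # feasible speeds: a chain, hence a lattice

    def U(s, p):
        return s - 0.05 * p * s**2          # more potholes make speed more costly

    for p in (0.1, 0.2, 0.4, 0.8):
        s_star = speeds[np.argmax(U(speeds, p))]
        print(f"p = {p:.1f}  ->  optimal speed = {s_star:.0f}")
    # The printed optimal speed is nonincreasing in p, as the theorem predicts; the same
    # monotone comparative statics would survive an irregular grid or a non-smooth payoff.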
Toponogov's theorem In the mathematical field of Riemannian geometry, Toponogov's theorem (named after Victor Andreevich Toponogov) is a triangle comparison theorem. It is one of a family of comparison theorems that quantify the assertion that a pair of geodesics emanating from a point p spread apart more slowly in a region of high curvature than they would in a region of low curvature. Let M be an m-dimensional Riemannian manifold with sectional curvature K satisfying $K\geq \delta \,.$ Let pqr be a geodesic triangle, i.e. a triangle whose sides are geodesics, in M, such that the geodesic pq is minimal and if δ > 0, the length of the side pr is less than $\pi /{\sqrt {\delta }}$. Let p′q′r′ be a geodesic triangle in the model space Mδ, i.e. the simply connected space of constant curvature δ, such that the lengths of sides p′q′ and p′r′ are equal to that of pq and pr respectively and the angle at p′ is equal to that at p. Then $d(q,r)\leq d(q',r').\,$ When the sectional curvature is bounded from above, a corollary to the Rauch comparison theorem yields an analogous statement, but with the reverse inequality . References • Chavel, Isaac (2006), Riemannian Geometry; A Modern Introduction (second ed.), Cambridge University Press • Berger, Marcel (2004), A Panoramic View of Riemannian Geometry, Springer-Verlag, ISBN 3-540-65317-1 • Cheeger, Jeff; Ebin, David G. (2008), Comparison theorems in Riemannian geometry, AMS Chelsea Publishing, Providence, RI, ISBN 978-0-8218-4417-5, MR 2394158 External links • Pambuccian V., Zamfirescu T. "Paolo Pizzetti: The forgotten originator of triangle comparison geometry". Hist Math 38:8 (2011)
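As a quick, concrete check of the statement above (an illustration added here, not part of the article), one can compare a geodesic hinge on the unit sphere, where K = 1 ≥ δ = 0, with the corresponding hinge in the flat model space M0 = R²: the side opposite the given angle is shorter on the sphere, as the theorem asserts.

    import math

    # Geodesic hinge on the unit sphere (K = 1 >= delta = 0): sides pq, pr and the angle at p.
    pq, pr, angle_p = 1.0, 1.2, 1.1          # side lengths and angle in radians

    # d(q, r) on the sphere via the spherical law of cosines.
    qr_sphere = math.acos(math.cos(pq) * math.cos(pr)
                          + math.sin(pq) * math.sin(pr) * math.cos(angle_p))

    # Comparison distance d(q', r') in the flat model space via the planar law of cosines.
    qr_plane = math.sqrt(pq**2 + pr**2 - 2 * pq * pr * math.cos(angle_p))

    print(qr_sphere, qr_plane)   # roughly 0.99 <= 1.16, i.e. d(q, r) <= d(q', r')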
Topological abelian group In mathematics, a topological abelian group, or TAG, is a topological group that is also an abelian group. That is, a TAG is both a group and a topological space, the group operations are continuous, and the group's binary operation is commutative. The theory of topological groups applies also to TAGs, but more can be done with TAGs. Locally compact TAGs, in particular, are used heavily in harmonic analysis. See also • Compact group – Topological group with compact topology • Complete field – Algebraic structure that is complete relative to a metric • Fourier transform – Mathematical transform that expresses a function of time as a function of frequency • Haar measure – Left-invariant (or right-invariant) measure on locally compact topological group • Locally compact field • Locally compact quantum group – relatively new C*-algebraic approach toward quantum groups • Locally compact group – topological group G for which the underlying topology is locally compact and Hausdorff, so that the Haar measure can be defined • Pontryagin duality – Duality for locally compact abelian groups • Protorus – Mathematical object, a topological abelian group that is compact and connected • Ordered topological vector space • Topological field – Algebraic structure with addition, multiplication, and division • Topological group – Group that is a topological space with continuous group action • Topological module • Topological ring – ring where ring operations are continuous • Topological semigroup – semigroup with continuous operation • Topological vector space – Vector space with a notion of nearness References • Banaszczyk, Wojciech (1991). Additive subgroups of topological vector spaces. Lecture Notes in Mathematics. Vol. 1466. Berlin: Springer-Verlag. pp. viii+178. ISBN 3-540-53917-4. MR 1119302. • Fourier analysis on Groups, by Walter Rudin.
Topological algebra In mathematics, a topological algebra $A$ is an algebra and at the same time a topological space, where the algebraic and the topological structures are coherent in a specified sense. Definition A topological algebra $A$ over a topological field $K$ is a topological vector space together with a bilinear multiplication $\cdot :A\times A\to A$, $(a,b)\mapsto a\cdot b$ that turns $A$ into an algebra over $K$ and is continuous in some definite sense. Usually the continuity of the multiplication is expressed by one of the following (non-equivalent) requirements: • joint continuity:[1] for each neighbourhood of zero $U\subseteq A$ there are neighbourhoods of zero $V\subseteq A$ and $W\subseteq A$ such that $V\cdot W\subseteq U$ (in other words, this condition means that the multiplication is continuous as a map between topological spaces $A\times A\to A$), or • stereotype continuity:[2] for each totally bounded set $S\subseteq A$ and for each neighbourhood of zero $U\subseteq A$ there is a neighbourhood of zero $V\subseteq A$ such that $S\cdot V\subseteq U$ and $V\cdot S\subseteq U$, or • separate continuity:[3] for each element $a\in A$ and for each neighbourhood of zero $U\subseteq A$ there is a neighbourhood of zero $V\subseteq A$ such that $a\cdot V\subseteq U$ and $V\cdot a\subseteq U$. (Certainly, joint continuity implies stereotype continuity, and stereotype continuity implies separate continuity.) In the first case $A$ is called a "topological algebra with jointly continuous multiplication", and in the last, "with separately continuous multiplication". A unital associative topological algebra is (sometimes) called a topological ring. History The term was coined by David van Dantzig; it appears in the title of his doctoral dissertation (1931). Examples 1. Fréchet algebras are examples of associative topological algebras with jointly continuous multiplication. 2. Banach algebras are special cases of Fréchet algebras. 3. Stereotype algebras are examples of associative topological algebras with stereotype continuous multiplication. Notes 1. Beckenstein, Narici & Suffel 1977. 2. Akbarov 2003. 3. Mallios 1986. External links • Topological algebra at the nLab References • Beckenstein, E.; Narici, L.; Suffel, C. (1977). Topological Algebras. Amsterdam: North Holland. ISBN 9780080871356. • Akbarov, S.S. (2003). "Pontryagin duality in the theory of topological vector spaces and in topological algebra". Journal of Mathematical Sciences. 113 (2): 179–349. doi:10.1023/A:1020929201133. S2CID 115297067. • Mallios, A. (1986). Topological Algebras. Amsterdam: North Holland. ISBN 9780080872353. • Balachandran, V.K. (2000). Topological Algebras. Amsterdam: North Holland. ISBN 9780080543086. • Fragoulopoulou, M. (2005). Topological Algebras with Involution. Amsterdam: North Holland. ISBN 9780444520258.
Supersymmetric theory of stochastic dynamics Supersymmetric theory of stochastic dynamics or stochastics (STS) is an exact theory of stochastic (partial) differential equations (SDEs), the class of mathematical models with the widest applicability covering, in particular, all continuous time dynamical systems, with and without noise. The main utility of the theory from the physical point of view is a rigorous theoretical explanation of the ubiquitous spontaneous long-range dynamical behavior that manifests itself across disciplines via such phenomena as 1/f, flicker, and crackling noises and the power-law statistics, or Zipf's law, of instantonic processes like earthquakes and neuroavalanches. From the mathematical point of view, STS is interesting because it bridges the two major parts of mathematical physics – the dynamical systems theory and topological field theories. Besides these and related disciplines such as algebraic topology and supersymmetric field theories, STS is also connected with the traditional theory of stochastic differential equations and the theory of pseudo-Hermitian operators. The theory began with the application of the BRST gauge fixing procedure to Langevin SDEs,[1][2] which was later adapted to classical mechanics[3][4][5][6] and its stochastic generalization,[7] higher-order Langevin SDEs,[8] and, more recently, to SDEs of arbitrary form,[9] which made it possible to link the BRST formalism to the concept of transfer operators and to recognize the spontaneous breakdown of BRST supersymmetry as a stochastic generalization of dynamical chaos. The main idea of the theory is to study, instead of trajectories, the SDE-defined temporal evolution of differential forms. This evolution has an intrinsic BRST or topological supersymmetry representing the preservation of topology and/or the concept of proximity in the phase space by continuous time dynamics. The theory identifies a model as chaotic, in the generalized, stochastic sense, if its ground state is not supersymmetric, i.e., if the supersymmetry is broken spontaneously. Accordingly, the emergent long-range behavior that always accompanies dynamical chaos and its derivatives such as turbulence and self-organized criticality can be understood as a consequence of the Goldstone theorem. History and relation to other theories The first relation between supersymmetry and stochastic dynamics was established in two papers in 1979 and 1982 by Giorgio Parisi and Nicolas Sourlas,[1][2] who demonstrated that the application of the BRST gauge fixing procedure to Langevin SDEs, i.e., to SDEs with linear phase spaces, gradient flow vector fields, and additive noises, results in N=2 supersymmetric models. The original goal of their work was dimensional reduction, i.e., a specific cancellation of divergences in Feynman diagrams proposed a few years earlier by Amnon Aharony, Yoseph Imry, and Shang-keng Ma.[10] Since then, relations between the so-emerged supersymmetry of Langevin SDEs and a few physical concepts[11][12][13][14][8] have been established, including the fluctuation dissipation theorems,[14] the Jarzynski equality,[15] the Onsager principle of microscopic reversibility,[16] solutions of Fokker–Planck equations,[17] self-organization,[18] etc. A similar approach was used to establish that classical mechanics,[3][4] its stochastic generalization,[7] and higher-order Langevin SDEs[8] also have supersymmetric representations. Real dynamical systems, however, are never purely Langevin or classical mechanical. 
In addition, physically meaningful Langevin SDEs never break supersymmetry spontaneously. Therefore, for the purpose of the identification of the spontaneous supersymmetry breaking as dynamical chaos, the generalization of the Parisi–Sourlas approach to SDEs of general form is needed. This generalization could come only after a rigorous formulation of the theory of pseudo-Hermitian operators[19] because the stochastic evolution operator is pseudo-Hermitian in the general case. Such generalization[9] showed that all SDEs possess N=1 BRST or topological supersymmetry (TS) and this finding completes the story of relation between supersymmetry and SDEs. In parallel to the BRST procedure approach to SDEs, mathematicians working in the dynamical systems theory introduced and studied the concept of generalized transfer operator defined for random dynamical systems.[20][21] This concept underlies the most important object of the STS, the stochastic evolution operator, and provides it with a solid mathematical meaning. STS has a close relation with algebraic topology and its topological sector belongs to the class of models known as Witten-type topological or cohomological field theory.[22][23] [24][25][26][27] As a supersymmetric theory, BRST procedure approach to SDEs can be viewed as one of the realizations of the concept of Nicolai map.[28][29] Parisi–Sourlas approach to Langevin SDEs In the context of supersymmetric approach to stochastic dynamics, the term Langevin SDEs denotes SDEs with Euclidean phase space, $X=\mathbb {R} ^{n}$, gradient flow vector field, and additive Gaussian white noise, ${\dot {x}}(t)=-\partial U(x(t))+(2\Theta )^{1/2}\xi (t),$ where $x\in X$ , $\xi \in \mathbb {R} ^{n}$is the noise variable, $\Theta $ is the noise intensity, and $\partial U(x)$, which in coordinates $(\partial U(x))^{i}\equiv \delta ^{ij}\partial _{j}U(x)$ and $\partial _{i}U(x)\equiv \partial U(x)/\partial x^{i}$, is the gradient flow vector field with $U(x)$ being the Langevin function often interpreted as the energy of the purely dissipative stochastic dynamical system. The Parisi–Sourlas method is a way of construction of the path integral representation of the Langevin SDE. It can be thought of as a BRST gauge fixing procedure that uses the Langevin SDE as a gauge condition. Namely, one considers the following functional integral, ${\mathcal {W}}=\left\langle \int \dots \int J\left(\prod \nolimits _{\tau }\delta ({\dot {x}}(\tau )-{\mathcal {F}}(x(\tau )))\right)Dx\right\rangle _{\text{noise}},$ where ${\mathcal {F}}$ denotes the r.h.s. of the Langevin SDE, $\textstyle \langle \cdot \rangle _{\text{noise}}\equiv \int \dots \int \cdot P(\xi )D\xi $ is the operation of stochastic averaging with $P(\xi )\propto e^{-\int _{t'}^{t}d\tau \xi ^{2}(\tau )/2}$ being the normalized distribution of noise configurations, $\textstyle J=\operatorname {Det} {\frac {\delta ({\dot {x}}(\tau )-{\mathcal {F}}(x(\tau )))}{\delta x(\tau ')}}$ is the Jacobian of the corresponding functional derivative, and the path integration is over all closed paths, $x(t)=x(t')$, where $t'$ and $t>t'$are the initial and final moments of temporal evolution. 
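For orientation, the class of Langevin SDEs discussed in this subsection can also be illustrated numerically. The sketch below is added here and is not part of the original text; the double-well Langevin function $U(x)=x^{4}/4-x^{2}/2$ is a made-up example. It integrates ${\dot {x}}=-\partial U(x)+(2\Theta )^{1/2}\xi $ with the Euler–Maruyama scheme.

    import numpy as np

    # Euler-Maruyama sketch of the Langevin SDE dx = -U'(x) dt + sqrt(2*Theta) dW
    # for a hypothetical double-well Langevin function U(x) = x**4/4 - x**2/2.
    rng = np.random.default_rng(0)
    Theta, dt, n_steps = 0.25, 1e-3, 200_000

    def dU(x):                              # gradient of the Langevin function
        return x**3 - x

    x = np.empty(n_steps)
    x[0] = 1.0
    for t in range(n_steps - 1):
        x[t + 1] = x[t] - dU(x[t]) * dt + np.sqrt(2 * Theta * dt) * rng.normal()

    # Over long times the histogram of x approaches the Boltzmann-like steady state
    # proportional to exp(-U(x)/Theta), the top-form ground state of the stochastic evolution.
    hist, edges = np.histogram(x, bins=60, density=True)

The occasional noise-induced hops between the two wells in such a simulation are elementary examples of the instantonic processes discussed later in the article.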
Dimensional reduction The Parisi–Sourlas construction originally aimed at "dimensional reduction" proposed in 1976 by Amnon Aharony, Yoseph Imry, and Shang-keng Ma[10] who proved that to all orders in perturbation expansion, the critical exponents in a d-dimensional (4 < d < 6) system with short-range exchange and a random quenched field are the same as those of a (d–2)-dimensional pure system.[30] Their arguments indicated that the "Feynman diagrams which give the leading singular behavior for the random case are identically equal, apart from combinatorial factors, to the corresponding Feynman diagrams for the pure case in two fewer dimensions."[30] ... Parisi and Sourlas ... observed that the most infrared divergent diagrams are those with the maximum number of random source insertions, and, if the other diagrams are neglected, one is left with a diagrammatic expansion for a classical field theory in the presence of random sources. Parisi and Sourlas then pointed out that the underlying phenomenon for the connection between random systems and pure systems in two fewer dimensions is that a classical field theory in the presence of random sources is perturbatively equivalent to the corresponding quantum field theory in two fewer dimensions. Parisi and Sourlas explained this dimensional reduction by a hidden supersymmetry.[30] Topological interpretation Topological aspects of the Parisi–Sourlas construction can be briefly outlined in the following manner.[22][31] The delta-functional, i.e., the collection of the infinite number of delta-functions, ensures that only solutions of the Langevin SDE contribute to ${\mathcal {W}}$. In the context of the BRST procedure, these solutions can be viewed as Gribov copies. Each solution contributes either positive or negative unity: $ {\mathcal {W}}=\textstyle \left\langle I_{N}(\xi )\right\rangle _{\text{noise}}$ with $ I_{N}(\xi )=\sum _{\text{solutions}}\operatorname {sign} J$ being the index of the so-called Nicolai map, $ \xi =({\dot {x}}+\partial U)/(2\Theta )^{1/2}$, which in this case is the map from the space of closed paths in $ X$ to the space of noise configurations, a map that provides a noise configuration at which a given closed path is a solution of the Langevin SDE. $ I_{N}(\xi )$ can be viewed as a realization of the Poincaré–Hopf theorem on the infinite-dimensional space of closed paths with the Langevin SDE playing the role of the vector field and with the solutions of the Langevin SDE playing the role of the critical points with index $\operatorname {sign} J$. $ I_{N}(\xi )$ is independent of the noise configuration because it is of topological character. The same is true for its stochastic average, ${\mathcal {W}}$, which is not the partition function of the model but, instead, its Witten index. 
stands for periodic boundary conditions, the so-called gauge fermion, $\textstyle \Psi =\int _{t'}^{t}d\tau (\imath _{\dot {x}}-{\bar {d}})(\tau )$, with $\textstyle \imath _{\dot {x}}=i{\bar {\chi }}_{j}{\dot {x}}^{j}$ and $ \textstyle {\bar {d}}=-i{\bar {\chi }}_{j}\delta ^{jk}(\partial _{k}U+\Theta iB_{k})$, and the BRST symmetry defined via its action on an arbitrary functional $A(\Phi )$ as $(Q,A(\Phi ))=\textstyle \int _{t'}^{t}d\tau (\chi ^{i}(\tau )\delta /\delta x^{i}(\tau )+B_{i}(\tau )\delta /\delta {\bar {\chi }}_{i}(\tau ))A(\Phi )$. In the BRST formalism, the Q-exact pieces like $(Q,\Psi )$ serve as gauge fixing tools. Therefore, the path integral expression for ${\mathcal {W}}$ can be interpreted as a model whose action contains nothing else but the gauge fixing term. This is a definitive feature of Witten-type topological field theories, and in this particular case of the BRST procedure approach to SDEs, the BRST symmetry can also be recognized as the topological supersymmetry.[22] A common way to explain the BRST procedure is to say that the BRST symmetry generates the fermionic version of the gauge transformations, whereas its overall effect on the path integral is to limit the integration only to configurations that satisfy a specified gauge condition. This interpretation also applies to the Parisi–Sourlas approach, with the deformations of the trajectory and the Langevin SDE playing the roles of the gauge transformations and the gauge condition respectively. Operator representation Physical fermions in high-energy physics and condensed matter models have antiperiodic boundary conditions in time. The unconventional periodic boundary conditions for fermions in the path integral expression for the Witten index are the origin of the topological character of this object. These boundary conditions reveal themselves in the operator representation of the Witten index as the alternating sign operator, ${\mathcal {W}}=\operatorname {Tr} (-1)^{\hat {n}}{\hat {\mathcal {M}}}_{tt'},$ where $ {\hat {n}}$ is the operator of the number of ghosts/fermions and the finite-time stochastic evolution operator (SEO) is $ {\hat {\mathcal {M}}}_{tt'}=e^{-(t-t'){\hat {H}}}$, where ${\hat {H}}={\hat {L}}_{-\partial U}-\Theta {\hat {\triangle }}=[{\hat {d}},{\hat {\bar {d}}}]$ is the infinitesimal SEO with $\textstyle {\hat {L}}$ being the Lie derivative along the subscript vector field, $ {\hat {\triangle }}$ being the Laplacian, $\textstyle {\hat {d}}=\chi ^{i}\partial /\partial x^{i}$ being the exterior derivative, which is the operator representative of the topological supersymmetry (TS), and $\textstyle {\hat {\bar {d}}}=-(i{\hat {\bar {\chi }}}_{i})\delta ^{ij}(\partial _{j}U+\Theta (i{\hat {B}}_{j}))$, where $i{\hat {B}}_{j}=\partial /\partial x^{j}$ and $i{\hat {\bar {\chi }}}_{j}=\partial /\partial \chi ^{j}$ are bosonic and fermionic momenta, and with square brackets denoting the bi-graded commutator, i.e., it is an anticommutator if both operators are fermionic (contain an odd total number of $\chi $'s and ${\hat {\bar {\chi }}}$'s) and a commutator otherwise. The exterior derivative and $\textstyle {\hat {\bar {d}}}$ are supercharges. They are nilpotent, e.g., ${\hat {d}}^{2}=0$, and commute with the SEO. In other words, Langevin SDEs possess N=2 supersymmetry. The fact that $\textstyle {\hat {\bar {d}}}$ is a supercharge is accidental. For SDEs of arbitrary form, this is not true. 
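As a consistency check not spelled out above, one can act with this ${\hat {H}}$ on a top differential form, $\psi =P(x)\,dx^{1}\wedge \dots \wedge dx^{n}$, for which the Lie-derivative part reduces to a divergence and the fermionic terms drop out. The evolution equation $\partial _{t}\psi =-{\hat {H}}\psi $ then becomes $\partial _{t}P=\partial _{i}\left(\delta ^{ij}\partial _{j}U\,P+\Theta \delta ^{ij}\partial _{j}P\right)$, which is the Fokker–Planck equation of the Langevin SDE, with the stationary distribution $P\propto e^{-U/\Theta }$ as a zero-eigenvalue, supersymmetric top-form ground state. This is the sense in which the SEO generalizes the Fokker–Planck operator, as discussed in the next subsection.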
Hilbert space The wavefunctions are functions not only of the bosonic variables, $x\in X$, but also of the Grassmann numbers or fermions, $\chi \in TX$, from the tangent space of $X$. The wavefunctions can be viewed as differential forms on $X$ with the fermions playing the role of the differentials $\chi \equiv dx\wedge $.[26] The concept of infinitesimal SEO generalizes the Fokker–Planck operator, which is essentially the SEO acting on top differential forms that have the meaning of the total probability distributions. Differential forms of lesser degree can be interpreted, at least locally on $X$, as conditional probability distributions.[32] Viewing the spaces of differential forms of all degrees as wavefunctions of the model is a mathematical necessity. Without it, the Witten index representing the most fundamental object of the model—the partition function of the noise—would not exist and the dynamical partition function would not represent the number of fixed points of the SDE (see below). The most general understanding of the wavefunctions is the coordinate-free objects that contain information not only on trajectories but also on the evolution of the differentials and/or Lyapunov exponents.[33] Relation to nonlinear sigma model and algebraic topology In Ref.,[26] a model has been introduced that can be viewed as a 1D prototype of the topological nonlinear sigma models (TNSM),[23] a subclass of the Witten-type topological field theories. The 1D TNSM is defined for Riemannian phase spaces while for Euclidean phase spaces it reduces to the Parisi–Sourlas model. Its key difference from STS is the diffusion operator which is the Hodge Laplacian for 1D TNSM and $\textstyle {\hat {L}}_{e_{a}}{\hat {L}}_{e_{a}}$for STS . This difference is unimportant in the context of relation between STS and algebraic topology, the relation established by the theory of 1D TNSM (see, e.g., Refs.[26][22]). The model is defined by the following evolution operator $ {\hat {H}}={\hat {L}}_{-\partial U}+\Theta [{\hat {d}},{\hat {d}}^{\dagger }]$, where $ (\partial U)^{i}=g^{ij}\partial _{j}U$ with $ g$ being the metric, $ [{\hat {d}},{\hat {d}}^{\dagger }]$ is the Hodge Laplacian, and the differential forms from the exterior algebra of the phase space, $ \Omega (X)$, are viewed as wavefunctions. There exists a similarity transformation, ${\hat {H}}\to {\hat {H}}_{U}=e^{U/2\Theta }{\hat {H}}e^{-U/2\Theta }$, that brings the evolution operator to the explicitly Hermitian form ${\hat {H}}_{U}=\Theta [{\hat {d}}_{U},{\hat {d}}_{U}^{\dagger }]$ with ${\hat {d}}_{U}=e^{U/2\Theta }{\hat {d}}e^{-U/2\Theta }=\chi ^{i}(\partial /\partial x^{i}-\partial _{i}U/2\Theta )$. In the Euclidean case, ${\hat {H}}_{U}$ is the Hamiltonian of a N=2 supersymmetric quantum mechanics. One can introduce two Hermitian operators, ${\hat {q}}_{1}=({\hat {d}}_{U}+{\hat {d}}_{U}^{\dagger })/2^{1/2}$ and ${\hat {q}}_{2}=i({\hat {d}}_{U}-{\hat {d}}_{U}^{\dagger })/2^{1/2}$, such that ${\hat {H}}_{U}=\Theta {\hat {q}}_{1}^{2}=\Theta {\hat {q}}_{2}^{2}$ . This demonstrates that the spectrum of ${\hat {H}}_{U}$ and/or ${\hat {H}}$ is real and nonnegative. This is also true for SEOs of Langevin SDEs. For the SDEs of arbitrary form, however, this is no longer true as the eigenvalues of the SEO can be negative and even complex, which actually allows for the TS to be broken spontaneously. The following properties of the evolution operator of 1D TNSM hold even for the SEO of the SDEs of arbitrary form. 
The evolution operator commutes with the operator of the degree of differential forms. As a result, $\textstyle {\hat {H}}={\hat {H}}^{(\dim X)}\oplus \dots \oplus {\hat {H}}^{(0)}$, where $\textstyle {\hat {H}}^{(k)}={\hat {H}}|_{\Omega ^{(k)}}$ and $\Omega ^{(n)}(X)$ is the space of differential forms of degree $n$. Furthermore, due to the presence of TS, $ \Omega (X)={\mathcal {H}}\oplus {\mathcal {N}}\oplus ({\hat {d}}{\mathcal {N}})$, where $ {\mathcal {H}}$ are the supersymmetric eigenstates, $ \theta 's$, non-trivial in de Rham cohomology, whereas the rest are the pairs of non-supersymmetric eigenstates of the form $ |\vartheta \rangle $ and $ {\hat {d}}|\vartheta \rangle $. All supersymmetric eigenstates have exactly zero eigenvalue and, barring accidental situations, all non-supersymmetric states have non-zero eigenvalues. Non-supersymmetric pairs of eigenstates do not contribute to the Witten index, which equals the difference in the numbers of the supersymmetric states of even and odd degrees, $ {\mathcal {W}}=\#\{{\text{even }}\theta 's\}-\#\{{\text{odd }}\theta 's\}.$ For compact $X$, each de Rham cohomology class provides one supersymmetric eigenstate and the Witten index equals the Euler characteristic of the phase space. BRST procedure for SDEs of arbitrary form The Parisi–Sourlas method of the BRST procedure approach to Langevin SDEs has also been adapted to classical mechanics,[3] stochastic generalization of classical mechanics,[7] higher order Langevin SDEs,[8] and, more recently, to SDEs of arbitrary form.[9] While there exist standard techniques that allow one to consider models with colored noises, higher-dimensional "base spaces" described by partial SDEs etc., the key elements of STS can be discussed using the following basic class of SDEs, ${\dot {x}}(t)=F(x(t))+(2\Theta )^{1/2}e_{a}(x(t))\xi ^{a}(t),$ where $ x\in X$ is a point in the phase space, assumed for simplicity to be a closed topological manifold, $F(x)\in TX_{x}$ is a sufficiently smooth vector field, called the flow vector field, from the tangent space of $X$, and $e_{a}\in TX,a=1,\ldots ,\dim X$ is a set of sufficiently smooth vector fields that specify how the system is coupled to the noise, which is called additive/multiplicative depending on whether the $e_{a}$'s are independent/dependent on the position on $X$. 
Accordingly, there is a whole $\alpha $-family of infinitesimal SEOs, ${\hat {H}}_{\alpha }={\hat {L}}_{F}-\Theta {\hat {L}}_{e_{a}}{\hat {L}}_{e_{a}}=[{\hat {d}},{\hat {\bar {d}}}_{\alpha }],$ with $\textstyle {\hat {\bar {d}}}_{\alpha }={\hat {\imath }}_{F_{\alpha }}-\Theta {\hat {\imath }}_{e_{a}}{\hat {L}}_{e_{a}}$, ${\hat {\imath }}_{F_{\alpha }}$ being the interior multiplication by the subscript vector field, and the "shifted" flow vector field being $\textstyle F_{\alpha }=F-\Theta (2\alpha -1)(e_{a}\cdot \partial )e_{a}$. Noteworthy, unlike in Langevin SDEs, ${\hat {\bar {d}}}_{\alpha }$ is not a supercharge and STS cannot be identified as a N=2 supersymmetric theory in the general case. The path integral representation of stochastic dynamics is equivalent to the traditional understanding of SDEs as of a continuous time limit of stochastic difference equations where different choices of parameter $\alpha $ are called "interpretations" of SDEs. The choice $\alpha =1/2$, for which $\textstyle F_{1/2}=F$ and which is known in quantum theory as Weyl symmetrization rule, is known as the Stratonovich interpretation, whereas $\alpha =1$ as the Ito interpretation. While in quantum theory the Weyl symmetrization is preferred because it guarantees hermiticity of Hamiltonians, in STS the Stratonovich–Weyl approach is preferred because it corresponds to the most natural mathematical meaning of the finite-time SEO discussed below—the stochastically averaged pullback induced by the SDE-defined diffeomorphisms. Eigensystem of stochastic evolution operator As compared to the SEO of Langevin SDEs, the SEO of a general form SDE is pseudo-Hermitian.[19] As a result, the eigenvalues of non-supersymmetric eigenstates are not restricted to be real positive, whereas the eigenvalues of supersymmetric eigenstates are still exactly zero. Just like for Langevin SDEs and nonlinear sigma model, the structure of the eigensystem of the SEO reestablishes the topological character of the Witten index: the contributions from the non-supersymmetric pairs of eigenstates vanish and only supersymmetric states contribute the Euler characteristic of (closed) $X$. Among other properties of the SEO spectra is that $\textstyle {\hat {H}}^{(\operatorname {dim} X)}$ and $\textstyle {\hat {H}}^{(0)}$ never break TS, i.e., $\textstyle Re(\operatorname {spec} {\hat {H}}^{(\operatorname {dim} X)})\geq 0$. As a result, there are three major types of the SEO spectra presented in the figure on the right. The two types that have negative (real parts of) eigenvalues correspond to the spontaneously broken TS. All types of the SEO spectra are realizable as can be established, e.g., from the exact relation between the theory of kinematic dynamo and STS.[34] STS without BRST procedure The mathematical meaning of stochastic evolution operator The finite-time SEO can be obtained in another, more mathematical way based on the idea to study the SDE-induced actions on differential forms directly, without going through the BRST gauge fixing procedure. The so-obtained finite-time SEO is known in dynamical systems theory as the generalized transfer operator[20][21] and it has also been used in the classical theory of SDEs (see, e.g., Refs.[35][36] ). The contribution to this construction from STS[9] is the exposition of the supersymmetric structure underlying it and establishing its relation to the BRST procedure for SDEs. 
Namely, for any configuration of the noise, $\xi $, and an initial condition, $x(t')=x'\in X$, the SDE defines a unique solution/trajectory, $x(t)\in X$. Even for noise configurations that are non-differentiable with respect to time, $t$, the solution is differentiable with respect to the initial condition, $x'$.[37] In other words, the SDE defines a family of noise-configuration-dependent diffeomorphisms of the phase space to itself, $M_{tt'}:X\to X$. This object can be understood as a collection and/or definition of all the noise-configuration-dependent trajectories, $x(t)=M_{tt'}(x')$. The diffeomorphisms induce actions or pullbacks, $\textstyle M_{t't}^{*}:\Omega (X)\to \Omega (X)$. Unlike, say, trajectories in $X$, pullbacks are linear objects even for nonlinear $X$. Linear objects can be averaged, and averaging $M_{t't}^{*}$ over the noise configurations, $\xi $, results in the finite-time SEO, which is unique and corresponds to the Stratonovich–Weyl interpretation of the BRST procedure approach to SDEs, ${\hat {\mathcal {M}}}_{tt'}=\langle M_{t't}^{*}\rangle _{\text{noise}}=e^{-(t-t'){\hat {H}}_{1/2}}$. Within this definition of the finite-time SEO, the Witten index can be recognized as the sharp trace of the generalized transfer operator.[20][21] It also links the Witten index to the Lefschetz index, $ \textstyle I_{L}=\operatorname {Tr} (-1)^{\hat {n}}M_{t't}^{*}=\sum _{x\in \operatorname {fix} M_{tt'}}\operatorname {sign} \operatorname {det} (\delta _{j}^{i}-\partial M_{tt'}^{i}(x)/\partial x^{j})$, a topological constant that equals the Euler characteristic of the (closed) phase space. Namely, $ \textstyle {\mathcal {W}}=\operatorname {Tr} (-1)^{\hat {n}}\langle M_{t't}^{*}\rangle _{\text{noise}}=\langle \operatorname {Tr} (-1)^{\hat {n}}M_{t't}^{*}\rangle _{\text{noise}}=I_{L}$. The meaning of supersymmetry and the butterfly effect The N=2 supersymmetry of Langevin SDEs has been linked to the Onsager principle of microscopic reversibility[16] and the Jarzynski equality.[15] In classical mechanics, a relation between the corresponding N=2 supersymmetry and ergodicity has been proposed.[6] For SDEs of general form, where physical arguments may not be applicable, a lower level explanation of the TS is available. This explanation is based on the understanding of the finite-time SEO as a stochastically averaged pullback of the SDE-defined diffeomorphisms (see the subsection above). In this picture, the question of why any SDE has TS is the same as the question of why the exterior derivative commutes with the pullback of any diffeomorphism. The answer to this question is the differentiability of the corresponding map. In other words, the presence of TS is the algebraic version of the statement that continuous-time flow preserves the continuity of $X$. Two initially close points will remain close during evolution, which is just yet another way of saying that $M_{tt'}$ is a diffeomorphism. In deterministic chaotic models, initially close points can part in the limit of infinitely long temporal evolution. This is the famous butterfly effect, which is equivalent to the statement that $M_{tt'}$ loses differentiability in this limit. In the algebraic representation of dynamics, the evolution in the infinitely long time limit is described by the ground state of the SEO, and the butterfly effect is equivalent to the spontaneous breakdown of TS, i.e., to the situation when the ground state is not supersymmetric. 
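The butterfly effect just mentioned can be made concrete with a standard numerical illustration (added here; the Lorenz flow is only an example and is not discussed in the article): two trajectories started at nearly identical points separate roughly exponentially until the separation saturates at the size of the attractor, which is the trajectory-level picture of $M_{tt'}$ losing differentiability in the long-time limit.

    import numpy as np

    # Two nearby initial conditions of the Lorenz flow, integrated with a small Euler step.
    def step(s, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-8, 0.0, 0.0])      # initially close point
    for n in range(1, 20001):
        a, b = step(a), step(b)
        if n % 5000 == 0:
            print(f"t = {n * 0.002:5.1f}   |a - b| = {np.linalg.norm(a - b):.3e}")
    # The separation grows by orders of magnitude before saturating, in contrast with a
    # gradient (Langevin-type) flow, where initially close points remain close.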
Notably, unlike the traditional understanding of deterministic chaotic dynamics, the spontaneous breakdown of TS also works for stochastic cases. This is the most important generalization because deterministic dynamics is, in fact, a mathematical idealization. Real dynamical systems cannot be isolated from their environments and thus always experience stochastic influence. Spontaneous supersymmetry breaking and dynamical chaos The BRST gauge fixing procedure applied to SDEs leads directly to the Witten index. The Witten index is of topological character and it does not respond to any perturbation. In particular, all response correlators calculated using the Witten index vanish. This fact has a physical interpretation within the STS: the physical meaning of the Witten index is the partition function of the noise[32] and since there is no backaction from the dynamical system to the noise, the Witten index has no information on the details of the SDE. In contrast, the information on the details of the model is contained in the other trace-like object of the theory, the dynamical partition function, ${\mathcal {Z}}_{tt'}=\int \dots \int _{a.p.b.c.}e^{(Q,\Psi )}D\Phi =\operatorname {Tr} {\hat {\mathcal {M}}}_{tt'},$ where a.p.b.c. denotes antiperiodic boundary conditions for the fermionic fields and periodic boundary conditions for bosonic fields. In the standard manner, the dynamical partition function can be promoted to the generating functional by coupling the model to external probing fields. For a wide class of models, the dynamical partition function provides a lower bound for the stochastically averaged number of fixed points of the SDE-defined diffeomorphisms, $\left.{\mathcal {Z}}_{tt'}\right|_{t-t'\to \infty }=\sum \nolimits _{p}e^{-(t-t'){\mathsf {E}}_{p}}\leq \langle \#\{{\text{fixed points of }}M_{t't}\}\rangle _{\text{noise}}\propto e^{(t-t')S}.$ Here, the index $p$ runs over "physical states", i.e., the eigenstates that grow fastest, with the rate of exponential growth given as $ \Gamma _{g}=-\min _{\alpha }\operatorname {Re} {\mathsf {E}}_{\alpha }\geq 0$, and the parameter $S$ can be viewed as a stochastic version of a dynamical entropy such as the topological entropy. Positive entropy is one of the key signatures of deterministic chaos. Therefore, the situation with positive $\Gamma _{g}$ must be identified as chaotic in the generalized, stochastic sense as it implies positive entropy: $0<\Gamma _{g}\leq S$. At the same time, positive $\Gamma _{g}$ implies that TS is broken spontaneously, that is, the ground state is not supersymmetric because its eigenvalue is not zero. In other words, positive dynamical entropy is a reason to identify spontaneous TS breaking as the stochastic generalization of the concept of dynamical chaos. Notably, Langevin SDEs are never chaotic because the spectrum of their SEO is real non-negative. The complete list of reasons why spontaneous TS breaking must be viewed as the stochastic generalization of the concept of dynamical chaos is as follows. • Positive dynamical entropy. • According to the Goldstone theorem, spontaneous TS breaking must entail long-range dynamical behavior, one of the manifestations of which is the butterfly effect discussed above in the context of the meaning of TS. • From the properties of the eigensystem of the SEO, TS can be spontaneously broken only if $\dim X\geq 3$. This conclusion can be viewed as the stochastic generalization of the Poincaré–Bendixson theorem for deterministic chaos. 
• In the deterministic case, integrable models in the sense of dynamical systems have well-defined global stable and unstable manifolds of $F$. The bras/kets of the global ground states of such models are the Poincare duals of the global stable/unstable manifolds. These ground states are supersymmetric so that TS is not broken spontaneously. On the contrary, when the model is non-integrable or chaotic, its global (un)stable manifolds are not well-defined topological manifolds, but rather have a fractal, self-recurrent structure that can be captured using the concept of branching manifolds.[38] Wavefunctions that can represent such manifolds cannot be supersymmetric. Therefore, TS breaking is intrinsically related to the concept of non-integrability in the sense of dynamical systems, which is actually yet another widely accepted definition of deterministic chaos. All the above features of TS breaking work for both deterministic and stochastic models. This is in contrast with the traditional deterministic chaos whose trajectory-based properties such as the topological mixing cannot in principle be generalized to stochastic case because, just like in quantum dynamics, all trajectories are possible in the presence of noise and, say, the topological mixing property is satisfied trivially by all models with non-zero noise intensity. STS as a topological field theory The topological sector of STS can be recognized as a member of the Witten-type topological field theories.[22][23][25][26][27] In other words, some objects in STS are of topological character with the Witten index being the most famous example. There are other classes of topological objects. One class of objects is related to instantons, i.e., transient dynamics. Crumpling paper, protein folding, and many other nonlinear dynamical processes in response to quenches, i.e., to external (sudden) changes of parameters, can be recognized as instantonic dynamics. From the mathematical point of view, instantons are families of solutions of deterministic equations of motion, ${\dot {x}}=F$, that lead from, say, less stable fixed point of $F$ to a more stable fixed point. Certain matrix elements calculated on instantons are of topological nature. An example of such matrix elements can be defined for a pair of critical points, $a$ and $b$, with $a$ being more stable than $b$, $\left.\int \dots \int _{x(\pm \infty )=a,b}\left(\prod \nolimits _{i}O_{i}(t_{i})\right)e^{({\mathcal {Q}},\Psi )}D\Phi \right|_{\Theta \to 0}=\langle a|{\mathcal {T}}\left(\prod \nolimits _{i}{\hat {O}}_{i}(t_{i})\right)|b\rangle .$ Here $\langle a|$ and $|b\rangle $ are the bra and ket of the corresponding perturbative supersymmetric ground states, or vacua, which are the Poincare duals of the local stable and unstable manifolds of the corresponding critical point; ${\mathcal {T}}$ denotes chronological ordering; $O$'s are observables that are the Poincare duals of some closed submanifolds in $X$; ${\hat {O}}(t)={\hat {\mathcal {M}}}_{t_{0},t}{\hat {O}}{\hat {\mathcal {M}}}_{t,t_{0}}$ are the observables in the Heisenberg representation with $t_{0}$ being an unimportant reference time moment. The critical points have different indexes of stability so that the states $|a\rangle $ and $|b\rangle $ are topologically inequivalent as they represent unstable manifolds of different dimensionalities. The above matrix elements are independent of $t_{i}'s$ as they actually represent the intersection number of $O$-manifolds on the instanton as exemplified in the figure. 
The above instantonic matrix elements are exact only in the deterministic limit. In the general stochastic case, one can consider global supersymmetric states, $\theta $'s, from the De Rham cohomology classes of $X$ and observables, $\gamma $, that are Poincaré duals of closed manifolds non-trivial in the homology of $X$. The following matrix elements, $\textstyle \langle \theta _{\alpha }|{\mathcal {T}}\left(\prod \nolimits _{i}{\hat {\gamma }}_{i}(t_{i})\right)|\theta _{\beta }\rangle ,$ are topological invariants representative of the structure of the De Rham cohomology ring of $X$. Applications Supersymmetric theory of stochastic dynamics can be interesting in different ways. For example, STS offers a promising realization of the concept of supersymmetry. In general, there are two major problems in the context of supersymmetry. The first is establishing connections between this mathematical entity and the real world. Within STS, supersymmetry is the most common symmetry in nature because it is pertinent to all continuous time dynamical systems. The second is the spontaneous breakdown of supersymmetry. This problem is particularly important for particle physics because the supersymmetry of elementary particles, if it exists at extremely short scales, must be broken spontaneously at large scales. This problem is nontrivial because supersymmetries are hard to break spontaneously, the very reason behind the introduction of soft or explicit supersymmetry breaking.[39] Within STS, the spontaneous breakdown of supersymmetry is indeed a nontrivial dynamical phenomenon that has been variously known across disciplines as chaos, turbulence, self-organized criticality etc. A few more specific applications of STS are as follows. Classification of stochastic dynamics STS provides a classification of stochastic models depending on whether TS is broken and on the integrability of the flow vector field. It can be exemplified as a part of the general phase diagram at the border of chaos (see figure on the right). The phase diagram has the following properties: • For physical models, TS is eventually restored with increasing noise intensity. • The symmetric phase can be called thermal equilibrium, or T-phase, because the ground state is the supersymmetric state of the steady-state total probability distribution. • In the deterministic limit, the ordered phase is equivalent to deterministic chaotic dynamics with non-integrable flow. • The ordered non-integrable phase can be called chaos, or C-phase, because ordinary deterministic chaos belongs to it. • The ordered integrable phase can be called noise-induced chaos, or N-phase, because it disappears in the deterministic limit. TS is broken by the condensation of (anti-)instantons (see below). • At stronger noise intensities, the sharp N-C boundary must smear out into a crossover because (anti-)instantons lose their individuality and it is hard for an external observer to tell one tunneling process from another. Demystification of self-organized criticality Many sudden (or instantonic) processes in nature, such as, e.g., crackling noise, exhibit scale-free statistics often called Zipf's law. As an explanation for this peculiar spontaneous dynamical behavior, it was proposed that some stochastic dynamical systems have a tendency to self-tune into a critical point, a phenomenological approach known as self-organized criticality (SOC).[40] STS offers an alternative perspective on this phenomenon.[41] Within STS, SOC is nothing more than dynamics in the N-phase. 
Specifically, the definitive feature of the N-phase is the peculiar mechanism of the TS breaking. Unlike in the C-phase, where the TS is broken by the non-integrability of the flow, in the N-phase the TS is spontaneously broken by the condensation of configurations of instantons and noise-induced anti-instantons, i.e., time-reversed instantons. These processes can be roughly interpreted as noise-induced tunneling events between, e.g., different attractors. Qualitatively, the dynamics in the N-phase appears to an external observer as a sequence of sudden jumps, or "avalanches", that must exhibit scale-free behavior/statistics as a result of the Goldstone theorem. This picture of dynamics in the N-phase is exactly the dynamical behavior that the concept of SOC was designed to explain. In contrast with the original understanding of SOC,[42] its STS interpretation has little to do with the traditional critical phenomena theory, where scale-free behavior is associated with unstable fixed points of the renormalization group flow. Kinematic dynamo theory The magnetohydrodynamical phenomenon of the kinematic dynamo can also be identified as the spontaneous breakdown of TS.[34] This result follows from the equivalence between the evolution operator of the magnetic field and the SEO of the corresponding SDE describing the flow of the background matter. The resulting STS-kinematic dynamo correspondence proves, in particular, that both types of TS-breaking spectra are possible, with real and with complex ground-state eigenvalues, because kinematic dynamos with both types of fastest-growing eigenmodes are known.[43] Transient dynamics It is well known that various types of transient dynamics, such as quenches, exhibit spontaneous long-range behavior. In the case of quenches across phase transitions, this behavior is often attributed to the proximity of criticality. Quenches that do not involve a phase transition are also known to exhibit long-range characteristics, with the best-known examples being the Barkhausen effect and the various realizations of the concept of crackling noise. It is intuitively appealing that the theoretical explanation for the scale-free behavior in quenches should be the same for all quenches, regardless of whether or not they produce a phase transition; STS offers such an explanation. Namely, transient dynamics is essentially a composite instanton, and TS is intrinsically broken within instantons. Even though TS breaking within instantons is not exactly due to the phenomenon of the spontaneous breakdown of a symmetry by a global ground state, this effective TS breaking must also result in scale-free behavior. This understanding is supported by the fact that condensed instantons lead to the appearance of logarithms in the correlation functions.[44] This picture of transient dynamics explains the computational efficiency of digital memcomputing machines.[45] See also • Stochastic quantization References 1. Parisi, G.; Sourlas, N. (1979). "Random Magnetic Fields, Supersymmetry, and Negative Dimensions". Physical Review Letters. 43 (11): 744–745. Bibcode:1979PhRvL..43..744P. doi:10.1103/PhysRevLett.43.744. 2. Parisi, G. (1982). "Supersymmetric field theories and stochastic differential equations". Nuclear Physics B. 206 (2): 321–332. Bibcode:1982NuPhB.206..321P. doi:10.1016/0550-3213(82)90538-7. 3. Gozzi, E.; Reuter, M. (1990). "Classical mechanics as a topological field theory". Physics Letters B. 240 (1–2): 137–144. Bibcode:1990PhLB..240..137G. doi:10.1016/0370-2693(90)90422-3. 4.
Niemi, A. J. (1995). "A lower bound for the number of periodic classical trajectories". Physics Letters B. 355 (3–4): 501–506. Bibcode:1995PhLB..355..501N. doi:10.1016/0370-2693(95)00780-o. 5. Niemi, A. J.; Pasanen, P. (1996-10-03). "Topological σ-model, Hamiltonian dynamics and loop space Lefschetz number". Physics Letters B. 386 (1): 123–130. arXiv:hep-th/9508067. Bibcode:1996PhLB..386..123N. doi:10.1016/0370-2693(96)00941-0. S2CID 119102809. 6. Gozzi, E.; Reuter, M. (1989-12-28). "Algebraic characterization of ergodicity". Physics Letters B. 233 (3): 383–392. Bibcode:1989PhLB..233..383G. doi:10.1016/0370-2693(89)91327-0. 7. Tailleur, J.; Tănase-Nicola, S.; Kurchan, J. (2006-02-01). "Kramers Equation and Supersymmetry". Journal of Statistical Physics. 122 (4): 557–595. arXiv:cond-mat/0503545. Bibcode:2006JSP...122..557T. doi:10.1007/s10955-005-8059-x. ISSN 0022-4715. S2CID 119716999. 8. Kleinert, H.; Shabanov, S. V. (1997-10-27). "Supersymmetry in stochastic processes with higher-order time derivatives". Physics Letters A. 235 (2): 105–112. arXiv:quant-ph/9705042. Bibcode:1997PhLA..235..105K. doi:10.1016/s0375-9601(97)00660-9. S2CID 119459346. 9. Ovchinnikov, I. V. (2016-03-28). "Introduction to Supersymmetric Theory of Stochastics". Entropy. 18 (4): 108. arXiv:1511.03393. Bibcode:2016Entrp..18..108O. doi:10.3390/e18040108. S2CID 2388285. 10. Aharony, A.; Imry, Y.; Ma, S.K. (1976). "Lowering of dimensionality in phase transitions with random fields". Physical Review Letters. 37 (20): 1364–1367. doi:10.1103/PhysRevLett.37.1364. 11. Cecotti, S; Girardello, L (1983-01-01). "Stochastic and parastochastic aspects of supersymmetric functional measures: A new non-perturbative approach to supersymmetry". Annals of Physics. 145 (1): 81–99. Bibcode:1983AnPhy.145...81C. doi:10.1016/0003-4916(83)90172-0. 12. Zinn-Justin, J. (1986-09-29). "Renormalization and stochastic quantization". Nuclear Physics B. 275 (1): 135–159. Bibcode:1986NuPhB.275..135Z. doi:10.1016/0550-3213(86)90592-4. 13. Dijkgraaf, R.; Orlando, D.; Reffert, S. (2010-01-11). "Relating field theories via stochastic quantization". Nuclear Physics B. 824 (3): 365–386. arXiv:0903.0732. Bibcode:2010NuPhB.824..365D. doi:10.1016/j.nuclphysb.2009.07.018. S2CID 2033425. 14. Kurchan, J. (1992-07-01). "Supersymmetry in spin glass dynamics". Journal de Physique I. 2 (7): 1333–1352. Bibcode:1992JPhy1...2.1333K. doi:10.1051/jp1:1992214. ISSN 1155-4304. S2CID 124073976. 15. Mallick, K.; Moshe, M.; Orland, H. (2007-11-13). "Supersymmetry and Nonequilibrium Work Relations". arXiv:0711.2059 [cond-mat.stat-mech]. 16. Gozzi, E. (1984). "Onsager principle of microscopic reversibility and supersymmetry". Physical Review D. 30 (6): 1218–1227. Bibcode:1984PhRvD..30.1218G. doi:10.1103/physrevd.30.1218. 17. Bernstein, M. (1984). "Supersymmetry and the Bistable Fokker-Planck Equation". Physical Review Letters. 52 (22): 1933–1935. Bibcode:1984PhRvL..52.1933B. doi:10.1103/physrevlett.52.1933. 18. Olemskoi, A. I; Khomenko, A. V; Olemskoi, D. A (2004-02-01). "Field theory of self-organization". Physica A: Statistical Mechanics and Its Applications. 332: 185–206. Bibcode:2004PhyA..332..185O. doi:10.1016/j.physa.2003.10.035. 19. Mostafazadeh, A. (2002-07-19). "Pseudo-Hermiticity versus PT-symmetry III: Equivalence of pseudo-Hermiticity and the presence of antilinear symmetries". Journal of Mathematical Physics. 43 (8): 3944–3951. arXiv:math-ph/0203005. Bibcode:2002JMP....43.3944M. doi:10.1063/1.1489072. ISSN 0022-2488. S2CID 7096321. 20. Reulle, D. (2002). 
"Dynamical Zeta Functions and Transfer Operators" (PDF). Notices of the AMS. 49 (8): 887. 21. Ruelle, D. (1990-12-01). "An extension of the theory of Fredholm determinants". Publications Mathématiques de l'Institut des Hautes Études Scientifiques. 72 (1): 175–193. doi:10.1007/bf02699133. ISSN 0073-8301. S2CID 121869096. 22. Birmingham, D; Blau, M.; Rakowski, M.; Thompson, G. (1991). "Topological field theory". Physics Reports. 209 (4–5): 129–340. Bibcode:1991PhR...209..129B. doi:10.1016/0370-1573(91)90117-5. 23. Witten, E. (1988-09-01). "Topological sigma models". Communications in Mathematical Physics. 118 (3): 411–449. Bibcode:1988CMaPh.118..411W. doi:10.1007/BF01466725. ISSN 0010-3616. S2CID 34042140. 24. Baulieu, L.; Singer, I.M. (1988). "The topological sigma model". Communications in Mathematical Physics. 125 (2): 227–237. doi:10.1007/BF01217907. S2CID 120150962. 25. Witten, E. (1988-09-01). "Topological quantum field theory". Communications in Mathematical Physics. 117 (3): 353–386. Bibcode:1988CMaPh.117..353W. doi:10.1007/BF01223371. ISSN 0010-3616. S2CID 43230714. 26. Witten, E. (1982). "Supersymmetry and Morse theory". Journal of Differential Geometry. 17 (4): 661–692. doi:10.4310/jdg/1214437492. ISSN 0022-040X. 27. Labastida, J. M. F. (1989-12-01). "Morse theory interpretation of topological quantum field theories". Communications in Mathematical Physics. 123 (4): 641–658. Bibcode:1989CMaPh.123..641L. CiteSeerX 10.1.1.509.3123. doi:10.1007/BF01218589. ISSN 0010-3616. S2CID 53555484. 28. Nicolai, H. (1980-12-22). "Supersymmetry and functional integration measures". Nuclear Physics B. 176 (2): 419–428. Bibcode:1980NuPhB.176..419N. doi:10.1016/0550-3213(80)90460-5. 29. Nicolai, H. (1980-01-28). "On a new characterization of scalar supersymmetric theories" (PDF). Physics Letters B. 89 (3): 341–346. Bibcode:1980PhLB...89..341N. doi:10.1016/0370-2693(80)90138-0. 30. Klein, A.; Landau, L.J.; Perez, J.F. (1984). "Supersymmetry and the Parisi-Sourlas dimensional reduction: a rigorous proof". Communications in Mathematical Physics. 94 (4): 459–482. doi:10.1007/BF01403882. S2CID 120640917. 31. Baulieu, L.; Grossman, B. (1988). "A topological interpretation of stochastic quantization". Physics Letters B. 212 (3): 351–356. Bibcode:1988PhLB..212..351B. doi:10.1016/0370-2693(88)91328-7. 32. Ovchinnikov, I.V. (2013-01-15). "Topological field theory of dynamical systems. II". Chaos: An Interdisciplinary Journal of Nonlinear Science. 23 (1): 013108. arXiv:1212.1989. Bibcode:2013Chaos..23a3108O. doi:10.1063/1.4775755. ISSN 1054-1500. PMID 23556945. S2CID 34229910. 33. Graham, R. (1988). "Lyapunov Exponents and Supersymmetry of Stochastic Dynamical Systems". EPL. 5 (2): 101–106. Bibcode:1988EL......5..101G. doi:10.1209/0295-5075/5/2/002. ISSN 0295-5075. S2CID 250788554. 34. Ovchinnikov, I.V.; Ensslin, T. A. (2016). "Kinematic dynamo, supersymmetry breaking, and chaos". Physical Review D. 93 (8): 085023. arXiv:1512.01651. Bibcode:2016PhRvD..93h5023O. doi:10.1103/PhysRevD.93.085023. S2CID 59367815. 35. Ancona, A.; Elworthy, K. D.; Emery, M.; Kunita, H. (2013). Stochastic differential geometry at Saint-Flour. Springer. ISBN 9783642341700. OCLC 811000422. 36. Kunita, H. (1997). Stochastic flows and stochastic differential equations. Cambridge University Press. ISBN 978-0521599252. OCLC 36864963. 37. Slavík, A. (2013). "Generalized differential equations: Differentiability of solutions with respect to initial conditions and parameters". Journal of Mathematical Analysis and Applications. 
402 (1): 261–274. doi:10.1016/j.jmaa.2013.01.027. 38. Gilmore, R.; Lefranc, M. (2011). The topology of chaos : Alice in stretch and squeezeland. Wiley-VCH. ISBN 9783527410675. OCLC 967841676. 39. Chung, D. J. H.; Everett, L. L.; Kane, G. L.; King, S. F.; Lykken, J.; Wang, Lian-Tao (2005-02-01). "The soft supersymmetry-breaking Lagrangian: theory and applications". Physics Reports. 407 (1–3): 1–203. arXiv:hep-ph/0312378. Bibcode:2005PhR...407....1C. doi:10.1016/j.physrep.2004.08.032. S2CID 119344585. 40. Watkins, N. W.; Pruessner, G.; Chapman, S. C.; Crosby, N. B.; Jensen, H. J. (2016-01-01). "25 Years of Self-organized Criticality: Concepts and Controversies". Space Science Reviews. 198 (1–4): 3–44. arXiv:1504.04991. Bibcode:2016SSRv..198....3W. doi:10.1007/s11214-015-0155-x. ISSN 0038-6308. S2CID 34782655. 41. Ovchinnikov, I. V. (2016-06-01). "Supersymmetric Theory of Stochastics: Demystification of Self-Organized Criticality". In Skiadas C.H. and Skiadas C. (ed.). Handbook of Applications of Chaos Theory. Chapman and Hall/CRC. pp. 271–305. doi:10.1201/b20232. ISBN 9781466590441. 42. Bak, P.; Tang, C.; Wiesenfeld, K. (1987). "Self-organized criticality: An explanation of the 1/f noise". Physical Review Letters. 59 (4): 381–384. Bibcode:1987PhRvL..59..381B. doi:10.1103/PhysRevLett.59.381. PMID 10035754. S2CID 7674321. 43. Bouya, I.; Dormy, E. (2013-03-01). "Revisiting the ABC flow dynamo". Physics of Fluids. 25 (3): 037103–037103–10. arXiv:1206.5186. Bibcode:2013PhFl...25c7103B. doi:10.1063/1.4795546. ISSN 1070-6631. S2CID 118722952. 44. Frenkel, E.; Losev, A.; Nekrasov, N. (2007). "Notes on instantons in topological field theory and beyond". Nuclear Physics B: Proceedings Supplements. 171: 215–230. arXiv:hep-th/0702137. Bibcode:2007NuPhS.171..215F. doi:10.1016/j.nuclphysbps.2007.06.013. S2CID 14914819. 45. Di Ventra, M.; Traversa, F. L.; Ovchinnikov, I. V. (2017). "Topological Field Theory and Computing with Instantons". Annalen der Physik. 2017 (12): 1700123. arXiv:1609.03230. Bibcode:2017AnP...52900123D. doi:10.1002/andp.201700123. ISSN 1521-3889. S2CID 9437990. 
Topological Galois theory In mathematics, topological Galois theory is a theory that originated from V. I. Arnold's topological proof of Abel's impossibility theorem and concerns the application of topological concepts to problems in Galois theory. It connects ideas from algebra to ideas in topology. As described in Khovanskii's book: "According to this theory, the way the Riemann surface of an analytic function covers the plane of complex numbers can obstruct the representability of this function by explicit formulas. The strongest known results on the unexpressibility of functions by explicit formulas have been obtained in this way." References • Arnold, V. I. Abel's Theorem in Problems and Solutions. • Khovanskii, A. G. Topological Galois Theory. • Burda, Y. Topological Methods in Galois Theory (PDF).
Topological category In category theory, a discipline in mathematics, the notion of topological category has a number of different, inequivalent definitions. In one approach, a topological category is a category that is enriched over the category of compactly generated Hausdorff spaces. They can be used as a foundation for higher category theory, where they can play the role of ($\infty $,1)-categories. An important example of a topological category in this sense is given by the category of CW complexes, where each set Hom(X,Y) of continuous maps from X to Y is equipped with the compact-open topology. (Lurie 2009) In another approach, a topological category is defined as a category $C$ along with a forgetful functor $T:C\to \mathbf {Set} $ that maps to the category of sets and has the following three properties: • $C$ admits initial (also known as weak) structures with respect to $T$ • Constant functions in $\mathbf {Set} $ lift to $C$-morphisms • Fibers $T^{-1}x,x\in \mathbf {Set} $ are small (they are sets and not proper classes). An example of a topological category in this sense is the category of all topological spaces with continuous maps, where one uses the standard forgetful functor.[1] See also • Infinity category • Simplicial category References 1. Brümmer, G. C. L. (September 1984). "Topological categories". Topology and Its Applications. 18 (1): 27–41. doi:10.1016/0166-8641(84)90029-4. • Lurie, Jacob (2009), Higher topos theory, Annals of Mathematics Studies, vol. 170, Princeton University Press, arXiv:math.CT/0608040, ISBN 978-0-691-14049-0, MR 2522659
Chiral homology In mathematics, chiral homology, introduced by Alexander Beilinson and Vladimir Drinfeld, is, in their words, "a “quantum” version of (the algebra of functions on) the space of global horizontal sections of an affine ${\mathcal {D}}_{X}$-scheme (i.e., the space of global solutions of a system of non-linear differential equations)." Jacob Lurie's topological chiral homology gives an analog for manifolds.[1] See also • Ran space • Chiral Lie algebra • Factorization homology References 1. outline of "On the Classification of Topological Field Theories" at the nLab • Beilinson, Alexander; Drinfeld, Vladimir (2004). "Chapter 4". Chiral algebras. American Mathematical Society. ISBN 0-8218-3528-9.
Closure (topology) In topology, the closure of a subset S of points in a topological space consists of all points in S together with all limit points of S. The closure of S may equivalently be defined as the union of S and its boundary, and also as the intersection of all closed sets containing S. Intuitively, the closure can be thought of as all the points that are either in S or "very near" S. A point which is in the closure of S is a point of closure of S. The notion of closure is in many ways dual to the notion of interior. Definitions Point of closure For $S$ as a subset of a Euclidean space, $x$ is a point of closure of $S$ if every open ball centered at $x$ contains a point of $S$ (this point can be $x$ itself). This definition generalizes to any subset $S$ of a metric space $X.$ Fully expressed, for $X$ as a metric space with metric $d,$ $x$ is a point of closure of $S$ if for every $r>0$ there exists some $s\in S$ such that the distance $d(x,s)<r$ ($x=s$ is allowed). Another way to express this is to say that $x$ is a point of closure of $S$ if the distance $d(x,S):=\inf _{s\in S}d(x,s)=0$ where $\inf $ is the infimum. This definition generalizes to topological spaces by replacing "open ball" or "ball" with "neighbourhood". Let $S$ be a subset of a topological space $X.$ Then $x$ is a point of closure or adherent point of $S$ if every neighbourhood of $x$ contains a point of $S$ (again, $x=s$ for $s\in S$ is allowed).[1] Note that this definition does not depend upon whether neighbourhoods are required to be open. Limit point The definition of a point of closure of a set is closely related to the definition of a limit point of a set. The difference between the two definitions is subtle but important – namely, in the definition of a limit point $x$ of a set $S$, every neighbourhood of $x$ must contain a point of $S$ other than $x$ itself, i.e., each neighbourhood of $x$ obviously has $x$ but it also must have a point of $S$ that is not equal to $x$ in order for $x$ to be a limit point of $S$. A limit point of $S$ has more strict condition than a point of closure of $S$ in the definitions. The set of all limit points of a set $S$ is called the derived set of $S$. A limit point of a set is also called cluster point or accumulation point of the set. Thus, every limit point is a point of closure, but not every point of closure is a limit point. A point of closure which is not a limit point is an isolated point. In other words, a point $x$ is an isolated point of $S$ if it is an element of $S$ and there is a neighbourhood of $x$ which contains no other points of $S$ than $x$ itself.[2] For a given set $S$ and point $x,$ $x$ is a point of closure of $S$ if and only if $x$ is an element of $S$ or $x$ is a limit point of $S$ (or both). Closure of a set See also: Closure (mathematics) The closure of a subset $S$ of a topological space $(X,\tau ),$ denoted by $\operatorname {cl} _{(X,\tau )}S$ or possibly by $\operatorname {cl} _{X}S$ (if $\tau $ is understood), where if both $X$ and $\tau $ are clear from context then it may also be denoted by $\operatorname {cl} S,$ ${\overline {S}},$ or $S{}^{-}$ (Moreover, $\operatorname {cl} $ is sometimes capitalized to $\operatorname {Cl} $.) can be defined using any of the following equivalent definitions: 1. $\operatorname {cl} S$ is the set of all points of closure of $S.$ 2. $\operatorname {cl} S$ is the set $S$ together with all of its limit points. 
(Each point of $S$ is a point of closure of $S$, and each limit point of $S$ is also a point of closure of $S$.)[3] 3. $\operatorname {cl} S$ is the intersection of all closed sets containing $S.$ 4. $\operatorname {cl} S$ is the smallest closed set containing $S.$ 5. $\operatorname {cl} S$ is the union of $S$ and its boundary $\partial (S).$ 6. $\operatorname {cl} S$ is the set of all $x\in X$ for which there exists a net valued in $S$ that converges to $x$ in $(X,\tau ).$ The closure of a set has the following properties.[4] • $\operatorname {cl} S$ is a closed superset of $S$. • The set $S$ is closed if and only if $S=\operatorname {cl} S$. • If $S\subseteq T$ then $\operatorname {cl} S$ is a subset of $\operatorname {cl} T.$ • If $A$ is a closed set, then $A$ contains $S$ if and only if $A$ contains $\operatorname {cl} S.$ Sometimes the second or third property above is taken as the definition of the topological closure, which still makes sense when applied to other types of closures (see below).[5] In a first-countable space (such as a metric space), $\operatorname {cl} S$ is the set of all limits of all convergent sequences of points in $S.$ For a general topological space, this statement remains true if one replaces "sequence" by "net" or "filter" (as described in the article on filters in topology). Note that these properties are also satisfied if "closure", "superset", "intersection", "contains/containing", "smallest" and "closed" are replaced by "interior", "subset", "union", "contained in", "largest", and "open". For more on this matter, see closure operator below. Examples Consider a sphere in 3-dimensional space. Implicitly there are two regions of interest created by this sphere: the sphere itself and its interior (which is called an open 3-ball). It is useful to distinguish between the interior and the surface of the sphere, so we distinguish between the open 3-ball (the interior of the sphere), and the closed 3-ball – the closure of the open 3-ball, that is, the open 3-ball plus its surface (the sphere itself). In a topological space: • In any space, $\varnothing =\operatorname {cl} \varnothing $. In other words, the closure of the empty set $\varnothing $ is $\varnothing $ itself. • In any space $X,$ $X=\operatorname {cl} X.$ Giving $\mathbb {R} $ and $\mathbb {C} $ the standard (metric) topology: • If $X$ is the Euclidean space $\mathbb {R} $ of real numbers, then $\operatorname {cl} _{X}((0,1))=[0,1]$. In other words, the closure of the set $(0,1)$ as a subset of $X$ is $[0,1]$. • If $X$ is the Euclidean space $\mathbb {R} $, then the closure of the set $\mathbb {Q} $ of rational numbers is the whole space $\mathbb {R} .$ We say that $\mathbb {Q} $ is dense in $\mathbb {R} .$ • If $X$ is the complex plane $\mathbb {C} =\mathbb {R} ^{2},$ then $\operatorname {cl} _{X}\left(\{z\in \mathbb {C} :|z|>1\}\right)=\{z\in \mathbb {C} :|z|\geq 1\}.$ • If $S$ is a finite subset of a Euclidean space $X,$ then $\operatorname {cl} _{X}S=S.$ (For a general topological space, this property is equivalent to the T1 axiom.) On the set of real numbers one can put topologies other than the standard one.
• If $X=\mathbb {R} $ is endowed with the lower limit topology, then $\operatorname {cl} _{X}((0,1))=[0,1).$ • If one considers on $X=\mathbb {R} $ the discrete topology in which every set is closed (open), then $\operatorname {cl} _{X}((0,1))=(0,1).$ • If one considers on $X=\mathbb {R} $ the trivial topology in which the only closed (open) sets are the empty set and $\mathbb {R} $ itself, then $\operatorname {cl} _{X}((0,1))=\mathbb {R} .$ These examples show that the closure of a set depends upon the topology of the underlying space. The last two examples are special cases of the following. • In any discrete space, since every set is closed (and also open), every set is equal to its closure. • In any indiscrete space $X,$ since the only closed sets are the empty set and $X$ itself, we have that the closure of the empty set is the empty set, and for every non-empty subset $A$ of $X,$ $\operatorname {cl} _{X}A=X.$ In other words, every non-empty subset of an indiscrete space is dense. The closure of a set also depends upon in which space we are taking the closure. For example, if $X$ is the set of rational numbers, with the usual relative topology induced by the Euclidean space $\mathbb {R} ,$ and if $S=\{q\in \mathbb {Q} :q^{2}>2,q>0\},$ then $S$ is both closed and open in $\mathbb {Q} $ because neither $S$ nor its complement can contain ${\sqrt {2}}$, which would be the lower bound of $S$, but cannot be in $S$ because ${\sqrt {2}}$ is irrational. So, $S$ has no well defined closure due to boundary elements not being in $\mathbb {Q} $. However, if we instead define $X$ to be the set of real numbers and define the interval in the same way then the closure of that interval is well defined and would be the set of all real numbers greater than or equal to ${\sqrt {2}}$. Closure operator See also: Closure operator and Kuratowski closure axioms A closure operator on a set $X$ is a mapping of the power set of $X,$ ${\mathcal {P}}(X)$, into itself which satisfies the Kuratowski closure axioms. Given a topological space $(X,\tau )$, the topological closure induces a function $\operatorname {cl} _{X}:\wp (X)\to \wp (X)$ that is defined by sending a subset $S\subseteq X$ to $\operatorname {cl} _{X}S,$ where the notation ${\overline {S}}$ or $S^{-}$ may be used instead. Conversely, if $\mathbb {c} $ is a closure operator on a set $X,$ then a topological space is obtained by defining the closed sets as being exactly those subsets $S\subseteq X$ that satisfy $\mathbb {c} (S)=S$ (so complements in $X$ of these subsets form the open sets of the topology).[6] The closure operator $\operatorname {cl} _{X}$ is dual to the interior operator, which is denoted by $\operatorname {int} _{X},$ in the sense that $\operatorname {cl} _{X}S=X\setminus \operatorname {int} _{X}(X\setminus S),$ and also $\operatorname {int} _{X}S=X\setminus \operatorname {cl} _{X}(X\setminus S).$ Therefore, the abstract theory of closure operators and the Kuratowski closure axioms can be readily translated into the language of interior operators by replacing sets with their complements in $X.$ In general, the closure operator does not commute with intersections. However, in a complete metric space the following result does hold: Theorem[7] (C. 
Ursescu) — Let $S_{1},S_{2},\ldots $ be a sequence of subsets of a complete metric space $X.$ • If each $S_{i}$ is closed in $X$ then $\operatorname {cl} _{X}\left(\bigcup _{i\in \mathbb {N} }\operatorname {int} _{X}S_{i}\right)=\operatorname {cl} _{X}\left[\operatorname {int} _{X}\left(\bigcup _{i\in \mathbb {N} }S_{i}\right)\right].$ • If each $S_{i}$ is open in $X$ then $\operatorname {int} _{X}\left(\bigcap _{i\in \mathbb {N} }\operatorname {cl} _{X}S_{i}\right)=\operatorname {int} _{X}\left[\operatorname {cl} _{X}\left(\bigcap _{i\in \mathbb {N} }S_{i}\right)\right].$ Facts about closures A subset $S$ is closed in $X$ if and only if $\operatorname {cl} _{X}S=S.$ In particular: • The closure of the empty set is the empty set; • The closure of $X$ itself is $X.$ • The closure of an intersection of sets is always a subset of (but need not be equal to) the intersection of the closures of the sets. • In a union of finitely many sets, the closure of the union and the union of the closures are equal; the union of zero sets is the empty set, and so this statement contains the earlier statement about the closure of the empty set as a special case. • The closure of the union of infinitely many sets need not equal the union of the closures, but it is always a superset of the union of the closures. • Thus, just as the union of two closed sets is closed, so too does closure distribute over binary unions: that is, $\operatorname {cl} _{X}(S\cup T)=(\operatorname {cl} _{X}S)\cup (\operatorname {cl} _{X}T).$ But just as a union of infinitely many closed sets is not necessarily closed, so too does closure not necessarily distribute over infinite unions: that is, $\operatorname {cl} _{X}\left(\bigcup _{i\in I}S_{i}\right)\neq \bigcup _{i\in I}\operatorname {cl} _{X}S_{i}$ is possible when $I$ is infinite. If $S\subseteq T\subseteq X$ and if $T$ is a subspace of $X$ (meaning that $T$ is endowed with the subspace topology that $X$ induces on it), then $\operatorname {cl} _{T}S\subseteq \operatorname {cl} _{X}S$ and the closure of $S$ computed in $T$ is equal to the intersection of $T$ and the closure of $S$ computed in $X$: $\operatorname {cl} _{T}S~=~T\cap \operatorname {cl} _{X}S.$ Proof Because $\operatorname {cl} _{X}S$ is a closed subset of $X,$ the intersection $T\cap \operatorname {cl} _{X}S$ is a closed subset of $T$ (by definition of the subspace topology), which implies that $\operatorname {cl} _{T}S\subseteq T\cap \operatorname {cl} _{X}S$ (because $\operatorname {cl} _{T}S$ is the smallest closed subset of $T$ containing $S$). 
Because $\operatorname {cl} _{T}S$ is a closed subset of $T,$ from the definition of the subspace topology, there must exist some set $C\subseteq X$ such that $C$ is closed in $X$ and $\operatorname {cl} _{T}S=T\cap C.$ Because $S\subseteq \operatorname {cl} _{T}S\subseteq C$ and $C$ is closed in $X,$ the minimality of $\operatorname {cl} _{X}S$ implies that $\operatorname {cl} _{X}S\subseteq C.$ Intersecting both sides with $T$ shows that $T\cap \operatorname {cl} _{X}S\subseteq T\cap C=\operatorname {cl} _{T}S.$ $\blacksquare $ It follows that $S\subseteq T$ is a dense subset of $T$ if and only if $T$ is a subset of $\operatorname {cl} _{X}S.$ It is possible for $\operatorname {cl} _{T}S=T\cap \operatorname {cl} _{X}S$ to be a proper subset of $\operatorname {cl} _{X}S;$ for example, take $X=\mathbb {R} ,$ $S=(0,1),$ and $T=(0,\infty ).$ If $S,T\subseteq X$ but $S$ is not necessarily a subset of $T$ then only $\operatorname {cl} _{T}(S\cap T)~\subseteq ~T\cap \operatorname {cl} _{X}S$ is always guaranteed, where this containment could be strict (consider for instance $X=\mathbb {R} $ with the usual topology, $T=(-\infty ,0],$ and $S=(0,\infty )$[proof 1]), although if $T$ happens to an open subset of $X$ then the equality $\operatorname {cl} _{T}(S\cap T)=T\cap \operatorname {cl} _{X}S$ will hold (no matter the relationship between $S$ and $T$). Proof Let $S,T\subseteq X$ and assume that $T$ is open in $X.$ Let $C:=\operatorname {cl} _{T}(T\cap S),$ which is equal to $T\cap \operatorname {cl} _{X}(T\cap S)$ (because $T\cap S\subseteq T\subseteq X$). The complement $T\setminus C$ is open in $T,$ where $T$ being open in $X$ now implies that $T\setminus C$ is also open in $X.$ Consequently $X\setminus (T\setminus C)=(X\setminus T)\cup C$ is a closed subset of $X$ where $(X\setminus T)\cup C$ contains $S$ as a subset (because if $s\in S$ is in $T$ then $s\in T\cap S\subseteq \operatorname {cl} _{T}(T\cap S)=C$), which implies that $\operatorname {cl} _{X}S\subseteq (X\setminus T)\cup C.$ Intersecting both sides with $T$ proves that $T\cap \operatorname {cl} _{X}S\subseteq T\cap C=C.$ The reverse inclusion follows from $C\subseteq \operatorname {cl} _{X}(T\cap S)\subseteq \operatorname {cl} _{X}S.$ $\blacksquare $ Consequently, if ${\mathcal {U}}$ is any open cover of $X$ and if $S\subseteq X$ is any subset then: $\operatorname {cl} _{X}S=\bigcup _{U\in {\mathcal {U}}}\operatorname {cl} _{U}(U\cap S)$ because $\operatorname {cl} _{U}(S\cap U)=U\cap \operatorname {cl} _{X}S$ for every $U\in {\mathcal {U}}$ (where every $U\in {\mathcal {U}}$ is endowed with the subspace topology induced on it by $X$). This equality is particularly useful when $X$ is a manifold and the sets in the open cover ${\mathcal {U}}$ are domains of coordinate charts. In words, this result shows that the closure in $X$ of any subset $S\subseteq X$ can be computed "locally" in the sets of any open cover of $X$ and then unioned together. 
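The identity just proved, $\operatorname {cl} _{T}(S\cap T)=T\cap \operatorname {cl} _{X}S$ for open $T$, can be checked mechanically on a small example. The sketch below (illustrative code; the four-point space and its chain topology are made up for the example) computes closures directly as intersections of closed supersets, as in definition 3 above, and verifies the identity for every subset $S$ and every open $T$.

```python
# A mechanical check of cl_T(S & T) == T & cl_X(S) for open T, on a small
# finite topological space. The space and its (chain) topology are made up
# purely for illustration; being a chain, `opens` is closed under unions and
# intersections, so it really is a topology on X.
from itertools import combinations

X = frozenset({1, 2, 3, 4})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3}), X]

def closure(S, ambient, open_sets):
    """Intersection of all closed sets of the ambient space containing S."""
    result = ambient
    for U in open_sets:
        C = ambient - U              # closed sets are complements of open sets
        if S <= C:
            result &= C
    return result

def subsets(A):
    A = list(A)
    return [frozenset(c) for r in range(len(A) + 1) for c in combinations(A, r)]

for T in opens:                                   # every T here is open in X
    opens_T = [U & T for U in opens]              # subspace topology on T
    for S in subsets(X):
        assert closure(S & T, T, opens_T) == T & closure(S, X, opens), (S, T)

print("cl_T(S & T) == T & cl_X(S) holds for every S and every open T in this example.")
```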
In this way, this result can be viewed as the analogue of the well-known fact that a subset $S\subseteq X$ is closed in $X$ if and only if it is "locally closed in $X$", meaning that if ${\mathcal {U}}$ is any open cover of $X$ then $S$ is closed in $X$ if and only if $S\cap U$ is closed in $U$ for every $U\in {\mathcal {U}}.$ Functions and closure Continuity Main article: Continuous function A function $f:X\to Y$ between topological spaces is continuous if and only if the preimage of every closed subset of the codomain is closed in the domain; explicitly, this means: $f^{-1}(C)$ is closed in $X$ whenever $C$ is a closed subset of $Y.$ In terms of the closure operator, $f:X\to Y$ is continuous if and only if for every subset $A\subseteq X,$ $f\left(\operatorname {cl} _{X}A\right)~\subseteq ~\operatorname {cl} _{Y}(f(A)).$ That is to say, given any element $x\in X$ that belongs to the closure of a subset $A\subseteq X,$ $f(x)$ necessarily belongs to the closure of $f(A)$ in $Y.$ If we declare that a point $x$ is close to a subset $A\subseteq X$ if $x\in \operatorname {cl} _{X}A,$ then this terminology allows for a plain English description of continuity: $f$ is continuous if and only if for every subset $A\subseteq X,$ $f$ maps points that are close to $A$ to points that are close to $f(A).$ Thus continuous functions are exactly those functions that preserve (in the forward direction) the "closeness" relationship between points and sets: a function is continuous if and only if whenever a point is close to a set then the image of that point is close to the image of that set. Similarly, $f$ is continuous at a fixed given point $x\in X$ if and only if whenever $x$ is close to a subset $A\subseteq X,$ then $f(x)$ is close to $f(A).$ Closed maps Main article: Open and closed maps A function $f:X\to Y$ is a (strongly) closed map if and only if whenever $C$ is a closed subset of $X$ then $f(C)$ is a closed subset of $Y.$ In terms of the closure operator, $f:X\to Y$ is a (strongly) closed map if and only if $\operatorname {cl} _{Y}f(A)\subseteq f\left(\operatorname {cl} _{X}A\right)$ for every subset $A\subseteq X.$ Equivalently, $f:X\to Y$ is a (strongly) closed map if and only if $\operatorname {cl} _{Y}f(C)\subseteq f(C)$ for every closed subset $C\subseteq X.$ Categorical interpretation One may define the closure operator in terms of universal arrows, as follows. The powerset of a set $X$ may be realized as a partial order category $P$ in which the objects are subsets and the morphisms are inclusion maps $A\to B$ whenever $A$ is a subset of $B.$ Furthermore, a topology $T$ on $X$ is a subcategory of $P$ with inclusion functor $I:T\to P.$ The set of closed subsets containing a fixed subset $A\subseteq X$ can be identified with the comma category $(A\downarrow I).$ This category — also a partial order — then has initial object $\operatorname {cl} A.$ Thus there is a universal arrow from $A$ to $I,$ given by the inclusion $A\to \operatorname {cl} A.$ Similarly, since every closed set containing $X\setminus A$ corresponds with an open set contained in $A$ we can interpret the category $(I\downarrow X\setminus A)$ as the set of open subsets contained in $A,$ with terminal object $\operatorname {int} (A),$ the interior of $A.$ All properties of the closure can be derived from this definition and a few properties of the above categories. 
Moreover, this definition makes precise the analogy between the topological closure and other types of closures (for example algebraic closure), since all are examples of universal arrows. See also • Adherent point – Point that belongs to the closure of some given subset of a topological space • Closure algebra • Closed regular set, a set equal to the closure of their interior • Derived set (mathematics) – set of all limit points of a set S is called the derived set of S • Interior (topology) – Largest open subset of some given set • Limit point of a set – Cluster point in a topological space Notes 1. From $T:=(-\infty ,0]$ and $S:=(0,\infty )$ it follows that $S\cap T=\varnothing $ and $\operatorname {cl} _{X}S=[0,\infty ),$ which implies $\varnothing ~=~\operatorname {cl} _{T}(S\cap T)~\neq ~T\cap \operatorname {cl} _{X}S~=~\{0\}.$ References 1. Schubert 1968, p. 20 2. Kuratowski 1966, p. 75 3. Hocking & Young 1988, p. 4 4. Croom 1989, p. 104 5. Gemignani 1990, p. 55, Pervin 1965, p. 40 and Baker 1991, p. 38 use the second property as the definition. 6. Pervin 1965, p. 41 7. Zălinescu 2002, p. 33. Bibliography • Baker, Crump W. (1991), Introduction to Topology, Wm. C. Brown Publisher, ISBN 0-697-05972-3 • Croom, Fred H. (1989), Principles of Topology, Saunders College Publishing, ISBN 0-03-012813-7 • Gemignani, Michael C. (1990) [1967], Elementary Topology (2nd ed.), Dover, ISBN 0-486-66522-4 • Hocking, John G.; Young, Gail S. (1988) [1961], Topology, Dover, ISBN 0-486-65676-4 • Kuratowski, K. (1966), Topology, vol. I, Academic Press • Pervin, William J. (1965), Foundations of General Topology, Academic Press • Schubert, Horst (1968), Topology, Allyn and Bacon • Zălinescu, Constantin (30 July 2002). Convex Analysis in General Vector Spaces. River Edge, N.J. London: World Scientific Publishing. ISBN 978-981-4488-15-0. MR 1921556. OCLC 285163112 – via Internet Archive. External links • "Closure of a set", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Topological combinatorics The mathematical discipline of topological combinatorics is the application of topological and algebro-topological methods to solving problems in combinatorics. History The discipline of combinatorial topology used combinatorial concepts in topology and in the early 20th century this turned into the field of algebraic topology. In 1978 the situation was reversed—methods from algebraic topology were used to solve a problem in combinatorics—when László Lovász proved the Kneser conjecture, thus beginning the new field of topological combinatorics. Lovász's proof used the Borsuk–Ulam theorem and this theorem retains a prominent role in this new field. This theorem has many equivalent versions and analogs and has been used in the study of fair division problems. In another application of homological methods to graph theory, Lovász proved both the undirected and directed versions of a conjecture of András Frank: Given a k-connected graph G, k points $v_{1},\ldots ,v_{k}\in V(G)$, and k positive integers $n_{1},n_{2},\ldots ,n_{k}$ that sum up to $|V(G)|$, there exists a partition $\{V_{1},\ldots ,V_{k}\}$ of $V(G)$ such that $v_{i}\in V_{i}$, $|V_{i}|=n_{i}$, and $V_{i}$ spans a connected subgraph. In 1987 the necklace splitting problem was solved by Noga Alon using the Borsuk–Ulam theorem. It has also been used to study complexity problems in linear decision tree algorithms and the Aanderaa–Karp–Rosenberg conjecture. Other areas include topology of partially ordered sets and Bruhat orders. Additionally, methods from differential topology now have a combinatorial analog in discrete Morse theory. See also • Sperner's lemma • Discrete exterior calculus • Topological graph theory • Combinatorial topology • Finite topological space References • de Longueville, Mark (2004), "25 years proof of the Kneser conjecture - The advent of topological combinatorics" (PDF), EMS Newsletter, Southampton, Hampshire: European Mathematical Society, pp. 16–19, retrieved 2008-07-29. Further reading • Björner, Anders (1995), "Topological Methods", in Graham, Ronald L.; Grötschel, Martin; Lovász, László (eds.), Handbook of Combinatorics (PDF), vol. 2, The MIT press, ISBN 978-0-262-07171-0. • Kozlov, Dmitry (2005), Trends in topological combinatorics, arXiv:math.AT/0507390, Bibcode:2005math......7390K. • Kozlov, Dmitry (2007), Combinatorial Algebraic Topology, Springer, ISBN 978-3-540-71961-8. • Lange, Carsten (2005), Combinatorial Curvatures, Group Actions, and Colourings: Aspects of Topological Combinatorics (PDF), Ph.D. thesis, Berlin Institute of Technology. • Matoušek, Jiří (2003), Using the Borsuk-Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry, Springer, ISBN 978-3-540-00362-5. • Barmak, Jonathan (2011), Algebraic Topology of Finite Topological Spaces and Applications, Springer, ISBN 978-3-642-22002-9. • de Longueville, Mark (2011), A Course in Topological Combinatorics, Springer, ISBN 978-1-4419-7909-4.
Topological complexity In mathematics, topological complexity of a topological space X (also denoted by TC(X)) is a topological invariant closely connected to the motion planning problem, introduced by Michael Farber in 2003. Definition Let X be a topological space and $PX=\{\gamma :[0,1]\,\to \,X\}$ be the space of all continuous paths in X. Define the projection $\pi :PX\to \,X\times X$ by $\pi (\gamma )=(\gamma (0),\gamma (1))$. The topological complexity is the minimal number k such that • there exists an open cover $\{U_{i}\}_{i=1}^{k}$ of $X\times X$, • for each $i=1,\ldots ,k$, there exists a local section $s_{i}:\,U_{i}\to \,PX.$ Examples • The topological complexity: TC(X) = 1 if and only if X is contractible. • The topological complexity of the sphere $S^{n}$ is 2 for n odd and 3 for n even. For example, in the case of the circle $S^{1}$, we may define a path between two points to be the geodesic between the points, if it is unique. Any pair of antipodal points can be connected by a counter-clockwise path. • If $F(\mathbb {R} ^{m},n)$ is the configuration space of n distinct points in the Euclidean m-space, then $TC(F(\mathbb {R} ^{m},n))={\begin{cases}2n-1&\mathrm {for\,\,{\it {m}}\,\,odd} \\2n-2&\mathrm {for\,\,{\it {m}}\,\,even.} \end{cases}}$ • The topological complexity of the Klein bottle is 5.[1] References 1. Cohen, Daniel C.; Vandembroucq, Lucile (2016). "Topological Complexity of the Klein bottle". arXiv:1612.03133 [math.AT]. • Farber, M. (2003). "Topological complexity of motion planning". Discrete & Computational Geometry. Vol. 29, no. 2. pp. 211–221. • Armindo Costa: Topological Complexity of Configuration Spaces, Ph.D. Thesis, Durham University (2010), online
Continuum (topology) In the mathematical field of point-set topology, a continuum (plural: "continua") is a nonempty compact connected metric space, or, less frequently, a compact connected Hausdorff space. Continuum theory is the branch of topology devoted to the study of continua. Not to be confused with Continuity (topology). Definitions • A continuum that contains more than one point is called nondegenerate. • A subset A of a continuum X such that A itself is a continuum is called a subcontinuum of X. A space homeomorphic to a subcontinuum of the Euclidean plane R2 is called a planar continuum. • A continuum X is homogeneous if for every two points x and y in X, there exists a homeomorphism h: X → X such that h(x) = y. • A Peano continuum is a continuum that is locally connected at each point. • An indecomposable continuum is a continuum that cannot be represented as the union of two proper subcontinua. A continuum X is hereditarily indecomposable if every subcontinuum of X is indecomposable. • The dimension of a continuum usually means its topological dimension. A one-dimensional continuum is often called a curve. Examples • An arc is a space homeomorphic to the closed interval [0,1]. If h: [0,1] → X is a homeomorphism and h(0) = p and h(1) = q then p and q are called the endpoints of X; one also says that X is an arc from p to q. An arc is the simplest and most familiar type of a continuum. It is one-dimensional, arcwise connected, and locally connected. • The topologist's sine curve is a subset of the plane that is the union of the graph of the function f(x) = sin(1/x), 0 < x ≤ 1 with the segment −1 ≤ y ≤ 1 of the y-axis. It is a one-dimensional continuum that is not arcwise connected, and it is locally disconnected at the points along the y-axis. • The Warsaw circle is obtained by "closing up" the topologist's sine curve by an arc connecting (0,−1) and (1,sin(1)). It is a one-dimensional continuum whose homotopy groups are all trivial, but it is not a contractible space. • An n-cell is a space homeomorphic to the closed ball in the Euclidean space Rn. It is contractible and is the simplest example of an n-dimensional continuum. • An n-sphere is a space homeomorphic to the standard n-sphere in the (n + 1)-dimensional Euclidean space. It is an n-dimensional homogeneous continuum that is not contractible, and therefore different from an n-cell. • The Hilbert cube is an infinite-dimensional continuum. • Solenoids are among the simplest examples of indecomposable homogeneous continua. They are neither arcwise connected nor locally connected. • The Sierpinski carpet, also known as the Sierpinski universal curve, is a one-dimensional planar Peano continuum that contains a homeomorphic image of any one-dimensional planar continuum. • The pseudo-arc is a homogeneous hereditarily indecomposable planar continuum. Properties There are two fundamental techniques for constructing continua, by means of nested intersections and inverse limits. • If {Xn} is a nested family of continua, i.e. Xn ⊇ Xn+1, then their intersection is a continuum. • If {(Xn, fn)} is an inverse sequence of continua Xn, called the coordinate spaces, together with continuous maps fn: Xn+1 → Xn, called the bonding maps, then its inverse limit is a continuum. A finite or countable product of continua is a continuum. See also • Linear continuum • Menger sponge • Shape theory (mathematics) References Sources • Sam B. Nadler, Jr, Continuum theory. An introduction. Pure and Applied Mathematics, Marcel Dekker. ISBN 0-8247-8659-9. 
External links • Open problems in continuum theory • Examples in continuum theory • Continuum Theory and Topological Dynamics, M. Barge and J. Kennedy, in Open Problems in Topology, J. van Mill and G.M. Reed (Editors) Elsevier Science Publishers B.V. (North-Holland), 1990. • Hyperspacewiki
Topological data analysis In applied mathematics, topological data analysis (TDA) is an approach to the analysis of datasets using techniques from topology. Extraction of information from datasets that are high-dimensional, incomplete and noisy is generally challenging. TDA provides a general framework to analyze such data in a manner that is insensitive to the particular metric chosen and provides dimensionality reduction and robustness to noise. Beyond this, it inherits functoriality, a fundamental concept of modern mathematics, from its topological nature, which allows it to adapt to new mathematical tools. The initial motivation is to study the shape of data. TDA has combined algebraic topology and other tools from pure mathematics to allow mathematically rigorous study of "shape". The main tool is persistent homology, an adaptation of homology to point cloud data. Persistent homology has been applied to many types of data across many fields. Moreover, its mathematical foundation is also of theoretical importance. The unique features of TDA make it a promising bridge between topology and geometry. Basic theory Intuition TDA is premised on the idea that the shape of data sets contains relevant information. Real high-dimensional data is typically sparse, and tends to have relevant low dimensional features. One task of TDA is to provide a precise characterization of this fact. For example, the trajectory of a simple predator-prey system governed by the Lotka–Volterra equations[1] forms a closed circle in state space. TDA provides tools to detect and quantify such recurrent motion.[2] Many algorithms for data analysis, including those used in TDA, require setting various parameters. Without prior domain knowledge, the correct collection of parameters for a data set is difficult to choose. The main insight of persistent homology is to use the information obtained from all parameter values by encoding this huge amount of information into an understandable and easy-to-represent form. With TDA, there is a mathematical interpretation when the information is a homology group. In general, the assumption is that features that persist for a wide range of parameters are "true" features. Features persisting for only a narrow range of parameters are presumed to be noise, although the theoretical justification for this is unclear.[3] Early history Precursors to the full concept of persistent homology appeared gradually over time.[4] In 1990, Patrizio Frosini introduced a pseudo-distance between submanifolds, and later the size function, which on 1-dim curves is equivalent to the 0th persistent homology.[5][6] Nearly a decade later, Vanessa Robins studied the images of homomorphisms induced by inclusion.[7] Finally, shortly thereafter, Edelsbrunner et al. introduced the concept of persistent homology together with an efficient algorithm and its visualization as a persistence diagram.[8] Carlsson et al. reformulated the initial definition and gave an equivalent visualization method called persistence barcodes,[9] interpreting persistence in the language of commutative algebra.[10] In algebraic topology the persistent homology has emerged through the work of Sergey Barannikov on Morse theory. 
The set of critical values of a smooth Morse function was canonically partitioned into "birth–death" pairs, filtered complexes were classified, and their invariants, equivalent to the persistence diagram and persistence barcode, together with an efficient algorithm for their calculation, were described under the name of canonical forms in 1994 by Barannikov.[11][12] Concepts Some widely used concepts are introduced below. Note that some definitions may vary from author to author. A point cloud is often defined as a finite set of points in some Euclidean space, but may be taken to be any finite metric space. The Čech complex of a point cloud is the nerve of the cover of balls of a fixed radius around each point in the cloud. A persistence module $\mathbb {U} $ indexed by $\mathbb {Z} $ is a vector space $U_{t}$ for each $t\in \mathbb {Z} $, and a linear map $u_{t}^{s}:U_{s}\to U_{t}$ whenever $s\leq t$, such that $u_{t}^{t}=1$ for all $t$ and $u_{t}^{s}u_{s}^{r}=u_{t}^{r}$ whenever $r\leq s\leq t.$[13] An equivalent definition is a functor from $\mathbb {Z} $ considered as a partially ordered set to the category of vector spaces. The persistent homology group $PH$ of a point cloud is the persistence module defined as $PH_{k}(X)=\prod H_{k}(X_{r})$, where $X_{r}$ is the Čech complex of radius $r$ of the point cloud $X$ and $H_{k}$ is the homology group. A persistence barcode is a multiset of intervals in $\mathbb {R} $, and a persistence diagram is a multiset of points in $\Delta :=\{(u,v)\in \mathbb {R} ^{2}\mid u,v\geq 0,u\leq v\}$. The Wasserstein distance between two persistence diagrams $X$ and $Y$ is defined as $W_{p}[L_{q}](X,Y):=\inf _{\varphi :X\to Y}\left[\sum _{x\in X}(\Vert x-\varphi (x)\Vert _{q})^{p}\right]^{1/p}$ where $1\leq p,q\leq \infty $ and $\varphi $ ranges over bijections between $X$ and $Y$. Please refer to figure 3.1 in Munch [14] for illustration. The bottleneck distance between $X$ and $Y$ is $W_{\infty }[L_{q}](X,Y):=\inf _{\varphi :X\to Y}\sup _{x\in X}\Vert x-\varphi (x)\Vert _{q}.$ This is a special case of the Wasserstein distance, letting $p=\infty $. Structure theorem The first classification theorem for persistent homology appeared in 1994[11] via Barannikov's canonical forms. The classification theorem interpreting persistence in the language of commutative algebra appeared in 2005:[10] for a finitely generated persistence module $C$ with field $F$ coefficients, $H(C;F)\simeq \bigoplus _{i}x^{t_{i}}\cdot F[x]\oplus \left(\bigoplus _{j}x^{r_{j}}\cdot (F[x]/(x^{s_{j}}\cdot F[x]))\right).$ Intuitively, the free parts correspond to the homology generators that appear at filtration level $t_{i}$ and never disappear, while the torsion parts correspond to those that appear at filtration level $r_{j}$ and last for $s_{j}$ steps of the filtration (or equivalently, disappear at filtration level $s_{j}+r_{j}$).[11] Persistent homology is visualized through a barcode or persistence diagram. The barcode has its roots in abstract mathematics. Namely, the category of finite filtered complexes over a field is semi-simple. Any filtered complex is isomorphic to its canonical form, a direct sum of one- and two-dimensional simple filtered complexes.
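The interval decomposition above can be made concrete in the simplest case. The sketch below (illustrative code only; it is neither the algorithm of [8] or [11] nor the API of any package mentioned later) computes the degree-0 barcode of a small point cloud under the Vietoris–Rips filtration, with the convention that an edge enters the filtration at the parameter equal to its length: the $H_{0}$ bars are exactly the single-linkage merge events, which a union-find structure tracks.

```python
# Illustrative sketch (not any package's API): the degree-0 persistence barcode
# of a point cloud under the Vietoris-Rips filtration. Every point is born at
# parameter 0; whenever two connected components merge at an edge of length r,
# one interval [0, r) dies. A union-find structure tracks the components.
import itertools
import math

def h0_barcode(points):
    n = len(points)
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    bars = []
    for r, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            bars.append((0.0, r))           # a component dies at radius r
            parent[ri] = rj
    bars.append((0.0, math.inf))            # the component that never dies
    return bars

# Two well-separated clusters: three short bars ("noise"), one long finite bar
# (the clusters merge only at a large radius), and one essential infinite bar.
cloud = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0)]
for birth, death in h0_barcode(cloud):
    end = "inf" if death == math.inf else f"{death:.2f}"
    print(f"[{birth:.1f}, {end})")
```

In this output the short bars play the role of noise, while the long finite bar and the infinite bar reflect the two well-separated clusters, matching the reading of barcodes described above.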
Stability Stability is desirable because it provides robustness against noise. If $X$ is any space which is homeomorphic to a simplicial complex, and $f,g:X\to \mathbb {R} $ are continuous tame[15] functions, then the persistence vector spaces $\{H_{k}(f^{-1}([0,r]))\}$ and $\{H_{k}(g^{-1}([0,r]))\}$ are finitely presented, and $W_{\infty }(D(f),D(g))\leq \lVert f-g\rVert _{\infty }$, where $W_{\infty }$ refers to the bottleneck distance[16] and $D$ is the map taking a continuous tame function to the persistence diagram of its $k$-th homology. Workflow The basic workflow in TDA is:[17] point cloud $\to $ nested complexes $\to $ persistence module $\to $ barcode or diagram 1. If $X$ is a point cloud, replace $X$ with a nested family of simplicial complexes $X_{r}$ (such as the Čech or Vietoris-Rips complex). This process converts the point cloud into a filtration of simplicial complexes. Taking the homology of each complex in this filtration gives a persistence module $H_{i}(X_{r_{0}})\to H_{i}(X_{r_{1}})\to H_{i}(X_{r_{2}})\to \cdots $ 2. Apply the structure theorem to obtain the persistent Betti numbers, persistence diagram, or equivalently, barcode. Computation The first algorithm over all fields for persistent homology in the algebraic topology setting was described by Barannikov[11] through reduction to the canonical form by upper-triangular matrices. The algorithm for persistent homology over $F_{2}$ was given by Edelsbrunner et al.[8] Zomorodian and Carlsson gave a practical algorithm to compute persistent homology over all fields.[10] Edelsbrunner and Harer's book gives general guidance on computational topology.[19] One issue that arises in computation is the choice of complex. The Čech complex and Vietoris–Rips complex are most natural at first glance; however, their size grows rapidly with the number of data points. The Vietoris–Rips complex is preferred over the Čech complex because its definition is simpler and the Čech complex requires extra effort to define in a general finite metric space. Efficient ways to lower the computational cost of homology have been studied. For example, the α-complex and witness complex are used to reduce the dimension and size of complexes.[20] Recently, discrete Morse theory has shown promise for computational homology because it can reduce a given simplicial complex to a much smaller cellular complex which is homotopic to the original one.[21] This reduction can in fact be performed as the complex is constructed by using matroid theory, leading to further performance increases.[22] Another recent algorithm saves time by ignoring the homology classes with low persistence.[23] Various software packages are available, such as javaPlex, Dionysus, Perseus, PHAT, DIPHA, GUDHI, Ripser, and TDAstats. A comparison of these tools is given by Otter et al.[24] Giotto-tda is a Python package dedicated to integrating TDA in the machine learning workflow by means of a scikit-learn API. An R package TDA is capable of calculating recently invented concepts like landscape and the kernel distance estimator.[25] The Topology ToolKit is specialized for continuous data defined on manifolds of low dimension (1, 2 or 3), as typically found in scientific visualization. Another R package, TDAstats, implements the Ripser library to calculate persistent homology.[26] Visualization High-dimensional data is impossible to visualize directly.
Visualization High-dimensional data is impossible to visualize directly. Many methods have been invented to extract a low-dimensional structure from the data set, such as principal component analysis and multidimensional scaling.[27] However, it is important to note that the problem itself is ill-posed, since many different topological features can be found in the same data set. Thus, the study of visualization of high-dimensional spaces is of central importance to TDA, although it does not necessarily involve the use of persistent homology. However, recent attempts have been made to use persistent homology in data visualization.[28] Carlsson et al. have proposed a general method called MAPPER.[29] It inherits the idea of Serre that a covering preserves homotopy.[30] A generalized formulation of MAPPER is as follows: Let $X$ and $Z$ be topological spaces and let $f:X\to Z$ be a continuous map. Let $\mathbb {U} =\{U_{\alpha }\}_{\alpha \in A}$ be a finite open covering of $Z$. The output of MAPPER is the nerve of the pullback cover $ M(\mathbb {U} ,f):=N(f^{-1}(\mathbb {U} ))$, where each preimage is split into its connected components.[28] This is a very general concept, of which the Reeb graph[31] and merge trees are special cases. This is not quite the original definition.[29] Carlsson et al. choose $Z$ to be $\mathbb {R} $ or $\mathbb {R} ^{2}$, and cover it with open sets such that at most two intersect.[3] This restriction means that the output is in the form of a complex network. Because the topology of a finite point cloud is trivial, clustering methods (such as single linkage) are used to produce the analogue of connected sets in the preimage $f^{-1}(U)$ when MAPPER is applied to actual data (a minimal sketch of this pipeline is given below). Mathematically speaking, MAPPER is a variation of the Reeb graph. If $ M(\mathbb {U} ,f)$ is at most one-dimensional, then for each $i\geq 0$, $H_{i}(X)\simeq H_{0}(N(\mathbb {U} );{\hat {F}}_{i})\oplus H_{1}(N(\mathbb {U} );{\hat {F}}_{i-1}).$[32] The added flexibility also has disadvantages. One problem is instability, in that a change in the choice of the cover can lead to a major change in the output of the algorithm.[33] Work has been done to overcome this problem.[28] Three successful applications of MAPPER can be found in Carlsson et al.[34] As J. Curry comments on the applications in that paper, "a common feature of interest in applications is the presence of flares or tendrils."[35] A free implementation of MAPPER written by Daniel Müllner and Aravindakshan Babu is available online. MAPPER also forms the basis of Ayasdi's AI platform.
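To illustrate the MAPPER construction just described, the following minimal Python sketch covers the range of a filter function with overlapping intervals, clusters each preimage with a crude fixed-radius connected-components rule (a stand-in for the single-linkage clustering mentioned above), and returns the nerve of the resulting cover. All names and parameter choices are ours and are not tied to any particular MAPPER implementation.

```python
# Minimal MAPPER sketch: overlapping interval cover of the filter range,
# clustering of each preimage, and the nerve of the resulting cluster cover
# (one node per cluster, an edge whenever two clusters share a data point).
import numpy as np

def cover_intervals(values, n_intervals, overlap):
    """n_intervals equal-length intervals covering the range of `values`,
    consecutive intervals overlapping by the given fraction."""
    lo, hi = float(np.min(values)), float(np.max(values))
    length = (hi - lo) / (n_intervals - (n_intervals - 1) * overlap)
    step = length * (1.0 - overlap)
    return [(lo + i * step, lo + i * step + length) for i in range(n_intervals)]

def cluster(points, idx, radius):
    """Connected components of the radius-neighbourhood graph on points[idx]
    (a crude stand-in for single-linkage clustering)."""
    idx = [int(i) for i in idx]
    parent = {i: i for i in idx}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a in idx:
        for b in idx:
            if a < b and np.linalg.norm(points[a] - points[b]) <= radius:
                parent[find(a)] = find(b)
    components = {}
    for i in idx:
        components.setdefault(find(i), set()).add(i)
    return list(components.values())

def mapper(points, filter_values, n_intervals, overlap, radius):
    nodes, edges = [], set()
    for (a, b) in cover_intervals(filter_values, n_intervals, overlap):
        preimage = np.where((filter_values >= a) & (filter_values <= b))[0]
        nodes.extend(cluster(points, preimage, radius))
    for i, u in enumerate(nodes):
        for j, v in enumerate(nodes):
            if i < j and u & v:              # shared points -> nerve edge
                edges.add((i, j))
    return nodes, edges

# Points on a circle with the height function as filter: the output graph is a
# cycle, reflecting the loop in the data.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.column_stack([np.cos(theta), np.sin(theta)])
nodes, edges = mapper(pts, pts[:, 1], n_intervals=5, overlap=0.4, radius=0.4)
print(len(nodes), len(edges))      # a cycle: as many edges as nodes
```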
Multidimensional persistence Multidimensional persistence is important to TDA. The concept arises in both theory and practice. The first investigation of multidimensional persistence came early in the development of TDA.[36] Carlsson and Zomorodian introduced the theory of multidimensional persistence in [37] and, in collaboration with Singh,[38] introduced the use of tools from symbolic algebra (Gröbner basis methods) to compute MPH modules. Their definition presents multidimensional persistence with n parameters as a $\mathbb {Z} ^{n}$-graded module over a polynomial ring in n variables. Tools from commutative and homological algebra are applied to the study of multidimensional persistence in the work of Harrington–Otter–Schenck–Tillman.[39] The first application to appear in the literature is a method for shape comparison, similar to the invention of TDA.[40] The definition of an n-dimensional persistence module in $\mathbb {R} ^{n}$ is[35] • a vector space $V_{s}$ is assigned to each point $s=(s_{1},...,s_{n})$; • a map $\rho _{s}^{t}:V_{s}\to V_{t}$ is assigned whenever $s\leq t$ (that is, $s_{i}\leq t_{i}$ for $i=1,...,n$); • the maps satisfy $\rho _{r}^{t}=\rho _{s}^{t}\circ \rho _{r}^{s}$ for all $r\leq s\leq t$. It is worth noting that there is some controversy over the definition of multidimensional persistence.[35] One of the advantages of one-dimensional persistence is its representability by a diagram or barcode. However, discrete complete invariants of multidimensional persistence modules do not exist.[41] The main reason for this is that, by Gabriel's theorem in the theory of quiver representations, the structure of the collection of indecomposables is extremely complicated,[42] although a finitely generated n-dimensional persistence module can be uniquely decomposed into a direct sum of indecomposables by the Krull–Schmidt theorem.[43] Nonetheless, many results have been established. Carlsson and Zomorodian introduced the rank invariant $\rho _{M}(u,v)$, defined as $\rho _{M}(u,v)=\mathrm {rank} (x^{v-u}:M_{u}\to M_{v})$, in which $M$ is a finitely generated n-graded module. In one dimension, it is equivalent to the barcode. In the literature, the rank invariant is often referred to as the persistent Betti numbers (PBNs).[19] In many theoretical works, authors have used a more restricted definition, an analogue from sublevel set persistence. Specifically, the persistence Betti numbers of a function $f:X\to \mathbb {R} ^{k}$ are given by the function $\beta _{f}:\Delta ^{+}\to \mathbb {N} $, taking each $(u,v)\in \Delta ^{+}$ to $\beta _{f}(u,v):=\mathrm {rank} (H(X(f\leq u))\to H(X(f\leq v)))$, where $\Delta ^{+}:=\{(u,v)\in \mathbb {R} ^{k}\times \mathbb {R} ^{k}:u\leq v\}$ and $X(f\leq u):=\{x\in X:f(x)\leq u\}$. Some basic properties include monotonicity and diagonal jump.[44] Persistent Betti numbers will be finite if $X$ is a compact and locally contractible subspace of $\mathbb {R} ^{n}$.[45] Using a foliation method, the k-dimensional PBNs can be decomposed into a family of one-dimensional PBNs by dimensionality reduction.[46] This method has also led to a proof that multidimensional PBNs are stable.[47] The discontinuities of PBNs occur only at points $(u,v)$ (with $u\leq v$) where either $u$ is a discontinuity point of $\rho _{M}(\star ,v)$ or $v$ is a discontinuity point of $\rho (u,\star )$, under the assumption that $f\in C^{0}(X,\mathbb {R} ^{k})$ and $X$ is a compact, triangulable topological space.[48] The persistence space, a generalization of the persistence diagram, is defined as the multiset of all points with multiplicity larger than 0, together with the diagonal.[49] It provides a stable and complete representation of PBNs. Ongoing work by Carlsson et al.
is trying to give a geometric interpretation of persistent homology, which might provide insights on how to combine machine learning theory with topological data analysis.[50] The first practical algorithm to compute multidimensional persistence was invented very early.[51] Since then, many other algorithms have been proposed, based on such concepts as discrete Morse theory[52] and finite-sample estimation.[53] Other persistences The standard paradigm in TDA is often referred to as sublevel persistence. Apart from multidimensional persistence, much work has been done to extend this special case. Zigzag persistence The nonzero maps in a persistence module are restricted by the preorder relation in the category. However, mathematicians have found that a uniform direction of the maps is not essential to many results. "The philosophical point is that the decomposition theory of graph representations is somewhat independent of the orientation of the graph edges".[54] Zigzag persistence is important to the theoretical side. The examples given in Carlsson's review paper to illustrate the importance of functoriality all share some of its features.[3] Extended persistence and levelset persistence There have been some attempts to loosen the strict restrictions on the function.[55] Please refer to the Categorification and cosheaves and Impact on mathematics sections for more information. It is natural to extend persistent homology to other basic concepts in algebraic topology, such as cohomology and relative homology/cohomology.[56] An interesting application is the computation of circular coordinates for a data set via the first persistent cohomology group.[57] Circular persistence Standard persistent homology studies real-valued functions. Circle-valued maps might be useful as well: "persistence theory for circle-valued maps promises to play the role for some vector fields as does the standard persistence theory for scalar fields", as commented by Dan Burghelea et al.[58] The main difference is that Jordan cells (very similar in format to the Jordan blocks in linear algebra), which would be zero in the real-valued case, are nontrivial for circle-valued functions; combined with barcodes, they give the invariants of a tame map under moderate conditions.[58] Two techniques they use are Morse–Novikov theory[59] and graph representation theory.[60] More recent results can be found in D. Burghelea et al.[61] For example, the tameness requirement can be replaced by the much weaker condition of continuity. Persistence with torsion The proof of the structure theorem relies on the base domain being a field, so not many attempts have been made at persistent homology with torsion. Frosini defined a pseudometric on this specific module and proved its stability.[62] One novelty is that it does not depend on a classification theory to define the metric.[63] Categorification and cosheaves One advantage of category theory is its ability to lift concrete results to a higher level, showing relationships between seemingly unconnected objects. Bubenik et al.[64] offer a short introduction to category theory fitted for TDA. Category theory is the language of modern algebra, and has been widely used in the study of algebraic geometry and topology.
It has been noted that "the key observation of [10] is that the persistence diagram produced by [8] depends only on the algebraic structure carried by this diagram."[65] The use of category theory in TDA has proved to be fruitful.[64][65] Following the notation of Bubenik et al.,[65] the indexing category $ P$ is any preordered set (not necessarily $\mathbb {N} $ or $\mathbb {R} $), the target category $D$ is any category (instead of the commonly used $ \mathrm {Vect} _{\mathbb {F} }$), and functors $ P\to D$ are called generalized persistence modules in $D$, over $ P$. One advantage of using category theory in TDA is a clearer understanding of concepts and the discovery of new relationships between proofs. Two examples illustrate this. Understanding the correspondence between interleaving and matching is of great importance, since matching was the method used in the beginning (modified from Morse theory). A summary of this work can be found in Vin de Silva et al.[66] Many theorems can be proved much more easily in a more intuitive setting.[63] Another example is the relationship between the construction of different complexes from point clouds. It has long been noticed that Čech and Vietoris–Rips complexes are related. Specifically, $V_{r}(X)\subset C_{{\sqrt {2}}r}(X)\subset V_{2r}(X)$.[67] The essential relationship between Čech and Rips complexes can be seen much more clearly in categorical language.[66] The language of category theory also helps cast results in terms recognizable to the broader mathematical community. The bottleneck distance is widely used in TDA because of the results on stability with respect to the bottleneck distance.[13][16] In fact, the interleaving distance is the terminal object in a poset category of stable metrics on multidimensional persistence modules over a prime field.[63][68] Sheaves, a central concept in modern algebraic geometry, are intrinsically related to category theory. Roughly speaking, sheaves are the mathematical tool for understanding how local information determines global information. Justin Curry regards level set persistence as the study of fibers of continuous functions. The objects that he studies are very similar to those produced by MAPPER, but with sheaf theory as the theoretical foundation.[35] Although no breakthrough in the theory of TDA has yet used sheaf theory, it is promising since there are many beautiful theorems in algebraic geometry relating to sheaf theory. For example, a natural theoretical question is whether different filtration methods result in the same output.[69] Stability Stability is of central importance to data analysis, since real data carry noise. Using category theory, Bubenik et al. have distinguished between soft and hard stability theorems, and proved that soft cases are formal.[65] Specifically, the general workflow of TDA is data ${\stackrel {F}{\longrightarrow }}$ topological persistence module ${\stackrel {H}{\longrightarrow }}$ algebraic persistence module ${\stackrel {J}{\longrightarrow }}$ discrete invariant. The soft stability theorem asserts that $HF$ is Lipschitz continuous, and the hard stability theorem asserts that $J$ is Lipschitz continuous. The bottleneck distance is widely used in TDA. The isometry theorem asserts that the interleaving distance $d_{I}$ is equal to the bottleneck distance.[63] Bubenik et al.
have abstracted the definition to that between functors $F,G:P\to D$ when $ P$ is equipped with a sublinear projection or superlinear family, in which case it still remains a pseudometric.[65] In view of the useful properties of the interleaving distance,[70] we introduce here the general definition of the interleaving distance (instead of the one first introduced):[13] Let $\Gamma ,K\in \mathrm {Trans_{P}} $ (that is, functions from $ P$ to $ P$ which are monotone and satisfy $x\leq \Gamma (x)$ for all $ x\in P$). A $(\Gamma ,K)$-interleaving between $F$ and $G$ consists of natural transformations $\varphi \colon F\Rightarrow G\Gamma $ and $\psi :G\Rightarrow FK$, such that $(\psi \Gamma )\varphi =F\eta _{K\Gamma }$ and $(\varphi K)\psi =G\eta _{\Gamma K}$. The two main results are[65] • Let $ P$ be a preordered set with a sublinear projection or superlinear family. Let $ H:D\to E$ be a functor between arbitrary categories $ D,E$. Then for any two functors $ F,G:P\to D$, we have $ d_{I}(HF,HG)\leq d_{I}(F,G)$. • Let $ P$ be a poset of a metric space $ Y$, and let $ X$ be a topological space. Let $ f,g:X\to Y$ (not necessarily continuous) be functions, and let $ F,G$ be the corresponding persistence modules. Then $d_{I}(F,G)\leq d_{\infty }(f,g):=\sup _{x\in X}d_{Y}(f(x),g(x))$. These two results summarize many results on stability of different models of persistence. For the stability theorem of multidimensional persistence, please refer to the subsection on multidimensional persistence above. Structure theorem The structure theorem is of central importance to TDA; as commented by G. Carlsson, "what makes homology useful as a discriminator between topological spaces is the fact that there is a classification theorem for finitely generated abelian groups."[3] (see the fundamental theorem of finitely generated abelian groups). The main argument used in the proof of the original structure theorem is the standard structure theorem for finitely generated modules over a principal ideal domain.[10] However, this argument fails if the indexing set is $(\mathbb {R} ,\leq )$.[3] In general, not every persistence module can be decomposed into intervals.[71] Many attempts have been made at relaxing the restrictions of the original structure theorem. The case of pointwise finite-dimensional persistence modules indexed by a locally finite subset of $\mathbb {R} $ is solved based on the work of Webb.[72] The most notable result is due to Crawley-Boevey, who solved the case of $\mathbb {R} $. Crawley-Boevey's theorem states that any pointwise finite-dimensional persistence module is a direct sum of interval modules.[73] To understand his theorem, some concepts need to be introduced. An interval in $(\mathbb {R} ,\leq )$ is defined as a subset $I\subset \mathbb {R} $ having the property that if $r,t\in I$ and if there is an $s\in \mathbb {R} $ such that $r\leq s\leq t$, then $s\in I$ as well. An interval module $k_{I}$ assigns to each element $s\in I$ the vector space $k$ and assigns the zero vector space to elements in $\mathbb {R} \setminus I$. All maps $\rho _{s}^{t}$ are the zero map, unless $s,t\in I$ and $s\leq t$, in which case $\rho _{s}^{t}$ is the identity map.[35] Interval modules are indecomposable.[74]
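As a toy illustration of interval decomposition (an example of our own, not taken from the cited sources): consider the pointwise finite-dimensional persistence module over the index set $\{1\leq 2\leq 3\}$ with $V_{1}=V_{2}=V_{3}=k$, $\rho _{1}^{2}=\mathrm {id} $ and $\rho _{2}^{3}=0$. It decomposes as the direct sum of the interval module supported on $\{1,2\}$ (a class born at index 1 that dies entering index 3) and the interval module supported on $\{3\}$ (a class born at index 3), so the corresponding barcode consists of the two bars $[1,2]$ and $[3,3]$.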
Although the result of Crawley-Boevey is a very powerful theorem, it still does not extend to the q-tame case.[71] A persistence module is q-tame if the rank of $\rho _{s}^{t}$ is finite for all $s<t$. There are examples of q-tame persistence modules that fail to be pointwise finite.[75] However, it turns out that a similar structure theorem still holds if the features that exist only at one index value are removed.[74] This holds because the infinite-dimensional parts at each index value do not persist, due to the finite-rank condition.[76] Formally, the observable category $\mathrm {Ob} $ is defined as $\mathrm {Pers} /\mathrm {Eph} $, in which $\mathrm {Eph} $ denotes the full subcategory of $\mathrm {Pers} $ whose objects are the ephemeral modules ($\rho _{s}^{t}=0$ whenever $s<t$).[74] Note that the extended results listed here do not apply to zigzag persistence, since the analogue of a zigzag persistence module over $\mathbb {R} $ is not immediately obvious. Statistics Real data is always finite, and so its study requires us to take stochasticity into account. Statistical analysis gives us the ability to separate true features of the data from artifacts introduced by random noise. Persistent homology has no inherent mechanism to distinguish between low-probability features and high-probability features. One way to apply statistics to topological data analysis is to study the statistical properties of topological features of point clouds. The study of random simplicial complexes offers some insight into statistical topology. K. Turner et al.[77] offer a summary of work in this vein. A second way is to study probability distributions on the persistence space. The persistence space $B_{\infty }$ is $\coprod _{n}B_{n}/{\backsim }$, where $B_{n}$ is the space of all barcodes containing exactly $n$ intervals and the equivalences are $\{[x_{1},y_{1}],[x_{2},y_{2}],\ldots ,[x_{n},y_{n}]\}\backsim \{[x_{1},y_{1}],[x_{2},y_{2}],\ldots ,[x_{n-1},y_{n-1}]\}$ if $x_{n}=y_{n}$.[78] This space is fairly complicated; for example, it is not complete under the bottleneck metric. The first attempt to study it was made by Y. Mileyko et al.[79] The space of persistence diagrams $D_{p}$ in their paper is defined as $D_{p}:=\left\{d\mid \sum _{x\in d}\left(2\inf _{y\in \Delta }\lVert x-y\rVert \right)^{p}<\infty \right\}$ where $\Delta $ is the diagonal line in $\mathbb {R} ^{2}$. A nice property is that $D_{p}$ is complete and separable in the Wasserstein metric $W_{p}(u,v)=\left(\inf _{\gamma \in \Gamma (u,v)}\int _{\mathbb {X} \times \mathbb {X} }\rho ^{p}(x,y)\,\mathrm {d} \gamma (x,y)\right)^{1/p}$. Expectation, variance, and conditional probability can be defined in the Fréchet sense. This allows many statistical tools to be ported to TDA. Work on null hypothesis significance testing,[80] confidence intervals,[81] and robust estimates[82] marks notable steps. A third way is to consider the cohomology of probabilistic spaces or statistical systems directly; such objects are called information structures and basically consist of the triple ($\Omega ,\Pi ,P$) of sample space, random variables and probability laws.[83][84] Random variables are considered as partitions of the n atomic probabilities (seen as a probability (n-1)-simplex, $|\Omega |=n$) on the lattice of partitions ($\Pi _{n}$). The random variables, or modules of measurable functions, provide the cochain complexes, while the coboundary is given by the general homological algebra first discovered by Hochschild, with a left action implementing the action of conditioning. The first cocycle condition corresponds to the chain rule of entropy, allowing one to derive, uniquely up to a multiplicative constant, Shannon entropy as the first cohomology class.
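As a small numerical illustration of the chain rule just mentioned (a sketch under our own conventions, not the formalism of the cited papers), the identity $H(X,Y)=H(X)+H(Y\mid X)$ can be checked directly for any finite joint distribution:

```python
# Numerical check of the entropy chain rule H(X,Y) = H(X) + H(Y|X), the identity
# encoded by the first cocycle condition.  Illustrative only; the joint
# distribution below is arbitrary.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

joint = np.array([[0.125, 0.375],      # P(X=x, Y=y), rows indexed by x
                  [0.250, 0.250]])

p_x = joint.sum(axis=1)                # marginal of X
h_xy = entropy(joint)                  # H(X, Y)
h_x = entropy(p_x)                     # H(X)
# H(Y|X) = sum_x P(x) * H(Y | X = x)
h_y_given_x = sum(p * entropy(joint[i] / p) for i, p in enumerate(p_x))
print(h_xy, h_x + h_y_given_x)         # both ~1.906 bits
```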
The consideration of a deformed left action generalises the framework to Tsallis entropies. Information cohomology is an example of a ringed topos. Multivariate k-mutual informations appear in coboundary expressions, and their vanishing, related to the cocycle condition, gives equivalent conditions for statistical independence.[85] Minima of mutual informations, also called synergy, give rise to interesting independence configurations analogous to homotopical links. Because of its combinatorial complexity, only the simplicial subcase of the cohomology and of the information structure has been investigated on data. Applied to data, these cohomological tools quantify statistical dependences and independences, including Markov chains and conditional independence, in the multivariate case.[86] Notably, mutual informations generalize the correlation coefficient and covariance to non-linear statistical dependences. These approaches were developed independently and are only indirectly related to persistence methods, but they may be roughly understood in the simplicial case using the Hu Kuo Ting theorem, which establishes a one-to-one correspondence between mutual information functions and finite measurable functions of a set with the intersection operator, used to construct the Čech complex skeleton. Information cohomology offers some direct interpretation and application in terms of neuroscience (neural assembly theory and qualitative cognition[87]), statistical physics, and deep neural networks, for which the structure and learning algorithm are imposed by the complex of random variables and the information chain rule.[88] Persistence landscapes, introduced by Peter Bubenik, are a different way to represent barcodes, more amenable to statistical analysis.[89] The persistence landscape of a persistence module $M$ is defined as a function $\lambda :\mathbb {N} \times \mathbb {R} \to {\bar {\mathbb {R} }}$, $\lambda (k,t):=\sup(m\geq 0\mid \beta ^{t-m,t+m}\geq k)$, where ${\bar {\mathbb {R} }}$ denotes the extended real line and $\beta ^{a,b}=\mathrm {dim} (\mathrm {im} (M(a\leq b)))$. The space of persistence landscapes is very nice: it inherits all the good properties of the barcode representation (stability, easy representation, etc.), but statistical quantities can be readily defined, and some problems in Y. Mileyko et al.'s work, such as the non-uniqueness of expectations,[79] can be overcome. Effective algorithms for computation with persistence landscapes are available.[90] Another approach is to use revised persistence, which is image, kernel and cokernel persistence.[91] Applications Classification of applications More than one way exists to classify the applications of TDA. Perhaps the most natural way is by field. A very incomplete list of successful applications includes[92] data skeletonization,[93] shape study,[94] graph reconstruction,[95][96][97][98][99] image analysis,[100][101] materials science,[102][103] progression analysis of disease,[104][105] sensor networks,[67] signal analysis,[106] the cosmic web,[107] complex networks,[108][109][110][111] fractal geometry,[112] viral evolution,[113] propagation of contagions on networks,[114] bacteria classification using molecular spectroscopy,[115] super-resolution microscopy,[116] hyperspectral imaging in physical chemistry,[117] remote sensing,[118] feature selection,[119] and early warning signs of financial crashes.[120] Another way is by distinguishing the techniques, following G.
Carlsson:[78] one is the study of homological invariants of data on individual data sets, and the other is the use of homological invariants in the study of databases where the data points themselves have geometric structure. Characteristics of TDA in applications There are several notable features of recent applications of TDA: 1. Combining tools from several branches of mathematics. Besides the obvious need for algebra and topology, partial differential equations,[121] algebraic geometry,[41] representation theory,[54] statistics, combinatorics, and Riemannian geometry[76] have all found use in TDA. 2. Quantitative analysis. Topology is considered to be very soft since many concepts are invariant under homotopy. However, persistent topology is able to record the birth (appearance) and death (disappearance) of topological features, so extra geometric information is embedded in it. One piece of theoretical evidence is a partially positive result on the uniqueness of reconstruction of curves;[122] two applied examples are the quantitative analysis of fullerene stability and the quantitative analysis of self-similarity.[112][123] 3. The role of short persistence. Short persistence has also been found to be useful, despite the common belief that noise is the cause of such features.[124] This is interesting for the mathematical theory. One of the main fields of data analysis today is machine learning. Some examples of machine learning in TDA can be found in Adcock et al.[125] A conference is dedicated to the link between TDA and machine learning. In order to apply tools from machine learning, the information obtained from TDA should be represented in vector form. An ongoing and promising attempt is the persistence landscape discussed above. Another attempt uses the concept of persistence images.[126] However, one problem with this method is the loss of stability, since the hard stability theorem depends on the barcode representation. Impact on mathematics Topological data analysis and persistent homology have had impacts on Morse theory. Morse theory has played a very important role in the theory of TDA, including on computation. Some work in persistent homology has extended results about Morse functions to tame functions or even to continuous functions. A forgotten result of R. Deheuvels, long before the invention of persistent homology, extends Morse theory to all continuous functions.[127] One recent result is that the category of Reeb graphs is equivalent to a particular class of cosheaf.[128] This is motivated by theoretical work in TDA, since the Reeb graph is related to Morse theory and MAPPER is derived from it. The proof of this theorem relies on the interleaving distance. Persistent homology is closely related to spectral sequences.[129][130] In particular, the algorithm bringing a filtered complex to its canonical form[11] permits much faster calculation of spectral sequences than the standard procedure of calculating $E_{p,q}^{r}$ groups page by page. Zigzag persistence may turn out to be of theoretical importance to spectral sequences. See also • Dimensionality reduction • Data mining • Computer vision • Computational topology • Discrete Morse theory • Shape analysis (digital geometry) • Size theory • Algebraic topology References 1. Epstein, Charles; Carlsson, Gunnar; Edelsbrunner, Herbert (2011-12-01). "Topological data analysis". Inverse Problems. 27 (12): 120201. arXiv:1609.08227. Bibcode:2011InvPr..27a0101E. doi:10.1088/0266-5611/27/12/120201. S2CID 250913810.
2. "diva-portal.org/smash/record.jsf?pid=diva2%253A575329&dswid=4297". www.diva-portal.org. Archived from the original on November 19, 2015. Retrieved 2015-11-05. 3. Carlsson, Gunnar (2009-01-01). "Topology and data". Bulletin of the American Mathematical Society. 46 (2): 255–308. doi:10.1090/S0273-0979-09-01249-X. ISSN 0273-0979. 4. Edelsbrunner, H.; Morozov, D. (2017). "Persistent Homology". In Csaba D. Toth; Joseph O'Rourke; Jacob E. Goodman (eds.). Handbook of Discrete and Computational Geometry (3rd ed.). CRC. doi:10.1201/9781315119601. ISBN 9781315119601. 5. Frosini, Patrizio (1990-12-01). "A distance for similarity classes of submanifolds of a Euclidean space". Bulletin of the Australian Mathematical Society. 42 (3): 407–415. doi:10.1017/S0004972700028574. ISSN 1755-1633. 6. Frosini, Patrizio (1992). Casasent, David P. (ed.). "Measuring shapes by size functions". Proc. SPIE, Intelligent Robots and Computer Vision X: Algorithms and Techniques. Intelligent Robots and Computer Vision X: Algorithms and Techniques. 1607: 122–133. Bibcode:1992SPIE.1607..122F. doi:10.1117/12.57059. S2CID 121295508. 7. Robins V. Towards computing homology from finite approximations[C]//Topology proceedings. 1999, 24(1): 503-532. 8. Edelsbrunner; Letscher; Zomorodian (2002-11-01). "Topological Persistence and Simplification". Discrete & Computational Geometry. 28 (4): 511–533. doi:10.1007/s00454-002-2885-2. ISSN 0179-5376. 9. Carlsson, Gunnar; Zomorodian, Afra; Collins, Anne; Guibas, Leonidas J. (2005-12-01). "Persistence barcodes for shapes". International Journal of Shape Modeling. 11 (2): 149–187. CiteSeerX 10.1.1.5.2718. doi:10.1142/S0218654305000761. ISSN 0218-6543. 10. Zomorodian, Afra; Carlsson, Gunnar (2004-11-19). "Computing Persistent Homology". Discrete & Computational Geometry. 33 (2): 249–274. doi:10.1007/s00454-004-1146-y. ISSN 0179-5376. 11. Barannikov, Sergey (1994). "Framed Morse complex and its invariants". Advances in Soviet Mathematics. ADVSOV. 21: 93–115. doi:10.1090/advsov/021/03. ISBN 9780821802373. S2CID 125829976. 12. "UC Berkeley Mathematics Department Colloquium: Persistent homology and applications from PDE to symplectic topology". events.berkeley.edu. Archived from the original on 2021-04-18. Retrieved 2021-03-27. 13. Chazal, Frédéric; Cohen-Steiner, David; Glisse, Marc; Guibas, Leonidas J.; Oudot, Steve Y. (2009-01-01). "Proximity of persistence modules and their diagrams". Proceedings of the twenty-fifth annual symposium on Computational geometry. SCG '09. ACM. pp. 237–246. CiteSeerX 10.1.1.473.2112. doi:10.1145/1542362.1542407. ISBN 978-1-60558-501-7. S2CID 840484. 14. Munch, E. (2013). Applications of persistent homology to time varying systems (Thesis). Duke University. hdl:10161/7180. ISBN 9781303019128. 15. Shikhman, Vladimir (2011). Topological Aspects of Nonsmooth Optimization. Springer. pp. 169–170. ISBN 9781461418979. Retrieved 22 November 2017. 16. Cohen-Steiner, David; Edelsbrunner, Herbert; Harer, John (2006-12-12). "Stability of Persistence Diagrams". Discrete & Computational Geometry. 37 (1): 103–120. doi:10.1007/s00454-006-1276-5. ISSN 0179-5376. 17. Ghrist, Robert (2008-01-01). "Barcodes: The persistent topology of data". Bulletin of the American Mathematical Society. 45 (1): 61–75. doi:10.1090/S0273-0979-07-01191-3. ISSN 0273-0979. 18. Chazal, Frédéric; Glisse, Marc; Labruère, Catherine; Michel, Bertrand (2013-05-27). "Optimal rates of convergence for persistence diagrams in Topological Data Analysis". arXiv:1305.6239 [math.ST]. 19. 
Edelsbrunner & Harer 2010 20. De Silva, Vin; Carlsson, Gunnar (2004-01-01). Topological Estimation Using Witness Complexes. pp. 157–166. doi:10.2312/SPBG/SPBG04/157-166. ISBN 978-3-905673-09-8. {{cite book}}: |journal= ignored (help) 21. Mischaikow, Konstantin; Nanda, Vidit (2013-07-27). "Morse Theory for Filtrations and Efficient Computation of Persistent Homology". Discrete & Computational Geometry. 50 (2): 330–353. doi:10.1007/s00454-013-9529-6. ISSN 0179-5376. 22. Henselman, Gregory; Ghrist, Robert (2016). "Matroid Filtrations and Computational Persistent Homology". arXiv:1606.00199 [math.AT]. 23. Chen, Chao; Kerber, Michael (2013-05-01). "An output-sensitive algorithm for persistent homology". Computational Geometry. 27th Annual Symposium on Computational Geometry (SoCG 2011). 46 (4): 435–447. doi:10.1016/j.comgeo.2012.02.010. 24. Otter, Nina; Porter, Mason A.; Tillmann, Ulrike; Grindrod, Peter; Harrington, Heather A. (2015-06-29). "A roadmap for the computation of persistent homology". EPJ Data Science. 6 (1): 17. arXiv:1506.08903. Bibcode:2015arXiv150608903O. doi:10.1140/epjds/s13688-017-0109-5. PMC 6979512. PMID 32025466. 25. Fasy, Brittany Terese; Kim, Jisu; Lecci, Fabrizio; Maria, Clément (2014-11-07). "Introduction to the R package TDA". arXiv:1411.1830 [cs.MS]. 26. Wadhwa, Raoul; Williamson, Drew; Dhawan, Andrew; Scott, Jacob (2018). "TDAstats: R pipeline for computing persistent homology in topological data analysis". Journal of Open Source Software. 3 (28): 860. Bibcode:2018JOSS....3..860R. doi:10.21105/joss.00860. PMC 7771879. PMID 33381678. 27. Liu, S.; Maljovec, D.; Wang, B.; Bremer, P.T.; Pascucci, V. (2016). "Visualizing high-dimensional data: Advances in the past decade". IEEE Transactions on Visualization and Computer Graphics. 23 (3): 1249–68. doi:10.1109/TVCG.2016.2640960. PMID 28113321. S2CID 745262. 28. Dey, Tamal K.; Memoli, Facundo; Wang, Yusu (2015-04-14). "Mutiscale Mapper: A Framework for Topological Summarization of Data and Maps". arXiv:1504.03763 [cs.CG]. 29. <!- Please confirm this ref ->Singh, G.; Mémoli, F.; Carlsson, G. (2007). "Topological Methods for the Analysis of High Dimensional Data Sets and 3D Object Recognition" (PDF). Point-based graphics 2007 : Eurographics/IEEE VGTC symposium proceedings. doi:10.2312/SPBG/SPBG07/091-100. ISBN 9781568813660. S2CID 5703368. 30. Bott, Raoul; Tu, Loring W. (2013-04-17). Differential Forms in Algebraic Topology. Springer. ISBN 978-1-4757-3951-0. 31. Pascucci, Valerio; Scorzelli, Giorgio; Bremer, Peer-Timo; Mascarenhas, Ajith (2007). "Robust on-line computation of Reeb graphs: simplicity and speed". ACM Transactions on Graphics. 33: 58.1–58.9. doi:10.1145/1275808.1276449. 32. Curry, Justin (2013-03-13). "Sheaves, Cosheaves and Applications". arXiv:1303.3255 [math.AT]. 33. Liu, Xu; Xie, Zheng; Yi, Dongyun (2012-01-01). "A fast algorithm for constructing topological structure in large data". Homology, Homotopy and Applications. 14 (1): 221–238. doi:10.4310/hha.2012.v14.n1.a11. ISSN 1532-0073. 34. Lum, P. Y.; Singh, G.; Lehman, A.; Ishkanov, T.; Vejdemo-Johansson, M.; Alagappan, M.; Carlsson, J.; Carlsson, G. (2013-02-07). "Extracting insights from the shape of complex data using topology". Scientific Reports. 3: 1236. Bibcode:2013NatSR...3E1236L. doi:10.1038/srep01236. PMC 3566620. PMID 23393618. 35. Curry, Justin (2014-11-03). "Topological Data Analysis and Cosheaves". arXiv:1411.0613 [math.AT]. 36. Frosini, P; Mulazzani, M. (1999). "Size homotopy groups for computation of natural size distances". 
Bulletin of the Belgian Mathematical Society, Simon Stevin. 6 (3): 455–464. doi:10.36045/bbms/1103065863. 37. Carlsson, G.; Zomorodian, A. (2009). "The theory of multidimensional persistence". Discrete & Computational Geometry. Vol. 42. Springer. pp. 71–93. doi:10.1007/978-3-642-10631-6_74. ISBN 978-3-642-10631-6. 38. Carlsson, G.; Singh, A.; Zomorodian, A. (2010). "Computing multidimensional persistence". Journal of Computational Geometry. 1: 72–100. doi:10.20382/jocg.v1i1a6. S2CID 15529723. 39. Harrington, H.; Otter, N.; Schenck, H.; Tillman, U. (2019). "Stratifying multiparameter persistent homology". SIAM Journal on Applied Algebra and Geometry. 3 (3): 439–471. arXiv:1708.07390. doi:10.1137/18M1224350. S2CID 119689059. 40. Biasotti, S.; Cerri, A.; Frosini, P.; Giorgi, D.; Landi, C. (2008-05-17). "Multidimensional Size Functions for Shape Comparison". Journal of Mathematical Imaging and Vision. 32 (2): 161–179. doi:10.1007/s10851-008-0096-z. ISSN 0924-9907. S2CID 13372132. 41. Carlsson, Gunnar; Zomorodian, Afra (2009-04-24). "The Theory of Multidimensional Persistence". Discrete & Computational Geometry. 42 (1): 71–93. doi:10.1007/s00454-009-9176-0. ISSN 0179-5376. 42. Derksen, H.; Weyman, J. (2005). "Quiver representations" (PDF). Notices of the AMS. 52 (2): 200–6. 43. Atiyah, M.F. (1956). "On the Krull-Schmidt theorem with application to sheaves" (PDF). Bulletin de la Société Mathématique de France. 84: 307–317. doi:10.24033/bsmf.1475. 44. Cerri A, Di Fabio B, Ferri M, et al. Multidimensional persistent homology is stable[J]. arXiv:0908.0064, 2009. 45. Cagliari, Francesca; Landi, Claudia (2011-04-01). "Finiteness of rank invariants of multidimensional persistent homology groups". Applied Mathematics Letters. 24 (4): 516–8. arXiv:1001.0358. doi:10.1016/j.aml.2010.11.004. S2CID 14337220. 46. Cagliari, Francesca; Di Fabio, Barbara; Ferri, Massimo (2010-01-01). "One-dimensional reduction of multidimensional persistent homology". Proceedings of the American Mathematical Society. 138 (8): 3003–17. arXiv:math/0702713. doi:10.1090/S0002-9939-10-10312-8. ISSN 0002-9939. S2CID 18284958. 47. Cerri, Andrea; Fabio, Barbara Di; Ferri, Massimo; Frosini, Patrizio; Landi, Claudia (2013-08-01). "Betti numbers in multidimensional persistent homology are stable functions". Mathematical Methods in the Applied Sciences. 36 (12): 1543–57. Bibcode:2013MMAS...36.1543C. doi:10.1002/mma.2704. ISSN 1099-1476. S2CID 9938133. 48. Cerri, Andrea; Frosini, Patrizio (2015-03-15). "Necessary conditions for discontinuities of multidimensional persistent Betti numbers". Mathematical Methods in the Applied Sciences. 38 (4): 617–629. Bibcode:2015MMAS...38..617C. doi:10.1002/mma.3093. ISSN 1099-1476. S2CID 5537858. 49. Cerri, Andrea; Landi, Claudia (2013-03-20). Gonzalez-Diaz, Rocio; Jimenez, Maria-Jose; Medrano, Belen (eds.). The Persistence Space in Multidimensional Persistent Homology. Lecture Notes in Computer Science. Springer Berlin Heidelberg. pp. 180–191. doi:10.1007/978-3-642-37067-0_16. ISBN 978-3-642-37066-3. 50. Skryzalin, Jacek; Carlsson, Gunnar (2014-11-14). "Numeric Invariants from Multidimensional Persistence". arXiv:1411.4022 [cs.CG]. 51. Carlsson, Gunnar; Singh, Gurjeet; Zomorodian, Afra (2009-12-16). Dong, Yingfei; Du, Ding-Zhu; Ibarra, Oscar (eds.). Computing Multidimensional Persistence. Lecture Notes in Computer Science. Springer Berlin Heidelberg. pp. 730–9. CiteSeerX 10.1.1.313.7004. doi:10.1007/978-3-642-10631-6_74. ISBN 978-3-642-10630-9. S2CID 15529723. 52. 
Allili, Madjid; Kaczynski, Tomasz; Landi, Claudia (2013-10-30). "Reducing complexes in multidimensional persistent homology theory". arXiv:1310.8089 [cs.CG]. 53. Cavazza, N.; Ferri, M.; Landi, C. (2010). "Estimating multidimensional persistent homology through a finite sampling". International Journal of Computational Geometry & Applications. 25 (3): 187–205. arXiv:1507.05277. doi:10.1142/S0218195915500119. S2CID 4803380. 54. Carlsson, Gunnar; Silva, Vin de (2010-04-21). "Zigzag Persistence". Foundations of Computational Mathematics. 10 (4): 367–405. doi:10.1007/s10208-010-9066-0. ISSN 1615-3375. 55. Cohen-Steiner, David; Edelsbrunner, Herbert; Harer, John (2008-04-04). "Extending Persistence Using Poincaré and Lefschetz Duality". Foundations of Computational Mathematics. 9 (1): 79–103. doi:10.1007/s10208-008-9027-z. ISSN 1615-3375. S2CID 33297537. 56. de Silva, Vin; Morozov, Dmitriy; Vejdemo-Johansson, Mikael (2011). "Dualities in persistent (co)homology". Inverse Problems. 27 (12): 124003. arXiv:1107.5665. Bibcode:2011InvPr..27l4003D. doi:10.1088/0266-5611/27/12/124003. S2CID 5706682. 57. Silva, Vin de; Morozov, Dmitriy; Vejdemo-Johansson, Mikael (2011-03-30). "Persistent Cohomology and Circular Coordinates". Discrete & Computational Geometry. 45 (4): 737–759. arXiv:0905.4887. doi:10.1007/s00454-011-9344-x. ISSN 0179-5376. S2CID 31480083. 58. Burghelea, Dan; Dey, Tamal K. (2013-04-09). "Topological Persistence for Circle-Valued Maps". Discrete & Computational Geometry. 50 (1): 69–98. arXiv:1104.5646. doi:10.1007/s00454-013-9497-x. ISSN 0179-5376. S2CID 17407953. 59. Sergey P. Novikov, Quasiperiodic structures in topology[C]//Topological methods in modern mathematics, Proceedings of the symposium in honor of John Milnor's sixtieth birthday held at the State University of New York, Stony Brook, New York. 1991: 223-233. 60. Gross, Jonathan L.; Yellen, Jay (2004-06-02). Handbook of Graph Theory. CRC Press. ISBN 978-0-203-49020-4. 61. Burghelea, Dan; Haller, Stefan (2015-06-04). "Topology of angle valued maps, bar codes and Jordan blocks". arXiv:1303.4328 [math.AT]. 62. Frosini, Patrizio (2012-06-23). "Stable Comparison of Multidimensional Persistent Homology Groups with Torsion". Acta Applicandae Mathematicae. 124 (1): 43–54. arXiv:1012.4169. doi:10.1007/s10440-012-9769-0. ISSN 0167-8019. S2CID 4809929. 63. Lesnick, Michael (2015-03-24). "The Theory of the Interleaving Distance on Multidimensional Persistence Modules". Foundations of Computational Mathematics. 15 (3): 613–650. arXiv:1106.5305. doi:10.1007/s10208-015-9255-y. ISSN 1615-3375. S2CID 17184609. 64. Bubenik, Peter; Scott, Jonathan A. (2014-01-28). "Categorification of Persistent Homology". Discrete & Computational Geometry. 51 (3): 600–627. arXiv:1205.3669. doi:10.1007/s00454-014-9573-x. ISSN 0179-5376. S2CID 11056619. 65. Bubenik, Peter; Silva, Vin de; Scott, Jonathan (2014-10-09). "Metrics for Generalized Persistence Modules". Foundations of Computational Mathematics. 15 (6): 1501–31. CiteSeerX 10.1.1.748.3101. doi:10.1007/s10208-014-9229-5. ISSN 1615-3375. S2CID 16351674. 66. de Silva, Vin; Nanda, Vidit (2013-01-01). "Geometry in the space of persistence modules". Proceedings of the twenty-ninth annual symposium on Computational geometry. SoCG '13. New York, NY, USA: ACM. pp. 397–404. doi:10.1145/2462356.2462402. ISBN 978-1-4503-2031-3. S2CID 16326608. 67. De Silva, V.; Ghrist, R. (2007). "Coverage in sensor networks via persistent homology". Algebraic & Geometric Topology. 7 (1): 339–358. doi:10.2140/agt.2007.7.339. 68. 
d’Amico, Michele; Frosini, Patrizio; Landi, Claudia (2008-10-14). "Natural Pseudo-Distance and Optimal Matching between Reduced Size Functions". Acta Applicandae Mathematicae. 109 (2): 527–554. arXiv:0804.3500. Bibcode:2008arXiv0804.3500D. doi:10.1007/s10440-008-9332-1. ISSN 0167-8019. S2CID 1704971. 69. Di Fabio, B.; Frosini, P. (2013-08-01). "Filtrations induced by continuous functions". Topology and Its Applications. 160 (12): 1413–22. arXiv:1304.1268. Bibcode:2013arXiv1304.1268D. doi:10.1016/j.topol.2013.05.013. S2CID 13971804. 70. Lesnick, Michael (2012-06-06). "Multidimensional Interleavings and Applications to Topological Inference". arXiv:1206.1365 [math.AT]. 71. Chazal, Frederic; de Silva, Vin; Glisse, Marc; Oudot, Steve (2012-07-16). "The structure and stability of persistence modules". arXiv:1207.3674 [math.AT]. 72. Webb, Cary (1985-01-01). "Decomposition of graded modules". Proceedings of the American Mathematical Society. 94 (4): 565–571. doi:10.1090/S0002-9939-1985-0792261-6. ISSN 0002-9939. 73. Crawley-Boevey, William (2015). "Decomposition of pointwise finite-dimensional persistence modules". Journal of Algebra and Its Applications. 14 (5): 1550066. arXiv:1210.0819. doi:10.1142/s0219498815500668. S2CID 119635797. 74. Chazal, Frederic; Crawley-Boevey, William; de Silva, Vin (2014-05-22). "The observable structure of persistence modules". arXiv:1405.5644 [math.RT]. 75. Droz, Jean-Marie (2012-10-15). "A subset of Euclidean space with large Vietoris-Rips homology". arXiv:1210.4097 [math.GT]. 76. Weinberger, S. (2011). "What is... persistent homology?" (PDF). Notices of the AMS. 58 (1): 36–39. 77. Turner, Katharine; Mileyko, Yuriy; Mukherjee, Sayan; Harer, John (2014-07-12). "Fréchet Means for Distributions of Persistence Diagrams". Discrete & Computational Geometry. 52 (1): 44–70. arXiv:1206.2790. doi:10.1007/s00454-014-9604-7. ISSN 0179-5376. S2CID 14293062. 78. Carlsson, Gunnar (2014-05-01). "Topological pattern recognition for point cloud data". Acta Numerica. 23: 289–368. doi:10.1017/S0962492914000051. ISSN 1474-0508. 79. Mileyko, Yuriy; Mukherjee, Sayan; Harer, John (2011-11-10). "Probability measures on the space of persistence diagrams". Inverse Problems. 27 (12): 124007. Bibcode:2011InvPr..27l4007M. doi:10.1088/0266-5611/27/12/124007. ISSN 0266-5611. S2CID 250676. 80. Robinson, Andrew; Turner, Katharine (2013-10-28). "Hypothesis Testing for Topological Data Analysis". arXiv:1310.7467 [stat.AP]. 81. Fasy, Brittany Terese; Lecci, Fabrizio; Rinaldo, Alessandro; Wasserman, Larry; Balakrishnan, Sivaraman; Singh, Aarti (2014-12-01). "Confidence sets for persistence diagrams". The Annals of Statistics. 42 (6): 2301–39. doi:10.1214/14-AOS1252. ISSN 0090-5364. 82. Blumberg, Andrew J.; Gal, Itamar; Mandell, Michael A.; Pancia, Matthew (2014-05-15). "Robust Statistics, Hypothesis Testing, and Confidence Intervals for Persistent Homology on Metric Measure Spaces". Foundations of Computational Mathematics. 14 (4): 745–789. arXiv:1206.4581. doi:10.1007/s10208-014-9201-4. ISSN 1615-3375. S2CID 17150103. 83. Baudot, Pierre; Bennequin, Daniel (2015). "The Homological Nature of Entropy". Entropy. 17 (5): 3253–3318. Bibcode:2015Entrp..17.3253B. doi:10.3390/e17053253. 84. Vigneaux, Juan-Pablo (2019). Topology of Statistical Systems: A Cohomological Approach to Information Theory (PDF) (PhD). Université Sorbonne Paris Cité. tel-02951504. 85. Baudot, Pierre; Tapia, Monica; Bennequin, Daniel; Goaillard, Jean-Marc (2019). "Topological Information Data Analysis". Entropy. 21 (9): 881. 
Bibcode:2019Entrp..21..881B. doi:10.3390/e21090881. 86. Tapia, Monica; al., et (2018). "Neurotransmitter identity and electrophysiological phenotype are genetically coupled in midbrain dopaminergic neurons". Scientific Reports. 8 (1): 13637. Bibcode:2018NatSR...813637T. doi:10.1038/s41598-018-31765-z. PMC 6134142. PMID 30206240. 87. Baudot, Pierre (2019). "Elements of qualitative cognition: an Information Topology Perspective". Physics of Life Reviews. 31: 263–275. arXiv:1807.04520. Bibcode:2019PhLRv..31..263B. doi:10.1016/j.plrev.2019.10.003. PMID 31679788. S2CID 207897618. 88. Baudot, Pierre (2019). "The Poincaré-Shannon Machine: Statistical Physics and Machine Learning Aspects of Information Cohomology". Entropy. 21 (9): 881. Bibcode:2019Entrp..21..881B. doi:10.3390/e21090881. 89. Bubenik, Peter (2012-07-26). "Statistical topological data analysis using persistence landscapes". arXiv:1207.6437 [math.AT]. 90. Bubenik, Peter; Dlotko, Pawel (2014-12-31). "A persistence landscapes toolbox for topological statistics". Journal of Symbolic Computation. 78: 91–114. arXiv:1501.00179. Bibcode:2015arXiv150100179B. doi:10.1016/j.jsc.2016.03.009. S2CID 9789489. 91. Cohen-Steiner, David; Edelsbrunner, Herbert; Harer, John; Morozov, Dmitriy (2009). Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms. pp. 1011–20. CiteSeerX 10.1.1.179.3236. doi:10.1137/1.9781611973068.110. ISBN 978-0-89871-680-1. 92. Kurlin, V. (2015). "A one-dimensional Homologically Persistent Skeleton of an unstructured point cloud in any metric space" (PDF). Computer Graphics Forum. 34 (5): 253–262. doi:10.1111/cgf.12713. S2CID 10610111. 93. Kurlin, V. (2014). "A Fast and Robust Algorithm to Count Topologically Persistent Holes in Noisy Clouds". 2014 IEEE Conference on Computer Vision and Pattern Recognition (PDF). pp. 1458–1463. arXiv:1312.1492. doi:10.1109/CVPR.2014.189. ISBN 978-1-4799-5118-5. S2CID 10118087. 94. Kurlin, V. (2015). "A Homologically Persistent Skeleton is a fast and robust descriptor of interest points in 2D images" (PDF). Proceedings of CAIP: Computer Analysis of Images and Patterns. Lecture Notes in Computer Science. Vol. 9256. pp. 606–617. doi:10.1007/978-3-319-23192-1_51. ISBN 978-3-319-23191-4. 95. Cerri, A.; Ferri, M.; Giorgi, D. (2006-09-01). "Retrieval of trademark images by means of size functions". Graphical Models. Special Issue on the Vision, Video and Graphics Conference 2005. 68 (5–6): 451–471. doi:10.1016/j.gmod.2006.07.001. 96. Chazal, Frédéric; Cohen-Steiner, David; Guibas, Leonidas J.; Mémoli, Facundo; Oudot, Steve Y. (2009-07-01). "Gromov-Hausdorff Stable Signatures for Shapes using Persistence". Computer Graphics Forum. 28 (5): 1393–1403. CiteSeerX 10.1.1.161.9103. doi:10.1111/j.1467-8659.2009.01516.x. ISSN 1467-8659. S2CID 8173320. 97. Biasotti, S.; Giorgi, D.; Spagnuolo, M.; Falcidieno, B. (2008-09-01). "Size functions for comparing 3D models". Pattern Recognition. 41 (9): 2855–2873. Bibcode:2008PatRe..41.2855B. doi:10.1016/j.patcog.2008.02.003. 98. Li, C.; Ovsjanikov, M.; Chazal, F. (2014). "Persistence-based Structural Recognition". IEEE Conference on Computer Vision and Pattern Recognition (PDF). pp. 2003–10. doi:10.1109/CVPR.2014.257. ISBN 978-1-4799-5118-5. S2CID 17787875. 99. Tapia, Monica; al., et (2018). "Neurotransmitter identity and electrophysiological phenotype are genetically coupled in midbrain dopaminergic neurons". Scientific Reports. 8 (1): 13637. Bibcode:2018NatSR...813637T. doi:10.1038/s41598-018-31765-z. PMC 6134142. PMID 30206240. 100. 
Bendich, P.; Edelsbrunner, H.; Kerber, M. (2010-11-01). "Computing Robustness and Persistence for Images". IEEE Transactions on Visualization and Computer Graphics. 16 (6): 1251–1260. CiteSeerX 10.1.1.185.523. doi:10.1109/TVCG.2010.139. ISSN 1077-2626. PMID 20975165. S2CID 8589124. 101. Carlsson, Gunnar; Ishkhanov, Tigran; Silva, Vin de; Zomorodian, Afra (2007-06-30). "On the Local Behavior of Spaces of Natural Images". International Journal of Computer Vision. 76 (1): 1–12. CiteSeerX 10.1.1.463.7101. doi:10.1007/s11263-007-0056-x. ISSN 0920-5691. S2CID 207252002. 102. Hiraoka, Yasuaki; Nakamura, Takenobu; Hirata, Akihiko; Escolar, Emerson G.; Matsue, Kaname; Nishiura, Yasumasa (2016-06-28). "Hierarchical structures of amorphous solids characterized by persistent homology". Proceedings of the National Academy of Sciences. 113 (26): 7035–40. arXiv:1501.03611. Bibcode:2016PNAS..113.7035H. doi:10.1073/pnas.1520877113. ISSN 0027-8424. PMC 4932931. PMID 27298351. 103. Nakamura, Takenobu; Hiraoka, Yasuaki; Hirata, Akihiko; Escolar, Emerson G.; Nishiura, Yasumasa (2015-02-26). "Persistent Homology and Many-Body Atomic Structure for Medium-Range Order in the Glass". Nanotechnology. 26 (30): 304001. arXiv:1502.07445. Bibcode:2015Nanot..26D4001N. doi:10.1088/0957-4484/26/30/304001. PMID 26150288. S2CID 7298655. 104. Nicolau, Monica; Levine, Arnold J.; Carlsson, Gunnar (2011-04-26). "Topology based data analysis identifies a subgroup of breast cancers with a unique mutational profile and excellent survival". Proceedings of the National Academy of Sciences. 108 (17): 7265–7270. Bibcode:2011PNAS..108.7265N. doi:10.1073/pnas.1102826108. ISSN 0027-8424. PMC 3084136. PMID 21482760. 105. Schmidt, Stephan; Post, Teun M.; Boroujerdi, Massoud A.; Kesteren, Charlotte van; Ploeger, Bart A.; Pasqua, Oscar E. Della; Danhof, Meindert (2011-01-01). Kimko, Holly H. C.; Peck, Carl C. (eds.). Disease Progression Analysis: Towards Mechanism-Based Models. AAPS Advances in the Pharmaceutical Sciences Series. Springer New York. pp. 433–455. doi:10.1007/978-1-4419-7415-0_19. ISBN 978-1-4419-7414-3. 106. Perea, Jose A.; Harer, John (2014-05-29). "Sliding Windows and Persistence: An Application of Topological Methods to Signal Analysis". Foundations of Computational Mathematics. 15 (3): 799–838. CiteSeerX 10.1.1.357.6648. doi:10.1007/s10208-014-9206-z. ISSN 1615-3375. S2CID 592832. 107. van de Weygaert, Rien; Vegter, Gert; Edelsbrunner, Herbert; Jones, Bernard J. T.; Pranav, Pratyush; Park, Changbom; Hellwing, Wojciech A.; Eldering, Bob; Kruithof, Nico (2011-01-01). Gavrilova, Marina L.; Tan, C. Kenneth; Mostafavi, Mir Abolfazl (eds.). Transactions on Computational Science XIV. Berlin, Heidelberg: Springer-Verlag. pp. 60–101. ISBN 978-3-642-25248-8. 108. Horak, Danijela; Maletić, Slobodan; Rajković, Milan (2009-03-01). "Persistent homology of complex networks - IOPscience". Journal of Statistical Mechanics: Theory and Experiment. 2009 (3): P03034. arXiv:0811.2203. Bibcode:2009JSMTE..03..034H. doi:10.1088/1742-5468/2009/03/p03034. S2CID 15592802. 109. Carstens, C. J.; Horadam, K. J. (2013-06-04). "Persistent Homology of Collaboration Networks". Mathematical Problems in Engineering. 2013: 1–7. doi:10.1155/2013/815035. 110. Lee, Hyekyoung; Kang, Hyejin; Chung, M.K.; Kim, Bung-Nyun; Lee, Dong Soo (2012-12-01). "Persistent Brain Network Homology From the Perspective of Dendrogram". IEEE Transactions on Medical Imaging. 31 (12): 2267–2277. CiteSeerX 10.1.1.259.2692. doi:10.1109/TMI.2012.2219590. ISSN 0278-0062. PMID 23008247. 
S2CID 858022. 111. Petri, G.; Expert, P.; Turkheimer, F.; Carhart-Harris, R.; Nutt, D.; Hellyer, P. J.; Vaccarino, F. (2014-12-06). "Homological scaffolds of brain functional networks". Journal of the Royal Society Interface. 11 (101): 20140873. doi:10.1098/rsif.2014.0873. ISSN 1742-5689. PMC 4223908. PMID 25401177. 112. MacPherson, Robert; Schweinhart, Benjamin (2012-07-01). "Measuring shape with topology". Journal of Mathematical Physics. 53 (7): 073516. arXiv:1011.2258. Bibcode:2012JMP....53g3516M. doi:10.1063/1.4737391. ISSN 0022-2488. S2CID 17423075. 113. Chan, Joseph Minhow; Carlsson, Gunnar; Rabadan, Raul (2013-11-12). "Topology of viral evolution". Proceedings of the National Academy of Sciences. 110 (46): 18566–18571. Bibcode:2013PNAS..11018566C. doi:10.1073/pnas.1313480110. ISSN 0027-8424. PMC 3831954. PMID 24170857. 114. Taylor, D.; al, et. (2015-08-21). "Topological data analysis of contagion maps for examining spreading processes on networks". Nature Communications. 6 (6): 7723. arXiv:1408.1168. Bibcode:2015NatCo...6.7723T. doi:10.1038/ncomms8723. ISSN 2041-1723. PMC 4566922. PMID 26194875. 115. Offroy, M. (2016). "Topological data analysis: A promising big data exploration tool in biology, analytical chemistry and physical chemistry". Analytica Chimica Acta. 910: 1–11. doi:10.1016/j.aca.2015.12.037. PMID 26873463. 116. Weidner, Jonas; Neitzel, Charlotte; Gote, Martin; Deck, Jeanette; Küntzelmann, Kim; Pilarczyk, Götz; Falk, Martin; Hausmann, Michael (2023). "Advanced image-free analysis of the nano-organization of chromatin and other biomolecules by Single Molecule Localization Microscopy (SMLM)". Computational and Structural Biotechnology Journal. Elsevier. 21: 2018–2034. doi:10.1016/j.csbj.2023.03.009. PMC 10030913. PMID 36968017. 117. Duponchel, L. (2018). "Exploring hyperspectral imaging data sets with topological data analysis". Analytica Chimica Acta. 1000: 123–131. doi:10.1016/j.aca.2017.11.029. PMID 29289301. 118. Duponchel, L. (2018). "When remote sensing meets topological data analysis". Journal of Spectral Imaging. 7: a1. doi:10.1255/jsi.2018.a1. 119. Li, Xiaoyun; Wu, Chenxi; Li, Ping (2020). "IVFS: Simple and Efficient Feature Selection for High Dimensional Topology Preservation". AAAI Conference on Artificial Intelligence 34. 34 (4): 4747–4754. doi:10.1609/aaai.v34i04.5908. 120. Gidea, Marian; Katz, Yuri (2018). "Topological data analysis of financial time series: Landscapes of crashes". Physica A: Statistical Mechanics and Its Applications. Elsevier BV. 491: 820–834. arXiv:1703.04385. Bibcode:2018PhyA..491..820G. doi:10.1016/j.physa.2017.09.028. ISSN 0378-4371. S2CID 85550367. 121. Wang, Bao; Wei, Guo-Wei (2014-12-07). "Objective-oriented Persistent Homology". arXiv:1412.2368 [q-bio.BM]. 122. Frosini, Patrizio; Landi, Claudia (2011). "Uniqueness of models in persistent homology: the case of curves". Inverse Problems. 27 (12): 124005. arXiv:1012.5783. Bibcode:2011InvPr..27l4005F. doi:10.1088/0266-5611/27/12/124005. S2CID 16636182. 123. Xia, Kelin; Feng, Xin; Tong, Yiying; Wei, Guo Wei (2015-03-05). "Persistent homology for the quantitative prediction of fullerene stability". Journal of Computational Chemistry. 36 (6): 408–422. doi:10.1002/jcc.23816. ISSN 1096-987X. PMC 4324100. PMID 25523342. 124. Xia, Kelin; Wei, Guo-Wei (2014-08-01). "Persistent homology analysis of protein structure, flexibility, and folding". International Journal for Numerical Methods in Biomedical Engineering. 30 (8): 814–844. arXiv:1412.2779. Bibcode:2014arXiv1412.2779X. 
doi:10.1002/cnm.2655. ISSN 2040-7947. PMC 4131872. PMID 24902720. 125. Adcock, Aaron; Carlsson, Erik; Carlsson, Gunnar (2016-05-31). "The ring of algebraic functions on persistence bar codes" (PDF). Homology, Homotopy and Applications. 18 (1): 381–402. doi:10.4310/hha.2016.v18.n1.a21. S2CID 2964961. 126. Chepushtanova, Sofya; Emerson, Tegan; Hanson, Eric; Kirby, Michael; Motta, Francis; Neville, Rachel; Peterson, Chris; Shipman, Patrick; Ziegelmeier, Lori (2015-07-22). "Persistence Images: An Alternative Persistent Homology Representation". arXiv:1507.06217 [cs.CG]. 127. Deheuvels, Rene (1955-01-01). "Topologie D'Une Fonctionnelle". Annals of Mathematics. Second Series. 61 (1): 13–72. doi:10.2307/1969619. JSTOR 1969619. 128. de Silva, Vin; Munch, Elizabeth; Patel, Amit (2016-04-13). "Categorified Reeb graphs". Discrete and Computational Geometry. 55 (4): 854–906. arXiv:1501.04147. doi:10.1007/s00454-016-9763-9. S2CID 7111141. 129. Goodman, Jacob E. (2008-01-01). Surveys on Discrete and Computational Geometry: Twenty Years Later : AMS-IMS-SIAM Joint Summer Research Conference, June 18-22, 2006, Snowbird, Utah. American Mathematical Soc. ISBN 9780821842393. 130. Edelsbrunner, Herbert; Harer, John (2008). "Persistent homology — a survey". Surveys on Discrete and Computational Geometry: Twenty Years Later. Contemporary Mathematics. Vol. 453. AMS. pp. 15–18. CiteSeerX 10.1.1.87.7764. doi:10.1090/conm/453/08802. ISBN 9780821842393. Section 5 Further reading Brief Introduction • Lesnick, Michael (2013). "Studying the Shape of Data Using Topology". Institute for Advanced Study. • Source Material for Topological Data Analysis by Mikael Vejdemo-Johansson Monograph • Oudot, Steve Y. (2015). Persistence Theory: From Quiver Representations to Data Analysis. American Mathematical Society. ISBN 978-1-4704-2545-6. Video Lecture • Introduction to Persistent Homology and Topology for Data Analysis, by Matthew Wright • The Shape of Data, by Gunnar Carlsson Textbook on Topology • Hatcher, Allen (2002). Algebraic Topology. Cambridge University Press. ISBN 0-521-79540-0. Available for Download • Edelsbrunner, Herbert; Harer, John (2010). Computational Topology: An Introduction. American Mathematical Society. ISBN 9780821849255. • Elementary Applied Topology, by Robert Ghrist Other Resources of TDA • Applied Topology, by Stanford • Applied algebraic topology research network Archived 2016-01-31 at the Wayback Machine , by the Institute for Mathematics and its Applications • Topological Kernel Learning: Discrete Morse Theory is used to connect kernel machine learning with topological data analysis. https://www.researchgate.net/publication/327427685_Topological_Kernel_Learning
Topological degree theory In mathematics, topological degree theory is a generalization of the winding number of a curve in the complex plane. It can be used to estimate the number of solutions of an equation, and is closely connected to fixed-point theory. When one solution of an equation is easily found, degree theory can often be used to prove existence of a second, nontrivial, solution. There are different types of degree for different types of maps: e.g. the Brouwer degree for continuous maps in Rn, the Leray-Schauder degree for compact perturbations of the identity on Banach spaces, the coincidence degree, and various other types. There is also a degree for continuous maps between manifolds. Topological degree theory has applications in complementarity problems, differential equations, differential inclusions and dynamical systems. Further reading • Topological fixed point theory of multivalued mappings, Lech Górniewicz, Springer, 1999, ISBN 978-0-7923-6001-8 • Topological degree theory and applications, Donal O'Regan, Yeol Je Cho, Yu Qing Chen, CRC Press, 2006, ISBN 978-1-58488-648-8 • Mapping Degree Theory, Enrique Outerelo, Jesus M. Ruiz, AMS Bookstore, 2009, ISBN 978-0-8218-4915-6
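The winding number that degree theory generalizes can be computed directly from a sampled closed curve. The following minimal sketch (added here for illustration, not part of the article; the function name and sampling are my own choices) sums the signed angles swept around a base point:

import numpy as np

def winding_number(points, z0=0j):
    # points: complex samples tracing a closed curve in order; the curve is
    # closed implicitly (the last sample connects back to the first).
    w = np.asarray(points, dtype=complex) - z0
    # Signed angle swept along each edge, taken in (-pi, pi].
    steps = np.angle(np.roll(w, -1) / w)
    # For a closed curve not passing through z0, the total swept angle
    # is 2*pi times the winding number.
    return int(round(steps.sum() / (2 * np.pi)))

# A circle of radius 2 traversed twice winds twice around the origin.
t = np.linspace(0, 4 * np.pi, 400, endpoint=False)
print(winding_number(2 * np.exp(1j * t)))  # prints 2

If the curve is the image of the boundary of a disk under a continuous map f, this integer is the Brouwer degree of f with respect to the base point z0, which is the sense in which degree theory estimates the number of solutions of f(z) = z0.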
Topological divisor of zero In mathematics, an element $z$ of a Banach algebra $A$ is called a topological divisor of zero if there exists a sequence $x_{1},x_{2},x_{3},...$ of elements of $A$ such that 1. The sequence $zx_{n}$ converges to the zero element, but 2. The sequence $x_{n}$ does not converge to the zero element. If such a sequence exists, then one may assume that $\left\Vert x_{n}\right\Vert =1$ for all $n$. If $A$ is not commutative, then $z$ is called a "left" topological divisor of zero, and one may define "right" topological divisors of zero similarly. Examples • If $A$ has a unit element, then the invertible elements of $A$ form an open subset of $A$, while the non-invertible elements are the complementary closed subset. Any point on the boundary between these two sets is both a left and right topological divisor of zero. • In particular, any quasinilpotent element is a topological divisor of zero (e.g. the Volterra operator). • An operator on a Banach space $X$, which is injective, not surjective, but whose image is dense in $X$, is a left topological divisor of zero. Generalization The notion of a topological divisor of zero may be generalized to any topological algebra. If the algebra in question is not first-countable, one must substitute nets for the sequences used in the definition.
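A standard concrete example: in the commutative Banach algebra $C[0,1]$ of continuous functions with the supremum norm and pointwise multiplication, the element $z(t)=t$ is a topological divisor of zero. Taking $x_{n}(t)=\max(0,1-nt)$ gives $\left\Vert x_{n}\right\Vert =1$ for every $n$, while $\left\Vert zx_{n}\right\Vert =\sup _{0\leq t\leq 1/n}t(1-nt)=1/(4n)\to 0$; yet $z$ is not an ordinary zero divisor, since $zx=0$ forces $x=0$ by continuity.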
Topological dynamics In mathematics, topological dynamics is a branch of the theory of dynamical systems in which qualitative, asymptotic properties of dynamical systems are studied from the viewpoint of general topology. Scope The central object of study in topological dynamics is a topological dynamical system, i.e. a topological space, together with a continuous transformation, a continuous flow, or more generally, a semigroup of continuous transformations of that space. The origins of topological dynamics lie in the study of asymptotic properties of trajectories of systems of autonomous ordinary differential equations, in particular, the behavior of limit sets and various manifestations of "repetitiveness" of the motion, such as periodic trajectories, recurrence and minimality, stability, non-wandering points. George Birkhoff is considered to be the founder of the field. A structure theorem for minimal distal flows proved by Hillel Furstenberg in the early 1960s inspired much work on classification of minimal flows. A lot of research in the 1970s and 1980s was devoted to topological dynamics of one-dimensional maps, in particular, piecewise linear self-maps of the interval and the circle. Unlike the theory of smooth dynamical systems, where the main object of study is a smooth manifold with a diffeomorphism or a smooth flow, phase spaces considered in topological dynamics are general metric spaces (usually, compact). This necessitates development of entirely different techniques but allows an extra degree of flexibility even in the smooth setting, because invariant subsets of a manifold are frequently very complicated topologically (cf limit cycle, strange attractor); additionally, shift spaces arising via symbolic representations can be considered on an equal footing with more geometric actions. Topological dynamics has intimate connections with ergodic theory of dynamical systems, and many fundamental concepts of the latter have topological analogues (cf Kolmogorov–Sinai entropy and topological entropy). See also • Poincaré–Bendixson theorem • Symbolic dynamics • Topological conjugacy References • D. V. Anosov (2001) [1994], "Topological dynamics", Encyclopedia of Mathematics, EMS Press • Joseph Auslander (ed.). "Topological dynamics". Scholarpedia. • Robert Ellis, Lectures on topological dynamics. W. A. Benjamin, Inc., New York 1969 • Walter Gottschalk, Gustav Hedlund, Topological dynamics. American Mathematical Society Colloquium Publications, Vol. 36. American Mathematical Society, Providence, R. I., 1955 • J. de Vries, Elements of topological dynamics. Mathematics and its Applications, 257. Kluwer Academic Publishers Group, Dordrecht, 1993 ISBN 0-7923-2287-8 • Ethan Akin, The General Topology of Dynamical Systems, AMS Bookstore, 2010, ISBN 978-0-8218-4932-3 • J. de Vries, Topological Dynamical Systems: An Introduction to the Dynamics of Continuous Mappings, De Gruyter Studies in Mathematics, 59, De Gruyter, Berlin, 2014, ISBN 978-3-1103-4073-0 • Jian Li and Xiang Dong Ye, Recent development of chaos theory in topological dynamics, Acta Mathematica Sinica, English Series, 2016, Volume 32, Issue 1, pp. 83–114.
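As a concrete illustration of these notions (an added sketch, not part of the article), the rotation of the circle by an irrational angle is a topological dynamical system on a compact metric space in which every orbit is dense, i.e. the system is minimal. The short computation below, with names chosen for illustration, follows one orbit and checks that it returns arbitrarily close to its starting point (recurrence):

import numpy as np

def orbit(x0, alpha, n):
    # Iterate the circle rotation x -> x + alpha (mod 1), a continuous
    # transformation of the compact space [0, 1) viewed as the circle.
    return (x0 + alpha * np.arange(n)) % 1.0

alpha = (np.sqrt(5) - 1) / 2        # an irrational rotation number
xs = orbit(0.0, alpha, 10000)
# Circle distance from the starting point 0 after k >= 1 steps.
d = np.minimum(xs[1:], 1.0 - xs[1:])
print(d.min())                      # tiny: the orbit keeps returning near 0

Replacing the rotation by the doubling map x -> 2x (mod 1) gives a system on the same compact space that is chaotic and admits a symbolic (shift-space) description, which is one reason symbolic representations appear alongside geometric actions in the theory.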
Topological game In mathematics, a topological game is an infinite game of perfect information played between two players on a topological space. Players choose objects with topological properties such as points, open sets, closed sets and open coverings. Time is generally discrete, but the plays may have transfinite lengths, and extensions to continuum time have been put forth. The conditions for a player to win can involve notions like topological closure and convergence. It turns out that some fundamental topological constructions have a natural counterpart in topological games; examples of these are the Baire property, Baire spaces, completeness and convergence properties, separation properties, covering and base properties, continuous images, Suslin sets, and singular spaces. At the same time, some topological properties that arise naturally in topological games can be generalized beyond a game-theoretic context: by virtue of this duality, topological games have been widely used to describe new properties of topological spaces, and to put known properties under a different light. There are also close links with selection principles. The term topological game was first introduced by Claude Berge,[1][2][3] who defined the basic ideas and formalism in analogy with topological groups. A different meaning for topological game, the concept of “topological properties defined by games”, was introduced in the paper of Rastislav Telgársky,[4] and later "spaces defined by topological games";[5] this approach is based on analogies with matrix games, differential games and statistical games, and defines and studies topological games within topology. After more than 35 years, the term “topological game” became widespread, and appeared in several hundreds of publications. The survey paper of Telgársky[6] emphasizes the origin of topological games from the Banach–Mazur game. There are two other meanings of topological games, but these are used less frequently. • The term topological game was introduced by Leon Petrosjan[7] in the study of antagonistic pursuit–evasion games. The trajectories in these topological games are continuous in time. • The games of Nash (the Hex games), the Milnor games (Y games), the Shapley games (projective plane games), and Gale's games (Bridg-It games) were called topological games by David Gale in his invited address [1979/80]. The number of moves in these games is always finite. The discovery or rediscovery of these topological games goes back to the years 1948–49. Basic setup for a topological game Many frameworks can be defined for infinite positional games of perfect information. The typical setup is a game between two players, I and II, who alternately pick subsets of a topological space X. In the nth round, player I plays a subset In of X, and player II responds with a subset Jn. There is a round for every natural number n, and after all rounds are played, player I wins if the sequence I0, J0, I1, J1,... satisfies some property, and otherwise player II wins. The game is defined by the target property and the allowed moves at each step. For example, in the Banach–Mazur game BM(X), the allowed moves are nonempty open subsets of the previous move, and player I wins if $\bigcap _{n}I_{n}\neq \emptyset $. This typical setup can be modified in various ways. For example, instead of being a subset of X, each move might consist of a pair $(I,p)$ where $I\subset X$ and $p\in X$. Alternatively, the sequence of moves might have length some ordinal number other than ω.
Definitions and notation • A play of the game is a sequence of legal moves I0, J0, I1, J1,... The result of a play is either a win or a loss for each player. • A strategy for player P is a function defined over every legal finite sequence of moves of P's opponent. For example, a strategy for player I is a function s from sequences (J0, J1, ..., Jn) to subsets of X. A game is said to be played according to strategy s if every move of player P is the value of s on the sequence of their opponent's prior moves. So if s is a strategy for player I, the play $s(\lambda ),J_{0},s(J_{0}),J_{1},s(J_{0},J_{1}),J_{2},s(J_{0},J_{1},J_{2}),\ldots $ is according to strategy s. (Here λ denotes the empty sequence of moves.) • A strategy s for player P is said to be winning if every play according to s results in a win for player P, whatever sequence of legal moves P's opponent makes. If player P has a winning strategy for game G, this is denoted $P\uparrow G$. If either player has a winning strategy for G, then G is said to be determined. It follows from the axiom of choice that there are non-determined topological games. • A strategy for P is stationary if it depends only on the last move by P's opponent; a strategy is Markov if it depends both on the last move of the opponent and on the ordinal number of the move. The Banach–Mazur game Main article: Banach–Mazur game The first topological game studied was the Banach–Mazur game, which is a motivating example of the connections between game-theoretic notions and topological properties. Let Y be a topological space, and let X be a subset of Y, called the winning set. Player I begins the game by picking a nonempty open subset $I_{0}\subset Y$, and player II responds with a nonempty open subset $J_{0}\subset I_{0}$. Play continues in this fashion, with players alternately picking a nonempty open subset of the previous play. After an infinite sequence of moves, one for each natural number, the game is finished, and I wins if and only if $X\cap \bigcap _{n\in \omega }I_{n}\neq \emptyset .$ The game-theoretic and topological connections demonstrated by the game include: • II has a winning strategy in the game if and only if X is of the first category in Y (a set is of the first category or meagre if it is the countable union of nowhere-dense sets); a concrete instance is worked out below. • If Y is a complete metric space, then I has a winning strategy if and only if X is comeagre in some nonempty open subset of Y. • If X has the Property of Baire in Y, then the game is determined. Other topological games Some other notable topological games are: • the binary game introduced by Ulam — a modification of the Banach–Mazur game; • the Banach game — played on a subset of the real line; • the Choquet game — related to siftable spaces; • the point-open game — in which player I chooses points and player II chooses open neighborhoods of them. • Selection games — each round player I chooses a (topological) collection and II chooses a member or finite subset of that collection. See Selection principle § Topological games. Many more games have been introduced over the years, to study, among others: the Kuratowski coreduction principle; separation and reduction properties of sets in close projective classes; Luzin sieves; invariant descriptive set theory; Suslin sets; the closed graph theorem; webbed spaces; MP-spaces; the axiom of choice; recursive functions.
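As a worked instance of the Banach–Mazur criteria above, take $Y=\mathbb {R} $ with the winning set $X=\mathbb {Q} $, and enumerate the rationals as $q_{0},q_{1},q_{2},\ldots $ At round $n$, player II can answer player I's move with a nonempty open interval whose closure lies inside that move and avoids $q_{n}$. The intersection $\bigcap _{n}I_{n}$ then contains no rational number, so every play according to this strategy is a win for II. This agrees with the first criterion, since $\mathbb {Q} $, being a countable union of nowhere-dense singletons, is of the first category in $\mathbb {R} $.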
Topological games have also been related to ideas in mathematical logic, model theory, infinitely-long formulas, infinite strings of alternating quantifiers, ultrafilters, partially ordered sets, and the coloring number of infinite graphs. For a longer list and a more detailed account see the 1987 survey paper of Telgársky.[6] See also • Topological puzzle References 1. C. Berge, Topological games with perfect information. Contributions to the theory of games, vol. 3, 165–178. Annals of Mathematics Studies, no. 39. Princeton University Press, Princeton, N. J., 1957. 2. C. Berge, Théorie des jeux à n personnes, Mém. des Sc. Mat., Gauthier-Villars, Paris 1957. 3. A. R. Pears, On topological games, Proc. Cambridge Philos. Soc. 61 (1965), 165–171. 4. R. Telgársky, On topological properties defined by games, Topics in Topology (Proc. Colloq. Keszthely 1972), Colloq. Math. Soc. János Bolyai, Vol. 8, North-Holland, Amsterdam 1974, 617–624. 5. R. Telgársky, Spaces defined by topological games, Fund. Math. 88 (1975), 193–223. 6. R. Telgársky, "Topological Games: On the 50th Anniversary of the Banach-Mazur Game", Rocky Mountain J. Math. 17 (1987), 227–276. 7. L. A. Petrosjan, Topological games and their applications to pursuit problems. I. SIAM J. Control 10 (1972), 194–202.
Topological geometry Topological geometry deals with incidence structures consisting of a point set $P$ and a family ${\mathfrak {L}}$ of subsets of $P$ called lines or circles etc. such that both $P$ and ${\mathfrak {L}}$ carry a topology and all geometric operations like joining points by a line or intersecting lines are continuous. As in the case of topological groups, many deeper results require the point space to be (locally) compact and connected. This generalizes the observation that the line joining two distinct points in the Euclidean plane depends continuously on the pair of points and the intersection point of two lines is a continuous function of these lines. Linear geometries Linear geometries are incidence structures in which any two distinct points $x$ and $y$ are joined by a unique line $xy$. Such geometries are called topological if $xy$ depends continuously on the pair $(x,y)$ with respect to given topologies on the point set and the line set. The dual of a linear geometry is obtained by interchanging the roles of points and lines. A survey of linear topological geometries is given in Chapter 23 of the Handbook of incidence geometry.[1] The most extensively investigated topological linear geometries are those which are also dual topological linear geometries. Such geometries are known as topological projective planes. History A systematic study of these planes began in 1954 with a paper by Skornyakov.[2] Earlier, the topological properties of the real plane had been introduced via ordering relations on the affine lines, see, e.g., Hilbert,[3] Coxeter,[4] and O. Wyler.[5] The completeness of the ordering is equivalent to local compactness and implies that the affine lines are homeomorphic to $\mathbb {R} $ and that the point space is connected. Note that the rational numbers do not suffice to describe our intuitive notions of plane geometry and that some extension of the rational field is necessary. In fact, the equation $x^{2}+y^{2}=3$ for a circle has no rational solution. Topological projective planes The approach to the topological properties of projective planes via ordering relations is not possible, however, for the planes coordinatized by the complex numbers, the quaternions or the octonion algebra.[6] The point spaces as well as the line spaces of these classical planes (over the real numbers, the complex numbers, the quaternions, and the octonions) are compact manifolds of dimension $2^{m},\,1\leq m\leq 4$. Topological dimension The notion of the dimension of a topological space plays a prominent rôle in the study of topological, in particular of compact connected planes. For a normal space $X$, the dimension $\dim X$ can be characterized as follows: If $\mathbb {S} _{n}$ denotes the $n$-sphere, then $\dim X\leq n$ if, and only if, for every closed subspace $A\subset X$ each continuous map $\varphi :A\to \mathbb {S} _{n}$ has a continuous extension $\psi :X\to \mathbb {S} _{n}$. For details and other definitions of a dimension see [7] and the references given there, in particular Engelking[8] or Fedorchuk.[9] 2-dimensional planes The lines of a compact topological plane with a 2-dimensional point space form a family of curves homeomorphic to a circle, and this fact characterizes these planes among the topological projective planes.[10] Equivalently, the point space is a surface. 
Early examples not isomorphic to the classical real plane ${\mathcal {E}}$ have been given by Hilbert[3][11] and Moulton.[12] The continuity properties of these examples have not been considered explicitly at that time, they may have been taken for granted. Hilbert’s construction can be modified to obtain uncountably many pairwise non-isomorphic $2$-dimensional compact planes. The traditional way to distinguish ${\mathcal {E}}$ from the other $2$-dimensional planes is by the validity of Desargues’s theorem or the theorem of Pappos (see, e.g., Pickert[13] for a discussion of these two configuration theorems). The latter is known to imply the former (Hessenberg[14]). The theorem of Desargues expresses a kind of homogeneity of the plane. In general, it holds in a projective plane if, and only if, the plane can be coordinatized by a (not necessarily commutative) field,[3][15][13] hence it implies that the group of automorphisms is transitive on the set of quadrangles ($4$ points no $3$ of which are collinear). In the present setting, a much weaker homogeneity condition characterizes ${\mathcal {E}}$: Theorem. If the automorphism group $\Sigma $ of a $2$-dimensional compact plane ${\mathcal {P}}$ is transitive on the point set (or the line set), then $\Sigma $ has a compact subgroup $\Phi $ which is even transitive on the set of flags (=incident point-line pairs), and ${\mathcal {P}}$ is classical.[10] The automorphism group $\Sigma =\operatorname {Aut} {\mathcal {P}}$ of a $2$-dimensional compact plane ${\mathcal {P}}$, taken with the topology of uniform convergence on the point space, is a locally compact group of dimension at most $8$, in fact even a Lie group. All $2$-dimensional planes such that $\dim \Sigma \geq 3$ can be described explicitly;[10] those with $\dim \Sigma =4$ are exactly the Moulton planes, the classical plane ${\mathcal {E}}$ is the only $2$-dimensional plane with $\dim \Sigma >4$; see also.[16] Compact connected planes The results on $2$-dimensional planes have been extended to compact planes of dimension $>2$. This is possible due to the following basic theorem: Topology of compact planes. If the dimension of the point space $P$ of a compact connected projective plane is finite, then $\dim P=2^{m}$ with $m\in \{1,2,3,4\}$. Moreover, each line is a homotopy sphere of dimension $2^{m-1}$, see [17] or.[18] Special aspects of 4-dimensional planes are treated in,[19] more recent results can be found in.[20] The lines of a $4$-dimensional compact plane are homeomorphic to the $2$-sphere;[21] in the cases $m>2$ the lines are not known to be manifolds, but in all examples which have been found so far the lines are spheres. A subplane ${\mathcal {B}}$ of a projective plane ${\mathcal {P}}$ is said to be a Baer subplane,[22] if each point of ${\mathcal {P}}$ is incident with a line of ${\mathcal {B}}$ and each line of ${\mathcal {P}}$ contains a point of ${\mathcal {B}}$. A closed subplane ${\mathcal {B}}$ is a Baer subplane of a compact connected plane ${\mathcal {P}}$ if, and only if, the point space of ${\mathcal {B}}$ and a line of ${\mathcal {P}}$ have the same dimension. Hence the lines of an 8-dimensional plane ${\mathcal {P}}$ are homeomorphic to a sphere $\mathbb {S} _{4}$ if ${\mathcal {P}}$ has a closed Baer subplane.[23] Homogeneous planes. 
If ${\mathcal {P}}$ is a compact connected projective plane and if $\Sigma =\operatorname {Aut} {\mathcal {P}}$ is transitive on the point set of ${\mathcal {P}}$, then $\Sigma $ has a flag-transitive compact subgroup $\Phi $ and ${\mathcal {P}}$ is classical, see [24] or.[25] In fact, $\Phi $ is an elliptic motion group.[26] Let ${\mathcal {P}}$ be a compact plane of dimension $2^{m},\;m=2,3,4$, and write $\Sigma =\operatorname {Aut} {\mathcal {P}}$. If $\dim \Sigma >8,18,40$, then ${\mathcal {P}}$ is classical,[27] and $\operatorname {Aut} {\mathcal {P}}$ is a simple Lie group of dimension $16,35,78$ respectively. All planes ${\mathcal {P}}$ with $\dim \Sigma =8,18,40$ are known explicitly.[28] The planes with $\dim \Sigma =40$ are exactly the projective closures of the affine planes coordinatized by a so-called mutation $(\mathbb {O} ,+,\circ )$ of the octonion algebra $(\mathbb {O} ,+,\cdot )$, where the new multiplication $\circ $ is defined as follows: choose a real number $t$ with $1/2<t\neq 1$ and put $a\circ b=t\cdot ab+(1-t)\cdot ba$. Vast families of planes with a group of large dimension have been discovered systematically starting from assumptions about their automorphism groups, see, e.g.,.[20][29][30][31][32] Many of them are projective closures of translation planes (affine planes admitting a sharply transitive group of automorphisms mapping each line to a parallel), cf.;[33] see also [34] for more recent results in the case $m=3$ and [30] for $m=4$. Compact projective spaces Subplanes of projective spaces of geometrical dimension at least 3 are necessarily Desarguesian, see [35] §1 or [4] §16 or.[36] Therefore, all compact connected projective spaces can be coordinatized by the real or complex numbers or the quaternion field.[37] Stable planes The classical non-euclidean hyperbolic plane can be represented by the intersections of the straight lines in the real plane with an open circular disk. More generally, open (convex) parts of the classical affine planes are typical stable planes. A survey of these geometries can be found in,[38] for the $2$-dimensional case see also.[39] Precisely, a stable plane ${\mathcal {S}}$ is a topological linear geometry $(P,{\mathfrak {L}})$ such that 1. $P$ is a locally compact space of positive finite dimension, 2. each line $L\in {\mathfrak {L}}$ is a closed subset of $P$, and ${\mathfrak {L}}$ is a Hausdorff space, 3. the set $\{(K,L)\mid K\neq L,\;K\cap L\neq \emptyset \}$ is an open subspace ${\mathfrak {O}}\subset {\mathfrak {L}}^{2}$ (stability), 4. the map $(K,L)\mapsto K\cap L:{\mathfrak {O}}\to P$ is continuous. Note that stability excludes geometries like the $3$-dimensional affine space over $\mathbb {R} $ or $\mathbb {C} $. A stable plane ${\mathcal {S}}$ is a projective plane if, and only if, $P$ is compact.[40] As in the case of projective planes, line pencils are compact and homotopy equivalent to a sphere of dimension $2^{m-1}$, and $\dim P=2^{m}$ with $m\in \{1,2,3,4\}$, see [17] or.[41] Moreover, the point space $P$ is locally contractible.[17][42] Compact groups of (proper) stable planes are rather small. Let $\Phi _{d}$ denote a maximal compact subgroup of the automorphism group of the classical $d$-dimensional projective plane ${\mathcal {P}}_{d}$. Then the following theorem holds: If a $d$-dimensional stable plane ${\mathcal {S}}$ admits a compact group $\Gamma $ of automorphisms such that $\dim \Gamma >\dim \Phi _{d}-d$, then ${\mathcal {S}}\cong {\mathcal {P}}_{d}$, see.[43] Flag-homogeneous stable planes.
Let ${\mathcal {S}}=(P,{\mathfrak {L}})$ be a stable plane. If the automorphism group $\operatorname {Aut} {\mathcal {S}}$ is flag-transitive, then ${\mathcal {S}}$ is a classical projective or affine plane, or ${\mathcal {S}}$ is isomorphic to the interior of the absolute sphere of the hyperbolic polarity of a classical plane; see.[44][45][46] In contrast to the projective case, there is an abundance of point-homogeneous stable planes, among them vast classes of translation planes, see [33] and.[47] Symmetric planes Affine translation planes have the following property: • There exists a point transitive closed subgroup $\Delta $ of the automorphism group which contains a unique reflection at some and hence at each point. More generally, a symmetric plane is a stable plane ${\mathcal {S}}=(P,{\mathfrak {L}})$ satisfying the aforementioned condition; see,[48] cf.[49] for a survey of these geometries. By [50] Corollary 5.5, the group $\Delta $ is a Lie group and the point space $P$ is a manifold. It follows that ${\mathcal {S}}$ is a symmetric space. By means of the Lie theory of symmetric spaces, all symmetric planes with a point set of dimension $2$ or $4$ have been classified.[48][51] They are either translation planes or they are determined by a Hermitian form. An easy example is the real hyperbolic plane. Circle geometries Classical models [52] are given by the plane sections of a quadratic surface $S$ in real projective $3$-space; if $S$ is a sphere, the geometry is called a Möbius plane.[39] The plane sections of a ruled surface (one-sheeted hyperboloid) yield the classical Minkowski plane, cf.[53] for generalizations. If $S$ is an elliptic cone without its vertex, the geometry is called a Laguerre plane. Collectively these planes are sometimes referred to as Benz planes. A topological Benz plane is classical, if each point has a neighbourhood which is isomorphic to some open piece of the corresponding classical Benz plane.[54] Möbius planes Möbius planes consist of a family ${\mathfrak {C}}$ of circles, which are topological 1-spheres, on the $2$-sphere $S$ such that for each point $p$ the derived structure $(S\setminus \{p\},\{C\setminus \{p\}\mid p\in C\in {\mathfrak {C}}\})$ is a topological affine plane.[55] In particular, any $3$ distinct points are joined by a unique circle. The circle space ${\mathfrak {C}}$ is then homeomorphic to real projective $3$-space with one point deleted.[56] A large class of examples is given by the plane sections of an egg-like surface in real $3$-space. Homogeneous Möbius planes If the automorphism group $\Sigma $ of a Möbius plane is transitive on the point set $S$ or on the set ${\mathfrak {C}}$ of circles, or if $\dim \Sigma \geq 4$, then $(S,{\mathfrak {C}})$ is classical and $\dim \Sigma =6$, see.[57][58] In contrast to compact projective planes there are no topological Möbius planes with circles of dimension $>1$, in particular no compact Möbius planes with a $4$-dimensional point space.[59] All 2-dimensional Möbius planes such that $\dim \Sigma \geq 3$ can be described explicitly.[60][61] Laguerre planes The classical model of a Laguerre plane consists of a circular cylindrical surface $C$ in real $3$-space $\mathbb {R} ^{3}$ as point set and the compact plane sections of $C$ as circles. Pairs of points which are not joined by a circle are called parallel. Let $P$ denote a class of parallel points. Then $C\setminus P$ is a plane $\mathbb {R} ^{2}$, the circles can be represented in this plane by parabolas of the form $y=ax^{2}+bx+c$. 
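In this representation the three-point incidence property of a Laguerre plane becomes elementary linear algebra: three pairwise non-parallel points, i.e. three points of $C\setminus P$ with pairwise distinct first coordinates, lie on exactly one curve $y=ax^{2}+bx+c$ (with $a=0$ giving the degenerate straight-line case), since the corresponding Vandermonde system has a unique solution. A minimal computational sketch (added for illustration; the function name is not from the literature):

import numpy as np

def circle_through(p1, p2, p3):
    # Solve y = a*x**2 + b*x + c through three points with pairwise
    # distinct x-coordinates; the Vandermonde matrix is then invertible,
    # so (a, b, c) is unique.
    xs, ys = zip(p1, p2, p3)
    V = np.vander(xs, 3)            # rows are [x**2, x, 1]
    return np.linalg.solve(V, ys)

print(circle_through((0.0, 1.0), (1.0, 0.0), (2.0, 1.0)))  # a=1, b=-2, c=1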
In an analogous way, the classical $4$-dimensional Laguerre plane is related to the geometry of complex quadratic polynomials. In general, the axioms of a locally compact connected Laguerre plane require that the derived planes embed into compact projective planes of finite dimension. A circle not passing through the point of derivation induces an oval in the derived projective plane. By [62] or,[63] circles are homeomorphic to spheres of dimension $1$ or $2$. Hence the point space of a locally compact connected Laguerre plane is homeomorphic to the cylinder $C$ or it is a $4$-dimensional manifold, cf.[64] A large class of $2$-dimensional examples, called ovoidal Laguerre planes, is given by the plane sections of a cylinder in real 3-space whose base is an oval in $\mathbb {R} ^{2}$. The automorphism group of a $2d$-dimensional Laguerre plane ($d=1,2$) is a Lie group with respect to the topology of uniform convergence on compact subsets of the point space; furthermore, this group has dimension at most $7d$. All automorphisms of a Laguerre plane which fix each parallel class form a normal subgroup, the kernel of the full automorphism group. The $2$-dimensional Laguerre planes with $\dim \Sigma =5$ are exactly the ovoidal planes over proper skew parabolae.[65] The classical $2d$-dimensional Laguerre planes are the only ones such that $\dim \Sigma >5d$, see,[66] cf. also.[67] Homogeneous Laguerre planes If the automorphism group $\Sigma $ of a $2d$-dimensional Laguerre plane ${\mathcal {L}}$ is transitive on the set of parallel classes, and if the kernel $T\triangleleft \Sigma $ is transitive on the set of circles, then ${\mathcal {L}}$ is classical, see [68][67] 2.1,2. However, transitivity of the automorphism group on the set of circles does not suffice to characterize the classical model among the $2d$-dimensional Laguerre planes. Minkowski planes The classical model of a Minkowski plane has the torus $\mathbb {S} _{1}\times \mathbb {S} _{1}$ as point space, circles are the graphs of real fractional linear maps on $\mathbb {S} _{1}=\mathbb {R} \cup \{\infty \}$. As with Laguerre planes, the point space of a locally compact connected Minkowski plane is $1$- or $2$-dimensional; the point space is then homeomorphic to a torus or to $\mathbb {S} _{2}\times \mathbb {S} _{2}$, see.[69] Homogeneous Minkowski planes If the automorphism group $\Sigma $ of a Minkowski plane ${\mathcal {M}}$ of dimension $2d$ is flag-transitive, then ${\mathcal {M}}$ is classical.[70] The automorphism group of a $2d$-dimensional Minkowski plane is a Lie group of dimension at most $6d$. All $2$-dimensional Minkowski planes such that $\dim \Sigma \geq 4$ can be described explicitly.[71] The classical $2d$-dimensional Minkowski plane is the only one with $\dim \Sigma >4d$, see.[72] Notes 1. Grundhöfer & Löwen 1995 2. Skornyakov, L.A. (1954), "Topological projective planes", Trudy Moskov. Mat. Obschtsch., 3: 347–373 3. Hilbert 1899 4. Coxeter, H.S.M. (1993), The real projective plane, New York: Springer 5. Wyler, O. (1952), "Order and topology in projective planes", Amer. J. Math., 74 (3): 656–666, doi:10.2307/2372268, JSTOR 2372268 6. Conway, J.H.; Smith, D.A. (2003), On quaternions and octonions: their geometry, arithmetic, and symmetry, Natick, MA: A K Peters 7. Salzmann et al. 1995, §92 8. Engelking, R. (1978), Dimension theory, North-Holland Publ. Co. 9. Fedorchuk, V.V. (1990), "The fundamentals of dimension theory", Encycl. Math. Sci., Berlin: Springer, 17: 91–192 10. Salzmann 1967 11. Stroppel, M. 
(1998), "Bemerkungen zur ersten nicht desarguesschen ebenen Geometrie bei Hilbert", J. Geom., 63 (1–2): 183–195, doi:10.1007/bf01221248, S2CID 120078708 12. Moulton, F.R. (1902), "A simple non-Desarguesian plane geometry", Trans. Amer. Math. Soc., 3 (2): 192–195, doi:10.1090/s0002-9947-1902-1500595-3 13. Pickert 1955 14. Hessenberg, G. (1905), "Beweis des Desarguesschen Satzes aus dem Pascalschen", Math. Ann. (in German), 61 (2): 161–172, doi:10.1007/bf01457558, S2CID 120456855 15. Hughes, D.R.; Piper, F.C. (1973), Projective planes, Berlin: Springer 16. Salzmann et al. 1995, Chapter 3 17. Löwen 1983a 18. Salzmann et al. 1995, 54.11 19. Salzmann et al. 1995, Chapter 7 20. Betten, Dieter (1997), "On the classification of 4-dimensional flexible projective planes", Mostly finite geometries (Iowa City, IA, 1996), Lecture Notes in Pure and Applied Mathematics, vol. 190, New York: Dekker, pp. 9–33, doi:10.1017/CBO9780511665608, MR 1463975 21. Salzmann et al. 1995, 53.15 22. Salzmann, H. (2003), "Baer subplanes", Illinois J. Math., 47 (1–2): 485–513, doi:10.1215/ijm/1258488168 23. Salzmann et al. 1995, 55.6 24. Löwen, R. (1981), "Homogeneous compact projective planes", J. Reine Angew. Math., 321: 217–220 25. Salzmann et al. 1995, 63.8 26. Salzmann et al. 1995, 13.12 27. Salzmann et al. 1995, 72.8,84.28,85.16 28. Salzmann et al. 1995, 73.22,84.28,87.7 29. Hähl, H. (1986), "Achtdimensionale lokalkompakte Translationsebenen mit mindestens $17$-dimensionaler Kollineationsgruppe", Geom. Dedicata (in German), 21: 299–340, doi:10.1007/bf00181535, S2CID 116969491 30. Hähl, H. (2011), "Sixteen-dimensional locally compact translation planes with collineation groups of dimension at least $38$", Adv. Geom., 11: 371–380, doi:10.1515/advgeom.2010.046 31. Hähl, H. (2000), "Sixteen-dimensional locally compact translation planes with large automorphism groups having no fixed points", Geom. Dedicata, 83: 105–117, doi:10.1023/A:1005212813861, S2CID 128076685 32. Salzmann et al. 1995, §§73,74,82,86 33. Knarr 1995 34. Salzmann 2014 35. Hilbert 1899, §§22 36. Veblen, O.; Young, J.W. (1910), Projective Geometry Vol. I, Boston: Ginn Comp. 37. Kolmogoroff, A. (1932), "Zur Begründung der projektiven Geometrie", Ann. of Math. (in German), 33 (1): 175–176, doi:10.2307/1968111, JSTOR 1968111 38. Salzmann et al. 1995, §§3,4 39. Polster & Steinke 2001 40. Salzmann et al. 1995, 3.11 41. Salzmann et al. 1995, 3.28,29 42. Grundhöfer & Löwen 1995, 3.7 43. Stroppel, M. (1994), "Compact groups of automorphisms of stable planes", Forum Math., 6 (6): 339–359, doi:10.1515/form.1994.6.339, S2CID 53550190 44. Löwen 1983b. 45. Salzmann et al. 1995, 5.8 46. Salzmann 2014, 8.11,12 47. Salzmann et al. 1995, Chapters 7 and 8 48. Löwen, R. (1979), "Symmetric planes", Pacific J. Math., 84 (2): 367–390, doi:10.2140/pjm.1979.84.367 49. Grundhöfer & Löwen 1995, 5.26-31 50. Hofmann, K.H.; Kramer, L. (2015), "Transitive actions of locally compact groups on locally contractive spaces", J. Reine Angew. Math., 702: 227–243, 245/6 51. Löwen, R. (1979), "Classification of $4$-dimensional symmetric planes", Mathematische Zeitschrift, 167: 137–159, doi:10.1007/BF01215118, S2CID 123564207 52. Steinke 1995 53. Polster & Steinke 2001, §4 54. Steinke, G. (1983), "Locally classical Benz planes are classical", Mathematische Zeitschrift, 183: 217–220, doi:10.1007/bf01214821, S2CID 122877328 55. Wölk, D. (1966), "Topologische Möbiusebenen", Mathematische Zeitschrift (in German), 93: 311–333, doi:10.1007/BF01111942 56. Löwen, R.; Steinke, G.F. 
(2014), "The circle space of a spherical circle plane", Bull. Belg. Math. Soc. Simon Stevin, 21 (2): 351–364, doi:10.36045/bbms/1400592630 57. Strambach, K. (1970), "Sphärische Kreisebenen", Mathematische Zeitschrift (in German), 113: 266–292, doi:10.1007/bf01110328, S2CID 122982956 58. Steinke 1995, 3.2 59. Groh, H. (1973), "Möbius planes with locally euclidean circles are flat", Math. Ann., 201 (2): 149–156, doi:10.1007/bf01359792, S2CID 122256290 60. Strambach, K. (1972), "Sphärische Kreisebenen mit dreidimensionaler nichteinfacher Automorphismengruppe", Mathematische Zeitschrift (in German), 124: 289–314, doi:10.1007/bf01113922, S2CID 120716300 61. Strambach, K. (1973), "Sphärische Kreisebenen mit einfacher Automorphismengruppe'", Geom. Dedicata (in German), 1: 182–220, doi:10.1007/bf00147520, S2CID 123023992 62. Buchanan, T.; Hähl, H.; Löwen, R. (1980), "Topologische Ovale", Geom. Dedicata (in German), 9 (4): 401–424, doi:10.1007/bf00181558, S2CID 189889834 63. Salzmann et al. 1995, 55.14 64. Steibke 1995, 5.7 harvnb error: no target: CITEREFSteibke1995 (help) 65. Steinke 1995, 5.5 66. Steinke 1995, 5.4,8 67. Steinke, G.F. (2002), "$4$-dimensional elation Laguerre planes admitting non-solvable automorphism groups", Monatsh. Math., 136: 327–354, doi:10.1007/s006050200046, S2CID 121391952 68. Steinke, G.F. (1993), "$4$-dimensional point-transitive groups of automorphisms of $2$- dimensional Laguerre planes", Results in Mathematics, 24: 326–341, doi:10.1007/bf03322341, S2CID 123334384 69. Steinke 1991, 4.6 harvnb error: no target: CITEREFSteinke1991 (help) 70. Steinke, G.F. (1992), "$4$-dimensional Minkowski planes with large automorphism group", Forum Math., 4: 593–605, doi:10.1515/form.1992.4.593, S2CID 122970200 71. Polster & Steinke 2001, §4.4 72. Steinke 1995, 4.5 and 4.8 References • Grundhöfer, T.; Löwen, R. (1995), Buekenhout, F. (ed.), Handbook of incidence geometry: buildings and foundations, Amsterdam: North-Holland, pp. 1255–1324 • Hilbert, D. (1899), The foundations of geometry, translation by E. J. Townsend, 1902, Chicago • Knarr, N. (1995), Translation planes. Foundations and construction principles, Lecture Notes in Mathematics, vol. 1611, Berlin: Springer • Löwen, R. (1983a), "Topology and dimension of stable planes: On a conjecture of H. Freudenthal", J. Reine Angew. Math., 343: 108–122 • Löwen, R. (1983b), "Stable planes with isotropic points", Mathematische Zeitschrift, 182: 49–61, doi:10.1007/BF01162593, S2CID 117081501 • Pickert, G. (1955), Projektive Ebenen (in German), Berlin: Springer • Polster, B.; Steinke, G.F. (2001), Geometries on surfaces, Cambridge UP • Salzmann, H. (1967), "Topological planes", Advances in Mathematics, 2: 1–60, doi:10.1016/s0001-8708(67)80002-1 • Salzmann, H. (2014), Compact planes, mostly 8-dimensional. A retrospect, arXiv:1402.0304, Bibcode:2014arXiv1402.0304S • Salzmann, H.; Betten, D.; Grundhöfer, T.; Hähl, H.; Löwen, R.; Stroppel, M. (1995), Compact Projective Planes, W. de Gruyter • Steinke, G. (1995), "Topological circle geometries", Handbook of Incidence Geometry, Amsterdam: North-Holland: 1325–1354, doi:10.1016/B978-044488355-1/50026-8, ISBN 9780444883551
Glossary of topology This is a glossary of some terms used in the branch of mathematics known as topology. Although there is no absolute distinction between different areas of topology, the focus here is on general topology. The following definitions are also fundamental to algebraic topology, differential topology and geometric topology. All spaces in this glossary are assumed to be topological spaces unless stated otherwise. A Absolutely closed See H-closed Accessible See $T_{1}$. Accumulation point See limit point. Alexandrov topology The topology of a space X is an Alexandrov topology (or is finitely generated) if arbitrary intersections of open sets in X are open, or equivalently, if arbitrary unions of closed sets are closed, or, again equivalently, if the open sets are the upper sets of a poset.[1] Almost discrete A space is almost discrete if every open set is closed (hence clopen). The almost discrete spaces are precisely the finitely generated zero-dimensional spaces. α-closed, α-open A subset A of a topological space X is α-open if $A\subseteq \operatorname {Int} _{X}\left(\operatorname {Cl} _{X}\left(\operatorname {Int} _{X}A\right)\right)$, and the complement of such a set is α-closed.[2] Approach space An approach space is a generalization of metric space based on point-to-set distances, instead of point-to-point. B Baire space This has two distinct common meanings: 1. A space is a Baire space if the intersection of any countable collection of dense open sets is dense; see Baire space. 2. Baire space is the set of all functions from the natural numbers to the natural numbers, with the topology of pointwise convergence; see Baire space (set theory). Base A collection B of open sets is a base (or basis) for a topology $\tau $ if every open set in $\tau $ is a union of sets in $B$. The topology $\tau $ is the smallest topology on $X$ containing $B$ and is said to be generated by $B$. Basis See Base. β-open See Semi-preopen. b-open, b-closed A subset $A$ of a topological space $X$ is b-open if $A\subseteq \operatorname {Int} _{X}\left(\operatorname {Cl} _{X}A\right)\cup \operatorname {Cl} _{X}\left(\operatorname {Int} _{X}A\right)$. The complement of a b-open set is b-closed.[2] Borel algebra The Borel algebra on a topological space $(X,\tau )$ is the smallest $\sigma $-algebra containing all the open sets. It is obtained by taking the intersection of all $\sigma $-algebras on $X$ containing $\tau $. Borel set A Borel set is an element of a Borel algebra. Boundary The boundary (or frontier) of a set is the set's closure minus its interior. Equivalently, the boundary of a set is the intersection of its closure with the closure of its complement. The boundary of a set $A$ is denoted by $\partial A$ or $\operatorname {bd} A$. Bounded A set in a metric space is bounded if it has finite diameter. Equivalently, a set is bounded if it is contained in some open ball of finite radius. A function taking values in a metric space is bounded if its image is a bounded set. C Category of topological spaces The category Top has topological spaces as objects and continuous maps as morphisms. Cauchy sequence A sequence {xn} in a metric space (M, d) is a Cauchy sequence if, for every positive real number r, there is an integer N such that for all integers m, n > N, we have d(xm, xn) < r. Clopen set A set is clopen if it is both open and closed.
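A quick worked illustration of the boundary (using the closure and interior defined under their own entries below): in $\mathbb {R} $ with its usual topology, the set $A=[0,1)\cup \{2\}$ has closure $[0,1]\cup \{2\}$ and interior $(0,1)$, so its boundary is $\{0,1,2\}$; the same set is bounded, since it is contained in the open ball of radius 3 about 0.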
Closed ball If (M, d) is a metric space, a closed ball is a set of the form D(x; r) := {y in M : d(x, y) ≤ r}, where x is in M and r is a positive real number, the radius of the ball. A closed ball of radius r is a closed r-ball. Every closed ball is a closed set in the topology induced on M by d. Note that the closed ball D(x; r) might not be equal to the closure of the open ball B(x; r). Closed set A set is closed if its complement is a member of the topology. Closed function A function from one space to another is closed if the image of every closed set is closed. Closure The closure of a set is the smallest closed set containing the original set. It is equal to the intersection of all closed sets which contain it. An element of the closure of a set S is a point of closure of S. Closure operator See Kuratowski closure axioms. Coarser topology If X is a set, and if T1 and T2 are topologies on X, then T1 is coarser (or smaller, weaker) than T2 if T1 is contained in T2. Beware, some authors, especially analysts, use the term stronger. Comeagre A subset A of a space X is comeagre (comeager) if its complement X\A is meagre. Also called residual. Compact A space is compact if every open cover has a finite subcover. Every compact space is Lindelöf and paracompact. Therefore, every compact Hausdorff space is normal. See also quasicompact. Compact-open topology The compact-open topology on the set C(X, Y) of all continuous maps between two spaces X and Y is defined as follows: given a compact subset K of X and an open subset U of Y, let V(K, U) denote the set of all maps f in C(X, Y) such that f(K) is contained in U. Then the collection of all such V(K, U) is a subbase for the compact-open topology. Complete A metric space is complete if every Cauchy sequence converges. Completely metrizable/completely metrisable See complete space. Completely normal A space is completely normal if any two separated sets have disjoint neighbourhoods. Completely normal Hausdorff A completely normal Hausdorff space (or T5 space) is a completely normal T1 space. (A completely normal space is Hausdorff if and only if it is T1, so the terminology is consistent.) Every completely normal Hausdorff space is normal Hausdorff. Completely regular A space is completely regular if, whenever C is a closed set and x is a point not in C, then C and {x} are functionally separated. Completely T3 See Tychonoff. Component See Connected component/Path-connected component. Connected A space is connected if it is not the union of a pair of disjoint nonempty open sets. Equivalently, a space is connected if the only clopen sets are the whole space and the empty set. Connected component A connected component of a space is a maximal nonempty connected subspace. Each connected component is closed, and the set of connected components of a space is a partition of that space. Continuous A function from one space to another is continuous if the preimage of every open set is open. Continuum A space is called a continuum if it is a compact, connected Hausdorff space. Contractible A space X is contractible if the identity map on X is homotopic to a constant map. Every contractible space is simply connected. Coproduct topology If {Xi} is a collection of spaces and X is the (set-theoretic) disjoint union of {Xi}, then the coproduct topology (or disjoint union topology, topological sum of the Xi) on X is the finest topology for which all the injection maps are continuous.
Core-compact space Cosmic space A continuous image of some separable metric space.[3] Countable chain condition A space X satisfies the countable chain condition if every family of non-empty, pairwise disjoint open sets is countable. Countably compact A space is countably compact if every countable open cover has a finite subcover. Every countably compact space is pseudocompact and weakly countably compact. Countably locally finite A collection of subsets of a space X is countably locally finite (or σ-locally finite) if it is the union of a countable collection of locally finite collections of subsets of X. Cover A collection of subsets of a space is a cover (or covering) of that space if the union of the collection is the whole space. Covering See Cover. Cut point If X is a connected space with more than one point, then a point x of X is a cut point if the subspace X − {x} is disconnected. D δ-cluster point, δ-closed, δ-open A point x of a topological space X is a δ-cluster point of a subset A if $A\cap \operatorname {Int} _{X}\left(\operatorname {Cl} _{X}(U)\right)\neq \emptyset $ for every open neighborhood U of x in X. The subset A is δ-closed if it is equal to the set of its δ-cluster points, and δ-open if its complement is δ-closed.[4] Dense set A set is dense if it has nonempty intersection with every nonempty open set. Equivalently, a set is dense if its closure is the whole space. Dense-in-itself set A set is dense-in-itself if it has no isolated point. Density the minimal cardinality of a dense subset of a topological space. A space of density ℵ0 is a separable space.[5] Derived set If X is a space and S is a subset of X, the derived set of S in X is the set of limit points of S in X. Developable space A topological space with a development.[6] Development A countable collection of open covers of a topological space, such that for any closed set C and any point p in its complement there exists a cover in the collection such that every neighbourhood of p in the cover is disjoint from C.[6] Diameter If (M, d) is a metric space and S is a subset of M, the diameter of S is the supremum of the distances d(x, y), where x and y range over S. Discrete metric The discrete metric on a set X is the function d : X × X → R such that for all x, y in X, d(x, x) = 0 and d(x, y) = 1 if x ≠ y. The discrete metric induces the discrete topology on X. Discrete space A space X is discrete if every subset of X is open. We say that X carries the discrete topology.[7] Discrete topology See discrete space. Disjoint union topology See Coproduct topology. Dispersion point If X is a connected space with more than one point, then a point x of X is a dispersion point if the subspace X − {x} is hereditarily disconnected (its only connected components are the one-point sets). Distance See metric space. Dunce hat (topology) E Entourage See Uniform space. Exterior The exterior of a set is the interior of its complement. F Fσ set An Fσ set is a countable union of closed sets.[8] Filter See also: Filters in topology. A filter on a space X is a nonempty family F of subsets of X such that the following conditions hold: 1. The empty set is not in F. 2. The intersection of any finite number of elements of F is again in F. 3. If A is in F and if B contains A, then B is in F.
Final topology On a set X with respect to a family of functions into $X$, is the finest topology on X which makes those functions continuous.[9] Fine topology (potential theory) On Euclidean space $\mathbb {R} ^{n}$, the coarsest topology making all subharmonic functions (equivalently all superharmonic functions) continuous.[10] Finer topology If X is a set, and if T1 and T2 are topologies on X, then T2 is finer (or larger, stronger) than T1 if T2 contains T1. Beware, some authors, especially analysts, use the term weaker. Finitely generated See Alexandrov topology. First category See Meagre. First-countable A space is first-countable if every point has a countable local base. Fréchet See T1. Frontier See Boundary. Full set A compact subset K of the complex plane is called full if its complement is connected. For example, the closed unit disk is full, while the unit circle is not. Functionally separated Two sets A and B in a space X are functionally separated if there is a continuous map f: X  →  [0, 1] such that f(A) = 0 and f(B) = 1. G Gδ set A Gδ set or inner limiting set is a countable intersection of open sets.[8] Gδ space A space in which every closed set is a Gδ set.[8] Generic point A generic point for a closed set is a point for which the closed set is the closure of the singleton set containing that point.[11] H Hausdorff A Hausdorff space (or T2 space) is one in which every two distinct points have disjoint neighbourhoods. Every Hausdorff space is T1. H-closed A space is H-closed, or Hausdorff closed or absolutely closed, if it is closed in every Hausdorff space containing it. Hereditarily P A space is hereditarily P for some property P if every subspace is also P. Hereditary A property of spaces is said to be hereditary if whenever a space has that property, then so does every subspace of it.[12] For example, second-countability is a hereditary property. Homeomorphism If X and Y are spaces, a homeomorphism from X to Y is a bijective function f : X → Y such that f and f−1 are continuous. The spaces X and Y are then said to be homeomorphic. From the standpoint of topology, homeomorphic spaces are identical. Homogeneous A space X is homogeneous if, for every x and y in X, there is a homeomorphism f : X  →  X such that f(x) = y. Intuitively, the space looks the same at every point. Every topological group is homogeneous. Homotopic maps Two continuous maps f, g : X  →  Y are homotopic (in Y) if there is a continuous map H : X × [0, 1]  →  Y such that H(x, 0) = f(x) and H(x, 1) = g(x) for all x in X. Here, X × [0, 1] is given the product topology. The function H is called a homotopy (in Y) between f and g. Homotopy See Homotopic maps. Hyperconnected A space is hyperconnected if no two non-empty open sets are disjoint[13] Every hyperconnected space is connected.[13] I Identification map See Quotient map. Identification space See Quotient space. Indiscrete space See Trivial topology. Infinite-dimensional topology See Hilbert manifold and Q-manifolds, i.e. (generalized) manifolds modelled on the Hilbert space and on the Hilbert cube respectively. Inner limiting set A Gδ set.[8] Interior The interior of a set is the largest open set contained in the original set. It is equal to the union of all open sets contained in it. An element of the interior of a set S is an interior point of S. Interior point See Interior. Isolated point A point x is an isolated point if the singleton {x} is open. 
More generally, if S is a subset of a space X, and if x is a point of S, then x is an isolated point of S if {x} is open in the subspace topology on S. Isometric isomorphism If M1 and M2 are metric spaces, an isometric isomorphism from M1 to M2 is a bijective isometry f : M1 → M2. The metric spaces are then said to be isometrically isomorphic. From the standpoint of metric space theory, isometrically isomorphic spaces are identical. Isometry If (M1, d1) and (M2, d2) are metric spaces, an isometry from M1 to M2 is a function f : M1 → M2 such that d2(f(x), f(y)) = d1(x, y) for all x, y in M1. Every isometry is injective, although not every isometry is surjective. K Kolmogorov axiom See T0. Kuratowski closure axioms The Kuratowski closure axioms are a set of axioms satisfied by the function which takes each subset of X to its closure: 1. Extensivity: Every set is contained in its closure. 2. Idempotence: The closure of the closure of a set is equal to the closure of that set. 3. Preservation of binary unions: The closure of the union of two sets is the union of their closures. 4. Preservation of nullary unions: The closure of the empty set is empty. If c is a function from the power set of X to itself, then c is a closure operator if it satisfies the Kuratowski closure axioms. The Kuratowski closure axioms can then be used to define a topology on X by declaring the closed sets to be the fixed points of this operator, i.e. a set A is closed if and only if c(A) = A. Kolmogorov topology TKol = {R, $\varnothing $}∪{(a,∞): a is a real number}; the pair (R,TKol) is named Kolmogorov Straight. L L-space An L-space is a hereditarily Lindelöf space which is not hereditarily separable. A Suslin line would be an L-space.[14] Larger topology See Finer topology. Limit point A point x in a space X is a limit point of a subset S if every open set containing x also contains a point of S other than x itself. This is equivalent to requiring that every neighbourhood of x contains a point of S other than x itself. Limit point compact See Weakly countably compact. Lindelöf A space is Lindelöf if every open cover has a countable subcover. Local base A set B of neighbourhoods of a point x of a space X is a local base (or local basis, neighbourhood base, neighbourhood basis) at x if every neighbourhood of x contains some member of B. Local basis See Local base. Locally (P) space There are two definitions for a space to be "locally (P)" where (P) is a topological or set-theoretic property: that each point has a neighbourhood with property (P), or that every point has a neighbourhood base for which each member has property (P). The first definition is usually taken for locally compact, countably compact, metrizable, separable, countable; the second for locally connected.[15] Locally closed subset A subset of a topological space that is the intersection of an open and a closed subset. Equivalently, it is a relatively open subset of its closure. Locally compact A space is locally compact if every point has a compact neighbourhood: the alternative definition that each point has a local base consisting of compact neighbourhoods is sometimes used: these are equivalent for Hausdorff spaces.[15] Every locally compact Hausdorff space is Tychonoff. Locally connected A space is locally connected if every point has a local base consisting of connected neighbourhoods.[15] Locally dense see Preopen.
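A worked example for the Kuratowski closure axioms above (added for illustration): on an infinite set X, define c(A) = A when A is finite and c(A) = X when A is infinite. All four axioms are easily verified, and the fixed points of c — the closed sets of the induced topology — are exactly the finite sets together with X itself, so this closure operator generates the cofinite topology on X.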
Locally finite A collection of subsets of a space is locally finite if every point has a neighbourhood which has nonempty intersection with only finitely many of the subsets. See also countably locally finite, point finite. Locally metrizable/Locally metrisable A space is locally metrizable if every point has a metrizable neighbourhood.[15] Locally path-connected A space is locally path-connected if every point has a local base consisting of path-connected neighbourhoods.[15] A locally path-connected space is connected if and only if it is path-connected. Locally simply connected A space is locally simply connected if every point has a local base consisting of simply connected neighbourhoods. Loop If x is a point in a space X, a loop at x in X (or a loop in X with basepoint x) is a path f in X, such that f(0) = f(1) = x. Equivalently, a loop in X is a continuous map from the unit circle S1 into X. M Meagre If X is a space and A is a subset of X, then A is meagre in X (or of first category in X) if it is the countable union of nowhere dense sets. If A is not meagre in X, A is of second category in X.[16] Metacompact A space is metacompact if every open cover has a point finite open refinement. Metric See Metric space. Metric invariant A metric invariant is a property which is preserved under isometric isomorphism. Metric map If X and Y are metric spaces with metrics dX and dY respectively, then a metric map is a function f from X to Y, such that for any points x and y in X, dY(f(x), f(y)) ≤ dX(x, y). A metric map is strictly metric if the above inequality is strict for all x and y in X. Metric space A metric space (M, d) is a set M equipped with a function d : M × M → R satisfying the following axioms for all x, y, and z in M: 1. d(x, y) ≥ 0 2. d(x, x) = 0 3. if   d(x, y) = 0   then   x = y     (identity of indiscernibles) 4. d(x, y) = d(y, x)     (symmetry) 5. d(x, z) ≤ d(x, y) + d(y, z)     (triangle inequality) The function d is a metric on M, and d(x, y) is the distance between x and y. The collection of all open balls of M is a base for a topology on M; this is the topology on M induced by d. Every metric space is Hausdorff and paracompact (and hence normal and Tychonoff). Every metric space is first-countable. Metrizable/Metrisable A space is metrizable if it is homeomorphic to a metric space. Every metrizable space is Hausdorff and paracompact (and hence normal and Tychonoff). Every metrizable space is first-countable. Monolith Every non-empty ultra-connected compact space X has a largest proper open subset; this subset is called a monolith. Moore space A Moore space is a developable regular Hausdorff space.[6] N Nearly open see preopen. Neighbourhood/Neighborhood A neighbourhood of a point x is a set containing an open set which in turn contains the point x. More generally, a neighbourhood of a set S is a set containing an open set which in turn contains the set S. A neighbourhood of a point x is thus a neighbourhood of the singleton set {x}. (Note that under this definition, the neighbourhood itself need not be open. Many authors require that neighbourhoods be open; be careful to note conventions.) Neighbourhood base/basis See Local base. Neighbourhood system for a point x A neighbourhood system at a point x in a space is the collection of all neighbourhoods of x. Net A net in a space X is a map from a directed set A to X. A net from A to X is usually denoted (xα), where α is an index variable ranging over A. 
Every sequence is a net, taking A to be the directed set of natural numbers with the usual ordering. Normal A space is normal if any two disjoint closed sets have disjoint neighbourhoods.[8] Every normal space admits a partition of unity. Normal Hausdorff A normal Hausdorff space (or T4 space) is a normal T1 space. (A normal space is Hausdorff if and only if it is T1, so the terminology is consistent.) Every normal Hausdorff space is Tychonoff. Nowhere dense A nowhere dense set is a set whose closure has empty interior. O Open cover An open cover is a cover consisting of open sets.[6] Open ball If (M, d) is a metric space, an open ball is a set of the form B(x; r) := {y in M : d(x, y) < r}, where x is in M and r is a positive real number, the radius of the ball. An open ball of radius r is an open r-ball. Every open ball is an open set in the topology on M induced by d. Open condition See open property. Open set An open set is a member of the topology. Open function A function from one space to another is open if the image of every open set is open. Open property A property of points in a topological space is said to be "open" if those points which possess it form an open set. Such conditions often take a common form, and that form can be said to be an open condition; for example, in metric spaces, one defines an open ball as above, and says that "strict inequality is an open condition". P Paracompact A space is paracompact if every open cover has a locally finite open refinement. Paracompact implies metacompact.[17] Paracompact Hausdorff spaces are normal.[18] Partition of unity A partition of unity of a space X is a set of continuous functions from X to [0, 1] such that any point has a neighbourhood where all but a finite number of the functions are identically zero, and the sum of all the functions on the entire space is identically 1. Path A path in a space X is a continuous map f from the closed unit interval [0, 1] into X. The point f(0) is the initial point of f; the point f(1) is the terminal point of f.[13] Path-connected A space X is path-connected if, for every two points x, y in X, there is a path f from x to y, i.e., a path with initial point f(0) = x and terminal point f(1) = y. Every path-connected space is connected.[13] Path-connected component A path-connected component of a space is a maximal nonempty path-connected subspace. The set of path-connected components of a space is a partition of that space, which is finer than the partition into connected components.[13] The set of path-connected components of a space X is denoted π0(X). Perfectly normal A normal space in which every closed set is a Gδ set.[8] π-base A collection B of nonempty open sets is a π-base for a topology τ if every nonempty open set in τ includes a set from B.[19] Point A point is an element of a topological space. More generally, a point is an element of any set with an underlying topological structure; e.g. an element of a metric space or a topological group is also a "point". Point of closure See Closure. Polish A space is Polish if it is separable and completely metrizable, i.e. if it is homeomorphic to a separable and complete metric space. Polyadic A space is polyadic if it is the continuous image of the power of a one-point compactification of a locally compact, non-compact Hausdorff space. P-point A point of a topological space is a P-point if its filter of neighbourhoods is closed under countable intersections. Pre-compact See Relatively compact.
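As a concrete illustration of the Partition of unity entry above (a standard example, not part of the original glossary): the "tent" functions $\varphi _{n}(x)=\max\{0,\,1-|x-n|\}$, for $n$ ranging over the integers, form a partition of unity of the real line. Each $\varphi _{n}$ is continuous with values in [0, 1]; every point has a neighbourhood on which all but at most three of the $\varphi _{n}$ are identically zero, since $\varphi _{n}$ vanishes outside $(n-1,n+1)$; and for $x\in [n,n+1]$ one has $\varphi _{n}(x)+\varphi _{n+1}(x)=(1-(x-n))+(x-n)=1$, so the sum of all the functions is identically 1.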
Pre-open set A subset A of a topological space X is preopen if $A\subseteq \operatorname {Int} _{X}\left(\operatorname {Cl} _{X}A\right)$.[4] Prodiscrete topology The prodiscrete topology on a product AG is the product topology when each factor A is given the discrete topology.[20] Product topology If $\left(X_{i}\right)$ is a collection of spaces and X is the (set-theoretic) Cartesian product of $\left(X_{i}\right),$ then the product topology on X is the coarsest topology for which all the projection maps are continuous. Proper function/mapping A continuous function f from a space X to a space Y is proper if $f^{-1}(C)$ is a compact set in X for any compact subspace C of Y. Proximity space A proximity space (X, d) is a set X equipped with a binary relation d between subsets of X satisfying the following properties: For all subsets A, B and C of X, 1. A d B implies B d A 2. A d B implies A is non-empty 3. If A and B have non-empty intersection, then A d B 4. A d (B $\cup $ C) if and only if (A d B or A d C) 5. If, for all subsets E of X, we have (A d E or B d E), then we must have A d (X − B) Pseudocompact A space is pseudocompact if every real-valued continuous function on the space is bounded. Pseudometric See Pseudometric space. Pseudometric space A pseudometric space (M, d) is a set M equipped with a real-valued function $d:M\times M\to \mathbb {R} $ satisfying all the conditions of a metric space, except possibly the identity of indiscernibles. That is, points in a pseudometric space may be "infinitely close" without being identical. The function d is a pseudometric on M. Every metric is a pseudometric. Punctured neighbourhood/Punctured neighborhood A punctured neighbourhood of a point x is a neighbourhood of x, minus {x}. For instance, the interval (−1, 1) = {y : −1 < y < 1} is a neighbourhood of x = 0 in the real line, so the set $(-1,0)\cup (0,1)=(-1,1)-\{0\}$ is a punctured neighbourhood of 0. Q Quasicompact See compact. Some authors define "compact" to include the Hausdorff separation axiom, and they use the term quasicompact to mean what we call in this glossary simply "compact" (without the Hausdorff axiom). This convention is most commonly found in French, and branches of mathematics heavily influenced by the French. Quotient map If X and Y are spaces, and if f is a surjection from X to Y, then f is a quotient map (or identification map) if, for every subset U of Y, U is open in Y if and only if $f^{-1}(U)$ is open in X. In other words, Y has the f-strong topology. Equivalently, $f$ is a quotient map if and only if it is the transfinite composition of maps $X\rightarrow X/Z$, where $Z\subset X$ is a subset. Note that this does not imply that f is an open function. Quotient space If X is a space, Y is a set, and f : X → Y is any surjective function, then the quotient topology on Y induced by f is the finest topology for which f is continuous. The space Y is a quotient space or identification space. By definition, f is a quotient map. The most common example of this is to consider an equivalence relation on X, with Y the set of equivalence classes and f the natural projection map. This construction is dual to the construction of the subspace topology. R Refinement A cover K is a refinement of a cover L if every member of K is a subset of some member of L. Regular A space is regular if, whenever C is a closed set and x is a point not in C, then C and x have disjoint neighbourhoods. Regular Hausdorff A space is regular Hausdorff (or T3) if it is a regular T0 space.
(A regular space is Hausdorff if and only if it is T0, so the terminology is consistent.) Regular open A subset of a space X is regular open if it equals the interior of its closure; dually, a regular closed set is equal to the closure of its interior.[21] An example of a non-regular open set is the set U = (0,1) ∪ (1,2) in R with its normal topology, since 1 is in the interior of the closure of U, but not in U. The regular open subsets of a space form a complete Boolean algebra.[21] Relatively compact A subset Y of a space X is relatively compact in X if the closure of Y in X is compact. Residual If X is a space and A is a subset of X, then A is residual in X if the complement of A is meagre in X. Also called comeagre or comeager. Resolvable A topological space is called resolvable if it is expressible as the union of two disjoint dense subsets. Rim-compact A space is rim-compact if it has a base of open sets whose boundaries are compact. S S-space An S-space is a hereditarily separable space which is not hereditarily Lindelöf.[14] Scattered A space X is scattered if every nonempty subset A of X contains a point isolated in A. Scott The Scott topology on a poset is that in which the open sets are those Upper sets inaccessible by directed joins.[22] Second category See Meagre. Second-countable A space is second-countable or perfectly separable if it has a countable base for its topology.[8] Every second-countable space is first-countable, separable, and Lindelöf. Semilocally simply connected A space X is semilocally simply connected if, for every point x in X, there is a neighbourhood U of x such that every loop at x in U is homotopic in X to the constant loop x. Every simply connected space and every locally simply connected space is semilocally simply connected. (Compare with locally simply connected; here, the homotopy is allowed to live in X, whereas in the definition of locally simply connected, the homotopy must live in U.) Semi-open A subset A of a topological space X is called semi-open if $A\subseteq \operatorname {Cl} _{X}\left(\operatorname {Int} _{X}A\right)$.[23] Semi-preopen A subset A of a topological space X is called semi-preopen if $A\subseteq \operatorname {Cl} _{X}\left(\operatorname {Int} _{X}\left(\operatorname {Cl} _{X}A\right)\right)$[2] Semiregular A space is semiregular if the regular open sets form a base. Separable A space is separable if it has a countable dense subset.[8][16] Separated Two sets A and B are separated if each is disjoint from the other's closure. Sequentially compact A space is sequentially compact if every sequence has a convergent subsequence. Every sequentially compact space is countably compact, and every first-countable, countably compact space is sequentially compact. Short map See metric map Simply connected A space is simply connected if it is path-connected and every loop is homotopic to a constant map. Smaller topology See Coarser topology. Sober In a sober space, every irreducible closed subset is the closure of exactly one point: that is, has a unique generic point.[24] Star The star of a point in a given cover of a topological space is the union of all the sets in the cover that contain the point. See star refinement. $f$-Strong topology Let $f\colon X\rightarrow Y$ be a map of topological spaces. We say that $Y$ has the $f$-strong topology if, for every subset $U\subset Y$, one has that $U$ is open in $Y$ if and only if $f^{-1}(U)$ is open in $X$ Stronger topology See Finer topology. 
Beware, some authors, especially analysts, use the term weaker topology. Subbase A collection of open sets is a subbase (or subbasis) for a topology if every non-empty proper open set in the topology is a union of finite intersections of sets in the subbase. If ${\mathcal {B}}$ is any collection of subsets of a set X, the topology on X generated by ${\mathcal {B}}$ is the smallest topology containing ${\mathcal {B}};$ this topology consists of the empty set, X and all unions of finite intersections of elements of ${\mathcal {B}}.$ Thus ${\mathcal {B}}$ is a subbase for the topology it generates. Subbasis See Subbase. Subcover A cover K is a subcover (or subcovering) of a cover L if every member of K is a member of L. Subcovering See Subcover. Submaximal space A topological space is said to be submaximal if every subset of it is locally closed, that is, every subset is the intersection of an open set and a closed set. Here are some facts about submaximality as a property of topological spaces: • Every door space is submaximal. • Every submaximal space is weakly submaximal, viz. every finite set is locally closed. • Every submaximal space is irresolvable.[25] Subspace If T is a topology on a space X, and if A is a subset of X, then the subspace topology on A induced by T consists of all intersections of open sets in T with A. This construction is dual to the construction of the quotient topology. T T0 A space is T0 (or Kolmogorov) if for every pair of distinct points x and y in the space, either there is an open set containing x but not y, or there is an open set containing y but not x. T1 A space is T1 (or Fréchet or accessible) if for every pair of distinct points x and y in the space, there is an open set containing x but not y. (Compare with T0; here, we are allowed to specify which point will be contained in the open set.) Equivalently, a space is T1 if all its singletons are closed. Every T1 space is T0. T2 See Hausdorff space. T3 See Regular Hausdorff. T3½ See Tychonoff space. T4 See Normal Hausdorff. T5 See Completely normal Hausdorff. Top See Category of topological spaces. θ-cluster point, θ-closed, θ-open A point x of a topological space X is a θ-cluster point of a subset A if $A\cap \operatorname {Cl} _{X}(U)\neq \emptyset $ for every open neighborhood U of x in X. The subset A is θ-closed if it is equal to the set of its θ-cluster points, and θ-open if its complement is θ-closed.[23] Topological invariant A topological invariant is a property which is preserved under homeomorphism. For example, compactness and connectedness are topological properties, whereas boundedness and completeness are not. Algebraic topology is the study of topologically invariant abstract algebra constructions on topological spaces. Topological space A topological space (X, T) is a set X equipped with a collection T of subsets of X satisfying the following axioms: 1. The empty set and X are in T. 2. The union of any collection of sets in T is also in T. 3. The intersection of any pair of sets in T is also in T. The collection T is a topology on X. Topological sum See Coproduct topology. Topologically complete Completely metrizable spaces (i.e. topological spaces homeomorphic to complete metric spaces) are often called topologically complete; sometimes the term is also used for Čech-complete spaces or completely uniformizable spaces. Topology See Topological space. Totally bounded A metric space M is totally bounded if, for every r > 0, there exists a finite cover of M by open balls of radius r.
A metric space is compact if and only if it is complete and totally bounded. Totally disconnected A space is totally disconnected if it has no connected subset with more than one point. Trivial topology The trivial topology (or indiscrete topology) on a set X consists of precisely the empty set and the entire space X. Tychonoff A Tychonoff space (or completely regular Hausdorff space, completely T3 space, T3.5 space) is a completely regular T0 space. (A completely regular space is Hausdorff if and only if it is T0, so the terminology is consistent.) Every Tychonoff space is regular Hausdorff. U Ultra-connected A space is ultra-connected if no two non-empty closed sets are disjoint.[13] Every ultra-connected space is path-connected. Ultrametric A metric is an ultrametric if it satisfies the following stronger version of the triangle inequality: for all x, y, z in M, d(x, z) ≤ max(d(x, y), d(y, z)). Uniform isomorphism If X and Y are uniform spaces, a uniform isomorphism from X to Y is a bijective function f : X → Y such that f and f−1 are uniformly continuous. The spaces are then said to be uniformly isomorphic and share the same uniform properties. Uniformizable/Uniformisable A space is uniformizable if it is homeomorphic to a uniform space. Uniform space A uniform space is a set X equipped with a nonempty collection Φ of subsets of the Cartesian product X × X satisfying the following axioms: 1. if U is in Φ, then U contains { (x, x) | x in X }. 2. if U is in Φ, then { (y, x) | (x, y) in U } is also in Φ 3. if U is in Φ and V is a subset of X × X which contains U, then V is in Φ 4. if U and V are in Φ, then U ∩ V is in Φ 5. if U is in Φ, then there exists V in Φ such that, whenever (x, y) and (y, z) are in V, then (x, z) is in U. The elements of Φ are called entourages, and Φ itself is called a uniform structure on X. The uniform structure induces a topology on X where the basic neighborhoods of x are sets of the form {y : (x,y)∈U} for U∈Φ. Uniform structure See Uniform space. W Weak topology The weak topology on a set, with respect to a collection of functions from that set into topological spaces, is the coarsest topology on the set which makes all the functions continuous. Weaker topology See Coarser topology. Beware, some authors, especially analysts, use the term stronger topology. Weakly countably compact A space is weakly countably compact (or limit point compact) if every infinite subset has a limit point. Weakly hereditary A property of spaces is said to be weakly hereditary if whenever a space has that property, then so does every closed subspace of it. For example, compactness and the Lindelöf property are both weakly hereditary properties, although neither is hereditary. Weight The weight of a space X is the smallest cardinal number κ such that X has a base of cardinal κ. (Note that such a cardinal number exists, because the entire topology forms a base, and because the class of cardinal numbers is well-ordered.) Well-connected See Ultra-connected. (Some authors use this term strictly for ultra-connected compact spaces.) Z Zero-dimensional A space is zero-dimensional if it has a base of clopen sets.[26] See also • Naive set theory, Axiomatic set theory, and Function for definitions concerning sets and functions. 
• Topology for a brief history and description of the subject area • Topological spaces for basic definitions and examples • List of general topology topics • List of examples in general topology Topology specific concepts • Compact space • Connected space • Continuity • Metric space • Separated sets • Separation axiom • Topological space • Uniform space Other glossaries • Glossary of algebraic topology • Glossary of differential geometry and topology • Glossary of areas of mathematics • Glossary of Riemannian and metric geometry References 1. Vickers (1989) p.22 2. Hart, Nagata & Vaughan 2004, p. 9. 3. Deza, Michel Marie; Deza, Elena (2012). Encyclopedia of Distances. Springer-Verlag. p. 64. ISBN 978-3642309588. 4. Hart, Nagata & Vaughan 2004, pp. 8–9. 5. Nagata (1985) p.104 6. Steen & Seebach (1978) p.163 7. Steen & Seebach (1978) p.41 8. Steen & Seebach (1978) p.162 9. Willard, Stephen (1970). General Topology. Addison-Wesley Series in Mathematics. Reading, MA: Addison-Wesley. ISBN 9780201087079. Zbl 0205.26601. 10. Conway, John B. (1995). Functions of One Complex Variable II. Graduate Texts in Mathematics. Vol. 159. Springer-Verlag. pp. 367–376. ISBN 0-387-94460-5. Zbl 0887.30003. 11. Vickers (1989) p.65 12. Steen & Seebach p.4 13. Steen & Seebach (1978) p.29 14. Gabbay, Dov M.; Kanamori, Akihiro; Woods, John Hayden, eds. (2012). Sets and Extensions in the Twentieth Century. Elsevier. p. 290. ISBN 978-0444516213. 15. Hart et al (2004) p.65 16. Steen & Seebach (1978) p.7 17. Steen & Seebach (1978) p.23 18. Steen & Seebach (1978) p.25 19. Hart, Nagata, Vaughan Sect. d-22, page 227 20. Ceccherini-Silberstein, Tullio; Coornaert, Michel (2010). Cellular automata and groups. Springer Monographs in Mathematics. Berlin: Springer-Verlag. p. 3. ISBN 978-3-642-14033-4. Zbl 1218.37004. 21. Steen & Seebach (1978) p.6 22. Vickers (1989) p.95 23. Hart, Nagata & Vaughan 2004, p. 8. 24. Vickers (1989) p.66 25. Miroslav Hušek; J. van Mill (2002), Recent progress in general topology, Recent Progress in General Topology, vol. 2, Elsevier, p. 21, ISBN 0-444-50980-1 26. Steen & Seebach (1978) p.33 • Hart, Klaas Pieter; Nagata, Jun-iti; Vaughan, Jerry E. (2004). Encyclopedia of general topology. Elsevier. ISBN 978-0-444-50355-8. • Kunen, Kenneth; Vaughan, Jerry E., eds. (1984). Handbook of Set-Theoretic Topology. North-Holland. ISBN 0-444-86580-2. • Nagata, Jun-iti (1985). Modern general topology. North-Holland Mathematical Library. Vol. 33 (2nd revised ed.). Amsterdam-New York-Oxford: North-Holland. ISBN 0080933793. Zbl 0598.54001. • Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1978). Counterexamples in Topology (Dover reprint of 1978 ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-486-68735-3. MR 0507446. • Vickers, Steven (1989). Topology via Logic. Cambridge Tracts in Theoretic Computer Science. Vol. 5. ISBN 0-521-36062-5. Zbl 0668.54001. • Willard, Stephen (1970). General Topology. Addison-Wesley Series in Mathematics. Reading, MA: Addison-Wesley. ISBN 978-0-201-08707-9. Zbl 0205.26601. Also available as Dover reprint. 
External links • A glossary of definitions in topology
Topological derivative The topological derivative is, conceptually, a derivative of a shape functional with respect to infinitesimal changes in its topology, such as adding an infinitesimal hole or crack. In dimensions higher than one, the term topological gradient is also used to name the first-order term of the topological asymptotic expansion, dealing only with infinitesimal singular domain perturbations. It has applications in shape optimization, topology optimization, image processing and mechanical modeling. Definition Let $\Omega $ be an open bounded domain of $\mathbb {R} ^{d}$, with $d\geq 2$, which is subject to a nonsmooth perturbation confined in a small region $\omega _{\varepsilon }({\tilde {x}})={\tilde {x}}+\varepsilon \omega $ of size $\varepsilon $ with ${\tilde {x}}$ an arbitrary point of $\Omega $ and $\omega $ a fixed domain of $\mathbb {R} ^{d}$. Let $\Psi $ be a characteristic function associated to the unperturbed domain and $\Psi _{\varepsilon }$ be a characteristic function associated to the perforated domain $\Omega _{\varepsilon }=\Omega \backslash {\overline {\omega _{\varepsilon }}}$. A given shape functional $\Phi (\Psi _{\varepsilon }({\tilde {x}}))$ associated to the topologically perturbed domain admits the following topological asymptotic expansion: $\Phi (\Psi _{\varepsilon }({\tilde {x}}))=\Phi (\Psi )+f(\varepsilon )g({\tilde {x}})+o(f(\varepsilon ))$ where $\Phi (\Psi )$ is the shape functional associated to the reference domain, $f(\varepsilon )$ is a positive first-order correction function of $\Phi (\Psi )$ and $o(f(\varepsilon ))$ is the remainder. The function $g({\tilde {x}})$ is called the topological derivative of $\Phi $ at ${\tilde {x}}$. Applications Structural mechanics The topological derivative can be applied to shape optimization problems in structural mechanics.[1] The topological derivative can be considered as the singular limit of the shape derivative. It is a generalization of this classical tool in shape optimization.[2] Shape optimization concerns itself with finding an optimal shape. That is, find $\Omega $ to minimize some scalar-valued objective function, $J(\Omega )$. The topological derivative technique can be coupled with the level-set method.[3] In 2005, the topological asymptotic expansion for the Laplace equation with respect to the insertion of a short crack inside a plane domain was found. It makes it possible to detect and locate cracks for a simple model problem: the steady-state heat equation with the heat flux imposed and the temperature measured on the boundary.[4] The topological derivative has been developed for a wide range of second-order differential operators, and in 2011 it was applied to the Kirchhoff plate bending problem, which involves a fourth-order operator.[5] Image processing In the field of image processing, in 2006 the topological derivative was used to perform edge detection and image restoration. The impact of an insulating crack in the domain is studied. The topological sensitivity gives information on the image edges.
The resulting algorithm is non-iterative and, thanks to the use of spectral methods, has a short computing time.[6] Only $O(N\log N)$ operations are needed to detect edges, where $N$ is the number of pixels.[7] During the following years, other problems were considered: classification, segmentation, inpainting and super-resolution.[7][8][9][10][11] This approach can be applied to gray-level or color images.[12] Until 2010, isotropic diffusion was used for image reconstructions. The topological gradient is also able to provide edge orientation and this information can be used to perform anisotropic diffusion.[13] In 2012, a general framework was presented to reconstruct an image $u\in L^{2}(\Omega )$ given some noisy observations $Lu+n$ in a Hilbert space $E$, where $\Omega $ is the domain where the image $u$ is defined.[11] The observation space $E$ depends on the specific application as well as the linear observation operator $L:L^{2}(\Omega )\rightarrow E$. The norm on the space $E$ is $\|.\|_{E}$. The idea to recover the original image is to minimize the following functional for $u\in H^{1}(\Omega )$: $\|C^{1/2}\nabla u\|_{L^{2}(\Omega )}^{2}+\|Lu-v\|_{E}^{2}$ where $C$ is a positive definite tensor. The first term of the equation ensures that the recovered image $u$ is regular, and the second term measures the discrepancy with the data. In this general framework, different types of image reconstruction can be performed, such as[11] • image denoising with $E=L^{2}(\Omega )$ and $Lu=u$, • image denoising and deblurring with $E=L^{2}(\Omega )$ and $Lu=\phi \ast u$ with $\phi $ a motion blur or Gaussian blur, • image inpainting with $E=L^{2}(\Omega \backslash \omega )$ and $Lu=u|_{\Omega \backslash \omega }$, where the subset $\omega \subset \Omega $ is the region where the image has to be recovered. (A small numerical sketch of the denoising case is given after the reference list below.) In this framework, the asymptotic expansion of the cost function $J_{\Omega }(u_{\Omega })={\frac {1}{2}}\int _{\Omega }u_{\Omega }^{2}$ in the case of a crack provides the same topological derivative $g(x,n)=-\pi c(\nabla u_{0}.n)(\nabla p_{0}.n)-\pi (\nabla u_{0}.n)^{2}$ where $n$ is the normal to the crack and $c$ a constant diffusion coefficient. The functions $u_{0}$ and $p_{0}$ are solutions of the following direct and adjoint problems.[11] $-\nabla (c\nabla u_{0})+L^{*}Lu_{0}=L^{*}v$ in $\Omega $ and $\partial _{n}u_{0}=0$ on $\partial \Omega $ $-\nabla (c\nabla p_{0})+L^{*}Lp_{0}=\Delta u_{0}$ in $\Omega $ and $\partial _{n}p_{0}=0$ on $\partial \Omega $ Thanks to the topological gradient, it is possible to detect the edges and their orientation and to define an appropriate $C$ for the image reconstruction process.[11] In image processing, the topological derivatives have also been studied in the case of multiplicative gamma-law noise or in the presence of Poisson statistics.[14] Inverse problems In 2009, the topological gradient method was applied to tomographic reconstruction.[15] The coupling between the topological derivative and the level-set method has also been investigated in this application.[16] References 1. J. Sokolowski and A. Zochowski, "On topological derivative in shape optimization", 1997 2. Topological Derivatives in Shape Optimization, Jan Sokołowski, May 28, 2012. Retrieved November 9, 2012 3. G. Allaire, F. Jouve, Coupling the level set method and the topological gradient in structural optimization, in IUTAM symposium on topological design optimization of structures, machines and materials, M. Bendsoe et al. eds., pp. 3–12, Springer (2006). 4. S. Amstutz, I.
Horchani, and M. Masmoudi. Crack detection by the topological gradient method. Control and Cybernetics, 34(1):81–101, 2005. 5. S. Amstutz, A.A. Novotny, Topological asymptotic analysis of the Kirchhoff plate bending problem. ESAIM: COCV 17(3), pp. 705-721, 2011 6. L. J. Belaid, M. Jaoua, M. Masmoudi, and L. Siala. Image restoration and edge detection by topological asymptotic expansion. CRAS Paris, 342(5):313–318, March 2006. 7. D. Auroux and M. Masmoudi. Image processing by topological asymptotic analysis. ESAIM: Proc. Mathematical methods for imaging and inverse problems, 26:24–44, April 2009. 8. D. Auroux, M. Masmoudi, and L. Jaafar Belaid. Image restoration and classification by topological asymptotic expansion, pp. 23–42, Variational Formulations in Mechanics: Theory and Applications, E. Taroco, E.A. de Souza Neto and A.A. Novotny (Eds), CIMNE, Barcelona, Spain, 2007. 9. D. Auroux and M. Masmoudi. A one-shot inpainting algorithm based on the topological asymptotic analysis. Computational and Applied Mathematics, 25(2-3):251–267, 2006. 10. D. Auroux and M. Masmoudi. Image processing by topological asymptotic expansion. J. Math. Imaging Vision, 33(2):122–134, February 2009. 11. S. Larnier, J. Fehrenbach and M. Masmoudi, The topological gradient method: From optimal design to image processing, Milan Journal of Mathematics, vol. 80, issue 2, pp. 411–441, December 2012. 12. D. Auroux, L. Jaafar Belaid, and B. Rjaibi. Application of the topological gradient method to color image restoration. SIAM J. Imaging Sci., 3(2):153–175, 2010. 13. S. Larnier and J. Fehrenbach. Edge detection and image restoration with anisotropic topological gradient. In 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 1362–1365, March 2010. 14. A. Drogoul, G. Aubert, The topological gradient method for semi-linear problems and application to edge detection and noise removal. 15. D. Auroux, L. Jaafar Belaid, and B. Rjaibi. Application of the topological gradient method to tomography. In ARIMA Proc. TamTam'09, 2010. 16. T. Rymarczyk, P. Tchórzewski, J. Sikora, Topological Approach to Image Reconstruction in Electrical Impedance Tomography, ADVCOMP 2014 : The Eighth International Conference on Advanced Engineering Computing and Applications in Science Books A. A. Novotny and J. Sokolowski, Topological derivatives in shape optimization, Springer, 2013. External links • Allaire and al. Structural optimization using topological and shape sensitivity via a level set method
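To make the denoising case above concrete, here is a minimal numerical sketch (Python/NumPy; an illustration, not code from the cited works). With $E=L^{2}(\Omega )$, $Lu=u$ and a constant scalar $C=c>0$, minimizing the functional leads to the linear equation $-c\,\Delta u+u=v$ with homogeneous Neumann boundary conditions; below it is discretized by finite differences on a one-dimensional signal with unit grid spacing. The test signal and the value c = 5.0 are arbitrary choices.

import numpy as np

def denoise_1d(v, c):
    # Solve -c u'' + u = v with zero-flux (Neumann) boundary conditions.
    n = len(v)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * c
        if i > 0:
            A[i, i - 1] = -c
        if i < n - 1:
            A[i, i + 1] = -c
    # Neumann boundaries: reflect the missing neighbour.
    A[0, 0] = 1.0 + c
    A[n - 1, n - 1] = 1.0 + c
    return np.linalg.solve(A, v)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
clean = np.where(x < 0.5, 0.0, 1.0)                  # a one-dimensional "image" with a single edge
noisy = clean + 0.1 * rng.standard_normal(x.size)
denoised = denoise_1d(noisy, c=5.0)
# Mean absolute error before and after denoising; the second value is smaller.
print(float(np.abs(noisy - clean).mean()), float(np.abs(denoised - clean).mean()))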
Topological graph In mathematics, a topological graph is a representation of a graph in the plane, where the vertices of the graph are represented by distinct points and the edges by Jordan arcs (connected pieces of Jordan curves) joining the corresponding pairs of points. The points representing the vertices of a graph and the arcs representing its edges are called the vertices and the edges of the topological graph. It is usually assumed that any two edges of a topological graph cross a finite number of times, no edge passes through a vertex different from its endpoints, and no two edges touch each other (without crossing). A topological graph is also called a drawing of a graph. This article is about graphs embedded in the plane, with crossings. For graph embeddings on other surfaces, see topological graph theory. An important special class of topological graphs is the class of geometric graphs, where the edges are represented by line segments. (The term geometric graph is sometimes used in a broader, somewhat vague sense.) The theory of topological graphs is an area of graph theory, mainly concerned with combinatorial properties of topological graphs, in particular, with the crossing patterns of their edges. It is closely related to graph drawing, a field which is more application oriented, and topological graph theory, which focuses on embeddings of graphs in surfaces (that is, drawings without crossings). Extremal problems for topological graphs A fundamental problem in extremal graph theory is the following: what is the maximum number of edges that a graph of n vertices can have if it contains no subgraph belonging to a given class of forbidden subgraphs? The prototype of such results is Turán's theorem, where there is one forbidden subgraph: a complete graph with k vertices (k is fixed). Analogous questions can be raised for topological and geometric graphs, with the difference that now certain geometric subconfigurations are forbidden. Historically, the first instance of such a theorem is due to Paul Erdős, who extended a result of Heinz Hopf and Erika Pannwitz.[2] He proved that the maximum number of edges that a geometric graph with n > 2 vertices can have without containing two disjoint edges (that cannot even share an endpoint) is n. John Conway conjectured that this statement can be generalized to simple topological graphs. A topological graph is called "simple" if any pair of its edges share at most one point, which is either an endpoint or a common interior point at which the two edges properly cross. Conway's thrackle conjecture can now be reformulated as follows: A simple topological graph with n > 2 vertices and no two disjoint edges has at most n edges. The first linear upper bound on the number of edges of such a graph was established by Lovász et al.[3] The best known upper bound, 1.3984n, was proved by Fulek and Pach.[4] Apart from geometric graphs, Conway's thrackle conjecture is known to be true for x-monotone topological graphs.[5] A topological graph is said to be x-monotone if every vertical line intersects every edge in at most one point. Alon and Erdős[6] initiated the investigation of the generalization of the above question to the case where the forbidden configuration consists of k disjoint edges (k > 2). They proved that the number of edges of a geometric graph of n vertices, containing no 3 disjoint edges is O(n). 
The optimal bound of roughly 2.5n was determined by Černý.[7] For larger values of k, the first linear upper bound, $O(k^{4}n)$, was established by Pach and Töröcsik.[8] This was improved by Tóth to $O(k^{2}n)$.[9] For the number of edges in a simple topological graph with no k disjoint edges, only an $O(n\log ^{4k-8}n)$ upper bound is known.[10] This implies that every complete simple topological graph with n vertices has at least $c{\frac {\log n}{\log \log n}}$ pairwise disjoint edges, which was improved to $cn^{{\frac {1}{2}}-\epsilon }$ by Ruiz-Vargas.[11] [12] It is possible that this lower bound can be further improved to cn, where c > 0 is a constant. Quasi-planar graphs A common interior point of two edges, at which the first edge passes from one side of the second edge to the other, is called a crossing. Two edges of a topological graph cross each other if they determine a crossing. For any integer k > 1, a topological or geometric graph is called k-quasi-planar if it has no k pairwise crossing edges. Using this terminology, if a topological graph is 2-quasi-planar, then it is a planar graph. It follows from Euler's polyhedral formula that every planar graph with n > 2 vertices has at most 3n − 6 edges. Therefore, every 2-quasi-planar graph with n > 2 vertices has at most 3n − 6 edges. It has been conjectured by Pach et al.[13] that every k-quasi-planar topological graph with n vertices has at most c(k)n edges, where c(k) is a constant depending only on k. This conjecture is known to be true for k = 3 [14] [15] and k = 4.[16] It is also known to be true for convex geometric graphs (that is for geometric graphs whose vertices form the vertex set of a convex n-gon),[17] and for k-quasi-planar topological graphs whose edges are drawn as x-monotone curves, all of which cross a vertical line.[18][19] The last result implies that every k-quasi-planar topological graph with n vertices, whose edges are drawn as x-monotone curves has at most c(k)n log n edges for a suitable constant c(k). For geometric graphs, this was proved earlier by Valtr.[20] The best known general upper bound for the number of edges of a k-quasi-planar topological graph is $n\log ^{O(\log k)}n$.[21] This implies that every complete topological graph with n vertices has at least $n^{\frac {1}{O(\log \log n)}}$pairwise crossing edges, which was improved to a quasi linear bound in the case of geometric graph.[22] Crossing numbers Main article: Crossing number (graph theory) Ever since Pál Turán coined his brick factory problem [23] during World War II, the determination or estimation of crossing numbers of graphs has been a popular theme in graph theory and in the theory of algorithms[24] that is abundant with famous long standing open problems such as the Albertson conjecture, Harary-Hill's conjecture[25] or the still unsolved Turán's brick factory problem.[26] However, the publications in the subject (explicitly or implicitly) used several competing definitions of crossing numbers. This was pointed out by Pach and Tóth,[27] who introduced the following terminology. Crossing number (of a graph G): The minimum number of crossing points over all drawings of G in the plane (that is, all of its representations as a topological graph) with the property that no three edges pass through the same point. It is denoted by cr(G). Pair-crossing number: The minimum number of crossing pairs of edges over all drawings of G. It is denoted by pair-cr(G). 
Odd-crossing number: The minimum number of those pairs of edges that cross an odd number of times, over all drawings of G. It is denoted by odd-cr(G). These parameters are not unrelated. One has odd-cr(G) ≤ pair-cr(G) ≤ cr(G) for every graph G. It is known that cr(G) ≤ 2(odd-cr(G))²,[27] that cr(G) is at most $O(\operatorname {pcr} (G)^{\frac {3}{2}}\log ^{2}\operatorname {pcr} (G))$,[28] and that there exist infinitely many graphs for which pair-cr(G) ≠ odd-cr(G).[1][29] No examples are known for which the crossing number and the pair-crossing number are not the same. It follows from the Hanani–Tutte theorem[30][31] that odd-cr(G) = 0 implies cr(G) = 0. It is also known that odd-cr(G) = k implies cr(G) = k for k = 1, 2, 3.[32] Another well-researched graph parameter is the following. Rectilinear crossing number: The minimum number of crossing points over all straight-line drawings of G in the plane (that is, all of its representations as a geometric graph) with the property that no three edges pass through the same point. It is denoted by lin-cr(G). (A small sketch that counts the crossings of a given straight-line drawing is included after the reference list below.) By definition, one has cr(G) ≤ lin-cr(G) for every graph G. It was shown by Bienstock and Dean that there are graphs with crossing number 4 and with arbitrarily large rectilinear crossing number.[33] Computing the crossing number is NP-complete[34] in general. Therefore, a large body of research focuses on estimating it. The Crossing Lemma is a cornerstone result that provides widely applicable lower bounds on the crossing number. Several interesting variants and generalizations of the Crossing Lemma are known for Jordan curves[35][36] and the degenerate crossing number,[37][38] where the latter relates the notion of the crossing number to the graph genus. Ramsey-type problems for geometric graphs In traditional graph theory, a typical Ramsey-type result states that if we color the edges of a sufficiently large complete graph with a fixed number of colors, then we necessarily find a monochromatic subgraph of a certain type.[39] One can raise similar questions for geometric (or topological) graphs, except that now we look for monochromatic (one-colored) substructures satisfying certain geometric conditions.[40] One of the first results of this kind states that every complete geometric graph whose edges are colored with two colors contains a non-crossing monochromatic spanning tree.[41] It is also true that every such geometric graph contains $\left\lceil {\frac {n+1}{3}}\right\rceil $ disjoint edges of the same color.[41] The existence of a non-crossing monochromatic path of size at least cn, where c > 0 is a constant, is a long-standing open problem. It is only known that every complete geometric graph on n vertices contains a non-crossing monochromatic path of length at least $n^{\frac {2}{3}}$.[42] Topological hypergraphs If we view a topological graph as a topological realization of a 1-dimensional simplicial complex, it is natural to ask how the above extremal and Ramsey-type problems generalize to topological realizations of d-dimensional simplicial complexes. There are some initial results in this direction, but it requires further research to identify the key notions and problems.[43][44][45] Two vertex-disjoint simplices are said to cross if their relative interiors have a point in common. A set of k > 3 simplices strongly cross if no 2 of them share a vertex, but their relative interiors have a point in common.
It is known that a set of d-dimensional simplices spanned by n points in $\mathbb {R} ^{d}$ without a pair of crossing simplices can have at most $O(n^{d})$ simplices and this bound is asymptotically tight.[46] This result was generalized to sets of 2-dimensional simplices in $\mathbb {R} ^{2}$ without three strongly crossing simplices.[47] If we forbid k strongly crossing simplices, the corresponding best known upper bound is $O(n^{d+1-\delta })$,[46] for some $\delta =\delta (k,d)<1$. This result follows from the colored Tverberg theorem.[48] It is far from the conjectured bound of $O(n^{d})$.[46] For any fixed k > 1, we can select at most $O(n^{\lceil {\frac {d}{2}}\rceil })$ d-dimensional simplices spanned by a set of n points in $\mathbb {R} ^{d}$ with the property that no k of them share a common interior point.[46][49] This is asymptotically tight. Two triangles in $\mathbb {R} ^{3}$ are said to be almost disjoint if they are disjoint or if they share only one vertex. It is an old problem of Gil Kalai and others to decide whether the largest number of almost disjoint triangles that can be chosen on some vertex set of n points in $\mathbb {R} ^{3}$ is $o(n^{2})$. It is known that there exists sets of n points for which this number is at least $cn^{\frac {3}{2}}$ for a suitable constant c > 0.[50] References 1. Pelsmajer, Michael J.; Schaefer, Marcus; Štefankovič, Daniel (2008), "Odd crossing number and crossing number are not the same", Discrete and Computational Geometry, 39 (1–3): 442–454, doi:10.1007/s00454-008-9058-x A preliminary version of these results was reviewed in Proc. of 13th International Symposium on Graph Drawing, 2005, pp. 386–396, doi:10.1007/11618058_35 2. Hopf, Heinz; Pannwitz, Erika (1934), "Aufgabe nr. 167", Jahresbericht der Deutschen Mathematiker-Vereinigung, 43: 114 3. Lovász, László; Pach, János; Szegedy, Mario (1997), "On Conway's thrackle conjecture", Discrete and Computational Geometry, Springer, 18 (4): 369–376, doi:10.1007/PL00009322 4. Fulek, Radoslav; Pach, János (2019), "Thrackles: An improved upper bound", Discrete Applied Mathematics, 259: 226–231, arXiv:1708.08037, doi:10.1016/j.dam.2018.12.025. 5. Pach, János; Sterling, Ethan (2011), "Conway's conjecture for monotone thrackles", American Mathematical Monthly, 118 (6): 544–548, doi:10.4169/amer.math.monthly.118.06.544, MR 2812285, S2CID 17558559 6. Alon, Noga; Erdős, Paul (1989), "Disjoint edges in geometric graphs", Discrete and Computational Geometry, 4 (4): 287–290, doi:10.1007/bf02187731 7. Černý, Jakub (2005), "Geometric graphs with no three disjoint edges", Discrete and Computational Geometry, 34 (4): 679–695, doi:10.1007/s00454-005-1191-1 8. Pach, János; Töröcsik, Jenö (1994), "Some geometric applications of Dilworth's theorem", Discrete and Computational Geometry, 12: 1–7, doi:10.1007/BF02574361 9. Tóth, Géza (2000), "Note on geometric graphs", Journal of Combinatorial Theory, Series A, 89 (1): 126–132, doi:10.1006/jcta.1999.3001 10. Pach, János; Tóth, Géza (2003), "Disjoint edges in topological graphs", Combinatorial Geometry and Graph Theory: Indonesia-Japan Joint Conference, IJCCGGT 2003, Bandung, Indonesia, September 13-16, 2003, Revised Selected Papers (PDF), Lecture Notes in Computer Science, vol. 3330, Springer-Verlag, pp. 133–140, doi:10.1007/978-3-540-30540-8_15 11. Ruiz-Vargas, Andres J. 
(2015), "Many disjoint edges in topological graphs", in Campêlo, Manoel; Corrêa, Ricardo; Linhares-Sales, Cláudia; Sampaio, Rudini (eds.), LAGOS'15 – VIII Latin-American Algorithms, Graphs and Optimization Symposium, Electronic Notes in Discrete Mathematics, vol. 50, Elsevier, pp. 29–34, arXiv:1412.3833, doi:10.1016/j.endm.2015.07.006, S2CID 14865350 12. Ruiz-Vargas, Andres J. (2017), "Many disjoint edges in topological graphs", Comput. Geom., 62: 1–13, arXiv:1412.3833, doi:10.1016/j.comgeo.2016.11.003 13. Pach, János; Shahrokhi, Farhad; Szegedy, Mario (1996), "Applications of the crossing number", Algorithmica, Springer, 16 (1): 111–117, doi:10.1007/BF02086610, S2CID 20375896 14. Agarwal K., Pankaj; Aronov, Boris; Pach, János; Pollack, Richard; Sharir, Micha (1997), "Quasi-planar graphs have a linear number of edges", Combinatorica, 17: 1–9, doi:10.1007/bf01196127, S2CID 8092013 15. Ackerman, Eyal; Tardos, Gábor (2007), "On the maximum number of edges in quasi-planar graphs", Journal of Combinatorial Theory, Series A, 114 (3): 563–571, doi:10.1016/j.jcta.2006.08.002 16. Ackerman, Eyal (2009), "On the maximum number of edges in topological graphs with no four pairwise crossing edges", Discrete and Computational Geometry, 41 (3): 365–375, doi:10.1007/s00454-009-9143-9 17. Capoyleas, Vasilis; Pach, János (1992), "A Turán-type theorem on chords of a convex polygon", Journal of Combinatorial Theory, Series B, 56 (1): 9–15, doi:10.1016/0095-8956(92)90003-G 18. Suk, Andrew (2011), "k-quasi-planar graphs", Graph Drawing: 19th International Symposium, GD 2011, Eindhoven, The Netherlands, September 21-23, 2011, Revised Selected Papers, Lecture Notes in Computer Science, vol. 7034, Springer-Verlag, pp. 266–277, arXiv:1106.0958, doi:10.1007/978-3-642-25878-7_26, S2CID 18681576 19. Fox, Jacob; Pach, János; Suk, Andrew (2013), "The number of edges in k-quasi-planar graphs", SIAM Journal on Discrete Mathematics, 27 (1): 550–561, arXiv:1112.2361, doi:10.1137/110858586, S2CID 52317. 20. Valtr, Pavel (1997), "Graph drawing with no k pairwise crossing edges", Graph Drawing: 5th International Symposium, GD '97 Rome, Italy, September 18–20, 1997, Proceedings, Lecture Notes in Computer Science, vol. 1353, Springer-Verlag, pp. 205–218 21. Fox, Jacob; Pach, János (2012), "Coloring $K_{\mbox{k}}$-free intersection graphs of geometric objects in the plane", European Journal of Combinatorics, 33 (5): 853–866, doi:10.1016/j.ejc.2011.09.021 A preliminary version of these results was announced in Proc. of Symposium on Computational Geometry (PDF), 2008, pp. 346–354, doi:10.1145/1377676.1377735, S2CID 15138458 22. Pach, János; Rubin, Natan; Tardos, Gábor (2021), "Planar point sets determine many pairwise crossing segments", Advances in Mathematics, 386: 107779, arXiv:1904.08845, doi:10.1016/j.aim.2021.107779. 23. Turán, Paul (1977), "A note of welcome", Journal of Graph Theory, 1: 7–9, doi:10.1002/jgt.3190010105 24. Garey, M. R.; Johnson, D. S. (1983), "Crossing number is NP-complete", SIAM Journal on Algebraic and Discrete Methods, 4 (3): 312–316, doi:10.1137/0604033, MR 0711340{{citation}}: CS1 maint: multiple names: authors list (link) 25. Balogh, József; Lidický, Bernard; Salazar, Gelasio (2019), "Closing in on Hill's conjecture", SIAM Journal on Discrete Mathematics, 33 (3): 1261–1276, arXiv:1711.08958, doi:10.1137/17M1158859, S2CID 119672893 26. Schaefer, Marcus (2012), "The graph crossing number and its variants: A survey", Electronic Journal of Combinatorics, 1000: DS21–May, doi:10.37236/2713 27. 
Pach, János; Tóth, Géza (2000), "Which crossing number is it anyway?", Journal of Combinatorial Theory, Series B, 80 (2): 225–246, doi:10.1006/jctb.2000.1978 28. Matoušek, Jiří (2014), "Near-optimal separators in string graphs", Combin. Probab. Comput., vol. 23, pp. 135–139, arXiv:1302.6482, doi:10.1017/S0963548313000400, S2CID 2351522 29. Tóth, Géza (2008), "Note on the pair-crossing number and the odd-crossing number", Discrete and Computational Geometry, 39 (4): 791–799, doi:10.1007/s00454-007-9024-z. 30. Chojnacki (Hanani), Chaim (Haim) (1934), "Uber wesentlich unplattbar Kurven im dreidimensionale Raume", Fundamenta Mathematicae, 23: 135–142, doi:10.4064/fm-23-1-135-142 31. Tutte, William T. (1970), "Toward a theory of crossing numbers", Journal of Combinatorial Theory, 8: 45–53, doi:10.1016/s0021-9800(70)80007-2 32. Pelsmajer, Michael J.; Schaefer, Marcus; Štefankovič, Daniel (2007), "Removing even crossings", Journal of Combinatorial Theory, Series B, 97 (4): 489–500, doi:10.1016/j.jctb.2006.08.001 33. Bienstock, Daniel; Dean, Nathaniel (1993), "Bounds for rectilinear crossing numbers", Journal of Graph Theory, 17 (3): 333–348, doi:10.1002/jgt.3190170308 34. Garey, M. R.; Johnson, D. S. (1983). "Crossing number is NP-complete". SIAM Journal on Algebraic and Discrete Methods. 4 (3): 312–316. doi:10.1137/0604033. MR 0711340. 35. Pach, János; Rubin, Natan; Tardos, Gábor (2018), "A crossing lemma for Jordan curves", Advances in Mathematics, 331: 908–940, arXiv:1708.02077, doi:10.1016/j.aim.2018.03.015, S2CID 22278629 36. Pach, János; Tóth, Géza (2019), "Many touchings force many crossings", Journal of Combinatorial Theory, Series B, 137: 104–111, arXiv:1706.06829, doi:10.1016/j.jctb.2018.12.002 37. Ackerman, Eyal; Pinchasi, Rom (2013), "On the Degenerate Crossing Number", Discrete & Computational Geometry, 49 (3): 695–702, doi:10.1007/s00454-013-9493-1, S2CID 254030772 38. Schaefer, Marcus; Štefankovič, Daniel (2015), "The degenerate crossing number and higher-genus embeddings", Graph Drawing: 23rd International Symposium, GD 2015, Los Angeles, CA, USA, September 24-26, 2015, Revised Selected Papers, pp. 63–74, doi:10.1007/978-3-319-27261-0_6 39. Graham, Ronald L.; Rothschild, Bruce L.; Spencer, Joel H. (1990), Ramsey Theory, Wiley 40. Károlyi, Gyula (2013), "Ramsey-type problems for geometric graphs", in Pach, J. (ed.), Thirty essays on geometric graph theory, Springer 41. Károlyi, Gyula J.; Pach, János; Tóth, Géza (1997), "Ramsey-type results for geometric graphs, I", Discrete and Computational Geometry, 18 (3): 247–255, doi:10.1007/PL00009317 42. Károlyi, Gyula J.; Pach, János; Tóth, Géza; Valtr, Pavel (1998), "Ramsey-type results for geometric graphs, II", Discrete and Computational Geometry, 20 (3): 375–388, doi:10.1007/PL00009391 43. Pach, János (2004), "Geometric graph theory", in Goodman, Jacob E.; O'Rourke, Joseph (eds.), Handbook of Discrete and Computational Geometry, Discrete Mathematics and Its Applications (2nd ed.), Chapman and Hall/CRC 44. Matoušek, Jiří; Tancer, Martin; Wagner, Uli (2009), "Hardness of embedding simplicial complexes in $\mathbb {R} ^{d}$", Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 855–864 45. Brass, Peter (2004), "Turán-type problems for convex geometric hypergraphs", in Pach, J. (ed.), Towards a Theory of Geometric Graphs, Contemporary Mathematics, American Mathematical Society, pp. 25–33 46. 
Dey, Tamal K.; Pach, János (1998), "Extremal problems for geometric hypergraphs", Discrete and Computational Geometry, 19 (4): 473–484, doi:10.1007/PL00009365 47. Suk, Andrew (2013), "A note on geometric 3-hypergraphs", in Pach, J. (ed.), Thirty Essays on Geometric Graph Theory, Springer arXiv:1010.5716v3 48. Živaljević, Rade T.; Vrećica, Siniša T. (1992), "The colored Tverberg's problem and complexes of injective functions", Journal of Combinatorial Theory, Series A, 61 (2): 309–318, doi:10.1016/0097-3165(92)90028-S 49. Bárány, Imre; Fürédi, Zoltán (1987), "Empty simplices in Euclidean-space", Canadian Mathematical Bulletin, 30 (4): 436–445, doi:10.4153/cmb-1987-064-1, hdl:1813/8573, S2CID 122255929 50. Károlyi, Gyula; Solymosi, József (2002), "Almost disjoint triangles in 3-space", Discrete and Computational Geometry, 28 (4): 577–583, doi:10.1007/s00454-002-2888-z
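As a small illustration of the drawing-based definitions above (a sketch, not taken from the cited papers): the following counts the crossings of one given straight-line drawing of a graph, which is an upper bound on lin-cr(G) witnessed by that particular drawing. It assumes the drawing is in general position, i.e. no two edges overlap along a segment and no three edges pass through a common point; the two drawings of K4 at the end are arbitrary test data.

def ccw(a, b, c):
    # Sign of the signed area of the triangle abc.
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(p, q, r, s):
    # True if the segments pq and rs cross at a common interior point.
    return ccw(p, q, r) * ccw(p, q, s) < 0 and ccw(r, s, p) * ccw(r, s, q) < 0

def drawing_crossings(points, edges):
    # points: dict vertex -> (x, y); edges: list of vertex pairs.
    count = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            (a, b), (c, d) = edges[i], edges[j]
            if {a, b} & {c, d}:
                continue  # edges sharing an endpoint are not counted as crossing
            if segments_cross(points[a], points[b], points[c], points[d]):
                count += 1
    return count

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
# One vertex inside the triangle of the other three: a planar (crossing-free) drawing of K4.
pts_planar = {0: (0.0, 0.0), 1: (4.0, 0.0), 2: (2.0, 3.0), 3: (2.0, 1.0)}
# Four vertices in convex position: the two diagonals cross once.
pts_convex = {0: (0.0, 0.0), 1: (4.0, 0.0), 2: (4.0, 4.0), 3: (0.0, 4.0)}
print(drawing_crossings(pts_planar, K4))  # 0
print(drawing_crossings(pts_convex, K4))  # 1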
Topological graph theory In mathematics, topological graph theory is a branch of graph theory. It studies the embedding of graphs in surfaces, spatial embeddings of graphs, and graphs as topological spaces.[1] It also studies immersions of graphs. This article is about the study of graph embeddings. For graphs in the plane with crossings, see topological graph. Embedding a graph in a surface means that we want to draw the graph on a surface, a sphere for example, without two edges intersecting. A basic embedding problem often presented as a mathematical puzzle is the three utilities problem. Other applications can be found in printing electronic circuits where the aim is to print (embed) a circuit (the graph) on a circuit board (the surface) without two connections crossing each other and resulting in a short circuit. Graphs as topological spaces See also: Graph (topology) and Graph homology To an undirected graph we may associate an abstract simplicial complex C with a single-element set per vertex and a two-element set per edge. The geometric realization |C| of the complex consists of a copy of the unit interval [0,1] per edge, with the endpoints of these intervals glued together at vertices. In this view, embeddings of graphs into a surface or as subdivisions of other graphs are both instances of topological embedding, homeomorphism of graphs is just the specialization of topological homeomorphism, the notion of a connected graph coincides with topological connectedness, and a connected graph is a tree if and only if its fundamental group is trivial. (A small computational illustration of this is given after the notes below.) Other simplicial complexes associated with graphs include the Whitney complex or clique complex, with a set per clique of the graph, and the matching complex, with a set per matching of the graph (equivalently, the clique complex of the complement of the line graph). The matching complex of a complete bipartite graph is called a chessboard complex, as it can also be described as the complex of sets of nonattacking rooks on a chessboard.[2] Example studies John Hopcroft and Robert Tarjan[3] derived a means of testing the planarity of a graph in time linear in the number of edges. Their algorithm does this by constructing a graph embedding which they term a "palm tree". Efficient planarity testing is fundamental to graph drawing. Fan Chung et al.[4] studied the problem of embedding a graph into a book with the graph's vertices in a line along the spine of the book. Its edges are drawn on separate pages in such a way that edges residing on the same page do not cross. This problem abstracts layout problems arising in the routing of multilayer printed circuit boards. Graph embeddings are also used to prove structural results about graphs, via graph minor theory and the graph structure theorem. See also • Crossing number (graph theory) • Genus • Planar graph • Real tree • Toroidal graph • Topological combinatorics • Voltage graph Notes 1. Gross, J.L.; Tucker, T.W. (2012) [1987]. Topological graph theory. Dover. ISBN 978-0-486-41741-7. 2. Shareshian, John; Wachs, Michelle L. (2007) [2004]. "Torsion in the matching complex and chessboard complex". Advances in Mathematics. 212 (2): 525–570. arXiv:math.CO/0409054. CiteSeerX 10.1.1.499.1516. doi:10.1016/j.aim.2006.10.014. 3. Hopcroft, John; Tarjan, Robert E. (1974). "Efficient Planarity Testing" (PDF). Journal of the ACM. 21 (4): 549–568. doi:10.1145/321850.321852. hdl:1813/6011. S2CID 6279825. 4. Chung, F. R. K.; Leighton, F. T.; Rosenberg, A. L. (1987).
"Embedding Graphs in Books: A Layout Problem with Applications to VLSI Design" (PDF). SIAM Journal on Algebraic and Discrete Methods. 8 (1): 33–58. doi:10.1137/0608002.
Topological half-exact functor In mathematics, a topological half-exact functor F is a functor from a fixed topological category (for example CW complexes or pointed spaces) to an abelian category (most frequently in applications, the category of abelian groups or the category of modules over a fixed ring) that has the following property: for each map f : X → Y and the associated sequence of spaces X → Y → C(f), where C(f) denotes the mapping cone of f, the sequence F(X) → F(Y) → F(C(f)) is exact. If F is a contravariant functor, it is half-exact if for each sequence of spaces as above, the sequence F(C(f)) → F(Y) → F(X) is exact. Homology is an example of a half-exact functor, and cohomology (and generalized cohomology theories) are examples of contravariant half-exact functors. If B is any fibrant topological space, the (representable) functor F(X) = [X,B] is half-exact.
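A concrete instance of the covariant case (a standard computation, spelled out here for illustration): let f : S1 → S1 be the map of degree 2, so that the mapping cone C(f) is the real projective plane $\mathbb {RP} ^{2}$. Applying first homology to S1 → S1 → C(f) gives the sequence $\mathbb {Z} \to \mathbb {Z} \to \mathbb {Z} /2$, where the first map is multiplication by 2 and the second is reduction modulo 2; the image of the first map is exactly the kernel of the second, so the sequence is exact at the middle term, as half-exactness requires.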
Euler calculus Euler calculus is a methodology from applied algebraic topology and integral geometry that integrates constructible functions and more recently definable functions[1] by integrating with respect to the Euler characteristic as a finitely-additive measure. In the presence of a metric, it can be extended to continuous integrands via the Gauss–Bonnet theorem.[2] It was introduced independently by Pierre Schapira[3][4][5] and Oleg Viro[6] in 1988, and is useful for enumeration problems in computational geometry and sensor networks.[7] For numerical analysis of ordinary differential equations, see Euler's method. See also • Topological data analysis References 1. Baryshnikov, Y.; Ghrist, R. Euler integration for definable functions, Proc. National Acad. Sci., 107(21), 9525–9530, 25 May 2010. 2. McTague, Carl (1 Nov 2015). "A New Approach to Euler Calculus for Continuous Integrands". arXiv:1511.00257 [math.DG]. 3. Schapira, P. "Cycles Lagrangiens, fonctions constructibles et applications", Seminaire EDP, Publ. Ecole Polytechnique (1988/89) 4. Schapira, P. Operations on constructible functions, J. Pure Appl. Algebra 72, 1991, 83–93. 5. Schapira, Pierre. Tomography of constructible functions, Applied Algebra, Algebraic Algorithms and Error-Correcting Codes Lecture Notes in Computer Science, 1995, Volume 948/1995, 427–435, doi:10.1007/3-540-60114-7_33 6. Viro, O. Some integral calculus based on Euler characteristic, Lecture Notes in Math., vol. 1346, Springer-Verlag, 1988, 127–138. 7. Baryshnikov, Y.; Ghrist, R. Target enumeration via Euler characteristic integrals, SIAM J. Appl. Math., 70(3), 825–844, 2009. • Van den Dries, Lou. Tame Topology and O-minimal Structures, Cambridge University Press, 1998. ISBN 978-0-521-59838-5 • Arnold, V. I.; Goryunov, V. V.; Lyashko, O. V. Singularity Theory, Volume 1, Springer, 1998, p. 219. ISBN 978-3-540-63711-0 External links • Ghrist, Robert. Euler Calculus video presentation, June 2009. published 30 July 2009.
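As a minimal computational sketch of the idea (an illustration, not code from the cited works): for a constructible function $h=\sum _{i}\mathbf {1} _{[a_{i},b_{i}]}$ on the real line, the Euler integral can be evaluated through upper level sets, $\int h\,d\chi =\sum _{s\geq 1}\chi (\{h\geq s\})$, where $\chi $ is the compactly supported Euler characteristic. For sums of indicators of closed intervals the result is the number of intervals, which is the mechanism behind target enumeration from sensor data. The intervals below are arbitrary test data.

def euler_integral_of_interval_indicators(intervals):
    # Euler integral of h = sum of indicators of closed intervals [a, b] on the real line,
    # computed from the level-set formula: integral = sum over s >= 1 of chi({h >= s}).
    # chi is computed cell by cell: endpoint cells count +1, and open segments between
    # consecutive endpoints count -1 (compactly supported Euler characteristic).
    coords = sorted({c for a, b in intervals for c in (a, b)})

    def h(x):
        return sum(1 for a, b in intervals if a <= x <= b)

    zero_cells = [h(c) for c in coords]
    one_cells = [h((coords[i] + coords[i + 1]) / 2) for i in range(len(coords) - 1)]
    total, s = 0, 1
    while any(v >= s for v in zero_cells):
        chi = sum(1 for v in zero_cells if v >= s) - sum(1 for v in one_cells if v >= s)
        total += chi
        s += 1
    return total

supports = [(0.0, 2.0), (1.0, 3.0), (5.0, 5.0), (2.5, 6.0)]   # supports of four hypothetical "targets"
print(euler_integral_of_interval_indicators(supports))        # 4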
Topological modular forms
In mathematics, topological modular forms (tmf) is the name of a spectrum that describes a generalized cohomology theory. In concrete terms, for any integer n there is a topological space $\operatorname {tmf} ^{n}$, and these spaces are equipped with certain maps between them, so that for any topological space X, one obtains an abelian group structure on the set $\operatorname {tmf} ^{n}(X)$ of homotopy classes of continuous maps from X to $\operatorname {tmf} ^{n}$. One feature that distinguishes tmf is the fact that its coefficient ring, $\operatorname {tmf} ^{0}$(point), is almost the same as the graded ring of holomorphic modular forms with integral cusp expansions. Indeed, these two rings become isomorphic after inverting the primes 2 and 3, but this inversion erases a great deal of torsion information in the coefficient ring.
The spectrum of topological modular forms is constructed as the global sections of a sheaf of E-infinity ring spectra on the moduli stack of (generalized) elliptic curves. This theory has relations to the theory of modular forms in number theory, the homotopy groups of spheres, and conjectural index theories on loop spaces of manifolds. tmf was first constructed by Michael Hopkins and Haynes Miller; many of the computations can be found in preprints and articles by Paul Goerss, Hopkins, Mark Mahowald, Miller, Charles Rezk, and Tilman Bauer.
Construction
The original construction of tmf uses the obstruction theory of Hopkins, Miller, and Paul Goerss, and is based on ideas of Dwyer, Kan, and Stover. In this approach, one defines a presheaf Otop ("top" stands for topological) of multiplicative cohomology theories on the étale site of the moduli stack of elliptic curves and shows that this can be lifted in an essentially unique way to a sheaf of E-infinity ring spectra. This sheaf has the following property: to any étale elliptic curve over a ring R, it assigns an E-infinity ring spectrum (a classical elliptic cohomology theory) whose associated formal group is the formal group of that elliptic curve.
A second construction, due to Jacob Lurie, constructs tmf rather by describing the moduli problem it represents and applying general representability theory to show existence: just as the moduli stack of elliptic curves represents the functor that assigns to a ring the category of elliptic curves over it, the stack together with the sheaf of E-infinity ring spectra represents the functor that assigns to an E-infinity ring its category of oriented derived elliptic curves, appropriately interpreted.
These constructions work over the moduli stack of smooth elliptic curves, and they also work for the Deligne–Mumford compactification of this moduli stack, in which elliptic curves with nodal singularities are included. TMF is the spectrum that results from the global sections over the moduli stack of smooth curves, and tmf is the spectrum arising as the global sections of the Deligne–Mumford compactification. TMF is a periodic version of the connective tmf. While the ring spectra used to construct TMF are periodic with period 2, TMF itself has period 576. The periodicity is related to the modular discriminant.
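One commonly quoted form of the comparison with modular forms, included here only to make the preceding remarks concrete (the indexing below follows the standard convention that a modular form of weight k contributes in topological degree 2k; see, for example, the computations by Bauer cited in the references): $\pi _{*}\operatorname {tmf} \otimes \mathbb {Z} [{\tfrac {1}{6}}]\;\cong \;\mathbb {Z} [{\tfrac {1}{6}}][c_{4},c_{6}],\qquad |c_{4}|=8,\ |c_{6}|=12,$ which is the ring of modular forms over $\mathbb {Z} [{\tfrac {1}{6}}]$, with the discriminant $\Delta =(c_{4}^{3}-c_{6}^{2})/1728$ in degree 24. In the periodic theory TMF the power $\Delta ^{24}$, of degree $24\cdot 24=576$, becomes invertible, which accounts for the period 576 mentioned above.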
Relations to other parts of mathematics
Some interest in tmf comes from string theory and conformal field theory. Graeme Segal first proposed in the 1980s to provide a geometric construction of elliptic cohomology (the precursor to tmf) as some kind of moduli space of conformal field theories, and these ideas have been continued and expanded by Stephan Stolz and Peter Teichner. Their program is to try to construct TMF as a moduli space of supersymmetric Euclidean field theories.
In work more directly motivated by string theory, Edward Witten introduced the Witten genus, a homomorphism from the string bordism ring to the ring of modular forms, using equivariant index theory on a formal neighborhood of the trivial locus in the loop space of a manifold. This associates to any spin manifold with vanishing half first Pontryagin class a modular form; one standard formula for the genus is recalled at the end of this entry. By work of Hopkins, Matthew Ando, Charles Rezk, and Neil Strickland, the Witten genus can be lifted to topology. That is, there is a map from the string bordism spectrum to tmf (a so-called orientation) such that the Witten genus is recovered as the composition of the induced map on the homotopy groups of these spectra and a map from the homotopy groups of tmf to modular forms. This made it possible to prove certain divisibility statements about the Witten genus. The orientation of tmf is analogous to the Atiyah–Bott–Shapiro map from the spin bordism spectrum to classical K-theory, which is a lift of the Dirac equation to topology.
References
• Bauer, Tilman (2008). "Computation of the homotopy of the spectrum TMF". Groups, Homotopy and Configuration Spaces (Tokyo 2005). Geometry and Topology Monographs. Vol. 13. pp. 11–40. arXiv:math.AT/0311328. doi:10.2140/gtm.2008.13.11. S2CID 1396008.
• Behrens, M. Notes on the Construction of tmf (2007), http://www-math.mit.edu/~mbehrens/papers/buildTMF.pdf
• Douglas, Christopher L.; Francis, John; Henriques, André G.; et al., eds. (2014). Topological Modular Forms. Mathematical Surveys and Monographs. Vol. 201. A.M.S. ISBN 978-1-4704-1884-7.
• Goerss, P.; Hopkins, M. Moduli Spaces of Commutative Ring Spectra, http://www.math.northwestern.edu/~pgoerss/papers/sum.pdf
• Hopkins, Michael J. (2002). "Algebraic topology and modular forms". arXiv:math.AT/0212397.
• Hopkins, M.; Mahowald, M. From Elliptic Curves to Homotopy Theory (1998), http://www.math.purdue.edu/research/atopology/Hopkins-Mahowald/eo2homotopy.pdf (archived 2006-09-11 at the Wayback Machine)
• Lurie, J. A Survey of Elliptic Cohomology (2007), http://www.math.harvard.edu/~lurie/papers/survey.pdf
• Rezk, C. http://www.math.uiuc.edu/~rezk/512-spr2001-notes.pdf
• Stolz, S.; Teichner, P. Supersymmetric Euclidean Field Theories and Generalized Cohomology (2008), http://math.berkeley.edu/~teichner/Papers/Survey.pdf
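For reference, here is one common normalization of the Witten genus mentioned above (stated as background only; normalizations differ between sources). For a closed spin manifold M of dimension d, $\varphi _{W}(M)\;=\;{\Big \langle }{\hat {A}}(TM)\cdot \operatorname {ch} {\Big (}\bigotimes _{n\geq 1}\operatorname {Sym} _{q^{n}}{\big (}T_{\mathbb {C} }M-d{\big )}{\Big )},\,[M]{\Big \rangle }\;\in \;\mathbb {Z} [[q]],$ where $\operatorname {Sym} _{t}(V)=\sum _{k\geq 0}\operatorname {Sym} ^{k}(V)\,t^{k}$. When the string condition $p_{1}(M)/2=0$ holds, this q-expansion is the q-expansion of a modular form of weight $d/2$ for $\mathrm {SL} _{2}(\mathbb {Z} )$.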
Topological module
In mathematics, a topological module is a module over a topological ring such that scalar multiplication and addition are continuous.
Examples
A topological vector space is a topological module over a topological field.
An abelian topological group can be considered as a topological module over $\mathbb {Z} ,$ where $\mathbb {Z} $ is the ring of integers with the discrete topology.
A topological ring is a topological module over each of its subrings.
A more complicated example is the $I$-adic topology on a ring and its modules. Let $I$ be an ideal of a ring $R.$ The sets of the form $x+I^{n},$ for all $x\in R$ and all positive integers $n,$ form a base for a topology on $R$ that makes $R$ into a topological ring. Then for any left $R$-module $M,$ the sets of the form $x+I^{n}M,$ for all $x\in M$ and all positive integers $n,$ form a base for a topology on $M$ that makes $M$ into a topological module over the topological ring $R.$ (A concrete instance is sketched after the references below.)
See also
• Linear topology
• Ordered topological vector space
• Topological abelian group – concept in mathematics
• Topological field – algebraic structure with addition, multiplication, and division
• Topological group – group that is a topological space with continuous group action
• Topological ring – ring where the ring operations are continuous
• Topological semigroup – semigroup with continuous operation
• Topological vector space – vector space with a notion of nearness
References
• Kuz'min, L. V. (1993). "Topological modules". In Hazewinkel, M. (ed.). Encyclopedia of Mathematics. Vol. 9. Kluwer Academic Publishers.
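As the concrete instance promised above (a standard example, included for illustration): take $R=\mathbb {Z} $ and $I=p\mathbb {Z} $ for a prime $p.$ The sets $x+p^{n}\mathbb {Z} $ form a neighbourhood base of $x,$ and the resulting $I$-adic topology is the $p$-adic topology on $\mathbb {Z} $: a sequence $x_{k}$ converges to $x$ exactly when, for every $n,$ one eventually has $x_{k}-x\in p^{n}\mathbb {Z} .$ For example, the partial sums $x_{k}=1+p+p^{2}+\cdots +p^{k}$ form a Cauchy sequence for this topology, since $x_{k}-x_{j}\in p^{\,j+1}\mathbb {Z} $ for $k\geq j,$ and in the completion $\mathbb {Z} _{p}$ they converge to $1/(1-p).$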