Sieve (category theory)

In category theory, a branch of mathematics, a sieve is a way of choosing arrows with a common codomain. It is a categorical analogue of a collection of open subsets of a fixed open set in topology. In a Grothendieck topology, certain sieves become categorical analogues of open covers in topology. Sieves were introduced by Giraud (1964) in order to reformulate the notion of a Grothendieck topology.

Definition

Let C be a category, and let c be an object of C. A sieve $S\colon C^{\rm {op}}\to {\rm {Set}}$ on c is a subfunctor of Hom(−, c), i.e., for all objects c′ of C, S(c′) ⊆ Hom(c′, c), and for all arrows f:c″→c′, S(f) is the restriction of Hom(f, c), the pullback by f (in the sense of precomposition, not of fiber products), to S(c′); see the next section, below.

Put another way, a sieve is a collection S of arrows with a common codomain that satisfies the condition, "If g:c′→c is an arrow in S, and if f:c″→c′ is any other arrow in C, then gf is in S." Consequently, sieves are similar to right ideals in ring theory or filters in order theory.

Pullback of sieves

The most common operation on a sieve is pullback. Pulling back a sieve S on c by an arrow f:c′→c gives a new sieve f*S on c′. This new sieve consists of all the arrows in S that factor through c′.

There are several equivalent ways of defining f*S. The simplest is: for any object d of C,

 f*S(d) = { g:d→c′ | fg ∈ S(d) }

A more abstract formulation is: f*S is the image of the fibered product $S\times _{\mathrm{Hom}(-,c)}\mathrm{Hom}(-,c')$ under the natural projection $S\times _{\mathrm{Hom}(-,c)}\mathrm{Hom}(-,c')\to \mathrm{Hom}(-,c')$. Here the map Hom(−, c′)→Hom(−, c) is Hom(−, f), postcomposition by f.

The latter formulation suggests that we can also take the image of $S\times _{\mathrm{Hom}(-,c)}\mathrm{Hom}(-,c')$ under the natural map to Hom(−, c). This will be the image of f*S under composition with f. For each object d of C, this sieve will consist of all arrows fg, where g:d→c′ is an arrow of f*S(d). In other words, it consists of all arrows in S that can be factored through f.

If we denote by ∅c the empty sieve on c, that is, the sieve for which ∅c(d) is always the empty set, then for any f:c′→c, f*∅c is ∅c′. Furthermore, f*Hom(−, c) = Hom(−, c′).

Properties of sieves

Let S and S′ be two sieves on c. We say that S ⊆ S′ if for all objects c′ of C, S(c′) ⊆ S′(c′). For all objects d of C, we define (S ∪ S′)(d) to be S(d) ∪ S′(d) and (S ∩ S′)(d) to be S(d) ∩ S′(d). We can clearly extend this definition to infinite unions and intersections as well.

If we define SieveC(c) (or Sieve(c) for short) to be the set of all sieves on c, then Sieve(c) becomes partially ordered under ⊆. It is easy to see from the definition that the union or intersection of any family of sieves on c is a sieve on c, so Sieve(c) is a complete lattice.

A Grothendieck topology is a collection of sieves subject to certain properties. These sieves are called covering sieves. The set of all covering sieves on an object c is a subset J(c) of Sieve(c). J(c) satisfies several properties in addition to those required by the definition:

• If S and S′ are sieves on c, S ⊆ S′, and S ∈ J(c), then S′ ∈ J(c).
• Finite intersections of elements of J(c) are in J(c).

Consequently, J(c) is also a distributive lattice, and it is cofinal in Sieve(c).
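As a concrete illustration of these definitions (a standard example, though not spelled out above; the notation is assumed for the sketch), consider the poset O(X) of open subsets of a topological space X, viewed as a category whose arrows are the inclusions:

 $S\subseteq \{\,V\in O(X)\mid V\subseteq U\,\}$ with $W\subseteq V$ and $V\in S$ implying $W\in S$  (a sieve on an open set U is a downward-closed family of open subsets of U)
 $f^{*}S=\{\,W\in S\mid W\subseteq V\,\}$  (pullback along the inclusion f : V ⊆ U keeps the members contained in V)
 $S=\{\,V\mid V\subseteq U_{\alpha }{\text{ for some }}\alpha \,\}$  (the sieve generated by an open cover $\{U_{\alpha }\}$ of U; sieves of this kind are the covering sieves of the usual Grothendieck topology on O(X))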
References

• Artin, Michael; Grothendieck, Alexandre; Verdier, Jean-Louis, eds. (1972). Séminaire de Géométrie Algébrique du Bois Marie - 1963-64 - Théorie des topos et cohomologie étale des schémas - (SGA 4) - vol. 1. Lecture Notes in Mathematics (in French). Vol. 269. Berlin; New York: Springer-Verlag. xix+525. doi:10.1007/BFb0081551. ISBN 978-3-540-05896-0.
• Giraud, Jean (1964). "Analysis situs". Séminaire Bourbaki, 1962/63. Fasc. 3. Paris: Secrétariat mathématique. MR 0193122.
• Pedicchio, Maria Cristina; Tholen, Walter, eds. (2004). Categorical foundations. Special topics in order, topology, algebra, and sheaf theory. Encyclopedia of Mathematics and Its Applications. Vol. 97. Cambridge: Cambridge University Press. ISBN 0-521-83414-7. Zbl 1034.18001.
Sieve estimator

In statistics, sieve estimators are a class of non-parametric estimators which use progressively more complex models to estimate an unknown high-dimensional function as more data becomes available, with the aim of asymptotically reducing error towards zero as the amount of data increases. This method is generally attributed to Ulf Grenander.

Method of sieves in positron emission tomography

Sieve estimators have been used extensively for estimating density functions in high-dimensional spaces such as in positron emission tomography (PET). The first exploitation of sieves in PET for solving the maximum-likelihood image reconstruction problem was by Donald Snyder and Michael Miller,[1] who used them to stabilize the time-of-flight PET problem originally solved by Shepp and Vardi.[2] Shepp and Vardi's introduction of maximum-likelihood estimators in emission tomography exploited the expectation-maximization algorithm, which, as it ascended towards the maximum-likelihood estimator, developed a series of artifacts associated with the fact that the underlying emission density was of too high a dimension for any fixed sample size of Poisson-measured counts. Grenander's method of sieves was used to stabilize the estimator, so that for any fixed sample size a resolution could be set that was consistent with the number of counts. As the observed PET imaging time goes to infinity, the dimension of the sieve increases as well, in such a manner that the density estimate is appropriate for each sample size.
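The core idea, an estimator whose model class grows with the sample size, can be illustrated with a deliberately simple sketch (illustrative only, not taken from the references below): a histogram density estimator whose number of bins grows like n^(1/3), so the resolution rises as more data arrives while each cell still receives ever more samples.

 import numpy as np

 def sieve_density_estimate(samples, x):
     """Histogram sieve: the number of bins (the model complexity) grows
     slowly with the sample size n, here as roughly n**(1/3)."""
     n = len(samples)
     bins = max(1, int(round(n ** (1 / 3))))  # sieve index grows with n
     counts, edges = np.histogram(samples, bins=bins, density=True)
     # locate the bin containing x and return its estimated density
     idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
     return counts[idx]

 rng = np.random.default_rng(0)
 for n in (100, 10_000, 1_000_000):
     data = rng.normal(size=n)
     # estimate approaches the standard normal density at 0 (about 0.3989)
     print(n, sieve_density_estimate(data, 0.0))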
See also

• Nonparametric regression

References

1. Snyder, Donald L.; Miller, Michael I. (1985). "On the Use of the Method of Sieves for Positron Emission Tomography". IEEE Transactions on Nuclear Science. NS-32 (5): 3864–3872. doi:10.1109/TNS.1985.4334521.
2. Shepp, Larry; Vardi, Yehuda (1982). "Maximum Likelihood Reconstruction for Emission Tomography". IEEE Transactions on Medical Imaging. 1 (2): 113–22. doi:10.1109/TMI.1982.4307558. PMID 18238264.

External links

• Stuart Geman, Chii-Ruey Hwang. "Nonparametric Maximum Likelihood Estimation by the Method of Sieves" (PDF). The Annals of Statistics, Vol. 10, No. 2 (Jun., 1982), pp. 401–414.
• "Sieve Estimation" (PDF). Archived from the original (PDF) on September 2, 2006.
Sieve of Atkin

In mathematics, the sieve of Atkin is a modern algorithm for finding all prime numbers up to a specified integer. Compared with the ancient sieve of Eratosthenes, which marks off multiples of primes, the sieve of Atkin does some preliminary work and then marks off multiples of squares of primes, thus achieving a better theoretical asymptotic complexity. It was created in 2003 by A. O. L. Atkin and Daniel J. Bernstein.[1]

Algorithm

In the algorithm:

• All remainders are modulo-sixty remainders (divide the number by 60 and return the remainder).
• All numbers, including x and y, are positive integers.
• Flipping an entry in the sieve list means to change the marking (prime or nonprime) to the opposite marking.
• This results in numbers with an odd number of solutions to the corresponding equation being potentially prime (prime if they are also squarefree), and numbers with an even number of solutions being composite.

The algorithm:

1. Create a results list, filled with 2, 3, and 5.
2. Create a sieve list with an entry for each positive integer; all entries of this list should initially be marked nonprime (composite).
3. For each entry number n in the sieve list, with modulo-sixty remainder r:
   1. If r is 1, 13, 17, 29, 37, 41, 49, or 53, flip the entry for each possible solution to 4x² + y² = n. The number of flipping operations as a ratio to the sieving range for this step approaches 4π/15[1] × 8/60 (the "8" in the fraction comes from the eight modulos handled by this quadratic and the 60 because Atkin calculated this based on an even number of modulo 60 wheels), which results in a fraction of about 0.1117010721276....
   2. If r is 7, 19, 31, or 43, flip the entry for each possible solution to 3x² + y² = n. The number of flipping operations as a ratio to the sieving range for this step approaches π√0.12[1] × 4/60 (the "4" in the fraction comes from the four modulos handled by this quadratic and the 60 because Atkin calculated this based on an even number of modulo 60 wheels), which results in a fraction of about 0.072551974569....
   3. If r is 11, 23, 47, or 59, flip the entry for each possible solution to 3x² − y² = n when x > y. The number of flipping operations as a ratio to the sieving range for this step approaches √1.92 ln(√0.5+√1.5)[1] × 4/60 (the "4" in the fraction comes from the four modulos handled by this quadratic and the 60 because Atkin calculated this based on an even number of modulo 60 wheels), which results in a fraction of about 0.060827679704....
   4. If r is something else, ignore it completely.
4. Start with the lowest number in the sieve list.
5. Take the next number in the sieve list still marked prime.
6. Include the number in the results list.
7. Square the number and mark all multiples of that square as nonprime. Note that the multiples that can be factored by 2, 3, or 5 need not be marked, as these will be ignored in the final enumeration of primes.
8. Repeat steps four through seven.

The total number of operations for these repetitions of marking the squares of primes as a ratio of the sieving range is the sum of the inverse of the primes squared, which approaches the prime zeta function at 2, 0.45224742004..., minus 1/2², 1/3², and 1/5² for those primes which have been eliminated by the wheel, with the result multiplied by 16/60 for the ratio of wheel hits per range; this results in a ratio of about 0.01363637571....
Adding the above ratios of operations together, the above algorithm takes a constant ratio of flipping/marking operations to the sieving range of about 0.2587171021...; from an actual implementation of the algorithm, the ratio is about 0.25 for sieving ranges as low as 67.

Pseudocode

The following is pseudocode which combines Atkin's algorithms 3.1, 3.2, and 3.3[1] by using a combined set s of all the numbers modulo 60 excluding those which are multiples of the prime numbers 2, 3, and 5, as per the algorithms, for a straightforward version of the algorithm that supports optional bit packing of the wheel; although not specifically mentioned in the referenced paper, this pseudocode eliminates some obvious combinations of odd/even x's/y's in order to reduce computation where those computations would never pass the modulo tests anyway (i.e. would produce even numbers, or multiples of 3 or 5):

 limit ← 1000000000        // arbitrary search limit

 // set of wheel "hit" positions for a 2/3/5 wheel rolled twice as per the Atkin algorithm
 s ← {1,7,11,13,17,19,23,29,31,37,41,43,47,49,53,59}

 // Initialize the sieve with enough wheels to include limit:
 for n ← 60 × w + x where w ∈ {0,1,...,limit ÷ 60}, x ∈ s:
     is_prime(n) ← false

 // Put in candidate primes:
 //   integers which have an odd number of
 //   representations by certain quadratic forms.
 // Algorithm step 3.1:
 for n ≤ limit, n ← 4x² + y² where x ∈ {1,2,...} and y ∈ {1,3,...}:   // all x's, odd y's
     if n mod 60 ∈ {1,13,17,29,37,41,49,53}:
         is_prime(n) ← ¬is_prime(n)   // toggle state
 // Algorithm step 3.2:
 for n ≤ limit, n ← 3x² + y² where x ∈ {1,3,...} and y ∈ {2,4,...}:   // only odd x's and even y's
     if n mod 60 ∈ {7,19,31,43}:
         is_prime(n) ← ¬is_prime(n)   // toggle state
 // Algorithm step 3.3:
 for n ≤ limit, n ← 3x² − y² where x ∈ {2,3,...} and y ∈ {x-1,x-3,...,1}:   // all even/odd, odd/even combos
     if n mod 60 ∈ {11,23,47,59}:
         is_prime(n) ← ¬is_prime(n)   // toggle state

 // Eliminate composites by sieving, only for those occurrences on the wheel:
 for n² ≤ limit, n ← 60 × w + x where w ∈ {0,1,...}, x ∈ s, n ≥ 7:
     if is_prime(n):
         // n is prime, omit multiples of its square; this is sufficient
         // because square-free composites can't get on this list
         for c ≤ limit, c ← n² × (60 × w + x) where w ∈ {0,1,...}, x ∈ s:
             is_prime(c) ← false

 // one sweep to produce a sequential list of primes up to limit:
 output 2, 3, 5
 for 7 ≤ n ≤ limit, n ← 60 × w + x where w ∈ {0,1,...}, x ∈ s:
     if is_prime(n): output n

This pseudocode is written for clarity; although some redundant computations have been eliminated by controlling the odd/even x/y combinations, it still wastes almost half of its quadratic computations on non-productive loops that don't pass the modulo tests, such that it will not be faster than an equivalent wheel-factorized (2/3/5) sieve of Eratosthenes. To improve its efficiency, a method must be devised to minimize or eliminate these non-productive computations.
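For readers who want something directly executable, the following is a minimal Python sketch of the same scheme, written for clarity rather than speed: it simply tries all x, y pairs up to √limit instead of streaming over wheel positions or restricting parities as the pseudocode above does. The function name and structure are illustrative, not from Atkin and Bernstein's paper.

 def sieve_of_atkin(limit):
     """Unoptimized sieve of Atkin: toggle each candidate once per solution of
     the three quadratic forms, then cross off multiples of squares of primes."""
     is_prime = [False] * (limit + 1)
     r = int(limit ** 0.5) + 1
     for x in range(1, r):
         for y in range(1, r):
             n = 4 * x * x + y * y          # step 3.1
             if n <= limit and n % 60 in {1, 13, 17, 29, 37, 41, 49, 53}:
                 is_prime[n] = not is_prime[n]
             n = 3 * x * x + y * y          # step 3.2
             if n <= limit and n % 60 in {7, 19, 31, 43}:
                 is_prime[n] = not is_prime[n]
             n = 3 * x * x - y * y          # step 3.3, only for x > y
             if x > y and n <= limit and n % 60 in {11, 23, 47, 59}:
                 is_prime[n] = not is_prime[n]
     for n in range(7, r):                  # eliminate the non-squarefree survivors
         if is_prime[n]:
             for m in range(n * n, limit + 1, n * n):
                 is_prime[m] = False
     return [p for p in (2, 3, 5) if p <= limit] + \
            [n for n in range(7, limit + 1) if is_prime[n]]

 print(sieve_of_atkin(100))   # [2, 3, 5, 7, 11, ..., 97]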
Explanation

The algorithm completely ignores any numbers with remainder modulo 60 that is divisible by two, three, or five, since numbers with a modulo 60 remainder divisible by one of these three primes are themselves divisible by that prime.

All numbers n with modulo-sixty remainder 1, 13, 17, 29, 37, 41, 49, or 53 have a modulo-four remainder of 1. These numbers are prime if and only if the number of solutions to 4x² + y² = n is odd and the number is squarefree (proven as theorem 6.1 of [1]).

All numbers n with modulo-sixty remainder 7, 19, 31, or 43 have a modulo-six remainder of 1. These numbers are prime if and only if the number of solutions to 3x² + y² = n is odd and the number is squarefree (proven as theorem 6.2 of [1]).

All numbers n with modulo-sixty remainder 11, 23, 47, or 59 have a modulo-twelve remainder of 11. These numbers are prime if and only if the number of solutions to 3x² − y² = n is odd and the number is squarefree (proven as theorem 6.3 of [1]).

None of the potential primes are divisible by 2, 3, or 5, so they can't be divisible by their squares. This is why squarefree checks don't include 2², 3², and 5².

Computational complexity

It can be computed[1] that the above series of three quadratic equation operations each have a number of operations that is a constant ratio of the range as the range goes to infinity; as well, the prime squarefree culling operations can be described by the prime zeta function at 2 with constant offsets and factors, so they are also a constant factor of the range as the range goes to infinity. Therefore, the algorithm described above can compute primes up to N using O(N) operations with only O(N) bits of memory.

The page segmented version implemented by the authors has the same O(N) operations but reduces the memory requirement to just that required by the base primes below the square root of the range, O(√N/log N) bits of memory, plus a minimal page buffer. This is slightly better performance with the same memory requirement as the page segmented sieve of Eratosthenes, which uses O(N log log N) operations and the same O(√N/log N) bits of memory[2] plus a minimal page buffer. However, such a sieve does not outperform a sieve of Eratosthenes with maximum practical wheel factorization (a combination of a 2/3/5/7 sieving wheel and pre-culling composites in the segment page buffers using a 2/3/5/7/11/13/17/19 pattern), which, although it has slightly more operations than the sieve of Atkin for very large but practical ranges, has a constant factor of less complexity per operation by about three times, comparing the per-operation time between the algorithms as implemented by Bernstein in CPU clock cycles per operation. The main problem with the page segmented sieve of Atkin is the difficulty of implementing the "prime square free" culling sequences, due to the span between culls rapidly growing far beyond the page buffer span; the time expended for this operation in Bernstein's implementation rapidly grows to many times the time expended in the actual quadratic equation calculations, meaning that the linear complexity of the part that would otherwise be quite negligible becomes a major consumer of execution time. Thus, even though an optimized implementation may again settle to an O(N) time complexity, this constant factor of increased time per operation means that the sieve of Atkin is slower.

A special modified "enumerating lattice points" variation which is not the above version of the sieve of Atkin can theoretically compute primes up to N using O(N/log log N) operations with N^(1/2+o(1)) bits of memory,[1] but this variation is rarely implemented.
That is a little better in performance at a very high cost in memory as compared to both the ordinary page segmented version and to an equivalent but rarely-implemented version of the sieve of Eratosthenes which uses O(N) operations and O(√N log log N/log N) bits of memory.[3][4][5]

Pritchard observed that for the wheel sieves, one can reduce memory consumption while preserving Big O time complexity, but this generally comes at a cost in an increased constant factor for time per operation due to the extra complexity. Therefore, this special version is likely more of value as an intellectual exercise than a practical prime sieve with reduced real time expended for a given large practical sieving range.

See also

• Sieve of Eratosthenes
• Legendre sieve
• Sieve of Sundaram
• Sieve theory

References

1. A. O. L. Atkin, D. J. Bernstein, "Prime sieves using binary quadratic forms", Math. Comp. 73 (2004), 1023–1030.
2. Pritchard, Paul, "Linear prime-number sieves: a family tree", Sci. Comput. Programming 9:1 (1987), pp. 17–35.
3. Paul Pritchard, "A sublinear additive sieve for finding prime numbers", Communications of the ACM 24 (1981), 18–23. MR600730
4. Paul Pritchard, "Explaining the wheel sieve", Acta Informatica 17 (1982), 477–485. MR685983
5. Paul Pritchard, "Fast compact prime number sieves" (among others), Journal of Algorithms 4 (1983), 332–344. MR729229

External links

• Article about Sieve of Atkin and Implementation
• An optimized implementation of the sieve (in C)
Sieve of Pritchard

In mathematics, the sieve of Pritchard is an algorithm for finding all prime numbers up to a specified bound. Like the ancient sieve of Eratosthenes, it has a simple conceptual basis in number theory.[1] It is especially suited to quick hand computation for small bounds.

Whereas the sieve of Eratosthenes marks off each non-prime for each of its prime factors, the sieve of Pritchard avoids considering almost all non-prime numbers by building progressively larger wheels, which represent the pattern of numbers not divisible by any of the primes processed thus far. It thereby achieves a better asymptotic complexity, and was the first sieve with a running time sublinear in the specified bound. Its asymptotic running time has not been improved on, and it deletes fewer composites than any other known sieve. It was created in 1979 by Paul Pritchard.[2]

Since Pritchard has created a number of other sieve algorithms for finding prime numbers,[3][4][5] the sieve of Pritchard is sometimes singled out by being called the wheel sieve (by Pritchard himself[1]) or the dynamic wheel sieve.[6]

Overview

A prime number is a natural number that has no natural number divisors other than the number $1$ and itself.

To find all the prime numbers less than or equal to a given integer $N$, a sieve algorithm examines a set of candidates in the range $2,3,...,N$, and eliminates those that are not prime, leaving the primes at the end. The sieve of Eratosthenes examines all of the range, first removing all multiples of the first prime $2$, then of the next prime $3$, and so on. The sieve of Pritchard instead examines a subset of the range consisting of numbers that occur on successive wheels, which represent the pattern of numbers left after each successive prime is processed by the sieve of Eratosthenes.

For $i>0$, the $i$'th wheel $W_{i}$ represents this pattern. It is the set of numbers between $1$ and the product $P_{i}=p_{1}p_{2}\cdots p_{i}$ of the first $i$ prime numbers that are not divisible by any of these prime numbers (and is said to have an associated length $P_{i}$). This is because adding $P_{i}$ to a number doesn't change whether or not it is divisible by one of the first $i$ prime numbers, since the remainder on division by any one of these primes is unchanged. So $W_{1}=\{1\}$ with length $P_{1}=2$ represents the pattern of odd numbers; $W_{2}=\{1,5\}$ with length $P_{2}=6$ represents the pattern of numbers not divisible by $2$ or $3$; etc.

Wheels are so-called because $W_{i}$ can be usefully visualized as a circle of circumference $P_{i}$ with its members marked at their corresponding distances from an origin. Then rolling the wheel along the number line marks points corresponding to successive numbers not divisible by one of the first $i$ prime numbers. The animation shows $W_{2}$ being rolled up to 30.

It's useful to define $W_{i}\rightarrow n$ for $n>0$ to be the result of rolling $W_{i}$ up to $n$. Then the animation generates $W_{2}\rightarrow 30=\{1,5,7,11,13,17,19,23,25,29\}$. Note that up to $5^{2}-1=24$, this consists only of $1$ and the primes between $5$ and $25$.

The sieve of Pritchard is derived from the observation[1] that this holds generally: for all $i>0$, the values in $W_{i}\rightarrow {(p_{i+1}^{2}-1)}$ are $1$ and the primes between $p_{i+1}$ and $p_{i+1}^{2}$. It even holds for $i=0$, where the wheel has length $1$ and contains just $1$ (representing all the natural numbers).
So the sieve of Pritchard starts with the trivial wheel $W_{0}$ and builds successive wheels until the square of the wheel's first member after $1$ is at least $N$. Wheels grow very quickly, but only their values up to $N$ are needed and generated.

It remains to find a method for generating the next wheel. Note in the animation that $W_{3}=\{1,5,7,11,13,17,19,23,25,29\}-\{5*1,5*5\}$ can be obtained by rolling $W_{2}$ up to $30$ and then removing $5$ times each member of $W_{2}$. This also holds generally: for all $i\geq 0$, $W_{i+1}=(W_{i}\rightarrow P_{i+1})-\{p_{i+1}*w\mid w\in W_{i}\}$.[1]

Rolling $W_{i}$ past $P_{i}$ just adds values to $W_{i}$, so the current wheel is first extended by getting each successive member starting with $w=1$, adding $P_{i}$ to it, and inserting the result in the set. Then the multiples of $p_{i+1}$ are deleted. Care must be taken to avoid a number being deleted that itself needs to be multiplied by $p_{i+1}$. The sieve of Pritchard as originally presented[2] does so by first skipping past successive members until finding the maximum one needed, and then doing the deletions in reverse order by working back through the set. This is the method used in the first animation above. A simpler approach is just to gather the multiples of $p_{i+1}$ in a list, and then delete them.[7] Another approach is given by Gries and Misra.[8]

If the main loop terminates with a wheel whose length is less than $N$, it is extended up to $N$ to generate the remaining primes.

The algorithm, for finding all primes up to N, is therefore as follows:

1. Start with a set W = {1} and length = 1 representing wheel 0, and prime p = 2.
2. As long as p² ≤ N, do the following:
   1. If length < N, then:
      1. extend W by repeatedly getting successive members w of W starting with 1 and inserting length+w into W, as long as it doesn't exceed p×length or N;
      2. increase length to the minimum of p×length and N.
   2. Repeatedly delete p times each member of W, by first finding the largest one ≤ length and then working backwards.
   3. Note the prime p, then set p to the next member of W after 1 (or 3 if p was 2).
3. If length < N, then extend W to N by repeatedly getting successive members w of W starting with 1 and inserting length+w into W as long as it doesn't exceed N.
4. On termination, the rest of the primes up to N are the members of W after 1.

Example

To find all the prime numbers less than or equal to 150, proceed as follows. Start with wheel 0 with length 1, representing all natural numbers 1, 2, 3, ...:

 1

The first number after 1 for wheel 0 (when rolled) is 2; note it as a prime. Now form wheel 1 with length 2×1=2 by first extending wheel 0 up to 2 and then deleting 2 times each number in wheel 0 (namely 2), to get:

 1

The first number after 1 for wheel 1 (when rolled) is 3; note it as a prime. Now form wheel 2 with length 3×2=6 by first extending wheel 1 up to 6 (giving 1 3 5) and then deleting 3 times each number in wheel 1 (namely 3), to get:

 1 5

The first number after 1 for wheel 2 is 5; note it as a prime. Now form wheel 3 with length 5×6=30 by first extending wheel 2 up to 30 and then deleting 5 times each number in wheel 2 (namely 5 and 25, in reverse order!), to get:

 1 7 11 13 17 19 23 29

The first number after 1 for wheel 3 is 7; note it as a prime. Now wheel 4 has length 7×30=210, so we only extend wheel 3 up to our limit 150. (No further extending will be done now that the limit has been reached.)
We then delete 7 times each number in wheel 3 (namely 7, 49, 77, 91, 119, and 133) until we exceed our limit 150, to get the elements in wheel 4 up to 150:

 1 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 121 127 131 137 139 143 149

The first number after 1 for this partial wheel 4 is 11; note it as a prime. Since we've finished with rolling, we delete 11 times each number in the partial wheel 4 (namely 11, 121, and 143) until we exceed our limit 150, to get the elements in wheel 5 up to 150:

 1 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149

The first number after 1 for this partial wheel 5 is 13. Since 13 squared is at least our limit 150, we stop. The remaining numbers (other than 1) are the rest of the primes up to our limit 150.

Just 8 composite numbers are removed, once each: 25, 49, 77, 91, 119, 121, 133, and 143. The rest of the numbers considered (other than 1) are prime. In comparison, the natural version of Eratosthenes sieve (stopping at the same point) removes composite numbers 184 times.

Pseudocode

The sieve of Pritchard can be expressed in pseudocode, as follows:[1]

 algorithm Sieve of Pritchard is
     input: an integer N >= 2.
     output: the set of prime numbers in {1,2,...,N}.

     let W and Pr be sets of integer values, and all other variables integer values.
     k, W, length, p, Pr := 1, {1}, 2, 3, {2};
     {invariant: p = p_{k+1} and W = W_k ∩ {1,2,...,N} and
      length = min(P_k, N) and Pr = the primes up to p_k}
     while p² <= N do
         if (length < N) then
             Extend W,length to minimum of p*length,N;
         Delete multiples of p from W;
         Insert p into Pr;
         k, p := k+1, next(W, 1)
     if (length < N) then
         Extend W,length to N;
     return Pr ∪ W - {1};

where next(W, w) is the next value in the ordered set W after w.

 procedure Extend W,length to n is
     {in: W = W_k and length = P_k and n > length}
     {out: W = W_k→n and length = n}
     integer w, x;
     w, x := 1, length+1;
     while x <= n do
         Insert x into W;
         w := next(W,w);
         x := length + w;
     length := n;

 procedure Delete multiples of p from W,length is
     integer w;
     w := p;
     while p*w <= length do
         w := next(W,w);
     while w > 1 do
         w := prev(W,w);
         Remove p*w from W;

where prev(W, w) is the previous value in the ordered set W before w.

The algorithm can be initialized with $W_{0}$ instead of $W_{1}$, at the minor complication of making next(W,1) a special case when k = 0.

This abstract algorithm uses ordered sets supporting the operations of insertion of a value greater than the maximum, deletion of a member, getting the next value after a member, and getting the previous value before a member. Using one of Mertens' theorems (the third), it can be shown to use $O(N/\log \log N)$ of these operations, and additions and multiplications.[2]

Implementation

An array-based doubly-linked list s can be used to implement the ordered set W, with s[w] storing next(W,w) and s[w-1] storing prev(W,w). This permits each abstract operation to be implemented in a small number of operations. (The array can also be used to store the set Pr "for free".) Therefore the time complexity of the sieve of Pritchard to calculate the primes up to $N$ in the random access machine model is $O(N/\log \log N)$ operations on words of size $O(\log N)$. Pritchard also shows how multiplications can be eliminated by using very small multiplication tables,[2] so the bit complexity is $O(N\log N/\log \log N)$ bit operations. In the same model, the space complexity is $O(N)$ words, i.e., $O(N\log N)$ bits.
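A minimal Python sketch of the abstract algorithm follows. It uses a plain ordered Python list for the wheel W rather than the array-based doubly-linked list just described, so it illustrates the logic without attaining the sublinear operation count; the names and structure are illustrative, not Pritchard's.

 def sieve_of_pritchard(n):
     """Wheel sieve sketch: w is the current wheel (ordered, starting with 1),
     valid up to `length`; multiples of each new prime are gathered, then removed."""
     if n < 2:
         return []
     primes, w, length, p = [2], [1], 2, 3
     while p * p <= n:
         if length < n:
             new_length = min(p * length, n)
             # roll (extend) the wheel up to new_length
             w = [k * length + x for k in range(new_length // length + 1)
                  for x in w if k * length + x <= new_length]
             length = new_length
         multiples = {p * x for x in w if p * x <= length}  # gather, then delete
         w = [x for x in w if x not in multiples]
         primes.append(p)
         p = w[1]                 # next member after 1 is the next prime
     if length < n:               # final partial roll up to the limit
         w = [k * length + x for k in range(n // length + 1)
              for x in w if k * length + x <= n]
     return primes + w[1:]

 print(sieve_of_pritchard(150))  # ends ... 137, 139, 149, as in the example above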
The sieve of Eratosthenes requires only 1 bit for each candidate in the range 2 through $N$, so its space complexity is lower, at $O(N)$ bits. Note that space needed for the primes is not counted, since they can be printed or written to external storage as they are found. Pritchard[2] presented a variant of his sieve that requires only $O(N/\log \log N)$ bits without compromising the sublinear time complexity, making it asymptotically superior to the natural version of the sieve of Eratosthenes in both time and space. However, the sieve of Eratosthenes can be optimized to require much less memory by operating on successive segments of the natural numbers.[9] Its space complexity can be reduced to $O({\sqrt {N}})$ bits without increasing its time complexity.[3] This means that in practice it can be used for much larger limits $N$ than would otherwise fit in memory, and can also take advantage of fast cache memory. For maximum speed it is also optimized using a small wheel to avoid sieving with the first few primes (although this does not change its asymptotic time complexity). Therefore the sieve of Pritchard is not competitive as a practical sieve over sufficiently large ranges.

Geometric model

At the heart of the sieve of Pritchard is an algorithm for building successive wheels. It has a simple geometric model as follows:

1. Start with a circle of circumference 1 with a mark at 1.
2. To generate the next wheel:
   1. Go around the wheel and find (the distance to) the first mark after 1; call it p.
   2. Create a new circle with p times the circumference of the current wheel.
   3. Roll the current wheel around the new circle, marking it where a mark touches it.
   4. Magnify the current wheel by p and remove the marks that coincide.

Note that for the first 2 iterations it is necessary to continue round the circle until 1 is reached again. The first circle represents $W_{0}=\{1\}$, and successive circles represent wheels $W_{1},W_{2},...$. The animation on the right shows this model in action up to $W_{3}$.

It is apparent from the model that wheels are symmetric. This is because $P_{k}-w$ is not divisible by one of the first $k$ primes if and only if $w$ is not so divisible. It is possible to exploit this to avoid processing some composites, but at the cost of a more complex algorithm.

Related sieves

Once the wheel in the sieve of Pritchard reaches its maximum size, the remaining operations are equivalent to those performed by Euler's sieve. The sieve of Pritchard is unique in conflating the set of prime candidates with a dynamic wheel used to speed up the sifting process. But a separate static wheel (as frequently used to speed up the sieve of Eratosthenes) can give an $O(\log \log N)$ speedup to the latter, or to linear sieves, provided it is large enough (as a function of $N$). Examples are the use of the largest wheel of length not exceeding ${\sqrt {N}}/\log ^{2}N$ to get a version of the sieve of Eratosthenes that takes $O(N)$ additions and requires only $O({\sqrt {N}}/\log \log N)$ bits,[3] and the speedup of the naturally linear sieve of Atkin to get a sublinear optimized version.

Bengalloun found a linear smoothly incremental sieve,[10] i.e., one that (in theory) can run indefinitely and takes a bounded number of operations to increment the current bound $N$. He also showed how to make it sublinear by adapting the sieve of Pritchard to incrementally build the next dynamic wheel while the current one is being used.
Pritchard[5] showed how to avoid multiplications, thereby obtaining the same asymptotic bit-complexity as the sieve of Pritchard.

Runciman provides a functional algorithm[11] inspired by the sieve of Pritchard.

See also

• Sieve of Eratosthenes
• Sieve of Atkin
• Sieve theory

References

1. Pritchard, Paul (1982). "Explaining the Wheel Sieve". Acta Informatica. 17 (4): 477–485. doi:10.1007/BF00264164. S2CID 122592488.
2. Pritchard, Paul (1981). "A Sublinear Additive Sieve for Finding Prime Numbers". Communications of the ACM. 24 (1): 18–23. doi:10.1145/358527.358540. S2CID 16526704.
3. Pritchard, Paul (1983). "Fast Compact Prime Number Sieves (Among Others)". Journal of Algorithms. 4 (4): 332–344. doi:10.1016/0196-6774(83)90014-7. hdl:1813/6313. S2CID 1068851.
4. Pritchard, Paul (1987). "Linear prime-number sieves: A family tree". Science of Computer Programming. 9 (1): 17–35. doi:10.1016/0167-6423(87)90024-4. S2CID 44111749.
5. Pritchard, Paul (1980). "On the prime example of programming". Language Design and Programming Methodology. Lecture Notes in Computer Science. Vol. 877. pp. 280–288. CiteSeerX 10.1.1.52.835. doi:10.1007/3-540-09745-7_5. ISBN 978-3-540-09745-7. S2CID 9214019.
6. Dunten, Brian; Jones, Julie; Sorenson, Jonathan (1996). "A Space-Efficient Fast Prime Number Sieve". Information Processing Letters. 59 (2): 79–84. CiteSeerX 10.1.1.31.3936. doi:10.1016/0020-0190(96)00099-3. S2CID 9385950.
7. Mairson, Harry G. (1977). "Some new upper bounds on the generation of prime numbers". Communications of the ACM. 20 (9): 664–669. doi:10.1145/359810.359838. S2CID 20118576.
8. Gries, David; Misra, Jayadev (1978). "A linear sieve algorithm for finding prime numbers". Communications of the ACM. 21 (12): 999–1003. doi:10.1145/359657.359660. hdl:1813/6407. S2CID 11990373.
9. Bays, Carter; Hudson, Richard H. (1977). "The segmented sieve of Eratosthenes and primes in arithmetic progressions to 10^12". BIT. 17 (2): 121–127. doi:10.1007/BF01932283. S2CID 122592488.
10. Bengelloun, S. A. (1986). "An incremental primal sieve". Acta Informatica. 23 (2): 119–125. doi:10.1007/BF00289493. S2CID 20118576.
11. Runciman, C. (1997). "Lazy Wheel Sieves and Spirals of Primes" (PDF). Journal of Functional Programming. 7 (2): 219–225. doi:10.1017/S0956796897002670. S2CID 2422563.
Sieve of Eratosthenes

In mathematics, the sieve of Eratosthenes is an ancient algorithm for finding all prime numbers up to any given limit. It does so by iteratively marking as composite (i.e., not prime) the multiples of each prime, starting with the first prime number, 2. The multiples of a given prime are generated as a sequence of numbers starting from that prime, with constant difference between them that is equal to that prime.[1] This is the sieve's key distinction from using trial division to sequentially test each candidate number for divisibility by each prime.[2] Once all the multiples of each discovered prime have been marked as composites, the remaining unmarked numbers are primes.

The earliest known reference to the sieve (Ancient Greek: κόσκινον Ἐρατοσθένους, kóskinon Eratosthénous) is in Nicomachus of Gerasa's Introduction to Arithmetic,[3] an early 2nd cent. CE book which attributes it to Eratosthenes of Cyrene, a 3rd cent. BCE Greek mathematician, though describing the sieving by odd numbers instead of by primes.[4]

One of a number of prime number sieves, it is one of the most efficient ways to find all of the smaller primes. It may be used to find primes in arithmetic progressions.[5]

Overview

 Sift the Two's and Sift the Three's:
 The Sieve of Eratosthenes.
 When the multiples sublime,
 The numbers that remain are Prime.
  — Anonymous[6]

A prime number is a natural number that has exactly two distinct natural number divisors: the number 1 and itself.

To find all the prime numbers less than or equal to a given integer n by Eratosthenes' method:

1. Create a list of consecutive integers from 2 through n: (2, 3, 4, ..., n).
2. Initially, let p equal 2, the smallest prime number.
3. Enumerate the multiples of p by counting in increments of p from 2p to n, and mark them in the list (these will be 2p, 3p, 4p, ...; p itself should not be marked).
4. Find the smallest number in the list greater than p that is not marked. If there was no such number, stop. Otherwise, let p now equal this new number (which is the next prime), and repeat from step 3.
5. When the algorithm terminates, the numbers remaining not marked in the list are all the primes below n.

The main idea here is that every value given to p will be prime, because if it were composite it would be marked as a multiple of some other, smaller prime. Note that some of the numbers may be marked more than once (e.g., 15 will be marked both for 3 and 5).

As a refinement, it is sufficient to mark the numbers in step 3 starting from p², as all the smaller multiples of p will have already been marked at that point. This means that the algorithm is allowed to terminate in step 4 when p² is greater than n.[1]

Another refinement is to initially list odd numbers only, (3, 5, ..., n), and count in increments of 2p in step 3, thus marking only odd multiples of p. This actually appears in the original algorithm.[1][4] This can be generalized with wheel factorization, forming the initial list only from numbers coprime with the first few primes and not just from odds (i.e., numbers coprime with 2), and counting in the correspondingly adjusted increments so that only such multiples of p are generated that are coprime with those small primes, in the first place.[7]

Example

To find all the prime numbers less than or equal to 30, proceed as follows.
First, generate a list of integers from 2 to 30:

 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30

The first number in the list is 2; cross out every 2nd number in the list after 2 by counting up from 2 in increments of 2 (these will be all the multiples of 2 in the list), leaving:

 2 3 5 7 9 11 13 15 17 19 21 23 25 27 29

The next number in the list after 2 is 3; cross out every 3rd number in the list after 3 by counting up from 3 in increments of 3 (these will be all the multiples of 3 in the list), leaving:

 2 3 5 7 11 13 17 19 23 25 29

The next number not yet crossed out in the list after 3 is 5; cross out every 5th number in the list after 5 by counting up from 5 in increments of 5 (i.e. all the multiples of 5), leaving:

 2 3 5 7 11 13 17 19 23 29

The next number not yet crossed out in the list after 5 is 7; the next step would be to cross out every 7th number in the list after 7, but they are all already crossed out at this point, as these numbers (14, 21, 28) are also multiples of smaller primes; the first multiple of 7 not already crossed out would be 7 × 7 = 49, which is greater than 30. The numbers not crossed out at this point in the list are all the prime numbers below 30:

 2 3 5 7 11 13 17 19 23 29

Algorithm and variants

Pseudocode

The sieve of Eratosthenes can be expressed in pseudocode, as follows:[8][9]

 algorithm Sieve of Eratosthenes is
     input: an integer n > 1.
     output: all prime numbers from 2 through n.

     let A be an array of Boolean values, indexed by integers 2 to n,
     initially all set to true.

     for i = 2, 3, 4, ..., not exceeding √n do
         if A[i] is true
             for j = i², i²+i, i²+2i, i²+3i, ..., not exceeding n do
                 set A[j] := false

     return all i such that A[i] is true.

This algorithm produces all primes not greater than n. It includes a common optimization, which is to start enumerating the multiples of each prime i from i². The time complexity of this algorithm is O(n log log n),[9] provided the array update is an O(1) operation, as is usually the case.
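A direct Python transcription of this pseudocode (names illustrative) might look like the following:

 def sieve_of_eratosthenes(n):
     """Return all primes <= n, marking multiples of each prime from its square."""
     is_prime = [True] * (n + 1)
     is_prime[0:2] = [False, False]
     i = 2
     while i * i <= n:                        # stop once i² exceeds n
         if is_prime[i]:
             for j in range(i * i, n + 1, i): # smaller multiples already marked
                 is_prime[j] = False
         i += 1
     return [i for i in range(2, n + 1) if is_prime[i]]

 print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]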
Segmented sieve

As Sorenson notes, the problem with the sieve of Eratosthenes is not the number of operations it performs but rather its memory requirements.[9] For large n, the range of primes may not fit in memory; worse, even for moderate n, its cache use is highly suboptimal. The algorithm walks through the entire array A, exhibiting almost no locality of reference.

A solution to these problems is offered by segmented sieves, where only portions of the range are sieved at a time.[10] These have been known since the 1970s, and work as follows:[9][11]

1. Divide the range 2 through n into segments of some size Δ ≤ √n.
2. Find the primes in the first (i.e. the lowest) segment, using the regular sieve.
3. For each of the following segments, in increasing order, with m being the segment's topmost value, find the primes in it as follows:
   1. Set up a Boolean array of size Δ.
   2. Mark as non-prime the positions in the array corresponding to the multiples of each prime p ≤ √m found so far, by enumerating its multiples in steps of p starting from the lowest multiple of p between m − Δ and m.
   3. The remaining non-marked positions in the array correspond to the primes in the segment. It isn't necessary to mark any multiples of these primes, because all of these primes are larger than √m, as for k ≥ 1, one has $(k\Delta +1)^{2}>(k+1)\Delta $.

If Δ is chosen to be √n, the space complexity of the algorithm is O(√n), while the time complexity is the same as that of the regular sieve.[9]

For ranges with upper limit n so large that the sieving primes below √n as required by the page segmented sieve of Eratosthenes cannot fit in memory, a slower but much more space-efficient sieve like the sieve of Sorenson can be used instead.[12]

Incremental sieve

An incremental formulation of the sieve[2] generates primes indefinitely (i.e., without an upper bound) by interleaving the generation of primes with the generation of their multiples (so that primes can be found in gaps between the multiples), where the multiples of each prime p are generated directly by counting up from the square of the prime in increments of p (or 2p for odd primes). The generation must be initiated only when the prime's square is reached, to avoid adverse effects on efficiency. It can be expressed symbolically under the dataflow paradigm as

 primes = [2, 3, ...] \ [[p², p²+p, ...] for p in primes],

using list comprehension notation with \ denoting set subtraction of arithmetic progressions of numbers.

Primes can also be produced by iteratively sieving out the composites through divisibility testing by sequential primes, one prime at a time. It is not the sieve of Eratosthenes but is often confused with it, even though the sieve of Eratosthenes directly generates the composites instead of testing for them. Trial division has worse theoretical complexity than that of the sieve of Eratosthenes in generating ranges of primes.[2] When testing each prime, the optimal trial division algorithm uses all prime numbers not exceeding its square root, whereas the sieve of Eratosthenes produces each composite from its prime factors only, and gets the primes "for free", between the composites. The widely known 1975 functional sieve code by David Turner[13] is often presented as an example of the sieve of Eratosthenes[7] but is actually a sub-optimal trial division sieve.[2]
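One common Python idiom for this incremental scheme is a generator keeping a dictionary of upcoming composites (illustrative; the cited papers use Haskell and other formulations):

 from itertools import count, islice

 def primes():
     """Unbounded incremental sieve: each prime p starts generating its
     multiples only once p*p is reached, stepping upward by p."""
     multiples = {}                 # upcoming composite -> prime that produces it
     for n in count(2):
         p = multiples.pop(n, None)
         if p is None:              # n was never scheduled as a composite: prime
             multiples[n * n] = n   # first composite it must account for is n²
             yield n
         else:                      # n is composite: reschedule its witness
             m = n + p
             while m in multiples:  # keep dictionary slots unique
                 m += p
             multiples[m] = p

 print(list(islice(primes(), 10)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]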
Algorithmic complexity

The sieve of Eratosthenes is a popular way to benchmark computer performance.[14] The time complexity of calculating all primes below n in the random access machine model is O(n log log n) operations, a direct consequence of the fact that the prime harmonic series asymptotically approaches log log n. It has an exponential time complexity with regard to input size, though, which makes it a pseudo-polynomial algorithm. The basic algorithm requires O(n) memory.

The bit complexity of the algorithm is O(n (log n) (log log n)) bit operations with a memory requirement of O(n).[15]

The normally implemented page segmented version has the same operational complexity of O(n log log n) as the non-segmented version, but reduces the space requirements to the very minimal size of the segment page plus the memory required to store the base primes less than the square root of the range used to cull composites from successive page segments, of size O(√n/log n).

A special (rarely, if ever, implemented) segmented version of the sieve of Eratosthenes, with basic optimizations, uses O(n) operations and O(√n log log n/log n) bits of memory.[16][17][18]

Using big O notation ignores constant factors and offsets that may be very significant for practical ranges: the sieve of Eratosthenes variation known as the Pritchard wheel sieve[16][17][18] has an O(n) performance, but its basic implementation requires either a "one large array" algorithm, which limits its usable range to the amount of available memory, or it needs to be page segmented to reduce memory use. When implemented with page segmentation in order to save memory, the basic algorithm still requires about O(n/log n) bits of memory (much more than the requirement of the basic page segmented sieve of Eratosthenes using O(√n/log n) bits of memory). Pritchard's work reduced the memory requirement at the cost of a large constant factor. Although the resulting wheel sieve has O(n) performance and an acceptable memory requirement, it is not faster than a reasonably wheel-factorized basic sieve of Eratosthenes for practical sieving ranges.

Euler's sieve

Euler's proof of the zeta product formula contains a version of the sieve of Eratosthenes in which each composite number is eliminated exactly once.[9] The same sieve was rediscovered and observed to take linear time by Gries & Misra (1978).[19] It, too, starts with a list of numbers from 2 to n in order. On each step the first element is identified as the next prime, is multiplied with each element of the list (thus starting with itself), and the results are marked in the list for subsequent deletion. The initial element and the marked elements are then removed from the working sequence, and the process is repeated:

 [2] (3) 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 ...
 [3] (5) 7 11 13 17 19 23 25 29 31 35 37 41 43 47 49 53 55 59 61 65 67 71 73 77 79 ...
 [4] (7) 11 13 17 19 23 29 31 37 41 43 47 49 53 59 61 67 71 73 77 79 ...
 [5] (11) 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 ...
 [...]

Here the example is shown starting from odds, after the first step of the algorithm. Thus, on the kth step all the remaining multiples of the kth prime are removed from the list, which will thereafter contain only numbers coprime with the first k primes (cf. wheel factorization), so that the list will start with the next prime, and all the numbers in it below the square of its first element will be prime too.

Thus, when generating a bounded sequence of primes, when the next identified prime exceeds the square root of the upper limit, all the remaining numbers in the list are prime.[9] In the example given above that is achieved on identifying 11 as next prime, giving a list of all primes less than or equal to 80.

Note that numbers that will be discarded by a step are still used while marking the multiples in that step, e.g., for the multiples of 3 it is 3 × 3 = 9, 3 × 5 = 15, 3 × 7 = 21, 3 × 9 = 27, ..., 3 × 15 = 45, ..., so care must be taken dealing with this.[9]
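An array-based linear sieve in Python, in the spirit of the Gries and Misra algorithm (illustrative, not their original formulation), makes the "each composite exactly once" property concrete: every composite is marked exactly once, as its smallest prime factor times the corresponding cofactor.

 def linear_sieve(n):
     """Return all primes <= n; each composite i*p is marked exactly once,
     with p the smallest prime factor of i*p."""
     is_composite = [False] * (n + 1)
     primes = []
     for i in range(2, n + 1):
         if not is_composite[i]:
             primes.append(i)
         for p in primes:
             if i * p > n:
                 break
             is_composite[i * p] = True
             if i % p == 0:   # p is the smallest prime factor of i; stop here
                 break
     return primes

 print(linear_sieve(80))  # all primes up to 80, as in the example above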
See also

• Sieve of Pritchard
• Sieve of Atkin
• Sieve of Sundaram
• Sieve theory

References

1. Horsley, Rev. Samuel, F. R. S., "Κόσκινον Ερατοσθένους or, The Sieve of Eratosthenes. Being an account of his method of finding all the Prime Numbers", Philosophical Transactions (1683–1775), Vol. 62 (1772), pp. 327–347.
2. O'Neill, Melissa E., "The Genuine Sieve of Eratosthenes", Journal of Functional Programming, published online by Cambridge University Press 9 October 2008, doi:10.1017/S0956796808007004, pp. 10, 11 (contains two incremental sieves in Haskell: a priority-queue-based one by O'Neill and a list-based one by Richard Bird).
3. Hoche, Richard, ed. (1866), Nicomachi Geraseni Pythagorei Introductionis arithmeticae libri II, chapter XIII, 3, Leipzig: B.G. Teubner, p. 30.
4. Nicomachus of Gerasa (1926), Introduction to Arithmetic; translated into English by Martin Luther D'Ooge; with studies in Greek arithmetic by Frank Egleston Robbins and Louis Charles Karpinski, chapter XIII, 3, New York: The Macmillan Company, p. 204.
5. J. C. Morehead, "Extension of the Sieve of Eratosthenes to arithmetical progressions and applications", Annals of Mathematics, Second Series 10:2 (1909), pp. 88–104.
6. Clocksin, William F., Christopher S. Mellish, Programming in Prolog, 1984, p. 170. ISBN 3-540-11046-1.
7. Runciman, Colin (1997). "Functional Pearl: Lazy wheel sieves and spirals of primes" (PDF). Journal of Functional Programming. 7 (2): 219–225. doi:10.1017/S0956796897002670. S2CID 2422563.
8. Sedgewick, Robert (1992). Algorithms in C++. Addison-Wesley. ISBN 978-0-201-51059-1. p. 16.
9. Jonathan Sorenson, An Introduction to Prime Number Sieves, Computer Sciences Technical Report #909, Department of Computer Sciences, University of Wisconsin-Madison, January 2, 1990 (the use of optimization of starting from squares, and thus using only the numbers whose square is below the upper limit, is shown).
10. Crandall & Pomerance, Prime Numbers: A Computational Perspective, second edition, Springer: 2005, pp. 121–24.
11. Bays, Carter; Hudson, Richard H. (1977). "The segmented sieve of Eratosthenes and primes in arithmetic progressions to 10^12". BIT. 17 (2): 121–127. doi:10.1007/BF01932283. S2CID 122592488.
12. J. Sorenson, "The pseudosquares prime sieve", Proceedings of the 7th International Symposium on Algorithmic Number Theory (ANTS-VII, 2006).
13. Turner, David A. SASL language manual. Tech. rept. CS/75/1. Department of Computational Science, University of St. Andrews 1975. (primes = sieve [2..]; sieve (p:nos) = p:sieve (remove (multsof p) nos); remove m = filter (not . m); multsof p n = rem n p==0). But see also Peter Henderson, Morris, James Jr., A Lazy Evaluator, 1976, where we find the following, attributed to P. Quarendon: primeswrt[x;l] = if car[l] mod x=0 then primeswrt[x;cdr[l]] else cons[car[l];primeswrt[x;cdr[l]]]; primes[l] = cons[car[l];primes[primeswrt[car[l];cdr[l]]]]; primes[integers[2]]; the priority is unclear.
14. Peng, T. A. (Fall 1985). "One Million Primes Through the Sieve". BYTE. pp. 243–244. Retrieved 19 March 2016.
15. Pritchard, Paul, "Linear prime-number sieves: a family tree", Sci. Comput. Programming 9:1 (1987), pp. 17–35.
16. Paul Pritchard, "A sublinear additive sieve for finding prime numbers", Communications of the ACM 24 (1981), 18–23. MR600730
17. Paul Pritchard, "Explaining the wheel sieve", Acta Informatica 17 (1982), 477–485. MR685983
18. Paul Pritchard, "Fast compact prime number sieves" (among others), Journal of Algorithms 4 (1983), 332–344. MR729229
19. Gries, David; Misra, Jayadev (December 1978), "A linear sieve algorithm for finding prime numbers" (PDF), Communications of the ACM, 21 (12): 999–1003, doi:10.1145/359657.359660, hdl:1813/6407, S2CID 11990373.
External links

• primesieve – Very fast highly optimized C/C++ segmented Sieve of Eratosthenes
• Eratosthenes, sieve of at Encyclopaedia of Mathematics
• Interactive JavaScript Page
• Sieve of Eratosthenes by George Beck, Wolfram Demonstrations Project.
• Sieve of Eratosthenes in Haskell
• Sieve of Eratosthenes algorithm illustrated and explained. Java and C++ implementations.
• A related sieve written in x86 assembly language
• Fast optimized highly parallel CUDA segmented Sieve of Eratosthenes in C
• SieveOfEratosthenesInManyProgrammingLanguages c2 wiki page
• The Art of Prime Sieving. Sieve of Eratosthenes in C from 1998, with nice features and algorithmic tricks explained.
Sieve of Sundaram

In mathematics, the sieve of Sundaram is a variant of the sieve of Eratosthenes, a simple deterministic algorithm for finding all the prime numbers up to a specified integer. It was discovered by the Indian student S. P. Sundaram in 1934.[1][2]

Algorithm

Start with a list of the integers from 1 to n. From this list, remove all numbers of the form i + j + 2ij where:

• $i,j\in \mathbb {N} ,\ 1\leq i\leq j$
• $i+j+2ij\leq n$

The remaining numbers are doubled and incremented by one, giving a list of the odd prime numbers (i.e., all primes except 2) below 2n + 2.

The sieve of Sundaram sieves out the composite numbers just as the sieve of Eratosthenes does, but even numbers are not considered; the work of "crossing out" the multiples of 2 is done by the final double-and-increment step. Whenever Eratosthenes' method would cross out k different multiples of a prime $2i+1$, Sundaram's method crosses out $i+j(2i+1)$ for $1\leq j\leq \lfloor k/2\rfloor $.

Correctness

If we start with integers from 1 to n, the final list contains only odd integers from 3 to $2n+1$. From this final list, some odd integers have been excluded; we must show these are precisely the composite odd integers less than $2n+2$.

Let q be an odd integer of the form $2k+1$. Then, q is excluded if and only if k is of the form $i+j+2ij$, that is, $q=2(i+j+2ij)+1$. Then we have:

${\begin{aligned}q&=2(i+j+2ij)+1\\&=2i+2j+4ij+1\\&=(2i+1)(2j+1).\end{aligned}}$

So, an odd integer is excluded from the final list if and only if it has a factorization of the form $(2i+1)(2j+1)$ — which is to say, if it has a non-trivial odd factor. Therefore the list must be composed of exactly the set of odd prime numbers less than or equal to $2n+2$.

 def sieve_of_Sundaram(n):
     """The sieve of Sundaram is a simple deterministic algorithm for finding
     all the prime numbers up to a specified integer."""
     k = (n - 2) // 2
     integers_list = [True] * (k + 1)
     for i in range(1, k + 1):
         j = i
         while i + j + 2 * i * j <= k:
             integers_list[i + j + 2 * i * j] = False
             j += 1
     if n > 2:
         print(2, end=' ')
     for i in range(1, k + 1):
         if integers_list[i]:
             print(2 * i + 1, end=' ')

Asymptotic complexity

The above Python version, though commonly implemented in this form, obscures the true complexity of the algorithm, for the following reasons:

1. The range for the outer i looping variable is much too large, resulting in redundant looping that cannot perform any culling of composite number representations; the proper range is up to the array index representing the largest odd number not exceeding the square root of the range.
2. The code doesn't properly account for the zero-based indexing of Python lists, so that it ignores the values at the bottom and top of the array; this is a minor issue, but it serves to show that the algorithm behind the code has not been clearly understood.
3. The inner culling loop (the j loop) exactly reflects the way the algorithm is formulated, but seemingly without realizing that the indexed culling starts at exactly the index representing the square of the base odd number, and that indexing using multiplication can much more easily be expressed as simple repeated addition of the base odd number across the range; this method of adding a constant value across the culling range is exactly how culling in the sieve of Eratosthenes is generally implemented.
The following Python code in the same style resolves the above three issues, as well as converting the code to a prime-counting function that also displays the total number of composite-culling operations:

import math

def sieve_of_Sundaram(n):
    """The sieve of Sundaram is a simple deterministic algorithm for
    finding all the prime numbers up to a specified integer."""
    if n < 3:
        if n < 2:
            return 0
        else:
            return 1
    k = (n - 3) // 2 + 1  # number of odd numbers from 3 to n inclusive
    integers_list = [True for i in range(k)]
    ops = 0
    for i in range((int(math.sqrt(n)) - 3) // 2 + 1):
        # if integers_list[i]: # adding this condition turns it into a SoE!
        p = 2 * i + 3
        s = (p * p - 3) // 2  # compute cull start at the index of p squared
        for j in range(s, k, p):
            integers_list[j] = False
            ops += 1
    print("Total operations: ", ops, ";", sep='')
    count = 1
    for i in range(k):
        if integers_list[i]:
            count += 1
    print("Found ", count, " primes to ", n, ".", sep='')

Note the commented-out line, which is all that is necessary to convert the Sieve of Sundaram to the Odds-Only (wheel factorized by the only even prime, two) Sieve of Eratosthenes; this clarifies that the only difference between these two algorithms is that the Sieve of Sundaram culls composite numbers using all odd numbers as the base values, whereas the Odds-Only Sieve of Eratosthenes uses only the odd primes as base values, with both ranges of base values bounded to the square root of the range. When run for various ranges, it is immediately clear that while, of course, the resulting count of primes for a given range is identical between the two algorithms, the number of culling operations is much higher for the Sieve of Sundaram and also grows much more quickly with increasing range. From the above implementation, it is clear that the amount of work done is given by $\int _{a}^{b}{\frac {n}{2x}}\,dx$ or ${\frac {n}{2}}\int _{a}^{b}{\frac {1}{x}}\,dx$ where: • n is the range to be sieved, and • the range a to b is the odd numbers between 3 and the square root of n (the a to b range actually starts at the square of the odd base values, but this difference is negligible for large ranges). As the integral of the reciprocal of x is exactly $\log {x}$, and as the lower value for a is relatively very small (close to one, which has a log value of zero), this is about ${\frac {n}{4}}\log {\sqrt {n}}$ or ${\frac {n}{4}}{\frac {1}{2}}\log {n}$ or ${\frac {n}{8}}\log {n}$. Ignoring the constant factor of one eighth, the asymptotic complexity in Big O notation is clearly $O({n}\log {n})$. See also • Sieve of Eratosthenes • Sieve of Atkin • Sieve theory References 1. V. Ramaswami Aiyar (1934). "Sundaram's Sieve for Prime Numbers". The Mathematics Student. 2 (2): 73. ISSN 0025-5742. 2. G. (1941). "Curiosa 81. A New Sieve for Prime Numbers". Scripta Mathematica. 8 (3): 164. • Ogilvy, C. Stanley; John T. Anderson (1988). Excursions in Number Theory. Dover Publications, 1988 (reprint from Oxford University Press, 1966). pp. 98–100, 158. ISBN 0-486-25778-9. • Honsberger, Ross (1970). Ingenuity in Mathematics. New Mathematical Library #23. Mathematical Association of America. pp. 75. ISBN 0-394-70923-3. • A new "sieve" for primes, an excerpt from Kordemski, Boris A. (1974). Köpfchen, Köpfchen! Mathematik zur Unterhaltung [Use Your Head! Mathematics for Entertainment] (in German). MSB Nr. 78. Urania Verlag. p. 200. (translation of the Russian book Kordemsky, Boris Anastasyevich (1958). Matematicheskaya smekalka [Mathematical Quick Wits]. Moscow: GIFML.) • Movshovitz-Hadar, N. (1988). "Stimulating Presentations of Theorems Followed by Responsive Proofs". For the Learning of Mathematics.
8 (2): 12–19. • Ferrando, Elisabetta (2005). Abductive processes in conjecturing and proving (PDF) (PhD). Purdue University. pp. 70–72. • Baxter, Andrew. "Sundaram's Sieve". Topics from the History of Cryptography. MU Department of Mathematics. External links • A C99 implementation of the Sieve of Sundaram using bitarrays
Sieve theory Sieve theory is a set of general techniques in number theory, designed to count, or more realistically to estimate the size of, sifted sets of integers. The prototypical example of a sifted set is the set of prime numbers up to some prescribed limit X. Correspondingly, the prototypical example of a sieve is the sieve of Eratosthenes, or the more general Legendre sieve. The direct attack on prime numbers using these methods soon reaches apparently insuperable obstacles, in the way of the accumulation of error terms. In one of the major strands of number theory in the twentieth century, ways were found of avoiding some of the difficulties of a frontal attack with a naive idea of what sieving should be. One successful approach is to approximate a specific sifted set of numbers (e.g. the set of prime numbers) by another, simpler set (e.g. the set of almost prime numbers), which is typically somewhat larger than the original set, and easier to analyze. More sophisticated sieves also do not work directly with sets per se, but instead count them according to carefully chosen weight functions on these sets (options for giving some elements of these sets more "weight" than others). Furthermore, in some modern applications, sieves are used not to estimate the size of a sifted set, but to produce a function that is large on the set and mostly small outside it, while being easier to analyze than the characteristic function of the set. Basic sieve theory For information on notation, see the remark at the end of this section. We start with some countable sequence of non-negative numbers ${\mathcal {A}}=(a_{n})$. In the most basic case this sequence is just the indicator function $a_{n}=1_{A}(n)$ of some set $A=\{s:s\leq x\}$ we want to sieve. However, this abstraction allows for more general situations. Next we introduce a general set of prime numbers called the sifting range ${\mathcal {P}}\subseteq \mathbb {P} $ and their product up to $z$ as a function $P(z)=\prod \limits _{p\in {\mathcal {P}},p<z}p$. The goal of sieve theory is to estimate the sifting function $S({\mathcal {A}},{\mathcal {P}},z)=\sum \limits _{n\leq x,(n,P(z))=1}a_{n}.$ In the case of $a_{n}=1_{A}(n)$ this just counts the cardinality of the subset $A_{\operatorname {sift} }\subseteq A$ of numbers that are coprime to the prime factors of $P(z)$. Legendre's identity We can rewrite the sifting function with Legendre's identity $S({\mathcal {A}},{\mathcal {P}},z)=\sum \limits _{d\mid P(z)}\mu (d)A_{d}(x)$ by using the Möbius function and some functions $A_{d}(x)$ induced by the elements of ${\mathcal {P}}$ $A_{d}(x)=\sum \limits _{n\leq x,n\equiv 0{\pmod {d}}}a_{n}.$ Example Let $z=7$ and ${\mathcal {P}}=\mathbb {P} $. The Möbius function is negative for every prime, so we get ${\begin{aligned}S({\mathcal {A}},\mathbb {P} ,7)&=A_{1}(x)-A_{2}(x)-A_{3}(x)-A_{5}(x)+A_{6}(x)+A_{10}(x)+A_{15}(x)-A_{30}(x).\end{aligned}}$ Approximation of the congruence sum One assumes then that $A_{d}(x)$ can be written as $A_{d}(x)=g(d)X+r_{d}(x)$ where $g(d)$ is a density, meaning a multiplicative function such that $g(1)=1,\qquad 0\leq g(p)<1\qquad p\in \mathbb {P} $ and $X$ is an approximation of $A_{1}(x)$ and $r_{d}(x)$ is some remainder term. The sifting function becomes $S({\mathcal {A}},{\mathcal {P}},z)=X\sum \limits _{d\mid P(z)}\mu (d)g(d)+\sum \limits _{d\mid P(z)}\mu (d)r_{d}(x)$ or in short $S({\mathcal {A}},{\mathcal {P}},z)=XG(x,z)+R(x,z).$ One tries then to estimate the sifting function by finding upper and lower bounds for $G$ and $R$, and hence for $S$.
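To make these definitions concrete, the following short Python sketch (an illustration added here, not from the sieve-theory literature; the function names are ours) evaluates the sifting function for the indicator sequence $a_{n}=1_{A}(n)$, $A=\{n:n\leq x\}$, both directly from the definition and via Legendre's identity, using $A_{d}(x)=\lfloor x/d\rfloor $ and $\mu (d)=(-1)^{r}$ for $d$ a product of $r$ distinct primes:

from math import gcd, isqrt
from itertools import combinations

def primes_below(z):
    # the sifting range P = {p prime : p < z}, by trial division
    return [p for p in range(2, z) if all(p % q for q in range(2, isqrt(p) + 1))]

def sift_direct(x, z):
    # S(A, P, z): count the n <= x coprime to P(z) = product of primes below z
    P_z = 1
    for p in primes_below(z):
        P_z *= p
    return sum(1 for n in range(1, x + 1) if gcd(n, P_z) == 1)

def sift_legendre(x, z):
    # Legendre's identity: S = sum over d | P(z) of mu(d) * A_d(x),
    # where d ranges over products of distinct primes below z
    ps = primes_below(z)
    total = 0
    for r in range(len(ps) + 1):
        for combo in combinations(ps, r):
            d = 1
            for p in combo:
                d *= p
            total += (-1) ** r * (x // d)  # mu(d) = (-1)^r, A_d(x) = floor(x/d)
    return total

# Both evaluate to 26 for x = 100, z = 7 (integers up to 100 coprime to 30):
assert sift_direct(100, 7) == sift_legendre(100, 7) == 26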
The partial sums in Legendre's identity alternately over- and undercount the sifting function, and the remainder term will be huge. Brun's idea to improve this was to replace $\mu (d)$ in the sifting function with a weight sequence $(\lambda _{d})$ consisting of restricted Möbius functions. Choosing two appropriate sequences $(\lambda _{d}^{-})$ and $(\lambda _{d}^{+})$ and denoting the sifting functions with $S^{-}$ and $S^{+}$, one can get lower and upper bounds for the original sifting function: $S^{-}\leq S\leq S^{+}.$[1] Since $g$ is multiplicative, one can also work with the identity $\sum \limits _{d\mid n}\mu (d)g(d)=\prod \limits _{p\mid n,\;p\in \mathbb {P} }(1-g(p)),\quad \forall \;n\in \mathbb {N} .$ Notation: a word of caution regarding notation: in the literature, one often identifies the sequence ${\mathcal {A}}$ with the set $A$ itself, writing ${\mathcal {A}}=\{s:s\leq x\}$ to define the sequence ${\mathcal {A}}=(a_{n})$. Also, in the literature the sum $A_{d}(x)$ is sometimes denoted as the cardinality $|A_{d}(x)|$ of some set $A_{d}(x)$, while we have defined $A_{d}(x)$ to be already the cardinality of this set. We used $\mathbb {P} $ to denote the set of primes and $(a,b)$ for the greatest common divisor of $a$ and $b$. Types of sieving Modern sieves include the Brun sieve, the Selberg sieve, the Turán sieve, the large sieve, the larger sieve and the Goldston-Pintz-Yıldırım sieve. One of the original purposes of sieve theory was to try to prove conjectures in number theory such as the twin prime conjecture. While the original broad aims of sieve theory are still largely unachieved, there have been some partial successes, especially in combination with other number theoretic tools. Highlights include: 1. Brun's theorem, which shows that the sum of the reciprocals of the twin primes converges (whereas the sum of the reciprocals of all primes diverges); 2. Chen's theorem, which shows that there are infinitely many primes p such that p + 2 is either a prime or a semiprime (the product of two primes); a closely related theorem of Chen Jingrun asserts that every sufficiently large even number is the sum of a prime and another number which is either a prime or a semiprime. These can be considered to be near-misses to the twin prime conjecture and the Goldbach conjecture respectively. 3. The fundamental lemma of sieve theory, which asserts that if one is sifting a set of N numbers, then one can accurately estimate the number of elements left in the sieve after $N^{\varepsilon }$ iterations provided that $\varepsilon $ is sufficiently small (fractions such as 1/10 are quite typical here). This lemma is usually too weak to sieve out primes (which generally require something like $N^{1/2}$ iterations), but can be enough to obtain results regarding almost primes. 4. The Friedlander–Iwaniec theorem, which asserts that there are infinitely many primes of the form $a^{2}+b^{4}$. 5. Zhang's theorem (Zhang 2014), which shows that there are infinitely many pairs of primes within a bounded distance. The Maynard–Tao theorem (Maynard 2015) generalizes Zhang's theorem to arbitrarily long sequences of primes. Techniques of sieve theory The techniques of sieve theory can be quite powerful, but they seem to be limited by an obstacle known as the parity problem, which roughly speaking asserts that sieve theory methods have extreme difficulty distinguishing between numbers with an odd number of prime factors and numbers with an even number of prime factors.
This parity problem is still not very well understood. Compared with other methods in number theory, sieve theory is comparatively elementary, in the sense that it does not necessarily require sophisticated concepts from either algebraic number theory or analytic number theory. Nevertheless, the more advanced sieves can still get very intricate and delicate (especially when combined with other deep techniques in number theory), and entire textbooks have been devoted to this single subfield of number theory; a classic reference is (Halberstam & Richert 1974) and a more modern text is (Iwaniec & Friedlander 2010). The sieve methods discussed in this article are not closely related to the integer factorization sieve methods such as the quadratic sieve and the general number field sieve. Those factorization methods use the idea of the sieve of Eratosthenes to determine efficiently which members of a list of numbers can be completely factored into small primes. Literature • Cojocaru, Alina Carmen; Murty, M. Ram (2006), An introduction to sieve methods and their applications, London Mathematical Society Student Texts, vol. 66, Cambridge University Press, ISBN 0-521-84816-4, MR 2200366 • Motohashi, Yoichi (1983), Lectures on Sieve Methods and Prime Number Theory, Tata Institute of Fundamental Research Lectures on Mathematics and Physics, vol. 72, Berlin: Springer-Verlag, ISBN 3-540-12281-8, MR 0735437 • Greaves, George (2001), Sieves in number theory, Ergebnisse der Mathematik und ihrer Grenzgebiete (3), vol. 43, Berlin: Springer-Verlag, doi:10.1007/978-3-662-04658-6, ISBN 3-540-41647-1, MR 1836967 • Harman, Glyn (2007). Prime-detecting sieves. London Mathematical Society Monographs. Vol. 33. Princeton, NJ: Princeton University Press. ISBN 978-0-691-12437-7. MR 2331072. Zbl 1220.11118. • Halberstam, Heini; Richert, Hans-Egon (1974). Sieve Methods. London Mathematical Society Monographs. Vol. 4. London-New York: Academic Press. ISBN 0-12-318250-6. MR 0424730. • Iwaniec, Henryk; Friedlander, John (2010), Opera de cribro, American Mathematical Society Colloquium Publications, vol. 57, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4970-5, MR 2647984 • Hooley, Christopher (1976), Applications of sieve methods to the theory of numbers, Cambridge Tracts in Mathematics, vol. 70, Cambridge-New York-Melbourne: Cambridge University Press, ISBN 0-521-20915-3, MR 0404173 • Maynard, James (2015). "Small gaps between primes". Annals of Mathematics. 181 (1): 383–413. arXiv:1311.4600. doi:10.4007/annals.2015.181.1.7. MR 3272929. • Tenenbaum, Gérald (1995), Introduction to Analytic and Probabilistic Number Theory, Cambridge studies in advanced mathematics, vol. 46, Translated from the second French edition (1995) by C. B. Thomas, Cambridge University Press, pp. 56–79, ISBN 0-521-41261-7, MR 1342300 • Zhang, Yitang (2014). "Bounded gaps between primes". Annals of Mathematics. 179 (3): 1121–1174. doi:10.4007/annals.2014.179.3.7. MR 3171761. External links • Bredikhin, B.M. (2001) [1994], "Sieve method", Encyclopedia of Mathematics, EMS Press References 1. (Iwaniec & Friedlander 2010)
Liu Sifeng Liu Sifeng (Chinese: 刘思峰; born 15 July 1955) is a Chinese systems engineer. He is the director of the Institute for Grey Systems Studies at Nanjing University of Aeronautics and Astronautics, Nanjing, China. He is best known for his work on grey system theory. Liu Sifeng (劉思峰). Born: 15 July 1955. Known for: Grey system theory. Alma mater: Henan University (BE); Huazhong University of Science and Technology (MS, PhD). Doctoral advisor: Julong Deng. Discipline: Information theory. Sub-discipline: Decision making with incomplete information. Education and career Liu obtained his BE in Mathematics from Henan University in 1981, then his MS in Economics (1986) and PhD in Systems Engineering (1998) from Huazhong University of Science and Technology, Wuhan, China.[1] He was a doctoral student of Julong Deng, the founder of grey system theory.[2] Liu was appointed as a lecturer at Henan University in 1985. He was promoted through the ranks, reaching full professor in 1994. In 2000, he moved as a distinguished professor to Nanjing University of Aeronautics and Astronautics, where he also serves as director of the Institute for Grey Systems Studies. In 2014, he worked as a research professor at De Montfort University in Leicester, UK.[1][2] Liu is the editor-in-chief of Grey Systems: Theory and Application,[3] and of the Journal of Grey System.[4] Awards and honors Liu is an honorary fellow of the World Organisation of Systems and Cybernetics,[5] and an honorary editor of the International Journal of Grey Systems (USA).[6] German Chancellor Angela Merkel mentioned Liu's contributions to grey system theory in a 2019 speech at Huazhong University of Science and Technology.[7] Books • Liu, Sifeng; Lin, Yi (2006). Grey Information: Theory and Practical Applications. Springer Science & Business Media. ISBN 978-1-84628-342-0.[8] • Dang, Yaoguo; Liu, Sifeng; Wang, Yuhong (2010). Optimization of Regional Industrial Structures and Applications. CRC Press. ISBN 978-1-4200-8752-9. • Fang, Zhigeng; Liu, Sifeng; Shi, Hongxing; Lin, Yi (2010). Grey Game Theory and Its Applications in Economic Decision-Making. CRC Press. ISBN 978-1-4200-8740-6. • Liu, Sifeng; Fang, Zhigeng; Shi, Hongxing; Guo, Benhai (2010). Theory of Science and Technology Transfer and Applications. CRC Press. ISBN 978-1-4200-8742-0. • Liu, Sifeng; Forrest, Jeffrey Yi-Lin, eds. (2010). Advances in Grey Systems Research. Springer. ISBN 978-3-642-13938-3. • Jian, Lirong; Liu, Sifeng; Lin, Yi (2011). Hybrid Rough Sets and Applications in Uncertain Decision-Making. CRC Press. ISBN 978-1-4200-8749-9. • Liu, Sifeng; Xie, Naiming; Yuan, Chaoqing; Fang, Zhigeng (2012). Systems Evaluation: Methods, Models, and Applications. CRC Press. ISBN 978-1-4200-8847-2. • Liu, Sifeng; Yang, Yingjie; Forrest, Jeffrey (2016). Grey Data Analysis: Methods, Models and Applications. Springer. ISBN 978-981-10-1841-1. References 1. Liu, Sifeng. "Curriculum Vita" (PDF). Retrieved 11 June 2021. 2. Javed, Saad Ahmed (18 June 2021). "Editorial". Grey Systems: Theory and Application. 11 (3): 2043–9377. doi:10.1108/GS-06-2021-169. S2CID 235826478. 3. "Journal Description". Grey Systems. Emerald Publishing. Retrieved 11 June 2021. 4. "Official Webpage". Journal of Grey System. Retrieved 11 June 2021. 5. "Distinctions and Awards". World Organisation of Systems and Cybernetics. 29 November 2011. Retrieved 12 June 2021. 6. "Editorial Board". International Journal of Grey Systems. Retrieved 25 March 2021. 7.
"Rede von Bundeskanzlerin Merkel an der Huazhong University of Science and Technology am 7. September 2019 in Wuhan" [Speech by Chancellor Merkel at the Huazhong University of Science and Technology on September 7, 2019 in Wuhan] (in German). Die Bundeskanzlerin. 7 September 2019. Retrieved 25 June 2021. 8. "Grey Information: Theory and Practical Applications". Kybernetes. 37 (1): 189. 15 February 2008. doi:10.1108/03684920810851078. ISSN 0368-492X. External links • Liu Sifeng publications indexed by Google Scholar Authority control International • ISNI • VIAF National • Germany • Israel • United States • Czech Republic Academics • ORCID • Scopus • zbMATH Other • IdRef
SigSpec SigSpec (acronym of SIGnificance SPECtrum) is a statistical technique that quantifies the reliability of periodicities in a measured (noisy and not necessarily equidistant) time series.[1] It relies on the amplitude spectrum obtained by the Discrete Fourier transform (DFT) and assigns a quantity called the spectral significance (frequently abbreviated by "sig") to each amplitude. This quantity is a logarithmic measure of the probability that the given amplitude level would be seen in white noise, in the sense of a type I error. It represents the answer to the question, "What would be the chance to obtain an amplitude like the measured one or higher, if the analysed time series were random?" SigSpec may be considered a formal extension to the Lomb-Scargle periodogram,[2][3] appropriately incorporating the fact that the time series is averaged to zero before applying the DFT, as is done in many practical applications. When a zero-mean corrected dataset has to be statistically compared to a random sample, the sample mean (rather than the population mean only) has to be zero. Probability density function (pdf) of white noise in Fourier space Considering a time series to be represented by a set of $K$ pairs $(t_{k},x_{k})$, the amplitude pdf of white noise in Fourier space, depending on frequency and phase angle, may be described in terms of three parameters, $\alpha _{0}$, $\beta _{0}$, $\theta _{0}$, defining the "sampling profile", according to $\tan 2\theta _{0}={\frac {K\sum _{k=0}^{K-1}\sin 2\omega t_{k}-2\left(\sum _{k=0}^{K-1}\cos \omega t_{k}\right)\left(\sum _{k=0}^{K-1}\sin \omega t_{k}\right)}{K\sum _{k=0}^{K-1}\cos 2\omega t_{k}-{\big (}\sum _{k=0}^{K-1}\cos \omega t_{k}{\big )}^{2}+{\big (}\sum _{k=0}^{K-1}\sin \omega t_{k}{\big )}^{2}}},$ $\alpha _{0}={\sqrt {{\frac {2}{K^{2}}}\left(K\sum _{k=0}^{K-1}\cos ^{2}\left(\omega t_{k}-\theta _{0}\right)-\left[\sum _{k=0}^{K-1}\cos \left(\omega t_{k}-\theta _{0}\right)\right]^{2}\right)}},$ $\beta _{0}={\sqrt {{\frac {2}{K^{2}}}\left(K\sum _{k=0}^{K-1}\sin ^{2}\left(\omega t_{k}-\theta _{0}\right)-\left[\sum _{k=0}^{K-1}\sin \left(\omega t_{k}-\theta _{0}\right)\right]^{2}\right)}}.$ In terms of the phase angle in Fourier space, $\theta $, with $\tan \theta ={\frac {\sum _{k=0}^{K-1}\sin \omega t_{k}}{\sum _{k=0}^{K-1}\cos \omega t_{k}}},$ the probability density of amplitudes is given by $\phi (A)={\frac {KA\cdot \operatorname {sock} }{2\langle x^{2}\rangle }}\exp \left(-{\frac {KA^{2}}{4\langle x^{2}\rangle }}\cdot \operatorname {sock} \right),$ where the sock function is defined by $\operatorname {sock} (\omega ,\theta )=\left[{\frac {\cos ^{2}\left(\theta -\theta _{0}\right)}{\alpha _{0}^{2}}}+{\frac {\sin ^{2}\left(\theta -\theta _{0}\right)}{\beta _{0}^{2}}}\right]$ and $\langle x^{2}\rangle $ denotes the variance of the dependent variable $x_{k}$. False-alarm probability and spectral significance Integration of the pdf yields the false-alarm probability that white noise in the time domain produces an amplitude of at least $A$, $\Phi _{\operatorname {FA} }(A)=\exp \left(-{\frac {KA^{2}}{4\langle x^{2}\rangle }}\cdot \operatorname {sock} \right).$ The sig is defined as the negative (base-10) logarithm of the false-alarm probability and evaluates to $\operatorname {sig} (A)={\frac {KA^{2}\log e}{4\langle x^{2}\rangle }}\cdot \operatorname {sock} .$ It returns the number of random time series one would have to examine, on average, to obtain one amplitude exceeding $A$ at the given frequency and phase.
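For illustration, the quantities above translate directly into code. The following numpy sketch (our own example, not part of the published SigSpec software; the DFT amplitude is taken here as $A={\frac {2}{K}}\left|\sum _{k}x_{k}e^{-i\omega t_{k}}\right|$, which is an assumed normalization) evaluates the sig at a given angular frequency:

import numpy as np

def sigspec_sig(t, x, omega):
    """Spectral significance ("sig") at angular frequency omega for a time
    series given as 1-D numpy arrays t (times) and x (measurements),
    following the formulas above (illustrative sketch)."""
    K = len(t)
    x = x - x.mean()  # SigSpec requires the sample mean to be zero
    c, s = np.cos(omega * t), np.sin(omega * t)
    # sampling-profile parameters theta_0, alpha_0, beta_0
    theta0 = 0.5 * np.arctan2(
        K * np.sum(np.sin(2 * omega * t)) - 2 * c.sum() * s.sum(),
        K * np.sum(np.cos(2 * omega * t)) - c.sum() ** 2 + s.sum() ** 2,
    )
    cc = np.cos(omega * t - theta0)
    ss = np.sin(omega * t - theta0)
    alpha0 = np.sqrt(2.0 / K ** 2 * (K * np.sum(cc ** 2) - cc.sum() ** 2))
    beta0 = np.sqrt(2.0 / K ** 2 * (K * np.sum(ss ** 2) - ss.sum() ** 2))
    # phase angle in Fourier space, and DFT amplitude (assumed normalization)
    theta = np.arctan2(s.sum(), c.sum())
    A = 2.0 / K * np.abs(np.sum(x * np.exp(-1j * omega * t)))
    sock = (np.cos(theta - theta0) ** 2 / alpha0 ** 2
            + np.sin(theta - theta0) ** 2 / beta0 ** 2)
    # sig = -log10(false-alarm probability)
    return K * A ** 2 * np.log10(np.e) / (4 * np.mean(x ** 2)) * sock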
Applications SigSpec is primarily used in asteroseismology to identify variable stars and to classify stellar pulsation (see references below). The fact that this method incorporates the properties of the time-domain sampling appropriately makes it a valuable tool for typical astronomical measurements containing data gaps. See also • Spectral density estimation References 1. P. Reegen (2007). "SigSpec - I. Frequency- and phase-resolved significance in Fourier space". Astronomy and Astrophysics. 467: 1353–1371. arXiv:physics/0703160. Bibcode:2007A&A...467.1353R. doi:10.1051/0004-6361:20066597. 2. N. R. Lomb (1976). "Least-squares frequency analysis of unequally spaced data". Astrophysics and Space Science. 39: 447–462. Bibcode:1976Ap&SS..39..447L. doi:10.1007/BF00648343. 3. J. D. Scargle (1982). "Studies in astronomical time series analysis. II. Statistical aspects of spectral analysis of unevenly spaced data". The Astrophysical Journal. 263: 835–853. Bibcode:1982ApJ...263..835S. doi:10.1086/160554. • M. Breger; S. M. Rucinski; P. Reegen (2007). "The Pulsation of EE Camelopardalis". The Astronomical Journal. 134: 1994–1998. arXiv:0709.3393. Bibcode:2007AJ....134.1994B. doi:10.1086/522795. • M. Gruberbauer; K. Kolenberg; J. F. Rowe; D. Huber; J. M. Matthews; P. Reegen; R. Kuschnig; C. Cameron; T. Kallinger; W. W. Weiss; D. B. Guenther; A. F. J. Moffat; S. M. Rucinski; D. Sasselov; G. A. H. Walker (2007). "MOST photometry of the RRd Lyrae variable AQ Leo: two radial modes, 32 combination frequencies and beyond". Monthly Notices of the Royal Astronomical Society. 379: 1498–1506. arXiv:0705.4603. Bibcode:2007MNRAS.379.1498G. doi:10.1111/j.1365-2966.2007.12042.x. • M. Gruberbauer; H. Saio; D. Huber; T. Kallinger; W. W. Weiss; D. B. Guenther; R. Kuschnig; J. M. Matthews; A. F. J. Moffat; S. M. Rucinski; D. Sasselov; G. A. H. Walker (2008). "MOST photometry and modeling of the rapidly oscillating (roAp) star γ Equulei". Astronomy and Astrophysics. 480: 223–232. arXiv:0801.0863. Bibcode:2008A&A...480..223G. doi:10.1051/0004-6361:20078830. • D. B. Guenther; T. Kallinger; P. Reegen; W. W. Weiss; J. M. Matthews; R. Kuschnig; A. F. J. Moffat; S. M. Rucinski; D. Sasselov; G. A. H. Walker (2007). "Searching for p-modes in η Bootis & Procyon using MOST satellite data". Communications in Asteroseismology. 151: 5–25. Bibcode:2007CoAst.151....5G. doi:10.1553/cia151s5. • D. B. Guenther; T. Kallinger; K. Zwintz; W. W. Weiss; J. Tanner (2007). "Seismology of Pre-Main-Sequence Stars in NGC 6530" (PDF). The Astrophysical Journal. 671: 581–591. Bibcode:2007ApJ...671..581G. doi:10.1086/522880. • D. Huber; H. Saio; M. Gruberbauer; W. W. Weiss; J. F. Rowe; M. Hareter; T. Kallinger; P. Reegen; J. M. Matthews; R. Kuschnig; D. B. Guenther; A. F. J. Moffat; S. M. Rucinski; D. Sasselov; G. A. H. Walker (2008). "MOST photometry of the roAp star 10 Aquilae". Astronomy and Astrophysics. 483: 239–248. arXiv:0803.1721. Bibcode:2008A&A...483..239H. doi:10.1051/0004-6361:20079220. • T. Kallinger; D. B. Guenther; J. M. Matthews; W. W. Weiss; D. Huber; R. Kuschnig; A. F. J. Moffat; S. M. Rucinski; D. Sasselov (2008). "Nonradial p-modes in the G9.5 giant ε Ophiuchi? Pulsation model fits to MOST photometry". Astronomy and Astrophysics. 478: 497–505. arXiv:0711.0837. Bibcode:2008A&A...478..497K. doi:10.1051/0004-6361:20078171. • T. Kallinger; P. Reegen; W. W. Weiss (2008). "A heuristic derivation of the uncertainty for frequency determination in time series data". Astronomy and Astrophysics. 481: 571–574. arXiv:0801.0683.
Bibcode:2008A&A...481..571K. doi:10.1051/0004-6361:20077559. • P. Reegen (2005). "SigSpec - reliable computation of significance in Fourier space", in The A-Star Puzzle, Proceedings IAU Symp. 224, eds. J. Zverko, J. Ziznovsky, S.J. Adelman, W.W. Weiss. pp. 791–798. ISBN 0-521-85018-5. • P. Reegen; M. Gruberbauer; L. Schneider; W. W. Weiss (2008). "Cinderella - Comparison of INDEpendent RELative Least-squares Amplitudes". Astronomy and Astrophysics. 484: 601–608. arXiv:0710.2963. Bibcode:2008A&A...484..601R. doi:10.1051/0004-6361:20078855. • C. Schoenaers; A. E. Lynas-Gray (2007). "A new slowly pulsating subdwarf-B star: HD 4539". Communications in Asteroseismology. 151: 67–76. Bibcode:2007CoAst.151...67S. doi:10.1553/cia151s67. • M. Zechmeister; M. Kuerster (2009). "The generalised Lomb-Scargle periodogram. A new formalism for the floating-mean and Keplerian periodograms". Astronomy and Astrophysics. 496: 577–584. arXiv:0901.2573. Bibcode:2009A&A...496..577Z. doi:10.1051/0004-6361:200811296. • K. Zwintz; T. Kallinger; D. B. Guenther; M. Gruberbauer; D. Huber; J. Rowe; R. Kuschnig; W. W. Weiss; J. M. Matthews; A. F. J. Moffat; S. M. Rucinski; D. Sasselov; G. A. H. Walker; M. P. Casey (2009). "MOST photometry of the enigmatic PMS pulsator HD 142666". Astronomy and Astrophysics. 494: 1031–1040. arXiv:0812.1960. Bibcode:2009A&A...494.1031Z. doi:10.1051/0004-6361:200811116. • K. Zwintz; M. Hareter; R. Kuschnig; P. J. Amado; N. Nesvacil; E. Rodriguez; D. Diaz-Fraile; W. W. Weiss; T. Pribulla; D. B. Guenther; J. M. Matthews; A. F. J. Moffat; S. M. Rucinski; D. Sasselov; G. A. H. Walker (2009). "MOST observations of the young open cluster NGC 2264". Astronomy and Astrophysics. 502: 1239–1252. doi:10.1051/0004-6361:200911863. External links • Website with further information on SigSpec calculation, etc.
Sigeru Mizohata Sigeru (Shigeru) Mizohata (Japanese: 溝畑 茂(みぞはた しげる); December 30, 1924 – June 25, 2002) was a Japanese mathematician, who specialized in the theory of partial differential equations.[1] Sigeru Mizohata. Born: 30 December 1924, Osaka Prefecture, Japan. Died: 25 June 2002 (aged 77). Nationality: Japanese. Alma mater: Kyoto University. Known for: Lax-Mizohata theorem, Mizohata operator. Awards: Matsunaga Prize (1966). Fields: Mathematics, partial differential equations. Institutions: Kyoto University. Doctoral advisor: Hiroshi Okamura. Biography Sigeru Mizohata graduated from the Faculty of Science at the Kyoto Imperial University in 1947, where he was studying under Hiroshi Okamura. From 1954 to 1957 he studied in France as an international student; this left a lasting impact, with many of his research papers subsequently published in French. His research interests mainly concerned hyperbolic partial differential equations and the use of functional analysis in the theory of PDEs. He was awarded an honorary doctorate by the University of Paris in 1986. Books • Mizohata, Sigeru (1979). The Theory of Partial Differential Equations (revised ed.). Cambridge University Press. ISBN 9780521297462. • Mizohata, Sigeru (1985). On the Cauchy Problem. Notes and Reports in Mathematics in Science and Engineering. Vol. 3. Academic Press, Inc. ISBN 9781483269061. Works • Mizohata, Sigeru (1961), "Some remarks on the Cauchy problem", Journal of Mathematics of Kyoto University, 1 (1): 109–127, doi:10.1215/kjm/1250525109 • Mizohata, Sigeru (1962), "Analyticity of the fundamental solutions of hyperbolic systems", Journal of Mathematics of Kyoto University, 1 (3): 327–355, doi:10.1215/kjm/1250525008 • Mizohata, Sigeru (1965). Lectures on Cauchy Problem, Tata Institute of Fundamental Research. • Mizohata, Sigeru (1974), "On Cauchy-Kowalevski's Theorem; A Necessary Condition", Publications of the Research Institute for Mathematical Sciences, 10 (2): 509–519, doi:10.2977/prims/1195192007 • Mizohata, Sigeru (1981), "On some Schrödinger type equations", Proceedings of the Japan Academy, Series A, Mathematical Sciences, 57 (2): 81–84, doi:10.3792/pjaa.57.81 • Mizohata, Sigeru (1958), "Unicité du prolongement des solutions pour quelques opérateurs différentiels paraboliques" [Uniqueness of the continuation of solutions for some parabolic differential operators], Memoirs of the College of Science, University of Kyoto, Series A: Mathematics, 31 (3): 219–239, doi:10.1215/kjm/1250776858 (in French) • Mizohata, Sigeru (1962), "Solutions nulles et solutions non analytiques" [Null solutions and non-analytic solutions], Journal of Mathematics of Kyoto University, 1 (2): 271–302, doi:10.1215/kjm/1250525061 (in French) References 1. sikyo.net/-/1064485 (in Japanese)
σ-finite measure In mathematics, a positive (or signed) measure μ defined on a σ-algebra Σ of subsets of a set X is called a finite measure if μ(X) is a finite real number (rather than ∞). A set A in Σ is of finite measure if μ(A) < ∞. The measure μ is called σ-finite if X is a countable union of measurable sets each with finite measure. A set in a measure space is said to have σ-finite measure if it is a countable union of measurable sets with finite measure. A measure being σ-finite is a weaker condition than being finite, i.e. all finite measures are σ-finite but there are (many) σ-finite measures that are not finite. A different but related notion that should not be confused with σ-finiteness is s-finiteness. Definition Let $(X,{\mathcal {A}})$ be a measurable space and $\mu $ a measure on it. The measure $\mu $ is called a σ-finite measure if it satisfies one of the four following equivalent criteria: 1. the set $X$ can be covered with at most countably many measurable sets with finite measure. This means that there are sets $A_{1},A_{2},\ldots \in {\mathcal {A}}$ with $\mu \left(A_{n}\right)<\infty $ for all $n\in \mathbb {N} $ that satisfy $\bigcup _{n\in \mathbb {N} }A_{n}=X$.[1] 2. the set $X$ can be covered with at most countably many measurable disjoint sets with finite measure. This means that there are sets $B_{1},B_{2},\ldots \in {\mathcal {A}}$ with $\mu \left(B_{n}\right)<\infty $ for all $n\in \mathbb {N} $ and $B_{i}\cap B_{j}=\varnothing $ for $i\neq j$ that satisfy $\bigcup _{n\in \mathbb {N} }B_{n}=X$. 3. the set $X$ can be covered with a monotone sequence of measurable sets with finite measure. This means that there are sets $C_{1},C_{2},\ldots \in {\mathcal {A}}$ with $C_{1}\subset C_{2}\subset \cdots $ and $\mu \left(C_{n}\right)<\infty $ for all $n\in \mathbb {N} $ that satisfy $\bigcup _{n\in \mathbb {N} }C_{n}=X$. 4. there exists a strictly positive measurable function $f$ whose integral is finite.[2] This means that $f(x)>0$ for all $x\in X$ and $\int f(x)\mu (\mathrm {d} x)<\infty $. If $\mu $ is a $\sigma $-finite measure, the measure space $(X,{\mathcal {A}},\mu )$ is called a $\sigma $-finite measure space.[3] Examples Lebesgue measure For example, Lebesgue measure on the real numbers is not finite, but it is σ-finite. Indeed, consider the intervals [k, k + 1) for all integers k; there are countably many such intervals, each has measure 1, and their union is the entire real line. Counting measure Alternatively, consider the real numbers with the counting measure; the measure of any finite set is the number of elements in the set, and the measure of any infinite set is infinity. This measure is not σ-finite, because every set with finite measure contains only finitely many points, and it would take uncountably many such sets to cover the entire real line. But the set of natural numbers $\mathbb {N} $ with the counting measure is σ-finite. Locally compact groups Locally compact groups which are σ-compact are σ-finite under the Haar measure. For example, all connected, locally compact groups G are σ-compact. To see this, let V be a relatively compact, symmetric (that is, V = V−1) open neighborhood of the identity. Then $H=\bigcup _{n\in \mathbb {N} }V^{n}$ is an open subgroup of G. H is also closed, since its complement is a union of open sets (the other cosets of H); by connectivity of G, H must be G itself. Thus all connected Lie groups are σ-finite under Haar measure. Nonexamples Any non-trivial measure taking only the two values 0 and $\infty $ is clearly non-σ-finite.
One example in $\mathbb {R} $ is: for all $A\subset \mathbb {R} $, $\mu (A)=\infty $ if and only if A is not empty; another one is: for all $A\subset \mathbb {R} $, $\mu (A)=\infty $ if and only if A is uncountable, 0 otherwise. Incidentally, both are translation-invariant. Properties The class of σ-finite measures has some very convenient properties; σ-finiteness can be compared in this respect to separability of topological spaces. Some theorems in analysis require σ-finiteness as a hypothesis. Usually, both the Radon–Nikodym theorem and Fubini's theorem are stated under an assumption of σ-finiteness on the measures involved. However, as shown in Segal's paper "Equivalences of measure spaces" (Am. J. Math. 73, 275 (1953)), they require only a weaker condition, namely localisability. Though measures which are not σ-finite are sometimes regarded as pathological, they do in fact occur quite naturally. For instance, if X is a metric space of Hausdorff dimension r, then all lower-dimensional Hausdorff measures are non-σ-finite if considered as measures on X. Equivalence to a probability measure Any σ-finite measure μ on a space X is equivalent to a probability measure on X: let Vn, n ∈ N, be a covering of X by pairwise disjoint measurable sets of finite μ-measure, and let wn, n ∈ N, be a sequence of positive numbers (weights) such that $\sum _{n=1}^{\infty }w_{n}=1.$ The measure ν defined by $\nu (A)=\sum _{n=1}^{\infty }w_{n}{\frac {\mu \left(A\cap V_{n}\right)}{\mu \left(V_{n}\right)}}$ is then a probability measure on X with precisely the same null sets as μ. Related concepts Moderate measures A Borel measure (in the sense of a locally finite measure on the Borel $\sigma $-algebra[4]) $\mu $ is called a moderate measure iff there are at most countably many open sets $A_{1},A_{2},\ldots $ with $\mu \left(A_{i}\right)<\infty $ for all $i$ and $\bigcup _{i=1}^{\infty }A_{i}=X$.[5] Every moderate measure is a $\sigma $-finite measure; the converse is not true. Decomposable measures A measure is called a decomposable measure if there are disjoint measurable sets $\left(A_{i}\right)_{i\in I}$ with $\mu \left(A_{i}\right)<\infty $ for all $i\in I$ and $\bigcup _{i\in I}A_{i}=X$. For decomposable measures, there is no restriction on the number of measurable sets with finite measure. Every $\sigma $-finite measure is a decomposable measure; the converse is not true. s-finite measures A measure $\mu $ is called an s-finite measure if it is the sum of at most countably many finite measures.[2] Every σ-finite measure is s-finite; the converse is not true. For a proof and a counterexample, see the article on s-finite measures. See also • Sigma additivity References 1. Klenke, Achim (2008). Probability Theory. Berlin: Springer. p. 12. doi:10.1007/978-1-84800-048-3. ISBN 978-1-84800-047-6. 2. Kallenberg, Olav (2017). Random Measures, Theory and Applications. Switzerland: Springer. p. 21. doi:10.1007/978-3-319-41598-7. ISBN 978-3-319-41596-3. 3. Anosov, D.V. (2001) [1994], "Measure space", Encyclopedia of Mathematics, EMS Press 4. Elstrodt, Jürgen (2009). Maß- und Integrationstheorie [Measure and Integration theory] (in German). Berlin: Springer Verlag. p. 313. doi:10.1007/978-3-540-89728-6. ISBN 978-3-540-89727-9. 5. Elstrodt, Jürgen (2009). Maß- und Integrationstheorie [Measure and Integration theory] (in German). Berlin: Springer Verlag. p. 318. doi:10.1007/978-3-540-89728-6. ISBN 978-3-540-89727-9.
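As a worked instance of the construction in the section Equivalence to a probability measure above (an illustrative example, not taken from the article): let μ be Lebesgue measure $\lambda $ on $\mathbb {R} $, take the pairwise disjoint covering sets $V_{n}=[n-1,n)\cup [-n,-n+1)$, each with $\lambda \left(V_{n}\right)=2$, and the weights $w_{n}=2^{-n}$, so that $\sum _{n=1}^{\infty }w_{n}=1$. The resulting measure $\nu (A)=\sum _{n=1}^{\infty }2^{-n}{\frac {\lambda \left(A\cap V_{n}\right)}{2}}$ satisfies $\nu (\mathbb {R} )=\sum _{n=1}^{\infty }2^{-n}=1$, and $\nu (A)=0$ holds exactly when $\lambda \left(A\cap V_{n}\right)=0$ for every $n$, that is, when $\lambda (A)=0$; so $\nu $ is a probability measure with precisely the same null sets as $\lambda $.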
σ-compact space In mathematics, a topological space is said to be σ-compact if it is the union of countably many compact subspaces.[1] A space is said to be σ-locally compact if it is both σ-compact and (weakly) locally compact.[2] That terminology can be somewhat confusing as it does not fit the usual pattern of σ-(property) meaning a countable union of spaces satisfying (property); that's why such spaces are more commonly referred to explicitly as σ-compact (weakly) locally compact, which is also equivalent to being exhaustible by compact sets.[3] Properties and examples • Every compact space is σ-compact, and every σ-compact space is Lindelöf (i.e. every open cover has a countable subcover).[4] The reverse implications do not hold: for example, standard Euclidean space (Rn) is σ-compact but not compact,[5] and the lower limit topology on the real line is Lindelöf but not σ-compact.[6] In fact, the countable complement topology on any uncountable set is Lindelöf but neither σ-compact nor locally compact.[7] However, it is true that any locally compact Lindelöf space is σ-compact. • (The irrational numbers) $\mathbb {R} \setminus \mathbb {Q} $ is not σ-compact.[8] • A Hausdorff, Baire space that is also σ-compact must be locally compact at at least one point. • If G is a topological group and G is locally compact at one point, then G is locally compact everywhere. Therefore, the previous property tells us that if G is a σ-compact, Hausdorff topological group that is also a Baire space, then G is locally compact. This shows that for Hausdorff topological groups that are also Baire spaces, σ-compactness implies local compactness. • The previous property implies for instance that Rω is not σ-compact: if it were σ-compact, it would necessarily be locally compact, since Rω is a topological group that is also a Baire space. • Every hemicompact space is σ-compact.[9] The converse, however, is not true;[10] for example, the space of rationals, with the usual topology, is σ-compact but not hemicompact. • The product of a finite number of σ-compact spaces is σ-compact. However, the product of an infinite number of σ-compact spaces may fail to be σ-compact.[11] • A σ-compact space X is second category (respectively Baire) if and only if the set of points at which X is locally compact is nonempty (respectively dense) in X.[12] See also • Exhaustion by compact sets – in analysis, a sequence of compact sets that converges on a given set • Lindelöf space – topological space such that every open cover has a countable subcover • Locally compact space – topological space such that every point has a neighbourhood with compact closure Notes 1. Steen, p. 19; Willard, p. 126. 2. Steen, p. 21. 3. "A question about local compactness and $\sigma$-compactness". Mathematics Stack Exchange. 4. Steen, p. 19. 5. Steen, p. 56. 6. Steen, p. 75–76. 7. Steen, p. 50. 8. Hart, K.P.; Nagata, J.; Vaughan, J.E. (2004). Encyclopedia of General Topology. Elsevier. p. 170. ISBN 0-444-50355-2. 9. Willard, p. 126. 10. Willard, p. 126. 11. Willard, p. 126. 12. Willard, p. 188. References • Steen, Lynn A. and Seebach, J. Arthur Jr.; Counterexamples in Topology, Holt, Rinehart and Winston (1970). ISBN 0-03-079485-4. • Willard, Stephen (2004). General Topology. Dover Publications. ISBN 0-486-43479-6.
Sigma-martingale In the mathematical theory of probability, a sigma-martingale is a semimartingale with an integral representation. Sigma-martingales were introduced by C.S. Chou and M. Emery in 1977 and 1978.[1] In financial mathematics, sigma-martingales appear in the fundamental theorem of asset pricing as an equivalent condition to no free lunch with vanishing risk (a no-arbitrage condition).[2] Mathematical definition An $\mathbb {R} ^{d}$-valued stochastic process $X=(X_{t})_{t=0}^{T}$ is a sigma-martingale if it is a semimartingale and there exists an $\mathbb {R} ^{d}$-valued martingale M and an M-integrable predictable process $\phi $ with values in $\mathbb {R} _{+}$ such that $X=\phi \cdot M.$[1] References 1. F. Delbaen; W. Schachermayer (1998). "The Fundamental Theorem of Asset Pricing for Unbounded Stochastic Processes" (pdf). Mathematische Annalen. 312: 215–250. doi:10.1007/s002080050220. Retrieved October 14, 2011. 2. Delbaen, Freddy; Schachermayer, Walter. "What is... a Free Lunch?" (pdf). Notices of the AMS. 51 (5): 526–528. Retrieved October 14, 2011.
Sigmoid function A sigmoid function is a mathematical function having a characteristic "S"-shaped curve or sigmoid curve. A common example of a sigmoid function is the logistic function, defined by the formula:[1] $\sigma (x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{e^{x}+1}}=1-\sigma (-x).$ Other standard sigmoid functions are given in the Examples section. In some fields, most notably in the context of artificial neural networks, the term "sigmoid function" is used as an alias for the logistic function. Special cases of the sigmoid function include the Gompertz curve (used in modeling systems that saturate at large values of x) and the ogee curve (used in the spillway of some dams). Sigmoid functions are defined on all real numbers and are commonly monotonically increasing, though they may be decreasing. Sigmoid functions most often take values (on the y axis) in the range 0 to 1; another commonly used range is from −1 to 1. A wide variety of sigmoid functions, including the logistic and hyperbolic tangent functions, have been used as the activation function of artificial neurons. Sigmoid curves are also common in statistics as cumulative distribution functions (which go from 0 to 1), such as the integrals of the logistic density, the normal density, and Student's t probability density functions. The logistic sigmoid function is invertible, and its inverse is the logit function. Definition A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point[1][2] and exactly one inflection point. A sigmoid "function" and a sigmoid "curve" refer to the same object. Properties In general, a sigmoid function is monotonic, and has a first derivative which is bell shaped. Conversely, the integral of any continuous, non-negative, bell-shaped function (with one local maximum and no local minimum, unless degenerate) will be sigmoidal. Thus the cumulative distribution functions for many common probability distributions are sigmoidal. One such example is the error function, which is related to the cumulative distribution function of a normal distribution; another is the arctan function, which is related to the cumulative distribution function of a Cauchy distribution. A sigmoid function is constrained by a pair of horizontal asymptotes as $x\rightarrow \pm \infty $. A sigmoid function is convex for values less than a particular point, and it is concave for values greater than that point: in many of the examples here, that point is 0.
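As a concrete illustration (a minimal numpy sketch, not from the source), the logistic sigmoid can be evaluated in a numerically stable way by branching on the sign of the input, using whichever of the two algebraically equivalent forms above keeps the exponential from overflowing:

import numpy as np

def sigmoid(x):
    """Numerically stable logistic function sigma(x) = 1 / (1 + exp(-x)).
    For x >= 0, exp(-x) <= 1, so 1 / (1 + exp(-x)) is safe; for x < 0,
    the equivalent form exp(x) / (1 + exp(x)) avoids overflow of exp(-x)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    ex = np.exp(x[~pos])  # safe: here x < 0, so exp(x) < 1
    out[~pos] = ex / (1.0 + ex)
    return out

# sigmoid(np.array([-1000.0, 0.0, 1000.0])) -> [0. , 0.5, 1. ] without
# overflow warnings; the naive formula would overflow at x = -1000.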
Examples • Logistic function $f(x)={\frac {1}{1+e^{-x}}}$ • Hyperbolic tangent (shifted and scaled version of the logistic function, above) $f(x)=\tanh x={\frac {e^{x}-e^{-x}}{e^{x}+e^{-x}}}$ • Arctangent function $f(x)=\arctan x$ • Gudermannian function $f(x)=\operatorname {gd} (x)=\int _{0}^{x}{\frac {dt}{\cosh t}}=2\arctan \left(\tanh \left({\frac {x}{2}}\right)\right)$ • Error function $f(x)=\operatorname {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt$ • Generalised logistic function $f(x)=\left(1+e^{-x}\right)^{-\alpha },\quad \alpha >0$ • Smoothstep function $f(x)={\begin{cases}\left(\int _{0}^{1}\left(1-u^{2}\right)^{N}\,du\right)^{-1}\int _{0}^{x}\left(1-u^{2}\right)^{N}\,du,&|x|\leq 1\\\operatorname {sgn}(x),&|x|\geq 1\end{cases}}\quad N\in \mathbb {Z} ,\ N\geq 1$ • Some algebraic functions, for example $f(x)={\frac {x}{\sqrt {1+x^{2}}}}$ • and in a more general form[3] $f(x)={\frac {x}{\left(1+|x|^{k}\right)^{1/k}}}$ • Up to shifts and scaling, many sigmoids are special cases of $f(x)=\varphi (\varphi (x,\beta ),\alpha ),$ where $\varphi (x,\lambda )={\begin{cases}(1-\lambda x)^{1/\lambda }&\lambda \neq 0\\e^{-x}&\lambda =0\end{cases}}$ is the inverse of the negative Box–Cox transformation, and $\alpha <1$ and $\beta <1$ are shape parameters.[4] • Smooth interpolation[5] normalized to (−1, 1), with $n$ the slope at zero: ${\begin{aligned}f(x)&={\begin{cases}{\frac {2}{1+e^{-2n{\frac {x}{1-x^{2}}}}}}-1,&|x|<1\\\operatorname {sgn}(x),&|x|\geq 1\end{cases}}\\&={\begin{cases}\tanh \left(n{\frac {x}{1-x^{2}}}\right),&|x|<1\\\operatorname {sgn}(x),&|x|\geq 1\end{cases}}\end{aligned}}$ using the hyperbolic tangent mentioned above. Applications Many natural processes, such as those of complex system learning curves, exhibit a progression from small beginnings that accelerates and approaches a climax over time. When a specific mathematical model is lacking, a sigmoid function is often used.[6] The van Genuchten–Gupta model is based on an inverted S-curve and applied to the response of crop yield to soil salinity. Examples of the application of the logistic S-curve to the response of crop yield (wheat) to both the soil salinity and depth to water table in the soil are shown in modeling crop response in agriculture. In artificial neural networks, sometimes non-smooth functions are used instead for efficiency; these are known as hard sigmoids. In audio signal processing, sigmoid functions are used as waveshaper transfer functions to emulate the sound of analog circuitry clipping.[7] In biochemistry and pharmacology, the Hill and Hill–Langmuir equations are sigmoid functions. In computer graphics and real-time rendering, some of the sigmoid functions are used to blend colors or geometry between two values, smoothly and without visible seams or discontinuities. Titration curves between strong acids and strong bases have a sigmoid shape due to the logarithmic nature of the pH scale. The logistic function can be calculated efficiently by utilizing type III Unums.[8] See also • Step function • Sign function • Heaviside step function • Logistic regression • Logit • Softplus function • Soboleva modified hyperbolic tangent • Softmax function • Swish function • Weibull distribution • Fermi–Dirac statistics References 1. Han, Jun; Morag, Claudio (1995). "The influence of the sigmoid function parameters on the speed of backpropagation learning". In Mira, José; Sandoval, Francisco (eds.).
From Natural to Artificial Neural Computation. Lecture Notes in Computer Science. Vol. 930. pp. 195–201. doi:10.1007/3-540-59497-3_175. ISBN 978-3-540-59497-0. 2. Ling, Yibei; He, Bin (December 1993). "Entropic analysis of biological growth models". IEEE Transactions on Biomedical Engineering. 40 (12): 1193–1200. doi:10.1109/10.250574. PMID 8125495. 3. Dunning, Andrew J.; Kensler, Jennifer; Coudeville, Laurent; Bailleux, Fabrice (2015-12-28). "Some extensions in continuous methods for immunological correlates of protection". BMC Medical Research Methodology. 15 (107): 107. doi:10.1186/s12874-015-0096-9. PMC 4692073. PMID 26707389. 4. "grex --- Growth-curve Explorer". GitHub. 2022-07-09. Archived from the original on 2022-08-25. Retrieved 2022-08-25. 5. EpsilonDelta (2022-08-16). "Smooth Transition Function in One Dimension | Smooth Transition Function Series Part 1". 13:29/14:04 – via www.youtube.com. 6. Gibbs, Mark N.; Mackay, D. (November 2000). "Variational Gaussian process classifiers". IEEE Transactions on Neural Networks. 11 (6): 1458–1464. doi:10.1109/72.883477. PMID 18249869. S2CID 14456885. 7. Smith, Julius O. (2010). Physical Audio Signal Processing (2010 ed.). W3K Publishing. ISBN 978-0-9745607-2-4. Archived from the original on 2022-07-14. Retrieved 2020-03-28. 8. Gustafson, John L.; Yonemoto, Isaac (2017-06-12). "Beating Floating Point at its Own Game: Posit Arithmetic" (PDF). Archived (PDF) from the original on 2022-07-14. Retrieved 2019-12-28. Further reading • Mitchell, Tom M. (1997). Machine Learning. WCB McGraw–Hill. ISBN 978-0-07-042807-2.. (NB. In particular see "Chapter 4: Artificial Neural Networks" (in particular pp. 96–97) where Mitchell uses the word "logistic function" and the "sigmoid function" synonymously – this function he also calls the "squashing function" – and the sigmoid (aka logistic) function is used to compress the outputs of the "neurons" in multi-layer neural nets.) • Humphrys, Mark. "Continuous output, the sigmoid function". Archived from the original on 2022-07-14. Retrieved 2022-07-14. (NB. Properties of the sigmoid, including how it can shift along axes and how its domain may be transformed.) External links • "Fitting of logistic S-curves (sigmoids) to data using SegRegA". Archived from the original on 2022-07-14.
Swish function
The swish function is a mathematical function defined as follows:
$\operatorname {swish} (x)=x\operatorname {sigmoid} (\beta x)={\frac {x}{1+e^{-\beta x}}}.$[1]
where β is either a constant or a trainable parameter, depending on the model. For β = 1, the function is equivalent to the Sigmoid Linear Unit[2] or SiLU, first proposed alongside the GELU in 2016. The SiLU was rediscovered in 2017 as the Sigmoid-weighted Linear Unit (SiL) used in reinforcement learning,[3][1] and rediscovered again, over a year after its initial proposal, as the swish. The swish was originally proposed without the learnable parameter β, so that β implicitly equalled 1; the swish paper was later updated to include the learnable parameter β, though in practice researchers usually let β = 1 and do not use the learnable parameter.
For β = 0, the function turns into the scaled linear function f(x) = x/2.[1] As β → ∞, the sigmoid component approaches a 0-1 step function pointwise, so swish approaches the ReLU function pointwise. Thus, it can be viewed as a smoothing function which nonlinearly interpolates between a linear function and the ReLU function.[1] The swish function is non-monotonic, and may have influenced the proposal of other activation functions with this property, such as Mish.[4]
For positive arguments, swish is a particular case of the sigmoid shrinkage functions defined in [5] (see the doubly parameterized sigmoid shrinkage form given by Equation (3) of this reference).
Applications
In 2017, after performing analysis on ImageNet data, researchers from Google indicated that using this function as an activation function in artificial neural networks improves performance compared to ReLU and sigmoid functions.[1] It is believed that one reason for the improvement is that the swish function helps alleviate the vanishing gradient problem during backpropagation.[6] (A minimal sketch of the function follows the references below.)
References
1. Ramachandran, Prajit; Zoph, Barret; Le, Quoc V. (2017-10-27). "Searching for Activation Functions". arXiv:1710.05941v2 [cs.NE].
2. Hendrycks, Dan; Gimpel, Kevin (2016). "Gaussian Error Linear Units (GELUs)". arXiv:1606.08415 [cs.LG].
3. Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji (2017-11-02). "Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning". arXiv:1702.03118v3 [cs.LG].
4. Misra, Diganta (2019). "Mish: A Self Regularized Non-Monotonic Neural Activation Function". arXiv:1908.08681 [cs.LG].
5. Atto, Abdourrahmane M.; Pastor, Dominique; Mercier, Gregoire (March 2008). "Smooth sigmoid wavelet shrinkage for non-parametric estimation". 2008 IEEE International Conference on Acoustics, Speech and Signal Processing (PDF). pp. 3265–3268. doi:10.1109/ICASSP.2008.4518347. ISBN 978-1-4244-1483-3. S2CID 9959057.
6. Serengil, Sefik Ilkin (2018-08-21). "Swish as Neural Networks Activation Function". Machine Learning, Math. Archived from the original on 2020-06-18. Retrieved 2020-06-18.
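The limiting behaviour described above is easy to check numerically. This is a minimal sketch (Python with NumPy assumed; not code from the cited papers), written via tanh so that large β does not overflow the exponential:

```python
import numpy as np

def swish(x, beta=1.0):
    # swish(x) = x * sigmoid(beta * x), using sigmoid(z) = (1 + tanh(z/2)) / 2,
    # which is numerically safe for very large beta.
    return x * 0.5 * (1.0 + np.tanh(0.5 * beta * x))

x = np.linspace(-4.0, 4.0, 9)
print(np.round(swish(x, beta=1.0), 3))                     # the SiLU (beta = 1)
print(np.allclose(swish(x, beta=0.0), x / 2))              # beta = 0: scaled linear
print(np.allclose(swish(x, beta=1e6), np.maximum(x, 0)))   # beta -> inf: ReLU
```

The two `allclose` checks confirm the interpolation claim: the family slides from the linear function x/2 at β = 0 to ReLU as β grows.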
Sigmund Selberg
Sigmund Selberg (11 August 1910 – 20 April 1994) was a Norwegian mathematician. He was born in Langesund as the son of Ole Michael Ludvigsen Selberg and Anna Kristina Brigtsdatter Skeie. He was the twin brother of Arne Selberg and a brother of Henrik Selberg and Atle Selberg. He was professor of mathematics at the Norwegian Institute of Technology in Trondheim from 1947 to 1977. His works mainly focused on the distribution of prime numbers.[1][2]
References
1. Baas, Nils Andreas. "Sigmund Selberg". In Helle, Knut (ed.). Norsk biografisk leksikon (in Norwegian). Oslo: Kunnskapsforlaget. Retrieved 6 November 2012.
2. Godal, Anne Marit (ed.). "Sigmund Selberg". Store norske leksikon (in Norwegian). Oslo: Norsk nettleksikon. Retrieved 6 November 2012.
Sigmundur Gudmundsson
Sigmundur Gudmundsson (born 1960) is an Icelandic-Swedish mathematician working at Lund University[1] in the fields of differential geometry and global analysis. He is mainly interested in the geometric aspects of harmonic maps and their derivatives, such as harmonic morphisms and p-harmonic functions. His work is partially devoted to the existence theory of complex-valued harmonic morphisms and p-harmonic functions from Riemannian homogeneous spaces of various types, such as symmetric spaces and semisimple, solvable and nilpotent Lie groups.[2][3]
Gudmundsson earned his Ph.D. from the University of Leeds in 1992, under the supervision of John C. Wood.[4]
Gudmundsson is the founder of the website Nordic-Math-Job advertising vacant academic positions in the Nordic university departments of Mathematics and Statistics. This started off in 1997 as a one-man show, but is now supported by the mathematical societies in the Nordic countries and the National Committee for Mathematics of The Royal Swedish Academy of Sciences.[5]
Publications
• Introduction to Gaussian Geometry, Lund University (2021).
• Introduction to Riemannian Geometry, Lund University (2021).
• Research Papers
References
1. Faculty profile, Lund University, retrieved 2015-02-02.
2. Harmonic Morphisms - Some Existence Theory
3. The Method of Eigenfamilies - Explicit p-Harmonic Functions and Harmonic Morphisms
4. Sigmundur Gudmundsson at the Mathematics Genealogy Project
5. Interview in the Newsletter of the Swedish Mathematical Society - 1st of January 2000
External links
• Home Page at Lund University
• Profile at Zentralblatt MATH
• Profile at Google Scholar
• Nordic-Math-Job - Established on the 14th of February 1997
Sign-value notation
A sign-value notation represents numbers using a sequence of numerals which each represent a distinct quantity, regardless of their position in the sequence. Sign-value notations are typically additive, subtractive, or multiplicative depending on their conventions for grouping signs together to collectively represent numbers.[1]
Although the absolute value of each sign is independent of its position, the value of the sequence as a whole may depend on the order of the signs, as with numeral systems which combine additive and subtractive notation, such as Roman numerals. There is no need for zero in sign-value notation.
Additive notation
Additive notation represents numbers by a series of numerals that added together equal the value of the number represented, much as tally marks are added together to represent a larger number. To represent multiples of the sign value, the same sign is simply repeated. In Roman numerals, for example, X means ten and L means fifty, so LXXX means eighty (50 + 10 + 10 + 10).
Although signs may be written in a conventional order, the value of each sign does not depend on its place in the sequence, and changing the order does not affect the total value of the sequence in an additive system. Frequently used large numbers are often expressed using unique symbols to avoid excessive repetition. Aztec numerals, for example, use a tally of dots for numbers less than twenty alongside unique symbols for powers of twenty, including 400 and 8,000.[1]
Subtractive notation
See also: Roman numerals § Subtractive notation
Subtractive notation represents numbers by a series of numerals in which signs representing smaller values are typically subtracted from those representing larger values to equal the value of the number represented. In Roman numerals, for example, I means one and X means ten, so IX means nine (10 − 1).
The consistent use of the subtractive system with Roman numerals was not standardised until after the widespread adoption of the printing press in Europe.[1] (A short sketch of additive and subtractive evaluation follows the references below.)
History
Further information: History of ancient numeral systems
Sign-value notation was the ancient way of writing numbers, and it only gradually evolved into place-value notation, also known as positional notation. Sign-value notations have been used across the world by a variety of cultures throughout history.
Mesopotamia
When ancient people wanted to write "two sheep" in clay, they could inscribe a picture of two sheep; this became impractical, however, when they wanted to write "twenty sheep". In Mesopotamia they used small clay tokens to represent a number of a specific commodity, and strung the tokens like beads on a string, which were used for accounting. There was a token for one sheep and a token for ten sheep, and a different token for ten goats, etc. To ensure that nobody could alter the number and type of tokens, they invented the bulla: a clay envelope shaped like a hollow ball, into which the tokens on a string were placed and then baked. If anybody contested the number, they could break open the clay envelope and do a recount. To avoid unnecessary damage to the record, they pressed archaic number signs on the outside of the envelope before it was baked, each sign similar in shape to the token it represented. Since there was seldom any need to break open the envelope, the signs on the outside became the first written language for writing numbers in clay, using sign-value notation.[2]
Initially, different systems of counting were used in relation to specific kinds of measurement.[3] Much like counting tokens, early Mesopotamian proto-cuneiform numerals often utilised different signs to count or measure different things, and identical signs could be used to represent different quantities depending on what was being counted or measured.[4] Eventually, the sexagesimal system was widely adopted by cuneiform-using cultures.[3] The sexagesimal sign-value system used by the Sumerians and the Akkadians would later evolve into the place-value system of Babylonian cuneiform numerals.
See also
• Place-value notation
• Location arithmetic, a base 2 sign-value notation invented by J. Napier in 1617
References
1. Daniels & Bright (1996), p. 796.
2. Daniels & Bright (1996), pp. 796–797.
3. Daniels & Bright (1996), p. 798.
4. Croft (2017), p. 111.
Works cited
• Croft, William (2017). "Evolutionary Complexity of Social Cognition, Semasiographic Systems, and Language". In Mufwene, Salikoko S.; Coupé, Christophe; Pellegrino, François (eds.). Complexity in Language: developmental and evolutionary perspectives. Cambridge approaches to language contact. Cambridge, U.K.: Cambridge University Press. ISBN 978-1-107-05437-0.
• Daniels, Peter T.; Bright, William (1996). The World's Writing Systems. New York, U.S.: Oxford University Press. ISBN 978-0-19-507993-7.
Further reading
• Schmandt-Besserat, Denise (1992). How Writing Came About. University of Texas Press. ISBN 0-292-77704-3. (Paperback).
External links
• Online Converter for Decimal/Roman Numerals (JavaScript, GPL)
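The sketch promised above: additive evaluation with the standard subtractive-pair rule for Roman numerals (Python; the value table is standard, the function itself is illustrative):

```python
# Values of the individual Roman numeral signs (sign-value: position-independent).
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    """Evaluate a Roman numeral: signs add, except that a smaller sign
    placed before a larger one is subtracted (subtractive notation)."""
    total = 0
    for i, ch in enumerate(s):
        v = VALUES[ch]
        # Subtractive pair: e.g. the I in IX counts as -1.
        if i + 1 < len(s) and VALUES[s[i + 1]] > v:
            total -= v
        else:
            total += v
    return total

print(roman_to_int("LXXX"))  # 80, purely additive: 50 + 10 + 10 + 10
print(roman_to_int("IX"))    # 9, subtractive: 10 - 1
```

Note how the purely additive case never consults position, while the subtractive convention makes the value order-dependent, exactly as described above.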
Signal-to-noise statistic
In mathematics, the signal-to-noise statistic distance between two vectors a and b with mean values $\mu _{a}$ and $\mu _{b}$ and standard deviations $\sigma _{a}$ and $\sigma _{b}$ respectively is:
$D_{sn}={(\mu _{a}-\mu _{b}) \over (\sigma _{a}+\sigma _{b})}$
In the case of Gaussian-distributed data and unbiased class distributions, this statistic can be related to classification accuracy given an ideal linear discrimination, and a decision boundary can be derived.[1]
This distance is frequently used to identify vectors that differ significantly. One usage is in bioinformatics, to locate genes that are differentially expressed in microarray experiments.[2][3][4] (A short computational sketch follows the notes below.)
See also
• Distance
• Uniform norm
• Manhattan distance
• Signal-to-noise ratio
• Signal to noise ratio (imaging)
Notes
1. Auffarth, B.; Lopez, M.; Cerquides, J. (2010). Comparison of redundancy and relevance measures for feature selection in tissue classification of CT images. Advances in Data Mining. Applications and Theoretical Aspects. pp. 248–262. Springer.
2. Golub, T.R. et al. (1999). Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression Monitoring. Science 286, 531–537.
3. Slonim, D.K. et al. (2000). Class Prediction and Discovery Using Gene Expression Data. Procs. of the Fourth Annual International Conference on Computational Molecular Biology, Tokyo, Japan, April 8–11, pp. 263–272.
4. Pomeroy, S.L. et al. (2002). Gene Expression-Based Classification and Outcome Prediction of Central Nervous System Embryonal Tumors. Nature 415, 436–442.
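The statistic is a one-liner in practice. A minimal sketch (Python with NumPy assumed; the toy data are invented for illustration):

```python
import numpy as np

def snr_distance(a, b):
    # D_sn = (mean_a - mean_b) / (std_a + std_b).
    # Population standard deviation (ddof=0) is used here; a sample
    # estimate (ddof=1) is an equally common choice.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (a.mean() - b.mean()) / (a.std() + b.std())

# Toy "expression levels" of one gene under two conditions; a large
# |D_sn| flags the gene as differentially expressed.
print(snr_distance([5.1, 4.8, 5.3], [2.0, 2.4, 1.9]))
```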
Decorrelation
Decorrelation is a general term for any process that is used to reduce autocorrelation within a signal, or cross-correlation within a set of signals, while preserving other aspects of the signal.[1] A frequently used method of decorrelation is the use of a matched linear filter to reduce the autocorrelation of a signal as far as possible. Since the minimum possible autocorrelation for a given signal energy is achieved by equalising the power spectrum of the signal to be similar to that of a white noise signal, this is often referred to as signal whitening.
Process
Although most decorrelation algorithms are linear, non-linear decorrelation algorithms also exist.
Many data compression algorithms incorporate a decorrelation stage.[2] For example, many transform coders first apply a fixed linear transformation that would, on average, have the effect of decorrelating a typical signal of the class to be coded, prior to any later processing. This is typically a Karhunen–Loève transform, or a simplified approximation such as the discrete cosine transform.
By comparison, sub-band coders do not generally have an explicit decorrelation step, but instead exploit the already-existing reduced correlation within each of the sub-bands of the signal, due to the relative flatness of each sub-band of the power spectrum in many classes of signals.
Linear predictive coders can be modelled as an attempt to decorrelate signals by subtracting the best possible linear prediction from the input signal, leaving a whitened residual signal.
Decorrelation techniques can also be used for many other purposes, such as reducing crosstalk in a multi-channel signal, or in the design of echo cancellers.
In image processing, decorrelation techniques can be used to enhance or stretch the colour differences found in each pixel of an image. This is generally termed 'decorrelation stretching'.
The concept of decorrelation can be applied in many other fields. In neuroscience, decorrelation is used in the analysis of the neural networks in the human visual system. In cryptography, it is used in cipher design (see Decorrelation theory) and in the design of hardware random number generators. (A small whitening sketch follows at the end of this article.)
See also
• Equalisation
• Randomness extractor
• Eigenvalue decomposition
• Whitening transformation
References
1. "Decorrelation - an overview | ScienceDirect Topics". www.sciencedirect.com. Retrieved 2020-09-25.
2. "Data Compression - an overview | ScienceDirect Topics". www.sciencedirect.com. Retrieved 2020-09-25.
External links
• Non-linear decorrelation algorithms
• Associative Decorrelation Dynamics in Visual Cortex
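The whitening sketch promised above. PCA whitening via an eigendecomposition of the covariance matrix is one standard choice among several (Python with NumPy assumed; the mixing matrix is invented for illustration):

```python
import numpy as np

def whiten(X, eps=1e-10):
    # PCA-whiten a samples-by-features matrix X: after the transform the
    # empirical covariance is (near) identity, i.e. the features are
    # decorrelated and have unit variance.
    Xc = X - X.mean(axis=0)                 # remove the mean
    cov = np.cov(Xc, rowvar=False)          # empirical covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric eigendecomposition
    W = eigvecs / np.sqrt(eigvals + eps)    # rescale each principal axis
    return Xc @ W

rng = np.random.default_rng(0)
# Correlated data: white noise pushed through an arbitrary mixing matrix.
X = rng.normal(size=(1000, 3)) @ np.array([[2, 1, 0], [0, 1, 0], [0, 0.5, 3]])
Z = whiten(X)
print(np.round(np.cov(Z, rowvar=False), 2))  # approximately the identity
```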
Signal processing
Signal processing is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry processing, and scientific measurements.[1] Signal processing techniques are used to optimize transmissions, improve digital storage efficiency, correct distorted signals, improve subjective video quality, and detect or pinpoint components of interest in a measured signal.[2] (A small spectral-estimation sketch appears at the end of this article.)
[Figure: a signal that looks like noise on the left; the signal processing technique known as spectral density estimation shows (right) that it contains five well-defined frequency components.]
History
According to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century. They further state that the digital refinement of these techniques can be found in the digital control systems of the 1940s and 1950s.[3]
In 1948, Claude Shannon wrote the influential paper "A Mathematical Theory of Communication", which was published in the Bell System Technical Journal.[4] The paper laid the groundwork for the later development of information communication systems and the processing of signals for transmission.[5]
Signal processing matured and flourished in the 1960s and 1970s, and digital signal processing became widely used with specialized digital signal processor chips in the 1980s.[5]
Definition
Signal
A signal is a function $x(t)$, where this function is either[6]
• deterministic (then one speaks of a deterministic signal) or
• a path $(x_{t})_{t\in T}$, a realization of a stochastic process $(X_{t})_{t\in T}$
Categories
Analog
Analog signal processing is for signals that have not been digitized, as in most 20th-century radio, telephone, and television systems. This involves linear electronic circuits as well as nonlinear ones. The former are, for instance, passive filters, active filters, additive mixers, integrators, and delay lines. Nonlinear circuits include compandors, multipliers (frequency mixers, voltage-controlled amplifiers), voltage-controlled filters, voltage-controlled oscillators, and phase-locked loops.
Continuous time
Continuous-time signal processing is for signals that vary over a continuous domain (without considering some individual interrupted points). The methods of signal processing include the time domain, frequency domain, and complex frequency domain. This technology mainly concerns the modeling of linear time-invariant continuous systems, the integral of the system's zero-state response, setting up the system function, and the continuous-time filtering of deterministic signals.
Discrete time
Discrete-time signal processing is for sampled signals, defined only at discrete points in time, and as such quantized in time, but not in magnitude.
Analog discrete-time signal processing is a technology based on electronic devices such as sample and hold circuits, analog time-division multiplexers, analog delay lines and analog feedback shift registers. This technology was a predecessor of digital signal processing (see below), and is still used in advanced processing of gigahertz signals.
The concept of discrete-time signal processing also refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without taking quantization error into consideration.
Digital
Digital signal processing is the processing of digitized discrete-time sampled signals.
Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays or specialized digital signal processors (DSP chips). Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued, multiplication and addition. Other typical operations supported by the hardware are circular buffers and lookup tables. Examples of algorithms are the fast Fourier transform (FFT), finite impulse response (FIR) filter, infinite impulse response (IIR) filter, and adaptive filters such as the Wiener and Kalman filters.
Nonlinear
Nonlinear signal processing involves the analysis and processing of signals produced from nonlinear systems and can be in the time, frequency, or spatiotemporal domains.[7][8] Nonlinear systems can produce highly complex behaviors including bifurcations, chaos, harmonics, and subharmonics which cannot be produced or analyzed using linear methods. Polynomial signal processing is a type of non-linear signal processing, where polynomial systems may be interpreted as conceptually straightforward extensions of linear systems to the non-linear case.[9]
Statistical
Statistical signal processing is an approach which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks.[10] Statistical techniques are widely used in signal processing applications. For example, one can model the probability distribution of noise incurred when photographing an image, and construct techniques based on this model to reduce the noise in the resulting image.
Application fields
• Audio signal processing – for electrical signals representing sound, such as speech or music[11]
• Image processing – in digital cameras, computers and various imaging systems
• Video processing – for interpreting moving pictures
• Wireless communication – waveform generation, demodulation, filtering, equalization
• Control systems
• Array processing – for processing signals from arrays of sensors
• Process control – a variety of signals are used, including the industry standard 4-20 mA current loop
• Seismology
• Financial signal processing – analyzing financial data using signal processing techniques, especially for prediction purposes
• Feature extraction, such as image understanding and speech recognition
• Quality improvement, such as noise reduction, image enhancement, and echo cancellation
• Source coding, including audio compression, image compression, and video compression
• Genomic signal processing[12]
• In geophysics, signal processing is used to amplify the signal vs the noise within time-series measurements of geophysical data. Processing is conducted within either the time domain or frequency domain, or both.[13][14]
In communication systems, signal processing may occur at:
• OSI layer 1 in the seven-layer OSI model, the physical layer (modulation, equalization, multiplexing, etc.);
• OSI layer 2, the data link layer (forward error correction);
• OSI layer 6, the presentation layer (source coding, including analog-to-digital conversion and data compression).
Typical devices
• Filters – for example analog (passive or active) or digital (FIR, IIR, frequency domain or stochastic filters, etc.)
• Samplers and analog-to-digital converters for signal acquisition and reconstruction, which involves measuring a physical signal, storing or transferring it as a digital signal, and possibly later rebuilding the original signal or an approximation thereof.
• Signal compressors
• Digital signal processors (DSPs)
Mathematical methods applied
• Differential equations[15]
• Recurrence relations[16]
• Transform theory
• Time-frequency analysis – for processing non-stationary signals[17]
• Spectral estimation – for determining the spectral content (i.e., the distribution of power over frequency) of a time series[18]
• Statistical signal processing – analyzing and extracting information from signals and noise based on their stochastic properties
• Linear time-invariant system theory, and transform theory
• Polynomial signal processing – analysis of systems which relate input and output using polynomials
• System identification[7] and classification
• Calculus
• Complex analysis[19]
• Vector spaces and linear algebra[20]
• Functional analysis[21]
• Probability and stochastic processes[10]
• Detection theory
• Estimation theory
• Optimization[22]
• Numerical methods
• Time series
• Data mining – for statistical analysis of relations between large quantities of variables (in this context representing many physical signals), to extract previously unknown interesting patterns
See also
• Algebraic signal processing
• Audio filter
• Bounded variation
• Digital image processing
• Dynamic range compression, companding, limiting, and noise gating
• Fourier transform
• Information theory
• Least-squares spectral analysis
• Non-local means
• Reverberation
• Sensitivity (electronics)
References
1. Sengupta, Nandini; Sahidullah, Md; Saha, Goutam (August 2016). "Lung sound classification using cepstral-based statistical features". Computers in Biology and Medicine. 75 (1): 118–129. doi:10.1016/j.compbiomed.2016.05.013. PMID 27286184.
2. Alan V. Oppenheim and Ronald W. Schafer (1989). Discrete-Time Signal Processing. Prentice Hall. p. 1. ISBN 0-13-216771-9.
3. Oppenheim, Alan V.; Schafer, Ronald W. (1975). Digital Signal Processing. Prentice Hall. p. 5. ISBN 0-13-214635-5.
4. "A Mathematical Theory of Communication – CHM Revolution". Computer History. Retrieved 2019-05-13.
5. Fifty Years of Signal Processing: The IEEE Signal Processing Society and its Technologies, 1948–1998 (PDF). The IEEE Signal Processing Society. 1998.
6. Berber, S. (2021). Discrete Communication Systems. United Kingdom: Oxford University Press. p. 9. https://www.google.de/books/edition/Discrete_Communication_Systems/CCs0EAAAQBAJ?hl=de&gbpv=1&pg=PA9
7. Billings, S. A. (2013). Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains. Wiley. ISBN 978-1119943594.
8. Slawinska, J.; Ourmazd, A.; Giannakis, D. (2018). "A New Approach to Signal Processing of Spatiotemporal Data". 2018 IEEE Statistical Signal Processing Workshop (SSP). IEEE Xplore. pp. 338–342. doi:10.1109/SSP.2018.8450704. ISBN 978-1-5386-1571-3. S2CID 52153144.
9. V. John Mathews; Giovanni L. Sicuranza (May 2000). Polynomial Signal Processing. Wiley. ISBN 978-0-471-03414-8.
10. Scharf, Louis L. (1991). Statistical signal processing: detection, estimation, and time series analysis. Boston: Addison–Wesley. ISBN 0-201-19038-9. OCLC 61160161.
11. Sarangi, Susanta; Sahidullah, Md; Saha, Goutam (September 2020). "Optimization of data-driven filterbank for automatic speaker verification". Digital Signal Processing. 104: 102795. arXiv:2007.10729. doi:10.1016/j.dsp.2020.102795. S2CID 220665533.
12. Anastassiou, D. (2001). "Genomic signal processing". IEEE Signal Processing Magazine. IEEE. 18 (4): 8–20. Bibcode:2001ISPM...18....8A.
doi:10.1109/79.939833.
13. Telford, William Murray; Geldart, L. P.; Sheriff, Robert E. (1990). Applied geophysics. Cambridge University Press. ISBN 978-0-521-33938-4.
14. Reynolds, John M. (2011). An Introduction to Applied and Environmental Geophysics. Wiley-Blackwell. ISBN 978-0-471-48535-3.
15. Patrick Gaydecki (2004). Foundations of Digital Signal Processing: Theory, Algorithms and Hardware Design. IET. pp. 40–. ISBN 978-0-85296-431-6.
16. Shlomo Engelberg (8 January 2008). Digital Signal Processing: An Experimental Approach. Springer Science & Business Media. ISBN 978-1-84800-119-0.
17. Boashash, Boualem, ed. (2003). Time frequency signal analysis and processing: a comprehensive reference (1 ed.). Amsterdam: Elsevier. ISBN 0-08-044335-4.
18. Stoica, Petre; Moses, Randolph (2005). Spectral Analysis of Signals (PDF). NJ: Prentice Hall.
19. Peter J. Schreier; Louis L. Scharf (4 February 2010). Statistical Signal Processing of Complex-Valued Data: The Theory of Improper and Noncircular Signals. Cambridge University Press. ISBN 978-1-139-48762-7.
20. Max A. Little (13 August 2019). Machine Learning for Signal Processing: Data Science, Algorithms, and Computational Statistics. OUP Oxford. ISBN 978-0-19-102431-3.
21. Steven B. Damelin; Willard Miller, Jr (2012). The Mathematics of Signal Processing. Cambridge University Press. ISBN 978-1-107-01322-3.
22. Daniel P. Palomar; Yonina C. Eldar (2010). Convex Optimization in Signal Processing and Communications. Cambridge University Press. ISBN 978-0-521-76222-9.
Further reading
• P Stoica, R Moses (2005). Spectral Analysis of Signals (PDF). NJ: Prentice Hall.
• Kay, Steven M. (1993). Fundamentals of Statistical Signal Processing. Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-345711-7. OCLC 26504848.
• Papoulis, Athanasios (1991). Probability, Random Variables, and Stochastic Processes (third ed.). McGraw-Hill. ISBN 0-07-100870-5.
• Kainam Thomas Wong: Statistical Signal Processing lecture notes at the University of Waterloo, Canada.
• Ali H. Sayed, Adaptive Filters, Wiley, NJ, 2008, ISBN 978-0-470-25388-5.
• Thomas Kailath, Ali H. Sayed, and Babak Hassibi, Linear Estimation, Prentice-Hall, NJ, 2000, ISBN 978-0-13-022464-4.
External links
• Signal Processing for Communications – free online textbook by Paolo Prandoni and Martin Vetterli (2008)
• Scientists and Engineers Guide to Digital Signal Processing – free online textbook by Stephen Smith
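The sketch promised in the introduction: a basic periodogram, one simple form of the spectral density estimation illustrated by the figure caption above (Python with NumPy assumed; signal parameters invented for illustration):

```python
import numpy as np

fs = 1000                         # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)     # one second of samples
freqs = [50, 120, 180, 300, 420]  # five "hidden" components, Hz
x = sum(np.sin(2 * np.pi * f * t) for f in freqs)
x += np.random.default_rng(1).normal(scale=2.0, size=t.size)  # heavy noise

# Periodogram: squared magnitude of the FFT, a basic spectral estimate.
spectrum = np.abs(np.fft.rfft(x)) ** 2
freq_axis = np.fft.rfftfreq(t.size, 1 / fs)

# The five strongest bins should sit at the true frequencies.
peaks = freq_axis[np.argsort(spectrum)[-5:]]
print(sorted(peaks))
```

Even though the time-domain trace looks like noise, the five tones stand far above the noise floor in the frequency domain, which is the point the figure makes.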
Signal magnitude area
In mathematics, the signal magnitude area (abbreviated SMA or sma) is a statistical measure of the magnitude of a varying quantity.
Definition
The SMA value of a set of values (or a continuous-time waveform) is the normalized integral of the original values.[1][2] In the case of a set of n values $\{x_{1},x_{2},\dots ,x_{n}\}$ spanning a time length T, the SMA is $x_{\text{sma}}={1 \over T}\sum _{i=1}^{n}x_{i}$
In the continuous domain we have, for example, with a 3-axis signal and an offset correction a for each axis, the following equation:[3]
$f_{\text{sma}}={1 \over T}\int _{0}^{T}|x(t)-a_{x}|+|y(t)-a_{y}|+|z(t)-a_{z}|\,dt$
(A discrete sketch of this quantity follows the references below.)
See also
• Root mean square
References
1. "Matlab compute Normalized Signal Magnitude area".
2. Chung, W. Y.; Purwar, A.; Sharma, A. (2008). "Frequency domain approach for activity classification using accelerometer, section 3B. Detection Algorithm". pp. 1120–3. arXiv:1107.4417.
3. "Classifying prosthetic use via accelerometry in persons with transtibial amputations". Journal of Rehabilitation Research & Development. U.S. Department of Veteran Affairs. 2013. Retrieved 2014-10-14.
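The discrete sketch promised above (Python with NumPy assumed; sampling rate and test signal are invented for illustration):

```python
import numpy as np

def sma(x, y, z, fs, ax=0.0, ay=0.0, az=0.0):
    # Discrete approximation of (1/T) * integral of
    # |x - ax| + |y - ay| + |z - az| dt for a 3-axis signal
    # sampled at fs Hz (dt = 1/fs, T = n/fs).
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    integrand = np.abs(x - ax) + np.abs(y - ay) + np.abs(z - az)
    T = len(x) / fs
    return integrand.sum() / fs / T

# One second of a constant unit reading on the x axis only: SMA = 1.
fs = 100
print(sma(np.ones(fs), np.zeros(fs), np.zeros(fs), fs))
```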
Clipping (signal processing)
Clipping is a form of distortion that limits a signal once it exceeds a threshold. Clipping may occur when a signal is recorded by a sensor that has constraints on the range of data it can measure, when a signal is digitized, or any other time an analog or digital signal is transformed, particularly in the presence of gain or overshoot and undershoot.
Clipping may be described as hard, in cases where the signal is strictly limited at the threshold, producing a flat cutoff; or it may be described as soft, in cases where the clipped signal continues to follow the original at a reduced gain. Hard clipping results in many high-frequency harmonics; soft clipping results in fewer higher-order harmonics and intermodulation distortion components. (Both kinds are sketched in code at the end of this article.)
Audio
In the frequency domain, clipping produces strong harmonics in the high-frequency range (as the clipped waveform comes closer to a square wave). The extra high-frequency weighting of the signal could make tweeter damage more likely than if the signal was not clipped. Many electric guitar players intentionally overdrive their amplifiers (or insert a "fuzz box") to cause clipping in order to get a desired sound (see guitar distortion). In general, the distortion associated with clipping is unwanted, and is visible on an oscilloscope even if it is inaudible.[1]
Images
In the image domain, clipping is seen as desaturated (washed-out) bright areas that turn to pure white if all color components clip. In digital colour photography, it is also possible for individual colour channels to clip, which results in inaccurate colour reproduction.
Causes
Analog circuitry
A circuit designer may intentionally use a clipper or clamper to keep a signal within a desired range. When an amplifier is pushed to create a signal with more power than it can support, it will amplify the signal only up to its maximum capacity, at which point the signal will be amplified no further.
• An integrated circuit or discrete solid state amplifier cannot give an output voltage larger than the voltage it is powered by (commonly a 24- or 30-volt spread for operational amplifiers used in line-level equipment).
• A vacuum tube can only move a limited number of electrons in an amount of time, dependent on its size, temperature, and metals.
• A transformer (most commonly used between stages in tube equipment) will clip when its ferromagnetic core becomes electromagnetically saturated.
Digital processing
In digital signal processing, clipping occurs when the signal is restricted by the range of a chosen representation. For example, in a system using 16-bit signed integers, 32767 is the largest positive value that can be represented, so if during processing the amplitude of the signal is doubled, sample values of 32000 should become 64000, but instead they are truncated to the maximum, 32767. Clipping is preferable to the alternative in digital systems, wrapping, which occurs if the digital hardware is allowed to "overflow", ignoring the most significant bits of the magnitude, and sometimes even the sign of the sample value, resulting in gross distortion of the signal.
The incidence of clipping may be greatly reduced by using floating point numbers instead of integers. However, floating point numbers are usually less efficient to use, sometimes result in a loss of precision, and they can still clip if a number is extremely large or small.
Avoiding clipping
Clipping can be detected by viewing the signal (on an oscilloscope, for example) and observing that the tops and bottoms of waves are no longer smooth. When working with images, some tools can highlight all pixels that are pure white, allowing the user to identify larger groups of white pixels and decide if too much clipping has occurred. To avoid clipping, the signal can be dynamically reduced using a limiter. If not done carefully, this can still cause undesirable distortion, but it prevents any data from being completely lost.
Repairing a clipped signal
When clipping occurs, part of the original signal is lost, so perfect restoration is impossible. Thus, it is much preferable to avoid clipping in the first place. However, when repair is the only option, the goal is to make up a plausible replacement for the clipped part of the signal.
See also
• Dynamic range
• Dynamic range compression
References
1. Zottola, Tino (1996). Vacuum Tube and Guitar and Bass Amplifier Servicing. Bold Strummer. p. 6. ISBN 0-933224-97-4.
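The sketch promised above: hard versus soft clipping of a sine wave that exceeds the allowed range (Python with NumPy assumed; tanh is one common soft-clipping curve among many):

```python
import numpy as np

def hard_clip(x, limit=1.0):
    # Hard clipping: flat cutoff at the threshold.
    return np.clip(x, -limit, limit)

def soft_clip(x):
    # Soft clipping via tanh: follows the original at reduced gain near
    # the threshold, producing fewer high-order harmonics.
    return np.tanh(x)

t = np.linspace(0, 1, 8, endpoint=False)
x = 2.0 * np.sin(2 * np.pi * t)   # a sine that exceeds the [-1, 1] range
print(np.round(hard_clip(x), 2))  # plateaus at exactly +/-1
print(np.round(soft_clip(x), 2))  # rounds off gently toward +/-1
```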
Signalizer functor
In mathematics, a signalizer functor gives the intersections of a potential subgroup of a finite group with the centralizers of nontrivial elements of an abelian group. The signalizer functor theorem gives conditions under which a signalizer functor comes from a subgroup. The idea is to try to construct a $p'$-subgroup of a finite group $G$, which has a good chance of being normal in $G$, by taking as generators certain $p'$-subgroups of the centralizers of nonidentity elements in one or several given noncyclic elementary abelian $p$-subgroups of $G.$ The technique has origins in the Feit–Thompson theorem, and was subsequently developed by many people including Gorenstein (1969), who defined signalizer functors, Glauberman (1976), who proved the Solvable Signalizer Functor Theorem for solvable groups, and McBride (1982a, 1982b), who proved it for all groups. This theorem is needed to prove the so-called "dichotomy" stating that a given nonabelian finite simple group either has local characteristic two, or is of component type. It thus plays a major role in the classification of finite simple groups.
Definition
Let A be a noncyclic elementary abelian p-subgroup of the finite group G. An A-signalizer functor on G, or simply a signalizer functor when A and G are clear, is a mapping θ from the set of nonidentity elements of A to the set of A-invariant p′-subgroups of G satisfying the following properties:
• For every nonidentity $a\in A$, the group $\theta (a)$ is contained in $C_{G}(a).$
• For every nonidentity $a,b\in A$, we have $\theta (a)\cap C_{G}(b)\subseteq \theta (b).$
The second condition above is called the balance condition. If the subgroups $\theta (a)$ are all solvable, then the signalizer functor $\theta $ itself is said to be solvable.
Solvable signalizer functor theorem
Given $\theta ,$ certain additional, relatively mild, assumptions allow one to prove that the subgroup $W=\langle \theta (a)\mid a\in A,a\neq 1\rangle $ of $G$ generated by the subgroups $\theta (a)$ is in fact a $p'$-subgroup. The Solvable Signalizer Functor Theorem proved by Glauberman and mentioned above says that this will be the case if $\theta $ is solvable and $A$ has at least three generators. The theorem also states that under these assumptions, $W$ itself will be solvable.
Several earlier versions of the theorem were proven: Gorenstein (1969) proved this under the stronger assumption that $A$ had rank at least 5. Goldschmidt (1972a, 1972b) proved this under the assumption that $A$ had rank at least 4 or was a 2-group of rank at least 3. Bender (1975) gave a simple proof for 2-groups using the ZJ theorem, and a proof in a similar spirit has been given for all primes by Flavell (2007). Glauberman (1976) gave the definitive result for solvable signalizer functors. Using the classification of finite simple groups, McBride (1982a, 1982b) showed that $W$ is a $p'$-group without the assumption that $\theta $ is solvable.
Completeness
The terminology of completeness is often used in discussions of signalizer functors. Let $\theta $ be a signalizer functor as above, and consider the set И of all $A$-invariant $p'$-subgroups $H$ of $G$ satisfying the following condition:
• $H\cap C_{G}(a)\subseteq \theta (a)$ for all nonidentity $a\in A.$
For example, the subgroups $\theta (a)$ belong to И by the balance condition. The signalizer functor $\theta $ is said to be complete if И has a unique maximal element when ordered by containment.
In this case, the unique maximal element can be shown to coincide with $W$ above, and $W$ is called the completion of $\theta $. If $\theta $ is complete, and $W$ turns out to be solvable, then $\theta $ is said to be solvably complete. Thus, the Solvable Signalizer Functor Theorem can be rephrased by saying that if $A$ has at least three generators, then every solvable $A$-signalizer functor on $G$ is solvably complete.
Examples of signalizer functors
The easiest way to obtain a signalizer functor is to start with an $A$-invariant $p'$-subgroup $M$ of $G,$ and define $\theta (a)=M\cap C_{G}(a)$ for all nonidentity $a\in A.$ In practice, however, one begins with $\theta $ and uses it to construct the $A$-invariant $p'$-group.
The simplest signalizer functor used in practice is this: $\theta (a)=O_{p'}(C_{G}(a)).$
A few words of caution are needed here. First, note that $\theta (a)$ as defined above is indeed an $A$-invariant $p'$-subgroup of $G$ because $A$ is abelian. However, some additional assumptions are needed to show that this $\theta $ satisfies the balance condition. One sufficient criterion is that for each nonidentity $a\in A,$ the group $C_{G}(a)$ is solvable (or $p$-solvable or even $p$-constrained). Verifying the balance condition for this $\theta $ under this assumption requires a famous lemma, known as Thompson's $P\times Q$-lemma. (Note, this lemma is also called Thompson's $A\times B$-lemma, but the $A$ in this use must not be confused with the $A$ appearing in the definition of a signalizer functor!)
Coprime action
To obtain a better understanding of signalizer functors, it is essential to know the following general fact about finite groups:
• Let $E$ be an abelian noncyclic group acting on the finite group $X.$ Assume that the orders of $E$ and $X$ are relatively prime. Then $X=\langle C_{X}(E_{0})\mid E_{0}\subseteq E,{\text{ and }}E/E_{0}{\text{ cyclic }}\rangle $
To prove this fact, one uses the Schur–Zassenhaus theorem to show that for each prime $q$ dividing the order of $X,$ the group $X$ has an $E$-invariant Sylow $q$-subgroup. This reduces to the case where $X$ is a $q$-group. Then an argument by induction on the order of $X$ reduces the statement further to the case where $X$ is elementary abelian with $E$ acting irreducibly. This forces the group $E/C_{E}(X)$ to be cyclic, and the result follows. See either of the books Aschbacher (2000) or Kurzweil & Stellmacher (2004) for details.
This fact is used in both the proof and applications of the Solvable Signalizer Functor Theorem. To begin, notice that it quickly implies the claim that if $\theta $ is complete, then its completion is the group $W$ defined above.
Normal completion
The completion of a signalizer functor has a "good chance" of being normal in $G,$ according to the top of the article. Here, the coprime action fact will be used to motivate this claim. Let $\theta $ be a complete $A$-signalizer functor on $G$. Let $B$ be a noncyclic subgroup of $A.$ Then the coprime action fact together with the balance condition imply that $W=\langle \theta (a)\mid a\in A,a\neq 1\rangle =\langle \theta (b)\mid b\in B,b\neq 1\rangle $.
To see this, observe that because $\theta (a)$ is $B$-invariant, we have $\theta (a)=\langle \theta (a)\cap C_{G}(b)\mid b\in B,b\neq 1\rangle \subseteq \langle \theta (b)\mid b\in B,b\neq 1\rangle .$ The equality above uses the coprime action fact, and the containment uses the balance condition.
Now, it is often the case that $\theta $ satisfies an "equivariance" condition, namely that for each $g\in G$ and nonidentity $a\in A,$ $\theta (a^{g})=\theta (a)^{g}.\,$ The superscript denotes conjugation by $g.$ For example, the mapping $a\mapsto O_{p'}(C_{G}(a))$ (which is often a signalizer functor!) satisfies this condition. If $\theta $ satisfies equivariance, then the normalizer of $B$ will normalize $W.$ It follows that if $G$ is generated by the normalizers of the noncyclic subgroups of $A,$ then the completion of $\theta $ (i.e. W) is normal in $G.$
References
• Aschbacher, Michael (2000), Finite Group Theory, Cambridge University Press, ISBN 978-0-521-78675-1
• Bender, Helmut (1975), "Goldschmidt's 2-signalizer functor theorem", Israel Journal of Mathematics, 22 (3): 208–213, doi:10.1007/BF02761590, ISSN 0021-2172, MR 0390056
• Flavell, Paul (2007), A new proof of the Solvable Signalizer Functor Theorem (PDF), archived from the original (PDF) on 2012-04-14
• Goldschmidt, David M. (1972a), "Solvable signalizer functors on finite groups", Journal of Algebra, 21: 137–148, doi:10.1016/0021-8693(72)90040-3, ISSN 0021-8693, MR 0297861
• Goldschmidt, David M. (1972b), "2-signalizer functors on finite groups", Journal of Algebra, 21: 321–340, doi:10.1016/0021-8693(72)90027-0, ISSN 0021-8693, MR 0323904
• Glauberman, George (1976), "On solvable signalizer functors in finite groups", Proceedings of the London Mathematical Society, Third Series, 33 (1): 1–27, doi:10.1112/plms/s3-33.1.1, ISSN 0024-6115, MR 0417284
• Gorenstein, D. (1969), "On the centralizers of involutions in finite groups", Journal of Algebra, 11: 243–277, doi:10.1016/0021-8693(69)90056-8, ISSN 0021-8693, MR 0240188
• Kurzweil, Hans; Stellmacher, Bernd (2004), The theory of finite groups, Universitext, Berlin, New York: Springer-Verlag, doi:10.1007/b97433, ISBN 978-0-387-40510-0, MR 2014408
• McBride, Patrick Paschal (1982a), "Near solvable signalizer functors on finite groups" (PDF), Journal of Algebra, 78 (1): 181–214, doi:10.1016/0021-8693(82)90107-7, hdl:2027.42/23875, ISSN 0021-8693, MR 0677717
• McBride, Patrick Paschal (1982b), "Nonsolvable signalizer functors on finite groups", Journal of Algebra, 78 (1): 215–238, doi:10.1016/0021-8693(82)90108-9, hdl:2027.42/23876, ISSN 0021-8693
Signature Record Type Definition
In near field communications the NFC Forum Signature Record Type Definition (RTD) is a security protocol used to protect the integrity and authenticity of NDEF (NFC Data Exchange Format) messages. The Signature RTD is an open interoperable specification modeled after code signing, where the trust of signed messages is tied to digital certificates.[1]
Signing NDEF records prevents malicious use of NFC tags (containing a protected NDEF record), for example by smartphone users tapping NFC tags containing URLs. Without some level of integrity protection, an adversary could launch a phishing attack. Signing the NDEF record protects the integrity of the contents and allows the user to identify the signer if they wish. Signing certificates are obtained from third party Certificate Authorities and are governed by the NFC Forum Signature RTD Certificate Policy.
How it works
The NDEF signing process: an author obtains a signing certificate from a valid certificate authority. The author's private key is used to sign the Data Record (text, URI, or whatever you like). The signature and the author's certificate comprise the Signature Record. The Data Record and Signature Record are concatenated to produce the Signed NDEF Message, which can be written to a standard NFC tag with sufficient memory (typically on the order of 300 to 500 bytes). The NDEF record remains in the clear (not encrypted), so any NFC tag reader will be able to read the signed data even if it cannot verify it. (A sketch of the underlying sign/verify pattern follows the references below.)
Signed NDEF Message layout:
Data Record | Signature Record
NDEF Record | Signature, Certificate Chain
The NDEF verification process: upon reading the Signed NDEF Message, the signature on the Data Record is first cryptographically verified using the author's public key (extracted from the author's certificate). Once verified, the author's certificate can be verified using the NFC Root Certificate. If both verifications are valid then one can trust the NDEF record and perform the desired operation.
Supported certificate formats
The Signature RTD 2.0 supports two certificate formats: the X.509 certificate format and the Machine to Machine (M2M) certificate format.[2] The M2M certificate format is a subset of X.509 designed for the limited memory typically found on NFC tags. The author's certificate can optionally be replaced with a URI reference to that certificate or certificate chain so that messages can still be cryptographically verified; the URI certificate reference is designed to save memory on NFC tags.
Supported cryptographic algorithms
The Signature RTD 2.0 uses industry standard digital signature algorithms. The following algorithms are supported:
Signature Type/Hash | Security Strength (IEEE P1363)
RSA_1024/SHA_256 | 80 bits
DSA_1024/SHA_256 | 80 bits
ECDSA_P192/SHA_256 | 80 bits
RSA_2048/SHA_256 | 112 bits
DSA_2048/SHA_256 | 112 bits
ECDSA_P224/SHA_256 | 112 bits
ECDSA_K233/SHA_256 | 112 bits
ECDSA_B233/SHA_256 | 112 bits
ECDSA_P256/SHA_256 | 128 bits
On the security of the Signature RTD
The Signature RTD 2.0's primary purpose is to protect the integrity and authenticity of NDEF records; NFC tag contents using the Signature RTD 2.0 are thus protected. The security of the system is tied to a certificate authority and the associated certificate chain. The NFC Forum Signature RTD Certificate Policy defines the policies under which certificate authorities can operate in the context of NFC. Root certificates are carried in verification devices and are not contained in the signature record.
This separation is important for the security of the system, just as web browser certificates are separated from web server certificates in TLS.
References
1. "Home - NFC Forum". NFC Forum.
2. "IETF - M2M Certificate format". IETF.
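The sketch promised above: the generic sign/verify pattern underlying the Signature RTD, not the NFC Forum wire format. It assumes the third-party Python `cryptography` package and uses ECDSA P-256 with SHA-256, one of the suites listed in the table above:

```python
# Sketch of the sign/verify pattern behind the Signature RTD (illustrative;
# the real record layout and certificate handling are defined by the spec).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

author_key = ec.generate_private_key(ec.SECP256R1())

data_record = b"https://example.com"  # the NDEF data record payload
signature = author_key.sign(data_record, ec.ECDSA(hashes.SHA256()))

# A verifier holding the author's (certified) public key checks integrity;
# verify() raises InvalidSignature if the payload was tampered with.
public_key = author_key.public_key()
public_key.verify(signature, data_record, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```

In the real protocol the public key arrives inside the author's certificate (X.509 or M2M), which is itself checked against the NFC Root Certificate held by the verifying device.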
Signature matrix
In mathematics, a signature matrix is a diagonal matrix whose diagonal elements are plus or minus 1, that is, any matrix of the form:[1]
$A={\begin{pmatrix}\pm 1&0&\cdots &0&0\\0&\pm 1&\cdots &0&0\\\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&\cdots &\pm 1&0\\0&0&\cdots &0&\pm 1\end{pmatrix}}$
Any such matrix is its own inverse, hence is an involutory matrix. It is consequently a square root of the identity matrix. Note, however, that not all square roots of the identity are signature matrices.
Since signature matrices are both symmetric and involutory, they are orthogonal. Consequently, any linear transformation corresponding to a signature matrix constitutes an isometry.
Geometrically, signature matrices represent a reflection in each of the axes corresponding to the negated rows or columns.
Properties
If A is an N × N signature matrix, then:
• $-N\leq \operatorname {tr} (A)\leq N$ (since each diagonal value is −1 or 1)
• the determinant of A is either 1 or −1 (since A is diagonal)
(These properties are checked numerically in the short sketch following the references below.)
See also
• Metric signature
References
1. Bapat, R. B. (2010), Graphs and matrices, Universitext, London: Springer, p. 40, doi:10.1007/978-1-84882-981-7, ISBN 978-1-84882-980-0, MR 2797201.
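The numerical check promised above (Python with NumPy assumed; the random matrix is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag(rng.choice([-1.0, 1.0], size=5))  # a random 5x5 signature matrix

print(np.allclose(A @ A, np.eye(5)))           # involutory: A is its own inverse
print(np.allclose(A.T @ A, np.eye(5)))         # orthogonal: A^T A = I
print(-5 <= np.trace(A) <= 5)                  # trace bound from the +/-1 diagonal
print(np.isclose(abs(np.linalg.det(A)), 1.0))  # determinant is +1 or -1
```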
Signature (topology)
In the field of topology, the signature is an integer invariant which is defined for an oriented manifold M of dimension divisible by four. This invariant of a manifold has been studied in detail, starting with Rokhlin's theorem for 4-manifolds and the Hirzebruch signature theorem.
Definition
Given a connected and oriented manifold M of dimension 4k, the cup product gives rise to a quadratic form Q on the 'middle' real cohomology group $H^{2k}(M,\mathbf {R} )$. The basic identity for the cup product $\alpha ^{p}\smile \beta ^{q}=(-1)^{pq}(\beta ^{q}\smile \alpha ^{p})$ shows that with p = q = 2k the product is symmetric. It takes values in $H^{4k}(M,\mathbf {R} )$. If we assume also that M is compact, Poincaré duality identifies this with $H^{0}(M,\mathbf {R} )$, which can be identified with $\mathbf {R} $. Therefore the cup product, under these hypotheses, does give rise to a symmetric bilinear form on $H^{2k}(M,\mathbf {R} )$, and therefore to a quadratic form Q. The form Q is non-degenerate due to Poincaré duality, as it pairs non-degenerately with itself.[1] More generally, the signature can be defined in this way for any general compact polyhedron with 4n-dimensional Poincaré duality.
The signature $\sigma (M)$ of M is by definition the signature of Q, that is, $\sigma (M)=n_{+}-n_{-}$ where any diagonal matrix defining Q has $n_{+}$ positive entries and $n_{-}$ negative entries.[2] If M is not connected, its signature is defined to be the sum of the signatures of its connected components. (A small numerical sketch of this sign count is given after the references below.)
Other dimensions
Further information: L-theory
If M has dimension not divisible by 4, its signature is usually defined to be 0. There are alternative generalizations in L-theory: the signature can be interpreted as the 4k-dimensional (simply connected) symmetric L-group $L^{4k},$ or as the 4k-dimensional quadratic L-group $L_{4k},$ and these invariants do not always vanish for other dimensions. The Kervaire invariant is a mod 2 invariant (i.e., an element of $\mathbf {Z} /2$) of framed manifolds of dimension 4k+2 (the quadratic L-group $L_{4k+2}$), while the de Rham invariant is a mod 2 invariant of manifolds of dimension 4k+1 (the symmetric L-group $L^{4k+1}$); the other-dimensional L-groups vanish.
Kervaire invariant
Main article: Kervaire invariant
When $d=4k+2=2(2k+1)$ is twice an odd integer (singly even), the same construction gives rise to an antisymmetric bilinear form. Such forms do not have a signature invariant; if they are non-degenerate, any two such forms are equivalent. However, if one takes a quadratic refinement of the form, which occurs if one has a framed manifold, then the resulting ε-quadratic forms need not be equivalent, being distinguished by the Arf invariant. The resulting invariant of a manifold is called the Kervaire invariant.
Properties
• Compact oriented manifolds M and N satisfy $\sigma (M\sqcup N)=\sigma (M)+\sigma (N)$ by definition, and satisfy $\sigma (M\times N)=\sigma (M)\sigma (N)$ by a Künneth formula.
• If M is an oriented boundary, then $\sigma (M)=0$.
• René Thom (1954) showed that the signature of a manifold is a cobordism invariant, and in particular is given by some linear combination of its Pontryagin numbers.[3] For example, in four dimensions, it is given by ${\frac {p_{1}}{3}}$. Friedrich Hirzebruch (1954) found an explicit expression for this linear combination as the L genus of the manifold.
Other dimensions Further information: L-theory If M has dimension not divisible by 4, its signature is usually defined to be 0. There are alternative generalizations in L-theory: the signature can be interpreted as the 4k-dimensional (simply connected) symmetric L-group $L^{4k},$ or as the 4k-dimensional quadratic L-group $L_{4k},$ and these invariants do not always vanish for other dimensions. The Kervaire invariant is a mod 2 invariant (i.e., an element of $\mathbf {Z} /2$) of framed manifolds of dimension 4k+2 (the quadratic L-group $L_{4k+2}$), while the de Rham invariant is a mod 2 invariant of manifolds of dimension 4k+1 (the symmetric L-group $L^{4k+1}$); the L-groups in the other dimensions vanish. Kervaire invariant Main article: Kervaire invariant When $d=4k+2=2(2k+1)$ is twice an odd integer (singly even), the same construction gives rise to an antisymmetric bilinear form. Such forms do not have a signature invariant; if they are non-degenerate, any two such forms are equivalent. However, if one takes a quadratic refinement of the form, which occurs if one has a framed manifold, then the resulting ε-quadratic forms need not be equivalent, being distinguished by the Arf invariant. The resulting invariant of a manifold is called the Kervaire invariant. Properties • Compact oriented manifolds M and N satisfy $\sigma (M\sqcup N)=\sigma (M)+\sigma (N)$ by definition, and satisfy $\sigma (M\times N)=\sigma (M)\sigma (N)$ by a Künneth formula. • If M is an oriented boundary, then $\sigma (M)=0$. • René Thom (1954) showed that the signature of a manifold is a cobordism invariant, and in particular is given by some linear combination of its Pontryagin numbers.[3] For example, in four dimensions, it is given by ${\frac {p_{1}}{3}}$. Friedrich Hirzebruch (1954) found an explicit expression for this linear combination as the L genus of the manifold. • William Browder (1962) proved that a simply connected compact polyhedron with 4n-dimensional Poincaré duality is homotopy equivalent to a manifold if and only if its signature satisfies the expression of the Hirzebruch signature theorem. • Rokhlin's theorem says that the signature of a 4-dimensional simply connected manifold with a spin structure is divisible by 16. See also • Hirzebruch signature theorem • Genus of a multiplicative sequence • Rokhlin's theorem References 1. Hatcher, Allen (2003). Algebraic topology (PDF) (Repr. ed.). Cambridge: Cambridge Univ. Pr. p. 250. ISBN 978-0521795401. Retrieved 8 January 2017. 2. Milnor, John; Stasheff, James (1974). Characteristic Classes. Annals of Mathematics Studies 76. p. 224. CiteSeerX 10.1.1.448.869. ISBN 978-0691081229. 3. Thom, René. "Quelques proprietes globales des varietes differentiables" (PDF) (in French). Comm. Math. Helvetici 28 (1954), S. 17–86. Retrieved 26 October 2019.
Signature operator In mathematics, the signature operator is an elliptic differential operator defined on a certain subspace of the space of differential forms on an even-dimensional compact Riemannian manifold, whose analytic index is the same as the topological signature of the manifold if the dimension of the manifold is a multiple of four.[1] It is an instance of a Dirac-type operator. Definition in the even-dimensional case Let $M$ be a compact Riemannian manifold of even dimension $2l$. Let $d:\Omega ^{p}(M)\rightarrow \Omega ^{p+1}(M)$ be the exterior derivative on $p$-th order differential forms on $M$. The Riemannian metric on $M$ allows us to define the Hodge star operator $\star $ and with it the inner product $\langle \omega ,\eta \rangle =\int _{M}\omega \wedge \star \eta $ on forms. Denote by $d^{*}:\Omega ^{p+1}(M)\rightarrow \Omega ^{p}(M)$ the adjoint operator of the exterior differential $d$. This operator can be expressed purely in terms of the Hodge star operator as follows: $d^{*}=(-1)^{2l(p+1)+2l+1}\star d\star =-\star d\star $ Now consider $d+d^{*}$ acting on the space of all forms $\Omega (M)=\bigoplus _{p=0}^{2l}\Omega ^{p}(M)$. One way to consider this as a graded operator is the following: let $\tau $ be an involution on the space of all forms defined by $\tau (\omega )=i^{p(p-1)+l}\star \omega \quad ,\quad \omega \in \Omega ^{p}(M)$ One verifies that $d+d^{*}$ anti-commutes with $\tau $ and, consequently, switches the $(\pm 1)$-eigenspaces $\Omega _{\pm }(M)$ of $\tau $. Consequently, $d+d^{*}={\begin{pmatrix}0&D\\D^{*}&0\end{pmatrix}}$ Definition: The operator $d+d^{*}$ with respect to this grading (equivalently, the operator $D:\Omega _{+}(M)\rightarrow \Omega _{-}(M)$) is called the signature operator of $M$.[2] Definition in the odd-dimensional case In the odd-dimensional case one defines the signature operator to be $i(d+d^{*})\tau $ acting on the even-dimensional forms of $M$. Hirzebruch signature theorem If $l=2k$, so that the dimension of $M$ is a multiple of four, then Hodge theory implies that $\mathrm {index} (D)=\mathrm {sign} (M)$ where the right hand side is the topological signature (i.e. the signature of a quadratic form on $H^{2k}(M)\ $ defined by the cup product). The heat equation approach to the Atiyah–Singer index theorem can then be used to show that $\mathrm {sign} (M)=\int _{M}L(p_{1},\ldots ,p_{l})$ where $L$ is the Hirzebruch L-polynomial,[3] and the $p_{i}\ $ the Pontrjagin forms on $M$.[4] Homotopy invariance of the higher indices Kaminker and Miller proved that the higher indices of the signature operator are homotopy-invariant.[5] See also • Hirzebruch signature theorem • Pontryagin class • Friedrich Hirzebruch • Michael Atiyah • Isadore Singer Notes 1. Atiyah & Bott 1967 2. Atiyah & Bott 1967 3. Hirzebruch 1995 4. Gilkey 1973, Atiyah, Bott & Patodi 1973 5. Kaminker & Miller 1985 References • Atiyah, M.F.; Bott, R. (1967), "A Lefschetz fixed-point formula for elliptic complexes I", Annals of Mathematics, 86 (2): 374–407, doi:10.2307/1970694, JSTOR 1970694 • Atiyah, M.F.; Bott, R.; Patodi, V.K. (1973), "On the heat equation and the index theorem", Inventiones Mathematicae, 19 (4): 279–330, Bibcode:1973InMat..19..279A, doi:10.1007/bf01425417, S2CID 115700319 • Gilkey, P.B.
(1973), "Curvature and the eigenvalues of the Laplacian for elliptic complexes", Advances in Mathematics, 10 (3): 344–382, doi:10.1016/0001-8708(73)90119-9 • Hirzebruch, Friedrich (1995), Topological Methods in Algebraic Geometry, 4th edition, Berlin and Heidelberg: Springer-Verlag. Pp. 234, ISBN 978-3-540-58663-0 • Kaminker, Jerome; Miller, John G. (1985), "Homotopy Invariance of the Analytic Index of Signature Operators over C*-Algebras" (PDF), Journal of Operator Theory, 14: 113–127
Multiplication algorithm A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Efficient multiplication algorithms have existed since the advent of the decimal system. Long multiplication If a positional numeral system is used, a natural way of multiplying numbers is taught in schools as long multiplication, sometimes called grade-school multiplication, sometimes called the Standard Algorithm: multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted results. It requires memorization of the multiplication table for single digits. This is the usual algorithm for multiplying larger numbers by hand in base 10. A person doing long multiplication on paper will write down all the products and then add them together; an abacus-user will sum the products as soon as each one is computed. Example This example uses long multiplication to multiply 23,958,233 (multiplicand) by 5,830 (multiplier) and arrives at 139,676,498,390 for the result (product).

        23958233
×           5830
————————————————
        00000000 ( =      23,958,233 ×     0)
       71874699  ( =      23,958,233 ×    30)
     191665864   ( =      23,958,233 ×   800)
+   119791165    ( =      23,958,233 × 5,000)
————————————————
    139676498390 ( = 139,676,498,390)

Other notations In some countries such as Germany, the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:[1]

23958233 · 5830
———————————————
119791165
 191665864
   71874699
    00000000
———————————————
139676498390

The pseudocode below describes the process of the above multiplication. It keeps only one row to maintain the running sum, which finally becomes the result. Note that the '+=' operator is used to denote sum to existing value and store operation (akin to languages such as Java and C) for compactness.

multiply(a[1..p], b[1..q], base)       // Operands containing rightmost digits at index 1
  product = [1..p+q]                   // Allocate space for result
  for b_i = 1 to q                     // for all digits in b
    carry = 0
    for a_i = 1 to p                   // for all digits in a
      product[a_i + b_i - 1] += carry + a[a_i] * b[b_i]
      carry = product[a_i + b_i - 1] / base
      product[a_i + b_i - 1] = product[a_i + b_i - 1] mod base
    product[b_i + p] = carry           // last digit comes from final carry
  return product

Usage in computers Some chips implement long multiplication, in hardware or in microcode, for various integer and floating-point word sizes. In arbitrary-precision arithmetic, it is common to use long multiplication with the base set to $2^{w}$, where w is the number of bits in a word, for multiplying relatively small numbers. To multiply two numbers with n digits using this method, one needs about $n^{2}$ operations. More formally, multiplying two n-digit numbers using long multiplication requires $\Theta (n^{2})$ single-digit operations (additions and multiplications). When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. A typical solution is to represent the number in a small base, b, such that, for example, 8b is a representable machine integer. Several additions can then be performed before an overflow occurs. When the number becomes too large, we add part of it to the result, or we carry and map the remaining part back to a number that is less than b. This process is called normalization.
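Transcribed into Python, the pseudocode runs essentially unchanged; this sketch keeps the little-endian convention (rightmost digit at index 0) and is illustrative rather than optimized.

def multiply(a: list[int], b: list[int], base: int = 10) -> list[int]:
    # Long multiplication of two little-endian digit arrays.
    product = [0] * (len(a) + len(b))            # allocate space for the result
    for b_i in range(len(b)):                    # for all digits in b
        carry = 0
        for a_i in range(len(a)):                # for all digits in a
            product[a_i + b_i] += carry + a[a_i] * b[b_i]
            carry = product[a_i + b_i] // base
            product[a_i + b_i] %= base
        product[b_i + len(a)] += carry           # last digit comes from the final carry
    return product

# 23,958,233 x 5,830, digits written least-significant first:
digits = multiply([3, 3, 2, 8, 5, 9, 3, 2], [0, 3, 8, 5])
print(int("".join(map(str, reversed(digits)))))  # -> 139676498390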
Richard Brent used this normalization approach in his Fortran package, MP.[2] Computers initially used a very similar algorithm to long multiplication in base 2, but modern processors have optimized circuitry for fast multiplications using more efficient algorithms, at the price of a more complex hardware realization. In base two, long multiplication is sometimes called "shift and add", because the algorithm simplifies and just consists of shifting left (multiplying by powers of two) and adding. Most currently available microprocessors implement this or other similar algorithms (such as Booth encoding) for various integer and floating-point sizes in hardware multipliers or in microcode. On currently available processors, a bit-wise shift instruction is faster than a multiply instruction and can be used to multiply (shift left) and divide (shift right) by powers of two. Multiplication by a constant and division by a constant can be implemented using a sequence of shifts and adds or subtracts. For example, there are several ways to multiply by 10 using only bit-shift and addition:

((x << 2) + x) << 1   # Here 10*x is computed as (x*2^2 + x)*2
(x << 3) + (x << 1)   # Here 10*x is computed as x*2^3 + x*2

In some cases such sequences of shifts and adds or subtracts will outperform hardware multipliers and especially dividers. A division by a number of the form $2^{n}$ or $2^{n}\pm 1$ often can be converted to such a short sequence. Algorithms for multiplying by hand In addition to the standard long multiplication, there are several other methods used to perform multiplication by hand. Such algorithms may be devised for speed, ease of calculation, or educational value, particularly when computers or multiplication tables are unavailable. Grid method Main article: Grid method multiplication The grid method (or box method) is an introductory method for multiple-digit multiplication that is often taught to pupils at primary school or elementary school. It has been a standard part of the national primary school mathematics curriculum in England and Wales since the late 1990s.[3] Both factors are broken up ("partitioned") into their hundreds, tens and units parts, and the products of the parts are then calculated explicitly in a relatively simple multiplication-only stage, before these contributions are then totalled to give the final answer in a separate addition stage. The calculation 34 × 13, for example, could be computed using the grid

  ×    30    4
 10   300   40
  3    90   12

followed by addition to obtain 442, either in a single sum:

   300
    40
    90
 +  12
  ————
   442

or through forming the row-by-row totals (300 + 40) + (90 + 12) = 340 + 102 = 442. This calculation approach (though not necessarily with the explicit grid arrangement) is also known as the partial products algorithm. Its essence is the calculation of the simple multiplications separately, with all addition being left to the final gathering-up stage. The grid method can in principle be applied to factors of any size, although the number of sub-products becomes cumbersome as the number of digits increases. Nevertheless, it is seen as a usefully explicit method to introduce the idea of multiple-digit multiplications; and, in an age when most multiplication calculations are done using a calculator or a spreadsheet, it may in practice be the only multiplication algorithm that some students will ever need.
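The grid method is equally short in code. A minimal sketch; the helper names place_value_parts and grid_multiply are illustrative.

def place_value_parts(n: int) -> list[int]:
    # Split 345 into its non-zero place-value parts [300, 40, 5].
    parts, scale = [], 1
    while n > 0:
        n, digit = divmod(n, 10)
        if digit:
            parts.append(digit * scale)
        scale *= 10
    return parts[::-1]

def grid_multiply(x: int, y: int) -> int:
    # Multiply every pair of parts, then total the sub-products.
    return sum(px * py for px in place_value_parts(x)
                       for py in place_value_parts(y))

print(grid_multiply(34, 13))  # sub-products 300, 90, 40, 12 -> 442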
Lattice multiplication Main article: Lattice multiplication Lattice, or sieve, multiplication is algorithmically equivalent to long multiplication. It requires the preparation of a lattice (a grid drawn on paper) which guides the calculation and separates all the multiplications from the additions. It was introduced to Europe in 1202 in Fibonacci's Liber Abaci. Fibonacci described the operation as mental, using his right and left hands to carry the intermediate calculations. Matrakçı Nasuh presented 6 different variants of this method in his 16th-century book, Umdet-ul Hisab. It was widely used in Enderun schools across the Ottoman Empire.[4] Napier's bones, or Napier's rods, also used this method, as published by Napier in 1617, the year of his death. As shown in the example, the multiplicand and multiplier are written above and to the right of a lattice, or a sieve. It is found in Muhammad ibn Musa al-Khwarizmi's "Arithmetic", one of Leonardo's sources mentioned by Sigler, author of "Fibonacci's Liber Abaci", 2002. • During the multiplication phase, the lattice is filled in with two-digit products of the corresponding digits labeling each row and column: the tens digit goes in the top-left corner. • During the addition phase, the lattice is summed on the diagonals. • Finally, if a carry phase is necessary, the answer as shown along the left and bottom sides of the lattice is converted to normal form by carrying ten's digits as in long addition or multiplication. Example As a first example, 345 × 12 can be calculated with a small lattice. As a more complicated example, consider the lattice below displaying the computation of 23,958,233 multiplied by 5,830 (multiplier); the result is 139,676,498,390. Notice 23,958,233 is along the top of the lattice and 5,830 is along the right side. The products fill the lattice and the sums of those products (on the diagonals) are along the left and bottom sides. Then those sums are totaled as shown.

     2   3   9   5   8   2   3   3
   +---+---+---+---+---+---+---+---+-
   |1 /|1 /|4 /|2 /|4 /|1 /|1 /|1 /|
   | / | / | / | / | / | / | / | / |  5
 01|/ 0|/ 5|/ 5|/ 5|/ 0|/ 0|/ 5|/ 5|
   +---+---+---+---+---+---+---+---+-
   |1 /|2 /|7 /|4 /|6 /|1 /|2 /|2 /|
   | / | / | / | / | / | / | / | / |  8
 02|/ 6|/ 4|/ 2|/ 0|/ 4|/ 6|/ 4|/ 4|
   +---+---+---+---+---+---+---+---+-
   |0 /|0 /|2 /|1 /|2 /|0 /|0 /|0 /|
   | / | / | / | / | / | / | / | / |  3
 17|/ 6|/ 9|/ 7|/ 5|/ 4|/ 6|/ 9|/ 9|
   +---+---+---+---+---+---+---+---+-
   |0 /|0 /|0 /|0 /|0 /|0 /|0 /|0 /|
   | / | / | / | / | / | / | / | / |  0
 24|/ 0|/ 0|/ 0|/ 0|/ 0|/ 0|/ 0|/ 0|
   +---+---+---+---+---+---+---+---+-
     26  15  13  18  17  13  09  00

01
002
0017
00024
000026
0000015
00000013
000000018
0000000017
00000000013
000000000009
0000000000000
—————————————
 139676498390 = 139,676,498,390

Russian peasant multiplication Main article: Peasant multiplication The binary method is also known as peasant multiplication, because it has been widely used by people who are classified as peasants and thus have not memorized the multiplication tables required for long multiplication.[5] The algorithm was in use in ancient Egypt.[6] Its main advantages are that it can be taught quickly, requires no memorization, and can be performed using tokens, such as poker chips, if paper and pencil aren't available. The disadvantage is that it takes more steps than long multiplication, so it can be unwieldy for large numbers. Description On paper, write down in one column the numbers you get when you repeatedly halve the multiplier, ignoring the remainder; in a column beside it repeatedly double the multiplicand. Cross out each row in which the last digit of the first number is even, and add the remaining numbers in the second column to obtain the product. Examples This example uses peasant multiplication to multiply 11 by 3 to arrive at a result of 33.

 Decimal:       Binary:
 11     3       1011     11
 5      6       101      110
 2     12       10       1100
 1     24       1        11000
       ——                ——————
       33                100001

Describing the steps explicitly: • 11 and 3 are written at the top • 11 is halved (5.5) and 3 is doubled (6). The fractional portion is discarded (5.5 becomes 5). • 5 is halved (2.5) and 6 is doubled (12). The fractional portion is discarded (2.5 becomes 2). The figure in the left column (2) is even, so the figure in the right column (12) is discarded. • 2 is halved (1) and 12 is doubled (24). • All not-scratched-out values are summed: 3 + 6 + 24 = 33. The method works because multiplication is distributive, so: ${\begin{aligned}3\times 11&=3\times (1\times 2^{0}+1\times 2^{1}+0\times 2^{2}+1\times 2^{3})\\&=3\times (1+2+8)\\&=3+6+24\\&=33.\end{aligned}}$ A more complicated example, using the figures from the earlier examples (23,958,233 and 5,830):

 Decimal:             Binary:
 5830   23958233      1011011000110   1011011011001001011011001
 2915   47916466      101101100011    10110110110010010110110010
 1457   95832932      10110110001     101101101100100101101100100
 728    191665864     1011011000      1011011011001001011011001000
 364    383331728     101101100       10110110110010010110110010000
 182    766663456     10110110        101101101100100101101100100000
 91     1533326912    1011011         1011011011001001011011001000000
 45     3066653824    101101          10110110110010010110110010000000
 22     6133307648    10110           101101101100100101101100100000000
 11     12266615296   1011            1011011011001001011011001000000000
 5      24533230592   101             10110110110010010110110010000000000
 2      49066461184   10              101101101100100101101100100000000000
 1      98132922368   1               1011011011001001011011001000000000000
        ————————————                  —————————————————————————————————————
        139676498390                  1022143253354344244353353243222210110 (before carry)
                                      10000010000101010111100011100111010110
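In code the halve-and-double procedure is a short loop; rows whose left-hand entry is even are simply skipped rather than crossed out. A minimal sketch:

def peasant_multiply(multiplier: int, multiplicand: int) -> int:
    # Halve one column, double the other, and sum the doubled values
    # on the rows where the halved value is odd.
    total = 0
    while multiplier > 0:
        if multiplier % 2 == 1:   # row is not crossed out
            total += multiplicand
        multiplier //= 2          # halve, ignoring the remainder
        multiplicand *= 2         # double
    return total

print(peasant_multiply(11, 3))           # -> 33
print(peasant_multiply(5830, 23958233))  # -> 139676498390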
Quarter square multiplication This formula can in some cases be used to make multiplication tasks easier to complete: ${\frac {\left(x+y\right)^{2}}{4}}-{\frac {\left(x-y\right)^{2}}{4}}={\frac {1}{4}}\left(\left(x^{2}+2xy+y^{2}\right)-\left(x^{2}-2xy+y^{2}\right)\right)={\frac {1}{4}}\left(4xy\right)=xy.$ Properties of the quarter square identity Case $x=2p$ and $y=2q$: $x+y=2p+2q=2(p+q)\implies (x+y)^{2}=4(p+q)^{2}$. It also implies that $(x-y)^{2}=4(p-q)^{2}$. This results in $xy={\frac {1}{4}}\left((x+y)^{2}-(x-y)^{2}\right)=(p+q)^{2}-(p-q)^{2}$. Case $x=2p+1$ and $y=2q+1$: $x+y=2p+1+2q+1=2(p+q+1)\implies (x+y)^{2}=4(p+q+1)^{2}$. It also implies that $(x-y)^{2}=4(p-q)^{2}$. This results in $xy={\frac {1}{4}}\left((x+y)^{2}-(x-y)^{2}\right)=(p+q+1)^{2}-(p-q)^{2}$. Case $x=2p$ and $y=2q+1$: $x+y=2p+2q+1=2(p+q+{\frac {1}{2}})\implies (x+y)^{2}=4(p+q+{\frac {1}{2}})^{2}$. It also implies that $(x-y)^{2}=4(p-q-{\frac {1}{2}})^{2}$. This results in $xy={\frac {1}{4}}\left((x+y)^{2}-(x-y)^{2}\right)=(p+q+{\frac {1}{2}})^{2}-(p-q-{\frac {1}{2}})^{2}$. Case $x=2p+1$ and $y=2q$: $x+y=2p+1+2q=2(p+q+{\frac {1}{2}})\implies (x+y)^{2}=4(p+q+{\frac {1}{2}})^{2}$. It also implies that $(x-y)^{2}=4(p-q+{\frac {1}{2}})^{2}$. This results in $xy={\frac {1}{4}}\left((x+y)^{2}-(x-y)^{2}\right)=(p+q+{\frac {1}{2}})^{2}-(p-q+{\frac {1}{2}})^{2}$. In the two mixed cases, $x+y$ and $x-y$ are both odd, so $(x+y)^{2}$ and $(x-y)^{2}$ each leave remainder 1 upon division by 4; the discarded remainders therefore cancel, and a table of $\lfloor n^{2}/4\rfloor $ still gives the exact product.
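A sketch of both the direct identity and the table-lookup variant used by the digital multipliers discussed below (the function names are illustrative):

def quarter_square_multiply(x: int, y: int) -> int:
    # xy = floor((x+y)^2 / 4) - floor((x-y)^2 / 4); the remainders cancel.
    return (x + y) ** 2 // 4 - (x - y) ** 2 // 4

TABLE = [n * n // 4 for n in range(19)]   # floor(n^2/4) for n = 0..18

def table_multiply(x: int, y: int) -> int:
    # Table-lookup variant for single-digit x and y.
    return TABLE[x + y] - TABLE[abs(x - y)]

print(quarter_square_multiply(9, 3), table_multiply(9, 3))  # -> 27 27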
Examples Below is a lookup table of quarter squares with the remainder discarded for the digits 0 through 18; this allows for the multiplication of numbers up to 9×9.

 n        0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18
 ⌊n²/4⌋   0   0   1   2   4   6   9  12  16  20  25  30  36  42  49  56  64  72  81

If, for example, you wanted to multiply 9 by 3, you observe that the sum and difference are 12 and 6 respectively. Looking both those values up on the table yields 36 and 9, the difference of which is 27, which is the product of 9 and 3. History of quarter square multiplication Quarter square multiplication using the floor function is ancient; some sources[7][8] attribute it to Babylonian mathematics (2000–1600 BC). Antoine Voisin published a table of quarter squares from 1 to 1000 in 1817 as an aid in multiplication. A larger table of quarter squares from 1 to 100000 was published by Samuel Laundy in 1856,[9] and a table from 1 to 200000 by Joseph Blater in 1888.[10] Quarter square multipliers were used in analog computers to form an analog signal that was the product of two analog input signals. In this application, the sum and difference of two input voltages are formed using operational amplifiers. The square of each of these is approximated using piecewise linear circuits. Finally the difference of the two squares is formed and scaled by a factor of one fourth using yet another operational amplifier. In 1980, Everett L. Johnson proposed using the quarter square method in a digital multiplier.[11] To form the product of two 8-bit integers, for example, the digital device forms the sum and difference, looks both quantities up in a table of squares, takes the difference of the results, and divides by four by shifting two bits to the right. For 8-bit integers the table of quarter squares will have $2^{9}-1=511$ entries (one entry for the full range 0..510 of possible sums, the differences using only the first 256 entries in range 0..255), or $2^{9}-1=511$ entries (using for negative differences the two's-complement technique and 9-bit masking, which avoids testing the sign of differences), each entry being 16 bits wide (the entry values are from $(0^{2}/4)=0$ to $(510^{2}/4)=65025$). The quarter square multiplier technique has benefited 8-bit systems that do not have any support for a hardware multiplier. Charles Putney implemented this for the 6502.[12] Computational complexity of multiplication Unsolved problem in computer science: What is the fastest algorithm for multiplication of two $n$-digit numbers? A line of research in theoretical computer science is about the number of single-bit arithmetic operations necessary to multiply two $n$-bit integers. This is known as the computational complexity of multiplication. Usual algorithms done by hand have asymptotic complexity of $O(n^{2})$, but in 1960 Anatoly Karatsuba discovered that better complexity was possible (with the Karatsuba algorithm). Currently, the algorithm with the best computational complexity is a 2019 algorithm of David Harvey and Joris van der Hoeven, which uses the strategies of number-theoretic transforms introduced with the Schönhage–Strassen algorithm to multiply integers using only $O(n\log n)$ operations.[13] This is conjectured to be the best possible algorithm, but matching lower bounds of $\Omega (n\log n)$ are not known. Karatsuba multiplication Main article: Karatsuba algorithm Karatsuba multiplication is an $O(n^{\log _{2}3})\approx O(n^{1.585})$ divide-and-conquer algorithm that uses recursion to combine subcalculations. By rewriting the product formula, the multiplication is split into subcalculations on smaller operands, and applying this splitting recursively yields the faster running time.
Let $x$ and $y$ be represented as $n$-digit strings in some base $B$. For any positive integer $m$ less than $n$, one can write the two given numbers as $x=x_{1}B^{m}+x_{0},$ $y=y_{1}B^{m}+y_{0},$ where $x_{0}$ and $y_{0}$ are less than $B^{m}$. The product is then ${\begin{aligned}xy&=(x_{1}B^{m}+x_{0})(y_{1}B^{m}+y_{0})\\&=x_{1}y_{1}B^{2m}+(x_{1}y_{0}+x_{0}y_{1})B^{m}+x_{0}y_{0}\\&=z_{2}B^{2m}+z_{1}B^{m}+z_{0},\\\end{aligned}}$ where $z_{2}=x_{1}y_{1},$ $z_{1}=x_{1}y_{0}+x_{0}y_{1},$ $z_{0}=x_{0}y_{0}.$ These formulae require four multiplications and were known to Charles Babbage.[14] Karatsuba observed that $xy$ can be computed in only three multiplications, at the cost of a few extra additions. With $z_{0}$ and $z_{2}$ as before one can observe that ${\begin{aligned}z_{1}&=x_{1}y_{0}+x_{0}y_{1}\\&=x_{1}y_{0}+x_{0}y_{1}+x_{1}y_{1}-x_{1}y_{1}+x_{0}y_{0}-x_{0}y_{0}\\&=x_{1}y_{0}+x_{0}y_{0}+x_{0}y_{1}+x_{1}y_{1}-x_{1}y_{1}-x_{0}y_{0}\\&=(x_{1}+x_{0})y_{0}+(x_{0}+x_{1})y_{1}-x_{1}y_{1}-x_{0}y_{0}\\&=(x_{1}+x_{0})(y_{0}+y_{1})-x_{1}y_{1}-x_{0}y_{0}\\&=(x_{1}+x_{0})(y_{1}+y_{0})-z_{2}-z_{0}.\\\end{aligned}}$ Because of the overhead of recursion, Karatsuba's multiplication is slower than long multiplication for small values of n; typical implementations therefore switch to long multiplication for small values of n. General case with multiplication of N numbers By exploring patterns after expansion, one sees the following: $(x_{1}B^{m}+x_{0})(y_{1}B^{m}+y_{0})(z_{1}B^{m}+z_{0})(a_{1}B^{m}+a_{0})=$ $a_{1}x_{1}y_{1}z_{1}B^{4m}+a_{1}x_{1}y_{1}z_{0}B^{3m}+a_{1}x_{1}y_{0}z_{1}B^{3m}+a_{1}x_{0}y_{1}z_{1}B^{3m}$ $+a_{0}x_{1}y_{1}z_{1}B^{3m}+a_{1}x_{1}y_{0}z_{0}B^{2m}+a_{1}x_{0}y_{1}z_{0}B^{2m}+a_{0}x_{1}y_{1}z_{0}B^{2m}$ $+a_{1}x_{0}y_{0}z_{1}B^{2m}+a_{0}x_{1}y_{0}z_{1}B^{2m}+a_{0}x_{0}y_{1}z_{1}B^{2m}+a_{1}x_{0}y_{0}z_{0}B^{m}$ $+a_{0}x_{1}y_{0}z_{0}B^{m}+a_{0}x_{0}y_{1}z_{0}B^{m}+a_{0}x_{0}y_{0}z_{1}B^{m}+a_{0}x_{0}y_{0}z_{0}$ Each summand is associated to a unique binary number from 0 to $2^{N}-1$, for example $a_{1}x_{1}y_{1}z_{1}\longleftrightarrow 1111,\ a_{1}x_{0}y_{1}z_{0}\longleftrightarrow 1010$ etc. Furthermore, $B$ is raised to the number of 1s in this binary string, multiplied by $m$. If we express this in fewer terms, we get: $\prod _{j=1}^{N}(x_{j,1}B^{m}+x_{j,0})=\sum _{i=0}^{2^{N}-1}\prod _{j=1}^{N}x_{j,c(i,j)}B^{m\sum _{j=1}^{N}c(i,j)}=\sum _{j=0}^{N}z_{j}B^{jm}$, where $c(i,j)$ denotes digit $j$ of the number $i$ written in binary. Notice that $c(i,j)\in \{0,1\}$, and that $z_{0}=\prod _{j=1}^{N}x_{j,0}$, $z_{N}=\prod _{j=1}^{N}x_{j,1}$, and $z_{N-1}=\prod _{j=1}^{N}(x_{j,0}+x_{j,1})-\sum _{i\neq N-1}z_{i}$. History Karatsuba's algorithm was the first known algorithm for multiplication that is asymptotically faster than long multiplication,[15] and can thus be viewed as the starting point for the theory of fast multiplications.
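A recursive sketch of the three-multiplication scheme, using base B = 10 and falling back to built-in multiplication for single-digit operands (the cutoff is arbitrary; real implementations switch to long multiplication much earlier, as noted above):

def karatsuba(x: int, y: int) -> int:
    # Three recursive multiplications (z2, z0 and one combined term) instead of four.
    if x < 10 or y < 10:                    # base case: a single-digit factor
        return x * y
    m = max(len(str(x)), len(str(y))) // 2  # split position, base B = 10
    Bm = 10 ** m
    x1, x0 = divmod(x, Bm)
    y1, y0 = divmod(y, Bm)
    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0
    return z2 * Bm * Bm + z1 * Bm + z0

print(karatsuba(23958233, 5830))  # -> 139676498390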
Toom–Cook Main article: Toom–Cook multiplication Another method of multiplication is called Toom–Cook or Toom-3. The Toom–Cook method splits each number to be multiplied into multiple parts; it is one of the generalizations of the Karatsuba method. A three-way Toom–Cook can do a size-3N multiplication for the cost of five size-N multiplications. This accelerates the operation by a factor of 9/5, while the Karatsuba method accelerates it by 4/3. Although using more and more parts can reduce the time spent on recursive multiplications further, the overhead from additions and digit management also grows. For this reason, the method of Fourier transforms is typically faster for numbers with several thousand digits, and asymptotically faster for even larger numbers. Schönhage–Strassen Every number in base B can be written as a polynomial: $X=\sum _{i=0}^{N}{x_{i}B^{i}}$ Furthermore, multiplication of two numbers can be thought of as a product of two polynomials: $XY=(\sum _{i=0}^{N}{x_{i}B^{i}})(\sum _{j=0}^{N}{y_{j}B^{j}})$ Because the coefficient of $B^{k}$ in the product is $c_{k}=\sum _{(i,j):i+j=k}{x_{i}y_{j}}=\sum _{i=0}^{k}{x_{i}y_{k-i}}$, the coefficient sequence of the product is the convolution of the two coefficient sequences. By the convolution rule for the fast Fourier transform (FFT), ${\hat {f}}(x*y)={\hat {f}}(x)\cdot {\hat {f}}(y)$, each Fourier coefficient of the product is the pointwise product $C_{k}={\hat {x}}_{k}\cdot {\hat {y}}_{k}$; the linearity of the Fourier transform, ${\hat {f}}(a\,X(\xi )+b\,Y(\xi ))=a\,{\hat {X}}(\xi )+b\,{\hat {Y}}(\xi )$, guarantees that the coefficients combine independently. The convolution problem is thus reduced, through the FFT, to a pointwise product problem, and by applying the inverse FFT (polynomial interpolation) to each $C_{k}$, one recovers the desired coefficients. The algorithm uses a divide-and-conquer strategy to split the problem into subproblems, and has a time complexity of O(n log(n) log(log(n))). History The algorithm was invented by Strassen (1968). It was made practical, and theoretical guarantees were provided, in 1971 by Schönhage and Strassen, resulting in the Schönhage–Strassen algorithm.[16]
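The convolution idea can be sketched with a floating-point FFT: treat the digit arrays as polynomial coefficients, multiply pointwise in Fourier space, transform back, and carry. This illustrates the principle only; the actual Schönhage–Strassen algorithm performs the transforms with exact modular arithmetic rather than floats, which is what makes it correct for arbitrarily large inputs.

import numpy as np

def fft_multiply(a: list[int], b: list[int], base: int = 10) -> list[int]:
    # Multiply two little-endian digit arrays via FFT-based convolution.
    n = len(a) + len(b)                        # room for the full product
    fa = np.fft.rfft(a, n)                     # "evaluate" the polynomials
    fb = np.fft.rfft(b, n)
    coeffs = np.rint(np.fft.irfft(fa * fb, n)).astype(np.int64)
    digits, carry = [], 0
    for c in coeffs:                           # carry to restore base-10 digits
        carry, d = divmod(int(c) + carry, base)
        digits.append(d)
    return digits                              # little-endian digit array

d = fft_multiply([0, 3, 8, 5], [3, 3, 2, 8, 5, 9, 3, 2])  # 5,830 x 23,958,233
print(int("".join(map(str, reversed(d)))))                # -> 139676498390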
Further improvements In 2007 the asymptotic complexity of integer multiplication was improved by the Swiss mathematician Martin Fürer of Pennsylvania State University to $n\log n\cdot 2^{\Theta (\log ^{*}n)}$ using Fourier transforms over complex numbers,[17] where log* denotes the iterated logarithm. Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi gave a similar algorithm using modular arithmetic in 2008 achieving the same running time.[18] In context of the above material, what these latter authors have achieved is to find N much less than $2^{3k}+1$, so that $\mathbb {Z} /N\mathbb {Z} $ has a (2m)th root of unity. This speeds up computation and reduces the time complexity. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs. In 2014, Harvey, Joris van der Hoeven and Lecerf[19] gave a new algorithm that achieves a running time of $O(n\log n\cdot 2^{3\log ^{*}n})$, making explicit the implied constant in the $O(\log ^{*}n)$ exponent. They also proposed a variant of their algorithm which achieves $O(n\log n\cdot 2^{2\log ^{*}n})$ but whose validity relies on standard conjectures about the distribution of Mersenne primes. In 2016, Covanov and Thomé proposed an integer multiplication algorithm based on a generalization of Fermat primes that conjecturally achieves a complexity bound of $O(n\log n\cdot 2^{2\log ^{*}n})$. This matches the 2015 conditional result of Harvey, van der Hoeven, and Lecerf but uses a different algorithm and relies on a different conjecture.[20] In 2018, Harvey and van der Hoeven used an approach based on the existence of short lattice vectors guaranteed by Minkowski's theorem to prove an unconditional complexity bound of $O(n\log n\cdot 2^{2\log ^{*}n})$.[21] In March 2019, David Harvey and Joris van der Hoeven announced their discovery of an O(n log n) multiplication algorithm.[22] It was published in the Annals of Mathematics in 2021.[23] Because Schönhage and Strassen predicted that n log(n) is the "best possible" result, Harvey said: "...our work is expected to be the end of the road for this problem, although we don't know yet how to prove this rigorously."[24] Lower bounds There is a trivial lower bound of Ω(n) for multiplying two n-bit numbers on a single processor; no matching algorithm (on conventional machines, that is on Turing equivalent machines) nor any sharper lower bound is known. Multiplication lies outside of AC0[p] for any prime p, meaning there is no family of constant-depth, polynomial (or even subexponential) size circuits using AND, OR, NOT, and MODp gates that can compute a product. This follows from a constant-depth reduction of MODq to multiplication.[25] Lower bounds for multiplication are also known for some classes of branching programs.[26] Complex number multiplication Complex multiplication normally involves four multiplications and two additions. $(a+bi)(c+di)=(ac-bd)+(bc+ad)i.$ Or, in tabular form: ${\begin{array}{c|c|c}\times &a&bi\\\hline c&ac&bci\\\hline di&adi&-bd\end{array}}$ As observed by Peter Ungar in 1963, one can reduce the number of multiplications to three, using essentially the same computation as Karatsuba's algorithm.[27] The product (a + bi) · (c + di) can be calculated in the following way.

 k1 = c · (a + b)
 k2 = a · (d − c)
 k3 = b · (c + d)
 Real part = k1 − k3
 Imaginary part = k1 + k2

This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two. If a multiply is more expensive than three adds or subtracts, as when calculating by hand, then there is a gain in speed. On modern computers a multiply and an add can take about the same time so there may be no speed gain. There is a trade-off in that there may be some loss of precision when using floating point. For fast Fourier transforms (FFTs) (or any linear transformation) the complex multiplies are by constant coefficients c + di (called twiddle factors in FFTs), in which case two of the additions (d−c and c+d) can be precomputed. Hence, only three multiplies and three adds are required.[28] However, trading off a multiplication for an addition in this way may no longer be beneficial with modern floating-point units.[29]
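Ungar's scheme transcribes directly into code:

def complex_multiply_3(a: float, b: float, c: float, d: float) -> tuple[float, float]:
    # (a + bi)(c + di) using three real multiplications (Ungar, 1963).
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return k1 - k3, k1 + k2        # (real part, imaginary part)

print(complex_multiply_3(1, 2, 3, 4))   # -> (-5, 10)
print((1 + 2j) * (3 + 4j))              # -> (-5+10j), for comparison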
Polynomial multiplication All the above multiplication algorithms can also be expanded to multiply polynomials. Alternatively the Kronecker substitution technique may be used to convert the problem of multiplying polynomials into a single binary multiplication.[30] Long multiplication methods can be generalised to allow the multiplication of algebraic formulae. For example, 14ac − 3ab + 2 multiplied by ac − ab + 1:

              14ac   -3ab     2
                ac    -ab     1
 ——————————————————————————————————————
 14a²c²    -3a²bc    2ac
          -14a²bc    3a²b²  -2ab
              14ac   -3ab     2
 ——————————————————————————————————————
 14a²c²   -17a²bc   16ac    3a²b²   -5ab   +2
 ============================================[31]

As a further example of column based multiplication, consider multiplying 23 long tons (t), 12 hundredweight (cwt) and 2 quarters (qtr) by 47. This example uses avoirdupois measures: 1 t = 20 cwt, 1 cwt = 4 qtr.

     t   cwt  qtr
    23    12    2
               47 x
 ————————————————
   141    94   94
   940   470
    29    23
 ————————————————
  1110   587   94
 ————————————————
  1110     7    2
 =================
 Answer: 1110 ton 7 cwt 2 qtr

First multiply the quarters by 47, the result 94 is written into the first workspace. Next, multiply cwt 12*47 = (2 + 10)*47 but don't add up the partial results (94, 470) yet. Likewise multiply 23 by 47 yielding (141, 940). The quarters column is totaled and the result placed in the second workspace (a trivial move in this case). 94 quarters is 23 cwt and 2 qtr, so place the 2 in the answer and put the 23 in the next column left. Now add up the three entries in the cwt column giving 587. This is 29 t 7 cwt, so write the 7 into the answer and the 29 in the column to the left. Now add up the tons column. There is no adjustment to make, so the result is just copied down. The same layout and methods can be used for any traditional measurements and non-decimal currencies such as the old British £sd system. See also • Binary multiplier • Dadda multiplier • Division algorithm • Horner scheme for evaluating of a polynomial • Logarithm • Mental calculation • Number-theoretic transform • Prosthaphaeresis • Slide rule • Trachtenberg system • Residue number system § Multiplication for another fast multiplication algorithm, specially efficient when many operations are done in sequence, such as in linear algebra • Wallace tree References 1. "Multiplication". www.mathematische-basteleien.de. Retrieved 2022-03-15. 2. Brent, Richard P (March 1978). "A Fortran Multiple-Precision Arithmetic Package". ACM Transactions on Mathematical Software. 4: 57–70. CiteSeerX 10.1.1.117.8425. doi:10.1145/355769.355775. S2CID 8875817. 3. Eason, Gary (2000-02-13). "Back to school for parents". BBC News. Eastaway, Rob (2010-09-10). "Why parents can't do maths today". BBC News. 4. Corlu, M.S.; Burlbaw, L.M.; Capraro, R.M.; Corlu, M.A.; Han, S. (2010). "The Ottoman Palace School Enderun and the Man with Multiple Talents, Matrakçı Nasuh". Journal of the Korea Society of Mathematical Education Series D: Research in Mathematical Education. 14 (1): 19–31. 5. Bogomolny, Alexander. "Peasant Multiplication". www.cut-the-knot.org. Retrieved 2017-11-04. 6. Wells, D. (1987). The Penguin Dictionary of Curious and Interesting Numbers. Penguin Books. p. 44. ISBN 978-0-14-008029-2. 7. McFarland, David (2007), Quarter Tables Revisited: Earlier Tables, Division of Labor in Table Construction, and Later Implementations in Analog Computers, p. 1 8. Robson, Eleanor (2008). Mathematics in Ancient Iraq: A Social History. p. 227. ISBN 978-0691091822. 9. "Reviews", The Civil Engineer and Architect's Journal: 54–55, 1857. 10.
Holmes, Neville (2003), "Multiplying with quarter squares", The Mathematical Gazette, 87 (509): 296–299, doi:10.1017/S0025557200172778, JSTOR 3621048, S2CID 125040256. 11. Everett L., Johnson (March 1980), "A Digital Quarter Square Multiplier", IEEE Transactions on Computers, Washington, DC, USA: IEEE Computer Society, vol. C-29, no. 3, pp. 258–261, doi:10.1109/TC.1980.1675558, ISSN 0018-9340, S2CID 24813486 12. Putney, Charles (March 1986). "Fastest 6502 Multiplication Yet". Apple Assembly Line. 6 (6). 13. Harvey, David; van der Hoeven, Joris (2021). "Integer multiplication in time $O(n\log n)$" (PDF). Annals of Mathematics. Second Series. 193 (2): 563–617. doi:10.4007/annals.2021.193.2.4. MR 4224716. S2CID 109934776. 14. Charles Babbage, Chapter VIII – Of the Analytical Engine, Larger Numbers Treated, Passages from the Life of a Philosopher, Longman Green, London, 1864; page 125. 15. D. Knuth, The Art of Computer Programming, vol. 2, sec. 4.3.3 (1998) 16. Schönhage, A.; Strassen, V. (1971). "Schnelle Multiplikation großer Zahlen". Computing. 7 (3–4): 281–292. doi:10.1007/BF02242355. S2CID 9738629. 17. Fürer, M. (2007). "Faster Integer Multiplication" (PDF). Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, June 11–13, 2007, San Diego, California, USA. pp. 57–66. doi:10.1145/1250790.1250800. ISBN 978-1-59593-631-8. S2CID 8437794. 18. De, A.; Saha, C.; Kurur, P.; Saptharishi, R. (2008). "Fast integer multiplication using modular arithmetic". Proceedings of the 40th annual ACM Symposium on Theory of Computing (STOC). pp. 499–506. arXiv:0801.1416. doi:10.1145/1374376.1374447. ISBN 978-1-60558-047-0. S2CID 3264828. 19. Harvey, David; van der Hoeven, Joris; Lecerf, Grégoire (2016). "Even faster integer multiplication". Journal of Complexity. 36: 1–30. arXiv:1407.3360. doi:10.1016/j.jco.2016.03.001. MR 3530637. 20. Covanov, Svyatoslav; Thomé, Emmanuel (2019). "Fast Integer Multiplication Using Generalized Fermat Primes". Math. Comp. 88 (317): 1449–1477. arXiv:1502.02800. doi:10.1090/mcom/3367. S2CID 67790860. 21. Harvey, D.; van der Hoeven, J. (2019). "Faster integer multiplication using short lattice vectors". The Open Book Series. 2: 293–310. arXiv:1802.07932. doi:10.2140/obs.2019.2.293. S2CID 3464567. 22. Hartnett, Kevin (2019-04-11). "Mathematicians Discover the Perfect Way to Multiply". Quanta Magazine. Retrieved 2019-05-03. 23. Harvey, David; van der Hoeven, Joris (2021). "Integer multiplication in time $O(n\log n)$" (PDF). Annals of Mathematics. Second Series. 193 (2): 563–617. doi:10.4007/annals.2021.193.2.4. MR 4224716. S2CID 109934776. 24. Gilbert, Lachlan (2019-04-04). "Maths whiz solves 48-year-old multiplication problem". UNSW. Retrieved 2019-04-18. 25. Arora, Sanjeev; Barak, Boaz (2009). Computational Complexity: A Modern Approach. Cambridge University Press. ISBN 978-0-521-42426-4. 26. Ablayev, F.; Karpinski, M. (2003). "A lower bound for integer multiplication on randomized ordered read-once branching programs" (PDF). Information and Computation. 186 (1): 78–89. doi:10.1016/S0890-5401(03)00118-4. 27. Knuth, Donald E. (1988), The Art of Computer Programming volume 2: Seminumerical algorithms, Addison-Wesley, pp. 519, 706 28. Duhamel, P.; Vetterli, M. (1990). "Fast Fourier transforms: A tutorial review and a state of the art" (PDF). Signal Processing. 19 (4): 259–299 See Section 4.1. doi:10.1016/0165-1684(90)90158-U. 29. Johnson, S.G.; Frigo, M. (2007). "A modified split-radix FFT with fewer arithmetic operations" (PDF). IEEE Trans. Signal Process. 
55 (1): 111–9 See Section IV. Bibcode:2007ITSP...55..111J. doi:10.1109/TSP.2006.882087. S2CID 14772428. 30. von zur Gathen, Joachim; Gerhard, Jürgen (1999), Modern Computer Algebra, Cambridge University Press, pp. 243–244, ISBN 978-0-521-64176-0. 31. Castle, Frank (1900). Workshop Mathematics. London: MacMillan and Co. p. 74. Further reading • Warren Jr., Henry S. (2013). Hacker's Delight (2 ed.). Addison Wesley - Pearson Education, Inc. ISBN 978-0-321-84268-8. • Savard, John J. G. (2018) [2006]. "Advanced Arithmetic Techniques". quadibloc. Archived from the original on 2018-07-03. Retrieved 2018-07-16. • Johansson, Kenny (2008). Low Power and Low Complexity Shift-and-Add Based Computations (PDF) (Dissertation thesis). Linköping Studies in Science and Technology (1 ed.). Linköping, Sweden: Department of Electrical Engineering, Linköping University. ISBN 978-91-7393-836-5. ISSN 0345-7524. No. 1201. Archived (PDF) from the original on 2017-08-13. Retrieved 2021-08-23. (x+268 pages) External links Basic arithmetic • The Many Ways of Arithmetic in UCSMP Everyday Mathematics • A Powerpoint presentation about ancient mathematics • Lattice Multiplication Flash Video Advanced algorithms • Multiplication Algorithms used by GMP
Signed-digit representation In mathematical notation for numbers, a signed-digit representation is a positional numeral system with a set of signed digits used to encode the integers. Signed-digit representation can be used to accomplish fast addition of integers because it can eliminate chains of dependent carries.[1] In the binary numeral system, a special-case signed-digit representation is the non-adjacent form, which can offer speed benefits with minimal space overhead. History Challenges in calculation stimulated early authors such as Colson (1726) and Cauchy (1840) to use signed-digit representation. The further step of replacing negated digits with new ones was suggested by Selling (1887) and Cajori (1928). In 1928, Florian Cajori noted the recurring theme of signed digits, starting with Colson (1726) and Cauchy (1840).[2] In his book History of Mathematical Notations, Cajori titled the section "Negative numerals".[3] For completeness, Colson[4] uses examples and describes addition (pp. 163–4), multiplication (pp. 165–6) and division (pp. 170–1) using a table of multiples of the divisor. He explains the convenience of approximation by truncation in multiplication. Colson also devised an instrument (Counting Table) that calculated using signed digits. Eduard Selling[5] advocated inverting the digits 1, 2, 3, 4, and 5 to indicate the negative sign. He also suggested snie, jes, jerd, reff, and niff as names to use vocally. Most of the other early sources used a bar over a digit to indicate a negative sign for it. Another German usage of signed digits was described in 1902 in Klein's encyclopedia.[6] Definition and properties Digit set Let ${\mathcal {D}}$ be a finite set of numerical digits with cardinality $b>1$ (if $b\leq 1$, then the positional number system is trivial and only represents the trivial ring), with each digit denoted as $d_{i}$ for $0\leq i<b.$ $b$ is known as the radix or number base.
${\mathcal {D}}$ can be used for a signed-digit representation if it is associated with a unique function $f_{\mathcal {D}}:{\mathcal {D}}\rightarrow \mathbb {Z} $ such that $f_{\mathcal {D}}(d_{i})\equiv i{\bmod {b}}$ for all $0\leq i<b.$ This function, $f_{\mathcal {D}},$ is what rigorously and formally establishes how integer values are assigned to the symbols/glyphs in ${\mathcal {D}}.$ One benefit of this formalism is that the definition of "the integers" (however they may be defined) is not conflated with any particular system for writing/representing them; in this way, these two distinct (albeit closely related) concepts are kept separate. ${\mathcal {D}}$ can be partitioned into three distinct sets ${\mathcal {D}}_{+}$, ${\mathcal {D}}_{0}$, and ${\mathcal {D}}_{-}$, representing the positive, zero, and negative digits respectively, such that all digits $d_{+}\in {\mathcal {D}}_{+}$ satisfy $f_{\mathcal {D}}(d_{+})>0$, all digits $d_{0}\in {\mathcal {D}}_{0}$ satisfy $f_{\mathcal {D}}(d_{0})=0$ and all digits $d_{-}\in {\mathcal {D}}_{-}$ satisfy $f_{\mathcal {D}}(d_{-})<0$. The cardinality of ${\mathcal {D}}_{+}$ is $b_{+}$, the cardinality of ${\mathcal {D}}_{0}$ is $b_{0}$, and the cardinality of ${\mathcal {D}}_{-}$ is $b_{-}$, giving the number of positive, zero, and negative digits respectively, such that $b=b_{+}+b_{0}+b_{-}$. Balanced form representations See also: Balanced ternary Balanced form representations are representations where for every positive digit $d_{+}$ there exists a corresponding negative digit $d_{-}$ such that $f_{\mathcal {D}}(d_{+})=-f_{\mathcal {D}}(d_{-})$. It follows that $b_{+}=b_{-}$. Only odd bases can have balanced form representations, as otherwise $d_{b/2}$ would have to be the opposite of itself and hence 0, but $0\neq {\frac {b}{2}}$. In balanced form, the negative digits $d_{-}\in {\mathcal {D}}_{-}$ are usually denoted as positive digits with a bar over the digit, as $d_{-}={\bar {d}}_{+}$ for $d_{+}\in {\mathcal {D}}_{+}$. For example, the digit set of balanced ternary would be ${\mathcal {D}}_{3}=\lbrace {\bar {1}},0,1\rbrace $ with $f_{{\mathcal {D}}_{3}}({\bar {1}})=-1$, $f_{{\mathcal {D}}_{3}}(0)=0$, and $f_{{\mathcal {D}}_{3}}(1)=1$. This convention is adopted in finite fields of odd prime order $q$:[7] $\mathbb {F} _{q}=\lbrace 0,1,{\bar {1}}=-1,\ldots ,d={\frac {q-1}{2}},\ {\bar {d}}={\frac {1-q}{2}}\rbrace ,$ where $q=0$ in $\mathbb {F} _{q}$. Dual signed-digit representation Every digit set ${\mathcal {D}}$ has a dual digit set ${\mathcal {D}}^{\operatorname {op} }$ given by the inverse order of the digits, with an isomorphism $g:{\mathcal {D}}\rightarrow {\mathcal {D}}^{\operatorname {op} }$ defined by $-f_{\mathcal {D}}=f_{{\mathcal {D}}^{\operatorname {op} }}\circ g$. As a result, for any signed-digit representation ${\mathcal {N}}$ of a number system ring $N$ constructed from ${\mathcal {D}}$ with valuation $v_{\mathcal {D}}:{\mathcal {N}}\rightarrow N$, there exists a dual signed-digit representation of $N$, ${\mathcal {N}}^{\operatorname {op} }$, constructed from ${\mathcal {D}}^{\operatorname {op} }$ with valuation $v_{{\mathcal {D}}^{\operatorname {op} }}:{\mathcal {N}}^{\operatorname {op} }\rightarrow N$, and an isomorphism $h:{\mathcal {N}}\rightarrow {\mathcal {N}}^{\operatorname {op} }$ defined by $-v_{\mathcal {D}}=v_{{\mathcal {D}}^{\operatorname {op} }}\circ h$, where $-$ is the additive inverse operator of $N$. The digit set for balanced form representations is self-dual.
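A minimal sketch of a balanced form in code: the function below encodes an integer in balanced ternary (the digits ${\bar {1}},0,1$ written as -1, 0, 1) and the valuation recovers the integer; the helper names are illustrative.

def to_balanced_ternary(n: int) -> list[int]:
    # Digits of n in balanced ternary, least significant first.
    digits = []
    while n != 0:
        r = n % 3                  # remainder 0, 1 or 2
        if r == 2:                 # write 2 as 3 - 1: emit -1, carry 1
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits or [0]

def valuation(digits: list[int], b: int = 3) -> int:
    # v_D(m) = sum of f_D(d_i) * b^i over the digit string.
    return sum(d * b**i for i, d in enumerate(digits))

d = to_balanced_ternary(7)         # 7 = 9 - 3 + 1
print(d, valuation(d))             # -> [1, -1, 1] 7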
For integers Given the digit set ${\mathcal {D}}$ and function $f:{\mathcal {D}}\rightarrow \mathbb {Z} $ as defined above, let us define an integer endofunction $T:\mathbb {Z} \rightarrow \mathbb {Z} $ as follows: $T(n)={\begin{cases}{\frac {n-f(d_{i})}{b}}&{\text{if }}n\equiv i{\bmod {b}},0\leq i<b\end{cases}}$ If the only periodic point of $T$ is the fixed point $0$, then the set of all signed-digit representations of the integers $\mathbb {Z} $ using ${\mathcal {D}}$ is given by the Kleene plus ${\mathcal {D}}^{+}$, the set of all finite concatenated strings of digits $d_{n}\ldots d_{0}$ with at least one digit, with $n\in \mathbb {N} $. Each signed-digit representation $m\in {\mathcal {D}}^{+}$ has a valuation $v_{\mathcal {D}}:{\mathcal {D}}^{+}\rightarrow \mathbb {Z} $ given by $v_{\mathcal {D}}(m)=\sum _{i=0}^{n}f_{\mathcal {D}}(d_{i})b^{i}$. Examples include balanced ternary with digits ${\mathcal {D}}=\lbrace {\bar {1}},0,1\rbrace $. Otherwise, if there exists a non-zero periodic point of $T$, then there exist integers that are represented by an infinite number of non-zero digits in ${\mathcal {D}}$. Examples include the standard decimal numeral system with the digit set $\operatorname {dec} =\lbrace 0,1,2,3,4,5,6,7,8,9\rbrace $, which requires an infinite number of the digit $9$ to represent the additive inverse $-1$, as $T_{\operatorname {dec} }(-1)={\frac {-1-9}{10}}=-1$, and the positional numeral system with the digit set ${\mathcal {D}}=\lbrace {\text{A}},0,1\rbrace $ with $f({\text{A}})=-4$, which requires an infinite number of the digit ${\text{A}}$ to represent the number $2$, as $T_{\mathcal {D}}(2)={\frac {2-(-4)}{3}}=2$. For decimal fractions Main article: Decimal representation If the integers can be represented by the Kleene plus ${\mathcal {D}}^{+}$, then the set of all signed-digit representations of the decimal fractions, or $b$-adic rationals $\mathbb {Z} [1/b]$, is given by ${\mathcal {Q}}={\mathcal {D}}^{+}\times {\mathcal {P}}\times {\mathcal {D}}^{*}$, the Cartesian product of the Kleene plus ${\mathcal {D}}^{+}$ (the set of all finite concatenated strings of digits $d_{n}\ldots d_{0}$ with at least one digit), the singleton ${\mathcal {P}}$ consisting of the radix point ($.$ or $,$), and the Kleene star ${\mathcal {D}}^{*}$ (the set of all finite concatenated strings of digits $d_{-1}\ldots d_{-m}$), with $m,n\in \mathbb {N} $. Each signed-digit representation $q\in {\mathcal {Q}}$ has a valuation $v_{\mathcal {D}}:{\mathcal {Q}}\rightarrow \mathbb {Z} [1/b]$ given by $v_{\mathcal {D}}(q)=\sum _{i=-m}^{n}f_{\mathcal {D}}(d_{i})b^{i}$ For real numbers Main article: Construction of the reals § Construction from Cauchy sequences If the integers can be represented by the Kleene plus ${\mathcal {D}}^{+}$, then the set of all signed-digit representations of the real numbers $\mathbb {R} $ is given by ${\mathcal {R}}={\mathcal {D}}^{+}\times {\mathcal {P}}\times {\mathcal {D}}^{\mathbb {N} }$, the Cartesian product of the Kleene plus ${\mathcal {D}}^{+}$ (the set of all finite concatenated strings of digits $d_{n}\ldots d_{0}$ with at least one digit), the singleton ${\mathcal {P}}$ consisting of the radix point ($.$ or $,$), and the Cantor space ${\mathcal {D}}^{\mathbb {N} }$ (the set of all infinite concatenated strings of digits $d_{-1}d_{-2}\ldots $), with $n\in \mathbb {N} $.
Each signed-digit representation $r\in {\mathcal {R}}$ has a valuation $v_{\mathcal {D}}:{\mathcal {R}}\rightarrow \mathbb {R} $ given by $v_{\mathcal {D}}(r)=\sum _{i=-\infty }^{n}f_{\mathcal {D}}(d_{i})b^{i}$. The infinite series always converges to a finite real number. For other number systems All base-$b$ numerals can be represented as a subset of ${\mathcal {D}}^{\mathbb {Z} }$, the set of all doubly infinite sequences of digits in ${\mathcal {D}}$, where $\mathbb {Z} $ is the set of integers, and the ring of base-$b$ numerals is represented by the formal power series ring $\mathbb {Z} [[b,b^{-1}]]$, the doubly infinite series $\sum _{i=-\infty }^{\infty }a_{i}b^{i}$ where $a_{i}\in \mathbb {Z} $ for $i\in \mathbb {Z} $. Integers modulo powers of b The set of all signed-digit representations of the integers modulo $b^{n}$, $\mathbb {Z} /b^{n}\mathbb {Z} $, is given by the set ${\mathcal {D}}^{n}$, the set of all finite concatenated strings of digits $d_{n-1}\ldots d_{0}$ of length $n$, with $n\in \mathbb {N} $. Each signed-digit representation $m\in {\mathcal {D}}^{n}$ has a valuation $v_{\mathcal {D}}:{\mathcal {D}}^{n}\rightarrow \mathbb {Z} /b^{n}\mathbb {Z} $ given by $v_{\mathcal {D}}(m)\equiv \sum _{i=0}^{n-1}f_{\mathcal {D}}(d_{i})b^{i}{\bmod {b}}^{n}$ Prüfer groups A Prüfer group is the quotient group $\mathbb {Z} (b^{\infty })=\mathbb {Z} [1/b]/\mathbb {Z} $ of the $b$-adic rationals by the integers. The set of all signed-digit representations of the Prüfer group is given by the Kleene star ${\mathcal {D}}^{*}$, the set of all finite concatenated strings of digits $d_{1}\ldots d_{n}$, with $n\in \mathbb {N} $. Each signed-digit representation $p\in {\mathcal {D}}^{*}$ has a valuation $v_{\mathcal {D}}:{\mathcal {D}}^{*}\rightarrow \mathbb {Z} (b^{\infty })$ given by $v_{\mathcal {D}}(p)\equiv \sum _{i=1}^{n}f_{\mathcal {D}}(d_{i})b^{-i}{\bmod {1}}$ Circle group The circle group is the quotient group $\mathbb {T} =\mathbb {R} /\mathbb {Z} $ of the real numbers by the integers. The set of all signed-digit representations of the circle group is given by the Cantor space ${\mathcal {D}}^{\mathbb {N} }$, the set of all right-infinite concatenated strings of digits $d_{1}d_{2}\ldots $. Each signed-digit representation $m\in {\mathcal {D}}^{\mathbb {N} }$ has a valuation $v_{\mathcal {D}}:{\mathcal {D}}^{\mathbb {N} }\rightarrow \mathbb {T} $ given by $v_{\mathcal {D}}(m)\equiv \sum _{i=1}^{\infty }f_{\mathcal {D}}(d_{i})b^{-i}{\bmod {1}}$ The infinite series always converges. b-adic integers The set of all signed-digit representations of the $b$-adic integers, $\mathbb {Z} _{b}$, is given by the Cantor space ${\mathcal {D}}^{\mathbb {N} }$, the set of all left-infinite concatenated strings of digits $\ldots d_{1}d_{0}$. Each signed-digit representation $m\in {\mathcal {D}}^{\mathbb {N} }$ has a valuation $v_{\mathcal {D}}:{\mathcal {D}}^{\mathbb {N} }\rightarrow \mathbb {Z} _{b}$ given by $v_{\mathcal {D}}(m)=\sum _{i=0}^{\infty }f_{\mathcal {D}}(d_{i})b^{i}$ b-adic solenoids The set of all signed-digit representations of the $b$-adic solenoids, $\mathbb {T} _{b}$, is given by the Cantor space ${\mathcal {D}}^{\mathbb {Z} }$, the set of all doubly infinite concatenated strings of digits $\ldots d_{1}d_{0}d_{-1}\ldots $.
Each signed-digit representation $m\in {\mathcal {D}}^{\mathbb {Z} }$ has a valuation $v_{\mathcal {D}}:{\mathcal {D}}^{\mathbb {Z} }\rightarrow \mathbb {T} _{b}$ given by $v_{\mathcal {D}}(m)=\sum _{i=-\infty }^{\infty }f_{\mathcal {D}}(d_{i})b^{i}$ In written and spoken language Indo-Aryan languages The oral and written forms of numbers in the Indo-Aryan languages use a negative numeral (e.g., "un" in Hindi and Bengali, "un" or "unna" in Punjabi, "ekon" in Marathi) for the numbers between 11 and 90 that end with a nine. The numbers followed by their names are shown for Punjabi below (the prefix "ik" means "one"):[8] • 19 unni, 20 vih, 21 ikki • 29 unatti, 30 tih, 31 ikatti • 39 untali, 40 chali, 41 iktali • 49 unanja, 50 panjah, 51 ikvanja • 59 unahat, 60 sath, 61 ikahat • 69 unattar, 70 sattar, 71 ikhattar • 79 unasi, 80 assi, 81 ikiasi • 89 unanve, 90 nabbe, 91 ikinnaven. Similarly, the Sesotho language utilizes negative numerals to form 8's and 9's: • 8 robeli (/Ro-bay-dee/), meaning "break two", i.e. two fingers down • 9 robong (/Ro-bong/), meaning "break one", i.e. one finger down Classical Latin In Classical Latin,[9] the integers 18 and 19 in practice had no spoken or written form containing the words for "eight" or "nine", even though such forms existed. Instead, in Classical Latin, • 18 = duodēvīgintī ("two taken from twenty"), (IIXX or XIIX), • 19 = ūndēvīgintī ("one taken from twenty"), (IXX or XIX) • 20 = vīgintī ("twenty"), (XX). For the following integers [28, 29, 38, 39, ..., 88, 89] the additive form was much more common in the language; however, for the listed numbers, the subtractive form was still preferred. Hence, approaching thirty, numerals were expressed as:[10] • 28 = duodētrīgintā ("two taken from thirty"), less frequently also vīgintī octō / octō et vīgintī ("twenty-eight / eight and twenty"), (IIXXX or XXIIX versus XXVIII, the latter form having been fully outcompeted) • 29 = ūndētrīgintā ("one taken from thirty"), although the less preferred additive form was also available. This is one of the main foundations of contemporary historians' reasoning explaining why the subtractive I- and II- forms were so common in this range of cardinals compared to other ranges. The numerals 98 and 99 could also be expressed in both forms, yet "two to hundred" might have sounded a bit odd; clear evidence is the scarce occurrence of these numbers written in a subtractive fashion in authentic sources. Finnish language The Finnish language also has this feature (by now, only in traces) and it is still in active use today. The spelled-out numerals are formed this way whenever a digit 8 or 9 occurs. The scheme is as follows:[11] • 1 = "yksi" (note: yhd- or yht- mostly when about to be declined; e.g. "yhdessä" = "together, as one [entity]") • 2 = "kaksi" (also note: kahde-, kahte- when declined) • 3 = "kolme" • 4 = "neljä" ... • 7 = "seitsemän" • 8 = "kah(d)eksan" (two left [to reach ten]) • 9 = "yh(d)eksän" (one left [to reach ten]) • 10 = "kymmenen" (ten) The list above is no special case; the pattern appears in larger cardinals as well, e.g.: • 399 = "kolmesataayhdeksänkymmentäyhdeksän" These attributes persist even in the shortest colloquial forms of the numerals: • 1 = "yy" • 2 = "kaa" • 3 = "koo" ... • 7 = "seiska" • 8 = "kasi" • 9 = "ysi" • 10 = "kymppi" However, this phenomenon has no influence on written numerals: the Finnish use the standard Western-Arabic decimal notation.
Time keeping In the English language it is common to refer to times as, for example, 'seven to three', with 'to' performing the negation. Other systems There exist other signed-digit bases such that the base $b\neq b_{+}+b_{-}+1$. A notable example is Booth encoding, which has a digit set ${\mathcal {D}}=\lbrace {\bar {1}},0,1\rbrace $ with $b_{+}=1$ and $b_{-}=1$, but which uses a base $b=2<3=b_{+}+b_{-}+1$. The standard binary numeral system would only use digits of value $\lbrace 0,1\rbrace $. Note that non-standard signed-digit representations are not unique. For instance: $0111_{\mathcal {D}}=4+2+1=7$ $10{\bar {1}}1_{\mathcal {D}}=8-2+1=7$ $1{\bar {1}}11_{\mathcal {D}}=8-4+2+1=7$ $100{\bar {1}}_{\mathcal {D}}=8-1=7$ The non-adjacent form (NAF) of Booth encoding does guarantee a unique representation for every integer value. However, this applies only to integer values. For example, consider the following repeating binary numbers in NAF: ${\frac {2}{3}}=0.{\overline {10}}_{\mathcal {D}}=1.{\overline {0{\bar {1}}}}_{\mathcal {D}}$
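To make the integer case concrete, here is a minimal sketch of the standard conversion of an integer to its non-adjacent form; the function name and the least-significant-digit-first output order are illustrative choices, not part of any fixed interface.

    # Non-adjacent form (NAF): digits in {-1, 0, 1} with no two adjacent
    # nonzero digits.  Output is least-significant digit first (sketch).
    def to_naf(n: int) -> list[int]:
        digits = []
        while n != 0:
            if n % 2 == 1:
                d = 2 - (n % 4)   # 1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
                n -= d
            else:
                d = 0
            digits.append(d)
            n //= 2
        return digits

    # 7 = 8 - 1, matching the representation 100(1-bar) above:
    assert to_naf(7) == [-1, 0, 0, 1]
    assert sum(d * 2**i for i, d in enumerate(to_naf(7))) == 7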
See also • Balanced ternary • Negative base • Redundant binary representation Notes and references 1. Dhananjay Phatak, I. Koren (1994) Hybrid Signed-Digit Number Systems: A Unified Framework for Redundant Number Representations with Bounded Carry Propagation Chains 2. Augustin-Louis Cauchy (16 November 1840) "Sur les moyens d'éviter les erreurs dans les calculs numériques", Comptes rendus 11:789. Also found in Œuvres complètes Ser. 1, vol. 5, pp. 434–442. 3. Cajori, Florian (1993) [1928–1929]. A History of Mathematical Notations. Dover Publications. p. 57. ISBN 978-0486677668. 4. John Colson (1726) "A Short Account of Negativo-Affirmativo Arithmetik", Philosophical Transactions of the Royal Society 34:161–173. Available as Early Journal Content from JSTOR 5. Eduard Selling (1887) Eine neue Rechenmaschine, pp. 15–18, Berlin 6. Rudolf Mehmke (1902) "Numerisches Rechnen", §4 Beschränkung in den verwendeten Ziffern, Klein's encyclopedia, I-2, p. 944. 7. Hirschfeld, J. W. P. (1979). Projective Geometries Over Finite Fields. Oxford University Press. p. 8. ISBN 978-0-19-850295-1. 8. Punjabi numbers from Quizlet 9. J. Matthew Harrington (2016) Synopsis of Ancient Latin Grammar 10. from English Wiktionary 11. from Kielitoimiston sanakirja • J. P. Ballantine (1925) "A Digit for Negative One", American Mathematical Monthly 32:302. • Lui Han, Dongdong Chen, Seok-Bum Ko, Khan A. Wahid, "Non-speculative Decimal Signed Digit Adder", Department of Electrical and Computer Engineering, University of Saskatchewan.
Signed area In mathematics, the signed area or oriented area of a region of an affine plane is its area with orientation specified by the ("plus" or "minus") sign. More generally, the signed area of an arbitrary surface region is its surface area with specified orientation. When the boundary of the region is a simple curve, the signed area also indicates the orientation of the boundary. The integral of a real function can be imagined as the signed area between the line $y=0$ and the curve $y=f(x)$ over an interval [a, b]. Negative area arises in the study of the natural logarithm as the signed area under the curve y = 1/x for x in the positive real numbers: Definition: $\ln x=\int _{1}^{x}{\frac {dt}{t}},\quad x>0.$ "For 0 < x < 1, $\ln x=\int _{1}^{x}{\frac {dt}{t}}=-\int _{x}^{1}{\frac {dt}{t}}<0$ and so ln x is the negative of the area..."[1] In differential geometry, the sign of the area of a region of a surface is associated with the orientation of the surface: "In addition to the area ... one may consider also signed areas of portions of surfaces; in this case the area corresponding to one of the two possible orientations is defined by [A(H), a double integral] while the area corresponding to the other orientation is −A(H)"[2]
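As a quick numerical illustration of reading the integral as a signed area (a Python sketch; the midpoint rule and the step count are arbitrary choices), the area under y = 1/t taken from 1 down to x < 1 comes out negative and matches ln x:

    import math

    # Signed area under y = 1/t between 1 and x, by the midpoint rule (sketch).
    def signed_area_ln(x: float, steps: int = 100_000) -> float:
        a, b = 1.0, x            # integrating from 1 to x; x < 1 flips the sign
        h = (b - a) / steps      # h is negative when x < 1
        return sum(h / (a + (i + 0.5) * h) for i in range(steps))

    print(signed_area_ln(0.5))   # approx. -0.6931, i.e. ln(0.5) < 0
    print(math.log(0.5))         # reference value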
References 1. Stewart, James (1991). Single Variable Calculus (2nd ed.). Brooks/Cole. p. 358. ISBN 0-534-16414-5. 2. Kreyszig, Erwin (1959). Differential Geometry. University of Toronto Press. pp. 114–115. See also • Vector area • Volume form • Signed measure External links • Kleitman, Daniel. "Chapter 15: Areas and Volumes of Parallel Sided Figures; Determinants". Kleitman's Homepage. MIT Math Department Homepage.
Curvature In mathematics, curvature is any of several strongly related concepts in geometry. Intuitively, the curvature is the amount by which a curve deviates from being a straight line, or a surface deviates from being a plane. For curves, the canonical example is that of a circle, which has a curvature equal to the reciprocal of its radius. Smaller circles bend more sharply, and hence have higher curvature. The curvature at a point of a differentiable curve is the curvature of its osculating circle, that is, the circle that best approximates the curve near this point. The curvature of a straight line is zero. In contrast to the tangent, which is a vector quantity, the curvature at a point is typically a scalar quantity, that is, it is expressed by a single real number. For surfaces (and, more generally, for higher-dimensional manifolds) that are embedded in a Euclidean space, the concept of curvature is more complex, as it depends on the choice of a direction on the surface or manifold. This leads to the concepts of maximal curvature, minimal curvature, and mean curvature. For Riemannian manifolds (of dimension at least two) that are not necessarily embedded in a Euclidean space, one can define the curvature intrinsically, that is, without referring to an external space. See Curvature of Riemannian manifolds for the definition, which is done in terms of lengths of curves traced on the manifold, and expressed, using linear algebra, by the Riemann curvature tensor. History In Tractatus de configurationibus qualitatum et motuum,[1] the 14th-century philosopher and mathematician Nicole Oresme introduces the concept of curvature as a measure of departure from straightness; for circles he takes the curvature to be inversely proportional to the radius, and he attempts to extend this idea to other curves as a continuously varying magnitude.[2] The curvature of a differentiable curve was originally defined through osculating circles. In this setting, Augustin-Louis Cauchy showed that the center of curvature is the intersection point of two infinitely close normal lines to the curve.[3] Plane curves Intuitively, the curvature describes for any part of a curve how much the curve direction changes over a small distance travelled (e.g. angle in rad/m), so it is a measure of the instantaneous rate of change of direction of a point that moves on the curve: the larger the curvature, the larger this rate of change. In other words, the curvature measures how fast the unit tangent vector to the curve rotates[4] (fast in terms of curve position). In fact, it can be proved that this instantaneous rate of change is exactly the curvature. More precisely, suppose that the point is moving on the curve at a constant speed of one unit, that is, the position of the point P(s) is a function of the parameter s, which may be thought of as the time or as the arc length from a given origin. Let T(s) be a unit tangent vector of the curve at P(s), which is also the derivative of P(s) with respect to s. Then, the derivative of T(s) with respect to s is a vector that is normal to the curve and whose length is the curvature. To be meaningful, the definition of the curvature and its different characterizations require that the curve be continuously differentiable near P, so that it has a tangent that varies continuously; they also require that the curve be twice differentiable at P, to ensure the existence of the involved limits and of the derivative of T(s).
The characterization of the curvature in terms of the derivative of the unit tangent vector is probably less intuitive than the definition in terms of the osculating circle, but formulas for computing the curvature are easier to deduce. Therefore, and also because of its use in kinematics, this characterization is often given as a definition of the curvature. Osculating circle Historically, the curvature of a differentiable curve was defined through the osculating circle, which is the circle that best approximates the curve at a point. More precisely, given a point P on a curve, every other point Q of the curve defines a circle (or sometimes a line) passing through Q and tangent to the curve at P. The osculating circle is the limit, if it exists, of this circle when Q tends to P. Then the center and the radius of curvature of the curve at P are the center and the radius of the osculating circle. The curvature is the reciprocal of the radius of curvature. That is, the curvature is $\kappa ={\frac {1}{R}},$ where R is the radius of curvature[5] (the whole circle has this curvature; it can be read as a turn of 2π over the length 2πR). This definition is difficult to manipulate and to express in formulas. Therefore, other equivalent definitions have been introduced. In terms of arc-length parametrization Every differentiable curve can be parametrized with respect to arc length.[6] In the case of a plane curve, this means the existence of a parametrization γ(s) = (x(s), y(s)), where x and y are real-valued differentiable functions whose derivatives satisfy $\|{\boldsymbol {\gamma }}'\|={\sqrt {x'(s)^{2}+y'(s)^{2}}}=1.$ This means that the tangent vector $\mathbf {T} (s)={\bigl (}x'(s),y'(s){\bigr )}$ has a norm equal to one and is thus a unit tangent vector. If the curve is twice differentiable, that is, if the second derivatives of x and y exist, then the derivative of T(s) exists. This vector is normal to the curve, its norm is the curvature κ(s), and it is oriented toward the center of curvature. That is, ${\begin{aligned}\mathbf {T} (s)&={\boldsymbol {\gamma }}'(s),\\[8mu]\|\mathbf {T} (s)\|^{2}&=1\ {\text{(constant)}}\implies \mathbf {T} '(s)\cdot \mathbf {T} (s)=0,\\[5mu]\kappa (s)&=\|\mathbf {T} '(s)\|=\|{\boldsymbol {\gamma }}''(s)\|={\sqrt {x''(s)^{2}+y''(s)^{2}}}\end{aligned}}$ Moreover, because the radius of curvature is (assuming κ(s) ≠ 0) $R(s)={\frac {1}{\kappa (s)}},$ and the center of curvature is on the normal to the curve, the center of curvature is the point $\mathbf {C} (s)={\boldsymbol {\gamma }}(s)+{\frac {1}{\kappa (s)^{2}}}\mathbf {T} '(s).$ (In case the curvature is zero, the center of curvature is not located anywhere on the plane R² and is often said to be located "at infinity".) If N(s) is the unit normal vector obtained from T(s) by a counterclockwise rotation of π/2, then $\mathbf {T} '(s)=k(s)\mathbf {N} (s),$ with k(s) = ± κ(s). The real number k(s) is called the oriented curvature or signed curvature. It depends on both the orientation of the plane (definition of counterclockwise) and the orientation of the curve provided by the parametrization. In fact, the change of variable s → –s provides another arc-length parametrization, and changes the sign of k(s). In terms of a general parametrization Let γ(t) = (x(t), y(t)) be a proper parametric representation of a twice differentiable plane curve. Here proper means that on the domain of definition of the parametrization, the derivative dγ/dt is defined, differentiable and nowhere equal to the zero vector.
With such a parametrization, the signed curvature is $k={\frac {x'y''-y'x''}{{\bigl (}{x'}^{2}+{y'}^{2}{\bigr )}{\vphantom {'}}^{3/2}}},$ where primes refer to derivatives with respect to t. The curvature κ is thus $\kappa ={\frac {\left|x'y''-y'x''\right|}{{\bigl (}{x'}^{2}+{y'}^{2}{\bigr )}{\vphantom {'}}^{3/2}}}.$ These can be expressed in a coordinate-free way as $k={\frac {\det \left({\boldsymbol {\gamma }}',{\boldsymbol {\gamma }}''\right)}{\|{\boldsymbol {\gamma }}'\|^{3}}},\qquad \kappa ={\frac {\left|\det \left({\boldsymbol {\gamma }}',{\boldsymbol {\gamma }}''\right)\right|}{\|{\boldsymbol {\gamma }}'\|^{3}}}.$ These formulas can be derived from the special case of arc-length parametrization in the following way. The above conditions on the parametrization imply that the arc length s is a differentiable monotonic function of the parameter t, and conversely that t is a monotonic function of s. Moreover, by changing, if needed, s to –s, one may suppose that these functions are increasing and have a positive derivative. Using notation of the preceding section and the chain rule, one has ${\frac {d{\boldsymbol {\gamma }}}{dt}}={\frac {ds}{dt}}\mathbf {T} ,$ and thus, by taking the norm of both sides ${\frac {dt}{ds}}={\frac {1}{\|{\boldsymbol {\gamma }}'\|}},$ where the prime denotes differentiation with respect to t. The curvature is the norm of the derivative of T with respect to s. By using the above formula and the chain rule this derivative and its norm can be expressed in terms of γ′ and γ″ only, with the arc-length parameter s completely eliminated, giving the above formulas for the curvature. Graph of a function The graph of a function y = f(x) is a special case of a parametrized curve, of the form ${\begin{aligned}x&=t\\y&=f(t).\end{aligned}}$ As the first and second derivatives of x are 1 and 0, previous formulas simplify to $\kappa ={\frac {\left|y''\right|}{{\bigl (}1+{y'}^{2}{\bigr )}{\vphantom {'}}^{3/2}}},$ for the curvature, and to $k={\frac {y''}{{\bigl (}1+{y'}^{2}{\bigr )}{\vphantom {'}}^{3/2}}},$ for the signed curvature. In the general case of a curve, the sign of the signed curvature is somewhat arbitrary, as it depends on the orientation of the curve. In the case of the graph of a function, there is a natural orientation by increasing values of x. This makes the sign of the signed curvature significant. The sign of the signed curvature is the same as the sign of the second derivative of f. If it is positive, then the graph has an upward concavity; if it is negative, the graph has a downward concavity; if it is zero, then one has an inflection point or an undulation point. When the slope of the graph (that is, the derivative of the function) is small, the signed curvature is well approximated by the second derivative. More precisely, using big O notation, one has $k(x)=y''{\Bigl (}1+O{\bigl (}{y'}^{2}{\bigr )}{\Bigr )}.$ It is common in physics and engineering to approximate the curvature with the second derivative, for example, in beam theory or for deriving the wave equation of a string under tension, and other applications where small slopes are involved. This often allows systems that are otherwise nonlinear to be treated approximately as linear.
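The general-parametrization formula is easy to check numerically. The following sketch (Python; the radius-2 circle, the sample parameter, and the finite-difference step are arbitrary choices) approximates the derivatives by central differences and recovers k = 1/r for a counterclockwise circle:

    import math

    # Signed curvature from a general parametrization:
    # k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), derivatives by central differences.
    def signed_curvature(gamma, t, h=1e-5):
        x1 = (gamma(t + h)[0] - gamma(t - h)[0]) / (2 * h)
        y1 = (gamma(t + h)[1] - gamma(t - h)[1]) / (2 * h)
        x2 = (gamma(t + h)[0] - 2 * gamma(t)[0] + gamma(t - h)[0]) / h**2
        y2 = (gamma(t + h)[1] - 2 * gamma(t)[1] + gamma(t - h)[1]) / h**2
        return (x1 * y2 - y1 * x2) / (x1**2 + y1**2) ** 1.5

    circle = lambda t: (2 * math.cos(t), 2 * math.sin(t))   # radius r = 2
    print(signed_curvature(circle, 0.7))                    # approx. 0.5 = 1/r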
Polar coordinates If a curve is defined in polar coordinates by the radius expressed as a function of the polar angle, that is r is a function of θ, then its curvature is $\kappa (\theta )={\frac {\left|r^{2}+2{r'}^{2}-r\,r''\right|}{{\bigl (}r^{2}+{r'}^{2}{\bigr )}{\vphantom {'}}^{3/2}}}$ where the prime refers to differentiation with respect to θ. This results from the formula for general parametrizations, by considering the parametrization ${\begin{aligned}x&=r(\theta )\cos \theta \\y&=r(\theta )\sin \theta \end{aligned}}$ Implicit curve For a curve defined by an implicit equation F(x, y) = 0 with partial derivatives denoted Fx , Fy , Fxx , Fxy , Fyy , the curvature is given by[7] $\kappa ={\frac {\left|F_{y}^{2}F_{xx}-2F_{x}F_{y}F_{xy}+F_{x}^{2}F_{yy}\right|}{{\bigl (}F_{x}^{2}+F_{y}^{2}{\bigr )}{\vphantom {'}}^{3/2}}}.$ The signed curvature is not defined, as it depends on an orientation of the curve that is not provided by the implicit equation. Note that changing F into –F would not change the curve defined by F(x, y) = 0, but it would change the sign of the numerator if the absolute value were omitted in the preceding formula. A point of the curve where Fx = Fy = 0 is a singular point, which means that the curve is not differentiable at this point, and thus that the curvature is not defined (most often, the point is either a crossing point or a cusp). The above formula for the curvature can be derived from the expression of the curvature of the graph of a function by using the implicit function theorem and the fact that, on such a curve, one has ${\frac {dy}{dx}}=-{\frac {F_{x}}{F_{y}}}.$ Examples It can be useful to verify on simple examples that the different formulas given in the preceding sections give the same result. Circle A common parametrization of a circle of radius r is γ(t) = (r cos t, r sin t). The formula for the curvature gives $k(t)={\frac {r^{2}\sin ^{2}t+r^{2}\cos ^{2}t}{{\bigl (}r^{2}\cos ^{2}t+r^{2}\sin ^{2}t{\bigr )}{\vphantom {'}}^{3/2}}}={\frac {1}{r}}.$ It follows, as expected, that the radius of curvature is the radius of the circle, and that the center of curvature is the center of the circle. The circle is a rare case where the arc-length parametrization is easy to compute, as it is ${\boldsymbol {\gamma }}(s)=\left(r\cos {\frac {s}{r}},\,r\sin {\frac {s}{r}}\right).$ It is an arc-length parametrization, since the norm of ${\boldsymbol {\gamma }}'(s)=\left(-\sin {\frac {s}{r}},\,\cos {\frac {s}{r}}\right)$ is equal to one. This parametrization gives the same value for the curvature, as it amounts to division by r³ in both the numerator and the denominator in the preceding formula. The same circle can also be defined by the implicit equation F(x, y) = 0 with F(x, y) = x² + y² – r². Then, the formula for the curvature in this case gives ${\begin{aligned}\kappa &={\frac {\left|F_{y}^{2}F_{xx}-2F_{x}F_{y}F_{xy}+F_{x}^{2}F_{yy}\right|}{{\bigl (}F_{x}^{2}+F_{y}^{2}{\bigr )}{\vphantom {'}}^{3/2}}}\\&={\frac {8y^{2}+8x^{2}}{{\bigl (}4x^{2}+4y^{2}{\bigr )}{\vphantom {'}}^{3/2}}}\\&={\frac {8r^{2}}{{\bigl (}4r^{2}{\bigr )}{\vphantom {'}}^{3/2}}}={\frac {1}{r}}.\end{aligned}}$ Parabola Consider the parabola y = ax² + bx + c. It is the graph of a function, with derivative 2ax + b, and second derivative 2a. So, the signed curvature is $k(x)={\frac {2a}{{\bigl (}1+\left(2ax+b\right)^{2}{\bigr )}{\vphantom {)}}^{3/2}}}.$ It has the sign of a for all values of x.
This means that, if a > 0, the concavity is upward directed everywhere; if a < 0, the concavity is downward directed; for a = 0, the curvature is zero everywhere, confirming that the parabola degenerates into a line in this case. The (unsigned) curvature is maximal for x = –b/2a, that is, at the stationary point (zero derivative) of the function, which is the vertex of the parabola. Consider the parametrization γ(t) = (t, at² + bt + c) = (x, y). The first derivative of x is 1, and the second derivative is zero. Substituting into the formula for general parametrizations gives exactly the same result as above, with x replaced by t, if we use primes for derivatives with respect to the parameter t. The same parabola can also be defined by the implicit equation F(x, y) = 0 with F(x, y) = ax² + bx + c – y. As Fy = –1, and Fyy = Fxy = 0, one obtains exactly the same value for the (unsigned) curvature. However, the signed curvature is meaningless here, as –F(x, y) = 0 is a valid implicit equation for the same parabola, which gives the opposite sign for the curvature. Frenet–Serret formulas for plane curves The expression of the curvature in terms of arc-length parametrization is essentially the first Frenet–Serret formula $\mathbf {T} '(s)=\kappa (s)\mathbf {N} (s),$ where the primes refer to the derivatives with respect to the arc length s, and N(s) is the normal unit vector in the direction of T′(s). As planar curves have zero torsion, the second Frenet–Serret formula provides the relation ${\begin{aligned}{\frac {d\mathbf {N} }{ds}}&=-\kappa \mathbf {T} ,\\&=-\kappa {\frac {d{\boldsymbol {\gamma }}}{ds}}.\end{aligned}}$ For a general parametrization by a parameter t, one needs expressions involving derivatives with respect to t. As these are obtained by multiplying by ds/dt the derivatives with respect to s, one has, for any proper parametrization $\mathbf {N} '(t)=-\kappa (t){\boldsymbol {\gamma }}'(t).$ Curvature comb A curvature comb[8] can be used to represent graphically the curvature of every point on a curve. If $t\mapsto x(t)$ is a parametrized curve, its comb is defined as the parametrized curve $t\mapsto x(t)+d\kappa (t)n(t)$ where $\kappa ,n$ are the curvature and normal vector and $d$ is a scaling factor (to be chosen so as to enhance the graphical representation). Space curves As in the case of curves in two dimensions, the curvature of a regular space curve C in three dimensions (and higher) is the magnitude of the acceleration of a particle moving with unit speed along the curve. Thus if γ(s) is the arc-length parametrization of C then the unit tangent vector T(s) is given by $\mathbf {T} (s)={\boldsymbol {\gamma }}'(s)$ and the curvature is the magnitude of the acceleration: $\kappa (s)=\|\mathbf {T} '(s)\|=\|{\boldsymbol {\gamma }}''(s)\|.$ The direction of the acceleration is the unit normal vector N(s), which is defined by $\mathbf {N} (s)={\frac {\mathbf {T} '(s)}{\|\mathbf {T} '(s)\|}}.$ The plane containing the two vectors T(s) and N(s) is the osculating plane to the curve at γ(s). The curvature has the following geometrical interpretation. There exists a circle in the osculating plane tangent to γ(s) whose Taylor series to second order at the point of contact agrees with that of γ(s). This is the osculating circle to the curve.
The radius of the circle R(s) is called the radius of curvature, and the curvature is the reciprocal of the radius of curvature: $\kappa (s)={\frac {1}{R(s)}}.$ The tangent, curvature, and normal vector together describe the second-order behavior of a curve near a point. In three dimensions, the third-order behavior of a curve is described by a related notion of torsion, which measures the extent to which a curve tends to move along a helical path in space. The torsion and curvature are related by the Frenet–Serret formulas (in three dimensions) and their generalization (in higher dimensions). General expressions For a parametrically-defined space curve in three dimensions given in Cartesian coordinates by γ(t) = (x(t), y(t), z(t)), the curvature is $\kappa ={\frac {\sqrt {{\bigl (}z''y'-y''z'{\bigr )}{\vphantom {'}}^{2}+{\bigl (}x''z'-z''x'{\bigr )}{\vphantom {'}}^{2}+{\bigl (}y''x'-x''y'{\bigr )}{\vphantom {'}}^{2}}}{{\bigl (}{x'}^{2}+{y'}^{2}+{z'}^{2}{\bigr )}{\vphantom {'}}^{3/2}}},$ where the prime denotes differentiation with respect to the parameter t. This can be expressed independently of the coordinate system by means of the formula[9] $\kappa ={\frac {{\bigl \|}{\boldsymbol {\gamma }}'\times {\boldsymbol {\gamma }}''{\bigr \|}}{{\bigl \|}{\boldsymbol {\gamma }}'{\bigr \|}{\vphantom {'}}^{3}}}$ where × denotes the vector cross product. The following formula is valid for the curvature of curves in a Euclidean space of any dimension: $\kappa ={\frac {\sqrt {{\bigl \|}{\boldsymbol {\gamma }}'{\bigr \|}{\vphantom {'}}^{2}{\bigl \|}{\boldsymbol {\gamma }}''{\bigr \|}{\vphantom {'}}^{2}-{\bigl (}{\boldsymbol {\gamma }}'\cdot {\boldsymbol {\gamma }}''{\bigr )}{\vphantom {'}}^{2}}}{{\bigl \|}{\boldsymbol {\gamma }}'{\bigr \|}{\vphantom {'}}^{3}}}.$ Curvature from arc and chord length Given two points P and Q on C, let s(P,Q) be the arc length of the portion of the curve between P and Q and let d(P,Q) denote the length of the line segment from P to Q. The curvature of C at P is given by the limit[10] $\kappa (P)=\lim _{Q\to P}{\sqrt {\frac {24{\bigl (}s(P,Q)-d(P,Q){\bigr )}}{s(P,Q){\vphantom {Q}}^{3}}}}$ where the limit is taken as the point Q approaches P on C. The denominator can equally well be taken to be d(P,Q)³. The formula is valid in any dimension. Furthermore, by considering the limit independently on either side of P, this definition of the curvature can sometimes accommodate a singularity at P. The formula follows by verifying it for the osculating circle.
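As a quick numerical check of the arc-chord limit (Python; the radius-2 circle and the sample angles are arbitrary choices), the estimate approaches 1/r = 0.5 as Q approaches P along the circle:

    import math

    # Arc-versus-chord estimate of curvature on a circle of radius 2 (sketch):
    # kappa(P) = lim sqrt(24 (s - d) / s^3), which should approach 1/r = 0.5.
    r = 2.0
    for t in [0.5, 0.1, 0.01]:            # angle between P and Q; Q -> P as t -> 0
        s = r * t                         # arc length between P and Q
        d = 2 * r * math.sin(t / 2)       # chord length between P and Q
        print(math.sqrt(24 * (s - d) / s**3))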
Surfaces For broader coverage of this topic, see Differential geometry of surfaces. The curvature of curves drawn on a surface is the main tool for defining and studying the curvature of the surface. Curves on surfaces For a curve drawn on a surface (embedded in three-dimensional Euclidean space), several curvatures are defined, which relate the direction of curvature to the surface's unit normal vector, including the: • normal curvature • geodesic curvature • geodesic torsion Any non-singular curve on a smooth surface has its tangent vector T contained in the tangent plane of the surface. The normal curvature, kn, is the curvature of the curve projected onto the plane containing the curve's tangent T and the surface normal u; the geodesic curvature, kg, is the curvature of the curve projected onto the surface's tangent plane; and the geodesic torsion (or relative torsion), τr, measures the rate of change of the surface normal around the curve's tangent. Let the curve be arc-length parametrized, and let t = u × T so that T, t, u form an orthonormal basis, called the Darboux frame. The above quantities are related by: ${\begin{pmatrix}\mathbf {T} '\\\mathbf {t} '\\\mathbf {u} '\end{pmatrix}}={\begin{pmatrix}0&\kappa _{\mathrm {g} }&\kappa _{\mathrm {n} }\\-\kappa _{\mathrm {g} }&0&\tau _{\mathrm {r} }\\-\kappa _{\mathrm {n} }&-\tau _{\mathrm {r} }&0\end{pmatrix}}{\begin{pmatrix}\mathbf {T} \\\mathbf {t} \\\mathbf {u} \end{pmatrix}}$ Principal curvature Main article: Principal curvature All curves on the surface with the same tangent vector at a given point will have the same normal curvature, which is the same as the curvature of the curve obtained by intersecting the surface with the plane containing T and u. Taking all possible tangent vectors, the maximum and minimum values of the normal curvature at a point are called the principal curvatures, k1 and k2, and the directions of the corresponding tangent vectors are called principal normal directions. Normal sections Curvature can be evaluated along surface normal sections, similar to § Curves on surfaces above (see for example the Earth radius of curvature). Developable surfaces Some curved surfaces, such as those made from a smooth sheet of paper, can be flattened down into the plane without distorting their intrinsic features in any way. Such developable surfaces have zero Gaussian curvature (see below).[11] Gaussian curvature Main article: Gaussian curvature In contrast to curves, which do not have intrinsic curvature, but do have extrinsic curvature (they only have a curvature given an embedding), surfaces can have intrinsic curvature, independent of an embedding. The Gaussian curvature, named after Carl Friedrich Gauss, is equal to the product of the principal curvatures, k1k2. It has a dimension of length⁻² and is positive for spheres, negative for one-sheet hyperboloids and zero for planes and cylinders. It determines whether a surface is locally convex (when it is positive) or locally saddle-shaped (when it is negative). Gaussian curvature is an intrinsic property of the surface, meaning it does not depend on the particular embedding of the surface; intuitively, this means that ants living on the surface could determine the Gaussian curvature. For example, an ant living on a sphere could measure the sum of the interior angles of a triangle and determine that it was greater than 180 degrees, implying that the space it inhabited had positive curvature. On the other hand, an ant living on a cylinder would not detect any such departure from Euclidean geometry; in particular the ant could not detect that the two surfaces have different mean curvatures (see below), which is a purely extrinsic type of curvature. Formally, Gaussian curvature only depends on the Riemannian metric of the surface. This is Gauss's celebrated Theorema Egregium, which he found while concerned with geographic surveys and mapmaking. An intrinsic definition of the Gaussian curvature at a point P is the following: imagine an ant which is tied to P with a short thread of length r. It runs around P while the thread is completely stretched and measures the length C(r) of one complete trip around P. If the surface were flat, the ant would find C(r) = 2πr.
On curved surfaces, the formula for C(r) will be different, and the Gaussian curvature K at the point P can be computed by the Bertrand–Diguet–Puiseux theorem as $K=\lim _{r\to 0^{+}}3\left({\frac {2\pi r-C(r)}{\pi r^{3}}}\right).$ The integral of the Gaussian curvature over the whole surface is closely related to the surface's Euler characteristic; see the Gauss–Bonnet theorem. The discrete analog of curvature, corresponding to curvature being concentrated at a point and particularly useful for polyhedra, is the (angular) defect; the analog for the Gauss–Bonnet theorem is Descartes' theorem on total angular defect. Because (Gaussian) curvature can be defined without reference to an embedding space, it is not necessary that a surface be embedded in a higher-dimensional space in order to be curved. Such an intrinsically curved two-dimensional surface is a simple example of a Riemannian manifold. Mean curvature Main article: Mean curvature The mean curvature is an extrinsic measure of curvature equal to half the sum of the principal curvatures, (k1 + k2)/2. It has a dimension of length⁻¹. Mean curvature is closely related to the first variation of surface area. In particular, a minimal surface such as a soap film has mean curvature zero and a soap bubble has constant mean curvature. Unlike Gauss curvature, the mean curvature is extrinsic and depends on the embedding; for instance, a cylinder and a plane are locally isometric, but the mean curvature of a plane is zero while that of a cylinder is nonzero. Second fundamental form Main article: Second fundamental form The intrinsic and extrinsic curvature of a surface can be combined in the second fundamental form. This is a quadratic form in the tangent plane to the surface at a point whose value at a particular tangent vector X to the surface is the normal component of the acceleration of a curve along the surface tangent to X; that is, it is the normal curvature to a curve tangent to X (see above). Symbolically, $\operatorname {I\!I} (\mathbf {X} ,\mathbf {X} )=\mathbf {N} \cdot (\nabla _{\mathbf {X} }\mathbf {X} )$ where N is the unit normal to the surface. For unit tangent vectors X, the second fundamental form assumes the maximum value k1 and minimum value k2, which occur in the principal directions u1 and u2, respectively. Thus, by the principal axis theorem, the second fundamental form is $\operatorname {I\!I} (\mathbf {X} ,\mathbf {X} )=k_{1}\left(\mathbf {X} \cdot \mathbf {u} _{1}\right)^{2}+k_{2}\left(\mathbf {X} \cdot \mathbf {u} _{2}\right)^{2}.$ Thus the second fundamental form encodes both the intrinsic and extrinsic curvatures. Shape operator Further information: Shape operator An encapsulation of surface curvature can be found in the shape operator, S, which is a self-adjoint linear operator from the tangent plane to itself (specifically, the differential of the Gauss map). For a surface with tangent vectors X and normal N, the shape operator can be expressed compactly in index summation notation as $\partial _{a}\mathbf {N} =-S_{ba}\mathbf {X} _{b}.$ (Compare the alternative expression of curvature for a plane curve.) The Weingarten equations give the value of S in terms of the coefficients of the first and second fundamental forms as $S=\left(EG-F^{2}\right)^{-1}{\begin{pmatrix}eG-fF&fG-gF\\fE-eF&gE-fF\end{pmatrix}}.$ The principal curvatures are the eigenvalues of the shape operator, the principal curvature directions are its eigenvectors, the Gauss curvature is its determinant, and the mean curvature is half its trace.
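The Weingarten formula is straightforward to exercise numerically. The following sketch (Python with NumPy; the unit-sphere parametrization, sample point, and finite-difference step are arbitrary choices) builds the shape operator from the two fundamental forms and recovers Gaussian curvature K ≈ 1 and mean curvature |H| ≈ 1 for the unit sphere (the sign of H depends on the orientation of the normal):

    import numpy as np

    def surf(u, v):   # unit sphere (illustrative test surface)
        return np.array([np.sin(v) * np.cos(u), np.sin(v) * np.sin(u), np.cos(v)])

    # Shape operator S = (EG - F^2)^(-1) [[eG - fF, fG - gF], [fE - eF, gE - fF]],
    # with E, F, G, e, f, g obtained from central-difference derivatives.
    def shape_operator(r, u, v, h=1e-4):
        ru = (r(u + h, v) - r(u - h, v)) / (2 * h)
        rv = (r(u, v + h) - r(u, v - h)) / (2 * h)
        ruu = (r(u + h, v) - 2 * r(u, v) + r(u - h, v)) / h**2
        rvv = (r(u, v + h) - 2 * r(u, v) + r(u, v - h)) / h**2
        ruv = (r(u + h, v + h) - r(u + h, v - h)
               - r(u - h, v + h) + r(u - h, v - h)) / (4 * h**2)
        n = np.cross(ru, rv)
        n /= np.linalg.norm(n)
        E, F, G = ru @ ru, ru @ rv, rv @ rv
        e, f, g = ruu @ n, ruv @ n, rvv @ n
        return np.array([[e*G - f*F, f*G - g*F],
                         [f*E - e*F, g*E - f*F]]) / (E*G - F**2)

    S = shape_operator(surf, 0.5, 1.0)
    print(np.linalg.det(S), np.trace(S) / 2)   # K approx. 1, H approx. +/- 1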
Curvature of space Further information: Curvature of Riemannian manifolds and Curved space By extension of the former argument, a space of three or more dimensions can be intrinsically curved. The curvature is intrinsic in the sense that it is a property defined at every point in the space, rather than a property defined with respect to a larger space that contains it. In general, a curved space may or may not be conceived as being embedded in a higher-dimensional ambient space; if not, then its curvature can only be defined intrinsically. After the discovery of the intrinsic definition of curvature, which is closely connected with non-Euclidean geometry, many mathematicians and scientists questioned whether ordinary physical space might be curved, although the success of Euclidean geometry up to that time meant that the radius of curvature must be astronomically large. In the theory of general relativity, which describes gravity and cosmology, the idea is slightly generalised to the "curvature of spacetime"; in relativity theory spacetime is a pseudo-Riemannian manifold. Once a time coordinate is defined, the three-dimensional space corresponding to a particular time is generally a curved Riemannian manifold; but since the time coordinate choice is largely arbitrary, it is the underlying spacetime curvature that is physically significant. Although an arbitrarily curved space is very complex to describe, the curvature of a space which is locally isotropic and homogeneous is described by a single Gaussian curvature, as for a surface; mathematically these are strong conditions, but they correspond to reasonable physical assumptions (all points and all directions are indistinguishable). A positive curvature corresponds to the inverse square of the radius of curvature; an example is a sphere or hypersphere. An example of negatively curved space is hyperbolic geometry (see also: non-positive curvature). A space or space-time with zero curvature is called flat. For example, Euclidean space is an example of a flat space, and Minkowski space is an example of a flat spacetime. There are other examples of flat geometries in both settings, though. A torus or a cylinder can both be given flat metrics, but differ in their topology. Other topologies are also possible for curved space. See also shape of the universe. Generalizations The mathematical notion of curvature is also defined in much more general contexts.[12] Many of these generalizations emphasize different aspects of the curvature as it is understood in lower dimensions. One such generalization is kinematic. The curvature of a curve can naturally be considered as a kinematic quantity, representing the force felt by a certain observer moving along the curve; analogously, curvature in higher dimensions can be regarded as a kind of tidal force (this is one way of thinking of the sectional curvature). This generalization of curvature depends on how nearby test particles diverge or converge when they are allowed to move freely in the space; see Jacobi field. Another broad generalization of curvature comes from the study of parallel transport on a surface. For instance, if a vector is moved around a loop on the surface of a sphere keeping parallel throughout the motion, then the final position of the vector may not be the same as the initial position of the vector.
This phenomenon is known as holonomy.[13] Various generalizations capture in an abstract form this idea of curvature as a measure of holonomy; see curvature form. A closely related notion of curvature comes from gauge theory in physics, where the curvature represents a field and a vector potential for the field is a quantity that is in general path-dependent: it may change if an observer moves around a loop. Two more generalizations of curvature are the scalar curvature and Ricci curvature. In a curved surface such as the sphere, the area of a disc on the surface differs from the area of a disc of the same radius in flat space. This difference (in a suitable limit) is measured by the scalar curvature. The difference in area of a sector of the disc is measured by the Ricci curvature. Each of the scalar curvature and Ricci curvature is defined in analogous ways in three and higher dimensions. They are particularly important in relativity theory, where they both appear on the side of Einstein's field equations that represents the geometry of spacetime (the other side of which represents the presence of matter and energy). These generalizations of curvature underlie, for instance, the notion that curvature can be a property of a measure; see curvature of a measure. Another generalization of curvature relies on the ability to compare a curved space with another space that has constant curvature. Often this is done with triangles in the spaces. The notion of a triangle makes sense in metric spaces, and this gives rise to CAT(k) spaces. See also • Curvature form for the appropriate notion of curvature for vector bundles and principal bundles with connection • Curvature of a measure for a notion of curvature in measure theory • Curvature of parametric surfaces • Curvature of Riemannian manifolds for generalizations of Gauss curvature to higher-dimensional Riemannian manifolds • Curvature vector and geodesic curvature for appropriate notions of curvature of curves in Riemannian manifolds, of any dimension • Degree of curvature • Differential geometry of curves for a full treatment of curves embedded in a Euclidean space of arbitrary dimension • Dioptre, a measurement of curvature used in optics • Evolute, the locus of the centers of curvature of a given curve • Fundamental theorem of curves • Gauss–Bonnet theorem for an elementary application of curvature • Gauss map for more geometric properties of Gauss curvature • Gauss's principle of least constraint, an expression of the Principle of Least Action • Mean curvature at one point on a surface • Minimum railway curve radius • Radius of curvature • Second fundamental form for the extrinsic curvature of hypersurfaces in general • Sinuosity • Torsion of a curve Notes 1. Clagett, Marshall (1968), Nicole Oresme and the Medieval Geometry of Qualities and Motions; a treatise on the uniformity and difformity of intensities known as Tractatus de configurationibus qualitatum et motuum, Madison, WI: University of Wisconsin Press, ISBN 0-299-04880-2 2. Serrano, Isabel; Suceavă, Bogdan (2015). "A Medieval Mystery: Nicole Oresme's Concept of Curvitas" (PDF). Notices of the American Mathematical Society. 62 (9): 1030–1034. doi:10.1090/noti1275. 3. Borovik, Alexandre; Katz, Mikhail G. (2011), "Who gave you the Cauchy–Weierstrass tale? The dual history of rigorous calculus", Foundations of Science, 17 (3): 245–276, arXiv:1108.2885, Bibcode:2011arXiv1108.2885B, doi:10.1007/s10699-011-9235-x, S2CID 119320059 4. Pressley, Andrew.
Elementary Differential Geometry (1st ed.). p. 29. 5. Kline, Morris. Calculus: An Intuitive and Physical Approach (2nd ed.). p. 458. 6. Kennedy, John (2011). "The Arc Length Parametrization of a Curve". Archived from the original on 2015-09-28. Retrieved 2013-12-10. 7. Goldman, Ron (2005). "Curvature formulas for implicit curves and surfaces". Computer Aided Geometric Design. 22 (7): 632–658. CiteSeerX 10.1.1.413.3008. doi:10.1016/j.cagd.2005.06.005. 8. Farin, Gerald (2016). "Curvature combs and curvature plots". Computer-Aided Design. 80: 6–8. doi:10.1016/j.cad.2016.08.003. 9. A proof of this can be found at the article on curvature at Wolfram MathWorld. 10. Bitetto, Marco (26 November 2018). "Hyperspatial Dynamics". Nova Universita di Engeneria e Scienza: 206. 11. developable surface, Mathworld. (Retrieved 11 February 2021) 12. Kobayashi, Shoshichi; Nomizu, Katsumi. Foundations of Differential Geometry. Wiley Interscience. vol. 1 ch. 2–3. 13. Henderson, David W.; Taimiņa, Daina. Experiencing Geometry (3rd ed.). pp. 98–99. References • Coolidge, Julian L. (June 1952). "The Unsatisfactory Story of Curvature". American Mathematical Monthly. 59 (6): 375–379. doi:10.2307/2306807. JSTOR 2306807. • Sokolov, Dmitriĭ Dmitrievich (2001) [1994], "Curvature", Encyclopedia of Mathematics, EMS Press • Kline, Morris (1998). Calculus: An Intuitive and Physical Approach. Dover. pp. 457–461. ISBN 978-0-486-40453-0. • Klaf, A. Albert (1956). Calculus Refresher. Dover. pp. 151–168. ISBN 978-0-486-20370-6. • Casey, James (1996). Exploring Curvature. Vieweg+Teubner. ISBN 978-3-528-06475-4. External links • The Feynman Lectures on Physics Vol. II Ch. 42: Curved Space • The History of Curvature • Curvature, Intrinsic and Extrinsic at MathPages
Signed distance function In mathematics and its applications, the signed distance function (or oriented distance function) is the orthogonal distance of a given point x to the boundary of a set Ω in a metric space, with the sign determined by whether or not x is in the interior of Ω. The function has positive values at points x inside Ω, it decreases in value as x approaches the boundary of Ω, where the signed distance function is zero, and it takes negative values outside of Ω.[1] However, the alternative convention is also sometimes taken instead (i.e., negative inside Ω and positive outside).[2] Definition If Ω is a subset of a metric space X with metric d, then the signed distance function f is defined by $f(x)={\begin{cases}d(x,\partial \Omega )&{\mbox{if }}\,x\in \Omega \\-d(x,\partial \Omega )&{\mbox{if }}\,x\in \Omega ^{c}\end{cases}}$ where $\partial \Omega $ denotes the boundary of $\Omega $. For any $x\in X$, $d(x,\partial \Omega ):=\inf _{y\in \partial \Omega }d(x,y)$ where inf denotes the infimum. Properties in Euclidean space If Ω is a subset of the Euclidean space Rⁿ with piecewise smooth boundary, then the signed distance function is differentiable almost everywhere, and its gradient satisfies the eikonal equation $|\nabla f|=1.$ If the boundary of Ω is Cᵏ for k ≥ 2 (see Differentiability classes) then d is Cᵏ on points sufficiently close to the boundary of Ω.[3] In particular, on the boundary f satisfies $\nabla f(x)=N(x),$ where N is the inward normal vector field. The signed distance function is thus a differentiable extension of the normal vector field. In particular, the Hessian of the signed distance function on the boundary of Ω gives the Weingarten map. If, further, Γ is a region sufficiently close to the boundary of Ω that f is twice continuously differentiable on it, then there is an explicit formula involving the Weingarten map Wx for the Jacobian of changing variables in terms of the signed distance function and nearest boundary point. Specifically, if T(∂Ω, μ) is the set of points within distance μ of the boundary of Ω (i.e. the tubular neighbourhood of radius μ), and g is an absolutely integrable function on Γ, then $\int _{T(\partial \Omega ,\mu )}g(x)\,dx=\int _{\partial \Omega }\int _{-\mu }^{\mu }g(u+\lambda N(u))\,\det(I-\lambda W_{u})\,d\lambda \,dS_{u},$ where det denotes the determinant and dSu indicates that we are taking the surface integral.[4] Algorithms Algorithms for calculating the signed distance function include the efficient fast marching method, fast sweeping method[5] and the more general level-set method. For voxel rendering, a fast algorithm for calculating the SDF in taxicab geometry uses summed-area tables.[6] Applications Signed distance functions are applied, for example, in real-time rendering,[7] for instance the method of SDF ray marching, and computer vision.[8][9] SDF has been used to describe object geometry in real-time rendering, usually in a raymarching context, starting in the mid-2000s. By 2007, Valve was using SDFs to render large pixel-size (or high-DPI) smooth fonts with GPU acceleration in its games.[10] Valve's method is not perfect as it runs in raster space in order to avoid the computational complexity of solving the problem in the (continuous) vector space. The rendered text often loses sharp corners.
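As a minimal sketch of the definition and of the eikonal property discussed above (Python; the disk of radius 1.5 is an arbitrary choice, and the sign convention is positive inside Ω, as in the first convention given in the article):

    import math

    # Signed distance to a disk of radius r centered at the origin,
    # positive inside, zero on the boundary, negative outside (sketch).
    def sdf_disk(x, y, r=1.5):
        return r - math.hypot(x, y)

    print(sdf_disk(0.5, 0.0))    # 1.0  (inside)
    print(sdf_disk(1.5, 0.0))    # 0.0  (on the boundary)
    print(sdf_disk(3.0, 0.0))    # -1.5 (outside)

    # |grad f| = 1 almost everywhere (the eikonal equation), by differences:
    h = 1e-6
    gx = (sdf_disk(2.0 + h, 1.0) - sdf_disk(2.0 - h, 1.0)) / (2 * h)
    gy = (sdf_disk(2.0, 1.0 + h) - sdf_disk(2.0, 1.0 - h)) / (2 * h)
    print(math.hypot(gx, gy))    # approx. 1.0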
In 2014, an improved method was presented by Behdad Esfahbod. Behdad's GLyphy approximates the font's Bézier curves with arc splines, accelerated by grid-based discretization techniques (which cull too-far-away points) to run in real time.[11] A modified version of the SDF was introduced as a loss function to minimise the error in interpenetration of pixels while rendering multiple objects.[12] In particular, for any pixel that does not belong to an object, no penalty is imposed if it also lies outside the object in the rendition; if it lies inside, a positive value proportional to its distance inside the object is imposed. $f(x)={\begin{cases}0&{\text{if }}\,x\in \Omega ^{c}\\d(x,\partial \Omega )&{\text{if }}\,x\in \Omega \end{cases}}$ In 2020, the FOSS game engine Godot 4.0 received SDF-based real-time global illumination (SDFGI), which became a compromise between more realistic voxel-based GI and baked GI. Its core advantage is that it can be applied to infinite space, which allows developers to use it for open-world games.[13] In 2023, a "GPUI" UI framework was released to draw all UI elements using the GPU, many parts using SDFs. The author claims to have produced a Zed code editor that renders at 120 fps. The work makes use of Inigo Quilez's list of geometric primitives in SDF, Evan Wallace (co-founder of Figma)'s approximated gaussian blur in SDF, and a new rounded rectangle SDF.[14] See also • Distance function • Level-set method • Eikonal equation • Parallel (aka offset) curve • Signed arc length Notes 1. Chan, T.; Zhu, W. (2005). Level set based shape prior segmentation. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. doi:10.1109/CVPR.2005.212. 2. Malladi, R.; Sethian, J.A.; Vemuri, B.C. (1995). "Shape modeling with front propagation: a level set approach". IEEE Transactions on Pattern Analysis and Machine Intelligence. 17 (2): 158–175. CiteSeerX 10.1.1.33.2443. doi:10.1109/34.368173. S2CID 9505101. 3. Gilbarg & Trudinger 1983, Lemma 14.16. 4. Gilbarg & Trudinger 1983, Equation (14.98). 5. Zhao, Hongkai (2005). "A fast sweeping method for eikonal equations". Mathematics of Computation. 74 (250): 603–627. 6. Nilsson, Tobias (2019). "Optimization Methods for Direct Volume Rendering on the Client Side Web" (PDF). Digitala Vetenskapliga Arkivet. Retrieved 2022-07-08. 7. Tomas Akenine-Möller; Eric Haines; Naty Hoffman (6 August 2018). Real-Time Rendering, Fourth Edition. CRC Press. ISBN 978-1-351-81615-1. 8. Perera, S.; Barnes, N.; He, X.; Izadi, S.; Kohli, P.; Glocker, B. (January 2015). "Motion Segmentation of Truncated Signed Distance Function Based Volumetric Surfaces". 2015 IEEE Winter Conference on Applications of Computer Vision. pp. 1046–1053. doi:10.1109/WACV.2015.144. ISBN 978-1-4799-6683-7. S2CID 16811314. 9. Izadi, Shahram; Kim, David; Hilliges, Otmar; Molyneaux, David; Newcombe, Richard; Kohli, Pushmeet; Shotton, Jamie; Hodges, Steve; Freeman, Dustin (2011). "KinectFusion". Proceedings of the 24th annual ACM symposium on User interface software and technology. UIST '11. New York, NY, USA: ACM. pp. 559–568. doi:10.1145/2047196.2047270. ISBN 9781450307161. S2CID 3345516. 10. Green, Chris (2007). "Improved alpha-tested magnification for vector textures and special effects". ACM SIGGRAPH 2007 Courses on - SIGGRAPH '07: 9–18. CiteSeerX 10.1.1.170.9418. doi:10.1145/1281500.1281665. ISBN 9781450318235. S2CID 7479538. 11. Behdad Esfahbod. GLyphy: high-quality glyph rendering using OpenGL ES2 shaders [linux.conf.au 2014]. YouTube. Archived from the original on 2021-12-11. Source Code 12.
Jiang, Wen; Kolotouros, Nikos; Pavlakos, Georgios; Zhou, Xiaowei; Daniilidis, Kostas (2020-06-15). "Coherent Reconstruction of Multiple Humans from a Single Image". arXiv:2006.08586 [cs.CV]. 13. "Godot 4.0 gets SDF based real-time global illumination". Godot Engine. 14. Scandurra, Antonio (7 March 2023). "Leveraging Rust and the GPU to render user interfaces at 120 FPS - Zed Blog". Zed. References • Stanley J. Osher and Ronald P. Fedkiw (2003). Level Set Methods and Dynamic Implicit Surfaces. Springer. ISBN 9780387227467. • Gilbarg, D.; Trudinger, N. S. (1983). Elliptic Partial Differential Equations of Second Order. Grundlehren der mathematischen Wissenschaften. Vol. 224 (2nd ed.). Springer-Verlag. (or the Appendix of the 1977 1st ed.)
Signed graph In the area of graph theory in mathematics, a signed graph is a graph in which each edge has a positive or negative sign. A signed graph is balanced if the product of edge signs around every cycle is positive. The name "signed graph" and the notion of balance appeared first in a mathematical paper of Frank Harary in 1953.[1] Dénes Kőnig had already studied equivalent notions in 1936 under a different terminology but without recognizing the relevance of the sign group.[2] At the Center for Group Dynamics at the University of Michigan, Dorwin Cartwright and Harary generalized Fritz Heider's psychological theory of balance in triangles of sentiments to a psychological theory of balance in signed graphs.[3][4] Signed graphs have been rediscovered many times because they come up naturally in many unrelated areas.[5] For instance, they enable one to describe and analyze the geometry of subsets of the classical root systems. They appear in topological graph theory and group theory. They are a natural context for questions about odd and even cycles in graphs. They appear in computing the ground state energy in the non-ferromagnetic Ising model; for this one needs to find a largest balanced edge set in the signed graph Σ. They have been applied to data classification in correlation clustering. Fundamental theorem The sign of a path is the product of the signs of its edges. Thus a path is positive if and only if there is an even number of negative edges in it (where zero is even). In the mathematical balance theory of Frank Harary, a signed graph is balanced when every cycle is positive. Harary proves that a signed graph is balanced if and only if (1) for every pair of nodes, all paths between them have the same sign, or, equivalently, (2) the vertices partition into a pair of subsets (possibly empty), each containing only positive edges, but connected by negative edges.[1] It generalizes the theorem that an ordinary (unsigned) graph is bipartite if and only if every cycle has even length. A simple proof uses the method of switching. Switching a signed graph means reversing the signs of all edges between a vertex subset and its complement. To prove Harary's theorem, one shows by induction that Σ can be switched to be all positive if and only if it is balanced. A weaker theorem, but with a simpler proof, is that if every 3-cycle in a signed complete graph is positive, then the graph is balanced. For the proof, pick an arbitrary node n and place it and all those nodes that are linked to n by a positive edge in one group, called A, and all those linked to n by a negative edge in the other, called B. Since this is a complete graph, every two nodes in A must be friends and every two nodes in B must be friends; otherwise there would be an unbalanced 3-cycle (in a complete graph, any one such negative edge would create one). Likewise, all negative edges must go between the two groups.[6] Frustration Frustration index The frustration index (earlier called the line index of balance[7]) of Σ is the smallest number of edges whose deletion, or equivalently whose sign reversal (a theorem of Harary[7]), makes Σ balanced. The reason for the equivalence is that the frustration index equals the smallest number of edges whose negation (or, equivalently, deletion) makes Σ balanced. A second way of describing the frustration index is that it is the smallest number of edges that cover all negative cycles. This quantity has been called the negative cycle cover number. There is another equivalent definition (which can be proved easily by switching). Give each vertex a value of +1 or −1; we call this a state of Σ. An edge is called satisfied if it is positive and both endpoints have the same value, or if it is negative and the endpoints have opposite values. An edge that is not satisfied is called frustrated. The smallest number of frustrated edges over all states is the frustration index. This definition was first introduced in a different notation by Abelson and Rosenberg under the (obsolete) name complexity.[8] The complement of the set of frustrated edges in a minimizing state is a balanced subgraph of Σ with the most possible edges.
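A brute-force sketch of this state-based definition (Python; the edge encoding and the small example graph are illustrative choices, and exhaustive search over all states is feasible only for small graphs, the problem being NP-hard as discussed below):

    from itertools import product

    # Edges are triples (u, v, sign) with sign in {+1, -1}; vertices are 0..n-1.
    def frustration_index(n, edges):
        best = len(edges)
        for state in product([1, -1], repeat=n):        # all 2^n vertex states
            frustrated = sum(1 for u, v, s in edges
                             if s * state[u] * state[v] == -1)
            best = min(best, frustrated)
        return best

    # A triangle with one negative edge is a negative cycle: one edge must stay
    # frustrated, so the index is 1; a graph is balanced iff the index is 0.
    triangle = [(0, 1, 1), (1, 2, 1), (2, 0, -1)]
    print(frustration_index(3, triangle))   # 1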
Finding the frustration index is an NP-hard problem. Aref et al. suggest binary programming models that are capable of computing the frustration index of graphs with up to 10⁵ edges in a reasonable time.[9][10][11] One can see the NP-hard complexity by observing that computing the frustration index of an all-negative signed graph is equivalent to the maximum cut problem in graph theory, which is NP-hard. The frustration index is important in a model of spin glasses, the mixed Ising model. In this model, the signed graph is fixed. A state consists of giving a "spin", either "up" or "down", to each vertex. We think of spin up as +1 and spin down as −1. Thus, each state has a number of frustrated edges. The energy of a state is larger when it has more frustrated edges, so a ground state is a state with the fewest frustrated edges. Thus, to find the ground state energy of Σ one has to find the frustration index. Frustration number The analogous vertex number is the frustration number, defined as the smallest number of vertices whose deletion from Σ results in balance. Equivalently, one wants the largest order of a balanced induced subgraph of Σ. Algorithmic problems Three fundamental questions about a signed graph are: Is it balanced? What is the largest size of a balanced edge set in it? What is the smallest number of vertices that must be deleted to make it balanced? The first question is easy to solve in polynomial time. The second question is called the Frustration Index or Maximum Balanced Subgraph problem. It is NP-hard because its special case (when all edges of the graph are negative) is the NP-hard problem Maximum Cut. The third question, called the Frustration Number or Maximum Balanced Induced Subgraph problem, is also NP-hard; see e.g.[12] Matroid theory There are two matroids associated with a signed graph, called the signed-graphic matroid (also called the frame matroid or sometimes bias matroid) and the lift matroid, both of which generalize the cycle matroid of a graph. They are special cases of the same matroids of a biased graph. The frame matroid (or signed-graphic matroid) M(G) has for its ground set the edge set E.[13] An edge set is independent if each component contains either no circles or just one circle, which is negative. (In matroid theory a half-edge acts exactly like a negative loop.) A circuit of the matroid is either a positive circle, or a pair of negative circles together with a connecting simple path, such that the two circles are either disjoint (then the connecting path has one end in common with each circle and is otherwise disjoint from both) or share just a single common vertex (in this case the connecting path is that single vertex). The rank of an edge set S is n − b, where n is the number of vertices of G and b is the number of balanced components of S, counting isolated vertices as balanced components.
This matroid is the column matroid of the incidence matrix of the signed graph. That is why it describes the linear dependencies of the roots of a classical root system. The extended lift matroid L0(G) has for its ground set the set E0, the union of the edge set E with an extra point, which we denote e0. The lift matroid L(G) is the extended lift matroid restricted to E. The extra point acts exactly like a negative loop, so we describe only the lift matroid. An edge set is independent if it contains either no circles or just one circle, which is negative. (This is the same rule that is applied separately to each component in the signed-graphic matroid.) A matroid circuit is either a positive circle or a pair of negative circles that are either disjoint or have just a common vertex. The rank of an edge set S is n − c + ε, where c is the number of components of S, counting isolated vertices, and ε is 0 if S is balanced and 1 if it is not. Other kinds of "signed graph" Sometimes the signs are taken to be +1 and −1. This is only a difference of notation, as long as the signs are still multiplied around a circle and the sign of the product is the important thing. However, there are two other ways of treating the edge labels that do not fit into signed graph theory. The term signed graph is applied occasionally to graphs in which each edge has a weight, w(e) = +1 or −1. These are not the same kind of signed graph; they are weighted graphs with a restricted weight set. The difference is that weights are added, not multiplied. The problems and methods are completely different. The name is also applied to graphs in which the signs function as colors on the edges. The significance of the color is that it determines various weights applied to the edge, and not that its sign is intrinsically significant. This is the case in knot theory, where the only significance of the signs is that they can be interchanged by the two-element group, but there is no intrinsic difference between positive and negative. The matroid of a sign-colored graph is the cycle matroid of the underlying graph; it is not the frame or lift matroid of the signed graph. The sign labels, instead of changing the matroid, become signs on the elements of the matroid. In this article we discuss only signed graph theory in the strict sense. For sign-colored graphs see colored matroids. Signed digraph A signed digraph is a directed graph with signed arcs. Signed digraphs are far more complicated than signed graphs, because only the signs of directed cycles are significant. For instance, there are several definitions of balance, each of which is hard to characterize, in strong contrast with the situation for signed undirected graphs. Signed digraphs should not be confused with oriented signed graphs. The latter are bidirected graphs, not directed graphs (except in the trivial case of all positive signs). Vertex signs A vertex-signed graph, sometimes called a marked graph, is a graph whose vertices are given signs. A circle is called consistent (but this is unrelated to logical consistency) or harmonious if the product of its vertex signs is positive, and inconsistent or inharmonious if the product is negative.
There is no simple characterization of harmonious vertex-signed graphs analogous to Harary's balance theorem; instead, the characterization has been a difficult problem, best solved (even more generally) by Joglekar, Shah, and Diwan (2012).[14]
It is often easy to add edge signs to the theory of vertex signs without major change; thus, many results for vertex-signed graphs (or "marked signed graphs") extend naturally to vertex-and-edge-signed graphs. This is notably true for the characterization of harmony by Joglekar, Shah, and Diwan (2012).
The difference between a marked signed graph and a signed graph with a state function (as in § Frustration) is that the vertex signs in the former are part of the essential structure, while a state function is a variable function on the signed graph.
Note that the term "marked graph" is widely used in Petri nets in a completely different meaning; see the article on marked graphs.
Coloring
As with unsigned graphs, there is a notion of signed graph coloring. Where a coloring of a graph is a mapping from the vertex set to the natural numbers, a coloring of a signed graph is a mapping from the vertex set to the integers. The constraints on proper colorings come from the edges of the signed graph. The integers assigned to two vertices must be distinct if they are connected by a positive edge. The labels on adjacent vertices must not be additive inverses if the vertices are connected by a negative edge. There can be no proper coloring of a signed graph with a positive loop.
When restricting the vertex labels to the set of integers with magnitude at most a natural number k, the set of proper colorings of a signed graph is finite. The relation between the number of such proper colorings and k is a polynomial in k; when expressed in terms of $2k+1$ it is called the chromatic polynomial of the signed graph. It is analogous to the chromatic polynomial of an unsigned graph.
Applications
Social psychology
In social psychology, signed graphs have been used to model social situations, with positive edges representing friendships and negative edges enmities between nodes, which represent people.[3] Then, for example, a positive 3-cycle is either three mutual friends, or two friends with a common enemy; while a negative 3-cycle is either three mutual enemies, or two enemies who share a mutual friend. According to balance theory, positive cycles are balanced and supposed to be stable social situations, whereas negative cycles are unbalanced and supposed to be unstable. According to the theory, in the case of three mutual enemies, this is because sharing a common enemy is likely to cause two of the enemies to become friends. In the case of two enemies sharing a friend, the shared friend is likely to choose one over the other and turn one of his or her friendships into an enmity.
Antal, Krapivsky and Redner consider social dynamics as the change in sign on an edge of a signed graph.[15] The social relations with previous friends of a divorcing couple are used to illustrate the evolution of a signed graph in society. Another illustration describes the changing international alliances between European powers in the decades before the First World War. They consider local triad dynamics and constrained triad dynamics, where in the latter case a relationship change is made only when the total number of unbalanced triads is reduced. The simulation presumed a complete graph with random relations, with a random unbalanced triad selected for transformation.
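A minimal sketch of such a simulation follows, assuming a simplified form of local triad dynamics in which a uniformly random edge of a uniformly random unbalanced triad is flipped; the paper works with specific update probabilities and also studies the constrained variant, and all names and encodings here are the example's own.

```python
import random

def local_triad_step(sign, n):
    """One step of simplified local triad dynamics on a complete graph:
    pick a random unbalanced triad (the product of its three edge signs
    is negative) and flip one of its edges, which balances that triad.
    sign[(i, j)] with i < j is +1 or -1."""
    unbalanced = [(i, j, k)
                  for i in range(n) for j in range(i + 1, n)
                  for k in range(j + 1, n)
                  if sign[(i, j)] * sign[(j, k)] * sign[(i, k)] < 0]
    if not unbalanced:
        return False                 # every triad is balanced; stop
    i, j, k = random.choice(unbalanced)
    edge = random.choice([(i, j), (j, k), (i, k)])
    sign[edge] = -sign[edge]         # flipping any one edge balances the triad
    return True

n = 12
sign = {(i, j): random.choice([1, -1])
        for i in range(n) for j in range(i + 1, n)}
steps = 0
while local_triad_step(sign, n) and steps < 10_000:
    steps += 1
friendly = sum(1 for s in sign.values() if s > 0) / len(sign)
print(f"density of friendly links after {steps} steps: {friendly:.2f}")
```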
The evolution of the signed graph with N nodes under this process is studied and simulated to describe the stationary density of friendly links.
Balance theory has been severely challenged, especially in its application to large systems, on the theoretical ground that friendly relations tie a society together, while a society divided into two camps of enemies would be highly unstable.[16] Experimental studies have also provided only weak confirmation of the predictions of structural balance theory.[17]
Spin glasses
In physics, signed graphs are a natural context for the nonferromagnetic Ising model, which is applied to the study of spin glasses.
Complex systems
Using an analytic method initially developed in population biology and ecology, but now used in many scientific disciplines, signed digraphs have found application in reasoning about the behavior of complex causal systems.[18][19] Such analyses answer questions about feedback at given levels of the system, and about the direction of variable responses given a perturbation to a system at one or more points, variable correlations given such perturbations, the distribution of variance across the system, and the sensitivity or insensitivity of particular variables to system perturbations.
Data clustering
Correlation clustering looks for natural clustering of data by similarity. The data points are represented as the vertices of a graph, with a positive edge joining similar items and a negative edge joining dissimilar items.
Neuroscience
The brain can be considered as a signed graph where synchrony and anti-synchrony between activity patterns of brain regions determine positive and negative edges. In this regard, the stability and energy of the brain network can be explored.[20] Recently, the concept of frustration has also been used in brain network analysis to identify the non-trivial assemblage of neural connections and highlight the adjustable elements of the brain.[21]
Generalizations
A signed graph is the special kind of gain graph in which the gain group has order 2. The pair (G, B(Σ)) determined by a signed graph Σ is a special kind of biased graph. The sign group has the special property, not shared by larger gain groups, that the edge signs are determined up to switching by the set B(Σ) of balanced cycles.[22]
Notes
1. Harary, Frank (1955), "On the notion of balance of a signed graph", Michigan Mathematical Journal, 2: 143–146, MR 0067468, archived from the original on 2013-04-15 2. Kőnig, Dénes (1936), Akademische Verlagsgesellschaft (ed.), Theorie der endlichen und unendlichen Graphen 3. Cartwright, D.; Harary, Frank (1956). "Structural balance: a generalization of Heider's theory" (PDF). Psychological Review. 63 (5): 277–293. doi:10.1037/h0046049. PMID 13359597. 4. Steven Strogatz (2010), The enemy of my enemy, The New York Times, February 14, 2010 5. Zaslavsky, Thomas (1998), "A mathematical bibliography of signed and gain graphs and allied areas", Electronic Journal of Combinatorics, 5, Dynamic Surveys 8, 124 pp., MR 1744869. 6. Luis Von Ahn Science of the Web Lecture 3 p. 28 7. Harary, Frank (1959), On the measurement of structural balance, Behavioral Science 4, 316–323. 8. Robert P. Abelson; Milton J. Rosenberg (1958), Symbolic psycho-logic: a model of attitudinal cognition, Behavioral Science 3, 1–13. 9. Aref, Samin; Mason, Andrew J.; Wilson, Mark C. (2019). "A Modelling and Computational Study of the Frustration Index in Signed Networks". arXiv:1611.09030 [cs.SI]. 10. Aref, Samin; Mason, Andrew J.; Wilson, Mark C.
(2018), Goldengorin, Boris (ed.), "Computing the Line Index of Balance Using Integer Programming Optimisation", Optimization Problems in Graph Theory: In Honor of Gregory Z. Gutin's 60th Birthday, Springer Optimization and Its Applications, Springer International Publishing, pp. 65–84, arXiv:1710.09876, doi:10.1007/978-3-319-94830-0_3, ISBN 9783319948300, S2CID 27936778 11. Aref, Samin; Wilson, Mark C (2019-04-01). Estrada, Ernesto (ed.). "Balance and frustration in signed networks". Journal of Complex Networks. 7 (2): 163–189. arXiv:1712.04628. doi:10.1093/comnet/cny015. ISSN 2051-1329. 12. Gülpinar, N.; Gutin, G.; Mitra, G.; Zverovitch, A. (2004). "Extracting pure network submatrices in linear programs using signed graphs". Discrete Appl. Math. 137 (3): 359–372. doi:10.1016/S0166-218X(03)00361-5. 13. Zaslavsky, Thomas (1982), "Signed graphs", Discrete Applied Mathematics, 4 (1): 47–74, doi:10.1016/0166-218X(82)90033-6, hdl:10338.dmlcz/127957, MR 0676405. Erratum. Discrete Applied Mathematics, 5 (1983), 248 14. Manas Joglekar, Nisarg Shah, and Ajit A. Diwan (2012), "Balanced group labeled graphs", Discrete Mathematics, vol. 312, no. 9, pp. 1542–1549. 15. T. Antal, P.L. Krapivsky & S. Redner (2006) Social Balance on Networks: The Dynamics of Friendship and Enmity 16. B. Anderson, in Perspectives on Social Network Research, ed. P.W. Holland and S. Leinhardt. New York: Academic Press, 1979. 17. Morrissette, Julian O.; Jahnke, John C. (1967). "No relations and relations of strength zero in the theory of structural balance". Human Relations. 20 (2): 189–195. doi:10.1177/001872676702000207. S2CID 143210382. 18. Puccia, Charles J. and Levins, Richard (1986). Qualitative Modeling of Complex Systems: An Introduction to Loop Analysis and Time Averaging. Harvard University Press, Cambridge, MA. 19. Dambacher, Jeffrey M.; Li, Hiram W.; Rossignol, Philippe A. (2002). "Relevance of community structure in assessing indeterminacy of ecological predictions". Ecology. 83 (5): 1372–1385. doi:10.1890/0012-9658(2002)083[1372:rocsia]2.0.co;2. JSTOR 3071950. 20. Saberi M, Khosrowabadi R, Khatibi A, Misic B, Jafari G (January 2021). "Topological impact of negative links on the stability of resting-state brain network". Scientific Reports. 11 (1): 2176. Bibcode:2021NatSR..11.2176S. doi:10.1038/s41598-021-81767-7. PMC 7838299. PMID 33500525. 21. Saberi M, Khosrowabadi R, Khatibi A, Misic B, Jafari G (October 2022). "Pattern of frustration formation in the functional brain network". Network Neuroscience. 6 (4): 1334–1356. doi:10.1162/netn_a_00268. 22. Zaslavsky, Thomas (1981). "Characterizations of signed graphs". Journal of Graph Theory. 5 (4): 401–406. doi:10.1002/jgt.3190050409.
References
• Cartwright, D.; Harary, F. (1956), "Structural balance: a generalization of Heider's theory", Psychological Review, 63 (5): 277–293, doi:10.1037/h0046049, PMID 13359597. • Seidel, J. J. (1976), "A survey of two-graphs", Colloquio Internazionale sulle Teorie Combinatorie (Rome, 1973), Tomo I, Atti dei Convegni Lincei, vol. 17, Rome: Accademia Nazionale dei Lincei, pp. 481–511, MR 0550136. • Zaslavsky, Thomas (1998), "A mathematical bibliography of signed and gain graphs and allied areas", Electronic Journal of Combinatorics, 5, Dynamic Surveys 8, 124 pp., MR 1744869
Sign (mathematics)
Not to be confused with the sine function in trigonometry. For symbols named "... sign", see List of mathematical symbols.
In mathematics, the sign of a real number is its property of being either positive, negative, or zero. Depending on local conventions, zero may be considered as being neither positive nor negative (having no sign or a unique third sign), or it may be considered both positive and negative (having both signs). Whenever not specifically mentioned, this article adheres to the first convention.
In some contexts, it makes sense to consider a signed zero (such as floating-point representations of real numbers within computers). In mathematics and physics, the phrase "change of sign" is associated with the generation of the additive inverse (negation, or multiplication by −1) of any object that allows for this construction, and is not restricted to real numbers. It applies among other objects to vectors, matrices, and complex numbers, which are not prescribed to be only either positive, negative, or zero. The word "sign" is also often used to indicate other binary aspects of mathematical objects that resemble positivity and negativity, such as odd and even (sign of a permutation), sense of orientation or rotation (clockwise/counterclockwise), one-sided limits, and other concepts described in § Other meanings below.
Sign of a number
Numbers from various number systems, like integers, rationals, complex numbers, quaternions, octonions, ... may have multiple attributes that fix certain properties of a number. A number system that bears the structure of an ordered ring contains a unique number that when added with any number leaves the latter unchanged. This unique number is known as the system's additive identity element. For example, the integers have the structure of an ordered ring. This number is generally denoted as 0. Because of the total order in this ring, there are numbers greater than zero, called the positive numbers. Another property required for a ring to be ordered is that, for each positive number, there exists a unique corresponding number less than 0 whose sum with the original positive number is 0. These numbers less than 0 are called the negative numbers. The numbers in each such pair are their respective additive inverses. This attribute of a number, being exclusively either zero (0), positive (+), or negative (−), is called its sign, and is often encoded to the real numbers 0, 1, and −1, respectively (similar to the way the sign function is defined).[1] Since rational and real numbers are also ordered rings (in fact ordered fields), the sign attribute also applies to these number systems.
When a minus sign is used in between two numbers, it represents the binary operation of subtraction. When a minus sign is written before a single number, it represents the unary operation of yielding the additive inverse (sometimes called negation) of the operand. Abstractly then, the difference of two numbers is the sum of the minuend with the additive inverse of the subtrahend. While 0 is its own additive inverse (−0 = 0), the additive inverse of a positive number is negative, and the additive inverse of a negative number is positive. A double application of this operation is written as −(−3) = 3. The plus sign is predominantly used in algebra to denote the binary operation of addition, and only rarely to emphasize the positivity of an expression.
In common numeral notation (used in arithmetic and elsewhere), the sign of a number is often made explicit by placing a plus or a minus sign before the number. For example, +3 denotes "positive three", and −3 denotes "negative three" (algebraically: the additive inverse of 3). Without specific context (or when no explicit sign is given), a number is interpreted by default as positive. This notation establishes a strong association of the minus sign "−" with negative numbers, and the plus sign "+" with positive numbers.
Sign of zero
Within the convention of zero being neither positive nor negative, a specific sign-value 0 may be assigned to the number value 0. This is exploited in the $\operatorname {sgn} $-function, as defined for real numbers.[1] In arithmetic, +0 and −0 both denote the same number 0. There is generally no danger of confusing the value with its sign, although the convention of assigning both signs to 0 does not immediately allow for this discrimination.
In some contexts, especially in computing, it is useful to consider signed versions of zero, with signed zeros referring to different, discrete number representations (see signed number representations for more).
The symbols +0 and −0 rarely appear as substitutes for 0+ and 0−, used in calculus and mathematical analysis for one-sided limits (right-sided limit and left-sided limit, respectively). This notation refers to the behaviour of a function as its real input variable approaches 0 along positive (resp., negative) values; the two limits need not exist or agree.
Terminology for signs
When 0 is said to be neither positive nor negative, the following phrases may refer to the sign of a number: • A number is positive if it is greater than zero. • A number is negative if it is less than zero. • A number is non-negative if it is greater than or equal to zero. • A number is non-positive if it is less than or equal to zero.
When 0 is said to be both positive and negative, modified phrases are used to refer to the sign of a number: • A number is strictly positive if it is greater than zero. • A number is strictly negative if it is less than zero. • A number is positive if it is greater than or equal to zero. • A number is negative if it is less than or equal to zero.
For example, the absolute value of a real number is always "non-negative", but is not necessarily "positive" in the first interpretation, whereas in the second interpretation, it is called "positive"—though not necessarily "strictly positive". The same terminology is sometimes used for functions that yield real or other signed values. For example, a function would be called a positive function if its values are positive for all arguments of its domain, or a non-negative function if all of its values are non-negative.
Complex numbers
Complex numbers cannot be ordered in a way compatible with their arithmetic, so they cannot carry the structure of an ordered ring, and, accordingly, cannot be partitioned into positive and negative complex numbers. They do, however, share an attribute with the reals, which is called absolute value or magnitude. Magnitudes are always non-negative real numbers, and to any non-zero number there belongs a positive real number, its absolute value. For example, the absolute value of −3 and the absolute value of 3 are both equal to 3. This is written in symbols as |−3| = 3 and |3| = 3. In general, any arbitrary real value can be specified by its magnitude and its sign.
Using the standard encoding, any real value is given by the product of its magnitude and its sign. This relation can be generalized to define a sign for complex numbers. Since the real and complex numbers both form a field and contain the positive reals, they also contain the reciprocals of the magnitudes of all non-zero numbers. This means that any non-zero number may be multiplied with the reciprocal of its magnitude, that is, divided by its magnitude. It is immediate that the quotient of any non-zero real number by its magnitude yields exactly its sign. By analogy, the sign of a complex number z can be defined as the quotient of z and its magnitude |z|. The sign of a complex number is the exponential of the product of its argument with the imaginary unit, $e^{i\varphi }$, and represents in some sense its complex argument. This is to be compared to the sign of real numbers, except with $e^{i\pi }=-1.$ For the definition of a complex sign function, see § Complex sign function below.
Sign functions
Main article: sign function
When dealing with numbers, it is often convenient to have their sign available as a number. This is accomplished by functions that extract the sign of any number, and map it to a predefined value before making it available for further calculations. For example, it might be advantageous to formulate an intricate algorithm for positive values only, and take care of the sign only afterwards.
Real sign function
The sign function or signum function extracts the sign of a real number, by mapping the set of real numbers to the set of the three reals $\{-1,\;0,\;1\}.$ It can be defined as follows:[1] ${\begin{aligned}\operatorname {sgn} :{}&\mathbb {R} \to \{-1,0,1\}\\&x\mapsto \operatorname {sgn}(x)={\begin{cases}-1&{\text{if }}x<0,\\~~\,0&{\text{if }}x=0,\\~~\,1&{\text{if }}x>0.\end{cases}}\end{aligned}}$ Thus sgn(x) is 1 when x is positive, and sgn(x) is −1 when x is negative. For non-zero values of x, this function can also be defined by the formula $\operatorname {sgn}(x)={\frac {x}{|x|}}={\frac {|x|}{x}},$ where |x| is the absolute value of x.
Complex sign function
While a real number has a 1-dimensional direction, a complex number has a 2-dimensional direction. The complex sign function requires the magnitude of its argument z = x + iy, which can be calculated as $|z|={\sqrt {z{\bar {z}}}}={\sqrt {x^{2}+y^{2}}}.$ Analogous to above, the complex sign function extracts the complex sign of a complex number by mapping the set of non-zero complex numbers to the set of unimodular complex numbers, and 0 to 0: $\{z\in \mathbb {C} :|z|=1\}\cup \{0\}.$ It may be defined as follows: Let z be also expressed by its magnitude and one of its arguments φ as $z=|z|\cdot e^{i\varphi }$, then[2] $\operatorname {sgn}(z)={\begin{cases}0&{\text{for }}z=0\\{\dfrac {z}{|z|}}=e^{i\varphi }&{\text{otherwise}}.\end{cases}}$ This definition may also be recognized as a normalized vector, that is, a vector whose direction is unchanged, and whose length is fixed to unity. If the original value was (R, θ) in polar form, then sign(R, θ) is (1, θ). Extension of sign() or signum() to any number of dimensions is obvious, but this has already been defined as normalizing a vector.
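Both definitions transcribe directly into code. The names sgn and csgn below are this sketch's own choices, not standard library functions:

```python
import cmath

def sgn(x: float) -> int:
    """Real signum: maps the reals onto {-1, 0, 1}."""
    return (x > 0) - (x < 0)

def csgn(z: complex) -> complex:
    """Complex sign: z/|z| (a unimodular number) for z != 0, and 0 at 0."""
    return 0j if z == 0 else z / abs(z)

assert sgn(-2.5) == -1 and sgn(0.0) == 0 and sgn(7) == 1
# For nonzero z = r*e^{i*phi}, the sign is e^{i*phi}:
z = 3 * cmath.exp(0.75j)
assert cmath.isclose(csgn(z), cmath.exp(0.75j))
```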
Signs per convention
In situations where there are exactly two possibilities on equal footing for an attribute, these are often labelled by convention as plus and minus, respectively. In some contexts, the choice of this assignment (i.e., which range of values is considered positive and which negative) is natural, whereas in other contexts, the choice is arbitrary, making an explicit sign convention necessary, the only requirement being consistent use of the convention.
Sign of an angle
Main article: Angle § Sign
In many contexts, it is common to associate a sign with the measure of an angle, particularly an oriented angle or an angle of rotation. In such a situation, the sign indicates whether the angle is in the clockwise or counterclockwise direction. Though different conventions can be used, it is common in mathematics to have counterclockwise angles count as positive, and clockwise angles count as negative.[3] It is also possible to associate a sign to an angle of rotation in three dimensions, assuming that the axis of rotation has been oriented. Specifically, a right-handed rotation around an oriented axis typically counts as positive, while a left-handed rotation counts as negative. "An angle which is the negative of a given angle has an equal arc, but the opposite axis."[4]
Sign of a change
When a quantity x changes over time, the change in the value of x is typically defined by the equation $\Delta x=x_{\text{final}}-x_{\text{initial}}.$ Using this convention, an increase in x counts as positive change, while a decrease of x counts as negative change. In calculus, this same convention is used in the definition of the derivative. As a result, any increasing function has positive derivative, while any decreasing function has negative derivative.
Sign of a direction
In analytic geometry and physics, it is common to label certain directions as positive or negative. For a basic example, the number line is usually drawn with positive numbers to the right, and negative numbers to the left. As a result, when discussing linear motion, displacement or velocity, a motion to the right is usually thought of as being positive, while similar motion to the left is thought of as being negative. On the Cartesian plane, the rightward and upward directions are usually thought of as positive, with rightward being the positive x-direction, and upward being the positive y-direction. If a displacement or velocity vector is separated into its vector components, then the horizontal part will be positive for motion to the right and negative for motion to the left, while the vertical part will be positive for motion upward and negative for motion downward.
Signedness in computing
most-significant bit
0 1 1 1 1 1 1 1 = 127
0 1 1 1 1 1 1 0 = 126
0 0 0 0 0 0 1 0 = 2
0 0 0 0 0 0 0 1 = 1
0 0 0 0 0 0 0 0 = 0
1 1 1 1 1 1 1 1 = −1
1 1 1 1 1 1 1 0 = −2
1 0 0 0 0 0 0 1 = −127
1 0 0 0 0 0 0 0 = −128
Most computers use two's complement to represent the sign of an integer.
In computing, an integer value may be either signed or unsigned, depending on whether the computer is keeping track of a sign for the number. By restricting an integer variable to non-negative values only, one more bit can be used for storing the value of a number. Because of the way integer arithmetic is done within computers, signed number representations usually do not store the sign as a single independent bit, instead using e.g. two's complement.
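A small sketch of the 8-bit encoding tabulated above; the function names are the example's own:

```python
def to_twos_complement(value: int, bits: int = 8) -> str:
    """Encode a signed integer as a two's-complement bit string."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for the given width")
    return format(value & ((1 << bits) - 1), f"0{bits}b")

def from_twos_complement(bit_string: str) -> int:
    """Decode a two's-complement bit string back to a signed integer."""
    bits = len(bit_string)
    raw = int(bit_string, 2)
    # A leading 1 in the most-significant bit marks a negative number.
    return raw - (1 << bits) if bit_string[0] == "1" else raw

assert to_twos_complement(-2) == "11111110"    # matches the table row for -2
assert from_twos_complement("10000000") == -128
```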
In contrast, real numbers are stored and manipulated as floating point values. The floating point values are represented using three separate values: mantissa, exponent, and sign. Given this separate sign bit, it is possible to represent both positive and negative zero. Most programming languages normally treat positive zero and negative zero as equivalent values, although they provide means by which the distinction can be detected.
Other meanings
In addition to the sign of a real number, the word sign is also used in various related ways throughout mathematics and other sciences: • The words up to sign mean that, for a quantity q, it is known that either q = Q or q = −Q for a certain Q. It is often expressed as q = ±Q. For real numbers, it means that only the absolute value |q| of the quantity is known. For complex numbers and vectors, a quantity known up to sign is a stronger condition than a quantity with known magnitude: aside from Q and −Q, there are many other possible values of q such that |q| = |Q|. • The sign of a permutation is defined to be positive if the permutation is even, and negative if the permutation is odd. • In graph theory, a signed graph is a graph in which each edge has been marked with a positive or negative sign. • In mathematical analysis, a signed measure is a generalization of the concept of measure in which the measure of a set may have positive or negative values. • In a signed-digit representation, each digit of a number may have a positive or negative sign. • The ideas of signed area and signed volume are sometimes used when it is convenient for certain areas or volumes to count as negative. This is particularly true in the theory of determinants. In an (abstract) oriented vector space, each ordered basis for the vector space can be classified as either positively or negatively oriented. • In physics, any electric charge comes with a sign, either positive or negative. By convention, a positive charge is a charge with the same sign as that of a proton, and a negative charge is a charge with the same sign as that of an electron.
See also
• Plus–minus sign • Positive element • Signed distance • Signedness • Symmetry in mathematics
References
1. Weisstein, Eric W. "Sign". mathworld.wolfram.com. Retrieved 2020-08-26. 2. "SignumFunction". www.cs.cas.cz. Retrieved 2020-08-26. 3. "Sign of Angles | What is An Angle? | Positive Angle | Negative Angle". Math Only Math. Retrieved 2020-08-26. 4. Alexander Macfarlane (1894) "Fundamental theorems of analysis generalized for space", page 3, link via Internet Archive
Significant Figures (book)
Significant Figures: The Lives and Work of Great Mathematicians is a 2017 nonfiction book by British mathematician Ian Stewart FRS CMath FIMA, published by Basic Books.[1] In the work, Stewart discusses the lives and contributions of 25 figures who are prominent in the history of mathematics.[2] The 25 mathematicians selected are: Archimedes, Liu Hui, Muḥammad ibn Mūsā al-Khwārizmī, Madhava of Sangamagrama, Gerolamo Cardano, Pierre de Fermat, Isaac Newton, Euler, Fourier, Gauss, Lobachevsky, Galois, Ada Lovelace, Boole, Riemann, Cantor, Sofia Kovalevskaia, Poincaré, Hilbert, Emmy Noether, Ramanujan, Gödel, Turing, Mandelbrot, and Thurston.[3]
Significant Figures: The Lives and Work of Great Mathematicians
Author: Ian Stewart
Country: United States
Language: English
Genre: Non-fiction
Publisher: Basic Books
Publication date: September 12, 2017
Media type: Hardcover
Pages: 303
ISBN: 0465096123
Reception
In Kirkus Reviews, it was written that "even a popularizer as skilled and prolific as Stewart cannot expect general readers to fully digest his highly distilled explanations of what these significant figures did to resolve ever more complex conundrums as math advanced." However, the reviewer praised Stewart's sketches of the lives and times of the innovators. The book was described as "a text for teachers, precocious students, and intellectually curious readers unafraid to tread unfamiliar territory".[4]
See also
• In Pursuit of the Unknown: 17 Equations That Changed the World
References
1. Stewart, Ian; Basic Books (2017). Significant Figures: the lives and work of great mathematicians. New York: Basic Books. p. 303. ISBN 978-0-465-09612-1. OCLC 1030547312. 2. Hunacek, Mark (22 September 2017). "Review of Significant Figures by Ian Stewart". MAA Reviews, Mathematical Association of America. 3. Bultheel, Adhemar (20 July 2017). "Review of Significant Figures". European Mathematical Society. 4. "Significant Figures: The Lives and Work of Great Mathematicians". Kirkus Reviews.
Signorini problem
The Signorini problem is an elastostatics problem in linear elasticity: it consists in finding the elastic equilibrium configuration of an anisotropic non-homogeneous elastic body, resting on a rigid frictionless surface and subject only to its mass forces. The name was coined by Gaetano Fichera to honour his teacher, Antonio Signorini: the original name coined by him is problem with ambiguous boundary conditions.
History
• -"Il mio discepolo Fichera mi ha dato una grande soddisfazione" • -"Ma Lei ne ha avute tante, Professore, durante la Sua vita", rispose il Dottor Aprile, ma Signorini rispose di nuovo: • -"Ma questa è la più grande." E queste furono le sue ultime parole.[1] — Gaetano Fichera, (Fichera 1995, p. 49)
The problem was posed by Antonio Signorini during a course taught at the Istituto Nazionale di Alta Matematica in 1959, later published as the article (Signorini 1959), expanding a previous short exposition he gave in a note published in 1933. Signorini (1959, p. 128) himself called it problem with ambiguous boundary conditions,[2] since there are two alternative sets of boundary conditions the solution must satisfy on any given contact point. The statement of the problem involves not only equalities but also inequalities, and it is not a priori known which of the two sets of boundary conditions is satisfied at each point. Signorini asked whether the problem is well-posed in a physical sense, i.e. whether its solution exists and is unique: he explicitly invited young analysts to study the problem.[3]
Gaetano Fichera and Mauro Picone attended the course, and Fichera started to investigate the problem: since he found no references to similar problems in the theory of boundary value problems,[4] he decided to approach it by starting from first principles, specifically from the virtual work principle.
During Fichera's research on the problem, Signorini began to suffer serious health problems: nevertheless, he desired to know the answer to his question before his death. Picone, being tied by a strong friendship with Signorini, began to press Fichera to find a solution: Fichera himself, being tied as well to Signorini by similar feelings, perceived the last months of 1962 as worrying days.[5] Finally, in the first days of January 1963, Fichera was able to give a complete proof of the existence of a unique solution for the problem with ambiguous boundary conditions, which he called the "Signorini problem" to honour his teacher. A preliminary research announcement, later published as (Fichera 1963), was written up and submitted to Signorini exactly a week before his death. Signorini expressed great satisfaction to see a solution to his question.
A few days later, Signorini had the conversation quoted above with his family doctor, Damiano Aprile.[6] The solution of the Signorini problem coincides with the birth of the field of variational inequalities.[7]
Formal statement of the problem
The content of this section and the following subsections follows closely the treatment of Gaetano Fichera in Fichera 1963, Fichera 1964b and also Fichera 1995: his derivation of the problem is different from Signorini's in that he does not consider only incompressible bodies and a plane rest surface, as Signorini does.[8] The problem consists in finding the displacement vector from the natural configuration $\scriptstyle {\boldsymbol {u}}({\boldsymbol {x}})=\left(u_{1}({\boldsymbol {x}}),u_{2}({\boldsymbol {x}}),u_{3}({\boldsymbol {x}})\right)$ of an anisotropic non-homogeneous elastic body that lies in a subset $A$ of the three-dimensional euclidean space whose boundary is $\scriptstyle \partial A$ and whose interior normal is the vector $n$, resting on a rigid frictionless surface whose contact surface (or more generally contact set) is $\Sigma $ and subject only to its body forces $\scriptstyle {\boldsymbol {f}}({\boldsymbol {x}})=\left(f_{1}({\boldsymbol {x}}),f_{2}({\boldsymbol {x}}),f_{3}({\boldsymbol {x}})\right)$, and surface forces $\scriptstyle {\boldsymbol {g}}({\boldsymbol {x}})=\left(g_{1}({\boldsymbol {x}}),g_{2}({\boldsymbol {x}}),g_{3}({\boldsymbol {x}})\right)$ applied on the free (i.e. not in contact with the rest surface) surface $\scriptstyle \partial A\setminus \Sigma $: the set $A$ and the contact surface $\Sigma $ characterize the natural configuration of the body and are known a priori. Therefore, the body has to satisfy the general equilibrium equations
(1)     $\qquad {\frac {\partial \sigma _{ik}}{\partial x_{k}}}-f_{i}=0\qquad {\text{for }}i=1,2,3$
written using the Einstein notation, as in all the following development, the ordinary boundary conditions on $\scriptstyle \partial A\setminus \Sigma $
(2)     $\qquad \sigma _{ik}n_{k}-g_{i}=0\qquad {\text{for }}i=1,2,3$
and the following two sets of boundary conditions on $\Sigma $, where $\scriptstyle {\boldsymbol {\sigma }}={\boldsymbol {\sigma }}({\boldsymbol {u}})$ is the Cauchy stress tensor. Obviously, the body forces and surface forces cannot be given in an arbitrary way: they must satisfy a condition in order for the body to reach an equilibrium configuration. This condition will be deduced and analyzed in the following development.
The ambiguous boundary conditions
If $\scriptstyle {\boldsymbol {\tau }}=(\tau _{1},\tau _{2},\tau _{3})$ is any tangent vector to the contact set $\Sigma $, then the ambiguous boundary conditions at each point of this set are expressed by the following two systems of inequalities
(3)     $\quad {\begin{cases}u_{i}n_{i}&=0\\\sigma _{ik}n_{i}n_{k}&\geq 0\\\sigma _{ik}n_{i}\tau _{k}&=0\end{cases}}$     or     (4)     ${\begin{cases}u_{i}n_{i}&>0\\\sigma _{ik}n_{i}n_{k}&=0\\\sigma _{ik}n_{i}\tau _{k}&=0\end{cases}}$
Let us analyze their meaning: • Each set of conditions consists of three relations, equalities or inequalities, and all the second members are the zero function. • The quantities at the first member of each first relation are proportional to the norm of the component of the displacement vector directed along the normal vector $n$.
• The quantities at the first member of each second relation are proportional to the norm of the component of the tension vector directed along the normal vector $n$. • The quantities at the first member of each third relation are proportional to the norm of the component of the tension vector along any vector $\tau $ tangent in the given point to the contact set $\Sigma $. • The quantities at the first member of each of the three relations are positive if they have the same sense of the vector they are proportional to, while they are negative if not; therefore the constants of proportionality are respectively $\scriptstyle +1$ and $\scriptstyle -1$.
Knowing these facts, the set of conditions (3) applies to points of the boundary of the body which do not leave the contact set $\Sigma $ in the equilibrium configuration, since, according to the first relation, the displacement vector $u$ has no components directed as the normal vector $n$, while, according to the second relation, the tension vector may have a component directed as the normal vector $n$ and having the same sense. In an analogous way, the set of conditions (4) applies to points of the boundary of the body which leave that set in the equilibrium configuration, since the displacement vector $u$ has a component directed as the normal vector $n$, while the tension vector has no components directed as the normal vector $n$. For both sets of conditions, the tension vector has no tangent component to the contact set, according to the hypothesis that the body rests on a rigid frictionless surface.
Each system expresses a unilateral constraint, in the sense that it expresses the physical impossibility for the elastic body to penetrate into the surface where it rests: the ambiguity is not only in the unknown values non-zero quantities must satisfy on the contact set but also in the fact that it is not a priori known if a point belonging to that set satisfies the system of boundary conditions (3) or (4). The set of points where (3) is satisfied is called the area of support of the elastic body on $\Sigma $, while its complement with respect to $\Sigma $ is called the area of separation.
The above formulation is general since the Cauchy stress tensor, i.e. the constitutive equation of the elastic body, has not been made explicit: it is equally valid assuming the hypothesis of linear elasticity or those of nonlinear elasticity. However, as will be clear from the following developments, the problem is inherently nonlinear, therefore assuming a linear stress tensor does not simplify the problem.
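The alternative between (3) and (4) can be seen in a drastically simplified discrete analogue: a one-dimensional elastic membrane pressed against a rigid frictionless support at height zero. The sketch below is purely illustrative and is not Fichera's method (his treatment is functional-analytic, not numerical); the discretization, loads, and solver are this example's assumptions. Minimizing the discrete energy over the constraint set by projected gradient descent yields, at every node, either contact (zero displacement and nonnegative reaction, the analogue of (3)) or separation (positive displacement and zero reaction, the analogue of (4)):

```python
import numpy as np

n, h = 50, 1.0 / 51
# Standard second-difference stiffness matrix for a clamped 1D membrane.
K = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.full(n, -10.0)      # load pressing the membrane onto the support
f[: n // 3] = 25.0         # load pulling part of it away from the support

u = np.zeros(n)            # displacement, constrained to u >= 0
step = 0.4 * h**2          # stable step size for gradient descent on K
for _ in range(200_000):
    u = np.maximum(u - step * (K @ u - f), 0.0)   # project onto u >= 0

reaction = K @ u - f       # discrete analogue of the normal reaction
contact = u <= 1e-12
print("contact:    u = 0 and reaction >= 0:", bool(np.all(reaction[contact] >= -1e-6)))
print("separation: u > 0 and reaction  = 0:", bool(np.allclose(reaction[~contact], 0.0, atol=1e-4)))
```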
The form of the stress tensor in the formulation of Signorini and Fichera
The form assumed by Signorini and Fichera for the elastic potential energy is the following one (as in the previous developments, the Einstein notation is adopted)
$W({\boldsymbol {\varepsilon }})=a_{ik,jh}({\boldsymbol {x}})\varepsilon _{ik}\varepsilon _{jh}$
where • $\scriptstyle {\boldsymbol {a}}({\boldsymbol {x}})=\left(a_{ik,jh}({\boldsymbol {x}})\right)$ is the elasticity tensor • $\scriptstyle {\boldsymbol {\varepsilon }}={\boldsymbol {\varepsilon }}({\boldsymbol {u}})=\left(\varepsilon _{ik}({\boldsymbol {u}})\right)=\left({\frac {1}{2}}\left({\frac {\partial u_{i}}{\partial x_{k}}}+{\frac {\partial u_{k}}{\partial x_{i}}}\right)\right)$ is the infinitesimal strain tensor
The Cauchy stress tensor has therefore the following form
(5)     $\sigma _{ik}=-{\frac {\partial W}{\partial \varepsilon _{ik}}}\qquad {\text{for }}i,k=1,2,3$
and it is linear with respect to the components of the infinitesimal strain tensor; however, it is neither homogeneous nor isotropic.
Solution of the problem
As for the section on the formal statement of the Signorini problem, the contents of this section and the included subsections follow closely the treatment of Gaetano Fichera in Fichera 1963, Fichera 1964b, Fichera 1972 and also Fichera 1995: obviously, the exposition focuses on the basic steps of the proof of the existence and uniqueness for the solution of problem (1), (2), (3), (4) and (5), rather than the technical details.
The potential energy
The first step of the analysis of Fichera as well as the first step of the analysis of Antonio Signorini in Signorini 1959 is the analysis of the potential energy, i.e. the following functional
(6)      $I({\boldsymbol {u}})=\int _{A}W({\boldsymbol {x}},{\boldsymbol {\varepsilon }})\mathrm {d} x-\int _{A}u_{i}f_{i}\mathrm {d} x-\int _{\partial A\setminus \Sigma }u_{i}g_{i}\mathrm {d} \sigma $
where $u$ belongs to the set of admissible displacements $\scriptstyle {\mathcal {U}}_{\Sigma }$, i.e. the set of displacement vectors satisfying the system of boundary conditions (3) or (4). The meaning of each of the three terms is the following • the first one is the total elastic potential energy of the elastic body • the second one is the total potential energy due to the body forces, for example the gravitational force • the third one is the potential energy due to surface forces, for example the forces exerted by the atmospheric pressure
Signorini (1959, pp. 129–133) was able to prove that the admissible displacement $u$ which minimizes the integral $I(u)$ is a solution of the problem with ambiguous boundary conditions (1), (2), (3), (4) and (5), provided it is a $C^{1}$ function supported on the closure $\scriptstyle {\bar {A}}$ of the set $A$: however Gaetano Fichera gave a class of counterexamples in (Fichera 1964b, pp. 619–620) showing that in general, admissible displacements are not smooth functions of this class.
Therefore, Fichera tries to minimize the functional (6) in a wider function space: in doing so, he first calculates the first variation (or functional derivative) of the given functional in the neighbourhood of the sought minimizing admissible displacement $\scriptstyle {\boldsymbol {u}}\in {\mathcal {U}}_{\Sigma }$, and then requires it to be greater than or equal to zero
$\left.{\frac {\mathrm {d} }{\mathrm {d} t}}I({\boldsymbol {u}}+t{\boldsymbol {v}})\right\vert _{t=0}=-\int _{A}\sigma _{ik}({\boldsymbol {u}})\varepsilon _{ik}({\boldsymbol {v}})\mathrm {d} x-\int _{A}v_{i}f_{i}\mathrm {d} x-\int _{\partial A\setminus \Sigma }\!\!\!\!\!v_{i}g_{i}\mathrm {d} \sigma \geq 0\qquad \forall {\boldsymbol {v}}\in {\mathcal {U}}_{\Sigma }$
Defining the following functionals
$B({\boldsymbol {u}},{\boldsymbol {v}})=-\int _{A}\sigma _{ik}({\boldsymbol {u}})\varepsilon _{ik}({\boldsymbol {v}})\mathrm {d} x\qquad {\boldsymbol {u}},{\boldsymbol {v}}\in {\mathcal {U}}_{\Sigma }$
and
$F({\boldsymbol {v}})=\int _{A}v_{i}f_{i}\mathrm {d} x+\int _{\partial A\setminus \Sigma }\!\!\!\!\!v_{i}g_{i}\mathrm {d} \sigma \qquad {\boldsymbol {v}}\in {\mathcal {U}}_{\Sigma }$
the preceding inequality can be written as
(7)      $B({\boldsymbol {u}},{\boldsymbol {v}})-F({\boldsymbol {v}})\geq 0\qquad \forall {\boldsymbol {v}}\in {\mathcal {U}}_{\Sigma }$
This inequality is the variational inequality for the Signorini problem.
See also
• Linear elasticity • Variational inequality
Notes
1. Free English translation: • "My disciple Fichera gave me a great contentment". • "But you had many, Professor, during your life", replied Doctor Aprile, but then Signorini replied again: • "But this is the greatest one". And those were his last words. 2. Italian: Problema con ambigue condizioni al contorno. 3. As it is stated in (Signorini 1959, p. 129). 4. See (Fichera 1995, p. 49). 5. This dramatic situation is described by Fichera (1995, p. 51) himself. 6. Fichera (1995, p. 53) reports the episode following the remembrances of Mauro Picone: see the entry "Antonio Signorini" for further details. 7. According to Antman (1983, p. 282) 8. See (Signorini 1959, p. 127) for the original approach.
References
Historical references
• Antman, Stuart (1983), "The influence of elasticity in analysis: modern developments", Bulletin of the American Mathematical Society, 9 (3): 267–291, doi:10.1090/S0273-0979-1983-15185-6, MR 0714990, Zbl 0533.73001. • Duvaut, Georges (1971), "Problèmes unilatéraux en mécanique des milieux continus" (PDF), Actes du Congrès international des mathématiciens, 1970, ICM Proceedings, vol. Mathématiques appliquées (E), Histoire et Enseignement (F) – Volume 3, Paris: Gauthier-Villars, pp. 71–78. A brief research survey describing the field of variational inequalities. • Fichera, Gaetano (1972), "Boundary value problems of elasticity with unilateral constraints", in Flügge, Siegfried; Truesdell, Clifford A. (eds.), Festkörpermechanik/Mechanics of Solids, Handbuch der Physik (Encyclopedia of Physics), vol. VIa/2 (paperback 1984 ed.), Berlin–Heidelberg–New York: Springer-Verlag, pp. 391–424, ISBN 0-387-13161-2, Zbl 0277.73001. The encyclopedia entry about problems with unilateral constraints (the class of boundary value problems the Signorini problem belongs to) he wrote for the Handbuch der Physik on invitation by Clifford Truesdell. • Fichera, Gaetano (1995), "La nascita della teoria delle disequazioni variazionali ricordata dopo trent'anni", Incontro scientifico italo-spagnolo.
Roma, 21 ottobre 1993, Atti dei Convegni Lincei (in Italian), vol. 114, Roma: Accademia Nazionale dei Lincei, pp. 47–53. The birth of the theory of variational inequalities remembered thirty years later (English translation of the contribution title) is an historical paper describing the beginning of the theory of variational inequalities from the point of view of its founder. • Fichera, Gaetano (2002), Opere storiche biografiche, divulgative [Historical, biographical, divulgative works] (in Italian), Napoli: Giannini, p. 491. A volume collecting almost all works of Gaetano Fichera in the fields of history of mathematics and scientific divulgation. • Fichera, Gaetano (2004), Opere scelte [Selected works], Firenze: Edizioni Cremonese (distributed by Unione Matematica Italiana), pp. XXIX+432 (vol. 1), pp. VI+570 (vol. 2), pp. VI+583 (vol. 3), archived from the original on 2009-12-28, ISBN 88-7083-811-0 (vol. 1), ISBN 88-7083-812-9 (vol. 2), ISBN 88-7083-813-7 (vol. 3). Three volumes collecting Gaetano Fichera's most important mathematical papers, with a biographical sketch of Olga A. Oleinik. • Signorini, Antonio (1991), Opere scelte [Selected works], Firenze: Edizioni Cremonese (distributed by Unione Matematica Italiana), pp. XXXI + 695, archived from the original on 2009-12-28. A volume collecting Antonio Signorini's most important works with an introduction and a commentary of Giuseppe Grioli.
Research works
• Andersson, John (2016), "Optimal regularity for the Signorini problem and its free boundary", Inventiones Mathematicae, 204 (1): 1–82, arXiv:1310.2511, Bibcode:2016InMat.204....1A, doi:10.1007/s00222-015-0608-6, MR 3480553, S2CID 118934322, Zbl 1339.35345. • Fichera, Gaetano (1963), "Sul problema elastostatico di Signorini con ambigue condizioni al contorno" [On the elastostatic problem of Signorini with ambiguous boundary conditions], Rendiconti della Accademia Nazionale dei Lincei, Classe di Scienze Fisiche, Matematiche e Naturali, 8 (in Italian), 34 (2): 138–142, MR 0176661, Zbl 0128.18305. A short research note announcing and describing (without proofs) the solution of the Signorini problem. • Fichera, Gaetano (1964a), "Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno" [Elastostatic problems with unilateral constraints: the Signorini problem with ambiguous boundary conditions], Memorie della Accademia Nazionale dei Lincei, Classe di Scienze Fisiche, Matematiche e Naturali, 8 (in Italian), 7 (2): 91–140, Zbl 0146.21204. The first paper where an existence and uniqueness theorem for the Signorini problem is proved. • Fichera, Gaetano (1964b), "Elastostatic problems with unilateral constraints: the Signorini problem with ambiguous boundary conditions", Seminari dell'istituto Nazionale di Alta Matematica 1962–1963, Rome: Edizioni Cremonese, pp. 613–679. An English translation of the previous paper. • Petrosyan, Arshak; Shahgholian, Henrik; Uraltseva, Nina (2012), Regularity of Free Boundaries in Obstacle-Type Problems, Graduate Studies in Mathematics, vol. 136, Providence, RI: American Mathematical Society, pp. x+221, ISBN 978-0-8218-8794-3, MR 2962060, Zbl 1254.35001. • Signorini, Antonio (1959), "Questioni di elasticità non linearizzata e semilinearizzata" [Topics in non linear and semilinear elasticity], Rendiconti di Matematica e delle sue Applicazioni, 5 (in Italian), 18: 95–139, Zbl 0091.38006.
External links
• Barbu, V.
(2001) [1994], "Signorini problem", Encyclopedia of Mathematics, EMS Press • Alessio Figalli, On global homogeneous solutions to the Signorini problem,
Sigurður Helgason (mathematician)
Sigurdur Helgason (born 30 September 1927; Icelandic: Sigurður) is an Icelandic mathematician whose research has been devoted to the geometry and analysis on symmetric spaces. In particular, he has used new integral geometric methods to establish fundamental existence theorems for differential equations on symmetric spaces as well as some new results on the representations of their isometry groups. He also introduced a Fourier transform on these spaces and proved the principal theorems for this transform, the inversion formula, the Plancherel theorem and the analog of the Paley–Wiener theorem.
Sigurdur Helgason
Born: September 30, 1927, Akureyri, Iceland
Occupation: Mathematician
Awards: Leroy P. Steele Prize (1988)
He was born in Akureyri, Iceland. In 1954, he earned a PhD from Princeton University under Salomon Bochner. Since 1965, Helgason has been a professor of mathematics at the Massachusetts Institute of Technology. He won the 1988 Leroy P. Steele Prize for Seminal Contributions for his books Groups and Geometric Analysis and Differential Geometry, Lie Groups and Symmetric Spaces. This was followed by the 2008 book Geometric Analysis on Symmetric Spaces. On May 31, 1996 Helgason received an honorary doctorate from the Faculty of Science and Technology at Uppsala University, Sweden.[1] He has been a fellow of the American Academy of Arts and Sciences since 1970. In 2012, he became a fellow of the American Mathematical Society.[2]
Selected works
Articles
• Helgason, S. (1954). "The derived algebra of a Banach algebra". Proceedings of the National Academy of Sciences of the United States of America. 40 (10): 994–995. Bibcode:1954PNAS...40..994H. doi:10.1073/pnas.40.10.994. PMC 534208. PMID 16589593. • Helgason, Sigurdur (1957). "Topologies of group algebras and a theorem of Littlewood". Transactions of the American Mathematical Society. 86 (2): 269–283. doi:10.1090/S0002-9947-1957-0095428-5. MR 0095428. • Helgason, Sigurdur (1958). "Lacunary Fourier series on noncommutative groups". Proceedings of the American Mathematical Society. 9 (5): 782–790. doi:10.1090/S0002-9939-1958-0100234-5. MR 0100234. • Helgason, Sigurdur (1958). "On Riemannian curvature of homogeneous spaces". Proceedings of the American Mathematical Society. 9 (6): 831–838. doi:10.1090/S0002-9939-1958-0108811-2. MR 0108811. • Helgason, S. (1962). "Some results on invariant theory". Bulletin of the American Mathematical Society. 68 (4): 367–371. doi:10.1090/S0002-9904-1962-10812-X. MR 0166303. • Helgason, S. (1963). "Fundamental solutions to invariant differential operators on symmetric spaces". Bulletin of the American Mathematical Society. 69 (6): 778–781. doi:10.1090/S0002-9904-1963-11029-0. MR 0156919. • Helgason, S. (1963). "Duality and Radon transforms for symmetric spaces". Bulletin of the American Mathematical Society. 69 (6): 782–788. doi:10.1090/S0002-9904-1963-11030-7. MR 0158408. • Helgason, Sigurdur (1964). "A duality in integral geometry; some generalizations of the Radon transform". Bulletin of the American Mathematical Society. 70 (4): 435–446. doi:10.1090/S0002-9904-1964-11147-2. MR 0166795. • Helgason, S. (1965). "Radon–Fourier transforms on symmetric spaces and related group representations". Bulletin of the American Mathematical Society. 71 (5): 757–763. doi:10.1090/S0002-9904-1965-11380-5. MR 0179295. • Helgason, Sigurdur; Korányi, Ádám (1968). "A Fatou-type theorem for harmonic functions on symmetric spaces".
Bulletin of the American Mathematical Society. 74 (2): 258–263. doi:10.1090/S0002-9904-1968-11912-3. MR 0229179. • Helgason, Sigurdur (1969). "Applications of the Radon transform to representations of semisimple Lie groups". Proceedings of the National Academy of Sciences of the United States of America. 63 (3): 643–647. Bibcode:1969PNAS...63..643H. doi:10.1073/pnas.63.3.643. PMC 223499. PMID 16591772. • Helgason, Sigurdur (1973). "Paley-Wiener theorems and surjectivity of invariant differential operators on symmetric spaces and Lie groups". Bulletin of the American Mathematical Society. 79 (1): 129–132. doi:10.1090/S0002-9904-1973-13127-1. hdl:1721.1/26688. MR 0312158. • Helgason, Sigurdur (1977). "Invariant differential equations and homogeneous manifolds" (PDF). Bulletin of the American Mathematical Society. 83 (5): 751–774. doi:10.1090/S0002-9904-1977-14317-6. MR 0445235.
Books
• Differential geometry and symmetric spaces. Academic Press 1962,[3] AMS 2001 • Analysis on Lie groups and homogeneous spaces. AMS 1972 • Differential geometry, Lie groups and symmetric spaces. Academic Press 1978,[4] 7th edn. 1995 • The Radon Transform. Birkhäuser, 1980, 2nd edn. 1999 • Topics in harmonic analysis on homogeneous spaces. Birkhäuser 1981 • Groups and geometric analysis: integral geometry, invariant differential operators and spherical functions. Academic Press 1984,[5] AMS 1994 • Geometric analysis on symmetric spaces. AMS 1994,[6] 2nd edn. 2008
References
1. "Honorary doctorates - Uppsala University, Sweden". 2. List of Fellows of the American Mathematical Society, retrieved 2013-01-21. 3. Auslander, Louis (1964). "Review: Differential geometry and symmetric spaces, by S. Helgason". Bull. Amer. Math. Soc. 70 (2): 227–229. doi:10.1090/S0002-9904-1964-11091-0. 4. Kulkarni, Ravi S. (1980). "Review: Differential geometry, Lie groups and symmetric spaces, by S. Helgason". Bull. Amer. Math. Soc. (N.S.). 2 (3): 468–476. doi:10.1090/S0273-0979-1980-14772-2. 5. Howe, Roger (1989). "Groups and geometric analysis. Integral geometry, invariant differential operators and spherical functions, by S. Helgason". Bull. Amer. Math. Soc. (N.S.). 20 (2): 252–256. doi:10.1090/S0273-0979-1989-15786-8. 6. Rouvière, François (1995). "Geometric analysis on symmetric spaces, by S. Helgason". Bull. Amer. Math. Soc. (N.S.). 32 (4): 441–446. doi:10.1090/S0273-0979-1995-00602-6.
Sources
• "Curriculum vitae". Massachusetts Institute of Technology. 2005. • "Sigurdur Helgason". Massachusetts Institute of Technology.
External links
• Sigurdur Helgason at the Mathematics Genealogy Project • Sigurdur Helgason – Publications – MIT Mathematics
Sijue Wu
Sijue Wu (Chinese: 邬似珏; pinyin: Wū Sìjué; born May 15, 1964) is a Chinese-American mathematician who works as the Robert W. and Lynne H. Browne Professor of Mathematics at the University of Michigan. Her research involves the mathematics of water waves.[1][2]
Sijue Wu (photograph at Oberwolfach, 2006)
Born: May 15, 1964
Nationality: Chinese American
Alma mater: Peking University, Yale University
Fields: Mathematics
Institutions: University of Michigan
Doctoral advisor: Ronald Coifman
Education and career
Wu earned bachelor's and master's degrees in 1983 and 1986 from Peking University.[1][2] She completed her doctorate in 1990 from Yale University, under the supervision of Ronald Coifman.[3] After a temporary instructorship at New York University, she became an assistant professor at Northwestern University. She moved in 1996 to the University of Iowa and again to the University of Maryland, College Park in 1998. She became the Browne Professor at the University of Michigan in 2008.[1]
Awards and honors
A 1997 paper by Wu in Inventiones Mathematicae, "Well-posedness in Sobolev spaces of the full water wave problem in 2-D", was the subject of a featured review in Mathematical Reviews.[4] Wu was an invited speaker at the International Congress of Mathematicians in 2002, speaking on partial differential equations.[5] She won the Ruth Lyttle Satter Prize in Mathematics[6] and the silver Morningside Medal in 2001, and the gold Morningside Medal in 2010, becoming the first female mathematician to win the gold medal.[2] She was elected to the American Academy of Arts and Sciences in 2022.[7]
References
1. O'Connor, John J.; Robertson, Edmund F., "Sijue Wu", MacTutor History of Mathematics Archive, University of St Andrews 2. Riddle, Larry (January 10, 2014), "Sijue Wu", Biographies of Women Mathematicians, Agnes Scott College, retrieved 2015-10-22. 3. Sijue Wu at the Mathematics Genealogy Project. 4. Wu, Sijue (1997), "Well-posedness in Sobolev spaces of the full water wave problem in 2-D", Inventiones Mathematicae, 130 (1): 39–72, Bibcode:1997InMat.130...39W, doi:10.1007/s002220050177, MR 1471885, S2CID 126485710. 5. ICM Plenary and Invited Speakers since 1897, International Mathematical Union, retrieved 2015-10-22. 6. "2001 Satter Prize" (PDF), Notices of the American Mathematical Society, 48 (4): 411–412, April 2001. 7. "American Academy of Arts & Sciences Announces New Members Elected in 2022". American Academy of Arts & Sciences. Retrieved 2022-05-22.
External links
• Home page
Ruth Lyttle Satter Prize in Mathematics recipients
• 1991 Dusa McDuff • 1993 Lai-Sang Young • 1995 Sun-Yung Alice Chang • 1997 Ingrid Daubechies • 1999 Bernadette Perrin-Riou • 2001 Karen E. Smith & Sijue Wu • 2003 Abigail Thompson • 2005 Svetlana Jitomirskaya • 2007 Claire Voisin • 2009 Laure Saint-Raymond • 2011 Amie Wilkinson • 2013 Maryam Mirzakhani • 2015 Hee Oh • 2017 Laura DeMarco • 2019 Maryna Viazovska • 2021 Kaisa Matomäki • 2023 Panagiota Daskalopoulos & Nataša Šešum
Jean-Claude Sikorav Jean-Claude Sikorav (born 21 June 1957) is a French mathematician. He is a professor at the École normale supérieure de Lyon. He specializes in symplectic geometry.[1] Main contributions Sikorav is known for his proof, joint with François Laudenbach, of the Arnold conjecture for Lagrangian intersections in cotangent bundles,[2] as well as for introducing generating families in symplectic topology. Selected publications Sikorav is one of fifteen members of a group of mathematicians who published the book Uniformisation des surfaces de Riemann under the pseudonym of Henri Paul de Saint-Gervais.[3] He has written the survey • Sikorav, Jean-Claude (1994), "Some properties of holomorphic curves in almost complex manifolds", Holomorphic curves in symplectic geometry, Progress in Mathematics, vol. 117, Basel: Birkhäuser, pp. 165–189, MR 1274929. and research papers • Hofer, Helmut; Lizan, Véronique; Sikorav, Jean-Claude (1997), "On genericity for holomorphic curves in four-dimensional almost-complex manifolds", Journal of Geometric Analysis, 7 (1): 149–159, doi:10.1007/BF02921708, MR 1630789, S2CID 119936346. • Laudenbach, François; Sikorav, Jean-Claude (1985), "Persistance d'intersection avec la section nulle au cours d'une isotopie hamiltonienne dans un fibré cotangent", Inventiones Mathematicae, 82 (2): 349–357, doi:10.1007/BF01388807, MR 0809719, S2CID 122242002. Honors Sikorav is a Knight of the Ordre des Palmes Académiques. References 1. See here Archived 2012-07-12 at the Wayback Machine 2. Laudenbach, Sikorav, Persistance d'intersection avec la section nulle au cours d'une isotopie hamiltonienne dans un fibre cotangent, Inventiones Mathematicae 82 (1985), no. 2, 349–357 3. de Saint-Gervais, Henri Paul (2010), Uniformisation des surfaces de Riemann: Retour sur un théorème centenaire, ENS Éditions, Lyon, ISBN 978-2-84788-233-9, MR 2768303. External links • Jean-Claude Sikorav at the Mathematics Genealogy Project • Home page at the École Normale Supérieure de Lyon Authority control International • ISNI • VIAF National • France • BnF data Academics • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Silas D. Alben Silas D. Alben is an American mathematician. He is Professor of Mathematics and Director of the Applied and Interdisciplinary Mathematics Program at the University of Michigan. His research addresses problems from biology (especially biomechanics) and engineering that can be studied with the tools of applied mathematics and continuum mechanics. Silas D. Alben NationalityAmerican Alma materNew York University Harvard University Scientific career FieldsBiomechanics Numerical methods Control theory InstitutionsGeorgia Institute of Technology University of Michigan Doctoral advisorMichael Shelley Biography Education Silas Alben attended Harvard College, where he received A.B. degrees in Mathematics and Physics in 1999, magna cum laude.[1] In 2000, he joined the Courant Institute of Mathematical Sciences at New York University, where he received a Ph.D. in Mathematics in 2004. His thesis Drag Reduction by Self-Similar Bending and a Transition to Forward Flight by a Symmetry-Breaking Instability was advised by Michael Shelley.[2] Research Alben's research focuses on problems arising in biomechanics, material science, and fluid mechanics. As a graduate student at NYU, Alben worked with Jun Zhang and Michael Shelley in investigating the dynamics of flexible structures and how such structures can become more aerodynamic by altering their shape. In this study, experiments visualized a short glass fiber deforming in fluid flow, and analysis showed how the fiber can reduce the drag force exerted by the fluid by changing its shape. This work was published in 2002 in Nature under the title Drag Reduction Through Self-Similar Bending of a Flexible Body,[3] and was the subject of various news articles in periodicals including The New York Times[4] and others.[5] As a Postdoctoral Fellow at Harvard, Alben collaborated with Ernst A. van Nierop and Michael P. Brenner on a paper titled "How Bumps on Whale Flippers Delay Stall: An Aerodynamic Model".[6] The paper gave a mathematical model for this hydrodynamic phenomenon. This result, featured in MIT's Technology Review[7] and Nature,[8] provides a theoretical basis for potential improvements in using bumps for more stable airplanes, more agile submarines, and more efficient turbine blades. In 2007, Alben investigated (with Michael P. Brenner) the self-assembly of 3D structures from flat, elastic sheets. This experiment, featured in New Scientist,[9] presented a new technique in nanoscale construction; previously, the transformation of flat sheets into 3D structures was performed by random formation, but in this study, the addition of biases into the design of the sheets made it possible to predict the resulting shape. Honors and awards • Alfred P. Sloan Foundation Research Fellow (2011) References 1. APS biography of Silas Alben 2. Alben, S. (2004). Drag reduction by self-similar bending and a transition to forward flight by a symmetry breaking instability. 3. Alben, S.; Shelley, M.; Zhang, J. (2002). "Drag reduction through self-similar bending of a flexible body" (PDF). Nature. 420 (6915): 479–481. Bibcode:2002Natur.420..479A. doi:10.1038/nature01232. PMID 12466836. S2CID 4414018. Retrieved 2008-06-29. 4. Nature's Secret to Building for Strength: Flexibility 5. NYU scientists show the benefits of being flexible 6. Van Nierop, E.A.; Alben, S.; Brenner, M.P. (2008). "How Bumps on Whale Flippers Delay Stall: An Aerodynamic Model". Physical Review Letters. 100 (5): 54502. Bibcode:2008PhRvL.100e4502V. doi:10.1103/PhysRevLett.100.054502.
PMID 18352375. 7. Whale-Inspired Wind Turbines 8. Fluid dynamics: Lifting a whale 9. Self-assembly could simplify nanotech construction External links • Homepage • U Michigan faculty profile Authority control: Academics • MathSciNet • Mathematics Genealogy Project • zbMATH
Mary Silber Mary Catherine Silber is a professor in the Department of Statistics at the University of Chicago who works on dynamical systems, in bifurcation theory and pattern formation.[1] Mary Silber AwardsFellow of the Society for Industrial and Applied Mathematics Academic background Alma materSonoma State University, University of California, Berkeley Academic work DisciplineStatistics InstitutionsThe University of Chicago Education and career Silber completed her Ph.D. in physics from the University of California, Berkeley in 1989, under the supervision of Edgar Knobloch. Her dissertation was Bifurcations with $D(4)$ Symmetry and Spatial Pattern Selection.[2] After postdoctoral research at the University of Minnesota, Georgia Institute of Technology, and California Institute of Technology, she joined the Northwestern faculty in 1993.[3] She moved to the Department of Statistics at the University of Chicago in 2015 as a faculty member in the Computational and Applied Mathematics Initiative. In 2020, Silber joined two other University of Chicago faculty members in representing the University on the Institute for Foundational Data Science.[4] She is the Director of the Committee on Computational and Applied Mathematics, an interdisciplinary graduate program in computational and applied mathematics at the University of Chicago.[5] Awards and recognition In 2012 Silber became a fellow of the Society for Industrial and Applied Mathematics "for contributions to the analysis of bifurcations in the presence of symmetry".[6] References 1. Faculty Directory: Mary Silber, The University of Chicago, retrieved 2016-01-21. 2. Mary Silber at the Mathematics Genealogy Project 3. Mary Silber CV (PDF), retrieved 2016-12-09 4. "UChicago Joins Three Universities in Institute for Foundational Data Science". UChicago CS News. 21 September 2020. Retrieved 28 February 2022. 5. "Our History". Committee on Computational and Applied Mathematics (CCAM). n.d. Retrieved 28 February 2022. 6. SIAM Fellows: Class of 2012, retrieved 2015-09-09. External links University of Chicago profile Authority control: Academics • Google Scholar • MathSciNet • Mathematics Genealogy Project • ORCID
Shilov boundary In functional analysis, a branch of mathematics, the Shilov boundary is the smallest closed subset of the structure space of a commutative Banach algebra where an analog of the maximum modulus principle holds. It is named after its discoverer, Georgii Evgen'evich Shilov. Precise definition and existence Let ${\mathcal {A}}$ be a commutative Banach algebra and let $\Delta {\mathcal {A}}$ be its structure space equipped with the relative weak*-topology of the dual ${\mathcal {A}}^{*}$. A closed (in this topology) subset $F$ of $\Delta {\mathcal {A}}$ is called a boundary of ${\mathcal {A}}$ if $\max _{f\in \Delta {\mathcal {A}}}|f(x)|=\max _{f\in F}|f(x)|$ for all $x\in {\mathcal {A}}$. The set $S=\bigcap \{F:F{\text{ is a boundary of }}{\mathcal {A}}\}$ is called the Shilov boundary. It has been proved by Shilov[1] that $S$ is itself a boundary of ${\mathcal {A}}$. Thus one may also say that the Shilov boundary is the unique set $S\subset \Delta {\mathcal {A}}$ which satisfies 1. $S$ is a boundary of ${\mathcal {A}}$, and 2. whenever $F$ is a boundary of ${\mathcal {A}}$, then $S\subset F$. Examples Let $\mathbb {D} =\{z\in \mathbb {C} :|z|<1\}$ be the open unit disc in the complex plane and let ${\mathcal {A}}=H^{\infty }(\mathbb {D} )\cap {\mathcal {C}}({\bar {\mathbb {D} }})$ be the disc algebra, i.e. the functions holomorphic in $\mathbb {D} $ and continuous in the closure of $\mathbb {D} $ with supremum norm and the usual algebraic operations. Then $\Delta {\mathcal {A}}={\bar {\mathbb {D} }}$ and $S=\{|z|=1\}$. Indeed, by the maximum modulus principle every function in ${\mathcal {A}}$ attains its maximum modulus on the unit circle, so the circle is a boundary; and no proper closed subset of the circle suffices, since for any point $\zeta $ of the circle the function $f(z)=(1+{\bar {\zeta }}z)/2$ attains its maximum modulus only at $z=\zeta $. References • "Bergman-Shilov boundary", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Notes 1. Theorem 4.15.4 in Einar Hille, Ralph S. Phillips: Functional analysis and semigroups. AMS, Providence 1957. See also • James boundary • Furstenberg boundary
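To make the disc algebra example concrete, here is a minimal Python sketch (our own illustration, not drawn from the references; the polynomial f is an arbitrary choice). It samples a member of the disc algebra on the closed disc and on the unit circle, and confirms numerically that the maximum modulus over the whole disc is already attained on the circle, the Shilov boundary:

```python
import numpy as np

# f is holomorphic on D and continuous on its closure, so it lies in the
# disc algebra A = H^inf(D) ∩ C(D̄).  Any polynomial works here.
f = lambda z: z**3 - 2*z + 1

theta = np.linspace(0, 2 * np.pi, 400)
r = np.linspace(0, 1, 200)
R, T = np.meshgrid(r, theta)

disc = R * np.exp(1j * T)        # polar grid over the closed unit disc
circle = np.exp(1j * theta)      # the unit circle |z| = 1

print(np.abs(f(disc)).max())     # max modulus over the closed disc
print(np.abs(f(circle)).max())   # max over the circle: the same value
```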
Silvano Martello Silvano Martello (born in Bologna, Italy) is an Italian scientist and engineer, and an Emeritus Professor of Operations Research at the University of Bologna.[1] He is known for his research in Operations Research and Mathematical Programming. In particular, he made significant contributions in the areas of knapsack and assignment problems, packing problems, and vehicle routing.[2] As of 2023, he had published 160 peer-reviewed articles and had been cited more than 7,000 times. Silvano Martello Silvano Martello in 2018 Born Bologna, Italy NationalityItalian CitizenshipItalian OccupationEmeritus Professor of Operations Research TitleEmeritus Professor Board member ofEURO, ECCO, AIRO AwardsEURO Gold Medal, IFORS Distinguished Lecturer Academic background EducationUniversity of Bologna Alma materUniversity of Bologna Academic work DisciplineOperations Research Sub-disciplineCombinatorial Optimization InstitutionsUniversity of Bologna Main interestsKnapsack Problem, Assignment Problem, Packing Problems, Routing Problems Notable worksKnapsack problems: Algorithms and Computer implementations; Assignment Problems Websitehttps://www.unibo.it/sitoweb/silvano.martello/en He was vice-president of the Association of European Operational Research Societies (EURO) from 2014 to 2017,[3] and has been chairman of the European Chapter on Combinatorial Optimization (ECCO) since 1997.[4] He is editor-in-chief of 4OR, the joint official journal of the Belgian, French, and Italian Operations Research Societies.[5] Among his PhD students are Andrew H. and Ann R. Tisch Professor Andrea Lodi[6] (Cornell Tech), and Professor Mauro dell'Amico[7] (University of Modena and Reggio Emilia).[8] Education and early career Martello graduated from the University of Bologna with a degree in Electronic Engineering in 1973. He was an assistant and then an associate professor at the University of Bologna from 1980 to 1990. From 1990 to 1994 he was a full professor of Operations Research and Management Science at the University of Turin. From 1994 to 2018 he was a full professor of Operations Research at the University of Bologna. Awards • 2012 - IFORS Distinguished Lecturer.[9] • 2018 - Omega (journal) best paper award.[10] • 2018 - EURO Gold Medal from the Association of European Operational Research Societies.[11] Books He is the co-author, with Paolo Toth, of the book Knapsack problems: Algorithms and Computer implementations (John Wiley & Sons, Inc., 1990).[12] He also co-authored, with Rainer Burkard and Mauro dell'Amico, the book Assignment Problems (SIAM, 2009).[13] References 1. "Silvano Martello - UNIBO". Retrieved January 15, 2023. 2. "Paolo Toth". Scopus. Retrieved January 15, 2023. 3. "EURO - The Association of European Operational Research Societies - Previous members of the Executive Committee". www.euro-online.org. 4. "EURO Working Group ECCO". Retrieved January 15, 2023. 5. "4OR - Editors". Retrieved January 16, 2023. 6. "Andrea Lodi - Cornell University". Retrieved January 15, 2023. 7. "Mauro dell'Amico - UNIMORE". Retrieved January 15, 2023. 8. "Paolo Toth Math Genealogy". Retrieved January 15, 2023. 9. "IFORS Distinguished Lectures". Retrieved January 15, 2023. 10. "Omega Best Paper Awards 2018". Retrieved January 15, 2023. 11. "EURO Gold Medal 2018". Retrieved January 15, 2023. 12. Silvano Martello and Paolo Toth (1990). Knapsack problems: Algorithms and Computer implementations. John Wiley & Sons, Inc. ISBN 0471924202. 13. Rainer Burkard, Mauro dell'Amico, and Silvano Martello (2009).
Assignment Problems. SIAM. ISBN 978-0-89871-663-4. External links • Home page • Silvano Martello publications indexed by the Scopus bibliographic database. (subscription required) Authority control International • ISNI • VIAF • WorldCat National • Norway • France • BnF data • Italy • Israel • United States • Czech Republic • Australia • Netherlands • Vatican Academics • CiNii • DBLP • Google Scholar • MathSciNet • Mathematics Genealogy Project • ORCID • Scopus • zbMATH Other • IdRef
Silver's dichotomy In descriptive set theory, a branch of mathematics, Silver's dichotomy (also known as Silver's theorem)[1] is a statement about equivalence relations, named after Jack Silver.[2][3] Statement and history A relation is said to be coanalytic if its complement is an analytic set. Silver's dichotomy concerns the equivalence classes of a coanalytic equivalence relation: any coanalytic equivalence relation either has countably many equivalence classes, or else there is a perfect set of reals that are pairwise inequivalent.[4] In the latter case, there must be uncountably many equivalence classes of the relation.[2] The first published proof of Silver's dichotomy was by Jack Silver, appearing in 1980 in order to answer a question posed by Harvey Friedman.[5] One application of Silver's dichotomy in recursive set theory is the following: since equality restricted to a set $X$ is coanalytic, there is no Borel equivalence relation $R$ with $(=\upharpoonright \aleph _{0})<_{B}R<_{B}(=\upharpoonright 2^{\aleph _{0}})$, where $<_{B}$ denotes strict Borel-reducibility. Some later results motivated by Silver's dichotomy founded a new field known as invariant descriptive set theory, which studies definable equivalence relations. Silver's dichotomy also admits several weaker recursive versions, which have been compared in strength with subsystems of second-order arithmetic from reverse mathematics,[2] while Silver's dichotomy itself is provably equivalent to $\Pi _{1}^{1}{\mathsf {-CA}}_{0}$ over ${\mathsf {RCA}}_{0}$.[1] References 1. S. G. Simpson, "Subsystems of Z2 and Reverse Mathematics", p.442. Appearing in G. Takeuti, Proof Theory (1987), ISBN 0 444 87943 9. 2. L. Yanfang, On Silver's Dichotomy, Ph.D thesis. Accessed 30 August 2022. 3. Sy D. Friedman, Consistency of the Silver dichotomy in generalized Baire space, Fundamenta Mathematicae (2014). Accessed 30 August 2022. 4. A. Kechris, New Directions in Descriptive Set Theory (1999, p.165). Accessed 1 September 2022. 5. J. Silver, Counting the number of equivalence classes of Borel and coanalytic equivalence relations (Annals of Mathematical Logic, 1980, received 1977). Accessed 31 August 2022.
Silver ratio In mathematics, two quantities are in the silver ratio (or silver mean)[1][2] if the ratio of the smaller of those two quantities to the larger quantity is the same as the ratio of the larger quantity to the sum of the smaller quantity and twice the larger quantity (see below). This defines the silver ratio as an irrational mathematical constant, whose value, one plus the square root of 2, is approximately 2.4142135623. Its name is an allusion to the golden ratio; analogously to the way the golden ratio is the limiting ratio of consecutive Fibonacci numbers, the silver ratio is the limiting ratio of consecutive Pell numbers. The silver ratio is denoted by δS. Representations of the silver ratio: decimal 2.4142135623730950488...; algebraic form 1 + √2; continued fraction $\textstyle 2+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{\ddots }}}}}}}}$; binary 10.01101010000010011110...; hexadecimal 2.6A09E667F3BCC908B2F... Mathematicians have studied the silver ratio since the time of the Greeks (although perhaps without giving it a special name until recently) because of its connections to the square root of 2, its convergents, square triangular numbers, Pell numbers, octagons and the like. The relation described above can be expressed algebraically: ${\frac {2a+b}{a}}={\frac {a}{b}}\equiv \delta _{S}$ or equivalently, $2+{\frac {b}{a}}={\frac {a}{b}}\equiv \delta _{S}$ The silver ratio can also be defined by the simple continued fraction [2; 2, 2, 2, ...]: $2+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2+\ddots }}}}}}=\delta _{S}$ The convergents of this continued fraction (2/1, 5/2, 12/5, 29/12, 70/29, ...) are ratios of consecutive Pell numbers. These fractions provide accurate rational approximations of the silver ratio, analogous to the approximation of the golden ratio by ratios of consecutive Fibonacci numbers. The silver rectangle is connected to the regular octagon. If a regular octagon is partitioned into two isosceles trapezoids and a rectangle, then the rectangle is a silver rectangle with an aspect ratio of 1:δS, and the 4 sides of the trapezoids are in a ratio of 1:1:1:δS. If the edge length of a regular octagon is t, then the span of the octagon (the distance between opposite sides) is δSt, and the area of the octagon is $2\delta _{S}t^{2}$.[3] Calculation For comparison, two quantities a, b with a > b > 0 are said to be in the golden ratio φ if, ${\frac {a+b}{a}}={\frac {a}{b}}=\varphi $ However, they are in the silver ratio δS if, ${\frac {2a+b}{a}}={\frac {a}{b}}=\delta _{S}.$ Equivalently, $2+{\frac {b}{a}}={\frac {a}{b}}=\delta _{S}$ Therefore, $2+{\frac {1}{\delta _{S}}}=\delta _{S}.$ Multiplying by δS and rearranging gives ${\delta _{S}}^{2}-2\delta _{S}-1=0.$ Using the quadratic formula, two solutions can be obtained. Because δS is the ratio of positive quantities, it is necessarily positive, so, $\delta _{S}=1+{\sqrt {2}}=2.41421356237\dots $ Properties Number-theoretic properties The silver ratio is a Pisot–Vijayaraghavan number (PV number), as its conjugate 1 − √2 = −1/δS ≈ −0.41 has absolute value less than 1. In fact it is the second smallest quadratic PV number after the golden ratio. This means the distance from $\delta _{S}^{n}$ to the nearest integer is $1/\delta _{S}^{n}\approx 0.41^{n}$. Thus, the sequence of fractional parts of $\delta _{S}^{n}$, $n=1,2,3,\dots $ (taken as elements of the torus) converges.
In particular, this sequence is not equidistributed mod 1. Powers The lower powers of the silver ratio are $\delta _{S}^{-1}=1\delta _{S}-2=[0;2,2,2,2,2,\dots ]\approx 0.41421$ $\delta _{S}^{0}=0\delta _{S}+1=[1]=1$ $\delta _{S}^{1}=1\delta _{S}+0=[2;2,2,2,2,2,\dots ]\approx 2.41421$ $\delta _{S}^{2}=2\delta _{S}+1=[5;1,4,1,4,1,\dots ]\approx 5.82842$ $\delta _{S}^{3}=5\delta _{S}+2=[14;14,14,14,\dots ]\approx 14.07107$ $\delta _{S}^{4}=12\delta _{S}+5=[33;1,32,1,32,\dots ]\approx 33.97056$ The powers continue in the pattern $\delta _{S}^{n}=K_{n}\delta _{S}+K_{n-1}$ where $K_{n}=2K_{n-1}+K_{n-2}$ For example, using this property: $\delta _{S}^{5}=29\delta _{S}+12=[82;82,82,82,\dots ]\approx 82.01219$ Using K0 = 1 and K1 = 2 as initial conditions, a Binet-like formula results from solving the recurrence relation $K_{n}=2K_{n-1}+K_{n-2}$ which becomes $K_{n}={\frac {1}{2{\sqrt {2}}}}\left(\delta _{S}^{n+1}-{(2-\delta _{S})}^{n+1}\right)$ Trigonometric properties See also: Exact trigonometric values § Common angles The silver ratio is intimately connected to trigonometric ratios for π/8 = 22.5°. $\tan {\frac {\pi }{8}}={\sqrt {2}}-1={\frac {1}{\delta _{s}}}$ $\cot {\frac {\pi }{8}}=\tan {\frac {3\pi }{8}}={\sqrt {2}}+1=\delta _{s}$ So the area of a regular octagon with side length a is given by $A=2a^{2}\cot {\frac {\pi }{8}}=2\delta _{s}a^{2}\simeq 4.828427a^{2}.$ See also • Metallic means • Ammann–Beenker tiling References 1. Vera W. de Spinadel (1999). The Family of Metallic Means, Vismath 1(3) from Mathematical Institute of Serbian Academy of Sciences and Arts. 2. de Spinadel, Vera W. (1998). Williams, Kim (ed.). "The Metallic Means and Design". Nexus II: Architecture and Mathematics. Fucecchio (Florence): Edizioni dell'Erba: 141–157. 3. Kapusta, Janos (2004), "The square, the circle, and the golden proportion: a new class of geometrical constructions" (PDF), Forma, 19: 293–313. Further reading • Buitrago, Antonia Redondo (2008). "Polygons, Diagonals, and the Bronze Mean", Nexus Network Journal 9,2: Architecture and Mathematics, p.321-2. Springer Science & Business Media. ISBN 9783764386993. External links • Weisstein, Eric W. "Silver Ratio". MathWorld. • "An Introduction to Continued Fractions: The Silver Means Archived 2018-12-08 at the Wayback Machine", Fibonacci Numbers and the Golden Section. 
• "Silver rectangle and its sequence" at Tartapelago by Giorgio Pietrocola Algebraic numbers • Algebraic integer • Chebyshev nodes • Constructible number • Conway's constant • Cyclotomic field • Eisenstein integer • Gaussian integer • Golden ratio (φ) • Perron number • Pisot–Vijayaraghavan number • Quadratic irrational number • Rational number • Root of unity • Salem number • Silver ratio (δS) • Square root of 2 • Square root of 3 • Square root of 5 • Square root of 6 • Square root of 7 • Doubling the cube • Twelfth root of two  Mathematics portal Fractions and ratios Division and ratio • Dividend ÷ Divisor = Quotient Fraction • Numerator/Denominator = Quotient • Algebraic • Aspect • Binary • Continued • Decimal • Dyadic • Egyptian • Golden • Silver • Integer • Irreducible • Reduction • Just intonation • LCD • Musical interval • Paper size • Percentage • Unit Irrational numbers • Chaitin's (Ω) • Liouville • Prime (ρ) • Omega • Cahen • Logarithm of 2 • Gauss's (G) • Twelfth root of 2 • Apéry's (ζ(3)) • Plastic (ρ) • Square root of 2 • Supergolden ratio (ψ) • Erdős–Borwein (E) • Golden ratio (φ) • Square root of 3 • Square root of pi (√π) • Square root of 5 • Silver ratio (δS) • Square root of 6 • Square root of 7 • Euler's (e) • Pi (π) • Schizophrenic • Transcendental • Trigonometric Metallic means • Pisot number • Gold • Angle • Base • Fibonacci sequence • Kepler triangle • Rectangle • Rhombus • Section search • Spiral • Triangle • Silver • Pell number • Bronze • Copper • Nickel • etc...
Beraha constants The Beraha constants are a series of mathematical constants; the $n{\text{th}}$ Beraha constant is given by $B(n)=2+2\cos \left({\frac {2\pi }{n}}\right).$ Notable examples include $B(5)=\varphi +1$, where $\varphi $ is the golden ratio; $B(7)$, the silver constant[1] (also known as the silver root);[2] and $B(10)=\varphi +2$. The following table summarizes the first ten Beraha constants.

 $n$    $B(n)$                            Approximately
 1      4
 2      0
 3      1
 4      2
 5      ${\frac {1}{2}}(3+{\sqrt {5}})$   2.618
 6      3
 7      $2+2\cos({\tfrac {2}{7}}\pi )$    3.247
 8      $2+{\sqrt {2}}$                   3.414
 9      $2+2\cos({\tfrac {2}{9}}\pi )$    3.532
 10     ${\frac {1}{2}}(5+{\sqrt {5}})$   3.618

See also • Chromatic polynomial Notes 1. Weisstein, Eric W. "Silver Constant". Wolfram MathWorld. Retrieved November 3, 2018. 2. Weisstein, Eric W. "Silver Root". Wolfram MathWorld. Retrieved May 5, 2020. References • Weisstein, Eric W. "Beraha Constants". Wolfram MathWorld. Retrieved November 3, 2018. • Beraha, S. Ph.D. thesis. Baltimore, MD: Johns Hopkins University, 1974. • Le Lionnais, F. Les nombres remarquables. Paris: Hermann, p. 143, 1983. • Saaty, T. L. and Kainen, P. C. The Four-Color Problem: Assaults and Conquest. New York: Dover, pp. 160–163, 1986. • Tutte, W. T. "Chromials." University of Waterloo, 1971. • Tutte, W. T. "More about Chromatic Polynomials and the Golden Ratio." In Combinatorial Structures and their Applications: Proc. Calgary Internat. Conf., Calgary, Alberta, 1969. New York: Gordon and Breach, p. 439, 1969. • Tutte, W. T. "Chromatic Sums for Planar Triangulations I: The Case $\lambda =1$," Research Report COPR 72–7, University of Waterloo, 1972a. • Tutte, W. T. "Chromatic Sums for Planar Triangulations IV: The Case $\lambda =\infty $." Research Report COPR 72–4, University of Waterloo, 1972b.
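Since $B(n)$ is given by a single closed-form expression, the table above is easy to reproduce; a small Python sketch of ours:

```python
import math

def beraha(n: int) -> float:
    """The n-th Beraha constant, B(n) = 2 + 2*cos(2*pi/n)."""
    return 2 + 2 * math.cos(2 * math.pi / n)

for n in range(1, 11):
    print(n, round(beraha(n), 3))
# B(5) = 2.618... (golden ratio plus one); B(7) = 3.247... (the silver constant).
```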
Alice Silverberg Alice Silverberg (born 1958)[1] is professor of Mathematics and Computer Science at the University of California, Irvine.[2] She was faculty at the Ohio State University from 1984 through 2004.[2] She has given over 300 lectures at universities around the world, and she has brought attention to issues of sexism and discrimination through her blog Alice's Adventures in Numberland. Alice Silverberg Bornc. 1958 (age 64–65) NationalityAmerican Alma mater • Harvard University • Princeton Awards • Sloan Fellowship (1990) • Fellow, American Mathematical Society (2012) • Fellow, Association for Women in Mathematics (2019) Scientific career FieldsMathematics InstitutionsUniversity of California, Irvine ThesisMordell-Weil groups of generic polarized abelian varieties (1984) Doctoral advisorGoro Shimura Websitewww.math.uci.edu/~asilverb/ Research Silverberg's research concerns number theory and cryptography. With Karl Rubin, she introduced the CEILIDH system for torus-based cryptography in 2003,[3] and she currently holds 10 patents related to cryptography.[2] She is also known for her work on theoretical aspects of abelian varieties.[4][5] Education and career Silverberg graduated from Harvard University in 1979,[2] and received her Ph.D. from Princeton University in 1984 under the supervision of Goro Shimura.[6] She began her academic career at Ohio State University in 1984 and became a full professor in 1996. She moved to the University of California at Irvine in 2004 as Professor of Mathematics and Computer Science, and in 2018 she was awarded the title of Distinguished Professor.[2] Over the past 25 years she has organized or co-organized more than ten conferences in mathematics and cryptography, and has served on the program committees of more than twenty others.[2] Silverberg has a long record of service with the American Mathematical Society and is currently a member of their nominating committee.[2] She has served as an editor for the Association for Women in Mathematics since 2008,[2] and recently joined the board of the Number Theory Foundation.[7] Honors In 2012, Silverberg became a fellow of the American Mathematical Society.[8] She was elected to the 2019 class of fellows of the Association for Women in Mathematics "For her outstanding research in number theory and deep commitment to the promotion of fairness and equal opportunity evidenced by her service and outreach efforts", also citing her many invited lectures and her blog.[9] Additional work In 2017, Silverberg began a blog entitled Alice's Adventures in Numberland, which humorously discusses issues surrounding sexism in academia.[10] This is a topic which she has previously discussed in interviews,[11] and has been quoted on.[12] References 1. Birth year from ISNI authority control file, retrieved 2 December 2018. 2. Curriculum Vitae, University of California, Irvine, retrieved 18 January 2020. 3. Rubin, Karl; Silverberg, Alice (2003), "Torus-based cryptography", Advances in Cryptology - CRYPTO 2003, Lecture Notes in Computer Science, vol. 2729, Springer, pp. 349–365, doi:10.1007/978-3-540-45146-4_21. 4. Silverberg, Alice (1992), "Fields of definition for homomorphisms of abelian varieties", Journal of Pure and Applied Algebra, 77 (3): 253–262, doi:10.1016/0022-4049(92)90141-2, MR 1154704. 5. Silverberg, Alice (1988), "Torsion points on abelian varieties of CM-type", Compositio Mathematica, 68 (3): 241–249, MR 0971328, Zbl 0683.14002. 6. Alice Silverberg at the Mathematics Genealogy Project 7. 
Board of directors, Number Theory Foundation, retrieved 20 January 2020 8. List of Fellows of the American Mathematical Society 9. AWM Fellows: 2019 Class of Fellows, Association for Women in Mathematics, retrieved 26 January 2019 10. "Alice's Adventures in Numberland". Retrieved 18 January 2020. 11. "Interview with Alice Silverberg". Mathematical Association of America. Retrieved 10 October 2017. 12. Fine, Cordelia (2005). Delusions of gender: The real science behind sex differences. Icon Books Ltd. External links • Home page • Alice's Adventures in Numberland Authority control International • ISNI • VIAF National • Norway • Israel • United States Academics • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Silverman's game In game theory, Silverman's game is a two-person zero-sum game played on the unit square. It is named for mathematician David Silverman. It is played by two players on a given set S of positive real numbers. Before play starts, a threshold T and penalty ν are chosen with 1 < T < ∞ and 0 < ν < ∞. For example, consider S to be the set of integers from 1 to n, T = 3 and ν = 2. Each player chooses an element of S: suppose player A plays x and player B plays y. Without loss of generality, assume player A chooses the larger number, so x ≥ y. Then the payoff to A is 0 if x = y, 1 if 1 < x/y < T, and −ν if x/y ≥ T. Thus each player seeks to choose the larger number, but there is a penalty of ν for choosing too large a number (see the code sketch below). A large number of variants have been studied, where the set S may be finite, countable, or uncountable. Extensions allow the two players to choose from different sets, such as the odd and even integers. References • Evans, Ronald J. (April 1979). "Silverman's game on intervals". American Mathematical Monthly. 86 (4): 277–281. doi:10.1080/00029890.1979.11994788. • Evans, Ronald J.; Heuer, Gerald A. (March 1992). "Silverman's game on discrete sets" (PDF). Linear Algebra and Its Applications. 166: 217–235. doi:10.1016/0024-3795(92)90279-J. • Heuer, Gerald; Leopold-Wildburger, Ulrike (1995). Silverman's Game. Springer. p. 293. ISBN 978-3-540-59232-7.
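The payoff rule above is easy to state in code. Here is a minimal Python sketch (our own illustration, using the example parameters T = 3 and ν = 2 from the text):

```python
def payoff_to_A(x: float, y: float, T: float = 3.0, nu: float = 2.0) -> float:
    """Payoff to player A when A plays x and B plays y.
    The game is zero-sum: B receives the negative of this value."""
    if x == y:
        return 0.0
    hi, lo = max(x, y), min(x, y)
    # The player with the larger number wins 1, unless the ratio reaches
    # the threshold T, in which case that player pays the penalty nu.
    result = 1.0 if hi / lo < T else -nu
    return result if x > y else -result

# Example plays on S = {1, ..., n} with T = 3, nu = 2:
print(payoff_to_A(2, 1))  #  1.0 (2/1 < 3, A's number is larger)
print(payoff_to_A(5, 1))  # -2.0 (5/1 >= 3, A is penalized)
print(payoff_to_A(1, 5))  #  2.0 (B is penalized, so A gains nu)
```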
Kernel density estimation In statistics, kernel density estimation (KDE) is the application of kernel smoothing for probability density estimation, i.e., a non-parametric method to estimate the probability density function of a random variable based on kernels as weights. KDE answers a fundamental data smoothing problem where inferences about the population are made based on a finite data sample. In some fields such as signal processing and econometrics it is also termed the Parzen–Rosenblatt window method, after Emanuel Parzen and Murray Rosenblatt, who are usually credited with independently creating it in its current form.[1][2] One of the famous applications of kernel density estimation is in estimating the class-conditional marginal densities of data when using a naive Bayes classifier,[3][4] which can improve its prediction accuracy.[3] Definition Let (x1, x2, ..., xn) be independent and identically distributed samples drawn from some univariate distribution with an unknown density ƒ. We are interested in estimating the shape of this function ƒ at any given point x. Its kernel density estimator is ${\widehat {f}}_{h}(x)={\frac {1}{n}}\sum _{i=1}^{n}K_{h}(x-x_{i})={\frac {1}{nh}}\sum _{i=1}^{n}K{\Big (}{\frac {x-x_{i}}{h}}{\Big )},$ where K is the kernel — a non-negative function — and h > 0 is a smoothing parameter called the bandwidth. A kernel with subscript h is called the scaled kernel and defined as Kh(x) = 1/h K(x/h). Intuitively one wants to choose h as small as the data will allow; however, there is always a trade-off between the bias of the estimator and its variance. The choice of bandwidth is discussed in more detail below. A range of kernel functions are commonly used: uniform, triangular, biweight, triweight, Epanechnikov, normal, and others. The Epanechnikov kernel is optimal in a mean square error sense,[5] though the loss of efficiency is small for the kernels listed previously.[6] Due to its convenient mathematical properties, the normal kernel is often used, which means K(x) = ϕ(x), where ϕ is the standard normal density function. The construction of a kernel density estimate finds interpretations in fields outside of density estimation.[7] For example, in thermodynamics, this is equivalent to the amount of heat generated when heat kernels (the fundamental solution to the heat equation) are placed at each of the data point locations xi. Similar methods are used to construct discrete Laplace operators on point clouds for manifold learning (e.g. diffusion map). Example Kernel density estimates are closely related to histograms, but can be endowed with properties such as smoothness or continuity by using a suitable kernel. The diagram below, based on these 6 data points, illustrates this relationship:

 Sample   1      2      3      4     5     6
 Value    -2.1   -1.3   -0.4   1.9   5.1   6.2

For the histogram, first, the horizontal axis is divided into sub-intervals or bins which cover the range of the data: in this case, six bins each of width 2. Whenever a data point falls inside a bin, a box of height 1/12 is placed there. If more than one data point falls inside the same bin, the boxes are stacked on top of each other. For the kernel density estimate, normal kernels with variance 2.25 (indicated by the red dashed lines) are placed on each of the data points xi. The kernels are summed to make the kernel density estimate (solid blue curve).
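This construction is short to program. The following minimal Python sketch (our own illustration, using only NumPy) reproduces the kernel density estimate described above, placing a normal kernel of variance 2.25 (bandwidth h = 1.5) on each of the six data points:

```python
import numpy as np

data = np.array([-2.1, -1.3, -0.4, 1.9, 5.1, 6.2])

def kde(x, samples, h):
    """Gaussian-kernel density estimate f_hat evaluated at the points x."""
    u = (x[:, None] - samples[None, :]) / h        # shape (len(x), n)
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)   # standard normal kernel
    return K.sum(axis=1) / (len(samples) * h)      # (1/(n h)) * sum of K(u)

grid = np.linspace(-7, 11, 361)
f_hat = kde(grid, data, h=1.5)               # kernels of variance h**2 = 2.25
print((f_hat * (grid[1] - grid[0])).sum())   # ~1.0: the estimate integrates to 1
```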
The smoothness of the kernel density estimate (compared to the discreteness of the histogram) illustrates how kernel density estimates converge faster to the true underlying density for continuous random variables.[8] Bandwidth selection The bandwidth of the kernel is a free parameter which exhibits a strong influence on the resulting estimate. To illustrate its effect, we take a simulated random sample from the standard normal distribution (plotted at the blue spikes in the rug plot on the horizontal axis). The grey curve is the true density (a normal density with mean 0 and variance 1). In comparison, the red curve is undersmoothed since it contains too many spurious data artifacts arising from using a bandwidth h = 0.05, which is too small. The green curve is oversmoothed since using the bandwidth h = 2 obscures much of the underlying structure. The black curve with a bandwidth of h = 0.337 is considered to be optimally smoothed since its density estimate is close to the true density. An extreme situation is encountered in the limit $h\to 0$ (no smoothing), where the estimate is a sum of n delta functions centered at the coordinates of the analyzed samples. In the other extreme limit $h\to \infty $ the estimate retains the shape of the used kernel, centered on the mean of the samples (completely smooth). The most common optimality criterion used to select this parameter is the expected L2 risk function, also termed the mean integrated squared error: $\operatorname {MISE} (h)=\operatorname {E} \!\left[\,\int ({\hat {f}}_{h}(x)-f(x))^{2}\,dx\right]$ Under weak assumptions on ƒ and K (where ƒ is the generally unknown true density function),[1][2] $\operatorname {MISE} (h)=\operatorname {AMISE} (h)+o((nh)^{-1}+h^{4})$ where o is the little o notation and n the sample size (as above). The AMISE is the asymptotic MISE, i.e. the sum of the two leading terms, $\operatorname {AMISE} (h)={\frac {R(K)}{nh}}+{\frac {1}{4}}m_{2}(K)^{2}h^{4}R(f'')$ where $R(g)=\int g(x)^{2}\,dx$ for a function g, $m_{2}(K)=\int x^{2}K(x)\,dx$, $f''$ is the second derivative of $f$, and $K$ is the kernel. The AMISE is minimized by setting its derivative with respect to h equal to zero, ${\frac {\partial }{\partial h}}\operatorname {AMISE} (h)=-{\frac {R(K)}{nh^{2}}}+m_{2}(K)^{2}h^{3}R(f'')=0$ which gives $h_{\operatorname {AMISE} }={\frac {R(K)^{1/5}}{m_{2}(K)^{2/5}R(f'')^{1/5}}}n^{-1/5}=Cn^{-1/5}$ Neither the AMISE nor the hAMISE formulas can be used directly since they involve the unknown density function $f$ or its second derivative $f''$. To overcome that difficulty, a variety of automatic, data-based methods have been developed to select the bandwidth. Several review studies have been undertaken to compare their efficacies,[9][10][11][12][13][14][15] with the general consensus that the plug-in selectors[7][16][17] and cross validation selectors[18][19][20] are the most useful over a wide range of data sets. Substituting any bandwidth h which has the same asymptotic order n−1/5 as hAMISE into the AMISE gives AMISE(h) = O(n−4/5), where O is the big O notation. It can be shown that, under weak assumptions, there cannot exist a non-parametric estimator that converges at a faster rate than the kernel estimator.[21] Note that the n−4/5 rate is slower than the typical n−1 convergence rate of parametric methods.
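The effect of the bandwidth is easy to reproduce numerically. Reusing the small kde function from the sketch above (again our own illustration; the exact error values depend on the simulated sample), the three bandwidths from the example can be compared by their integrated squared error against the true standard normal density:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.standard_normal(100)   # simulated standard normal data

def kde(x, samples, h):
    u = (x[:, None] - samples[None, :]) / h
    return (np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)).sum(axis=1) / (len(samples) * h)

grid = np.linspace(-4, 4, 801)
dx = grid[1] - grid[0]
true = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)

for h in (0.05, 0.337, 2.0):
    ise = (((kde(grid, sample, h) - true) ** 2) * dx).sum()
    print(f"h = {h:5.3f}  integrated squared error = {ise:.4f}")
# Typically the intermediate bandwidth gives by far the smallest error:
# h = 0.05 undersmooths and h = 2 oversmooths.
```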
If the bandwidth is not held fixed, but is varied depending upon the location of either the estimate (balloon estimator) or the samples (pointwise estimator), this produces a particularly powerful method termed adaptive or variable bandwidth kernel density estimation. Bandwidth selection for kernel density estimation of heavy-tailed distributions is relatively difficult.[22] A rule-of-thumb bandwidth estimator If Gaussian basis functions are used to approximate univariate data, and the underlying density being estimated is Gaussian, the optimal choice for h (that is, the bandwidth that minimises the mean integrated squared error) is:[23] $h=\left({\frac {4{\hat {\sigma }}^{5}}{3n}}\right)^{\frac {1}{5}}\approx 1.06\,{\hat {\sigma }}\,n^{-1/5}.$ An $h$ value is considered more robust when it improves the fit for long-tailed and skewed distributions or for bimodal mixture distributions. This is often done empirically by replacing the standard deviation ${\hat {\sigma }}$ by the parameter $A$ below: $A=\min \left({\hat {\sigma }},{\frac {IQR}{1.34}}\right)$ where IQR is the interquartile range. Another modification that will improve the model is to reduce the factor from 1.06 to 0.9. Then the final formula would be: $h=0.9\,\min \left({\hat {\sigma }},{\frac {IQR}{1.34}}\right)\,n^{-{\frac {1}{5}}}$ where $n$ is the sample size. This approximation is termed the normal distribution approximation, Gaussian approximation, or Silverman's rule of thumb.[23] While this rule of thumb is easy to compute, it should be used with caution as it can yield widely inaccurate estimates when the density is not close to being normal. For example, when estimating the bimodal Gaussian mixture model ${\frac {1}{2{\sqrt {2\pi }}}}e^{-{\frac {1}{2}}(x-10)^{2}}+{\frac {1}{2{\sqrt {2\pi }}}}e^{-{\frac {1}{2}}(x+10)^{2}}$ from a sample of 200 points, the figure on the right shows the true density and two kernel density estimates — one using the rule-of-thumb bandwidth, and the other using a solve-the-equation bandwidth.[7][17] The estimate based on the rule-of-thumb bandwidth is significantly oversmoothed. Relation to the characteristic function density estimator Given the sample (x1, x2, ..., xn), it is natural to estimate the characteristic function $\varphi (t)=\operatorname {E} [e^{itX}]$ as ${\widehat {\varphi }}(t)={\frac {1}{n}}\sum _{j=1}^{n}e^{itx_{j}}$ Knowing the characteristic function, it is possible to find the corresponding probability density function through the Fourier transform formula. One difficulty with applying this inversion formula is that it leads to a diverging integral, since the estimate ${\widehat {\varphi }}(t)$ is unreliable for large t’s. To circumvent this problem, the estimator ${\widehat {\varphi }}(t)$ is multiplied by a damping function ψh(t) = ψ(ht), which is equal to 1 at the origin and then falls to 0 at infinity. The “bandwidth parameter” h controls how fast we try to dampen the function ${\widehat {\varphi }}(t)$. In particular when h is small, then ψh(t) will be approximately one for a large range of t’s, which means that ${\widehat {\varphi }}(t)$ remains practically unaltered in the most important region of t’s. The most common choice for the function ψ is either the uniform function ψ(t) = 1{−1 ≤ t ≤ 1}, which effectively means truncating the interval of integration in the inversion formula to [−1/h, 1/h], or the Gaussian function $\psi (t)=e^{-\pi t^{2}}$.
Once the function ψ has been chosen, the inversion formula may be applied, and the density estimator will be ${\begin{aligned}{\widehat {f}}(x)&={\frac {1}{2\pi }}\int _{-\infty }^{+\infty }{\widehat {\varphi }}(t)\psi _{h}(t)e^{-itx}\,dt={\frac {1}{2\pi }}\int _{-\infty }^{+\infty }{\frac {1}{n}}\sum _{j=1}^{n}e^{it(x_{j}-x)}\psi (ht)\,dt\\[5pt]&={\frac {1}{nh}}\sum _{j=1}^{n}{\frac {1}{2\pi }}\int _{-\infty }^{+\infty }e^{-i(ht){\frac {x-x_{j}}{h}}}\psi (ht)\,d(ht)={\frac {1}{nh}}\sum _{j=1}^{n}K{\Big (}{\frac {x-x_{j}}{h}}{\Big )},\end{aligned}}$ where K is the Fourier transform of the damping function ψ. Thus the kernel density estimator coincides with the characteristic function density estimator. Geometric and topological features We can extend the definition of the (global) mode to a local sense and define the local modes: $M=\{x:g(x)=0,\lambda _{1}(x)<0\}$ where $g(x)$ is the gradient of the density and $\lambda _{1}(x)$ is the largest eigenvalue of its Hessian at $x$. Namely, $M$ is the collection of points for which the density function is locally maximized. A natural estimator ${\widehat {M}}$ of $M$ is a plug-in from KDE,[24][25] where ${\widehat {g}}(x)$ and ${\widehat {\lambda }}_{1}(x)$ are the KDE versions of $g(x)$ and $\lambda _{1}(x)$. Under mild assumptions, ${\widehat {M}}$ is a consistent estimator of $M$. Note that one can use the mean shift algorithm[26][27][28] to compute the estimator ${\widehat {M}}$ numerically. Statistical implementation A non-exhaustive list of software implementations of kernel density estimators includes: • In Analytica release 4.4, the Smoothing option for PDF results uses KDE, and from expressions it is available via the built-in Pdf function. • In C/C++, FIGTree is a library that can be used to compute kernel density estimates using normal kernels. MATLAB interface available. • In C++, libagf is a library for variable kernel density estimation. • In C++, mlpack is a library that can compute KDE using many different kernels. It allows setting an error tolerance for faster computation. Python and R interfaces are available. • In C# and F#, Math.NET Numerics is an open source library for numerical computation which includes kernel density estimation • In CrimeStat, kernel density estimation is implemented using five different kernel functions – normal, uniform, quartic, negative exponential, and triangular. Both single- and dual-kernel density estimate routines are available. Kernel density estimation is also used in interpolating a Head Bang routine, in estimating a two-dimensional Journey-to-crime density function, and in estimating a three-dimensional Bayesian Journey-to-crime estimate. • In ELKI, kernel density functions can be found in the package de.lmu.ifi.dbs.elki.math.statistics.kernelfunctions • In ESRI products, kernel density mapping is managed out of the Spatial Analyst toolbox and uses the Quartic(biweight) kernel. • In Excel, the Royal Society of Chemistry has created an add-in to run kernel density estimation based on their Analytical Methods Committee Technical Brief 4. • In gnuplot, kernel density estimation is implemented by the smooth kdensity option, the datafile can contain a weight and bandwidth for each point, or the bandwidth can be set automatically[29] according to "Silverman's rule of thumb" (see above). • In Haskell, kernel density is implemented in the statistics package. • In IGOR Pro, kernel density estimation is implemented by the StatsKDE operation (added in Igor Pro 7.00). Bandwidth can be user specified or estimated by means of Silverman, Scott or Bowmann and Azzalini. Kernel types are: Epanechnikov, Bi-weight, Tri-weight, Triangular, Gaussian and Rectangular.
• In Java, the Weka (machine learning) package provides weka.estimators.KernelEstimator, among others. • In JavaScript, the visualization package D3.js offers a KDE package in its science.stats package. • In JMP, the Graph Builder platform utilizes kernel density estimation to provide contour plots and high density regions (HDRs) for bivariate densities, and violin plots and HDRs for univariate densities. Sliders allow the user to vary the bandwidth. Bivariate and univariate kernel density estimates are also provided by the Fit Y by X and Distribution platforms, respectively. • In Julia, kernel density estimation is implemented in the KernelDensity.jl package. • In MATLAB, kernel density estimation is implemented through the ksdensity function (Statistics Toolbox). As of the 2018a release of MATLAB, both the bandwidth and kernel smoother can be specified, including other options such as specifying the range of the kernel density.[30] Alternatively, a free MATLAB software package which implements an automatic bandwidth selection method[7] is available from the MATLAB Central File Exchange for • 1-dimensional data • 2-dimensional data • n-dimensional data A free MATLAB toolbox with implementations of kernel regression, kernel density estimation, kernel estimation of hazard function and many others is available on these pages (this toolbox is a part of the book [31]). • In Mathematica, numeric kernel density estimation is implemented by the function SmoothKernelDistribution[32] and symbolic estimation is implemented using the function KernelMixtureDistribution[33] both of which provide data-driven bandwidths. • In Minitab, the Royal Society of Chemistry has created a macro to run kernel density estimation based on their Analytical Methods Committee Technical Brief 4.[34] • In the NAG Library, kernel density estimation is implemented via the g10ba routine (available in both the Fortran[35] and the C[36] versions of the Library). • In Nuklei, C++ kernel density methods focus on data from the Special Euclidean group $SE(3)$. • In Octave, kernel density estimation is implemented by the kernel_density option (econometrics package). • In Origin, 2D kernel density plots can be made from its user interface, and two functions, Ksdensity for 1D and Ks2density for 2D, can be used from its LabTalk, Python, or C code. • In Perl, an implementation can be found in the Statistics-KernelEstimation module • In PHP, an implementation can be found in the MathPHP library • In Python, many implementations exist: pyqt_fit.kde Module in the PyQt-Fit package, SciPy (scipy.stats.gaussian_kde), Statsmodels (KDEUnivariate and KDEMultivariate), and scikit-learn (KernelDensity) (see comparison[37]). KDEpy supports weighted data and its FFT implementation is orders of magnitude faster than the other implementations. The commonly used pandas library offers support for kde plotting through the plot method (df.plot(kind='kde')). The getdist package for weighted and correlated MCMC samples supports optimized bandwidth, boundary correction and higher-order methods for 1D and 2D distributions. One newly used package for kernel density estimation is seaborn (import seaborn as sns; sns.kdeplot()).[38] A GPU implementation of KDE also exists.[39] • In R, it is implemented through density in the base distribution; the bw.nrd0 function in the stats package uses the optimized formula from Silverman's book.
bkde in the KernSmooth library, ParetoDensityEstimation in the DataVisualizations library (for Pareto distribution density estimation), kde in the ks library, dkden and dbckden in the evmix library (the latter for boundary-corrected kernel density estimation for bounded support), npudens in the np library (numeric and categorical data), sm.density in the sm library. For an implementation of the kde.R function, which does not require installing any packages or libraries, see kde.R. The btb library, dedicated to urban analysis, implements kernel density estimation through kernel_smoothing. • In SAS, proc kde can be used to estimate univariate and bivariate kernel densities. • In Apache Spark, the KernelDensity() class[40] • In Stata, it is implemented through kdensity;[41] for example histogram x, kdensity. Alternatively, a free Stata module KDENS is available,[42] allowing a user to estimate 1D or 2D density functions. • In Swift, it is implemented through SwiftStats.KernelDensityEstimation in the open-source statistics library SwiftStats. See also • Kernel (statistics) • Kernel smoothing • Kernel regression • Density estimation (with presentation of other examples) • Mean-shift • Scale space: The triplets {(x, h, KDE with bandwidth h evaluated at x): all x, h > 0} form a scale space representation of the data. • Multivariate kernel density estimation • Variable kernel density estimation • Head/tail breaks Further reading • Härdle, Müller, Sperlich, Werwatz, Nonparametric and Semiparametric Methods, Springer-Verlag Berlin Heidelberg 2004, pp. 39–83 References 1. Rosenblatt, M. (1956). "Remarks on Some Nonparametric Estimates of a Density Function". The Annals of Mathematical Statistics. 27 (3): 832–837. doi:10.1214/aoms/1177728190. 2. Parzen, E. (1962). "On Estimation of a Probability Density Function and Mode". The Annals of Mathematical Statistics. 33 (3): 1065–1076. doi:10.1214/aoms/1177704472. JSTOR 2237880. 3. Piryonesi S. Madeh; El-Diraby Tamer E. (2020-06-01). "Role of Data Analytics in Infrastructure Asset Management: Overcoming Data Size and Quality Problems". Journal of Transportation Engineering, Part B: Pavements. 146 (2): 04020022. doi:10.1061/JPEODX.0000175. S2CID 216485629. 4. Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome H. (2001). The Elements of Statistical Learning : Data Mining, Inference, and Prediction : with 200 full-color illustrations. New York: Springer. ISBN 0-387-95284-5. OCLC 46809224. 5. Epanechnikov, V.A. (1969). "Non-parametric estimation of a multivariate probability density". Theory of Probability and Its Applications. 14: 153–158. doi:10.1137/1114019. 6. Wand, M.P; Jones, M.C. (1995). Kernel Smoothing. London: Chapman & Hall/CRC. ISBN 978-0-412-55270-0. 7. Botev, Zdravko (2007). Nonparametric Density Estimation via Diffusion Mixing (Technical report). University of Queensland. 8. Scott, D. (1979). "On optimal and data-based histograms". Biometrika. 66 (3): 605–610. doi:10.1093/biomet/66.3.605. 9. Park, B.U.; Marron, J.S. (1990). "Comparison of data-driven bandwidth selectors". Journal of the American Statistical Association. 85 (409): 66–72. CiteSeerX 10.1.1.154.7321. doi:10.1080/01621459.1990.10475307. JSTOR 2289526. 10. Park, B.U.; Turlach, B.A. (1992). "Practical performance of several data driven bandwidth selectors (with discussion)". Computational Statistics. 7: 251–270. 11. Cao, R.; Cuevas, A.; Manteiga, W. G. (1994).
"A comparative study of several smoothing methods in density estimation". Computational Statistics and Data Analysis. 17 (2): 153–176. doi:10.1016/0167-9473(92)00066-Z. 12. Jones, M.C.; Marron, J.S.; Sheather, S. J. (1996). "A brief survey of bandwidth selection for density estimation". Journal of the American Statistical Association. 91 (433): 401–407. doi:10.2307/2291420. JSTOR 2291420. 13. Sheather, S.J. (1992). "The performance of six popular bandwidth selection methods on some real data sets (with discussion)". Computational Statistics. 7: 225–250, 271–281. 14. Agarwal, N.; Aluru, N.R. (2010). "A data-driven stochastic collocation approach for uncertainty quantification in MEMS" (PDF). International Journal for Numerical Methods in Engineering. 83 (5): 575–597. Bibcode:2010IJNME..83..575A. doi:10.1002/nme.2844. S2CID 84834908. 15. Xu, X.; Yan, Z.; Xu, S. (2015). "Estimating wind speed probability distribution by diffusion-based kernel density method". Electric Power Systems Research. 121: 28–37. doi:10.1016/j.epsr.2014.11.029. 16. Botev, Z.I.; Grotowski, J.F.; Kroese, D.P. (2010). "Kernel density estimation via diffusion". Annals of Statistics. 38 (5): 2916–2957. arXiv:1011.2602. doi:10.1214/10-AOS799. S2CID 41350591. 17. Sheather, S.J.; Jones, M.C. (1991). "A reliable data-based bandwidth selection method for kernel density estimation". Journal of the Royal Statistical Society, Series B. 53 (3): 683–690. doi:10.1111/j.2517-6161.1991.tb01857.x. JSTOR 2345597. 18. Rudemo, M. (1982). "Empirical choice of histograms and kernel density estimators". Scandinavian Journal of Statistics. 9 (2): 65–78. JSTOR 4615859. 19. Bowman, A.W. (1984). "An alternative method of cross-validation for the smoothing of density estimates". Biometrika. 71 (2): 353–360. doi:10.1093/biomet/71.2.353. 20. Hall, P.; Marron, J.S.; Park, B.U. (1992). "Smoothed cross-validation". Probability Theory and Related Fields. 92: 1–20. doi:10.1007/BF01205233. S2CID 121181481. 21. Wahba, G. (1975). "Optimal convergence properties of variable knot, kernel, and orthogonal series methods for density estimation". Annals of Statistics. 3 (1): 15–29. doi:10.1214/aos/1176342997. 22. Buch-Larsen, TINE (2005). "Kernel density estimation for heavy-tailed distributions using the Champernowne transformation". Statistics. 39 (6): 503–518. CiteSeerX 10.1.1.457.1544. doi:10.1080/02331880500439782. S2CID 219697435. 23. Silverman, B.W. (1986). Density Estimation for Statistics and Data Analysis. London: Chapman & Hall/CRC. p. 45. ISBN 978-0-412-24620-3. 24. Chen, Yen-Chi; Genovese, Christopher R.; Wasserman, Larry (2016). "A comprehensive approach to mode clustering". Electronic Journal of Statistics. 10 (1): 210–241. doi:10.1214/15-ejs1102. ISSN 1935-7524. 25. Chazal, Frédéric; Fasy, Brittany Terese; Lecci, Fabrizio; Rinaldo, Alessandro; Wasserman, Larry (2014). "Stochastic Convergence of Persistence Landscapes and Silhouettes". Proceedings of the thirtieth annual symposium on Computational geometry. Vol. 6. New York, New York, USA: ACM Press. pp. 474–483. doi:10.1145/2582112.2582128. ISBN 978-1-4503-2594-3. S2CID 6029340. 26. Fukunaga, K.; Hostetler, L. (January 1975). "The estimation of the gradient of a density function, with applications in pattern recognition". IEEE Transactions on Information Theory. 21 (1): 32–40. doi:10.1109/tit.1975.1055330. ISSN 0018-9448. 27. Yizong Cheng (1995). "Mean shift, mode seeking, and clustering". IEEE Transactions on Pattern Analysis and Machine Intelligence. 17 (8): 790–799. doi:10.1109/34.400568. 
ISSN 0162-8828. 28. Comaniciu, D.; Meer, P. (May 2002). "Mean shift: a robust approach toward feature space analysis". IEEE Transactions on Pattern Analysis and Machine Intelligence. 24 (5): 603–619. doi:10.1109/34.1000236. ISSN 0162-8828. S2CID 691081. 29. Janert, Philipp K (2009). Gnuplot in action: understanding data with graphs. Connecticut, USA: Manning Publications. ISBN 978-1-933988-39-9. See section 13.2.2 entitled Kernel density estimates. 30. "Kernel smoothing function estimate for univariate and bivariate data - MATLAB ksdensity". www.mathworks.com. Retrieved 2020-11-05. 31. Horová, I.; Koláček, J.; Zelinka, J. (2012). Kernel Smoothing in MATLAB: Theory and Practice of Kernel Smoothing. Singapore: World Scientific Publishing. ISBN 978-981-4405-48-5. 32. "SmoothKernelDistribution—Wolfram Language Documentation". reference.wolfram.com. Retrieved 2020-11-05. 33. "KernelMixtureDistribution—Wolfram Language Documentation". reference.wolfram.com. Retrieved 2020-11-05. 34. "Software for calculating kernel densities". www.rsc.org. Retrieved 2020-11-05. 35. The Numerical Algorithms Group. "NAG Library Routine Document: nagf_smooth_kerndens_gauss (g10baf)" (PDF). NAG Library Manual, Mark 23. Retrieved 2012-02-16. 36. The Numerical Algorithms Group. "NAG Library Routine Document: nag_kernel_density_estim (g10bac)" (PDF). NAG Library Manual, Mark 9. Archived from the original (PDF) on 2011-11-24. Retrieved 2012-02-16. 37. Vanderplas, Jake (2013-12-01). "Kernel Density Estimation in Python". Retrieved 2014-03-12. 38. "seaborn.kdeplot — seaborn 0.10.1 documentation". seaborn.pydata.org. Retrieved 2020-05-12. 39. "Kde-gpu: We implemented Nadaraya–Watson kernel density and kernel conditional probability estimator using CUDA through CuPy. It is much faster than the CPU version but it requires a GPU with high memory". 40. "Basic Statistics - RDD-based API - Spark 3.0.1 Documentation". spark.apache.org. Retrieved 2020-11-05. 41. "kdensity — Univariate kernel density estimation" (PDF). Stata 15 manual. https://www.stata.com/manuals15/rkdensity.pdf 42. Jann, Ben (2008-05-26), "KDENS: Stata module for univariate kernel density estimation", Statistical Software Components, Boston College Department of Economics, retrieved 2022-10-15 External links • Introduction to kernel density estimation – A short tutorial which motivates kernel density estimators as an improvement over histograms. • Kernel Bandwidth Optimization – A free online tool that generates an optimized kernel density estimate. • Free Online Software (Calculator) – computes the Kernel Density Estimation for a data series according to the following Kernels: Gaussian, Epanechnikov, Rectangular, Triangular, Biweight, Cosine, and Optcosine. • Kernel Density Estimation Applet – An online interactive example of kernel density estimation. Requires .NET 3.0 or later.
Wikipedia
Silverman–Toeplitz theorem In mathematics, the Silverman–Toeplitz theorem, first proved by Otto Toeplitz, is a result in summability theory characterizing matrix summability methods that are regular. A regular matrix summability method is a matrix transformation of a convergent sequence which preserves the limit.[1] An infinite matrix $(a_{i,j})_{i,j\in \mathbb {N} }$ with complex-valued entries defines a regular summability method if and only if it satisfies all of the following properties: ${\begin{aligned}&\lim _{i\to \infty }a_{i,j}=0\quad {\text{for each }}j\in \mathbb {N} &&{\text{(Every column sequence converges to 0.)}}\\[3pt]&\lim _{i\to \infty }\sum _{j=0}^{\infty }a_{i,j}=1&&{\text{(The row sums converge to 1.)}}\\[3pt]&\sup _{i}\sum _{j=0}^{\infty }\vert a_{i,j}\vert <\infty &&{\text{(The absolute row sums are bounded.)}}\end{aligned}}$ An example is Cesàro summation, a matrix summability method with $a_{mn}={\begin{cases}{\frac {1}{m}}&n\leq m\\0&n>m\end{cases}}={\begin{pmatrix}1&0&0&0&0&\cdots \\{\frac {1}{2}}&{\frac {1}{2}}&0&0&0&\cdots \\{\frac {1}{3}}&{\frac {1}{3}}&{\frac {1}{3}}&0&0&\cdots \\{\frac {1}{4}}&{\frac {1}{4}}&{\frac {1}{4}}&{\frac {1}{4}}&0&\cdots \\{\frac {1}{5}}&{\frac {1}{5}}&{\frac {1}{5}}&{\frac {1}{5}}&{\frac {1}{5}}&\cdots \\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \\\end{pmatrix}}.$
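To make the conditions concrete, here is a minimal numerical sketch in Python (the convergent test sequence and the truncation size are arbitrary choices) that checks the three properties on a finite truncation of the Cesàro matrix and confirms that the transform preserves the limit:

```python
import numpy as np

def cesaro_matrix(m):
    """m-by-m truncation of the Cesàro matrix: a[i, j] = 1/(i+1) for j <= i."""
    return np.tril(np.ones((m, m))) / np.arange(1, m + 1)[:, None]

a = cesaro_matrix(500)

# A convergent sequence x_n -> 1; its Cesàro transform y = A x also tends to 1.
x = 1.0 + 1.0 / np.arange(1, 501)
y = a @ x
print(y[-1])                         # approximately 1

# The three Silverman-Toeplitz conditions, on the truncation:
print(a[-1, 0])                      # column entries tend to 0
print(a.sum(axis=1).max())           # every row sum equals 1
print(np.abs(a).sum(axis=1).max())   # absolute row sums are bounded (by 1)
```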
References Citations 1. Ruder, Brian (1966). Silverman–Toeplitz theorem. Kansas State University. Call number LD2668 .R4 1966 R915; available via the Internet Archive. Further reading • Toeplitz, Otto (1911) "Über allgemeine lineare Mittelbildungen." Prace mat.-fiz., 22, 113–118 (the original paper in German) • Silverman, Louis Lazarus (1913) "On the definition of the sum of a divergent series." University of Missouri Studies, Math. Series I, 1–96 • Hardy, G. H. (1949), Divergent Series, Oxford: Clarendon Press, pp. 43–48. • Boos, Johann (2000). Classical and modern methods in summability. New York: Oxford University Press. ISBN 019850165X.
Wikipedia
Silvia Heubach Silvia Heubach is a German-American mathematician specializing in enumerative combinatorics, combinatorial game theory, and bioinformatics. She is a professor of mathematics at California State University, Los Angeles. Education and career Heubach earned bachelor's and master's degrees in mathematics and economics from the University of Ulm in 1983 and 1986, respectively. Through a program at the University of Ulm, she came to the University of Southern California (USC) for a one-year exchange, but decided to stay on for a Ph.D. program.[1] She completed a master's degree in mathematics in 1988 and a Ph.D. in applied mathematics at USC in 1992. Her dissertation, A Stochastic Model for the Movement of a White Blood Cell, was supervised by Joseph C. Watkins.[1][2] After completing her doctorate, Heubach held visiting faculty positions at Colorado College and Humboldt State University before joining the faculty at California State University, Los Angeles in 1994.[1] Contributions Heubach is the co-author of the book Combinatorics of Compositions and Words (with Toufik Mansour, CRC Press, 2009).[3] She is a contributor to a text in bioinformatics, Concepts in Bioinformatics and Genomics by Jamil Momand and Alison McCurdy, Oxford University Press, 2016. Her research in combinatorial game theory has also included analysis of a variant of nim in which piles of pebbles are placed on the edges of a tetrahedron and each move removes at least one pebble from the set of edges incident to a single triangle of the tetrahedron.[4] Recognition In 2018, Heubach won the California State University system's Faculty Innovation and Leadership Award, becoming the first professor at the Los Angeles campus of the system to be so honored. The award honored her educational initiatives, including developing a new sequence of mathematics courses for life sciences students, developing flipped classroom mathematics courses, and improving statistics courses used to fulfill general education requirements.[5] References 1. "Silvia Heubach", Faculty profiles, California State University, Los Angeles, 2014-09-24, retrieved 2019-09-04 2. Silvia Heubach at the Mathematics Genealogy Project 3. Reviews of Combinatorics of Compositions and Words: • Vlamos, Panayiotis, zbMATH, Zbl 1184.68373 • Rampersad, Narad (2010), Mathematical Reviews, MR 2531482 • Boztaş, Serdar (June 2011), Review (PDF), International Association for Cryptologic Research 4. Ponder this, IBM, July 2014. For the credit to Heubach, see solutions. 5. "Pasadena Resident Silvia Heubach Receives California State University's Faculty Innovation and Leadership Award", Pasadena Now, August 31, 2018 External links • Silvia Heubach publications indexed by Google Scholar Authority control: Academics • Google Scholar • MathSciNet • Mathematics Genealogy Project
Wikipedia
SimRank SimRank is a general similarity measure, based on a simple and intuitive graph-theoretic model. SimRank is applicable in any domain with object-to-object relationships; it measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, SimRank is a measure that says "two objects are considered to be similar if they are referenced by similar objects." Although SimRank is widely adopted, it may output unreasonable similarity scores which are influenced by different factors; this can be addressed in several ways, such as introducing an evidence weight factor,[1] inserting additional terms that are neglected by SimRank,[2] or using PageRank-based alternatives.[3] Introduction Many applications require a measure of "similarity" between objects. One obvious example is the "find-similar-document" query, on traditional text corpora or the World-Wide Web. More generally, a similarity measure can be used to cluster objects, such as for collaborative filtering in a recommender system, in which "similar" users and items are grouped based on the users' preferences. Various aspects of objects can be used to determine similarity, usually depending on the domain and the appropriate definition of similarity for that domain. In a document corpus, matching text may be used, and for collaborative filtering, similar users may be identified by common preferences. SimRank is a general approach that exploits the object-to-object relationships found in many domains of interest. On the Web, for example, two pages are related if there are hyperlinks between them. A similar approach can be applied to scientific papers and their citations, or to any other document corpus with cross-reference information. In the case of recommender systems, a user's preference for an item constitutes a relationship between the user and the item. Such domains are naturally modeled as graphs, with nodes representing objects and edges representing relationships. The intuition behind the SimRank algorithm is that, in many domains, similar objects are referenced by similar objects. More precisely, objects $a$ and $b$ are considered to be similar if they are pointed to by objects $c$ and $d$, respectively, and $c$ and $d$ are themselves similar. The base case is that objects are maximally similar to themselves.[4] It is important to note that SimRank is a general algorithm that determines only the similarity of structural context. SimRank applies to any domain where there are enough relevant relationships between objects to base at least some notion of similarity on relationships. Obviously, similarity of other domain-specific aspects is important as well; these can, and should, be combined with relational structural-context similarity for an overall similarity measure. For example, for Web pages SimRank can be combined with traditional textual similarity; the same idea applies to scientific papers or other document corpora. For recommendation systems, there may be built-in known similarities between items (e.g., both computers, both clothing, etc.), as well as similarities between users (e.g., same gender, same spending level). Again, these similarities can be combined with the similarity scores that are computed based on preference patterns, in order to produce an overall similarity measure. Basic SimRank equation For a node $v$ in a directed graph, we denote by $I(v)$ and $O(v)$ the set of in-neighbors and out-neighbors of $v$, respectively.
Individual in-neighbors are denoted as $I_{i}(v)$, for $1\leq i\leq \left|I(v)\right|$, and individual out-neighbors are denoted as $O_{i}(v)$, for $1\leq i\leq \left|O(v)\right|$. Let us denote the similarity between objects $a$ and $b$ by $s(a,b)\in [0,1]$. Following the earlier motivation, a recursive equation is written for $s(a,b)$. If $a=b$ then $s(a,b)$ is defined to be $1$. Otherwise, $s(a,b)={\frac {C}{\left|I(a)\right|\left|I(b)\right|}}\sum _{i=1}^{\left|I(a)\right|}\sum _{j=1}^{\left|I(b)\right|}s(I_{i}(a),I_{j}(b))$ where $C$ is a constant between $0$ and $1$. A slight technicality here is that either $a$ or $b$ may not have any in-neighbors. Since there is no way to infer any similarity between $a$ and $b$ in this case, similarity is set to $s(a,b)=0$, so the summation in the above equation is defined to be $0$ when $I(a)=\emptyset $ or $I(b)=\emptyset $. Matrix representation of SimRank Given an arbitrary constant $C$ between $0$ and $1$, let $\mathbf {S} $ be the similarity matrix whose entry $[\mathbf {S} ]_{a,b}$ denotes the similarity score $s(a,b)$, and $\mathbf {A} $ be the column normalized adjacency matrix whose entry $[\mathbf {A} ]_{a,b}={\tfrac {1}{|{\mathcal {I}}(b)|}}$ if there is an edge from $a$ to $b$, and 0 otherwise. Then, in matrix notations, SimRank can be formulated as ${\mathbf {S} }=\max\{C\cdot (\mathbf {A} ^{T}\cdot {\mathbf {S} }\cdot {\mathbf {A} }),{\mathbf {I} }\},$ where $\mathbf {I} $ is the identity matrix and the maximum is taken entrywise. Computing SimRank A solution to the SimRank equations for a graph $G$ can be reached by iteration to a fixed-point. Let $n$ be the number of nodes in $G$. For each iteration $k$, we can keep $n^{2}$ entries $s_{k}(*,*)$, where $s_{k}(a,b)$ gives the score between $a$ and $b$ on iteration $k$. We successively compute $s_{k+1}(*,*)$ based on $s_{k}(*,*)$. We start with $s_{0}(*,*)$ where each $s_{0}(a,b)$ is a lower bound on the actual SimRank score $s(a,b)$: $s_{0}(a,b)={\begin{cases}1,&{\text{if }}a=b,\\0,&{\text{if }}a\neq b.\end{cases}}$ To compute $s_{k+1}(a,b)$ from $s_{k}(*,*)$, we use the basic SimRank equation to get: $s_{k+1}(a,b)={\frac {C}{\left|I(a)\right|\left|I(b)\right|}}\sum _{i=1}^{\left|I(a)\right|}\sum _{j=1}^{\left|I(b)\right|}s_{k}(I_{i}(a),I_{j}(b))$ for $a\neq b$, and $s_{k+1}(a,b)=1$ for $a=b$. That is, on each iteration $k+1$, we update the similarity of $(a,b)$ using the similarity scores of the neighbours of $(a,b)$ from the previous iteration $k$ according to the basic SimRank equation. The values $s_{k}(*,*)$ are nondecreasing as $k$ increases. It was shown in [4] that the values converge to limits satisfying the basic SimRank equation, the SimRank scores $s(*,*)$, i.e., for all $a,b\in V$, $\lim _{k\to \infty }s_{k}(a,b)=s(a,b)$. The original SimRank proposal suggested choosing the decay factor $C=0.8$ and a fixed number $K=5$ of iterations to perform. However, more recent research[5] showed that these values for $C$ and $K$ generally imply relatively low accuracy of iteratively computed SimRank scores. For guaranteeing more accurate computation results, the latter paper suggests either using a smaller decay factor (in particular, $C=0.6$) or taking more iterations.
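The fixed-point iteration is straightforward to implement from the matrix formulation above. Here is a minimal sketch in Python with NumPy (the example graph and the parameter values are illustrative):

```python
import numpy as np

def simrank(adj, C=0.8, iterations=5):
    """Iterative SimRank. adj[i, j] = 1 if there is an edge from i to j."""
    n = adj.shape[0]
    # Column-normalize the adjacency matrix; columns of nodes with no
    # in-neighbors stay zero, so their off-diagonal similarities stay 0.
    in_deg = adj.sum(axis=0)
    A = np.divide(adj, in_deg, out=np.zeros_like(adj, dtype=float),
                  where=in_deg > 0)
    S = np.eye(n)  # s_0(a, b) = 1 if a = b, else 0
    for _ in range(iterations):
        # The entrywise maximum keeps the diagonal pinned at 1.
        S = np.maximum(C * (A.T @ S @ A), np.eye(n))
    return S

# Two nodes pointed to by a common in-neighbor: s(1, 2) = C * s(0, 0) = 0.8.
adj = np.array([[0.0, 1.0, 1.0],
                [0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0]])
print(simrank(adj)[1, 2])  # 0.8
```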
CoSimRank CoSimRank is a variant of SimRank with the advantage of also having a local formulation, i.e., CoSimRank can be computed for a single node pair.[6] Let $\mathbf {S} $ be the similarity matrix whose entry $[\mathbf {S} ]_{a,b}$ denotes the similarity score $s(a,b)$, and $\mathbf {A} $ be the column normalized adjacency matrix. Then, in matrix notations, CoSimRank can be formulated as: ${\mathbf {S} }=C\cdot (\mathbf {A} ^{T}\cdot {\mathbf {S} }\cdot {\mathbf {A} })+{\mathbf {I} },$ where $\mathbf {I} $ is an identity matrix. To compute the similarity score of only a single node pair, let $p^{(0)}(i)=e_{i}$, with $e_{i}$ being a vector of the standard basis, i.e., the $i$-th entry is 1 and all other entries are 0. Then, CoSimRank can be computed in two steps: 1. $p^{(k)}=Ap^{(k-1)}$ 2. $s(i,j)=\sum _{k=0}^{\infty }C^{k}\langle p^{(k)}(i),p^{(k)}(j)\rangle $ Step one can be seen as a simplified version of Personalized PageRank. Step two sums up the vector similarity of each iteration. Both the matrix and the local representation compute the same similarity score. CoSimRank can also be used to compute the similarity of sets of nodes, by modifying $p^{(0)}(i)$. Further research on SimRank • Fogaras and Racz[7] suggested speeding up SimRank computation through probabilistic computation using the Monte Carlo method. • Antonellis et al.[8] extended SimRank equations to take into consideration (i) an evidence factor for incident nodes and (ii) link weights. • Yu et al.[9] further improved SimRank computation via a fine-grained memoization method to share small common parts among different partial sums. • Chen and Giles discussed the limitations and proper use cases of SimRank.[3] Partial Sums Memoization Lizorkin et al.[5] proposed three optimization techniques for speeding up the computation of SimRank: 1. Essential nodes selection may eliminate the computation of a fraction of node pairs with a priori zero scores. 2. Partial sums memoization can effectively reduce repeated calculations of the similarity among different node pairs by caching part of the similarity summations for later reuse. 3. A threshold setting on the similarity enables a further reduction in the number of node pairs to be computed. In particular, the second technique, partial sums memoization, plays a paramount role in greatly speeding up the computation of SimRank from ${\mathcal {O}}(Kd^{2}n^{2})$ to ${\mathcal {O}}(Kdn^{2})$, where $K$ is the number of iterations, $d$ is the average degree of the graph, and $n$ is the number of nodes in the graph. The central idea of partial sums memoization consists of two steps: First, the partial sums over $I(a)$ are memoized as ${\text{Partial}}_{I(a)}^{s_{k}}(j)=\sum _{i\in I(a)}s_{k}(i,j),\qquad (\forall j\in I(b))$ and then $s_{k+1}(a,b)$ is iteratively computed from ${\text{Partial}}_{I(a)}^{s_{k}}(j)$ as $s_{k+1}(a,b)={\frac {C}{|I(a)||I(b)|}}\sum _{j\in I(b)}{\text{Partial}}_{I(a)}^{s_{k}}(j).$ Consequently, the results of ${\text{Partial}}_{I(a)}^{s_{k}}(j)$, $\forall j\in I(b)$, can be reused later when we compute the similarities $s_{k+1}(a,*)$ for a given vertex $a$ as the first argument.
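A sketch of how one iteration can be restructured around memoized partial sums is given below (Python; the adjacency-list representation and the function name are illustrative assumptions, not code from the original paper):

```python
import numpy as np

def simrank_step_memoized(S, in_neighbors, C=0.8):
    """One SimRank iteration with partial-sums memoization.

    S is the current similarity matrix s_k; in_neighbors[v] lists I(v).
    """
    n = S.shape[0]
    S_next = np.eye(n)
    for a in range(n):
        Ia = in_neighbors[a]
        if not Ia:
            continue
        # Memoize Partial_{I(a)}(j) = sum over i in I(a) of s_k(i, j), for
        # all j at once; it is reused for every pair (a, b) sharing a.
        partial = S[Ia, :].sum(axis=0)
        for b in range(n):
            Ib = in_neighbors[b]
            if b == a or not Ib:
                continue
            S_next[a, b] = C * partial[Ib].sum() / (len(Ia) * len(Ib))
    return S_next
```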
See also • PageRank Citations 1. I. Antonellis, H. Garcia-Molina and C.-C. Chang. Simrank++: Query Rewriting through Link Analysis of the Click Graph. In VLDB '08: Proceedings of the 34th International Conference on Very Large Data Bases, pages 408–421. 2. W. Yu, X. Lin, W. Zhang, L. Chang, and J. Pei. More is Simpler: Effectively and Efficiently Assessing Node-Pair Similarities Based on Hyperlinks. In VLDB '13: Proceedings of the 39th International Conference on Very Large Data Bases, pages 13–24. 3. H. Chen and C. L. Giles. "ASCOS++: An Asymmetric Similarity Measure for Weighted Networks to Address the Problem of SimRank." ACM Transactions on Knowledge Discovery from Data (TKDD) 10.2 2015. 4. G. Jeh and J. Widom. SimRank: A Measure of Structural-Context Similarity. In KDD'02: Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 538–543. ACM Press, 2002. Archived copy (PDF), archived from the original on 2008-05-12; retrieved 2008-10-02. 5. D. Lizorkin, P. Velikhov, M. Grinev and D. Turdakov. Accuracy Estimate and Optimization Techniques for SimRank Computation. In VLDB '08: Proceedings of the 34th International Conference on Very Large Data Bases, pages 422–433. Archived copy (PDF), archived from the original on 2009-04-07; retrieved 2008-10-25. 6. S. Rothe and H. Schütze. CoSimRank: A Flexible & Efficient Graph-Theoretic Similarity Measure. In ACL '14: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1392–1402. 7. D. Fogaras and B. Racz. Scaling link-based similarity search. In WWW '05: Proceedings of the 14th international conference on World Wide Web, pages 641–650, New York, NY, USA, 2005. ACM. 8. Antonellis, Ioannis, Hector Garcia Molina, and Chi Chao Chang. "Simrank++: query rewriting through link analysis of the click graph." Proceedings of the VLDB Endowment 1.1 (2008): 408–421. arXiv:0712.0499 9. W. Yu, X. Lin, W. Zhang. Towards Efficient SimRank Computation on Large Networks. In ICDE '13: Proceedings of the 29th IEEE International Conference on Data Engineering, pages 601–612. Archived copy (PDF), archived from the original on 2014-05-12; retrieved 2014-05-09. Sources • Cai, Y.; Cong, G.; Jia, X.; Liu, H.; He, J.; Lu, J.; Du, X. (2009-12-01). "Efficient Algorithm for Computing Link-Based Similarity in Real World Networks". 2009 Ninth IEEE International Conference on Data Mining. pp. 734–739. doi:10.1109/ICDM.2009.136. ISBN 978-1-4244-5242-2. S2CID 9799597.
Wikipedia
Sima Marković Sima Marković (8 November 1888 in Kragujevac, Kingdom of Serbia – 19 April 1939 in Moscow, USSR) was a Serbian mathematician, communist and socialist politician, and philosopher, known as one of the founders and first leaders of the Communist Party of Yugoslavia. Marković was a doctor of mathematical sciences and a university professor. He wrote many works in mathematics, philosophy, physics, and politics.[1] He was an early activist and member of the Serbian Social Democratic Party in the Kingdom of Serbia and, after the unification of the Yugoslav communists in 1919, a member of the Communist Party of Yugoslavia.[2] He advocated the preservation and peaceful reform of Yugoslavia into a republic, as opposed to the then position of the Comintern. He was killed in the Stalinist purges in 1939 along with many other leading Yugoslav communists. He was rehabilitated on 10 June 1958 by a decision of the Supreme Court of the Soviet Union.[3] References 1. "Razkrita Titova moskovska skrivnost" [Tito's Moscow secret revealed]. Mladina.si (in Slovenian). 16 July 2008. Retrieved 15 March 2021. 2. "Drakonski sud partije" [The party's draconian court]. www.novosti.rs. 3. Drachkovitch, Milorad M. (April 20, 1986). Biographical Dictionary of the Comintern. Hoover Press. ISBN 9780817984038 – via Google Books. Authority control International • ISNI • VIAF National • Germany • United States • Czech Republic • Netherlands Academics • Mathematics Genealogy Project Other • IdRef
Wikipedia
Siméon Denis Poisson Baron Siméon Denis Poisson FRS FRSE (French: [si.me.ɔ̃ də.ni pwa.sɔ̃]; 21 June 1781 – 25 April 1840) was a French mathematician and physicist who worked on statistics, complex analysis, partial differential equations, the calculus of variations, analytical mechanics, electricity and magnetism, thermodynamics, elasticity, and fluid mechanics. Moreover, he predicted the Poisson spot in his attempt to disprove the wave theory of Augustin-Jean Fresnel, which was later confirmed. Siméon Poisson • Born: 21 June 1781, Pithiviers, Kingdom of France (present-day Loiret) • Died: 25 April 1840 (aged 58), Sceaux, Hauts-de-Seine, Kingdom of France • Alma mater: École Polytechnique • Known for: Poisson process, Poisson equation, Poisson kernel, Poisson distribution, Poisson bracket, Poisson algebra, Poisson regression, Poisson summation formula, Poisson's spot, Poisson's ratio, Poisson zeros, Conway–Maxwell–Poisson distribution, Euler–Poisson–Darboux equation • Fields: Mathematics and physics • Institutions: École Polytechnique, Bureau des Longitudes, Faculté des sciences de Paris, École de Saint-Cyr • Academic advisors: Joseph-Louis Lagrange, Pierre-Simon Laplace • Doctoral students: Michel Chasles, Joseph Liouville • Other notable students: Nicolas Léonard Sadi Carnot, Peter Gustav Lejeune Dirichlet Biography Poisson was born in Pithiviers, Loiret district in France, the son of Siméon Poisson, an officer in the French army. In 1798, he entered the École Polytechnique in Paris as first in his year, and immediately began to attract the notice of the professors of the school, who left him free to make his own decisions as to what he would study. In his final year of study, less than two years after his entry, he published two memoirs, one on Étienne Bézout's method of elimination, the other on the number of integrals of a finite difference equation, and this was so impressive that he was allowed to graduate in 1800 without taking the final examination.[1][2] The latter of the memoirs was examined by Sylvestre-François Lacroix and Adrien-Marie Legendre, who recommended that it should be published in the Recueil des savants étrangers, an unprecedented honor for a youth of eighteen. This success at once procured entry for Poisson into scientific circles. Joseph Louis Lagrange, whose lectures on the theory of functions he attended at the École Polytechnique, recognized his talent early on, and became his friend. Meanwhile, Pierre-Simon Laplace, in whose footsteps Poisson followed, regarded him almost as his son. The rest of his career, until his death in Sceaux near Paris, was occupied by the composition and publication of his many works and in fulfilling the duties of the numerous educational positions to which he was successively appointed.[3] Immediately after finishing his studies at the École Polytechnique, he was appointed répétiteur (teaching assistant) there, a position which he had occupied as an amateur while still a pupil in the school; for his schoolmates had made a custom of visiting him in his room after an unusually difficult lecture to hear him repeat and explain it. He was made deputy professor (professeur suppléant) in 1802 and, in 1806, full professor, succeeding Jean Baptiste Joseph Fourier, whom Napoleon had sent to Grenoble. In 1808 he became astronomer to the Bureau des Longitudes; and when the Faculté des sciences de Paris was instituted in 1809 he was appointed a professor of rational mechanics (professeur de mécanique rationelle).
He went on to become a member of the Institute in 1812, examiner at the military school (École Militaire) at Saint-Cyr in 1815, graduation examiner at the École Polytechnique in 1816, councillor of the university in 1820, and geometer to the Bureau des Longitudes, succeeding Pierre-Simon Laplace, in 1827.[3] In 1817, he married Nancy de Bardi; they had four children. His father, whose early experiences had led him to hate aristocrats, bred him in the stern creed of the First Republic. Throughout the Revolution, the Empire, and the following restoration, Poisson was not interested in politics, concentrating instead on mathematics. He was appointed to the dignity of baron in 1825,[3] but he neither took out the diploma nor used the title. In March 1818, he was elected a Fellow of the Royal Society,[4] in 1822 a Foreign Honorary Member of the American Academy of Arts and Sciences,[5] and in 1823 a foreign member of the Royal Swedish Academy of Sciences. The revolution of July 1830 threatened him with the loss of all his honours; but this disgrace to the government of Louis-Philippe was adroitly averted by François Jean Dominique Arago, who, while his "revocation" was being plotted by the council of ministers, procured him an invitation to dine at the Palais-Royal, where he was openly and effusively received by the citizen king, who "remembered" him. After this, of course, his degradation was impossible, and seven years later he was made a peer of France, not for political reasons, but as a representative of French science.[3] As a teacher of mathematics Poisson is said to have been extraordinarily successful, as might have been expected from his early promise as a répétiteur at the École Polytechnique. As a scientific worker, his productivity has rarely, if ever, been equaled. Notwithstanding his many official duties, he found time to publish more than three hundred works, several of them extensive treatises, and many of them memoirs dealing with the most abstruse branches of pure mathematics,[3] applied mathematics, mathematical physics, and rational mechanics. (Arago attributed to him the quote, "Life is good for only two things: doing mathematics and teaching it."[6]) A list of Poisson's works, drawn up by himself, is given at the end of Arago's biography. All that is possible is a brief mention of the more important ones. It was in the application of mathematics to physics that his greatest services to science were performed. Perhaps the most original, and certainly the most permanent in their influence, were his memoirs on the theory of electricity and magnetism, which virtually created a new branch of mathematical physics.[3] Next (or in the opinion of some, first) in importance stand the memoirs on celestial mechanics, in which he proved himself a worthy successor to Pierre-Simon Laplace. The most important of these are his memoirs Sur les inégalités séculaires des moyens mouvements des planètes and Sur la variation des constantes arbitraires dans les questions de mécanique, both published in the Journal of the École Polytechnique (1809); Sur la libration de la lune, in Connaissance des temps (1821), etc.; and Sur le mouvement de la terre autour de son centre de gravité, in Mémoires de l'Académie (1827), etc. In the first of these memoirs, Poisson discusses the famous question of the stability of the planetary orbits, which had already been settled by Lagrange to the first degree of approximation for the disturbing forces.
Poisson showed that the result could be extended to a second approximation, and thus made an important advance in planetary theory. The memoir is remarkable inasmuch as it roused Lagrange, after an interval of inactivity, to compose in his old age one of the greatest of his memoirs, entitled Sur la théorie des variations des éléments des planètes, et en particulier des variations des grands axes de leurs orbites. So highly did he think of Poisson's memoir that he made a copy of it with his own hand, which was found among his papers after his death. Poisson made important contributions to the theory of attraction.[3] As a tribute to Poisson's scientific work, which stretched to more than 300 publications, he was awarded a French peerage in 1837. His is one of the 72 names inscribed on the Eiffel Tower. Contributions Poisson's equation Main article: Poisson's equation Poisson's well-known generalization of Laplace's second order partial differential equation for the potential $\phi $, $\nabla ^{2}\phi =-4\pi \rho \;$, known after him as Poisson's equation, was first published in the Bulletin de la société philomatique (1813).[3] If $\rho =0$, we retrieve Laplace's equation $\nabla ^{2}\phi =0.\;$ If $\rho (x,y,z)$ is a continuous function and if for $r\rightarrow \infty $ (or if a point 'moves' to infinity) a function $\phi $ goes to 0 fast enough, a solution of Poisson's equation is the Newtonian potential of a function $\rho (x,y,z)$ $\phi =-{1 \over 4\pi }\iiint {\frac {\rho (x,y,z)}{r}}\,dV\;$ where $r$ is the distance between a volume element $dV$ and a point $P$. The integration runs over the whole space. Poisson's two most important memoirs on the subject are Sur l'attraction des sphéroides (Connaiss. d. temps, 1829), and Sur l'attraction d'un ellipsoide homogène (Mém. d. l'acad., 1835).[3] Poisson discovered that Laplace's equation is valid only outside of a solid. A rigorous proof for masses with variable density was first given by Carl Friedrich Gauss in 1839. Poisson's equation is applicable not just in gravitation, but also in electricity and magnetism.[7]
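In practice, Poisson's equation is usually solved numerically. The following is a minimal finite-difference sketch in Python; the one-dimensional domain, the source term, and the plain Jacobi iteration are arbitrary choices for demonstration, not Poisson's own method:

```python
import numpy as np

# Solve phi'' = -4*pi*rho on [0, 1] with phi(0) = phi(1) = 0.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
rho = np.exp(-100.0 * (x - 0.5) ** 2)  # a smooth, localized source
phi = np.zeros(n)
for _ in range(20000):
    # Jacobi update of the interior points for the discretized equation
    # (phi[i-1] - 2*phi[i] + phi[i+1]) / h**2 = -4*pi*rho[i].
    phi[1:-1] = 0.5 * (phi[:-2] + phi[2:] + 4.0 * np.pi * rho[1:-1] * h * h)
print(phi.max())  # peak of the potential, at the center of the source
```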
Electricity and magnetism As the eighteenth century came to a close, human understanding of electrostatics approached maturity. Benjamin Franklin had already established the notion of electric charge and the conservation of charge; Charles-Augustin de Coulomb had enunciated his inverse-square law of electrostatics. In 1777, Joseph-Louis Lagrange introduced the concept of a potential function that can be used to compute the gravitational force of an extended body. In 1812, Poisson adopted this idea and obtained the appropriate expression for electricity, which relates the potential function $V$ to the electric charge density $\rho $.[8] Poisson's work on potential theory inspired George Green's 1828 paper, An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism. In 1820, Hans Christian Ørsted demonstrated that it was possible to deflect a magnetic needle by closing or opening an electric circuit nearby, resulting in a deluge of published papers attempting to explain the phenomenon. Ampère's law and the Biot-Savart law were quickly deduced. The science of electromagnetism was born. Poisson was also investigating the phenomenon of magnetism at this time, though he insisted on treating electricity and magnetism as separate phenomena. He published two memoirs on magnetism in 1826.[9] By the 1830s, a major research question in the study of electricity was whether electricity was a fluid or fluids distinct from matter, or something that simply acts on matter like gravity. Coulomb, Ampère, and Poisson thought that electricity was a fluid distinct from matter. In his experimental research, starting with electrolysis, Michael Faraday sought to show this was not the case. Electricity, Faraday believed, was a part of matter.[10] Optics Poisson was a member of the academic "old guard" at the Académie royale des sciences de l'Institut de France, who were staunch believers in the particle theory of light and were skeptical of its alternative, the wave theory. In 1818, the Académie set the topic of their prize as diffraction. One of the participants, the civil engineer and opticist Augustin-Jean Fresnel, submitted a thesis explaining diffraction derived from analysis of both the Huygens–Fresnel principle and Young's double slit experiment.[11] Poisson studied Fresnel's theory in detail and looked for a way to prove it wrong. Poisson thought that he had found a flaw when he demonstrated that Fresnel's theory predicts an on-axis bright spot in the shadow of a circular obstacle blocking a point source of light, where the particle theory of light predicts complete darkness. Poisson argued this was absurd and Fresnel's model was wrong. (Such a spot is not easily observed in everyday situations, because most everyday sources of light are not good point sources.) The head of the committee, Dominique-François-Jean Arago, performed the experiment. He molded a 2 mm metallic disk to a glass plate with wax.[12] To everyone's surprise he observed the predicted bright spot, which vindicated the wave model. Fresnel won the competition. After that, the corpuscular theory of light was dead, but was revived in the twentieth century in a different form, wave-particle duality. Arago later noted that the diffraction bright spot (which later became known as both the Arago spot and the Poisson spot) had already been observed by Joseph-Nicolas Delisle[12] and Giacomo F. Maraldi[13] a century earlier. Pure mathematics and statistics In pure mathematics, Poisson's most important works were his series of memoirs on definite integrals and his discussion of Fourier series, the latter paving the way for the classic researches of Peter Gustav Lejeune Dirichlet and Bernhard Riemann on the same subject; these are to be found in the Journal of the École Polytechnique from 1813 to 1823, and in the Memoirs de l'Académie for 1823. He also studied Fourier integrals.[3] Poisson wrote an essay on the calculus of variations (Mem. de l'acad., 1833), and memoirs on the probability of the mean results of observations (Connaiss. d. temps, 1827, &c).
The Poisson distribution in probability theory is named after him.[3] In 1820 Poisson studied integrations along paths in the complex plane, becoming the first person to do so.[14] In 1829, Poisson published a paper on elastic bodies that contained a statement and proof of a special case of what became known as the divergence theorem.[15] Mechanics Analytical mechanics and the calculus of variations Founded mainly by Leonhard Euler and Joseph-Louis Lagrange in the eighteenth century, the calculus of variations saw further development and applications in the nineteenth.[16] Let $S=\int \limits _{a}^{b}f(x,y(x),y'(x))\,dx,$ where $y'={\frac {dy}{dx}}$. Then $S$ is extremized if it satisfies the Euler–Lagrange equations ${\frac {\partial f}{\partial y}}-{\frac {d}{dx}}\left({\frac {\partial f}{\partial y'}}\right)=0.$ But if $S$ depends on higher-order derivatives of $y(x)$, that is, if $S=\int \limits _{a}^{b}f\left(x,y(x),y'(x),...,y^{(n)}(x)\right)\,dx,$ then $y$ must satisfy the Euler–Poisson equation, ${\frac {\partial f}{\partial y}}-{\frac {d}{dx}}\left({\frac {\partial f}{\partial y'}}\right)+...+(-1)^{n}{\frac {d^{n}}{dx^{n}}}\left[{\frac {\partial f}{\partial y^{(n)}}}\right]=0.$[17] Poisson's Traité de mécanique (2 vols. 8vo, 1811 and 1833) was written in the style of Laplace and Lagrange and was long a standard work.[3] Let $q$ be the position, $T$ the kinetic energy, and $V$ the potential energy, with $T$ and $V$ independent of time $t$. Lagrange's equation of motion reads[16] ${\frac {d}{dt}}\left({\frac {\partial T}{\partial {\dot {q}}_{i}}}\right)-{\frac {\partial T}{\partial q_{i}}}+{\frac {\partial V}{\partial q_{i}}}=0,~~~~i=1,2,...,n.$ Here, the dot notation for the time derivative is used, ${\frac {dq}{dt}}={\dot {q}}$.
Poisson set $L=T-V$.[16] He argued that if $V$ is independent of ${\dot {q}}_{i}$, he could write ${\frac {\partial L}{\partial {\dot {q}}_{i}}}={\frac {\partial T}{\partial {\dot {q}}_{i}}},$ giving[16] ${\frac {d}{dt}}\left({\frac {\partial L}{\partial {\dot {q}}_{i}}}\right)-{\frac {\partial L}{\partial q_{i}}}=0.$ He introduced an explicit formula for momenta,[16] $p_{i}={\frac {\partial L}{\partial {\dot {q}}_{i}}}={\frac {\partial T}{\partial {\dot {q}}_{i}}}.$ Thus, from the equation of motion, he got[16] ${\dot {p}}_{i}={\frac {\partial L}{\partial q_{i}}}.$ Poisson's text influenced the work of William Rowan Hamilton and Carl Gustav Jacob Jacobi. A translation of Poisson's Treatise on Mechanics was published in London in 1842. Let $u$ and $v$ be functions of the canonical variables of motion $q$ and $p$. Then their Poisson bracket is given by $[u,v]={\frac {\partial u}{\partial q_{i}}}{\frac {\partial v}{\partial p_{i}}}-{\frac {\partial u}{\partial p_{i}}}{\frac {\partial v}{\partial q_{i}}}.$[18] Evidently, the operation anti-commutes. More precisely, $[u,v]=-[v,u]$.[18] By Hamilton's equations of motion, the total time derivative of $u=u(q,p,t)$ is ${\begin{aligned}{\frac {du}{dt}}&={\frac {\partial u}{\partial q_{i}}}{\dot {q}}_{i}+{\frac {\partial u}{\partial p_{i}}}{\dot {p}}_{i}+{\frac {\partial u}{\partial t}}\\[6pt]&={\frac {\partial u}{\partial q_{i}}}{\frac {\partial H}{\partial p_{i}}}-{\frac {\partial u}{\partial p_{i}}}{\frac {\partial H}{\partial q_{i}}}+{\frac {\partial u}{\partial t}}\\[6pt]&=[u,H]+{\frac {\partial u}{\partial t}},\end{aligned}}$ where $H$ is the Hamiltonian. In terms of Poisson brackets, then, Hamilton's equations can be written as ${\dot {q}}_{i}=[q_{i},H]$ and ${\dot {p}}_{i}=[p_{i},H]$.[18] If $u$ is a constant of motion, then it must satisfy $[H,u]={\frac {\partial u}{\partial t}}.$ Moreover, Poisson's theorem states that the Poisson bracket of any two constants of motion is also a constant of motion.[18]
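These bracket identities are easy to check symbolically; here is a small sketch using SymPy for a single degree of freedom, with the harmonic-oscillator Hamiltonian as an illustrative choice:

```python
import sympy as sp

q, p = sp.symbols('q p')

def poisson_bracket(u, v):
    """[u, v] = du/dq * dv/dp - du/dp * dv/dq for one degree of freedom."""
    return sp.diff(u, q) * sp.diff(v, p) - sp.diff(u, p) * sp.diff(v, q)

H = p**2 / 2 + q**2 / 2           # harmonic-oscillator Hamiltonian
print(poisson_bracket(q, p))      # 1, the canonical bracket
print(poisson_bracket(q, H))      # p  (= dq/dt by Hamilton's equations)
print(poisson_bracket(p, H))      # -q (= dp/dt)
print(poisson_bracket(H, H))      # 0: H itself is a constant of motion
```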
In September 1925, Paul Dirac received proofs of a seminal paper by Werner Heisenberg on the new branch of physics known as quantum mechanics. Soon he realized that the key idea in Heisenberg's paper was the anti-commutativity of dynamical variables and remembered that the analogous mathematical construction in classical mechanics was Poisson brackets. He found the treatment he needed in E. T. Whittaker's Analytical Dynamics of Particles and Rigid Bodies.[19][20] Continuum mechanics and fluid flow Under what conditions solutions to the Navier–Stokes equations exist and are smooth remains an unsolved problem in physics, and a Millennium Prize Problem in mathematics. In 1821, using an analogy with elastic bodies, Claude-Louis Navier arrived at the basic equations of motion for viscous fluids, now identified as the Navier–Stokes equations. In 1829 Poisson independently obtained the same result. George Gabriel Stokes re-derived them in 1845 using continuum mechanics.[21] Poisson, Augustin-Louis Cauchy, and Sophie Germain were the main contributors to the theory of elasticity in the nineteenth century. The calculus of variations was frequently used to solve problems.[16] Wave propagation Poisson also published a memoir on the theory of waves (Mém. d. l'acad., 1825).[3] Thermodynamics In his work on heat conduction, Joseph Fourier maintained that an arbitrary function may be represented as an infinite trigonometric series and made explicit the possibility of expanding functions in terms of Bessel functions and Legendre polynomials, depending on the context of the problem. It took some time for his ideas to be accepted, as his use of mathematics was less than rigorous. Although initially skeptical, Poisson adopted Fourier's method. From around 1815 he studied various problems in heat conduction. He published his Théorie mathématique de la chaleur in 1835.[22] During the early 1800s, Pierre-Simon de Laplace developed a sophisticated, if speculative, description of gases based on the old caloric theory of heat, to which younger scientists such as Poisson were less committed. A success for Laplace was his correction of Newton's formula for the speed of sound in air, which gives satisfactory answers when compared with experiments. The Newton–Laplace formula makes use of the specific heats of gases at constant volume $c_{V}$ and at constant pressure $c_{P}$. In 1823 Poisson redid his teacher's work and reached the same results without resorting to the complex hypotheses previously employed by Laplace. In addition, by using the gas laws of Robert Boyle and Joseph Louis Gay-Lussac, Poisson obtained the equation for gases undergoing adiabatic changes, namely $PV^{\gamma }={\text{constant}}$, where $P$ is the pressure of the gas, $V$ its volume, and $\gamma ={\frac {c_{P}}{c_{V}}}$.[23]
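A quick numerical illustration of the adiabatic relation (the gas, initial state, and compression ratio are illustrative choices; γ ≈ 1.4 for air):

```python
# Adiabatic compression: P * V**gamma is constant, so halving the volume
# multiplies the pressure by 2**gamma.
gamma = 1.4                # c_P / c_V for air (approximately)
P1, V1 = 101325.0, 1.0     # initial pressure (Pa) and volume (m^3), assumed
V2 = V1 / 2
P2 = P1 * (V1 / V2) ** gamma
print(round(P2))           # about 2.67e5 Pa, versus 2.03e5 Pa isothermally
```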
Other works Besides his many memoirs, Poisson published a number of treatises, most of which were intended to form part of a great work on mathematical physics, which he did not live to complete. Among these may be mentioned:[3] • Nouvelle théorie de l'action capillaire (4to, 1831); • Recherches sur la probabilité des jugements en matières criminelles et matière civile (4to, 1837), all published at Paris. • A catalog of all of Poisson's papers and works can be found in Oeuvres complétes de François Arago, Vol. 2 • Mémoire sur l'équilibre et le mouvement des corps élastiques (v. 8 in Mémoires de l'Académie Royale des Sciences de l'Institut de France, 1829), digitized copy from the Bibliothèque nationale de France • Recherches sur le Mouvement des Projectiles dans l'Air, en ayant égard a leur figure et leur rotation, et a l'influence du mouvement diurne de la terre (1839) • Mémoire sur le calcul numerique des integrales définies (1826) Interaction with Évariste Galois See also: Galois theory After political activist Évariste Galois had returned to mathematics after his expulsion from the École Normale, Poisson asked him to submit his work on the theory of equations, which he did in January 1831. In early July, Poisson declared Galois' work "incomprehensible," but encouraged Galois to "publish the whole of his work in order to form a definitive opinion."[24] While Poisson's report was made before Galois' 14 July arrest, it took until October to reach Galois in prison. It is unsurprising, in the light of his character and situation at the time, that Galois vehemently decided against publishing his papers through the academy and to instead publish them privately through his friend Auguste Chevalier. Yet Galois did not ignore Poisson's advice. He began collecting all his mathematical manuscripts while still in prison, and continued polishing his ideas until his release on 29 April 1832,[25] after which he was somehow persuaded to participate in what proved to be a fatal duel.[26] See also • List of things named after Siméon Denis Poisson • Hamilton–Jacobi equation References 1. "Siméon-Denis Poisson - Biography". Maths History. Retrieved 1 June 2022. 2. Grattan-Guinness, Ivor (2005). "The "Ecole Polytechnique", 1794-1850: Differences over Educational Purpose and Teaching Practice". The American Mathematical Monthly. 112 (3): 233–250. doi:10.2307/30037440. ISSN 0002-9890. JSTOR 30037440. 3. One or more of the preceding sentences incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Poisson, Siméon Denis". Encyclopædia Britannica. Vol. 21 (11th ed.). Cambridge University Press. p. 896. 4. "Poisson, Simeon Denis: certificate of election to the Royal Society". The Royal Society. Retrieved 20 October 2020. 5. "Book of Members, 1780–2010: Chapter P" (PDF). American Academy of Arts and Sciences. Retrieved 9 September 2016. 6. François Arago (1786–1853) attributed to Poisson the quote: "La vie n'est bonne qu'à deux choses: à faire des mathématiques et à les professer." (Life is good for only two things: to do mathematics and to teach it.) See: J.-A. Barral, ed., Oeuvres complétes de François Arago ..., vol. II (Paris, France: Gide et J. Baudry, 1854), page 662. 7. Kline, Morris (1972). "28.4: The Potential Equation and Green's Theorem". Mathematical Thought from Ancient to Modern Times. United States of America: Oxford University Press. pp. 682–4. ISBN 0-19-506136-5. 8. Baigrie, Brian (2007). "Chapter 5: From Effluvia to Fluids". Electricity and Magnetism: A Historical Perspective. United States of America: Greenwood Press. p. 47. ISBN 978-0-313-33358-3. 9. Baigrie, Brian (2007). "Chapter 7: The Current and the Needle". Electricity and Magnetism: A Historical Perspective. United States of America: Greenwood Press. p. 72. ISBN 978-0-313-33358-3. 10. Baigrie, Brian (2007). "Chapter 8: Forces and Fields". Electricity and Magnetism: A Historical Perspective. United States of America: Greenwood Press. p. 88. ISBN 978-0-313-33358-3. 11. Fresnel, A.J. (1868), OEuvres Completes 1, Paris: Imprimerie impériale 12. Fresnel, A.J. (1868), OEuvres Completes 1, Paris: Imprimerie impériale, p. 369 13. Maraldi, G.F. (1723), 'Diverses expèriences d'optique' in Mémoires de l'Académie Royale des Sciences, Imprimerie impériale, p. 111 14. Kline, Morris (1972). "27.4: The Foundation of Complex Function Theory". Mathematical Thought from Ancient to Modern Times. Oxford University Press. p. 633. ISBN 0-19-506136-5. 15. Katz, Victor (May 1979). "A History of Stokes' Theorem". Mathematics Magazine. 52 (3): 146–156. doi:10.1080/0025570X.1979.11976770. JSTOR 2690275. 16. Kline, Morris (1972). "Chapter 30: The Calculus of Variations in the Nineteenth Century". Mathematical Thought from Ancient to Modern Times. Oxford University Press. ISBN 0-19-506136-5. 17. Kot, Mark (2014). "Chapter 4: Basic Generalizations". A First Course in the Calculus of Variations. American Mathematical Society. ISBN 978-1-4704-1495-5. 18. Goldstein, Herbert (1980). "Chapter 9: Canonical Transformations". Classical Mechanics. Addison-Wesley Publishing Company. pp. 397, 399, 406–7. ISBN 0-201-02918-9. 19. Farmelo, Graham (2009). The Strangest Man: the Hidden Life of Paul Dirac, Mystic of the Atom. Great Britain: Basic Books. pp. 83–88.
ISBN 978-0-465-02210-6. 20. Coutinho, S. C. (1 May 2014). "Whittaker's analytical dynamics: a biography". Archive for History of Exact Sciences. 68 (3): 355–407. doi:10.1007/s00407-013-0133-1. ISSN 1432-0657. S2CID 122266762. 21. Kline, Morris (1972). "28.7: Systems of Partial Differential Equations". Mathematical Thought from Ancient to Modern Times. United States of America: Oxford University Press. pp. 696–7. ISBN 0-19-506136-5. 22. Kline, Morris (1972). "28.2: The Heat Equation and Fourier Series". Mathematical Thought from Ancient to Modern Times. United States of America: Oxford University Press. pp. 678–9. ISBN 0-19-506136-5. 23. Lewis, Christopher (2007). "Chapter 2: The Rise and Fall of the Caloric Theory". Heat and Thermodynamics: A Historical Perspective. United States of America: Greenwood Press. ISBN 978-0-313-33332-3. 24. Taton, R. (1947). "Les relations d'Évariste Galois avec les mathématiciens de son temps". Revue d'Histoire des Sciences et de Leurs Applications. 1 (2): 114–130. doi:10.3406/rhs.1947.2607. 25. Dupuy, Paul (1896). "La vie d'Évariste Galois". Annales Scientifiques de l'École Normale Supérieure. 13: 197–266. doi:10.24033/asens.427. 26. Bruno, Leonard C. (2003) [1999]. Math and mathematicians: the history of math discoveries around the world. Baker, Lawrence W. Detroit, Mich.: U X L. p. 173. ISBN 978-0787638139. OCLC 41497065. External links • Media related to Siméon Denis Poisson at Wikimedia Commons • Quotations related to Siméon Denis Poisson at Wikiquote • O'Connor, John J.; Robertson, Edmund F., "Siméon Denis Poisson", MacTutor History of Mathematics Archive, University of St Andrews • Siméon Denis Poisson at the Mathematics Genealogy Project Copley Medallists (1801–1850) • Astley Cooper (1801) • William Hyde Wollaston (1802) • Richard Chenevix (1803) • Smithson Tennant (1804) • Humphry Davy (1805) • Thomas Andrew Knight (1806) • Everard Home (1807) • William Henry (1808) • Edward Troughton (1809) • Benjamin Collins Brodie (1811) • William Thomas Brande (1813) • James Ivory (1814) • David Brewster (1815) • Henry Kater (1817) • Robert Seppings (1818) • Hans Christian Ørsted (1820) • Edward Sabine / John Herschel (1821) • William Buckland (1822) • John Pond (1823) • John Brinkley (1824) • François Arago / Peter Barlow (1825) • James South (1826) • William Prout / Henry Foster (1827) • George Biddell Airy (1831) • Michael Faraday / Siméon Denis Poisson (1832) • Giovanni Antonio Amedeo Plana (1834) • William Snow Harris (1835) • Jöns Jacob Berzelius / Francis Kiernan (1836) • Antoine César Becquerel / John Frederic Daniell (1837) • Carl Friedrich Gauss / Michael Faraday (1838) • Robert Brown (1839) • Justus von Liebig / Jacques Charles François Sturm (1840) • Georg Ohm (1841) • James MacCullagh (1842) • Jean-Baptiste Dumas (1843) • Carlo Matteucci (1844) • Theodor Schwann (1845) • Urbain Le Verrier (1846) • John Herschel (1847) • John Couch Adams (1848) • Roderick Murchison (1849) • Peter Andreas Hansen (1850) Authority control International • FAST • ISNI • VIAF National • Norway • Spain • France • BnF data • Germany • Israel • Belgium • United States • Czech Republic • Australia • Netherlands • Poland • Portugal Academics • CiNii • MathSciNet • Mathematics Genealogy Project • zbMATH People • Deutsche Biographie • Trove Other • SNAC • IdRef
Wikipedia
SimFiT Simfit is a free open-source Windows package for simulation, curve fitting, statistics, and plotting, using a library of models or user-defined mathematical equations. Simfit has been developed by Bill Bardsley of the University of Manchester.[1] Although it is written for Windows, it can easily be installed and used on Linux machines via WINE.[2] Simfit • Developer(s): William G. Bardsley, University of Manchester • Stable release: 8.07 (January 2023) • Operating system: Windows • Type: numerical analysis • License: GNU AGPL Simfit is developed using Silverfrost Limited's FTN95 Fortran Compiler and is currently featured on their website as a showcased application. The graphical functionality in Simfit has been released as a Fortran library called Simdem, which allows the programmer to produce charts and graphs with just a few lines of Fortran. A version of Simdem is shipped with the Windows version of the NAG Fortran Builder.[3] A Spanish-language version of Simfit is maintained by a team in Salamanca. References 1. Burguillo, F.J.; Holgado, M.; Bardsley, W.G. (2003). "Using the SIMFIT Statistical Package to teach Data Analysis in Experimental Sciences". Revista de Educación en Ciencias - Journal of Science Education. 4: 8–14. 2. "WineHQ - Simfit". appdb.winehq.org. Retrieved 2021-10-04. 3. Chivers, Ian D.; Jane Sleightholme (August 2007). "Introduction to NAG Fortran builder". ACM SIGPLAN Fortran Forum. 26 (2): 12–24. doi:10.1145/1279941.1279943. S2CID 7955473. External links • Main Website • Website of the Silverfrost version • Website of the Spanish version Statistical software Public domain • Dataplot • Epi Info • CSPro • X-12-ARIMA Open-source • ADMB • DAP • gretl • JASP • JAGS • JMulTi • Julia • Jupyter (Julia, Python, R) • GNU Octave • OpenBUGS • Orange • PSPP • Python (statsmodels, PyMC3, IPython, IDLE) • R (RStudio) • SageMath • SimFiT • SOFA Statistics • Stan • XLispStat Freeware • BV4.1 • CumFreq • SegReg • XploRe • WinBUGS Commercial Cross-platform • Data Desk • GAUSS • GraphPad InStat • GraphPad Prism • IBM SPSS Statistics • IBM SPSS Modeler • JMP • Maple • Mathcad • Mathematica • MATLAB • OxMetrics • RATS • Revolution Analytics • SAS • SmartPLS • Stata • StatView • SUDAAN • S-PLUS • TSP • World Programming System (WPS) Windows only • BMDP • EViews • GenStat • LIMDEP • LISREL • MedCalc • Microfit • Minitab • MLwiN • NCSS • SHAZAM • SigmaStat • Statistica • StatsDirect • StatXact • SYSTAT • The Unscrambler • UNISTAT Excel add-ons • Analyse-it • UNISTAT for Excel • XLfit • RExcel • Category • Comparison
Wikipedia
Matrix similarity In linear algebra, two n-by-n matrices A and B are called similar if there exists an invertible n-by-n matrix P such that $B=P^{-1}AP.$ Similar matrices represent the same linear map under two (possibly) different bases, with P being the change of basis matrix.[1][2] A transformation A ↦ P−1AP is called a similarity transformation or conjugation of the matrix A. In the general linear group, similarity is therefore the same as conjugacy, and similar matrices are also called conjugate; however, in a given subgroup H of the general linear group, the notion of conjugacy may be more restrictive than similarity, since it requires that P be chosen to lie in H. Motivating example When defining a linear transformation, it can be the case that a change of basis can result in a simpler form of the same transformation. For example, the matrix representing a rotation in R3 when the axis of rotation is not aligned with the coordinate axis can be complicated to compute. If the axis of rotation were aligned with the positive z-axis, then it would simply be $S={\begin{bmatrix}\cos \theta &-\sin \theta &0\\\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}},$ where $\theta $ is the angle of rotation. In the new coordinate system, the transformation would be written as $y'=Sx',$ where x' and y' are respectively the original and transformed vectors in a new basis containing a vector parallel to the axis of rotation. In the original basis, the transform would be written as $y=Tx,$ where vectors x and y and the unknown transform matrix T are in the original basis. To write T in terms of the simpler matrix, we use the change-of-basis matrix P that transforms x and y as $x'=Px$ and $y'=Py$: ${\begin{aligned}&&y'&=Sx'\\&\Rightarrow &Py&=SPx\\&\Rightarrow &y&=\left(P^{-1}SP\right)x=Tx\end{aligned}}$ Thus, the matrix in the original basis, $T$, is given by $T=P^{-1}SP$. The transform in the original basis is found to be the product of three easy-to-derive matrices. In effect, the similarity transform operates in three steps: change to a new basis (P), perform the simple transformation (S), and change back to the old basis (P−1).
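A numerical sketch of this motivating example in Python with NumPy (the change-of-basis matrix P below is an arbitrary invertible choice), showing that T = P⁻¹SP represents the same operator and so shares its invariants:

```python
import numpy as np

theta = 0.7
# Rotation about the z-axis, expressed in the convenient basis.
S = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# An arbitrary invertible change-of-basis matrix.
P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# The same rotation expressed in the original basis.
T = np.linalg.inv(P) @ S @ P

# Similar matrices share trace, determinant, and eigenvalues.
print(np.trace(S), np.trace(T))
print(np.linalg.det(S), np.linalg.det(T))
print(np.sort_complex(np.linalg.eigvals(S)))
print(np.sort_complex(np.linalg.eigvals(T)))
```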
Neither of these forms is unique (diagonal entries or Jordan blocks may be permuted) so they are not really normal forms; moreover their determination depends on being able to factor the minimal or characteristic polynomial of A (equivalently to find its eigenvalues). The rational canonical form does not have these drawbacks: it exists over any field, is truly unique, and it can be computed using only arithmetic operations in the field; A and B are similar if and only if they have the same rational canonical form. The rational canonical form is determined by the elementary divisors of A; these can be immediately read off from a matrix in Jordan form, but they can also be determined directly for any matrix by computing the Smith normal form, over the ring of polynomials, of the matrix (with polynomial entries) XIn − A (the same one whose determinant defines the characteristic polynomial). Note that this Smith normal form is not a normal form of A itself; moreover it is not similar to XIn − A either, but obtained from the latter by left and right multiplications by different invertible matrices (with polynomial entries). Similarity of matrices does not depend on the base field: if L is a field containing K as a subfield, and A and B are two matrices over K, then A and B are similar as matrices over K if and only if they are similar as matrices over L. This is so because the rational canonical form over K is also the rational canonical form over L. This means that one may use Jordan forms that only exist over a larger field to determine whether the given matrices are similar. In the definition of similarity, if the matrix P can be chosen to be a permutation matrix then A and B are permutation-similar; if P can be chosen to be a unitary matrix then A and B are unitarily equivalent. The spectral theorem says that every normal matrix is unitarily equivalent to some diagonal matrix. Specht's theorem states that two matrices are unitarily equivalent if and only if they satisfy certain trace equalities. See also • Canonical forms • Matrix congruence • Matrix equivalence Notes 1. Beauregard & Fraleigh (1973, pp. 240–243) 2. Bronson (1970, pp. 176–178) References • Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X • Bronson, Richard (1970), Matrix Methods: An Introduction, New York: Academic Press, LCCN 70097490 • Horn and Johnson, Matrix Analysis, Cambridge University Press, 1985. ISBN 0-521-38632-2. (Similarity is discussed many places, starting at page 44.)
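All of the shared invariants listed above can be checked numerically. The following sketch (Python with NumPy; the particular matrices A and P are arbitrary illustrative choices, any square A and invertible P would do) conjugates a matrix by a change of basis and compares the invariants.

```python
import numpy as np

# Arbitrary illustrative choices: any square A and any invertible P will do.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])        # det(P) = 3, so P is invertible

B = np.linalg.inv(P) @ A @ P           # B = P^(-1) A P, hence A and B are similar

print(np.trace(A), np.trace(B))                            # equal traces
print(np.linalg.det(A), np.linalg.det(B))                  # equal determinants
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # equal ranks
print(np.sort(np.linalg.eigvals(A)))                       # same eigenvalues ...
print(np.sort(np.linalg.eigvals(B)))                       # ... with multiplicities
print(np.poly(A))                                          # same characteristic
print(np.poly(B))                                          # polynomial coefficients
```

Up to floating-point round-off, each printed pair agrees. The eigenvectors themselves differ, as expected: if Av = λv, then the corresponding eigenvector of B is P−1v.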
Similarity invariance In linear algebra, similarity invariance is a property exhibited by a function whose value is unchanged under similarities of its domain. That is, $f$ is invariant under similarities if $f(A)=f(B^{-1}AB)$ where $B^{-1}AB$ is a matrix similar to A. Examples of such functions include the trace, determinant, characteristic polynomial, and the minimal polynomial. A more colloquial phrase that means the same thing as similarity invariance is "basis independence", since a matrix can be regarded as a linear operator, written in a certain basis, and the same operator in a new basis is related to one in the old basis by the conjugation $B^{-1}AB$, where $B$ is the transformation matrix to the new basis. See also • Invariant (mathematics) • Gauge invariance • Trace diagram
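For a generic 2-by-2 matrix, the invariance of the trace, determinant and characteristic polynomial can even be verified symbolically. A minimal sketch, assuming SymPy, with A and B having arbitrary symbolic entries (B is assumed invertible):

```python
import sympy as sp

a, b, c, d, p, q, r, s, lam = sp.symbols('a b c d p q r s lambda')
A = sp.Matrix([[a, b], [c, d]])
B = sp.Matrix([[p, q], [r, s]])     # generic change of basis, assumed invertible

conj = B.inv() * A * B              # the conjugation B^(-1) A B

# Each difference simplifies to 0, proving invariance for generic 2x2 matrices:
print(sp.simplify(conj.trace() - A.trace()))
print(sp.simplify(conj.det() - A.det()))
print(sp.simplify((lam * sp.eye(2) - conj).det() - (lam * sp.eye(2) - A).det()))
```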
Similitude

Similitude is a concept applicable to the testing of engineering models. A model is said to have similitude with the real application if the two share geometric similarity, kinematic similarity and dynamic similarity. Similarity and similitude are interchangeable in this context. The term dynamic similitude is often used as a catch-all because it implies that geometric and kinematic similitude have already been met. Similitude's main application is in hydraulic and aerospace engineering to test fluid flow conditions with scaled models. It is also the primary theory behind many textbook formulas in fluid mechanics. The concept of similitude is strongly tied to dimensional analysis.

Overview

Engineering models are used to study complex fluid dynamics problems where calculations and computer simulations are not reliable. Models are usually smaller than the final design, but not always. Scale models allow testing of a design prior to building, and in many cases are a critical step in the development process. Construction of a scale model, however, must be accompanied by an analysis to determine what conditions it is tested under. While the geometry may be simply scaled, other parameters, such as pressure, temperature or the velocity and type of fluid, may need to be altered. Similitude is achieved when testing conditions are created such that the test results are applicable to the real design.

The following criteria are required to achieve similitude:
• Geometric similarity – the model is the same shape as the application, usually scaled.
• Kinematic similarity – the fluid flows of both the model and the real application must undergo similar rates of change of motion (fluid streamlines are similar).
• Dynamic similarity – the ratios of all forces acting on corresponding fluid particles and boundary surfaces in the two systems are constant.

To satisfy the above conditions, the application is analyzed:
1. All parameters required to describe the system are identified using principles from continuum mechanics.
2. Dimensional analysis is used to express the system with as few independent variables and as many dimensionless parameters as possible.
3. The values of the dimensionless parameters are held to be the same for both the scale model and application. This can be done because they are dimensionless and will ensure dynamic similitude between the model and the application.

The resulting equations are used to derive scaling laws which dictate model testing conditions. It is often impossible to achieve strict similitude during a model test. The greater the departure from the application's operating conditions, the more difficult achieving similitude is. In these cases some aspects of similitude may be neglected, focusing on only the most important parameters.

The design of marine vessels remains more of an art than a science in large part because dynamic similitude is especially difficult to attain for a vessel that is partially submerged: a ship is affected by wind forces in the air above it, by hydrodynamic forces within the water under it, and especially by wave motions at the interface between the water and the air. The scaling requirements for each of these phenomena differ, so models cannot replicate what happens to a full sized vessel nearly so well as can be done for an aircraft or submarine—each of which operates entirely within one medium.

Similitude is a term used widely in fracture mechanics relating to the strain life approach.
Under given loading conditions, the fatigue damage in an un-notched specimen is comparable to that of a notched specimen. Similitude suggests that the component fatigue life of the two objects will also be similar.

An example

Consider a submarine modeled at 1/40th scale. The application operates in sea water at 0.5 °C, moving at 5 m/s. The model will be tested in fresh water at 20 °C. Find the power required for the submarine to operate at the stated speed.

A free body diagram is constructed and the relevant relationships of force and velocity are formulated using techniques from continuum mechanics. The variables which describe the system are:

Variable | Application | Scaled model | Units
L (diameter of submarine) | 1 | 1/40 | m
V (speed) | 5 | calculate | m/s
$\rho $ (density) | 1028 | 998 | kg/m3
$\mu $ (dynamic viscosity) | 1.88×10−3 | 1.00×10−3 | Pa·s (N s/m2)
F (force) | calculate | to be measured | N (kg m/s2)

This example has five independent variables and three fundamental units. The fundamental units are: meter, kilogram, second.[1] Invoking the Buckingham π theorem shows that the system can be described with two dimensionless numbers and one independent variable.[2]

Dimensional analysis is used to rearrange the units to form the Reynolds number ($R_{e}$) and pressure coefficient ($C_{p}$). These dimensionless numbers account for all the variables listed above except F, which will be the test measurement. Since the dimensionless parameters will stay constant for both the test and the real application, they will be used to formulate scaling laws for the test.

Scaling laws:

${\begin{aligned}&R_{e}=\left({\frac {\rho VL}{\mu }}\right)&\longrightarrow &V_{\text{model}}=V_{\text{application}}\times \left({\frac {\rho _{a}}{\rho _{m}}}\right)\times \left({\frac {L_{a}}{L_{m}}}\right)\times \left({\frac {\mu _{m}}{\mu _{a}}}\right)\\&C_{p}=\left({\frac {2\Delta p}{\rho V^{2}}}\right),F=\Delta pL^{2}&\longrightarrow &F_{\text{application}}=F_{\text{model}}\times \left({\frac {\rho _{a}}{\rho _{m}}}\right)\times \left({\frac {V_{a}}{V_{m}}}\right)^{2}\times \left({\frac {L_{a}}{L_{m}}}\right)^{2}.\end{aligned}}$

The pressure ($p$) is not one of the five variables, but the force ($F$) is. The pressure difference (Δ$p$) has thus been replaced with ($F/L^{2}$) in the pressure coefficient. This gives a required test velocity of: $V_{\text{model}}=V_{\text{application}}\times 21.9$.

A model test is then conducted at that velocity, and the force that is measured in the model ($F_{model}$) is then scaled to find the force that can be expected for the real application ($F_{application}$): $F_{\text{application}}=F_{\text{model}}\times 3.44$

The power $P$ in watts required by the submarine is then: $P[\mathrm {W} ]=F_{\text{application}}\times V_{\text{application}}=F_{\text{model}}[\mathrm {N} ]\times 17.2\ \mathrm {m/s} $

Note that even though the model is scaled smaller, the water velocity needs to be increased for testing. This remarkable result shows how similitude in nature is often counterintuitive.

Typical applications

See also: List of dimensionless numbers

Fluid mechanics

Similitude has been well documented for a large number of engineering problems and is the basis of many textbook formulas and dimensionless quantities. These formulas and quantities are easy to use without having to repeat the laborious task of dimensional analysis and formula derivation. Simplification of the formulas (by neglecting some aspects of similitude) is common, and needs to be reviewed by the engineer for each application.
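Returning to the submarine example above, the scaling arithmetic can be reproduced in a few lines. This sketch (plain Python, values taken from the table above) recovers the quoted factors of 21.9 and about 3.44, up to rounding:

```python
# Values as given in the table of the submarine example (plain Python).
rho_a, rho_m = 1028.0, 998.0      # densities: sea water vs. fresh water (kg/m^3)
mu_a, mu_m = 1.88e-3, 1.00e-3     # dynamic viscosities (Pa s)
L_ratio = 40.0                    # L_a / L_m for a 1/40th-scale model
V_a = 5.0                         # full-scale speed (m/s)

# Matching the Reynolds number fixes the test velocity:
V_m = V_a * (rho_a / rho_m) * L_ratio * (mu_m / mu_a)
print(f"V_model = {V_m:.1f} m/s (= V_application x {V_m / V_a:.1f})")

# Matching the pressure coefficient fixes the force scaling:
F_scale = (rho_a / rho_m) * (V_a / V_m) ** 2 * L_ratio ** 2
print(f"F_application = F_model x {F_scale:.2f}")

# Power of the full-scale submarine per newton of measured model force:
print(f"P = F_model x {F_scale * V_a:.1f} W/N")
```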
Similitude can be used to predict the performance of a new design based on data from an existing, similar design. In this case, the model is the existing design. Another use of similitude and models is in validation of computer simulations, with the ultimate goal of eliminating the need for physical models altogether.

Another application of similitude is to replace the operating fluid with a different test fluid. Wind tunnels, for example, have trouble with air liquefying in certain conditions, so helium is sometimes used. Other applications may operate in dangerous or expensive fluids, so the testing is carried out in a more convenient substitute.

Some common applications of similitude and associated dimensionless numbers:
• Incompressible flow (see example above) – Reynolds number, pressure coefficient (Froude number and Weber number for open channel hydraulics)
• Compressible flows – Reynolds number, Mach number, Prandtl number, specific heat ratio
• Flow-excited vibration – Strouhal number
• Centrifugal compressors – Reynolds number, Mach number, pressure coefficient, velocity ratio
• Boundary layer thickness – Reynolds number, Womersley number, dynamic similarity

Solid mechanics: structural similitude

Similitude analysis is a powerful engineering tool for designing scaled-down structures. Although both dimensional analysis and direct use of the governing equations may be used to derive the scaling laws, the latter results in more specific scaling laws.[3] The design of scaled-down composite structures can be successfully carried out using complete and partial similarities.[4] In the design of scaled structures under the complete similarity condition, all the derived scaling laws must be satisfied between the model and prototype, which yields perfect similarity between the two scales. However, the design of a scaled-down structure which is perfectly similar to its prototype has practical limitations, especially for laminated structures. Relaxing some of the scaling laws may eliminate the limitation of the design under the complete similarity condition and yield scaled models that are partially similar to their prototype. However, the design of scaled structures under the partial similarity condition must follow a deliberate methodology to ensure the accuracy of the scaled structure in predicting the structural response of the prototype.[5] Scaled models can be designed to replicate the dynamic characteristics (e.g. frequencies, mode shapes and damping ratios) of their full-scale counterparts. However, appropriate response scaling laws need to be derived to predict the dynamic response of the full-scale prototype from the experimental data of the scaled model.[6]

See also
• Similitude of ship models

References
1. In the SI system of units, newtons can be expressed in terms of kg·m/s2.
2. 5 variables − 3 fundamental units ⇒ 2 dimensionless numbers.
3. Rezaeepazhand, J.; Simitses, G.J.; Starnes, J.H. (1996). "Scale models for laminated cylindrical shells subjected to axial compression". Composite Structures. 34 (4): 371–9. doi:10.1016/0263-8223(95)00154-9.
4. Asl, M.E.; Niezrecki, C.; Sherwood, J.; Avitabile, P. (2016). "Similitude Analysis of Composite I-Beams with Application to Subcomponent Testing of Wind Turbine Blades". Experimental and Applied Mechanics. Conference Proceedings of the Society for Experimental Mechanics Series. Vol. 4. Springer. pp. 115–126. doi:10.1007/978-3-319-22449-7_14. ISBN 978-3-319-22449-7.
5. Asl, M.E.; Niezrecki, C.; Sherwood, J.; Avitabile, P. (2017).
"Vibration prediction of thin-walled composite I-beams using scaled models". Thin-Walled Structures. 113: 151–161. doi:10.1016/j.tws.2017.01.020. 6. Eydani Asl, M.; Niezrecki, C.; Sherwood, J.; Avitabile, P. (2015). "Predicting the Vibration Response in Subcomponent Testing of Wind Turbine Blades". Special Topics in Structural Dynamics. Conference Proceedings of the Society for Experimental Mechanics Series. Vol. 6. Springer. pp. 115–123. doi:10.1007/978-3-319-15048-2_11. ISBN 978-3-319-15048-2. Further reading • Binder, Raymond C. (1973). Fluid Mechanics. Prentice-Hall. ISBN 978-0-13-322594-5. OCLC 393400. • Howarth, L., ed. (1953). Modern Developments in Fluid Mechanics, High Speed Flow. Clarendon Press. OCLC 572735435 – via HathiTrust. • Kline, Stephen J. (1986). Similitude and Approximation Theory. Springer. ISBN 0-387-16518-5. • Chanson, Hubert (2009). "Turbulent Air-water Flows in Hydraulic Structures: Dynamic Similarity and Scale Effects". Environmental Fluid Mechanics. 9 (2): 125–142. doi:10.1007/s10652-008-9078-3. S2CID 121960118. • Heller, V. (2011). "Scale Effects in Physical Hydraulic Engineering Models". Journal of Hydraulic Research. 49 (3): 293–306. doi:10.1080/00221686.2011.578914. S2CID 121563448. • De Rosa, S.; Franco, F. (2015). "Analytical similitudes applied to thin cylindrical shells". Advances in Aircraft and Spacecraft Science. 2 (4): 403–425. doi:10.12989/aas.2015.2.4.403. • Emori, Richard I.; Schuring, Dieterich J. (2016). Scale models in engineering : fundamentals and applications (2nd ed.). Elsevier. ISBN 978-0-08-020860-2. External links • MIT open courseware lecture notes on Similitude for marine engineering (pdf file)
Self-similar solution

In the study of partial differential equations, particularly in fluid dynamics, a self-similar solution is a form of solution which is similar to itself if the independent and dependent variables are appropriately scaled. Self-similar solutions appear whenever the problem lacks a characteristic length or time scale (for example, the Blasius boundary layer of an infinite plate, but not of a finite-length plate). These include, for example, the Blasius boundary layer or the Sedov–Taylor shell.[1][2]

Concept

A powerful tool in physics is the concept of dimensional analysis and scaling laws. By examining the physical effects present in a system, we may estimate their size and hence which, for example, might be neglected. In some cases, the system may not have a fixed natural length or time scale, while the solution depends on space or time. It is then necessary to construct a scale using space or time and the other dimensional quantities present—such as the viscosity $\nu $. These constructs are not 'guessed' but are derived immediately from the scaling of the governing equations.

Classification

The normal self-similar solution is also referred to as a self-similar solution of the first kind, since another type of self-similar solution exists for finite-sized problems, which cannot be derived from dimensional analysis, known as a self-similar solution of the second kind.

Self-similar solution of the second kind

The early identification of self-similar solutions of the second kind can be found in problems of imploding shock waves (the Guderley–Landau–Stanyukovich problem), analyzed by G. Guderley (1942) and Lev Landau and K. P. Stanyukovich (1944),[3] and in the propagation of shock waves by a short impulse, analysed by Carl Friedrich von Weizsäcker[4] and Yakov Borisovich Zel'dovich (1956), who also classified it as the second kind for the first time.[5] A complete description was made in 1972 by Grigory Barenblatt and Yakov Borisovich Zel'dovich.[6] The self-similar solution of the second kind also appears in different contexts, such as in boundary-layer problems subjected to small perturbations,[7] as was identified by Keith Stewartson,[8] Paul A. Libby and Herbert Fox.[9] Moffatt eddies are also a self-similar solution of the second kind.

Example - Rayleigh problem

A simple example is a semi-infinite domain bounded by a rigid wall and filled with viscous fluid.[10] At time $t=0$ the wall is made to move with constant speed $U$ in a fixed direction (for definiteness, say the $x$ direction and consider only the $x-y$ plane). One can see that there is no distinguished length scale given in the problem. This is known as the Rayleigh problem. The no-slip boundary condition is $u=U$ on $y=0$. Also, the condition that the plate has no effect on the fluid at infinity is enforced as $u\rightarrow 0$ as $y\rightarrow \infty $.

Now, from the Navier-Stokes equations $\rho \left({\dfrac {\partial {\vec {u}}}{\partial t}}+{\vec {u}}\cdot \nabla {\vec {u}}\right)=-\nabla p+\mu \nabla ^{2}{\vec {u}}$ one can observe that this flow will be rectilinear, with gradients in the $y$ direction and flow in the $x$ direction, and that the pressure term will have no tangential component, so that ${\dfrac {\partial p}{\partial y}}=0$.
The $x$ component of the Navier-Stokes equations then becomes ${\dfrac {\partial {\vec {u}}}{\partial t}}=\nu \partial _{y}^{2}{\vec {u}}$ and the scaling arguments can be applied to show that ${\frac {U}{t}}\sim \nu {\frac {U}{y^{2}}}$ which gives the scaling of the $y$ co-ordinate as $y\sim (\nu t)^{1/2}$. This allows one to pose a self-similar ansatz such that, with $f$ and $\eta $ dimensionless, $u=Uf\left(\eta \equiv {\dfrac {y}{(\nu t)^{1/2}}}\right)$ The above contains all the relevant physics and the next step is to solve the equations, which for many cases will include numerical methods. This equation is $-\eta f'/2=f''$ with solution satisfying the boundary conditions that $f=1-\operatorname {erf} (\eta /2)$ or $u=U\left(1-\operatorname {erf} \left(y/(4\nu t)^{1/2}\right)\right)$ which is a self-similar solution of the first kind. References 1. Gratton, J. (1991). Similarity and self similarity in fluid dynamics. Fundamentals of Cosmic Physics. Vol. 15. New York: Gordon and Breach. pp. 1–106. OCLC 35504041. 2. Barenblatt, Grigory Isaakovich (1996). Scaling, self-similarity, and intermediate asymptotics: dimensional analysis and intermediate asymptotics. Vol. 14. Cambridge University Press. ISBN 0-521-43522-6. 3. Stanyukovich, K. P. (2016). Unsteady motion of continuous media. Elsevier. Page 521 4. Weizsäcker, CF (1954). Approximate representation of strong unsteady shock waves through homology solutions. Zeitschrift für Naturforschung A , 9 (4), 269-275. 5. Zeldovich, Y. B. (1956). "The motion of a gas under the action of a short term pressure shock". Akust. Zh. 2 (1): 28–38. 6. Barenblatt, G. I.; Zel'dovich, Y. B. (1972). "Self-similar solutions as intermediate asymptotics". Annual Review of Fluid Mechanics. 4 (1): 285–312. Bibcode:1972AnRFM...4..285B. doi:10.1146/annurev.fl.04.010172.001441. 7. Coenen, W.; Rajamanickam, P.; Weiss, A. D.; Sánchez, A. L.; Williams, F. A. (2019). "Swirling flow induced by jets and plumes". Acta Mechanica. 230 (6): 2221–2231. doi:10.1007/s00707-019-02382-2. S2CID 126488392. 8. Stewartson, K. (1957). "On asymptotic expansions in the theory of boundary layers". Journal of Mathematics and Physics. 36 (1–4): 173–191. doi:10.1002/sapm1957361173. 9. Libby, P. A.; Fox, H. (1963). "Some perturbation solutions in laminar boundary-layer theory". Journal of Fluid Mechanics. 17 (3): 433–449. doi:10.1017/S0022112063001439. S2CID 123824364. 10. Batchelor (2000) [1967]. An Introduction to Fluid Dynamics. p. 189. ISBN 9780521663960.
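The self-similar collapse in the Rayleigh problem above is easy to check numerically: points with the same similarity coordinate η = y/√(νt) give the same velocity ratio u/U at all times. A minimal sketch in plain Python; the value of ν is an arbitrary illustrative choice.

```python
from math import erf, sqrt

nu = 1.0e-6                         # kinematic viscosity (m^2/s); illustrative value

def u_over_U(y, t):
    # the closed-form solution u/U = 1 - erf(y / sqrt(4 nu t))
    return 1.0 - erf(y / sqrt(4.0 * nu * t))

eta = 1.5                           # fix one value of the similarity coordinate
for t in (0.1, 1.0, 10.0, 100.0):
    y = eta * sqrt(nu * t)          # points sharing the same eta ...
    print(t, y, u_over_U(y, t))     # ... all give the same velocity ratio
```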
Similarity system of triangles

A similarity system of triangles is a specific configuration involving a set of triangles.[1] A set of triangles is considered a configuration when all of the triangles share a minimum of one incidence relation with one of the other triangles present in the set.[1] An incidence relation between triangles refers to when two triangles share a point. For example, the two triangles to the right, $AHC$ and $BHC$, are a configuration made up of two incidence relations, since points $C$ and $H$ are shared. The triangles that make up configurations are known as component triangles.[1] Triangles must not only be a part of a configuration set to be in a similarity system, but must also be directly similar.[1] Direct similarity implies that all angles are equal between two given triangles and that they share the same rotational sense.[2] As is seen in the adjacent images, in the directly similar triangles, the rotation of $B$ onto $C$ and $B^{1}$ onto $C^{1}$ occurs in the same direction. In the opposite similar triangles, the rotation of $B$ onto $C$ and $B^{1}$ onto $C^{1}$ occurs in the opposite direction. In sum, a configuration is a similarity system when all triangles in the set lie in the same plane and the following holds true: if there are n triangles in the set and n − 1 triangles are directly similar, then all n triangles are directly similar.[1]

Background

J.G. Mauldon introduced the idea of similarity systems of triangles in his paper in Mathematics Magazine, "Similar Triangles".[1] Mauldon began his analyses by examining given triangles $ABC,XYZ$ for direct similarity through complex numbers, specifically the equation ${\begin{vmatrix}a&b&c\\x&y&z\\1&1&1\end{vmatrix}}=0$.[1] He then furthered his analyses to equilateral triangles, showing that if a triangle $ABC$ satisfied the equation $a+wb+w^{2}c=0$ when $w={\frac {-1+i{\sqrt {3}}}{2}}$, it was equilateral.[1] As evidence of this work, he applied his conjectures on direct similarity and equilateral triangles in proving Napoleon's theorem.[1] He then built on Napoleon's theorem by proving that if an equilateral triangle is constructed with equilateral triangles incident on each vertex, the midpoints of the connecting lines between the non-incident vertices of the outer three equilateral triangles create an equilateral triangle.[1] Other similar work was done by the French geometer Thébault in his proof that, given a parallelogram and squares that lie on each side of the parallelogram, the centers of the squares create a square.[3] Mauldon then analyzed coplanar sets of triangles, determining if they were similarity systems based on the criterion: if all but one of the triangles are directly similar, then all of the triangles are directly similar.[1]

Examples

Direct similarity

If we construct a rectangle $ABCD$ with directly similar triangles $PAB,QBC,RCD,SDA$ on each side of the rectangle that are similar to $PQS$, then $RQS$ is directly similar and the set of triangles $\{PAB,QBC,RCD,SDA,PQS,RQS\}$ is a similarity system.[1]

Indirect similarity

However, if we acknowledge that the triangles can be degenerate and take points $B$ and $P$ to lie on each other and $Q,R,D$ and $S$ to lie on each other, then the set of triangles is no longer a direct similarity system, since the second triangle has area and the others do not.[1]

Rectangular parallelepiped

Given a figure where three sets of lines are parallel, but not equivalent in length (formally known as a rectangular parallelepiped) with all points of order two being labelled as
follows: $\{A_{1}B_{1}C_{1},A_{2}B_{2}C_{2},A_{3}B_{3}C_{3},A_{4}B_{4}C_{4},A_{1}B_{4}C_{3},A_{2}B_{3}C_{4},A_{3}B_{2}C_{1},A_{4}B_{1}C_{2}\}$

Then we can take the above points, analyze them as triangles, and show that they form a similarity system.[1]

Proof: In order for any given triangle, $KLM$, to be directly similar to $A_{1}B_{1}C_{1}$, the following equation should be satisfied: $(\ell -m)a_{1}+(m-k)b_{1}+(k-\ell )c_{1}=0,$[1] where ℓ, m, k and a1, b1, c1 are the vertices of the triangles $KLM$ and $A_{1}B_{1}C_{1}$, regarded as complex numbers. If the same pattern is followed for the rest of the triangles, one will notice that the summation of the equations for the first four triangles and the summation of the equations for the last four triangles provides the same result.[1] Therefore, by the definition of a similarity system of triangles, whichever seven of the triangles are chosen to be directly similar, the eighth will satisfy the system, making them all directly similar.[1]

Gallery
• Direct similarity example
• There are two incidence relations between triangles AHC and BHC
• Opposite similarity example
• Thébault's theorem
• Napoleon's theorem
• Similarity system example
• Non-similarity system example
• Rectangular parallelepiped

References
1. Mauldon, J.G. (May 1966). "Similar Triangles". Mathematics Magazine. 39 (3): 165–174. doi:10.1080/0025570X.1966.11975709.
2. Weisstein, Eric. "Similar". Wolfram MathWorld. Retrieved 2018-12-12.
3. Gerber, Leon (October 1980). "Napoleon's Theorem and the Parallelogram Inequality for Affine-Regular Polygons". The American Mathematical Monthly. 87 (8): 644–648. doi:10.1080/00029890.1980.11995110. JSTOR 2320952.
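Mauldon's determinant criterion from the Background section can be tested directly with complex arithmetic. In the sketch below (plain Python; the triangle and the similarity parameters are arbitrary choices), a spiral similarity z ↦ wz + v produces a directly similar copy, for which the determinant vanishes, while a reflected copy has the opposite rotational sense and fails the criterion.

```python
def det3(a, b, c, x, y, z):
    # determinant of [[a, b, c], [x, y, z], [1, 1, 1]], expanded along the last row
    return (b * z - c * y) - (a * z - c * x) + (a * y - b * x)

a, b, c = 0 + 0j, 4 + 0j, 1 + 3j            # an arbitrary (scalene) triangle
w, v = 0.6 + 0.8j, 2 - 1j                   # rotate/scale by w, translate by v
x, y, z = w * a + v, w * b + v, w * c + v   # a directly similar copy

print(abs(det3(a, b, c, x, y, z)))          # ~0: the copy is directly similar
print(abs(det3(a, b, c,                     # reflected copy: opposite sense,
               x.conjugate(), y.conjugate(), z.conjugate())))  # determinant != 0
```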
Simion Filip

Simion Filip is a mathematician from Moldova with dual citizenship of Romania and Moldova. He is an associate professor of mathematics at the University of Chicago who works in dynamical systems and algebraic geometry.

Born: Chișinău, Moldova. Citizenship: Romanian, Moldovan. Alma mater: University of Chicago; University of Cambridge; Princeton University. Fields: Mathematics. Institutions: University of Chicago. Thesis: Teichmüller dynamics and Hodge theory (2016). Doctoral advisor: Alex Eskin. (Pictured at the Mathematical Research Institute of Oberwolfach in 2014.)

Early life and education

Filip was born in Chișinău, where he grew up and attended the Moldo-Turkish "Orizont" Lyceum, graduating in 2005.[1][2] He is a dual citizen of Romania and Moldova.[3] In 2004 and 2005, Filip won a bronze medal and a silver medal respectively while representing Moldova at the International Mathematical Olympiad.[4] Filip graduated with an A.B. in mathematics from Princeton University in 2009.[3] He attended Part III of the Mathematical Tripos at the University of Cambridge, where he received a master's degree with distinction in 2010.[3] He received his Ph.D. under the supervision of Alex Eskin at the University of Chicago in 2016.[5][6]

Career

Filip spent two postdoctoral years as a Junior Fellow at Harvard University from 2016 to 2018, and another year at the Institute for Advanced Study.[3] Since 2019, he has been an associate professor at the University of Chicago.[3]

Awards

Filip received a five-year Clay Research Fellowship lasting from 2016 to 2021.[3][6] In 2020, Filip was one of the recipients of the EMS Prize.[7][8]

Research

Filip's research focuses on the interactions between dynamical systems and algebraic geometry.[6][7] In particular, he studies dynamics on Teichmüller spaces and Hodge theory in complex geometry.[6][7]

References
1. "Students honored at Opening Exercises". Princeton University. September 7, 2008. Retrieved March 17, 2021.
2. Guțu, Liubomir (May 31, 2021). "Cu studii la Harvard, Cambridge și Princeton. Cine este Simion Filip, moldoveanul care a devenit laureat al premiului Societății Europene de Matematică". diez.md (in Romanian). Retrieved March 15, 2021.
3. "CV" (PDF). Simion Filip. Retrieved March 11, 2021.
4. "Simion Filip". International Mathematical Olympiad. Retrieved March 11, 2021.
5. Simion Filip at the Mathematics Genealogy Project
6. "Simion Filip". Clay Mathematics Institute. Retrieved March 11, 2021.
7. "Simion Filip EMS Prize winner". 8th European Congress of Mathematics. Retrieved March 11, 2021.
8. Muñoz, Vicente (May 8, 2020). "Prize Winners Announced". European Mathematical Society. Retrieved March 11, 2021.

External links
• "Personal website".
Simion Stoilow Prize The Simion Stoilow Prize (Romanian: Premiul Simion Stoilow) is the prize offered by the Romanian Academy for achievements in mathematics. It is named in honor of Simion Stoilow. The prize is awarded either for a mathematical work or for a cycle of works. The award consists of 30,000 lei and a diploma. The prize was established in 1963 and is awarded annually. Prizes of the Romanian Academy for a particular year are awarded two years later. Honorees Honorees of the Simion Stoilow Prize have included:[1] • 2020: Victor-Daniel Lie[2] • 2019: Marius Ghergu; Bogdan Teodor Udrea[3] • 2018: Iulian Cîmpean[4] • 2017: Aurel Mihai Fulger[5] • 2016: Arghir Dani Zărnescu[6] • 2015: No award • 2014: Florin Ambro[7] • 2013: Petru Jebelean[8] • 2012: George Marinescu[9] • 2011: Dan Timotin[10] • 2010: Laurențiu Leuștean; Mihai Mihăilescu[11] • 2009: Miodrag Iovanov; Sebastian Burciu[12] • 2008: Nicolae Bonciocat; Călin Ambrozie[13] • 2007: Cezar Joița; Bebe Prunaru; Liviu Ignat[14] • 2006: Radu Pantilie[15] • 2005: Eugen Mihăilescu, for the work "Estimates for the stable dimension for holomorphic maps"; Radu Păltânea, for the cycle of works "Approximation theory using positive linear operators"[16] • 2000: Liliana Pavel, for the book Hipergrupuri ("Hypergroups")[17] • 1999: Vicențiu Rădulescu for the work "Boundary value problems for nonlinear elliptic equations and hemivariational inequalities"[18] • 1995: No award • 1994: No award • 1993: No award • 1992: Florin Rădulescu • 1991: Ovidiu Cârjă • 1990: Ștefan Mirică • 1989: Gelu Popescu • 1988: Cornel Pasnicu • 1987: Călin-Ioan Gheorghiu; Titus Petrila • 1986: Vlad Bally; Paltin Ionescu • 1985: Vasile Brânzănescu; Paul Flondor; Dan Polisevschi; Mihai Putinar • 1984: Toma Albu; Mihnea Colțoiu; Dan Vuza • 1983: Mircea Puta;[19] Ion Chițescu; Eugen Popa • 1982: Mircea Craioveanu; Mircea Puta • 1981: Lucian Bădescu • 1980: Dumitru Gașpar; Costel Peligrad; Mihai Pimsner; Sorin T. Popa • 1979: Dumitru Motreanu; Dorin Popescu; Ilie Valusescu • 1978: Aurel Bejancu; Gheorghe Micula • 1977: Alexandru Brezuleanu; Nicolae Radu; Ion Văduva • 1976: Zoia Ceaușescu; Ion Cuculescu; Nicolae Popa • 1975: Șerban Strătilă; Elena Stroescu; László Zsidó • 1974: Ioana Ciorănescu; Dan Pascali; Constantin Vârsan • 1973: Vasile Istrătescu; Ioan Marusciac;[20] Constantin Năstăsescu; Veniamin Urseanu • 1972: Bernard Bereanu; Nicolae Pavel; Gustav Peeters; Elena Moldovan Popoviciu • 1971: Nicolae Popescu • 1970: Viorel Barbu; Dorin Ieșan • 1969: Ion Suciu • 1968: Petru Caraman • 1967: Constantin Apostol • 1966: Dan Burghelea; Cabiria Andreian Cazacu; Aristide Deleanu • 1965: Nicu Boboc; Alexandru Lascu • 1964: Nicolae Dinculeanu; Ivan Singer • 1963: Lazăr Dragoș; Martin Jurchescu[1] See also • List of mathematics awards References 1. Jaguszewski, Janice M. (1997). Recognizing excellence in the mathematical sciences: an international compilation of awards, prizes, and recipients. Greenwich, Conn.: JAI Press. ISBN 0762302356. OCLC 37513025. 2. "Premiile Academiei Române pentru anul 2019" (PDF). December 7, 2022. 3. "Premiile Academiei Române pentru anul 2019" (PDF). December 8, 2021. 4. "Premiile Academiei Române pentru anul 2018" (PDF). December 3, 2020. 5. "Premiile Academiei Române pentru anul 2017" (PDF). December 12, 2019. 6. "Premiile Academiei Române pentru anul 2016" (PDF). December 13, 2018. 7. "Premiile Academiei Române pentru anul 2014" (PDF). December 16, 2016. 8. "Premiile Academiei Române pentru anul 2013" (PDF). December 18, 2015. 9. 
"Premiile Academiei Române pentru anul 2012" (PDF). December 19, 2014. 10. "Premiile Academiei Române pentru anul 2011" (PDF). December 2013. 11. "Premiile Academiei Române pentru anul 2010" (PDF). December 2012. 12. "Premiile Academiei Române pentru anul 2009" (PDF). December 2011. 13. "Premiile Academiei Române pentru anul 2008" (PDF). December 2010. 14. "Premiile Academiei Române pentru anul 2007" (PDF). December 2009. 15. "Premiile Academiei Române pentru anul 2006" (PDF). December 2008. 16. Premiile pe anul 2005 ale Academiei Romane, decernate in 2007 Archived 2008-06-12 at the Wayback Machine (in Romanian) 17. Prizes of the Romanian Academy, 2000 Archived 2003-12-29 at the Wayback Machine 18. "Premiile Academiei Române pentru anul 1999" (PDF). December 18, 2001. 19. "Homage to the memory of professor Mircea Puta" 20. "Ioan Marusciac: Premiul Simion Stoilow al Academiei Române". ictp.acad.ro (in Romanian). Retrieved February 2, 2023.
Simmons–Su protocols

The Simmons–Su protocols are several protocols for envy-free division. They are based on Sperner's lemma. The merits of these protocols are that they put few restrictions on the preferences of the partners, and ask the partners only simple queries such as "which piece do you prefer?". Protocols were developed for solving several related problems:

Cake cutting

In the envy-free cake-cutting problem, a "cake" (a heterogeneous divisible resource) has to be divided among n partners with different preferences over parts of the cake. The cake has to be divided into n pieces such that: (a) each partner receives a single connected piece, and (b) each partner believes that his piece is (weakly) better than all other pieces. A protocol for solving this problem was developed by Forest Simmons in 1980, in a correspondence with Michael Starbird. It was first publicized by Francis Su in 1999.[1]

Given a cut-set (i.e. a certain partition of the cake into n pieces), we say that a partner prefers a given piece if he believes that this piece is weakly better than all other pieces. "Weakly" means that the partner may be indifferent between the piece and one or more other pieces, so he may (in case of ties) "prefer" more than one piece.

The protocol makes the following assumptions on the preferences of the partners:
1. Independence of other partners: The preference depends on the partner and the entire cut-set, but not on choices made by the other partners.
2. Hungry partners: Partners never prefer an empty piece.
3. Topologically closed preference sets: Any piece that is preferred for a convergent sequence of cut-sets is preferred at the limiting cut-set. So for example, if a partner prefers the first piece in all cut-sets where the first cut is done at x > 0.2 and prefers the second piece in all cut-sets where the first cut is at x < 0.2, then at the cut-set where the first cut is at x = 0.2 that partner prefers both pieces equally. The closedness condition rules out the existence of single points of cake with positive desirability.

These assumptions are very mild: in contrast to other protocols for fair cake-cutting, the utility functions are not required to be additive or monotonic.

The protocol considers 1-dimensional cut-sets. For example, the cake may be the 1-dimensional interval [0,1] and each piece is an interval; or, the cake may be a rectangle cut along its longer side so that each piece is a rectangle. Every cut-set can be represented by n numbers xi, i = 1, ..., n, where xi is the length of the ith piece. We assume that the total length of the cake is 1, so x1 + ... + xn = 1. The space of possible partitions is thus an (n − 1)-dimensional simplex with n vertices in Rn. The protocol works on this simplex in the following way:
1. Triangulate the simplex-of-partitions into smaller simplices of any desired size.
2. Assign each vertex of the triangulation to one partner, such that the vertices of every sub-simplex belong to n different partners.
3. For each vertex of the triangulation, ask its owner: "Which piece would you choose if the cake were cut with the cut-set represented by this point?". Label that vertex by the number of the piece that is desired.

The generated labeling satisfies the requirements of Sperner's lemma coloring:
• Each vertex of the original simplex corresponds to a cut-set in which one piece contains the entire cake and all other pieces are empty. By the "hungry partners" assumption, the owner of that vertex must prefer that piece.
Hence the labels of the vertices of the large simplex are all different.
• Each side/face of the original simplex corresponds to the cut-sets in which some pieces are empty, and the non-empty pieces correspond to the vertices of that side. By the "hungry partners" assumption, the owners must prefer only non-empty pieces, so the triangulation vertices on these sides can have only one of the labels that appear in the corresponding vertices.

Hence, by Sperner's lemma there must be at least one sub-simplex in which the labels are all different. In step #2 we assigned each vertex of this sub-simplex to a different partner. Hence we have found n very similar cut-sets in which different partners prefer different pieces of cake. We can now triangulate the sub-simplex into a finer mesh of sub-sub-simplices and repeat the process. We get a sequence of smaller and smaller simplices which converge to a single point. This point corresponds to a single cut-set. By the "topologically closed preference sets" assumption, in this cut-set each partner prefers a different piece. This is an envy-free partition!

The existence of an envy-free partition had been proved before,[2] but Simmons' proof also yields a constructive approximation algorithm. For example, assume that a certain land-estate has to be divided, and the partners agree that a difference of plus or minus 1 centimeter is irrelevant to them. Then the original simplex can be triangulated into simplices with side length less than 1 cm. Then every point in the sub-simplex in which all labels are different corresponds to an (approximate) envy-free partition.

In contrast to other envy-free protocols, which may assign each partner a large number of crumbs, Simmons' protocol gives each partner a single connected piece. Moreover, if the original cake is rectangular then each piece is a rectangle.

Several years after this algorithm was published, it was proved that envy-free partitions with connected pieces cannot be found by finite protocols.[3] Hence, an approximation algorithm is the best that we can hope for in finite time. Currently, Simmons' algorithm is the only approximation algorithm for envy-free cake-cutting with connected pieces.

Simmons' algorithm is one of the few fair division algorithms which have been implemented and put online.[4] One nice thing about the algorithm is that the queries it asks the partners are very simple: they just have to decide, in each partition, which piece they prefer. This is in contrast to other algorithms, which ask numerical queries such as "cut a piece with a value of 1/3" etc.

Run-time complexity

While an envy-free division with connected pieces can be approximated to any precision using the above protocol, the approximation might take a long time. In particular:[5]
• When the utility functions are accessible only through oracles, the number of queries for achieving an envy of less than ϵ is $\Theta ({\frac {1}{\epsilon ^{n}}})$.
• When the utility functions are given explicitly by polynomial-time algorithms, the envy-free cake-cutting problem has the same complexity as finding a Brouwer fixed-point, i.e. it is PPAD-complete.

Rental Harmony

In this problem, n housemates have decided to rent an n-bedroom house for a rent fixed by the homeowner. Each housemate may have different preferences — one may prefer a large room, another may prefer a room with a view, etc.
The following two problems should be solved simultaneously: (a) assign a room to each partner, and (b) determine the rent that each partner should pay, such that the sum of payments equals the total rent. The assignment should be envy-free in that every partner weakly prefers his parcel of room+rent over the other parcels, i.e. no partner would like to take another room at the rent assigned to that room.

A protocol for solving this problem was developed by Francis Su in 1999.[1] The idea is as follows. Normalize the total rent to 1. Then each pricing scheme is a point in an $(n-1)$-dimensional simplex with $n$ vertices in $\mathbb {R} ^{n}$. Su's protocol operates on a dualized version of this simplex in a similar way to the Simmons–Su protocols for cake-cutting: for every vertex of a triangulation of the dual simplex, which corresponds to a certain price scheme, it asks the owning partner "which room do you prefer in that pricing scheme?". This results in a Sperner coloring of the dual simplex, and thus there exists a small sub-simplex which corresponds to an approximate envy-free assignment of rooms and rents.

Popularized explanations of the Rental Harmony protocol are available,[6][7] as are on-line implementations.[8][9] See Rental harmony for more solutions to this problem.

Chore division

In this problem, there is a chore that has to be divided among n partners, e.g., lawn-mowing in a large area. The Rental Harmony protocol can be used to achieve an approximate envy-free assignment of chores by simply thinking of the rent payments as chores and ignoring the rooms. Divisibility of chores can be achieved by dividing the time spent on them.[1]

Multi-cake cutting

In this problem, two or more cakes have to be divided simultaneously among two or more partners, giving each partner a single piece from each cake. Of course, if the preferences are independent (i.e. the utility from an allocation is the sum of utilities from the allocated piece in each cake), then the problem can be solved by one-cake division methods – simply do an envy-free partition on each cake separately. The question becomes interesting when the partners have linked preferences over the cakes, in which the portion of one cake that a partner prefers is influenced by the portion of another cake allocated to him. For example, if the "cakes" are the times of work-shifts in two consecutive days, a typical employee may prefer to have the same shift every day (e.g. morning-morning or evening-evening) rather than different shifts.

A solution to this problem for the case of 2 partners and 2 or 3 cakes was published in 2009.[10] If the number of cakes is m, and each cake is divided into k pieces, then the space of partitions can be represented by an n-vertex d-dimensional polytope, where d = m(k − 1) and n = km. A generalization of Sperner's lemma to polytopes[11] guarantees that, if this polytope is triangulated and labeled in an appropriate manner, there are at least n − d sub-simplices with a full labeling; each of these simplices corresponds to an (approximate) envy-free allocation in which each partner receives a different combination of pieces. However, the combinations might overlap: one partner might get the "morning" and "evening" shifts while another partner might get "evening" and "evening". Although these are different selections, they are incompatible. Section 4 of [10] proves that an envy-free division to two partners with disjoint pieces might be impossible if m = k = 2, but is always possible if m = 2 and k = 3 (i.e.
at least one cake is divided into three pieces, each partner receives a single piece from each cake, and at least one piece is discarded). Similar results are proved for three cakes.

See also
• Topological combinatorics
• The Fair Division Calculator (Java applet of the Simmons–Su algorithms), at Harvey Mudd College

References
1. Su, F. E. (1999). "Rental Harmony: Sperner's Lemma in Fair Division". The American Mathematical Monthly. 106 (10): 930–942. doi:10.2307/2589747. JSTOR 2589747.
2. Stromquist, Walter (1980). "How to Cut a Cake Fairly". The American Mathematical Monthly. 87 (8): 640–644. doi:10.2307/2320951. JSTOR 2320951.
3. Stromquist, Walter (2008). "Envy-free cake divisions cannot be found by finite protocols" (PDF). The Electronic Journal of Combinatorics. 15. doi:10.37236/735. Retrieved 26 August 2014.
4. An implementation by Francis Su, Elisha Peterson and Patrick Winograd is available at: https://www.math.hmc.edu/~su/fairdivision/
5. Deng, X.; Qi, Q.; Saberi, A. (2012). "Algorithmic Solutions for Envy-Free Cake Cutting". Operations Research. 60 (6): 1461. doi:10.1287/opre.1120.1116. S2CID 4430655.
6. Sun, Albert (28 April 2014). "To Divide the Rent, Start With a Triangle". The New York Times. Retrieved 26 August 2014.
7. Procaccia, Ariel (15 August 2012). "Fair division and the whining philosophers problem". Turing's Invisible Hand. Retrieved 26 August 2014.
8. "Fair Division Calculator – Francis Su".
9. "Divide Your Rent Fairly". The New York Times. 28 April 2014.
10. Cloutier, J.; Nyman, K. L.; Su, F. E. (2010). "Two-player envy-free multi-cake division". Mathematical Social Sciences. 59: 26–37. arXiv:0909.0301. doi:10.1016/j.mathsocsci.2009.09.002. S2CID 15381541.
11. De Loera, J. A.; Peterson, E.; Edward Su, F. (2002). "A Polytopal Generalization of Sperner's Lemma". Journal of Combinatorial Theory, Series A. 100: 1–26. doi:10.1006/jcta.2002.3274.
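To make Simmons' cake-cutting procedure concrete, here is a minimal sketch for n = 3 partners (Python). The additive valuations are made-up stand-ins: the real protocol only needs each owner's answer to the preference query, not the valuations themselves. The sketch triangulates the simplex of partitions, assigns owners so that every sub-triangle has three distinct owners, labels each vertex with its owner's preferred piece, and scans for a fully labeled sub-triangle, which Sperner's lemma guarantees to exist.

```python
def value(agent, lo, hi):
    # illustrative additive valuations with positive densities 1, 2x, 3x^2 on [0, 1];
    # positive densities guarantee the "hungry partners" assumption
    return [hi - lo, hi**2 - lo**2, hi**3 - lo**3][agent]

def preferred(agent, cuts):
    # cuts = (x1, x2, x3): the lengths of the three pieces of [0, 1]
    p, q = cuts[0], cuts[0] + cuts[1]
    vals = [value(agent, 0.0, p), value(agent, p, q), value(agent, q, 1.0)]
    return vals.index(max(vals))            # the label: index of a preferred piece

K = 64                                      # mesh size; envy shrinks as K grows
owner = lambda i, j: (i + 2 * j) % 3        # gives 3 distinct owners per sub-triangle
vertex = lambda i, j: (i / K, j / K, (K - i - j) / K)
label = lambda i, j: preferred(owner(i, j), vertex(i, j))

def fully_labeled_triangle():
    # scan the triangulated simplex; Sperner's lemma guarantees a sub-triangle
    # whose three vertex labels are exactly {0, 1, 2}
    for i in range(K):
        for j in range(K - i):
            tris = [[(i, j), (i + 1, j), (i, j + 1)]]                   # "upward"
            if i + j + 1 < K:
                tris.append([(i + 1, j), (i, j + 1), (i + 1, j + 1)])   # "downward"
            for tri in tris:
                if {label(*p) for p in tri} == {0, 1, 2}:
                    return tri

for (i, j) in fully_labeled_triangle():
    print(f"partner {owner(i, j)} takes piece {label(i, j)} at cuts {vertex(i, j)}")
```

Recursing inside the returned sub-triangle with a finer mesh, as the protocol prescribes, drives the envy below any desired tolerance.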
Simon P. Norton

Simon Phillips Norton (28 February 1952 – 14 February 2019)[1] was a mathematician in Cambridge, England, who worked on finite simple groups.

Born: 28 February 1952. Died: 14 February 2019 (aged 66). Nationality: British. Alma mater: University of Cambridge. Fields: Mathematics. Thesis: F and Other Simple Groups (1976). Doctoral advisor: John Horton Conway.

Education

Simon Norton was born into a Sephardi family of Iraqi descent, the youngest of three brothers.[2] From 1964 he was a King's Scholar at Eton College, where he earned a reputation as an eccentric mathematical genius and was taught by Norman Routledge. He obtained an external first-class degree in Pure Mathematics at the University of London while still at the school, commuting to Royal Holloway College. He also represented the United Kingdom at the International Mathematical Olympiad three times consecutively starting from 1967, winning a gold medal each time and two special prizes, in 1967 and 1969.[3] He then went up to Trinity College, Cambridge, and achieved a first in the final examinations.

Career and life

He stayed at Cambridge, working on finite groups. Norton was one of the authors of the ATLAS of Finite Groups. He constructed the Harada–Norton group and in 1979, together with John Conway, proved there is a connection between the Monster group and the j-function in number theory. They dubbed this "monstrous moonshine", and made some conjectures later proved by Richard Borcherds. Norton also made several early discoveries in Conway's Game of Life,[4] and invented the game Snort. In 1985, Cambridge University did not renew his contract.

Norton is the subject of the biography The Genius in My Basement, written by his Cambridge tenant, Alexander Masters,[5] which describes his eccentric lifestyle and his life-long obsession with buses. He was also an occasional contributor to Word Ways: The Journal of Recreational Linguistics.

Norton was very interested in transport issues and was a member of Subterranea Britannica. He coordinated the local group of the Campaign for Better Transport (United Kingdom), and had done so since the organisation was known as Transport 2000, writing most of the newsletter for the local Cambridge group[6] and tirelessly campaigning for efficient, inclusive and environmentally friendly public transport in the region and across the United Kingdom.

He collapsed and died in north London, aged 66, of a heart condition on 14 February 2019.[1]

Selected publications
• 1995: (with C. J. Cummins) Cummins, C. J.; Norton, S. P. (1995). "Rational Hauptmoduls are replicable". Canadian Journal of Mathematics. 47 (6): 1201–1218. doi:10.4153/cjm-1995-061-1. S2CID 123645483.
• 1996: "Non-monstrous moonshine". In Arasu, K. T.; Dillon, J. F.; Harada, K.; Sehgal, S.; Solomon, R. (eds.). Groups, Difference Sets, and the Monster: Proceedings of a Special Research Quarter at The Ohio State University, Spring 1993. pp. 433–441. ISBN 9783110147919.
• 1996: Norton, S.P. (1996). "Free transposition groups". Communications in Algebra. 24 (2): 425–432. doi:10.1080/00927879608825578.
• 1998: "Anatomy of the Monster: I". In Curtis, Robert (ed.) (11 June 1998). The Atlas of Finite Groups: Ten Years On. London Mathematical Society Lecture Note Series, 249. pp. 198–214. ISBN 9780521575874.
• 2001: Norton, Simon (2001). "Computing in the Monster". Journal of Symbolic Computation. 31 (1–2): 193–201. doi:10.1006/jsco.1999.1008.
• 2002: (with Robert A. Wilson) Norton, Simon P.; Wilson, Robert A. (2002).
"Anatomy of the Monster: II". Proceedings of the London Mathematical Society. 84 (3): 581–598. doi:10.1112/S0024611502013357. S2CID 2279725. References 1. Obituary: Daily Telegraph 2. Tessler, Gloria (28 March 2019). "Obituary: Simon Norton". The Jewish Chronicle. 3. https://www.imo-official.org/participant_r.aspx?id=10021 4. Poundstone, William (1985), The recursive universe: cosmic complexity and the limits of scientific knowledge, Contemporary Books, p. 7, ISBN 978-0-8092-5202-2 5. Masters, Alexander (2012), The Genius in My Basement, London: HarperCollins (published 1 September 2011), ISBN 978-0-00-724338-9, LCCN 2011535364, OCLC 739420610 6. "Cambridgeshire Campaign for Better Transport Homepage". Archive of the Cambridgeshire Campaign for Better Transport. Cambridgeshire Campaign for Better Transport. 2019. Retrieved 6 April 2022. External links • Simon Phillips Norton at the Mathematics Genealogy Project • Simon P. Norton's results at International Mathematical Olympiad • Simon Norton at the Cambridge mathematics department • Turner, Jenny (24 August 2011). "The Genius in My Basement by Alexander Masters – review". The Guardian. Retrieved 31 August 2015. • Feature profile on National Public Radio's Weekend Edition Sunday, 02/26/12 The Genius In My Basement • Cambridgeshire Campaign for Better Transport (Archive) coordinated by Simon Norton, who authored the bulk of the newsletters and reports. Authority control International • FAST • ISNI • VIAF National • Germany • United States • Czech Republic Academics • DBLP • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Simon Sidon

Simon Sidon or Simon Szidon (1892 in Versec, Kingdom of Hungary – 27 April 1941, Budapest, Hungary) was a reclusive Hungarian mathematician who worked on trigonometric series and orthogonal systems and who introduced Sidon sequences and Sidon sets.[1][2]

Death

On 27 April 1941, Sidon died from pneumonia in the hospital after a ladder fell on him and broke his leg.[3]

References
1. Trencsényi-Waldapfel, I.; Erdey-Grúz, T. (1965), Science in Hungary, Corvina Press, OCLC 718479630
2. Horváth, John (2006), A panorama of Hungarian mathematics in the twentieth century, Bolyai Society mathematical studies, Berlin, New York: Springer-Verlag, ISBN 978-3-540-28945-6
3. Csicsery, G. P. (Director) (1993). N Is a Number: A Portrait of Paul Erdős (Motion picture).

External links
• Simon Sidon at the Mathematics Genealogy Project
Simon Spitzer

Simon Spitzer (3 February 1826 – 2 April 1887)[2] was an Austrian mathematician whose work largely focused on the integration of differential equations.[3] He was active as a writer in his field and, in addition to several independent works, published a large number of mathematical treatises in scholarly journals.[4]

Born: 3 February 1826, Vienna, Austrian Empire. Died: 2 April 1887 (aged 61), Vienna, Austria-Hungary. Resting place: Old Jewish Cemetery, Vienna (pictured: Spitzer and his family's grave). Nationality: Austrian. Relatives: Helen Adolf (granddaughter); Leonie Adele Spitzer (granddaughter);[1] Hans Nawiasky (grandson). Fields: differential equations, analytical mechanics, financial mathematics. Institutions: Vienna Polytechnic Institute; Vienna Handelsschule.

Biography

Spitzer was born in Vienna into a Jewish family originating from Nikolsburg, Moravia.[5] He studied mathematics at the University of Vienna, from which he graduated in 1850, and became in 1851 a privatdozent at the Vienna Polytechnic Institute. In 1857 he was appointed professor of algebra at the Vienna Handelsschule, a position he held until 1887, at the same time lecturing at the Polytechnic, where he became assistant professor of analytic mechanics in 1863, and professor in 1870. When the Handelsschule was changed into the Handelsakademie, Spitzer became its first rector (1872–73). From 1871 he was one of the directors of the private Österreichischen Hypotheken-Bank and a trusted advisor to the world of finance and commerce.[1]

Spitzer was known for his irritable nature and became involved in scientific disputes—most notably with Joseph Petzval[6][7]—for which he chose political newspapers, rather than scholarly channels, as the battleground.[8] His granddaughter was the writer Leonie Adele Spitzer.[9]

Bibliography
• Aufsuchung der reellen und imaginären Wurzeln einer Zahlengleichung höheren Grades. Vienna: Wilhelm Braumüller. 1849.
• Allgemeine Auflösung der Zahlen-Gleichungen mit einer oder mehreren Unbekannten. Vienna: Verlag von Carl Gerold. 1851.
• Neue Integrations-Methode für Differenzen-Gleichungen deren Coefficienten ganze algebraische Functionen der unabhängigen Veränderlichen sind. Vienna: K. K. Hof- und Staatsdruckerei. 1858.
• Studien über die Integration linearer Differential-Gleichungen. Vienna: Verlag von Carl Gerold's Sohn. 1860.
• Anleitung zur Berechnung der im Wiener Coursblatte notirten Papiere, nebst einem Anhange über Prämien, Nochgeschäfte und Stellagen. Vienna: Verlag von Carl Gerold's Sohn. 1863.
• Gesammt-Uebersicht über die Production, Consumtion und Circulation der Mineralkohle als Erläuterung zur Kohlenrevier-Karte des Kaiserstaates Oesterreich. Vienna: Prandel & Ewald. 1864.
• "Über Invalidenpensionen". Jahres-Bericht der Wiener Handels-Akademie. Vienna: Druck von J. Löwenthal: 1–57. 1864.
• Tabellen für die Zinses-Zinsen- und Renten-Rechnung: mit Anwendung derselben auf die Berechnung von Anlehen, Construction von Amortisationsplänen etc. Vienna: Verlag von Carl Gerold's Sohn. 1865.
• Anleitung zur Berechnung der Leibrenten und Anwartschaften sowie der Invaliden-Pensionen, Heirathsausstattungen und Krankencassen (2nd ed.). Vienna: Verlag von Carol Gerold's Sohn. 1881.
• Ueber Münz- und Arbitragen-Rechnung (2nd ed.). Vienna: Verlag von Carl Gerold's Sohn. 1872.
• Neue Studien über die Integration linearer Differential-Gleichungen. Vienna: Verlag von Carl Gerold's Sohn. 1874.
• Vorlesungen über lineare Differential-Gleichungen. Vienna: Verlag von Carl Gerold's Sohn. 1878.
• Integration partieller Differentialgleichungen. Vienna: Verlag von Carl Gerold's Sohn. 1879.
• Neue Studien über die Integration linearen Differential-Gleichungen. Vol. 2. Vienna: Verlag von Carl Gerold's Sohn. 1881.
• Untersuchungen im Gebiete linearer Differential-Gleichungen. Vol. 1. Vienna: Verlag von Carl Gerold's Sohn. 1884.

References

This article incorporates text from a publication now in the public domain: Singer, Isidore; Haneman, Frederick T. (1905). "Spitzer, Simon". In Singer, Isidore; et al. (eds.). The Jewish Encyclopedia. Vol. 11. New York: Funk & Wagnalls. p. 525.
1. Pesditschek, M. (2007). "Spitzer, Simon" (PDF). Österreichisches Biographisches Lexikon (in German). Vol. 13. Vienna: Austrian Academy of Sciences. p. 43. doi:10.1553/0X00284CEE.
2. Killy, Walther; Vierhaus, Rudolf, eds. (2005). "Spitzer, Simon". Dictionary of German National Biography. Vol. 9. Munich: K. G. Saur. ISBN 978-3-598-23299-2.
3. Bobynin, V. V. (1900). Спитцер, Симон [Spitzer, Simon]. Brockhaus and Efron Encyclopedic Dictionary (in Russian). Vol. 31. pp. 264–265.
4. "Spitzer, Simon". Biographisches Lexikon des Kaiserthums Oesterreich (in German). Vol. 36. 1878. pp. 196–199.
5. Fraser, Craig (2018). "The Culture of Research Mathematics in 1860s Prussia: Adolph Myer and the Theory of the Second Variation in the Calculus of Variations". In Zack, Maria; Schlimm, Dirk (eds.). Research in History and Philosophy of Mathematics: The CSHPM 2017 Annual Meeting in Toronto, Ontario. Proceedings of the Canadian Society for History and Philosophy of Mathematics. Springer. p. 130. doi:10.1007/978-3-319-90983-7. ISBN 978-3-319-90983-7.
6. Deakin, Michael A. B. (1981). "The Development of the Laplace Transform, 1737–1937: I. Euler to Spitzer, 1737–1880". Archive for History of Exact Sciences. 25 (4): 343–390. doi:10.1007/BF01395660. JSTOR 41133637. S2CID 117913073.
7. Deakin, Michael A. B. (1994). "The Laplace transform". In Grattan-Guiness, Ivor (ed.). Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences. Vol. 1. London: Routledge. p. 554. doi:10.4324/9780203014585. ISBN 978-0-203-01458-5.
8. Cantor, Moritz (1893). "Spitzer, Simon". Allgemeine Deutsche Biographie. Vol. 35. p. 223.
9. Korotin, Ilse, ed. (2016). biografıA. Lexikon österreichischer Frauen (in German). Vol. 3. Vienna: Böhlau. pp. 3124–3125. doi:10.26530/oapen_611232. ISBN 978-3-205-79590-2.
Simon Stevin

Simon Stevin (Dutch: [ˈsimɔn ˈsteːvɪn]; 1548–1620), sometimes called Stevinus, was a Flemish mathematician, scientist and music theorist.[1] He made various contributions in many areas of science and engineering, both theoretical and practical. He also translated various mathematical terms into Dutch, making it one of the few European languages in which the word for mathematics, wiskunde (wis and kunde, i.e., "the knowledge of what is certain"), was not a loanword from Greek but a calque via Latin. He also replaced the word chemie, the Dutch for chemistry, by scheikunde ("the art of separating"), formed by analogy with wiskunde.

Simon Stevin
• Born: 1548, Bruges, Belgium
• Died: 1620 (aged 71–72), The Hague(?)[1]
• Alma mater: Leiden University
• Occupations: mathematician, scientist, music theorist
• Known for: decimal fractions,[a] the Delft tower experiment, the intermediate value theorem, Stevin's law

Biography

Very little is known with certainty about Simon Stevin's life, and what we know is mostly inferred from other recorded facts.[2] The exact birth date and the date and place of his death are uncertain. It is assumed he was born in Bruges, since he enrolled at Leiden University under the name Simon Stevinus Brugensis (meaning "Simon Stevin from Bruges"). His name is usually written as Stevin, but some documents regarding his father use the spelling Stevijn (pronunciation [ˈste:vεɪn]); this was a common spelling shift in 16th-century Dutch.[3]

Simon Stevin's mother, Cathelijne (or Catelyne), was the daughter of a wealthy family from Ypres; her father Hubert was a poorter of Bruges. Cathelijne would later marry Joost Sayon, who was involved in the carpet and silk trade and was a member of the schuttersgilde Sint-Sebastiaan. Through her marriage, Cathelijne became a member of a family of Calvinists, and it is thought that Simon Stevin was likely brought up in the Calvinist faith.[4] It is believed that Stevin grew up in a relatively affluent environment and enjoyed a good education. He was likely educated at a Latin school in his hometown.[5]

Simon Stevin's travels

Stevin left Bruges in 1571, apparently without a particular destination. He was most likely a Calvinist, since a Catholic would probably not have risen to the position of trust he later occupied with Maurice, Prince of Orange. It is assumed that he left Bruges to escape the religious persecution of Protestants by the Spanish rulers. Based on references in his work Wisconstighe Ghedaechtenissen (Mathematical Memoirs), it has been inferred that he must have moved first to Antwerp, where he began his career as a merchant's clerk.[6] Some biographers mention that he travelled to Prussia, Poland, Denmark, Norway, Sweden and other parts of Northern Europe between 1571 and 1577; it is possible that he completed these travels over a longer period of time.

In 1577 Simon Stevin returned to Bruges and was appointed city clerk by the aldermen of Bruges, a function he occupied from 1577 to 1581. He worked in the office of Jan de Brune of the Brugse Vrije, the castellany of Bruges. Why he returned to Bruges in 1577 is not clear. It may have been related to the political events of that period. Bruges was the scene of intense religious conflict: Catholics and Calvinists alternately controlled the government of the city, and while they usually opposed each other, they would occasionally collaborate in order to counteract the dictates of King Philip II of Spain. In 1576 a certain level of official religious tolerance was decreed.
This could explain why Stevin returned to Bruges in 1577. Later the Calvinists seized power in many Flemish cities and incarcerated Catholic clerics and secular governors supportive of the Spanish rulers. Between 1578 and 1584 Bruges was ruled by Calvinists.

Simon Stevin in the Netherlands

In 1581 Stevin again left his native Bruges and moved to Leiden, where he attended the Latin school.[5] On 16 February 1583 he enrolled, under the name Simon Stevinus Brugensis, at Leiden University, which had been founded by William the Silent in 1575. Here he befriended William the Silent's second son and heir, Prince Maurice, the Count of Nassau.[4] Stevin is listed in the university's registers until 1590 and apparently never graduated.

Following William the Silent's assassination and Prince Maurice's assumption of his father's office, Stevin became the principal advisor and tutor of Prince Maurice. Prince Maurice asked his advice on many occasions, and made him a public officer – at first director of the so-called "waterstaet"[7] (the government authority for public works, especially water management) from 1592, and later quartermaster-general of the army of the States-General.[8] Prince Maurice also asked Stevin to found an engineering school within the University of Leiden.

Stevin moved to The Hague, where he bought a house in 1612. He married in 1610 or 1614 and had four children. It is known that he left a widow with two children at his death in Leiden or The Hague in 1620.[4]

[Images: statue of Simon Stevin by Eugène Simonis on the Simon Stevinplein in Bruges; details of the statue showing the inclined plane diagram and experiments on hydrostatic equilibrium]

Discoveries and inventions

Stevin is responsible for many discoveries and inventions. He was a pioneer in the development and practical application of engineering-related science, in mathematics and physics as well as in applied fields such as hydraulic engineering and surveying. Until the middle of the 20th century he was thought to have invented decimal fractions, but researchers then discovered that decimal fractions had previously been introduced by the medieval Islamic scholar al-Uqlidisi in a book written in 952; moreover, a systematic development of decimal fractions was given well before Stevin in the book Miftah al-Hisab, written in 1427 by Al-Kashi.

His contemporaries were most struck by his invention of a so-called land yacht, a carriage with sails, of which a model was preserved in Scheveningen until 1802; the carriage itself had been lost long before. Around the year 1600 Stevin, with Prince Maurice of Orange and twenty-six others, used the carriage on the beach between Scheveningen and Petten. Propelled solely by the force of the wind, it reached speeds exceeding those of horses.[7]

Management of waterways

Stevin's work in the waterstaet involved improvements to the sluices and spillways used to control flooding, exercises in hydraulic engineering. Windmills were already in use to pump water out, but in Van de Molens (On mills) he suggested improvements, including the idea that the wheels should move slowly, with a better system for meshing the gear teeth. These improvements tripled the efficiency of the windmills used in pumping water out of the polders.[9] He received a patent on his innovation in 1586.[8]

Philosophy of science

Stevin's aim was to bring about a second age of wisdom, in which mankind would have recovered all of its earlier knowledge.
He deduced that the language spoken in this age would have to be Dutch, because, as he showed empirically, more concepts could be indicated in that language with monosyllabic words than in any of the (European) languages he had compared it with.[7] This was one of the reasons why he wrote all of his works in Dutch and left their translation to others. The other reason was that he wanted his works to be practically useful to people who had not mastered the common scientific language of the time, Latin. Thanks to Simon Stevin the Dutch language acquired a proper scientific vocabulary, with terms such as "wiskunde" ("kunst van het gewisse of zekere", the art of what is known or what is certain) for mathematics, "natuurkunde" (the "art of nature") for physics, "scheikunde" (the "art of separation") for chemistry, "sterrenkunde" (the "art of stars") for astronomy, and "meetkunde" (the "art of measuring") for geometry.

Geometry, physics and trigonometry

Stevin was the first to show how to model regular and semiregular polyhedra by delineating their frames in a plane. He also distinguished stable from unstable equilibria.[7] He contributed to trigonometry with his book De Driehouckhandel.

In The First Book of the Elements of the Art of Weighing, The second part: Of the propositions [The Properties of Oblique Weights], page 41, Theorem XI, Proposition XIX,[10] he derived the condition for the balance of forces on inclined planes using a diagram with a "wreath" containing evenly spaced round masses resting on the planes of a triangular prism (see the illustration on the side). He concluded that the weights required were proportional to the lengths of the sides on which they rested, assuming the third side was horizontal, and that the effect of a weight was reduced in a similar manner. It is implicit that the reduction factor is the height of the triangle divided by the side (the sine of the angle of the side with respect to the horizontal). The proof diagram of this concept is known as the "Epitaph of Stevinus".
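In modern notation (a reconstruction for orientation; the symbols $W$, $L$, $h$ and $\theta $ are ours, not Stevin's), the conclusion reads

${\frac {W_{1}}{W_{2}}}={\frac {L_{1}}{L_{2}}},\qquad F_{\text{along plane}}=W\,{\frac {h}{L}}=W\sin \theta ,$

where $W_{1},W_{2}$ are the weights resting on the two inclined sides of lengths $L_{1},L_{2}$, $h$ is the height of the triangular prism, and $\theta $ is the inclination of the side carrying a weight $W$.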
As noted by E. J. Dijksterhuis, Stevin's proof of the equilibrium on an inclined plane can be faulted for using perpetual motion to imply a reductio ad absurdum; Dijksterhuis says Stevin "intuitively made use of the principle of conservation of energy ... long before it was formulated explicitly".[2]: 54 

He demonstrated the resolution of forces before Pierre Varignon, which had not been remarked previously, even though it is a simple consequence of the law of their composition.[7]

Stevin discovered the hydrostatic paradox, which states that the pressure a liquid exerts on the base of a vessel is independent of the shape of the vessel and the area of the base, depending solely on the height of the liquid.[7] He also gave the measure for the pressure on any given portion of the side of a vessel.[7] He was the first to explain the tides using the attraction of the moon.[7] In 1586, he demonstrated that two objects of different weight fall with the same acceleration.[11][12]

Music theory

The first mention in the West of equal temperament related to the twelfth root of two appeared in Simon Stevin's unfinished manuscript Van de Spiegheling der singconst (c. 1605), published posthumously three hundred years later, in 1884;[13] in such a temperament each semitone shortens the string length by the constant factor $2^{-1/12}$. However, due to insufficient accuracy in his calculation, many of the numbers (for string length) he obtained were off by one or two units from the correct values.[14] He appears to have been inspired by the writings of the Italian lutenist and music theorist Vincenzo Galilei (father of Galileo Galilei), a onetime pupil of Gioseffo Zarlino.

Bookkeeping

Bookkeeping by double entry may have been known to Stevin, who was a clerk in Antwerp in his younger years, either practically or through the works of Italian authors such as Luca Pacioli and Gerolamo Cardano. However, Stevin was the first to recommend the use of impersonal accounts in the national household. He brought the practice into use for Prince Maurice, and recommended it to the French statesman Sully.[15][7]

Decimal fractions

Stevin wrote a 35-page booklet called De Thiende ("the art of tenths"), first published in Dutch in 1585 and translated into French as La Disme. The full title of the English translation was Decimal arithmetic: Teaching how to perform all computations whatsoever by whole numbers without fractions, by the four principles of common arithmetic: namely, addition, subtraction, multiplication, and division. The concepts referred to in the booklet included unit fractions and Egyptian fractions.

Muslim mathematicians were the first to utilize decimals instead of fractions on a large scale; Al-Kashi's book Key to Arithmetic, written at the beginning of the 15th century, was the stimulus for the systematic application of decimals to whole numbers and fractions thereof.[16][17] But nobody established their daily use before Stevin. He felt that this innovation was so significant that he declared the universal introduction of decimal coinage, measures and weights to be merely a question of time.[18][7]

His notation is rather unwieldy. The point separating the integers from the decimal fractions seems to be the invention of Bartholomaeus Pitiscus, in whose trigonometrical tables (1612) it occurs, and it was accepted by John Napier in his logarithmic papers (1614 and 1619).[7] Stevin printed little circles around the exponents of the different powers of one-tenth. That Stevin intended these encircled numerals to denote mere exponents is clear from the fact that he employed the very same symbol for powers of algebraic quantities. He did not avoid fractional exponents; only negative exponents do not appear in his work.[7]
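The circled-exponent notation can be illustrated with a short sketch (an illustration only: the helper name is ours, and parenthesized indices stand in for Stevin's printed circles):

    # Render a decimal fraction in Stevin's style: each group of digits is
    # followed by the power of one-tenth it multiplies, so 32.57 becomes
    # "32(0) 5(1) 7(2)", i.e. 32*10^0 + 5*10^-1 + 7*10^-2.
    def stevin_notation(number: str) -> str:
        integer_part, _, fraction_part = number.partition(".")
        pieces = [f"{integer_part}(0)"]
        for power, digit in enumerate(fraction_part, start=1):
            pieces.append(f"{digit}({power})")
        return " ".join(pieces)

    print(stevin_notation("32.57"))  # 32(0) 5(1) 7(2)
    print(stevin_notation("8.937"))  # 8(0) 9(1) 3(2) 7(3)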
Stevin wrote on other scientific subjects – for instance optics, geography and astronomy – and a number of his writings were translated into Latin by W. Snellius (Willebrord Snell). There are two complete editions in French of his works, both printed in Leiden, one in 1608, the other in 1634.[7]

Mathematics

Stevin wrote his Arithmetic in 1594. The work brought to the Western world for the first time a general solution of the quadratic equation, originally documented nearly a millennium earlier by Brahmagupta in India. According to van der Waerden, Stevin eliminated "the classical restriction of 'numbers' to integers (Euclid) or to rational fractions (Diophantos) ... the real numbers formed a continuum. His general notion of a real number was accepted, tacitly or explicitly, by all later scientists".[19] A recent study attributes a greater role to Stevin in developing the real numbers than has been acknowledged by Weierstrass's followers.[20] Stevin proved the intermediate value theorem for polynomials, anticipating Cauchy's proof thereof; he used a divide-and-conquer procedure, subdividing the interval into ten equal parts.[21] Stevin's decimals were the inspiration for Isaac Newton's work on infinite series.[22]
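Stevin's ten-fold subdivision amounts to producing one decimal digit of the root per pass. A minimal sketch (modern floating-point arithmetic rather than Stevin's hand calculations, and the function name is ours):

    # Locate a root of p on [a, b], assuming p changes sign there, by
    # repeatedly splitting the interval into ten equal parts and keeping
    # the part on which the sign change occurs: one decimal digit per pass.
    def stevin_root(p, a, b, passes=12):
        for _ in range(passes):
            width = (b - a) / 10.0
            for k in range(10):
                left, right = a + k * width, a + (k + 1) * width
                if p(left) == 0:
                    return left
                if p(left) * p(right) < 0:  # sign change: a root lies here
                    a, b = left, right
                    break
        return (a + b) / 2.0

    print(stevin_root(lambda x: x**3 - 2, 1.0, 2.0))  # ~1.2599, the cube root of 2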
Neologisms

Stevin thought the Dutch language to be excellent for scientific writing, and he translated many mathematical terms into Dutch. As a result, Dutch is one of the few Western European languages in which many mathematical terms do not stem from Greek or Latin, including the very name wiskunde (mathematics). His eye for the importance of having the scientific language be the same as the language of the craftsmen shows in the dedication of his book De Thiende ('The Disme' or 'The Tenth'): 'Simon Stevin wishes the stargazers, surveyors, carpet measurers, body measurers in general, coin measurers and tradespeople good luck.' Further on in the same pamphlet, he writes: "[this text] teaches us all calculations that are needed by the people without using fractions. One can reduce all operations to adding, subtracting, multiplying and dividing with integers."

Some of the words he invented evolved: 'aftrekken' (subtract) and 'delen' (divide) stayed the same, but over time 'menigvuldigen' became 'vermenigvuldigen' (multiply; the added 'ver' emphasizes that it is an action), and 'vergaderen' (gathering) became 'optellen' (add, lit. count up). Another example is the Dutch word for diameter, 'middellijn', lit. 'line through the middle'. The word 'zomenigmaal' (quotient, lit. 'that many times') has become the perhaps less poetic 'quotiënt' in modern-day Dutch. Other terms did not make it into modern mathematical Dutch, like 'teerling' (die, although still used in the meaning of die) instead of cube. His books were bestsellers.

Trivia

• The study association of mechanical engineering at the Technische Universiteit Eindhoven, W.S.V. Simon Stevin,[23] is named after Simon Stevin. In Stevin's memory, the association calls its bar "De Weeghconst" and owns a self-built fleet of land yachts.
• Stevin, cited as Stevinus, is one of the favorite authors – if not the favorite author – of Uncle Toby Shandy in Laurence Sterne's The Life and Opinions of Tristram Shandy, Gentleman.
• Quote: "A man in anger is no clever dissembler."[24]
• In Bruges there is a Simon Stevin Square, which holds a statue of Stevin made by Eugène Simonis; the statue incorporates Stevin's inclined plane diagram.
• Operating from the port of Ostend is a survey vessel, RV Simon Stevin, named after him.[25]

Publications

Amongst others, he published:
• Tafelen van Interest (Tables of interest) in 1582, with present value problems of simple and compound interest and interest tables that had previously been unpublished by bankers (the underlying formulas are sketched after this list);[4]
• Problemata geometrica in 1583;
• De Thiende (La Disme, The tenth) in 1585, in which decimals were introduced in Europe;
• La pratique d'arithmétique in 1585;
• L'arithmétique in 1585, in which he presented a uniform treatment for solving algebraic equations;
• Dialectike ofte bewysconst (Dialectics, or Art of Demonstration) in 1585 at Leyden by Christoffel Plantijn, published again in 1621 at Rotterdam by Jan van Waesberge de Jonge;
• De Beghinselen Der Weeghconst in 1586, accompanied by De Weeghdaet;
• De Beghinselen des Waterwichts (Principles on the weight of water) in 1586, on the subject of hydrostatics;
• Vita Politica. Named Burgherlick leven (Civil life) in 1590;
• De Stercktenbouwing (The construction of fortifications), published in 1594;
• De Havenvinding (Position finding), published in 1599;
• De Hemelloop in 1608, in which he voiced support for the Copernican theory;
• Wiskonstighe Ghedachtenissen (Mathematical Memoirs, Latin: Hypomnemata Mathematica) from 1605 to 1608, which included Simon Stevin's earlier works like De Driehouckhandel (Trigonometry), De Meetdaet (Practice of measuring) and De Deursichtighe (Perspective), which he edited and published;[26]
• Castrametatio, dat is legermeting and Nieuwe Maniere van Stercktebou door Spilsluysen (New ways of building of sluices), published in 1617;
• De Spiegheling der Singconst (Theory of the art of singing);
• Les œuvres mathématiques..., Leiden, 1634.[27]
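The interest tables of the first item rest on the standard compound-interest formulas; the following sketch (function names are ours, and this reproduces the modern formulas, not Stevin's tabulation) computes the present value of a single payment and of an annuity:

    # Present value of a single payment C due after n periods at rate r,
    # and of an annuity paying C at the end of each of n periods.
    def present_value(c: float, r: float, n: int) -> float:
        return c / (1 + r) ** n

    def annuity_present_value(c: float, r: float, n: int) -> float:
        return c * (1 - (1 + r) ** (-n)) / r

    print(round(present_value(100.0, 0.05, 10), 2))          # 61.39
    print(round(annuity_present_value(100.0, 0.05, 10), 2))  # 772.17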
"Van de spiegheling der singconst". Diapason.xentonic.org. 30 June 2009. Archived from the original on 17 July 2011. Retrieved 29 December 2012. 14. Christensen, Thomas S. (2006). The Cambridge History of Western Music Theory, p.205, Cambridge University Press. ISBN 9781316025482. 15. Volmer, Frans. "Stevin, Simon (1548–1620)." In History of Accounting: An International Encyclopedia, edited by Michael Chatfield and Richard Vangermeesch. New York: Garland Publishing, 1996, pp. 565–566. 16. O'Connor, John J.; Robertson, Edmund F. (July 2009), "Al-Kashi", MacTutor History of Mathematics Archive, University of St Andrews 17. Flegg, Graham (2002). Numbers: Their History and Meaning. Dover Publications. pp. 75–76. ISBN 9780486421650. 18. Tabak, John (2004). Numbers: Computers, philosophers, and the search for meaning. Facts on File. pp. 41–42. ISBN 0-8160-4955-6. 19. van der Waerden, B. L. (1985). A History of Algebra. From al-Khwarizmi to Emmy Noether. Berlin: Springer-Verlag. p. 69. ISBN 3-540-13610-X. 20. Karin Usadi Katz and Mikhail G. Katz (2011) A Burgessian Critique of Nominalistic Tendencies in Contemporary Mathematics and its Historiography. Foundations of Science. doi:10.1007/s10699-011-9223-1 21. Karin Usadi Katz and Mikhail G. Katz (2011) Stevin Numbers and Reality. Foundations of Science. doi:10.1007/s10699-011-9228-9 Online First. 22. Błaszczyk, Piotr; Katz, Mikhail; Sherry, David (2012), "Ten misconceptions from the history of analysis and their debunking", Foundations of Science, 18: 43–74, arXiv:1202.4153, doi:10.1007/s10699-012-9285-8, S2CID 119134151 23. simonstevin.tue.nl 24. Crone et al., eds. 1955–1966, [http://www.library.tudelft.nl/cgi-bin/digitresor/display.cgi?bookname=Mechanics%20I&page=11 Vol. I, p.11] 25. "RV Simon Stevin. Platform for marine research". Flanders Marine Institute. Retrieved 11 August 2022. 26. The topic contained in http://www-history.mcs.st-and.ac.uk/Biographies/Stevin.html, the relevant portion could be searched with string, "Wiskonstighe Ghedachtenissen". The summary of it may be found at the link 27. Stevin, Simon, Les œuvres mathématiques... Further reading • Virtually all of Stevin's writings have been published in five volumes with introduction and analysis in: Crone, Ernst; Dijksterhuis, E. J.; Forbes, R. J.; et al., eds. (1955–1966). The Principal Works of Simon Stevin. Lisse: Swets & Zeitlinger. The Principal Works are available online at The Digital Library of the Royal Netherlands Academy of Arts and Sciences. Does not include Dialectike ofte Bewysconst. • Another good source about Stevin is the French-language bundle: Bibliothèque royale de Belgique, ed. (2004). Simon Stevin (1548–1620): L'émergence de la nouvelle science. Turnhout: Brepols.. • A recent work on Simon Stevin in Dutch is: Devreese, J. T. en Vanden Berghe, G. (2003). Wonder en is gheen wonder. De geniale wereld van Simon Stevin 1548–1620. Leuven: Davidsfonds.{{cite book}}: CS1 maint: multiple names: authors list (link). • A recent work on Simon Stevin in English is: Devreese, J. T. en Vanden Berghe, G. (2007). Magic is no magic. The wonderful World of Simon Stevin 1548–1620. Southampton: WITpress.{{cite book}}: CS1 maint: multiple names: authors list (link) • van den Heuvel, C. (2005). De Huysbou. A reconstruction of an unfinished treatise on architecture, and civil engineering by Simon Stevin. Amsterdam: KNAW Edita. 545 pp – The work is available on line – see external links • van Bunge, Wiep (2001). 
From Stevin to Spinoza: An Essay on Philosophy in the Seventeenth-Century Dutch Republic. Leiden: Brill.
• Herbermann, Charles, ed. (1913). "Simon Stevin". Catholic Encyclopedia. New York: Robert Appleton Company.
• Wonder, not miracle (motto of Simon Stevin): English page about Simon Stevin maintained by Ad Davidse and Cathie Schrier, with links to some of his work.
• 3 Quarks Daily: a short essay on Simon Stevin by S. Abbas Raza at 3 Quarks Daily.
• Simonstevin.be: an Internet bibliography regarding Simon Stevin.
• Loci: Convergence treats Stevin's use of the rule of false position.
• MathPages
• KNAW.nl: link to the unpublished treatise of Simon Stevin on architecture, town planning and civil engineering – C. van den Heuvel, De Huysbou.
Simon Stringer

Simon Stringer is a departmental lecturer,[1] Director of the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, and Editor-in-Chief of Network: Computation in Neural Systems,[2] published by Taylor & Francis.

Simon Stringer
• Alma mater: BSc, University of Kent; PhD, University of Reading
• Fields: theoretical neuroscience, computational neuroscience, artificial intelligence
• Institutions: University of Oxford
• Doctoral advisor: Nancy K. Nichols
• Website: www.oftnai.org

Research

Stringer and his research group develop biological computer simulations[3] of the neuronal mechanisms underpinning various areas of brain function, including visual object recognition, spatial processing and navigation, motor function, language and consciousness. In particular, the studies published in Psychological Review[4] and in Interface Focus (2018),[5] the Royal Society's cross-disciplinary journal, propose a novel approach to solving the binding problem. Spiking neural network simulations[6] of the primate ventral visual system have shown the gradual emergence of a subpopulation of neurons, called polychronous neuronal groups (PNGs), that exhibit regularly repeating spatiotemporal patterns of spikes. The phenomenon underlying these characteristic patterns of neural activity is known as polychronization.[7] The main point is that within these PNGs exist neurons called binding neurons, which learn to represent the hierarchical binding relationships between lower- and higher-level visual features in the hierarchy of visual primitives, at every spatial scale and across the entire visual field. This observation is consistent with the hierarchical nature of primate vision proposed by the two neuroscientists John Duncan and Glyn W. Humphreys almost thirty years ago.[8]

Furthermore, this proposed mechanism for solving the binding problem suggests that information about visual features at every spatial scale, including the binding relations between these features, would be projected upwards to the higher layers of the network, where spatial information would be available for readout by later brain systems to guide behavior. This mechanism has been called the holographic principle. These feature-binding representations are at the core of the capacity of the visual brain to perceive and make sense of its visuospatial world, and of consciousness itself. This finding represents an advance towards the future development of artificial general intelligence and machine consciousness.[9] According to Stringer: "Today's machines are unable to perceive and comprehend their working environment in the same rich semantic way as the human brain. By incorporating these biological details into our models [...] will allow computers to begin to make sense of their visuospatial world in the same way as the [human] brain."[10][11]

References

1. "Personal Webpage".
2. "Network: Computation in Neural Systems – New Editor-in-Chief Announcement". Retrieved 26 January 2018.
3. "University of Oxford developing Spiking Neural Networks with Novatech". Novatech. August 2018.
4. Eguchi, A.; Isbister, J.; Ahmad, N.; Stringer, S. (2018). "The emergence of polychronization and feature binding in a spiking neural network model of the primate ventral visual system". Psychological Review. 125 (4): 545–571. doi:10.1037/rev0000103. PMID 29863378. S2CID 44165646.
5. Isbister, J.; Eguchi, A.; Ahmad, N.; Galeazzi, J.M.; Buckley, M.J.; Stringer, S. (2018).
"A new approach to solving the feature-binding problem in primate vision". Interface Focus. The Royal Society. 8 (4): 20180021. doi:10.1098/rsfs.2018.0021. PMC 6015810. PMID 29951198. 6. "Feature Binding within a Spiking Neural Network Model". University of Bristol. July 2018. 7. Izhikevich, EM (2006). "Polychronization: computation with spikes". Neural Computation. 18 (2): 245–282. doi:10.1162/089976606775093882. PMID 16378515. S2CID 14253998. 8. Duncan J.; Humphreys GW. (1989). "Visual Search and Stimulus Similarity" (PDF). Psychological Review. 96 (3): 433–58. doi:10.1037/0033-295x.96.3.433. PMID 2756067. 9. "Developments in machine learning". SC Magazine UK. January 2018. 10. The Future of Science Symposium. University of Oxford. 2017. 11. The weird events that make machines hallucinate. BBC Future. 2019. External links • Oxford Foundation for Theoretical Neuroscience and Artificial Intelligence, University of Oxford
Simon Donaldson

Sir Simon Kirwan Donaldson FRS (born 20 August 1957) is an English mathematician known for his work on the topology of smooth (differentiable) four-dimensional manifolds, Donaldson–Thomas theory, and his contributions to Kähler geometry. He is currently a permanent member of the Simons Center for Geometry and Physics at Stony Brook University in New York,[1] and a Professor in Pure Mathematics at Imperial College London.

Sir Simon Donaldson FRS
• Born: Simon Kirwan Donaldson, 20 August 1957, Cambridge, England
• Nationality: British
• Alma mater: Pembroke College, Cambridge (BA); Worcester College, Oxford (DPhil)
• Known for: topology of smooth (differentiable) four-dimensional manifolds, Donaldson theory, the Donaldson theorem, Donaldson–Thomas theory, the Donaldson–Uhlenbeck–Yau theorem, K-stability, K-stability of Fano varieties, the Yau–Tian–Donaldson conjecture
• Awards: Junior Whitehead Prize (1985), Fields Medal (1986), Royal Medal (1992), Crafoord Prize (1994), Pólya Prize (1999), King Faisal International Prize (2006), Nemmers Prize in Mathematics (2008), Shaw Prize in Mathematics (2009), Breakthrough Prize in Mathematics (2014), Oswald Veblen Prize (2019), Wolf Prize in Mathematics (2020)
• Fields: topology
• Institutions: Imperial College London; Stony Brook University; Institute for Advanced Study; Stanford University; All Souls College, Oxford
• Thesis: The Yang–Mills Equations on Kähler Manifolds (1983)
• Doctoral advisors: Michael Atiyah, Nigel Hitchin
• Doctoral students: Oscar Garcia Prada, Dominic Joyce, Dieter Kotschick, Graham Nelson, Paul Seidel, Ivan Smith, Gábor Székelyhidi, Richard Thomas, Michael Thaddeus

Biography

Donaldson's father was an electrical engineer in the physiology department at the University of Cambridge, and his mother earned a science degree there.[2] Donaldson gained a BA degree in mathematics from Pembroke College, Cambridge, in 1979, and in 1980 began postgraduate work at Worcester College, Oxford, at first under Nigel Hitchin and later under Michael Atiyah's supervision. While still a postgraduate student, Donaldson proved in 1982 a result that would establish his fame. He published the result in a paper, "Self-dual connections and the topology of smooth 4-manifolds", which appeared in 1983. In the words of Atiyah, the paper "stunned the mathematical world".[3]

Whereas Michael Freedman classified topological four-manifolds, Donaldson's work focused on four-manifolds admitting a differentiable structure, using instantons, a particular solution to the equations of Yang–Mills gauge theory which has its origin in quantum field theory. One of Donaldson's first results gave severe restrictions on the intersection form of a smooth four-manifold. As a consequence, a large class of the topological four-manifolds do not admit any smooth structure at all. Donaldson also derived polynomial invariants from gauge theory. These were new topological invariants sensitive to the underlying smooth structure of the four-manifold. They made it possible to deduce the existence of "exotic" smooth structures: certain topological four-manifolds could carry an infinite family of different smooth structures.

After gaining his DPhil degree from Oxford University in 1983, Donaldson was appointed a Junior Research Fellow at All Souls College, Oxford. He spent the academic year 1983–84 at the Institute for Advanced Study in Princeton, and returned to Oxford as Wallis Professor of Mathematics in 1985.
After spending one year visiting Stanford University,[4] he moved to Imperial College London in 1998 as Professor of Pure Mathematics.[5] In 2014, he joined the Simons Center for Geometry and Physics at Stony Brook University in New York, United States.[1] Awards Donaldson was an invited speaker of the International Congress of Mathematicians (ICM) in 1983,[6] and a plenary speaker at the ICM in 1986,[7] 1998,[8] and 2018.[9] In 1985, Donaldson received the Junior Whitehead Prize from the London Mathematical Society. In 1994, he was awarded the Crafoord Prize in Mathematics. In February 2006, Donaldson was awarded the King Faisal International Prize for science for his work in pure mathematical theories linked to physics, which have helped in forming an understanding of the laws of matter at a subnuclear level. In April 2008, he was awarded the Nemmers Prize in Mathematics, a mathematics prize awarded by Northwestern University. In 2009, he was awarded the Shaw Prize in Mathematics (jointly with Clifford Taubes) for their contributions to geometry in 3 and 4 dimensions.[10] In 2014, he was awarded the Breakthrough Prize in Mathematics "for the new revolutionary invariants of 4-dimensional manifolds and for the study of the relation between stability in algebraic geometry and in global differential geometry, both for bundles and for Fano varieties."[11] In January 2019, he was awarded the Oswald Veblen Prize in Geometry (jointly with Xiuxiong Chen and Song Sun).[12] In 2020 he received the Wolf Prize in Mathematics (jointly with Yakov Eliashberg).[13] In 1986, he was elected a Fellow of the Royal Society and received a Fields Medal at the International Congress of Mathematicians (ICM) in Berkeley. In 2010, Donaldson was elected a foreign member of the Royal Swedish Academy of Sciences.[14] He was knighted in the 2012 New Year Honours for services to mathematics.[15] In 2012, he became a fellow of the American Mathematical Society.[16] In March 2014, he was awarded the degree "Docteur Honoris Causa" by Université Joseph Fourier, Grenoble. In January 2017, he was awarded the degree "Doctor Honoris Causa" by the Universidad Complutense de Madrid, Spain. Research Further information: Donaldson theory Donaldson's work is on the application of mathematical analysis (especially the analysis of elliptic partial differential equations) to problems in geometry. The problems mainly concern gauge theory, 4-manifolds, complex differential geometry and symplectic geometry. The following theorems have been mentioned: • The diagonalizability theorem (Donaldson 1983a, 1983b, 1987a): If the intersection form of a smooth, closed, simply connected 4-manifold is positive- or negative-definite then it is diagonalizable over the integers. This result is sometimes called Donaldson's theorem. • A smooth h-cobordism between simply connected 4-manifolds need not be trivial (Donaldson 1987b). This contrasts with the situation in higher dimensions. • A stable holomorphic vector bundle over a non-singular projective algebraic variety admits a Hermitian–Einstein metric (Donaldson 1987c), proven using an inductive proof and the theory of determinant bundles and Quillen metrics.[17] • A non-singular, projective algebraic surface can be diffeomorphic to the connected sum of two oriented 4-manifolds only if one of them has negative-definite intersection form (Donaldson 1990). This was an early application of the Donaldson invariant (or instanton invariants). 
• Any compact symplectic manifold admits a symplectic Lefschetz pencil (Donaldson 1999). Donaldson's recent work centers on a problem in complex differential geometry concerning a conjectural relationship between algebro-geometric "stability" conditions for smooth projective varieties and the existence of "extremal" Kähler metrics, typically those with constant scalar curvature (see for example cscK metric). Donaldson obtained results in the toric case of the problem (see for example Donaldson (2001)). He then solved the Kähler–Einstein case of the problem in 2012, in collaboration with Chen and Sun. This latest spectacular achievement involved a number of difficult and technical papers. The first of these was the paper of Donaldson & Sun (2014) on Gromov–Hausdorff limits. The summary of the existence proof for Kähler–Einstein metrics appears in Chen, Donaldson & Sun (2014). Full details of the proofs appear in Chen, Donaldson, and Sun (2015a, 2015b, 2015c). Conjecture on Fano manifolds and Veblen Prize See also: K-stability and K-stability of Fano varieties In 2019, Donaldson was awarded the Oswald Veblen Prize in Geometry, together with Xiuxiong Chen and Song Sun, for proving a long-standing conjecture on Fano manifolds, which states "that a Fano manifold admits a Kähler–Einstein metric if and only if it is K-stable". It had been one of the most actively investigated topics in geometry since its proposal in the 1980s by Shing-Tung Yau after he proved the Calabi conjecture. It was later generalized by Gang Tian and Donaldson. The solution by Chen, Donaldson and Sun was published in the Journal of the American Mathematical Society in 2015 as a three-article series, "Kähler–Einstein metrics on Fano manifolds, I, II and III".[12] Selected publications • Donaldson, Simon K. (1983a). "An application of gauge theory to four-dimensional topology". J. Differential Geom. 18 (2): 279–315. doi:10.4310/jdg/1214437665. MR 0710056. • ——— (1983b). "Self-dual connections and the topology of smooth 4-manifolds". Bull. Amer. Math. Soc. 8 (1): 81–83. doi:10.1090/S0273-0979-1983-15090-5. MR 0682827. • ——— (1984b). "Instantons and geometric invariant theory". Comm. Math. Phys. 93 (4): 453–460. Bibcode:1984CMaPh..93..453D. doi:10.1007/BF01212289. MR 0892034. S2CID 120209762. • ——— (1987a). "The orientation of Yang-Mills moduli spaces and 4-manifold topology". J. Differential Geom. 26 (3): 397–428. doi:10.4310/jdg/1214441485. MR 0910015. • ——— (1987b). "Irrationality and the h-cobordism conjecture". J. Differential Geom. 26 (1): 141–168. doi:10.4310/jdg/1214441179. MR 0892034. • ——— (1987c). "Infinite determinants, stable bundles and curvature". Duke Math. J. 54 (1): 231–247. doi:10.1215/S0012-7094-87-05414-7. MR 0885784. • ——— (1990). "Polynomial invariants for smooth four-manifolds". Topology. 29 (3): 257–315. doi:10.1016/0040-9383(90)90001-Z. MR 1066174. • ——— (1999). "Lefschetz pencils on symplectic manifolds". J. Differential Geom. 53 (2): 205–236. doi:10.4310/jdg/1214425535. MR 1802722. • ——— (2001). "Scalar curvature and projective embeddings. I". J. Differential Geom. 59 (3): 479–522. doi:10.4310/jdg/1090349449. MR 1916953. • ———; Sun, Song (2014). "Gromov-Hausdorff limits of Kähler manifolds and algebraic geometry". Acta Math. 213 (1): 63–106. arXiv:1206.2609. doi:10.1007/s11511-014-0116-3. MR 3261011. S2CID 120450769. • Chen, Xiuxiong; Donaldson, Simon; Sun, Song (2014). "Kähler-Einstein metrics and stability". Int. Math. Res. Notices. 2014 (8): 2119–2125. arXiv:1210.7494. doi:10.1093/imrn/rns279. 
MR 3194014. S2CID 119165036. • Chen, Xiuxiong; Donaldson, Simon; Sun, Song (2015a). "Kähler-Einstein metrics on Fano manifolds I: Approximation of metrics with cone singularities". J. Amer. Math. Soc. 28 (1): 183–197. arXiv:1211.4566. doi:10.1090/S0894-0347-2014-00799-2. MR 3264766. S2CID 119641827. • Chen, Xiuxiong; Donaldson, Simon; Sun, Song (2015b). "Kähler-Einstein metrics on Fano manifolds II: Limits with cone angle less than 2π". J. Amer. Math. Soc. 28 (1): 199–234. arXiv:1212.4714. doi:10.1090/S0894-0347-2014-00800-6. MR 3264767. S2CID 119140033. • Chen, Xiuxiong; Donaldson, Simon; Sun, Song (2015c). "Kähler-Einstein metrics on Fano manifolds III: Limits as cone angle approaches 2π and completion of the main proof". J. Amer. Math. Soc. 28 (1): 235–278. arXiv:1302.0282. doi:10.1090/S0894-0347-2014-00801-8. MR 3264768. S2CID 119575364. Books • Donaldson, S.K.; Kronheimer, P.B. (1990). The geometry of four-manifolds. Oxford Mathematical Monographs. New York: Oxford University Press. ISBN 0-19-853553-8. MR 1079726.[18] • Donaldson, S.K. (2002). Floer homology groups in Yang-Mills theory. Cambridge Tracts in Mathematics. Vol. 147. Cambridge: Cambridge University Press. ISBN 0-521-80803-0. • Donaldson, Simon (2011). Riemann surfaces. Oxford Graduate Texts in Mathematics. Vol. 22. Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198526391.001.0001. ISBN 978-0-19-960674-0. MR 2856237.[19] References 1. "Simon Donaldson, Simons Center for Geometry and Physics". 2. Simon Donaldson Autobiography, The Shaw Prize, 2009 3. Atiyah, M. (1986). "On the work of Simon Donaldson". Proceedings of the International Congress of Mathematicians. 4. Biography at DeBretts Archived 20 June 2013 at the Wayback Machine 5. "Donaldson, Sir Simon (Kirwan)", Who's Who (online ed., Oxford University Press, December 2018). Retrieved 2 June 2019. 6. "ICM Plenary and Invited Speakers". International Mathematical Union (IMU). Retrieved 3 September 2022. 7. Donaldson, Simon K (1986). "The geometry of 4-manifolds". In AM Gleason (ed.). Proceedings of the International Congress of Mathematicians (Berkeley 1986). Vol. 1. pp. 43–54. CiteSeerX 10.1.1.641.1867. 8. Donaldson, S. K. (1998). "Lefschetz fibrations in symplectic geometry". Doc. Math. (Bielefeld) Extra Vol. ICM Berlin, 1998, vol. II. pp. 309–314. 9. "ICM Plenary and Invited Speakers, International Mathematical Union (IMU)". mathunion.org. 10. "The Shaw Prize". shawprize.org. 16 June 2009. 11. "Five Winners Receive Inaugural Breakthrough Prize in Mathematics". breakthroughprize.org. 23 June 2014. Retrieved 21 May 2022. 12. "2019 Oswald Veblen Prize in Geometry to Xiuxiong Chen, Simon Donaldson, and Song Sun". American Mathematical Society. 19 November 2018. Retrieved 9 April 2019. 13. Wolf Prize 2020, wolffund.org.il. Accessed 8 January 2023. 14. New foreign members elected to the academy, press announcement from the Royal Swedish Academy of Sciences, 26 May 2010. 15. "No. 60009". The London Gazette (Supplement). 31 December 2011. p. 1. 16. List of Fellows of the American Mathematical Society. Retrieved 10 November 2012. 17. Another proof of a somewhat more general result was given by Uhlenbeck, Karen & Yau, Shing-Tung (1986). "On the existence of Hermitian-Yang-Mills connections in stable vector bundles". Comm. Pure Appl. Math. 39 (S, suppl): S257–S293. doi:10.1002/cpa.3160390714. MR 0861491. 18. Hitchin, Nigel (1993). "Review: The geometry of four-manifolds, by S. K. Donaldson and P. B. Kronheimer". Bull. Amer. Math. Soc. (N.S.). 28 (2): 415–418. 
doi:10.1090/s0273-0979-1993-00377-x.
19. Kra, Irwin (2012). "Review: Riemann surfaces, by S. K. Donaldson". Bull. Amer. Math. Soc. (N.S.). 49 (3): 455–463. doi:10.1090/s0273-0979-2012-01375-7.

External links

• O'Connor, John J.; Robertson, Edmund F., "Simon Donaldson", MacTutor History of Mathematics Archive, University of St Andrews
• Simon Donaldson at the Mathematics Genealogy Project
• Home page at Imperial College
• "Some recent developments in Kähler geometry and exceptional holonomy – Simon Donaldson – ICM2018". YouTube. (Plenary Lecture 1)
Simon problems

In mathematics, the Simon problems (or Simon's problems) are a series of fifteen questions posed in the year 2000 by Barry Simon, an American mathematical physicist.[1][2] Inspired by other collections of mathematical problems and open conjectures, such as the famous list by David Hilbert, the Simon problems concern quantum operators.[3] Eight of the problems pertain to anomalous spectral behavior of Schrödinger operators, and five concern operators that incorporate the Coulomb potential.[1]

In 2014, Artur Avila won a Fields Medal for work that included the solution of three Simon problems.[4][5] Among these was the problem of proving that the set of energy levels of one particular abstract quantum system is in fact a Cantor set, a challenge known as the "Ten Martini Problem" after the reward that Mark Kac offered for solving it.[5][6]

The 2000 list was a refinement of a similar set of problems that Simon had posed in 1984.[7][8]

Context

Background definitions for the "Coulomb energies" problems ($N$ nonrelativistic particles (electrons) in $\mathbb {R} ^{3}$ with spin $1/2$ and an infinitely heavy nucleus with charge $Z$ and Coulombian mutual interaction):

• ${\mathcal {H}}_{f}^{(N)}$ is the subspace of $L^{2}(\mathbb {R} ^{3N};\mathbb {C} ^{2^{N}})$ consisting of functions which are antisymmetric under exchange of the $N$ spin and space coordinates.[1] Equivalently, this is the subspace of $(L^{2}(\mathbb {R} ^{3})\otimes \mathbb {C} ^{2})^{\otimes N}$ which is antisymmetric under exchange of the $N$ factors.
• The Hamiltonian is $H(N,Z):=\sum _{i=1}^{N}\left(-\Delta _{i}-{\frac {Z}{|x_{i}|}}\right)+\sum _{i<j}{\frac {1}{|x_{i}-x_{j}|}}$. Here $x_{i}\in \mathbb {R} ^{3}$ is the coordinate of the $i$-th particle and $\Delta _{i}$ is the Laplacian with respect to the coordinate $x_{i}$. Even though the Hamiltonian does not explicitly depend on the state of the spin sector, the presence of spin has an effect through the antisymmetry condition on the total wavefunction.
• We define $E(N,Z):=\min _{{\mathcal {H}}_{f}}H(N,Z)$, that is, the ground state energy of the $(N,Z)$ system.
• We define $N_{0}(Z)$ to be the smallest value of $N$ such that $E(N+j,Z)=E(N,Z)$ for all positive integers $j$; it is known that such a number always exists and is always between $Z$ and $2Z$, inclusive.[1]
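As a worked special case of the definitions above (added for orientation; it is not one of Simon's problems), the hydrogenic system $N=1$ can be solved exactly. The antisymmetry condition is then vacuous and $H(1,Z)=-\Delta -{\frac {Z}{|x|}}$; since $\Delta e^{-a|x|}=\left(a^{2}-{\frac {2a}{|x|}}\right)e^{-a|x|}$, choosing $a=Z/2$ makes $\psi (x)=e^{-Z|x|/2}$ a ground state with $E(1,Z)=-{\frac {Z^{2}}{4}}$ in these units. Already for $N=2$ (helium-like systems) no closed form is known, which is why the Coulomb problems below concern bounds and asymptotics.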
The 1984 list

Simon listed the following problems in 1984:[7]

No. Short name Statement Status Year solved
1st (a) Almost always global existence for Newtonian gravitating particles Prove that the set of initial conditions for which Newton's equations fail to have global solutions has measure zero. Open as of 1984.[7] In 1977, Saari showed that this is true for 4-body problems.[9] ?
(b) Existence of non-collisional singularities in the Newtonian N-body problem Show that there are non-collisional singularities in the Newtonian N-body problem for some N and suitable masses. In 1988, Xia gave an example of a 5-body configuration which undergoes a non-collisional singularity.[10][11] In 1991, Gerver showed that 3n-body problems in the plane for some sufficiently large value of n also undergo non-collisional singularities.[12] 1989
2nd (a) Ergodicity of gases with soft cores Find repulsive smooth potentials for which the dynamics of N particles in a box (with, e.g., smooth wall potentials) is ergodic. Open as of 1984. Sinai once proved that the hard sphere gas is ergodic, but no complete proof has appeared except for the case of two particles, and a sketch for three, four, and five particles.[7] ?
(b) Approach to equilibrium Use the scenario above to justify that large systems with forces that are attractive at suitable distances approach equilibrium, or find an alternate scenario that does not rely on strict ergodicity in finite volume. Open as of 1984. ?
(c) Asymptotic abelianness for the quantum Heisenberg dynamics Prove or disprove that the multidimensional quantum Heisenberg model is asymptotically abelian. Open as of 1984. ?
3rd Turbulence and all that Develop a comprehensive theory of long-time behavior of dynamical systems, including a theory of the onset of and of fully developed turbulence. Open as of 1984. ?
4th (a) Fourier's heat law Find a mechanical model in which a system of size $L$ with temperature difference $\Delta T$ between its ends has a rate of heat transport that goes as $L^{-1}$ in the limit $L\to \infty $. Open as of 1984. ?
(b) Kubo's formula Justify Kubo's formula in a quantum model or find an alternate theory of conductivity. Open as of 1984. ?
5th (a) Exponential decay of $v=2$ classical Heisenberg correlations Consider the two-dimensional classical Heisenberg model. Prove that for any beta, correlations decay exponentially as distance approaches infinity. Open as of 1984. ?
(b) Pure phases and low temperatures for the $v\geq 3$ classical Heisenberg model Prove that, in the $D=3$ model at large beta and at dimension $v\geq 3$, the equilibrium states form a single orbit under $SO(3)$: the sphere.
(c) GKS for classical Heisenberg models Let $f$ and $g$ be finite products of the form $(\sigma _{\alpha }\cdot \sigma _{\gamma })$ in the $D=3$ model. Is it true that $\langle fg\rangle _{\Lambda ,\beta }\geq \langle f\rangle _{\Lambda ,\beta }\langle g\rangle _{\Lambda ,\beta }$?
(d) Phase transitions in the quantum Heisenberg model Prove that for $v\geq 3$ and large beta, the quantum Heisenberg model has long range order.
6th Explanation of ferromagnetism Verify the Heisenberg picture of the origin of ferromagnetism (or an alternative) in a suitable model of a realistic quantum system. Open as of 1984. ?
7th Existence of continuum phase transitions Show that for suitable choices of pair potential and density, the free energy is non-$C^{1}$ at some beta. Open as of 1984. ?
8th (a) Formulation of the renormalization group Develop mathematically precise renormalization transformations for $v$-dimensional Ising-type systems. Open as of 1984. ?
(b) Proof of universality Show that critical exponents for Ising-type systems with nearest neighbor coupling but different bond strengths in the three directions are independent of ratios of bond strengths.
9th (a) Asymptotic completeness for short-range N-body quantum systems Prove that $\oplus ~{\text{Ran}}~\Omega _{a}^{+}=L^{2}(X)$. Open as of 1984.[7] ?
(b) Asymptotic completeness for Coulomb potentials Suppose $v=3,V_{ij}(x)=e_{ij}|x|^{-1}$. Prove that $\oplus ~{\text{Ran}}~\Omega _{a}^{D,+}=L^{2}(X)$.
10th (a) Monotonicity of ionization energy Prove that $(\Delta E)(N-1,Z)\geq (\Delta E)(N,Z)$. Open as of 1984. ?
(b) The Scott correction Prove that $\lim _{Z\to \infty }(E(Z,Z)-e_{TF}Z^{7/3})/Z^{2}$ exists and is the constant found by Scott.
(c) Asymptotic ionization Find the leading asymptotics of $(\Delta E)(Z,Z)$.
(d) Asymptotics of maximal ionized charge Prove that $\lim _{Z\to \infty }N(Z)/Z=1$.
(e) Rate of collapse of Bose matter Find suitable $C_{1},C_{2},\alpha $ such that $-C_{1}N^{\alpha }\leq {\tilde {E}}_{B}(N,N;1)\leq C_{2}N^{\alpha }$.
11th Existence of crystals Prove a suitable version of the existence of crystals (e.g.
there is a choice of minimizing configurations that converge to some infinite lattice configuration). Open as of 1984. ? 12th (a) Existence of extended states in the Anderson model Prove that, for $v\geq 3$ and small $\lambda $, there is a region of absolutely continuous spectrum of the Anderson model, and determine whether this is false for $v=2$. Open as of 1984. ? (b) Diffusive bound on "transport" in random potentials Prove that ${\text{Exp}}(\delta _{0},(e^{itH}{\vec {N}}e^{-itH})^{2}\delta _{0})\leq c(1+|t|)$ for the Anderson model, and more general random potentials. (c) Smoothness of $k(E)$ through the mobility edge in the Anderson model Is $k(E)$, the integrated density of states, a $C^{\infty }$ function in the Anderson model at all couplings? (d) Analysis of the almost Mathieu equation Verify the following for the almost Mathieu equation: • If $\alpha $ is a Liouville number and $\lambda \neq 0$, then the spectrum is purely singular continuous for almost all $\theta $. • If $\alpha $ is a Roth number and $|\lambda |<2$, then the spectrum is purely absolutely continuous for almost all $\theta $. • If $\alpha $ is a Roth number and $|\lambda |>2$, then the spectrum is dense pure point. • If $\alpha $ is a Roth number and $|\lambda |=2$, then $\sigma (h)$ has Lebesgue measure zero and the spectrum is purely singular continuous. (e) Point spectrum in a continuous almost periodic model Show that $-{\frac {d^{2}}{dx^{2}}}+\lambda \cos(2\pi x)+\mu \cos(2\pi \alpha x+\theta )$ has some point spectrum for suitable $\alpha ,\lambda ,\mu $ and almost all $\theta $. 13th Critical exponent for self-avoiding walks Let $D(n)$ be the mean displacement of a random self-avoiding walk of length $n$. Show that $\nu :=\lim _{n\to \infty }{\frac {\ln D(n)}{\ln n}}$ is ${\frac {1}{2}}$ for dimension at least four and is greater otherwise. Open as of 1984. ? 14th (a) Construct QCD Give a precise mathematical construction of quantum chromodynamics. Open as of 1984. ? (b) Renormalizable QFT Construct a nontrivial quantum field theory that is renormalizable but not superrenormalizable. (c) Inconsistency of QED Prove that QED is not a consistent theory. (d) Inconsistency of $\varphi _{4}^{4}$ Prove that a nontrivial $\varphi _{4}^{4}$ theory does not exist. 15th Cosmic censorship Formulate and then prove or disprove a suitable version of cosmic censorship. Open as of 1984. ? In 2000, Simon claimed that five of the problems he listed had been solved.[1] The 2000 list The Simon problems as listed in 2000 (with original categorizations) are:[1][13] No. Short name Statement Status Year solved Quantum transport and anomalous spectral behavior 1st Extended states Prove that the Anderson model has purely absolutely continuous spectrum for $v\geq 3$ and suitable values of $b-a$ in some energy range. ? ? 2nd Localization in 2 dimensions Prove that the spectrum of the Anderson model for $v=2$ is dense pure point. ? ? 3rd Quantum diffusion Prove that, for $v\geq 3$ and values of $|b-a|$ where there is absolutely continuous spectrum, $\sum _{n\in \mathbb {Z} ^{\nu }}n^{2}|e^{itH}(n,0)|^{2}$ grows like $ct$ as $t\to \infty $. ? ? 4th Ten Martini problem Prove that the spectrum of $h_{\alpha ,\lambda ,\theta }$ is a Cantor set (that is, nowhere dense) for all $\lambda \neq 0$ and all irrational $\alpha $. Solved by Puig (2003).[13][14] 2003 5th Prove that the spectrum of $h_{\alpha ,\lambda ,\theta }$ has measure zero for $\lambda =2$ and all irrational $\alpha $.
Solved by Avila and Krikorian (2003).[13][15] 2003 6th Prove that the spectrum of $h_{\alpha ,\lambda ,\theta }$ is absolutely continuous for $\lambda =2$ and all irrational $\alpha $. ? ? 7th Do there exist potentials $V(x)$ on $[0,\infty )$ such that $|V(x)|\leq C|x|^{{\frac {1}{2}}+\varepsilon }$ for some $\varepsilon $ and such that $-{\frac {d^{2}}{dx^{2}}}+V$ has some singular continuous spectrum? Essentially solved by Denisov (2003) with only $L^{2}$ decay. Solved entirely by Kiselev (2005).[13][16][17] 2003, 2005 8th Suppose that $V(x)$ is a function on $\mathbb {R} ^{\nu }$ such that $\int |x|^{-\nu +1}|V(x)|^{2}d^{\nu }x<\infty $, where $\nu \geq 2$. Prove that $-\Delta +V$ has absolutely continuous spectrum of infinite multiplicity on $[0,\infty )$. ? ? Coulomb energies 9th Prove that $N_{0}(Z)-Z$ is bounded for $Z\to \infty $. ? ? 10th What are the asymptotics of $(\delta E)(Z):=E(Z,Z-1)-E(Z,Z)$ for $Z\to \infty $? ? ? 11th Make mathematical sense of the nuclear shell model. ? ? 12th Is there a mathematical sense in which one can justify current techniques for determining molecular configurations from first principles? ? ? 13th Prove that, as the number of nuclei approaches infinity, the ground state of some neutral system of molecules and electrons approaches a periodic limit (i.e. that crystals exist based on quantum principles). ? ? Other problems 14th Prove that the integrated density of states $k(E)$ is continuous in the energy, i.e. that $|k(E_{1}+\Delta E)-k(E_{1})|<\varepsilon $ for sufficiently small $\Delta E$. ? 15th Lieb–Thirring conjecture Prove the Lieb–Thirring conjecture on the constants $L_{\gamma ,\nu }$ where $\nu =1,{\frac {1}{2}}<\gamma <{\frac {3}{2}}$. ? ? See also • Almost Mathieu operator • Lieb–Thirring inequality External links • "Simon's Problems". MathWorld. Retrieved 2018-06-13. References 1. Simon, Barry (2000). "Schrödinger Operators in the Twenty-First Century". Mathematical Physics 2000. Imperial College London. pp. 283–288. doi:10.1142/9781848160224_0014. ISBN 978-1-86094-230-3. 2. Marx, C. A.; Jitomirskaya, S. (2017). "Dynamics and Spectral Theory of Quasi-Periodic Schrödinger-type Operators". Ergodic Theory and Dynamical Systems. 37 (8): 2353–2393. arXiv:1503.05740. doi:10.1017/etds.2016.16. S2CID 119317111. 3. Damanik, David. "Dynamics of SL(2,R)-Cocycles and Applications to Spectral Theory; Lecture 1: Barry Simon's 21st Century Problems" (PDF). Beijing International Center for Mathematical Research, Peking University. Retrieved 2018-07-07. 4. "Fields Medal awarded to Artur Avila". Centre national de la recherche scientifique. 2014-08-13. Retrieved 2018-07-07. 5. Bellos, Alex (2014-08-13). "Fields Medals 2014: the maths of Avila, Bhargava, Hairer and Mirzakhani explained". The Guardian. Retrieved 2018-07-07. 6. Tao, Terry (2014-08-12). "Avila, Bhargava, Hairer, Mirzakhani". What's New. Retrieved 2018-07-07. 7. Simon, Barry (1984). "Fifteen problems in mathematical physics". Perspectives in Mathematics: Anniversary of Oberwolfach 1984 (PDF). Birkhäuser. pp. 423–454. Retrieved 24 June 2021. 8. Coley, Alan A. (2017). "Open problems in mathematical physics". Physica Scripta. 92 (9): 093003. arXiv:1710.02105. Bibcode:2017PhyS...92i3003C. doi:10.1088/1402-4896/aa83c1. S2CID 3892374. 9. Saari, Donald G. (October 1977). "A global existence theorem for the four-body problem of Newtonian mechanics". Journal of Differential Equations. 26 (1): 80–111. Bibcode:1977JDE....26...80S. doi:10.1016/0022-0396(77)90100-0. 10. Xia, Zhihong (1992). "The Existence of Noncollision Singularities in Newtonian Systems".
Annals of Mathematics. 135 (3): 411–468. doi:10.2307/2946572. JSTOR 2946572. MR 1166640. 11. Saari, Donald G.; Xia, Zhihong (April 1995). "Off to infinity in finite time" (PDF). Notices of the American Mathematical Society. 42 (5): 538–546. 12. Gerver, Joseph L (January 1991). "The existence of pseudocollisions in the plane". Journal of Differential Equations. 89 (1): 1–68. Bibcode:1991JDE....89....1G. doi:10.1016/0022-0396(91)90110-U. 13. Weisstein, Eric W. "Simon's Problems". mathworld.wolfram.com. Retrieved 2021-06-22. 14. Puig, Joaquim (1 January 2004). "Cantor Spectrum for the Almost Mathieu Operator". Communications in Mathematical Physics. 244 (2): 297–309. Bibcode:2004CMaPh.244..297P. doi:10.1007/s00220-003-0977-3. S2CID 120589515. 15. Ávila Cordeiro de Melo, Artur; Krikorian, Raphaël (1 November 2006). "Reducibility or nonuniform hyperbolicity for quasiperiodic Schrödinger cocycles". Annals of Mathematics. 164 (3): 911–940. arXiv:math/0306382. doi:10.4007/annals.2006.164.911. S2CID 14625584. 16. Denisov, Sergey A. (June 2003). "On the coexistence of absolutely continuous and singular continuous components of the spectral measure for some Sturm–Liouville operators with square summable potential". Journal of Differential Equations. 191 (1): 90–104. Bibcode:2003JDE...191...90D. doi:10.1016/S0022-0396(02)00145-6. 17. Kiselev, Alexander (27 April 2005). "Imbedded singular continuous spectrum for Schrödinger operators". Journal of the American Mathematical Society. 18 (3): 571–603. doi:10.1090/S0894-0347-05-00489-3.
Wikipedia
Simon von Stampfer Simon Ritter von Stampfer (26 October 1792 (according to other sources 1790) in Windisch-Mattrai, Archbishopric of Salzburg, today called Matrei in Osttirol, Tyrol – 10 November 1864 in Vienna) was an Austrian mathematician, surveyor and inventor. His most famous invention is that of the stroboscopic disk, which has a claim to be the first device to show moving images. Almost simultaneously, a similar device, the phenakistiscope, was developed in Belgium. Life Youth and education Simon Ritter von Stampfer was born in Matrei in Osttirol, and was the first son of Bartlmä Stampfer, a weaver. From 1801 he attended the local school, and in 1804 he moved to the Franciscan Gymnasium in Lienz, where he studied until 1807. From there he went to the Lyceum in Salzburg to study philosophy; however, he was not assessed. In 1814 in Munich, he passed the state examination and applied there as a teacher. He chose, however, to stay in Salzburg, where he was assistant teacher in mathematics, natural history, physics and Greek at the high school. He then moved to the Lyceum, where he taught elementary mathematics, physics and applied mathematics. In 1819 he was also appointed a professor. In his spare time he made geodetic measurements, astronomical observations, experiments on the propagation speed of sound at different heights, and measurements using the barometer. Stampfer was often to be seen in the Benedictine Monastery of Kremsmünster, which had numerous pieces of astronomical equipment available. In 1822, von Stampfer married Johanna Wagner. They had a daughter in 1824 (Maria Aloysia Johanna) and in 1825 a son (Anton Josef Simon). First scientific and teaching work After several unsuccessful applications in Innsbruck, Stampfer was finally promoted to full professor of pure mathematics in Salzburg. He was subsequently appointed to the Chair of Practical Geometry at the Polytechnic Institute in Vienna, where he settled in December 1825 to replace Franz Josef von Gerstner. He now taught practical geometry, but was also employed as a physicist and astronomer. He produced a method for the computation of solar eclipses. In his astronomical work he was concerned with lenses, their accuracy and their distortion; this led him to the field of optical illusions. In 1828, he developed test methods for telescopes and methods of measurement to determine the radius of curvature ("Krümmungshalbmesser") of lenses and the refractive and dispersion properties of the glass. For his work on the theoretical foundations of the production of high-quality optics, he turned to the achromatic Fraunhofer lens. Development of "stroboscopic discs" In 1832, Stampfer became aware, through the Journal of Physics and Mathematics, of experiments by the British physicist Michael Faraday on the optical illusion caused by rapidly rotating gears, in which the human eye could not follow the movement of the gear. He was so impressed that he conducted similar experiments with intermittent views through the openings between the teeth of slotted cardboard wheels. From these experiments he eventually developed his Stroboscopische Scheiben (optische Zauberscheiben) (Stroboscopic Discs, or optical magic discs, or simply Stroboscope), coining the term as a combination of the Ancient Greek words στρόβος - strobos, meaning "whirlpool", and σκοπεῖν - skopein, meaning "to look at".
In a pamphlet published in July 1833, Stampfer mentioned that the sequence of images could be placed on either a disc, a cylinder (much like the Zoetrope, introduced in 1866) or, for longer scenes, a looped strip of paper or canvas stretched around two parallel rollers (somewhat similar to film on spools). A disc with pictures could be viewed through a slotted disc on the other side of an axis, but Stampfer found it simpler to spin a single disc carrying both slots and pictures in front of a mirror. He also suggested covering up the view of all but one of the moving figures with a cut-out sheet of cardboard and painting theatrical coulisses and backdrops around the cut-out part (somewhat similar to the later Praxinoscope-Theatre).[1] The patent for the invention also mentions the option of transparent versions. Stampfer and lithographer Mathias Trentsensky chose to publish the invention in the shape of a disc to be viewed in a mirror.[2] Belgian scientist Joseph Antoine Ferdinand Plateau had been developing a very similar device for some time and finally published about what would later be named the Fantascope or Phénakisticope in January 1833 in a Belgian scientific periodical, illustrated with a plate of the device.[3] Plateau mentioned in 1836 that he thought it difficult to state the exact time when he got the idea, but he believed he was first able to successfully assemble his invention in December. He stated that he trusted Stampfer's assertion of having started his experiments at the same time, which soon resulted in the discovery of the stroboscopic animation principle.[4] Both Stampfer and Plateau have a claim to be the founding father of cinema; most often cited with this honour, however, is Joseph Antoine Ferdinand Plateau.[5] Stampfer received the imperial privilege No. 1920 for his invention on 7 May 1833: No. 1920. S. Stampfer, professor at the Imperial Polytechnic Institute in Vienna (Wieden, Nro. 64), and Mathias Trentsensky; for the invention of arranging figures and coloured shapes, indeed images of any kind, according to mathematical and physical laws in such a way that, when they are passed before the eye at the proper speed by some mechanism while the line of sight is repeatedly interrupted, the most varied optical illusions of connected movements and actions are presented to the eye. These images are most easily drawn on discs of cardboard or other suitable material, with viewing holes cut near their periphery. When such discs are spun rapidly about their axes facing a mirror, the animated pictures become visible to the eye looking through the holes at the mirror; in this way not only mechanical movements of any kind, such as wheels and hammer-works, rolling carts and rising balloons, can be depicted, but also the most diverse actions and movements of people and animals, to surprising effect. On the same principles, compound actions, such as theatrical scenes or workshops in operation, can also be represented by other mechanical devices, using either transparent or ordinarily drawn pictures. For two years, from 7 May. (Jb. Polytechn. Inst. Vol. 19, 406f., cit. in ) The device was developed by the Viennese art dealers Trentsensky & Vieweg and commercially marketed. The first edition was published in May 1833[6] and was soon sold out, so that in July a second, improved edition appeared.[7] His "stroboscopic discs" became known outside of Austria, and it was from this that the term "stroboscopic effect" arose. Literature • Franz Allmer: Simon Stampfer 1790–1864.
Picture of a life. In: Communications of the Geodetic Institute of the Technical University of Graz, No. 82, Graz 1996 • William Formann: Austrian pioneers of cinematography. Bergland Verlag, Wien 1966, pp. 10–18 • Peter Schuster and Christian Strasser: Simon Stampfer 1790–1864. From the magic disc to film (series of the press office, Special Publications Series No. 142), Salzburg 1998 References 1. Stampfer, Simon (1833). Die stroboscopischen Scheiben; oder, Optischen Zauberscheiben: Deren Theorie und wissenschaftliche Anwendung. 2. "Stroboscopische Scheiben (optische Zauberscheiben)". Wiener Zeitung. 2 May 1833. p. 4. 3. Correspondance mathématique et physique (in French). Vol. 7. Brussels: Garnier and Quetelet. 1832. p. 365. 4. "Bulletin de l'Académie Royale des Sciences et Belles-Lettres de Bruxelles" (in French). III (1). Brussels: l'Académie Royale. 1836: 9–10. 5. Weynants, Thomas (2003). "Stampfer Discs and Phenakistiscope Discs". Early Visual Media. Retrieved 2009-06-14. 6. "Der Wanderer". 1833. 7. Stampfer, Simon (1833). Die stroboscopischen Scheiben; oder, Optischen Zauberscheiben: Deren Theorie und wissenschaftliche Anwendung, erklärt von dem Erfinder [The stroboscopic discs; or optical magic discs: Their theory and scientific application, explained by the inventor] (in German). Vienna and Leipzig: Trentsensky and Vieweg. p. 2. External links Wikimedia Commons has media related to Simon Stampfer. • Simon Stampfer's stroboscopic discs at the Academic Gymnasium Salzburg • Simon von Stampfer in the German National Library catalogue (GND 119533049) • Simon Stampfer: scholar, scientist, inventor • Simon Stampfer's stroboscopic discs (Object of the month from the Museum of Sternwarte Kremsmünster, August 2001) • Simon von Stampfer in Austria-Forum (in German) • Introduction to Animation (by Sandro Corsaro, 2003; PDF file, 112 KB)
Wikipedia
Simons' formula In the mathematical field of differential geometry, the Simons formula (also known as the Simons identity, and in some variants as the Simons inequality) is a fundamental equation in the study of minimal submanifolds. It was discovered by James Simons in 1968.[1] It can be viewed as a formula for the Laplacian of the second fundamental form of a Riemannian submanifold. It is often quoted and used in the less precise form of a formula or inequality for the Laplacian of the length of the second fundamental form. In the case of a hypersurface M of Euclidean space, the formula asserts that $\Delta h=\operatorname {Hess} H+Hh^{2}-|h|^{2}h,$ where, relative to a local choice of unit normal vector field, h is the second fundamental form, H is the mean curvature, and $h^{2}$ is the symmetric 2-tensor on M given by $h_{ij}^{2}=g^{pq}h_{ip}h_{qj}$.[2] This has the consequence that ${\frac {1}{2}}\Delta |h|^{2}=|\nabla h|^{2}-|h|^{4}+\langle h,\operatorname {Hess} H\rangle +H\operatorname {tr} (A^{3})$ where A is the shape operator.[3] In this setting, the derivation is particularly simple: ${\begin{aligned}\Delta h_{ij}&=\nabla ^{p}\nabla _{p}h_{ij}\\&=\nabla ^{p}\nabla _{i}h_{jp}\\&=\nabla _{i}\nabla ^{p}h_{jp}-{{R^{p}}_{ij}}^{q}h_{qp}-{{R^{p}}_{ip}}^{q}h_{jq}\\&=\nabla _{i}\nabla _{j}H-(h^{pq}h_{ij}-h_{j}^{p}h_{i}^{q})h_{qp}-(h^{pq}h_{ip}-Hh_{i}^{q})h_{jq}\\&=\nabla _{i}\nabla _{j}H-|h|^{2}h+Hh^{2};\end{aligned}}$ the only tools involved are the Codazzi equation (equalities #2 and 4), the Gauss equation (equality #4), and the commutation identity for covariant differentiation (equality #3). The more general case of a hypersurface in a Riemannian manifold requires additional terms to do with the Riemann curvature tensor.[4] In the even more general setting of arbitrary codimension, the formula involves a complicated polynomial in the second fundamental form.[5] References Footnotes 1. Simons 1968, Section 4.2. 2. Huisken 1984, Lemma 2.1(i). 3. Simon 1983, Lemma B.8. 4. Huisken 1986. 5. Simons 1968, Section 4.2; Chern, do Carmo & Kobayashi 1970. Books • Tobias Holck Colding and William P. Minicozzi, II. A course in minimal surfaces. Graduate Studies in Mathematics, 121. American Mathematical Society, Providence, RI, 2011. xii+313 pp. ISBN 978-0-8218-5323-8 • Enrico Giusti. Minimal surfaces and functions of bounded variation. Monographs in Mathematics, 80. Birkhäuser Verlag, Basel, 1984. xii+240 pp. ISBN 0-8176-3153-4 • Leon Simon. Lectures on geometric measure theory. Proceedings of the Centre for Mathematical Analysis, Australian National University, 3. Australian National University, Centre for Mathematical Analysis, Canberra, 1983. vii+272 pp. ISBN 0-86784-429-9 Articles • S.S. Chern, M. do Carmo, and S. Kobayashi. Minimal submanifolds of a sphere with second fundamental form of constant length. Functional Analysis and Related Fields (1970), 59–75. Proceedings of a Conference in honor of Professor Marshall Stone, held at the University of Chicago, May 1968. Springer, New York. Edited by Felix E. Browder. doi:10.1007/978-3-642-48272-4_2 • Gerhard Huisken. Flow by mean curvature of convex surfaces into spheres. J. Differential Geom. 20 (1984), no. 1, 237–266. doi:10.4310/jdg/1214438998 • Gerhard Huisken. Contracting convex hypersurfaces in Riemannian manifolds by their mean curvature. Invent. Math. 84 (1986), no. 3, 463–480. doi:10.1007/BF01388742 • James Simons. Minimal varieties in Riemannian manifolds. Ann. of Math. (2) 88 (1968), 62–105. doi:10.2307/1970556
Wikipedia
Simons Center for Geometry and Physics The Simons Center for Geometry and Physics is a center for theoretical physics and mathematics at Stony Brook University in New York. The focus of the center is mathematical physics and the interface of geometry and physics. It was founded in 2007 by a gift from the James and Marilyn Simons Foundation. The center's current director is physicist Luis Álvarez-Gaumé. Simons Center for Geometry and Physics[1] General information: Location: 100 Nicolls Road, Stony Brook, NY 11790; Construction started: Spring 2009; Opened: November 2010; Cost: $30 million. Technical details: Floor count: 6; Floor area: 40,000 sq ft. History Background James H. Simons was the chair of the mathematics department at Stony Brook from 1968 to 1976. After deciding to leave academia, he went on to make billions with his investment firm Renaissance Technologies. On February 27, 2008 he announced a donation totaling $60 million (including a $25 million gift two years prior) to the mathematics and physics departments. This was the largest single gift ever given to any of the SUNY schools.[2] The gift came during Stony Brook's 50th anniversary and shortly after Gov. Spitzer announced his commitment to make Stony Brook a “flagship” of the SUNY system that would rival the nation’s most prestigious state research universities.[3] During his announcement speech, Jim Simons said "From Archimedes to Newton to Einstein, much of the most profound work in physics has been deeply intertwined with the geometric side of mathematics. Since then, in particular with the advent of such areas as quantum field theory and string theory, developments in geometry and physics have become if anything more interrelated. The new Center will give many of the world's best mathematicians and physicists the opportunity to work and interact in an environment and an architecture carefully designed to enhance progress. We believe there is a chance that work accomplished at the Center will significantly change and deepen our understanding of the physical universe and of its basic mathematical structure."[3] The Center results from extensive thought and planning among faculty, department chairs, and others, including Cumrun Vafa of Harvard, who directs the Simons Foundation-supported summer institutes on string theory at Stony Brook, and Isadore Singer of MIT.[3] Establishment John Morgan served as the founding director from 2009 to 2016.[4][5] Luis Álvarez-Gaumé has been the director since 2016.[6] Building The Simons Center's building was completed in September, 2010. The building is adjacent to the physics and mathematics departments to allow for close collaboration with the mathematics department and the C. N. Yang Institute for Theoretical Physics. The building offers 40,000 square feet (3,700 m2) of floor space, spread over six stories, and includes a 236-seat auditorium, a 90-seat lecture hall, offices, seminar rooms, and a cafe.[7] The building is LEED Gold certified,[8] and is connected to the Math Tower via an elevated walkway. Faculty The Center's permanent faculty currently consists of mathematicians Simon Donaldson, Kenji Fukaya, and John Pardon, and physicists Nikita Nekrasov and Zohar Komargodski. The Center's academic staff also includes roughly 10 research assistant professors and 20 visiting researchers at any given time.[9] Other former faculty members include physicists Michael R. Douglas and Anton Kapustin. References 1. "Archived copy" (PDF). Archived from the original (PDF) on 2020-01-25.
Retrieved 2020-01-25. 2. Arneson, Karen W. (28 February 2008). "$60 Million Gift Is a Record, for Stony Brook". The New York Times. Retrieved 6 September 2010. 3. "Stony Brook University Announces $60 Million Gift From Renowned Financier And Former Math Chair James Simons And Wife Marilyn, A Ph.D. Alumna". Stony Brook Press Release. 28 February 2008. Archived from the original on 15 October 2010. Retrieved 6 September 2010. 4. "The Founding Director". Simons Center for Geometry and Physics at Stony Brook University. Retrieved January 27, 2021. 5. "John Morgan". Simons Center for Geometry and Physics at Stony Brook University. Retrieved January 27, 2021. 6. "About the Director". Simons Center for Geometry and Physics at Stony Brook University. Retrieved January 27, 2021. 7. "Stony Brook Facilities & Services newsletter" (PDF). Archived from the original (PDF) on 2011-01-06. Retrieved 4 October 2010. 8. "Simons Center for Geometry and Physics - U.S. Green Building Council". Retrieved 19 May 2015. 9. "People at the SCGP - Spring 2011". Retrieved 27 March 2011. External links • Official website
Wikipedia
Simons Laufer Mathematical Sciences Institute The Simons Laufer Mathematical Sciences Institute (SLMath), formerly the Mathematical Sciences Research Institute (MSRI), is an independent nonprofit mathematical research institution on the University of California campus in Berkeley, California.[2] It is widely regarded as a world-leading mathematical center for collaborative research, drawing thousands of leading researchers from around the world each year.[3][4][5][6][7][8] Simons Laufer Mathematical Sciences Institute (pictured: the MSRI entrance, May 2011). Former name: Mathematical Sciences Research Institute. Type: 501(c)(3) nonprofit mathematical research institute. Established: 1982. Founders: Shiing-Shen Chern, Calvin Moore, Isadore M. Singer. Endowment: $89 million (2022).[1] Director: Tatiana Toro. Address: 17 Gauss Way, Berkeley, California, United States. Website: slmath.org. The institute was founded in 1982, and its funding sources include the National Science Foundation, private foundations, corporations, and more than 90 universities and institutions.[9][4] The institute is located at 17 Gauss Way on the Berkeley campus, close to Grizzly Peak in the Berkeley Hills.[2] Given its contribution to the nation's scientific potential, the institute is supported by the National Science Foundation and the National Security Agency.[10] Private individuals, foundations, and nearly 100 Academic Sponsor Institutions, including the top mathematics departments in the United States, also provide crucial support and flexibility. James Simons, founder of Renaissance Technologies and a Berkeley alumnus, is a long-time supporter of the institute and serves on the board of trustees.[10] History The institute was founded in September 1982 by three Berkeley professors: Shiing-Shen Chern, Calvin Moore, and Isadore M. Singer.[9][11][12] Shiing-Shen Chern acted as the founding director of the institute and Calvin Moore acted as the founding deputy director. Originally located in Berkeley's extension building at 2223 Fulton Street, the institute moved into its current facility in the Berkeley hills on April 1, 1985.[9] The institute initially paid rent to the university for its "Hill Campus" building, but since August 2000, it has occupied the building free of rent, just one of several contributions by the Berkeley campus.[9] In May 2022, the institute announced that it received an unrestricted $70 million gift from James and Marilyn Simons and Henry and Marsha Laufer.
In honor of the endowment, MSRI was renamed the Simons Laufer Mathematical Sciences Institute.[13] Governance SLMath is governed by a board of trustees consisting of up to 35 elected members and seven ex-officio members: the director of the institute, the deputy director, the Chair of the Committee of Academic Sponsors, the co-Chairs of the Human Resources Advisory Committee and the co-Chairs of the Scientific Advisory Committee (SAC).[14] Unlike many mathematical institutes, SLMath has no permanent faculty or members, and its research activities are overseen by its Scientific Advisory Committee (SAC), a panel of distinguished mathematicians drawn from a variety of different areas of mathematical research.[14] There are ten regular members in the SAC, and each member serves a four-year term and is elected by the board of trustees.[14] Research activities SLMath hosts some 85 mathematicians and postdoctoral research fellows each semester and holds programs and workshops that draw approximately 2,000 visits by mathematical scientists throughout the year. The visitors come to SLMath to work in an environment that promotes creativity and the effective interchange of ideas and techniques. SLMath features two focused programs each semester, attended by foremost mathematicians and postdocs from the United States and abroad; the institute has become a world center of activity in those fields.[15] SLMath takes advantage of its proximity to the Berkeley faculty and to the scientific talent and resources of Lawrence Berkeley National Laboratory; it also collaborates with organizations across the nation, including the Chicago Mercantile Exchange, Citadel LLC, IBM, and Microsoft Research. The institute's prize-winning forty-eight thousand square foot building has views of the San Francisco Bay. After 30 years of activity, the reputation of the institute is such that mathematicians make it a professional priority to participate in the institute's programs.[16] Education programs SLMath also serves a wider community through the development of human scientific capital, providing postdoctoral training to young scientists and increasing the diversity of the research workforce.[17] The institute also advances the education of young people with conferences on critical issues in mathematics education. Additionally, they host research workshops that are unconnected to the main programs, such as its annual workshop on K-12 mathematics education Critical Issues in Mathematics Education. During the summer, the institute holds workshops for graduate students.[18] It also sponsors programs for middle and high school students and their teachers as part of the Math Circles and Circles for Teachers that meet weekly in San Francisco, Berkeley, and Oakland. It also sponsors the Bay Area Mathematical Olympiad (BAMO), the Julia Robinson Mathematics Festival, and the U.S. team of young girls that competes at the China Girls Math Olympiad.[19] The lectures given at SLMath events are recorded and made available for free on the internet.[20] SLMath has sponsored a number of events that reach out to the non-mathematical public, and its Simons Auditorium also hosts special performances of classical music. Mathematician Robert Osserman has held a series of public "conversations" with artists who have been influenced by mathematics in their work, such as composer Philip Glass, actor and writer Steve Martin, playwright Tom Stoppard, and actor and author Alan Alda. 
SLMath also collaborates with local playwrights for an annual program of new short mathematics-inspired plays at Monday Night Playground at the Berkeley Repertory Theater, and co-sponsored a series of mathematics-inspired films with UC Berkeley's Pacific Film Archive for the institute's 20th anniversary.[20] It also created a series of mathematical puzzles that were posted among the advertising placards on San Francisco Muni buses. Mathical Book Prize The Mathical Award is presented to books "that inspire children of all ages to see math in the world around them."[21] Recipients of the award include John Rocco, Robie Harris, Jeffrey Kluger, Lauren Child, Michael J. Rosen, Leopoldo Gout, Elisha Cooper, Kate Banks, Gene Luen Yang, Steve Light, and Richard Evan Schwartz.[21] List of directors • Shiing-Shen Chern, 1982–1984 • Irving Kaplansky, 1984–1992 • William Thurston, 1992–1997 • David Eisenbud, 1997–2007 • Robert Bryant, 2007–2013 • David Eisenbud, 2013–2022 • Tatiana Toro, 2022–present[22] References 1. "Simons Laufer Mathematical Sciences Institute". msri.org. 2. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 20 April 2018. 3. "12.06.2004 - Renowned mathematician Shiing-Shen Chern, who revitalized the study of geometry, has died at 93 in Tianjin, China". www.berkeley.edu. Retrieved 4 January 2017. 4. MSRI. "MSRI Mission". www.msri.org. Retrieved 4 January 2017. 5. "Mathematical Sciences Research Institute (MSRI)" (PDF). 6. "About MSRI". The Bridges Organization. Retrieved 4 January 2017. 7. "MSRI". Simons Foundation. Retrieved 4 January 2017. 8. "17 Gauss Way". Simons Foundation. Retrieved 4 January 2017. 9. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 20 April 2018. 10. MSRI. "Mathematical Sciences Research Institute Home". www.msri.org. Retrieved 30 March 2021. 11. "Shiing-Shen Chern". Institute for Advanced Study. Retrieved 20 April 2018. 12. "Isadore M. Singer". Department of Mathematics at University of California Berkeley. Retrieved 20 April 2018. 13. "Mathematical Sciences Research Institute Receives $70M Gift; Largest Unrestricted Endowed Gift to a U.S.-Based Mathematics Institute". Business Wire (Press release). 19 May 2022. 14. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 17 May 2018. 15. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 20 April 2018. 16. Paul Karon. "Funding for Mathematics Research is Scarce. A Big Gift Underscores the Potential Impact for Donors". Inside Philanthropy, August 3, 2022. 17. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 20 April 2018. 18. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 21 April 2018. 19. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 21 April 2018. 20. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 21 April 2018. 21. "2015-2021 Mathical Award Winners". mathicalbooks.org. Mathematical Sciences Research Institute. Retrieved 18 June 2021. 22. "Dr. Tatiana Toro Named Next Director of the Mathematical Sciences Research Institute" (PDF). National Science Foundation. September 2021. Archived (PDF) from the original on 24 July 2022.
External links • Official website
Wikipedia
Simple-homotopy equivalence In mathematics, particularly the area of topology, a simple-homotopy equivalence is a refinement of the concept of homotopy equivalence. Two CW-complexes are simple-homotopy equivalent if they are related by a sequence of collapses and expansions (inverses of collapses), and a homotopy equivalence is a simple homotopy equivalence if it is homotopic to such a map. The obstruction to a homotopy equivalence being a simple homotopy equivalence is the Whitehead torsion, $\tau (f).$ A homotopy theory that studies simple-homotopy types is called simple homotopy theory. See also • Discrete Morse theory References • Cohen, Marshall M. (1973), A course in simple-homotopy theory, Berlin, New York: Springer-Verlag, ISBN 978-3-540-90055-9, MR 0362320
Wikipedia
Simple Lie algebra In algebra, a simple Lie algebra is a Lie algebra that is non-abelian and contains no nonzero proper ideals. The classification of real simple Lie algebras is one of the major achievements of Wilhelm Killing and Élie Cartan. A direct sum of simple Lie algebras is called a semisimple Lie algebra. A simple Lie group is a connected Lie group whose Lie algebra is simple. Complex simple Lie algebras Main article: root system A finite-dimensional simple complex Lie algebra is isomorphic to one of the following: ${\mathfrak {sl}}_{n}\mathbb {C} $, ${\mathfrak {so}}_{n}\mathbb {C} $, ${\mathfrak {sp}}_{2n}\mathbb {C} $ (classical Lie algebras) or one of the five exceptional Lie algebras.[1] To each finite-dimensional complex semisimple Lie algebra ${\mathfrak {g}}$, there is an associated diagram (called the Dynkin diagram) where the nodes denote the simple roots, the nodes are joined (or not joined) by a number of lines depending on the angles between the simple roots, and arrows are put to indicate whether the roots are longer or shorter.[2] The Dynkin diagram of ${\mathfrak {g}}$ is connected if and only if ${\mathfrak {g}}$ is simple. All possible connected Dynkin diagrams are the following:[3] the classical families (An), (Bn), (Cn), (Dn) and the exceptional diagrams (E6), (E7), (E8), (F4), (G2), where n is the number of the nodes (the simple roots). The correspondence of the diagrams and complex simple Lie algebras is as follows:[2] (An) $\quad {\mathfrak {sl}}_{n+1}\mathbb {C} $ (Bn) $\quad {\mathfrak {so}}_{2n+1}\mathbb {C} $ (Cn) $\quad {\mathfrak {sp}}_{2n}\mathbb {C} $ (Dn) $\quad {\mathfrak {so}}_{2n}\mathbb {C} $ The remaining diagrams correspond to the exceptional Lie algebras. Real simple Lie algebras If ${\mathfrak {g}}_{0}$ is a finite-dimensional real simple Lie algebra, its complexification is either (1) simple or (2) a product of a simple complex Lie algebra and its conjugate. For example, the complexification of ${\mathfrak {sl}}_{n}\mathbb {C} $ thought of as a real Lie algebra is ${\mathfrak {sl}}_{n}\mathbb {C} \times {\overline {{\mathfrak {sl}}_{n}\mathbb {C} }}$. Thus, a real simple Lie algebra can be classified by the classification of complex simple Lie algebras and some additional information. This can be done by Satake diagrams that generalize Dynkin diagrams.
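As a concrete illustration of the last point (a standard example, included here for orientation rather than drawn from the article's sources), the complex simple Lie algebra ${\mathfrak {sl}}_{2}\mathbb {C} $ has two non-isomorphic real forms:

${\mathfrak {su}}_{2}=\{X\in {\mathfrak {gl}}_{2}\mathbb {C} :X+X^{*}=0,\ \operatorname {tr} X=0\},\qquad {\mathfrak {sl}}_{2}\mathbb {R} =\{X\in {\mathfrak {gl}}_{2}\mathbb {R} :\operatorname {tr} X=0\},$

the compact and the split real form, respectively. Both complexify to ${\mathfrak {sl}}_{2}\mathbb {C} $, which is why the real classification must record more information (for instance via a Satake diagram) than the Dynkin diagram alone.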
See also Table of Lie groups#Real Lie algebras for a partial list of real simple Lie algebras. Notes 1. Fulton & Harris 1991, Theorem 9.26. 2. Fulton & Harris 1991, § 21.1. 3. Fulton & Harris 1991, § 21.2. See also • Simple Lie group • Vogel plane References • Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. • Jacobson, Nathan, Lie algebras, Republication of the 1962 original. Dover Publications, Inc., New York, 1979. ISBN 0-486-63832-4; Chapter X considers a classification of simple Lie algebras over a field of characteristic zero. • "Lie algebra, semi-simple", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Simple Lie algebra at the nLab
Wikipedia
N-group (finite group theory) In mathematical finite group theory, an N-group is a group all of whose local subgroups (that is, the normalizers of nontrivial p-subgroups) are solvable groups. The non-solvable ones were classified by Thompson during his work on finding all the minimal finite simple groups. Not to be confused with n-group (category theory). Simple N-groups The simple N-groups were classified by Thompson (1968, 1970, 1971, 1973, 1974, 1974b) in a series of 6 papers totaling about 400 pages. The simple N-groups consist of the special linear groups PSL2(q), PSL3(3), the Suzuki groups Sz(22n+1), the unitary group U3(3), the alternating group A7, the Mathieu group M11, and the Tits group. (The Tits group was overlooked in Thompson's original announcement in 1968, but Hearn pointed out that it was also a simple N-group.) More generally Thompson showed that any non-solvable N-group is a subgroup of Aut(G) containing G for some simple N-group G. Gorenstein & Lyons (1976) generalized Thompson's theorem to the case of groups where all 2-local subgroups are solvable. The only extra simple groups that appear are the unitary groups U3(q). Proof Gorenstein (1980, 16.5) gives a summary of Thompson's classification of N-groups. The primes dividing the order of the group are divided into four classes π1, π2, π3, π4 as follows: • π1 is the set of primes p such that a Sylow p-subgroup is nontrivial and cyclic. • π2 is the set of primes p such that a Sylow p-subgroup P is non-cyclic but SCN3(P) is empty. • π3 is the set of primes p such that a Sylow p-subgroup P has SCN3(P) nonempty and normalizes a nontrivial abelian subgroup of order prime to p. • π4 is the set of primes p such that a Sylow p-subgroup P has SCN3(P) nonempty but does not normalize a nontrivial abelian subgroup of order prime to p. The proof is subdivided into several cases depending on which of these four classes the prime 2 belongs to, and also on an integer e, which is the largest integer for which there is an elementary abelian subgroup of rank e normalized by a nontrivial 2-subgroup intersecting it trivially. • Thompson (1968) gives a general introduction, stating the main theorem and proving many preliminary lemmas. • Thompson (1970) characterizes the groups E2(3) and S4(3) (in Thompson's notation; these are the exceptional group G2(3) and the symplectic group Sp4(3)) which are not N-groups but whose characterizations are needed in the proof of the main theorem. • Thompson (1971) covers the case where 2∉π4. Theorem 11.2 shows that if 2∈π2 then the group is PSL2(q), M11, A7, U3(3), or PSL3(3). The possibility that 2∈π3 is ruled out by showing that any such group must be a C-group and using Suzuki's classification of C-groups to check that none of the groups found by Suzuki satisfy this condition. • Thompson (1973) and Thompson (1974) cover the cases when 2∈π4 and e≥3, or e=2. He shows that either G is a C-group, and so a Suzuki group, or satisfies his characterization of the groups E2(3) and S4(3) in his second paper, which are not N-groups. • Thompson (1974b) covers the case when 2∈π4 and e=1, where the only possibilities are that G is a C-group or the Tits group. Consequences A minimal simple group is a non-cyclic simple group all of whose proper subgroups are solvable. The complete list of minimal finite simple groups is given as follows (Thompson 1968, corollary 1): • PSL2(2p), p a prime. • PSL2(3p), p an odd prime. • PSL2(p), p > 3 a prime congruent to 2 or 3 mod 5 • Sz(2p), p an odd prime.
• PSL3(3) In other words a non-cyclic finite simple group must have a subquotient isomorphic to one of these groups. References • Gorenstein, D.; Lyons, Richard (1976), "Nonsolvable finite groups with solvable 2-local subgroups", Journal of Algebra, 38 (2): 453–522, doi:10.1016/0021-8693(76)90233-7, ISSN 0021-8693, MR 0407128 • Gorenstein, D. (1980), Finite Groups, New York: Chelsea, ISBN 978-0-8284-0301-6, MR 0569209 • Thompson, John G. (1968), "Nonsolvable finite groups all of whose local subgroups are solvable", Bulletin of the American Mathematical Society, 74 (3): 383–437, doi:10.1090/S0002-9904-1968-11953-6, ISSN 0002-9904, MR 0230809 • Thompson, John G. (1970), "Nonsolvable finite groups all of whose local subgroups are solvable. II", Pacific Journal of Mathematics, 33 (2): 451–536, doi:10.2140/pjm.1970.33.451, ISSN 0030-8730, MR 0276325 • Thompson, John G. (1971), "Nonsolvable finite groups all of whose local subgroups are solvable. III", Pacific Journal of Mathematics, 39 (2): 483–534, doi:10.2140/pjm.1971.39.483, ISSN 0030-8730, MR 0313378 • Thompson, John G. (1973), "Nonsolvable finite groups all of whose local subgroups are solvable. IV", Pacific Journal of Mathematics, 48 (2): 511–592, doi:10.2140/pjm.1973.48.511, ISSN 0030-8730, MR 0369512 • Thompson, John G. (1974), "Nonsolvable finite groups all of whose local subgroups are solvable. V", Pacific Journal of Mathematics, 50: 215–297, doi:10.2140/pjm.1974.50.215, ISSN 0030-8730, MR 0369512 • Thompson, John G. (1974b), "Nonsolvable finite groups all of whose local subgroups are solvable. VI", Pacific Journal of Mathematics, 51 (2): 573–630, doi:10.2140/pjm.1974.51.573, ISSN 0030-8730, MR 0369512
Wikipedia
Cycle (graph theory) In graph theory, a cycle in a graph is a non-empty trail in which only the first and last vertices are equal. A directed cycle in a directed graph is a non-empty directed trail in which only the first and last vertices are equal. A graph without cycles is called an acyclic graph. A directed graph without directed cycles is called a directed acyclic graph. A connected graph without cycles is called a tree. Definitions Circuit and cycle • A circuit is a non-empty trail in which the first and last vertices are equal (closed trail).[1] Let G = (V, E, ϕ) be a graph. A circuit is a non-empty trail (e1, e2, …, en) with a vertex sequence (v1, v2, …, vn, v1). • A cycle or simple circuit is a circuit in which only the first and last vertices are equal.[1] Directed circuit and directed cycle • A directed circuit is a non-empty directed trail in which the first and last vertices are equal (closed directed trail).[1] Let G = (V, E, ϕ) be a directed graph. A directed circuit is a non-empty directed trail (e1, e2, …, en) with a vertex sequence (v1, v2, …, vn, v1). • A directed cycle or simple directed circuit is a directed circuit in which only the first and last vertices are equal.[1] Chordless cycle A chordless cycle in a graph, also called a hole or an induced cycle, is a cycle such that no two vertices of the cycle are connected by an edge that does not itself belong to the cycle. An antihole is the complement of a graph hole. Chordless cycles may be used to characterize perfect graphs: by the strong perfect graph theorem, a graph is perfect if and only if none of its holes or antiholes have an odd number of vertices that is greater than three. A chordal graph, a special type of perfect graph, has no holes of any size greater than three. The girth of a graph is the length of its shortest cycle; this cycle is necessarily chordless. Cages are defined as the smallest regular graphs with given combinations of degree and girth. A peripheral cycle is a cycle in a graph with the property that every two edges not on the cycle can be connected by a path whose interior vertices avoid the cycle. In a graph that is not formed by adding one edge to a cycle, a peripheral cycle must be an induced cycle. Cycle space The term cycle may also refer to an element of the cycle space of a graph. There are many cycle spaces, one for each coefficient field or ring. The most common is the binary cycle space (usually called simply the cycle space), which consists of the edge sets that have even degree at every vertex; it forms a vector space over the two-element field. By Veblen's theorem, every element of the cycle space may be formed as an edge-disjoint union of simple cycles. A cycle basis of the graph is a set of simple cycles that forms a basis of the cycle space.[2] Using ideas from algebraic topology, the binary cycle space generalizes to vector spaces or modules over other rings such as the integers, rational or real numbers, etc.[3] Cycle detection The existence of a cycle in directed and undirected graphs can be determined by whether a depth-first search (DFS) finds an edge that points to an ancestor of the current vertex (i.e., it contains a back edge).[4] All the back edges which DFS skips over are part of cycles.[5] In an undirected graph, the edge to the parent of a node should not be counted as a back edge, but finding any other already visited vertex will indicate a back edge. 
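As an illustration of the back-edge criterion just described, the following is a minimal self-contained C# sketch (the names CycleDetector, HasCycleUndirected and Dfs are illustrative, not from any standard library). It reports whether an undirected graph, given as adjacency lists over vertices 0..n-1, contains a cycle.

using System;
using System.Collections.Generic;

class CycleDetector
{
    // adjacency[v] lists the neighbours of vertex v; vertices are numbered 0..n-1
    static bool HasCycleUndirected(List<int>[] adjacency)
    {
        int n = adjacency.Length;
        bool[] visited = new bool[n];
        for (int start = 0; start < n; start++) // the graph may be disconnected
        {
            if (!visited[start] && Dfs(adjacency, visited, start, -1))
                return true;
        }
        return false;
    }

    // Depth-first search from v; 'parent' is the vertex the search arrived from,
    // so the tree edge back to it is not counted as a back edge.
    static bool Dfs(List<int>[] adjacency, bool[] visited, int v, int parent)
    {
        visited[v] = true;
        foreach (int w in adjacency[v])
        {
            if (w == parent) continue;   // skip the edge we arrived by
            if (visited[w]) return true; // back edge found: the graph has a cycle
            if (Dfs(adjacency, visited, w, v)) return true;
        }
        return false;
    }

    static void Main()
    {
        // Triangle 0-1-2 with a pendant vertex 3 attached to 2: contains a cycle.
        List<int>[] adjacency =
        {
            new List<int> { 1, 2 },    // neighbours of 0
            new List<int> { 0, 2 },    // neighbours of 1
            new List<int> { 0, 1, 3 }, // neighbours of 2
            new List<int> { 2 }        // neighbours of 3
        };
        Console.WriteLine(HasCycleUndirected(adjacency) ? "Cycle found" : "Acyclic");
    }
}

Because the parent vertex is skipped, the two-vertex walk back and forth along a single edge is not reported as a cycle; note that this parent check assumes a simple graph, i.e., one without parallel edges.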
In the case of undirected graphs, only O(n) time is required to find a cycle in an n-vertex graph, since at most n − 1 edges can be tree edges. Many topological sorting algorithms will detect cycles too, since those are obstacles for topological order to exist. Also, if a directed graph has been divided into strongly connected components, cycles only exist within the components and not between them, since cycles are strongly connected.[5] For directed graphs, distributed message-based algorithms can be used. These algorithms rely on the idea that a message sent by a vertex in a cycle will come back to itself. Distributed cycle detection algorithms are useful for processing large-scale graphs using a distributed graph processing system on a computer cluster (or supercomputer). Applications of cycle detection include the use of wait-for graphs to detect deadlocks in concurrent systems.[6] Algorithm For every vertex v: visited(v) = false, finished(v) = false For every vertex v: DFS(v) DFS(v): if finished(v) return if visited(v) "Cycle found" and return visited(v) = true for every neighbour w DFS(w) finished(v) = true For undirected graphs, "neighbour" means all vertices connected to v, except for the one that recursively called DFS(v). This prevents the algorithm from finding trivial cycles, which exist in every undirected graph that has at least one edge. Programming The following example in the Programming language C# shows one implementation of an undirected graph using Adjacency lists. The undirected graph is declared as class UndirectedGraph. Executing the program uses the Main method, which - if one exists - prints the shortest, non-trivial cycle to the console.[7] using System; using System.Collections.Generic; // Declares the class for the vertices of the graph class Node { public int index; public string value; public HashSet<Node> adjacentNodes = new HashSet<Node>(); // Set of neighbour vertices } // Declares the class for the undirected graph class UndirectedGraph { public HashSet<Node> nodes = new HashSet<Node>(); // This method connects node1 and node2 with each other public void ConnectNodes(Node node1, Node node2) { node1.adjacentNodes.Add(node2); node2.adjacentNodes.Add(node1); } } class Program { // This method returns the cycle in the form A, B, C, ... as text. 
public static string ToString(List<Node> cycle) { string text = ""; for (int i = 0; i < cycle.Count; i++) // for-loop, iterating the vertices { text += cycle[i].value + ", "; } text = text.Substring(0, text.Length - 2); return text; } // Main method executing the program public static void Main(string[] args) { // Declares and initialises 5 vertices Node node1 = new Node{index = 0, value = "A"}; Node node2 = new Node{index = 1, value = "B"}; Node node3 = new Node{index = 2, value = "C"}; Node node4 = new Node{index = 3, value = "D"}; // Declares and initialises an array holding the vertices Node[] nodes = {node1, node2, node3, node4}; // Creates an undirected graph UndirectedGraph undirectedGraph = new UndirectedGraph(); int numberOfNodes = nodes.Length; for (int i = 0; i < numberOfNodes; i++) // for-loop, iterating all vertices { undirectedGraph.nodes.Add(nodes[i]); // Adds the vertices to the graph } // Connects the vertices of the graph with each other undirectedGraph.ConnectNodes(node1, node1); undirectedGraph.ConnectNodes(node1, node2); undirectedGraph.ConnectNodes(node2, node3); undirectedGraph.ConnectNodes(node3, node1); undirectedGraph.ConnectNodes(node3, node4); undirectedGraph.ConnectNodes(node4, node1); HashSet<Node> newNodes = new HashSet<Node>(nodes); // Set of new vertices to iterate HashSet<List<Node>> paths = new HashSet<List<Node>>(); // Set of current paths for (int i = 0; i < numberOfNodes; i++) // for-loop, iterating all vertices of the graph { Node node = nodes[i]; newNodes.Add(node); // Add the vertex to the set of new vertices to iterate List<Node> path = new List<Node>(); path.Add(node); paths.Add(path); // Adds a path for each node as a starting vertex } HashSet<List<Node>> shortestCycles = new HashSet<List<Node>>(); // Set of shortest cycles int lengthOfCycles = 0; // Length of shortest cycles bool cyclesAreFound = false; // Whether or not cycles were found at all while (!cyclesAreFound && newNodes.Count > 0) // As long as we still had vertices to iterate { newNodes.Clear(); // Empties the set of nodes to iterate HashSet<List<Node>> newPaths = new HashSet<List<Node>>(); // Set of newly found paths foreach (List<Node> path in paths) // foreach-loop, iterating all current paths { Node lastNode = path[path.Count - 1]; newNodes.Add(lastNode); // Adds the final vertex of the path to the list of vertices to iterate foreach (Node nextNode in lastNode.adjacentNodes) // foreach-loop, iterating all neighbours of the previous node { if (path.Count >= 3 && path[0] == nextNode) // If a cycle with length greater or equal 3 was found { cyclesAreFound = true; shortestCycles.Add(path); // Adds the path to the set of cycles lengthOfCycles = path.Count; } if (!path.Contains(nextNode)) // If the path doesn't contain the neighbour { newNodes.Add(nextNode); // Adds the neighbour to the set of vertices to iterate // Creates a new path List<Node> newPath = new List<Node>(); newPath.AddRange(path); // Adds the current path's vertex to the new path in the correct order newPath.Add(nextNode); // Adds the neighbour to the new path newPaths.Add(newPath); // Adds the path to the set of newly found paths } } } paths = newPaths; // Updates the set of current paths } if (shortestCycles.Count > 0) // If cycles were found { Console.WriteLine("The graph contains " + shortestCycles.Count + " cycles of length " + lengthOfCycles + "."); // Print to console foreach (List<Node> cycle in shortestCycles) // foreach-loop, iterating all found cycles { Console.WriteLine(ToString(cycle)); // Print to console } 
} else { Console.WriteLine("The graph contains no cycles."); // Print to console } Console.ReadLine(); } } Covering graphs by cycle In his 1736 paper on the Seven Bridges of Königsberg, widely considered to be the birth of graph theory, Leonhard Euler proved that, for a finite undirected graph to have a closed walk that visits each edge exactly once (making it a closed trail), it is necessary and sufficient that it be connected except for isolated vertices (that is, all edges are contained in one component) and have even degree at each vertex. The corresponding characterization for the existence of a closed walk visiting each edge exactly once in a directed graph is that the graph be strongly connected and have equal numbers of incoming and outgoing edges at each vertex. In either case, the resulting closed trail is known as an Eulerian trail. If a finite undirected graph has even degree at each of its vertices, regardless of whether it is connected, then it is possible to find a set of simple cycles that together cover each edge exactly once: this is Veblen's theorem.[8] When a connected graph does not meet the conditions of Euler's theorem, a closed walk of minimum length covering each edge at least once can nevertheless be found in polynomial time by solving the route inspection problem. The problem of finding a single simple cycle that covers each vertex exactly once, rather than covering the edges, is much harder. Such a cycle is known as a Hamiltonian cycle, and determining whether it exists is NP-complete.[9] Much research has been published concerning classes of graphs that can be guaranteed to contain Hamiltonian cycles; one example is Ore's theorem that a Hamiltonian cycle can always be found in a graph for which every non-adjacent pair of vertices have degrees summing to at least the total number of vertices in the graph.[10] The cycle double cover conjecture states that, for every bridgeless graph, there exists a multiset of simple cycles that covers each edge of the graph exactly twice. Proving that this is true (or finding a counterexample) remains an open problem.[11] Graph classes defined by cycle Several important classes of graphs can be defined by or characterized by their cycles. These include: • Bipartite graph, a graph without odd cycles (cycles with an odd number of vertices) • Cactus graph, a graph in which every nontrivial biconnected component is a cycle • Cycle graph, a graph that consists of a single cycle • Chordal graph, a graph in which every induced cycle is a triangle • Directed acyclic graph, a directed graph with no directed cycles • Line perfect graph, a graph in which every odd cycle is a triangle • Perfect graph, a graph with no induced cycles or their complements of odd length greater than three • Pseudoforest, a graph in which each connected component has at most one cycle • Strangulated graph, a graph in which every peripheral cycle is a triangle • Strongly connected graph, a directed graph in which every edge is part of a cycle • Triangle-free graph, a graph without three-vertex cycles • Even-cycle-free graph, a graph without even cycles • Even-hole-free graph, a graph without even cycles of length larger or equal to 6 See also • Cycle space • Cycle basis • Cycle detection in a sequence of iterated function values References 1. Bender & Williamson 2010, p. 164. 2. Gross, Jonathan L.; Yellen, Jay (2005), "4.6 Graphs and Vector Spaces", Graph Theory and Its Applications (2nd ed.), CRC Press, pp. 
When a connected graph does not meet the conditions of Euler's theorem, a closed walk of minimum length covering each edge at least once can nevertheless be found in polynomial time by solving the route inspection problem. The problem of finding a single simple cycle that covers each vertex exactly once, rather than covering the edges, is much harder. Such a cycle is known as a Hamiltonian cycle, and determining whether it exists is NP-complete.[9] Much research has been published concerning classes of graphs that can be guaranteed to contain Hamiltonian cycles; one example is Ore's theorem, which states that a Hamiltonian cycle can always be found in a graph in which every non-adjacent pair of vertices have degrees summing to at least the total number of vertices in the graph.[10]

The cycle double cover conjecture states that, for every bridgeless graph, there exists a multiset of simple cycles that covers each edge of the graph exactly twice. Proving that this is true (or finding a counterexample) remains an open problem.[11]

Graph classes defined by cycles

Several important classes of graphs can be defined by or characterized by their cycles. These include:
• Bipartite graph, a graph without odd cycles (cycles with an odd number of vertices)
• Cactus graph, a graph in which every nontrivial biconnected component is a cycle
• Cycle graph, a graph that consists of a single cycle
• Chordal graph, a graph in which every induced cycle is a triangle
• Directed acyclic graph, a directed graph with no directed cycles
• Line perfect graph, a graph in which every odd cycle is a triangle
• Perfect graph, a graph with no induced cycles or their complements of odd length greater than three
• Pseudoforest, a graph in which each connected component has at most one cycle
• Strangulated graph, a graph in which every peripheral cycle is a triangle
• Strongly connected graph, a directed graph in which every edge is part of a cycle
• Triangle-free graph, a graph without three-vertex cycles
• Even-cycle-free graph, a graph without even cycles
• Even-hole-free graph, a graph without even cycles of length at least 6

See also
• Cycle space
• Cycle basis
• Cycle detection in a sequence of iterated function values

References
1. Bender & Williamson 2010, p. 164. 2. Gross, Jonathan L.; Yellen, Jay (2005), "4.6 Graphs and Vector Spaces", Graph Theory and Its Applications (2nd ed.), CRC Press, pp. 197–207, ISBN 9781584885054, archived from the original on 2023-02-04, retrieved 2016-09-27. 3. Diestel, Reinhard (2012), "1.9 Some linear algebra", Graph Theory, Graduate Texts in Mathematics, vol. 173, Springer, pp. 23–28, archived from the original on 2023-02-04, retrieved 2016-09-27. 4. Tucker, Alan (2006). "Chapter 2: Covering Circuits and Graph Colorings". Applied Combinatorics (5th ed.). Hoboken: John Wiley & Sons. p. 49. ISBN 978-0-471-73507-6. 5. Sedgewick, Robert (1983), "Graph algorithms", Algorithms, Addison–Wesley, ISBN 0-201-06672-6 6. Silberschatz, Abraham; Peter Galvin; Greg Gagne (2003). Operating System Concepts. John Wiley & Sons. p. 260. ISBN 0-471-25060-0. 7. GeeksforGeeks: Shortest cycle in an undirected unweighted graph Archived 2022-01-11 at the Wayback Machine 8. Veblen, Oswald (1912), "An Application of Modular Equations in Analysis Situs", Annals of Mathematics, Second Series, 14 (1): 86–94, doi:10.2307/1967604, JSTOR 1967604. 9. Richard M. Karp (1972), "Reducibility Among Combinatorial Problems" (PDF), in R. E. Miller and J. W. Thatcher (ed.), Complexity of Computer Computations, New York: Plenum, pp. 85–103, archived (PDF) from the original on 2021-02-10, retrieved 2014-03-12. 10. Ore, Ø. (1960), "Note on Hamilton circuits", American Mathematical Monthly, 67 (1): 55, doi:10.2307/2308928, JSTOR 2308928. 11. Jaeger, F. (1985), "A survey of the cycle double cover conjecture", Annals of Discrete Mathematics 27 – Cycles in Graphs, North-Holland Mathematics Studies, vol. 27, pp. 1–12, doi:10.1016/S0304-0208(08)72993-1.
• Balakrishnan, V. K. (2005). Schaum's Outline of Theory and Problems of Graph Theory ([Nachdr.] ed.). McGraw–Hill. ISBN 978-0070054899. • Bender, Edward A.; Williamson, S. Gill (2010). Lists, Decisions and Graphs. With an Introduction to Probability.
Simple function In the mathematical field of real analysis, a simple function is a real (or complex)-valued function over a subset of the real line, similar to a step function. Simple functions are sufficiently "nice" that using them makes mathematical reasoning, theory, and proof easier. For example, simple functions attain only a finite number of values. Some authors also require simple functions to be measurable; as used in practice, they invariably are. A basic example of a simple function is the floor function over the half-open interval [1, 9), whose only values are {1, 2, 3, 4, 5, 6, 7, 8}. A more advanced example is the Dirichlet function over the real line, which takes the value 1 if x is rational and 0 otherwise. (Thus the "simple" of "simple function" has a technical meaning somewhat at odds with common language.) All step functions are simple. Simple functions are used as a first stage in the development of theories of integration, such as the Lebesgue integral, because it is easy to define integration for a simple function and also it is straightforward to approximate more general functions by sequences of simple functions. Definition Formally, a simple function is a finite linear combination of indicator functions of measurable sets. More precisely, let (X, Σ) be a measurable space. Let A1, ..., An ∈ Σ be a sequence of disjoint measurable sets, and let a1, ..., an be a sequence of real or complex numbers. A simple function is a function $f:X\to \mathbb {C} $ of the form $f(x)=\sum _{k=1}^{n}a_{k}{\mathbf {1} }_{A_{k}}(x),$ where ${\mathbf {1} }_{A}$ is the indicator function of the set A. Properties of simple functions The sum, difference, and product of two simple functions are again simple functions, and multiplication by constant keeps a simple function simple; hence it follows that the collection of all simple functions on a given measurable space forms a commutative algebra over $\mathbb {C} $. Integration of simple functions If a measure μ is defined on the space (X,Σ), the integral of f with respect to μ is $\sum _{k=1}^{n}a_{k}\mu (A_{k}),$ if all summands are finite. Relation to Lebesgue integration The above integral of simple functions can be extended to a more general class of functions, which is how the Lebesgue integral is defined. This extension is based on the following fact. Theorem. Any non-negative measurable function $f\colon X\to \mathbb {R} ^{+}$ is the pointwise limit of a monotonic increasing sequence of non-negative simple functions. It is implied in the statement that the sigma-algebra in the co-domain $\mathbb {R} ^{+}$ is the restriction of the Borel σ-algebra ${\mathfrak {B}}(\mathbb {R} )$ to $\mathbb {R} ^{+}$. The proof proceeds as follows. Let $f$ be a non-negative measurable function defined over the measure space $(X,\Sigma ,\mu )$. For each $n\in \mathbb {N} $, subdivide the co-domain of $f$ into $2^{2n}+1$ intervals, $2^{2n}$ of which have length $2^{-n}$. That is, for each $n$, define $I_{n,k}=\left[{\frac {k-1}{2^{n}}},{\frac {k}{2^{n}}}\right)$ for $k=1,2,\ldots ,2^{2n}$, and $I_{n,2^{2n}+1}=[2^{n},\infty )$, which are disjoint and cover the non-negative real line ($\mathbb {R} ^{+}\subseteq \cup _{k}I_{n,k},\forall n\in \mathbb {N} $). Now define the sets $A_{n,k}=f^{-1}(I_{n,k})\,$ for $k=1,2,\ldots ,2^{2n}+1,$ which are measurable ($A_{n,k}\in \Sigma $) because $f$ is assumed to be measurable. 
Then the increasing sequence of simple functions $f_{n}=\sum _{k=1}^{2^{2n}+1}{\frac {k-1}{2^{n}}}{\mathbf {1} }_{A_{n,k}}$ converges pointwise to $f$ as $n\to \infty $. Note that, when $f$ is bounded, the convergence is uniform.
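The construction is easy to compute directly: on $I_{n,k}$ the approximant $f_{n}$ takes the value $(k-1)/2^{n}$, which is just $f(x)$ rounded down to the nearest multiple of $2^{-n}$ and capped at $2^{n}$ (the value assigned on the unbounded top interval). A minimal numeric sketch in C#, with illustrative names:

    // Evaluates the n-th simple approximant from the construction above
    // for a non-negative function f: round f(x) down to a multiple of 2^-n,
    // capping the result at 2^n.
    static double SimpleApproximant(Func<double, double> f, double x, int n)
    {
        double scale = Math.Pow(2, n);
        double y = f(x);
        return y >= scale ? scale : Math.Floor(y * scale) / scale;
    }

For a bounded f, once $2^{n}$ exceeds its supremum the cap never triggers and the error is at most $2^{-n}$ everywhere, which is the uniform convergence just noted.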
See also
• Bochner measurable function

References
• J. F. C. Kingman, S. J. Taylor. Introduction to Measure and Probability, 1966, Cambridge. • S. Lang. Real and Functional Analysis, 1993, Springer-Verlag. • W. Rudin. Real and Complex Analysis, 1987, McGraw-Hill. • H. L. Royden. Real Analysis, 1968, Collier Macmillan.
Simple group

In mathematics, a simple group is a nontrivial group whose only normal subgroups are the trivial group and the group itself. A group that is not simple can be broken into two smaller groups, namely a nontrivial normal subgroup and the corresponding quotient group. This process can be repeated, and for finite groups one eventually arrives at uniquely determined simple groups, by the Jordan–Hölder theorem. The complete classification of finite simple groups, completed in 2004, is a major milestone in the history of mathematics.

Examples

Finite simple groups

The cyclic group $G=(\mathbb {Z} /3\mathbb {Z} ,+)=\mathbb {Z} _{3}$ of congruence classes modulo 3 (see modular arithmetic) is simple. If $H$ is a subgroup of this group, its order (the number of elements) must be a divisor of the order of $G$, which is 3. Since 3 is prime, its only divisors are 1 and 3, so either $H$ is $G$, or $H$ is the trivial group. On the other hand, the group $G=(\mathbb {Z} /12\mathbb {Z} ,+)=\mathbb {Z} _{12}$ is not simple. The set $H$ of congruence classes of 0, 4, and 8 modulo 12 is a subgroup of order 3, and it is a normal subgroup since any subgroup of an abelian group is normal. Similarly, the additive group of the integers $(\mathbb {Z} ,+)$ is not simple; the set of even integers is a non-trivial proper normal subgroup.[1]

One may use the same kind of reasoning for any abelian group, to deduce that the only simple abelian groups are the cyclic groups of prime order. The classification of nonabelian simple groups is far less trivial. The smallest nonabelian simple group is the alternating group $A_{5}$ of order 60, and every simple group of order 60 is isomorphic to $A_{5}$.[2] The second smallest nonabelian simple group is the projective special linear group PSL(2,7) of order 168, and every simple group of order 168 is isomorphic to PSL(2,7).[3][4]
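The cyclic case is concrete enough to compute with: in $\mathbb {Z} _{n}$ the subgroup generated by $g$ consists of the multiples of $\gcd(g,n)$, every subgroup arises this way (one for each divisor of $n$), and all of them are normal because the group is abelian, so $\mathbb {Z} _{n}$ is simple exactly when $n$ is prime. A short illustrative sketch in C# (names are not from any particular library; assumes n ≥ 1):

    // The subgroup of Z_n generated by g: the multiples of gcd(g, n).
    // For n = 12, g = 4 this returns {0, 4, 8}, the normal subgroup named above.
    static List<int> SubgroupGeneratedBy(int n, int g)
    {
        int step = Gcd(g, n); // gcd(0, n) = n, so g = 0 gives the trivial subgroup {0}
        var subgroup = new List<int>();
        for (int x = 0; x < n; x += step)
            subgroup.Add(x);
        return subgroup;
    }

    // Z_n is simple exactly when n is prime: any divisor d with 1 < d < n
    // yields a proper nontrivial (normal) subgroup of order n/d.
    static bool CyclicGroupIsSimple(int n)
    {
        if (n < 2) return false;
        for (int d = 2; d < n; d++)
            if (n % d == 0)
                return false;
        return true;
    }

    static int Gcd(int a, int b) => b == 0 ? a : Gcd(b, a % b);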
Infinite simple groups

The infinite alternating group $A_{\infty }$, i.e. the group of even finitely supported permutations of the integers, is simple. This group can be written as the increasing union of the finite simple groups $A_{n}$ with respect to the standard embeddings $A_{n}\rightarrow A_{n+1}$. Another family of examples of infinite simple groups is given by $PSL_{n}(F)$, where $F$ is an infinite field and $n\geq 2$.

It is much more difficult to construct finitely generated infinite simple groups. The first existence result is non-explicit; it is due to Graham Higman and consists of simple quotients of the Higman group.[5] Explicit examples, which turn out to be finitely presented, include the infinite Thompson groups $T$ and $V$. Finitely presented torsion-free infinite simple groups were constructed by Burger and Mozes.[6]

Classification

There is as yet no known classification for general (infinite) simple groups, and no such classification is expected.

Finite simple groups

Main article: List of finite simple groups
Further information: Classification of finite simple groups

The finite simple groups are important because in a certain sense they are the "basic building blocks" of all finite groups, somewhat similar to the way prime numbers are the basic building blocks of the integers. This is expressed by the Jordan–Hölder theorem, which states that any two composition series of a given group have the same length and the same factors, up to permutation and isomorphism. In a huge collaborative effort, the classification of finite simple groups was declared accomplished in 1983 by Daniel Gorenstein, though some problems surfaced, specifically in the classification of quasithin groups, where the gap was filled in 2004. Briefly, finite simple groups are classified as lying in one of 18 families, or being one of 26 exceptions:
• $\mathbb {Z} _{p}$ – cyclic group of prime order
• $A_{n}$ – alternating group for $n\geq 5$. The alternating groups may be considered as groups of Lie type over the field with one element, which unites this family with the next, and thus all families of non-abelian finite simple groups may be considered to be of Lie type.
• One of 16 families of groups of Lie type. The Tits group is generally considered of this form, though strictly speaking it is not of Lie type, but rather index 2 in a group of Lie type.
• One of 26 exceptions, the sporadic groups, of which 20 are subgroups or subquotients of the monster group and are referred to as the "Happy Family", while the remaining 6 are referred to as pariahs.

Structure of finite simple groups

The famous theorem of Feit and Thompson states that every group of odd order is solvable. Therefore, every finite simple group has even order unless it is cyclic of prime order. The Schreier conjecture asserts that the group of outer automorphisms of every finite simple group is solvable. This can be proved using the classification theorem.

History for finite simple groups

There are two threads in the history of finite simple groups – the discovery and construction of specific simple groups and families, which took place from the work of Galois in the 1820s to the construction of the Monster in 1981; and proof that this list was complete, which began in the 19th century, most significantly took place from 1955 through 1983 (when victory was initially declared), but was only generally agreed to be finished in 2004. As of 2010, work on improving the proofs and understanding continues; see (Silvestri 1979) for 19th-century history of simple groups.
Construction

Simple groups have been studied at least since early Galois theory, where Évariste Galois realized that the simplicity of the alternating groups on five or more points, which he proved in 1831 and which implies that they are not solvable, was the reason one could not solve the quintic in radicals. Galois also constructed the projective special linear group of a plane over a prime finite field, PSL(2,p), and remarked that these groups are simple for p not 2 or 3. This is contained in his last letter to Chevalier,[7] and these groups are the next examples of finite simple groups.[8]

The next discoveries were by Camille Jordan in 1870.[9] Jordan had found 4 families of simple matrix groups over finite fields of prime order, which are now known as the classical groups. At about the same time, it was shown that a family of five groups, called the Mathieu groups and first described by Émile Léonard Mathieu in 1861 and 1873, were also simple. Since these five groups were constructed by methods which did not yield infinitely many possibilities, they were called "sporadic" by William Burnside in his 1897 textbook.

Later Jordan's results on classical groups were generalized to arbitrary finite fields by Leonard Dickson, following the classification of complex simple Lie algebras by Wilhelm Killing. Dickson also constructed exceptional groups of types G2 and E6, but not of types F4, E7, or E8 (Wilson 2009, p. 2). In the 1950s the work on groups of Lie type was continued, with Claude Chevalley giving a uniform construction of the classical groups and the groups of exceptional type in a 1955 paper. This omitted certain known groups (the projective unitary groups), which were obtained by "twisting" the Chevalley construction. The remaining groups of Lie type were produced by Steinberg, Tits, and Herzig (who produced 3D4(q) and 2E6(q)) and by Suzuki and Ree (the Suzuki–Ree groups).

These groups (the groups of Lie type, together with the cyclic groups, alternating groups, and the five exceptional Mathieu groups) were believed to be a complete list, but after a lull of almost a century since the work of Mathieu, in 1964 the first Janko group was discovered, and the remaining 20 sporadic groups were discovered or conjectured in 1965–1975, culminating in 1981, when Robert Griess announced that he had constructed Bernd Fischer's "Monster group". The Monster is the largest sporadic simple group, having order 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000. The Monster has a faithful 196,883-dimensional representation in the 196,884-dimensional Griess algebra, meaning that each element of the Monster can be expressed as a 196,883 by 196,883 matrix.

Classification

The full classification is generally accepted as starting with the Feit–Thompson theorem of 1962–63, largely lasting until 1983, but only being finished in 2004. Soon after the construction of the Monster in 1981, a proof totaling more than 10,000 pages was supplied that group theorists had successfully listed all finite simple groups, with victory declared in 1983 by Daniel Gorenstein. This was premature – some gaps were later discovered, notably in the classification of quasithin groups; these gaps were eventually closed in 2004 by a 1,300-page classification of quasithin groups, and the proof is now generally accepted as complete.

Tests for nonsimplicity

Sylow's test: Let n be a positive integer that is not prime, and let p be a prime divisor of n. If 1 is the only divisor of n that is congruent to 1 modulo p, then there does not exist a simple group of order n.

Proof: If n is a prime power, then a group of order n has a nontrivial center[10] and, therefore, is not simple. If n is not a prime power, then every Sylow subgroup is proper, and, by Sylow's Third Theorem, we know that the number of Sylow p-subgroups of a group of order n is equal to 1 modulo p and divides n. Since 1 is the only such number, the Sylow p-subgroup is unique, and therefore it is normal. Since it is a proper, non-identity subgroup, the group is not simple.
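The test is mechanical enough to run as a search over small orders. A direct sketch in C# (function names are illustrative):

    // Sylow's test, as stated above: for composite n, if some prime divisor p
    // of n admits no divisor of n other than 1 that is congruent to 1 mod p,
    // then no simple group of order n exists.
    static bool SylowTestRulesOut(int n)
    {
        if (n < 2 || IsPrime(n)) return false; // the test applies only to composite n
        for (int p = 2; p <= n; p++)
        {
            if (!IsPrime(p) || n % p != 0) continue;
            bool onlyTrivial = true;
            for (int d = 2; d <= n; d++)
                if (n % d == 0 && d % p == 1)
                    onlyTrivial = false; // another candidate count of Sylow p-subgroups
            if (onlyTrivial)
                return true; // the Sylow p-subgroup is unique, hence normal
        }
        return false;
    }

    static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int d = 2; (long)d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

Running this over small n rules out most composite orders; orders that survive the test, such as 60, are the only remaining candidates for nonabelian simple groups, though surviving the test is not sufficient for one to exist.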
Burnside's test: A non-abelian finite simple group has order divisible by at least three distinct primes. This follows from Burnside's theorem.

See also
• Almost simple group
• Characteristically simple group
• Quasisimple group
• Semisimple group
• List of finite simple groups

References

Notes
1. Knapp (2006), p. 170 2. Rotman (1995), p. 226 3. Rotman (1995), p. 281 4. Smith & Tabachnikova (2000), p. 144 5. Higman, Graham (1951), "A finitely generated infinite simple group", Journal of the London Mathematical Society, Second Series, 26 (1): 61–64, doi:10.1112/jlms/s1-26.1.59, ISSN 0024-6107, MR 0038348 6. Burger, M.; Mozes, S. (2000). "Lattices in product of trees". Publ. Math. IHÉS. 92: 151–194. doi:10.1007/bf02698916. 7. Galois, Évariste (1846), "Lettre de Galois à M. Auguste Chevalier", Journal de Mathématiques Pures et Appliquées, XI: 408–415, retrieved 2009-02-04; PSL(2,p) and simplicity discussed on p. 411; exceptional action on 5, 7, or 11 points discussed on pp. 411–412; GL(ν,p) discussed on p. 410 8. Wilson, Robert (October 31, 2006), "Chapter 1: Introduction", The finite simple groups 9. Jordan, Camille (1870), Traité des substitutions et des équations algébriques 10. See the proof in p-group, for instance.

Textbooks
• Wilson, Robert A. (2009), The finite simple groups, Graduate Texts in Mathematics, vol. 251, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-84800-988-2, ISBN 978-1-84800-987-5, Zbl 1203.20012, 2007 preprint. • Burnside, William (1897), Theory of groups of finite order, Cambridge University Press • Knapp, Anthony W. (2006), Basic algebra, Springer, ISBN 978-0-8176-3248-9 • Rotman, Joseph J. (1995), An introduction to the theory of groups, Graduate Texts in Mathematics, vol. 148, Springer, ISBN 978-0-387-94285-8 • Smith, Geoff; Tabachnikova, Olga (2000), Topics in group theory, Springer Undergraduate Mathematics Series (2nd ed.), Springer, ISBN 978-1-85233-235-8

Papers
• Silvestri, R. (September 1979), "Simple groups of finite order in the nineteenth century", Archive for History of Exact Sciences, 20 (3–4): 313–356, doi:10.1007/BF00327738
Simple magic cube

A simple magic cube is the lowest of six basic classes of magic cubes. These classes are distinguished by the extra features each requires. A simple magic cube requires only the basic features any cube requires to be magic: all lines parallel to the faces and all 4 space diagonals must sum correctly,[1] i.e. all "1-agonals" and all "3-agonals" sum to $S={\frac {m(m^{3}+1)}{2}}.$ No planar diagonals (2-agonals) are required to sum correctly, so the planar cross-sections of the cube are generally not magic squares.
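These conditions can be checked mechanically for a given m×m×m array containing the numbers 1 through m³. A minimal verification sketch in C# (method and parameter names are illustrative):

    // Checks every line parallel to a face (the 1-agonals) and the 4 space
    // diagonals (the 3-agonals) of an m×m×m array against S = m(m^3+1)/2.
    static bool IsSimpleMagicCube(int[,,] cube)
    {
        int m = cube.GetLength(0);
        int S = m * (m * m * m + 1) / 2; // the magic constant for entries 1..m^3
        for (int i = 0; i < m; i++)
            for (int j = 0; j < m; j++)
            {
                int sumX = 0, sumY = 0, sumZ = 0;
                for (int k = 0; k < m; k++)
                {
                    sumX += cube[k, i, j]; // line varying the first coordinate
                    sumY += cube[i, k, j]; // line varying the second coordinate
                    sumZ += cube[i, j, k]; // line varying the third coordinate
                }
                if (sumX != S || sumY != S || sumZ != S)
                    return false;
            }
        int d1 = 0, d2 = 0, d3 = 0, d4 = 0;
        for (int k = 0; k < m; k++)
        {
            d1 += cube[k, k, k];
            d2 += cube[k, k, m - 1 - k];
            d3 += cube[k, m - 1 - k, k];
            d4 += cube[m - 1 - k, k, k];
        }
        return d1 == S && d2 == S && d3 == S && d4 == S;
    }

A cube passing these checks while failing some planar-diagonal sums is still simple magic; the stricter classes add the 2-agonal requirements.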
See also
• Magic square
• Magic cube classes

References
1. Pickover, Clifford A. (2002). The Zen of Magic Squares, Circles, and Stars: An Exhibition of Surprising Structures Across Dimensions. Princeton University Press. p. 400. ISBN 9780691070414.

External links
• Aale de Winkel - Magic hypercubes encyclopedia • Harvey Heinz - large site on magic squares and cubes • Christian Boyer - Multimagic cubes • John Hendricks site on magic hypercubes
Simple module In mathematics, specifically in ring theory, the simple modules over a ring R are the (left or right) modules over R that are non-zero and have no non-zero proper submodules. Equivalently, a module M is simple if and only if every cyclic submodule generated by a non-zero element of M equals M. Simple modules form building blocks for the modules of finite length, and they are analogous to the simple groups in group theory. In this article, all modules will be assumed to be right unital modules over a ring R. Examples Z-modules are the same as abelian groups, so a simple Z-module is an abelian group which has no non-zero proper subgroups. These are the cyclic groups of prime order. If I is a right ideal of R, then I is simple as a right module if and only if I is a minimal non-zero right ideal: If M is a non-zero proper submodule of I, then it is also a right ideal, so I is not minimal. Conversely, if I is not minimal, then there is a non-zero right ideal J properly contained in I. J is a right submodule of I, so I is not simple. If I is a right ideal of R, then the quotient module R/I is simple if and only if I is a maximal right ideal: If M is a non-zero proper submodule of R/I, then the preimage of M under the quotient map R → R/I is a right ideal which is not equal to R and which properly contains I. Therefore, I is not maximal. Conversely, if I is not maximal, then there is a right ideal J properly containing I. The quotient map R/I → R/J has a non-zero kernel which is not equal to R/I, and therefore R/I is not simple. Every simple R-module is isomorphic to a quotient R/m where m is a maximal right ideal of R.[1] By the above paragraph, any quotient R/m is a simple module. Conversely, suppose that M is a simple R-module. Then, for any non-zero element x of M, the cyclic submodule xR must equal M. Fix such an x. The statement that xR = M is equivalent to the surjectivity of the homomorphism R → M that sends r to xr. The kernel of this homomorphism is a right ideal I of R, and a standard theorem states that M is isomorphic to R/I. By the above paragraph, we find that I is a maximal right ideal. Therefore, M is isomorphic to a quotient of R by a maximal right ideal. If k is a field and G is a group, then a group representation of G is a left module over the group ring k[G] (for details, see the main page on this relationship).[2] The simple k[G]-modules are also known as irreducible representations. A major aim of representation theory is to understand the irreducible representations of groups. Basic properties of simple modules The simple modules are precisely the modules of length 1; this is a reformulation of the definition. Every simple module is indecomposable, but the converse is in general not true. Every simple module is cyclic, that is it is generated by one element. Not every module has a simple submodule; consider for instance the Z-module Z in light of the first example above. Let M and N be (left or right) modules over the same ring, and let f : M → N be a module homomorphism. If M is simple, then f is either the zero homomorphism or injective because the kernel of f is a submodule of M. If N is simple, then f is either the zero homomorphism or surjective because the image of f is a submodule of N. If M = N, then f is an endomorphism of M, and if M is simple, then the prior two statements imply that f is either the zero homomorphism or an isomorphism. Consequently, the endomorphism ring of any simple module is a division ring. 
This result is known as Schur's lemma. The converse of Schur's lemma is not true in general. For example, the Z-module Q is not simple, but its endomorphism ring is isomorphic to the field Q. Simple modules and composition series Main article: Composition series If M is a module which has a non-zero proper submodule N, then there is a short exact sequence $0\to N\to M\to M/N\to 0.$ A common approach to proving a fact about M is to show that the fact is true for the center term of a short exact sequence when it is true for the left and right terms, then to prove the fact for N and M/N. If N has a non-zero proper submodule, then this process can be repeated. This produces a chain of submodules $\cdots \subset M_{2}\subset M_{1}\subset M.$ In order to prove the fact this way, one needs conditions on this sequence and on the modules Mi /Mi + 1. One particularly useful condition is that the length of the sequence is finite and each quotient module Mi /Mi + 1 is simple. In this case the sequence is called a composition series for M. In order to prove a statement inductively using composition series, the statement is first proved for simple modules, which form the base case of the induction, and then the statement is proved to remain true under an extension of a module by a simple module. For example, the Fitting lemma shows that the endomorphism ring of a finite length indecomposable module is a local ring, so that the strong Krull–Schmidt theorem holds and the category of finite length modules is a Krull-Schmidt category. The Jordan–Hölder theorem and the Schreier refinement theorem describe the relationships amongst all composition series of a single module. The Grothendieck group ignores the order in a composition series and views every finite length module as a formal sum of simple modules. Over semisimple rings, this is no loss as every module is a semisimple module and so a direct sum of simple modules. Ordinary character theory provides better arithmetic control, and uses simple CG modules to understand the structure of finite groups G. Modular representation theory uses Brauer characters to view modules as formal sums of simple modules, but is also interested in how those simple modules are joined together within composition series. This is formalized by studying the Ext functor and describing the module category in various ways including quivers (whose nodes are the simple modules and whose edges are composition series of non-semisimple modules of length 2) and Auslander–Reiten theory where the associated graph has a vertex for every indecomposable module. The Jacobson density theorem Main article: Jacobson density theorem An important advance in the theory of simple modules was the Jacobson density theorem. The Jacobson density theorem states: Let U be a simple right R-module and let D = EndR(U). Let A be any D-linear operator on U and let X be a finite D-linearly independent subset of U. Then there exists an element r of R such that x·A = x·r for all x in X.[3] In particular, any primitive ring may be viewed as (that is, isomorphic to) a ring of D-linear operators on some D-space. A consequence of the Jacobson density theorem is Wedderburn's theorem; namely that any right Artinian simple ring is isomorphic to a full matrix ring of n-by-n matrices over a division ring for some n. This can also be established as a corollary of the Artin–Wedderburn theorem. 
See also • Semisimple modules are modules that can be written as a sum of simple submodules • Irreducible ideal • Irreducible representation References 1. Herstein, Non-commutative Ring Theory, Lemma 1.1.3 2. Serre, Jean-Pierre (1977). Linear Representations of Finite Groups. New York: Springer-Verlag. pp. 47. ISBN 0387901906. ISSN 0072-5285. OCLC 2202385. 3. Isaacs, Theorem 13.14, p. 185
Simple point process

A simple point process is a special type of point process in probability theory. In a simple point process, every point is assigned the weight one.

Definition

Let $S$ be a locally compact second countable Hausdorff space and let ${\mathcal {S}}$ be its Borel $\sigma $-algebra. A point process $\xi $, interpreted as a random measure on $(S,{\mathcal {S}})$, is called a simple point process if it can be written as $\xi =\sum _{i\in I}\delta _{X_{i}}$ for an index set $I$ and random elements $X_{i}$ which are almost everywhere pairwise distinct. Here $\delta _{x}$ denotes the Dirac measure on the point $x$.

Examples

Simple point processes include many important classes of point processes such as Poisson processes, Cox processes and binomial processes.
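For instance, a homogeneous Poisson process on the unit interval can be realized in the form above by drawing a Poisson-distributed number of points and placing them independently and uniformly; the resulting atoms are almost surely pairwise distinct, so each carries weight one. A small simulation sketch in C# (the names and the product-of-uniforms sampler are illustrative assumptions, suitable for small intensities):

    // Samples a homogeneous Poisson process of intensity lambda on [0, 1],
    // returning the atom locations X_i of the random measure sum of delta_{X_i}.
    static List<double> SamplePoissonProcess(double lambda, Random rng)
    {
        // Draw N ~ Poisson(lambda) by multiplying uniforms until e^-lambda is passed.
        double threshold = Math.Exp(-lambda);
        double product = 1.0;
        int n = -1;
        do { n++; product *= rng.NextDouble(); } while (product > threshold);

        var points = new List<double>();
        for (int i = 0; i < n; i++)
            points.Add(rng.NextDouble()); // i.i.d. uniform atoms, a.s. pairwise distinct
        return points;
    }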
Uniqueness

If ${\mathcal {I}}$ is a generating ring of ${\mathcal {S}}$, then a simple point process $\xi $ is uniquely determined by its values on the sets $U\in {\mathcal {I}}$. This means that two simple point processes $\xi $ and $\zeta $ have the same distribution if and only if

$P(\xi (U)=0)=P(\zeta (U)=0){\text{ for all }}U\in {\mathcal {I}}$

Literature
• Kallenberg, Olav (2017). Random Measures, Theory and Applications. Switzerland: Springer. doi:10.1007/978-3-319-41598-7. ISBN 978-3-319-41596-3. • Daley, D.J.; Vere-Jones, D. (2003). An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. New York: Springer. ISBN 0-387-95541-0.
Zeros and poles

In complex analysis (a branch of mathematics), a pole is a certain type of singularity of a complex-valued function of a complex variable. It is the simplest type of non-removable singularity of such a function (see essential singularity). Technically, a point z0 is a pole of a function f if it is a zero of the function 1/f and 1/f is holomorphic (i.e. complex differentiable) in some neighbourhood of z0.

A function f is meromorphic in an open set U if for every point z of U there is a neighborhood of z in which either f or 1/f is holomorphic. If f is meromorphic in U, then a zero of f is a pole of 1/f, and a pole of f is a zero of 1/f. This induces a duality between zeros and poles that is fundamental for the study of meromorphic functions. For example, if a function is meromorphic on the whole complex plane plus the point at infinity, then the sum of the multiplicities of its poles equals the sum of the multiplicities of its zeros.

Definitions

A function of a complex variable z is holomorphic in an open domain U if it is differentiable with respect to z at every point of U. Equivalently, it is holomorphic if it is analytic, that is, if its Taylor series exists at every point of U, and converges to the function in some neighbourhood of the point. A function is meromorphic in U if every point of U has a neighbourhood such that either f or 1/f is holomorphic in it.

A zero of a meromorphic function f is a complex number z such that f(z) = 0. A pole of f is a zero of 1/f. If f is a function that is meromorphic in a neighbourhood of a point $z_{0}$ of the complex plane, then there exists an integer n such that $(z-z_{0})^{n}f(z)$ is holomorphic and nonzero in a neighbourhood of $z_{0}$ (this is a consequence of the analytic property). If n > 0, then $z_{0}$ is a pole of order (or multiplicity) n of f. If n < 0, then $z_{0}$ is a zero of order $|n|$ of f. Simple zero and simple pole are terms used for zeros and poles of order $|n|=1.$ Degree is sometimes used synonymously to order.

This characterization of zeros and poles implies that zeros and poles are isolated, that is, every zero or pole has a neighbourhood that does not contain any other zero or pole. Because the order of zeros and poles is defined as a non-negative number n, and because of the symmetry between them, it is often useful to consider a pole of order n as a zero of order –n and a zero of order n as a pole of order –n. In this case a point that is neither a pole nor a zero is viewed as a pole (or zero) of order 0.

A meromorphic function may have infinitely many zeros and poles. This is the case for the gamma function, which is meromorphic in the whole complex plane, and has a simple pole at every non-positive integer.
The Riemann zeta function is also meromorphic in the whole complex plane, with a single pole of order 1 at z = 1. Its zeros in the left half-plane are all the negative even integers, and the Riemann hypothesis is the conjecture that all other zeros are along Re(z) = 1/2.

In a neighbourhood of a point $z_{0},$ a nonzero meromorphic function f is the sum of a Laurent series with at most a finite principal part (the terms with negative index values): $f(z)=\sum _{k\geq -n}a_{k}(z-z_{0})^{k},$ where n is an integer, and $a_{-n}\neq 0.$ Again, if n > 0 (the sum starts with $a_{-|n|}(z-z_{0})^{-|n|}$, the principal part has n terms), one has a pole of order n, and if n ≤ 0 (the sum starts with $a_{|n|}(z-z_{0})^{|n|}$, there is no principal part), one has a zero of order $|n|$.

At infinity

A function $z\mapsto f(z)$ is meromorphic at infinity if it is meromorphic in some neighbourhood of infinity (that is, outside some disk), and there is an integer n such that $\lim _{z\to \infty }{\frac {f(z)}{z^{n}}}$ exists and is a nonzero complex number. In this case, the point at infinity is a pole of order n if n > 0, and a zero of order $|n|$ if n < 0. For example, a polynomial of degree n has a pole of degree n at infinity.

The complex plane extended by a point at infinity is called the Riemann sphere. If f is a function that is meromorphic on the whole Riemann sphere, then it has a finite number of zeros and poles, and the sum of the orders of its poles equals the sum of the orders of its zeros. Every rational function is meromorphic on the whole Riemann sphere, and, in this case, the sum of orders of the zeros or of the poles is the maximum of the degrees of the numerator and the denominator.

Examples

• The function $f(z)={\frac {3}{z}}$ is meromorphic on the whole Riemann sphere. It has a pole of order 1 or simple pole at $z=0,$ and a simple zero at infinity.
• The function $f(z)={\frac {z+2}{(z-5)^{2}(z+7)^{3}}}$ is meromorphic on the whole Riemann sphere. It has a pole of order 2 at $z=5,$ and a pole of order 3 at $z=-7$. It has a simple zero at $z=-2,$ and a quadruple zero at infinity.
• The function $f(z)={\frac {z-4}{e^{z}-1}}$ is meromorphic in the whole complex plane, but not at infinity. It has poles of order 1 at $z=2\pi ni{\text{ for }}n\in \mathbb {Z} $. This can be seen by writing the Taylor series of $e^{z}$ around the origin.
• The function $f(z)=z$ has a single pole at infinity of order 1, and a single zero at the origin.

All the above examples except for the third are rational functions. For a general discussion of zeros and poles of such functions, see Pole–zero plot § Continuous-time systems.
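The order at a point can also be read off numerically from the growth rate of $|f|$: near $z_{0}$ one has $\log |f(z)|\approx n\log |z-z_{0}|+C,$ where $n$ is the order as a zero (negative for a pole, matching the sign convention above). A rough numeric sketch in C# using System.Numerics (the sampling radii and direction are arbitrary choices, and the estimate can fail if the test direction happens to pass near another zero or pole):

    // Estimates the order of f at z0 from the slope of log|f| against log r:
    // a positive result n means a zero of order n, a negative result a pole
    // of order -n, and zero means neither.
    static int EstimateOrder(Func<Complex, Complex> f, Complex z0)
    {
        var direction = new Complex(0.6, 0.8); // a fixed unit direction
        double r1 = 1e-4, r2 = 1e-6;
        double m1 = Complex.Abs(f(z0 + r1 * direction));
        double m2 = Complex.Abs(f(z0 + r2 * direction));
        double slope = (Math.Log(m1) - Math.Log(m2)) / (Math.Log(r1) - Math.Log(r2));
        return (int)Math.Round(slope);
    }

For example, EstimateOrder(z => 3.0 / z, Complex.Zero) returns -1, a simple pole, and applying it to the second example above at $z=5$ returns -2.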
Function on a curve

The concept of zeros and poles extends naturally to functions on a complex curve, that is, a complex analytic manifold of dimension one (over the complex numbers). The simplest examples of such curves are the complex plane and the Riemann sphere. This extension is done by transferring structures and properties through charts, which are analytic isomorphisms. More precisely, let f be a function from a complex curve M to the complex numbers. This function is holomorphic (resp. meromorphic) in a neighbourhood of a point z of M if there is a chart $\phi $ such that $f\circ \phi ^{-1}$ is holomorphic (resp. meromorphic) in a neighbourhood of $\phi (z).$ Then, z is a pole or a zero of order n if the same is true for $\phi (z).$

If the curve is compact, and the function f is meromorphic on the whole curve, then the number of zeros and poles is finite, and the sum of the orders of the poles equals the sum of the orders of the zeros. This is one of the basic facts that are involved in the Riemann–Roch theorem.

See also
• Control theory § Stability
• Filter design
• Filter (signal processing)
• Gauss–Lucas theorem
• Hurwitz's theorem (complex analysis)
• Marden's theorem
• Nyquist stability criterion
• Pole–zero plot
• Residue (complex analysis)
• Rouché's theorem
• Sendov's conjecture

References
• Conway, John B. (1986). Functions of One Complex Variable I. Springer. ISBN 0-387-90328-3. • Conway, John B. (1995). Functions of One Complex Variable II. Springer. ISBN 0-387-94460-5. • Henrici, Peter (1974). Applied and Computational Complex Analysis 1. John Wiley & Sons.

External links
• Weisstein, Eric W. "Pole". MathWorld.
Simple polygon In geometry, a simple polygon is a polygon that does not intersect itself and has no holes. That is, they are piecewise-linear Jordan curves consisting of finitely many line segments. They include as special cases the convex polygons, star-shaped polygons, and monotone polygons. The sum of external angles of a simple polygon is $2\pi $. Every simple polygon with $n$ sides can be triangulated by $n-3$ of its diagonals, and by the art gallery theorem its interior is visible from some $\lfloor n/3\rfloor $ of its vertices. Simple polygons are commonly seen as the input to computational geometry problems, including point in polygon testing, area computation, the convex hull of a simple polygon, triangulation, and Euclidean shortest paths. Other constructions in geometry related to simple polygons include Schwarz–Christoffel mapping, used to find conformal maps involving simple polygons, polygonalization of point sets, constructive solid geometry formulas for polygons, and visibility graphs of polygons. Definitions A simple polygon is a closed curve in the Euclidean plane consisting of straight line segments, meeting end-to-end to form a polygonal chain. Other than the shared endpoints of consecutive line segments in this chain, no two of the line segments may intersect each other.[1] The qualifier simple is sometimes omitted, with the word polygon assumed to mean a simple polygon.[2] The line segments that form a polygon are called its edges or sides. An endpoint of a segment is called a vertex (plural: vertices)[1] or a corner. Edges and vertices are more formal, but may be ambiguous in contexts that also involve the edges and vertices of a graph; the more colloquial terms sides and corners can be used to avoid this ambiguity.[3] Exactly two edges meet at each vertex, and the number of edges always equals the number of vertices.[1] Some sources allow two line segments to form a straight angle (180°),[4] while others disallow this, instead requiring collinear segments of a closed polygonal chain to be merged into a single longer side.[5] Two vertices are neighbors if they are the two endpoints of one of the sides of the polygon. Simple polygons are sometimes called Jordan polygons, because they are Jordan curves; the Jordan curve theorem can be used to prove that such a polygon divides the plane into two regions.[6] Indeed, Camille Jordan's original proof of this theorem took the special case of simple polygons (stated without proof) as its starting point.[7] The region inside the polygon (its interior) forms a bounded set[1] topologically equivalent to a disk by the Jordan–Schönflies theorem,[8] with a finite but nonzero area.[9] The polygon itself is topologically equivalent to a circle,[10] and the region outside (the exterior) is an unbounded set, with infinite area.[9] Although the formal definition of a simple polygon is typically as a system of line segments, it is also possible (and common in informal usage) to define a simple polygon as a closed set in the plane, the union of these line segments with the interior of the polygon,[1] or as an open set, the interior itself. A diagonal of a simple polygon is any line segment that has two polygon vertices as its endpoints, and that otherwise is entirely interior to the polygon.[11] Properties The internal angle of a simple polygon, at one of its vertices, is the angle spanned by the interior of the polygon at that vertex. 
A vertex is convex if its internal angle is less than $\pi $ (a straight angle, 180°) and concave if the internal angle is greater than $\pi $. If the internal angle is $\theta $, the external angle at the same vertex is defined to be its supplement $\pi -\theta $, the turning angle from one directed side to the next. The external angle is positive at a convex vertex or negative at a concave vertex. For every simple polygon, the sum of the external angles is $2\pi $ (one full turn, 360°). Thus the sum of the internal angles, for a simple polygon with $n$ sides is $(n-2)\pi $. Every simple polygon can be partitioned into interior-disjoint triangles by a subset of its diagonals. When the polygon has $n$ sides, such a partition involves $n-3$ diagonals, forming $n-2$ triangles. The resulting partition is called a polygon triangulation.[6] The shape of a triangulated simple polygon can be uniquely determined by the internal angles of the polygon and by the cross-ratios of the quadrilaterals formed by pairs of triangles that share a diagonal.[12] According to the two ears theorem, every simple polygon that is not a triangle has two ears, vertices whose two neighbors are the endpoints of a diagonal.[6] A related theorem states that every simple polygon that is not a convex polygon has a mouth, a vertex whose two neighbors are the endpoints of a line segment that is otherwise entirely exterior to the polygon. The polygons that have exactly two ears and one mouth are called anthropomorphic polygons.[13] According to the art gallery theorem, in a simple polygon with $n$ vertices, it is always possible to find a subset of at most $\lfloor n/3\rfloor $ of the vertices with the property that every point in the polygon is visible from one of the selected vertices. This means that, for each point $p$ in the polygon, there exists a line segment connecting $p$ to a selected vertex, passing only through interior points of the polygon. One way to prove this is to use graph coloring on a triangulation of the polygon: it is always possible to color the vertices with three colors, so that each side or diagonal in the triangulation has two endpoints of different colors. Each point of the polygon is visible to a vertex of each color, for instance one of the three vertices of the triangle containing that point in the chosen triangulation. One of the colors is used by at most $\lfloor n/3\rfloor $ of the vertices, proving the theorem.[14] Special cases Every convex polygon is a simple polygon. Another important class of simple polygons are the star-shaped polygons, the polygons that have a point (interior or on their boundary) from which every point is visible.[1] A monotone polygon, with respect to a straight line $L$, is a polygon for which every line perpendicular to $L$ intersects the interior of the polygon in a connected set. Equivalently, it is a polygon whose boundary can be partitioned into two monotone polygonal chains, subsequences of edges whose vertices, when projected perpendicularly onto $L$, have the same order along $L$ as they do in the chain.[15] Computational problems In computational geometry, several important computational tasks involve inputs in the form of a simple polygon. • Point in polygon testing involves determining, for a simple polygon P and a query point q, whether q lies interior to P. 
It can be solved in linear time; alternatively, it is possible to process a given polygon into a data structure, in linear time, so that subsequent point in polygon tests can be performed in logarithmic time.[17]
• Simple formulae are known for computing the area of the interior of a polygon. These include the shoelace formula for arbitrary polygons,[18] and Pick's theorem for polygons with integer vertex coordinates.[10][19]
• The convex hull of a simple polygon can also be found in linear time, faster than algorithms for finding convex hulls of points that have not been connected into a polygon.[5]
• Constructing a triangulation of a simple polygon can also be performed in linear time, although the algorithm is complicated. A modification of the same algorithm can also be used to test whether a closed polygonal chain forms a simple polygon (that is, whether it avoids self-intersections) in linear time.[20] This also leads to a linear time algorithm for solving the art gallery problem using at most $\lfloor n/3\rfloor $ points, although not necessarily using the optimal number of points for a given polygon.[21] Although it is possible to transform any two triangulations of the same polygon into each other by flips that replace one diagonal at a time, determining whether one can do so using only a limited number of flips is NP-complete.[22]
• A geodesic path,[23] the shortest path in the plane that connects two points interior to a polygon, without crossing to the exterior, may be found in linear time by an algorithm that uses triangulation as a subroutine.[24] The same is true for the geodesic center, a point in the polygon that minimizes the maximum length of its geodesic paths to all other points.[23]
• The visibility polygon of an interior point of a simple polygon, the points that are directly visible from the given point by line segments interior to the polygon, can be constructed in linear time.[25] The same is true for the subset that is visible from at least one point of a given line segment.[24]

Other computational problems studied for simple polygons include constructions of the longest diagonal or the longest line segment interior to a polygon,[11] of the convex skull (the largest convex polygon within the given simple polygon),[26][27] and of various one-dimensional skeletons approximating its shape, including the medial axis[28] and straight skeleton.[29] Researchers have also studied producing other polygons from simple polygons using their offset curves,[30] unions and intersections,[9] and Minkowski sums,[31] but these operations do not always produce a simple polygon as their result. Indeed, for intersection and difference operations, care is needed to ensure that the result is a two-dimensional region, rather than a set that might also include one-dimensional features or even isolated points.[9]
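The first two tasks above fit in a few lines. A compact sketch in C# of a standard ray-casting point-in-polygon test (a common linear-time approach, not the specific algorithms cited above, and assuming the query point is not exactly on the boundary) together with the shoelace formula, with the vertex coordinates given in order around the polygon:

    // Ray casting: count crossings of a horizontal ray from (qx, qy);
    // an odd crossing count means the point is inside.
    static bool ContainsPoint(double[] xs, double[] ys, double qx, double qy)
    {
        bool inside = false;
        for (int i = 0, j = xs.Length - 1; i < xs.Length; j = i++)
        {
            if ((ys[i] > qy) != (ys[j] > qy)) // edge spans the ray's height
            {
                double xCross = xs[i] + (qy - ys[i]) * (xs[j] - xs[i]) / (ys[j] - ys[i]);
                if (qx < xCross)
                    inside = !inside;
            }
        }
        return inside;
    }

    // Shoelace formula: signed area of the interior, positive when the
    // vertices run counterclockwise.
    static double SignedArea(double[] xs, double[] ys)
    {
        double twiceArea = 0;
        for (int i = 0, j = xs.Length - 1; i < xs.Length; j = i++)
            twiceArea += xs[j] * ys[i] - xs[i] * ys[j];
        return twiceArea / 2;
    }

For integer vertex coordinates, Pick's theorem gives the same area as I + B/2 - 1, counting interior and boundary lattice points.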
Related constructions

According to the Riemann mapping theorem, any simply connected open subset of the plane can be conformally mapped onto a disk. Schwarz–Christoffel mapping provides a method to explicitly construct a map from a disk to any simple polygon using specified vertex angles and pre-images of the polygon vertices on the boundary of the disk. These pre-vertices are typically computed numerically.[32]

Every finite set of points in the plane that does not lie on a single line can be connected to form the vertices of a simple polygon (allowing 180° angles); for instance, one such polygon is the solution to the traveling salesperson problem.[33] Connecting points to form a polygon in this way is called polygonalization.[34]

Every simple polygon can be represented by a formula in constructive solid geometry that constructs the polygon (as a closed set including the interior) from unions and intersections of half-planes, with each side of the polygon appearing once as a half-plane in the formula. Converting an $n$-sided polygon into this representation can be performed in time $O(n\log n)$.[35]

The visibility graph of a simple polygon connects its vertices by edges representing the sides and diagonals of the polygon.[2] It always contains a Hamiltonian cycle, formed by the polygon sides. The computational complexity of reconstructing a polygon that has a given graph as its visibility graph, with a specified Hamiltonian cycle as its cycle of sides, remains an open problem.[36]

See also
• Carpenter's rule problem, on continuous motion of a simple polygon into a convex polygon
• Erdős–Nagy theorem, a process of reflecting pockets of a non-convex simple polygon to make it convex
• Net (polyhedron), a simple polygon that can be folded and glued to form a given polyhedron
• Spherical polygon, an analogous concept on the surface of a sphere
• Weakly simple polygon, a generalization of simple polygons allowing the edges to touch in limited ways

References
1. Preparata, Franco P.; Shamos, Michael Ian (1985). Computational Geometry: An Introduction. Texts and Monographs in Computer Science. Springer-Verlag. p. 18. doi:10.1007/978-1-4612-1098-6. ISBN 978-1-4612-1098-6. 2. Everett, Hazel; Corneil, Derek (1995). "Negative results on characterizing visibility graphs". Computational Geometry: Theory & Applications. 5 (2): 51–63. doi:10.1016/0925-7721(95)00021-Z. MR 1353288. 3. Aronov, Boris; Seidel, Raimund; Souvaine, Diane (1993). "On compatible triangulations of simple polygons". Computational Geometry: Theory & Applications. 3 (1): 27–35. doi:10.1016/0925-7721(93)90028-5. MR 1222755. 4. Malkevitch, Joseph (2016). "Are precise definitions a good idea?". AMS Feature Column. American Mathematical Society. 5. McCallum, Duncan; Avis, David (1979). "A linear algorithm for finding the convex hull of a simple polygon". Information Processing Letters. 9 (5): 201–206. doi:10.1016/0020-0190(79)90069-3. MR 0552534. 6. Meisters, G. H. (1975). "Polygons have ears". The American Mathematical Monthly. 82 (6): 648–651. doi:10.2307/2319703. JSTOR 2319703. MR 0367792. 7. Hales, Thomas C. (2007). "Jordan's proof of the Jordan curve theorem" (PDF). From Insight to Proof: Festschrift in Honour of Andrzej Trybulec. Studies in Logic, Grammar and Rhetoric. University of Białystok. 10 (23). 8. Thomassen, Carsten (1992). "The Jordan-Schönflies theorem and the classification of surfaces". The American Mathematical Monthly. 99 (2): 116–130. doi:10.1080/00029890.1992.11995820. JSTOR 2324180. MR 1144352. 9. Margalit, Avraham; Knott, Gary D. (1989). "An algorithm for computing the union, intersection or difference of two polygons". Computers & Graphics. 13 (2): 167–183. doi:10.1016/0097-8493(89)90059-9. 10. Niven, Ivan; Zuckerman, H. S. (1967). "Lattice points and polygonal area". The American Mathematical Monthly. 74: 1195–1200. doi:10.2307/2315660. MR 0225216. 11.
Aggarwal, Alok; Suri, Subhash (1990). "Computing the longest diagonal of a simple polygon". Information Processing Letters. 35 (1): 13–18. doi:10.1016/0020-0190(90)90167-V. MR 1069001. 12. Snoeyink, Jack (1999). "Cross-ratios and angles determine a polygon". Discrete & Computational Geometry. 22 (4): 619–631. doi:10.1007/PL00009481. MR 1721028. 13. Toussaint, Godfried (1991). "Anthropomorphic polygons". The American Mathematical Monthly. 98 (1): 31–35. doi:10.2307/2324033. JSTOR 2324033. MR 1083611. 14. Fisk, S. (1978). "A short proof of Chvátal's watchman theorem". Journal of Combinatorial Theory, Series B. 24 (3): 374. doi:10.1016/0095-8956(78)90059-X. 15. Preparata, Franco P.; Supowit, Kenneth J. (1981). "Testing a simple polygon for monotonicity". Information Processing Letters. 12 (4): 161–164. doi:10.1016/0020-0190(81)90091-0. 16. Schirra, Stefan (2008). "How reliable are practical point-in-polygon strategies?" (PDF). In Halperin, Dan; Mehlhorn, Kurt (eds.). Algorithms – ESA 2008, 16th Annual European Symposium, Karlsruhe, Germany, September 15–17, 2008. Proceedings. Lecture Notes in Computer Science. Vol. 5193. Springer. pp. 744–755. doi:10.1007/978-3-540-87744-8_62. 17. Snoeyink, Jack (2017). "Point Location" (PDF). In Toth, Csaba D.; O'Rourke, Joseph; Goodman, Jacob E. (eds.). Handbook of Discrete and Computational Geometry (3rd ed.). Chapman and Hall/CRC. pp. 1005–1023. 18. Braden, Bart (1986). "The surveyor's area formula" (PDF). The College Mathematics Journal. 17 (4): 326–337. doi:10.2307/2686282. JSTOR 2686282. Archived from the original (PDF) on 2012-11-07. 19. Grünbaum, Branko; Shephard, G. C. (February 1993). "Pick's theorem". The American Mathematical Monthly. 100 (2): 150–161. doi:10.2307/2323771. JSTOR 2323771. MR 1212401. 20. Chazelle, Bernard (1991). "Triangulating a simple polygon in linear time". Discrete & Computational Geometry. 6 (5): 485–524. doi:10.1007/BF02574703. MR 1115104. 21. Urrutia, Jorge (2000). "Art gallery and illumination problems". In Sack, Jörg-Rüdiger; Urrutia, Jorge (eds.). Handbook of Computational Geometry. Amsterdam: North-Holland. pp. 973–1027. doi:10.1016/B978-044482537-7/50023-1. ISBN 0-444-82537-1. MR 1746693. 22. Aichholzer, Oswin; Mulzer, Wolfgang; Pilz, Alexander (2015). "Flip distance between triangulations of a simple polygon is NP-complete". Discrete & Computational Geometry. 54 (2): 368–389. arXiv:1209.0579. doi:10.1007/s00454-015-9709-7. MR 3372115. 23. Ahn, Hee-Kap; Barba, Luis; Bose, Prosenjit; De Carufel, Jean-Lou; Korman, Matias; Oh, Eunjin (2016). "A linear-time algorithm for the geodesic center of a simple polygon". Discrete & Computational Geometry. 56 (4): 836–859. arXiv:1501.00561. doi:10.1007/s00454-016-9796-0. MR 3561791. 24. Guibas, Leonidas; Hershberger, John; Leven, Daniel; Sharir, Micha; Tarjan, Robert E. (1987). "Linear-time algorithms for visibility and shortest path problems inside triangulated simple polygons". Algorithmica. 2 (2): 209–233. doi:10.1007/BF01840360. MR 0895445. 25. El Gindy, Hossam; Avis, David (1981). "A linear algorithm for computing the visibility polygon from a point". Journal of Algorithms. 2 (2): 186–197. doi:10.1016/0196-6774(81)90019-5. 26. Chang, J. S.; Yap, C.-K. (1986). "A polynomial solution for the potato-peeling problem". Discrete & Computational Geometry. 1 (2): 155–182. doi:10.1007/BF02187692. MR 0834056. 27. Cabello, Sergio; Cibulka, Josef; Kynčl, Jan; Saumell, Maria; Valtr, Pavel (2017). "Peeling potatoes near-optimally in near-linear time". SIAM Journal on Computing. 
46 (5): 1574–1602. arXiv:1406.1368. doi:10.1137/16M1079695. MR 3708542. 28. Chin, Francis Y. L.; Snoeyink, Jack; Wang, Cao An (1999). "Finding the medial axis of a simple polygon in linear time". Discrete & Computational Geometry. 21 (3): 405–420. doi:10.1007/PL00009429. MR 1672988. 29. Cheng, Siu-Wing; Mencel, Liam; Vigneron, Antoine (2016). "A faster algorithm for computing straight skeletons". ACM Transactions on Algorithms. 12 (3): 44:1–44:21. arXiv:1405.4691. 30. Palfrader, Peter; Held, Martin (February 2015). "Computing mitered offset curves based on straight skeletons". Computer-Aided Design and Applications. 12 (4): 414–424. doi:10.1080/16864360.2014.997637. 31. Oks, Eduard; Sharir, Micha (2006). "Minkowski sums of monotone and general simple polygons". Discrete & Computational Geometry. 35 (2): 223–240. doi:10.1007/s00454-005-1206-y. MR 2195052. 32. Trefethen, Lloyd N.; Driscoll, Tobin A. (1998). "Schwarz–Christoffel mapping in the computer era". Proceedings of the International Congress of Mathematicians, Vol. III (Berlin, 1998). Documenta Mathematica. pp. 533–542. MR 1648186. 33. Quintas, L. V.; Supnick, Fred (1965). "On some properties of shortest Hamiltonian circuits". The American Mathematical Monthly. 72 (9): 977–980. doi:10.2307/2313333. JSTOR 2313333. MR 0188872. 34. Demaine, Erik D.; Fekete, Sándor P.; Keldenich, Phillip; Krupke, Dominik; Mitchell, Joseph S. B. (2022). "Area-optimal simple polygonalizations: the CG challenge 2019". ACM Journal of Experimental Algorithmics. 27: A2.4:1–12. doi:10.1145/3504000. hdl:1721.1/146480. MR 4390039. 35. Dobkin, David; Guibas, Leonidas; Hershberger, John; Snoeyink, Jack (1993). "An efficient algorithm for finding the CSG representation of a simple polygon". Algorithmica. 10 (1): 1–23. doi:10.1007/BF01908629. MR 1230699. 36. Ghosh, Subir Kumar; Goswami, Partha P. (2013). "Unsolved problems in visibility graphs of points, segments, and polygons". ACM Computing Surveys. 46 (2): 22:1–22:29. arXiv:1012.5187. doi:10.1145/2543581.2543589. External links • Weisstein, Eric W. "Simple polygon". MathWorld.
Simple polytope Not to be confused with Simplicial polytope. In geometry, a d-dimensional simple polytope is a d-dimensional polytope each of whose vertices is adjacent to exactly d edges (and also to d facets). The vertex figure of a simple d-polytope is a (d – 1)-simplex.[1] Simple polytopes are topologically dual to simplicial polytopes. The family of polytopes that are both simple and simplicial consists of the simplices and the two-dimensional polygons. A simple polyhedron is a three-dimensional polyhedron whose vertices are adjacent to three edges and three faces. The dual to a simple polyhedron is a simplicial polyhedron, in which all faces are triangles.[2] Examples Three-dimensional simple polyhedra include the prisms (including the cube), the regular tetrahedron and dodecahedron, and, among the Archimedean solids, the truncated tetrahedron, truncated cube, truncated octahedron, truncated cuboctahedron, truncated dodecahedron, truncated icosahedron, and truncated icosidodecahedron. They also include the Goldberg polyhedra and fullerenes, including the chamfered tetrahedron, chamfered cube, and chamfered dodecahedron. In general, any polyhedron can be made into a simple one by truncating its vertices of valence four or higher. For instance, truncated trapezohedra are formed by truncating only the high-degree vertices of a trapezohedron; they are also simple. Four-dimensional simple polytopes include the regular 120-cell and tesseract. Simple uniform 4-polytopes include the truncated 5-cell, truncated tesseract, truncated 24-cell, truncated 120-cell, and duoprisms. All bitruncated, cantitruncated, or omnitruncated four-polytopes are simple. Simple polytopes in higher dimensions include the d-simplex, hypercube, associahedron, permutohedron, and all omnitruncated polytopes. Unique reconstruction Micha Perles conjectured that a simple polytope is completely determined by its 1-skeleton; his conjecture was proven in 1987 by Roswitha Blind and Peter Mani-Levitska.[3] Shortly afterwards, Gil Kalai provided a simpler proof of this result based on the theory of unique sink orientations.[4] References 1. Ziegler, Günter M. (2012), Lectures on Polytopes, Graduate Texts in Mathematics, vol. 152, Springer, p. 8, ISBN 9780387943657 2. Cromwell, Peter R. (1997), Polyhedra, Cambridge University Press, p. 341, ISBN 0-521-66405-5 3. Blind, Roswitha; Mani-Levitska, Peter (1987), "Puzzles and polytope isomorphisms", Aequationes Mathematicae, 34 (2–3): 287–297, doi:10.1007/BF01830678, MR 0921106 4. Kalai, Gil (1988), "A simple way to tell a simple polytope from its graph", Journal of Combinatorial Theory, Series A, 49 (2): 381–383, doi:10.1016/0097-3165(88)90064-7, MR 0964396
Radical extension In mathematics and more specifically in field theory, a radical extension of a field K is an extension of K that is obtained by adjoining a sequence of nth roots of elements. Definition A simple radical extension is a simple extension F/K generated by a single element $\alpha $ satisfying $\alpha ^{n}=b$ for an element b of K. In characteristic p, we also take an extension by a root of an Artin–Schreier polynomial to be a simple radical extension. A radical series is a tower $K=F_{0}<F_{1}<\cdots <F_{k}$ where each extension $F_{i}/F_{i-1}$ is a simple radical extension. Properties 1. If E is a radical extension of F and F is a radical extension of K, then E is a radical extension of K. 2. If E and F are radical extensions of K in an extension field C of K, then the compositum EF (the smallest subfield of C that contains both E and F) is a radical extension of K. 3. If E is a radical extension of F and E > K > F, then E is a radical extension of K. Solvability by radicals Radical extensions occur naturally when solving polynomial equations in radicals. In fact a solution in radicals is the expression of the solution as an element of a radical series: a polynomial f over a field K is said to be solvable by radicals if there is a splitting field of f over K contained in a radical extension of K. The Abel–Ruffini theorem states that such a solution by radicals does not exist, in general, for equations of degree at least five. Évariste Galois showed that an equation is solvable in radicals if and only if its Galois group is solvable. The proof is based on the fundamental theorem of Galois theory and the following theorem. Let K be a field containing n distinct nth roots of unity. An extension of K of degree n is a radical extension generated by an nth root of an element of K if and only if it is a Galois extension whose Galois group is a cyclic group of order n. The proof is related to Lagrange resolvents. Let $\omega $ be a primitive nth root of unity (belonging to K). If the extension is generated by $\alpha $ with $x^{n}-a$ as its minimal polynomial, the mapping $\alpha \mapsto \omega \alpha $ induces a K-automorphism of the extension that generates the Galois group, showing the "only if" implication. Conversely, if $\phi $ is a K-automorphism generating the Galois group, and $\beta $ is a generator of the extension, let $\alpha =\sum _{i=0}^{n-1}\omega ^{-i}\phi ^{i}(\beta ).$ The relation $\phi (\alpha )=\omega \alpha $ implies that the product of the conjugates of $\alpha $ (that is, the images of $\alpha $ under the K-automorphisms) belongs to K, and is equal to the product of $\alpha ^{n}$ by the product of the nth roots of unity. As the product of the nth roots of unity is $\pm 1$, this implies that $\alpha ^{n}\in K,$ and thus that the extension is a radical extension. It follows from this theorem that a Galois extension may be extended to a radical extension if and only if its Galois group is solvable (but there are non-radical Galois extensions whose Galois group is solvable, for example $ \mathbb {Q} (\cos(2\pi /7))/\mathbb {Q} $). This is, in modern terminology, the criterion of solvability by radicals that was provided by Galois. The proof uses the fact that the Galois closure of a simple radical extension of degree n is the extension of it by a primitive nth root of unity, and that the Galois group of the nth roots of unity is cyclic. References • Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol.
211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556 • Roman, Steven (2006). Field theory. Graduate Texts in Mathematics. Vol. 158 (2nd ed.). New York, NY: Springer-Verlag. ISBN 0-387-27677-7. Zbl 1172.12001.
Simple linear regression In statistics, simple linear regression is a linear regression model with a single explanatory variable.[1][2][3][4][5] That is, it concerns two-dimensional sample points with one independent variable and one dependent variable (conventionally, the x and y coordinates in a Cartesian coordinate system) and finds a linear function (a non-vertical straight line) that, as accurately as possible, predicts the dependent variable values as a function of the independent variable. The adjective simple refers to the fact that the outcome variable is related to a single predictor. It is common to make the additional stipulation that the ordinary least squares (OLS) method should be used: the accuracy of each predicted value is measured by its squared residual (vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible. Other regression methods that can be used in place of ordinary least squares include least absolute deviations (minimizing the sum of absolute values of residuals) and the Theil–Sen estimator (which chooses a line whose slope is the median of the slopes determined by pairs of sample points). Deming regression (total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit. The remainder of the article assumes an ordinary least squares regression. In this case, the slope of the fitted line is equal to the correlation between y and x corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass (x, y) of the data points. Fitting the regression line Consider the model function $y=\alpha +\beta x,$ which describes a line with slope β and y-intercept α. In general such a relationship may not hold exactly for the largely unobserved population of values of the independent and dependent variables; we call the unobserved deviations from the above equation the errors.
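As a concrete illustration of this model, the following sketch simulates data pairs from $y_{i}=\alpha +\beta x_{i}+\varepsilon _{i}$. The parameter values and the Gaussian error distribution are illustrative assumptions, not part of the definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters: intercept, slope, and error scale.
alpha, beta, sigma = 2.0, 0.5, 1.0

x = np.linspace(0.0, 10.0, 50)             # fixed design points x_i
eps = rng.normal(0.0, sigma, size=x.size)  # mean-zero errors epsilon_i
y = alpha + beta * x + eps                 # the simple linear regression model
```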
Suppose we observe n data pairs and call them {(xi, yi), i = 1, ..., n}. We can describe the underlying relationship between yi and xi involving this error term εi by $y_{i}=\alpha +\beta x_{i}+\varepsilon _{i}.$ This relationship between the true (but unobserved) underlying parameters α and β and the data points is called a linear regression model. The goal is to find estimated values ${\widehat {\alpha }}$ and ${\widehat {\beta }}$ for the parameters α and β which would provide the "best" fit in some sense for the data points. As mentioned in the introduction, in this article the "best" fit will be understood as in the least-squares approach: a line that minimizes the sum of squared residuals (see also Errors and residuals) ${\widehat {\varepsilon }}_{i}$ (differences between actual and predicted values of the dependent variable y), each of which is given by, for any candidate parameter values $\alpha $ and $\beta $, ${\widehat {\varepsilon }}_{i}=y_{i}-\alpha -\beta x_{i}.$ In other words, ${\widehat {\alpha }}$ and ${\widehat {\beta }}$ solve the following minimization problem: ${\text{Find }}\min _{\alpha ,\,\beta }Q(\alpha ,\beta ),\quad {\text{for }}Q(\alpha ,\beta )=\sum _{i=1}^{n}{\widehat {\varepsilon }}_{i}^{\,2}=\sum _{i=1}^{n}(y_{i}-\alpha -\beta x_{i})^{2}\ .$ By expanding to get a quadratic expression in $\alpha $ and $\beta ,$ we can derive values of $\alpha $ and $\beta $ that minimize the objective function Q (these minimizing values are denoted ${\widehat {\alpha }}$ and ${\widehat {\beta }}$):[6] $ {\begin{aligned}{\widehat {\alpha }}&={\bar {y}}-({\widehat {\beta }}\,{\bar {x}}),\\[5pt]{\widehat {\beta }}&={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}\\[6pt]&={\frac {s_{x,y}}{s_{x}^{2}}}\\[5pt]&=r_{xy}{\frac {s_{y}}{s_{x}}}.\\[6pt]\end{aligned}}$ Here we have introduced • ${\bar {x}}$ and ${\bar {y}}$ as the average of the xi and yi, respectively • rxy as the sample correlation coefficient between x and y • sx and sy as the uncorrected sample standard deviations of x and y • $s_{x}^{2}$ and $s_{x,y}$ as the sample variance and sample covariance, respectively Substituting the above expressions for ${\widehat {\alpha }}$ and ${\widehat {\beta }}$ into $f={\widehat {\alpha }}+{\widehat {\beta }}x$ yields ${\frac {f-{\bar {y}}}{s_{y}}}=r_{xy}{\frac {x-{\bar {x}}}{s_{x}}}.$ This shows that rxy is the slope of the regression line of the standardized data points (and that this line passes through the origin). Since $-1\leq r_{xy}\leq 1$, it follows that if x is some measurement and y is a follow-up measurement from the same item, then we expect y (on average) to be closer to the mean measurement than it was to the original value of x. This phenomenon is known as regression toward the mean. Generalizing the ${\bar {x}}$ notation, we can write a horizontal bar over an expression to indicate the average value of that expression over the set of samples. For example: ${\overline {xy}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}y_{i}.$ This notation gives us a concise formula for rxy: $r_{xy}={\frac {{\overline {xy}}-{\bar {x}}{\bar {y}}}{\sqrt {\left({\overline {x^{2}}}-{\bar {x}}^{2}\right)\left({\overline {y^{2}}}-{\bar {y}}^{2}\right)}}}.$ The coefficient of determination ("R squared") is equal to $r_{xy}^{2}$ when the model is linear with a single independent variable. See sample correlation coefficient for additional details.
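A direct transcription of these closed-form estimates, as a minimal sketch using only NumPy; the function name fit_simple_ols is a hypothetical helper introduced here for illustration:

```python
import numpy as np

def fit_simple_ols(x, y):
    """Closed-form OLS estimates for y = alpha + beta * x (formulas above)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = x - x.mean()
    beta_hat = np.sum(dx * (y - y.mean())) / np.sum(dx ** 2)
    alpha_hat = y.mean() - beta_hat * x.mean()
    return alpha_hat, beta_hat
```

Equivalently, per the derivation above, the slope can be computed as the sample correlation times the ratio of standard deviations, $r_{xy}\,s_{y}/s_{x}$.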
Intuition about the slope By multiplying all members of the summation in the numerator by ${\frac {(x_{i}-{\bar {x}})}{(x_{i}-{\bar {x}})}}=1$ (thereby not changing it): ${\begin{aligned}{\widehat {\beta }}&={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}{\frac {(y_{i}-{\bar {y}})}{(x_{i}-{\bar {x}})}}}{\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}=\sum _{i=1}^{n}{\frac {(x_{i}-{\bar {x}})^{2}}{\sum _{j=1}^{n}(x_{j}-{\bar {x}})^{2}}}{\frac {(y_{i}-{\bar {y}})}{(x_{i}-{\bar {x}})}}\\[6pt]\end{aligned}}$ We can see that the slope (tangent of angle) of the regression line is the weighted average of ${\frac {(y_{i}-{\bar {y}})}{(x_{i}-{\bar {x}})}}$, which is the slope (tangent of angle) of the line that connects the i-th point to the average of all points, weighted by $(x_{i}-{\bar {x}})^{2}$. The farther a point lies from ${\bar {x}}$, the more "important" it is, since small errors in its position affect the slope of the line connecting it to the center point less. Intuition about the intercept ${\begin{aligned}{\widehat {\alpha }}&={\bar {y}}-{\widehat {\beta }}\,{\bar {x}},\\[5pt]\end{aligned}}$ Given ${\widehat {\beta }}=\tan(\theta )=dy/dx\rightarrow dy=dx\times {\widehat {\beta }}$ with $\theta $ the angle the line makes with the positive x axis, and taking $dx={\bar {x}}$ (the horizontal distance from the y-axis to the center of mass), we have $y_{\rm {intersection}}={\bar {y}}-dx\times {\widehat {\beta }}={\bar {y}}-dy$ Intuition about the correlation In the above formulation, notice that each $x_{i}$ is a constant ("known upfront") value, while the $y_{i}$ are random variables that depend on the linear function of $x_{i}$ and the random term $\varepsilon _{i}$. This assumption is used when deriving the standard error of the slope and showing that it is unbiased. In this framing, when $x_{i}$ is not actually a random variable, what type of parameter does the empirical correlation $r_{xy}$ estimate? The issue is that for each value i we'll have: $E(x_{i})=x_{i}$ and $Var(x_{i})=0$. A possible interpretation of $r_{xy}$ is to imagine that $x_{i}$ defines a random variable drawn from the empirical distribution of the x values in our sample. For example, if x had 10 values from the natural numbers: [1,2,3...,10], then we can imagine x to follow a discrete uniform distribution. Under this interpretation all $x_{i}$ have the same expectation and some positive variance. With this interpretation we can think of $r_{xy}$ as the estimator of the Pearson's correlation between the random variable y and the random variable x (as we just defined it). Simple linear regression without the intercept term (single regressor) Sometimes it is appropriate to force the regression line to pass through the origin, because x and y are assumed to be proportional.
For the model without the intercept term, y = βx, the OLS estimator for β simplifies to ${\widehat {\beta }}={\frac {\sum _{i=1}^{n}x_{i}y_{i}}{\sum _{i=1}^{n}x_{i}^{2}}}={\frac {\overline {xy}}{\overline {x^{2}}}}$ Substituting (x − h, y − k) in place of (x, y) gives the regression through (h, k): ${\begin{aligned}{\widehat {\beta }}&={\frac {\sum _{i=1}^{n}(x_{i}-h)(y_{i}-k)}{\sum _{i=1}^{n}(x_{i}-h)^{2}}}={\frac {\overline {(x-h)(y-k)}}{\overline {(x-h)^{2}}}}\\[6pt]&={\frac {{\overline {xy}}-k{\bar {x}}-h{\bar {y}}+hk}{{\overline {x^{2}}}-2h{\bar {x}}+h^{2}}}\\[6pt]&={\frac {{\overline {xy}}-{\bar {x}}{\bar {y}}+({\bar {x}}-h)({\bar {y}}-k)}{{\overline {x^{2}}}-{\bar {x}}^{2}+({\bar {x}}-h)^{2}}}\\[6pt]&={\frac {\operatorname {Cov} (x,y)+({\bar {x}}-h)({\bar {y}}-k)}{\operatorname {Var} (x)+({\bar {x}}-h)^{2}}},\end{aligned}}$ where Cov and Var refer to the covariance and variance of the sample data (uncorrected for bias). The last form above demonstrates how moving the line away from the center of mass of the data points affects the slope. Numerical properties 1. The regression line goes through the center of mass point, $({\bar {x}},\,{\bar {y}})$, if the model includes an intercept term (i.e., not forced through the origin). 2. The sum of the residuals is zero if the model includes an intercept term: $\sum _{i=1}^{n}{\widehat {\varepsilon }}_{i}=0.$ 3. The residuals and x values are uncorrelated (whether or not there is an intercept term in the model), meaning: $\sum _{i=1}^{n}x_{i}{\widehat {\varepsilon }}_{i}\;=\;0$ 4. The relationship between $\rho _{xy}$ (the correlation coefficient for the population) and the population variances of $y$ ($\sigma _{y}^{2}$) and of the error term $\epsilon $ ($\sigma _{\epsilon }^{2}$) is:[7]: 401  $\sigma _{\epsilon }^{2}=(1-\rho _{xy}^{2})\sigma _{y}^{2}$ For extreme values of $\rho _{xy}$ this is self-evident: when $\rho _{xy}=0$, $\sigma _{\epsilon }^{2}=\sigma _{y}^{2}$, and when $\rho _{xy}=\pm 1$, $\sigma _{\epsilon }^{2}=0$. Model-based properties Description of the statistical properties of estimators from the simple linear regression estimates requires the use of a statistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as inhomogeneity, but this is discussed elsewhere. Unbiasedness The estimators ${\widehat {\alpha }}$ and ${\widehat {\beta }}$ are unbiased. To formalize this assertion we must define a framework in which these estimators are random variables. We consider the errors εi as random variables drawn independently from some distribution with mean zero. In other words, for each value of x, the corresponding value of y is generated as a mean response α + βx plus an additional random variable ε called the error term, equal to zero on average. Under such interpretation, the least-squares estimators ${\widehat {\alpha }}$ and ${\widehat {\beta }}$ will themselves be random variables whose means will equal the "true values" α and β. This is the definition of an unbiased estimator. Confidence intervals The formulas given in the previous section allow one to calculate the point estimates of α and β — that is, the coefficients of the regression line for the given set of data. However, those formulas don't tell us how precise the estimates are, i.e., how much the estimators ${\widehat {\alpha }}$ and ${\widehat {\beta }}$ vary from sample to sample for the specified sample size.
Confidence intervals were devised to give a plausible set of values for the parameters, reflecting the estimates one might obtain if one repeated the experiment a very large number of times. The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either: 1. the errors in the regression are normally distributed (the so-called classic regression assumption), or 2. the number of observations n is sufficiently large, in which case the estimator is approximately normally distributed. The latter case is justified by the central limit theorem. Normality assumption Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean β and variance $\sigma ^{2}\left/\sum (x_{i}-{\bar {x}})^{2}\right.,$ where σ2 is the variance of the error terms (see Proofs involving ordinary least squares). At the same time the sum of squared residuals Q is distributed proportionally to χ2 with n − 2 degrees of freedom, and independently from ${\widehat {\beta }}$. This allows us to construct a t-value $t={\frac {{\widehat {\beta }}-\beta }{s_{\widehat {\beta }}}}\ \sim \ t_{n-2},$ where $s_{\widehat {\beta }}={\sqrt {\frac {{\frac {1}{n-2}}\sum _{i=1}^{n}{\widehat {\varepsilon }}_{i}^{\,2}}{\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}}$ is the standard error of the estimator ${\widehat {\beta }}$. This t-value has a Student's t-distribution with n − 2 degrees of freedom. Using it we can construct a confidence interval for β: $\beta \in \left[{\widehat {\beta }}-s_{\widehat {\beta }}t_{n-2}^{*},\ {\widehat {\beta }}+s_{\widehat {\beta }}t_{n-2}^{*}\right],$ at confidence level (1 − γ), where $t_{n-2}^{*}$ is the $\scriptstyle \left(1\;-\;{\frac {\gamma }{2}}\right){\text{-th}}$ quantile of the tn−2 distribution. For example, if γ = 0.05 then the confidence level is 95%. Similarly, the confidence interval for the intercept coefficient α is given by $\alpha \in \left[{\widehat {\alpha }}-s_{\widehat {\alpha }}t_{n-2}^{*},\ {\widehat {\alpha }}+s_{\widehat {\alpha }}t_{n-2}^{*}\right],$ at confidence level (1 − γ), where $s_{\widehat {\alpha }}=s_{\widehat {\beta }}{\sqrt {{\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{2}}}={\sqrt {{\frac {1}{n(n-2)}}\left(\sum _{i=1}^{n}{\widehat {\varepsilon }}_{i}^{\,2}\right){\frac {\sum _{i=1}^{n}x_{i}^{2}}{\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}}}$ The confidence intervals for α and β give us the general idea where these regression coefficients are most likely to be. For example, in a regression based on Okun's law the point estimates are ${\widehat {\alpha }}=0.859,\qquad {\widehat {\beta }}=-1.817.$ The 95% confidence intervals for these estimates are $\alpha \in \left[\,0.76,0.96\right],\qquad \beta \in \left[-2.06,-1.58\,\right].$ In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators.
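Before turning to confidence bands, the basic t-interval for the slope derived above can be computed directly. A minimal sketch, assuming SciPy is available for the t quantile; slope_confidence_interval is a hypothetical helper name:

```python
import numpy as np
from scipy import stats

def slope_confidence_interval(x, y, gamma=0.05):
    """(1 - gamma) confidence interval for beta, per the formulas above."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    dx = x - x.mean()
    beta = np.sum(dx * (y - y.mean())) / np.sum(dx ** 2)
    alpha = y.mean() - beta * x.mean()
    resid = y - alpha - beta * x                   # residuals eps_hat_i
    s_beta = np.sqrt((resid @ resid) / (n - 2) / np.sum(dx ** 2))
    t_star = stats.t.ppf(1 - gamma / 2, df=n - 2)  # (1 - gamma/2)-th quantile of t_{n-2}
    return beta - t_star * s_beta, beta + t_star * s_beta
```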
It can be shown[8] that at confidence level (1 − γ) the confidence band has hyperbolic form given by the equation $(\alpha +\beta \xi )\in \left[\,{\widehat {\alpha }}+{\widehat {\beta }}\xi \pm t_{n-2}^{*}{\sqrt {\left({\frac {1}{n-2}}\sum {\widehat {\varepsilon }}_{i}^{\,2}\right)\cdot \left({\frac {1}{n}}+{\frac {(\xi -{\bar {x}})^{2}}{\sum (x_{i}-{\bar {x}})^{2}}}\right)}}\,\right].$ When the model assumes that the intercept is fixed and equal to 0 ($\alpha =0$), the standard error of the slope becomes: $s_{\widehat {\beta }}={\sqrt {{\frac {1}{n-1}}{\frac {\sum _{i=1}^{n}{\widehat {\varepsilon }}_{i}^{\,2}}{\sum _{i=1}^{n}x_{i}^{2}}}}}$ with ${\hat {\varepsilon }}_{i}=y_{i}-{\hat {y}}_{i}$. Asymptotic assumption The alternative second assumption states that when the number of points in the dataset is "large enough", the law of large numbers and the central limit theorem become applicable, and then the distribution of the estimators is approximately normal. Under this assumption all formulas derived in the previous section remain valid, with the only exception that the quantile t*n−2 of Student's t distribution is replaced with the quantile q* of the standard normal distribution. Occasionally the fraction 1/(n−2) is replaced with 1/n. When n is large such a change does not alter the results appreciably. Numerical example See also: Ordinary least squares § Example, and Linear least squares § Example This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although the OLS article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead.

Height (m), xi: 1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83
Mass (kg), yi: 52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46

$i$ | $x_{i}$ | $y_{i}$ | $x_{i}^{2}$ | $x_{i}y_{i}$ | $y_{i}^{2}$
1 | 1.47 | 52.21 | 2.1609 | 76.7487 | 2725.8841
2 | 1.50 | 53.12 | 2.2500 | 79.6800 | 2821.7344
3 | 1.52 | 54.48 | 2.3104 | 82.8096 | 2968.0704
4 | 1.55 | 55.84 | 2.4025 | 86.5520 | 3118.1056
5 | 1.57 | 57.20 | 2.4649 | 89.8040 | 3271.8400
6 | 1.60 | 58.57 | 2.5600 | 93.7120 | 3430.4449
7 | 1.63 | 59.93 | 2.6569 | 97.6859 | 3591.6049
8 | 1.65 | 61.29 | 2.7225 | 101.1285 | 3756.4641
9 | 1.68 | 63.11 | 2.8224 | 106.0248 | 3982.8721
10 | 1.70 | 64.47 | 2.8900 | 109.5990 | 4156.3809
11 | 1.73 | 66.28 | 2.9929 | 114.6644 | 4393.0384
12 | 1.75 | 68.10 | 3.0625 | 119.1750 | 4637.6100
13 | 1.78 | 69.92 | 3.1684 | 124.4576 | 4888.8064
14 | 1.80 | 72.19 | 3.2400 | 129.9420 | 5211.3961
15 | 1.83 | 74.46 | 3.3489 | 136.2618 | 5544.2916
$\Sigma $ | 24.76 | 931.17 | 41.0532 | 1548.2453 | 58498.5439

There are n = 15 points in this data set. Hand calculations would be started by finding the following five sums: ${\begin{aligned}S_{x}&=\sum x_{i}\,=24.76,\qquad S_{y}=\sum y_{i}\,=931.17,\\[5pt]S_{xx}&=\sum x_{i}^{2}=41.0532,\;\;\,S_{yy}=\sum y_{i}^{2}=58498.5439,\\[5pt]S_{xy}&=\sum x_{i}y_{i}=1548.2453\end{aligned}}$ These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors; the short script below reproduces the sums and the coefficient estimates.
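A sketch of the hand calculation in plain Python, using only the data above:

```python
x = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
     1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
y = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
     63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]

n = len(x)
Sx = sum(x)                             # 24.76
Sy = sum(y)                             # 931.17
Sxx = sum(v * v for v in x)             # 41.0532
Syy = sum(v * v for v in y)             # 58498.5439
Sxy = sum(a * b for a, b in zip(x, y))  # 1548.2453

beta = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)  # ~ 61.272
alpha = Sy / n - beta * Sx / n                    # ~ -39.062
```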
${\begin{aligned}{\widehat {\beta }}&={\frac {nS_{xy}-S_{x}S_{y}}{nS_{xx}-S_{x}^{2}}}=61.272\\[8pt]{\widehat {\alpha }}&={\frac {1}{n}}S_{y}-{\widehat {\beta }}{\frac {1}{n}}S_{x}=-39.062\\[8pt]s_{\varepsilon }^{2}&={\frac {1}{n(n-2)}}\left[nS_{yy}-S_{y}^{2}-{\widehat {\beta }}^{2}(nS_{xx}-S_{x}^{2})\right]=0.5762\\[8pt]s_{\widehat {\beta }}^{2}&={\frac {ns_{\varepsilon }^{2}}{nS_{xx}-S_{x}^{2}}}=3.1539\\[8pt]s_{\widehat {\alpha }}^{2}&=s_{\widehat {\beta }}^{2}{\frac {1}{n}}S_{xx}=8.63185\end{aligned}}$ The 0.975 quantile of Student's t-distribution with 13 degrees of freedom is t*13 = 2.1604, and thus the 95% confidence intervals for α and β are ${\begin{aligned}&\alpha \in [\,{\widehat {\alpha }}\mp t_{13}^{*}s_{\alpha }\,]=[\,{-45.4},\ {-32.7}\,]\\[5pt]&\beta \in [\,{\widehat {\beta }}\mp t_{13}^{*}s_{\beta }\,]=[\,57.4,\ 65.1\,]\end{aligned}}$ The product-moment correlation coefficient might also be calculated: ${\widehat {r}}={\frac {nS_{xy}-S_{x}S_{y}}{\sqrt {(nS_{xx}-S_{x}^{2})(nS_{yy}-S_{y}^{2})}}}=0.9946$ See also • Design matrix#Simple linear regression • Line fitting • Linear trend estimation • Linear segmented regression • Proofs involving ordinary least squares—derivation of all formulas used in this article in general multidimensional case References 1. Seltman, Howard J. (2008-09-08). Experimental Design and Analysis (PDF). p. 227. 2. "Statistical Sampling and Regression: Simple Linear Regression". Columbia University. Retrieved 2016-10-17. When one independent variable is used in a regression, it is called a simple regression;(...) 3. Lane, David M. Introduction to Statistics (PDF). p. 462. 4. Zou KH; Tuncali K; Silverman SG (2003). "Correlation and simple linear regression". Radiology. 227 (3): 617–22. doi:10.1148/radiol.2273011499. ISSN 0033-8419. OCLC 110941167. PMID 12773666. 5. Altman, Naomi; Krzywinski, Martin (2015). "Simple linear regression". Nature Methods. 12 (11): 999–1000. doi:10.1038/nmeth.3627. ISSN 1548-7091. OCLC 5912005539. PMID 26824102. 6. Kenney, J. F. and Keeping, E. S. (1962) "Linear Regression and Correlation." Ch. 15 in Mathematics of Statistics, Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, pp. 252–285 7. Valliant, Richard, Jill A. Dever, and Frauke Kreuter. Practical tools for designing and weighting survey samples. New York: Springer, 2013. 8. Casella, G. and Berger, R. L. (2002), "Statistical Inference" (2nd Edition), Cengage, ISBN 978-0-534-24312-8, pp. 558–559. 
External links • Wolfram MathWorld's explanation of Least Squares Fitting, and how to calculate it • Mathematics of simple regression (Robert Nau, Duke University)
Simple ring In abstract algebra, a branch of mathematics, a simple ring is a non-zero ring that has no two-sided ideal besides the zero ideal and itself. In particular, a commutative ring is a simple ring if and only if it is a field. The center of a simple ring is necessarily a field. It follows that a simple ring is an associative algebra over this field. It is then called a simple algebra over this field. Several references (e.g., Lang (2002) or Bourbaki (2012)) require in addition that a simple ring be left or right Artinian (or equivalently semi-simple). Under such terminology a non-zero ring with no non-trivial two-sided ideals is called quasi-simple. Rings which are simple as rings but are not a simple module over themselves do exist: a full matrix ring over a field does not have any nontrivial two-sided ideals (since any ideal of $M_{n}(R)$ is of the form $M_{n}(I)$ with $I$ an ideal of $R$), but it has nontrivial left ideals (for example, the sets of matrices which have some fixed zero columns). An immediate example of a simple ring is a division ring, where every nonzero element has a multiplicative inverse, for instance, the quaternions. Also, for any $n\geq 1$, the algebra of $n\times n$ matrices with entries in a division ring is simple. Joseph Wedderburn proved that if a ring $R$ is a finite-dimensional simple algebra over a field $k$, it is isomorphic to a matrix algebra over some division algebra over $k$. In particular, the only simple rings that are finite-dimensional algebras over the real numbers are rings of matrices over either the real numbers, the complex numbers, or the quaternions. Wedderburn proved these results in 1907 in his doctoral thesis, On hypercomplex numbers, which appeared in the Proceedings of the London Mathematical Society. His thesis classified finite-dimensional simple and also semisimple algebras over fields. Simple algebras are building blocks of semisimple algebras: any finite-dimensional semisimple algebra is a Cartesian product, in the sense of algebras, of finite-dimensional simple algebras. One must be careful of the terminology: not every simple ring is a semisimple ring, and not every simple algebra is a semisimple algebra! However, every finite-dimensional simple algebra is a semisimple algebra, and every simple ring that is left or right artinian is a semisimple ring. An example of a simple ring that is not semisimple is the Weyl algebra. The Weyl algebra also gives an example of a simple algebra that is not a matrix algebra over a division algebra over its center: the Weyl algebra is infinite-dimensional, so Wedderburn's theorem does not apply. Wedderburn's result was later generalized to semisimple rings in the Wedderburn-Artin theorem: this says that every semisimple ring is a finite product of matrix rings over division rings. As a consequence of this generalization, every simple ring that is left or right artinian is a matrix ring over a division ring. Examples Let $\mathbb {R} $ be the field of real numbers, $\mathbb {C} $ be the field of complex numbers, and $\mathbb {H} $ the quaternions. • A central simple algebra (sometimes called a Brauer algebra) is a simple finite-dimensional algebra over a field $F$ whose center is $F$. • Every finite-dimensional simple algebra over $\mathbb {R} $ is isomorphic to an algebra of $n\times n$ matrices with entries in $\mathbb {R} $, $\mathbb {C} $, or $\mathbb {H} $. 
Every central simple algebra over $\mathbb {R} $ is isomorphic to an algebra of $n\times n$ matrices with entries in $\mathbb {R} $ or $\mathbb {H} $. These results follow from the Frobenius theorem. • Every finite-dimensional simple algebra over $\mathbb {C} $ is a central simple algebra, and is isomorphic to a matrix ring over $\mathbb {C} $. • Every finite-dimensional central simple algebra over a finite field is isomorphic to a matrix ring over that field. • The algebra of all linear transformations of an infinite-dimensional vector space over a field $k$ is not simple, since its finite-rank transformations form a proper nonzero two-sided ideal; however, for a countably infinite-dimensional space, the quotient by this ideal is a simple ring that is not a semisimple ring, and a simple algebra over $k$ that is not a semisimple algebra. See also • Simple (algebra) • Simple universal algebra References • A. A. Albert, Structure of Algebras, Colloquium publications 24, American Mathematical Society, 2003, ISBN 0-8218-1024-3. P.37. • Bourbaki, Nicolas (2012), Algèbre Ch. 8 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-35315-7 • Nicholson, William K. (1993). "A short proof of the Wedderburn-Artin theorem" (PDF). New Zealand J. Math. 22: 83–86. • Henderson, D.W. (1965). "A short proof of Wedderburn's theorem". Amer. Math. Monthly. 72: 385–386. doi:10.2307/2313499. • Lam, Tsit-Yuen (2001), A First Course in Noncommutative Rings (2nd ed.), Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4419-8616-0, ISBN 978-0-387-95325-0, MR 1838439 • Lang, Serge (2002), Algebra (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0387953854 • Jacobson, Nathan (1989), Basic Algebra II (2nd ed.), W. H. Freeman, ISBN 978-0-7167-1933-5
Simple space In algebraic topology, a branch of mathematics, a simple space is a connected topological space that has the homotopy type of a CW complex and whose fundamental group is abelian and acts trivially on the homotopy and homology of the universal covering space, though not all authors include the assumption on the homotopy type. Examples Topological groups For example, any topological group is a simple space (provided it satisfies the condition on the homotopy type). Eilenberg–MacLane spaces Most Eilenberg–MacLane spaces $K(A,n)$ are simple, since the only nontrivial homotopy group is in degree $n$. This means the only non-simple Eilenberg–MacLane spaces are the spaces $K(G,1)$ for $G$ nonabelian. Universal covers Every connected topological space $X$ has an associated simple space arising from its universal cover $\pi :U_{X}\to X$, since $\pi _{1}(U_{X})=*$ (and the universal cover of a universal cover is the universal cover itself). References • Dennis Sullivan, Geometric Topology
Simple theorems in the algebra of sets The simple theorems in the algebra of sets are some of the elementary properties of the algebra of union (infix operator: ∪), intersection (infix operator: ∩), and set complement (postfix ') of sets. These properties assume the existence of at least two sets: a given universal set, denoted U, and the empty set, denoted {}. The algebra of sets describes the properties of all possible subsets of U, called the power set of U and denoted P(U). P(U) is assumed closed under union, intersection, and set complement. The algebra of sets is an interpretation or model of Boolean algebra, with union, intersection, set complement, U, and {} interpreting Boolean sum, product, complement, 1, and 0, respectively. The properties below are stated without proof, but can be derived from a small number of properties taken as axioms. A "*" follows the algebra of sets interpretation of Huntington's (1904) classic postulate set for Boolean algebra. These properties can be visualized with Venn diagrams. They also follow from the fact that P(U) is a Boolean lattice. The properties followed by "L" interpret the lattice axioms. Elementary discrete mathematics courses sometimes leave students with the impression that the subject matter of set theory is no more than these properties. For more about elementary set theory, see set, set theory, algebra of sets, and naive set theory. For an introduction to set theory at a higher level, see also axiomatic set theory, cardinal number, ordinal number, Cantor–Bernstein–Schroeder theorem, Cantor's diagonal argument, Cantor's first uncountability proof, Cantor's theorem, well-ordering theorem, axiom of choice, and Zorn's lemma. The properties below include a defined binary operation, relative complement, denoted by the infix operator "\". The "relative complement of A in B," denoted B \ A, is defined as (A ∪ B′)′ and as A′ ∩ B. PROPOSITION 1. For any U and any subset A of U: • {}′ = U; • U′ = {}; • A \ {} = A; • {} \ A = {}; • A ∩ {} = {}; • A ∪ {} = A; * • A ∩ U = A; * • A ∪ U = U; • A′ ∪ A = U; * • A′ ∩ A = {}; * • A \ A = {}; • U \ A = A′; • A \ U = {}; • A′′ = A; • A ∩ A = A; • A ∪ A = A. PROPOSITION 2. For any sets A, B, and C: • A ∩ B = B ∩ A; * L • A ∪ B = B ∪ A; * L • A ∪ (A ∩ B) = A; L • A ∩ (A ∪ B) = A; L • (A ∪ B) \ A = B \ A; • A ∩ B = {} if and only if B \ A = B; • (A′ ∪ B)′ ∪ (A′ ∪ B′)′ = A; • (A ∩ B) ∩ C = A ∩ (B ∩ C); L • (A ∪ B) ∪ C = A ∪ (B ∪ C); L • C \ (A ∩ B) = (C \ A) ∪ (C \ B); • C \ (A ∪ B) = (C \ A) ∩ (C \ B); • C \ (B \ A) = (C \ B) ∪ (C ∩ A); • (B \ A) ∩ C = (B ∩ C) \ A = B ∩ (C \ A); • (B \ A) ∪ C = (B ∪ C) \ (A \ C). The distributive laws: • A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C); * • A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C). * PROPOSITION 3. Some properties of ⊆: • A ⊆ B if and only if A ∩ B = A; • A ⊆ B if and only if A ∪ B = B; • A ⊆ B if and only if B′ ⊆ A′; • A ⊆ B if and only if A \ B = {}; • A ∩ B ⊆ A ⊆ A ∪ B. See also • List of set identities and relations – Equalities for combinations of sets References • Edward Huntington (1904) "Sets of independent postulates for the algebra of logic," Transactions of the American Mathematical Society 5: 288-309. • Whitesitt, J. E. (1961) Boolean Algebra and Its Applications. Addison-Wesley. Dover reprint, 1999.
Simple algebra (universal algebra) In universal algebra, an abstract algebra A is called simple if and only if it has no nontrivial congruence relations, or equivalently, if every homomorphism with domain A is either injective or constant. As congruences on rings are characterized by their ideals, this notion is a straightforward generalization of the notion from ring theory: a ring is simple in the sense that it has no nontrivial ideals if and only if it is simple in the sense of universal algebra. The same remark applies with respect to groups and normal subgroups; hence the universal notion is also a generalization of a simple group (it is a matter of convention whether a one-element algebra should or should not be considered simple; only in this special case might the notions not match). A theorem by Roberto Magari in 1969 asserts that every variety contains a simple algebra.[1] See also • simple group • simple ring • central simple algebra References 1. Lampe, W.A.; Taylor, W. (1982). "Simple algebras in varieties". Algebra Universalis. 14 (1): 36–43. doi:10.1007/BF02483905. S2CID 120637415. The original paper is Magari, R. (1969). "Una dimostrazione del fatto che ogni varietà ammette algebre semplici" [A proof of the fact that every variety admits simple algebras]. Annali dell'Università di Ferrara, Sez. VII (in Italian). 14 (1): 1–4. doi:10.1007/BF02896794. S2CID 115886103.
Multiplicity (mathematics) In mathematics, the multiplicity of a member of a multiset is the number of times it appears in the multiset. For example, the number of times a given polynomial has a root at a given point is the multiplicity of that root. The notion of multiplicity is important to be able to count correctly without specifying exceptions (for example, double roots counted twice). Hence the expression, "counted with multiplicity". If multiplicity is ignored, this may be emphasized by counting the number of distinct elements, as in "the number of distinct roots". However, whenever a set (as opposed to multiset) is formed, multiplicity is automatically ignored, without requiring use of the term "distinct". Multiplicity of a prime factor Main article: p-adic valuation In prime factorization, the multiplicity of a prime factor is its $p$-adic valuation. For example, the prime factorization of the integer 60 is 60 = 2 × 2 × 3 × 5, the multiplicity of the prime factor 2 is 2, while the multiplicity of each of the prime factors 3 and 5 is 1. Thus, 60 has four prime factors allowing for multiplicities, but only three distinct prime factors. Multiplicity of a root of a polynomial Let $F$ be a field and $p(x)$ be a polynomial in one variable with coefficients in $F$. An element $a\in F$ is a root of multiplicity $k$ of $p(x)$ if there is a polynomial $s(x)$ such that $s(a)\neq 0$ and $p(x)=(x-a)^{k}s(x)$. If $k=1$, then a is called a simple root. If $k\geq 2$, then $a$ is called a multiple root. For instance, the polynomial $p(x)=x^{3}+2x^{2}-7x+4$ has 1 and −4 as roots, and can be written as $p(x)=(x+4)(x-1)^{2}$. This means that 1 is a root of multiplicity 2, and −4 is a simple root (of multiplicity 1). The multiplicity of a root is the number of occurrences of this root in the complete factorization of the polynomial, by means of the fundamental theorem of algebra. If $a$ is a root of multiplicity $k$ of a polynomial, then it is a root of multiplicity $k-1$ of the derivative of that polynomial, unless the characteristic of the underlying field is a divisor of k, in which case $a$ is a root of multiplicity at least $k$ of the derivative. The discriminant of a polynomial is zero if and only if the polynomial has a multiple root. Behavior of a polynomial function near a multiple root The graph of a polynomial function f touches the x-axis at the real roots of the polynomial. The graph is tangent to it at the multiple roots of f and not tangent at the simple roots. The graph crosses the x-axis at roots of odd multiplicity and does not cross it at roots of even multiplicity. A non-zero polynomial function is everywhere non-negative if and only if all its roots have even multiplicity and there exists an $x_{0}$ such that $f(x_{0})>0$. Multiplicity of a solution of a nonlinear system of equations For an equation $f(x)=0$ with a single variable solution $x_{*}$, the multiplicity is $k$ if $f(x_{*})=f'(x_{*})=\cdots =f^{(k-1)}(x_{*})=0$ and $f^{(k)}(x_{*})\neq 0.$ In other words, the differential functional $\partial _{j}$, defined as the derivative ${\frac {1}{j!}}{\frac {d^{j}}{dx^{j}}}$ of a function at $x_{*}$, vanishes at $f$ for $j$ up to $k-1$. Those differential functionals $\partial _{0},\partial _{1},\cdots ,\partial _{k-1}$ span a vector space, called the Macaulay dual space at $x_{*}$,[1] and its dimension is the multiplicity of $x_{*}$ as a zero of $f$.
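The single-variable derivative criterion just stated can be checked symbolically. A minimal SymPy sketch, using the polynomial from the example above; root_multiplicity is a hypothetical helper name introduced here for illustration:

```python
import sympy as sp

x = sp.symbols('x')
p = x**3 + 2*x**2 - 7*x + 4          # equals (x + 4)*(x - 1)**2

def root_multiplicity(f, a):
    """Smallest k with f(a) = f'(a) = ... = f^(k-1)(a) = 0 and f^(k)(a) != 0."""
    k = 0
    while f.subs(x, a) == 0:
        f = sp.diff(f, x)            # differentiate until nonvanishing at a
        k += 1
    return k

print(root_multiplicity(p, 1))       # 2: a multiple root
print(root_multiplicity(p, -4))      # 1: a simple root
```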
Let $\mathbf {f} (\mathbf {x} )=\mathbf {0} $ be a system of $m$ equations in $n$ variables with a solution $\mathbf {x} _{*}$, where $\mathbf {f} $ is a mapping from $R^{n}$ to $R^{m}$ or from $C^{n}$ to $C^{m}$. There is also a Macaulay dual space of differential functionals at $\mathbf {x} _{*}$ in which every functional vanishes at $\mathbf {f} $. The dimension of this Macaulay dual space is the multiplicity of the solution $\mathbf {x} _{*}$ to the equation $\mathbf {f} (\mathbf {x} )=\mathbf {0} $. The Macaulay dual space forms the multiplicity structure of the system at the solution.[2][3] For example, the solution $\mathbf {x} _{*}=(0,0)$ of the system of equations in the form of $\mathbf {f} (\mathbf {x} )=\mathbf {0} $ with $\mathbf {f} (\mathbf {x} )=\left[{\begin{array}{c}\sin(x_{1})-x_{2}+x_{1}^{2}\\x_{1}-\sin(x_{2})+x_{2}^{2}\end{array}}\right]$ is of multiplicity 3 because the Macaulay dual space $span\{\partial _{00},\partial _{10}+\partial _{01},-\partial _{10}+\partial _{20}+\partial _{11}+\partial _{02}\}$ is of dimension 3, where $\partial _{ij}$ denotes the differential functional ${\frac {1}{i!j!}}{\frac {\partial ^{i+j}}{\partial x_{1}^{i}\partial x_{2}^{j}}}$ applied on a function at the point $\mathbf {x} _{*}=(0,0)$. The multiplicity is always finite if the solution is isolated, is perturbation invariant in the sense that a $k$-fold solution becomes a cluster of solutions with a combined multiplicity $k$ under perturbation in complex spaces, and is identical to the intersection multiplicity on polynomial systems. Intersection multiplicity Main article: Intersection multiplicity In algebraic geometry, the intersection of two sub-varieties of an algebraic variety is a finite union of irreducible varieties. To each component of such an intersection is attached an intersection multiplicity. This notion is local in the sense that it may be defined by looking at what occurs in a neighborhood of any generic point of this component. It follows that without loss of generality, we may consider, in order to define the intersection multiplicity, the intersection of two affine varieties (sub-varieties of an affine space). Thus, given two affine varieties V1 and V2, consider an irreducible component W of the intersection of V1 and V2. Let d be the dimension of W, and P be any generic point of W. The intersection of W with d hyperplanes in general position passing through P has an irreducible component that is reduced to the single point P. Therefore, the local ring at this component of the coordinate ring of the intersection has only one prime ideal, and is therefore an Artinian ring. This ring is thus a finite dimensional vector space over the ground field. Its dimension is the intersection multiplicity of V1 and V2 at W. This definition allows us to state Bézout's theorem and its generalizations precisely. This definition generalizes the multiplicity of a root of a polynomial in the following way. The roots of a polynomial f are points on the affine line, which are the components of the algebraic set defined by the polynomial. The coordinate ring of this affine set is $R=K[X]/\langle f\rangle ,$ where K is an algebraically closed field containing the coefficients of f. If $f(X)=\prod _{i=1}^{k}(X-\alpha _{i})^{m_{i}}$ is the factorization of f, then the local ring of R at the prime ideal $\langle X-\alpha _{i}\rangle $ is $K[X]/\langle (X-\alpha _{i})^{m_{i}}\rangle .$ This is a vector space over K, which has the multiplicity $m_{i}$ of the root as a dimension.
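For a polynomial in one variable, this bookkeeping reduces to reading the multiplicities $m_{i}$ off the factorization, which computer algebra systems expose directly. A small SymPy sketch, reusing the polynomial from the earlier example:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.expand((x + 4) * (x - 1)**2)

# roots() returns each root with its multiplicity m_i, i.e. the dimension
# of the local ring K[X]/<(X - alpha_i)^{m_i}> described above.
print(sp.roots(f, x))   # {1: 2, -4: 1}
```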
This definition of intersection multiplicity, which is essentially due to Jean-Pierre Serre in his book Local Algebra, works only for the set theoretic components (also called isolated components) of the intersection, not for the embedded components. Theories have been developed for handling the embedded case (see Intersection theory for details). In complex analysis Let z0 be a root of a holomorphic function f, and let n be the least positive integer such that the nth derivative of f evaluated at z0 differs from zero. Then the power series of f about z0 begins with the nth term, and f is said to have a root of multiplicity (or "order") n. If n = 1, the root is called a simple root.[4] We can also define the multiplicity of the zeroes and poles of a meromorphic function. If we have a meromorphic function $ f={\frac {g}{h}},$ take the Taylor expansions of g and h about a point z0, and find the first non-zero term in each (denote the orders of these terms by m and n respectively). If m = n, then the point has a non-zero value. If $m>n,$ then the point is a zero of multiplicity $m-n.$ If $m<n$, then the point has a pole of multiplicity $n-m.$ References 1. D.J. Bates, A.J. Sommese, J.D. Hauenstein and C.W. Wampler (2013). Numerically Solving Polynomial Systems with Bertini. SIAM. p. 186-187. 2. B.H. Dayton, T.-Y. Li and Z. Zeng (2011). "Multiple zeros of nonlinear systems". Mathematics of Computation. 80 (276): 2143-2168. arXiv:2103.05738. doi:10.1090/s0025-5718-2011-02462-2. S2CID 9867417. 3. Macaulay, F.S. (1916). The Algebraic Theory of Modular Systems. Cambridge Univ. Press 1994, reprint of 1916 original. 4. (Krantz 1999, p. 70) • Krantz, S. G. Handbook of Complex Variables. Boston, MA: Birkhäuser, 1999. ISBN 0-8176-4011-8.
Symplectic group

In mathematics, the name symplectic group can refer to two different, but closely related, collections of mathematical groups, denoted Sp(2n, F) and Sp(n) for positive integer n and field F (usually C or R). The latter is called the compact symplectic group and is also denoted by $\mathrm {USp} (n)$. Many authors prefer slightly different notations, usually differing by factors of 2. The notation used here is consistent with the size of the most common matrices which represent the groups. In Cartan's classification of the simple Lie algebras, the Lie algebra of the complex group Sp(2n, C) is denoted Cn, and Sp(n) is the compact real form of Sp(2n, C). Note that when we refer to the (compact) symplectic group it is implied that we are talking about the collection of (compact) symplectic groups, indexed by their dimension n. For finite groups with all characteristic abelian subgroups cyclic, see group of symplectic type.
The name "symplectic group" is due to Hermann Weyl as a replacement for the previous confusing names (line) complex group and Abelian linear group, and is the Greek analog of "complex". The metaplectic group is a double cover of the symplectic group over R; it has analogues over other local fields, finite fields, and adele rings.

Sp(2n, F)

The symplectic group is a classical group defined as the set of linear transformations of a 2n-dimensional vector space over the field F which preserve a non-degenerate skew-symmetric bilinear form. Such a vector space is called a symplectic vector space, and the symplectic group of an abstract symplectic vector space V is denoted Sp(V). Upon fixing a basis for V, the symplectic group becomes the group of 2n × 2n symplectic matrices, with entries in F, under the operation of matrix multiplication. This group is denoted either Sp(2n, F) or Sp(n, F). If the bilinear form is represented by the nonsingular skew-symmetric matrix Ω, then $\operatorname {Sp} (2n,F)=\{M\in M_{2n\times 2n}(F):M^{\mathrm {T} }\Omega M=\Omega \},$ where MT is the transpose of M. Often Ω is defined to be $\Omega ={\begin{pmatrix}0&I_{n}\\-I_{n}&0\\\end{pmatrix}},$ where In is the identity matrix. In this case, Sp(2n, F) can be expressed as those block matrices $({\begin{smallmatrix}A&B\\C&D\end{smallmatrix}})$, where $A,B,C,D\in M_{n\times n}(F)$, satisfying the three equations: ${\begin{aligned}-C^{\mathrm {T} }A+A^{\mathrm {T} }C&=0,\\-C^{\mathrm {T} }B+A^{\mathrm {T} }D&=I_{n},\\-D^{\mathrm {T} }B+B^{\mathrm {T} }D&=0.\end{aligned}}$

Since all symplectic matrices have determinant 1, the symplectic group is a subgroup of the special linear group SL(2n, F). When n = 1, the symplectic condition on a matrix is satisfied if and only if the determinant is one, so that Sp(2, F) = SL(2, F). For n > 1, there are additional conditions, i.e. Sp(2n, F) is then a proper subgroup of SL(2n, F).

Typically, the field F is the field of real numbers R or complex numbers C. In these cases Sp(2n, F) is a real/complex Lie group of real/complex dimension n(2n + 1). These groups are connected but non-compact. The center of Sp(2n, F) consists of the matrices I2n and −I2n as long as the characteristic of the field is not 2.[1] Since the center of Sp(2n, F) is discrete and its quotient modulo the center is a simple group, Sp(2n, F) is considered a simple Lie group. The real rank of the corresponding Lie algebra, and hence of the Lie group Sp(2n, F), is n.

The Lie algebra of Sp(2n, F) is the set ${\mathfrak {sp}}(2n,F)=\{X\in M_{2n\times 2n}(F):\Omega X+X^{\mathrm {T} }\Omega =0\},$ equipped with the commutator as its Lie bracket.[2] For the standard skew-symmetric bilinear form $\Omega =({\begin{smallmatrix}0&I\\-I&0\end{smallmatrix}})$, this Lie algebra is the set of all block matrices $({\begin{smallmatrix}A&B\\C&D\end{smallmatrix}})$ subject to the conditions ${\begin{aligned}A&=-D^{\mathrm {T} },\\B&=B^{\mathrm {T} },\\C&=C^{\mathrm {T} }.\end{aligned}}$

Sp(2n, C)

The symplectic group over the field of complex numbers is a non-compact, simply connected, simple Lie group.

Sp(2n, R)

Sp(2n, C) is the complexification of the real group Sp(2n, R). Sp(2n, R) is a real, non-compact, connected, simple Lie group.[3] It has a fundamental group isomorphic to the group of integers under addition. As the real form of a simple Lie group its Lie algebra is a splittable Lie algebra.
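The block descriptions above lend themselves to a direct numerical check. Below is a minimal NumPy/SciPy sketch (illustrative only, not from the cited sources): it builds an element X of sp(2n, R) with B and C symmetric, exponentiates it, and confirms the group condition. The determinant check reflects the inclusion Sp(2n, F) ⊂ SL(2n, F) noted above, since det e^X = e^{tr X} and tr X = tr A + tr(-A^T) = 0.

```python
import numpy as np
from scipy.linalg import expm

n = 2
I = np.eye(n)
Omega = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])

# Build X in sp(2n, R) from the block description: A arbitrary,
# B and C symmetric, lower-right block equal to -A^T.
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)); B = B + B.T
C = rng.standard_normal((n, n)); C = C + C.T
X = np.block([[A, B], [C, -A.T]])

assert np.allclose(Omega @ X + X.T @ Omega, 0)   # X satisfies the Lie algebra condition

M = expm(X)                                      # exponentiate into the group
assert np.allclose(M.T @ Omega @ M, Omega)       # M is symplectic
assert np.isclose(np.linalg.det(M), 1.0)         # determinant 1, as stated above
```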
Some further properties of Sp(2n, R):

• The exponential map from the Lie algebra sp(2n, R) to the group Sp(2n, R) is not surjective. However, any element of the group can be represented as the product of two exponentials.[4] In other words, $\forall S\in \operatorname {Sp} (2n,\mathbf {R} )\,\,\exists X,Y\in {\mathfrak {sp}}(2n,\mathbf {R} )\,\,S=e^{X}e^{Y}.$

• For all S in Sp(2n, R): $S=OZO'\quad {\text{such that}}\quad O,O'\in \operatorname {Sp} (2n,\mathbf {R} )\cap \operatorname {SO} (2n)\cong U(n)\quad {\text{and}}\quad Z={\begin{pmatrix}D&0\\0&D^{-1}\end{pmatrix}}.$ The matrix D is positive-definite and diagonal. The set of such Zs forms a non-compact subgroup of Sp(2n, R) whereas U(n) forms a compact subgroup. This decomposition is known as the 'Euler' or 'Bloch–Messiah' decomposition.[5] Further properties of symplectic matrices can be found at symplectic matrix.

• As a Lie group, Sp(2n, R) has a manifold structure. The manifold for Sp(2n, R) is diffeomorphic to the Cartesian product of the unitary group U(n) with a vector space of dimension n(n+1).[6]

Infinitesimal generators

The members of the symplectic Lie algebra sp(2n, F) are the Hamiltonian matrices. These are the matrices $Q$ such that $Q={\begin{pmatrix}A&B\\C&-A^{\mathrm {T} }\end{pmatrix}}$ where B and C are symmetric matrices. See classical group for a derivation.

Example of symplectic matrices

For Sp(2, R), the group of 2 × 2 matrices with determinant 1, the three symplectic (0, 1)-matrices are:[7] ${\begin{pmatrix}1&0\\0&1\end{pmatrix}},\quad {\begin{pmatrix}1&0\\1&1\end{pmatrix}}\quad {\text{and}}\quad {\begin{pmatrix}1&1\\0&1\end{pmatrix}}.$

Sp(2n, R)

It turns out that $\operatorname {Sp} (2n,\mathbf {R} )$ has a fairly explicit description in terms of generators. If we let $\operatorname {Sym} (n)$ denote the symmetric $n\times n$ matrices, then $\operatorname {Sp} (2n,\mathbf {R} )$ is generated by $D(n)\cup N(n)\cup \{\Omega \},$ where ${\begin{aligned}D(n)&=\left\{\left.{\begin{bmatrix}A&0\\0&(A^{T})^{-1}\end{bmatrix}}\,\right|\,A\in \operatorname {GL} (n,\mathbf {R} )\right\}\\[6pt]N(n)&=\left\{\left.{\begin{bmatrix}I_{n}&B\\0&I_{n}\end{bmatrix}}\,\right|\,B\in \operatorname {Sym} (n)\right\}\end{aligned}}$ are subgroups of $\operatorname {Sp} (2n,\mathbf {R} )$.[8]: 173 [9]: 2 

Relationship with symplectic geometry

Symplectic geometry is the study of symplectic manifolds. The tangent space at any point on a symplectic manifold is a symplectic vector space.[10] As noted earlier, structure-preserving transformations of a symplectic vector space form a group, and this group is Sp(2n, F), depending on the dimension of the space and the field over which it is defined. A symplectic vector space is itself a symplectic manifold. A transformation under an action of the symplectic group is thus, in a sense, a linearised version of a symplectomorphism, which is a more general structure-preserving transformation on a symplectic manifold.

Sp(n)

The compact symplectic group[11] Sp(n) is the intersection of Sp(2n, C) with the $2n\times 2n$ unitary group: $\operatorname {Sp} (n):=\operatorname {Sp} (2n;\mathbf {C} )\cap \operatorname {U} (2n)=\operatorname {Sp} (2n;\mathbf {C} )\cap \operatorname {SU} (2n).$ It is sometimes written as USp(2n).
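For n = 1 this intersection definition can be checked concretely. Writing a unit quaternion a + bi + cj + dk as a 2 × 2 complex matrix (one standard convention, assumed in this sketch), the matrix is unitary and preserves the complex symplectic form, hence lies in Sp(1):

```python
import numpy as np

# A unit quaternion q = a + bi + cj + dk, encoded as a 2x2 complex matrix
# (this particular encoding is an assumption of the sketch).
a, b, c, d = 0.5, 0.5, 0.5, 0.5             # a^2 + b^2 + c^2 + d^2 = 1
U = np.array([[a + 1j * b,  c + 1j * d],
              [-c + 1j * d, a - 1j * b]])

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])

assert np.allclose(U.conj().T @ U, np.eye(2))   # U is in U(2)
assert np.allclose(U.T @ Omega @ U, Omega)      # U is in Sp(2, C)
# Hence U lies in Sp(2, C) ∩ U(2) = Sp(1), topologically the 3-sphere.
```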
Alternatively, Sp(n) can be described as the subgroup of GL(n, H) (invertible quaternionic matrices) that preserves the standard hermitian form on Hn: $\langle x,y\rangle ={\bar {x}}_{1}y_{1}+\cdots +{\bar {x}}_{n}y_{n}.$ That is, Sp(n) is just the quaternionic unitary group, U(n, H).[12] Indeed, it is sometimes called the hyperunitary group. Also Sp(1) is the group of quaternions of norm 1, isomorphic to SU(2) and topologically a 3-sphere S3.

Note that Sp(n) is not a symplectic group in the sense of the previous section—it does not preserve a non-degenerate skew-symmetric H-bilinear form on Hn: there is no such form except the zero form. Rather, it is isomorphic to a subgroup of Sp(2n, C), and so does preserve a complex symplectic form in a vector space of twice the dimension. As explained below, the Lie algebra of Sp(n) is the compact real form of the complex symplectic Lie algebra sp(2n, C).

Sp(n) is a real Lie group with (real) dimension n(2n + 1). It is compact and simply connected.[13] The Lie algebra of Sp(n) is given by the quaternionic skew-Hermitian matrices, the set of n-by-n quaternionic matrices that satisfy $A+A^{\dagger }=0$ where A† is the conjugate transpose of A (here one takes the quaternionic conjugate). The Lie bracket is given by the commutator.

Important subgroups

Some main subgroups are: $\operatorname {Sp} (n)\supset \operatorname {Sp} (n-1)$ $\operatorname {Sp} (n)\supset \operatorname {U} (n)$ $\operatorname {Sp} (2)\subset \operatorname {O} (4)$ Conversely it is itself a subgroup of some other groups: $\operatorname {SU} (2n)\supset \operatorname {Sp} (n)$ $\operatorname {F} _{4}\supset \operatorname {Sp} (4)$ $\operatorname {G} _{2}\supset \operatorname {Sp} (1)$ There are also the isomorphisms of the Lie algebras sp(2) = so(5) and sp(1) = so(3) = su(2).

Relationship between the symplectic groups

Every complex semisimple Lie algebra has a split real form and a compact real form; the complex Lie algebra is called the complexification of each of these two. The Lie algebra of Sp(2n, C) is semisimple and is denoted sp(2n, C). Its split real form is sp(2n, R) and its compact real form is sp(n). These correspond to the Lie groups Sp(2n, R) and Sp(n) respectively. The algebras sp(p, n − p), which are the Lie algebras of the groups Sp(p, n − p), are the indefinite-signature analogues of the compact form.

Physical significance

Classical mechanics

The real symplectic group Sp(2n, R) comes up in classical physics as the group of symmetries of canonical coordinates preserving the Poisson bracket. Consider a system of n particles, evolving under Hamilton's equations, whose position in phase space at a given time is denoted by the vector of canonical coordinates, $\mathbf {z} =(q^{1},\ldots ,q^{n},p_{1},\ldots ,p_{n})^{\mathrm {T} }.$ The elements of the group Sp(2n, R) are, in a certain sense, canonical transformations on this vector, i.e. they preserve the form of Hamilton's equations.[14][15] If $\mathbf {Z} =\mathbf {Z} (\mathbf {z} ,t)=(Q^{1},\ldots ,Q^{n},P_{1},\ldots ,P_{n})^{\mathrm {T} }$ are new canonical coordinates, then, with a dot denoting time derivative, ${\dot {\mathbf {Z} }}=M({\mathbf {z} },t){\dot {\mathbf {z} }},$ where $M(\mathbf {z} ,t)\in \operatorname {Sp} (2n,\mathbf {R} )$ for all t and all z in phase space.[16]

For the special case of a Riemannian manifold, Hamilton's equations describe the geodesics on that manifold. The coordinates $q^{i}$ live on the underlying manifold, and the momenta $p_{i}$ live in the cotangent bundle.
This is why they are conventionally written with upper and lower indices: the placement distinguishes where the two kinds of quantities live. The corresponding Hamiltonian consists purely of the kinetic energy: it is $H={\tfrac {1}{2}}g^{ij}(q)p_{i}p_{j}$ where $g^{ij}$ is the inverse of the metric tensor $g_{ij}$ on the Riemannian manifold.[17][15] In fact, the cotangent bundle of any smooth manifold can be given a symplectic structure in a canonical way, with the symplectic form defined as the exterior derivative of the tautological one-form.[18]

Quantum mechanics

Consider a system of n particles whose quantum state encodes its position and momentum. These coordinates are continuous variables and hence the Hilbert space, in which the state lives, is infinite-dimensional. This often makes the analysis of this situation tricky. An alternative approach is to consider the evolution of the position and momentum operators under the Heisenberg equation in phase space.

Construct a vector of canonical coordinates, $\mathbf {\hat {z}} =({\hat {q}}^{1},\ldots ,{\hat {q}}^{n},{\hat {p}}_{1},\ldots ,{\hat {p}}_{n})^{\mathrm {T} }.$ The canonical commutation relation can be expressed simply as $[\mathbf {\hat {z}} ,\mathbf {\hat {z}} ^{\mathrm {T} }]=i\hbar \Omega $ where $\Omega ={\begin{pmatrix}\mathbf {0} &I_{n}\\-I_{n}&\mathbf {0} \end{pmatrix}}$ and In is the n × n identity matrix. Many physical situations only require quadratic Hamiltonians, i.e. Hamiltonians of the form ${\hat {H}}={\frac {1}{2}}\mathbf {\hat {z}} ^{\mathrm {T} }K\mathbf {\hat {z}} $ where K is a 2n × 2n real, symmetric matrix. This turns out to be a useful restriction and allows us to rewrite the Heisenberg equation as ${\frac {d\mathbf {\hat {z}} }{dt}}=\Omega K\mathbf {\hat {z}} $ The solution to this equation must preserve the canonical commutation relation. It can be shown that the time evolution of this system is equivalent to an action of the real symplectic group, Sp(2n, R), on the phase space.

See also

• Hamiltonian mechanics • Metaplectic group • Orthogonal group • Paramodular group • Projective unitary group • Representations of classical Lie groups • Symplectic manifold, Symplectic matrix, Symplectic vector space, Symplectic representation • Unitary group • Θ10

Notes

1. "Symplectic group", Encyclopedia of Mathematics Retrieved on 13 December 2014. 2. Hall 2015 Prop. 3.25 3. "Is the symplectic group Sp(2n, R) simple?", Stack Exchange Retrieved on 14 December 2014. 4. "Is the exponential map for Sp(2n, R) surjective?", Stack Exchange Retrieved on 5 December 2014. 5. "Standard forms and entanglement engineering of multimode Gaussian states under local operations – Serafini and Adesso", Retrieved on 30 January 2015. 6. "Symplectic Geometry – Arnol'd and Givental", Retrieved on 30 January 2015. 7. Symplectic Group, (source: Wolfram MathWorld), downloaded February 14, 2012 8. Gerald B. Folland (2016). Harmonic analysis in phase space. Princeton: Princeton Univ Press. p. 173. ISBN 978-1-4008-8242-7. OCLC 945482850. 9. Habermann, Katharina (2006). Introduction to symplectic Dirac operators. Springer. ISBN 978-3-540-33421-7. OCLC 262692314. 10. "Lecture Notes – Lecture 2: Symplectic reduction", Retrieved on 30 January 2015. 11. Hall 2015 Section 1.2.8 12. Hall 2015 p. 14 13. Hall 2015 Prop. 13.12 14. Arnold 1989 gives an extensive mathematical overview of classical mechanics. See chapter 8 for symplectic manifolds. 15. Ralph Abraham and Jerrold E.
Marsden, Foundations of Mechanics, (1978) Benjamin-Cummings, London ISBN 0-8053-0102-X 16. Goldstein 1980, Section 9.3 17. Jurgen Jost, (1992) Riemannian Geometry and Geometric Analysis, Springer. 18. da Silva, Ana Cannas (2008). Lectures on Symplectic Geometry. Lecture Notes in Mathematics. Vol. 1764. Berlin, Heidelberg: Springer Berlin Heidelberg. p. 9. doi:10.1007/978-3-540-45330-7. ISBN 978-3-540-42195-5.

References

• Arnold, V. I. (1989), Mathematical Methods of Classical Mechanics, Graduate Texts in Mathematics, vol. 60 (second ed.), Springer-Verlag, ISBN 0-387-96890-3 • Hall, Brian C. (2015), Lie groups, Lie algebras, and representations: An elementary introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666 • Fulton, W.; Harris, J. (1991), Representation Theory, A first Course, Graduate Texts in Mathematics, vol. 129, Springer-Verlag, ISBN 978-0-387-97495-8. • Goldstein, H. (1980) [1950]. "Chapter 7". Classical Mechanics (2nd ed.). Reading MA: Addison-Wesley. ISBN 0-201-02918-9. • Lee, J. M. (2003), Introduction to Smooth manifolds, Graduate Texts in Mathematics, vol. 218, Springer-Verlag, ISBN 0-387-95448-1 • Rossmann, Wulf (2002), Lie Groups – An Introduction Through Linear Groups, Oxford Graduate Texts in Mathematics, Oxford Science Publications, ISBN 0-19-859683-9 • Ferraro, Alessandro; Olivares, Stefano; Paris, Matteo G. A. (March 2005), "Gaussian states in continuous variable quantum information", arXiv:quant-ph/0503237.
Simplectic honeycomb

In geometry, the simplectic honeycomb (or n-simplex honeycomb) is an infinite series of honeycombs, one in each dimension, based on the ${\tilde {A}}_{n}$ affine Coxeter group symmetry. It is represented by a Coxeter–Dynkin diagram as a cyclic graph of n + 1 nodes with one node ringed. It is composed of n-simplex facets, along with all rectified n-simplices. It can be thought of as an n-dimensional hypercubic honeycomb that has been subdivided along all hyperplanes $x+y+\cdots \in \mathbb {Z} $, then stretched along its main diagonal until the simplices on the ends of the hypercubes become regular. The vertex figure of an n-simplex honeycomb is an expanded n-simplex.

[Figures: the ${\tilde {A}}_{2}$ triangular tiling, with red and yellow equilateral triangles, and the ${\tilde {A}}_{3}$ tetrahedral-octahedral honeycomb, with cyan and yellow tetrahedra and red rectified tetrahedra (octahedra).]

In 2 dimensions, the honeycomb represents the triangular tiling, filling the plane with alternately colored triangles. In 3 dimensions it represents the tetrahedral-octahedral honeycomb, filling space with alternating tetrahedral and octahedral cells. In 4 dimensions it is called the 5-cell honeycomb, with 5-cell and rectified 5-cell facets. In 5 dimensions it is called the 5-simplex honeycomb, filling space with 5-simplex, rectified 5-simplex, and birectified 5-simplex facets. In 6 dimensions it is called the 6-simplex honeycomb, filling space with 6-simplex, rectified 6-simplex, and birectified 6-simplex facets.

By dimension

For each dimension n, the table below lists the tessellation, its vertex figure, the facets and vertices per vertex figure, and the edge figure:

• n = 1 (${\tilde {A}}_{1}$): apeirogon; vertex figure: line segment, with 2 facets and 2 vertices; edge figure: point.
• n = 2 (${\tilde {A}}_{2}$): triangular tiling (2-simplex honeycomb); vertex figure: hexagon (truncated triangle), with 3+3 triangles and 6 vertices; edge figure: line segment.
• n = 3 (${\tilde {A}}_{3}$): tetrahedral-octahedral honeycomb (3-simplex honeycomb); vertex figure: cuboctahedron (cantellated tetrahedron), with 4+4 tetrahedra, 6 rectified tetrahedra, and 12 vertices; edge figure: rectangle.
• n = 4 (${\tilde {A}}_{4}$): 4-simplex honeycomb; vertex figure: runcinated 5-cell, with 5+5 5-cells, 10+10 rectified 5-cells, and 20 vertices; edge figure: triangular antiprism.
• n = 5 (${\tilde {A}}_{5}$): 5-simplex honeycomb; vertex figure: stericated 5-simplex, with 6+6 5-simplices, 15+15 rectified 5-simplices, 20 birectified 5-simplices, and 30 vertices; edge figure: tetrahedral antiprism.
• n = 6 (${\tilde {A}}_{6}$): 6-simplex honeycomb; vertex figure: pentellated 6-simplex, with 7+7 6-simplices, 21+21 rectified 6-simplices, 35+35 birectified 6-simplices, and 42 vertices; edge figure: 4-simplex antiprism.
• n = 7 (${\tilde {A}}_{7}$): 7-simplex honeycomb; vertex figure: hexicated 7-simplex, with 8+8 7-simplices, 28+28 rectified 7-simplices, 56+56 birectified 7-simplices, 70 trirectified 7-simplices, and 56 vertices; edge figure: 5-simplex antiprism.
• n = 8 (${\tilde {A}}_{8}$): 8-simplex honeycomb; vertex figure: heptellated 8-simplex, with 9+9 8-simplices, 36+36 rectified 8-simplices, 84+84 birectified 8-simplices, 126+126 trirectified 8-simplices, and 72 vertices; edge figure: 6-simplex antiprism.
• n = 9 (${\tilde {A}}_{9}$): 9-simplex honeycomb; vertex figure: octellated 9-simplex, with 10+10 9-simplices, 45+45 rectified 9-simplices, 120+120 birectified 9-simplices, 210+210 trirectified 9-simplices, 252 quadrirectified 9-simplices, and 90 vertices; edge figure: 7-simplex antiprism.
• n = 10 (${\tilde {A}}_{10}$): 10-simplex honeycomb; vertex figure: ennecated 10-simplex, with 11+11 10-simplices, 55+55 rectified 10-simplices, 165+165 birectified 10-simplices, 330+330 trirectified 10-simplices, 462+462 quadrirectified 10-simplices, and 110 vertices; edge figure: 8-simplex antiprism.
• n = 11 (${\tilde {A}}_{11}$): 11-simplex honeycomb; and so on.
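The counts in the table follow a simple combinatorial pattern. Treating this as an observation read off the table entries (not a cited formula): the k-rectified n-simplex facets of the vertex figure come in two families of binomial(n+1, k+1) and binomial(n+1, n−k) each (the two coincide in the middle for odd n), and the vertex figure has n(n+1) vertices, the contact count that reappears in the kissing-number discussion below. A short Python check:

```python
from math import comb

def vertex_figure_counts(n):
    """Facet and vertex counts for the expanded n-simplex, the vertex
    figure of the n-simplex honeycomb.  The binomial pattern is inferred
    from the table above, stated here as an assumption, not a citation."""
    facets = [(k, comb(n + 1, k + 1), comb(n + 1, n - k))
              for k in range((n + 1) // 2)]   # k-rectified n-simplex facets
    vertices = n * (n + 1)
    return facets, vertices

print(vertex_figure_counts(8))
# ([(0, 9, 9), (1, 36, 36), (2, 84, 84), (3, 126, 126)], 72) -- matches the n = 8 row
```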
Projection by folding

The (2n−1)-simplex honeycombs and 2n-simplex honeycombs can be projected into the n-dimensional hypercubic honeycomb by a geometric folding operation that maps two pairs of mirrors into each other, sharing the same vertex arrangement: ${\tilde {A}}_{1}$ and ${\tilde {A}}_{2}$ fold to ${\tilde {C}}_{1}$, ${\tilde {A}}_{3}$ and ${\tilde {A}}_{4}$ fold to ${\tilde {C}}_{2}$, ${\tilde {A}}_{5}$ and ${\tilde {A}}_{6}$ to ${\tilde {C}}_{3}$, ${\tilde {A}}_{7}$ and ${\tilde {A}}_{8}$ to ${\tilde {C}}_{4}$, ${\tilde {A}}_{9}$ and ${\tilde {A}}_{10}$ to ${\tilde {C}}_{5}$, and so on.

Kissing number

These honeycombs, seen as tangent n-spheres located at the center of each honeycomb vertex, have a fixed number of contacting spheres, corresponding to the number of vertices in the vertex figure. This gives the highest possible kissing number in 2 and 3 dimensions, but falls short of it in higher dimensions. In 2 dimensions, the triangular tiling defines a circle packing of 6 tangent circles arranged in a regular hexagon, and in 3 dimensions there are 12 tangent spheres arranged in a cuboctahedral configuration. For 4 to 8 dimensions, these honeycombs achieve kissing numbers of 20, 30, 42, 56, and 72 spheres, while the largest known kissing numbers in those dimensions are 24, 40, 72, 126, and 240 spheres respectively.

See also

• Hypercubic honeycomb • Alternated hypercubic honeycomb • Quarter hypercubic honeycomb • Truncated simplectic honeycomb • Omnitruncated simplectic honeycomb
Simplex algorithm

In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming.[1] The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin.[2] Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial cones, and these become proper simplices with an additional constraint.[3][4][5][6] The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function.

History

George Dantzig worked on planning methods for the US Army Air Force during World War II using a desk calculator. During 1946, his colleague challenged him to mechanize the planning process to distract him from taking another job. Dantzig formulated the problem as linear inequalities inspired by the work of Wassily Leontief; however, at that time he didn't include an objective as part of his formulation. Without an objective, a vast number of solutions can be feasible, and therefore to find the "best" feasible solution, military-specified "ground rules" must be used that describe how goals can be achieved as opposed to specifying a goal itself. Dantzig's core insight was to realize that most such ground rules can be translated into a linear objective function that needs to be maximized.[7]

Development of the simplex method was evolutionary and happened over a period of about a year.[8] After Dantzig included an objective function as part of his formulation during mid-1947, the problem was mathematically more tractable. Dantzig realized that one of the unsolved problems that he had mistaken as homework in his professor Jerzy Neyman's class (and actually later solved), was applicable to finding an algorithm for linear programs. This problem involved finding the existence of Lagrange multipliers for general linear programs over a continuum of variables, each bounded between zero and one, and satisfying linear constraints expressed in the form of Lebesgue integrals. Dantzig later published his "homework" as a thesis to earn his doctorate. The column geometry used in this thesis gave Dantzig insight that made him believe that the Simplex method would be very efficient.[9]

Overview

Further information: Linear programming

The simplex algorithm operates on linear programs in the canonical form

maximize $ \mathbf {c^{T}} \mathbf {x} $ subject to $A\mathbf {x} \leq \mathbf {b} $ and $\mathbf {x} \geq 0$

where $\mathbf {c} =(c_{1},\,\dots ,\,c_{n})$ are the coefficients of the objective function, $(\cdot )^{\mathrm {T} }$ is the matrix transpose, $\mathbf {x} =(x_{1},\,\dots ,\,x_{n})$ are the variables of the problem, $A$ is a p×n matrix, and $\mathbf {b} =(b_{1},\,\dots ,\,b_{p})$. There is a straightforward process to convert any linear program into one in standard form, so using this form of linear programs results in no loss of generality.

In geometric terms, the feasible region defined by all values of $\mathbf {x} $ such that $ A\mathbf {x} \leq \mathbf {b} $ and $\forall i,x_{i}\geq 0$ is a (possibly unbounded) convex polytope. An extreme point or vertex of this polytope is known as a basic feasible solution (BFS).
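Linear programs in exactly this canonical form can also be handed to an off-the-shelf solver. A minimal sketch using SciPy's linprog (assumptions of this sketch: a SciPy version with the 'highs' backend; linprog minimizes, so a maximization is handled by negating c). The numbers are those of the worked example later in the article:

```python
from scipy.optimize import linprog

# Minimize Z = -2x - 3y - 4z  (equivalently, maximize 2x + 3y + 4z)
# subject to 3x + 2y + z <= 10, 2x + 5y + 3z <= 15, and x, y, z >= 0.
res = linprog(c=[-2, -3, -4],
              A_ub=[[3, 2, 1], [2, 5, 3]],
              b_ub=[10, 15],
              bounds=[(0, None)] * 3,   # non-negativity of the variables
              method="highs")
print(res.fun, res.x)   # -20.0 at x = (0, 0, 5), cf. the worked example below
```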
It can be shown that for a linear program in standard form, if the objective function has a maximum value on the feasible region, then it has this value on (at least) one of the extreme points.[10] This in itself reduces the problem to a finite computation since there is a finite number of extreme points, but the number of extreme points is unmanageably large for all but the smallest linear programs.[11]

It can also be shown that, if an extreme point is not a maximum point of the objective function, then there is an edge containing the point so that the value of the objective function is strictly increasing on the edge moving away from the point.[12] If the edge is finite, then the edge connects to another extreme point where the objective function has a greater value, otherwise the objective function is unbounded above on the edge and the linear program has no solution. The simplex algorithm applies this insight by walking along edges of the polytope to extreme points with greater and greater objective values. This continues until the maximum value is reached, or an unbounded edge is visited (concluding that the problem has no solution). The algorithm always terminates because the number of vertices in the polytope is finite; moreover since we jump between vertices always in the same direction (that of the objective function), we hope that the number of vertices visited will be small.[12]

The solution of a linear program is accomplished in two steps. In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial, but in general it can be solved by applying the simplex algorithm to a modified version of the original program. The possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty. In the latter case the linear program is called infeasible. In the second step, Phase II, the simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded above.[13][14][15]

Standard form

The transformation of a linear program to one in standard form may be accomplished as follows.[16] First, for each variable with a lower bound other than 0, a new variable is introduced representing the difference between the variable and bound. The original variable can then be eliminated by substitution. For example, given the constraint $x_{1}\geq 5$ a new variable, $y_{1}$, is introduced with ${\begin{aligned}y_{1}=x_{1}-5\\x_{1}=y_{1}+5\end{aligned}}$ The second equation may be used to eliminate $x_{1}$ from the linear program. In this way, all lower bound constraints may be changed to non-negativity restrictions.

Second, for each remaining inequality constraint, a new variable, called a slack variable, is introduced to change the constraint to an equality constraint. This variable represents the difference between the two sides of the inequality and is assumed to be non-negative. For example, the inequalities ${\begin{aligned}x_{2}+2x_{3}&\leq 3\\-x_{4}+3x_{5}&\geq 2\end{aligned}}$ are replaced with ${\begin{aligned}x_{2}+2x_{3}+s_{1}&=3\\-x_{4}+3x_{5}-s_{2}&=2\\s_{1},\,s_{2}&\geq 0\end{aligned}}$ It is much easier to perform algebraic manipulation on inequalities in this form. In inequalities where ≥ appears such as the second one, some authors refer to the variable introduced as a surplus variable.
Third, each unrestricted variable is eliminated from the linear program. This can be done in two ways, one is by solving for the variable in one of the equations in which it appears and then eliminating the variable by substitution. The other is to replace the variable with the difference of two restricted variables. For example, if $z_{1}$ is unrestricted then write ${\begin{aligned}&z_{1}=z_{1}^{+}-z_{1}^{-}\\&z_{1}^{+},\,z_{1}^{-}\geq 0\end{aligned}}$ The equation may be used to eliminate $z_{1}$ from the linear program.

When this process is complete the feasible region will be in the form $\mathbf {A} \mathbf {x} =\mathbf {b} ,\,\forall i\ x_{i}\geq 0$ It is also useful to assume that the rank of $\mathbf {A} $ is the number of rows. This results in no loss of generality since otherwise either the system $\mathbf {A} \mathbf {x} =\mathbf {b} $ has redundant equations which can be dropped, or the system is inconsistent and the linear program has no solution.[17]

Simplex tableau

A linear program in standard form can be represented as a tableau of the form ${\begin{bmatrix}1&-\mathbf {c} ^{T}&0\\0&\mathbf {A} &\mathbf {b} \end{bmatrix}}$ The first row defines the objective function and the remaining rows specify the constraints. The zero in the first column represents the zero vector of the same dimension as vector b (different authors use different conventions as to the exact layout). If the columns of A can be rearranged so that it contains the identity matrix of order p (the number of rows in A) then the tableau is said to be in canonical form.[18] The variables corresponding to the columns of the identity matrix are called basic variables while the remaining variables are called nonbasic or free variables. If the values of the nonbasic variables are set to 0, then the values of the basic variables are easily obtained as entries in b and this solution is a basic feasible solution. The algebraic interpretation here is that the coefficients of the linear equation represented by each row are either $0$, $1$, or some other number. Each row will have $1$ column with value $1$, $p-1$ columns with coefficients $0$, and the remaining columns with some other coefficients (these other variables represent our non-basic variables). By setting the values of the non-basic variables to zero we ensure in each row that the value of the variable represented by a $1$ in its column is equal to the $b$ value at that row.

Conversely, given a basic feasible solution, the columns corresponding to the nonzero variables can be expanded to a nonsingular matrix. If the corresponding tableau is multiplied by the inverse of this matrix then the result is a tableau in canonical form.[19] Let ${\begin{bmatrix}1&-\mathbf {c} _{B}^{T}&-\mathbf {c} _{D}^{T}&0\\0&I&\mathbf {D} &\mathbf {b} \end{bmatrix}}$ be a tableau in canonical form. Additional row-addition transformations can be applied to remove the coefficients $\mathbf {c} _{B}^{T}$ from the objective function. This process is called pricing out and results in a canonical tableau ${\begin{bmatrix}1&0&-{\bar {\mathbf {c} }}_{D}^{T}&z_{B}\\0&I&\mathbf {D} &\mathbf {b} \end{bmatrix}}$ where zB is the value of the objective function at the corresponding basic feasible solution.
The updated coefficients, also known as relative cost coefficients, are the rates of change of the objective function with respect to the nonbasic variables.[14]

Pivot operations

The geometrical operation of moving from a basic feasible solution to an adjacent basic feasible solution is implemented as a pivot operation. First, a nonzero pivot element is selected in a nonbasic column. The row containing this element is multiplied by its reciprocal to change this element to 1, and then multiples of the row are added to the other rows to change the other entries in the column to 0. The result is that, if the pivot element is in a row r, then the column becomes the r-th column of the identity matrix. The variable for this column is now a basic variable, replacing the variable which corresponded to the r-th column of the identity matrix before the operation. In effect, the variable corresponding to the pivot column enters the set of basic variables and is called the entering variable, and the variable being replaced leaves the set of basic variables and is called the leaving variable. The tableau is still in canonical form but with the set of basic variables changed by one element.[13][14]

Algorithm

Let a linear program be given by a canonical tableau. The simplex algorithm proceeds by performing successive pivot operations, each of which gives an improved basic feasible solution; the choice of pivot element at each step is largely determined by the requirement that this pivot improves the solution.

Entering variable selection

Since the entering variable will, in general, increase from 0 to a positive number, the value of the objective function will decrease if the derivative of the objective function with respect to this variable is negative. Equivalently, the value of the objective function is increased if the pivot column is selected so that the corresponding entry in the objective row of the tableau is positive.

If there is more than one column so that the entry in the objective row is positive then the choice of which one to add to the set of basic variables is somewhat arbitrary and several entering variable choice rules[20] such as the Devex algorithm[21] have been developed.

If all the entries in the objective row are less than or equal to 0 then no choice of entering variable can be made and the solution is in fact optimal. It is easily seen to be optimal since the objective row now corresponds to an equation of the form $z(\mathbf {x} )=z_{B}+{\text{non-positive terms corresponding to non-basic variables}}$

By changing the entering variable choice rule so that it selects a column where the entry in the objective row is negative, the algorithm is changed so that it finds the minimum of the objective function rather than the maximum.

Leaving variable selection

Once the pivot column has been selected, the choice of pivot row is largely determined by the requirement that the resulting solution be feasible. First, only positive entries in the pivot column are considered since this guarantees that the value of the entering variable will be nonnegative. If there are no positive entries in the pivot column then the entering variable can take any non-negative value with the solution remaining feasible. In this case the objective function is unbounded below and there is no minimum.

Next, the pivot row must be selected so that all the other basic variables remain positive. A calculation shows that this occurs when the resulting value of the entering variable is at a minimum.
In other words, if the pivot column is c, then the pivot row r is chosen so that $b_{r}/a_{rc}\,$ is the minimum over all r so that arc > 0. This is called the minimum ratio test.[20] If there is more than one row for which the minimum is achieved then a dropping variable choice rule[22] can be used to make the determination.

Example

See also: Revised simplex algorithm § Numerical example

Consider the linear program

Minimize $Z=-2x-3y-4z\,$ Subject to ${\begin{aligned}3x+2y+z&\leq 10\\2x+5y+3z&\leq 15\\x,\,y,\,z&\geq 0\end{aligned}}$

With the addition of slack variables s and t, this is represented by the canonical tableau ${\begin{bmatrix}1&2&3&4&0&0&0\\0&3&2&1&1&0&10\\0&2&5&3&0&1&15\end{bmatrix}}$ where columns 5 and 6 represent the basic variables s and t and the corresponding basic feasible solution is $x=y=z=0,\,s=10,\,t=15.$

Columns 2, 3, and 4 can be selected as pivot columns; for this example column 4 is selected. The values of z resulting from the choice of rows 2 and 3 as pivot rows are 10/1 = 10 and 15/3 = 5 respectively. Of these the minimum is 5, so row 3 must be the pivot row. Performing the pivot produces ${\begin{bmatrix}3&-2&-11&0&0&-4&-60\\0&7&1&0&3&-1&15\\0&2&5&3&0&1&15\end{bmatrix}}$ Now columns 4 and 5 represent the basic variables z and s and the corresponding basic feasible solution is $x=y=t=0,\,z=5,\,s=5.$

For the next step, there are no positive entries in the objective row and in fact $Z={\frac {-60+2x+11y+4t}{3}}=-20+{\frac {2x+11y+4t}{3}}$ so the minimum value of Z is −20.

Finding an initial canonical tableau

In general, a linear program will not be given in the canonical form and an equivalent canonical tableau must be found before the simplex algorithm can start. This can be accomplished by the introduction of artificial variables. Columns of the identity matrix are added as column vectors for these variables. If the b value for a constraint equation is negative, the equation is negated before adding the identity matrix columns. This does not change the set of feasible solutions or the optimal solution, and it ensures that the artificial variables will constitute an initial feasible solution. The new tableau is in canonical form but it is not equivalent to the original problem. So a new objective function, equal to the sum of the artificial variables, is introduced and the simplex algorithm is applied to find the minimum; the modified linear program is called the Phase I problem.[23]

The simplex algorithm applied to the Phase I problem must terminate with a minimum value for the new objective function since, being the sum of nonnegative variables, its value is bounded below by 0. If the minimum is 0 then the artificial variables can be eliminated from the resulting canonical tableau producing a canonical tableau equivalent to the original problem. The simplex algorithm can then be applied to find the solution; this step is called Phase II. If the minimum is positive then there is no feasible solution for the Phase I problem where the artificial variables are all zero.
This implies that the feasible region for the original problem is empty, and so the original problem has no solution.[13][14][24]

Example

Consider the linear program

Minimize $Z=-2x-3y-4z\,$ Subject to ${\begin{aligned}3x+2y+z&=10\\2x+5y+3z&=15\\x,\,y,\,z&\geq 0\end{aligned}}$

This is represented by the (non-canonical) tableau ${\begin{bmatrix}1&2&3&4&0\\0&3&2&1&10\\0&2&5&3&15\end{bmatrix}}$

Introduce artificial variables u and v and objective function W = u + v, giving a new tableau ${\begin{bmatrix}1&0&0&0&0&-1&-1&0\\0&1&2&3&4&0&0&0\\0&0&3&2&1&1&0&10\\0&0&2&5&3&0&1&15\end{bmatrix}}$ The equation defining the original objective function is retained in anticipation of Phase II.

By construction, u and v are both basic variables since they are part of the initial identity matrix. However, the objective function W currently assumes that u and v are both 0. In order to adjust the objective function to be the correct value where u = 10 and v = 15, add the third and fourth rows to the first row giving ${\begin{bmatrix}1&0&5&7&4&0&0&25\\0&1&2&3&4&0&0&0\\0&0&3&2&1&1&0&10\\0&0&2&5&3&0&1&15\end{bmatrix}}$

Select column 5 as a pivot column, so the pivot row must be row 4, and the updated tableau is ${\begin{bmatrix}3&0&7&1&0&0&-4&15\\0&3&-2&-11&0&0&-4&-60\\0&0&7&1&0&3&-1&15\\0&0&2&5&3&0&1&15\end{bmatrix}}$ Now select column 3 as a pivot column, for which row 3 must be the pivot row, to get ${\begin{bmatrix}7&0&0&0&0&-7&-7&0\\0&7&0&-25&0&2&-10&-130\\0&0&7&1&0&3&-1&15\\0&0&0&11&7&-2&3&25\end{bmatrix}}$

The artificial variables are now 0 and they may be dropped giving a canonical tableau equivalent to the original problem: ${\begin{bmatrix}7&0&-25&0&-130\\0&7&1&0&15\\0&0&11&7&25\end{bmatrix}}$ This is, fortuitously, already optimal and the optimum value for the original linear program is −130/7.

Advanced topics

Implementation

Main article: Revised simplex algorithm

The tableau form used above to describe the algorithm lends itself to an immediate implementation in which the tableau is maintained as a rectangular (m + 1)-by-(m + n + 1) array. It is straightforward to avoid storing the m explicit columns of the identity matrix that will occur within the tableau by virtue of B being a subset of the columns of [A, I]. This implementation is referred to as the "standard simplex algorithm". The storage and computation overhead is such that the standard simplex method is a prohibitively expensive approach to solving large linear programming problems.

In each simplex iteration, the only data required are the first row of the tableau, the (pivotal) column of the tableau corresponding to the entering variable and the right-hand-side. The latter can be updated using the pivotal column and the first row of the tableau can be updated using the (pivotal) row corresponding to the leaving variable. Both the pivotal column and pivotal row may be computed directly using the solutions of linear systems of equations involving the matrix B and a matrix-vector product using A. These observations motivate the "revised simplex algorithm", for which implementations are distinguished by their invertible representation of B.[25] In large linear-programming problems A is typically a sparse matrix and, when the resulting sparsity of B is exploited when maintaining its invertible representation, the revised simplex algorithm is much more efficient than the standard simplex method.
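For illustration only, the dense "standard simplex" iteration described above fits in a few lines of Python/NumPy. The sketch below is ours, following the sign conventions of the first worked example (pivoting stops once no positive entry remains in the objective row), not the revised method used by production solvers:

```python
import numpy as np

def pivot(T, r, c):
    """Pivot the tableau T on row r, column c: scale the pivot row to make
    the pivot element 1, then clear every other entry in the pivot column."""
    T[r] /= T[r, c]
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, c] * T[r]

def simplex(T):
    """Iterate pivots on a canonical tableau until the objective row has no
    positive entry (the stopping rule of the worked example above)."""
    while True:
        obj = T[0, 1:-1]
        if np.all(obj <= 0):
            return T
        c = 1 + int(np.argmax(obj))               # entering column
        col, rhs = T[1:, c], T[1:, -1]
        ratios = np.where(col > 0, rhs / np.where(col > 0, col, 1), np.inf)
        r = 1 + int(np.argmin(ratios))            # minimum ratio test
        if np.isinf(ratios[r - 1]):
            raise ValueError("objective unbounded on an edge")
        pivot(T, r, c)

# Minimize Z = -2x - 3y - 4z with slack variables s and t already added
# (columns: Z, x, y, z, s, t, b), as in the first example above.
T = np.array([[1., 2, 3, 4, 0, 0, 0],
              [0., 3, 2, 1, 1, 0, 10],
              [0., 2, 5, 3, 0, 1, 15]])
simplex(T)
print(T[0, -1])   # -20.0, the minimum value of Z found above
```

Because this variant rescales the objective row as it pivots, the rightmost entry of the first row reads off the optimal value directly.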
Commercial simplex solvers are based on the revised simplex algorithm.[24][25][26][27][28]

Degeneracy: stalling and cycling

If the values of all basic variables are strictly positive, then a pivot must result in an improvement in the objective value. When this is always the case no set of basic variables occurs twice and the simplex algorithm must terminate after a finite number of steps. Basic feasible solutions where at least one of the basic variables is zero are called degenerate and may result in pivots for which there is no improvement in the objective value. In this case there is no actual change in the solution but only a change in the set of basic variables. When several such pivots occur in succession, there is no improvement; in large industrial applications, degeneracy is common and such "stalling" is notable. Worse than stalling is the possibility that the same set of basic variables occurs twice, in which case the deterministic pivoting rules of the simplex algorithm will produce an infinite loop, or "cycle". While degeneracy is the rule and stalling is common in practice, cycling is rare. A discussion of an example of practical cycling occurs in Padberg.[24] Bland's rule prevents cycling and thus guarantees that the simplex algorithm always terminates.[24][29][30] Another pivoting algorithm, the criss-cross algorithm, never cycles on linear programs.[31] History-based pivot rules such as Zadeh's rule and Cunningham's rule also try to circumvent the issue of stalling and cycling by keeping track of how often particular variables are being used, and then favor such variables that have been used least often.

Efficiency in the worst case

The simplex method is remarkably efficient in practice and was a great improvement over earlier methods such as Fourier–Motzkin elimination. However, in 1972, Klee and Minty[32] gave an example, the Klee–Minty cube, showing that the worst-case complexity of the simplex method as formulated by Dantzig is exponential time. Since then, for almost every variation on the method, it has been shown that there is a family of linear programs for which it performs badly. It is an open question whether there is a variation with polynomial time, although sub-exponential pivot rules are known.[33]

In 2014, it was proved that a particular variant of the simplex method is NP-mighty, i.e., it can be used to solve, with polynomial overhead, any problem in NP implicitly during the algorithm's execution. Moreover, deciding whether a given variable ever enters the basis during the algorithm's execution on a given input, and determining the number of iterations needed for solving a given problem, are both NP-hard problems.[34] At about the same time it was shown that there exists an artificial pivot rule for which computing its output is PSPACE-complete.[35] In 2015, this was strengthened to show that computing the output of Dantzig's pivot rule is PSPACE-complete.[36]

Efficiency in practice

Analyzing and quantifying the observation that the simplex algorithm is efficient in practice despite its exponential worst-case complexity has led to the development of other measures of complexity.
The simplex algorithm has polynomial-time average-case complexity under various probability distributions, with the precise average-case performance of the simplex algorithm depending on the choice of a probability distribution for the random matrices.[37][38] Another approach to studying "typical phenomena" uses Baire category theory from general topology to show that (topologically) "most" matrices can be solved by the simplex algorithm in a polynomial number of steps.

Another method to analyze the performance of the simplex algorithm studies the behavior of worst-case scenarios under small perturbation – are worst-case scenarios stable under a small change (in the sense of structural stability), or do they become tractable? This area of research, called smoothed analysis, was introduced specifically to study the simplex method. Indeed, the running time of the simplex method on input with noise is polynomial in the number of variables and the magnitude of the perturbations.[39][40]

Other algorithms

Other algorithms for solving linear-programming problems are described in the linear-programming article. Another basis-exchange pivoting algorithm is the criss-cross algorithm.[41][42] There are polynomial-time algorithms for linear programming that use interior point methods: these include Khachiyan's ellipsoidal algorithm, Karmarkar's projective algorithm, and path-following algorithms.[15]

Linear-fractional programming

Main article: Linear-fractional programming

Linear–fractional programming (LFP) is a generalization of linear programming (LP). In LP the objective function is a linear function, while the objective function of a linear–fractional program is a ratio of two linear functions. In other words, a linear program is a fractional–linear program in which the denominator is the constant function having the value one everywhere. A linear–fractional program can be solved by a variant of the simplex algorithm[43][44][45][46] or by the criss-cross algorithm.[47]

See also

• Criss-cross algorithm • Cutting-plane method • Devex algorithm • Fourier–Motzkin elimination • Gradient descent • Karmarkar's algorithm • Nelder–Mead simplicial heuristic • Pivoting rule of Bland, which avoids cycling

Notes

1. Murty, Katta G. Linear programming. John Wiley & Sons Inc., 2000. 2. Murty (1983, Comment 2.2) 3. Murty (1983, Note 3.9) 4. Stone, Richard E.; Tovey, Craig A. (1991). "The simplex and projective scaling algorithms as iteratively reweighted least squares methods". SIAM Review. 33 (2): 220–237. doi:10.1137/1033049. JSTOR 2031142. MR 1124362. 5. Stone, Richard E.; Tovey, Craig A. (1991). "Erratum: The simplex and projective scaling algorithms as iteratively reweighted least squares methods". SIAM Review. 33 (3): 461. doi:10.1137/1033100. JSTOR 2031443. MR 1124362. 6. Strang, Gilbert (1 June 1987). "Karmarkar's algorithm and its place in applied mathematics". The Mathematical Intelligencer. 9 (2): 4–10. doi:10.1007/BF03025891. ISSN 0343-6993. MR 0883185. S2CID 123541868. 7. Dantzig, George B. (April 1982). "Reminiscences about the origins of linear programming" (PDF). Operations Research Letters. 1 (2): 43–48. doi:10.1016/0167-6377(82)90043-8. Archived from the original on May 20, 2015. 8. Albers and Reid (1986). "An Interview with George B. Dantzig: The Father of Linear Programming". College Mathematics Journal. 17 (4): 292–314. doi:10.1080/07468342.1986.11972971. 9. Dantzig, George (May 1987). Nash, Stephen G. (ed.). Origins of the simplex method (PDF). pp. 141–151. doi:10.1145/87252.88081.
ISBN 978-0-201-50814-7. Archived (PDF) from the original on May 29, 2015. 10. Murty (1983, Theorem 3.3) 11. Murty (1983, p. 143, Section 3.13) 12. Murty (1983, p. 137, Section 3.8) 13. George B. Dantzig and Mukund N. Thapa. 1997. Linear programming 1: Introduction. Springer-Verlag. 14. Evar D. Nering and Albert W. Tucker, 1993, Linear Programs and Related Problems, Academic Press. (elementary) 15. Robert J. Vanderbei, Linear Programming: Foundations and Extensions, 3rd ed., International Series in Operations Research & Management Science, Vol. 114, Springer Verlag, 2008. ISBN 978-0-387-74387-5. 16. Murty (1983, Section 2.2) 17. Murty (1983, p. 173) 18. Murty (1983, section 2.3.2) 19. Murty (1983, section 3.12) 20. Murty (1983, p. 66) 21. Harris, Paula MJ. "Pivot selection methods of the Devex LP code." Mathematical programming 5.1 (1973): 1–28 22. Murty (1983, p. 67) 23. Murty (1983, p. 60) 24. Padberg, M. (1999). Linear Optimization and Extensions (Second ed.). Springer-Verlag. ISBN 3-540-65833-5. 25. Dantzig, George B.; Thapa, Mukund N. (2003). Linear Programming 2: Theory and Extensions. Springer-Verlag. 26. Alevras, Dmitris; Padberg, Manfred W. (2001). Linear Optimization and Extensions: Problems and Solutions. Universitext. Springer-Verlag. ISBN 3-540-41744-3. (Problems from Padberg with solutions.) 27. Maros, István; Mitra, Gautam (1996). "Simplex algorithms". In J. E. Beasley (ed.). Advances in linear and integer programming. Oxford Science. pp. 1–46. MR 1438309. 28. Maros, István (2003). Computational techniques of the simplex method. International Series in Operations Research & Management Science. Vol. 61. Boston, MA: Kluwer Academic Publishers. pp. xx+325. ISBN 978-1-4020-7332-8. MR 1960274. 29. Bland, Robert G. (May 1977). "New finite pivoting rules for the simplex method". Mathematics of Operations Research. 2 (2): 103–107. doi:10.1287/moor.2.2.103. JSTOR 3689647. MR 0459599. S2CID 18493293. 30. Murty (1983, p. 79) 31. There are abstract optimization problems, called oriented matroid programs, on which Bland's rule cycles (incorrectly) while the criss-cross algorithm terminates correctly. 32. Klee, Victor; Minty, George J. (1972). "How good is the simplex algorithm?". In Shisha, Oved (ed.). Inequalities III (Proceedings of the Third Symposium on Inequalities held at the University of California, Los Angeles, Calif., September 1–9, 1969, dedicated to the memory of Theodore S. Motzkin). New York-London: Academic Press. pp. 159–175. MR 0332165. 33. Hansen, Thomas; Zwick, Uri (2015), "An Improved Version of the Random-Facet Pivoting Rule for the Simplex Algorithm", Proceedings of the forty-seventh annual ACM symposium on Theory of Computing, pp. 209–218, CiteSeerX 10.1.1.697.2526, doi:10.1145/2746539.2746557, ISBN 9781450335362, S2CID 1980659 34. Disser, Yann; Skutella, Martin (2018-11-01). "The Simplex Algorithm Is NP-Mighty". ACM Trans. Algorithms. 15 (1): 5:1–5:19. arXiv:1311.5935. doi:10.1145/3280847. ISSN 1549-6325. S2CID 54445546. 35. Adler, Ilan; Christos, Papadimitriou; Rubinstein, Aviad (2014), "On Simplex Pivoting Rules and Complexity Theory", Integer Programming and Combinatorial Optimization, Lecture Notes in Computer Science, vol. 17, pp. 13–24, arXiv:1404.3320, doi:10.1007/978-3-319-07557-0_2, ISBN 978-3-319-07556-3, S2CID 891022 36.
Fearnley, John; Savani, Rahul (2015), "The Complexity of the Simplex Method", Proceedings of the forty-seventh annual ACM symposium on Theory of Computing, pp. 201–208, arXiv:1404.0605, doi:10.1145/2746539.2746558, ISBN 9781450335362, S2CID 2116116 37. Alexander Schrijver, Theory of Linear and Integer Programming. John Wiley & Sons, 1998, ISBN 0-471-98232-6 (mathematical) 38. The simplex algorithm takes on average D steps for a cube. Borgwardt (1987): Borgwardt, Karl-Heinz (1987). The simplex method: A probabilistic analysis. Algorithms and Combinatorics (Study and Research Texts). Vol. 1. Berlin: Springer-Verlag. pp. xii+268. ISBN 978-3-540-17096-9. MR 0868467. 39. Spielman, Daniel; Teng, Shang-Hua (2001). "Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time". Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing. ACM. pp. 296–305. arXiv:cs/0111050. doi:10.1145/380752.380813. ISBN 978-1-58113-349-3. S2CID 1471. 40. Dadush, Daniel; Huiberts, Sophie (2020-01-01). "A Friendly Smoothed Analysis of the Simplex Method". SIAM Journal on Computing. 49 (5): STOC18–449. doi:10.1137/18M1197205. ISSN 0097-5397. S2CID 226351624. 41. Terlaky, Tamás; Zhang, Shu Zhong (1993). "Pivot rules for linear programming: A Survey on recent theoretical developments". Annals of Operations Research. 46–47 (1): 203–233. CiteSeerX 10.1.1.36.7658. doi:10.1007/BF02096264. ISSN 0254-5330. MR 1260019. S2CID 6058077. 42. Fukuda, Komei; Terlaky, Tamás (1997). Thomas M. Liebling; Dominique de Werra (eds.). "Criss-cross methods: A fresh view on pivot algorithms". Mathematical Programming, Series B. Amsterdam: North-Holland Publishing. 79 (1–3): 369–395. doi:10.1007/BF02614325. MR 1464775. S2CID 2794181. 43. Murty (1983, Chapter 3.20 (pp. 160–164) and pp. 168 and 179) 44. Chapter five: Craven, B. D. (1988). Fractional programming. Sigma Series in Applied Mathematics. Vol. 4. Berlin: Heldermann Verlag. p. 145. ISBN 978-3-88538-404-5. MR 0949209. 45. Kruk, Serge; Wolkowicz, Henry (1999). "Pseudolinear programming". SIAM Review. 41 (4): 795–805. Bibcode:1999SIAMR..41..795K. CiteSeerX 10.1.1.53.7355. doi:10.1137/S0036144598335259. JSTOR 2653207. MR 1723002. 46. Mathis, Frank H.; Mathis, Lenora Jane (1995). "A nonlinear programming algorithm for hospital management". SIAM Review. 37 (2): 230–234. doi:10.1137/1037046. JSTOR 2132826. MR 1343214. S2CID 120626738. 47. Illés, Tibor; Szirmai, Ákos; Terlaky, Tamás (1999). "The finite criss-cross method for hyperbolic programming". European Journal of Operational Research. 114 (1): 198–214. CiteSeerX 10.1.1.36.7090. doi:10.1016/S0377-2217(98)00049-6. ISSN 0377-2217.

References

• Murty, Katta G. (1983). Linear programming. New York: John Wiley & Sons, Inc. pp. xix+482. ISBN 978-0-471-09725-9. MR 0720547.

Further reading

These introductions are written for students of computer science and operations research: • Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 29.3: The simplex algorithm, pp. 790–804. • Frederick S. Hillier and Gerald J. Lieberman: Introduction to Operations Research, 8th edition. McGraw-Hill. ISBN 0-07-123828-X • Rardin, Ronald L. (1997). Optimization in operations research. Prentice Hall. p. 919. ISBN 978-0-02-398415-0.
Simplex category In mathematics, the simplex category (or simplicial category or nonempty finite ordinal category) is the category of non-empty finite ordinals and order-preserving maps. It is used to define simplicial and cosimplicial objects.
Formal definition The simplex category is usually denoted by $\Delta $. There are several equivalent descriptions of this category. $\Delta $ can be described as the category whose objects are the non-empty finite ordinals, thought of as totally ordered sets, and whose morphisms are the (non-strictly) order-preserving functions. The objects are commonly denoted $[n]=\{0,1,\dots ,n\}$ (so that $[n]$ is the ordinal $n+1$). The category is generated by the coface and codegeneracy maps, which amount to inserting or deleting elements of the orderings (see simplicial set for the relations these maps satisfy, and the sketch below for a concrete model). A simplicial object is a presheaf on $\Delta $, that is, a contravariant functor from $\Delta $ to another category. For instance, simplicial sets are the simplicial objects whose codomain category is the category of sets. A cosimplicial object is defined similarly, as a covariant functor out of $\Delta $.
Augmented simplex category The augmented simplex category, denoted by $\Delta _{+}$, is the category of all finite ordinals and order-preserving maps, thus $\Delta _{+}=\Delta \cup \{[-1]\}$, where $[-1]=\emptyset $ is the empty ordinal. Accordingly, this category might also be denoted FinOrd. The augmented simplex category is occasionally referred to as the algebraists' simplex category, and the version above as the topologists' simplex category. A contravariant functor defined on $\Delta _{+}$ is called an augmented simplicial object, and a covariant functor out of $\Delta _{+}$ is called an augmented cosimplicial object; when the codomain category is the category of sets, for example, these are called augmented simplicial sets and augmented cosimplicial sets respectively. The augmented simplex category, unlike the simplex category, admits a natural monoidal structure: the monoidal product is given by concatenation of linear orders, and the unit is the empty ordinal $[-1]$ (the lack of a unit is what prevents concatenation from being a monoidal structure on $\Delta $). In fact, $\Delta _{+}$ is the monoidal category freely generated by a single monoid object, namely $[0]$ with the unique possible unit and multiplication. This description is useful for understanding how any comonoid object in a monoidal category gives rise to a simplicial object: the comonoid can be viewed as the image of a functor from $\Delta _{+}^{\text{op}}$ to the monoidal category containing it, and forgetting the augmentation then yields a simplicial object. Similarly, this also illuminates the construction of simplicial objects from monads (and hence from adjoint functors), since monads can be viewed as monoid objects in endofunctor categories.
See also • Simplicial category • PROP (category theory) • Abstract simplicial complex
References • Goerss, Paul G.; Jardine, John F. (1999). Simplicial Homotopy Theory. Progress in Mathematics. Vol. 174. Basel–Boston–Berlin: Birkhäuser. doi:10.1007/978-3-0348-8707-6. ISBN 978-3-7643-6064-1. MR 1711612.
External links • Simplex category at the nLab • What's special about the Simplex category?
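The coface and codegeneracy maps mentioned in the formal definition can be made concrete. The following is a minimal illustrative sketch (not part of the article; all names are invented for the example): objects $[n]$ are modeled as the integers $\{0,\dots ,n\}$, the generating maps as Python functions, and two of the standard cosimplicial identities are checked directly.

```python
# Illustrative sketch only. delta_i : [n-1] -> [n] is the monotone injection
# whose image omits i; sigma_i : [n+1] -> [n] is the monotone surjection
# sending both i and i+1 to i. The lambdas are index-free, so the same
# function works for every n.

def coface(i):
    """delta_i: skip the value i."""
    return lambda k: k if k < i else k + 1

def codegeneracy(i):
    """sigma_i: collapse i and i+1 to i."""
    return lambda k: k if k <= i else k - 1

n = 5

# Cosimplicial identity: delta_j . delta_i = delta_i . delta_{j-1} for i < j,
# as maps [n-2] -> [n].
for j in range(1, n + 1):
    for i in range(j):
        assert all(
            coface(j)(coface(i)(k)) == coface(i)(coface(j - 1)(k))
            for k in range(n - 1)  # k ranges over [n-2] = {0, ..., n-2}
        )

# Cosimplicial identity: sigma_j . sigma_i = sigma_i . sigma_{j+1} for i <= j,
# as maps [n+2] -> [n].
for i in range(n + 1):
    for j in range(i, n + 1):
        assert all(
            codegeneracy(j)(codegeneracy(i)(k))
            == codegeneracy(i)(codegeneracy(j + 1)(k))
            for k in range(n + 3)  # k ranges over [n+2]
        )
```

Every monotone map between finite ordinals factors as a composite of these generators, which is why checking the identities on them suffices to pin down the category.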
Simplex tree In topological data analysis, a simplex tree is a type of trie used to represent general simplicial complexes efficiently. Each node of the tree explicitly represents a single simplex of the complex. Its flexible structure supports many of the basic operations needed to compute persistent homology. This data structure was introduced by Jean-Daniel Boissonnat and Clément Maria in 2014, in the article The Simplex Tree: An Efficient Data Structure for General Simplicial Complexes.[1] It offers efficient operations on sparse simplicial complexes; for dense complexes, or when only maximal simplices need to be stored, Skeleton-Blocker[2] or Toplex Map[3] representations are used instead.
Definitions Many researchers in topological data analysis consider the simplex tree the most compact simplex-based data structure for simplicial complexes, and one that permits an intuitive understanding of simplicial complexes because it directly encodes their mathematical properties.[1][3][4]
Heuristic definition A simplicial complex is a set composed of points (0 dimensions), line segments (1 dimension), triangles (2 dimensions), and their n-dimensional counterparts, called n-simplexes, within a topological space. By the mathematical properties of simplexes, any n-simplex is composed of multiple $(n-1)$-simplexes: lines are composed of points, triangles of lines, and tetrahedra of triangles. Notice that each step up in dimension adds one vertex to the vertex set of the simplex. The data structure is simplex-based, so it should represent every simplex uniquely by the points defining it. A simple way to achieve this is to identify each simplex with the list of its points in sorted order. Let $\mathrm {K} $ be a simplicial complex of dimension k and $V$ its vertex set, where vertices are labeled from 1 to $\left\vert V\right\vert $ and ordered accordingly. Now construct a dictionary of size $\left\vert V\right\vert $ containing all vertex labels in order; this represents the 0-dimensional simplexes. Then, for each entry with label $l$, add as a child dictionary all vertices that, together with the vertices on the path from the root to that entry, span a simplex of $\mathrm {K} $; each such child has a label greater than $l$. Repeat this step down to depth k. Considering the initial dictionary as depth 0, any entry at depth $\tau $ of this structure uniquely represents a $\tau $-simplex of $\mathrm {K} $. For completeness, the pointer to the initial dictionary is regarded as representing the empty simplex. To make the operations practical, labels that are repeated on the same level are linked together, forming a circular linked list, and child dictionaries also hold pointers to their parent dictionary for fast ancestor access.[1]
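The dictionary-of-children layout just described is easy to prototype. Below is a minimal sketch (hypothetical names, not the authors' code): Python's built-in dict stands in for the paper's sorted dictionaries (red-black trees), and the circular lists linking same-label nodes are omitted for brevity.

```python
# Minimal illustrative simplex tree. Each node stores its vertex label, a
# pointer to its parent, and a dictionary of children keyed by vertex label.

class Node:
    def __init__(self, label=None, parent=None):
        self.label = label      # vertex label; None for the root (empty simplex)
        self.parent = parent    # pointer back to the parent node
        self.children = {}      # label -> Node, the child dictionary

class SimplexTree:
    def __init__(self):
        self.root = Node()      # depth 0; the root stands for the empty simplex

    def find(self, simplex):
        """Follow the sorted word of `simplex`; return its node, or None."""
        node = self.root
        for label in sorted(simplex):
            node = node.children.get(label)
            if node is None:
                return None
        return node

    def insert_word(self, simplex):
        """Insert the word of `simplex`, creating missing nodes on its path."""
        node = self.root
        for label in sorted(simplex):
            if label not in node.children:
                node.children[label] = Node(label, parent=node)
            node = node.children[label]
        return node
```

Finding or inserting a single word walks one root-to-node path, with one dictionary access per step, which is the source of the per-word bounds quoted in the Operations list below.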
Constructive definition Let $\mathrm {K} $ be a simplicial complex of dimension k. We begin by decomposing the simplicial complex into mutually exclusive simplexes; this can be done greedily by iteratively removing the highest-dimensional simplexes from the complex until it is empty. We then label each vertex from 1 to $\left\vert V\right\vert $ and associate each simplex with its corresponding "word", that is, the list of its vertices ordered by label. Ordering the labels ensures that there is no repetition in the simplex tree, since there is only one way to describe each simplex. We start with a null root, representing the empty simplex. Then we iterate through all simplexes, and through each label of each simplex's word. If the label already exists as a child of the current root, we make that child the temporary root of the insertion process; otherwise, we create a new node for the child, make it the new temporary root, and continue with the rest of the word. During this process, k dictionaries are maintained, one per depth, each recording the labels at that depth together with the addresses of the corresponding nodes. If an address is already present at that spot in the dictionary, a pointer is created from the old node to the new node. Once the process is finished, the children of each node are gathered into a dictionary, and the pointers are closed up into circular linked lists. A wide range of dictionaries could be used here, such as hash tables, but some operations assume the possibility of an ordered traversal of the entries, leading most implementations to use red-black trees as dictionaries.[1]
Operations While simplex trees are not the most space-efficient data structures for simplicial complex representation, their operations on sparse data are considered state of the art. Here we give the bounds of different useful operations possible through this representation; many implementations of these operations are available.[1][4][5][6][7] We first introduce the notation. Let $s$ be a given simplex, $\sigma $ the node corresponding to the last vertex of $s$, $l$ the label associated with that node, $j$ the depth of that node, $k$ the dimension of the simplicial complex, and $D_{\sigma }$ the maximal number of operations needed to access $\sigma $ in a dictionary (if the dictionary is a red-black tree, the complexity is $D_{\sigma }=O(\log(\deg(\sigma )))$). Let $C_{s}$ be the number of cofaces of $s$, and $N_{l}^{>j}$ the number of nodes of the simplex tree carrying the label $l$ at depth greater than $j$; notice that $N_{l}^{>j}\leq C_{s}$. 1. Searching, inserting and removing words is done in $O(jD_{\sigma })$.[1] 2. Inserting and removing an entire simplex is done in $O(2^{j}D_{\sigma })$.[1] 3. Computing persistent homology (or, in a more involved way, computing Betti numbers) most efficiently using a simplex tree remains an open problem; however, current algorithms for this task on sparse simplicial complexes achieve state-of-the-art performance.[4] 4. The structure of simplex trees allows for elementary collapse of collapsible simplexes, though the bounds of this operation in the general case are unknown.[1][5][7] 5. A subcase of elementary collapse is edge contraction, which can be achieved in $O(kN_{l}^{>j}+C_{s}D_{\sigma })$.[1] 6. Locating the cofaces of a given simplex can be achieved in $O(kN_{l}^{>j})$.[1] 7. Locating the cofacets of a given simplex can be achieved in $O(j^{2}D_{\sigma })$.[1] As for construction: as seen in the constructive definition, construction time is proportional to the number and complexity of the simplexes in the simplicial complex, which can be especially expensive if the complex is dense. However, optimizations exist for particular simplicial complexes, including flag complexes, Rips complexes and witness complexes.[1][8]
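Continuing the hypothetical sketch above: inserting an entire j-simplex means inserting one word per non-empty subset of its $j+1$ vertices, which is where the exponential factor in the $O(2^{j}D_{\sigma })$ bound comes from. The helper below is an illustration, not the paper's routine.

```python
from itertools import combinations

def insert_full_simplex(tree, simplex):
    """Insert `simplex` together with all of its faces: one word per
    non-empty subset of its vertices, i.e. 2^(j+1) - 1 words for a
    j-simplex, matching the exponential factor in the bound above."""
    vertices = sorted(simplex)
    for r in range(1, len(vertices) + 1):
        for face in combinations(vertices, r):
            tree.insert_word(face)

# Hypothetical usage: a filled triangle plus a dangling edge.
t = SimplexTree()
insert_full_simplex(t, (1, 2, 3))
insert_full_simplex(t, (2, 4))
assert t.find((1, 3)) is not None   # an edge of the triangle
assert t.find((1, 4)) is None       # was never inserted
```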
Applications Simplex trees are efficient for sparse simplicial complexes, and for this reason many persistent homology algorithms aimed at high-dimensional real-world data (which is often sparse) use them. While simplex trees are not as space-efficient as incidence matrices, their simplex-based structure makes them well suited for storing simplicial complexes within persistent homology algorithms.[9]
References 1. Boissonnat, Jean-Daniel; Maria, Clément (November 2014). "The Simplex Tree: an Efficient Data Structure for General Simplicial Complexes". Algorithmica. 70 (3): 406–427. arXiv:2001.02581. doi:10.1007/s00453-014-9887-3. ISSN 0178-4617. S2CID 15335393. 2. Salinas, David (7 February 2020). "Skeleton-Blocker". Geometry Understanding in Higher Dimensions. Retrieved 9 December 2021. 3. Godi, Francois (7 February 2020). "Toplex Map". Geometry Understanding in Higher Dimensions. Retrieved 9 December 2021. 4. Boissonnat, Jean-Daniel. "Simplex tree reference manual". Geometry Understanding in Higher Dimensions. Retrieved 9 December 2021. 5. Piekenbrock, Matt (13 September 2020). "simplex_tree: Simplex Tree". rdrr.io. Retrieved 9 December 2021. 6. Nanda, Vidit. "Perseus, the Persistent Homology Software". The Perseus Software Project for Rapid Computation of Persistent Homology. Retrieved 9 December 2021. 7. Morozov, Dmitriy (2019). "Basics". Dionysus 2. Retrieved 9 December 2021. 8. Morozov, Dmitriy (2019). "Vietoris–Rips Complexes". Dionysus 2. Retrieved 9 December 2021. 9. Mandal, Sayan (2020). Applications of Persistent Homology and Cycles. PhD thesis, The Ohio State University.
Simplicial Lie algebra In algebra, a simplicial Lie algebra is a simplicial object in the category of Lie algebras. In particular, it is a simplicial abelian group, and thus is subject to the Dold–Kan correspondence. See also • Differential graded Lie algebra References • Quillen, Daniel (September 1969). "Rational homotopy theory". Annals of Mathematics. 2. 90 (2): 205–295. doi:10.2307/1970725. JSTOR 1970725. External links • http://ncatlab.org/nlab/show/simplicial+Lie+algebra
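To spell out the Dold–Kan remark above (a standard fact, not detailed in the article; indexing and sign conventions vary across references): the simplicial abelian group underlying a simplicial Lie algebra ${\mathfrak {g}}_{\bullet }$ corresponds to its normalized (Moore) chain complex, one common convention being $N({\mathfrak {g}})_{n}=\bigcap _{i=0}^{n-1}\ker {\big (}d_{i}\colon {\mathfrak {g}}_{n}\to {\mathfrak {g}}_{n-1}{\big )}$ with differential $\partial _{n}=(-1)^{n}d_{n}$, where the $d_{i}$ are the face maps. The homotopy groups of the underlying simplicial abelian group are then the homology groups of this complex.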