Student's t-test

A t-test is a type of statistical analysis used to compare the averages of two groups and determine whether the differences between them are more likely to arise from random chance. It is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis. It is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known (typically, the scaling term is unknown and is therefore a nuisance parameter). When the scaling term is estimated from the data, the test statistic, under certain conditions, follows a Student's t-distribution. The t-test's most common application is to test whether the means of two populations differ.

History

The term "t-statistic" is abbreviated from "hypothesis test statistic".[1] In statistics, the t-distribution was first derived as a posterior distribution in 1876 by Helmert[2][3][4] and Lüroth.[5][6][7] The t-distribution also appeared in a more general form as a Pearson Type IV distribution in Karl Pearson's 1895 paper.[8] However, the t-distribution, also known as Student's t-distribution, gets its name from William Sealy Gosset, who first published it in English in 1908 in the scientific journal Biometrika under the pseudonym "Student"[9][10] because his employer preferred staff to use pen names when publishing scientific papers.[11] Gosset worked at the Guinness Brewery in Dublin, Ireland, and was interested in the problems of small samples – for example, the chemical properties of barley with small sample sizes. Hence a second version of the etymology of the term "Student" is that Guinness did not want competitors to know that they were using the t-test to determine the quality of raw material.
Although it was William Gosset for whom the pseudonym "Student" was coined, it was actually through the work of Ronald Fisher that the distribution became well known as "Student's distribution"[12] and "Student's t-test". Gosset had been hired owing to Claude Guinness's policy of recruiting the best graduates from Oxford and Cambridge to apply biochemistry and statistics to Guinness's industrial processes.[13] Gosset devised the t-test as an economical way to monitor the quality of stout. The t-test work was submitted to and accepted by the journal Biometrika and published in 1908.[9] Guinness had a policy of allowing technical staff leave for study (so-called "study leave"), which Gosset used during the first two terms of the 1906–1907 academic year in Professor Karl Pearson's Biometric Laboratory at University College London.[14] Gosset's identity was known to fellow statisticians and to editor-in-chief Karl Pearson.[15]

Uses

The most frequently used t-tests are one-sample and two-sample tests:
• A one-sample location test of whether the mean of a population has a value specified in a null hypothesis.
• A two-sample location test of the null hypothesis that the means of two populations are equal.

All such tests are usually called Student's t-tests, though strictly speaking that name should only be used if the variances of the two populations are also assumed to be equal; the form of the test used when this assumption is dropped is sometimes called Welch's t-test. These tests are often referred to as unpaired or independent samples t-tests, as they are typically applied when the statistical units underlying the two samples being compared are non-overlapping.[16]

Assumptions

Most test statistics have the form t = Z/s, where Z and s are functions of the data.
Z may be sensitive to the alternative hypothesis (i.e., its magnitude tends to be larger when the alternative hypothesis is true), whereas s is a scaling parameter that allows the distribution of t to be determined. As an example, in the one-sample t-test

$t={\frac {Z}{s}}={\frac {{\bar {X}}-\mu }{{\widehat {\sigma }}/{\sqrt {n}}}}$

where ${\bar {X}}$ is the sample mean from a sample X1, X2, …, Xn of size n, s is the standard error of the mean, ${\widehat {\sigma }}$ is the estimate of the standard deviation of the population, and μ is the population mean.

The assumptions underlying a t-test in the simplest form above are that:
• ${\bar {X}}$ follows a normal distribution with mean μ and variance σ²/n.
• s²(n − 1)/σ² follows a χ² distribution with n − 1 degrees of freedom. This assumption is met when the observations used for estimating s² come from a normal distribution (and are i.i.d. within each group).
• Z and s are independent.

In the t-test comparing the means of two independent samples, the following assumptions should be met:
• The means of the two populations being compared should follow normal distributions. Under weak assumptions, this follows in large samples from the central limit theorem, even when the distribution of observations in each group is non-normal.[17]
• If using Student's original definition of the t-test, the two populations being compared should have the same variance (testable using an F-test, Levene's test, Bartlett's test, or the Brown–Forsythe test, or assessable graphically using a Q–Q plot). If the sample sizes in the two groups being compared are equal, Student's original t-test is highly robust to the presence of unequal variances.[18] Welch's t-test is insensitive to equality of the variances regardless of whether the sample sizes are similar.
• The data used to carry out the test should either be sampled independently from the two populations being compared or be fully paired.
This is in general not testable from the data, but if the data are known to be dependent (e.g. paired by test design), a dependent test has to be applied. For partially paired data, the classical independent t-tests may give invalid results, as the test statistic might not follow a t-distribution, while the dependent t-test is sub-optimal, as it discards the unpaired data.[19]

Most two-sample t-tests are robust to all but large deviations from the assumptions.[20] For exactness, the t-test and Z-test require normality of the sample means, and the t-test additionally requires that the sample variance follow a scaled χ² distribution and that the sample mean and sample variance be statistically independent. Normality of the individual data values is not required if these conditions are met. By the central limit theorem, sample means of moderately large samples are often well approximated by a normal distribution even if the data are not normally distributed. For non-normal data, the distribution of the sample variance may deviate substantially from a χ² distribution. However, if the sample size is large, Slutsky's theorem implies that the distribution of the sample variance has little effect on the distribution of the test statistic. That is, as sample size $n$ increases:
• ${\sqrt {n}}({\bar {X}}-\mu )\xrightarrow {d} N\left(0,\sigma ^{2}\right)$ by the central limit theorem.
• $s^{2}\xrightarrow {p} \sigma ^{2}$ by the law of large numbers.
• $\therefore {\frac {{\sqrt {n}}({\bar {X}}-\mu )}{s}}\xrightarrow {d} N(0,1)$ by Slutsky's theorem.

Unpaired and paired two-sample t-tests

Two-sample t-tests for a difference in means involve independent samples (unpaired samples) or paired samples.
Paired t-tests are a form of blocking, and have greater power (probability of avoiding a type II error, also known as a false negative) than unpaired tests when the paired units are similar with respect to "noise factors" (see confounder) that are independent of membership in the two groups being compared.[21] In a different context, paired t-tests can be used to reduce the effects of confounding factors in an observational study.

Independent (unpaired) samples

The independent samples t-test is used when two separate sets of independent and identically distributed samples are obtained, and one variable from each of the two populations is compared. For example, suppose we are evaluating the effect of a medical treatment, and we enroll 100 subjects into our study, then randomly assign 50 subjects to the treatment group and 50 subjects to the control group. In this case, we have two independent samples and would use the unpaired form of the t-test.

Paired samples

Main article: Paired difference test

Paired samples t-tests typically consist of a sample of matched pairs of similar units, or one group of units that has been tested twice (a "repeated measures" t-test). A typical example of the repeated measures t-test would be where subjects are tested prior to a treatment, say for high blood pressure, and the same subjects are tested again after treatment with a blood-pressure-lowering medication. By comparing the same patient's numbers before and after treatment, we are effectively using each patient as their own control. That way the correct rejection of the null hypothesis (here: of no difference made by the treatment) can become much more likely, with statistical power increasing simply because the random interpatient variation has now been eliminated. However, an increase of statistical power comes at a price: more tests are required, each subject having to be tested twice.
Because half of the sample now depends on the other half, the paired version of Student's t-test has only n/2 − 1 degrees of freedom (with n being the total number of observations). Pairs become individual test units, and the sample has to be doubled to achieve the same number of degrees of freedom. Normally, there are n − 1 degrees of freedom (with n being the total number of observations).[22]

A paired samples t-test based on a "matched-pairs sample" results from an unpaired sample that is subsequently used to form a paired sample, by using additional variables that were measured along with the variable of interest.[23] The matching is carried out by identifying pairs of values consisting of one observation from each of the two samples, where the pair is similar in terms of other measured variables. This approach is sometimes used in observational studies to reduce or eliminate the effects of confounding factors. Paired samples t-tests are often referred to as "dependent samples t-tests".

Calculations

Explicit expressions that can be used to carry out various t-tests are given below. In each case, the formula for a test statistic that either exactly follows or closely approximates a t-distribution under the null hypothesis is given, together with the appropriate degrees of freedom. Each of these statistics can be used to carry out either a one-tailed or two-tailed test. Once the t value and degrees of freedom are determined, a p-value can be found using a table of values from Student's t-distribution. If the calculated p-value is below the threshold chosen for statistical significance (usually the 0.10, 0.05, or 0.01 level), then the null hypothesis is rejected in favor of the alternative hypothesis.
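The table-lookup step can also be carried out numerically. The following Python sketch (standard library only; `t_pdf` and `two_tailed_p` are illustrative names, not a standard API) approximates the two-tailed p-value by integrating the t density with the trapezoidal rule; a statistics package would use the incomplete beta function instead.

```python
import math

def t_pdf(x, df):
    """Density of Student's t-distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1.0 + x * x / df) ** (-(df + 1) / 2)

def two_tailed_p(t, df, steps=100_000):
    """Two-tailed p-value: 1 minus the central area between -|t| and |t|,
    approximated with the composite trapezoidal rule."""
    t = abs(t)
    h = 2 * t / steps
    area = 0.5 * (t_pdf(-t, df) + t_pdf(t, df))
    for i in range(1, steps):
        area += t_pdf(-t + i * h, df)
    return 1.0 - area * h
```

As a sanity check, `two_tailed_p(2.228, 10)` is close to 0.05, matching the familiar two-tailed critical value for 10 degrees of freedom.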
One-sample t-test

In testing the null hypothesis that the population mean is equal to a specified value μ0, one uses the statistic

$t={\frac {{\bar {x}}-\mu _{0}}{s/{\sqrt {n}}}}$

where ${\bar {x}}$ is the sample mean, s is the sample standard deviation and n is the sample size. The degrees of freedom used in this test are n − 1. Although the parent population does not need to be normally distributed, the distribution of the population of sample means ${\bar {x}}$ is assumed to be normal. By the central limit theorem, if the observations are independent and the second moment exists, then $t$ will be approximately normal, N(0, 1).

Slope of a regression line

Suppose one is fitting the model

$Y=\alpha +\beta x+\varepsilon $

where x is known, α and β are unknown, ε is a normally distributed random variable with mean 0 and unknown variance σ², and Y is the outcome of interest. We want to test the null hypothesis that the slope β is equal to some specified value β0 (often taken to be 0, in which case the null hypothesis is that x and y are uncorrelated). Let

${\begin{aligned}{\widehat {\alpha }},{\widehat {\beta }}&={\text{least-squares estimators}},\\SE_{\widehat {\alpha }},SE_{\widehat {\beta }}&={\text{the standard errors of least-squares estimators}}.\end{aligned}}$

Then

$t_{\text{score}}={\frac {{\widehat {\beta }}-\beta _{0}}{SE_{\widehat {\beta }}}}\sim {\mathcal {T}}_{n-2}$

has a t-distribution with n − 2 degrees of freedom if the null hypothesis is true. The standard error of the slope coefficient

$SE_{\widehat {\beta }}={\frac {\sqrt {{\dfrac {1}{n-2}}\displaystyle \sum _{i=1}^{n}\left(y_{i}-{\widehat {y}}_{i}\right)^{2}}}{\sqrt {\displaystyle \sum _{i=1}^{n}\left(x_{i}-{\bar {x}}\right)^{2}}}}$

can be written in terms of the residuals.
Let

${\begin{aligned}{\widehat {\varepsilon }}_{i}&=y_{i}-{\widehat {y}}_{i}=y_{i}-\left({\widehat {\alpha }}+{\widehat {\beta }}x_{i}\right)={\text{residuals}}={\text{estimated errors}},\\{\text{SSR}}&=\sum _{i=1}^{n}{{\widehat {\varepsilon }}_{i}}^{2}={\text{sum of squares of residuals}}.\end{aligned}}$

Then tscore is given by

$t_{\text{score}}={\frac {\left({\widehat {\beta }}-\beta _{0}\right){\sqrt {n-2}}}{\sqrt {\frac {SSR}{\sum _{i=1}^{n}\left(x_{i}-{\bar {x}}\right)^{2}}}}}.$

Another way to determine the tscore is

$t_{\text{score}}={\frac {r{\sqrt {n-2}}}{\sqrt {1-r^{2}}}},$

where r is the Pearson correlation coefficient. The tscore for the intercept can be determined from the tscore for the slope:

$t_{\text{score,intercept}}={\frac {\alpha }{\beta }}{\frac {t_{\text{score,slope}}}{\sqrt {s_{\text{x}}^{2}+{\bar {x}}^{2}}}}$

where sx² is the sample variance.

Equal sample sizes and variance

Given two groups (1, 2), this test is applicable only when:
• the two sample sizes are equal, and
• it can be assumed that the two distributions have the same variance.

Violations of these assumptions are discussed below. The t statistic to test whether the means are different can be calculated as follows:

$t={\frac {{\bar {X}}_{1}-{\bar {X}}_{2}}{s_{p}{\sqrt {\frac {2}{n}}}}}$

where

$s_{p}={\sqrt {\frac {s_{X_{1}}^{2}+s_{X_{2}}^{2}}{2}}}.$

Here sp is the pooled standard deviation for n = n1 = n2, and s²X1 and s²X2 are the unbiased estimators of the population variances. The denominator of t is the standard error of the difference between the two means. For significance testing, the degrees of freedom for this test are 2n − 2, where n is the sample size.

Equal or unequal sample sizes, similar variances (1/2 < sX1/sX2 < 2)

This test is used only when it can be assumed that the two distributions have the same variance. (When this assumption is violated, see below.) The previous formulae are a special case of the formulae below; one recovers them when both samples are equal in size: n = n1 = n2.
The t statistic to test whether the means are different can be calculated as follows: $t={\frac {{\bar {X}}_{1}-{\bar {X}}_{2}}{s_{p}\cdot {\sqrt {{\frac {1}{n_{1}}}+{\frac {1}{n_{2}}}}}}}$ where $s_{p}={\sqrt {\frac {\left(n_{1}-1\right)s_{X_{1}}^{2}+\left(n_{2}-1\right)s_{X_{2}}^{2}}{n_{1}+n_{2}-2}}}$ is the pooled standard deviation of the two samples: it is defined in this way so that its square is an unbiased estimator of the common variance whether or not the population means are the same. In these formulae, ni − 1 is the number of degrees of freedom for each group, and the total sample size minus two (that is, n1 + n2 − 2) is the total number of degrees of freedom, which is used in significance testing. Equal or unequal sample sizes, unequal variances (sX1 > 2sX2 or sX2 > 2sX1) Main article: Welch's t-test This test, also known as Welch's t-test, is used only when the two population variances are not assumed to be equal (the two sample sizes may or may not be equal) and hence must be estimated separately. The t statistic to test whether the population means are different is calculated as: $t={\frac {{\bar {X}}_{1}-{\bar {X}}_{2}}{s_{\bar {\Delta }}}}$ where $s_{\bar {\Delta }}={\sqrt {{\frac {s_{1}^{2}}{n_{1}}}+{\frac {s_{2}^{2}}{n_{2}}}}}.$ Here si2 is the unbiased estimator of the variance of each of the two samples with ni = number of participants in group i (i = 1 or 2). In this case $ (s_{\bar {\Delta }})^{2}$ is not a pooled variance. For use in significance testing, the distribution of the test statistic is approximated as an ordinary Student's t-distribution with the degrees of freedom calculated using $\mathrm {d.f.} ={\frac {\left({\frac {s_{1}^{2}}{n_{1}}}+{\frac {s_{2}^{2}}{n_{2}}}\right)^{2}}{{\frac {\left(s_{1}^{2}/n_{1}\right)^{2}}{n_{1}-1}}+{\frac {\left(s_{2}^{2}/n_{2}\right)^{2}}{n_{2}-1}}}}.$ This is known as the Welch–Satterthwaite equation. 
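The pooled and Welch formulas can be sketched in a short Python fragment (standard library only; the function names are illustrative, not a standard API). The numbers in the comments use the screw-weight samples from the worked example later in the article.

```python
from statistics import mean, variance  # variance() is the unbiased (n - 1) estimator
import math

def pooled_t(x1, x2):
    """Student's two-sample t with pooled variance; df = n1 + n2 - 2."""
    n1, n2 = len(x1), len(x2)
    sp = math.sqrt(((n1 - 1) * variance(x1) + (n2 - 1) * variance(x2)) / (n1 + n2 - 2))
    t = (mean(x1) - mean(x2)) / (sp * math.sqrt(1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

def welch_t(x1, x2):
    """Welch's t with Welch-Satterthwaite degrees of freedom."""
    n1, n2 = len(x1), len(x2)
    v1, v2 = variance(x1) / n1, variance(x2) / n2
    t = (mean(x1) - mean(x2)) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

A1 = [30.02, 29.99, 30.11, 29.97, 30.01, 29.99]
A2 = [29.89, 29.93, 29.72, 29.98, 30.02, 29.98]
# pooled_t(A1, A2) -> t ≈ 1.959, df = 10
# welch_t(A1, A2)  -> t ≈ 1.959, df ≈ 7.03
```

Note that for equal sample sizes the two t statistics coincide numerically; only the degrees of freedom differ.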
The true distribution of the test statistic actually depends (slightly) on the two unknown population variances (see Behrens–Fisher problem).

Exact method for unequal variances and sample sizes

The test[24] deals with the famous Behrens–Fisher problem, i.e., comparing the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples. The test is developed as an exact test that allows for unequal sample sizes and unequal variances of two populations. The exact property still holds even with extremely small and unbalanced sample sizes (e.g. $n_{1}=5,n_{2}=50$). The statistic to test whether the means are different can be calculated as follows:

Let $X=[X_{1},X_{2},\ldots ,X_{m}]^{T}$ and $Y=[Y_{1},Y_{2},\ldots ,Y_{n}]^{T}$ be the i.i.d. sample vectors ($m>n$) from $N(\mu _{1},\sigma _{1}^{2})$ and $N(\mu _{2},\sigma _{2}^{2})$ respectively. Let $(P^{T})_{n\times n}$ be an $n\times n$ orthogonal matrix whose elements of the first row are all $1/{\sqrt {n}}$; similarly, let $(Q^{T})_{n\times m}$ be the first n rows of an $m\times m$ orthogonal matrix (whose elements of the first row are all $1/{\sqrt {m}}$). Then

$Z:=(Q^{T})_{n\times m}X/{\sqrt {m}}-(P^{T})_{n\times n}Y/{\sqrt {n}}$

is an n-dimensional normal random vector.
$Z\sim N((\mu _{1}-\mu _{2},0,...,0)^{T},({\frac {\sigma _{1}^{2}}{m}}+{\frac {\sigma _{2}^{2}}{n}})I_{n}).$

From the above distribution we see that

$Z_{1}-(\mu _{1}-\mu _{2})\sim N(0,{\frac {\sigma _{1}^{2}}{m}}+{\frac {\sigma _{2}^{2}}{n}}),$

${\frac {\sum _{i=2}^{n}Z_{i}^{2}}{n-1}}\sim {\frac {\chi _{n-1}^{2}}{n-1}}\times ({\frac {\sigma _{1}^{2}}{m}}+{\frac {\sigma _{2}^{2}}{n}}),$

$Z_{1}-(\mu _{1}-\mu _{2})\perp \sum _{i=2}^{n}Z_{i}^{2},$

$T_{e}:={\frac {Z_{1}-(\mu _{1}-\mu _{2})}{\sqrt {(\sum _{i=2}^{n}Z_{i}^{2})/(n-1)}}}\sim t_{n-1}.$

Dependent t-test for paired samples

This test is used when the samples are dependent; that is, when there is only one sample that has been tested twice (repeated measures) or when there are two samples that have been matched or "paired". This is an example of a paired difference test. The t statistic is calculated as

$t={\frac {{\bar {X}}_{D}-\mu _{0}}{s_{D}/{\sqrt {n}}}}$

where ${\bar {X}}_{D}$ and $s_{D}$ are the average and standard deviation of the differences between all pairs. The pairs are, e.g., either one person's pre-test and post-test scores or pairs of persons matched into meaningful groups (for instance, drawn from the same family or age group: see table). The constant μ0 is zero if we want to test whether the average of the difference is significantly different from zero. The degrees of freedom used are n − 1, where n represents the number of pairs.

Example of repeated measures
Number  Name      Test 1  Test 2
1       Mike      35%     67%
2       Melanie   50%     46%
3       Melissa   90%     86%
4       Mitchell  78%     91%

Example of matched pairs
Pair  Name   Age  Test
1     John   35   250
1     Jane   36   340
2     Jimmy  22   460
2     Jessy  21   200

Worked examples

Let A1 denote a set obtained by drawing a random sample of six measurements:

$A_{1}=\{30.02,\ 29.99,\ 30.11,\ 29.97,\ 30.01,\ 29.99\}$

and let A2 denote a second set obtained similarly:

$A_{2}=\{29.89,\ 29.93,\ 29.72,\ 29.98,\ 30.02,\ 29.98\}$

These could be, for example, the weights of screws that were chosen out of a bucket.
We will carry out tests of the null hypothesis that the means of the populations from which the two samples were taken are equal. The difference between the two sample means, each denoted by ${\bar {X}}_{i}$, which appears in the numerator for all the two-sample testing approaches discussed above, is

${\bar {X}}_{1}-{\bar {X}}_{2}=0.095.$

The sample standard deviations for the two samples are approximately 0.05 and 0.11, respectively. For such small samples, a test of equality between the two population variances would not be very powerful. Since the sample sizes are equal, the two forms of the two-sample t-test will perform similarly in this example.

Unequal variances

If the approach for unequal variances (discussed above) is followed, the results are

${\sqrt {{\frac {s_{1}^{2}}{n_{1}}}+{\frac {s_{2}^{2}}{n_{2}}}}}\approx 0.04849$

and the degrees of freedom

${\text{d.f.}}\approx 7.031.$

The test statistic is approximately 1.959, which gives a two-tailed test p-value of 0.09077.

Equal variances

If the approach for equal variances (discussed above) is followed, the results are

$s_{p}\approx 0.08396$

and the degrees of freedom

${\text{d.f.}}=10.$

The test statistic is approximately equal to 1.959, which gives a two-tailed p-value of 0.07857.

Related statistical tests

Alternatives to the t-test for location problems

The t-test provides an exact test for the equality of the means of two i.i.d. normal populations with unknown, but equal, variances. (Welch's t-test is a nearly exact test for the case where the data are normal but the variances may differ.) For moderately large samples and a one-tailed test, the t-test is relatively robust to moderate violations of the normality assumption.[25] In large enough samples, the t-test asymptotically approaches the z-test, and becomes robust even to large deviations from normality.[17] If the data are substantially non-normal and the sample size is small, the t-test can give misleading results.
See Location test for Gaussian scale mixture distributions for some theory related to one particular family of non-normal distributions. When the normality assumption does not hold, a non-parametric alternative to the t-test may have better statistical power. However, when data are non-normal with differing variances between groups, a t-test may have better type I error control than some non-parametric alternatives.[26] Furthermore, non-parametric methods, such as the Mann–Whitney U test discussed below, typically do not test for a difference of means, so they should be used carefully if a difference of means is of primary scientific interest.[17] For example, the Mann–Whitney U test will keep the type I error at the desired level alpha if both groups have the same distribution. It will also have power in detecting an alternative by which group B has the same distribution as A but shifted by a constant (in which case there would indeed be a difference in the means of the two groups). However, there could be cases where groups A and B have different distributions but the same means (such as two distributions, one with positive skewness and the other with negative skewness, shifted so as to have the same means). In such cases, the Mann–Whitney test could have more than alpha-level power in rejecting the null hypothesis, but attributing the interpretation of a difference in means to such a result would be incorrect. In the presence of an outlier, the t-test is not robust. For example, for two independent samples, when the data distributions are asymmetric (that is, the distributions are skewed) or the distributions have large tails, the Wilcoxon rank-sum test (also known as the Mann–Whitney U test) can have three to four times higher power than the t-test.[25][27][28] The nonparametric counterpart to the paired samples t-test is the Wilcoxon signed-rank test for paired samples. For a discussion on choosing between the t-test and nonparametric alternatives, see Lumley, et al.
(2002).[17] One-way analysis of variance (ANOVA) generalizes the two-sample t-test when the data belong to more than two groups.

A design which includes both paired observations and independent observations

When both paired observations and independent observations are present in the two-sample design, assuming data are missing completely at random (MCAR), the paired observations or independent observations may be discarded in order to proceed with the standard tests above. Alternatively, making use of all of the available data, assuming normality and MCAR, the generalized partially overlapping samples t-test could be used.[29]

Multivariate testing

A generalization of Student's t statistic, called Hotelling's t-squared statistic, allows for the testing of hypotheses on multiple (often correlated) measures within the same sample. For instance, a researcher might submit a number of subjects to a personality test consisting of multiple personality scales (e.g. the Minnesota Multiphasic Personality Inventory). Because measures of this type are usually positively correlated, it is not advisable to conduct separate univariate t-tests to test hypotheses, as these would neglect the covariance among measures and inflate the chance of falsely rejecting at least one hypothesis (Type I error). In this case a single multivariate test is preferable for hypothesis testing. One approach is Fisher's method for combining multiple tests, with alpha reduced for positive correlation among tests. Another is Hotelling's T² statistic, which follows a T² distribution. However, in practice the distribution is rarely used, since tabulated values for T² are hard to find. Usually, T² is converted instead to an F statistic. For a one-sample multivariate test, the hypothesis is that the mean vector (μ) is equal to a given vector (μ0).
The test statistic is Hotelling's t²:

$t^{2}=n({\bar {\mathbf {x} }}-{{\boldsymbol {\mu }}_{0}})'{\mathbf {S} }^{-1}({\bar {\mathbf {x} }}-{{\boldsymbol {\mu }}_{0}})$

where n is the sample size, ${\bar {\mathbf {x} }}$ is the vector of column means and S is an m × m sample covariance matrix. For a two-sample multivariate test, the hypothesis is that the mean vectors (μ1, μ2) of two samples are equal. The test statistic is Hotelling's two-sample t²:

$t^{2}={\frac {n_{1}n_{2}}{n_{1}+n_{2}}}\left({\bar {\mathbf {x} }}_{1}-{\bar {\mathbf {x} }}_{2}\right)'{\mathbf {S} _{\text{pooled}}}^{-1}\left({\bar {\mathbf {x} }}_{1}-{\bar {\mathbf {x} }}_{2}\right).$

The two-sample t-test is a special case of simple linear regression

The two-sample t-test is a special case of simple linear regression, as illustrated by the following example. A clinical trial examines 6 patients given drug or placebo. 3 patients get 0 units of drug (the placebo group). 3 patients get 1 unit of drug (the active treatment group). At the end of treatment, the researchers measure the change from baseline in the number of words that each patient can recall in a memory test. Data and code are given for the analysis using the R programming language with the t.test and lm functions for the t-test and linear regression. Here are the (fictitious) data generated in R.

> word.recall.data=data.frame(drug.dose=c(0,0,0,1,1,1), word.recall=c(1,2,3,5,6,7))

Patient  drug.dose  word.recall
1        0          1
2        0          2
3        0          3
4        1          5
5        1          6
6        1          7

Perform the t-test. Notice that the assumption of equal variance, var.equal=T, is required to make the analysis exactly equivalent to simple linear regression.

> with(word.recall.data, t.test(word.recall~drug.dose, var.equal=T))

Running the R code gives the following results.
• The mean word.recall in the 0 drug.dose group is 2.
• The mean word.recall in the 1 drug.dose group is 6.
• The difference between treatment groups in the mean word.recall is 6 − 2 = 4.
• The difference in word.recall between drug doses is significant (p = 0.00805).

Perform a linear regression of the same data. Calculations may be performed using the R function lm() for a linear model.

> word.recall.data.lm = lm(word.recall~drug.dose, data=word.recall.data)
> summary(word.recall.data.lm)

The linear regression provides a table of coefficients and p-values.

Coefficient  Estimate  Std. Error  t value  P-value
Intercept    2         0.5774      3.464    0.02572
drug.dose    4         0.8165      4.899    0.00805

The table of coefficients gives the following results.
• The estimate value of 2 for the intercept is the mean value of the word recall when the drug dose is 0.
• The estimate value of 4 for the drug dose indicates that for a 1-unit change in drug dose (from 0 to 1) there is a 4-unit change in mean word recall (from 2 to 6). This is the slope of the line joining the two group means.
• The p-value that the slope of 4 is different from 0 is p = 0.00805.

The coefficients for the linear regression specify the slope and intercept of the line that joins the two group means, as illustrated in the graph. The intercept is 2 and the slope is 4. Compare the result from the linear regression to the result from the t-test.
• From the t-test, the difference between the group means is 6 − 2 = 4.
• From the regression, the slope is also 4, indicating that a 1-unit change in drug dose (from 0 to 1) gives a 4-unit change in mean word recall (from 2 to 6).
• The t-test p-value for the difference in means, and the regression p-value for the slope, are both 0.00805. The methods give identical results.

This example shows that, for the special case of a simple linear regression where there is a single x-variable that has values 0 and 1, the t-test gives the same results as the linear regression. The relationship can also be shown algebraically. Recognizing this relationship between the t-test and linear regression facilitates the use of multiple linear regression and multi-way analysis of variance.
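The same equivalence can be checked outside R. The Python sketch below (standard library only; the variable names are illustrative) computes the pooled two-sample t statistic and the OLS slope t statistic for the word-recall data and shows that they coincide.

```python
from statistics import mean, variance  # variance() uses the n - 1 denominator
import math

x = [0, 0, 0, 1, 1, 1]  # drug.dose
y = [1, 2, 3, 5, 6, 7]  # word.recall

# Two-sample t with pooled variance (var.equal=T in R's t.test)
g0 = [yi for xi, yi in zip(x, y) if xi == 0]
g1 = [yi for xi, yi in zip(x, y) if xi == 1]
n0, n1 = len(g0), len(g1)
sp = math.sqrt(((n0 - 1) * variance(g0) + (n1 - 1) * variance(g1)) / (n0 + n1 - 2))
t_test = (mean(g1) - mean(g0)) / (sp * math.sqrt(1 / n0 + 1 / n1))

# Slope t statistic from simple linear regression
n = len(x)
xbar, ybar = mean(x), mean(y)
sxx = sum((xi - xbar) ** 2 for xi in x)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
intercept = ybar - slope * xbar
ssr = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
t_slope = slope / math.sqrt(ssr / (n - 2) / sxx)

# Both t statistics equal about 4.899, matching the tables above
```

The two t values agree exactly, and the slope (4) and intercept (2) reproduce the regression coefficients above.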
These alternatives to t-tests allow for the inclusion of additional explanatory variables that are associated with the response. Including such additional explanatory variables using regression or ANOVA reduces the otherwise unexplained variance, and commonly yields greater power to detect differences than do two-sample t-tests.

Software implementations

Many spreadsheet programs and statistics packages, such as QtiPlot, LibreOffice Calc, Microsoft Excel, SAS, SPSS, Stata, DAP, gretl, R, Python, PSPP, MATLAB and Minitab, include implementations of Student's t-test.

Language/Program                Function
Microsoft Excel pre 2010        TTEST(array1, array2, tails, type)
Microsoft Excel 2010 and later  T.TEST(array1, array2, tails, type)
Apple Numbers                   TTEST(sample-1-values, sample-2-values, tails, test-type)
LibreOffice Calc                TTEST(Data1; Data2; Mode; Type)
Google Sheets                   TTEST(range1, range2, tails, type)
Python                          scipy.stats.ttest_ind(a, b, equal_var=True)
MATLAB                          ttest(data1, data2)
Mathematica                     TTest[{data1,data2}]
R                               t.test(data1, data2, var.equal=TRUE)
SAS                             PROC TTEST
Java                            tTest(sample1, sample2)
Julia                           EqualVarianceTTest(sample1, sample2)
Stata                           ttest data1 == data2

See also
• Conditional change model
• F-test
• Noncentral t-distribution in power analysis
• Student's t-statistic
• Z-test
• Mann–Whitney U test
• Šidák correction for t-test
• Welch's t-test
• Analysis of variance (ANOVA)

References

Citations
1. The Microbiome in Health and Disease. Academic Press. 2020-05-29. p. 397. ISBN 978-0-12-820001-8.
2. Szabó, István (2003). "Systeme aus einer endlichen Anzahl starrer Körper".
Einführung in die Technische Mechanik. Springer Berlin Heidelberg. pp. 196–199. doi:10.1007/978-3-642-61925-0_16. ISBN 978-3-540-13293-6.
3. Schlyvitch, B. (October 1937). "Untersuchungen über den anastomotischen Kanal zwischen der Arteria coeliaca und mesenterica superior und damit in Zusammenhang stehende Fragen". Zeitschrift für Anatomie und Entwicklungsgeschichte. 107 (6): 709–737. doi:10.1007/bf02118337. ISSN 0340-2061. S2CID 27311567.
4. Helmert (1876). "Die Genauigkeit der Formel von Peters zur Berechnung des wahrscheinlichen Beobachtungsfehlers directer Beobachtungen gleicher Genauigkeit". Astronomische Nachrichten (in German). 88 (8–9): 113–131. Bibcode:1876AN.....88..113H. doi:10.1002/asna.18760880802.
5. Lüroth, J. (1876). "Vergleichung von zwei Werthen des wahrscheinlichen Fehlers". Astronomische Nachrichten (in German). 87 (14): 209–220. Bibcode:1876AN.....87..209L. doi:10.1002/asna.18760871402.
6. Pfanzagl, J. (1996). "Studies in the history of probability and statistics XLIV. A forerunner of the t-distribution". Biometrika. 83 (4): 891–898. doi:10.1093/biomet/83.4.891. MR 1766040.
7. Sheynin, Oscar (1995). "Helmert's work in the theory of errors". Archive for History of Exact Sciences. 49 (1): 73–104. doi:10.1007/BF00374700. ISSN 0003-9519. S2CID 121241599.
8. Pearson, Karl (1895). "X. Contributions to the mathematical theory of evolution.—II. Skew variation in homogeneous material". Philosophical Transactions of the Royal Society of London A. 186: 343–414. Bibcode:1895RSPTA.186..343P. doi:10.1098/rsta.1895.0010.
9. Student (1908). "The Probable Error of a Mean". Biometrika. 6 (1): 1–25. doi:10.1093/biomet/6.1.1. hdl:10338.dmlcz/143545.
10. "T Table".
11. Wendl, Michael C. (2016). "Pseudonymous fame". Science. 351 (6280): 1406. doi:10.1126/science.351.6280.1406. PMID 27013722.
12. Walpole, Ronald E.; Myers, Raymond H. (2006). Probability & Statistics for Engineers & Scientists (7th ed.). New Delhi: Pearson. ISBN 81-7758-404-9. OCLC 818811849.
13. O'Connor, John J.; Robertson, Edmund F. "William Sealy Gosset". MacTutor History of Mathematics Archive. University of St Andrews.
14. Raju, T. N. (2005). "William Sealy Gosset and William A. Silverman: Two 'Students' of Science". Pediatrics. 116 (3): 732–735. doi:10.1542/peds.2005-1134. PMID 16140715. S2CID 32745754.
15. Dodge, Yadolah (2008). The Concise Encyclopedia of Statistics. Springer Science & Business Media. pp. 234–235. ISBN 978-0-387-31742-7.
16. Fadem, Barbara (2008). High-Yield Behavioral Science. High-Yield Series. Hagerstown, MD: Lippincott Williams & Wilkins. ISBN 9781451130300.
17. Lumley, Thomas; Diehr, Paula; Emerson, Scott; Chen, Lu (May 2002). "The Importance of the Normality Assumption in Large Public Health Data Sets". Annual Review of Public Health. 23 (1): 151–169. doi:10.1146/annurev.publhealth.23.100901.140546. ISSN 0163-7525. PMID 11910059.
18. Markowski, Carol A.; Markowski, Edward P. (1990). "Conditions for the Effectiveness of a Preliminary Test of Variance". The American Statistician. 44 (4): 322–326. doi:10.2307/2684360. JSTOR 2684360.
19. Guo, Beibei; Yuan, Ying (2017). "A comparative review of methods for comparing means using partially paired data". Statistical Methods in Medical Research. 26 (3): 1323–1340. doi:10.1177/0962280215577111. PMID 25834090. S2CID 46598415.
20. Bland, Martin (1995). An Introduction to Medical Statistics. Oxford University Press. p. 168. ISBN 978-0-19-262428-4.
21. Rice, John A. (2006). Mathematical Statistics and Data Analysis (3rd ed.). Duxbury Advanced.
22. Weisstein, Eric. "Student's t-Distribution". mathworld.wolfram.com.
23. David, H. A.; Gunnink, Jason L. (1997). "The Paired t Test Under Artificial Pairing". The American Statistician. 51 (1): 9–12. doi:10.2307/2684684. JSTOR 2684684.
24. Wang, Chang; Jia, Jinzhu (2022). "Te Test: A New Non-asymptotic T-test for Behrens-Fisher Problems". arXiv:2210.16473 [math.ST].
25. Sawilowsky, Shlomo S.; Blair, R. Clifford (1992). "A More Realistic Look at the Robustness and Type II Error Properties of the t Test to Departures From Population Normality". Psychological Bulletin. 111 (2): 352–360. doi:10.1037/0033-2909.111.2.352.
26. Zimmerman, Donald W. (January 1998). "Invalidation of Parametric and Nonparametric Statistical Tests by Concurrent Violation of Two Assumptions". The Journal of Experimental Education. 67 (1): 55–68. doi:10.1080/00220979809598344. ISSN 0022-0973.
27. Blair, R. Clifford; Higgins, James J. (1980). "A Comparison of the Power of Wilcoxon's Rank-Sum Statistic to That of Student's t Statistic Under Various Nonnormal Distributions". Journal of Educational Statistics. 5 (4): 309–335. doi:10.2307/1164905. JSTOR 1164905.
28. Fay, Michael P.; Proschan, Michael A. (2010). "Wilcoxon–Mann–Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules". Statistics Surveys. 4: 1–39. doi:10.1214/09-SS051. PMC 2857732. PMID 20414472.
29. Derrick, B.; Toher, D.; White, P. (2017). "How to compare the means of two samples that include paired observations and independent observations: A companion to Derrick, Russ, Toher and White (2017)". The Quantitative Methods for Psychology. 13 (2): 120–126. doi:10.20982/tqmp.13.2.p120.

Sources
• O'Mahony, Michael (1986). Sensory Evaluation of Food: Statistical Methods and Procedures. CRC Press. p. 487. ISBN 0-8247-7337-3.
• Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (1992). Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press. p. 616. ISBN 0-521-43108-5.

Further reading
• Boneau, C. Alan (1960). "The effects of violations of assumptions underlying the t test". Psychological Bulletin. 57 (1): 49–64. doi:10.1037/h0041412. PMID 13802482.
• Edgell, Stephen E.; Noon, Sheila M. (1984). "Effect of violation of normality on the t test of the correlation coefficient". Psychological Bulletin. 95 (3): 576–583.
doi:10.1037/0033-2909.95.3.576.

External links
• "Student test". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
• Trochim, William M.K. "The T-Test", Research Methods Knowledge Base, conjoint.ly
• Econometrics lecture (topic: hypothesis testing) on YouTube by Mark Thoma
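For illustration, the pooled two-sample statistic used by the equal-variance tests in the software table above can be sketched in plain Python (a minimal standard-library sketch with made-up data; real analyses would normally call a packaged implementation such as scipy.stats.ttest_ind or R's t.test, which also compute the p-value):

```python
import math
from statistics import mean, variance

def two_sample_t(a, b):
    """Student's two-sample t statistic, assuming equal population variances.
    Pooled variance: s_p^2 = ((n1-1)*s1^2 + (n2-1)*s2^2) / (n1 + n2 - 2)."""
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Two illustrative samples; in practice the statistic is referred to a
# Student's t distribution with n1 + n2 - 2 degrees of freedom.
t = two_sample_t([19.8, 20.4, 19.6, 17.8, 18.5], [28.2, 26.6, 20.1, 23.3, 25.2])
```

The sign of the statistic only indicates which sample mean is larger; the library routines in the table convert it to a p-value using the t distribution with n1 + n2 − 2 degrees of freedom.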
Wikipedia
T-theory
T-theory is a branch of discrete mathematics dealing with the analysis of trees and discrete metric spaces.

General history
T-theory originated from a question raised by Manfred Eigen in the late 1970s. He was trying to fit twenty distinct t-RNA molecules of the Escherichia coli bacterium into a tree. An important concept of T-theory is the tight span of a metric space. If X is a metric space, the tight span T(X) of X is, up to isomorphism, the unique minimal injective metric space that contains X. John Isbell was the first to discover the tight span, in 1964, which he called the injective envelope. Andreas Dress independently described the same construction, which he called the tight span.

Application areas
• Phylogenetic analysis, which is used to create phylogenetic trees.
• Online algorithms – k-server problem

Recent developments
• Bernd Sturmfels, Professor of Mathematics and Computer Science at Berkeley, and Josephine Yu classified six-point metrics using T-theory.

References
• Hans-Jürgen Bandelt; Andreas Dress (1992). "A canonical decomposition theory for metrics on a finite set". Advances in Mathematics. 92 (1): 47–105. doi:10.1016/0001-8708(92)90061-O.
• Andreas Dress; Vincent Moulton; Werner Terhalle (1996). "T-theory: An Overview". European Journal of Combinatorics. 17 (2–3): 161–175. doi:10.1006/eujc.1996.0015.
• John Isbell (1964). "Six theorems about metric spaces". Commentarii Mathematici Helvetici. 39: 65–74. doi:10.1007/BF02566944.
• Bernd Sturmfels; Josephine Yu (2004). "Classification of Six-Point Metrics". The Electronic Journal of Combinatorics. 11: R44. doi:10.37236/1797.
Run-length encoding
Run-length encoding (RLE) is a form of lossless data compression in which runs of data (sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most efficient on data that contains many such runs, for example, simple graphic images such as icons, line drawings, Conway's Game of Life, and animations. For files that do not have many runs, RLE could increase the file size.

RLE may also be used to refer to an early graphics file format supported by CompuServe for compressing black and white images, which was widely supplanted by their later Graphics Interchange Format (GIF). RLE also refers to a little-used image format in Windows 3.x, with the extension rle, which is a run-length encoded bitmap, used to compress the Windows 3.x startup screen.

Example
Consider a screen containing plain black text on a solid white background. There will be many long runs of white pixels in the blank space, and many short runs of black pixels within the text. A hypothetical scan line, with B representing a black pixel and W representing white, might read as follows:

WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWBWWWWWWWWWWWWWW

With a run-length encoding (RLE) data compression algorithm applied to the above hypothetical scan line, it can be rendered as follows:

12W1B12W3B24W1B14W

This can be interpreted as a sequence of twelve Ws, one B, twelve Ws, three Bs, etc., and represents the original 67 characters in only 18. While the actual format used for the storage of images is generally binary rather than ASCII characters like this, the principle remains the same. Even binary data files can be compressed with this method; file format specifications often dictate repeated bytes in files as padding space.
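The count-and-symbol scheme above can be sketched in a few lines of Python (an illustrative sketch, not any particular file format's encoding; the function names are our own):

```python
import re
from itertools import groupby

def rle_encode(s):
    """Collapse each run into count + symbol, e.g. 'WWWB' -> '3W1B'."""
    return "".join(f"{len(list(group))}{symbol}" for symbol, group in groupby(s))

def rle_decode(encoded):
    """Invert rle_encode by parsing each count + symbol pair."""
    return "".join(symbol * int(count)
                   for count, symbol in re.findall(r"(\d+)(\D)", encoded))

scan_line = "W" * 12 + "B" + "W" * 12 + "B" * 3 + "W" * 24 + "B" + "W" * 14
packed = rle_encode(scan_line)          # '12W1B12W3B24W1B14W' (18 characters)
assert rle_decode(packed) == scan_line  # round-trips to the original 67 pixels
```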
However, newer compression methods such as DEFLATE often use LZ77-based algorithms, a generalization of run-length encoding that can take advantage of runs of strings of characters (such as BWWBWWBWWBWW).

Run-length encoding can be expressed in multiple ways to accommodate data properties as well as additional compression algorithms. For instance, one popular method encodes run lengths for runs of two or more characters only, using an "escape" symbol to identify runs, or using the character itself as the escape, so that any time a character appears twice it denotes a run. In the previous example, this would give the following:

WW12BWW12BB3WW24BWW14

This would be interpreted as a run of twelve Ws, a B, a run of twelve Ws, a run of three Bs, etc. In data where runs are less frequent, this can significantly improve the compression rate.

One other matter is the application of additional compression algorithms. Even with the runs extracted, the frequencies of different characters may be large, allowing for further compression; however, if the run lengths are written in the file in the locations where the runs occurred, the presence of these numbers interrupts the normal flow and makes it harder to compress. To overcome this, some run-length encoders separate the data and escape symbols from the run lengths, so that the two can be handled independently. For the example data, this would result in two outputs, the string "WWBWWBBWWBWW" and the numbers (12,12,3,24,14).
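Both variants described here can be sketched similarly (again an illustrative sketch; runs of length one pass through unchanged, and a doubled symbol acts as its own escape):

```python
from itertools import groupby

def rle_escape_encode(s):
    """Doubled-symbol escape: runs of two or more become symbol+symbol+count."""
    parts = []
    for symbol, group in groupby(s):
        n = len(list(group))
        parts.append(symbol if n == 1 else f"{symbol}{symbol}{n}")
    return "".join(parts)

def rle_split(s):
    """Separate symbols from run lengths so each stream compresses independently."""
    symbols, counts = [], []
    for symbol, group in groupby(s):
        n = len(list(group))
        symbols.append(symbol if n == 1 else symbol + symbol)
        if n > 1:
            counts.append(n)
    return "".join(symbols), counts

scan_line = "W" * 12 + "B" + "W" * 12 + "B" * 3 + "W" * 24 + "B" + "W" * 14
assert rle_escape_encode(scan_line) == "WW12BWW12BB3WW24BWW14"
assert rle_split(scan_line) == ("WWBWWBBWWBWW", [12, 12, 3, 24, 14])
```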
History and applications
Run-length encoding (RLE) schemes were employed in the transmission of analog television signals as far back as 1967.[1] In 1983, run-length encoding was patented by Hitachi.[2][3][4] RLE is particularly well suited to palette-based bitmap images such as computer icons, and was a popular image compression method on early online services such as CompuServe before the advent of more sophisticated formats such as GIF.[5] It does not work well on continuous-tone images such as photographs, although JPEG uses it on the coefficients that remain after transforming and quantizing image blocks. Common formats for run-length encoded data include Truevision TGA, PackBits (by Apple, used in MacPaint), PCX and ILBM. The International Telecommunication Union also describes a standard to encode run-length-colour for fax machines, known as T.45.[6] The standard, which is combined with other techniques into Modified Huffman coding, is relatively efficient because most faxed documents are generally white space, with occasional interruptions of black.

See also
• Kolakoski sequence
• Look-and-say sequence
• Comparison of graphics file formats
• Golomb coding
• Burrows–Wheeler transform
• Recursive indexing
• Run-length limited
• Bitmap index
• Forsyth–Edwards Notation, which uses run-length encoding for empty spaces in chess positions.
• DEFLATE

References
1. Robinson, A. H.; Cherry, C. (1967). "Results of a prototype television bandwidth compression scheme". Proceedings of the IEEE. IEEE. 55 (3): 356–364. doi:10.1109/PROC.1967.5493.
2. "Run Length Encoding Patents". Internet FAQ Consortium. 21 March 1996. Retrieved 14 July 2019.
3. "Method and system for data compression and restoration". Google Patents. 7 August 1984. Retrieved 14 July 2019.
4. "Data recording method". Google Patents. 8 August 1983. Retrieved 14 July 2019.
5. Dunn, Christopher (1987). "Smile! You're on RLE!" (PDF). The Transactor. Transactor Publishing. 7 (6): 16–18. Retrieved 2015-12-06.
6.
Recommendation T.45 (02/00): Run-length colour encoding. International Telecommunication Union. 2000. Retrieved 2015-12-06.

External links
• Run-length encoding implemented in different programming languages (on Rosetta Code)
• Single Header Run-Length Encoding Library: smallest possible implementation (about 20 SLoC) in ANSI C. FOSS, compatible with Truevision TGA; supports 8, 16, 24 and 32 bit elements too.
Chris Stevens (mathematician)
Terrie Christine Stevens, also known as T. Christine Stevens, is an American mathematician whose research concerns topological groups, the history of mathematics, and mathematics education.[1] She is also known as the co-founder of Project NExT, a mentorship program for recent doctorates in mathematics, which she directed from 1994 until 2009.[2][3][4]

Education and career
Stevens graduated from Smith College in 1970,[5] and completed her doctorate in 1978 at Harvard University under the supervision of Andrew M. Gleason. Her dissertation was Weakened Topologies for Lie Groups.[6][7] She held teaching positions at the University of Massachusetts Lowell, at Mount Holyoke College and at Arkansas State University before joining Saint Louis University, where for 25 years she was a professor of mathematics and computer science.[8][6] She was also a Congressional Science Fellow assisting congressman Theodore S. Weiss in 1984–1985,[1][5] and was a program officer at the National Science Foundation in 1987–1989.[1] After retiring from SLU, she became Associate Executive Director for Meetings and Professional Services of the American Mathematical Society.[9][6] She also served as an AMS Council member at large from 2011 to 2013.[10]

Recognition
In 2004 Stevens won the Gung and Hu Award for Distinguished Service to Mathematics of the Mathematical Association of America for her work on Project NExT.[6][8] In 2010 Stevens was awarded the Smith College Medal by her alma mater.[4][5] She has been a fellow of the American Association for the Advancement of Science since 2005,[11] and in 2012, she became one of the inaugural fellows of the American Mathematical Society.[12] She was the 2015 winner of the Louise Hay Award of the Association for Women in Mathematics.[9]

References
1. Speaker bio, Society for Industrial and Applied Mathematics, retrieved 2015-01-25.
2. Project NExT, Mathematical Association of America, retrieved 2015-01-25.
3.
Higgins, Aparna (November 2009), "AMS Sponsors NExT Fellows" (PDF), Inside the AMS, Notices of the AMS, 56 (10): 1310. 4. "MAA Member Chris Stevens Awarded Smith College Medal", Math in the News, MAA News, Mathematical Association of America, September 17, 2009. 5. Smith College Rally Day: Honors, Hats and a Secret Revealed, Smith College, September 10, 2009, retrieved 2015-01-25. 6. Jackson, Allyn (January 2015), "Chris Stevens Joins AMS Executive Staff" (PDF), Notices of the AMS, 62 (1): 56–57, doi:10.1090/noti1201. 7. Chris Stevens at the Mathematics Genealogy Project 8. Berry, Clayton (January 28, 2004), Professor Earns Highest Honor from Leading Mathematics Organization, Saint Louis University, archived from the original on March 3, 2016, retrieved 2015-01-25. 9. 2015 AWM Louise Hay Award, Association for Women in Mathematics, retrieved 2015-01-25. 10. "AMS Committees". American Mathematical Society. Retrieved 2023-03-29. 11. Elected Fellows, AAAS, retrieved 2017-10-30. 12. List of Fellows of the American Mathematical Society, retrieved 2015-01-25. External links • Home page
Takao Hayashi
Takao Hayashi (born 1949) is a Japanese mathematics educator and historian of mathematics specializing in Indian mathematics. Hayashi was born in Niigata, Japan. He obtained a Bachelor of Science degree from Tohoku University, Sendai, Japan in 1974, a Master of Arts degree from Tohoku University, Sendai, Japan in 1976 and a postgraduate degree from Kyoto University, Japan in 1979. He earned the Doctor of Philosophy degree from Brown University, USA in 1985 under the guidance of David Pingree. He was a researcher at the Mehta Research Institute for Mathematics and Mathematical Physics, Allahabad, India during 1982–1983, and a lecturer at Kyoto Women's College during 1985–1987. He joined Doshisha University, Kyoto as a lecturer in history of science in 1986 and was promoted to professor in 1995. He has also worked at various universities in Japan in different capacities.[1]

Publications
Hayashi has a large number of research publications relating to the history of Indian mathematics. He has also contributed chapters to several encyclopedic publications.[2] The books he has published include:[1]
• The Bakhshali Manuscript: An Ancient Indian Mathematical Treatise, Egbert Forsten Publishing, 1995
• (jointly with S. R. Sarma, Takanori Kusuba and Michio Yano), Gaṇitasārakaumudī: The Moonlight of the Essence of Mathematics by Thakkura Pherū, Manohar Publishers and Distributors, 2009
• Kuṭṭākāraśiromaṇi of Devarāja: Sanskrit Text with English Translation, Indian National Science Academy, 2012
• Gaṇitamañjarī of Gaṇeśa, Indian National Science Academy, 2013
• (jointly with Clemency Montelle, K. Ramasubramanian) Bhāskara-prabhā, Springer Singapore, 2018

Awards/Prizes
The awards and prizes conferred on Hayashi include:[1]
• The Salomon Reinach Foundation Prize, Institut de France (2001)
• Kuwabara Prize, the History of Mathematics Society of Japan (2005)
• Publication Prize, Mathematical Society of Japan (2005)

References
1. "Hayashi Takao". J-Global.
Japan Science and Technology Agency. Retrieved 28 July 2023. 2. "Takao Hayashi". Britannica. Encyclopedia Britannica. Retrieved 28 July 2023.
Wikipedia
Tychonoff space

In topology and related branches of mathematics, Tychonoff spaces and completely regular spaces are kinds of topological spaces. These conditions are examples of separation axioms. A Tychonoff space is any completely regular space that is also a Hausdorff space; there exist completely regular spaces that are not Tychonoff (i.e. not Hausdorff).

History

Tychonoff spaces are named after Andrey Nikolayevich Tychonoff, whose Russian name (Тихонов) is variously rendered as "Tychonov", "Tikhonov", "Tihonov", "Tichonov", etc. He introduced them in 1930 in order to avoid the pathological situation of Hausdorff spaces whose only continuous real-valued functions are constant maps.[1]

Definitions

A topological space $X$ is called completely regular if points can be separated from closed sets via (bounded) continuous real-valued functions. In technical terms this means: for any closed set $A\subseteq X$ and any point $x\in X\setminus A,$ there exists a real-valued continuous function $f:X\to \mathbb {R} $ such that $f(x)=1$ and $f\vert _{A}=0.$ (Equivalently one can choose any two values instead of $0$ and $1$ and even demand that $f$ be a bounded function.)

A topological space is called a Tychonoff space (alternatively: T3½ space, or Tπ space, or completely T3 space) if it is a completely regular Hausdorff space.

Remark. Completely regular spaces and Tychonoff spaces are related through the notion of Kolmogorov equivalence. A topological space is Tychonoff if and only if it is both completely regular and T0. On the other hand, a space is completely regular if and only if its Kolmogorov quotient is Tychonoff.
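To make the definition concrete: in a metric space the required separating function can be written down explicitly, which is one way to see that every metric space is completely regular (a standard sketch, stated here in our own notation):

```latex
% In a metric space (X, d), let A \subseteq X be closed and x \notin A, so that d(x, A) > 0.
% Writing d(y, A) := \inf_{a \in A} d(y, a), define
f(y) \;=\; \frac{d(y, A)}{d(y, A) + d(y, x)}.
% Then f : X \to [0, 1] is continuous and bounded,
% f(x) = d(x, A) / (d(x, A) + 0) = 1, and f\vert_A = 0 since d(a, A) = 0 for every a \in A.
```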
Naming conventions

Across the mathematical literature, different conventions are applied to the term "completely regular" and the "T" axioms. The definitions in this section are in typical modern usage. Some authors, however, switch the meanings of the two kinds of terms, or use all terms interchangeably. In Wikipedia, the terms "completely regular" and "Tychonoff" are used freely and the "T" notation is generally avoided. When reading the literature, caution is thus advised to determine which definitions the author is using. For more on this issue, see History of the separation axioms.

Examples

Almost every topological space studied in mathematical analysis is Tychonoff, or at least completely regular. For example, the real line is Tychonoff under the standard Euclidean topology. Other examples include:
• Every metric space is Tychonoff; every pseudometric space is completely regular.
• Every locally compact regular space is completely regular, and therefore every locally compact Hausdorff space is Tychonoff.
• In particular, every topological manifold is Tychonoff.
• Every totally ordered set with the order topology is Tychonoff.
• Every topological group is completely regular.
• Generalizing both the metric spaces and the topological groups, every uniform space is completely regular. The converse is also true: every completely regular space is uniformisable.
• Every CW complex is Tychonoff.
• Every normal regular space is completely regular, and every normal Hausdorff space is Tychonoff.
• The Niemytzki plane is an example of a Tychonoff space that is not normal.

Properties

Preservation

Complete regularity and the Tychonoff property are well-behaved with respect to initial topologies. Specifically, complete regularity is preserved by taking arbitrary initial topologies and the Tychonoff property is preserved by taking point-separating initial topologies. It follows that:
• Every subspace of a completely regular or Tychonoff space has the same property.
• A nonempty product space is completely regular (respectively Tychonoff) if and only if each factor space is completely regular (respectively Tychonoff). Like all separation axioms, complete regularity is not preserved by taking final topologies. In particular, quotients of completely regular spaces need not be regular. Quotients of Tychonoff spaces need not even be Hausdorff, with one elementary counterexample being the line with two origins. There are closed quotients of the Moore plane that provide counterexamples. Real-valued continuous functions For any topological space $X,$ let $C(X)$ denote the family of real-valued continuous functions on $X$ and let $C_{b}(X)$ be the subset of bounded real-valued continuous functions. Completely regular spaces can be characterized by the fact that their topology is completely determined by $C(X)$ or $C_{b}(X).$ In particular: • A space $X$ is completely regular if and only if it has the initial topology induced by $C(X)$ or $C_{b}(X).$ • A space $X$ is completely regular if and only if every closed set can be written as the intersection of a family of zero sets in $X$ (i.e. the zero sets form a basis for the closed sets of $X$). • A space $X$ is completely regular if and only if the cozero sets of $X$ form a basis for the topology of $X.$ Given an arbitrary topological space $(X,\tau )$ there is a universal way of associating a completely regular space with $(X,\tau ).$ Let ρ be the initial topology on $X$ induced by $C_{\tau }(X)$ or, equivalently, the topology generated by the basis of cozero sets in $(X,\tau ).$ Then ρ will be the finest completely regular topology on $X$ that is coarser than $\tau .$ This construction is universal in the sense that any continuous function $f:(X,\tau )\to Y$ to a completely regular space $Y$ will be continuous on $(X,\rho ).$ In the language of category theory, the functor that sends $(X,\tau )$ to $(X,\rho )$ is left adjoint to the inclusion functor CReg → Top. 
Thus the category of completely regular spaces CReg is a reflective subcategory of Top, the category of topological spaces. By taking Kolmogorov quotients, one sees that the subcategory of Tychonoff spaces is also reflective.

One can show that $C_{\tau }(X)=C_{\rho }(X)$ in the above construction, so that the rings $C(X)$ and $C_{b}(X)$ are typically only studied for completely regular spaces $X.$ The category of realcompact Tychonoff spaces is anti-equivalent to the category of the rings $C(X)$ (where $X$ is realcompact) together with ring homomorphisms as maps. For example, one can reconstruct $X$ from $C(X)$ when $X$ is realcompact. The algebraic theory of these rings is therefore the subject of intensive study. A vast generalization of this class of rings, which still retains many properties of Tychonoff spaces but is also applicable in real algebraic geometry, is the class of real closed rings.

Embeddings

Tychonoff spaces are precisely those spaces that can be embedded in compact Hausdorff spaces. More precisely, for every Tychonoff space $X,$ there exists a compact Hausdorff space $K$ such that $X$ is homeomorphic to a subspace of $K.$ In fact, one can always choose $K$ to be a Tychonoff cube (i.e. a possibly infinite product of unit intervals). Every Tychonoff cube is compact Hausdorff as a consequence of Tychonoff's theorem. Since every subspace of a compact Hausdorff space is Tychonoff one has:

A topological space is Tychonoff if and only if it can be embedded in a Tychonoff cube.
Compactifications

Of particular interest are those embeddings where the image of $X$ is dense in $K;$ these are called Hausdorff compactifications of $X.$ Given any embedding of a Tychonoff space $X$ in a compact Hausdorff space $K,$ the closure of the image of $X$ in $K$ is a compactification of $X.$ In the same 1930 article where Tychonoff defined completely regular spaces, he also proved that every Tychonoff space has a Hausdorff compactification.[2]

Among those Hausdorff compactifications, there is a unique "most general" one, the Stone–Čech compactification $\beta X.$ It is characterized by the universal property that, given a continuous map $f$ from $X$ to any other compact Hausdorff space $Y,$ there is a unique continuous map $g:\beta X\to Y$ that extends $f$ in the sense that $f=g\circ j,$ where $j:X\to \beta X$ denotes the canonical embedding.

Uniform structures

Complete regularity is exactly the condition necessary for the existence of uniform structures on a topological space. In other words, every uniform space has a completely regular topology and every completely regular space $X$ is uniformizable. A topological space admits a separated uniform structure if and only if it is Tychonoff.

Given a completely regular space $X$ there is usually more than one uniformity on $X$ that is compatible with the topology of $X.$ However, there will always be a finest compatible uniformity, called the fine uniformity on $X.$ If $X$ is Tychonoff, then the uniform structure can be chosen so that $\beta X$ becomes the completion of the uniform space $X.$

See also

• Stone–Čech compactification – a universal map from a topological space X to a compact Hausdorff space βX, such that any map from X to a compact Hausdorff space factors through βX uniquely; if X is Tychonoff, then X is a dense subspace of βX

Citations

1. Narici & Beckenstein 2011, p. 240.
2. Narici & Beckenstein 2011, pp. 225–273.

Bibliography

• Gillman, Leonard; Jerison, Meyer (1960).
Rings of continuous functions. Graduate Texts in Mathematics, No. 43 (Dover reprint ed.). NY: Springer-Verlag. p. xiii. ISBN 978-0-486-81688-3.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Willard, Stephen (1970). General Topology (Dover reprint ed.). Reading, Massachusetts: Addison-Wesley Publishing Company. ISBN 0-486-43479-6.
A* search algorithm

A* (pronounced "A-star") is a graph traversal and path search algorithm, which is used in many fields of computer science due to its completeness, optimality, and optimal efficiency.[1] One major practical drawback is its $O(b^{d})$ space complexity, as it stores all generated nodes in memory. Thus, in practical travel-routing systems, it is generally outperformed by algorithms that can pre-process the graph to attain better performance,[2] as well as memory-bounded approaches; however, A* is still the best solution in many cases.[3]

Class: Search algorithm
Data structure: Graph
Worst-case performance: $O(|E|)=O(b^{d})$
Worst-case space complexity: $O(|V|)=O(b^{d})$

Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) first published the algorithm in 1968.[4] It can be seen as an extension of Dijkstra's algorithm. A* achieves better performance by using heuristics to guide its search. Compared to Dijkstra's algorithm, the A* algorithm only finds the shortest path from a specified source to a specified goal, and not the shortest-path tree from a specified source to all possible goals. This is a necessary trade-off for using a specific-goal-directed heuristic. For Dijkstra's algorithm, since the entire shortest-path tree is generated, every node is a goal, and there can be no specific-goal-directed heuristic.

History

A* was created as part of the Shakey project, which had the aim of building a mobile robot that could plan its own actions. Nils Nilsson originally proposed using the Graph Traverser algorithm[5] for Shakey's path planning.[6] Graph Traverser is guided by a heuristic function h(n), the estimated distance from node n to the goal node: it entirely ignores g(n), the distance from the start node to n. Bertram Raphael suggested using the sum, g(n) + h(n).[6] Peter Hart invented the concepts we now call admissibility and consistency of heuristic functions.
A* was originally designed for finding least-cost paths when the cost of a path is the sum of its edge costs, but it has been shown that A* can be used to find optimal paths for any problem satisfying the conditions of a cost algebra.[7]

The original 1968 A* paper[4] contained a theorem stating that no A*-like algorithm[lower-alpha 1] could expand fewer nodes than A* if the heuristic function is consistent and A*'s tie-breaking rule is suitably chosen. A "correction" was published a few years later[8] claiming that consistency was not required, but this was shown to be false in Dechter and Pearl's definitive study of A*'s optimality (now called optimal efficiency), which gave an example of A* with a heuristic that was admissible but not consistent expanding arbitrarily more nodes than an alternative A*-like algorithm.[9]

Description

A* is an informed search algorithm, or a best-first search, meaning that it is formulated in terms of weighted graphs: starting from a specific starting node of a graph, it aims to find a path to the given goal node having the smallest cost (least distance travelled, shortest time, etc.). It does this by maintaining a tree of paths originating at the start node and extending those paths one edge at a time until its termination criterion is satisfied.

At each iteration of its main loop, A* needs to determine which of its paths to extend. It does so based on the cost of the path and an estimate of the cost required to extend the path all the way to the goal. Specifically, A* selects the path that minimizes

$f(n)=g(n)+h(n)$

where n is the next node on the path, g(n) is the cost of the path from the start node to n, and h(n) is a heuristic function that estimates the cost of the cheapest path from n to the goal. A* terminates when the path it chooses to extend is a path from start to goal or if there are no paths eligible to be extended. The heuristic function is problem-specific.
If the heuristic function is admissible – meaning that it never overestimates the actual cost to get to the goal – A* is guaranteed to return a least-cost path from start to goal.

Typical implementations of A* use a priority queue to perform the repeated selection of minimum (estimated) cost nodes to expand. This priority queue is known as the open set, fringe or frontier. At each step of the algorithm, the node with the lowest f(x) value is removed from the queue, the f and g values of its neighbors are updated accordingly, and these neighbors are added to the queue. The algorithm continues until a removed node (thus the node with the lowest f value out of all fringe nodes) is a goal node.[lower-alpha 2] The f value of that goal is then also the cost of the shortest path, since h at the goal is zero in an admissible heuristic.

The algorithm described so far gives us only the length of the shortest path. To find the actual sequence of steps, the algorithm can be easily revised so that each node on the path keeps track of its predecessor. After this algorithm is run, the ending node will point to its predecessor, and so on, until some node's predecessor is the start node.

As an example, when searching for the shortest route on a map, h(x) might represent the straight-line distance to the goal, since that is physically the smallest possible distance between any two points. For a grid map from a video game, the Manhattan distance or the Chebyshev distance may be the better choice depending on the set of movements available (4-way or 8-way).

If the heuristic h satisfies the additional condition h(x) ≤ d(x, y) + h(y) for every edge (x, y) of the graph (where d denotes the length of that edge), then h is called monotone, or consistent. With a consistent heuristic, A* is guaranteed to find an optimal path without processing any node more than once, and A* is equivalent to running Dijkstra's algorithm with the reduced cost d'(x, y) = d(x, y) + h(y) − h(x).
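The grid-map heuristics discussed above can be sketched as follows; this is a minimal illustration in Python, with function names of our own choosing, treating nodes as (x, y) tuples:

```python
import math

def manhattan(a, b):
    # Admissible (and consistent) for 4-way movement with unit step cost.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def chebyshev(a, b):
    # Admissible (and consistent) for 8-way movement where diagonal steps cost 1.
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def euclidean(a, b):
    # Straight-line distance: never overestimates, since no path between two
    # points can be shorter than the straight line joining them.
    return math.hypot(a[0] - b[0], a[1] - b[1])
```

Each of these also satisfies the consistency condition h(x) ≤ d(x, y) + h(y) on its corresponding grid, which is a form of the triangle inequality.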
Pseudocode

The following pseudocode describes the algorithm:

function reconstruct_path(cameFrom, current)
    total_path := {current}
    while current in cameFrom.Keys:
        current := cameFrom[current]
        total_path.prepend(current)
    return total_path

// A* finds a path from start to goal.
// h is the heuristic function. h(n) estimates the cost to reach goal from node n.
function A_Star(start, goal, h)
    // The set of discovered nodes that may need to be (re-)expanded.
    // Initially, only the start node is known.
    // This is usually implemented as a min-heap or priority queue rather than a hash-set.
    openSet := {start}

    // For node n, cameFrom[n] is the node immediately preceding it on the cheapest path
    // from the start to n currently known.
    cameFrom := an empty map

    // For node n, gScore[n] is the cost of the cheapest path from start to n currently known.
    gScore := map with default value of Infinity
    gScore[start] := 0

    // For node n, fScore[n] := gScore[n] + h(n). fScore[n] represents our current best guess
    // as to how cheap a path could be from start to finish if it goes through n.
    fScore := map with default value of Infinity
    fScore[start] := h(start)

    while openSet is not empty
        // This operation can occur in O(log N) time if openSet is a min-heap or a priority queue
        current := the node in openSet having the lowest fScore[] value
        if current = goal
            return reconstruct_path(cameFrom, current)

        openSet.Remove(current)
        for each neighbor of current
            // d(current, neighbor) is the weight of the edge from current to neighbor
            // tentative_gScore is the distance from start to the neighbor through current
            tentative_gScore := gScore[current] + d(current, neighbor)
            if tentative_gScore < gScore[neighbor]
                // This path to neighbor is better than any previous one. Record it!
                cameFrom[neighbor] := current
                gScore[neighbor] := tentative_gScore
                fScore[neighbor] := tentative_gScore + h(neighbor)
                if neighbor not in openSet
                    openSet.add(neighbor)

    // Open set is empty but goal was never reached
    return failure

Remark: In this pseudocode, if a node is reached by one path, removed from openSet, and subsequently reached by a cheaper path, it will be added to openSet again. This is essential to guarantee that the path returned is optimal if the heuristic function is admissible but not consistent. If the heuristic is consistent, when a node is removed from openSet the path to it is guaranteed to be optimal, so the test 'tentative_gScore < gScore[neighbor]' will always fail if the node is reached again.

Example

An example of an A* algorithm in action where nodes are cities connected with roads and h(x) is the straight-line distance to the target point:

Key: green: start; blue: goal; orange: visited

The A* algorithm also has real-world applications. In this example, edges are railroads and h(x) is the great-circle distance (the shortest possible distance on a sphere) to the target. The algorithm is searching for a path between Washington, D.C., and Los Angeles.

Implementation details

There are a number of simple optimizations or implementation details that can significantly affect the performance of an A* implementation. The first detail to note is that the way the priority queue handles ties can have a significant effect on performance in some situations. If ties are broken so the queue behaves in a LIFO manner, A* will behave like depth-first search among equal cost paths (avoiding exploring more than one equally optimal solution).

When a path is required at the end of the search, it is common to keep with each node a reference to that node's parent. At the end of the search, these references can be used to recover the optimal path.
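The pseudocode above can be turned into a short runnable Python sketch. Rather than the decrease-priority operation, it uses the common lazy-deletion alternative: improved paths simply push a new queue entry, and stale entries are skipped when popped. The function name `a_star` and the graph format (an adjacency dict mapping node → {neighbor: edge weight}) are illustrative choices, not a fixed API:

```python
import heapq

def a_star(graph, start, goal, h):
    """graph: dict mapping node -> dict of {neighbor: edge_weight}.
    h: heuristic function h(node) estimating the cost from node to goal.
    Returns the list of nodes on a least-cost path, or None if goal is unreachable."""
    g_score = {start: 0}                  # cost of cheapest known path from start
    came_from = {}                        # parent references for path recovery
    open_heap = [(h(start), start)]       # entries are (f_score, node)
    while open_heap:
        f, current = heapq.heappop(open_heap)
        if f > g_score.get(current, float("inf")) + h(current):
            continue                      # stale entry: a cheaper path was found later
        if current == goal:
            # Walk the parent references back to the start to recover the path.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for neighbor, weight in graph[current].items():
            tentative_g = g_score[current] + weight
            if tentative_g < g_score.get(neighbor, float("inf")):
                # This path to neighbor is better than any previous one. Record it.
                came_from[neighbor] = current
                g_score[neighbor] = tentative_g
                heapq.heappush(open_heap, (tentative_g + h(neighbor), neighbor))
    return None  # open set is empty but goal was never reached
```

With h(n) = 0 for all n this degenerates to Dijkstra's algorithm, matching the special case noted elsewhere in this article.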
If these references are being kept then it can be important that the same node doesn't appear in the priority queue more than once (each entry corresponding to a different path to the node, and each with a different cost). A standard approach here is to check if a node about to be added already appears in the priority queue. If it does, then the priority and parent pointers are changed to correspond to the lower-cost path. A standard binary heap based priority queue does not directly support the operation of searching for one of its elements, but it can be augmented with a hash table that maps elements to their position in the heap, allowing this decrease-priority operation to be performed in logarithmic time. Alternatively, a Fibonacci heap can perform the same decrease-priority operations in constant amortized time. Special cases Dijkstra's algorithm, as another example of a uniform-cost search algorithm, can be viewed as a special case of A* where $h(x)=0$ for all x.[10][11] General depth-first search can be implemented using A* by considering that there is a global counter C initialized with a very large value. Every time we process a node we assign C to all of its newly discovered neighbors. After every single assignment, we decrease the counter C by one. Thus the earlier a node is discovered, the higher its $h(x)$ value. Both Dijkstra's algorithm and depth-first search can be implemented more efficiently without including an $h(x)$ value at each node. Properties Termination and completeness On finite graphs with non-negative edge weights A* is guaranteed to terminate and is complete, i.e. it will always find a solution (a path from start to goal) if one exists. 
On infinite graphs with a finite branching factor and edge costs that are bounded away from zero ($ d(x,y)>\varepsilon >0$ for some fixed $\varepsilon $), A* is guaranteed to terminate only if there exists a solution.[1] Admissibility A search algorithm is said to be admissible if it is guaranteed to return an optimal solution. If the heuristic function used by A* is admissible, then A* is admissible. An intuitive "proof" of this is as follows: When A* terminates its search, it has found a path from start to goal whose actual cost is lower than the estimated cost of any path from start to goal through any open node (the node's $f$ value). When the heuristic is admissible, those estimates are optimistic (not quite—see the next paragraph), so A* can safely ignore those nodes because they cannot possibly lead to a cheaper solution than the one it already has. In other words, A* will never overlook the possibility of a lower-cost path from start to goal and so it will continue to search until no such possibilities exist. The actual proof is a bit more involved because the $f$ values of open nodes are not guaranteed to be optimistic even if the heuristic is admissible. This is because the $g$ values of open nodes are not guaranteed to be optimal, so the sum $g+h$ is not guaranteed to be optimistic. Optimality and consistency Algorithm A is optimally efficient with respect to a set of alternative algorithms Alts on a set of problems P if for every problem P in P and every algorithm A′ in Alts, the set of nodes expanded by A in solving P is a subset (possibly equal) of the set of nodes expanded by A′ in solving P. The definitive study of the optimal efficiency of A* is due to Rina Dechter and Judea Pearl.[9] They considered a variety of definitions of Alts and P in combination with A*'s heuristic being merely admissible or being both consistent and admissible. 
The most interesting positive result they proved is that A*, with a consistent heuristic, is optimally efficient with respect to all admissible A*-like search algorithms on all "non-pathological" search problems. Roughly speaking, their notion of the non-pathological problem is what we now mean by "up to tie-breaking". This result does not hold if A*'s heuristic is admissible but not consistent. In that case, Dechter and Pearl showed there exist admissible A*-like algorithms that can expand arbitrarily fewer nodes than A* on some non-pathological problems. Optimal efficiency is about the set of nodes expanded, not the number of node expansions (the number of iterations of A*'s main loop). When the heuristic being used is admissible but not consistent, it is possible for a node to be expanded by A* many times, an exponential number of times in the worst case.[12] In such circumstances, Dijkstra's algorithm could outperform A* by a large margin. However, more recent research found that this pathological case only occurs in certain contrived situations where the edge weight of the search graph is exponential in the size of the graph and that certain inconsistent (but admissible) heuristics can lead to a reduced number of node expansions in A* searches.[13][14] Bounded relaxation While the admissibility criterion guarantees an optimal solution path, it also means that A* must examine all equally meritorious paths to find the optimal path. To compute approximate shortest paths, it is possible to speed up the search at the expense of optimality by relaxing the admissibility criterion. Oftentimes we want to bound this relaxation, so that we can guarantee that the solution path is no worse than (1 + ε) times the optimal solution path. This new guarantee is referred to as ε-admissible. 
There are a number of ε-admissible algorithms:

• Weighted A*/Static Weighting.[15] If ha(n) is an admissible heuristic function, in the weighted version of the A* search one uses hw(n) = ε ha(n), ε > 1 as the heuristic function, and performs the A* search as usual (which typically finishes faster than using ha, since fewer nodes are expanded). The path hence found by the search algorithm can have a cost of at most ε times that of the least cost path in the graph.[16]
• Dynamic Weighting[17] uses the cost function $f(n)=g(n)+(1+\varepsilon w(n))h(n)$, where $w(n)={\begin{cases}1-{\frac {d(n)}{N}}&d(n)\leq N\\0&{\text{otherwise}}\end{cases}}$, and where $d(n)$ is the depth of the search and N is the anticipated length of the solution path.
• Sampled Dynamic Weighting[18] uses sampling of nodes to better estimate and debias the heuristic error.
• $A_{\varepsilon }^{*}$[19] uses two heuristic functions. The first is the FOCAL list, which is used to select candidate nodes, and the second hF is used to select the most promising node from the FOCAL list.
• Aε[20] selects nodes with the function $Af(n)+Bh_{F}(n)$, where A and B are constants. If no nodes can be selected, the algorithm will backtrack with the function $Cf(n)+Dh_{F}(n)$, where C and D are constants.
• AlphA*[21] attempts to promote depth-first exploitation by preferring recently expanded nodes. AlphA* uses the cost function $f_{\alpha }(n)=(1+w_{\alpha }(n))f(n)$, where $w_{\alpha }(n)={\begin{cases}\lambda &g(\pi (n))\leq g({\tilde {n}})\\\Lambda &{\text{otherwise}}\end{cases}}$, where λ and Λ are constants with $\lambda \leq \Lambda $, π(n) is the parent of n, and ñ is the most recently expanded node.

Complexity

The time complexity of A* depends on the heuristic.
In the worst case of an unbounded search space, the number of nodes expanded is exponential in the depth of the solution (the shortest path) d: O(bd), where b is the branching factor (the average number of successors per state).[22] This assumes that a goal state exists at all, and is reachable from the start state; if it is not, and the state space is infinite, the algorithm will not terminate. The heuristic function has a major effect on the practical performance of A* search, since a good heuristic allows A* to prune away many of the bd nodes that an uninformed search would expand. Its quality can be expressed in terms of the effective branching factor b*, which can be determined empirically for a problem instance by measuring the number of nodes generated by expansion, N, and the depth of the solution, then solving[23] $N+1=1+b^{*}+(b^{*})^{2}+\dots +(b^{*})^{d}.$ Good heuristics are those with low effective branching factor (the optimal being b* = 1). The time complexity is polynomial when the search space is a tree, there is a single goal state, and the heuristic function h meets the following condition: $|h(x)-h^{*}(x)|=O(\log h^{*}(x))$ where h* is the optimal heuristic, the exact cost to get from x to the goal. In other words, the error of h will not grow faster than the logarithm of the "perfect heuristic" h* that returns the true distance from x to the goal.[16][22] The space complexity of A* is roughly the same as that of all other graph search algorithms, as it keeps all generated nodes in memory.[1] In practice, this turns out to be the biggest drawback of the A* search, leading to the development of memory-bounded heuristic searches, such as Iterative deepening A*, memory-bounded A*, and SMA*. 
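The equation defining the effective branching factor b* has no closed-form solution, but given the measured node count N and solution depth d it can be solved numerically. A small sketch using bisection (the function name is our own; it assumes N is at least the count produced by b* = 1):

```python
def effective_branching_factor(n_generated, depth, tol=1e-9):
    """Solve N + 1 = 1 + b + b**2 + ... + b**depth for b by bisection,
    where n_generated is N, the number of nodes generated by the search."""
    def total(b):
        # The polynomial 1 + b + b^2 + ... + b^depth, evaluated term by term
        # so that b == 1 needs no special-casing.
        return sum(b ** i for i in range(depth + 1))

    # total(b) is strictly increasing for b >= 1, so bisect on [1, N + 1].
    lo, hi = 1.0, float(n_generated + 1)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_generated + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For instance, a full binary tree of depth 3 generates N = 14 nodes beyond the root (1 + 2 + 4 + 8 = 15 = N + 1), so the recovered b* is 2.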
Applications A* is often used for the common pathfinding problem in applications such as video games, but was originally designed as a general graph traversal algorithm.[4] It finds applications in diverse problems, including the problem of parsing using stochastic grammars in NLP.[24] Other cases include an Informational search with online learning.[25] Relations to other algorithms What sets A* apart from a greedy best-first search algorithm is that it takes the cost/distance already traveled, g(n), into account. Some common variants of Dijkstra's algorithm can be viewed as a special case of A* where the heuristic $h(n)=0$ for all nodes;[10][11] in turn, both Dijkstra and A* are special cases of dynamic programming.[26] A* itself is a special case of a generalization of branch and bound.[27] A* is similar to beam search except that beam search maintains a limit on the numbers of paths that it has to explore.[28] Variants • Anytime A*[29] • Block A* • D* • Field D* • Fringe • Fringe Saving A* (FSA*) • Generalized Adaptive A* (GAA*) • Incremental heuristic search • Reduced A*[30] • Iterative deepening A* (IDA*) • Jump point search • Lifelong Planning A* (LPA*) • New Bidirectional A* (NBA*)[31] • Simplified Memory bounded A* (SMA*) • Theta* A* can also be adapted to a bidirectional search algorithm. Special care needs to be taken for the stopping criterion.[32] See also • Breadth-first search • Depth-first search • Any-angle path planning, search for paths that are not limited to moving along graph edges but rather can take on any angle Notes 1. “A*-like” means the algorithm searches by extending paths originating at the start node one edge at a time, just as A* does. This excludes, for example, algorithms that search backward from the goal or in both directions simultaneously. In addition, the algorithms covered by this theorem must be admissible, and “not more informed” than A*. 2. 
Goal nodes may be passed over multiple times if there remain other nodes with lower f values, as they may lead to a shorter path to a goal. References 1. Russell, Stuart J. (2018). Artificial intelligence a modern approach. Norvig, Peter (4th ed.). Boston: Pearson. ISBN 978-0134610993. OCLC 1021874142. 2. Delling, D.; Sanders, P.; Schultes, D.; Wagner, D. (2009). "Engineering Route Planning Algorithms". Algorithmics of Large and Complex Networks: Design, Analysis, and Simulation. Lecture Notes in Computer Science. Vol. 5515. Springer. pp. 117–139. doi:10.1007/978-3-642-02094-0_7. ISBN 978-3-642-02093-3. 3. Zeng, W.; Church, R. L. (2009). "Finding shortest paths on real road networks: the case for A*". International Journal of Geographical Information Science. 23 (4): 531–543. doi:10.1080/13658810801949850. S2CID 14833639. 4. Hart, P. E.; Nilsson, N.J.; Raphael, B. (1968). "A Formal Basis for the Heuristic Determination of Minimum Cost Paths". IEEE Transactions on Systems Science and Cybernetics. 4 (2): 100–7. doi:10.1109/TSSC.1968.300136. 5. Doran, J. E.; Michie, D. (1966-09-20). "Experiments with the Graph Traverser program". Proc. R. Soc. Lond. A. 294 (1437): 235–259. Bibcode:1966RSPSA.294..235D. doi:10.1098/rspa.1966.0205. S2CID 21698093. 6. Nilsson, Nils J. (2009-10-30). The Quest for Artificial Intelligence (PDF). Cambridge: Cambridge University Press. ISBN 9780521122931. 7. Edelkamp, Stefan; Jabbar, Shahid; Lluch-Lafuente, Alberto (2005). "Cost-Algebraic Heuristic Search" (PDF). Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI): 1362–7. ISBN 978-1-57735-236-5. 8. Hart, Peter E.; Nilsson, Nils J.; Raphael, Bertram (1972-12-01). "Correction to 'A Formal Basis for the Heuristic Determination of Minimum Cost Paths'" (PDF). ACM SIGART Bulletin (37): 28–29. doi:10.1145/1056777.1056779. S2CID 6386648. 9. Dechter, Rina; Judea Pearl (1985). "Generalized best-first search strategies and the optimality of A*". Journal of the ACM. 
32 (3): 505–536. doi:10.1145/3828.3830. S2CID 2092415. 10. De Smith, Michael John; Goodchild, Michael F.; Longley, Paul (2007), Geospatial Analysis: A Comprehensive Guide to Principles, Techniques and Software Tools, Troubadour Publishing Ltd, p. 344, ISBN 9781905886609. 11. Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 214, ISBN 9781430232377. 12. Martelli, Alberto (1977). "On the Complexity of Admissible Search Algorithms". Artificial Intelligence. 8 (1): 1–13. doi:10.1016/0004-3702(77)90002-9. 13. Felner, Ariel; Uzi Zahavi (2011). "Inconsistent heuristics in theory and practice". Artificial Intelligence. 175 (9–10): 1570–1603. doi:10.1016/j.artint.2011.02.001. 14. Zhang, Zhifu; N. R. Sturtevant (2009). Using Inconsistent Heuristics on A* Search. Twenty-First International Joint Conference on Artificial Intelligence. pp. 634–639. 15. Pohl, Ira (1970). "First results on the effect of error in heuristic search". Machine Intelligence 5. Edinburgh University Press: 219–236. ISBN 978-0-85224-176-9. OCLC 1067280266. 16. Pearl, Judea (1984). Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley. ISBN 978-0-201-05594-8. 17. Pohl, Ira (August 1973). "The avoidance of (relative) catastrophe, heuristic competence, genuine dynamic weighting and computational issues in heuristic problem solving" (PDF). Proceedings of the Third International Joint Conference on Artificial Intelligence (IJCAI-73). Vol. 3. California, USA. pp. 11–17. 18. Köll, Andreas; Hermann Kaindl (August 1992). "A new approach to dynamic weighting". Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI-92). Vienna, Austria: Wiley. pp. 16–17. ISBN 978-0-471-93608-4. 19. Pearl, Judea; Jin H. Kim (1982). "Studies in semi-admissible heuristics". IEEE Transactions on Pattern Analysis and Machine Intelligence. 4 (4): 392–399. doi:10.1109/TPAMI.1982.4767270. PMID 21869053. S2CID 3176931. 20. 
Ghallab, Malik; Dennis Allard (August 1983). "Aε – an efficient near admissible heuristic search algorithm" (PDF). Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83). Vol. 2. Karlsruhe, Germany. pp. 789–791. Archived from the original (PDF) on 2014-08-06. 21. Reese, Bjørn (1999). AlphA*: An ε-admissible heuristic search algorithm (Report). Institute for Production Technology, University of Southern Denmark. Archived from the original on 2016-01-31. Retrieved 2014-11-05. 22. Russell, Stuart; Norvig, Peter (2003) [1995]. Artificial Intelligence: A Modern Approach (2nd ed.). Prentice Hall. pp. 97–104. ISBN 978-0137903955. 23. Russell, Stuart; Norvig, Peter (2009) [1995]. Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall. p. 103. ISBN 978-0-13-604259-4. 24. Klein, Dan; Manning, Christopher D. (2003). "A* parsing: fast exact Viterbi parse selection" (PDF). Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. pp. 119–126. doi:10.3115/1073445.1073461. 25. Kagan E.; Ben-Gal I. (2014). "A Group-Testing Algorithm with Online Informational Learning" (PDF). IIE Transactions. 46 (2): 164–184. doi:10.1080/0740817X.2013.803639. S2CID 18588494. Archived from the original (PDF) on 2016-11-05. Retrieved 2016-02-12. 26. Ferguson, Dave; Likhachev, Maxim; Stentz, Anthony (2005). "A Guide to Heuristic-based Path Planning" (PDF). Proceedings of the international workshop on planning under uncertainty for autonomous systems, international conference on automated planning and scheduling (ICAPS). pp. 9–18. 27. Nau, Dana S.; Kumar, Vipin; Kanal, Laveen (1984). "General branch and bound, and its relation to A∗ and AO∗" (PDF). Artificial Intelligence. 23 (1): 29–58. doi:10.1016/0004-3702(84)90004-3. 28. "Variants of A*". theory.stanford.edu. Retrieved 2023-06-09. 29. Hansen, Eric A.; Zhou, Rong (2007). "Anytime Heuristic Search". 
Journal of Artificial Intelligence Research. 28: 267–297. doi:10.1613/jair.2096. S2CID 9832874. 30. Fareh, Raouf; Baziyad, Mohammed; Rahman, Mohammad H.; Rabie, Tamer; Bettayeb, Maamar (2019-05-14). "Investigating Reduced Path Planning Strategy for Differential Wheeled Mobile Robot". Robotica. 38 (2): 235–255. doi:10.1017/S0263574719000572. ISSN 0263-5747. S2CID 181849209. 31. Pijls, Wim; Post, Henk. Yet another bidirectional algorithm for shortest paths (PDF) (Technical report). Econometric Institute, Erasmus University Rotterdam. EI 2009-10. 32. Goldberg, Andrew V.; Harrelson, Chris; Kaplan, Haim; Werneck, Renato F. "Efficient Point-to-Point Shortest Path Algorithms" (PDF). Princeton University. Archived (PDF) from the original on 18 May 2022. Further reading • Nilsson, N. J. (1980). Principles of Artificial Intelligence. Palo Alto, California: Tioga Publishing Company. ISBN 978-0-935382-01-3. External links • Variation on A* called Hierarchical Path-Finding A* (HPA*) • Brian Grinstead. "A* Search Algorithm in JavaScript (Updated)". Archived from the original on 15 February 2020. Retrieved 8 February 2021.
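The relation noted above — that Dijkstra's algorithm is the special case of A* with $h(n)=0$ — can be illustrated with a minimal sketch. The graph, node names, and heuristic below are invented for illustration; this is not a production implementation:

```python
import heapq

def a_star(graph, start, goal, h):
    """A* over a dict-of-dicts graph {node: {neighbor: edge_cost}}.
    Passing h(n) == 0 for every node makes this exactly Dijkstra's algorithm,
    since the priority f = g + h then reduces to the path cost g alone."""
    open_heap = [(h(start), 0, start)]            # entries are (f = g + h, g, node)
    best_g = {start: 0}                           # cheapest known cost to each node
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                              # stale heap entry, skip it
        for nbr, cost in graph[node].items():
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(open_heap, (ng + h(nbr), ng, nbr))
    return None                                   # goal unreachable

# Hypothetical 4-node graph: the cheapest a -> d path is a -> b -> c -> d, cost 3.
graph = {"a": {"b": 1, "c": 4}, "b": {"c": 1, "d": 5},
         "c": {"d": 1}, "d": {}}
zero = lambda n: 0                                # h == 0: the Dijkstra special case
assert a_star(graph, "a", "d", zero) == 3
```

Any admissible heuristic (one that never overestimates the remaining cost) returns the same optimal cost; the heuristic only changes how many nodes are expanded along the way.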
TC0 TC0 is a complexity class used in circuit complexity. It is the first class in the hierarchy of TC classes. TC0 contains all languages that are decided by Boolean circuits with constant depth and polynomial size, containing only unbounded fan-in AND gates, OR gates, NOT gates, and majority gates. Equivalently, threshold gates can be used instead of majority gates. TC0 contains several important problems, such as sorting n n-bit numbers, multiplying two n-bit numbers, integer division,[1] and recognizing the Dyck language with two types of parentheses. Complexity class relations We can relate TC0 to other circuit classes, including AC0 and NC1; Vollmer 1999 p. 126 states: ${\mathsf {AC}}^{0}\subsetneq {\mathsf {AC}}^{0}[p]\subsetneq {\mathsf {TC}}^{0}\subseteq {\mathsf {NC}}^{1}.$ Vollmer states that the question of whether the last inclusion above is strict is "one of the main open problems in circuit complexity" (ibid.). We also have that uniform ${\mathsf {TC}}^{0}\subsetneq {\mathsf {PP}}$. (Allender 1996, as cited in Burtschick 1999). Basis for uniform TC0 The functional version of the uniform ${\mbox{TC}}^{0}$ coincides with the closure with respect to composition of the projections and one of the following function sets $\{n+m,n\,{\stackrel {.}{-}}\,m,n\wedge m,\lfloor n/m\rfloor ,2^{\lfloor \log _{2}n\rfloor ^{2}}\}$, $\{n+m,n\,{\stackrel {.}{-}}\,m,n\wedge m,\lfloor n/m\rfloor ,n^{\lfloor \log _{2}m\rfloor }\}$.[2] Here $n\,{\stackrel {.}{-}}\,m=\max(0,n-m)$, and $n\wedge m$ is a bitwise AND of $n$ and $m$. By the functional version one means the set of all functions $f(x_{1},\ldots ,x_{n})$ over non-negative integers that are bounded by functions of FP and for which $(y{\text{-th bit of }}f(x_{1},\ldots ,x_{n}))$ is in uniform ${\mbox{TC}}^{0}$. References 1. Hesse, William; Allender, Eric; Mix Barrington, David (2002). "Uniform constant-depth threshold circuits for division and iterated multiplication". Journal of Computer and System Sciences. 65 (4): 695–716.
doi:10.1016/S0022-0000(02)00025-9. 2. Volkov, Sergey. (2016). "Finite Bases with Respect to the Superposition in Classes of Elementary Recursive Functions, dissertation". arXiv:1611.04843 [cs.CC]. • Allender, E. (1996). "A note on uniform circuit lower bounds for the counting hierarchy". Proceedings 2nd International Computing and Combinatorics Conference (COCOON). Springer Lecture Notes in Computer Science. Vol. 1090. pp. 127–135. • Clote, Peter; Kranakis, Evangelos (2002). Boolean functions and computation models. Texts in Theoretical Computer Science. An EATCS Series. Berlin: Springer-Verlag. ISBN 3-540-59436-1. Zbl 1016.94046. • Vollmer, Heribert (1999). Introduction to Circuit Complexity. A uniform approach. Texts in Theoretical Computer Science. Berlin: Springer-Verlag. ISBN 3-540-64310-9. Zbl 0931.68055. • Burtschick, Hans-Jörg; Vollmer, Heribert (1999). "Lindström Quantifiers and Leaf Language Definability". ECCC TR96-005. External links • Complexity Zoo: TC0
TC (complexity) In theoretical computer science, and specifically computational complexity theory and circuit complexity, TC is a complexity class of decision problems that can be recognized by threshold circuits, which are Boolean circuits with AND, OR, and Majority gates. For each fixed i, the complexity class TCi consists of all languages that can be recognized by a family of threshold circuits of depth $O(\log ^{i}n)$, polynomial size, and unbounded fan-in. The class TC is defined via ${\mbox{TC}}=\bigcup _{i\geq 0}{\mbox{TC}}^{i}.$ Relation to NC and AC The relationship between the TC, NC and the AC hierarchy can be summarized as follows: ${\mbox{NC}}^{i}\subseteq {\mbox{AC}}^{i}\subseteq {\mbox{TC}}^{i}\subseteq {\mbox{NC}}^{i+1}.$ In particular, we know that ${\mbox{NC}}^{0}\subsetneq {\mbox{AC}}^{0}\subsetneq {\mbox{TC}}^{0}\subseteq {\mbox{NC}}^{1}.$ The first strict containment follows from the fact that NC0 cannot compute any function that depends on all the input bits. Thus choosing a problem that is trivially in AC0 and depends on all bits separates the two classes. (For example, consider the OR function.) The strict containment AC0 ⊊ TC0 follows because parity and majority (which are both in TC0) were shown to be not in AC0.[1][2] As an immediate consequence of the above containments, we have that NC = AC = TC. References 1. Furst, Merrick; Saxe, James B.; Sipser, Michael (1984), "Parity, circuits, and the polynomial-time hierarchy", Mathematical Systems Theory, 17 (1): 13–27, doi:10.1007/BF01744431, MR 0738749. 2. Håstad, Johan (1989), "Almost Optimal Lower Bounds for Small Depth Circuits", in Micali, Silvio (ed.), Randomness and Computation (PDF), Advances in Computing Research, vol. 5, JAI Press, pp. 6–20, ISBN 0-89232-896-7, archived from the original (PDF) on 2012-02-22 • Vollmer, Heribert (1999). Introduction to Circuit Complexity. Berlin: Springer. ISBN 3-540-64310-9. 
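The separation AC0 ⊊ TC0 cited above rests on parity being computable by constant-depth threshold circuits but not by AC0 circuits. A small illustrative simulation of the standard TC0 construction follows (plain Python functions stand in for gates; this is not a circuit formalism): EXACT_k is built from two threshold gates, and parity is an OR over the odd EXACT_k values, giving constant depth and polynomially many gates.

```python
from itertools import product

def THR(k, bits):
    """Unbounded fan-in threshold gate: outputs 1 iff at least k inputs are 1."""
    return int(sum(bits) >= k)

def MAJ(bits):
    """Majority gate: a threshold gate with k = floor(n/2) + 1."""
    return THR(len(bits) // 2 + 1, bits)

def PARITY_tc0(bits):
    """Constant-depth threshold circuit for parity:
    EXACT_k(x) = THR_k(x) AND NOT THR_{k+1}(x), then OR over odd k."""
    n = len(bits)
    exact = [THR(k, bits) & (1 - THR(k + 1, bits)) for k in range(n + 1)]
    return int(any(exact[k] for k in range(1, n + 1, 2)))

# Exhaustive check against the true parity on all 5-bit inputs.
for bits in product([0, 1], repeat=5):
    assert PARITY_tc0(bits) == sum(bits) % 2
```

The circuit has depth 3 (threshold layer, AND/NOT layer, OR layer) regardless of n, which is exactly the sense in which parity is "easy" for TC0 while provably hard for AC0.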
TFNP In computational complexity theory, the complexity class TFNP is the class of total function problems which can be solved in nondeterministic polynomial time. That is, it is the class of function problems that are guaranteed to have an answer, and this answer can be checked in polynomial time, or equivalently it is the subset of FNP where a solution is guaranteed to exist. The abbreviation TFNP stands for "Total Function Nondeterministic Polynomial". TFNP contains many natural problems that are of interest to computer scientists. These problems include integer factorization, finding a Nash Equilibrium of a game, and searching for local optima. TFNP is widely conjectured to contain problems that are computationally intractable, and several such problems have been shown to be hard under cryptographic assumptions.[1][2] However, there are no known unconditional intractability results or results showing NP-hardness of TFNP problems. TFNP is not believed to have any complete problems.[3] Formal Definition The class TFNP is formally defined as follows. A binary relation P(x,y) is in TFNP if and only if there is a deterministic polynomial time algorithm that can determine whether P(x,y) holds given both x and y, and for every x, there exists a y which is at most polynomially longer than x such that P(x,y) holds. It was first defined by Megiddo and Papadimitriou in 1989,[4] although TFNP problems and subclasses of TFNP had been defined and studied earlier.[5] Connections to Other Complexity Classes F(NP ∩ coNP) The complexity class ${\mathsf {F}}({\mathsf {NP}}\cap {\mathsf {coNP}})$ can be defined in two different ways, and those ways are not known to be equivalent. One way applies F to the machine model for ${\mathsf {NP}}\cap {\mathsf {coNP}}$. 
It is known that with this definition, ${\mathsf {F}}({\mathsf {NP}}\cap {\mathsf {coNP}})$ coincides with TFNP.[4] To see this, first notice that the inclusion ${\mathsf {TFNP}}\subseteq {\mathsf {F}}({\mathsf {NP}}\cap {\mathsf {coNP}})$ follows easily from the definitions of the classes. All "yes" answers to problems in TFNP can be easily verified by definition, and since problems in TFNP are total, there are no "no" answers, so it is vacuously true that "no" answers can be easily verified. For the reverse inclusion, let R be a binary relation in ${\mathsf {F}}({\mathsf {NP}}\cap {\mathsf {coNP}})$. Decompose R into $R_{1}\cup R_{2}$ such that $(x,0y)\in R_{1}$ precisely when $(x,y)\in R$ and y is a "yes" answer, and let $R_{2}$ be the set of pairs $(x,1y)$ such that $(x,y)\in R$ and y is a "no" answer. Then the binary relation $R_{1}\cup R_{2}$ is in TFNP. The other definition uses that ${\mathsf {NP}}\cap {\mathsf {coNP}}$ is known to be a well-behaved class of decision problems, and applies F to that class. With this definition, if ${\mathsf {NP}}\cap {\mathsf {coNP}}={\mathsf {P}}$ then ${\mathsf {F}}({\mathsf {NP}}\cap {\mathsf {coNP}})={\mathsf {FP}}$. Connection to NP NP is one of the most widely studied complexity classes. The conjecture that there are intractable problems in NP is widely accepted and often used as the most basic hardness assumption. Therefore, it is only natural to ask how TFNP is related to NP. It is not difficult to see that solutions to problems in NP can imply solutions to problems in TFNP. However, there are no TFNP problems which are known to be NP-hard. Intuition for this fact comes from the fact that problems in TFNP are total. For a problem to be NP-hard, there must exist a reduction from some NP-complete problem to the problem of interest. A typical reduction from problem A to problem B is performed by creating and analyzing a map that sends "yes" instances of A to "yes" instances of B and "no" instances of A to "no" instances of B.
However, TFNP problems are total, so there are no "no" instances for this type of reduction, causing common techniques to be difficult to apply. Beyond this rough intuition, there are several concrete results that suggest that it might be difficult or even impossible to prove NP-hardness for TFNP problems. For example, if any TFNP problem is NP-complete, then NP = coNP,[3] which is generally conjectured to be false, but is still a major open problem in complexity theory. This lack of connections with NP is a major motivation behind the study of TFNP as its own independent class. Notable Subclasses The structure of TFNP is often studied through the study of its subclasses. These subclasses are defined by the mathematical theorem by which solutions to the problems are guaranteed. One appeal of studying subclasses of TFNP is that although TFNP is believed not to have any complete problems, these subclasses are defined by a certain complete problem, making them easier to reason about. PLS Main article: PLS (complexity) PLS (standing for "Polynomial Local Search") is a class of problems designed to model the process of searching for a local optimum of a function. In particular, it is the class of total function problems that are polynomial-time reducible to the following problem: Given input circuits S and C each with n input and output bits, find x such that $C(S(x))\leq C(x)$. It contains the class CLS. PPA Main article: PPA (complexity) PPA (standing for "Polynomial time Parity Argument") is the class of problems whose solution is guaranteed by the handshaking lemma: any undirected graph with an odd degree vertex must have another odd degree vertex. It contains the subclass PPAD. PPP Main article: PPP (complexity) PPP (standing for "Polynomial time Pigeonhole Principle") is the class of problems whose solution is guaranteed by the Pigeonhole principle.
More precisely, it is the class of problems that can be reduced in polynomial time to the Pigeon problem, defined as follows: Given a circuit C with n input and output bits, find x such that $C(x)=0$, or x ≠ y such that $C(x)=C(y)$. PPP contains the classes PPAD and PWPP. Notable problems in this class include the short integer solution problem.[6] PPAD Main article: PPAD PPAD (standing for "Polynomial time Parity Argument, Directed") is a restriction of PPA to problems whose solutions are guaranteed by a directed version of the handshake lemma. It is often defined as the set of problems that are polynomial-time reducible to End-of-a-Line: Given circuits S and P with n input and output bits such that $S(0)\neq 0$ and $P(0)=0$, find x such that $P(S(x))\neq x$ or $x\neq 0$ such that $S(P(x))\neq x$. PPAD is in the intersection of PPA and PPP, and it contains CLS. Here, the circuit S in the definition sends each point of the line to its successor, or to itself if the point is a sink. Likewise P sends each point of the line to its predecessor, or to itself if the point is a source. Points outside of all lines are identified by being fixed under both P and S (in other words, any isolated points are removed from the graph). Then the condition $P(S(x))\neq x$ defines the end of a line, which is either a sink or is such that S(x) = S(y) for some other point y; similarly the condition $S(P(x))\neq x$ defines the beginning of a line (since we assume that 0 is a source, we require the solution be nonzero in this case). CLS CLS (standing for "Continuous Local Search") is a class of search problems designed to model the process of finding a local optimum of a continuous function over a continuous domain.
It is defined as the class of problems that are polynomial-time reducible to the Continuous Localpoint problem: Given two Lipschitz continuous functions S and C and parameters ε and λ, find an ε-approximate fixed point of S with respect to C or two points that violate the λ-continuity of C or S. This class was first defined by Daskalakis and Papadimitriou in 2011.[7] It is contained in the intersection of PPAD and PLS, and in 2020 it was proven that ${\mathsf {CLS}}={\mathsf {PPAD}}\cap {\mathsf {PLS}}$.[8][9] It was designed to be a class of relatively simple optimization problems that still contains many interesting problems that are believed to be hard. Complete problems for CLS include, for example, finding an ε-KKT point,[10] finding an ε-Banach fixed point,[11] and the Meta-Metric-Contraction problem.[12] EOPL and UEOPL EOPL and UEOPL (which stand for "end of potential line" and "unique end of potential line") were introduced in 2020.[10] EOPL captures search problems that can be solved by local search, i.e. it is possible to jump from one candidate solution to the next one in polynomial time. A problem in EOPL can be interpreted as an exponentially large, directed, acyclic graph where each node is a candidate solution and has a cost (also called potential) which increases along the edges. The in- and out-degree of each node is at most one, which means that the nodes form a collection of exponentially long lines. The end of each line is the node with highest cost on that line.
EOPL contains all problems that can be reduced in polynomial time to the search problem End-of-Potential-Line: Given input circuits S, P each with n input and output bits, and C with n input and m output bits, $S(0)\neq 0$, $P(0)=0$, and $C(0)=0$, find x such that • x is the end of the line $P(S(x))\neq x$, • x is the start of a second line $S(P(x))\neq x\neq 0$, or • x violates the increasing cost $P(S(x))=x$, $x\neq S(x)$ and $C(S(x))-C(x)\leq 0$ Here, S sends each vertex of the graph to its successor, or to itself if the vertex is a sink. Likewise P sends each vertex of the graph to its predecessor, or to itself if the vertex is a source. Points outside the graph are identified by being fixed under both P and S. Then the first and second solution types are respectively the upper and lower ends of the line, and the third solution type is a violation of the condition that the potential increases along the edges. If this last condition is violated, the endpoint may not maximize the potential on the line. Therefore the problem is total: either a solution is found or a short proof is found that the conditions are not satisfied. UEOPL is defined very similarly, but it is promised that there is only one line. Hence finding the second type of solution above would violate the promise ensuring that the first type of solution is unique. A fourth solution type is added to provide another way of detecting the presence of a second line: • two points x, y such that $x\neq y,x\neq S(x),y\neq S(y)$ and either $C(x)=C(y)$ or $C(x)<C(y)<C(S(x))$. A solution of this type either indicates that x and y are on different lines, or indicates a violation of the condition that values on the same line are strictly increasing. The advantage of including this condition is that it may be easier to find x and y as required than to find the start of their lines, or an explicit violation of the increasing cost condition.
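The three End-of-Potential-Line solution conditions can be checked mechanically. In the sketch below, plain Python functions stand in for the circuits S, P, and C; since real instances encode a graph of size 2^n with polynomial-size circuits, the exhaustive loop is purely illustrative and not an efficient algorithm:

```python
def eopl_solution(S, P, C, n_bits):
    """Brute-force search for an End-of-Potential-Line solution:
    an end of a line, the start of a second line, or a cost violation."""
    for x in range(2 ** n_bits):
        if P(S(x)) != x:                        # x is the end of a line
            return ("end", x)
        if S(P(x)) != x and x != 0:             # x starts a second line
            return ("start", x)
        if S(x) != x and C(S(x)) - C(x) <= 0:   # cost fails to increase
            return ("cost", x)
    return None

# Toy instance satisfying the promises S(0) != 0, P(0) = 0, C(0) = 0:
# a single line 0 -> 1 -> 2 -> 3 with strictly increasing potential.
S = lambda x: min(x + 1, 3)   # successor circuit (3 is the sink)
P = lambda x: max(x - 1, 0)   # predecessor circuit (0 is the source)
C = lambda x: x               # potential, strictly increasing along the line

assert eopl_solution(S, P, C, 2) == ("end", 3)
```

On this instance the only solution is the sink x = 3, where S(3) = 3 and hence P(S(3)) = 2 ≠ 3; every interior vertex satisfies all three non-solution conditions, matching the totality argument in the text.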
UEOPL contains, among others, the problem of solving the P-matrix linear complementarity problem,[10] finding the sink of a unique sink orientation in cubes,[10] solving a simple stochastic game,[10] and the α-Ham Sandwich problem.[13] Complete problems of UEOPL include Unique-End-of-Potential-Line, variants of it with costs increasing exactly by 1 or without the P circuit, and One-Permutation-Discrete-Contraction.[10] EOPL captures search problems like the ones in UEOPL, with the relaxation that multiple lines are allowed and one searches for the end of any line. There are currently no problems known that are in EOPL but not in UEOPL. EOPL is a subclass of CLS; it is unknown whether they are equal or not. UEOPL is trivially contained in EOPL. FP Main article: FP (complexity) FP (standing for "Function Polynomial") is the class of function problems that can be solved in deterministic polynomial time. ${\mathsf {FP}}\subseteq {\mathsf {CLS}}$, and it is conjectured that this inclusion is strict. This class represents the class of function problems that are believed to be computationally tractable (without randomization). If TFNP = FP, then ${\mathsf {P}}={\mathsf {NP}}\cap {\mathsf {coNP}}$, which should be intuitive given the fact that ${\mathsf {TFNP}}={\mathsf {F}}({\mathsf {NP}}\cap {\mathsf {coNP}})$. However, it is generally conjectured that ${\mathsf {P}}\neq {\mathsf {NP}}\cap {\mathsf {coNP}}$, and so TFNP ≠ FP. References 1. Garg, Pandey, and Srinivasan. Revisiting Cryptographic Hardness of Finding a Nash Equilibrium. CRYPTO 2016. 2. Hubáček and Yogev. Hardness of Continuous Local Search: Query Complexity and Cryptographic Lower Bounds. SODA 2016. 3. Goldberg and Papadimitriou. Towards a Unified Complexity Theory of Total Functions. 2018. 4. Megiddo and Papadimitriou. A Note on Total Functions, Existence Theorems and Computational Complexity. Theoretical Computer Science 1989. 5. Johnson, Papadimitriou, and Yannakakis.
How Easy is Local Search?. Journal of Computer and System Sciences, 1988. 6. Sotiraki, Zampetakis, and Zirdelis. PPP-Completeness with Connections to Cryptography. FOCS 2018. 7. Daskalakis and Papadimitriou. Continuous Local Search. SODA 2011. 8. Fearnley, John; Goldberg, Paul W.; Hollender, Alexandros; Savani, Rahul (11 November 2020). "The Complexity of Gradient Descent: CLS = PPAD ∩ PLS". arXiv:2011.01929 [cs.CC]. 9. Thieme, Nick (2021-08-17). "Computer Scientists Discover Limits of Major Research Algorithm". Quanta Magazine. Retrieved 2021-08-17. 10. Fearnley, John; Gordon, Spencer; Mehta, Ruta; Savani, Rahul (December 2020). "Unique end of potential line". Journal of Computer and System Sciences. 114: 1–35. doi:10.1016/j.jcss.2020.05.007. S2CID 220277586. 11. Daskalakis, Constantinos; Tzamos, Christos; Zampetakis, Manolis (13 February 2018). "A Converse to Banach's Fixed Point Theorem and its CLS Completeness". arXiv:1702.07339 [cs.CC]. 12. Fearnley, John; Gordon, Spencer; Mehta, Ruta; Savani, Rahul (7 April 2017). "CLS: New Problems and Completeness". arXiv:1702.06017 [cs.CC]. 13. Chiu, Man-Kwun; Choudhary, Aruni; Mulzer, Wolfgang (20 March 2020). "Computational Complexity of the α-Ham-Sandwich Problem". arXiv:2003.09266 [cs.CG].
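The pigeonhole-based totality of the Pigeon problem described above can be made concrete with a small sketch. The function C below is a hypothetical stand-in for the circuit, and the exhaustive loop is exponential in n, so this only illustrates why a solution must exist, not how to find one efficiently:

```python
def pigeon_solution(C, n_bits):
    """Brute-force search for a Pigeon solution: x with C(x) == 0, or a
    collision x != y with C(x) == C(y).  Totality follows from the
    pigeonhole principle: if no input maps to 0, then 2^n inputs land in
    at most 2^n - 1 nonzero values, forcing a collision."""
    seen = {}
    for x in range(2 ** n_bits):
        v = C(x)
        if v == 0:
            return ("zero", x)
        if v in seen:
            return ("collision", seen[v], x)
        seen[v] = x
    return None  # unreachable when C maps n-bit inputs to n-bit outputs

# Toy 3-bit "circuit" that never outputs 0, so a collision must exist:
# inputs 0 and 7 both map to 1.
C = lambda x: (x % 7) + 1
assert pigeon_solution(C, 3) == ("collision", 0, 7)
```

Either return value is a certificate that can be verified in polynomial time, which is exactly what membership in TFNP requires.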
The Analyst The Analyst (subtitled A Discourse Addressed to an Infidel Mathematician: Wherein It Is Examined Whether the Object, Principles, and Inferences of the Modern Analysis Are More Distinctly Conceived, or More Evidently Deduced, Than Religious Mysteries and Points of Faith) is a book by George Berkeley. It was first published in 1734, by J. Tonson (London) and then by S. Fuller (Dublin). The "infidel mathematician" is believed to have been Edmond Halley, though others have speculated Sir Isaac Newton was intended.[1] Background and purpose From his earliest days as a writer, Berkeley had taken up his satirical pen to attack what were then called 'free-thinkers' (secularists, skeptics, agnostics, atheists, etc.—in short, anyone who doubted the truths of received Christian religion or called for a diminution of religion in public life).[2] In 1732, in the latest installment in this effort, Berkeley published his Alciphron, a series of dialogues directed at different types of 'free-thinkers'. One of the archetypes Berkeley addressed was the secular scientist, who discarded Christian mysteries as unnecessary superstitions, and declared his confidence in the certainty of human reason and science. Against his arguments, Berkeley mounted a subtle defense of the validity and usefulness of these elements of the Christian faith. Alciphron was widely read and caused a bit of a stir. But it was an offhand comment mocking Berkeley's arguments by the 'free-thinking' royal astronomer Sir Edmond Halley that prompted Berkeley to pick up his pen again and try a new tack. The result was The Analyst, conceived as a satire attacking the foundations of mathematics with the same vigor and style as 'free-thinkers' routinely attacked religious truths. Berkeley sought to take mathematics apart, claimed to uncover numerous gaps in proof, attacked the use of infinitesimals, the diagonal of the unit square, the very existence of numbers, etc.
The general point was not so much to mock mathematics or mathematicians, but rather to show that mathematicians, like Christians, relied upon incomprehensible 'mysteries' in the foundations of their reasoning. Moreover, the existence of these 'superstitions' was not fatal to mathematical reasoning, indeed it was an aid. So too with the Christian faithful and their 'mysteries'. Berkeley concluded that the certainty of mathematics is no greater than the certainty of religion. Content The Analyst was a direct attack on the foundations of calculus, specifically on Newton's notion of fluxions and on Leibniz's notion of infinitesimal change. In section 16, Berkeley criticizes ...the fallacious way of proceeding to a certain Point on the Supposition of an Increment, and then at once shifting your Supposition to that of no Increment . . . Since if this second Supposition had been made before the common Division by o, all had vanished at once, and you must have got nothing by your Supposition. Whereas by this Artifice of first dividing, and then changing your Supposition, you retain 1 and $nx^{n-1}$. But, notwithstanding all this address to cover it, the fallacy is still the same.[3] Its most frequently quoted passage: And what are these Fluxions? The Velocities of evanescent Increments? And what are these same evanescent Increments? They are neither finite Quantities nor Quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?[4] Berkeley did not dispute the results of calculus; he acknowledged the results were true. The thrust of his criticism was that calculus was not more logically rigorous than religion. He instead questioned whether mathematicians "submit to Authority, take things upon Trust"[5] just as followers of religious tenets did. According to Burton, Berkeley introduced an ingenious theory of compensating errors that were meant to explain the correctness of the results of calculus.
Berkeley contended that the practitioners of calculus introduced several errors which cancelled, leaving the correct answer. In his own words, "by virtue of a two fold mistake you arrive, though not at science, yet truth."[6] Analysis The idea that Newton was the intended recipient of the discourse is put into doubt by a passage that appears toward the end of the book: "Query 58: Whether it be really an effect of Thinking, that the same Men admire the great author for his Fluxions, and deride him for his Religion?" [7] Here Berkeley ridicules those who celebrate Newton (the inventor of "fluxions", roughly equivalent to the differentials of later versions of the differential calculus) as a genius while deriding his well-known religiosity. Since Berkeley is here explicitly calling attention to Newton's religious faith, that seems to indicate he did not mean his readers to identify the "infidel (i.e., lacking faith) mathematician" with Newton. Mathematics historian Judith Grabiner comments, "Berkeley's criticisms of the rigor of the calculus were witty, unkind, and — with respect to the mathematical practices he was criticizing — essentially correct".[8] While his critiques of the mathematical practices were sound, his essay has been criticized on logical and philosophical grounds. For example, David Sherry argues that Berkeley's criticism of infinitesimal calculus consists of a logical criticism and a metaphysical criticism. The logical criticism is that of a fallacia suppositionis, which means gaining points in an argument by means of one assumption and, while keeping those points, concluding the argument with a contradictory assumption. 
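In modern notation, the "shifting supposition" that this logical criticism targets can be written out for $y=x^{n}$. One first supposes a nonzero increment $o$ and divides by it:

```latex
\frac{(x+o)^{n}-x^{n}}{o}
  = \frac{nx^{n-1}o + \tfrac{n(n-1)}{2}x^{n-2}o^{2} + \cdots}{o}
  = nx^{n-1} + \tfrac{n(n-1)}{2}x^{n-2}o + \cdots
  \qquad (o \neq 0)
```

Then, shifting the supposition to $o=0$, every term after the first vanishes, leaving the fluxion $nx^{n-1}$ — even though the division in the first step was only legitimate under the contrary supposition $o\neq 0$.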
The metaphysical criticism is a challenge to the existence itself of concepts such as fluxions, moments, and infinitesimals, and is rooted in Berkeley's empiricist philosophy which tolerates no expression without a referent.[9] Andersen (2011) showed that Berkeley's doctrine of the compensation of errors contains a logical circularity. Namely, Berkeley relies upon Apollonius's determination of the tangent of the parabola in Berkeley's own determination of the derivative of the quadratic function. Influence Two years after this publication, Thomas Bayes published anonymously "An Introduction to the Doctrine of Fluxions, and a Defence of the Mathematicians Against the Objections of the Author of the Analyst" (1736), in which he defended the logical foundation of Isaac Newton's calculus against the criticism outlined in The Analyst. Colin Maclaurin's two-volume Treatise of Fluxions published in 1742 also began as a response to Berkeley's attacks, intended to show that Newton's calculus was rigorous by reducing it to the methods of Greek geometry.[8] Despite these attempts, calculus continued to be developed using non-rigorous methods until around 1830 when Augustin Cauchy, and later Bernhard Riemann and Karl Weierstrass, redefined the derivative and integral using a rigorous definition of the concept of limit. The idea of using limits as a foundation for calculus had been suggested by d'Alembert, but d'Alembert's definition was not rigorous by modern standards.[10] The concept of limits had already appeared in the work of Newton,[11] but was not stated with sufficient clarity to hold up to the criticism of Berkeley.[12] In 1966, Abraham Robinson introduced Non-standard Analysis, which provided a rigorous foundation for working with infinitely small quantities. This provided another way of putting calculus on a mathematically rigorous foundation, the way it was done before the (ε, δ)-definition of limit had been fully developed.
Ghosts of departed quantities Towards the end of The Analyst, Berkeley addresses possible justifications for the foundations of calculus that mathematicians may put forward. In response to the idea that fluxions could be defined using ultimate ratios of vanishing quantities,[13] Berkeley wrote: It must, indeed, be acknowledged, that [Newton] used Fluxions, like the Scaffold of a building, as things to be laid aside or got rid of, as soon as finite Lines were found proportional to them. But then these finite Exponents are found by the help of Fluxions. Whatever therefore is got by such Exponents and Proportions is to be ascribed to Fluxions: which must therefore be previously understood. And what are these Fluxions? The Velocities of evanescent Increments? And what are these same evanescent Increments? They are neither finite Quantities nor Quantities infinitely small, nor yet nothing. May we not call them the Ghosts of departed Quantities?[4] Edwards describes this as the most memorable point of the book.[12] Katz and Sherry argue that the expression was intended to address both infinitesimals and Newton's theory of fluxions.[14] Today the phrase "ghosts of departed quantities" is also used when discussing Berkeley's attacks on other possible foundations of calculus, in particular when discussing infinitesimals,[15] differentials,[16] and adequality.[17] Text and commentary The full text of The Analyst can be read on Wikisource, as well as on David R. Wilkins' website,[18] which includes some commentary and links to responses by Berkeley's contemporaries. The Analyst is also reproduced, with commentary, in recent works: • William Ewald's From Kant to Hilbert: A Source Book in the Foundations of Mathematics.[19] Ewald concludes that Berkeley's objections to the calculus of his day were mostly well taken at the time. • D. M. Jesseph's overview in the 2005 "Landmark Writings in Western Mathematics".[20] References 1.
Burton 1997, 477. 2. Walmsley, Peter (1990-08-31). The Rhetoric of Berkeley's Philosophy. Cambridge University Press. doi:10.1017/cbo9780511519130. ISBN 978-0-521-37413-2. 3. Berkeley, George (1734). The Analyst: a Discourse addressed to an Infidel Mathematician. London. p. 25 – via Wikisource. 4. Berkeley 1734, p. 59. 5. Berkeley 1734, p. 93. 6. Berkeley 1734, p. 34. 7. Berkeley 1734, p. 92. 8. Grabiner 1997. 9. Sherry 1987. 10. Burton 1997. 11. Pourciau 2001. 12. Edwards 1994. 13. Boyer & Merzbach 1991. 14. Katz & Sherry 2012. 15. Arkeryd 2005. 16. Leader 1986. 17. Kleiner & Movshovitz-Hadar 1994. 18. Wilkins, D. R. (2002). "The Analyst". The History of Mathematics. Trinity College, Dublin. 19. Ewald, William, ed. (1996). From Kant to Hilbert: A Source Book in the Foundations of Mathematics. Vol. I. Oxford: Oxford University Press. ISBN 978-0198534709. 20. Jesseph, D. M. (2005). "The analyst". In Grattan-Guinness, Ivor (ed.). Landmark Writings in Western Mathematics 1640–1940. Elsevier. pp. 121–30. ISBN 978-0444508713. Sources • Andersen, Kirsti (2011), "One of Berkeley's arguments on compensating errors in the calculus", Historia Mathematica, 38 (2): 219–231, doi:10.1016/j.hm.2010.07.001 • Arkeryd, Leif (Dec 2005), "Nonstandard Analysis", The American Mathematical Monthly, 112 (10): 926–928, doi:10.2307/30037635, JSTOR 30037635 • Błaszczyk, Piotr; Katz, Mikhail; Sherry, David (2012), "Ten misconceptions from the history of analysis and their debunking", Foundations of Science, 18: 43–74, arXiv:1202.4153, doi:10.1007/s10699-012-9285-8, S2CID 254508527 • Boyer, C; Merzbach, U (1991), A History of Mathematics (2 ed.) • Burton, David (1997), The History of Mathematics: An Introduction, McGraw-Hill • Edwards, C. H. (1994), The Historical Development of the Calculus, Springer • Grabiner, Judith (May 1997), "Was Newton's Calculus a Dead End?
The Continental Influence of Maclaurin's Treatise of Fluxions", The American Mathematical Monthly, 104 (5): 393–410, doi:10.2307/2974733, JSTOR 2974733 • Grabiner, Judith V. (Dec 2004), "Newton, Maclaurin, and the Authority of Mathematics", The American Mathematical Monthly, 111 (10): 841–852, doi:10.2307/4145093, JSTOR 4145093 • Katz, Mikhail; Sherry, David (2012), "Leibniz's Infinitesimals: Their Fictionality, Their Modern Implementations, and Their Foes from Berkeley to Russell and Beyond", Erkenntnis, 78 (3): 571–625, arXiv:1205.0174, doi:10.1007/s10670-012-9370-y, S2CID 254471766 • Kleiner, I.; Movshovitz-Hadar, N. (Dec 1994), "The Role of Paradoxes in the Evolution of Mathematics", The American Mathematical Monthly, 101 (10): 963–974, doi:10.2307/2975163, JSTOR 2975163 • Leader, Solomon (May 1986), "What is a Differential? A New Answer from the Generalized Riemann Integral", The American Mathematical Monthly, 93 (5): 348–356, doi:10.2307/2323591, JSTOR 2323591 • Pourciau, Bruce (2001), "Newton and the notion of limit", Historia Mathematica, 28 (1): 18–30, doi:10.1006/hmat.2000.2301 • Robert, Alain (1988), Nonstandard analysis, New York: Wiley, ISBN 978-0-471-91703-8 • Sherry, D. (1987), "The wake of Berkeley's Analyst: Rigor mathematicae?", Studies in History and Philosophy of Science, 18 (4): 455–480, doi:10.1016/0039-3681(87)90003-3 • Wren, F. L.; Garrett, J. A.
(May 1933), "The Development of the Fundamental Concepts of Infinitesimal Analysis", The American Mathematical Monthly, 40 (5): 269–281, doi:10.2307/2302202, JSTOR 2302202 External links • Works related to The Analyst: a Discourse addressed to an Infidel Mathematician at Wikisource
Wikipedia
TI-15 Explorer TI-15 Explorer is a calculator designed by Texas Instruments, intended for use in classes from grades 3-5. It is the successor to the TI-12 Math Explorer. For younger students, TI recommends the use of the TI-108. For older students, TI recommends the use of the TI-73 Explorer. Features include a 2-line pixel display (as opposed to the 7-segment display of several other calculators), and a quiz-like "problem-solving" mode. It also supports limited scientific capabilities, such as parentheses, fixed decimal, fractions, pi, and exponents. It is recommended by Everyday Mathematics. External links • Key features of the TI-15 Explorer Texas Instruments calculators Z80-based graphing • TI-73 • TI-81 • TI-82 • TI-83x • TI-84x • TI-85 • TI-86 M68k-based graphing • TI-89 • TI-92x and Voyage 200 PLT ARM-based graphing • TI-PLT (canceled) • TI-Nspire–TI-Nspire CAS Other graphing • TI-80 Non-graphing programmable • TI-55 • TI-55 II • TI-55 III • TI-56 • TI-57x • TI-58x • TI-59 • TI-74 • TI-95 (PROCALC) Scientific models • TI SR-50 • TI-30 • TI-32 • TI-34 • TI-35 • TI-36 • TI-54 • TI-68 Financial models • Business Analyst series Other models • TI-108 • TI-7 • TI-12 • TI-15 • TI-1030 • TI-1031 Related • TI-BASIC • Calculator-Based Laboratory • Texas Instruments signing key controversy
TI-35 The Texas Instruments TI-35 was a series of scientific calculators by Texas Instruments. The original TI-35 was notable for being one of Texas Instruments' first uses of CMOS controller chips in their designs, and was at the time distinguished from the lower-end TI-30 line by the addition of some statistics functions. TI-35 (1979) It was built with the same slimline design as the 1978 TI-30, but with a different processor and a slightly changed feature set. It used the TP0324-4N processor (a CMOS variant of the TMS1000 family). The display can handle 8 digits (a 5-digit mantissa with a 2-digit exponent) with 11-digit internal precision. It was manufactured in the USA. 1980 This version used the TP0324-4NL processor, which increased accuracy. 1982 This version used the CD4557, which increased accuracy over the TP processor. Cosmetic updates included a silver shell and new keyboard styling. It was manufactured in the USA. TI-35 (1982) It retained the 1982 styling, but used the CD4557 processor. TI-35 GALAXY (1984) It was a horizontal variant whose functionality was identical to the European 1984 TI-30 GALAXY. The solar version was called the TI-35 GALAXY SOLAR.[1] TI-35 II (1984) It was a replacement for the 1979 TI-35. It used a TP0456A or CD4557 processor. It was originally built in Taiwan, but later in Italy and the USA. TI-35 PLUS (1986) It added hexadecimal and octal calculations and a 10+2 display (i.e. a 10-digit mantissa with a 2-digit exponent) with 12-digit internal precision. TI-35X (1991) The design was based on the contemporary TI-68. The cosmetics were updated in 1993. End of life Following the update of the TI-36X Solar in 1996, the TI-35 designation was discontinued after 10 years of coexisting with the TI-36 line. References 1.
http://www.calculator.org/Pages/calculator.aspx?model=TI-35%20GALAXY%20SOLAR&make=Texas%20Instruments calculator.org External links • Datamath.org, a museum of TI calculators
TI-89 series The TI-89 and the TI-89 Titanium are graphing calculators developed by Texas Instruments (TI). They are differentiated from most other TI graphing calculators by their computer algebra system, which allows symbolic manipulation of algebraic expressions—equations can be solved in terms of variables, whereas the TI-83/84 series can only give a numeric result. TI-89 specifications: programmable graphing calculator; Manufacturer: Texas Instruments; Introduced: September 1998;[1] Discontinued: 2004; Latest firmware: 2.09; Successor: TI-89 Titanium; Entry mode: DAL; Display: 160×100 dot-matrix LCD; Processor: Motorola 68000 at 10 or 12 MHz; Memory: 256 KB RAM (188 KB user accessible), 2 MB flash (639 KB user accessible); Power: 4 AAA batteries, 1 CR1616 or CR1620 for RAM backup. TI-89 The TI-89 is a graphing calculator developed by Texas Instruments in 1998. The unit features a 160×100 pixel resolution LCD and a large amount of flash memory, and includes TI's Advanced Mathematics Software. The TI-89 is one of TI's highest-end calculator lines, along with the TI-Nspire. In the summer of 2004, the standard TI-89 was replaced by the TI-89 Titanium. The TI-89 runs on a 32-bit microprocessor, the Motorola 68000, which nominally runs at 10 or 12 MHz,[3] depending on the calculator's hardware version. The calculator has 256 kB of RAM (190 kB of which are available to the user) and 2 MB of flash memory (700 kB of which is available to the user). The RAM and Flash ROM are used to store expressions, variables, programs, text files, and lists. The TI-89 is essentially a TI-92 Plus with a limited keyboard and smaller screen. It was created partially in response to the fact that while calculators are allowed on many standardized tests, the TI-92 was not due to the QWERTY layout of its keyboard. Additionally, some people found the TI-92 unwieldy and overly large.
The TI-89 is significantly smaller—about the same size as most other graphing calculators. It has a flash ROM, a feature present on the TI-92 Plus but not on the original TI-92. User features The major advantage of the TI-89 over other TI calculators is its built-in computer algebra system, or CAS. The calculator can evaluate and simplify algebraic expressions symbolically, and answers are "prettyprinted" by default; that is, displayed as they would be written by hand. For example, entering x^2-4x+4 returns $x^{2}-4x+4$, typeset as it would appear on paper rather than as the linear input string. The TI-89's abilities include: • Algebraic factoring of expressions, including partial fraction decomposition. • Algebraic simplification; for example, the CAS can combine multiple terms into one fraction by finding a common denominator. • Evaluation of trigonometric expressions to exact values. For example, sin(60°) returns ${\frac {\sqrt {3}}{2}}$ instead of 0.86603. • Solving equations for a certain variable. The CAS can solve for one variable in terms of others; it can also solve systems of equations. For equations such as quadratics where there are multiple solutions, it returns all of them. Equations with infinitely many solutions are solved by introducing arbitrary constants: solve(tan(x+2)=0,x) returns x=2.(90.@n1-1), with the @n1 representing any integer. • Symbolic and numeric differentiation and integration. Derivatives and definite integrals are evaluated exactly when possible, and approximately otherwise.
• Calculate greatest common divisor (gcd) and least common multiple (lcm) • Probability theory: factorial, combination,[4] permutation, binomial distribution, normal distribution[5] • PrettyPrint[6] (similar to an equation editor or LaTeX output) • Mathematical constants such as $\pi $, $e$, $i$, and $\infty $ are displayed as symbols • Draw 2D and 3D graphs[6] • Calculate Taylor polynomials[7] • Calculate limits of functions,[8] including infinite limits and limits from one direction • Vector calculation[9] • Matrix calculation[10] • Calculate series[11] (summation or infinite product) • Calculate chi-squared tests[12] • Calculate with complex numbers[13][14] • Factor polynomials: factor(polynomial) or cfactor(polynomial) • Solve equations:[15] solve(equation,$x$) or csolve(equation,$x$) • Solve first- or second-order differential equations: deSolve(differential equation,$x$,$y$) • Multiply and divide SI units[16] (entered with the underscore character, typed as "diamond" then "MODE") • A number of regressions: • LinReg • QuadReg • CubicReg • QuartReg • ExpReg • LnReg • PowerReg • Logistic • SinReg In addition to the standard two-dimensional function plots, it can also produce graphs of parametric equations, polar equations, sequence plots, differential equation fields, and three-dimensional (two independent variable) functions. Programming The TI-89 is directly programmable in a language called TI-BASIC 89, TI's derivative of BASIC for calculators. With the use of a PC, it is also possible to develop more complex programs in Motorola 68000 assembly language or C, translate them to machine language, and copy them to the calculator. Two software development kits for C programming are available; one is TI Flash Studio, the official TI SDK, and the other is TIGCC, a third-party SDK based on GCC. In addition, there is a third-party flash application called GTC that allows the writing and compilation of C programs directly on the calculator. It is built on TIGCC, with some modifications.
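Several of the arithmetic and probability functions listed above (gcd, lcm, factorial, nCr, nPr) have direct counterparts in Python's standard library, which can be handy for checking calculator results on a desktop. A minimal sketch (Python 3.9+ for math.lcm):

```python
import math

# Counterparts of the TI-89's gcd and lcm functions
print(math.gcd(12, 18))   # 6
print(math.lcm(12, 18))   # 36

# Counterparts of the probability-menu functions: factorial, nCr, nPr
print(math.factorial(5))  # 120
print(math.comb(5, 2))    # nCr(5,2) = 10
print(math.perm(5, 2))    # nPr(5,2) = 20
```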
Numerous BASIC extensions are also present, the most notable of which is NewProg. Since the TI-89's release in 1998, thousands of programs for math, science, or entertainment have been developed. Many video games have also been developed; most are generic clones of Tetris, Minesweeper, and other classic games, but some programs are more advanced: for example, a ZX Spectrum emulator, a chess-playing program, a symbolic circuit simulator, and a clone of Link's Awakening. Some of the most popular and well-known games are Phoenix, Drugwars, and Snake. Many calculator games and other useful programs can be found on TI-program sharing sites. Ticalc.org is a major one that offers thousands of calculator programs. Hardware versions There are four hardware versions of the TI-89. These versions are normally referred to as HW1, HW2, HW3, and HW4 (released in May 2006). Entering the key sequence [F1] [A] displays the hardware version. Older versions (before HW2) don't display anything about the hardware version in the about menu. The differences in the hardware versions are not well documented by Texas Instruments. HW1 and HW2 correspond to the original TI-89; HW3 and HW4 are only present in the TI-89 Titanium. The most significant difference between HW1 and HW2 is in the way the calculator handles the display. In HW1 calculators there is a video buffer that stores all of the information that should be displayed on the screen, and every time the screen is refreshed the calculator accesses this buffer and flushes it to the display (direct memory access). In HW2 and later calculators, a region of memory is directly aliased to the display controller (memory-mapped I/O). This allows for slightly faster memory access, as the HW1's DMA controller used about 10% of the bus bandwidth. However, it interferes with a trick some programs use to implement grayscale graphics by rapidly switching between two or more displays (page-flipping).
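The page-flipping idea can be sketched in Python (a host-side illustration with a tiny frame buffer, not calculator code; on the real HW1 the "flip" is a single write to a memory-mapped DMA base register):

```python
SCREEN_BYTES = 8  # tiny stand-in for the real 160x100-pixel frame buffer

page_a = bytearray([0x00] * SCREEN_BYTES)  # frame 1: all pixels off
page_b = bytearray([0xFF] * SCREEN_BYTES)  # frame 2, drawn off-screen

# HW1-style flip: repoint the display base, an O(1) reference swap.
front, back = page_a, page_b
front, back = back, front
print(hex(front[0]))  # 0xff: the display shows the new page instantly

# HW2-style update: the aliased screen region must be rewritten, an O(n) copy.
screen = bytearray(SCREEN_BYTES)
screen[:] = front
print(hex(screen[0]))  # 0xff
```

Rapidly alternating which page is shown yields intermediate gray levels; because the HW1 flip is nearly free, fast page alternation (and hence 7-level grayscale) was practical there, while the per-frame copy on HW2 adds flicker.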
On the HW1, the DMA controller's base address can be changed (a single write into a memory-mapped hardware register) and the screen will automatically use a new section of memory at the beginning of the next frame. In HW2, the new page must be written to the screen by software. The effect of this is to cause increased flickering in grayscale mode, enough to make the 7-level grayscale supported on the HW1 unusable (although 4-level grayscale works on both calculators). HW2 calculators are slightly faster because TI increased the nominal speed of the processor from 10 MHz to 12 MHz. TI is believed to have increased the speed of HW4 calculators to 16 MHz, though users' measurements put the actual speed closer to 14 MHz. Another difference between HW1 and HW2 calculators is assembly program size limitations. The size limitation on HW2 calculators has varied with the AMS version of the calculator. As of AMS 2.09 the limit is 24 KB. Some earlier versions limited assembly programs to 8 KB, and the earliest AMS versions had no limit. The latest AMS version has a 64 KB limit. HW1 calculators have no hardware to enforce the limits, so it is easy to bypass them in software. There are unofficial patches and kernels that can be installed on HW2 calculators to remove the limitations. TI-89 Titanium specifications: programmable graphing calculator with computer algebra system; Introduced: June 1, 2004;[17] Latest firmware: 3.10; Predecessor: TI-89; Successor: TI-Nspire CAS; Entry mode: DAL; Display: 160×100 dot-matrix LCD; Processor: Motorola 68000 at 16 MHz; Memory: 256 KB RAM (188 KB user accessible), 4 MB flash (2.7 MB user accessible); Power: 4 AAA batteries, 1 SR44. The TI-89 Titanium was released on June 1, 2004, and has largely replaced the popular classic TI-89. The TI-89 Titanium is referred to as HW3 and uses the corresponding AMS 3.x.
In 2006, new calculators were upgraded to HW4, which was supposed to offer increases in RAM and speeds up to 16 MHz, but some benchmarks made by users reported speeds between 12.85 and 14.1 MHz. The touted advantages of the TI-89 Titanium over the original TI-89 include two times the flash memory (with over four times as much available to the user). The TI-89 Titanium is essentially a Voyage 200 without an integrated keyboard. The TI-89 Titanium also has a USB On-The-Go port for connectivity to other TI-89 Titanium calculators, or to a computer (to store programs or update the operating system). The TI-89 Titanium also features some pre-loaded applications, such as "CellSheet", a spreadsheet program also offered with other TI calculators. The Titanium has a slightly updated CAS, which adds a few more mathematical functions, most notably implicit differentiation. The Titanium also has a slightly differing case design from that of the TI-89 (the Titanium's case design is similar to that of the TI-84 Plus). There are some minor compatibility issues with C and assembly programs developed for the original TI-89. Some have to be recompiled to work on the Titanium due to various small hardware changes, though in most cases the problems can be fixed by using a utility such as GhostBuster, by Olivier Armand and Kevin Kofler. This option is generally preferred as it requires no knowledge of the program, works without the need of the program's source code, is automated, and doesn't require additional computer software. In some cases, only one character needs to be changed (the ROM base on the TI-89 is at 0x200000, whereas on the TI-89 Titanium it is at 0x800000) by hand or by patcher. Most, if not all, of these problems are caused by the mirror memory (ghost space) or lack thereof.
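The one-character patch mentioned above comes down to the top byte of a big-endian 32-bit address: 0x200000 versus 0x800000. A toy Python illustration of the difference, with a hypothetical ROM offset (not a real patcher; utilities like GhostBuster handle the fix-ups properly):

```python
ROM_BASE_89 = 0x200000        # ROM base on the original TI-89
ROM_BASE_TITANIUM = 0x800000  # ROM base on the TI-89 Titanium

# A hypothetical absolute address into ROM, stored big-endian as on the 68000.
offset = 0x1234
old = (ROM_BASE_89 + offset).to_bytes(4, "big")
new = (ROM_BASE_TITANIUM + offset).to_bytes(4, "big")

print(old.hex())  # 00201234
print(new.hex())  # 00801234
# Only one byte (0x20 -> 0x80) differs between the two encodings.
print(sum(a != b for a, b in zip(old, new)))  # 1
```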
A simulator for the TI-89 Titanium was released in April 2021.[18] Use in schools United Kingdom The Joint Council for Qualifications publish examination instructions on behalf of the main examination boards in England, Wales and Northern Ireland. These instructions state that a calculator used in an examination must not be designed to offer symbolic algebra manipulation, symbolic differentiation or integration.[19] This precludes use of the TI-89 or TI-89 Titanium in examinations, but it may be used as part of classroom study. The SQA give the same instructions for examinations in Scotland.[20] United States In the United States, the TI-89 is allowed by the College Board on all calculator-permitted tests, including the SAT, some SAT Subject Tests, and the AP Calculus, Physics, Chemistry, and Statistics exams. However, the calculator is banned from use on the ACT, the PLAN, and in some classrooms. The TI-92 series, with otherwise comparable features, has a QWERTY keyboard that results in it being classified as a computer device rather than as a calculator.[21] See also • Comparison of Texas Instruments graphing calculators • TI-Nspire References 1. "TI-89 Nears Release - ticalc.org". 1998-08-08. Retrieved 2023-01-23. 2. "ti89-simulator.org at WI. TI-89 Online Simulator". website.informer.com. Retrieved 2021-05-13. 3. Woerner, Joerg (July 27, 2020). "Texas Instruments TI-89". Datamath Calculator Museum. Retrieved June 27, 2022. 4. "Chapter 12: Calculator Notes for the TI-89, TI-92 Plus, and Voyage 200". Discovering Advanced Algebra (PDF) (1st ed.). Kendall Hunt. 2004. pp. 73–75. Archived from the original (PDF) on April 7, 2022. 5. Fairbourn, Camille. "Using the normalcdf function on the TI‐89" (PDF). Retrieved June 27, 2022. 6. See page 14 of "Vælg den rigtige regnemaskine. Lommeregnerguide 1998" [Choose the right calculator: calculator guide 1998], Texas Instruments. 7. "Module 22 - Power Series". TI Education. 8. "Finding limits with the TI-89" (PDF). 9. "TI 89 for Vectors" (PDF). 10. O'Connell, Jeff.
"Matrix Operations on the TI-89" (PDF). 11. See pages 14–15 of "Vælg den rigtige regnemaskine. Lommeregnerguide 1998" [Choose the right calculator: calculator guide 1998], Texas Instruments. 12. Fairbourn, Camille. "Chi-square tests for Independence on the TI-89" (PDF). Retrieved June 27, 2022. 13. Brown, Stan. "Complex Numbers on TI-89". 14. "How to add vectors on the Ti-89" (PDF). 15. McCalla, Jeff; Ouellette, Steve (March 26, 2016). "Solve Command from TI-Nspire CAS Algebra Submenu". TI-Nspire For Dummies. Retrieved June 27, 2022. 16. "Calculator Quick Reference Guide and Instructions" (PDF). 17. "TI-89 Titanium Now Available - ticalc.org". 2004-06-17. Retrieved 2023-01-23. 18. "TI-89 Online Simulator". ti89-simulator.com. Retrieved 2021-04-10. 19. Joint Council for Qualifications (2014). "Instructions for Conducting Examinations 2014–2015". p. 13. Archived from the original on 2015-03-30. Retrieved 2015-03-29. 20. Scottish Qualifications Authority (2013). "Mathematics Update Letter" (PDF). p. 2. Retrieved 2015-03-29. 21. ACT's CAAP Tests: Use of Calculators on the CAAP Mathematics Test External links • Official website • Instruction Manual • Using the TI-89 Graphing Calculator • Complete disassembly of a TI-89 Titanium calculator at the Wayback Machine (archived April 7, 2016)
TI-92 series The TI-92 series of graphing calculators are a line of calculators produced by Texas Instruments. They include: the TI-92 (1995), the TI-92 II (1996), the TI-92 Plus (1998, 1999) and the Voyage 200 (2002). The design of these relatively large calculators includes a QWERTY keyboard. Because of this keyboard, it was given the status of a "computer" rather than "calculator" by American testing facilities and cannot be used on tests such as the SAT or AP Exams while the similar TI-89 can be.[1][2] TI-92 specifications: programmable graphing calculator; Introduced: 1995; Discontinued: 1998; Latest firmware: 1.12; Successor: TI-92 Plus; Entry mode: D.A.L.; Precision: 14; Display: 240×128 dot-matrix LCD; Processor: Motorola MC68000 at 10 MHz; Programming language: TI-BASIC; Memory: 68 kB RAM; Power: 4 AA batteries, 1 CR2032; Weight: 493 grams (17.4 oz); Dimensions: 119 mm × 208 mm × 30 mm (4.7 in × 8.2 in × 1.20 in). TI-92 II specifications: programmable graphing calculator; Introduced: 1996; Discontinued: 1999; Latest firmware: 2.1; Successor: TI-92 Plus; Entry mode: D.A.L.; Precision: 14; Display: 240×128 dot-matrix LCD; Processor: Motorola MC68000 at 10 MHz; Programming language: TI-BASIC; Memory: 128 kB RAM; Power: 4 AA batteries, 1 CR2032; Weight: 493 grams (17.4 oz); Dimensions: 119 mm × 208 mm × 30 mm (4.7 in × 8.2 in × 1.20 in). TI-92 The TI-92 was originally released in 1995, and was the first symbolic calculator made by Texas Instruments. It came with a computer algebra system (CAS) based on Derive, geometry based on Cabri II, and was one of the first calculators to offer 3D graphing. The TI-92 was not allowed on most standardized tests due mostly to its QWERTY keyboard. Its larger size was also rather cumbersome compared to other graphing calculators.
In response to these concerns, Texas Instruments introduced the TI-89, which is functionally similar to the original TI-92, but featured Flash ROM and 188 KB RAM, and a smaller design without the QWERTY keyboard. The TI-92 was then replaced by the TI-92 Plus, which was essentially a TI-89 with the larger QWERTY keyboard design of the TI-92. Eventually, TI released the Voyage 200, which is a smaller, lighter version of the TI-92 Plus with more Flash ROM. The TI-92 is no longer sold through TI or its dealers, and is very hard to come by in stores. TI-92 II The TI-92 II was released in 1996, and was the first successor to the TI-92. The TI-92 II was available both as a stand-alone product, and as a user-installable II module which could be added to original TI-92 units to gain most of the feature improvements. The TI-92 II module was introduced early in 1996 and added the choice of 5 user languages (English, French, German, Italian and Spanish) and an additional 128k of user memory. Along with the TI-92, the TI-92 II was replaced by the TI-92 Plus in 1999, which offered even more Flash ROM and RAM. TI-92 Plus specifications: programmable graphing calculator; Introduced: 1998; Discontinued: 2006; Latest firmware: 2.09; Predecessor: TI-92/TI-92 II; Successor: Voyage 200; Entry mode: D.A.L.; Precision: 14; Display: 240×128 dot-matrix LCD; Processor: Motorola MC68000 at 12 MHz; Programming language: TI-BASIC; Memory: 188 kB RAM, 702 kB flash memory; Power: 4 AA batteries, 1 CR2032; Weight: 493 grams (17.4 oz); Dimensions: 119 mm × 208 mm × 30 mm (4.7 in × 8.2 in × 1.20 in). The TI-92 Plus (or TI-92+) was released in 1998, slightly after the creation of the almost-identical (in terms of software) TI-89, while physically looking exactly like its predecessor, the TI-92 (which lacked flash memory). Besides increased memory over its predecessor, the TI-92 Plus also featured a sharper "black" screen, which had first appeared on the TI-89 and which eases viewing.
The TI-92 Plus was available both as a stand-alone product, and as a user-installable Plus module which could be added to original TI-92 and TI-92 II units to gain most of the feature improvements, most notably Flash Memory. A stand-alone TI-92 Plus calculator was functionally similar to the HW2 TI-89, while a module-upgraded TI-92 was functionally similar to the HW1 TI-89. Both versions could run the same releases of operating system software. As of 2002, the TI-92 Plus was succeeded by the Voyage 200 and is no longer sold through TI or its dealers. The TI-92 Plus is now available in an online emulator,[3] featuring a list of frequently used commands.[4] Voyage 200 specifications: programmable graphing calculator; Introduced: 2002; Discontinued: 2014; Latest firmware: 3.10; Predecessor: TI-92 Plus; Entry mode: D.A.L.; Precision: 14; Display: 240×128 dot-matrix LCD; Processor: Motorola MC68000 at 12 MHz; Programming language: TI-BASIC; Memory: 188 kB RAM, 2.7 MB flash memory; Power: 4 AAA batteries, 1 CR1616 or CR1620; Weight: 272 grams (9.6 oz); Dimensions: 117 mm × 185 mm × 28 mm (4.6 in × 7.3 in × 1.10 in). The Voyage 200 (also V200 and Voyage 200 PLT) was released in 2002 as the replacement for the TI-92 Plus; its only hardware upgrade over that calculator was an increase in the amount of flash memory available (2.7 megabytes for the Voyage 200 vs. 702 kilobytes for the TI-92 Plus). It also features a somewhat smaller and more rounded case design. Like its predecessor, the Voyage 200 is an advanced calculator that supports plotting multiple functions on the same graph, parametric, polar, 3D, and differential equation graphing as well as sequence representations. Its symbolic calculation system is based on a trimmed version of the calculation software Derive.
In addition to its algebra and calculus capabilities, the Voyage 200 is packaged with list, spreadsheet, and data processing applications and can perform curve fitting to a number of standard functions and other statistical analysis operations. The calculator can also run most programs written for the TI-89 and TI-92 as well as programs specifically written for it. A large number of applications, ranging from games to interactive periodic tables, can be found online. The V200 is easily mistaken for a PDA or a small computer because of its massive enclosure and its full QWERTY keyboard — a feature which disqualifies the calculator for use in many tests and examinations, including the American ACT and SAT. The TI-89 Titanium offers exactly the same functionality in a smaller format that is also legal on the SAT test, but not the ACT test. Features Technical specifications: • Display: 240×128 pixels (all models) • CPU: Motorola MC68000 at 10 MHz (TI-92, TI-92 II) or 12 MHz (TI-92 Plus, Voyage 200) • RAM: 128 KB (70 KB user-available) on the TI-92; 256 KB (136 KB user-available) on the TI-92 II; 256 KB (188 KB user-available) on the TI-92 Plus and Voyage 200 • ROM/Flash: 1 MB ROM, non-upgradeable (TI-92, TI-92 II); 2 MB flash (702 KB[a] user-available) on the TI-92 Plus; 4 MB flash (2.7 MB user-available) on the Voyage 200 • Link capability: 2.5 mm I/O port (all models) • Power: 4×AA and 1×CR2032 (TI-92, TI-92 II, TI-92 Plus); 4×AAA and 1×CR1616 (Voyage 200) • Release: 1995 (TI-92), 1996 (TI-92 II), 1998/1999 (TI-92 Plus), 2002 (Voyage 200) a. The official page specifies the user-available ROM amount for the TI-92 Plus as 702K,[5] but other sources specify it as 388K.[6] This is due to the TI-92 Plus coming with Cabri Geometry pre-installed, which uses the 314 KB difference. See also • Comparison of Texas Instruments graphing calculators References 1. Calculator Policy, 13 January 2016 2. AP Calculator Policy 3. "TI-89 / TI-92+ / TI-V200 / TI-89T emulator (beta version 12-debrouxl)". tiplanet.org. Retrieved 2020-12-15. 4. "TI-89 Graphing Calculator For Dummies Cheat Sheet". dummies. Retrieved 2020-12-29. 5. "TI-92 Plus". TI Education. Archived from the original on 23 February 2012. 6. "CBL™ News". The Caliper.
Archived from the original on 2005-12-17. Retrieved 2005-09-09. External links • Official documentation: features of the Voyage 200.
Wikipedia
T. M. F. Smith Terence Michael Frederick Smith (18 January 1934 – 7 December 2019)[1][2] was a British statistician known for his research in survey sampling. Fred Smith gained his first degree in 1959. He succeeded Prof Maurice Quenouille[3] as Professor of Statistics at the University of Southampton in 1975.[4] He received the Guy Medal in bronze from the Royal Statistical Society in 1979. In 1983 he was elected as a Fellow of the American Statistical Association.[5] He was President of the Royal Statistical Society in 1991–1993.[6] Selected bibliography • Smith, T. M. F. (1984). "Present Position and Potential Developments: Some Personal Views: Sample surveys". Journal of the Royal Statistical Society, Series A. 147 (2): 208–221. doi:10.2307/2981677. JSTOR 2981677. • Smith, T. M. F. (1993). "Populations and Selection: Limitations of Statistics (Presidential address)". Journal of the Royal Statistical Society, Series A. 156 (2): 144–166. doi:10.2307/2982726. JSTOR 2982726. (Portrait of T. M. F. Smith on page 144) • Smith, T. M. F. (2001). "Biometrika centenary: Sample surveys". Biometrika. 88 (1): 167–243. doi:10.1093/biomet/88.1.167. • Smith, T. M. F. (2001). "Biometrika centenary: Sample surveys". In D. M. Titterington and D. R. Cox (ed.). Biometrika: One Hundred Years. Oxford University Press. pp. 165–194. ISBN 0-19-850993-6. • Smith, T. M. F.; Staetsky, L. (2007). "The teaching of statistics in UK universities". Journal of the Royal Statistical Society, Series A. 170 (3): 581–622. doi:10.1111/j.1467-985X.2007.00482.x. MR 2380589. References 1. Fred Smith 1934-2019 2. "T. M. F. Smith, 1934–2019; Harvey Goldstein, 1939–2020; Allan Henry Seheult, 1942–2019; John Francis Bithell, 1939–2020; M. H. A. Davis, 1945–2020". Journal of the Royal Statistical Society, Series A (Statistics in Society). 183 (3): 1313–1322. 2020. doi:10.1111/rssa.12580. 3. Statistics: The Art of Conjecture: T M F Smith 4. Smith, T. M. F. (2001). "Biometrika centenary: Sample surveys".
Biometrika. 88 (1): 167–243. doi:10.1093/biomet/88.1.167. 5. View/Search Fellows of the ASA, accessed 2016-10-22. 6. Smith, T. M. F. (1993). "Populations and Selection: Limitations of Statistics (Presidential address)". Journal of the Royal Statistical Society, Series A. 156 (2): 144–166. doi:10.2307/2982726. JSTOR 2982726. (Portrait of T. M. F. Smith on page 144) The Royal Statistical Society lists Smith as the 1991–1993 President in its list of presidents of the Royal Statistical Society, which is available on-line: List of presidents Archived 2008-10-13 at the Wayback Machine
TOPSIS The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a multi-criteria decision analysis method, originally developed by Ching-Lai Hwang and Yoon in 1981[1] with further developments by Yoon in 1987[2] and by Hwang, Lai and Liu in 1993.[3] TOPSIS is based on the concept that the chosen alternative should have the shortest geometric distance from the positive ideal solution (PIS) and the longest geometric distance from the negative ideal solution (NIS). A dedicated book on TOPSIS in the fuzzy context was published in 2021.[4] Description TOPSIS is a method of compensatory aggregation that compares a set of alternatives by normalising the scores for each criterion and calculating the geometric distance between each alternative and the ideal alternative, which has the best score in each criterion. The weights of the criteria in the TOPSIS method can be calculated using the Ordinal Priority Approach, the Analytic Hierarchy Process, etc. An assumption of TOPSIS is that the criteria are monotonically increasing or decreasing. Normalisation is usually required, as the parameters or criteria are often of incongruous dimensions in multi-criteria problems.[5][6] Compensatory methods such as TOPSIS allow trade-offs between criteria, where a poor result in one criterion can be negated by a good result in another criterion. This provides a more realistic form of modelling than non-compensatory methods, which include or exclude alternative solutions based on hard cut-offs.[7] An example of an application to nuclear power plants is provided in [8]. TOPSIS method The TOPSIS process is carried out as follows: Step 1 Create an evaluation matrix consisting of m alternatives and n criteria, with the intersection of each alternative and criterion given as $x_{ij}$; we therefore have a matrix $(x_{ij})_{m\times n}$.
Step 2 The matrix $(x_{ij})_{m\times n}$ is then normalised to form the matrix $R=(r_{ij})_{m\times n}$, using the normalisation method $r_{ij}={\frac {x_{ij}}{\sqrt {\sum _{k=1}^{m}x_{kj}^{2}}}},\quad i=1,2,\ldots ,m,\quad j=1,2,\ldots ,n.$ Step 3 Calculate the weighted normalised decision matrix $t_{ij}=r_{ij}\cdot w_{j},\quad i=1,2,\ldots ,m,\quad j=1,2,\ldots ,n,$ where $w_{j}=W_{j}{\Big /}\sum _{k=1}^{n}W_{k},\quad j=1,2,\ldots ,n,$ so that $\sum _{j=1}^{n}w_{j}=1$, and $W_{j}$ is the original weight given to the indicator $v_{j},\quad j=1,2,\ldots ,n.$ Step 4 Determine the worst alternative $(A_{w})$ and the best alternative $(A_{b})$: $A_{w}=\{\langle \max(t_{ij}\mid i=1,2,\ldots ,m)\mid j\in J_{-}\rangle ,\langle \min(t_{ij}\mid i=1,2,\ldots ,m)\mid j\in J_{+}\rangle \}\equiv \{t_{wj}\mid j=1,2,\ldots ,n\},$ $A_{b}=\{\langle \min(t_{ij}\mid i=1,2,\ldots ,m)\mid j\in J_{-}\rangle ,\langle \max(t_{ij}\mid i=1,2,\ldots ,m)\mid j\in J_{+}\rangle \}\equiv \{t_{bj}\mid j=1,2,\ldots ,n\},$ where $J_{+}$ is the index set of the criteria having a positive impact (benefit criteria) and $J_{-}$ is the index set of the criteria having a negative impact (cost criteria). Step 5 Calculate the L2-distance between the target alternative $i$ and the worst condition $A_{w}$, $d_{iw}={\sqrt {\sum _{j=1}^{n}(t_{ij}-t_{wj})^{2}}},\quad i=1,2,\ldots ,m,$ and the distance between the alternative $i$ and the best condition $A_{b}$, $d_{ib}={\sqrt {\sum _{j=1}^{n}(t_{ij}-t_{bj})^{2}}},\quad i=1,2,\ldots ,m,$ where $d_{iw}$ and $d_{ib}$ are L2-norm distances from the target alternative $i$ to the worst and best conditions, respectively. Step 6 Calculate the similarity to the worst condition: $s_{iw}=d_{iw}/(d_{iw}+d_{ib}),\quad 0\leq s_{iw}\leq 1,\quad i=1,2,\ldots ,m.$ $s_{iw}=1$ if and only if the alternative solution has the best condition; and $s_{iw}=0$ if and only if the alternative solution has the worst condition.
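The steps above can be sketched in a short NumPy implementation. This is a minimal illustrative sketch, not code from the cited references; the function name, argument layout, and handling of benefit/cost criteria as a boolean array are choices made here for clarity.

```python
import numpy as np

def topsis(x, weights, benefit):
    """Score m alternatives on n criteria with TOPSIS.

    x       : (m, n) evaluation matrix x_ij (Step 1)
    weights : (n,) original criteria weights W_j (any positive scale)
    benefit : (n,) booleans, True where criterion j is in J+, False for J-
    Returns the similarity-to-worst scores s_iw and the ranking (best first).
    """
    x = np.asarray(x, dtype=float)
    # Step 2: vector normalisation, r_ij = x_ij / sqrt(sum_k x_kj^2)
    r = x / np.sqrt((x ** 2).sum(axis=0))
    # Step 3: weighted normalised matrix, with w_j = W_j / sum_k W_k
    w = np.asarray(weights, dtype=float)
    t = r * (w / w.sum())
    # Step 4: best (A_b) and worst (A_w) conditions per criterion
    benefit = np.asarray(benefit, dtype=bool)
    t_b = np.where(benefit, t.max(axis=0), t.min(axis=0))
    t_w = np.where(benefit, t.min(axis=0), t.max(axis=0))
    # Step 5: L2 distances d_ib and d_iw of each alternative
    d_b = np.sqrt(((t - t_b) ** 2).sum(axis=1))
    d_w = np.sqrt(((t - t_w) ** 2).sum(axis=1))
    # Step 6: similarity to the worst condition, s_iw in [0, 1]
    s = d_w / (d_w + d_b)
    # Step 7: rank alternatives by descending s_iw
    return s, np.argsort(-s)
```

For example, with two alternatives on two benefit criteria where the second alternative dominates the first, the second alternative receives $s_{iw}=1$ and is ranked first. The sketch omits edge cases such as an all-zero criterion column or identical alternatives (where $d_{iw}+d_{ib}=0$), which a production implementation would need to guard against.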
Step 7 Rank the alternatives according to $s_{iw}\,\,(i=1,2,\ldots ,m).$ Normalisation Two methods of normalisation that have been used to deal with incongruous criteria dimensions are linear normalisation and vector normalisation. Linear normalisation divides each score by a fixed value for its criterion, typically the maximum score attained in that criterion. Vector normalisation, which divides each score by the Euclidean norm of its criterion column, was incorporated with the original development of the TOPSIS method[1] and is the method used in Step 2 of the TOPSIS process above: $r_{ij}={\frac {x_{ij}}{\sqrt {\sum _{k=1}^{m}x_{kj}^{2}}}},\quad i=1,2,\ldots ,m,\quad j=1,2,\ldots ,n.$ In using vector normalisation, the non-linear distances between single-dimension scores and ratios should produce smoother trade-offs.[9] Online tools • Decision Radar: a free online TOPSIS calculator written in Python. • Yadav, Vinay; Karmakar, Subhankar; Kalbar, Pradip P.; Dikshit, A. K. (January 2019). "PyTOPS: A Python based tool for TOPSIS". SoftwareX. 9: 217–222. Bibcode:2019SoftX...9..217Y. doi:10.1016/j.softx.2019.02.004. References 1. Hwang, C.L.; Yoon, K. (1981). Multiple Attribute Decision Making: Methods and Applications. New York: Springer-Verlag. 2. Yoon, K. (1987). "A reconciliation among discrete compromise situations". Journal of the Operational Research Society. 38 (3): 277–286. doi:10.1057/jors.1987.44. S2CID 121379674. 3. Hwang, C.L.; Lai, Y.J.; Liu, T.Y. (1993). "A new approach for multiple objective decision making". Computers and Operational Research. 20 (8): 889–899. doi:10.1016/0305-0548(93)90109-v. 4. El Alaoui, M. (2021). Fuzzy TOPSIS: Logic, Approaches, and Case Studies. New York: CRC Press. doi:10.1201/9781003168416. ISBN 978-0-367-76748-8. S2CID 233525185. 5. Yoon, K.P.; Hwang, C. (1995). Multiple Attribute Decision Making: An Introduction. SAGE publications. 6. Zavadskas, E.K.; Zakarevicius, A.; Antucheviciene, J. (2006). "Evaluation of Ranking Accuracy in Multi-Criteria Decisions". Informatica. 17 (4): 601–618. doi:10.15388/Informatica.2006.158. 7.
Greene, R.; Devillers, R.; Luther, J.E.; Eddy, B.G. (2011). "GIS-based multi-criteria analysis". Geography Compass. 5 (6): 412–432. doi:10.1111/j.1749-8198.2011.00431.x. 8. Locatelli, Giorgio; Mancini, Mauro (2012-09-01). "A framework for the selection of the right nuclear power plant" (PDF). International Journal of Production Research. 50 (17): 4753–4766. doi:10.1080/00207543.2012.657965. ISSN 0020-7543. S2CID 28137959. 9. Huang, I.B.; Keisler, J.; Linkov, I. (2011). "Multi-criteria decision analysis in environmental science: ten years of applications and trends". Science of the Total Environment. 409 (19): 3578–3594. Bibcode:2011ScTEn.409.3578H. doi:10.1016/j.scitotenv.2011.06.022. PMID 21764422.
True True most commonly refers to truth, the state of being in congruence with fact or reality. Look up TRUE, True, or true in Wiktionary, the free dictionary. True may also refer to: Places • True, West Virginia, an unincorporated community in the United States • True, Wisconsin, a town in the United States • True, a townland in County Tyrone, Northern Ireland People • True (singer) (stylized as TRUE), the stage name of Japanese singer Miho Karasawa • True (surname) • True O'Brien (born 1994), an American model and actress Arts, entertainment, and media Albums • True (Avicii album), 2013 • True (EP), a 2012 EP by Solange Knowles • True (L'Arc-en-Ciel album), 1996 • True (Roy Montgomery and Chris Heaphy album), 1999 • True (Mika Nakashima album), 2002 • True (Spandau Ballet album), 1983 • True (TrinityRoots album), 2001 • True (TRU album), 1995 Songs • "True" (Brandy song), by Brandy Norwood from Human (2008) • "True" (Concrete Blonde song), 1987 • "True" (Ryan Cabrera song), 2004 • "True" (Jaimeson song), 2003 • "True" (Spandau Ballet song), 1983 • "True" (George Strait song), 1998 • "True", a song by Cindy Walker, recorded on The International Jim Reeves 1963 • "True", by Lasgo from Far Away • "True", by Zion I from True & Livin' • "True...", a 2001 song by Riyu Kosaka • "True Song", a 2002 song by Do As Infinity • "You Don't Love Me (True)", a song by Louis Cottrell Jr. 
with Don Albert and Lloyd Glenn Periodicals • True (magazine), an American men's magazine • Trace (magazine), formerly True, a British hip-hop magazine Other uses in arts, entertainment, and media • True (film), a short film directed by Tom Tykwer, starring Natalie Portman • True, a 2013 Elixir novel by Hilary Duff with Elise Allen, a sequel to Devoted • GE True, an anthology TV series based on stories from True magazine • True, the main protagonist from the Netflix animated series True and the Rainbow Kingdom Computing • true (Unix), a Unix utility • true, a boolean value • TRUE (Temporal Reasoning Universal Elaboration), a discrete and continuous time simulation software program for 2D, 3D and 4D modeling Brands and enterprises • True (cigarette), a brand of cigarettes made by Lorillard Tobacco Company • True (dating service), an online dating service • True Corporation, a Thai communications group • TrueMove H, a Thai mobile operator • TrueVisions, a Thai television platform Other uses • True self • True value, a concept in statistics • Trust for Urban Ecology, a British ecological organisation See also • TRU (band), an American hip hop group • True north (disambiguation) • Truth value, in logic and mathematics, a logical value • False (disambiguation), the opposite of true • Wheel truing stand
TUM School of Computation, Information and Technology The TUM School of Computation, Information and Technology (CIT) is a school of the Technical University of Munich, established in 2022 by the merger of three former departments. As of 2022, it is structured into the Department of Mathematics, the Department of Computer Engineering, the Department of Computer Science, and the Department of Electrical Engineering.

TUM School of Computation, Information and Technology
• Established: 2022
• Dean: Hans-Joachim Bungartz
• Website: cit.tum.de

Department of Mathematics The Department of Mathematics (MATH) is located at the Garching campus. History Mathematics was taught from the beginning at the Polytechnische Schule in München and the later Technische Hochschule München. Otto Hesse was the department's first professor for calculus, analytical geometry and analytical mechanics. Over the years, several institutes for mathematics were formed. In 1974, the Institute of Geometry was merged with the Institute of Mathematics to form the Department of Mathematics, and informatics, which had been part of the Institute of Mathematics, became a separate department.[1] Research Groups As of 2022, the research groups at the department are:[2] • Algebra • Analysis • Analysis and Modelling • Applied Numerical Analysis, Optimization and Data Analysis • Biostatistics • Discrete Optimization • Dynamic Systems • Geometry and Topology • Mathematical Finance • Mathematical Optimization • Mathematical Physics • Mathematical Modeling of Biological Systems • Numerical Mathematics • Numerical Methods for Plasma Physics • Optimal Control • Probability Theory • Scientific Computing • Statistics Department of Computer Science The Department of Computer Science (CS) is located at the Garching campus. History The first courses in computer science at the Technical University of Munich were offered in 1967 at the Department of Mathematics, when Friedrich L.
Bauer introduced a two-semester lecture titled Information Processing. In 1968, Klaus Samelson started offering a second lecture cycle titled Introduction to Informatics.[3] By 1992, the computer science department had separated from the Department of Mathematics to form an independent Department of Informatics.[3] In 2002, the department relocated from its old campus in the Munich city center to the new building on the Garching campus.[3] In 2017, the Department celebrated 50 Years of Informatics Munich with a series of lectures and ceremonies, together with the Ludwig Maximilian University of Munich and the Bundeswehr University Munich.[3] Chairs As of 2022, the department consists of the following chairs:[4] • AI in Healthcare and Medicine • Algorithmic Game Theory • Algorithms and Complexity • Application and Middleware Systems • Augmented Reality • Bioinformatics • Computational Imaging and AI in Medicine • Computational Molecular Medicine • Computer Aided Medical Procedures • Computer Graphics and Visualization • Computer Vision and AI • Cyber Trust • Data Analytics and Machine Learning • Data Science and Engineering • Database Systems • Decision Science & Systems • Dynamic Vision and Learning • Efficient Algorithms • Engineering Software for Decentralized Systems • Ethics in Systems Design and Machine Learning • Formal Languages, Compiler & Software Construction • Formal Methods for Software Reliability • Hardware-aware Algorithms and Software for HPC • Information Systems & Business Process Management • Law and Security of Digitization • Legal Tech • Logic and Verification • Machine Learning of 3D Scene Geometry • Physics-based Simulation • Quantum Computing • Scientific Computing • Software & Systems Engineering • Software Engineering • Software Engineering for Business Information Systems • Theoretical Computer Science • Theoretical Foundations of AI • Visual Computing Notable people Seven faculty members of the Department of Informatics have been awarded 
the Gottfried Wilhelm Leibniz Prize, one of the highest endowed research prizes in Germany with a maximum of €2.5 million per award: • 2020 – Thomas Neumann • 2016 – Daniel Cremers • 2008 – Susanne Albers • 1997 – Ernst Mayr • 1995 – Gerd Hirzinger • 1994 – Manfred Broy • 1991 – Karl-Heinz Hoffmann Friedrich L. Bauer was awarded the 1988 IEEE Computer Society Computer Pioneer Award for inventing the stack data structure. Gerd Hirzinger was awarded the 2005 IEEE Robotics and Automation Society Pioneer Award. Hans-Arno Jacobsen and Burkhard Rost were awarded the Alexander von Humboldt Professorship in 2011 and 2008, respectively. Rudolf Bayer was known for inventing the B-tree and Red–black tree. Department of Electrical Engineering The Department of Electrical Engineering (EE) is located at the Munich campus. History The first lectures in the field of electricity at the Polytechnische Schule München were given as early as 1876 by the physicist Wilhelm von Bezold. Over the years, as the field of electrical engineering became increasingly important, a separate department for electrical engineering emerged within the mechanical engineering department. In 1967, the department was renamed the Faculty of Mechanical and Electrical Engineering, and six electrical engineering departments were permanently established. In April 1974, the formal establishment of the new TUM Department of Electrical and Computer Engineering took place. 
While still located at the Munich campus, a new building is currently under construction on the Garching campus, and the department is expected to move by 2025.[5] Professorships As of 2022, the department consists of the following chairs and professorships:[6] • Biomedical Electronics • Circuit Design • Computational Photonics • Control and Manipulation of Microscale Living Objects • Environmental Sensing and Modeling • High Frequency Engineering • Hybrid Electronic Systems • Measurement Systems and Sensor Technology • Micro- and Nanosystems Technology • Microwave Engineering • Molecular Electronics • Nano and Microrobotics • Nano and Quantum Sensors • Neuroelectronics • Physics of Electrotechnology • Quantum Electronics and Computer Engineering • Semiconductor Technology • Simulation of Nanosystems for Energy Conversion Department of Computer Engineering The Department of Computer Engineering was separated from the former Department of Electrical and Computer Engineering as a result of the merger into the School of Computation, Information and Technology.
Professorships As of 2022, the department consists of the following chairs and professorships:[7] • Architecture of Parallel and Distributed Systems • Audio Information Processing • Automatic Control Engineering • Bio-inspired Information Processing • Coding and Cryptography • Communications Engineering • Communication Networks • Computer Architecture & Operating Systems • Computer Architecture and Parallel Systems • Connected Mobility • Cognitive Systems • Cyber Physical Systems • Data Processing • Electronic Design Automation • Embedded Systems and Internet of Things • Healthcare and Rehabilitation Robotics • Human-Machine Communication • Information-oriented Control • Integrated Systems • Line Transmission Technology • Machine Learning for Robotics • Machine Learning in Engineering • Machine Vision and Perception • Media Technology • Network Architectures and Services • Neuroengineering Materials • Real-Time Computer Systems • Robotics Science and System Intelligence • Robotics, AI and realtime systems • Security in Information Technology • Sensor-based Robot Systems and Intelligent Assistance Systems • Signal Processing Methods • Theoretical Information Technology Building The Department of Computer Science shares a building with the Department of Mathematics. In the building, two massive parabolic slides run from the fourth floor to the ground floor. 
Their shape corresponds to the equation $z=y=hx^{2}/d^{2}$ and is supposed to represent the "connection of science and art".[8]

Rankings

University rankings by subject (global rank / national rank):
• QS Computer Science & Information Systems 2023:[9] =29 / 1
• THE Computer Science 2023:[10] 10 / 1
• ARWU Computer Science & Engineering 2022:[11] 51–75 / 1
• QS Mathematics 2023:[12] =43 / 2
• ARWU Mathematics 2022:[13] 51–75 / 3–4
• QS Statistics & Operational Research 2023:[14] 28 / 1
• QS Electrical & Electronic Engineering 2023:[15] =18 / 1
• THE Engineering 2023:[16] 20 / 1
• ARWU Electrical & Electronic Engineering 2022:[17] 22 / 1

CHE Ranking 2020 – national (Bachelor / Master; ratings where available):
• Mathematics:[18] overall study situation 1.6 / 1.6; support in studies 1.8 / 2.0; support in the study entry phase 11/14 pts; teacher support 2.0 / 2.0; graduations in appropriate time 77.6% / 70.9%; international orientation 5/9 pts / 8/9 pts; third-party funds per academic 45.9
• Computer Science:[19] overall study situation 1.8 / 1.6; research orientation 2.1 / 1.7; support in studies 1.9 / 1.9; support in the study entry phase 10/14 pts; graduations in appropriate time 69.3% / 42.8%; international orientation 5/9 pts / 8/9 pts

The Department of Computer Science has been consistently rated the top computer science department in Germany by major rankings.[9][10][11] Globally, it ranks No. 29 (QS),[9] No. 10 (THE),[10] and within No. 51–75 (ARWU).[11] In the 2020 national CHE University Ranking, the department is among the top-rated departments for computer science and business informatics, being rated in the top group for the majority of criteria.[19] The Department of Mathematics has been rated as one of the top mathematics departments in Germany, ranking 43rd in the world and 2nd in Germany (after the University of Bonn) in the QS World University Rankings, and within No. 51–75 in the Academic Ranking of World Universities.[12][13] In Statistics & Operational Research, QS ranks TUM first in Germany and 28th in the world.[14] The Departments of Electrical and Computer Engineering are leading in Germany.[15][17] In Electrical & Electronic Engineering, TUM is rated 18th worldwide by QS and 22nd by ARWU. In engineering as a whole, TUM is ranked 20th globally and 1st nationally in the Times Higher Education World University Rankings.[16] See also • Summer School Marktoberdorf References 1. "Die Geschichte der Mathematik an der TU". TUM Department of Mathematics (in German). Retrieved 23 December 2020. 2. "Research groups". Department of Mathematics. Retrieved 19 October 2022. 3. "History". TUM Department of Informatics. Retrieved 22 December 2020. 4. "Chairs and Professors". Department of Computer Science. Retrieved 19 October 2022. 5. "Major milestone in new construction on Garching hightech campus". Technical University Munich. 25 July 2019. Retrieved 22 December 2020. 6. "Chairs and Professorships". Department of Electrical Engineering.
Retrieved 19 October 2022. 7. "Chairs and Professorships". Department of Computer Engineering. Retrieved 19 October 2022. 8. Hoffmann, Sabine (4 October 2002). "Akademiker-Bespaßung: Rutschpartie zum Café Latte". Der Spiegel (in German). Retrieved 22 December 2020. 9. "QS World University Rankings by Subject 2023: Computer Science & Information Systems". QS World University Rankings. Retrieved 23 March 2023. 10. "World University Rankings 2023 by subject: computer science". Times Higher Education World University Rankings. Retrieved 27 October 2022. 11. "ShanghaiRanking's Global Ranking of Academic Subjects 2022". Academic Ranking of World Universities. Retrieved 23 March 2023. 12. "QS World University Rankings by Subject 2023: Mathematics". QS World University Rankings. Retrieved 23 March 2023. 13. "ShanghaiRanking's Global Ranking of Academic Subjects 2022". Academic Ranking of World Universities. Retrieved 23 March 2023. 14. "QS World University Rankings by Subject 2023: Statistics & Operational Research". QS World University Rankings. Retrieved 23 March 2023. 15. "QS World University Rankings by Subject 2023: Engineering - Electrical & Electronic". QS World University Rankings. Retrieved 23 March 2023. 16. "World University Rankings 2023 by subject: engineering". Times Higher Education World University Rankings. Retrieved 27 October 2022. 17. "ShanghaiRanking's Global Ranking of Academic Subjects 2022". Academic Ranking of World Universities. Retrieved 23 March 2023. 18. "Studying Mathematics in Germany". CHE University Ranking. Retrieved 31 December 2020. 19. "Subject information about Studying Computer Science in Germany". CHE University Ranking. Retrieved 31 December 2020.
External links • Media related to TUM Department of Mathematics and Computer Science building at Wikimedia Commons
Topological vector space In mathematics, a topological vector space (also called a linear topological space and commonly abbreviated TVS or t.v.s.) is one of the basic structures investigated in functional analysis. A topological vector space is a vector space that is also a topological space with the property that the vector space operations (vector addition and scalar multiplication) are also continuous functions. Such a topology is called a vector topology and every topological vector space has a uniform topological structure, allowing a notion of uniform convergence and completeness. Some authors also require that the space is a Hausdorff space (although this article does not). One of the most widely studied categories of TVSs are locally convex topological vector spaces. This article focuses on TVSs that are not necessarily locally convex. Banach spaces, Hilbert spaces and Sobolev spaces are other well-known examples of TVSs. Many topological vector spaces are spaces of functions, or linear operators acting on topological vector spaces, and the topology is often defined so as to capture a particular notion of convergence of sequences of functions. In this article, the scalar field of a topological vector space will be assumed to be either the complex numbers $\mathbb {C} $ or the real numbers $\mathbb {R} ,$ unless clearly stated otherwise. Motivation Normed spaces Every normed vector space has a natural topological structure: the norm induces a metric and the metric induces a topology. This is a topological vector space because: 1. The vector addition map $\cdot \,+\,\cdot \;:X\times X\to X$ defined by $(x,y)\mapsto x+y$ is (jointly) continuous with respect to this topology. This follows directly from the triangle inequality obeyed by the norm. 2. The scalar multiplication map $\cdot :\mathbb {K} \times X\to X$ defined by $(s,x)\mapsto s\cdot x,$ where $\mathbb {K} $ is the underlying scalar field of $X,$ is (jointly) continuous.
This follows from the triangle inequality and homogeneity of the norm. Thus all Banach spaces and Hilbert spaces are examples of topological vector spaces. Non-normed spaces There are topological vector spaces whose topology is not induced by a norm but that are still of interest in analysis. Examples of such spaces are spaces of holomorphic functions on an open domain, spaces of infinitely differentiable functions, the Schwartz spaces, and spaces of test functions and the spaces of distributions on them.[1] These are all examples of Montel spaces. An infinite-dimensional Montel space is never normable. The existence of a norm for a given topological vector space is characterized by Kolmogorov's normability criterion. A topological field is a topological vector space over each of its subfields. Definition A topological vector space (TVS) $X$ is a vector space over a topological field $\mathbb {K} $ (most often the real or complex numbers with their standard topologies) that is endowed with a topology such that vector addition $\cdot \,+\,\cdot \;:X\times X\to X$ and scalar multiplication $\cdot :\mathbb {K} \times X\to X$ are continuous functions (where the domains of these functions are endowed with product topologies). Such a topology is called a vector topology or a TVS topology on $X.$ Every topological vector space is also a commutative topological group under addition. Hausdorff assumption Many authors (for example, Walter Rudin), but not this page, require the topology on $X$ to be T1; it then follows that the space is Hausdorff, and even Tychonoff. A topological vector space is said to be separated if it is Hausdorff; importantly, "separated" does not mean separable. The topological and linear algebraic structures can be tied together even more closely with additional assumptions, the most common of which are listed below.
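The two continuity claims in the normed-space motivation above come from the estimates $\|(x+y)-(x_{0}+y_{0})\|\leq \|x-x_{0}\|+\|y-y_{0}\|$ and $\|sx-s_{0}x_{0}\|\leq |s-s_{0}|\|x\|+|s_{0}|\|x-x_{0}\|.$ A numeric spot-check of these inequalities in $\mathbb {R} ^{2}$ with the Euclidean norm (an illustrative sketch, not part of the article):

```python
import math
import random

# Spot-check of the norm estimates behind joint continuity of addition
# and scalar multiplication in a normed space (here R^2, Euclidean norm).
norm = lambda v: math.hypot(v[0], v[1])
add  = lambda u, v: (u[0] + v[0], u[1] + v[1])
sub  = lambda u, v: (u[0] - v[0], u[1] - v[1])
smul = lambda s, v: (s * v[0], s * v[1])

random.seed(0)
for _ in range(1000):
    x, x0, y, y0 = [tuple(random.uniform(-5, 5) for _ in range(2))
                    for _ in range(4)]
    s, s0 = random.uniform(-5, 5), random.uniform(-5, 5)
    # addition: ||(x+y) - (x0+y0)|| <= ||x-x0|| + ||y-y0||
    assert norm(sub(add(x, y), add(x0, y0))) <= \
        norm(sub(x, x0)) + norm(sub(y, y0)) + 1e-9
    # scalar mult.: s*x - s0*x0 = (s-s0)*x + s0*(x-x0), so
    # ||s*x - s0*x0|| <= |s-s0|*||x|| + |s0|*||x-x0||
    assert norm(sub(smul(s, x), smul(s0, x0))) <= \
        abs(s - s0) * norm(x) + abs(s0) * norm(sub(x, x0)) + 1e-9
```

Both right-hand sides go to zero as $(x,y)\to (x_{0},y_{0})$ and $(s,x)\to (s_{0},x_{0}),$ which is exactly the joint continuity asserted above.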
Category and morphisms The category of topological vector spaces over a given topological field $\mathbb {K} $ is commonly denoted $\mathrm {TVS} _{\mathbb {K} }$ or $\mathrm {TVect} _{\mathbb {K} }.$ The objects are the topological vector spaces over $\mathbb {K} $ and the morphisms are the continuous $\mathbb {K} $-linear maps from one object to another. A topological vector space homomorphism (abbreviated TVS homomorphism), also called a topological homomorphism,[2][3] is a continuous linear map $u:X\to Y$ between topological vector spaces (TVSs) such that the induced map $u:X\to \operatorname {Im} u$ is an open mapping when $\operatorname {Im} u:=u(X),$ which is the range or image of $u,$ is given the subspace topology induced by $Y.$ A topological vector space embedding (abbreviated TVS embedding), also called a topological monomorphism, is an injective topological homomorphism. Equivalently, a TVS-embedding is a linear map that is also a topological embedding.[2] A topological vector space isomorphism (abbreviated TVS isomorphism), also called a topological vector isomorphism[4] or an isomorphism in the category of TVSs, is a bijective linear homeomorphism. Equivalently, it is a surjective TVS embedding.[2] Many properties of TVSs that are studied, such as local convexity, metrizability, completeness, and normability, are invariant under TVS isomorphisms. A necessary condition for a vector topology A collection ${\mathcal {N}}$ of subsets of a vector space is called additive[5] if for every $N\in {\mathcal {N}},$ there exists some $U\in {\mathcal {N}}$ such that $U+U\subseteq N.$ Characterization of continuity of addition at $0$[5] — If $(X,+)$ is a group (as all vector spaces are), $\tau $ is a topology on $X,$ and $X\times X$ is endowed with the product topology, then the addition map $X\times X\to X$ (defined by $(x,y)\mapsto x+y$) is continuous at the origin of $X\times X$ if and only if the set of neighborhoods of the origin in $(X,\tau )$ is additive.
This statement remains true if the word "neighborhood" is replaced by "open neighborhood." All of the above conditions are consequently necessary for a topology to form a vector topology. Defining topologies using neighborhoods of the origin Since every vector topology is translation invariant (which means that for all $x_{0}\in X,$ the map $X\to X$ defined by $x\mapsto x_{0}+x$ is a homeomorphism), to define a vector topology it suffices to define a neighborhood basis (or subbasis) for it at the origin. Theorem[6] (Neighborhood filter of the origin) — Suppose that $X$ is a real or complex vector space. If ${\mathcal {B}}$ is a non-empty additive collection of balanced and absorbing subsets of $X$ then ${\mathcal {B}}$ is a neighborhood base at $0$ for a vector topology on $X.$ That is, the assumptions are that ${\mathcal {B}}$ is a filter base that satisfies the following conditions: 1. Every $B\in {\mathcal {B}}$ is balanced and absorbing, 2. ${\mathcal {B}}$ is additive: For every $B\in {\mathcal {B}}$ there exists a $U\in {\mathcal {B}}$ such that $U+U\subseteq B.$ If ${\mathcal {B}}$ satisfies the above two conditions but is not a filter base then it will form a neighborhood subbasis at $0$ (rather than a neighborhood basis) for a vector topology on $X.$ In general, the set of all balanced and absorbing subsets of a vector space does not satisfy the conditions of this theorem and does not form a neighborhood basis at the origin for any vector topology.[5] Defining topologies using strings Let $X$ be a vector space and let $U_{\bullet }=\left(U_{i}\right)_{i=1}^{\infty }$ be a sequence of subsets of $X.$ Each set in the sequence $U_{\bullet }$ is called a knot of $U_{\bullet }$ and for every index $i,$ $U_{i}$ is called the $i$-th knot of $U_{\bullet }.$ The set $U_{1}$ is called the beginning of $U_{\bullet }.$ The sequence $U_{\bullet }$ is said to be:[7][8][9] • Summative if $U_{i+1}+U_{i+1}\subseteq U_{i}$ for every index $i.$ • Balanced (resp.
absorbing, closed,[note 1] convex, open, symmetric, barrelled, absolutely convex/disked, etc.) if this is true of every $U_{i}.$ • A string if $U_{\bullet }$ is summative, absorbing, and balanced. • A topological string or a neighborhood string in a TVS $X$ if $U_{\bullet }$ is a string and each of its knots is a neighborhood of the origin in $X.$ If $U$ is an absorbing disk in a vector space $X$ then the sequence defined by $U_{i}:=2^{1-i}U$ forms a string beginning with $U_{1}=U.$ This is called the natural string of $U$.[7] Moreover, if a vector space $X$ has countable dimension then every string contains an absolutely convex string. Summative sequences of sets have the particularly nice property that they define non-negative continuous real-valued subadditive functions. These functions can then be used to prove many of the basic properties of topological vector spaces. Theorem ($\mathbb {R} $-valued function induced by a string) — Let $U_{\bullet }=\left(U_{i}\right)_{i=0}^{\infty }$ be a collection of subsets of a vector space such that $0\in U_{i}$ and $U_{i+1}+U_{i+1}\subseteq U_{i}$ for all $i\geq 0.$ For all $u\in U_{0},$ let $\mathbb {S} (u):=\left\{n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)~:~k\geq 1,n_{i}\geq 0{\text{ for all }}i,{\text{ and }}u\in U_{n_{1}}+\cdots +U_{n_{k}}\right\}.$ Define $f:X\to [0,1]$ by $f(x)=1$ if $x\not \in U_{0}$ and otherwise let $f(x):=\inf \left\{2^{-n_{1}}+\cdots +2^{-n_{k}}~:~n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)\in \mathbb {S} (x)\right\}.$ Then $f$ is subadditive (meaning $f(x+y)\leq f(x)+f(y)$ for all $x,y\in X$) and $f=0$ on $ \bigcap _{i\geq 0}U_{i};$ so in particular, $f(0)=0.$ If all $U_{i}$ are symmetric sets then $f(-x)=f(x)$ and if all $U_{i}$ are balanced then $f(sx)\leq f(x)$ for all scalars $s$ such that $|s|\leq 1$ and all $x\in X.$ If $X$ is a topological vector space and if all $U_{i}$ are neighborhoods of the origin then $f$ is continuous, where if in addition $X$ is Hausdorff and $U_{\bullet }$
forms a basis of balanced neighborhoods of the origin in $X$ then $d(x,y):=f(x-y)$ is a metric defining the vector topology on $X.$ A proof of the above theorem is given in the article on metrizable topological vector spaces. If $U_{\bullet }=\left(U_{i}\right)_{i\in \mathbb {N} }$ and $V_{\bullet }=\left(V_{i}\right)_{i\in \mathbb {N} }$ are two collections of subsets of a vector space $X$ and if $s$ is a scalar, then by definition:[7] • $V_{\bullet }$ contains $U_{\bullet }$: $\ U_{\bullet }\subseteq V_{\bullet }$ if and only if $U_{i}\subseteq V_{i}$ for every index $i.$ • Set of knots: $\ \operatorname {Knots} U_{\bullet }:=\left\{U_{i}:i\in \mathbb {N} \right\}.$ • Kernel: $ \ \ker U_{\bullet }:=\bigcap _{i\in \mathbb {N} }U_{i}.$ • Scalar multiple: $\ sU_{\bullet }:=\left(sU_{i}\right)_{i\in \mathbb {N} }.$ • Sum: $\ U_{\bullet }+V_{\bullet }:=\left(U_{i}+V_{i}\right)_{i\in \mathbb {N} }.$ • Intersection: $\ U_{\bullet }\cap V_{\bullet }:=\left(U_{i}\cap V_{i}\right)_{i\in \mathbb {N} }.$ If $\mathbb {S} $ is a collection of sequences of subsets of $X,$ then $\mathbb {S} $ is said to be directed (downwards) under inclusion or simply directed downward if $\mathbb {S} $ is not empty and for all $U_{\bullet },V_{\bullet }\in \mathbb {S} ,$ there exists some $W_{\bullet }\in \mathbb {S} $ such that $W_{\bullet }\subseteq U_{\bullet }$ and $W_{\bullet }\subseteq V_{\bullet }$ (said differently, if and only if $\mathbb {S} $ is a prefilter with respect to the containment $\,\subseteq \,$ defined above). Notation: Let $ \operatorname {Knots} \mathbb {S} :=\bigcup _{U_{\bullet }\in \mathbb {S} }\operatorname {Knots} U_{\bullet }$ be the set of all knots of all strings in $\mathbb {S} .$ Defining vector topologies using collections of strings is particularly useful for defining classes of TVSs that are not necessarily locally convex.
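To make the string-induced function concrete, take $X=\mathbb {R} ,$ $U=(-1,1),$ and knots $U_{i}=2^{-i}U$ (indexed from $i=0,$ so that $U_{i+1}+U_{i+1}=U_{i}$). For this string $x\in U_{n_{1}}+\cdots +U_{n_{k}}$ exactly when $|x|<2^{-n_{1}}+\cdots +2^{-n_{k}},$ so the infimum works out to $f(x)=\min(1,|x|).$ The sketch below (an illustration, not part of the article) approximates $f$ by a dyadic scan and spot-checks the theorem's conclusions:

```python
# String-induced function for X = R, U_i = 2^{-i} * (-1, 1).
# Since sums of dyadic rationals 2^{-n} realize every positive dyadic
# rational, the infimum over decompositions reduces to the smallest
# dyadic rational exceeding |x| (clipped at 1 outside U_0 = (-1, 1)).
def f(x, depth=12):
    if x == 0:
        return 0.0                    # 0 lies in every knot U_i
    if abs(x) >= 1.0:
        return 1.0                    # x outside the beginning U_0
    m = int(abs(x) * 2**depth) + 1    # smallest m with m/2^depth > |x|
    return m / 2**depth

tol = 2**-11
pts = [i / 7 for i in range(-10, 11)]
for x in pts:
    assert abs(f(x) - min(1.0, abs(x))) <= tol   # f is approx. min(1, |x|)
    assert f(-x) == f(x)                         # knots are symmetric
    for y in pts:
        assert f(x + y) <= f(x) + f(y) + tol     # subadditivity

# d(x, y) := f(x - y) is then a translation-invariant metric, as in the
# Hausdorff case of the theorem:
d = lambda x, y: f(x - y)
assert d(0.25, 0.75) == d(10.25, 10.75)
```

Balls of this metric around the origin are precisely rescaled knots, so $d$ induces the usual topology on $\mathbb {R} .$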
Theorem[7] (Topology induced by strings) — If $(X,\tau )$ is a topological vector space then there exists a set $\mathbb {S} $[proof 1] of neighborhood strings in $X$ that is directed downward and such that the set of all knots of all strings in $\mathbb {S} $ is a neighborhood basis at the origin for $(X,\tau ).$ Such a collection of strings is said to be $\tau $ fundamental. Conversely, if $X$ is a vector space and if $\mathbb {S} $ is a collection of strings in $X$ that is directed downward, then the set $\operatorname {Knots} \mathbb {S} $ of all knots of all strings in $\mathbb {S} $ forms a neighborhood basis at the origin for a vector topology on $X.$ In this case, this topology is denoted by $\tau _{\mathbb {S} }$ and it is called the topology generated by $\mathbb {S} .$ If $\mathbb {S} $ is the set of all topological strings in a TVS $(X,\tau )$ then $\tau _{\mathbb {S} }=\tau .$[7] A Hausdorff TVS is metrizable if and only if its topology can be induced by a single topological string.[10] Topological structure A vector space is an abelian group with respect to the operation of addition, and in a topological vector space the inverse operation is always continuous (since it is the same as multiplication by $-1$). Hence, every topological vector space is an abelian topological group. Every TVS is completely regular but a TVS need not be normal.[11] Let $X$ be a topological vector space. 
Given a subspace $M\subseteq X,$ the quotient space $X/M$ with the usual quotient topology is a Hausdorff topological vector space if and only if $M$ is closed.[note 2] This permits the following construction: given a topological vector space $X$ (that is possibly not Hausdorff), form the quotient space $X/M$ where $M$ is the closure of $\{0\}.$ $X/M$ is then a Hausdorff topological vector space that can be studied instead of $X.$ Invariance of vector topologies One of the most used properties of vector topologies is that every vector topology is translation invariant: for all $x_{0}\in X,$ the map $X\to X$ defined by $x\mapsto x_{0}+x$ is a homeomorphism, but if $x_{0}\neq 0$ then it is not linear and so not a TVS-isomorphism. Scalar multiplication by a non-zero scalar is a TVS-isomorphism. This means that if $s\neq 0$ then the linear map $X\to X$ defined by $x\mapsto sx$ is a homeomorphism. Using $s=-1$ produces the negation map $X\to X$ defined by $x\mapsto -x,$ which is consequently a linear homeomorphism and thus a TVS-isomorphism. If $x\in X$ and $S\subseteq X$ is any subset, then $\operatorname {cl} _{X}(x+S)=x+\operatorname {cl} _{X}S$[6] and moreover, if $0\in S$ then $x+S$ is a neighborhood (resp. open neighborhood, closed neighborhood) of $x$ in $X$ if and only if the same is true of $S$ at the origin. Local notions A subset $E$ of a vector space $X$ is said to be • absorbing (in $X$): if for every $x\in X,$ there exists a real $r>0$ such that $cx\in E$ for any scalar $c$ satisfying $|c|\leq r.$[12] • balanced or circled: if $tE\subseteq E$ for every scalar $t$ with $|t|\leq 1.$[12] • convex: if $tE+(1-t)E\subseteq E$ for every real $0\leq t\leq 1.$[12] • a disk or absolutely convex: if $E$ is convex and balanced. • symmetric: if $-E\subseteq E,$ or equivalently, if $-E=E.$ Every neighborhood of the origin is an absorbing set and contains an open balanced neighborhood of $0,$[6] so every topological vector space has a local base of absorbing and balanced sets.
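These local notions are easy to sample-check numerically. A sketch in $X=\mathbb {R}$ with sets given as membership predicates and the defining conditions tested on finite grids (illustrative approximations, not proofs; the sets E1 and E2 are hypothetical examples):

```python
# Sampled checks of absorbing / balanced / convex in X = R.
E1 = lambda x: -1.0 < x < 2.0     # the interval (-1, 2)
E2 = lambda x: abs(x) <= 1.0      # the interval [-1, 1]

pts = [i / 4 for i in range(-8, 9)]   # sample points in [-2, 2]
ts  = [i / 4 for i in range(-4, 5)]   # scalars with |t| <= 1

def is_balanced(E):
    # t*E is contained in E for every scalar |t| <= 1
    return all(E(t * x) for x in pts if E(x) for t in ts)

def is_convex(E):
    # t*E + (1-t)*E is contained in E for every real 0 <= t <= 1
    lams = [i / 4 for i in range(5)]
    return all(E(l * x + (1 - l) * y)
               for x in pts if E(x) for y in pts if E(y) for l in lams)

def is_absorbing(E):
    # every x is swallowed: for some r > 0, c*x in E whenever |c| <= r
    return all(any(all(E(c * x) for c in ts if abs(c) <= r)
                   for r in (0.25, 0.5, 1.0))
               for x in pts)

assert is_absorbing(E1) and is_absorbing(E2)    # both contain a ball around 0
assert is_balanced(E2) and not is_balanced(E1)  # t = -1 maps (-1,2) outside itself
assert is_convex(E1) and is_convex(E2)          # intervals are convex
```

E2 is convex and balanced, hence a disk; E1 is absorbing and convex but, being asymmetric around the origin, not balanced.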
The origin even has a neighborhood basis consisting of closed balanced neighborhoods of $0;$ if the space is locally convex then it also has a neighborhood basis consisting of closed convex balanced neighborhoods of the origin. Bounded subsets A subset $E$ of a topological vector space $X$ is bounded[13] if for every neighborhood $V$ of the origin there exists $t$ such that $E\subseteq tV$. The definition of boundedness can be weakened a bit; $E$ is bounded if and only if every countable subset of it is bounded. A set is bounded if and only if each of its subsequences is a bounded set.[14] Also, $E$ is bounded if and only if for every balanced neighborhood $V$ of the origin, there exists $t$ such that $E\subseteq tV.$ Moreover, when $X$ is locally convex, the boundedness can be characterized by seminorms: the subset $E$ is bounded if and only if every continuous seminorm $p$ is bounded on $E.$[15] Every totally bounded set is bounded.[14] If $M$ is a vector subspace of a TVS $X,$ then a subset of $M$ is bounded in $M$ if and only if it is bounded in $X.$[14] Metrizability Birkhoff–Kakutani theorem — If $(X,\tau )$ is a topological vector space then the following four conditions are equivalent:[16][note 3] 1. The origin $\{0\}$ is closed in $X$ and there is a countable basis of neighborhoods at the origin in $X.$ 2. $(X,\tau )$ is metrizable (as a topological space). 3. There is a translation-invariant metric on $X$ that induces the given topology $\tau $ on $X.$ 4. $(X,\tau )$ is a metrizable topological vector space.[note 4] By the Birkhoff–Kakutani theorem, it follows that there is an equivalent metric that is translation-invariant. A TVS is pseudometrizable if and only if it has a countable neighborhood basis at the origin, or equivalently, if and only if its topology is generated by an F-seminorm. A TVS is metrizable if and only if it is Hausdorff and pseudometrizable.
More strongly: a topological vector space is said to be normable if its topology can be induced by a norm. A topological vector space is normable if and only if it is Hausdorff and has a convex bounded neighborhood of the origin.[17] Let $\mathbb {K} $ be a non-discrete locally compact topological field, for example the real or complex numbers. A Hausdorff topological vector space over $\mathbb {K} $ is locally compact if and only if it is finite-dimensional, that is, isomorphic to $\mathbb {K} ^{n}$ for some natural number $n.$[18] Completeness and uniform structure Main article: Complete topological vector space The canonical uniformity[19] on a TVS $(X,\tau )$ is the unique translation-invariant uniformity that induces the topology $\tau $ on $X.$ Every TVS is assumed to be endowed with this canonical uniformity, which makes all TVSs into uniform spaces. This allows one to talk about related notions such as completeness, uniform convergence, Cauchy nets, and uniform continuity, etc., which are always assumed to be with respect to this uniformity (unless indicated otherwise). This implies that every Hausdorff topological vector space is Tychonoff.[20] A subset of a TVS is compact if and only if it is complete and totally bounded (for Hausdorff TVSs, a set being totally bounded is equivalent to it being precompact). But if the TVS is not Hausdorff then there exist compact subsets that are not closed. However, the closure of a compact subset of a non-Hausdorff TVS is again compact (so compact subsets are relatively compact). With respect to this uniformity, a net (or sequence) $x_{\bullet }=\left(x_{i}\right)_{i\in I}$ is Cauchy if and only if for every neighborhood $V$ of $0,$ there exists some index $n$ such that $x_{i}-x_{j}\in V$ whenever $i\geq n$ and $j\geq n.$ Every Cauchy sequence is bounded, although Cauchy nets and Cauchy filters may not be bounded.
A topological vector space where every Cauchy sequence converges is called sequentially complete; in general, it may not be complete (in the sense that all Cauchy filters converge). The vector space operation of addition is uniformly continuous and an open map. Scalar multiplication is Cauchy continuous but in general, it is almost never uniformly continuous. Because of this, every topological vector space can be completed and is thus a dense linear subspace of a complete topological vector space. • Every TVS has a completion and every Hausdorff TVS has a Hausdorff completion.[6] Every TVS (even those that are Hausdorff and/or complete) has infinitely many non-isomorphic non-Hausdorff completions. • A compact subset of a TVS (not necessarily Hausdorff) is complete.[21] A complete subset of a Hausdorff TVS is closed.[21] • If $C$ is a complete subset of a TVS then any subset of $C$ that is closed in $C$ is complete.[21] • A Cauchy sequence in a Hausdorff TVS $X$ is not necessarily relatively compact (that is, its closure in $X$ is not necessarily compact). • If a Cauchy filter in a TVS has an accumulation point $x$ then it converges to $x.$ • If a series $ \sum _{i=1}^{\infty }x_{i}$ converges[note 5] in a TVS $X$ then $x_{\bullet }\to 0$ in $X.$[22] Examples Finest and coarsest vector topology Let $X$ be a real or complex vector space. Trivial topology The trivial topology or indiscrete topology $\{X,\varnothing \}$ is always a TVS topology on any vector space $X$ and it is the coarsest TVS topology possible. An important consequence of this is that the intersection of any collection of TVS topologies on $X$ always contains a TVS topology. Any vector space (including those that are infinite dimensional) endowed with the trivial topology is a compact (and thus locally compact) complete pseudometrizable seminormable locally convex topological vector space. 
It is Hausdorff if and only if $\dim X=0.$ Finest vector topology There exists a TVS topology $\tau _{f}$ on $X,$ called the finest vector topology on $X,$ that is finer than every other TVS-topology on $X$ (that is, any TVS-topology on $X$ is necessarily a subset of $\tau _{f}$).[23][24] Every linear map from $\left(X,\tau _{f}\right)$ into another TVS is necessarily continuous. If $X$ has an uncountable Hamel basis then $\tau _{f}$ is not locally convex and not metrizable.[24] Cartesian products A Cartesian product of a family of topological vector spaces, when endowed with the product topology, is a topological vector space. Consider for instance the set $X$ of all functions $f:\mathbb {R} \to \mathbb {R} $ where $\mathbb {R} $ carries its usual Euclidean topology. This set $X$ is a real vector space (where addition and scalar multiplication are defined pointwise, as usual) that can be identified with (and indeed, is often defined to be) the Cartesian product $\mathbb {R} ^{\mathbb {R} },$ which carries the natural product topology. With this product topology, $X:=\mathbb {R} ^{\mathbb {R} }$ becomes a topological vector space whose topology is called the topology of pointwise convergence on $\mathbb {R} .$ The reason for this name is the following: if $\left(f_{n}\right)_{n=1}^{\infty }$ is a sequence (or more generally, a net) of elements in $X$ and if $f\in X$ then $f_{n}$ converges to $f$ in $X$ if and only if for every real number $x,$ $f_{n}(x)$ converges to $f(x)$ in $\mathbb {R} .$ This TVS is complete, Hausdorff, and locally convex but not metrizable and consequently not normable; indeed, every neighborhood of the origin in the product topology contains lines (that is, 1-dimensional vector subspaces, which are subsets of the form $\mathbb {R} f:=\{rf:r\in \mathbb {R} \}$ with $f\neq 0$). Finite-dimensional spaces By F.
Riesz's theorem, a Hausdorff topological vector space is finite-dimensional if and only if it is locally compact, which happens if and only if it has a compact neighborhood of the origin. Let $\mathbb {K} $ denote $\mathbb {R} $ or $\mathbb {C} $ and endow $\mathbb {K} $ with its usual Hausdorff normed Euclidean topology. Let $X$ be a vector space over $\mathbb {K} $ of finite dimension $n:=\dim X$ and so that $X$ is vector space isomorphic to $\mathbb {K} ^{n}$ (explicitly, this means that there exists a linear isomorphism between the vector spaces $X$ and $\mathbb {K} ^{n}$). This finite-dimensional vector space $X$ always has a unique Hausdorff vector topology, which makes it TVS-isomorphic to $\mathbb {K} ^{n},$ where $\mathbb {K} ^{n}$ is endowed with the usual Euclidean topology (which is the same as the product topology). This Hausdorff vector topology is also the (unique) finest vector topology on $X.$ $X$ has a unique vector topology if and only if $\dim X=0.$ If $\dim X\neq 0$ then although $X$ does not have a unique vector topology, it does have a unique Hausdorff vector topology. • If $\dim X=0$ then $X=\{0\}$ has exactly one vector topology: the trivial topology, which in this case (and only in this case) is Hausdorff. The trivial topology on a vector space is Hausdorff if and only if the vector space has dimension $0.$ • If $\dim X=1$ then $X$ has two vector topologies: the usual Euclidean topology and the (non-Hausdorff) trivial topology. • Since the field $\mathbb {K} $ is itself a $1$-dimensional topological vector space over $\mathbb {K} $ and since it plays an important role in the definition of topological vector spaces, this dichotomy plays an important role in the definition of an absorbing set and has consequences that reverberate throughout functional analysis. Proof outline The proof of this dichotomy (i.e. 
that a vector topology is either trivial or isomorphic to $\mathbb {K} $) is straightforward so only an outline with the important observations is given. As usual, $\mathbb {K} $ is assumed to have the (normed) Euclidean topology. Let $B_{r}:=\{a\in \mathbb {K} :|a|<r\}$ for all $r>0.$ Let $X$ be a $1$-dimensional vector space over $\mathbb {K} .$ If $S\subseteq X$ and $B\subseteq \mathbb {K} $ is a ball centered at $0$ then $B\cdot S=X$ whenever $S$ contains an "unbounded sequence", by which it is meant a sequence of the form $\left(a_{i}x\right)_{i=1}^{\infty }$ where $0\neq x\in X$ and $\left(a_{i}\right)_{i=1}^{\infty }\subseteq \mathbb {K} $ is unbounded in normed space $\mathbb {K} $ (in the usual sense). Any vector topology on $X$ will be translation invariant and invariant under non-zero scalar multiplication, and for every $0\neq x\in X,$ the map $M_{x}:\mathbb {K} \to X$ given by $M_{x}(a):=ax$ is a continuous linear bijection. Because $X=\mathbb {K} x$ for any such $x,$ every subset of $X$ can be written as $Fx=M_{x}(F)$ for some unique subset $F\subseteq \mathbb {K} .$ And if this vector topology on $X$ has a neighborhood $W$ of the origin that is not equal to all of $X,$ then the continuity of scalar multiplication $\mathbb {K} \times X\to X$ at the origin guarantees the existence of an open ball $B_{r}\subseteq \mathbb {K} $ centered at $0$ and an open neighborhood $S$ of the origin in $X$ such that $B_{r}\cdot S\subseteq W\neq X,$ which implies that $S$ does not contain any "unbounded sequence". This implies that for every $0\neq x\in X,$ there exists some positive integer $n$ such that $S\subseteq B_{n}x.$ From this, it can be deduced that if $X$ does not carry the trivial topology and if $0\neq x\in X,$ then for any ball $B\subseteq \mathbb {K} $ centered at $0$ in $\mathbb {K} ,$ $M_{x}(B)=Bx$ contains an open neighborhood of the origin in $X,$ which then proves that $M_{x}$ is a linear homeomorphism.
$\blacksquare $ • If $\dim X=n\geq 2$ then $X$ has infinitely many distinct vector topologies: • Some of these topologies are now described: Every linear functional $f$ on $X,$ which is vector space isomorphic to $\mathbb {K} ^{n},$ induces a seminorm $|f|:X\to \mathbb {R} $ defined by $|f|(x)=|f(x)|$ where $\ker f=\ker |f|.$ Every seminorm induces a (pseudometrizable locally convex) vector topology on $X$ and seminorms with distinct kernels induce distinct topologies so that in particular, seminorms on $X$ that are induced by linear functionals with distinct kernels will induce distinct vector topologies on $X.$ • However, while there are infinitely many vector topologies on $X$ when $\dim X\geq 2,$ there are, up to TVS-isomorphism, only $1+\dim X$ vector topologies on $X.$ For instance, if $n:=\dim X=2$ then the vector topologies on $X$ consist of the trivial topology, the Hausdorff Euclidean topology, and then the infinitely many remaining non-trivial non-Euclidean vector topologies on $X$ are all TVS-isomorphic to one another. Non-vector topologies Discrete and cofinite topologies If $X$ is a non-trivial vector space (that is, of non-zero dimension) then the discrete topology on $X$ (which is always metrizable) is not a TVS topology because despite making addition and negation continuous (which makes it into a topological group under addition), it fails to make scalar multiplication continuous. The cofinite topology on $X$ (where a subset is open if and only if its complement is finite) is also not a TVS topology on $X.$ Linear maps A linear operator between two topological vector spaces which is continuous at one point is continuous on the whole domain. Moreover, a linear operator $f$ is continuous if $f(W)$ is bounded (as defined above) for some neighborhood $W$ of the origin. A hyperplane in a topological vector space $X$ is either dense or closed. A linear functional $f$ on a topological vector space $X$ has either dense or closed kernel.
Moreover, $f$ is continuous if and only if its kernel is closed. Types Depending on the application, additional constraints are usually enforced on the topological structure of the space. In fact, several principal results in functional analysis fail to hold in general for topological vector spaces: the closed graph theorem, the open mapping theorem, and the fact that the dual space of the space separates points in the space. Below are some common topological vector spaces, roughly in order of increasing "niceness." • F-spaces are complete topological vector spaces with a translation-invariant metric.[25] These include $L^{p}$ spaces for all $p>0.$ • Locally convex topological vector spaces: here each point has a local base consisting of convex sets.[25] By a technique known as Minkowski functionals it can be shown that a space is locally convex if and only if its topology can be defined by a family of seminorms.[26] Local convexity is the minimum requirement for "geometrical" arguments like the Hahn–Banach theorem. The $L^{p}$ spaces are locally convex (in fact, Banach spaces) for all $p\geq 1,$ but not for $0<p<1.$ • Barrelled spaces: locally convex spaces where the Banach–Steinhaus theorem holds. • Bornological space: a locally convex space where the continuous linear operators to any locally convex space are exactly the bounded linear operators. • Stereotype space: a locally convex space satisfying a variant of the reflexivity condition, where the dual space is endowed with the topology of uniform convergence on totally bounded sets. • Montel space: a barrelled space where every closed and bounded set is compact. • Fréchet spaces: these are complete locally convex spaces where the topology comes from a translation-invariant metric, or equivalently, from a countable family of seminorms.
Many interesting spaces of functions fall into this class -- $C^{\infty }(\mathbb {R} )$ is a Fréchet space under the seminorms $ \|f\|_{k,\ell }=\sup _{x\in [-k,k]}|f^{(\ell )}(x)|.$ A locally convex F-space is a Fréchet space.[25] • LF-spaces are limits of Fréchet spaces. ILH spaces are inverse limits of Hilbert spaces. • Nuclear spaces: these are locally convex spaces with the property that every bounded map from the nuclear space to an arbitrary Banach space is a nuclear operator. • Normed spaces and seminormed spaces: locally convex spaces where the topology can be described by a single norm or seminorm. In normed spaces a linear operator is continuous if and only if it is bounded. • Banach spaces: Complete normed vector spaces. Most of functional analysis is formulated for Banach spaces. This class includes the $L^{p}$ spaces with $1\leq p\leq \infty ,$ the space $BV$ of functions of bounded variation, and certain spaces of measures. • Reflexive Banach spaces: Banach spaces naturally isomorphic to their double dual (see below), which ensures that some geometrical arguments can be carried out. An important example which is not reflexive is $L^{1}$, whose dual is $L^{\infty }$ but is strictly contained in the dual of $L^{\infty }.$ • Hilbert spaces: these have an inner product; even though these spaces may be infinite-dimensional, most geometrical reasoning familiar from finite dimensions can be carried out in them. These include $L^{2}$ spaces, the $L^{2}$ Sobolev spaces $W^{2,k},$ and Hardy spaces. • Euclidean spaces: $\mathbb {R} ^{n}$ or $\mathbb {C} ^{n}$ with the topology induced by the standard inner product. As pointed out in the preceding section, for a given finite $n,$ there is only one $n$-dimensional topological vector space, up to isomorphism. It follows from this that any finite-dimensional subspace of a TVS is closed. 
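For concreteness, the countable family of seminorms $\|f\|_{k,\ell }=\sup _{x\in [-k,k]}|f^{(\ell )}(x)|$ defining the Fréchet topology of $C^{\infty }(\mathbb {R} )$ can be approximated numerically. The following is only an illustrative sketch: the choice $f=\sin ,$ the grid resolution, and the finite-difference derivative are assumptions made for the example, not part of the theory.

```python
import math

def frechet_seminorm(f, k, ell, num=20001):
    """Approximate ||f||_{k,l} = sup over [-k, k] of |f^(l)(x)| on a uniform
    grid, using repeated central finite differences for the derivative."""
    h = 2 * k / (num - 1)
    xs = [-k + i * h for i in range(num)]
    ys = [f(x) for x in xs]
    for _ in range(ell):
        # central differences in the interior, one-sided at the two ends
        ys = ([(ys[1] - ys[0]) / h]
              + [(ys[i + 1] - ys[i - 1]) / (2 * h) for i in range(1, num - 1)]
              + [(ys[-1] - ys[-2]) / h])
    return max(abs(y) for y in ys)

# For f = sin, every derivative is +/-sin or +/-cos, so ||sin||_{k,l} is
# close to 1 as soon as [-k, k] contains an extremum of the l-th derivative.
for k in (1, 2):
    for ell in (0, 1):
        print(k, ell, round(frechet_seminorm(math.sin, k, ell), 4))
```

Note that for fixed $\ell $ the seminorms increase with $k$; the family is countable and directed, which is what allows the topology to be described by a translation-invariant metric.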
A characterization of finite dimensionality is that a Hausdorff TVS is locally compact if and only if it is finite-dimensional (and therefore isomorphic to some Euclidean space). Dual space Main articles: Algebraic dual space, Continuous dual space, and Strong dual space Every topological vector space has a continuous dual space: the set $X'$ of all continuous linear functionals, that is, continuous linear maps from the space into the base field $\mathbb {K} .$ A topology on the dual can be defined to be the coarsest topology such that every point evaluation $X'\to \mathbb {K} ,$ $f\mapsto f(x)$ for $x\in X,$ is continuous. This turns the dual into a locally convex topological vector space. This topology is called the weak-* topology.[27] It may not be the only natural topology on the dual space; for instance, the dual of a normed space has a natural norm defined on it. However, it is very important in applications because of its compactness properties (see Banach–Alaoglu theorem). Caution: Whenever $X$ is a non-normable locally convex space, then the pairing map $X'\times X\to \mathbb {K} $ is never continuous, no matter which vector space topology one chooses on $X'.$ A topological vector space has a non-trivial continuous dual space if and only if it has a proper convex neighborhood of the origin.[28] Properties See also: Locally convex topological vector space § Properties For any subset $S\subseteq X$ of a TVS $X,$ the convex (resp. balanced, disked, closed convex, closed balanced, closed disked) hull of $S$ is the smallest subset of $X$ that has this property and contains $S.$ The closure (respectively, interior, convex hull, balanced hull, disked hull) of a set $S$ is sometimes denoted by $\operatorname {cl} _{X}S$ (respectively, $\operatorname {Int} _{X}S,$ $\operatorname {co} S,$ $\operatorname {bal} S,$ $\operatorname {cobal} S$).
The convex hull $\operatorname {co} S$ of a subset $S$ is equal to the set of all convex combinations of elements in $S,$ which are finite linear combinations of the form $t_{1}s_{1}+\cdots +t_{n}s_{n}$ where $n\geq 1$ is an integer, $s_{1},\ldots ,s_{n}\in S$ and $t_{1},\ldots ,t_{n}\in [0,1]$ sum to $1.$[29] The intersection of any family of convex sets is convex and the convex hull of a subset is equal to the intersection of all convex sets that contain it.[29] Neighborhoods and open sets Properties of neighborhoods and open sets Every TVS is connected[6] and locally connected[30] and any connected open subset of a TVS is arcwise connected. If $S\subseteq X$ and $U$ is an open subset of $X$ then $S+U$ is an open set in $X$[6] and if $S\subseteq X$ has non-empty interior then $S-S$ is a neighborhood of the origin.[6] The open convex subsets of a TVS $X$ (not necessarily Hausdorff or locally convex) are exactly those that are of the form $z+\{x\in X:p(x)<1\}~=~\{x\in X:p(x-z)<1\}$ for some $z\in X$ and some positive continuous sublinear functional $p$ on $X.$[28] If $K$ is an absorbing disk in a TVS $X$ and if $p:=p_{K}$ is the Minkowski functional of $K$ then[31] $\operatorname {Int} _{X}K~\subseteq ~\{x\in X:p(x)<1\}~\subseteq ~K~\subseteq ~\{x\in X:p(x)\leq 1\}~\subseteq ~\operatorname {cl} _{X}K$ where importantly, it was not assumed that $K$ had any topological properties nor that $p$ was continuous (which happens if and only if $K$ is a neighborhood of the origin). 
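The chain $\operatorname {Int} _{X}K\subseteq \{p<1\}\subseteq K\subseteq \{p\leq 1\}\subseteq \operatorname {cl} _{X}K$ above can be made concrete in $\mathbb {R} ^{2}.$ The following sketch computes the Minkowski functional $p_{K}(x)=\inf\{t>0:x\in tK\}$ by bisection; the choice of $K$ (the closed unit square, an absorbing disk whose Minkowski functional is the sup norm) and the bisection bounds are assumptions made for the example.

```python
def minkowski(x, in_K, t_max=100.0, tol=1e-9):
    """p_K(x) = inf{t > 0 : x in t*K}, found by bisection.

    Assumes K is absorbing (so x lies in t*K for all large t <= t_max) and a
    disk (convex and balanced), which makes membership of x/t monotone in t."""
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if in_K([c / mid for c in x]):
            hi = mid
        else:
            lo = mid
    return hi

# K = the closed unit square [-1,1]^2, an absorbing disk in R^2; its
# Minkowski functional is the sup norm p_K(x) = max(|x1|, |x2|).
in_square = lambda v: max(abs(c) for c in v) <= 1.0

for x in [(0.5, 0.2), (2.0, -3.0), (1.0, 1.0)]:
    print(x, round(minkowski(x, in_square), 6))  # agrees with max(|x1|, |x2|)
```

For this closed $K$ the sandwich collapses to equalities: $\{p<1\}$ is the open square $\operatorname {Int} _{X}K$ and $\{p\leq 1\}=K=\operatorname {cl} _{X}K,$ consistent with $p$ being continuous exactly when $K$ is a neighborhood of the origin.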
Let $\tau $ and $\nu $ be two vector topologies on $X.$ Then $\tau \subseteq \nu $ if and only if whenever a net $x_{\bullet }=\left(x_{i}\right)_{i\in I}$ in $X$ converges to $0$ in $(X,\nu )$ then $x_{\bullet }\to 0$ in $(X,\tau ).$[32] Let ${\mathcal {N}}$ be a neighborhood basis of the origin in $X,$ let $S\subseteq X,$ and let $x\in X.$ Then $x\in \operatorname {cl} _{X}S$ if and only if there exists a net $s_{\bullet }=\left(s_{N}\right)_{N\in {\mathcal {N}}}$ in $S$ (indexed by ${\mathcal {N}}$) such that $s_{\bullet }\to x$ in $X.$[33] This shows, in particular, that it will often suffice to consider nets indexed by a neighborhood basis of the origin rather than nets on arbitrary directed sets. If $X$ is a TVS that is of the second category in itself (that is, a nonmeager space) then any closed convex absorbing subset of $X$ is a neighborhood of the origin.[34] This is no longer guaranteed if the set is not convex (a counter-example exists even in $X=\mathbb {R} ^{2}$) or if $X$ is not of the second category in itself.[34] Interior If $R,S\subseteq X$ and $S$ has non-empty interior then $\operatorname {Int} _{X}S~=~\operatorname {Int} _{X}\left(\operatorname {cl} _{X}S\right)~{\text{ and }}~\operatorname {cl} _{X}S~=~\operatorname {cl} _{X}\left(\operatorname {Int} _{X}S\right)$ and $\operatorname {Int} _{X}(R)+\operatorname {Int} _{X}(S)~\subseteq ~R+\operatorname {Int} _{X}S\subseteq \operatorname {Int} _{X}(R+S).$ The topological interior of a disk is not empty if and only if this interior contains the origin.[35] More generally, if $S$ is a balanced set with non-empty interior $\operatorname {Int} _{X}S\neq \varnothing $ in a TVS $X$ then $\{0\}\cup \operatorname {Int} _{X}S$ will necessarily be balanced;[6] consequently, $\operatorname {Int} _{X}S$ will be balanced if and only if it contains the origin.[proof 2] For this (i.e.
$0\in \operatorname {Int} _{X}S$) to be true, it suffices for $S$ to also be convex (in addition to being balanced and having non-empty interior).[6] The conclusion $0\in \operatorname {Int} _{X}S$ could be false if $S$ is not also convex;[35] for example, in $X:=\mathbb {R} ^{2},$ the interior of the closed and balanced set $S:=\{(x,y):xy\geq 0\}$ is $\{(x,y):xy>0\}.$ If $C$ is convex and $0<t\leq 1,$ then[36] $t\operatorname {Int} C+(1-t)\operatorname {cl} C~\subseteq ~\operatorname {Int} C.$ Explicitly, this means that if $C$ is a convex subset of a TVS $X$ (not necessarily Hausdorff or locally convex), $y\in \operatorname {int} _{X}C,$ and $x\in \operatorname {cl} _{X}C$ then the open line segment joining $x$ and $y$ belongs to the interior of $C;$ that is, $\{tx+(1-t)y:0<t<1\}\subseteq \operatorname {int} _{X}C.$[37][38][proof 3] If $N\subseteq X$ is any balanced neighborhood of the origin in $X$ then $ \operatorname {Int} _{X}N\subseteq B_{1}N=\bigcup _{0<|a|<1}aN\subseteq N$ where $B_{1}$ is the set of all scalars $a$ such that $|a|<1.$ If $x$ belongs to the interior of a convex set $S\subseteq X$ and $y\in \operatorname {cl} _{X}S,$ then the half-open line segment $[x,y):=\{tx+(1-t)y:0<t\leq 1\}\subseteq \operatorname {Int} _{X}S{\text{ if }}x\neq y$ and[37] $[x,x)=\varnothing {\text{ if }}x=y.$ If $N$ is a balanced neighborhood of $0$ in $X$ and $B_{1}:=\{a\in \mathbb {K} :|a|<1\},$ then by considering intersections of the form $N\cap \mathbb {R} x$ (which are convex symmetric neighborhoods of $0$ in the real TVS $\mathbb {R} x$) it follows that: $\operatorname {Int} N=[0,1)\operatorname {Int} N=(-1,1)N=B_{1}N,$ and furthermore, if $x\in \operatorname {Int} N$ and $r:=\sup\{t>0:[0,t)x\subseteq N\}$ then $r>1{\text{ and }}[0,r)x\subseteq \operatorname {Int} N,$ and if $r\neq \infty $ then $rx\in \operatorname {cl} N\setminus \operatorname {Int} N.$ Non-Hausdorff spaces and the closure of the origin A topological vector space $X$ is
Hausdorff if and only if $\{0\}$ is a closed subset of $X,$ or equivalently, if and only if $\{0\}=\operatorname {cl} _{X}\{0\}.$ Because $\{0\}$ is a vector subspace of $X,$ the same is true of its closure $\operatorname {cl} _{X}\{0\},$ which is referred to as the closure of the origin in $X.$ This vector space satisfies $\operatorname {cl} _{X}\{0\}=\bigcap _{N\in {\mathcal {N}}(0)}N$ so that in particular, every neighborhood of the origin in $X$ contains the vector space $\operatorname {cl} _{X}\{0\}$ as a subset. The subspace topology on $\operatorname {cl} _{X}\{0\}$ is always the trivial topology, which in particular implies that the topological vector space $\operatorname {cl} _{X}\{0\}$ is a compact space (even if its dimension is non-zero or even infinite) and consequently also a bounded subset of $X.$ In fact, a vector subspace of a TVS is bounded if and only if it is contained in the closure of $\{0\}.$[14] Every subset of $\operatorname {cl} _{X}\{0\}$ also carries the trivial topology and so is itself a compact, and thus also complete, subspace (see footnote for a proof).[proof 4] In particular, if $X$ is not Hausdorff then there exist subsets that are both compact and complete but not closed in $X$;[39] for instance, this will be true of any non-empty proper subset of $\operatorname {cl} _{X}\{0\}.$ If $S\subseteq X$ is compact, then $\operatorname {cl} _{X}S=S+\operatorname {cl} _{X}\{0\}$ and this set is compact.
Thus the closure of a compact subset of a TVS is compact (said differently, all compact sets are relatively compact),[40] which is not guaranteed for arbitrary non-Hausdorff topological spaces.[note 6] For every subset $S\subseteq X,$ $S+\operatorname {cl} _{X}\{0\}\subseteq \operatorname {cl} _{X}S$ and consequently, if $S\subseteq X$ is open or closed in $X$ then $S+\operatorname {cl} _{X}\{0\}=S$[proof 5] (so that any such open or closed subset $S$ can be described as a "tube" whose vertical side is the vector space $\operatorname {cl} _{X}\{0\}$). For any subset $S\subseteq X$ of this TVS $X,$ the following are equivalent: • $S$ is totally bounded. • $S+\operatorname {cl} _{X}\{0\}$ is totally bounded.[41] • $\operatorname {cl} _{X}S$ is totally bounded.[42][43] • The image of $S$ under the canonical quotient map $X\to X/\operatorname {cl} _{X}(\{0\})$ is totally bounded.[41] If $M$ is a vector subspace of a TVS $X$ then $X/M$ is Hausdorff if and only if $M$ is closed in $X.$ Moreover, the quotient map $q:X\to X/\operatorname {cl} _{X}\{0\}$ is always a closed map onto the (necessarily Hausdorff) TVS.[44] Every vector subspace of $X$ that is an algebraic complement of $\operatorname {cl} _{X}\{0\}$ (that is, a vector subspace $H$ that satisfies $\{0\}=H\cap \operatorname {cl} _{X}\{0\}$ and $X=H+\operatorname {cl} _{X}\{0\}$) is a topological complement of $\operatorname {cl} _{X}\{0\}.$ Consequently, if $H$ is an algebraic complement of $\operatorname {cl} _{X}\{0\}$ in $X$ then the addition map $H\times \operatorname {cl} _{X}\{0\}\to X,$ defined by $(h,n)\mapsto h+n$ is a TVS-isomorphism, where $H$ is necessarily Hausdorff and $\operatorname {cl} _{X}\{0\}$ has the indiscrete topology.[45] Moreover, if $C$ is a Hausdorff completion of $H$ then $C\times \operatorname {cl} _{X}\{0\}$ is a completion of $X\cong H\times \operatorname {cl} _{X}\{0\}.$[41] Closed and compact sets Compact and totally bounded sets A subset of a TVS is compact if and only if
it is complete and totally bounded.[39] Thus, in a complete topological vector space, a closed and totally bounded subset is compact.[39] A subset $S$ of a TVS $X$ is totally bounded if and only if $\operatorname {cl} _{X}S$ is totally bounded,[42][43] if and only if its image under the canonical quotient map $X\to X/\operatorname {cl} _{X}(\{0\})$ is totally bounded.[41] Every relatively compact set is totally bounded[39] and the closure of a totally bounded set is totally bounded.[39] The image of a totally bounded set under a uniformly continuous map (such as a continuous linear map for instance) is totally bounded.[39] If $S$ is a subset of a TVS $X$ such that every sequence in $S$ has a cluster point in $S$ then $S$ is totally bounded.[41] If $K$ is a compact subset of a TVS $X$ and $U$ is an open subset of $X$ containing $K,$ then there exists a neighborhood $N$ of 0 such that $K+N\subseteq U.$[46] Closure and closed set The closure of any convex (respectively, any balanced, any absorbing) subset of any TVS has this same property. In particular, the closure of any convex, balanced, and absorbing subset is a barrel. The closure of a vector subspace of a TVS is a vector subspace. Every finite dimensional vector subspace of a Hausdorff TVS is closed. The sum of a closed vector subspace and a finite-dimensional vector subspace is closed.[6] If $M$ is a vector subspace of $X$ and $N$ is a closed neighborhood of the origin in $X$ such that $M\cap N$ is closed in $X$ then $M$ is closed in $X.$[46] The sum of a compact set and a closed set is closed. However, the sum of two closed subsets may fail to be closed[6] (see this footnote[note 7] for examples).
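One of the footnote's examples can be checked numerically: $\mathbb {Z} $ and ${\sqrt {2}}\mathbb {Z} $ are both closed in $\mathbb {R} ,$ yet their Minkowski sum $\mathbb {Z} +{\sqrt {2}}\mathbb {Z} $ comes arbitrarily close to every real number, so it is dense and hence not closed. The following brute-force sketch (the target $1/2$ and the search bounds are arbitrary choices for the illustration) shows the approximation error shrinking as the search widens.

```python
import math

def best_approx(target, bound):
    """Closest point of Z + sqrt(2)*Z to `target`, searching |n| <= bound."""
    best = None
    for n in range(-bound, bound + 1):
        m = round(target - n * math.sqrt(2))  # best integer partner for this n
        candidate = m + n * math.sqrt(2)
        if best is None or abs(candidate - target) < abs(best - target):
            best = candidate
    return best

# Z and sqrt(2)*Z are each closed in R, yet their sum is a countable dense
# subgroup of R (hence not closed): the error to 1/2 keeps shrinking.
for bound in (10, 100, 1000):
    print(bound, abs(best_approx(0.5, bound) - 0.5))
```

This is exactly why the hypothesis in "the sum of a compact set and a closed set is closed" cannot be weakened to two closed sets.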
If $S\subseteq X$ and $a$ is a scalar then $a\operatorname {cl} _{X}S\subseteq \operatorname {cl} _{X}(aS),$ where if $X$ is Hausdorff, $a\neq 0,{\text{ or }}S=\varnothing $ then equality holds: $\operatorname {cl} _{X}(aS)=a\operatorname {cl} _{X}S.$ In particular, every non-zero scalar multiple of a closed set is closed. If $S\subseteq X$ and if $A$ is a set of scalars such that neither $\operatorname {cl} S{\text{ nor }}\operatorname {cl} A$ contain zero then[47] $\left(\operatorname {cl} A\right)\left(\operatorname {cl} _{X}S\right)=\operatorname {cl} _{X}(AS).$ If $S\subseteq X{\text{ and }}S+S\subseteq 2\operatorname {cl} _{X}S$ then $\operatorname {cl} _{X}S$ is convex.[47] If $R,S\subseteq X$ then[6] $\operatorname {cl} _{X}(R)+\operatorname {cl} _{X}(S)~\subseteq ~\operatorname {cl} _{X}(R+S)~{\text{ and }}~\operatorname {cl} _{X}\left[\operatorname {cl} _{X}(R)+\operatorname {cl} _{X}(S)\right]~=~\operatorname {cl} _{X}(R+S)$ and so consequently, if $R+S$ is closed then so is $\operatorname {cl} _{X}(R)+\operatorname {cl} _{X}(S).$[47] If $X$ is a real TVS and $S\subseteq X,$ then $\bigcap _{r>1}rS\subseteq \operatorname {cl} _{X}S$ where the left hand side is independent of the topology on $X;$ moreover, if $S$ is a convex neighborhood of the origin then equality holds. For any subset $S\subseteq X,$ $\operatorname {cl} _{X}S~=~\bigcap _{N\in {\mathcal {N}}}(S+N)$ where ${\mathcal {N}}$ is any neighborhood basis at the origin for $X.$[48] However, $\operatorname {cl} _{X}S~\supseteq ~\bigcap \{U:S\subseteq U,U{\text{ is open in }}X\}$ and it is possible for this containment to be proper[49] (for example, if $X=\mathbb {R} $ and $S$ is the rational numbers). It follows that $\operatorname {cl} _{X}U\subseteq U+U$ for every neighborhood $U$ of the origin in $X.$[50] Closed hulls In a locally convex space, convex hulls of bounded sets are bounded.
This is not true for TVSs in general.[14] • The closed convex hull of a set is equal to the closure of the convex hull of that set; that is, equal to $\operatorname {cl} _{X}(\operatorname {co} S).$[6] • The closed balanced hull of a set is equal to the closure of the balanced hull of that set; that is, equal to $\operatorname {cl} _{X}(\operatorname {bal} S).$[6] • The closed disked hull of a set is equal to the closure of the disked hull of that set; that is, equal to $\operatorname {cl} _{X}(\operatorname {cobal} S).$[51] If $R,S\subseteq X$ and the closed convex hull of one of the sets $S$ or $R$ is compact then[51] $\operatorname {cl} _{X}(\operatorname {co} (R+S))~=~\operatorname {cl} _{X}(\operatorname {co} R)+\operatorname {cl} _{X}(\operatorname {co} S).$ If $R,S\subseteq X$ each have a closed convex hull that is compact (that is, $\operatorname {cl} _{X}(\operatorname {co} R)$ and $\operatorname {cl} _{X}(\operatorname {co} S)$ are compact) then[51] $\operatorname {cl} _{X}(\operatorname {co} (R\cup S))~=~\operatorname {co} \left[\operatorname {cl} _{X}(\operatorname {co} R)\cup \operatorname {cl} _{X}(\operatorname {co} S)\right].$ Hulls and compactness In a general TVS, the closed convex hull of a compact set may fail to be compact. The balanced hull of a compact (respectively, totally bounded) set has that same property.[6] The convex hull of a finite union of compact convex sets is again compact and convex.[6] Other properties Meager, nowhere dense, and Baire A disk in a TVS is not nowhere dense if and only if its closure is a neighborhood of the origin.[9] A vector subspace of a TVS that is closed but not open is nowhere dense.[9] Suppose $X$ is a TVS that does not carry the indiscrete topology. 
Then $X$ is a Baire space if and only if $X$ has no balanced absorbing nowhere dense subset.[9] A TVS $X$ is a Baire space if and only if $X$ is nonmeager, which happens if and only if there does not exist a nowhere dense set $D$ such that $ X=\bigcup _{n\in \mathbb {N} }nD.$[9] Every nonmeager locally convex TVS is a barrelled space.[9] Important algebraic facts and common misconceptions If $S\subseteq X$ then $2S\subseteq S+S$; if $S$ is convex then equality holds. For an example where equality does not hold, let $x$ be non-zero and set $S=\{-x,x\};$ $S=\{x,2x\}$ also works. A subset $C$ is convex if and only if $(s+t)C=sC+tC$ for all positive real $s>0{\text{ and }}t>0,$[29] or equivalently, if and only if $tC+(1-t)C\subseteq C$ for all $0\leq t\leq 1.$[52] The convex balanced hull of a set $S\subseteq X$ is equal to the convex hull of the balanced hull of $S;$ that is, it is equal to $\operatorname {co} (\operatorname {bal} S).$ But in general, $\operatorname {bal} (\operatorname {co} S)~\subseteq ~\operatorname {cobal} S~=~\operatorname {co} (\operatorname {bal} S),$ where the inclusion might be strict since the balanced hull of a convex set need not be convex (counter-examples exist even in $\mathbb {R} ^{2}$). 
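The inclusion $2S\subseteq S+S$ and its failure to be an equality for non-convex $S$ can be verified directly for finite subsets of $\mathbb {R} .$ The following sketch uses the counterexample $S=\{-x,x\}$ (with $x=1$) from above; the dense-grid comparison at the end is only a finite stand-in for a convex set.

```python
def double(S):
    """The set 2S = {2s : s in S}."""
    return {2 * s for s in S}

def sumset(S, T):
    """The Minkowski sum S + T = {s + t : s in S, t in T}."""
    return {s + t for s in S for t in T}

S = {-1.0, 1.0}        # the counterexample S = {-x, x} with x = 1
print(sorted(double(S)))      # 2S  = {-2, 2}
print(sorted(sumset(S, S)))   # S+S = {-2, 0, 2}, strictly larger

# For a finite stand-in for a convex set, a uniform grid on [0, 1]:
# 2S is a subset of S + S, mirroring the equality that holds for convex sets.
grid = {i / 100 for i in range(101)}
assert double(grid) <= sumset(grid, grid)
```

Here $2S=\{-2,2\}$ while $S+S=\{-2,0,2\},$ so the inclusion is strict exactly because the midpoint $0=\tfrac {1}{2}(-x)+\tfrac {1}{2}x$ is a convex combination that $S$ itself is missing.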
If $R,S\subseteq X$ and $a$ is a scalar then[6] $a(R+S)=aR+aS,~{\text{ and }}~\operatorname {co} (R+S)=\operatorname {co} R+\operatorname {co} S,~{\text{ and }}~\operatorname {co} (aS)=a\operatorname {co} S.$ If $R,S\subseteq X$ are convex non-empty disjoint sets and $x\not \in R\cup S,$ then $S\cap \operatorname {co} (R\cup \{x\})=\varnothing $ or $R\cap \operatorname {co} (S\cup \{x\})=\varnothing .$ In any non-trivial vector space $X,$ there exist two disjoint non-empty convex subsets whose union is $X.$ Other properties Every TVS topology can be generated by a family of F-seminorms.[53] If $P(x)$ is some unary predicate (a true or false statement dependent on $x\in X$) then for any $z\in X,$ $z+\{x\in X:P(x)\}=\{x\in X:P(x-z)\}.$[proof 6] So for example, if $P(x)$ denotes "$\|x\|<1$" then for any $z\in X,$ $z+\{x\in X:\|x\|<1\}=\{x\in X:\|x-z\|<1\}.$ Similarly, if $s\neq 0$ is a scalar then $s\{x\in X:P(x)\}=\left\{x\in X:P\left({\tfrac {1}{s}}x\right)\right\}.$ The elements $x\in X$ of these sets must range over a vector space (that is, over $X$) rather than just a subset, or else these equalities are no longer guaranteed; similarly, $z$ must belong to this vector space (that is, $z\in X$). Properties preserved by set operators • The balanced hull of a compact (respectively, totally bounded, open) set has that same property.[6] • The (Minkowski) sum of two compact (respectively, bounded, balanced, convex) sets has that same property.[6] But the sum of two closed sets need not be closed. • The convex hull of a balanced (resp. open) set is balanced (respectively, open). However, the convex hull of a closed set need not be closed.[6] And the convex hull of a bounded set need not be bounded. In the following table, the color of each cell indicates whether or not a given property of subsets of $X$ (indicated by the column name, "convex" for instance) is preserved under the set operator (indicated by the row's name, "closure" for instance).
If, in every TVS, a property is preserved under the indicated set operator, then that cell is colored green; otherwise, it is colored red. So for instance, since the union of two absorbing sets is again absorbing, the cell in row "$R\cup S$" and column "Absorbing" is colored green. But since the arbitrary intersection of absorbing sets need not be absorbing, the cell in row "Arbitrary intersections (of at least 1 set)" and column "Absorbing" is colored red. If a cell is not colored then that information has yet to be filled in. [Table "Properties preserved by set operators": its rows are the operations $R\cup S$; union of an increasing nonempty chain; arbitrary unions (of at least 1 set); $R\cap S$; intersection of a decreasing nonempty chain; arbitrary intersections (of at least 1 set); $R+S$; scalar multiple; non-0 scalar multiple; positive scalar multiple; closure; interior; balanced core; balanced hull; convex hull; convex balanced hull; closed balanced hull; closed convex hull; closed convex balanced hull; linear span; pre-image under a continuous linear map; image under a continuous linear map; image under a continuous linear surjection; non-empty subset of $R.$ Its columns are the properties absorbing; balanced; convex; symmetric; convex balanced; vector subspace; open; neighborhood of 0; closed; closed balanced; closed convex; closed convex balanced; barrel; closed vector subspace; totally bounded; compact; compact convex; relatively compact; complete; sequentially complete; Banach disk; bounded; bornivorous; infrabornivorous; nowhere dense (in $X$); meager; separable; pseudometrizable. The cell colors themselves are not reproduced here.] See also • Banach space – Normed vector space that is complete • Complete field – Algebraic structure that is complete relative to a metric • Hilbert space – Type of topological vector space • Normed space – Vector space on which a distance is defined • Locally compact field • Locally compact group – Topological group whose underlying topology is locally compact and Hausdorff, so that the Haar measure can be defined • Locally compact quantum group – C*-algebraic approach toward quantum groups • Locally convex topological vector space – A vector space with a topology defined by convex open sets • Ordered topological vector space • Topological abelian group – Abelian group with a compatible topology • Topological field – Algebraic structure with addition, multiplication, and division • Topological group – Group that is a topological space with continuous group action • Topological module • Topological ring – Ring where ring operations are continuous • Topological semigroup – Semigroup with continuous operation •
Topological vector lattice Notes 1. The topological properties of course also require that $X$ be a TVS. 2. In particular, $X$ is Hausdorff if and only if the set $\{0\}$ is closed (that is, $X$ is a T1 space). 3. In fact, this is true for topological groups, since the proof does not use scalar multiplication. 4. Also called a metric linear space, which means that it is a real or complex vector space together with a translation-invariant metric for which addition and scalar multiplication are continuous. 5. A series $ \sum _{i=1}^{\infty }x_{i}$ is said to converge in a TVS $X$ if the sequence of partial sums converges. 6. In general topology, the closure of a compact subset of a non-Hausdorff space may fail to be compact (for example, the particular point topology on an infinite set). This result shows that this does not happen in non-Hausdorff TVSs. $S+\operatorname {cl} _{X}\{0\}$ is compact because it is the image of the compact set $S\times \operatorname {cl} _{X}\{0\}$ under the continuous addition map $\cdot \,+\,\cdot \;:X\times X\to X.$ Recall also that the sum of a compact set (that is, $S$) and a closed set is closed so $S+\operatorname {cl} _{X}\{0\}$ is closed in $X.$ 7. In $\mathbb {R} ^{2},$ the sum of the $y$-axis and the graph of $y={\frac {1}{x}},$ which is the complement of the $y$-axis, is open in $\mathbb {R} ^{2}.$ In $\mathbb {R} ,$ the Minkowski sum $\mathbb {Z} +{\sqrt {2}}\mathbb {Z} $ is a countable dense subset of $\mathbb {R} $ so not closed in $\mathbb {R} .$ Proofs 1. This condition is satisfied if $\mathbb {S} $ denotes the set of all topological strings in $(X,\tau ).$ 2. This is because every non-empty balanced set must contain the origin and because $0\in \operatorname {Int} _{X}S$ if and only if $\operatorname {Int} _{X}S=\{0\}\cup \operatorname {Int} _{X}S.$ 3.
Fix $0<r<1$ so it remains to show that $w_{0}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~rx+(1-r)y$ belongs to $\operatorname {int} _{X}C.$ By replacing $C,x,y$ with $C-w_{0},x-w_{0},y-w_{0}$ if necessary, we may assume without loss of generality that $rx+(1-r)y=0,$ and so it remains to show that $C$ is a neighborhood of the origin. Let $s~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\tfrac {r}{r-1}}<0$ so that $y={\tfrac {r}{r-1}}x=sx.$ Since scalar multiplication by $s\neq 0$ is a linear homeomorphism $X\to X,$ $\operatorname {cl} _{X}\left({\tfrac {1}{s}}C\right)={\tfrac {1}{s}}\operatorname {cl} _{X}C.$ Since $x\in \operatorname {int} C$ and $y\in \operatorname {cl} C,$ it follows that $x={\tfrac {1}{s}}y\in \operatorname {cl} \left({\tfrac {1}{s}}C\right)\cap \operatorname {int} C$ where because $\operatorname {int} C$ is open, there exists some $c_{0}\in \left({\tfrac {1}{s}}C\right)\cap \operatorname {int} C,$ which satisfies $sc_{0}\in C.$ Define $h:X\to X$ by $x\mapsto rx+(1-r)sc_{0}=rx-rc_{0},$ which is a homeomorphism because $0<r<1.$ The set $h\left(\operatorname {int} C\right)$ is thus an open subset of $X$ that moreover contains $ h(c_{0})=rc_{0}-rc_{0}=0.$ If $c\in \operatorname {int} C$ then $ h(c)=rc+(1-r)sc_{0}\in C$ since $C$ is convex, $0<r<1,$ and $sc_{0},c\in C,$ which proves that $h\left(\operatorname {int} C\right)\subseteq C.$ Thus $h\left(\operatorname {int} C\right)$ is an open subset of $X$ that contains the origin and is contained in $C.$ Q.E.D. 4. Since $\operatorname {cl} _{X}\{0\}$ has the trivial topology, so does each of its subsets, which makes them all compact. It is known that a subset of any uniform space is compact if and only if it is complete and totally bounded. 5. 
If $s\in S$ then $s+\operatorname {cl} _{X}\{0\}=\operatorname {cl} _{X}(s+\{0\})=\operatorname {cl} _{X}\{s\}\subseteq \operatorname {cl} _{X}S.$ Because $S\subseteq S+\operatorname {cl} _{X}\{0\}\subseteq \operatorname {cl} _{X}S,$ if $S$ is closed then equality holds. Using the fact that $\operatorname {cl} _{X}\{0\}$ is a vector space, it is readily verified that the complement in $X$ of any set $S$ satisfying the equality $S+\operatorname {cl} _{X}\{0\}=S$ must also satisfy this equality (when $X\setminus S$ is substituted for $S$). 6. $z+\{x\in X:P(x)\}=\{z+x:x\in X,P(x)\}=\{z+x:x\in X,P((z+x)-z)\}$ and so using $y=z+x$ and the fact that $z+X=X,$ this is equal to $\{y:y-z\in X,P(y-z)\}=\{y:y\in X,P(y-z)\}=\{y\in X:P(y-z)\}.$ Q.E.D. $\blacksquare $ Citations 1. Rudin 1991, p. 4-5 §1.3. 2. Köthe 1983, p. 91. 3. Schaefer & Wolff 1999, pp. 74–78. 4. Grothendieck 1973, pp. 34–36. 5. Wilansky 2013, pp. 40–47. 6. Narici & Beckenstein 2011, pp. 67–113. 7. Adasch, Ernst & Keim 1978, pp. 5–9. 8. Schechter 1996, pp. 721–751. 9. Narici & Beckenstein 2011, pp. 371–423. 10. Adasch, Ernst & Keim 1978, pp. 10–15. 11. Wilansky 2013, p. 53. 12. Rudin 1991, p. 6 §1.4. 13. Rudin 1991, p. 8. 14. Narici & Beckenstein 2011, pp. 155–176. 15. Rudin 1991, p. 27-28 Theorem 1.37. 16. Köthe 1983, section 15.11. 17. "Topological vector space", Encyclopedia of Mathematics, EMS Press, 2001 [1994], retrieved 26 February 2021 18. Rudin 1991, p. 17 Theorem 1.22. 19. Schaefer & Wolff 1999, pp. 12–19. 20. Schaefer & Wolff 1999, p. 16. 21. Narici & Beckenstein 2011, pp. 115–154. 22. Swartz 1992, pp. 27–29. 23. "A quick application of the closed graph theorem". What's new. 2016-04-22. Retrieved 2020-10-07. 24. Narici & Beckenstein 2011, p. 111. 25. Rudin 1991, p. 9 §1.8. 26. Rudin 1991, p. 27 Theorem 1.36. 27. Rudin 1991, p. 62-68 §3.8-3.14. 28. Narici & Beckenstein 2011, pp. 177–220. 29. Rudin 1991, p. 38. 30. Schaefer & Wolff 1999, p. 35. 31. Narici & Beckenstein 2011, p. 119-120. 32. 
Wilansky 2013, p. 43. 33. Wilansky 2013, p. 42. 34. Rudin 1991, p. 55. 35. Narici & Beckenstein 2011, p. 108. 36. Jarchow 1981, pp. 101–104. 37. Schaefer & Wolff 1999, p. 38. 38. Conway 1990, p. 102. 39. Narici & Beckenstein 2011, pp. 47–66. 40. Narici & Beckenstein 2011, p. 156. 41. Schaefer & Wolff 1999, pp. 12–35. 42. Schaefer & Wolff 1999, p. 25. 43. Jarchow 1981, pp. 56–73. 44. Narici & Beckenstein 2011, pp. 107–112. 45. Wilansky 2013, p. 63. 46. Narici & Beckenstein 2011, pp. 19–45. 47. Wilansky 2013, pp. 43–44. 48. Narici & Beckenstein 2011, pp. 80. 49. Narici & Beckenstein 2011, pp. 108–109. 50. Jarchow 1981, pp. 30–32. 51. Narici & Beckenstein 2011, p. 109. 52. Rudin 1991, p. 6. 53. Swartz 1992, p. 35. Bibliography • Adasch, Norbert; Ernst, Bruno; Keim, Dieter (1978). Topological Vector Spaces: The Theory Without Convexity Conditions. Lecture Notes in Mathematics. Vol. 639. Berlin New York: Springer-Verlag. ISBN 978-3-540-08662-8. OCLC 297140003. • Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 978-3-519-02224-4. OCLC 8210342. • Köthe, Gottfried (1983) [1969]. Topological Vector Spaces I. Grundlehren der mathematischen Wissenschaften. Vol. 159. Translated by Garling, D.J.H. New York: Springer Science & Business Media. ISBN 978-3-642-64988-2. MR 0248498. OCLC 840293704. • Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. • Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. • Schechter, Eric (1996). Handbook of Analysis and Its Foundations. 
San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365. • Swartz, Charles (1992). An introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067. • Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114. Further reading • Bierstedt, Klaus-Dieter (1988). "An Introduction to Locally Convex Inductive Limits". Functional Analysis and Applications. Singapore-New Jersey-Hong Kong: Universitätsbibliothek: 35–133. Retrieved 20 September 2020. • Bourbaki, Nicolas (1987) [1981]. Topological Vector Spaces: Chapters 1–5. Éléments de mathématique. Translated by Eggleston, H.G.; Madan, S. Berlin New York: Springer-Verlag. ISBN 3-540-13627-4. OCLC 17499190. • Conway, John B. (1990). A Course in Functional Analysis. Graduate Texts in Mathematics. Vol. 96 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-97245-9. OCLC 21195908. • Dunford, Nelson; Schwartz, Jacob T. (1988). Linear Operators. Pure and applied mathematics. Vol. 1. New York: Wiley-Interscience. ISBN 978-0-471-60848-6. OCLC 18412261. • Edwards, Robert E. (1995). Functional Analysis: Theory and Applications. New York: Dover Publications. ISBN 978-0-486-68143-6. OCLC 30593138. • Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098. • Horváth, John (1966). Topological Vector Spaces and Distributions. Addison-Wesley series in mathematics. Vol. 1. Reading, MA: Addison-Wesley Publishing Company. ISBN 978-0201029857. • Köthe, Gottfried (1979). Topological Vector Spaces II. Grundlehren der mathematischen Wissenschaften. Vol. 237. New York: Springer Science & Business Media. ISBN 978-0-387-90400-9. OCLC 180577972. • Lang, Serge (1972). Differential manifolds. Reading, Mass.–London–Don Mills, Ont.: Addison-Wesley Publishing Co., Inc. ISBN 0-201-04166-9. 
• Robertson, Alex P.; Robertson, Wendy J. (1980). Topological Vector Spaces. Cambridge Tracts in Mathematics. Vol. 53. Cambridge England: Cambridge University Press. ISBN 978-0-521-29882-7. OCLC 589250. • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. • Valdivia, Manuel (1982). Nachbin, Leopoldo (ed.). Topics in Locally Convex Spaces. Vol. 67. Amsterdam New York, N.Y.: Elsevier Science Pub. Co. ISBN 978-0-08-087178-3. OCLC 316568534. • Voigt, Jürgen (2020). A Course on Topological Vector Spaces. Compact Textbooks in Mathematics. Cham: Birkhäuser Basel. ISBN 978-3-030-32945-7. OCLC 1145563701. External links • Media related to Topological vector spaces at Wikimedia Commons
Wikipedia
TWINKLE TWINKLE (The Weizmann Institute Key Locating Engine) is a hypothetical integer factorization device described in 1999 by Adi Shamir[1] and purported to be capable of factoring 512-bit integers.[2] The name is also a pun on the twinkling LEDs used in the device. Shamir estimated that the cost of TWINKLE could be as low as $5000 per unit with bulk production. TWINKLE has a successor named TWIRL,[3] which is more efficient. Method The goal of TWINKLE is to implement the sieving step of the Number Field Sieve algorithm, which is the fastest known algorithm for factoring large integers. The sieving step, at least for 512-bit and larger integers, is the most time-consuming step of NFS. It involves testing a large set of numbers for B-'smoothness', i.e., absence of a prime factor greater than a specified bound B. What is remarkable about TWINKLE is that it is not a purely digital device. It gets its efficiency by eschewing binary arithmetic for an "optical" adder which can add hundreds of thousands of quantities in a single clock cycle. The key idea used is "time-space inversion". Conventional NFS sieving is carried out one prime at a time. For each prime, all the numbers to be tested for smoothness in the range under consideration which are divisible by that prime have their counters incremented by the logarithm of the prime (similar to the sieve of Eratosthenes). TWINKLE, on the other hand, works one candidate smooth number (call it X) at a time. There is one LED corresponding to each prime smaller than B. At the time instant corresponding to X, the set of LEDs glowing corresponds to the set of primes that divide X. This can be accomplished by having the LED associated with the prime p glow once every p time instants. Further, the intensity of each LED is proportional to the logarithm of the corresponding prime. Thus, the total intensity equals the sum of the logarithms of all the prime factors of X smaller than B.
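The prime-major versus candidate-major orderings described above can be illustrated in software. The following toy sketch (the parameter values and the 0.9 threshold are mine, and it follows the article's square-free simplification by counting each prime factor only once) checks that both orderings accumulate the same log-intensities, then flags likely-smooth candidates:

```python
from math import log

B = 50                    # smoothness bound (toy size)
start, width = 1000, 64   # interval of candidate numbers X
primes = [p for p in range(2, B)
          if all(p % q for q in range(2, p))]

# Conventional sieving: one prime at a time. Every position in the interval
# divisible by p has its counter incremented by log p
# (sieve-of-Eratosthenes style).
acc = [0.0] * width
for p in primes:
    for i in range((-start) % p, width, p):
        acc[i] += log(p)

# TWINKLE's "time-space inversion": one candidate X per time instant; the
# LEDs glowing at that instant are the primes dividing X, each with
# intensity log p. Both orderings accumulate the same totals.
for i, x in enumerate(range(start, start + width)):
    intensity = sum(log(p) for p in primes if x % p == 0)
    assert abs(intensity - acc[i]) < 1e-9

# Report X when the total intensity is close to log X, i.e. (nearly) all of
# its prime factors lie below B. For example 1001 = 7 * 11 * 13 qualifies.
smooth = [start + i for i in range(width) if acc[i] > 0.9 * log(start + i)]
```

In the actual device the accumulation is done optically and the comparison against log X is an analog threshold; the 0.9 factor here is only a stand-in for that threshold.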
This intensity is equal to the logarithm of X if and only if X is B-smooth. Even in PC-based implementations, it's a common optimization to speed up sieving by adding approximate logarithms of small primes together. Similarly, TWINKLE has much room for error in its light measurements; as long as the intensity is at about the right level, the number is very likely to be smooth enough for the purposes of known factoring algorithms. The existence of even one large factor would imply that the logarithm of a large number is missing, resulting in a very low intensity; because most numbers have this property, the device's output would tend to consist of stretches of low intensity output with brief bursts of high intensity output. In the above it is assumed that X is square-free, i.e. it is not divisible by the square of any prime. This is acceptable since the factoring algorithms only require "sufficiently many" smooth numbers, and the "yield" decreases only by a small constant factor due to the square-freeness assumption. There is also the problem of false positives due to the inaccuracy of the optoelectronic hardware, but this is easily solved by adding a PC-based post-processing step for verifying the smoothness of the numbers identified by TWINKLE. See also • TWIRL, the successor to TWINKLE References 1. Shamir, Adi (1999). Koç, Çetin K.; Paar, Christof (eds.). "Factoring Large Numbers with the TWINKLE Device". Cryptographic Hardware and Embedded Systems. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. 1717: 2–12. doi:10.1007/3-540-48059-5_2. ISBN 978-3-540-48059-4. 2. Shamir, Adi (1999), "Factoring Large Numbers with the TWINKLE Device", Cryptographic Hardware and Embedded Systems, Lecture Notes in Computer Science, vol. 1717, Springer Berlin Heidelberg, pp. 2–12, doi:10.1007/3-540-48059-5_2, ISBN 9783540666462 3. Shamir, Adi; Tromer, Eran (2003). "Factoring Large Numbers with the TWIRL Device". In Boneh, Dan (ed.). 
Advances in Cryptology - CRYPTO 2003. Lecture Notes in Computer Science. Vol. 2729. Berlin, Heidelberg: Springer. pp. 1–26. doi:10.1007/978-3-540-45146-4_1. ISBN 978-3-540-45146-4.
T(1) theorem In mathematics, the T(1) theorem, first proved by David & Journé (1984), describes when an operator T given by a kernel can be extended to a bounded linear operator on the Hilbert space L2(Rn). The name T(1) theorem refers to a condition on the distribution T(1), given by the operator T applied to the function 1. Statement Suppose that T is a continuous operator from Schwartz functions on Rn to tempered distributions, so that T is given by a kernel K which is a distribution. Assume that the kernel is standard, which means that off the diagonal it is given by a function satisfying certain conditions. Then the T(1) theorem states that T can be extended to a bounded operator on the Hilbert space L2(Rn) if and only if the following conditions are satisfied: • T(1) is of bounded mean oscillation (where T is extended to an operator on bounded smooth functions, such as 1). • T*(1) is of bounded mean oscillation, where T* is the adjoint of T. • T is weakly bounded, a weak condition that is easy to verify in practice. References • David, Guy; Journé, Jean-Lin (1984), "A boundedness criterion for generalized Calderón-Zygmund operators", Annals of Mathematics, Second Series, 120 (2): 371–397, doi:10.2307/2006946, ISSN 0003-486X, JSTOR 2006946, MR 0763911 • Grafakos, Loukas (2009), Modern Fourier analysis, Graduate Texts in Mathematics, vol. 
250 (2nd ed.), Berlin, New York: Springer-Verlag, doi:10.1007/978-0-387-09434-2, ISBN 978-0-387-09433-5, MR 2463316
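For reference, the "standard kernel" condition invoked in the statement of the theorem is commonly made precise by off-diagonal estimates of the following form, for some constant $C>0$ and Hölder exponent $0<\delta \leq 1$ (this normalization is one common convention; equivalent variants appear in the literature):

```latex
% size estimate, valid off the diagonal x \neq y in \mathbb{R}^n
|K(x,y)| \le \frac{C}{|x-y|^{n}}
% smoothness in the first variable, valid when |x-x'| \le \tfrac{1}{2}|x-y|
|K(x,y) - K(x',y)| \le C \, \frac{|x-x'|^{\delta}}{|x-y|^{n+\delta}}
% smoothness in the second variable, valid when |y-y'| \le \tfrac{1}{2}|x-y|
|K(x,y) - K(x,y')| \le C \, \frac{|y-y'|^{\delta}}{|x-y|^{n+\delta}}
```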
Table of Clebsch–Gordan coefficients This is a table of Clebsch–Gordan coefficients used for adding angular momentum values in quantum mechanics. The overall sign of the coefficients for each set of constant $j_{1}$, $j_{2}$, $j$ is arbitrary to some degree and has been fixed according to the Condon–Shortley and Wigner sign convention as discussed by Baird and Biedenharn.[1] Tables with the same sign convention may be found in the Particle Data Group's Review of Particle Properties[2] and in online tables.[3] Formulation The Clebsch–Gordan coefficients are the solutions to $|j_{1},j_{2};J,M\rangle =\sum _{m_{1}=-j_{1}}^{j_{1}}\sum _{m_{2}=-j_{2}}^{j_{2}}|j_{1},m_{1};j_{2},m_{2}\rangle \langle j_{1},j_{2};m_{1},m_{2}\mid j_{1},j_{2};J,M\rangle $ Explicitly: ${\begin{aligned}&\langle j_{1},j_{2};m_{1},m_{2}\mid j_{1},j_{2};J,M\rangle \\[6pt]={}&\delta _{M,m_{1}+m_{2}}{\sqrt {\frac {(2J+1)(J+j_{1}-j_{2})!(J-j_{1}+j_{2})!(j_{1}+j_{2}-J)!}{(j_{1}+j_{2}+J+1)!}}}\ \times {}\\[6pt]&{\sqrt {(J+M)!(J-M)!(j_{1}-m_{1})!(j_{1}+m_{1})!(j_{2}-m_{2})!(j_{2}+m_{2})!}}\ \times {}\\[6pt]&\sum _{k}{\frac {(-1)^{k}}{k!(j_{1}+j_{2}-J-k)!(j_{1}-m_{1}-k)!(j_{2}+m_{2}-k)!(J-j_{2}+m_{1}+k)!(J-j_{1}-m_{2}+k)!}}.\end{aligned}}$ The summation is extended over all integer k for which the argument of every factorial is nonnegative.[4] For brevity, solutions with M < 0 and j1 < j2 are omitted. They may be calculated using the simple relations $\langle j_{1},j_{2};m_{1},m_{2}\mid j_{1},j_{2};J,M\rangle =(-1)^{J-j_{1}-j_{2}}\langle j_{1},j_{2};-m_{1},-m_{2}\mid j_{1},j_{2};J,-M\rangle .$ and $\langle j_{1},j_{2};m_{1},m_{2}\mid j_{1},j_{2};J,M\rangle =(-1)^{J-j_{1}-j_{2}}\langle j_{2},j_{1};m_{2},m_{1}\mid j_{2},j_{1};J,M\rangle .$ Specific values The Clebsch–Gordan coefficients for j values less than or equal to 5/2 are given below.[5]  j2 = 0 When j2 = 0, the Clebsch–Gordan coefficients are given by $\delta _{j,j_{1}}\delta _{m,m_{1}}$.  
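The explicit formula above can be evaluated directly. The sketch below (the function names are mine) keeps everything under the square root as an exact rational, and assumes it is called with angular momenta satisfying the triangle condition $|j_{1}-j_{2}|\leq J\leq j_{1}+j_{2}$ and magnetic quantum numbers in the valid range:

```python
from fractions import Fraction
from math import factorial, sqrt

def _fact(x):
    """Factorial of a quantity that the formula guarantees is a
    nonnegative integer (even though j and m may be half-integers)."""
    x = Fraction(x)
    assert x.denominator == 1 and x >= 0
    return factorial(int(x))

def clebsch_gordan(j1, j2, m1, m2, J, M):
    """<j1 m1; j2 m2 | J M> via the explicit formula in the text."""
    j1, j2, m1, m2, J, M = map(Fraction, (j1, j2, m1, m2, J, M))
    if m1 + m2 != M:
        return 0.0
    # Product of the two square-root factors, kept as an exact rational.
    pre = (2 * J + 1) * Fraction(
        _fact(J + j1 - j2) * _fact(J - j1 + j2) * _fact(j1 + j2 - J),
        _fact(j1 + j2 + J + 1))
    pre *= (_fact(J + M) * _fact(J - M) * _fact(j1 - m1) * _fact(j1 + m1)
            * _fact(j2 - m2) * _fact(j2 + m2))
    # Alternating sum over every k keeping all factorial arguments >= 0.
    k_min = int(max(0, j2 - J - m1, j1 + m2 - J))
    k_max = int(min(j1 + j2 - J, j1 - m1, j2 + m2))
    total = sum(
        Fraction((-1) ** k,
                 _fact(k) * _fact(j1 + j2 - J - k) * _fact(j1 - m1 - k)
                 * _fact(j2 + m2 - k) * _fact(J - j2 + m1 + k)
                 * _fact(J - j1 - m2 + k))
        for k in range(k_min, k_max + 1))
    return float(total) * sqrt(pre)

# Matches the j1 = 1/2, j2 = 1/2 table: <1/2, -1/2 | 1, 0> = sqrt(1/2).
assert abs(clebsch_gordan(0.5, 0.5, 0.5, -0.5, 1, 0) - sqrt(0.5)) < 1e-12
```

Half-integer arguments such as `0.5` convert exactly to `Fraction(1, 2)`, so no rounding enters before the final square root.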
j1 = 1/2,  j2 = 1/2 m = 1 j m1, m2 1 1/2, 1/2 $1$ m = −1 j m1, m2 1 −1/2, −1/2 $1$ m = 0 j m1, m2 1 0 1/2, −1/2 ${\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {1}{2}}}$ −1/2, 1/2 ${\sqrt {\frac {1}{2}}}$ $-{\sqrt {\frac {1}{2}}}$  j1 = 1,  j2 = 1/2 m = 3/2 j m1, m2 3/2 1, 1/2 $1$ m = 1/2 j m1, m2 3/2 1/2 1, −1/2 ${\sqrt {\frac {1}{3}}}$ ${\sqrt {\frac {2}{3}}}$ 0, 1/2 ${\sqrt {\frac {2}{3}}}$ $-{\sqrt {\frac {1}{3}}}$  j1 = 1,  j2 = 1 m = 2 j m1, m2 2 1, 1 $1$ m = 1 j m1, m2 2 1 1, 0 ${\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {1}{2}}}$ 0, 1 ${\sqrt {\frac {1}{2}}}$ $-{\sqrt {\frac {1}{2}}}$ m = 0 j m1, m2 2 1 0 1, −1 ${\sqrt {\frac {1}{6}}}$ ${\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {1}{3}}}$ 0, 0 ${\sqrt {\frac {2}{3}}}$ $0$ $-{\sqrt {\frac {1}{3}}}$ −1, 1 ${\sqrt {\frac {1}{6}}}$ $-{\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {1}{3}}}$  j1 = 3/2,  j2 = 1/2 m = 2 j m1, m2 2 3/2, 1/2 $1$ m = 1 j m1, m2 2 1 3/2, −1/2 ${\frac {1}{2}}$ ${\sqrt {\frac {3}{4}}}$ 1/2, 1/2 ${\sqrt {\frac {3}{4}}}$ $-{\frac {1}{2}}$ m = 0 j m1, m2 2 1 1/2, −1/2 ${\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {1}{2}}}$ −1/2, 1/2 ${\sqrt {\frac {1}{2}}}$ $-{\sqrt {\frac {1}{2}}}$  j1 = 3/2,  j2 = 1 m = 5/2 j m1, m2 5/2 3/2, 1 $1$ m = 3/2 j m1, m2 5/2 3/2 3/2, 0 ${\sqrt {\frac {2}{5}}}$ ${\sqrt {\frac {3}{5}}}$ 1/2, 1 ${\sqrt {\frac {3}{5}}}$ $-{\sqrt {\frac {2}{5}}}$ m = 1/2 j m1, m2 5/2 3/2 1/2 3/2, −1 ${\sqrt {\frac {1}{10}}}$ ${\sqrt {\frac {2}{5}}}$ ${\sqrt {\frac {1}{2}}}$ 1/2, 0 ${\sqrt {\frac {3}{5}}}$ ${\sqrt {\frac {1}{15}}}$ $-{\sqrt {\frac {1}{3}}}$ −1/2, 1 ${\sqrt {\frac {3}{10}}}$ $-{\sqrt {\frac {8}{15}}}$ ${\sqrt {\frac {1}{6}}}$  j1 = 3/2,  j2 = 3/2 m = 3 j m1, m2 3 3/2, 3/2 $1$ m = 2 j m1, m2 3 2 3/2, 1/2 ${\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {1}{2}}}$ 1/2, 3/2 ${\sqrt {\frac {1}{2}}}$ $-{\sqrt {\frac {1}{2}}}$ m = 1 j m1, m2 3 2 1 3/2, −1/2 ${\sqrt {\frac {1}{5}}}$ ${\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {3}{10}}}$ 1/2, 1/2 ${\sqrt {\frac {3}{5}}}$ $0$ $-{\sqrt {\frac {2}{5}}}$ −1/2, 3/2 ${\sqrt 
{\frac {1}{5}}}$ $-{\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {3}{10}}}$ m = 0 j m1, m2 3 2 1 0 3/2, −3/2 ${\sqrt {\frac {1}{20}}}$ ${\frac {1}{2}}$ ${\sqrt {\frac {9}{20}}}$ ${\frac {1}{2}}$ 1/2, −1/2 ${\sqrt {\frac {9}{20}}}$ ${\frac {1}{2}}$ $-{\sqrt {\frac {1}{20}}}$ $-{\frac {1}{2}}$ −1/2, 1/2 ${\sqrt {\frac {9}{20}}}$ $-{\frac {1}{2}}$ $-{\sqrt {\frac {1}{20}}}$ ${\frac {1}{2}}$ −3/2, 3/2 ${\sqrt {\frac {1}{20}}}$ $-{\frac {1}{2}}$ ${\sqrt {\frac {9}{20}}}$ $-{\frac {1}{2}}$  j1 = 2,  j2 = 1/2 m = 5/2 j m1, m2 5/2 2, 1/2 $1$ m = 3/2 j m1, m2 5/2 3/2 2, −1/2 ${\sqrt {\frac {1}{5}}}$ ${\sqrt {\frac {4}{5}}}$ 1, 1/2 ${\sqrt {\frac {4}{5}}}$ $-{\sqrt {\frac {1}{5}}}$ m = 1/2 j m1, m2 5/2 3/2 1, −1/2 ${\sqrt {\frac {2}{5}}}$ ${\sqrt {\frac {3}{5}}}$ 0, 1/2 ${\sqrt {\frac {3}{5}}}$ $-{\sqrt {\frac {2}{5}}}$  j1 = 2,  j2 = 1 m = 3 j m1, m2 3 2, 1 $1$ m = 2 j m1, m2 3 2 2, 0 ${\sqrt {\frac {1}{3}}}$ ${\sqrt {\frac {2}{3}}}$ 1, 1 ${\sqrt {\frac {2}{3}}}$ $-{\sqrt {\frac {1}{3}}}$ m = 1 j m1, m2 3 2 1 2, −1 ${\sqrt {\frac {1}{15}}}$ ${\sqrt {\frac {1}{3}}}$ ${\sqrt {\frac {3}{5}}}$ 1, 0 ${\sqrt {\frac {8}{15}}}$ ${\sqrt {\frac {1}{6}}}$ $-{\sqrt {\frac {3}{10}}}$ 0, 1 ${\sqrt {\frac {2}{5}}}$ $-{\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {1}{10}}}$ m = 0 j m1, m2 3 2 1 1, −1 ${\sqrt {\frac {1}{5}}}$ ${\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {3}{10}}}$ 0, 0 ${\sqrt {\frac {3}{5}}}$ $0$ $-{\sqrt {\frac {2}{5}}}$ −1, 1 ${\sqrt {\frac {1}{5}}}$ $-{\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {3}{10}}}$  j1 = 2,  j2 = 3/2 m = 7/2 j m1, m2 7/2 2, 3/2 $1$ m = 5/2 j m1, m2 7/2 5/2 2, 1/2 ${\sqrt {\frac {3}{7}}}$ ${\sqrt {\frac {4}{7}}}$ 1, 3/2 ${\sqrt {\frac {4}{7}}}$ $-{\sqrt {\frac {3}{7}}}$ m = 3/2 j m1, m2 7/2 5/2 3/2 2, −1/2 ${\sqrt {\frac {1}{7}}}$ ${\sqrt {\frac {16}{35}}}$ ${\sqrt {\frac {2}{5}}}$ 1, 1/2 ${\sqrt {\frac {4}{7}}}$ ${\sqrt {\frac {1}{35}}}$ $-{\sqrt {\frac {2}{5}}}$ 0, 3/2 ${\sqrt {\frac {2}{7}}}$ $-{\sqrt {\frac {18}{35}}}$ ${\sqrt {\frac {1}{5}}}$ m = 1/2 j m1, m2 7/2 
5/2 3/2 1/2 2, −3/2 ${\sqrt {\frac {1}{35}}}$ ${\sqrt {\frac {6}{35}}}$ ${\sqrt {\frac {2}{5}}}$ ${\sqrt {\frac {2}{5}}}$ 1, −1/2 ${\sqrt {\frac {12}{35}}}$ ${\sqrt {\frac {5}{14}}}$ $0$ $-{\sqrt {\frac {3}{10}}}$ 0, 1/2 ${\sqrt {\frac {18}{35}}}$ $-{\sqrt {\frac {3}{35}}}$ $-{\sqrt {\frac {1}{5}}}$ ${\sqrt {\frac {1}{5}}}$ −1, 3/2 ${\sqrt {\frac {4}{35}}}$ $-{\sqrt {\frac {27}{70}}}$ ${\sqrt {\frac {2}{5}}}$ $-{\sqrt {\frac {1}{10}}}$  j1 = 2,  j2 = 2 m = 4 j m1, m2 4 2, 2 $1$ m = 3 j m1, m2 4 3 2, 1 ${\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {1}{2}}}$ 1, 2 ${\sqrt {\frac {1}{2}}}$ $-{\sqrt {\frac {1}{2}}}$ m = 2 j m1, m2 4 3 2 2, 0 ${\sqrt {\frac {3}{14}}}$ ${\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {2}{7}}}$ 1, 1 ${\sqrt {\frac {4}{7}}}$ $0$ $-{\sqrt {\frac {3}{7}}}$ 0, 2 ${\sqrt {\frac {3}{14}}}$ $-{\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {2}{7}}}$ m = 1 j m1, m2 4 3 2 1 2, −1 ${\sqrt {\frac {1}{14}}}$ ${\sqrt {\frac {3}{10}}}$ ${\sqrt {\frac {3}{7}}}$ ${\sqrt {\frac {1}{5}}}$ 1, 0 ${\sqrt {\frac {3}{7}}}$ ${\sqrt {\frac {1}{5}}}$ $-{\sqrt {\frac {1}{14}}}$ $-{\sqrt {\frac {3}{10}}}$ 0, 1 ${\sqrt {\frac {3}{7}}}$ $-{\sqrt {\frac {1}{5}}}$ $-{\sqrt {\frac {1}{14}}}$ ${\sqrt {\frac {3}{10}}}$ −1, 2 ${\sqrt {\frac {1}{14}}}$ $-{\sqrt {\frac {3}{10}}}$ ${\sqrt {\frac {3}{7}}}$ $-{\sqrt {\frac {1}{5}}}$ m = 0 j m1, m2 4 3 2 1 0 2, −2 ${\sqrt {\frac {1}{70}}}$ ${\sqrt {\frac {1}{10}}}$ ${\sqrt {\frac {2}{7}}}$ ${\sqrt {\frac {2}{5}}}$ ${\sqrt {\frac {1}{5}}}$ 1, −1 ${\sqrt {\frac {8}{35}}}$ ${\sqrt {\frac {2}{5}}}$ ${\sqrt {\frac {1}{14}}}$ $-{\sqrt {\frac {1}{10}}}$ $-{\sqrt {\frac {1}{5}}}$ 0, 0 ${\sqrt {\frac {18}{35}}}$ $0$ $-{\sqrt {\frac {2}{7}}}$ $0$ ${\sqrt {\frac {1}{5}}}$ −1, 1 ${\sqrt {\frac {8}{35}}}$ $-{\sqrt {\frac {2}{5}}}$ ${\sqrt {\frac {1}{14}}}$ ${\sqrt {\frac {1}{10}}}$ $-{\sqrt {\frac {1}{5}}}$ −2, 2 ${\sqrt {\frac {1}{70}}}$ $-{\sqrt {\frac {1}{10}}}$ ${\sqrt {\frac {2}{7}}}$ $-{\sqrt {\frac {2}{5}}}$ ${\sqrt {\frac {1}{5}}}$  j1 = 5/2,  j2 = 1/2 m = 3 
j m1, m2 3 5/2, 1/2 $1$ m = 2 j m1, m2 3 2 5/2, −1/2 ${\sqrt {\frac {1}{6}}}$ ${\sqrt {\frac {5}{6}}}$ 3/2, 1/2 ${\sqrt {\frac {5}{6}}}$ $-{\sqrt {\frac {1}{6}}}$ m = 1 j m1, m2 3 2 3/2, −1/2 ${\sqrt {\frac {1}{3}}}$ ${\sqrt {\frac {2}{3}}}$ 1/2, 1/2 ${\sqrt {\frac {2}{3}}}$ $-{\sqrt {\frac {1}{3}}}$ m = 0 j m1, m2 3 2 1/2, −1/2 ${\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {1}{2}}}$ −1/2, 1/2 ${\sqrt {\frac {1}{2}}}$ $-{\sqrt {\frac {1}{2}}}$  j1 = 5/2,  j2 = 1 m = 7/2 j m1, m2 7/2 5/2, 1 $1$ m = 5/2 j m1, m2 7/2 5/2 5/2, 0 ${\sqrt {\frac {2}{7}}}$ ${\sqrt {\frac {5}{7}}}$ 3/2, 1 ${\sqrt {\frac {5}{7}}}$ $-{\sqrt {\frac {2}{7}}}$ m = 3/2 j m1, m2 7/2 5/2 3/2 5/2, −1 ${\sqrt {\frac {1}{21}}}$ ${\sqrt {\frac {2}{7}}}$ ${\sqrt {\frac {2}{3}}}$ 3/2, 0 ${\sqrt {\frac {10}{21}}}$ ${\sqrt {\frac {9}{35}}}$ $-{\sqrt {\frac {4}{15}}}$ 1/2, 1 ${\sqrt {\frac {10}{21}}}$ $-{\sqrt {\frac {16}{35}}}$ ${\sqrt {\frac {1}{15}}}$ m = 1/2 j m1, m2 7/2 5/2 3/2 3/2, −1 ${\sqrt {\frac {1}{7}}}$ ${\sqrt {\frac {16}{35}}}$ ${\sqrt {\frac {2}{5}}}$ 1/2, 0 ${\sqrt {\frac {4}{7}}}$ ${\sqrt {\frac {1}{35}}}$ $-{\sqrt {\frac {2}{5}}}$ −1/2, 1 ${\sqrt {\frac {2}{7}}}$ $-{\sqrt {\frac {18}{35}}}$ ${\sqrt {\frac {1}{5}}}$  j1 = 5/2,  j2 = 3/2 m = 4 j m1, m2 4 5/2, 3/2 $1$ m = 3 j m1, m2 4 3 5/2, 1/2 ${\sqrt {\frac {3}{8}}}$ ${\sqrt {\frac {5}{8}}}$ 3/2, 3/2 ${\sqrt {\frac {5}{8}}}$ $-{\sqrt {\frac {3}{8}}}$ m = 2 j m1, m2 4 3 2 5/2, −1/2 ${\sqrt {\frac {3}{28}}}$ ${\sqrt {\frac {5}{12}}}$ ${\sqrt {\frac {10}{21}}}$ 3/2, 1/2 ${\sqrt {\frac {15}{28}}}$ ${\sqrt {\frac {1}{12}}}$ $-{\sqrt {\frac {8}{21}}}$ 1/2, 3/2 ${\sqrt {\frac {5}{14}}}$ $-{\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {1}{7}}}$ m = 1 j m1, m2 4 3 2 1 5/2, −3/2 ${\sqrt {\frac {1}{56}}}$ ${\sqrt {\frac {1}{8}}}$ ${\sqrt {\frac {5}{14}}}$ ${\sqrt {\frac {1}{2}}}$ 3/2, −1/2 ${\sqrt {\frac {15}{56}}}$ ${\sqrt {\frac {49}{120}}}$ ${\sqrt {\frac {1}{42}}}$ $-{\sqrt {\frac {3}{10}}}$ 1/2, 1/2 ${\sqrt {\frac {15}{28}}}$ $-{\sqrt {\frac {1}{60}}}$ 
$-{\sqrt {\frac {25}{84}}}$ ${\sqrt {\frac {3}{20}}}$ −1/2, 3/2 ${\sqrt {\frac {5}{28}}}$ $-{\sqrt {\frac {9}{20}}}$ ${\sqrt {\frac {9}{28}}}$ $-{\sqrt {\frac {1}{20}}}$ m = 0 j m1, m2 4 3 2 1 3/2, −3/2 ${\sqrt {\frac {1}{14}}}$ ${\sqrt {\frac {3}{10}}}$ ${\sqrt {\frac {3}{7}}}$ ${\sqrt {\frac {1}{5}}}$ 1/2, −1/2 ${\sqrt {\frac {3}{7}}}$ ${\sqrt {\frac {1}{5}}}$ $-{\sqrt {\frac {1}{14}}}$ $-{\sqrt {\frac {3}{10}}}$ −1/2, 1/2 ${\sqrt {\frac {3}{7}}}$ $-{\sqrt {\frac {1}{5}}}$ $-{\sqrt {\frac {1}{14}}}$ ${\sqrt {\frac {3}{10}}}$ −3/2, 3/2 ${\sqrt {\frac {1}{14}}}$ $-{\sqrt {\frac {3}{10}}}$ ${\sqrt {\frac {3}{7}}}$ $-{\sqrt {\frac {1}{5}}}$  j1 = 5/2,  j2 = 2 m = 9/2 j m1, m2 9/2 5/2, 2 $1$ m = 7/2 j m1, m2 9/2 7/2 5/2, 1 ${\frac {2}{3}}$ ${\sqrt {\frac {5}{9}}}$ 3/2, 2 ${\sqrt {\frac {5}{9}}}$ $-{\frac {2}{3}}$ m = 5/2 j m1, m2 9/2 7/2 5/2 5/2, 0 ${\sqrt {\frac {1}{6}}}$ ${\sqrt {\frac {10}{21}}}$ ${\sqrt {\frac {5}{14}}}$ 3/2, 1 ${\sqrt {\frac {5}{9}}}$ ${\sqrt {\frac {1}{63}}}$ $-{\sqrt {\frac {3}{7}}}$ 1/2, 2 ${\sqrt {\frac {5}{18}}}$ $-{\sqrt {\frac {32}{63}}}$ ${\sqrt {\frac {3}{14}}}$ m = 3/2 j m1, m2 9/2 7/2 5/2 3/2 5/2, −1 ${\sqrt {\frac {1}{21}}}$ ${\sqrt {\frac {5}{21}}}$ ${\sqrt {\frac {3}{7}}}$ ${\sqrt {\frac {2}{7}}}$ 3/2, 0 ${\sqrt {\frac {5}{14}}}$ ${\sqrt {\frac {2}{7}}}$ $-{\sqrt {\frac {1}{70}}}$ $-{\sqrt {\frac {12}{35}}}$ 1/2, 1 ${\sqrt {\frac {10}{21}}}$ $-{\sqrt {\frac {2}{21}}}$ $-{\sqrt {\frac {6}{35}}}$ ${\sqrt {\frac {9}{35}}}$ −1/2, 2 ${\sqrt {\frac {5}{42}}}$ $-{\sqrt {\frac {8}{21}}}$ ${\sqrt {\frac {27}{70}}}$ $-{\sqrt {\frac {4}{35}}}$ m = 1/2 j m1, m2 9/2 7/2 5/2 3/2 1/2 5/2, −2 ${\sqrt {\frac {1}{126}}}$ ${\sqrt {\frac {4}{63}}}$ ${\sqrt {\frac {3}{14}}}$ ${\sqrt {\frac {8}{21}}}$ ${\sqrt {\frac {1}{3}}}$ 3/2, −1 ${\sqrt {\frac {10}{63}}}$ ${\sqrt {\frac {121}{315}}}$ ${\sqrt {\frac {6}{35}}}$ $-{\sqrt {\frac {2}{105}}}$ $-{\sqrt {\frac {4}{15}}}$ 1/2, 0 ${\sqrt {\frac {10}{21}}}$ ${\sqrt {\frac {4}{105}}}$ $-{\sqrt {\frac {8}{35}}}$ 
$-{\sqrt {\frac {2}{35}}}$ ${\sqrt {\frac {1}{5}}}$ −1/2, 1 ${\sqrt {\frac {20}{63}}}$ $-{\sqrt {\frac {14}{45}}}$ $0$ ${\sqrt {\frac {5}{21}}}$ $-{\sqrt {\frac {2}{15}}}$ −3/2, 2 ${\sqrt {\frac {5}{126}}}$ $-{\sqrt {\frac {64}{315}}}$ ${\sqrt {\frac {27}{70}}}$ $-{\sqrt {\frac {32}{105}}}$ ${\sqrt {\frac {1}{15}}}$  j1 = 5/2,  j2 = 5/2 m = 5 j m1, m2 5 5/2, 5/2 $1$ m = 4 j m1, m2 5 4 5/2, 3/2 ${\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {1}{2}}}$ 3/2, 5/2 ${\sqrt {\frac {1}{2}}}$ $-{\sqrt {\frac {1}{2}}}$ m = 3 j m1, m2 5 4 3 5/2, 1/2 ${\sqrt {\frac {2}{9}}}$ ${\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {5}{18}}}$ 3/2, 3/2 ${\sqrt {\frac {5}{9}}}$ $0$ $-{\frac {2}{3}}$ 1/2, 5/2 ${\sqrt {\frac {2}{9}}}$ $-{\sqrt {\frac {1}{2}}}$ ${\sqrt {\frac {5}{18}}}$ m = 2 j m1, m2 5 4 3 2 5/2, −1/2 ${\sqrt {\frac {1}{12}}}$ ${\sqrt {\frac {9}{28}}}$ ${\sqrt {\frac {5}{12}}}$ ${\sqrt {\frac {5}{28}}}$ 3/2, 1/2 ${\sqrt {\frac {5}{12}}}$ ${\sqrt {\frac {5}{28}}}$ $-{\sqrt {\frac {1}{12}}}$ $-{\sqrt {\frac {9}{28}}}$ 1/2, 3/2 ${\sqrt {\frac {5}{12}}}$ $-{\sqrt {\frac {5}{28}}}$ $-{\sqrt {\frac {1}{12}}}$ ${\sqrt {\frac {9}{28}}}$ −1/2, 5/2 ${\sqrt {\frac {1}{12}}}$ $-{\sqrt {\frac {9}{28}}}$ ${\sqrt {\frac {5}{12}}}$ $-{\sqrt {\frac {5}{28}}}$ m = 1 j m1, m2 5 4 3 2 1 5/2, −3/2 ${\sqrt {\frac {1}{42}}}$ ${\sqrt {\frac {1}{7}}}$ ${\sqrt {\frac {1}{3}}}$ ${\sqrt {\frac {5}{14}}}$ ${\sqrt {\frac {1}{7}}}$ 3/2, −1/2 ${\sqrt {\frac {5}{21}}}$ ${\sqrt {\frac {5}{14}}}$ ${\sqrt {\frac {1}{30}}}$ $-{\sqrt {\frac {1}{7}}}$ $-{\sqrt {\frac {8}{35}}}$ 1/2, 1/2 ${\sqrt {\frac {10}{21}}}$ $0$ $-{\sqrt {\frac {4}{15}}}$ $0$ ${\sqrt {\frac {9}{35}}}$ −1/2, 3/2 ${\sqrt {\frac {5}{21}}}$ $-{\sqrt {\frac {5}{14}}}$ ${\sqrt {\frac {1}{30}}}$ ${\sqrt {\frac {1}{7}}}$ $-{\sqrt {\frac {8}{35}}}$ −3/2, 5/2 ${\sqrt {\frac {1}{42}}}$ $-{\sqrt {\frac {1}{7}}}$ ${\sqrt {\frac {1}{3}}}$ $-{\sqrt {\frac {5}{14}}}$ ${\sqrt {\frac {1}{7}}}$ m = 0 j m1, m2 5 4 3 2 1 0 5/2, −5/2 ${\sqrt {\frac {1}{252}}}$ ${\sqrt {\frac 
{1}{28}}}$ ${\sqrt {\frac {5}{36}}}$ ${\sqrt {\frac {25}{84}}}$ ${\sqrt {\frac {5}{14}}}$ ${\sqrt {\frac {1}{6}}}$ 3/2, −3/2 ${\sqrt {\frac {25}{252}}}$ ${\sqrt {\frac {9}{28}}}$ ${\sqrt {\frac {49}{180}}}$ ${\sqrt {\frac {1}{84}}}$ $-{\sqrt {\frac {9}{70}}}$ $-{\sqrt {\frac {1}{6}}}$ 1/2, −1/2 ${\sqrt {\frac {25}{63}}}$ ${\sqrt {\frac {1}{7}}}$ $-{\sqrt {\frac {4}{45}}}$ $-{\sqrt {\frac {4}{21}}}$ ${\sqrt {\frac {1}{70}}}$ ${\sqrt {\frac {1}{6}}}$ −1/2, 1/2 ${\sqrt {\frac {25}{63}}}$ $-{\sqrt {\frac {1}{7}}}$ $-{\sqrt {\frac {4}{45}}}$ ${\sqrt {\frac {4}{21}}}$ ${\sqrt {\frac {1}{70}}}$ $-{\sqrt {\frac {1}{6}}}$ −3/2, 3/2 ${\sqrt {\frac {25}{252}}}$ $-{\sqrt {\frac {9}{28}}}$ ${\sqrt {\frac {49}{180}}}$ $-{\sqrt {\frac {1}{84}}}$ $-{\sqrt {\frac {9}{70}}}$ ${\sqrt {\frac {1}{6}}}$ −5/2, 5/2 ${\sqrt {\frac {1}{252}}}$ $-{\sqrt {\frac {1}{28}}}$ ${\sqrt {\frac {5}{36}}}$ $-{\sqrt {\frac {25}{84}}}$ ${\sqrt {\frac {5}{14}}}$ $-{\sqrt {\frac {1}{6}}}$ SU(N) Clebsch–Gordan coefficients Algorithms to produce Clebsch–Gordan coefficients for higher values of $j_{1}$ and $j_{2}$, or for the su(N) algebra instead of su(2), are known.[6] A web interface for tabulating SU(N) Clebsch–Gordan coefficients is readily available. References 1. Baird, C.E.; L. C. Biedenharn (October 1964). "On the Representations of the Semisimple Lie Groups. III. The Explicit Conjugation Operation for SUn". J. Math. Phys. 5 (12): 1723–1730. Bibcode:1964JMP.....5.1723B. doi:10.1063/1.1704095. 2. Hagiwara, K.; et al. (July 2002). "Review of Particle Properties" (PDF). Phys. Rev. D. 66 (1): 010001. Bibcode:2002PhRvD..66a0001H. doi:10.1103/PhysRevD.66.010001. Retrieved 2007-12-20. 3. Mathar, Richard J. (2006-08-14). "SO(3) Clebsch Gordan coefficients" (text). Retrieved 2012-10-15. 4. (2.41), p. 172, Quantum Mechanics: Foundations and Applications, Arno Bohm, M. Loewe, New York: Springer-Verlag, 3rd ed., 1993, ISBN 0-387-95330-2. 5. Weissbluth, Mitchel (1978). Atoms and molecules. ACADEMIC PRESS. p. 28. 
ISBN 0-12-744450-5. Table 1.4 summarizes the most common cases. 6. Alex, A.; M. Kalus; A. Huckleberry; J. von Delft (February 2011). "A numerical algorithm for the explicit calculation of SU(N) and SL(N,C) Clebsch–Gordan coefficients". J. Math. Phys. 52 (2): 023507. arXiv:1009.0437. Bibcode:2011JMP....52b3507A. doi:10.1063/1.3521562. External links • Online, Java-based Clebsch–Gordan Coefficient Calculator by Paul Stevenson • Other formulae for Clebsch–Gordan coefficients. • Web interface for tabulating SU(N) Clebsch–Gordan coefficients
List of Laplace transforms The following is a list of Laplace transforms for many common functions of a single variable.[1] The Laplace transform is an integral transform that takes a function of a positive real variable t (often time) to a function of a complex variable s (frequency). Properties Main article: Laplace transform § Properties and theorems The Laplace transform of a function $f(t)$ can be obtained using the formal definition of the Laplace transform. However, some properties of the Laplace transform can be used to obtain the Laplace transform of some functions more easily. Linearity For functions $f$ and $g$ and for scalar $a$, the Laplace transform satisfies ${\mathcal {L}}\{af(t)+g(t)\}=a{\mathcal {L}}\{f(t)\}+{\mathcal {L}}\{g(t)\}$ and is, therefore, regarded as a linear operator. Time shifting The Laplace transform of $f(t-a)u(t-a)$ is $e^{-as}F(s)$. Frequency shifting $F(s-a)$ is the Laplace transform of $e^{at}f(t)$. Explanatory notes The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is why all of the time domain functions in the table below are multiples of the Heaviside step function, u(t). The entries of the table that involve a time delay τ are required to be causal (meaning that τ > 0). A causal system is a system where the impulse response h(t) is zero for all time t prior to t = 0. In general, the region of convergence for causal systems is not the same as that of anticausal systems. The following functions and variables are used in the table below: • δ represents the Dirac delta function. • u(t) represents the Heaviside step function. Literature may refer to this by other notation, including $1(t)$ or $H(t)$. • Γ(z) represents the Gamma function. • γ is the Euler–Mascheroni constant. • t is a real number. It typically represents time, although it can represent any independent dimension. • s is the complex frequency domain parameter, and Re(s) is its real part. 
• n is an integer. • α, τ, and ω are real numbers. • q is a complex number. Table Function Time domain $f(t)={\mathcal {L}}^{-1}\{F(s)\}$ Laplace s-domain $F(s)={\mathcal {L}}\{f(t)\}$ Region of convergence Reference unit impulse $\delta (t)$ $1$ all s inspection delayed impulse $\delta (t-\tau )$ $e^{-\tau s}$ Re(s) > 0 time shift of unit impulse[2] unit step $u(t)$ ${1 \over s}$ Re(s) > 0 integrate unit impulse delayed unit step $u(t-\tau )$ ${\frac {1}{s}}e^{-\tau s}$ Re(s) > 0 time shift of unit step[3] ramp $t\cdot u(t)$ ${\frac {1}{s^{2}}}$ Re(s) > 0 integrate unit impulse twice nth power (for integer n) $t^{n}\cdot u(t)$ ${n! \over s^{n+1}}$ Re(s) > 0 (n > −1) Integrate unit step n times qth power (for complex q) $t^{q}\cdot u(t)$ ${\operatorname {\Gamma } (q+1) \over s^{q+1}}$ Re(s) > 0 Re(q) > −1 [4][5] nth root ${\sqrt[{n}]{t}}\cdot u(t)$ ${1 \over s^{{\frac {1}{n}}+1}}\operatorname {\Gamma } \left({\frac {1}{n}}+1\right)$ Re(s) > 0 Set q = 1/n above. nth power with frequency shift $t^{n}e^{-\alpha t}\cdot u(t)$ ${\frac {n!}{(s+\alpha )^{n+1}}}$ Re(s) > −α Integrate unit step, apply frequency shift delayed nth power with frequency shift $(t-\tau )^{n}e^{-\alpha (t-\tau )}\cdot u(t-\tau )$ ${\frac {n!\cdot e^{-\tau s}}{(s+\alpha )^{n+1}}}$ Re(s) > −α Integrate unit step, apply frequency shift, apply time shift exponential decay $e^{-\alpha t}u(t)$ ${1 \over s+\alpha }$ Re(s) > −α Frequency shift of unit step two-sided exponential decay (only for bilateral transform) $e^{-\alpha |t|}$ ${2\alpha \over \alpha ^{2}-s^{2}}$ −α < Re(s) < α Frequency shift of unit step exponential approach $(1-e^{-\alpha t})\cdot u(t)$ ${\frac {\alpha }{s(s+\alpha )}}$ Re(s) > 0 Unit step minus exponential decay sine $\sin(\omega t)\cdot u(t)$ ${\omega \over s^{2}+\omega ^{2}}$ Re(s) > 0 [6] cosine $\cos(\omega t)\cdot u(t)$ ${s \over s^{2}+\omega ^{2}}$ Re(s) > 0 [6] hyperbolic sine $\sinh(\alpha t)\cdot u(t)$ ${\alpha \over s^{2}-\alpha ^{2}}$ Re(s) > |α| [7] hyperbolic cosine 
$\cosh(\alpha t)\cdot u(t)$ ${s \over s^{2}-\alpha ^{2}}$ Re(s) > |α| [7] exponentially decaying sine wave $e^{-\alpha t}\sin(\omega t)\cdot u(t)$ ${\omega \over (s+\alpha )^{2}+\omega ^{2}}$ Re(s) > −α [6] exponentially decaying cosine wave $e^{-\alpha t}\cos(\omega t)\cdot u(t)$ ${s+\alpha \over (s+\alpha )^{2}+\omega ^{2}}$ Re(s) > −α [6] natural logarithm $\ln(t)\cdot u(t)$ ${\frac {-\ln(s)-\gamma }{s}}$ Re(s) > 0 [7] Bessel function of the first kind, of order n $J_{n}(\omega t)\cdot u(t)$ ${\frac {\left({\sqrt {s^{2}+\omega ^{2}}}-s\right)^{\!n}}{\omega ^{n}{\sqrt {s^{2}+\omega ^{2}}}}}$ Re(s) > 0 (n > −1) [7] Error function $\operatorname {erf} (t)\cdot u(t)$ ${\frac {e^{s^{2}/4}}{s}}\!\left(1-\operatorname {erf} \left({\frac {s}{2}}\right)\right)$ Re(s) > 0 [7] See also • List of Fourier transforms References 1. Distefano, J. J.; Stubberud, A. R.; Williams, I. J. (1995), Feedback systems and control, Schaum's outlines (2nd ed.), McGraw-Hill, p. 78, ISBN 978-0-07-017052-0 2. Riley, K. F.; Hobson, M. P.; Bence, S. J. (2010), Mathematical methods for physics and engineering (3rd ed.), Cambridge University Press, p. 455, ISBN 978-0-521-86153-3 3. Lipschutz, S.; Spiegel, M. R.; Liu, J. (2009), "Chapter 33: Laplace transforms", Mathematical Handbook of Formulas and Tables, Schaum's Outline Series (3rd ed.), McGraw-Hill, p. 192, ISBN 978-0-07-154855-7 4. Lipschutz, S.; Spiegel, M. R.; Liu, J. (2009), "Chapter 33: Laplace transforms", Mathematical Handbook of Formulas and Tables, Schaum's Outline Series (3rd ed.), McGraw-Hill, p. 183, ISBN 978-0-07-154855-7 5. "Laplace Transform". Wolfram MathWorld. Retrieved 30 April 2016. 6. Bracewell, Ronald N. (1978), The Fourier Transform and its Applications (2nd ed.), McGraw-Hill Kogakusha, p. 227, ISBN 978-0-07-007013-4 7. Williams, J. (1973), Laplace Transforms, Problem Solvers, George Allen & Unwin, p. 88, ISBN 978-0-04-512021-5
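Several of the table entries are easy to sanity-check numerically from the defining integral $F(s)=\int _{0}^{\infty }e^{-st}f(t)\,dt$. Below is a minimal stdlib-Python sketch; the truncation point T and step count n are ad hoc accuracy choices for the quadrature, not part of the transform itself.

```python
# Numerically spot-check table entries by approximating the defining integral
#   F(s) = integral_0^inf e^{-s t} f(t) dt
# with a simple trapezoidal rule on [0, T] (stdlib only).
import math

def laplace_num(f, s, T=60.0, n=100_000):
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 2.0
# exponential decay e^{-t}: table says 1/(s+1)
assert abs(laplace_num(lambda t: math.exp(-t), s) - 1/(s + 1)) < 1e-6
# sine with omega = 3: table says omega/(s^2 + omega^2)
assert abs(laplace_num(lambda t: math.sin(3*t), s) - 3/(s**2 + 9)) < 1e-6
# ramp t: table says 1/s^2
assert abs(laplace_num(lambda t: t, s) - 1/s**2) < 1e-6
```

The check is only valid for s inside the region of convergence listed in the table; outside it the integral diverges and the quadrature result is meaningless.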
Table of congruences In mathematics, a congruence is an equivalence relation on the integers. The following sections list important or interesting prime-related congruences. Table of congruences characterizing special primes:

$2^{n-1}\equiv 1{\pmod {n}}$ – special case of Fermat's little theorem, satisfied by all odd prime numbers
$2^{p-1}\equiv 1{\pmod {p^{2}}}$ – solutions are called Wieferich primes (smallest example: 1093)
$F_{n-\left({\frac {n}{5}}\right)}\equiv 0{\pmod {n}}$ – satisfied by all prime numbers
$F_{p-\left({\frac {p}{5}}\right)}\equiv 0{\pmod {p^{2}}}$ – solutions are called Wall–Sun–Sun primes (no examples known)
${2n-1 \choose n-1}\equiv 1{\pmod {n^{3}}}$ – by Wolstenholme's theorem, satisfied by all prime numbers greater than 3
${2p-1 \choose p-1}\equiv 1{\pmod {p^{4}}}$ – solutions are called Wolstenholme primes (smallest example: 16843)
$(n-1)!\ \equiv \ -1{\pmod {n}}$ – by Wilson's theorem, a natural number n is prime if and only if it satisfies this congruence
$(p-1)!\ \equiv \ -1{\pmod {p^{2}}}$ – solutions are called Wilson primes (smallest example: 5)
$4[(p-1)!+1]\ \equiv \ -p{\pmod {p(p+2)}}$ – solutions are the twin primes

Other prime-related congruences There are other prime-related congruences that provide necessary and sufficient conditions on the primality of certain subsequences of the natural numbers. Many of these alternate statements characterizing primality are related to Wilson's theorem, or are restatements of this classical result given in terms of other special variants of generalized factorial functions. For instance, new variants of Wilson's theorem stated in terms of the hyperfactorials, subfactorials, and superfactorials are given in [1]. Variants of Wilson's theorem For integers $k\geq 1$, we have the following form of Wilson's theorem: $(k-1)!(p-k)!\equiv (-1)^{k}{\pmod {p}}\iff p{\text{ prime. }}$ If $p$ is odd, we have that $\left({\frac {p-1}{2}}\right)!^{2}\equiv (-1)^{(p+1)/2}{\pmod {p}}\iff p{\text{ an odd prime. }}$ Clement's theorem concerning the twin primes Clement's congruence-based theorem characterizes twin prime pairs of the form $(p,p+2)$ through the following condition: $4[(p-1)!+1]\equiv -p{\pmod {p(p+2)}}\iff p,p+2{\text{ are both prime. }}$ P. A. Clement's original 1949 paper [2] provides a proof of this elementary number-theoretic criterion for twin primality based on Wilson's theorem. Another characterization, given in Lin and Zhipeng's article, provides that $2\left({\frac {p-1}{2}}\right)!^{2}+(-1)^{\frac {p-1}{2}}(5p+2)\equiv 0{\pmod {p(p+2)}}\iff p,p+2{\text{ are both prime. }}$ Characterizations of prime tuples and clusters The prime pairs of the form $(p,p+2k)$ for some $k\geq 1$ include the special cases of the cousin primes (when $k=2$) and the sexy primes (when $k=3$). We have elementary congruence-based characterizations of the primality of such pairs, proved, for instance, in [3]. Examples of congruences characterizing these prime pairs include $2k(2k)![(p-1)!+1]\equiv [1-(2k)!]p{\pmod {p(p+2k)}}\iff p,p+2k{\text{ are both prime, }}$ and the alternate characterization, when $p$ is odd such that $p\nmid (2k-1)!!^{2}$, given by $2k(2k-1)!!^{2}\left({\frac {p-1}{2}}\right)!^{2}+(-1)^{\frac {p-1}{2}}\left[(2k-1)!!^{2}(p+2k)-(-4)^{k}\cdot p\right]\equiv 0{\pmod {p(p+2k)}}\iff p,p+2k{\text{ are both prime. }}$ Still other congruence-based characterizations of the primality of triples, and more general prime clusters (or prime tuples), exist and are typically proved starting from Wilson's theorem (see, for example, Section 3.3 in [4]). References 1. Aebi, Christian; Cairns, Grant (May 2015). "Generalizations of Wilson's Theorem for Double-, Hyper-, Sub- and Superfactorials". The American Mathematical Monthly. 122 (5): 433–443. doi:10.4169/amer.math.monthly.122.5.433. JSTOR 10.4169/amer.math.monthly.122.5.433. S2CID 207521192. 2. Clement, P. A. (1949). "Congruences for sets of primes". Amer. Math. Monthly. 56 (1): 23–25. doi:10.2307/2305816. JSTOR 2305816. 3. C.
Lin and L. Zhipeng (2005). "On Wilson's theorem and Polignac conjecture". Math. Medley. 6. arXiv:math/0408018. Bibcode:2004math......8018C. 4. Schmidt, M. D. (2017). "New Congruences and Finite Difference Equations for Generalized Factorial Functions". arXiv:1701.04741. Bibcode:2017arXiv170104741S.
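The congruences above are cheap to verify by brute force for small arguments. The following stdlib-Python sketch does so; the trial-division `is_prime` helper and the search bounds are ad hoc choices made here for illustration.

```python
# Brute-force check of a few congruences from the table (stdlib only).
import math

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

# Wilson's theorem: (n-1)! == -1 (mod n)  iff  n is prime
for n in range(2, 200):
    wilson = math.factorial(n - 1) % n == n - 1
    assert wilson == is_prime(n)

# Clement's criterion: 4[(p-1)! + 1] == -p (mod p(p+2))  iff  p, p+2 twin primes
for p in range(2, 120):
    m = p * (p + 2)
    clement = (4 * (math.factorial(p - 1) + 1) + p) % m == 0
    assert clement == (is_prime(p) and is_prime(p + 2))

# Wilson primes: (p-1)! == -1 (mod p^2); 5, 13, 563 are the only known examples
wilson_primes = [p for p in range(2, 600) if is_prime(p)
                 and math.factorial(p - 1) % p**2 == p**2 - 1]
assert wilson_primes == [5, 13, 563]
```

Note that these factorial-based tests are far too slow for practical primality testing; their interest is theoretical, as the table's framing suggests.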
Table of costs of operations in elliptic curves Elliptic curve cryptography is a popular form of public-key cryptography that is based on the mathematical theory of elliptic curves. Points on an elliptic curve can be added and form a group under this addition operation. This article describes the computational costs for this group addition and certain related operations that are used in elliptic curve cryptography algorithms. Abbreviations for the operations The next section presents a table of all the time-costs of some of the possible operations in elliptic curves. The columns of the table are labelled by various computational operations. The rows of the table are for different models of elliptic curves. These are the operations considered:

DBL – Doubling
ADD – Addition
mADD – Mixed addition: addition of an input that has been scaled to have Z-coordinate 1.
mDBL – Mixed doubling: doubling of an input that has been scaled to have Z-coordinate 1.
TPL – Tripling.
DBL+ADD – Combined double-and-add step

To see how adding (ADD) and doubling (DBL) points on elliptic curves are defined, see The group law. The importance of doubling to speed scalar multiplication is discussed after the table. For information about other possible operations on elliptic curves see http://hyperelliptic.org/EFD/g1p/index.html. Tabulation Under different assumptions on the multiplication, addition, inversion for the elements in some fixed field, the time-cost of these operations varies. In this table it is assumed that: I = 100M, S = 1M, *param = 0M, add = 0M, *const = 0M This means that 100 multiplications (M) are required to invert (I) an element; one multiplication is required to compute the square (S) of an element; no multiplication is needed to multiply an element by a parameter (*param), by a constant (*const), or to add two elements.
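Under this cost model, the field-multiplication count of a whole scalar multiplication can be tallied directly from one row of the table. The Python sketch below is a hypothetical illustration: it assumes a plain left-to-right double-and-add schedule (one doubling per bit after the leading one, plus one addition per remaining set bit) and takes the DBL and ADD counts from the short Weierstrass projective row.

```python
# Cost in field multiplications (M) of one scalar multiplication [k]P,
# counting only DBL and ADD under the table's cost model.
DBL, ADD = 11, 14   # M per doubling / addition, short Weierstrass projective

def double_and_add_cost(k):
    bits = bin(k)[2:]
    doublings = len(bits) - 1        # no doubling needed for the leading bit
    additions = bits.count('1') - 1  # the leading bit just loads P
    return doublings * DBL + additions * ADD

# a 256-bit scalar with all bits set: the worst case for this schedule
assert double_and_add_cost(2**256 - 1) == 255 * DBL + 255 * ADD
# k = 5 = 0b101: two doublings, one addition
assert double_and_add_cost(5) == 2 * DBL + 1 * ADD
```

Swapping in another row's DBL/ADD values shows why curve models with cheap doubling (e.g. Montgomery, DBL = 4) are attractive for scalar multiplication.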
For more information about other results obtained with different assumptions, see http://hyperelliptic.org/EFD/g1p/index.html Curve shape, representation DBL ADD mADD mDBL TPL DBL+ADD Short Weierstrass projective 11 14 11 8 Short Weierstrass projective with a4=-1 11 14 11 8 Short Weierstrass projective with a4=-3 10 14 11 8 Short Weierstrass Relative Jacobian[1] 10 11 (7) (7) 18 Tripling-oriented Doche–Icart–Kohel curve 9 17 11 6 12 Hessian curve extended 9 12 11 9 Hessian curve projective 8 12 10 6 14 Jacobi quartic XYZ 8 13 11 5 Jacobi quartic doubling-oriented XYZ 8 13 11 5 Twisted Hessian curve projective 8 12 12 8 14 Doubling-oriented Doche–Icart–Kohel curve 7 17 12 6 Jacobi intersection projective 7 14 12 6 14 Jacobi intersection extended 7 12 11 7 16 Twisted Edwards projective 7 11 10 6 Twisted Edwards Inverted 7 10 9 6 Twisted Edwards Extended 8 9 8 7 Edwards projective 7 11 9 6 13 Jacobi quartic doubling-oriented XXYZZ 7 11 9 6 14 Jacobi quartic XXYZZ 7 11 9 6 14 Jacobi quartic XXYZZR 7 10 9 7 15 Edwards curve inverted 7 10 9 6 Montgomery curve 4 3 Importance of doubling In some applications of elliptic curve cryptography and the elliptic curve method of factorization (ECM) it is necessary to consider the scalar multiplication [n]P. One way to do this is to compute successively: $P,\quad [2]P=P+P,\quad [3]P=[2]P+P,\dots ,[n]P=[n-1]P+P$ But it is faster to use the double-and-add method, e.g. [5]P = [2]([2]P) + P. In general, to compute [k]P, write $k=\sum _{i\leq l}k_{i}2^{i}$ with ki in {0,1} and $l=\lfloor \log _{2}k\rfloor $, kl = 1; then: $[2](\dots ([2]([2]([2]([2]([2]P+[k_{l-1}]P)+[k_{l-2}]P)+[k_{l-3}]P)+\dots )\dots +[k_{1}]P)+[k_{0}]P=[2^{l}]P+[k_{l-1}2^{l-1}]P+\dots +[k_{1}2]P+[k_{0}]P$. Note that this simple algorithm takes at most 2l steps: each step consists of a doubling and, if ki ≠ 0, an addition of two points. So this is one of the reasons why addition and doubling formulas are defined.
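The double-and-add scheme described above can be sketched generically. In the Python illustration below, the additive group of integers mod n stands in for an elliptic-curve group purely for testability; only the `add` and `double` callbacks would change for real curve formulas.

```python
# Generic double-and-add: compute [k]P with one doubling per bit of k,
# plus an addition for each set bit after the leading one.
def scalar_mult(k, P, add, double, identity):
    result = identity
    for bit in bin(k)[2:]:           # most-significant bit first
        result = double(result)      # DBL on every step
        if bit == '1':
            result = add(result, P)  # ADD only when the bit is set
    return result

# stand-in group: integers mod n under addition, so [k]P = k*P mod n
n = 1009

def add(a, b):
    return (a + b) % n

def double(a):
    return (2 * a) % n

assert scalar_mult(5, 7, add, double, 0) == 35 % n
assert scalar_mult(123456, 7, add, double, 0) == (123456 * 7) % n
```

The loop body is exactly the "combined double and add step" (DBL+ADD) costed in the table, which is why some curve representations optimize that combination directly.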
Furthermore, this method is applicable to any group; if the group law is written multiplicatively, the double-and-add algorithm is instead called the square-and-multiply algorithm. References 1. Fay, Björn (2014-12-20). "Double-and-Add with Relative Jacobian Coordinates". Cryptology ePrint Archive. • http://hyperelliptic.org/EFD/g1p/index.html
Table of Gaussian integer factorizations A Gaussian integer is either zero, one of the four units (±1, ±i), a Gaussian prime, or composite. This article is a table of Gaussian integers x + iy, each followed either by an explicit factorization or by the label (p) if the integer is a Gaussian prime. The factorizations take the form of an optional unit multiplied by integer powers of Gaussian primes. Note that there are rational primes which are not Gaussian primes. A simple example is the rational prime 5, which is factored as 5 = (2+i)(2−i) in the table and is therefore not a Gaussian prime. Conventions The second column of the table contains only integers in the first quadrant, which means the real part x is positive and the imaginary part y is non-negative. The table might have been further reduced to the integers in the first octant of the complex plane using the symmetry y + ix = i(x − iy). The factorizations are often not unique in the sense that the unit could be absorbed into any other factor with exponent equal to one. The entry 4+2i = −i(1+i)²(2+i), for example, could also be written as 4+2i = (1+i)²(1−2i). The entries in the table resolve this ambiguity by the following convention: the factors are primes in the right complex half plane with absolute value of the real part larger than or equal to the absolute value of the imaginary part. The entries are sorted according to increasing norm x² + y² (sequence A001481 in the OEIS). The table is complete up to the maximum norm at the end of the table in the sense that each composite or prime in the first quadrant appears in the second column. Gaussian primes occur only for a subset of norms, detailed in sequence OEIS: A055025. This is a human-readable version of sequences OEIS: A103431 and OEIS: A103432.
Factorizations normintegerfactors 2 1+i(p) 4 2 −i·(1+i)2 5 2+i 1+2i (p) (p) 8 2+2i −i·(1+i)3 9 3(p) 10 1+3i 3+i (1+i)·(2+i) (1+i)·(2−i) 13 3+2i 2+3i (p) (p) 16 4 −(1+i)4 17 1+4i 4+i (p) (p) 18 3+3i (1+i)·3 20 2+4i 4+2i (1+i)2·(2−i) −i·(1+i)2·(2+i) 25 3+4i 4+3i 5 (2+i)2 i·(2−i)2 (2+i)·(2−i) 26 1+5i 5+i (1+i)·(3+2i) (1+i)·(3−2i) 29 2+5i 5+2i (p) (p) 32 4+4i −(1+i)5 34 3+5i 5+3i (1+i)·(4+i) (1+i)·(4−i) 36 6 −i·(1+i)2·3 37 1+6i 6+i (p) (p) 40 2+6i 6+2i −i·(1+i)3·(2+i) −i·(1+i)3·(2−i) 41 4+5i 5+4i (p) (p) 45 3+6i 6+3i i·(2−i)·3 (2+i)·3 49 7(p) 50 1+7i 5+5i 7+i i·(1+i)·(2−i)2 (1+i)·(2+i)·(2−i) −i·(1+i)·(2+i)2 52 4+6i 6+4i (1+i)2·(3−2i) −i·(1+i)2·(3+2i) 53 2+7i 7+2i (p) (p) 58 3+7i 7+3i (1+i)·(5+2i) (1+i)·(5−2i) 61 5+6i 6+5i (p) (p) 64 8 i·(1+i)6 65 1+8i 4+7i 7+4i 8+i i·(2+i)·(3−2i) (2+i)·(3+2i) i·(2−i)·(3−2i) (2−i)·(3+2i) 68 2+8i 8+2i (1+i)2·(4−i) −i·(1+i)2·(4+i) 72 6+6i −i·(1+i)3·3 73 3+8i 8+3i (p) (p) 74 5+7i 7+5i (1+i)·(6+i) (1+i)·(6−i) 80 4+8i 8+4i −i·(1+i)4·(2−i) −(1+i)4·(2+i) 81 9 32 82 1+9i 9+i (1+i)·(5+4i) (1+i)·(5−4i) 85 2+9i 6+7i 7+6i 9+2i i·(2−i)·(4+i) i·(2−i)·(4−i) (2+i)·(4+i) (2+i)·(4−i) 89 5+8i 8+5i (p) (p) 90 3+9i 9+3i (1+i)·(2+i)·3 (1+i)·(2−i)·3 97 4+9i 9+4i (p) (p) 98 7+7i (1+i)·7 100 6+8i 8+6i 10 −i·(1+i)2·(2+i)2 (1+i)2·(2−i)2 −i·(1+i)2·(2+i)·(2−i) 101 1+10i 10+i (p) (p) 104 2+10i 10+2i −i·(1+i)3·(3+2i) −i·(1+i)3·(3−2i) 106 5+9i 9+5i (1+i)·(7+2i) (1+i)·(7−2i) 109 3+10i 10+3i (p) (p) 113 7+8i 8+7i (p) (p) 116 4+10i 10+4i (1+i)2·(5−2i) −i·(1+i)2·(5+2i) 117 6+9i 9+6i i·3·(3−2i) 3·(3+2i) 121 11(p) 122 1+11i 11+i (1+i)·(6+5i) (1+i)·(6−5i) 125 2+11i 5+10i 10+5i 11+2i (2+i)3 i·(2+i)·(2−i)2 (2+i)2·(2−i) i·(2−i)3 128 8+8i i·(1+i)7 130 3+11i 7+9i 9+7i 11+3i i·(1+i)·(2−i)·(3−2i) (1+i)·(2−i)·(3+2i) (1+i)·(2+i)·(3−2i) −i·(1+i)·(2+i)·(3+2i) 136 6+10i 10+6i −i·(1+i)3·(4+i) −i·(1+i)3·(4−i) 137 4+11i 11+4i (p) (p) 144 12 −(1+i)4·3 145 1+12i 8+9i 9+8i 12+i i·(2−i)·(5+2i) (2+i)·(5+2i) i·(2−i)·(5−2i) (2+i)·(5−2i) 146 5+11i 11+5i (1+i)·(8+3i) (1+i)·(8−3i) 148 2+12i 12+2i 
(1+i)2·(6−i) −i·(1+i)2·(6+i) 149 7+10i 10+7i (p) (p) 153 3+12i 12+3i i·3·(4−i) 3·(4+i) 157 6+11i 11+6i (p) (p) 160 4+12i 12+4i −(1+i)5·(2+i) −(1+i)5·(2−i) 162 9+9i (1+i)·32 164 8+10i 10+8i (1+i)2·(5−4i) −i·(1+i)2·(5+4i) 169 5+12i 12+5i 13 (3+2i)2 i·(3−2i)2 (3+2i)·(3−2i) 170 1+13i 7+11i 11+7i 13+i (1+i)·(2+i)·(4+i) (1+i)·(2+i)·(4−i) (1+i)·(2−i)·(4+i) (1+i)·(2−i)·(4−i) 173 2+13i 13+2i (p) (p) 178 3+13i 13+3i (1+i)·(8+5i) (1+i)·(8−5i) 180 6+12i 12+6i (1+i)2·(2−i)·3 −i·(1+i)2·(2+i)·3 181 9+10i 10+9i (p) (p) 185 4+13i 8+11i 11+8i 13+4i i·(2−i)·(6+i) i·(2−i)·(6−i) (2+i)·(6+i) (2+i)·(6−i) 193 7+12i 12+7i (p) (p) 194 5+13i 13+5i (1+i)·(9+4i) (1+i)·(9−4i) 196 14 −i·(1+i)2·7 197 1+14i 14+i (p) (p) 200 2+14i 10+10i 14+2i (1+i)3·(2−i)2 −i·(1+i)3·(2+i)·(2−i) −(1+i)3·(2+i)2 202 9+11i 11+9i (1+i)·(10+i) (1+i)·(10−i) 205 3+14i 6+13i 13+6i 14+3i i·(2+i)·(5−4i) (2+i)·(5+4i) i·(2−i)·(5−4i) (2−i)·(5+4i) 208 8+12i 12+8i −i·(1+i)4·(3−2i) −(1+i)4·(3+2i) 212 4+14i 14+4i (1+i)2·(7−2i) −i·(1+i)2·(7+2i) 218 7+13i 13+7i (1+i)·(10+3i) (1+i)·(10−3i) 221 5+14i 10+11i 11+10i 14+5i i·(3−2i)·(4+i) (3+2i)·(4+i) i·(3−2i)·(4−i) (3+2i)·(4−i) 225 9+12i 12+9i 15 (2+i)2·3 i·(2−i)2·3 (2+i)·(2−i)·3 226 1+15i 15+i (1+i)·(8+7i) (1+i)·(8−7i) 229 2+15i 15+2i (p) (p) 232 6+14i 14+6i −i·(1+i)3·(5+2i) −i·(1+i)3·(5−2i) 233 8+13i 13+8i (p) (p) 234 3+15i 15+3i (1+i)·3·(3+2i) (1+i)·3·(3−2i) 241 4+15i 15+4i (p) (p) 242 11+11i (1+i)·11 244 10+12i 12+10i (1+i)2·(6−5i) −i·(1+i)2·(6+5i) 245 7+14i 14+7i i·(2−i)·7 (2+i)·7 250 5+15i 9+13i 13+9i 15+5i (1+i)·(2+i)2·(2−i) i·(1+i)·(2−i)3 −i·(1+i)·(2+i)3 (1+i)·(2+i)·(2−i)2 normintegerfactors 256 16 (1+i)8 257 1+16i 16+i (p) (p) 260 2+16i 8+14i 14+8i 16+2i (1+i)2·(2+i)·(3−2i) −i·(1+i)2·(2+i)·(3+2i) (1+i)2·(2−i)·(3−2i) −i·(1+i)2·(2−i)·(3+2i) 261 6+15i 15+6i i·3·(5−2i) 3·(5+2i) 265 3+16i 11+12i 12+11i 16+3i i·(2−i)·(7+2i) i·(2−i)·(7−2i) (2+i)·(7+2i) (2+i)·(7−2i) 269 10+13i 13+10i (p) (p) 272 4+16i 16+4i −i·(1+i)4·(4−i) −(1+i)4·(4+i) 274 7+15i 15+7i (1+i)·(11+4i) (1+i)·(11−4i) 277 
9+14i 14+9i (p) (p) 281 5+16i 16+5i (p) (p) 288 12+12i −(1+i)5·3 289 8+15i 15+8i 17 i·(4−i)2 (4+i)2 (4+i)·(4−i) 290 1+17i 11+13i 13+11i 17+i i·(1+i)·(2−i)·(5−2i) (1+i)·(2+i)·(5−2i) (1+i)·(2−i)·(5+2i) −i·(1+i)·(2+i)·(5+2i) 292 6+16i 16+6i (1+i)2·(8−3i) −i·(1+i)2·(8+3i) 293 2+17i 17+2i (p) (p) 296 10+14i 14+10i −i·(1+i)3·(6+i) −i·(1+i)3·(6−i) 298 3+17i 17+3i (1+i)·(10+7i) (1+i)·(10−7i) 305 4+17i 7+16i 16+7i 17+4i i·(2+i)·(6−5i) (2+i)·(6+5i) i·(2−i)·(6−5i) (2−i)·(6+5i) 306 9+15i 15+9i (1+i)·3·(4+i) (1+i)·3·(4−i) 313 12+13i 13+12i (p) (p) 314 5+17i 17+5i (1+i)·(11+6i) (1+i)·(11−6i) 317 11+14i 14+11i (p) (p) 320 8+16i 16+8i −(1+i)6·(2−i) i·(1+i)6·(2+i) 324 18 −i·(1+i)2·32 325 1+18i 6+17i 10+15i 15+10i 17+6i 18+i (2+i)2·(3+2i) i·(2−i)2·(3+2i) i·(2+i)·(2−i)·(3−2i) (2+i)·(2−i)·(3+2i) (2+i)2·(3−2i) i·(2−i)2·(3−2i) 328 2+18i 18+2i −i·(1+i)3·(5+4i) −i·(1+i)3·(5−4i) 333 3+18i 18+3i i·3·(6−i) 3·(6+i) 337 9+16i 16+9i (p) (p) 338 7+17i 13+13i 17+7i i·(1+i)·(3−2i)2 (1+i)·(3+2i)·(3−2i) −i·(1+i)·(3+2i)2 340 4+18i 12+14i 14+12i 18+4i (1+i)2·(2−i)·(4+i) (1+i)2·(2−i)·(4−i) −i·(1+i)2·(2+i)·(4+i) −i·(1+i)2·(2+i)·(4−i) 346 11+15i 15+11i (1+i)·(13+2i) (1+i)·(13−2i) 349 5+18i 18+5i (p) (p) 353 8+17i 17+8i (p) (p) 356 10+16i 16+10i (1+i)2·(8−5i) −i·(1+i)2·(8+5i) 360 6+18i 18+6i −i·(1+i)3·(2+i)·3 −i·(1+i)3·(2−i)·3 361 19(p) 362 1+19i 19+i (1+i)·(10+9i) (1+i)·(10−9i) 365 2+19i 13+14i 14+13i 19+2i i·(2−i)·(8+3i) (2+i)·(8+3i) i·(2−i)·(8−3i) (2+i)·(8−3i) 369 12+15i 15+12i i·3·(5−4i) 3·(5+4i) 370 3+19i 9+17i 17+9i 19+3i (1+i)·(2+i)·(6+i) (1+i)·(2+i)·(6−i) (1+i)·(2−i)·(6+i) (1+i)·(2−i)·(6−i) 373 7+18i 18+7i (p) (p) 377 4+19i 11+16i 16+11i 19+4i i·(3−2i)·(5+2i) (3+2i)·(5+2i) i·(3−2i)·(5−2i) (3+2i)·(5−2i) 386 5+19i 19+5i (1+i)·(12+7i) (1+i)·(12−7i) 388 8+18i 18+8i (1+i)2·(9−4i) −i·(1+i)2·(9+4i) 389 10+17i 17+10i (p) (p) 392 14+14i −i·(1+i)3·7 394 13+15i 15+13i (1+i)·(14+i) (1+i)·(14−i) 397 6+19i 19+6i (p) (p) 400 12+16i 16+12i 20 −(1+i)4·(2+i)2 −i·(1+i)4·(2−i)2 −(1+i)4·(2+i)·(2−i) 401 1+20i 20+i (p) 
(p) 404 2+20i 20+2i (1+i)2·(10−i) −i·(1+i)2·(10+i) 405 9+18i 18+9i i·(2−i)·32 (2+i)·32 409 3+20i 20+3i (p) (p) 410 7+19i 11+17i 17+11i 19+7i i·(1+i)·(2−i)·(5−4i) (1+i)·(2−i)·(5+4i) (1+i)·(2+i)·(5−4i) −i·(1+i)·(2+i)·(5+4i) 416 4+20i 20+4i −(1+i)5·(3+2i) −(1+i)5·(3−2i) 421 14+15i 15+14i (p) (p) 424 10+18i 18+10i −i·(1+i)3·(7+2i) −i·(1+i)3·(7−2i) 425 5+20i 8+19i 13+16i 16+13i 19+8i 20+5i i·(2+i)·(2−i)·(4−i) (2+i)2·(4+i) i·(2−i)2·(4+i) (2+i)2·(4−i) i·(2−i)2·(4−i) (2+i)·(2−i)·(4+i) 433 12+17i 17+12i (p) (p) 436 6+20i 20+6i (1+i)2·(10−3i) −i·(1+i)2·(10+3i) 441 21 3·7 442 1+21i 9+19i 19+9i 21+i i·(1+i)·(3−2i)·(4−i) (1+i)·(3+2i)·(4−i) (1+i)·(3−2i)·(4+i) −i·(1+i)·(3+2i)·(4+i) 445 2+21i 11+18i 18+11i 21+2i i·(2+i)·(8−5i) (2+i)·(8+5i) i·(2−i)·(8−5i) (2−i)·(8+5i) 449 7+20i 20+7i (p) (p) 450 3+21i 15+15i 21+3i i·(1+i)·(2−i)2·3 (1+i)·(2+i)·(2−i)·3 −i·(1+i)·(2+i)2·3 452 14+16i 16+14i (1+i)2·(8−7i) −i·(1+i)2·(8+7i) 457 4+21i 21+4i (p) (p) 458 13+17i 17+13i (1+i)·(15+2i) (1+i)·(15−2i) 461 10+19i 19+10i (p) (p) 464 8+20i 20+8i −i·(1+i)4·(5−2i) −(1+i)4·(5+2i) 466 5+21i 21+5i (1+i)·(13+8i) (1+i)·(13−8i) 468 12+18i 18+12i (1+i)2·3·(3−2i) −i·(1+i)2·3·(3+2i) 477 6+21i 21+6i i·3·(7−2i) 3·(7+2i) 481 9+20i 15+16i 16+15i 20+9i i·(3−2i)·(6+i) i·(3−2i)·(6−i) (3+2i)·(6+i) (3+2i)·(6−i) 482 11+19i 19+11i (1+i)·(15+4i) (1+i)·(15−4i) 484 22 −i·(1+i)2·11 485 1+22i 14+17i 17+14i 22+i i·(2−i)·(9+4i) (2+i)·(9+4i) i·(2−i)·(9−4i) (2+i)·(9−4i) 488 2+22i 22+2i −i·(1+i)3·(6+5i) −i·(1+i)3·(6−5i) 490 7+21i 21+7i (1+i)·(2+i)·7 (1+i)·(2−i)·7 493 3+22i 13+18i 18+13i 22+3i i·(4+i)·(5−2i) i·(4−i)·(5−2i) (4+i)·(5+2i) (4−i)·(5+2i) 500 4+22i 10+20i 20+10i 22+4i −i·(1+i)2·(2+i)3 (1+i)2·(2+i)·(2−i)2 −i·(1+i)2·(2+i)2·(2−i) (1+i)2·(2−i)3 normintegerfactors 505 8+21i 12+19i 19+12i 21+8i i·(2−i)·(10+i) i·(2−i)·(10−i) (2+i)·(10+i) (2+i)·(10−i) 509 5+22i 22+5i (p) (p) 512 16+16i (1+i)9 514 15+17i 17+15i (1+i)·(16+i) (1+i)·(16−i) 520 6+22i 14+18i 18+14i 22+6i (1+i)3·(2−i)·(3−2i) −i·(1+i)3·(2−i)·(3+2i) −i·(1+i)3·(2+i)·(3−2i) 
−(1+i)3·(2+i)·(3+2i) 521 11+20i 20+11i (p) (p) 522 9+21i 21+9i (1+i)·3·(5+2i) (1+i)·3·(5−2i) 529 23(p) 530 1+23i 13+19i 19+13i 23+i (1+i)·(2+i)·(7+2i) (1+i)·(2+i)·(7−2i) (1+i)·(2−i)·(7+2i) (1+i)·(2−i)·(7−2i) 533 2+23i 7+22i 22+7i 23+2i i·(3+2i)·(5−4i) (3+2i)·(5+4i) i·(3−2i)·(5−4i) (3−2i)·(5+4i) 538 3+23i 23+3i (1+i)·(13+10i) (1+i)·(13−10i) 541 10+21i 21+10i (p) (p) 544 12+20i 20+12i −(1+i)5·(4+i) −(1+i)5·(4−i) 545 4+23i 16+17i 17+16i 23+4i i·(2−i)·(10+3i) i·(2−i)·(10−3i) (2+i)·(10+3i) (2+i)·(10−3i) 548 8+22i 22+8i (1+i)2·(11−4i) −i·(1+i)2·(11+4i) 549 15+18i 18+15i i·3·(6−5i) 3·(6+5i) 554 5+23i 23+5i (1+i)·(14+9i) (1+i)·(14−9i) 557 14+19i 19+14i (p) (p) 562 11+21i 21+11i (1+i)·(16+5i) (1+i)·(16−5i) 565 6+23i 9+22i 22+9i 23+6i i·(2+i)·(8−7i) (2+i)·(8+7i) i·(2−i)·(8−7i) (2−i)·(8+7i) 569 13+20i 20+13i (p) (p) 576 24 i·(1+i)6·3 577 1+24i 24+i (p) (p) 578 7+23i 17+17i 23+7i (1+i)·(4+i)2 (1+i)·(4+i)·(4−i) (1+i)·(4−i)2 580 2+24i 16+18i 18+16i 24+2i (1+i)2·(2−i)·(5+2i) −i·(1+i)2·(2+i)·(5+2i) (1+i)2·(2−i)·(5−2i) −i·(1+i)2·(2+i)·(5−2i) 584 10+22i 22+10i −i·(1+i)3·(8+3i) −i·(1+i)3·(8−3i) 585 3+24i 12+21i 21+12i 24+3i i·(2+i)·3·(3−2i) (2+i)·3·(3+2i) i·(2−i)·3·(3−2i) (2−i)·3·(3+2i) 586 15+19i 19+15i (1+i)·(17+2i) (1+i)·(17−2i) 592 4+24i 24+4i −i·(1+i)4·(6−i) −(1+i)4·(6+i) 593 8+23i 23+8i (p) (p) 596 14+20i 20+14i (1+i)2·(10−7i) −i·(1+i)2·(10+7i) 601 5+24i 24+5i (p) (p) 605 11+22i 22+11i i·(2−i)·11 (2+i)·11 610 9+23i 13+21i 21+13i 23+9i i·(1+i)·(2−i)·(6−5i) (1+i)·(2−i)·(6+5i) (1+i)·(2+i)·(6−5i) −i·(1+i)·(2+i)·(6+5i) 612 6+24i 24+6i (1+i)2·3·(4−i) −i·(1+i)2·3·(4+i) 613 17+18i 18+17i (p) (p) 617 16+19i 19+16i (p) (p) 625 7+24i 15+20i 20+15i 24+7i 25 −(2−i)4 (2+i)3·(2−i) i·(2+i)·(2−i)3 −i·(2+i)4 (2+i)2·(2−i)2 626 1+25i 25+i (1+i)·(13+12i) (1+i)·(13−12i) 628 12+22i 22+12i (1+i)2·(11−6i) −i·(1+i)2·(11+6i) 629 2+25i 10+23i 23+10i 25+2i i·(4−i)·(6+i) i·(4−i)·(6−i) (4+i)·(6+i) (4+i)·(6−i) 634 3+25i 25+3i (1+i)·(14+11i) (1+i)·(14−11i) 637 14+21i 21+14i i·(3−2i)·7 (3+2i)·7 640 8+24i 24+8i 
i·(1+i)7·(2+i) i·(1+i)7·(2−i) 641 4+25i 25+4i (p) (p) 648 18+18i −i·(1+i)3·32 650 5+25i 11+23i 17+19i 19+17i 23+11i 25+5i (1+i)·(2+i)·(2−i)·(3+2i) (1+i)·(2+i)2·(3−2i) i·(1+i)·(2−i)2·(3−2i) −i·(1+i)·(2+i)2·(3+2i) (1+i)·(2−i)2·(3+2i) (1+i)·(2+i)·(2−i)·(3−2i) 653 13+22i 22+13i (p) (p) 656 16+20i 20+16i −i·(1+i)4·(5−4i) −(1+i)4·(5+4i) 657 9+24i 24+9i i·3·(8−3i) 3·(8+3i) 661 6+25i 25+6i (p) (p) 666 15+21i 21+15i (1+i)·3·(6+i) (1+i)·3·(6−i) 673 12+23i 23+12i (p) (p) 674 7+25i 25+7i (1+i)·(16+9i) (1+i)·(16−9i) 676 10+24i 24+10i 26 −i·(1+i)2·(3+2i)2 (1+i)2·(3−2i)2 −i·(1+i)2·(3+2i)·(3−2i) 677 1+26i 26+i (p) (p) 680 2+26i 14+22i 22+14i 26+2i −i·(1+i)3·(2+i)·(4+i) −i·(1+i)3·(2+i)·(4−i) −i·(1+i)3·(2−i)·(4+i) −i·(1+i)3·(2−i)·(4−i) 685 3+26i 18+19i 19+18i 26+3i i·(2−i)·(11+4i) (2+i)·(11+4i) i·(2−i)·(11−4i) (2+i)·(11−4i) 689 8+25i 17+20i 20+17i 25+8i i·(3−2i)·(7+2i) (3+2i)·(7+2i) i·(3−2i)·(7−2i) (3+2i)·(7−2i) 692 4+26i 26+4i (1+i)2·(13−2i) −i·(1+i)2·(13+2i) 697 11+24i 16+21i 21+16i 24+11i i·(4+i)·(5−4i) (4+i)·(5+4i) i·(4−i)·(5−4i) (4−i)·(5+4i) 698 13+23i 23+13i (1+i)·(18+5i) (1+i)·(18−5i) 701 5+26i 26+5i (p) (p) 706 9+25i 25+9i (1+i)·(17+8i) (1+i)·(17−8i) 709 15+22i 22+15i (p) (p) 712 6+26i 26+6i −i·(1+i)3·(8+5i) −i·(1+i)3·(8−5i) 720 12+24i 24+12i −i·(1+i)4·(2−i)·3 −(1+i)4·(2+i)·3 722 19+19i (1+i)·19 724 18+20i 20+18i (1+i)2·(10−9i) −i·(1+i)2·(10+9i) 725 7+26i 10+25i 14+23i 23+14i 25+10i 26+7i (2+i)2·(5+2i) i·(2+i)·(2−i)·(5−2i) i·(2−i)2·(5+2i) (2+i)2·(5−2i) (2+i)·(2−i)·(5+2i) i·(2−i)2·(5−2i) 729 27 33 730 1+27i 17+21i 21+17i 27+i i·(1+i)·(2−i)·(8−3i) (1+i)·(2+i)·(8−3i) (1+i)·(2−i)·(8+3i) −i·(1+i)·(2+i)·(8+3i) 733 2+27i 27+2i (p) (p) 738 3+27i 27+3i (1+i)·3·(5+4i) (1+i)·3·(5−4i) 740 8+26i 16+22i 22+16i 26+8i (1+i)2·(2−i)·(6+i) (1+i)2·(2−i)·(6−i) −i·(1+i)2·(2+i)·(6+i) −i·(1+i)2·(2+i)·(6−i) 745 4+27i 13+24i 24+13i 27+4i i·(2+i)·(10−7i) (2+i)·(10+7i) i·(2−i)·(10−7i) (2−i)·(10+7i) 746 11+25i 25+11i (1+i)·(18+7i) (1+i)·(18−7i) normintegerfactors 754 5+27i 15+23i 23+15i 27+5i 
i·(1+i)·(3−2i)·(5−2i) (1+i)·(3+2i)·(5−2i) (1+i)·(3−2i)·(5+2i) −i·(1+i)·(3+2i)·(5+2i) 757 9+26i 26+9i (p) (p) 761 19+20i 20+19i (p) (p) 765 6+27i 18+21i 21+18i 27+6i i·(2−i)·3·(4+i) i·(2−i)·3·(4−i) (2+i)·3·(4+i) (2+i)·3·(4−i) 769 12+25i 25+12i (p) (p) 772 14+24i 24+14i (1+i)2·(12−7i) −i·(1+i)2·(12+7i) 773 17+22i 22+17i (p) (p) 776 10+26i 26+10i −i·(1+i)3·(9+4i) −i·(1+i)3·(9−4i) 778 7+27i 27+7i (1+i)·(17+10i) (1+i)·(17−10i) 784 28 −(1+i)4·7 785 1+28i 16+23i 23+16i 28+i i·(2+i)·(11−6i) (2+i)·(11+6i) i·(2−i)·(11−6i) (2−i)·(11+6i) 788 2+28i 28+2i (1+i)2·(14−i) −i·(1+i)2·(14+i) 793 3+28i 8+27i 27+8i 28+3i i·(3+2i)·(6−5i) (3+2i)·(6+5i) i·(3−2i)·(6−5i) (3−2i)·(6+5i) 794 13+25i 25+13i (1+i)·(19+6i) (1+i)·(19−6i) 797 11+26i 26+11i (p) (p) 800 4+28i 20+20i 28+4i −i·(1+i)5·(2−i)2 −(1+i)5·(2+i)·(2−i) i·(1+i)5·(2+i)2 801 15+24i 24+15i i·3·(8−5i) 3·(8+5i) 802 19+21i 21+19i (1+i)·(20+i) (1+i)·(20−i) 808 18+22i 22+18i −i·(1+i)3·(10+i) −i·(1+i)3·(10−i) 809 5+28i 28+5i (p) (p) 810 9+27i 27+9i (1+i)·(2+i)·32 (1+i)·(2−i)·32 818 17+23i 23+17i (1+i)·(20+3i) (1+i)·(20−3i) 820 6+28i 12+26i 26+12i 28+6i (1+i)2·(2+i)·(5−4i) −i·(1+i)2·(2+i)·(5+4i) (1+i)2·(2−i)·(5−4i) −i·(1+i)2·(2−i)·(5+4i) 821 14+25i 25+14i (p) (p) 829 10+27i 27+10i (p) (p) 832 16+24i 24+16i −(1+i)6·(3−2i) i·(1+i)6·(3+2i) 833 7+28i 28+7i i·(4−i)·7 (4+i)·7 841 20+21i 21+20i 29 i·(5−2i)2 (5+2i)2 (5+2i)·(5−2i) 842 1+29i 29+i (1+i)·(15+14i) (1+i)·(15−14i) 845 2+29i 13+26i 19+22i 22+19i 26+13i 29+2i −(2−i)·(3−2i)2 i·(2−i)·(3+2i)·(3−2i) i·(2+i)·(3−2i)2 (2−i)·(3+2i)2 (2+i)·(3+2i)·(3−2i) −i·(2+i)·(3+2i)2 848 8+28i 28+8i −i·(1+i)4·(7−2i) −(1+i)4·(7+2i) 850 3+29i 11+27i 15+25i 25+15i 27+11i 29+3i (1+i)·(2+i)2·(4−i) i·(1+i)·(2−i)2·(4−i) (1+i)·(2+i)·(2−i)·(4+i) (1+i)·(2+i)·(2−i)·(4−i) −i·(1+i)·(2+i)2·(4+i) (1+i)·(2−i)2·(4+i) 853 18+23i 23+18i (p) (p) 857 4+29i 29+4i (p) (p) 865 9+28i 17+24i 24+17i 28+9i i·(2−i)·(13+2i) i·(2−i)·(13−2i) (2+i)·(13+2i) (2+i)·(13−2i) 866 5+29i 29+5i (1+i)·(17+12i) (1+i)·(17−12i) 872 14+26i 26+14i 
−i·(1+i)3·(10+3i) −i·(1+i)3·(10−3i) 873 12+27i 27+12i i·3·(9−4i) 3·(9+4i) 877 6+29i 29+6i (p) (p) 881 16+25i 25+16i (p) (p) 882 21+21i (1+i)·3·7 884 10+28i 20+22i 22+20i 28+10i (1+i)2·(3−2i)·(4+i) −i·(1+i)2·(3+2i)·(4+i) (1+i)2·(3−2i)·(4−i) −i·(1+i)2·(3+2i)·(4−i) 890 7+29i 19+23i 23+19i 29+7i i·(1+i)·(2−i)·(8−5i) (1+i)·(2−i)·(8+5i) (1+i)·(2+i)·(8−5i) −i·(1+i)·(2+i)·(8+5i) 898 13+27i 27+13i (1+i)·(20+7i) (1+i)·(20−7i) 900 18+24i 24+18i 30 −i·(1+i)2·(2+i)2·3 (1+i)2·(2−i)2·3 −i·(1+i)2·(2+i)·(2−i)·3 901 1+30i 15+26i 26+15i 30+i i·(4+i)·(7−2i) i·(4−i)·(7−2i) (4+i)·(7+2i) (4−i)·(7+2i) 904 2+30i 30+2i −i·(1+i)3·(8+7i) −i·(1+i)3·(8−7i) 905 8+29i 11+28i 28+11i 29+8i i·(2+i)·(10−9i) (2+i)·(10+9i) i·(2−i)·(10−9i) (2−i)·(10+9i) 909 3+30i 30+3i i·3·(10−i) 3·(10+i) 914 17+25i 25+17i (1+i)·(21+4i) (1+i)·(21−4i) 916 4+30i 30+4i (1+i)2·(15−2i) −i·(1+i)2·(15+2i) 922 9+29i 29+9i (1+i)·(19+10i) (1+i)·(19−10i) 925 5+30i 14+27i 21+22i 22+21i 27+14i 30+5i i·(2+i)·(2−i)·(6−i) (2+i)2·(6+i) i·(2−i)2·(6+i) (2+i)2·(6−i) i·(2−i)2·(6−i) (2+i)·(2−i)·(6+i) 928 12+28i 28+12i −(1+i)5·(5+2i) −(1+i)5·(5−2i) 929 20+23i 23+20i (p) (p) 932 16+26i 26+16i (1+i)2·(13−8i) −i·(1+i)2·(13+8i) 936 6+30i 30+6i −i·(1+i)3·3·(3+2i) −i·(1+i)3·3·(3−2i) 937 19+24i 24+19i (p) (p) 941 10+29i 29+10i (p) (p) 949 7+30i 18+25i 25+18i 30+7i i·(3−2i)·(8+3i) (3+2i)·(8+3i) i·(3−2i)·(8−3i) (3+2i)·(8−3i) 953 13+28i 28+13i (p) (p) 954 15+27i 27+15i (1+i)·3·(7+2i) (1+i)·3·(7−2i) 961 31(p) 962 1+31i 11+29i 29+11i 31+i (1+i)·(3+2i)·(6+i) (1+i)·(3+2i)·(6−i) (1+i)·(3−2i)·(6+i) (1+i)·(3−2i)·(6−i) 964 8+30i 30+8i (1+i)2·(15−4i) −i·(1+i)2·(15+4i) 965 2+31i 17+26i 26+17i 31+2i i·(2+i)·(12−7i) (2+i)·(12+7i) i·(2−i)·(12−7i) (2−i)·(12+7i) 968 22+22i −i·(1+i)3·11 970 3+31i 21+23i 23+21i 31+3i i·(1+i)·(2−i)·(9−4i) (1+i)·(2+i)·(9−4i) (1+i)·(2−i)·(9+4i) −i·(1+i)·(2+i)·(9+4i) 976 20+24i 24+20i −i·(1+i)4·(6−5i) −(1+i)4·(6+5i) 977 4+31i 31+4i (p) (p) 980 14+28i 28+14i (1+i)2·(2−i)·7 −i·(1+i)2·(2+i)·7 981 9+30i 30+9i i·3·(10−3i) 3·(10+3i) 985 12+29i 
16+27i 27+16i 29+12i i·(2−i)·(14+i) i·(2−i)·(14−i) (2+i)·(14+i) (2+i)·(14−i) 986 5+31i 19+25i 25+19i 31+5i (1+i)·(4+i)·(5+2i) (1+i)·(4−i)·(5+2i) (1+i)·(4+i)·(5−2i) (1+i)·(4−i)·(5−2i) 997 6+31i 31+6i (p) (p) 1000 10+30i 18+26i 26+18i 30+10i −i·(1+i)3·(2+i)2·(2−i) (1+i)3·(2−i)3 −(1+i)3·(2+i)3 −i·(1+i)3·(2+i)·(2−i)2 See also • Gaussian integer • Table of divisors • Integer factorization References • Dresden, Greg; Dymacek, Wayne (2005). "Finding factors of factor rings over the Gaussian integers". American Mathematical Monthly. 112 (7): 602–611. doi:10.2307/30037545. JSTOR 30037545. MR 2158894. • Gethner, Ellen; Wagner, Stan; Wick, Brian (1998). "A stroll through the Gaussian primes". Amer. Math. Monthly. 105 (4): 327–337. doi:10.2307/2589708. JSTOR 2589708. MR 1614871. • Matsui, Hajime (2000). "A bound for the least Gaussian prime omega with alpha < arg(omega) < beta". Arch. Math. 74 (6): 423–431. doi:10.1007/s000130050463. MR 1753540. External links • Weisstein, Eric W. "Gaussian prime". MathWorld. • Weisstein, Eric W. "Prime Factorization". MathWorld. • OEIS: Gaussian Primes
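Individual table entries can be checked mechanically: multiplying out the listed factors must reproduce the integer, and the norm is multiplicative. A small Python sketch using built-in complex numbers (exact here, since every real and imaginary part is a small integer representable in a float):

```python
# Verify a few table entries with Python's built-in complex arithmetic.
def norm(z):
    return int(z.real) ** 2 + int(z.imag) ** 2

# table entry: 4+2i = -i*(1+i)^2*(2+i)
assert -1j * (1 + 1j) ** 2 * (2 + 1j) == 4 + 2j
# the rational prime 5 splits as (2+i)(2-i), so it is not a Gaussian prime
assert (2 + 1j) * (2 - 1j) == 5 + 0j
# table entry at norm 50: 1+7i = i*(1+i)*(2-i)^2
assert 1j * (1 + 1j) * (2 - 1j) ** 2 == 1 + 7j
assert norm(1 + 7j) == 50
# norms are multiplicative: N(zw) = N(z)N(w)
z, w = 3 + 2j, 4 + 1j
assert norm(z * w) == norm(z) * norm(w)
```

Multiplicativity of the norm is what ties the factorizations to the sorting key: the norms of the factors of an entry must multiply to the norm heading its row.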
Table of Lie groups This article gives a table of some common Lie groups and their associated Lie algebras. The following are noted: the topological properties of the group (dimension; connectedness; compactness; the nature of the fundamental group; and whether or not they are simply connected) as well as their algebraic properties (abelian; simple; semisimple). For more examples of Lie groups and other related topics see the list of simple Lie groups; the Bianchi classification of groups of up to three dimensions; the classification of low-dimensional real Lie algebras for up to four dimensions; and the list of Lie group topics.
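As a concrete check of one row of the tables below, the following stdlib-Python sketch builds a basis of so(3), the 3×3 real skew-symmetric matrices, and confirms that its commutator bracket has the same structure constants as the cross product on R³ (the row for Im(H)). The particular basis matrices L1, L2, L3 are the usual rotation generators, chosen here for illustration.

```python
# so(3) has dimension 3(3-1)/2 = 3; its bracket matches the cross product.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def bracket(A, B):  # Lie bracket = commutator AB - BA
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

# standard basis of so(3): generators of rotations about the x, y, z axes
L1 = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
L2 = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]
L3 = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]

# [L1, L2] = L3 and cyclic permutations: the cross-product relations
# e1 x e2 = e3, e2 x e3 = e1, e3 x e1 = e2 in matrix form
assert bracket(L1, L2) == L3
assert bracket(L2, L3) == L1
assert bracket(L3, L1) == L2
```

This is the isomorphism behind the table rows identifying so(3), su(2), and Im(H): all three are 3-dimensional real Lie algebras with these structure constants.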
Real Lie groups and their algebras
Column legend
• Cpt: Is this group G compact? (Yes or No)
• π0: Gives the group of components of G. The order of the component group gives the number of connected components. The group is connected if and only if the component group is trivial (denoted by 0).
• π1: Gives the fundamental group of G whenever G is connected. The group is simply connected if and only if the fundamental group is trivial (denoted by 0).
• UC: If G is not simply connected, gives the universal cover of G.
• Rn: Euclidean space with addition. Cpt: N; π0: 0; π1: 0. Abelian. Lie algebra: Rn, dim/R = n.
• R×: nonzero real numbers with multiplication. Cpt: N; π0: Z2. Abelian. Lie algebra: R, dim/R = 1.
• R+: positive real numbers with multiplication. Cpt: N; π0: 0; π1: 0. Abelian. Lie algebra: R, dim/R = 1.
• S1 = U(1): the circle group; complex numbers of absolute value 1 with multiplication. Cpt: Y; π0: 0; π1: Z; UC: R. Abelian; isomorphic to SO(2), Spin(2), and R/Z. Lie algebra: R, dim/R = 1.
• Aff(1): invertible affine transformations from R to R, realized as $\left\{\left[{\begin{smallmatrix}a&b\\0&1\end{smallmatrix}}\right]:a\in \mathbb {R} ^{*},b\in \mathbb {R} \right\}$. Cpt: N; π0: Z2. Solvable; semidirect product of R+ and R×. dim/R = 2.
• H×: non-zero quaternions with multiplication. Cpt: N; π0: 0; π1: 0. Lie algebra: H, dim/R = 4.
• S3 = Sp(1): quaternions of absolute value 1 with multiplication; topologically a 3-sphere. Cpt: Y; π0: 0; π1: 0. Isomorphic to SU(2) and to Spin(3); double cover of SO(3). Lie algebra: Im(H), dim/R = 3.
• GL(n,R): general linear group, invertible n×n real matrices. Cpt: N; π0: Z2. Lie algebra: M(n,R), dim/R = n².
• GL+(n,R): n×n real matrices with positive determinant. Cpt: N; π0: 0; π1: Z for n=2, Z2 for n>2. GL+(1,R) is isomorphic to R+ and is simply connected. Lie algebra: M(n,R), dim/R = n².
• SL(n,R): special linear group, real matrices with determinant 1. Cpt: N; π0: 0; π1: Z for n=2, Z2 for n>2. SL(1,R) is a single point and therefore compact and simply connected. Lie algebra: sl(n,R), dim/R = n²−1.
• SL(2,R): orientation-preserving isometries of the Poincaré half-plane; isomorphic to SU(1,1) and to Sp(2,R). Cpt: N; π0: 0; π1: Z. The universal cover has no finite-dimensional faithful representations. Lie algebra: sl(2,R), dim/R = 3.
• O(n): orthogonal group, real orthogonal matrices. Cpt: Y; π0: Z2. The symmetry group of the sphere (n=3) or hypersphere. Lie algebra: so(n), dim/R = n(n−1)/2.
• SO(n): special orthogonal group, real orthogonal matrices with determinant 1. Cpt: Y; π0: 0; π1: Z for n=2, Z2 for n>2; UC: Spin(n) for n>2. SO(1) is a single point, SO(2) is isomorphic to the circle group, and SO(3) is the rotation group of the sphere. Lie algebra: so(n), dim/R = n(n−1)/2.
• Spin(n): spin group, double cover of SO(n). Cpt: Y; π0: 0 for n>1; π1: 0 for n>2. Spin(1) is isomorphic to Z2 and not connected; Spin(2) is isomorphic to the circle group and not simply connected. Lie algebra: so(n), dim/R = n(n−1)/2.
• Sp(2n,R): symplectic group, real symplectic matrices. Cpt: N; π0: 0; π1: Z. Lie algebra: sp(2n,R), dim/R = n(2n+1).
• Sp(n): compact symplectic group, quaternionic n×n unitary matrices. Cpt: Y; π0: 0; π1: 0. Lie algebra: sp(n), dim/R = n(2n+1).
• Mp(2n,R): metaplectic group, double cover of the real symplectic group Sp(2n,R). Cpt: N; π0: 0; π1: Z. Mp(2,R) is a Lie group that is not algebraic. Lie algebra: sp(2n,R), dim/R = n(2n+1).
• U(n): unitary group, complex n×n unitary matrices. Cpt: Y; π0: 0; π1: Z; UC: R×SU(n). For n=1: isomorphic to S1. Note: this is not a complex Lie group/algebra. Lie algebra: u(n), dim/R = n².
• SU(n): special unitary group, complex n×n unitary matrices with determinant 1. Cpt: Y; π0: 0; π1: 0. Note: this is not a complex Lie group/algebra. Lie algebra: su(n), dim/R = n²−1.
Real Lie algebras
• R: the real numbers, with zero Lie bracket. dim/R = 1.
• Rn: zero Lie bracket. dim/R = n.
• R3: Lie bracket the cross product. Simple and semi-simple. dim/R = 3.
• H: quaternions, with Lie bracket the commutator. dim/R = 4.
• Im(H): quaternions with zero real part, with Lie bracket the commutator; isomorphic to real 3-vectors with the cross product as bracket; also isomorphic to su(2) and to so(3,R). Simple and semi-simple. dim/R = 3.
• M(n,R): n×n matrices, with Lie bracket the commutator. dim/R = n².
• sl(n,R): square matrices with trace 0, with Lie bracket the commutator. Simple and semi-simple. dim/R = n²−1.
• so(n): skew-symmetric square real matrices, with Lie bracket the commutator. Semi-simple; simple except for n=4 (so(4) is semi-simple but not simple). dim/R = n(n−1)/2.
• sp(2n,R): real matrices A that satisfy JA + A^T J = 0, where J is the standard skew-symmetric matrix. Simple and semi-simple. dim/R = n(2n+1).
• sp(n): square quaternionic matrices A satisfying A = −A∗, with Lie bracket the commutator. Simple and semi-simple. dim/R = n(2n+1).
• u(n): square complex matrices A satisfying A = −A∗, with Lie bracket the commutator. dim/R = n².
• su(n), n≥2: square complex matrices A with trace 0 satisfying A = −A∗, with Lie bracket the commutator. Simple and semi-simple. dim/R = n²−1.
Complex Lie groups and their algebras
Main article: Complex Lie group
Note that a "complex Lie group" is defined as a complex analytic manifold that is also a group whose multiplication and inversion are each given by a holomorphic map. The dimensions in the table below are dimensions over C. Note that every complex Lie group/algebra can also be viewed as a real Lie group/algebra of twice the dimension.
• Cn: group operation is addition. Cpt: N; π0: 0; π1: 0. Abelian. Lie algebra: Cn, dim/C = n.
• C×: nonzero complex numbers with multiplication. Cpt: N; π0: 0; π1: Z. Abelian. Lie algebra: C, dim/C = 1.
• GL(n,C): general linear group, invertible n×n complex matrices. Cpt: N; π0: 0; π1: Z. For n=1: isomorphic to C×. Lie algebra: M(n,C), dim/C = n².
• SL(n,C): special linear group, complex matrices with determinant 1. Cpt: N; π0: 0; π1: 0. For n=1 this is a single point and thus compact. Lie algebra: sl(n,C), dim/C = n²−1.
• SL(2,C): special case of SL(n,C) for n=2. Cpt: N; π0: 0; π1: 0. Isomorphic to Spin(3,C) and to Sp(2,C). Lie algebra: sl(2,C), dim/C = 3.
• PSL(2,C): projective special linear group. Cpt: N; π0: 0; π1: Z2; UC: SL(2,C). Isomorphic to the Möbius group, to the restricted Lorentz group SO+(3,1,R), and to SO(3,C). Lie algebra: sl(2,C), dim/C = 3.
• O(n,C): orthogonal group, complex orthogonal matrices. Cpt: N; π0: Z2. Finite for n=1. Lie algebra: so(n,C), dim/C = n(n−1)/2.
• SO(n,C): special orthogonal group, complex orthogonal matrices with determinant 1. Cpt: N; π0: 0; π1: Z for n=2, Z2 for n>2. SO(2,C) is abelian and isomorphic to C×; nonabelian for n>2. SO(1,C) is a single point and thus compact and simply connected. Lie algebra: so(n,C), dim/C = n(n−1)/2.
• Sp(2n,C): symplectic group, complex symplectic matrices. Cpt: N; π0: 0; π1: 0. Lie algebra: sp(2n,C), dim/C = n(2n+1).
Complex Lie algebras
Main article: Complex Lie algebra
The dimensions given are dimensions over C. Note that every complex Lie algebra can also be viewed as a real Lie algebra of twice the dimension.
• C: the complex numbers. dim/C = 1.
• Cn: zero Lie bracket. dim/C = n.
• M(n,C): n×n matrices, with Lie bracket the commutator. dim/C = n².
• sl(n,C): square matrices with trace 0, with Lie bracket the commutator. Simple and semi-simple. dim/C = n²−1.
• sl(2,C): special case of sl(n,C) with n=2. Simple and semi-simple; isomorphic to su(2) $\otimes $ C. dim/C = 3.
• so(n,C): skew-symmetric square complex matrices, with Lie bracket the commutator. Semi-simple; simple except that so(4,C) is semi-simple but not simple. dim/C = n(n−1)/2.
• sp(2n,C): complex matrices A that satisfy JA + A^T J = 0, where J is the standard skew-symmetric matrix. Simple and semi-simple. dim/C = n(2n+1).
The Lie algebra of affine transformations of dimension two exists, in fact, over any field. An instance has already been listed in the first table of real Lie algebras.
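As a quick illustration of the so(n) entries above, the following sketch (plain Python; the helper names are my own, not from the article) checks that the commutator of two skew-symmetric matrices is again skew-symmetric, and prints the dimension n(n−1)/2 coming from one basis element per pair i < j:

```python
import random

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commutator(A, B):
    # Lie bracket [A, B] = AB - BA
    n = len(A)
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

def random_skew(n):
    # Skew-symmetric: A[j][i] = -A[i][j], zero diagonal
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            x = random.uniform(-1, 1)
            A[i][j], A[j][i] = x, -x
    return A

def is_skew(A, tol=1e-12):
    n = len(A)
    return all(abs(A[i][j] + A[j][i]) < tol for i in range(n) for j in range(n))

for n in range(2, 6):
    A, B = random_skew(n), random_skew(n)
    assert is_skew(commutator(A, B))   # so(n) is closed under the bracket
    print(n, n * (n - 1) // 2)         # dimension of so(n)
```

The same style of check works for the other matrix algebras in the tables (sl, u, su), substituting their defining conditions.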
Table of mathematical symbols by introduction date
The following table lists many specialized symbols commonly used in modern mathematics, ordered by their introduction date.
• — horizontal bar for division: 14th century (approx.), Nicole Oresme[1]
• + plus sign: 1360 (approx.), as an abbreviation for Latin et resembling the plus sign, Nicole Oresme
• − minus sign: 1489 (first appearance of the minus sign, and also first appearance of the plus sign in print), Johannes Widmann
• √ radical symbol (for square root): 1525 (without the vinculum above the radicand), Christoff Rudolff
• (...) parentheses (for precedence grouping): 1544 (in handwritten notes), Michael Stifel; 1556, Niccolò Tartaglia
• = equals sign: 1557, Robert Recorde
• . decimal separator: 1593, Christopher Clavius
• × multiplication sign: 1618, William Oughtred
• ± plus–minus sign: 1628
• ∷ proportion sign
• n√ radical symbol (for nth root): 1629, Albert Girard
• < > strict inequality signs (less-than sign and greater-than sign): 1631, Thomas Harriot
• xy superscript notation (for exponentiation): 1636 (using Roman numerals as superscripts), James Hume; 1637 (in the modern form), René Descartes (La Géométrie)
• x use of the letter x for an independent variable or unknown value (see History of algebra: The symbol x): 1637,[2] René Descartes (La Géométrie)
• √ ̅ radical symbol (for square root, with the vinculum above the radicand): 1637, René Descartes (La Géométrie)
• % percent sign: 1650 (approx.), author unknown
• ∞ infinity sign: 1655, John Wallis
• ÷ division sign (a repurposed obelus variant): 1659, Johann Rahn
• ≤ ≥ unstrict inequality signs (less-than or equal to sign and greater-than or equal to sign): 1670 (with the horizontal bar over the inequality sign, rather than below it), John Wallis; 1734 (with double horizontal bar below the inequality sign), Pierre Bouguer
• d differential sign: 1675, Gottfried Leibniz
• ∫ integral sign
• : colon (for division): 1684 (deriving from the use of the colon to denote fractions, dating back to 1633)
• · middle dot (for multiplication): 1698 (perhaps deriving from a much earlier use of the middle dot to separate juxtaposed numbers)
• ⁄ division slash (a.k.a. solidus): 1718 (deriving from the horizontal fraction bar, invented by Abu Bakr al-Hassar in the 12th century), Thomas Twining
• ≠ inequality sign (not equal to): date unknown, Leonhard Euler
• x′ prime symbol (for derivative): 1748
• Σ summation symbol: 1755
• ∝ proportionality sign: 1768, William Emerson
• ∂ partial differential sign (a.k.a. curly d or Jacobi's delta): 1770, Marquis de Condorcet
• ≡ identity sign (for congruence relation): 1801 (first appearance in print; used previously in personal writings of Gauss), Carl Friedrich Gauss
• [x] integral part (a.k.a. floor): 1808
• ! factorial: 1808, Christian Kramp
• Π product symbol: 1812, Carl Friedrich Gauss
• ⊂ ⊃ set inclusion signs (subset of, superset of): 1817, Joseph Gergonne; 1890, Ernst Schröder
• |...| absolute value notation: 1841, Karl Weierstrass
• |...| determinant of a matrix: 1841, Arthur Cayley
• ‖...‖ matrix notation: 1843[3]
• ∇ nabla symbol (for vector differential): 1846 (previously used by Hamilton as a general-purpose operator sign), William Rowan Hamilton
• ∩ ∪ intersection and union signs: 1888, Giuseppe Peano
• ℵ aleph symbol (for transfinite cardinal numbers): 1893, Georg Cantor
• ∈ membership sign (is an element of): 1894, Giuseppe Peano
• O Big O notation: 1894, Paul Bachmann
• {...} braces, a.k.a. curly brackets (for set notation): 1895, Georg Cantor
• ℕ blackboard bold capital N (for the set of natural numbers): 1895, Giuseppe Peano
• ℚ blackboard bold capital Q (for the set of rational numbers)
• ∃ existential quantifier (there exists): 1897
• · middle dot (for dot product): 1902, J. Willard Gibbs
• × multiplication sign (for cross product)
• ∨ logical disjunction (a.k.a. OR): 1906, Bertrand Russell
• (...) matrix notation: 1909,[3] Maxime Bôcher
• [...] matrix notation: 1909,[3] Gerhard Kowalewski
• ∮ contour integral sign: 1917, Arnold Sommerfeld
• ℤ blackboard bold capital Z (for the set of integers): 1930, Edmund Landau
• ∀ universal quantifier (for all): 1935, Gerhard Gentzen
• → arrow (for function notation): 1936 (to denote images of specific elements), Øystein Ore; 1940 (in the present form of f: X → Y), Witold Hurewicz
• ∅ empty set sign: 1939, André Weil / Nicolas Bourbaki[4]
• ℂ blackboard bold capital C (for the set of complex numbers): 1939, Nathan Jacobson
• ∎ end of proof sign (a.k.a. tombstone): 1950,[5] Paul Halmos
• ⌊x⌋ ⌈x⌉ floor (greatest integer ≤ x) and ceiling (smallest integer ≥ x): 1962,[6] Kenneth E. Iverson
See also
• History of mathematical notation
• History of the Hindu–Arabic numeral system
• Glossary of mathematical symbols
• List of mathematical symbols by subject
• Mathematical notation
• Mathematical operators and symbols in Unicode
Sources
1. Cajori, Florian (1993). A History of Mathematical Notations. Mineola, New York: Dover Publications.
2. Boyer, Carl B. (1991). A History of Mathematics (2nd ed.). John Wiley & Sons. ISBN 978-0-471-54397-8.
3. "Earliest Uses of Symbols for Matrices and Vectors". jeff560.tripod.com. Retrieved 18 December 2016.
4. Weil, André (1992). The Apprenticeship of a Mathematician. Springer. p. 114. ISBN 9783764326500.
5. Halmos, Paul (1950). Measure Theory. New York: Van Nostrand. p. vi. "The symbol ∎ is used throughout the entire book in place of such phrases as 'Q.E.D.' or 'This completes the proof of the theorem' to signal the end of a proof."
6. Iverson, Kenneth E. (1962). A Programming Language. Wiley. Retrieved 20 April 2016.
External links
• RapidTables: Math Symbols List
• Jeff Miller: Earliest Uses of Various Mathematical Symbols
Table of Newtonian series
In mathematics, a Newtonian series, named after Isaac Newton, is a sum over a sequence $a_{n}$ written in the form
$f(s)=\sum _{n=0}^{\infty }(-1)^{n}{s \choose n}a_{n}=\sum _{n=0}^{\infty }{\frac {(-s)_{n}}{n!}}a_{n}$
where ${s \choose n}$ is the binomial coefficient and $(s)_{n}$ is the falling factorial. Newtonian series often appear in relations of the form seen in umbral calculus.
List
The generalized binomial theorem gives
$(1+z)^{s}=\sum _{n=0}^{\infty }{s \choose n}z^{n}=1+{s \choose 1}z+{s \choose 2}z^{2}+\cdots .$
A proof for this identity can be obtained by showing that it satisfies the differential equation
$(1+z){\frac {d(1+z)^{s}}{dz}}=s(1+z)^{s}.$
The digamma function:
$\psi (s+1)=-\gamma -\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n}}{s \choose n}.$
The Stirling numbers of the second kind are given by the finite sum
$\left\{{\begin{matrix}n\\k\end{matrix}}\right\}={\frac {1}{k!}}\sum _{j=0}^{k}(-1)^{k-j}{k \choose j}j^{n}.$
This formula is a special case of the kth forward difference of the monomial $x^{n}$ evaluated at x = 0:
$\Delta ^{k}x^{n}=\sum _{j=0}^{k}(-1)^{k-j}{k \choose j}(x+j)^{n}.$
A related identity forms the basis of the Nörlund–Rice integral:
$\sum _{k=0}^{n}{n \choose k}{\frac {(-1)^{n-k}}{s-k}}={\frac {n!}{s(s-1)(s-2)\cdots (s-n)}}={\frac {\Gamma (n+1)\Gamma (s-n)}{\Gamma (s+1)}}=B(n+1,s-n),\quad s\notin \{0,\ldots ,n\}$
where $\Gamma (x)$ is the Gamma function and $B(x,y)$ is the Beta function.
The trigonometric functions have umbral identities:
$\sum _{n=0}^{\infty }(-1)^{n}{s \choose 2n}=2^{s/2}\cos {\frac {\pi s}{4}}$
and
$\sum _{n=0}^{\infty }(-1)^{n}{s \choose 2n+1}=2^{s/2}\sin {\frac {\pi s}{4}}$
The umbral nature of these identities is a bit more clear by writing them in terms of the falling factorial $(s)_{n}$. 
The first few terms of the sin series are
$s-{\frac {(s)_{3}}{3!}}+{\frac {(s)_{5}}{5!}}-{\frac {(s)_{7}}{7!}}+\cdots $
which can be recognized as resembling the Taylor series for sin x, with $(s)_{n}$ standing in the place of $x^{n}$.
In analytic number theory it is of interest to sum
$\!\sum _{k=0}B_{k}z^{k},$
where B are the Bernoulli numbers. Employing the generating function, its Borel sum can be evaluated as
$\sum _{k=0}B_{k}z^{k}=\int _{0}^{\infty }e^{-t}{\frac {tz}{e^{tz}-1}}\,dt=\sum _{k=1}{\frac {z}{(kz+1)^{2}}}.$
The general relation gives the Newton series
$\sum _{k=0}{\frac {B_{k}(x)}{z^{k}}}{\frac {1-s \choose k}{s-1}}=z^{s-1}\zeta (s,x+z),$
where $\zeta $ is the Hurwitz zeta function and $B_{k}(x)$ the Bernoulli polynomial. The series does not converge; the identity holds formally.
Another identity is
${\frac {1}{\Gamma (x)}}=\sum _{k=0}^{\infty }{x-a \choose k}\sum _{j=0}^{k}{\frac {(-1)^{k-j}}{\Gamma (a+j)}}{k \choose j},$
which converges for $x>a$. This follows from the general form of a Newton series for equidistant nodes (when it exists, i.e. is convergent)
$f(x)=\sum _{k=0}{{\frac {x-a}{h}} \choose k}\sum _{j=0}^{k}(-1)^{k-j}{k \choose j}f(a+jh).$
See also
• Binomial transform
• List of factorial and binomial topics
• Nörlund–Rice integral
• Carlson's theorem
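The finite sum quoted earlier for the Stirling numbers of the second kind is easy to check numerically. The sketch below (plain Python; the function names and the recurrence used as an independent check are my own, not from the article) compares the forward-difference formula against the standard recurrence S(n,k) = k·S(n−1,k) + S(n−1,k−1):

```python
from math import comb, factorial

def stirling2_sum(n, k):
    # Finite-sum formula: (1/k!) * sum_j (-1)^(k-j) * C(k,j) * j^n
    total = sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1))
    return total // factorial(k)

def stirling2_rec(n, k):
    # Standard recurrence, used here only as an independent check
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2_rec(n - 1, k) + stirling2_rec(n - 1, k - 1)

for n in range(8):
    for k in range(n + 1):
        assert stirling2_sum(n, k) == stirling2_rec(n, k)
print(stirling2_sum(5, 2))  # 15: the number of ways to split a 5-set into 2 blocks
```

The inner sum is exactly the kth forward difference of the monomial x^n evaluated at x = 0, as stated in the article.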
Table of prime factors
The tables contain the prime factorization of the natural numbers from 1 to 1000. When n is a prime number, the prime factorization is just n itself, written in bold below. The number 1 is called a unit. It has no prime factors and is neither prime nor composite.
Properties
Many properties of a natural number n can be seen or directly computed from the prime factorization of n.
• The multiplicity of a prime factor p of n is the largest exponent m for which p^m divides n. The tables show the multiplicity for each prime factor. If no exponent is written then the multiplicity is 1 (since p = p^1). The multiplicity of a prime which does not divide n may be called 0 or may be considered undefined.
• Ω(n), the big Omega function, is the number of prime factors of n counted with multiplicity (so it is the sum of all prime factor multiplicities).
• A prime number has Ω(n) = 1. The first: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37 (sequence A000040 in the OEIS). There are many special types of prime numbers.
• A composite number has Ω(n) > 1. The first: 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21 (sequence A002808 in the OEIS). All numbers above 1 are either prime or composite. 1 is neither.
• A semiprime has Ω(n) = 2 (so it is composite). The first: 4, 6, 9, 10, 14, 15, 21, 22, 25, 26, 33, 34 (sequence A001358 in the OEIS).
• A k-almost prime (for a natural number k) has Ω(n) = k (so it is composite if k > 1).
• An even number has the prime factor 2. The first: 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24 (sequence A005843 in the OEIS).
• An odd number does not have the prime factor 2. The first: 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23 (sequence A005408 in the OEIS). All integers are either even or odd.
• A square has even multiplicity for all prime factors (it is of the form a^2 for some a). The first: 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144 (sequence A000290 in the OEIS).
• A cube has all multiplicities divisible by 3 (it is of the form a^3 for some a). The first: 1, 8, 27, 64, 125, 216, 343, 512, 729, 1000, 1331, 1728 (sequence A000578 in the OEIS).
• A perfect power has a common divisor m > 1 for all multiplicities (it is of the form a^m for some a > 1 and m > 1). The first: 4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 81, 100 (sequence A001597 in the OEIS). 1 is sometimes included.
• A powerful number (also called squareful) has multiplicity above 1 for all prime factors. The first: 1, 4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 72 (sequence A001694 in the OEIS).
• A prime power has only one prime factor. The first: 2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19 (sequence A000961 in the OEIS). 1 is sometimes included.
• An Achilles number is powerful but not a perfect power. The first: 72, 108, 200, 288, 392, 432, 500, 648, 675, 800, 864, 968 (sequence A052486 in the OEIS).
• A square-free integer has no prime factor with multiplicity above 1. The first: 1, 2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17 (sequence A005117 in the OEIS). A number where some but not all prime factors have multiplicity above 1 is neither square-free nor squareful.
• The Liouville function λ(n) is 1 if Ω(n) is even, and is −1 if Ω(n) is odd.
• The Möbius function μ(n) is 0 if n is not square-free. Otherwise μ(n) is 1 if Ω(n) is even, and is −1 if Ω(n) is odd.
• A sphenic number has Ω(n) = 3 and is square-free (so it is the product of 3 distinct primes). The first: 30, 42, 66, 70, 78, 102, 105, 110, 114, 130, 138, 154 (sequence A007304 in the OEIS).
• a0(n) is the sum of primes dividing n, counted with multiplicity. It is an additive function.
• A Ruth-Aaron pair is two consecutive numbers (x, x+1) with a0(x) = a0(x+1). The first (by x value): 5, 8, 15, 77, 125, 714, 948, 1330, 1520, 1862, 2491, 3248 (sequence A039752 in the OEIS). Under another definition, each shared prime factor is counted only once; in that case the first (by x value): 5, 24, 49, 77, 104, 153, 369, 492, 714, 1682, 2107, 2299 (sequence A006145 in the OEIS).
• A primorial x# is the product of all primes from 2 to x. The first: 2, 6, 30, 210, 2310, 30030, 510510, 9699690, 223092870, 6469693230, 200560490130, 7420738134810 (sequence A002110 in the OEIS). 1# = 1 is sometimes included.
• A factorial x! is the product of all numbers from 1 to x. The first: 1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800, 39916800, 479001600 (sequence A000142 in the OEIS). 0! = 1 is sometimes included.
• A k-smooth number (for a natural number k) has largest prime factor ≤ k (so it is also j-smooth for any j > k).
• m is smoother than n if the largest prime factor of m is below that of n.
• A regular number has no prime factor above 5 (so it is 5-smooth). The first: 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16 (sequence A051037 in the OEIS).
• A k-powersmooth number has all p^m ≤ k, where p is a prime factor with multiplicity m.
• A frugal number has more digits than the number of digits in its prime factorization (when written as in the tables below, with multiplicities above 1 as exponents). The first in decimal: 125, 128, 243, 256, 343, 512, 625, 729, 1024, 1029, 1215, 1250 (sequence A046759 in the OEIS).
• An equidigital number has the same number of digits as its prime factorization. The first in decimal: 1, 2, 3, 5, 7, 10, 11, 13, 14, 15, 16, 17 (sequence A046758 in the OEIS).
• An extravagant number has fewer digits than its prime factorization. The first in decimal: 4, 6, 8, 9, 12, 18, 20, 22, 24, 26, 28, 30 (sequence A046760 in the OEIS).
• An economical number has been defined as a frugal number, but also as a number that is either frugal or equidigital.
• gcd(m, n) (greatest common divisor of m and n) is the product of all prime factors which are in both m and n (taking the smaller multiplicity of m and n for each).
• m and n are coprime (also called relatively prime) if gcd(m, n) = 1 (meaning they have no common prime factor).
• lcm(m, n) (least common multiple of m and n) is the product of all prime factors of m or n (taking the larger multiplicity of m and n for each).
• gcd(m, n) × lcm(m, n) = m × n. Finding the prime factors is often harder than computing gcd and lcm with other algorithms, which do not require known prime factorization.
• m is a divisor of n (also called: m divides n, or n is divisible by m) if all prime factors of m have at least the same multiplicity in n. The divisors of n are all products of some or all prime factors of n (including the empty product 1 of no prime factors). The number of divisors can be computed by increasing all multiplicities by 1 and then multiplying them. Divisors and properties related to divisors are shown in the table of divisors. 
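Many of the properties listed above can be computed directly from a factorization. The sketch below (plain Python; the function names are my own, not from the article) derives Ω(n), the Möbius and Liouville functions, and the gcd · lcm identity from trial-division factorizations:

```python
from math import gcd

def factorize(n):
    """Prime factorization by trial division, as a {prime: multiplicity} dict."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def big_omega(n):
    # Ω(n): prime factors counted with multiplicity
    return sum(factorize(n).values())

def liouville(n):
    # λ(n) = (-1)^Ω(n)
    return -1 if big_omega(n) % 2 else 1

def mobius(n):
    # μ(n): 0 unless n is square-free, then ±1 by parity of Ω(n)
    f = factorize(n)
    if any(m > 1 for m in f.values()):
        return 0
    return -1 if len(f) % 2 else 1

# 60 = 2^2 · 3 · 5, so Ω(60) = 4 and μ(60) = 0:
print(factorize(60), big_omega(60), mobius(60))

# gcd(m, n) × lcm(m, n) = m × n, checked for a couple of pairs:
for m, n in [(12, 18), (35, 64)]:
    lcm = m * n // gcd(m, n)
    assert gcd(m, n) * lcm == m * n
```

As the text notes, computing gcd and lcm this way is only illustrative; the Euclidean algorithm (used by `math.gcd`) needs no factorization at all.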
1 to 100 1 − 20 1 22 33 422 55 62·3 77 823 932 102·5 1111 1222·3 1313 142·7 153·5 1624 1717 182·32 1919 2022·5 21 − 40 213·7 222·11 2323 2423·3 2552 262·13 2733 2822·7 2929 302·3·5 3131 3225 333·11 342·17 355·7 3622·32 3737 382·19 393·13 4023·5 41 − 60 4141 422·3·7 4343 4422·11 4532·5 462·23 4747 4824·3 4972 502·52 513·17 5222·13 5353 542·33 555·11 5623·7 573·19 582·29 5959 6022·3·5 61 − 80 6161 622·31 6332·7 6426 655·13 662·3·11 6767 6822·17 693·23 702·5·7 7171 7223·32 7373 742·37 753·52 7622·19 777·11 782·3·13 7979 8024·5 81 − 100 8134 822·41 8383 8422·3·7 855·17 862·43 873·29 8823·11 8989 902·32·5 917·13 9222·23 933·31 942·47 955·19 9625·3 9797 982·72 9932·11 10022·52 101 to 200 101 − 120 101101 1022·3·17 103103 10423·13 1053·5·7 1062·53 107107 10822·33 109109 1102·5·11 1113·37 11224·7 113113 1142·3·19 1155·23 11622·29 11732·13 1182·59 1197·17 12023·3·5 121 − 140 121112 1222·61 1233·41 12422·31 12553 1262·32·7 127127 12827 1293·43 1302·5·13 131131 13222·3·11 1337·19 1342·67 13533·5 13623·17 137137 1382·3·23 139139 14022·5·7 141 − 160 1413·47 1422·71 14311·13 14424·32 1455·29 1462·73 1473·72 14822·37 149149 1502·3·52 151151 15223·19 15332·17 1542·7·11 1555·31 15622·3·13 157157 1582·79 1593·53 16025·5 161 − 180 1617·23 1622·34 163163 16422·41 1653·5·11 1662·83 167167 16823·3·7 169132 1702·5·17 17132·19 17222·43 173173 1742·3·29 17552·7 17624·11 1773·59 1782·89 179179 18022·32·5 181 − 200 181181 1822·7·13 1833·61 18423·23 1855·37 1862·3·31 18711·17 18822·47 18933·7 1902·5·19 191191 19226·3 193193 1942·97 1953·5·13 19622·72 197197 1982·32·11 199199 20023·52 201 to 300 201 − 220 2013·67 2022·101 2037·29 20422·3·17 2055·41 2062·103 20732·23 20824·13 20911·19 2102·3·5·7 211211 21222·53 2133·71 2142·107 2155·43 21623·33 2177·31 2182·109 2193·73 22022·5·11 221 − 240 22113·17 2222·3·37 223223 22425·7 22532·52 2262·113 227227 22822·3·19 229229 2302·5·23 2313·7·11 23223·29 233233 2342·32·13 2355·47 23622·59 2373·79 2382·7·17 239239 24024·3·5 241 − 260 241241 2422·112 24335 
24422·61 2455·72 2462·3·41 24713·19 24823·31 2493·83 2502·53 251251 25222·32·7 25311·23 2542·127 2553·5·17 25628 257257 2582·3·43 2597·37 26022·5·13 261 − 280 26132·29 2622·131 263263 26423·3·11 2655·53 2662·7·19 2673·89 26822·67 269269 2702·33·5 271271 27224·17 2733·7·13 2742·137 27552·11 27622·3·23 277277 2782·139 27932·31 28023·5·7 281 − 300 281281 2822·3·47 283283 28422·71 2853·5·19 2862·11·13 2877·41 28825·32 289172 2902·5·29 2913·97 29222·73 293293 2942·3·72 2955·59 29623·37 29733·11 2982·149 29913·23 30022·3·52 301 to 400 301 − 320 3017·43 3022·151 3033·101 30424·19 3055·61 3062·32·17 307307 30822·7·11 3093·103 3102·5·31 311311 31223·3·13 313313 3142·157 31532·5·7 31622·79 317317 3182·3·53 31911·29 32026·5 321 − 340 3213·107 3222·7·23 32317·19 32422·34 32552·13 3262·163 3273·109 32823·41 3297·47 3302·3·5·11 331331 33222·83 33332·37 3342·167 3355·67 33624·3·7 337337 3382·132 3393·113 34022·5·17 341 − 360 34111·31 3422·32·19 34373 34423·43 3453·5·23 3462·173 347347 34822·3·29 349349 3502·52·7 35133·13 35225·11 353353 3542·3·59 3555·71 35622·89 3573·7·17 3582·179 359359 36023·32·5 361 − 380 361192 3622·181 3633·112 36422·7·13 3655·73 3662·3·61 367367 36824·23 36932·41 3702·5·37 3717·53 37222·3·31 373373 3742·11·17 3753·53 37623·47 37713·29 3782·33·7 379379 38022·5·19 381 − 400 3813·127 3822·191 383383 38427·3 3855·7·11 3862·193 38732·43 38822·97 389389 3902·3·5·13 39117·23 39223·72 3933·131 3942·197 3955·79 39622·32·11 397397 3982·199 3993·7·19 40024·52 401 to 500 401 − 420 401401 4022·3·67 40313·31 40422·101 40534·5 4062·7·29 40711·37 40823·3·17 409409 4102·5·41 4113·137 41222·103 4137·59 4142·32·23 4155·83 41625·13 4173·139 4182·11·19 419419 42022·3·5·7 421 − 440 421421 4222·211 42332·47 42423·53 42552·17 4262·3·71 4277·61 42822·107 4293·11·13 4302·5·43 431431 43224·33 433433 4342·7·31 4353·5·29 43622·109 43719·23 4382·3·73 439439 44023·5·11 441 − 460 44132·72 4422·13·17 443443 44422·3·37 4455·89 4462·223 4473·149 44826·7 449449 4502·32·52 45111·41 45222·113 
4533·151 4542·227 4555·7·13 45623·3·19 457457 4582·229 45933·17 46022·5·23 461 − 480 461461 4622·3·7·11 463463 46424·29 4653·5·31 4662·233 467467 46822·32·13 4697·67 4702·5·47 4713·157 47223·59 47311·43 4742·3·79 47552·19 47622·7·17 47732·53 4782·239 479479 48025·3·5 481 − 500 48113·37 4822·241 4833·7·23 48422·112 4855·97 4862·35 487487 48823·61 4893·163 4902·5·72 491491 49222·3·41 49317·29 4942·13·19 49532·5·11 49624·31 4977·71 4982·3·83 499499 50022·53 501 to 600 501 − 520 5013·167 5022·251 503503 50423·32·7 5055·101 5062·11·23 5073·132 50822·127 509509 5102·3·5·17 5117·73 51229 51333·19 5142·257 5155·103 51622·3·43 51711·47 5182·7·37 5193·173 52023·5·13 521 − 540 521521 5222·32·29 523523 52422·131 5253·52·7 5262·263 52717·31 52824·3·11 529232 5302·5·53 53132·59 53222·7·19 53313·41 5342·3·89 5355·107 53623·67 5373·179 5382·269 53972·11 54022·33·5 541 − 560 541541 5422·271 5433·181 54425·17 5455·109 5462·3·7·13 547547 54822·137 54932·61 5502·52·11 55119·29 55223·3·23 5537·79 5542·277 5553·5·37 55622·139 557557 5582·32·31 55913·43 56024·5·7 561 − 580 5613·11·17 5622·281 563563 56422·3·47 5655·113 5662·283 56734·7 56823·71 569569 5702·3·5·19 571571 57222·11·13 5733·191 5742·7·41 57552·23 57626·32 577577 5782·172 5793·193 58022·5·29 581 − 600 5817·83 5822·3·97 58311·53 58423·73 58532·5·13 5862·293 587587 58822·3·72 58919·31 5902·5·59 5913·197 59224·37 593593 5942·33·11 5955·7·17 59622·149 5973·199 5982·13·23 599599 60023·3·52 601 to 700 601 − 620 601601 6022·7·43 60332·67 60422·151 6055·112 6062·3·101 607607 60825·19 6093·7·29 6102·5·61 61113·47 61222·32·17 613613 6142·307 6153·5·41 61623·7·11 617617 6182·3·103 619619 62022·5·31 621 − 640 62133·23 6222·311 6237·89 62424·3·13 62554 6262·313 6273·11·19 62822·157 62917·37 6302·32·5·7 631631 63223·79 6333·211 6342·317 6355·127 63622·3·53 63772·13 6382·11·29 63932·71 64027·5 641 − 660 641641 6422·3·107 643643 64422·7·23 6453·5·43 6462·17·19 647647 64823·34 64911·59 6502·52·13 6513·7·31 65222·163 653653 6542·3·109 6555·131 
656 = 2⁴·41, 657 = 3²·73, 658 = 2·7·47, 659 = 659, 660 = 2²·3·5·11

661 − 680
661 = 661, 662 = 2·331, 663 = 3·13·17, 664 = 2³·83, 665 = 5·7·19, 666 = 2·3²·37, 667 = 23·29, 668 = 2²·167, 669 = 3·223, 670 = 2·5·67, 671 = 11·61, 672 = 2⁵·3·7, 673 = 673, 674 = 2·337, 675 = 3³·5², 676 = 2²·13², 677 = 677, 678 = 2·3·113, 679 = 7·97, 680 = 2³·5·17

681 − 700
681 = 3·227, 682 = 2·11·31, 683 = 683, 684 = 2²·3²·19, 685 = 5·137, 686 = 2·7³, 687 = 3·229, 688 = 2⁴·43, 689 = 13·53, 690 = 2·3·5·23, 691 = 691, 692 = 2²·173, 693 = 3²·7·11, 694 = 2·347, 695 = 5·139, 696 = 2³·3·29, 697 = 17·41, 698 = 2·349, 699 = 3·233, 700 = 2²·5²·7

701 to 800

701 − 720
701 = 701, 702 = 2·3³·13, 703 = 19·37, 704 = 2⁶·11, 705 = 3·5·47, 706 = 2·353, 707 = 7·101, 708 = 2²·3·59, 709 = 709, 710 = 2·5·71, 711 = 3²·79, 712 = 2³·89, 713 = 23·31, 714 = 2·3·7·17, 715 = 5·11·13, 716 = 2²·179, 717 = 3·239, 718 = 2·359, 719 = 719, 720 = 2⁴·3²·5

721 − 740
721 = 7·103, 722 = 2·19², 723 = 3·241, 724 = 2²·181, 725 = 5²·29, 726 = 2·3·11², 727 = 727, 728 = 2³·7·13, 729 = 3⁶, 730 = 2·5·73, 731 = 17·43, 732 = 2²·3·61, 733 = 733, 734 = 2·367, 735 = 3·5·7², 736 = 2⁵·23, 737 = 11·67, 738 = 2·3²·41, 739 = 739, 740 = 2²·5·37

741 − 760
741 = 3·13·19, 742 = 2·7·53, 743 = 743, 744 = 2³·3·31, 745 = 5·149, 746 = 2·373, 747 = 3²·83, 748 = 2²·11·17, 749 = 7·107, 750 = 2·3·5³, 751 = 751, 752 = 2⁴·47, 753 = 3·251, 754 = 2·13·29, 755 = 5·151, 756 = 2²·3³·7, 757 = 757, 758 = 2·379, 759 = 3·11·23, 760 = 2³·5·19

761 − 780
761 = 761, 762 = 2·3·127, 763 = 7·109, 764 = 2²·191, 765 = 3²·5·17, 766 = 2·383, 767 = 13·59, 768 = 2⁸·3, 769 = 769, 770 = 2·5·7·11, 771 = 3·257, 772 = 2²·193, 773 = 773, 774 = 2·3²·43, 775 = 5²·31, 776 = 2³·97, 777 = 3·7·37, 778 = 2·389, 779 = 19·41, 780 = 2²·3·5·13

781 − 800
781 = 11·71, 782 = 2·17·23, 783 = 3³·29, 784 = 2⁴·7², 785 = 5·157, 786 = 2·3·131, 787 = 787, 788 = 2²·197, 789 = 3·263, 790 = 2·5·79, 791 = 7·113, 792 = 2³·3²·11, 793 = 13·61, 794 = 2·397, 795 = 3·5·53, 796 = 2²·199, 797 = 797, 798 = 2·3·7·19, 799 = 17·47, 800 = 2⁵·5²

801 to 900

801 − 820
801 = 3²·89, 802 = 2·401, 803 = 11·73, 804 = 2²·3·67, 805 = 5·7·23, 806 = 2·13·31, 807 = 3·269, 808 = 2³·101, 809 = 809, 810 = 2·3⁴·5, 811 = 811, 812 = 2²·7·29, 813 = 3·271, 814 = 2·11·37, 815 = 5·163, 816 = 2⁴·3·17, 817 = 19·43, 818 = 2·409, 819 = 3²·7·13, 820 = 2²·5·41

821 − 840
821 = 821, 822 = 2·3·137, 823 = 823, 824 = 2³·103, 825 = 3·5²·11, 826 = 2·7·59, 827 = 827, 828 = 2²·3²·23, 829 = 829, 830 = 2·5·83, 831 = 3·277, 832 = 2⁶·13, 833 = 7²·17, 834 = 2·3·139, 835 = 5·167, 836 = 2²·11·19, 837 = 3³·31, 838 = 2·419, 839 = 839, 840 = 2³·3·5·7

841 − 860
841 = 29², 842 = 2·421, 843 = 3·281, 844 = 2²·211, 845 = 5·13², 846 = 2·3²·47, 847 = 7·11², 848 = 2⁴·53, 849 = 3·283, 850 = 2·5²·17, 851 = 23·37, 852 = 2²·3·71, 853 = 853, 854 = 2·7·61, 855 = 3²·5·19, 856 = 2³·107, 857 = 857, 858 = 2·3·11·13, 859 = 859, 860 = 2²·5·43

861 − 880
861 = 3·7·41, 862 = 2·431, 863 = 863, 864 = 2⁵·3³, 865 = 5·173, 866 = 2·433, 867 = 3·17², 868 = 2²·7·31, 869 = 11·79, 870 = 2·3·5·29, 871 = 13·67, 872 = 2³·109, 873 = 3²·97, 874 = 2·19·23, 875 = 5³·7, 876 = 2²·3·73, 877 = 877, 878 = 2·439, 879 = 3·293, 880 = 2⁴·5·11

881 − 900
881 = 881, 882 = 2·3²·7², 883 = 883, 884 = 2²·13·17, 885 = 3·5·59, 886 = 2·443, 887 = 887, 888 = 2³·3·37, 889 = 7·127, 890 = 2·5·89, 891 = 3⁴·11, 892 = 2²·223, 893 = 19·47, 894 = 2·3·149, 895 = 5·179, 896 = 2⁷·7, 897 = 3·13·23, 898 = 2·449, 899 = 29·31, 900 = 2²·3²·5²

901 to 1000

901 − 920
901 = 17·53, 902 = 2·11·41, 903 = 3·7·43, 904 = 2³·113, 905 = 5·181, 906 = 2·3·151, 907 = 907, 908 = 2²·227, 909 = 3²·101, 910 = 2·5·7·13, 911 = 911, 912 = 2⁴·3·19, 913 = 11·83, 914 = 2·457, 915 = 3·5·61, 916 = 2²·229, 917 = 7·131, 918 = 2·3³·17, 919 = 919, 920 = 2³·5·23

921 − 940
921 = 3·307, 922 = 2·461, 923 = 13·71, 924 = 2²·3·7·11, 925 = 5²·37, 926 = 2·463, 927 = 3²·103, 928 = 2⁵·29, 929 = 929, 930 = 2·3·5·31, 931 = 7²·19, 932 = 2²·233, 933 = 3·311, 934 = 2·467, 935 = 5·11·17, 936 = 2³·3²·13, 937 = 937, 938 = 2·7·67, 939 = 3·313, 940 = 2²·5·47

941 − 960
941 = 941, 942 = 2·3·157, 943 = 23·41, 944 = 2⁴·59, 945 = 3³·5·7, 946 = 2·11·43, 947 = 947, 948 = 2²·3·79, 949 = 13·73, 950 = 2·5²·19, 951 = 3·317, 952 = 2³·7·17, 953 = 953, 954 = 2·3²·53, 955 = 5·191, 956 = 2²·239, 957 = 3·11·29, 958 = 2·479, 959 = 7·137, 960 = 2⁶·3·5

961 − 980
961 = 31², 962 = 2·13·37, 963 = 3²·107, 964 = 2²·241, 965 = 5·193, 966 = 2·3·7·23, 967 = 967, 968 = 2³·11², 969 = 3·17·19, 970 = 2·5·97, 971 = 971, 972 = 2²·3⁵, 973 = 7·139, 974 = 2·487, 975 = 3·5²·13, 976 = 2⁴·61, 977 = 977, 978 = 2·3·163, 979 = 11·89, 980 = 2²·5·7²

981 − 1000
981 = 3²·109, 982 = 2·491, 983 = 983, 984 = 2³·3·41, 985 = 5·197, 986 = 2·17·29, 987 = 3·7·47, 988 = 2²·13·19, 989 = 23·43, 990 = 2·3²·5·11, 991 = 991, 992 = 2⁵·31, 993 = 3·331, 994 = 2·7·71, 995 = 5·199, 996 = 2²·3·83, 997 = 997, 998 = 2·499, 999 = 3³·37, 1000 = 2³·5³

See also
• Fundamental theorem of arithmetic – Integers have unique prime factorizations
• List of prime numbers – List of prime numbers and notable types of prime numbers
• Table of divisors – numbers divisible by another number
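Entries like those above can be regenerated mechanically by trial division. The sketch below (function names are illustrative, not from the source) prints factorizations in the table's dot notation, using ^ for exponents:

```python
def factorize(n):
    """Return the prime factorization of n as a list of (prime, exponent) pairs."""
    factors = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))  # remaining cofactor is prime
    return factors

def pretty(n):
    """Format n in the table's notation, e.g. 900 -> '2^2·3^2·5^2'."""
    parts = ["%d^%d" % (p, e) if e > 1 else str(p) for p, e in factorize(n)]
    return "·".join(parts) if parts else str(n)

print(pretty(900))  # 2^2·3^2·5^2
print(pretty(997))  # 997
```

Trial division up to √n is entirely adequate for a table of this range; for much larger n one would switch to a sieve or a dedicated factoring algorithm.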
Random number table Random number tables have been used in statistics for tasks such as selecting random samples. This was much more efficient than selecting the samples manually (with dice, cards, etc.). Nowadays, tables of random numbers have largely been replaced by computational random number generators. If carefully prepared, the filtering and testing processes remove any noticeable bias or asymmetry from the hardware-generated original numbers, so that such tables provide the most "reliable" random numbers available to the casual user. Note that any published (or otherwise accessible) random data table is unsuitable for cryptographic purposes, since the accessibility of the numbers makes them effectively predictable, and hence their effect on a cryptosystem is also predictable. By way of contrast, genuinely random numbers that are accessible only to the intended encoder and decoder allow literally unbreakable encryption of a similar or lesser amount of meaningful data (using a simple exclusive OR operation) in a method known as the one-time pad; in practice, however, there are often insurmountable barriers to implementing this method correctly. History Tables of random numbers have the desired properties no matter how the digits are read from the table: by row, column, diagonal or irregularly. The first such table was published by L.H.C. Tippett in 1927, and since then a number of other such tables have been developed. The first tables were generated in a variety of ways: one (by L.H.C. Tippett) took its numbers "at random" from census registers, another (by R.A. Fisher and Francis Yates) used numbers taken "at random" from logarithm tables, and in 1939 a set of 100,000 digits, produced by a specialized machine in conjunction with a human operator, was published by M.G. Kendall and B. Babington Smith.
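The one-time pad mentioned above is literally a byte-wise exclusive OR against a secret pad; decryption is the same XOR applied again. A minimal sketch (the pad bytes here are hard-coded purely for illustration; a real pad must come from a true random source, stay secret, and never be reused):

```python
def xor_bytes(data: bytes, pad: bytes) -> bytes:
    # One-time pad: XOR each message byte with the corresponding pad byte.
    # Applying the same pad twice restores the original message.
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(m ^ k for m, k in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = bytes([0x5F, 0x12, 0xA9, 0x03, 0x77, 0xC4, 0x08, 0x91,
             0x3B, 0x60, 0xE2, 0x1D, 0x55, 0x8A])  # illustrative only, NOT random
ciphertext = xor_bytes(message, pad)
assert xor_bytes(ciphertext, pad) == message  # round trip recovers the plaintext
```

The security argument rests entirely on the pad's properties, not on the XOR itself, which is why a published random number table cannot serve as a pad.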
In the mid-1940s, the RAND Corporation set about developing a large table of random numbers for use with the Monte Carlo method; using a hardware random number generator it produced A Million Random Digits with 100,000 Normal Deviates. The RAND table was generated by an electronic simulation of a roulette wheel attached to a computer, the results of which were carefully filtered and tested before being used to build the table. The RAND table was an important breakthrough in delivering random numbers, both because such a large and carefully prepared table had never before been available (the largest previously published table was a tenth its size) and because it was also available on IBM punched cards, which allowed its use in computers. In the 1950s, a hardware random number generator named ERNIE was used to draw British premium bond numbers. The first "testing" of random numbers for statistical randomness was developed by M.G. Kendall and B. Babington Smith in the late 1930s, and was based upon looking for certain types of probabilistic expectations in a given sequence. The simplest test checked that roughly equal numbers of 1s, 2s, 3s, etc. were present; more complicated tests counted the number of digits between successive 0s and compared the total counts with their expected probabilities. Over the years more complicated tests were developed. Kendall and Smith also created the notion of "local randomness", whereby a given set of random numbers would be broken down and tested in segments. In their set of 100,000 numbers, for example, two of the thousands were somewhat less "locally random" than the rest, but the set as a whole would pass its tests. As a consequence, Kendall and Smith advised their readers not to use those particular thousands by themselves.
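The simplest Kendall–Smith check described above, that each digit occurs about equally often, amounts to a Pearson chi-square frequency test. A minimal sketch for decimal digits (the function name is illustrative):

```python
from collections import Counter

def frequency_test_statistic(digits):
    """Pearson chi-square statistic for the simplest Kendall-Smith check:
    are the ten digits 0-9 roughly equally frequent in the sequence?"""
    counts = Counter(digits)
    expected = len(digits) / 10  # uniform expectation for each digit
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# A perfectly uniform sequence gives statistic 0; in practice the statistic
# is compared against the chi-square distribution with 9 degrees of freedom.
uniform = list(range(10)) * 100
skewed = [7] * 1000
print(frequency_test_statistic(uniform))  # 0.0
print(frequency_test_statistic(skewed) > frequency_test_statistic(uniform))  # True
```

The more elaborate tests mentioned above (gap tests between successive 0s, and so on) follow the same pattern: tabulate observed counts, compare against expected probabilities.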
Published tables still have niche uses, particularly in the performance of experimental music pieces that call for them, such as Vision (1959) and Poem (1960) by La Monte Young.[1] See also • A Million Random Digits with 100,000 Normal Deviates • Kish grid References 1. "Following a Straight Line". Retrieved 29 August 2012. External links • Data from A Million Random Digits With 100,000 Normal Deviates by the RAND Corporation
Table of spherical harmonics This is a table of orthonormalized spherical harmonics that employ the Condon-Shortley phase up to degree $\ell =10$. Some of these formulas are expressed in terms of the Cartesian expansion of the spherical harmonics into polynomials in x, y, z, and r. For purposes of this table, it is useful to express the usual spherical to Cartesian transformations that relate these Cartesian components to $\theta $ and $\varphi $ as ${\begin{cases}\cos(\theta )&=z/r\\e^{\pm i\varphi }\cdot \sin(\theta )&=(x\pm iy)/r\end{cases}}$ Complex spherical harmonics For ℓ = 0, …, 5, see.[1] ℓ = 0 $Y_{0}^{0}(\theta ,\varphi )={1 \over 2}{\sqrt {1 \over \pi }}$ ℓ = 1 ${\begin{aligned}Y_{1}^{-1}(\theta ,\varphi )&=&&{1 \over 2}{\sqrt {3 \over 2\pi }}\cdot e^{-i\varphi }\cdot \sin \theta &&=&&{1 \over 2}{\sqrt {3 \over 2\pi }}\cdot {(x-iy) \over r}\\Y_{1}^{0}(\theta ,\varphi )&=&&{1 \over 2}{\sqrt {3 \over \pi }}\cdot \cos \theta &&=&&{1 \over 2}{\sqrt {3 \over \pi }}\cdot {z \over r}\\Y_{1}^{1}(\theta ,\varphi )&=&-&{1 \over 2}{\sqrt {3 \over 2\pi }}\cdot e^{i\varphi }\cdot \sin \theta &&=&-&{1 \over 2}{\sqrt {3 \over 2\pi }}\cdot {(x+iy) \over r}\end{aligned}}$ ℓ = 2 ${\begin{aligned}Y_{2}^{-2}(\theta ,\varphi )&=&&{1 \over 4}{\sqrt {15 \over 2\pi }}\cdot e^{-2i\varphi }\cdot \sin ^{2}\theta \quad &&=&&{1 \over 4}{\sqrt {15 \over 2\pi }}\cdot {(x-iy)^{2} \over r^{2}}&\\Y_{2}^{-1}(\theta ,\varphi )&=&&{1 \over 2}{\sqrt {15 \over 2\pi }}\cdot e^{-i\varphi }\cdot \sin \theta \cdot \cos \theta \quad &&=&&{1 \over 2}{\sqrt {15 \over 2\pi }}\cdot {(x-iy)\cdot z \over r^{2}}&\\Y_{2}^{0}(\theta ,\varphi )&=&&{1 \over 4}{\sqrt {5 \over \pi }}\cdot (3\cos ^{2}\theta -1)\quad &&=&&{1 \over 4}{\sqrt {5 \over \pi }}\cdot {(3z^{2}-r^{2}) \over r^{2}}&\\Y_{2}^{1}(\theta ,\varphi )&=&-&{1 \over 2}{\sqrt {15 \over 2\pi }}\cdot e^{i\varphi }\cdot \sin \theta \cdot \cos \theta \quad &&=&-&{1 \over 2}{\sqrt {15 \over 2\pi }}\cdot {(x+iy)\cdot z \over r^{2}}&\\Y_{2}^{2}(\theta 
,\varphi )&=&&{1 \over 4}{\sqrt {15 \over 2\pi }}\cdot e^{2i\varphi }\cdot \sin ^{2}\theta \quad &&=&&{1 \over 4}{\sqrt {15 \over 2\pi }}\cdot {(x+iy)^{2} \over r^{2}}&\end{aligned}}$ ℓ = 3 ${\begin{aligned}Y_{3}^{-3}(\theta ,\varphi )&=&&{1 \over 8}{\sqrt {35 \over \pi }}\cdot e^{-3i\varphi }\cdot \sin ^{3}\theta \quad &&=&&{1 \over 8}{\sqrt {35 \over \pi }}\cdot {(x-iy)^{3} \over r^{3}}&\\Y_{3}^{-2}(\theta ,\varphi )&=&&{1 \over 4}{\sqrt {105 \over 2\pi }}\cdot e^{-2i\varphi }\cdot \sin ^{2}\theta \cdot \cos \theta \quad &&=&&{1 \over 4}{\sqrt {105 \over 2\pi }}\cdot {(x-iy)^{2}\cdot z \over r^{3}}&\\Y_{3}^{-1}(\theta ,\varphi )&=&&{1 \over 8}{\sqrt {21 \over \pi }}\cdot e^{-i\varphi }\cdot \sin \theta \cdot (5\cos ^{2}\theta -1)\quad &&=&&{1 \over 8}{\sqrt {21 \over \pi }}\cdot {(x-iy)\cdot (5z^{2}-r^{2}) \over r^{3}}&\\Y_{3}^{0}(\theta ,\varphi )&=&&{1 \over 4}{\sqrt {7 \over \pi }}\cdot (5\cos ^{3}\theta -3\cos \theta )\quad &&=&&{1 \over 4}{\sqrt {7 \over \pi }}\cdot {(5z^{3}-3zr^{2}) \over r^{3}}&\\Y_{3}^{1}(\theta ,\varphi )&=&-&{1 \over 8}{\sqrt {21 \over \pi }}\cdot e^{i\varphi }\cdot \sin \theta \cdot (5\cos ^{2}\theta -1)\quad &&=&&{-1 \over 8}{\sqrt {21 \over \pi }}\cdot {(x+iy)\cdot (5z^{2}-r^{2}) \over r^{3}}&\\Y_{3}^{2}(\theta ,\varphi )&=&&{1 \over 4}{\sqrt {105 \over 2\pi }}\cdot e^{2i\varphi }\cdot \sin ^{2}\theta \cdot \cos \theta \quad &&=&&{1 \over 4}{\sqrt {105 \over 2\pi }}\cdot {(x+iy)^{2}\cdot z \over r^{3}}&\\Y_{3}^{3}(\theta ,\varphi )&=&-&{1 \over 8}{\sqrt {35 \over \pi }}\cdot e^{3i\varphi }\cdot \sin ^{3}\theta \quad &&=&&{-1 \over 8}{\sqrt {35 \over \pi }}\cdot {(x+iy)^{3} \over r^{3}}&\end{aligned}}$ ℓ = 4 ${\begin{aligned}Y_{4}^{-4}(\theta ,\varphi )&={3 \over 16}{\sqrt {35 \over 2\pi }}\cdot e^{-4i\varphi }\cdot \sin ^{4}\theta ={\frac {3}{16}}{\sqrt {\frac {35}{2\pi }}}\cdot {\frac {(x-iy)^{4}}{r^{4}}}\\Y_{4}^{-3}(\theta ,\varphi )&={3 \over 8}{\sqrt {35 \over \pi }}\cdot e^{-3i\varphi }\cdot \sin ^{3}\theta \cdot \cos \theta 
={\frac {3}{8}}{\sqrt {\frac {35}{\pi }}}\cdot {\frac {(x-iy)^{3}z}{r^{4}}}\\Y_{4}^{-2}(\theta ,\varphi )&={3 \over 8}{\sqrt {5 \over 2\pi }}\cdot e^{-2i\varphi }\cdot \sin ^{2}\theta \cdot (7\cos ^{2}\theta -1)={\frac {3}{8}}{\sqrt {\frac {5}{2\pi }}}\cdot {\frac {(x-iy)^{2}\cdot (7z^{2}-r^{2})}{r^{4}}}\\Y_{4}^{-1}(\theta ,\varphi )&={3 \over 8}{\sqrt {5 \over \pi }}\cdot e^{-i\varphi }\cdot \sin \theta \cdot (7\cos ^{3}\theta -3\cos \theta )={\frac {3}{8}}{\sqrt {\frac {5}{\pi }}}\cdot {\frac {(x-iy)\cdot (7z^{3}-3zr^{2})}{r^{4}}}\\Y_{4}^{0}(\theta ,\varphi )&={3 \over 16}{\sqrt {1 \over \pi }}\cdot (35\cos ^{4}\theta -30\cos ^{2}\theta +3)={\frac {3}{16}}{\sqrt {\frac {1}{\pi }}}\cdot {\frac {(35z^{4}-30z^{2}r^{2}+3r^{4})}{r^{4}}}\\Y_{4}^{1}(\theta ,\varphi )&={-3 \over 8}{\sqrt {5 \over \pi }}\cdot e^{i\varphi }\cdot \sin \theta \cdot (7\cos ^{3}\theta -3\cos \theta )={\frac {-3}{8}}{\sqrt {\frac {5}{\pi }}}\cdot {\frac {(x+iy)\cdot (7z^{3}-3zr^{2})}{r^{4}}}\\Y_{4}^{2}(\theta ,\varphi )&={3 \over 8}{\sqrt {5 \over 2\pi }}\cdot e^{2i\varphi }\cdot \sin ^{2}\theta \cdot (7\cos ^{2}\theta -1)={\frac {3}{8}}{\sqrt {\frac {5}{2\pi }}}\cdot {\frac {(x+iy)^{2}\cdot (7z^{2}-r^{2})}{r^{4}}}\\Y_{4}^{3}(\theta ,\varphi )&={-3 \over 8}{\sqrt {35 \over \pi }}\cdot e^{3i\varphi }\cdot \sin ^{3}\theta \cdot \cos \theta ={\frac {-3}{8}}{\sqrt {\frac {35}{\pi }}}\cdot {\frac {(x+iy)^{3}z}{r^{4}}}\\Y_{4}^{4}(\theta ,\varphi )&={3 \over 16}{\sqrt {35 \over 2\pi }}\cdot e^{4i\varphi }\cdot \sin ^{4}\theta ={\frac {3}{16}}{\sqrt {\frac {35}{2\pi }}}\cdot {\frac {(x+iy)^{4}}{r^{4}}}\end{aligned}}$ ℓ = 5 ${\begin{aligned}Y_{5}^{-5}(\theta ,\varphi )&={3 \over 32}{\sqrt {77 \over \pi }}\cdot e^{-5i\varphi }\cdot \sin ^{5}\theta \\Y_{5}^{-4}(\theta ,\varphi )&={3 \over 16}{\sqrt {385 \over 2\pi }}\cdot e^{-4i\varphi }\cdot \sin ^{4}\theta \cdot \cos \theta \\Y_{5}^{-3}(\theta ,\varphi )&={1 \over 32}{\sqrt {385 \over \pi }}\cdot e^{-3i\varphi }\cdot \sin ^{3}\theta \cdot (9\cos 
^{2}\theta -1)\\Y_{5}^{-2}(\theta ,\varphi )&={1 \over 8}{\sqrt {1155 \over 2\pi }}\cdot e^{-2i\varphi }\cdot \sin ^{2}\theta \cdot (3\cos ^{3}\theta -\cos \theta )\\Y_{5}^{-1}(\theta ,\varphi )&={1 \over 16}{\sqrt {165 \over 2\pi }}\cdot e^{-i\varphi }\cdot \sin \theta \cdot (21\cos ^{4}\theta -14\cos ^{2}\theta +1)\\Y_{5}^{0}(\theta ,\varphi )&={1 \over 16}{\sqrt {11 \over \pi }}\cdot (63\cos ^{5}\theta -70\cos ^{3}\theta +15\cos \theta )\\Y_{5}^{1}(\theta ,\varphi )&={-1 \over 16}{\sqrt {165 \over 2\pi }}\cdot e^{i\varphi }\cdot \sin \theta \cdot (21\cos ^{4}\theta -14\cos ^{2}\theta +1)\\Y_{5}^{2}(\theta ,\varphi )&={1 \over 8}{\sqrt {1155 \over 2\pi }}\cdot e^{2i\varphi }\cdot \sin ^{2}\theta \cdot (3\cos ^{3}\theta -\cos \theta )\\Y_{5}^{3}(\theta ,\varphi )&={-1 \over 32}{\sqrt {385 \over \pi }}\cdot e^{3i\varphi }\cdot \sin ^{3}\theta \cdot (9\cos ^{2}\theta -1)\\Y_{5}^{4}(\theta ,\varphi )&={3 \over 16}{\sqrt {385 \over 2\pi }}\cdot e^{4i\varphi }\cdot \sin ^{4}\theta \cdot \cos \theta \\Y_{5}^{5}(\theta ,\varphi )&={-3 \over 32}{\sqrt {77 \over \pi }}\cdot e^{5i\varphi }\cdot \sin ^{5}\theta \end{aligned}}$ ℓ = 6 ${\begin{aligned}Y_{6}^{-6}(\theta ,\varphi )&={1 \over 64}{\sqrt {3003 \over \pi }}\cdot e^{-6i\varphi }\cdot \sin ^{6}\theta \\Y_{6}^{-5}(\theta ,\varphi )&={3 \over 32}{\sqrt {1001 \over \pi }}\cdot e^{-5i\varphi }\cdot \sin ^{5}\theta \cdot \cos \theta \\Y_{6}^{-4}(\theta ,\varphi )&={3 \over 32}{\sqrt {91 \over 2\pi }}\cdot e^{-4i\varphi }\cdot \sin ^{4}\theta \cdot (11\cos ^{2}\theta -1)\\Y_{6}^{-3}(\theta ,\varphi )&={1 \over 32}{\sqrt {1365 \over \pi }}\cdot e^{-3i\varphi }\cdot \sin ^{3}\theta \cdot (11\cos ^{3}\theta -3\cos \theta )\\Y_{6}^{-2}(\theta ,\varphi )&={1 \over 64}{\sqrt {1365 \over \pi }}\cdot e^{-2i\varphi }\cdot \sin ^{2}\theta \cdot (33\cos ^{4}\theta -18\cos ^{2}\theta +1)\\Y_{6}^{-1}(\theta ,\varphi )&={1 \over 16}{\sqrt {273 \over 2\pi }}\cdot e^{-i\varphi }\cdot \sin \theta \cdot (33\cos ^{5}\theta -30\cos ^{3}\theta 
+5\cos \theta )\\Y_{6}^{0}(\theta ,\varphi )&={1 \over 32}{\sqrt {13 \over \pi }}\cdot (231\cos ^{6}\theta -315\cos ^{4}\theta +105\cos ^{2}\theta -5)\\Y_{6}^{1}(\theta ,\varphi )&=-{1 \over 16}{\sqrt {273 \over 2\pi }}\cdot e^{i\varphi }\cdot \sin \theta \cdot (33\cos ^{5}\theta -30\cos ^{3}\theta +5\cos \theta )\\Y_{6}^{2}(\theta ,\varphi )&={1 \over 64}{\sqrt {1365 \over \pi }}\cdot e^{2i\varphi }\cdot \sin ^{2}\theta \cdot (33\cos ^{4}\theta -18\cos ^{2}\theta +1)\\Y_{6}^{3}(\theta ,\varphi )&=-{1 \over 32}{\sqrt {1365 \over \pi }}\cdot e^{3i\varphi }\cdot \sin ^{3}\theta \cdot (11\cos ^{3}\theta -3\cos \theta )\\Y_{6}^{4}(\theta ,\varphi )&={3 \over 32}{\sqrt {91 \over 2\pi }}\cdot e^{4i\varphi }\cdot \sin ^{4}\theta \cdot (11\cos ^{2}\theta -1)\\Y_{6}^{5}(\theta ,\varphi )&=-{3 \over 32}{\sqrt {1001 \over \pi }}\cdot e^{5i\varphi }\cdot \sin ^{5}\theta \cdot \cos \theta \\Y_{6}^{6}(\theta ,\varphi )&={1 \over 64}{\sqrt {3003 \over \pi }}\cdot e^{6i\varphi }\cdot \sin ^{6}\theta \end{aligned}}$ ℓ = 7 ${\begin{aligned}Y_{7}^{-7}(\theta ,\varphi )&={3 \over 64}{\sqrt {715 \over 2\pi }}\cdot e^{-7i\varphi }\cdot \sin ^{7}\theta \\Y_{7}^{-6}(\theta ,\varphi )&={3 \over 64}{\sqrt {5005 \over \pi }}\cdot e^{-6i\varphi }\cdot \sin ^{6}\theta \cdot \cos \theta \\Y_{7}^{-5}(\theta ,\varphi )&={3 \over 64}{\sqrt {385 \over 2\pi }}\cdot e^{-5i\varphi }\cdot \sin ^{5}\theta \cdot (13\cos ^{2}\theta -1)\\Y_{7}^{-4}(\theta ,\varphi )&={3 \over 32}{\sqrt {385 \over 2\pi }}\cdot e^{-4i\varphi }\cdot \sin ^{4}\theta \cdot (13\cos ^{3}\theta -3\cos \theta )\\Y_{7}^{-3}(\theta ,\varphi )&={3 \over 64}{\sqrt {35 \over 2\pi }}\cdot e^{-3i\varphi }\cdot \sin ^{3}\theta \cdot (143\cos ^{4}\theta -66\cos ^{2}\theta +3)\\Y_{7}^{-2}(\theta ,\varphi )&={3 \over 64}{\sqrt {35 \over \pi }}\cdot e^{-2i\varphi }\cdot \sin ^{2}\theta \cdot (143\cos ^{5}\theta -110\cos ^{3}\theta +15\cos \theta )\\Y_{7}^{-1}(\theta ,\varphi )&={1 \over 64}{\sqrt {105 \over 2\pi }}\cdot e^{-i\varphi }\cdot 
\sin \theta \cdot (429\cos ^{6}\theta -495\cos ^{4}\theta +135\cos ^{2}\theta -5)\\Y_{7}^{0}(\theta ,\varphi )&={1 \over 32}{\sqrt {15 \over \pi }}\cdot (429\cos ^{7}\theta -693\cos ^{5}\theta +315\cos ^{3}\theta -35\cos \theta )\\Y_{7}^{1}(\theta ,\varphi )&=-{1 \over 64}{\sqrt {105 \over 2\pi }}\cdot e^{i\varphi }\cdot \sin \theta \cdot (429\cos ^{6}\theta -495\cos ^{4}\theta +135\cos ^{2}\theta -5)\\Y_{7}^{2}(\theta ,\varphi )&={3 \over 64}{\sqrt {35 \over \pi }}\cdot e^{2i\varphi }\cdot \sin ^{2}\theta \cdot (143\cos ^{5}\theta -110\cos ^{3}\theta +15\cos \theta )\\Y_{7}^{3}(\theta ,\varphi )&=-{3 \over 64}{\sqrt {35 \over 2\pi }}\cdot e^{3i\varphi }\cdot \sin ^{3}\theta \cdot (143\cos ^{4}\theta -66\cos ^{2}\theta +3)\\Y_{7}^{4}(\theta ,\varphi )&={3 \over 32}{\sqrt {385 \over 2\pi }}\cdot e^{4i\varphi }\cdot \sin ^{4}\theta \cdot (13\cos ^{3}\theta -3\cos \theta )\\Y_{7}^{5}(\theta ,\varphi )&=-{3 \over 64}{\sqrt {385 \over 2\pi }}\cdot e^{5i\varphi }\cdot \sin ^{5}\theta \cdot (13\cos ^{2}\theta -1)\\Y_{7}^{6}(\theta ,\varphi )&={3 \over 64}{\sqrt {5005 \over \pi }}\cdot e^{6i\varphi }\cdot \sin ^{6}\theta \cdot \cos \theta \\Y_{7}^{7}(\theta ,\varphi )&=-{3 \over 64}{\sqrt {715 \over 2\pi }}\cdot e^{7i\varphi }\cdot \sin ^{7}\theta \end{aligned}}$ ℓ = 8 ${\begin{aligned}Y_{8}^{-8}(\theta ,\varphi )&={3 \over 256}{\sqrt {12155 \over 2\pi }}\cdot e^{-8i\varphi }\cdot \sin ^{8}\theta \\Y_{8}^{-7}(\theta ,\varphi )&={3 \over 64}{\sqrt {12155 \over 2\pi }}\cdot e^{-7i\varphi }\cdot \sin ^{7}\theta \cdot \cos \theta \\Y_{8}^{-6}(\theta ,\varphi )&={1 \over 128}{\sqrt {7293 \over \pi }}\cdot e^{-6i\varphi }\cdot \sin ^{6}\theta \cdot (15\cos ^{2}\theta -1)\\Y_{8}^{-5}(\theta ,\varphi )&={3 \over 64}{\sqrt {17017 \over 2\pi }}\cdot e^{-5i\varphi }\cdot \sin ^{5}\theta \cdot (5\cos ^{3}\theta -\cos \theta )\\Y_{8}^{-4}(\theta ,\varphi )&={3 \over 128}{\sqrt {1309 \over 2\pi }}\cdot e^{-4i\varphi }\cdot \sin ^{4}\theta \cdot (65\cos ^{4}\theta -26\cos ^{2}\theta 
+1)\\Y_{8}^{-3}(\theta ,\varphi )&={1 \over 64}{\sqrt {19635 \over 2\pi }}\cdot e^{-3i\varphi }\cdot \sin ^{3}\theta \cdot (39\cos ^{5}\theta -26\cos ^{3}\theta +3\cos \theta )\\Y_{8}^{-2}(\theta ,\varphi )&={3 \over 128}{\sqrt {595 \over \pi }}\cdot e^{-2i\varphi }\cdot \sin ^{2}\theta \cdot (143\cos ^{6}\theta -143\cos ^{4}\theta +33\cos ^{2}\theta -1)\\Y_{8}^{-1}(\theta ,\varphi )&={3 \over 64}{\sqrt {17 \over 2\pi }}\cdot e^{-i\varphi }\cdot \sin \theta \cdot (715\cos ^{7}\theta -1001\cos ^{5}\theta +385\cos ^{3}\theta -35\cos \theta )\\Y_{8}^{0}(\theta ,\varphi )&={1 \over 256}{\sqrt {17 \over \pi }}\cdot (6435\cos ^{8}\theta -12012\cos ^{6}\theta +6930\cos ^{4}\theta -1260\cos ^{2}\theta +35)\\Y_{8}^{1}(\theta ,\varphi )&={-3 \over 64}{\sqrt {17 \over 2\pi }}\cdot e^{i\varphi }\cdot \sin \theta \cdot (715\cos ^{7}\theta -1001\cos ^{5}\theta +385\cos ^{3}\theta -35\cos \theta )\\Y_{8}^{2}(\theta ,\varphi )&={3 \over 128}{\sqrt {595 \over \pi }}\cdot e^{2i\varphi }\cdot \sin ^{2}\theta \cdot (143\cos ^{6}\theta -143\cos ^{4}\theta +33\cos ^{2}\theta -1)\\Y_{8}^{3}(\theta ,\varphi )&={-1 \over 64}{\sqrt {19635 \over 2\pi }}\cdot e^{3i\varphi }\cdot \sin ^{3}\theta \cdot (39\cos ^{5}\theta -26\cos ^{3}\theta +3\cos \theta )\\Y_{8}^{4}(\theta ,\varphi )&={3 \over 128}{\sqrt {1309 \over 2\pi }}\cdot e^{4i\varphi }\cdot \sin ^{4}\theta \cdot (65\cos ^{4}\theta -26\cos ^{2}\theta +1)\\Y_{8}^{5}(\theta ,\varphi )&={-3 \over 64}{\sqrt {17017 \over 2\pi }}\cdot e^{5i\varphi }\cdot \sin ^{5}\theta \cdot (5\cos ^{3}\theta -\cos \theta )\\Y_{8}^{6}(\theta ,\varphi )&={1 \over 128}{\sqrt {7293 \over \pi }}\cdot e^{6i\varphi }\cdot \sin ^{6}\theta \cdot (15\cos ^{2}\theta -1)\\Y_{8}^{7}(\theta ,\varphi )&={-3 \over 64}{\sqrt {12155 \over 2\pi }}\cdot e^{7i\varphi }\cdot \sin ^{7}\theta \cdot \cos \theta \\Y_{8}^{8}(\theta ,\varphi )&={3 \over 256}{\sqrt {12155 \over 2\pi }}\cdot e^{8i\varphi }\cdot \sin ^{8}\theta \end{aligned}}$ ℓ = 9 ${\begin{aligned}Y_{9}^{-9}(\theta 
,\varphi )&={1 \over 512}{\sqrt {230945 \over \pi }}\cdot e^{-9i\varphi }\cdot \sin ^{9}\theta \\Y_{9}^{-8}(\theta ,\varphi )&={3 \over 256}{\sqrt {230945 \over 2\pi }}\cdot e^{-8i\varphi }\cdot \sin ^{8}\theta \cdot \cos \theta \\Y_{9}^{-7}(\theta ,\varphi )&={3 \over 512}{\sqrt {13585 \over \pi }}\cdot e^{-7i\varphi }\cdot \sin ^{7}\theta \cdot (17\cos ^{2}\theta -1)\\Y_{9}^{-6}(\theta ,\varphi )&={1 \over 128}{\sqrt {40755 \over \pi }}\cdot e^{-6i\varphi }\cdot \sin ^{6}\theta \cdot (17\cos ^{3}\theta -3\cos \theta )\\Y_{9}^{-5}(\theta ,\varphi )&={3 \over 256}{\sqrt {2717 \over \pi }}\cdot e^{-5i\varphi }\cdot \sin ^{5}\theta \cdot (85\cos ^{4}\theta -30\cos ^{2}\theta +1)\\Y_{9}^{-4}(\theta ,\varphi )&={3 \over 128}{\sqrt {95095 \over 2\pi }}\cdot e^{-4i\varphi }\cdot \sin ^{4}\theta \cdot (17\cos ^{5}\theta -10\cos ^{3}\theta +\cos \theta )\\Y_{9}^{-3}(\theta ,\varphi )&={1 \over 256}{\sqrt {21945 \over \pi }}\cdot e^{-3i\varphi }\cdot \sin ^{3}\theta \cdot (221\cos ^{6}\theta -195\cos ^{4}\theta +39\cos ^{2}\theta -1)\\Y_{9}^{-2}(\theta ,\varphi )&={3 \over 128}{\sqrt {1045 \over \pi }}\cdot e^{-2i\varphi }\cdot \sin ^{2}\theta \cdot (221\cos ^{7}\theta -273\cos ^{5}\theta +91\cos ^{3}\theta -7\cos \theta )\\Y_{9}^{-1}(\theta ,\varphi )&={3 \over 256}{\sqrt {95 \over 2\pi }}\cdot e^{-i\varphi }\cdot \sin \theta \cdot (2431\cos ^{8}\theta -4004\cos ^{6}\theta +2002\cos ^{4}\theta -308\cos ^{2}\theta +7)\\Y_{9}^{0}(\theta ,\varphi )&={1 \over 256}{\sqrt {19 \over \pi }}\cdot (12155\cos ^{9}\theta -25740\cos ^{7}\theta +18018\cos ^{5}\theta -4620\cos ^{3}\theta +315\cos \theta )\\Y_{9}^{1}(\theta ,\varphi )&={-3 \over 256}{\sqrt {95 \over 2\pi }}\cdot e^{i\varphi }\cdot \sin \theta \cdot (2431\cos ^{8}\theta -4004\cos ^{6}\theta +2002\cos ^{4}\theta -308\cos ^{2}\theta +7)\\Y_{9}^{2}(\theta ,\varphi )&={3 \over 128}{\sqrt {1045 \over \pi }}\cdot e^{2i\varphi }\cdot \sin ^{2}\theta \cdot (221\cos ^{7}\theta -273\cos ^{5}\theta +91\cos ^{3}\theta -7\cos \theta 
)\\Y_{9}^{3}(\theta ,\varphi )&={-1 \over 256}{\sqrt {21945 \over \pi }}\cdot e^{3i\varphi }\cdot \sin ^{3}\theta \cdot (221\cos ^{6}\theta -195\cos ^{4}\theta +39\cos ^{2}\theta -1)\\Y_{9}^{4}(\theta ,\varphi )&={3 \over 128}{\sqrt {95095 \over 2\pi }}\cdot e^{4i\varphi }\cdot \sin ^{4}\theta \cdot (17\cos ^{5}\theta -10\cos ^{3}\theta +\cos \theta )\\Y_{9}^{5}(\theta ,\varphi )&={-3 \over 256}{\sqrt {2717 \over \pi }}\cdot e^{5i\varphi }\cdot \sin ^{5}\theta \cdot (85\cos ^{4}\theta -30\cos ^{2}\theta +1)\\Y_{9}^{6}(\theta ,\varphi )&={1 \over 128}{\sqrt {40755 \over \pi }}\cdot e^{6i\varphi }\cdot \sin ^{6}\theta \cdot (17\cos ^{3}\theta -3\cos \theta )\\Y_{9}^{7}(\theta ,\varphi )&={-3 \over 512}{\sqrt {13585 \over \pi }}\cdot e^{7i\varphi }\cdot \sin ^{7}\theta \cdot (17\cos ^{2}\theta -1)\\Y_{9}^{8}(\theta ,\varphi )&={3 \over 256}{\sqrt {230945 \over 2\pi }}\cdot e^{8i\varphi }\cdot \sin ^{8}\theta \cdot \cos \theta \\Y_{9}^{9}(\theta ,\varphi )&={-1 \over 512}{\sqrt {230945 \over \pi }}\cdot e^{9i\varphi }\cdot \sin ^{9}\theta \end{aligned}}$ ℓ = 10 ${\begin{aligned}Y_{10}^{-10}(\theta ,\varphi )&={1 \over 1024}{\sqrt {969969 \over \pi }}\cdot e^{-10i\varphi }\cdot \sin ^{10}\theta \\Y_{10}^{-9}(\theta ,\varphi )&={1 \over 512}{\sqrt {4849845 \over \pi }}\cdot e^{-9i\varphi }\cdot \sin ^{9}\theta \cdot \cos \theta \\Y_{10}^{-8}(\theta ,\varphi )&={1 \over 512}{\sqrt {255255 \over 2\pi }}\cdot e^{-8i\varphi }\cdot \sin ^{8}\theta \cdot (19\cos ^{2}\theta -1)\\Y_{10}^{-7}(\theta ,\varphi )&={3 \over 512}{\sqrt {85085 \over \pi }}\cdot e^{-7i\varphi }\cdot \sin ^{7}\theta \cdot (19\cos ^{3}\theta -3\cos \theta )\\Y_{10}^{-6}(\theta ,\varphi )&={3 \over 1024}{\sqrt {5005 \over \pi }}\cdot e^{-6i\varphi }\cdot \sin ^{6}\theta \cdot (323\cos ^{4}\theta -102\cos ^{2}\theta +3)\\Y_{10}^{-5}(\theta ,\varphi )&={3 \over 256}{\sqrt {1001 \over \pi }}\cdot e^{-5i\varphi }\cdot \sin ^{5}\theta \cdot (323\cos ^{5}\theta -170\cos ^{3}\theta +15\cos \theta 
)\\Y_{10}^{-4}(\theta ,\varphi )&={3 \over 256}{\sqrt {5005 \over 2\pi }}\cdot e^{-4i\varphi }\cdot \sin ^{4}\theta \cdot (323\cos ^{6}\theta -255\cos ^{4}\theta +45\cos ^{2}\theta -1)\\Y_{10}^{-3}(\theta ,\varphi )&={3 \over 256}{\sqrt {5005 \over \pi }}\cdot e^{-3i\varphi }\cdot \sin ^{3}\theta \cdot (323\cos ^{7}\theta -357\cos ^{5}\theta +105\cos ^{3}\theta -7\cos \theta )\\Y_{10}^{-2}(\theta ,\varphi )&={3 \over 512}{\sqrt {385 \over 2\pi }}\cdot e^{-2i\varphi }\cdot \sin ^{2}\theta \cdot (4199\cos ^{8}\theta -6188\cos ^{6}\theta +2730\cos ^{4}\theta -364\cos ^{2}\theta +7)\\Y_{10}^{-1}(\theta ,\varphi )&={1 \over 256}{\sqrt {1155 \over 2\pi }}\cdot e^{-i\varphi }\cdot \sin \theta \cdot (4199\cos ^{9}\theta -7956\cos ^{7}\theta +4914\cos ^{5}\theta -1092\cos ^{3}\theta +63\cos \theta )\\Y_{10}^{0}(\theta ,\varphi )&={1 \over 512}{\sqrt {21 \over \pi }}\cdot (46189\cos ^{10}\theta -109395\cos ^{8}\theta +90090\cos ^{6}\theta -30030\cos ^{4}\theta +3465\cos ^{2}\theta -63)\\Y_{10}^{1}(\theta ,\varphi )&={-1 \over 256}{\sqrt {1155 \over 2\pi }}\cdot e^{i\varphi }\cdot \sin \theta \cdot (4199\cos ^{9}\theta -7956\cos ^{7}\theta +4914\cos ^{5}\theta -1092\cos ^{3}\theta +63\cos \theta )\\Y_{10}^{2}(\theta ,\varphi )&={3 \over 512}{\sqrt {385 \over 2\pi }}\cdot e^{2i\varphi }\cdot \sin ^{2}\theta \cdot (4199\cos ^{8}\theta -6188\cos ^{6}\theta +2730\cos ^{4}\theta -364\cos ^{2}\theta +7)\\Y_{10}^{3}(\theta ,\varphi )&={-3 \over 256}{\sqrt {5005 \over \pi }}\cdot e^{3i\varphi }\cdot \sin ^{3}\theta \cdot (323\cos ^{7}\theta -357\cos ^{5}\theta +105\cos ^{3}\theta -7\cos \theta )\\Y_{10}^{4}(\theta ,\varphi )&={3 \over 256}{\sqrt {5005 \over 2\pi }}\cdot e^{4i\varphi }\cdot \sin ^{4}\theta \cdot (323\cos ^{6}\theta -255\cos ^{4}\theta +45\cos ^{2}\theta -1)\\Y_{10}^{5}(\theta ,\varphi )&={-3 \over 256}{\sqrt {1001 \over \pi }}\cdot e^{5i\varphi }\cdot \sin ^{5}\theta \cdot (323\cos ^{5}\theta -170\cos ^{3}\theta +15\cos \theta )\\Y_{10}^{6}(\theta ,\varphi )&={3 \over 
1024}{\sqrt {5005 \over \pi }}\cdot e^{6i\varphi }\cdot \sin ^{6}\theta \cdot (323\cos ^{4}\theta -102\cos ^{2}\theta +3)\\Y_{10}^{7}(\theta ,\varphi )&={-3 \over 512}{\sqrt {85085 \over \pi }}\cdot e^{7i\varphi }\cdot \sin ^{7}\theta \cdot (19\cos ^{3}\theta -3\cos \theta )\\Y_{10}^{8}(\theta ,\varphi )&={1 \over 512}{\sqrt {255255 \over 2\pi }}\cdot e^{8i\varphi }\cdot \sin ^{8}\theta \cdot (19\cos ^{2}\theta -1)\\Y_{10}^{9}(\theta ,\varphi )&={-1 \over 512}{\sqrt {4849845 \over \pi }}\cdot e^{9i\varphi }\cdot \sin ^{9}\theta \cdot \cos \theta \\Y_{10}^{10}(\theta ,\varphi )&={1 \over 1024}{\sqrt {969969 \over \pi }}\cdot e^{10i\varphi }\cdot \sin ^{10}\theta \end{aligned}}$ Visualization of complex spherical harmonics 2D polar/azimuthal angle maps Below the complex spherical harmonics are represented on 2D plots with the azimuthal angle, $\phi $, on the horizontal axis and the polar angle, $\theta $, on the vertical axis. The saturation of the color at any point represents the magnitude of the spherical harmonic and the hue represents the phase. Polar plots Below the complex spherical harmonics are represented on polar plots. The magnitude of the spherical harmonic at particular polar and azimuthal angles is represented by the saturation of the color at that point and the phase is represented by the hue at that point. Polar plots with magnitude as radius Below the complex spherical harmonics are represented on polar plots. The magnitude of the spherical harmonic at particular polar and azimuthal angles is represented by the radius of the plot at that point and the phase is represented by the hue at that point. 
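The tabulated closed forms can be sanity-checked numerically. The sketch below transcribes the ℓ = 1 complex harmonics from the table into plain Python and verifies their orthonormality, ∫ Y₁ᵐ (Y₁ⁿ)* sin θ dθ dφ = δₘₙ, by midpoint-rule quadrature over the sphere:

```python
import cmath
import math

def Y1(m, theta, phi):
    # Complex l = 1 spherical harmonics, transcribed from the table above
    # (Condon-Shortley phase: note the minus sign for m = +1).
    if m == -1:
        return 0.5 * math.sqrt(3 / (2 * math.pi)) * cmath.exp(-1j * phi) * math.sin(theta)
    if m == 0:
        return complex(0.5 * math.sqrt(3 / math.pi) * math.cos(theta))
    if m == 1:
        return -0.5 * math.sqrt(3 / (2 * math.pi)) * cmath.exp(1j * phi) * math.sin(theta)
    raise ValueError("m must be -1, 0, or 1")

def inner(m, n, steps=200):
    """Midpoint-rule estimate of the inner product over the unit sphere."""
    dt, dp = math.pi / steps, 2 * math.pi / steps
    total = 0j
    for i in range(steps):
        theta = (i + 0.5) * dt
        st = math.sin(theta)
        for j in range(steps):
            phi = (j + 0.5) * dp
            total += Y1(m, theta, phi) * Y1(n, theta, phi).conjugate() * st * dt * dp
    return total

print(abs(inner(0, 0) - 1) < 1e-3)  # True: Y_1^0 is normalized
print(abs(inner(1, -1)) < 1e-3)     # True: Y_1^1 and Y_1^-1 are orthogonal
```

The same transcription-and-quadrature check works for any row of the table; only the polynomial in cos θ changes.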
Real spherical harmonics For each real spherical harmonic, the corresponding atomic orbital symbol (s, p, d, f) is reported as well.[2][3] For ℓ = 0, …, 3, see.[4][5] ℓ = 0 $Y_{00}=s=Y_{0}^{0}={\frac {1}{2}}{\sqrt {\frac {1}{\pi }}}$ ℓ = 1 ${\begin{aligned}Y_{1,-1}&=p_{y}=i{\sqrt {\frac {1}{2}}}\left(Y_{1}^{-1}+Y_{1}^{1}\right)={\sqrt {\frac {3}{4\pi }}}\cdot {\frac {y}{r}}\\Y_{1,0}&=p_{z}=Y_{1}^{0}={\sqrt {\frac {3}{4\pi }}}\cdot {\frac {z}{r}}\\Y_{1,1}&=p_{x}={\sqrt {\frac {1}{2}}}\left(Y_{1}^{-1}-Y_{1}^{1}\right)={\sqrt {\frac {3}{4\pi }}}\cdot {\frac {x}{r}}\end{aligned}}$ ℓ = 2 ${\begin{aligned}Y_{2,-2}&=d_{xy}=i{\sqrt {\frac {1}{2}}}\left(Y_{2}^{-2}-Y_{2}^{2}\right)={\frac {1}{2}}{\sqrt {\frac {15}{\pi }}}\cdot {\frac {xy}{r^{2}}}\\Y_{2,-1}&=d_{yz}=i{\sqrt {\frac {1}{2}}}\left(Y_{2}^{-1}+Y_{2}^{1}\right)={\frac {1}{2}}{\sqrt {\frac {15}{\pi }}}\cdot {\frac {y\cdot z}{r^{2}}}\\Y_{2,0}&=d_{z^{2}}=Y_{2}^{0}={\frac {1}{4}}{\sqrt {\frac {5}{\pi }}}\cdot {\frac {3z^{2}-r^{2}}{r^{2}}}\\Y_{2,1}&=d_{xz}={\sqrt {\frac {1}{2}}}\left(Y_{2}^{-1}-Y_{2}^{1}\right)={\frac {1}{2}}{\sqrt {\frac {15}{\pi }}}\cdot {\frac {x\cdot z}{r^{2}}}\\Y_{2,2}&=d_{x^{2}-y^{2}}={\sqrt {\frac {1}{2}}}\left(Y_{2}^{-2}+Y_{2}^{2}\right)={\frac {1}{4}}{\sqrt {\frac {15}{\pi }}}\cdot {\frac {x^{2}-y^{2}}{r^{2}}}\end{aligned}}$ ℓ = 3 ${\begin{aligned}Y_{3,-3}&=f_{y(3x^{2}-y^{2})}=i{\sqrt {\frac {1}{2}}}\left(Y_{3}^{-3}+Y_{3}^{3}\right)={\frac {1}{4}}{\sqrt {\frac {35}{2\pi }}}\cdot {\frac {y\left(3x^{2}-y^{2}\right)}{r^{3}}}\\Y_{3,-2}&=f_{xyz}=i{\sqrt {\frac {1}{2}}}\left(Y_{3}^{-2}-Y_{3}^{2}\right)={\frac {1}{2}}{\sqrt {\frac {105}{\pi }}}\cdot {\frac {xy\cdot z}{r^{3}}}\\Y_{3,-1}&=f_{yz^{2}}=i{\sqrt {\frac {1}{2}}}\left(Y_{3}^{-1}+Y_{3}^{1}\right)={\frac {1}{4}}{\sqrt {\frac {21}{2\pi }}}\cdot {\frac {y\cdot (5z^{2}-r^{2})}{r^{3}}}\\Y_{3,0}&=f_{z^{3}}=Y_{3}^{0}={\frac {1}{4}}{\sqrt {\frac {7}{\pi }}}\cdot {\frac {5z^{3}-3zr^{2}}{r^{3}}}\\Y_{3,1}&=f_{xz^{2}}={\sqrt {\frac 
{1}{2}}}\left(Y_{3}^{-1}-Y_{3}^{1}\right)={\frac {1}{4}}{\sqrt {\frac {21}{2\pi }}}\cdot {\frac {x\cdot (5z^{2}-r^{2})}{r^{3}}}\\Y_{3,2}&=f_{z(x^{2}-y^{2})}={\sqrt {\frac {1}{2}}}\left(Y_{3}^{-2}+Y_{3}^{2}\right)={\frac {1}{4}}{\sqrt {\frac {105}{\pi }}}\cdot {\frac {\left(x^{2}-y^{2}\right)\cdot z}{r^{3}}}\\Y_{3,3}&=f_{x(x^{2}-3y^{2})}={\sqrt {\frac {1}{2}}}\left(Y_{3}^{-3}-Y_{3}^{3}\right)={\frac {1}{4}}{\sqrt {\frac {35}{2\pi }}}\cdot {\frac {x\left(x^{2}-3y^{2}\right)}{r^{3}}}\end{aligned}}$ ℓ = 4 ${\begin{aligned}Y_{4,-4}&=i{\sqrt {\frac {1}{2}}}\left(Y_{4}^{-4}-Y_{4}^{4}\right)={\frac {3}{4}}{\sqrt {\frac {35}{\pi }}}\cdot {\frac {xy\left(x^{2}-y^{2}\right)}{r^{4}}}\\Y_{4,-3}&=i{\sqrt {\frac {1}{2}}}\left(Y_{4}^{-3}+Y_{4}^{3}\right)={\frac {3}{4}}{\sqrt {\frac {35}{2\pi }}}\cdot {\frac {y(3x^{2}-y^{2})\cdot z}{r^{4}}}\\Y_{4,-2}&=i{\sqrt {\frac {1}{2}}}\left(Y_{4}^{-2}-Y_{4}^{2}\right)={\frac {3}{4}}{\sqrt {\frac {5}{\pi }}}\cdot {\frac {xy\cdot (7z^{2}-r^{2})}{r^{4}}}\\Y_{4,-1}&=i{\sqrt {\frac {1}{2}}}\left(Y_{4}^{-1}+Y_{4}^{1}\right)={\frac {3}{4}}{\sqrt {\frac {5}{2\pi }}}\cdot {\frac {y\cdot (7z^{3}-3zr^{2})}{r^{4}}}\\Y_{4,0}&=Y_{4}^{0}={\frac {3}{16}}{\sqrt {\frac {1}{\pi }}}\cdot {\frac {35z^{4}-30z^{2}r^{2}+3r^{4}}{r^{4}}}\\Y_{4,1}&={\sqrt {\frac {1}{2}}}\left(Y_{4}^{-1}-Y_{4}^{1}\right)={\frac {3}{4}}{\sqrt {\frac {5}{2\pi }}}\cdot {\frac {x\cdot (7z^{3}-3zr^{2})}{r^{4}}}\\Y_{4,2}&={\sqrt {\frac {1}{2}}}\left(Y_{4}^{-2}+Y_{4}^{2}\right)={\frac {3}{8}}{\sqrt {\frac {5}{\pi }}}\cdot {\frac {(x^{2}-y^{2})\cdot (7z^{2}-r^{2})}{r^{4}}}\\Y_{4,3}&={\sqrt {\frac {1}{2}}}\left(Y_{4}^{-3}-Y_{4}^{3}\right)={\frac {3}{4}}{\sqrt {\frac {35}{2\pi }}}\cdot {\frac {x(x^{2}-3y^{2})\cdot z}{r^{4}}}\\Y_{4,4}&={\sqrt {\frac {1}{2}}}\left(Y_{4}^{-4}+Y_{4}^{4}\right)={\frac {3}{16}}{\sqrt {\frac {35}{\pi }}}\cdot {\frac {x^{2}\left(x^{2}-3y^{2}\right)-y^{2}\left(3x^{2}-y^{2}\right)}{r^{4}}}\end{aligned}}$ Visualization of real spherical harmonics 2D polar/azimuthal angle 
Maps
Below, the real spherical harmonics are represented on 2D plots with the azimuthal angle, $\phi $, on the horizontal axis and the polar angle, $\theta $, on the vertical axis. The saturation of the color at any point represents the magnitude of the spherical harmonic, and the hue represents the phase.

Polar plots
Below, the real spherical harmonics are represented on polar plots. The magnitude of the spherical harmonic at a given polar and azimuthal angle is represented by the saturation of the color at that point, and the phase is represented by the hue at that point.

Polar plots with magnitude as radius
Below, the real spherical harmonics are represented on polar plots. The magnitude of the spherical harmonic at a given polar and azimuthal angle is represented by the radius of the plot at that point, and the phase is represented by the hue at that point.

Polar plots with amplitude as elevation
Below, the real spherical harmonics are represented on polar plots. The amplitude of the spherical harmonic (magnitude and sign) at a given polar and azimuthal angle is represented by the elevation of the plot at that point above or below the surface of a uniform sphere. The magnitude is also represented by the saturation of the color at a given point, and the phase by the hue at a given point.

See also
• Spherical harmonics

External links
• Spherical Harmonics at MathWorld
• Spherical Harmonics 3D representation
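All of the visualization schemes above begin by evaluating the real spherical harmonics on a grid in the polar angle $\theta $ and azimuthal angle $\phi $. A minimal sketch using the explicit low-order formulas (the function name is our own, and only l ≤ 1 is implemented):

```python
import math

def real_sph_harm(l, m, theta, phi):
    """Real spherical harmonics for l <= 1 (explicit low-order formulas).

    theta is the polar angle, phi the azimuthal angle."""
    if (l, m) == (0, 0):
        return math.sqrt(1.0 / (4.0 * math.pi))
    if (l, m) == (1, 0):
        return math.sqrt(3.0 / (4.0 * math.pi)) * math.cos(theta)
    if (l, m) == (1, 1):
        return math.sqrt(3.0 / (4.0 * math.pi)) * math.sin(theta) * math.cos(phi)
    if (l, m) == (1, -1):
        return math.sqrt(3.0 / (4.0 * math.pi)) * math.sin(theta) * math.sin(phi)
    raise NotImplementedError("only l <= 1 in this sketch")

# Midpoint-rule check that Y_{1,0}**2 integrates to 1 over the sphere.
# The integrand is independent of phi, so the phi integral is just 2*pi.
N = 200
norm_Y10 = sum(
    real_sph_harm(1, 0, (i + 0.5) * math.pi / N, 0.0) ** 2
    * math.sin((i + 0.5) * math.pi / N)
    for i in range(N)
) * (math.pi / N) * 2.0 * math.pi
```

A 2D "map" plot, as described above, would sample `abs(real_sph_harm(l, m, theta, phi))` over the whole $(\phi, \theta)$ grid.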
Wikipedia
Table of vertex-symmetric digraphs
The best known vertex-transitive digraphs (as of October 2008) in the directed degree–diameter problem are tabulated below.

Orders of the largest known vertex-symmetric digraphs for the directed degree–diameter problem (rows: degree d; columns: diameter k = 2 through 11):

d = 2: 6, 10, 20, 27, 72, 144, 171, 336, 504, 737
d = 3: 12, 27, 60, 165, 333, 1152, 2041, 5115, 11568, 41472
d = 4: 20, 60, 168, 465, 1378, 7200, 14400, 42309, 137370, 648000
d = 5: 30, 120, 360, 1152, 3775, 28800, 86400, 259200, 1010658, 5184000
d = 6: 42, 210, 840, 2520, 9020, 88200, 352800, 1411200, 5184000, 27783000
d = 7: 56, 336, 1680, 6720, 20160, 225792, 1128960, 5644800, 27783000, 113799168
d = 8: 72, 504, 3024, 15120, 60480, 508032, 3048192, 18289152, 113799168, 457228800
d = 9: 90, 720, 5040, 30240, 151200, 1036800, 7257600, 50803200, 384072192, 1828915200
d = 10: 110, 990, 7920, 55440, 332640, 1960200, 15681600, 125452800, 1119744000, 6138320000
d = 11: 132, 1320, 11880, 95040, 665280, 3991680, 31152000, 282268800, 2910897000, 18065203200
d = 12: 156, 1716, 17160, 154440, 1235520, 8648640, 58893120, 588931200, 6899904000, 47703427200
d = 13: 182, 2184, 24024, 240240, 2162160, 17297280, 121080960, 1154305152, 15159089098, 115430515200

Key (in the original table each entry was color-coded by one of the following families):
• Family of digraphs found by W. H. Kautz. More details are available in a paper by the author.
• Family of digraphs found by V. Faber and J. W. Moore. More details are available also by other authors.
• Digraph found by V. Faber and J. W. Moore. The complete set of Cayley digraphs of that order was found by Eyal Loz.
• Digraphs found by Francesc Comellas and M. A. Fiol. More details are available in a paper by the authors.
• Cayley digraphs found by Michael J. Dinneen. Details about this graph are available in a paper by the author.
• Cayley digraphs found by Michael J. Dinneen. The complete set of Cayley digraphs of that order was found by Eyal Loz.
• Cayley digraphs found by Paul Hafner. Details about this graph are available in a paper by the author.
• Cayley digraph found by Paul Hafner. The complete set of Cayley digraphs of that order was found by Eyal Loz.
• Digraphs found by J. Gómez.
• Cayley digraphs found by Eyal Loz. More details are available in a paper by Eyal Loz and Jozef Širáň.

References
• Kautz, W. H. (1969), "Design of optimal interconnection networks for multiprocessors", Architecture and Design of Digital Computers, NATO Advanced Summer Institute: 249–272
• Faber, V.; Moore, J. W. (1988), "High-degree low-diameter interconnection networks with vertex symmetry: the directed case", Technical Report LA-UR-88-1051, Los Alamos National Laboratory
• Dinneen, Michael J.; Hafner, Paul R. (1994), "New Results for the Degree/Diameter Problem", Networks, 24 (7): 359–367, arXiv:math/9504214, doi:10.1002/net.3230240702
• Comellas, F.; Fiol, M. A. (1995), "Vertex-symmetric digraphs with small diameter", Discrete Applied Mathematics, 58 (1): 1–12, doi:10.1016/0166-218X(93)E0145-O
• Miller, Mirka; Širáň, Jozef (2005), "Moore graphs and beyond: A survey of the degree/diameter problem" (PDF), Electronic Journal of Combinatorics, Dynamic Survey
• Loz, Eyal; Širáň, Jozef (2008), "New record graphs in the degree-diameter problem" (PDF), Australasian Journal of Combinatorics, 41: 63–80

External links
• Vertex-symmetric Digraphs online table.
• The Degree–Diameter Problem on CombinatoricsWiki.org.
• Eyal Loz's Degree-Diameter problem page.
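The Kautz family in the key has a concrete combinatorial description: the vertices of the Kautz digraph of degree d and diameter k are the words of length k over an alphabet of d + 1 symbols in which no symbol repeats consecutively, and each word has an arc to each of its d left-shifts. A sketch (function names are our own):

```python
from itertools import product

def kautz_vertices(d, k):
    """Words of length k over d + 1 symbols with no two consecutive symbols
    equal; there are (d + 1) * d**(k - 1) of them."""
    return [w for w in product(range(d + 1), repeat=k)
            if all(a != b for a, b in zip(w, w[1:]))]

def kautz_arcs(d, k):
    """Arc from (a1, ..., ak) to (a2, ..., ak, b) for each b != ak,
    so every vertex has out-degree d."""
    return [(v, v[1:] + (b,))
            for v in kautz_vertices(d, k)
            for b in range(d + 1) if b != v[-1]]
```

For degree 2 and diameter 2 this gives the 6-vertex entry of the table; for most larger parameters the records come from the other families in the key, not from Kautz digraphs.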
Tableau de Concordance
The Tableau de Concordance was the main French diplomatic code used during World War I; the term also refers to any message sent using the code. It was a superenciphered four-digit code that was changed three times between 1 August 1914 and 15 January 1915. The Tableau de Concordance is considered superenciphered because more than one step is required to use it. First, each word in a message is replaced by four digits via a codebook. Each four-digit group is divided into three parts (one digit, two digits, one digit), so that when the whole message has been translated into code, the digits can be regrouped so that the entire message appears to be made up of two-digit pairs. This is called a "Straddle Gimmick." Then each of these two-digit pairs (and the single digits at the beginning and end) is replaced by two letters. The letters are then combined with no spaces to form the final ciphertext.

The manual for the Tableau de Concordance included the instruction that if there was not adequate time to encipher a message completely, it should simply be sent in the clear, because a partially enciphered message would have given insight into the inner workings of the code.

Sources
• The Codebreakers, by David Kahn, copyright 1967, 1996
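The enciphering steps can be made concrete in a short sketch. Everything here — the toy codebook and both substitution tables — is invented for illustration; the actual French tables were of course different:

```python
# Toy illustration of the Tableau de Concordance's "Straddle Gimmick".
LETTERS = "ABCDEFGHIJ"

# Hypothetical codebook: each word becomes four digits.
CODEBOOK = {"attack": "1234", "dawn": "9012"}

# Hypothetical letter tables for two-digit pairs and for lone digits.
PAIR_TABLE = {f"{i:02d}": LETTERS[i // 10] + LETTERS[i % 10] for i in range(100)}
SINGLE_TABLE = {str(i): "Q" + LETTERS[i] for i in range(10)}

def encipher(words):
    digits = "".join(CODEBOOK[w] for w in words)
    # Straddle gimmick: split each 4-digit group 1-2-1, so the digit stream
    # reads as a lone digit, then two-digit pairs straddling the word
    # boundaries, then a final lone digit.
    groups = ([digits[0]]
              + [digits[i:i + 2] for i in range(1, len(digits) - 1, 2)]
              + [digits[-1]])
    # Each pair (and each lone digit) is replaced by two letters.
    return "".join(SINGLE_TABLE[g] if len(g) == 1 else PAIR_TABLE[g]
                   for g in groups)
```

With these toy tables, `encipher(["attack", "dawn"])` turns the digit stream 12349012 into the groups 1 / 23 / 49 / 01 / 2 before the letter substitution; note how the pairs 49 and 01 straddle the boundary between the two codebook groups.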
Tachytrope
A tachytrope is a curve in which the law of the velocity is given. The term was first used by American mathematician Benjamin Peirce in A System of Analytic Mechanics, first published in 1855.[1]

References
1. Peirce 1855, pp. 364–370.

Sources
• Peirce, Benjamin (1855). A System of Analytic Mechanics. Boston: Little, Brown and Company. pp. 364–370.
Osculant
In mathematical invariant theory, the osculant, tacinvariant, or tact invariant is an invariant of a hypersurface that vanishes if the hypersurface touches itself, or an invariant of several hypersurfaces that osculate, meaning that they have a common point where they meet to unusually high order.

References
• Salmon, George (1885) [1859], Lessons introductory to the modern higher algebra (4th ed.), Dublin: Hodges, Figgis, and Co., ISBN 978-0-8284-0150-0
Tacnode
In classical algebraic geometry, a tacnode (also called a point of osculation or double cusp)[1] is a kind of singular point of a curve. It is defined as a point where two (or more) osculating circles to the curve at that point are tangent. This means that two branches of the curve have ordinary tangency at the double point.[1]

The canonical example is

$y^{2}-x^{4}=0.$

A tacnode of an arbitrary curve may then be defined from this example, as a point of self-tangency locally diffeomorphic to the point at the origin of this curve. Another example of a tacnode is given by the links curve shown in the figure, with equation

$(x^{2}+y^{2}-3x)^{2}-4x^{2}(2-x)=0.$

More general background
Consider a smooth real-valued function of two variables, say f (x, y), where x and y are real numbers. So f is a function from the plane to the line. The space of all such smooth functions is acted upon by the group of diffeomorphisms of the plane and the diffeomorphisms of the line, i.e. diffeomorphic changes of coordinate in both the source and the target. This action splits the whole function space up into equivalence classes, i.e. orbits of the group action.

One such family of equivalence classes is denoted by $A_{k}^{\pm },$ where k is a non-negative integer. This notation was introduced by V. I. Arnold. A function f is said to be of type $A_{k}^{\pm }$ if it lies in the orbit of $x^{2}\pm y^{k+1},$ i.e. there exists a diffeomorphic change of coordinate in source and target which takes f into one of these forms. These simple forms $x^{2}\pm y^{k+1}$ are said to give normal forms for the type $A_{k}^{\pm }$-singularities.

A curve with equation f = 0 will have a tacnode, say at the origin, if and only if f has a type $A_{3}^{-}$-singularity at the origin. Notice that a node $(x^{2}-y^{2}=0)$ corresponds to a type $A_{1}^{-}$-singularity. A tacnode corresponds to a type $A_{3}^{-}$-singularity.
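For the canonical example, the self-tangency can be seen directly by factoring the defining polynomial:

```latex
% The canonical tacnode equation splits into two smooth branches:
\[
  y^{2}-x^{4}=(y-x^{2})(y+x^{2})=0 ,
\]
% i.e. the two parabolas y = x^2 and y = -x^2.  Both pass through the
% origin with the common tangent line y = 0, so the two branches have
% ordinary tangency there, exactly as the definition requires.
```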
In fact each type $A_{2n+1}^{-}$-singularity, where n ≥ 0 is an integer, corresponds to a curve with self-intersection. As n increases, the order of self-intersection increases: transverse crossing, ordinary tangency, etc. The type $A_{2n+1}^{+}$-singularities are of no interest over the real numbers: they all give an isolated point. Over the complex numbers, type $A_{2n+1}^{+}$-singularities and type $A_{2n+1}^{-}$-singularities are equivalent: (x, y) → (x, iy) gives the required diffeomorphism of the normal forms.

See also
• Acnode
• Cusp or Spinode
• Crunode

References
1. Schwartzman, Steven (1994), The Words of Mathematics: An Etymological Dictionary of Mathematical Terms Used in English, MAA Spectrum, Mathematical Association of America, p. 217, ISBN 978-0-88385-511-9.

Further reading
• Salmon, George (1873). A Treatise on the Higher Plane Curves: Intended as a Sequel to a Treatise on Conic Sections.

External links
• Weisstein, Eric W. "Tacnode". MathWorld.
• Hazewinkel, M. (2001) [1994], "Tacnode", Encyclopedia of Mathematics, EMS Press
Tadashi Nagano
Tadashi Nagano (長野正, Nagano Tadashi, January 9, 1930 – February 1, 2017) was a Taiwan-born Japanese mathematician who worked mainly on differential geometry and related subjects.[3]

Born: January 9, 1930, Taipei, Taiwan
Died: February 1, 2017 (aged 87), Tokyo, Japan
Nationality: Japanese
Citizenship: Japan
Alma mater: University of Tokyo, Japan
Awards: Geometry Prize of the Mathematical Society of Japan, 1994[1]
Fields: Differential geometry, Riemannian geometry, symmetric spaces
Institutions: University of Notre Dame, University of Tokyo, Sophia University
Thesis: On compact transformation groups with (n − 1)-dimensional orbits (1959)[2]
Doctoral advisor: Kentaro Yano
Doctoral students: Bang-Yen Chen
Influences: Gaston Darboux, Élie Cartan, Shiing-Shen Chern, Kentaro Yano

Biography
Nagano was born in Taipei in 1930, when Taiwan was administered by Japan. He returned to Japan for undergraduate study at the University of Tokyo from 1951 to 1954, and defended his doctoral thesis there in 1959 under Kentaro Yano's supervision. He worked at the University of Tokyo from April 1959 to May 1967, as a lecturer (1959–1962) and then as an assistant professor (1962–1967). Nagano moved to the United States in 1967 to pursue an academic career at the University of Notre Dame,[4] where he became a full professor in 1969. He was a visiting professor at the University of California, Berkeley from 1962 to 1964, and twice at National Tsing Hua University in Taiwan, first in 1966 and again in 1978. After his career at Notre Dame, Nagano returned to Japan and became a professor at Sophia University in 1986.[5] He retired from Sophia University in 2000, at age 70.

Nagano co-authored ten papers with Shoshichi Kobayashi, including "A theorem on filtered Lie algebras and its applications", Bull. Amer. Math. Soc. 70 (1964), pp. 401–403.[6] He served as editor-in-chief of the Tokyo Journal of Mathematics for several years beginning in 1990. In 1994 he received the Geometry Prize of the Mathematical Society of Japan[7] for research achievements across a wide range of differential geometry, including a geometric construction of compact symmetric spaces (the (M+, M−)-method, joint work with Bang-Yen Chen).

References
1. "List of Recipients of the Geometry Prize from Mathematical Society of Japan".
2. "Tadashi Nagano on Research Gate".
3. "Tadashi Nagano on Math Genealogy".
4. "Tadashi Nagano on Math Genealogy".
5. "Tadashi Nagano on Research Gate".
6. "Tadashi Nagano on MathSciNet".
7. "List of Recipients of the Geometry Prize from Mathematical Society of Japan".
Tadashi Nakayama (mathematician)
Tadashi Nakayama or Tadasi Nakayama (中山 正, Nakayama Tadashi, July 26, 1912, Tokyo Prefecture – June 5, 1964, Nagoya) was a mathematician who made important contributions to representation theory.

Career
He received his degrees from Tokyo University and Osaka University and held permanent positions at Osaka University and Nagoya University. He had visiting positions at Princeton University, the University of Illinois, and the University of Hamburg. Nakayama's lemma, Nakayama algebras, Nakayama's conjecture and the Murnaghan–Nakayama rule are named after him.

Selected works
• Nakayama, Tadasi (1939), "On Frobeniusean algebras. I", Annals of Mathematics, Second Series, 40 (3): 611–633, Bibcode:1939AnMat..40..611N, doi:10.2307/1968946, JSTOR 1968946, MR 0000016
• Nakayama, Tadasi (1941), "On Frobeniusean algebras. II" (PDF), Annals of Mathematics, Second Series, 42 (1): 1–21, doi:10.2307/1968984, JSTOR 1968984, MR 0004237
• Tadasi Nakayama. A note on the elementary divisor theory in non-commutative domains. Bull. Amer. Math. Soc. 44 (1938) 719–723. MR1563855 doi:10.1090/S0002-9904-1938-06850-4
• Tadasi Nakayama. A remark on representations of groups. Bull. Amer. Math. Soc. 44 (1938) 233–235. MR1563716 doi:10.1090/S0002-9904-1938-06723-7
• Tadasi Nakayama. A remark on the sum and the intersection of two normal ideals in an algebra. Bull. Amer. Math. Soc. 46 (1940) 469–472. MR0001967 doi:10.1090/S0002-9904-1940-07235-0
• Tadasi Nakayama and Junji Hashimoto. On a problem of G. Birkhoff. Proc. Amer. Math. Soc. 1 (1950) 141–142. MR0035279 doi:10.1090/S0002-9939-1950-0035279-X
• Tadasi Nakayama. Remark on the duality for noncommutative compact groups. Proc. Amer. Math. Soc. 2 (1951) 849–854. MR0045131 doi:10.1090/S0002-9939-1951-0045131-2
• Tadasi Nakayama. Orthogonality relation for Frobenius- and quasi-Frobenius-algebras. Proc. Amer. Math. Soc. 3 (1952) 183–195. MR0049876 doi:10.2307/2032255
• Tadasi Nakayama. 
Galois theory of simple rings. Trans. Amer. Math. Soc. 73 (1952) 276–292. MR0049875 doi:10.1090/S0002-9947-1952-0049875-3
• Masatosi Ikeda and Tadasi Nakayama. On some characteristic properties of quasi-Frobenius and regular rings. Proc. Amer. Math. Soc. 5 (1954) 15–19. MR0060489 doi:10.1090/S0002-9939-1954-0060489-9

References
• "Obituary: Tadasi Nakayama", Nagoya Mathematical Journal, 27: i–vii (1 plate), 1966, ISSN 0027-7630, MR 0191789

External links
• O'Connor, John J.; Robertson, Edmund F., "Tadashi Nakayama", MacTutor History of Mathematics Archive, University of St Andrews
• Tadasi Nakayama at the Mathematics Genealogy Project
• https://www.math.uni-bielefeld.de/~sek/collect/nakayama.html
Tadeusz Iwaniec
Tadeusz Iwaniec (born October 9, 1947 in Elbląg) is a Polish-American mathematician, and since 1996 the John Raymond French Distinguished Professor of Mathematics at Syracuse University.[1]

Born: October 9, 1947, Elbląg, Poland
Citizenship: United States
Fields: Mathematics
Institutions: Syracuse University

He and the mathematician Henryk Iwaniec are twin brothers.

Awards and honors
Iwaniec received the Prize of the President of the Polish Academy of Sciences (1980), the Alfred Jurzykowski Award in Mathematics (1997), the 2001 Prix Gauthier-Villars of the Institut Henri Poincaré,[1] and the 2009 Sierpiński Medal of the Polish Mathematical Society and Warsaw University.[2] In 1998 he was elected a foreign member of the Accademia di Scienze Fisiche e Matematiche, Italy,[1] and in 2012 a foreign member of the Finnish Academy of Science and Letters.[3]

References
1. Biography of Tadeusz Iwaniec, Mathematics Department, Syracuse University. Retrieved 2010-01-22.
2. SU math professor gets international honor, Syracuse Post-Standard, March 26, 2009.
3. New Members. Finnish Academy of Science and Letters. Accessed May 24, 2012.
Partition of an interval
This article is about grouping elements of an interval using a sequence. For grouping elements of a set using a set of sets, see Partition of a set.

In mathematics, a partition of an interval [a, b] on the real line is a finite sequence x0, x1, x2, …, xn of real numbers such that

a = x0 < x1 < x2 < … < xn = b.

In other terms, a partition of a compact interval I is a strictly increasing sequence of numbers (belonging to the interval I itself) starting from the initial point of I and arriving at the final point of I. Every interval of the form [xi, xi+1] is referred to as a subinterval of the partition x.

Refinement of a partition
Another partition Q of the given interval [a, b] is defined as a refinement of the partition P if Q contains all the points of P and possibly some other points as well; the partition Q is then said to be "finer" than P. Given two partitions P and Q, one can always form their common refinement, denoted P ∨ Q, which consists of all the points of P and Q, in increasing order.[1]

Norm of a partition
The norm (or mesh) of the partition x0 < x1 < x2 < … < xn is the length of the longest of its subintervals:[2][3]

max{|xi − xi−1| : i = 1, …, n}.

Applications
Partitions are used in the theory of the Riemann integral, the Riemann–Stieltjes integral and the regulated integral. Specifically, as finer partitions of a given interval are considered, their mesh approaches zero and the Riemann sum based on a given partition approaches the Riemann integral.[4]

Tagged partitions
A tagged partition[5] is a partition of a given interval together with a finite sequence of numbers t0, …, tn−1 subject to the condition that for each i, xi ≤ ti ≤ xi+1. In other words, a tagged partition is a partition together with a distinguished point in every subinterval; its mesh is defined in the same way as for an ordinary partition.
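The notions above — norm, tags, and the Riemann sum attached to a tagged partition — fit in a few lines of code. This is a minimal sketch; the helper names and the sample integrand x² are our own choices:

```python
def mesh(xs):
    """Norm (mesh) of the partition xs[0] < xs[1] < ... < xs[n]."""
    return max(b - a for a, b in zip(xs, xs[1:]))

def riemann_sum(f, xs, tags):
    """Riemann sum of f for a tagged partition: tags[i] lies in [xs[i], xs[i+1]]."""
    return sum(f(t) * (b - a) for a, b, t in zip(xs, xs[1:], tags))

# Uniform partition of [0, 1] with left-endpoint tags.  As n grows, the
# mesh 1/n tends to 0 and the sums tend to the integral of x**2, i.e. 1/3.
n = 1000
xs = [i / n for i in range(n + 1)]
approx = riemann_sum(lambda x: x * x, xs, xs[:-1])
```

Any other admissible choice of tags (right endpoints, midpoints, …) gives sums converging to the same limit, which is exactly the Riemann-integrability statement referenced above.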
It is possible to define a partial order on the set of all tagged partitions by saying that one tagged partition is bigger than another if the bigger one is a refinement of the smaller one. Suppose that x0, …, xn together with t0, …, tn−1 is a tagged partition of [a, b], and that y0, …, ym together with s0, …, sm−1 is another tagged partition of [a, b]. We say that the second is a refinement of the first if for each integer i with 0 ≤ i ≤ n there is an integer r(i) such that xi = yr(i) and such that ti = sj for some j with r(i) ≤ j ≤ r(i + 1) − 1. Said more simply, a refinement of a tagged partition takes the starting partition and adds more tags, but does not take any away.

See also
• Regulated integral
• Riemann integral
• Riemann–Stieltjes integral

References
1. Brannan, D. A. (2006). A First Course in Mathematical Analysis. Cambridge University Press. p. 262. ISBN 9781139458955.
2. Hijab, Omar (2011). Introduction to Calculus and Classical Analysis. Springer. p. 60. ISBN 9781441994882.
3. Zorich, Vladimir A. (2004). Mathematical Analysis II. Springer. p. 108. ISBN 9783540406334.
4. Ghorpade, Sudhir; Limaye, Balmohan (2006). A Course in Calculus and Real Analysis. Springer. p. 213. ISBN 9780387364254.
5. Dudley, Richard M.; Norvaiša, Rimas (2010). Concrete Functional Calculus. Springer. p. 2. ISBN 9781441969507.

Further reading
• Gordon, Russell A. (1994). The integrals of Lebesgue, Denjoy, Perron, and Henstock. Graduate Studies in Mathematics, 4. Providence, RI: American Mathematical Society. ISBN 0-8218-3805-9.
Tail dependence
In probability theory, the tail dependence of a pair of random variables is a measure of their comovement in the tails of the distributions. The concept is used in extreme value theory. Random variables that appear to exhibit no correlation can show tail dependence in extreme deviations. For instance, it is a stylized fact of stock returns that they commonly exhibit tail dependence.[1]

Definition
The lower tail dependence is defined as[2]

$\lambda _{\ell }=\lim _{q\rightarrow 0}\operatorname {P} (X_{2}\leq F_{2}^{-1}(q)\mid X_{1}\leq F_{1}^{-1}(q)),$

where $F^{-1}(q)={\rm {inf}}\{x\in \mathbb {R} :F(x)\geq q\}$ is the generalized inverse of the cumulative distribution function, evaluated at q. The upper tail dependence is defined analogously as

$\lambda _{u}=\lim _{q\rightarrow 1}\operatorname {P} (X_{2}>F_{2}^{-1}(q)\mid X_{1}>F_{1}^{-1}(q)).$

See also
• Correlation
• Dependence

References
1. Hartmann, Philip; Straetmans, Stefan T.M.; De Vries, Casper G. (2004). "Asset Market Linkages in Crisis Periods". Review of Economics and Statistics. 86 (1): 313–326. doi:10.1162/003465304323023831. hdl:10419/152505.
2. McNeil, Alexander J.; Frey, Rüdiger; Embrechts, Paul (2005), Quantitative Risk Management. 
Concepts, Techniques and Tools, Princeton Series in Finance, Princeton, NJ: Princeton University Press, ISBN 978-0-691-12255-7, MR 2175089, Zbl 1089.91037
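The limiting probability in the definition can be probed empirically by replacing the quantile functions $F_{i}^{-1}(q)$ with sample quantiles. A sketch using only the standard library (the estimator name is our own, and q is held fixed rather than sent to 0 as in the true limit):

```python
import random

def empirical_lower_tail(x1, x2, q):
    """Estimate P(X2 <= F2^{-1}(q) | X1 <= F1^{-1}(q)) from paired samples."""
    n = len(x1)
    t1 = sorted(x1)[int(q * n)]   # approximate q-quantile of X1
    t2 = sorted(x2)[int(q * n)]   # approximate q-quantile of X2
    hits = sum(1 for a, b in zip(x1, x2) if a <= t1 and b <= t2)
    return hits / (q * n)

random.seed(0)
n = 10_000
u = [random.gauss(0.0, 1.0) for _ in range(n)]
v = [random.gauss(0.0, 1.0) for _ in range(n)]

dependent = empirical_lower_tail(u, u, 0.05)     # comonotone pair: near 1
independent = empirical_lower_tail(u, v, 0.05)   # independent pair: near q
```

For a fixed q the comonotone pair gives a value near 1 and the independent pair a value near q itself; an actual estimate of $\lambda _{\ell }$ would let q shrink as the sample size grows.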
Tait's conjecture
In mathematics, Tait's conjecture states that "Every 3-connected planar cubic graph has a Hamiltonian cycle (along the edges) through all its vertices". It was proposed by P. G. Tait (1884) and disproved by W. T. Tutte (1946), who constructed a counterexample with 25 faces, 69 edges and 46 vertices. Smaller counterexamples, with 21 faces, 57 edges and 38 vertices, were later found, and Holton & McKay (1988) proved that no smaller counterexamples exist. The condition that the graph be 3-regular is necessary because of polyhedra such as the rhombic dodecahedron, which forms a bipartite graph with six degree-four vertices on one side and eight degree-three vertices on the other side; because any Hamiltonian cycle would have to alternate between the two sides of the bipartition, and the two sides have unequal numbers of vertices, the rhombic dodecahedron is not Hamiltonian.

The conjecture was significant because, if true, it would have implied the four color theorem: as Tait described, the four-color problem is equivalent to the problem of finding 3-edge-colorings of bridgeless cubic planar graphs. In a Hamiltonian cubic planar graph, such an edge coloring is easy to find: use two colors alternately on the cycle, and a third color for all remaining edges. Alternatively, a 4-coloring of the faces of a Hamiltonian cubic planar graph may be constructed directly, using two colors for the faces inside the cycle and two more colors for the faces outside.

Tutte's counterexample
Tutte's fragment
The key to this counterexample is what is now known as Tutte's fragment, shown on the right. If this fragment is part of a larger graph, then any Hamiltonian cycle through the graph must go in or out of the top vertex (and either one of the lower ones). It cannot go in one lower vertex and out the other.

The counterexample
The fragment can then be used to construct the non-Hamiltonian Tutte graph, by putting together three such fragments as shown in the picture. 
The "compulsory" edges of the fragments, which must be part of any Hamiltonian path through the fragment, are connected at the central vertex; because any cycle can use only two of these three edges, there can be no Hamiltonian cycle. The resulting Tutte graph is 3-connected and planar, so by Steinitz' theorem it is the graph of a polyhedron. In total it has 25 faces, 69 edges and 46 vertices. It can be realized geometrically from a tetrahedron (the faces of which correspond to the four large faces in the drawing, three of which are between pairs of fragments and the fourth of which forms the exterior) by multiply truncating three of its vertices.

Smaller counterexamples
As Holton & McKay (1988) show, there are exactly six 38-vertex non-Hamiltonian polyhedra that have nontrivial three-edge cuts. They are formed by replacing two of the vertices of a pentagonal prism by the same fragment used in Tutte's example.

See also
• Grinberg's theorem, a necessary condition on the existence of a Hamiltonian cycle that can be used to show that a graph forms a counterexample to Tait's conjecture
• Barnette's conjecture, a still-open refinement of Tait's conjecture stating that every bipartite cubic polyhedral graph is Hamiltonian.[1]

Notes
1. Barnette's conjecture, the Open Problem Garden, retrieved 2009-10-12.

References
• Holton, D. A.; McKay, B. D. (1988), "The smallest non-Hamiltonian 3-connected cubic planar graphs have 38 vertices", Journal of Combinatorial Theory, Series B, 45 (3): 305–319, doi:10.1016/0095-8956(88)90075-5.
• Tait, P. G. (1884), "Listing's Topologie", Philosophical Magazine, 5th Series, 17: 30–46. Reprinted in Scientific Papers, Vol. II, pp. 85–98.
• Tutte, W. T. (1946), "On Hamiltonian circuits" (PDF), Journal of the London Mathematical Society, 21 (2): 98–101, doi:10.1112/jlms/s1-21.2.98. Partly based on sci.math posting by Bill Taylor, used by permission.

External links
• Weisstein, Eric W. "Tait's Hamiltonian Graph Conjecture". MathWorld. 
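Tait's easy direction — a Hamiltonian cubic planar graph is 3-edge-colorable — can be sketched directly: color the cycle edges with two alternating colors and every chord with a third. A minimal sketch (the graph encoding is our own choice):

```python
def three_edge_coloring(edges, ham_cycle):
    """Proper 3-edge-coloring of a cubic graph from a Hamiltonian cycle.

    ham_cycle lists the vertices in cycle order; a cubic graph has an even
    number of vertices, so the alternation 0, 1, 0, 1, ... closes up."""
    coloring = {}
    m = len(ham_cycle)
    for i in range(m):
        e = frozenset((ham_cycle[i], ham_cycle[(i + 1) % m]))
        coloring[e] = i % 2                  # alternate two colors on the cycle
    for u, v in edges:                       # all remaining edges are chords
        coloring.setdefault(frozenset((u, v)), 2)
    return coloring

# K4 is cubic, planar and Hamiltonian: cycle 0-1-2-3, chords {0,2} and {1,3}.
edges_k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
colors = three_edge_coloring(edges_k4, [0, 1, 2, 3])
```

Each vertex of K4 ends up incident to one edge of each color, i.e. the coloring is proper; Tutte's graph shows that this construction is not available for every 3-connected cubic planar graph.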
Tait–Kneser theorem In differential geometry, the Tait–Kneser theorem states that, if a smooth plane curve has monotonic curvature, then the osculating circles of the curve are disjoint and nested within each other.[1] The logarithmic spiral or the pictured Archimedean spiral provide examples of curves whose curvature is monotonic for the entire curve. This monotonicity cannot happen for a simple closed curve (by the four-vertex theorem, there are at least four vertices where the curvature reaches an extreme point)[1] but for such curves the theorem can be applied to the arcs of the curves between its vertices. The theorem is named after Peter Tait, who published it in 1896, and Adolf Kneser, who rediscovered it and published it in 1912.[1][2][3] Tait's proof follows simply from the properties of the evolute, the curve traced out by the centers of osculating circles. For curves with monotone curvature, the arc length along the evolute between two centers equals the difference in radii of the corresponding circles. This arc length must be greater than the straight-line distance between the same two centers, so the two circles have centers closer together than the difference of their radii, from which the theorem follows.[1][2] Analogous disjointness theorems can be proved for the family of Taylor polynomials of a given smooth function, and for the osculating conics to a given smooth curve.[1][4] References 1. Ghys, Étienne; Tabachnikov, Sergei; Timorin, Vladlen (2013), "Osculating curves: around the Tait–Kneser theorem", The Mathematical Intelligencer, 35 (1): 61–66, arXiv:1207.5662, doi:10.1007/s00283-012-9336-6, MR 3041992, S2CID 253808284 2. Professor Tait (February 1895), "Note on the Circles of Curvature of a Plane Curve", Proceedings of the Edinburgh Mathematical Society, 14: 26, doi:10.1017/s0013091500031710 3. 
Kneser, Adolf (1912), "Bemerkungen über die Anzahl der Extreme der Krümmung auf geschlossenen Kurven und über verwandte Fragen in einer nicht-euklidischen Geometrie", Festschrift Heinrich Weber zu seinem siebzigsten Geburtstag am 5. März 1912 gewidmet von Freunden und Schülern; mit dem Bildnis von H. Weber in Heliogravüre und Figuren im Text, Leipzig: B. G. Teubner, pp. 170–180 4. Bor, Gil; Jackman, Connor; Tabachnikov, Serge (2021-08-04). "Variations on the Tait–Kneser Theorem". The Mathematical Intelligencer. 43 (3): 8–14. arXiv:2104.02170. doi:10.1007/s00283-021-10119-0. ISSN 0343-6993. S2CID 16664105.
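The nesting statement can be checked numerically on the Archimedean spiral mentioned above. The sketch below parametrizes the spiral as (t cos t, t sin t) and uses the closed-form curvature κ(t) = (t² + 2)/(1 + t²)^{3/2}, which is an assumption of this sketch (it follows from the standard plane-curve curvature formula and is monotone decreasing for t > 0).

```python
import math

def osculating_circle(t):
    """Center and radius of the osculating circle of the Archimedean spiral
    gamma(t) = (t cos t, t sin t), whose curvature is monotone for t > 0."""
    x, y = t * math.cos(t), t * math.sin(t)
    dx = math.cos(t) - t * math.sin(t)
    dy = math.sin(t) + t * math.cos(t)
    speed = math.hypot(dx, dy)
    kappa = (t * t + 2) / (1 + t * t) ** 1.5   # closed-form curvature (assumed)
    radius = 1 / kappa
    nx, ny = -dy / speed, dx / speed           # unit normal: tangent rotated +90 degrees
    return (x + radius * nx, y + radius * ny), radius

(c1, r1), (c2, r2) = osculating_circle(2.0), osculating_circle(5.0)
dist = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
# Tait-Kneser: the smaller circle lies strictly inside the larger one, so the
# distance between centers is smaller than the difference of the radii
assert r2 > r1 and dist < r2 - r1
```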
Taivo Arak Taivo Arak (2 November 1946, Tallinn – 17 October 2007, Stockholm) was an Estonian mathematician, specializing in probability theory. Biography In 1969 he graduated from Leningrad State University.[1] There he received in 1972 his Russian candidate degree (Ph.D.) under I. A. Ibragimov.[2] In 1983 Arak defended his dissertation for his Russian doctorate (higher doctoral degree similar to habilitation). From 1972 to 1981 he worked at the Tallinn University of Technology. From 1981 he worked at the Institute of Cybernetics of the Academy of Sciences of the Estonian SSR.[3] In 1986 he was an Invited Speaker at the International Congress of Mathematicians in Berkeley, California.[4] Most of his research dealt with the theory of probability. Awards • Markov Prize (1983) - for the series of papers "Равномерные предельные теоремы для сумм независимых случайных величин" (Uniform limit theorems for sums of independent random variables). Selected publications • with Andrei Yuryevich Zaitsev: Uniform limit theorems for sums of independent random variables. Proceedings of the Steklov Institute of Mathematics, Vol. 174. American Mathematical Soc. 1988. ISBN 9780821831182. • with Donatas Surgailis: Arak, T.; Surgailis, D. (February 1989). "Markov fields with polygonal realizations". Probability Theory and Related Fields. 80 (4): 543–579. doi:10.1007/BF00318906. S2CID 120932428. • with D. Surgailis: Grigelionis, Bronius (1990). "Markov random graphs and polygonal fields with Y-shaped nodes". In: Probability theory and mathematical statistics. Proc. 5th Vilnius conference, vol. 1. pp. 57–67. ISBN 9067641286. • with Peter Clifford and D. Surgailis: Arak, T.; Clifford, P.; Surgailis, D. (1993). "Point-based polygonal models for random graphs". Advances in Applied Probability. 25 (2): 348–372. doi:10.2307/1427657. JSTOR 1427657. S2CID 120107892. References 1. "Выдающиеся матмеховцы (Outstanding mathematicians)". math.spbu.ru. 2. 
Taivo Arak at the Mathematics Genealogy Project 3. Арак, Тайво Викторович (Arak, Taivo Viktorovich)— Биографическая энциклопедия (Biographical Encyclopedia) 4. Arak, Taivo (1986). "A class of Markov fields with finite range". In: Proc. Intern. Congress of Mathematicians, Berkeley. pp. 994–999. External links • Арак Тайво Викторович, ras.ru • Arak Taivo Viktorovich, list of publications. mathnet.ru
Blancmange curve In mathematics, the blancmange curve is a self-affine curve constructible by midpoint subdivision. It is also known as the Takagi curve, after Teiji Takagi who described it in 1901, or as the Takagi–Landsberg curve, a generalization of the curve named after Takagi and Georg Landsberg. The name blancmange comes from its resemblance to a Blancmange pudding. It is a special case of the more general de Rham curve; see also fractal curve. Definition The blancmange function is defined on the unit interval by $\operatorname {blanc} (x)=\sum _{n=0}^{\infty }{s(2^{n}x) \over 2^{n}},$ where $s(x)$ is the triangle wave, defined by $s(x)=\min _{n\in {\mathbf {Z} }}|x-n|$, that is, $s(x)$ is the distance from x to the nearest integer. The Takagi–Landsberg curve is a slight generalization, given by $T_{w}(x)=\sum _{n=0}^{\infty }w^{n}s(2^{n}x)$ for a parameter $w$; thus the blancmange curve is the case $w=1/2$. The value $H=-\log _{2}w$ is known as the Hurst parameter. The function can be extended to all of the real line: applying the definition given above shows that the function repeats on each unit interval. The function could also be defined by the series in the section Fourier series expansion. Functional equation definition The periodic version of the Takagi curve can also be defined as the unique bounded solution $T=T_{w}:\mathbb {R} \to \mathbb {R} $ to the functional equation $T(x)=s(x)+wT(2x).$ Indeed, the blancmange function $T_{w}$ is certainly bounded, and solves the functional equation, since $T_{w}(x):=\sum _{n=0}^{\infty }w^{n}s(2^{n}x)=s(x)+\sum _{n=1}^{\infty }w^{n}s(2^{n}x)$$=s(x)+w\sum _{n=0}^{\infty }w^{n}s(2^{n+1}x)=s(x)+wT_{w}(2x).$ Conversely, if $T:\mathbb {R} \to \mathbb {R} $ is a bounded solution of the functional equation, iterating the equality one has for any N $T(x)=\sum _{n=0}^{N}w^{n}s(2^{n}x)+w^{N+1}T(2^{N+1}x)=\sum _{n=0}^{N}w^{n}s(2^{n}x)+o(1),{\text{ for }}N\to \infty ,$ whence $T=T_{w}$. 
Incidentally, the above functional equation possesses infinitely many continuous, unbounded solutions, e.g. $T_{w}(x)+c|x|^{-\log _{2}w}.$ Graphical construction The blancmange curve can be visually built up out of triangle wave functions if the infinite sum is approximated by finite sums of the first few terms. In the illustrations below, progressively finer triangle functions (shown in red) are added to the curve at each stage. n = 0, n ≤ 1, n ≤ 2, n ≤ 3 Properties Convergence and continuity The infinite sum defining $T_{w}(x)$ converges absolutely for all $x$: since $0\leq s(x)\leq 1/2$ for all $x\in \mathbb {R} $, we have: $\sum _{n=0}^{\infty }|w^{n}s(2^{n}x)|\leq {\frac {1}{2}}\sum _{n=0}^{\infty }|w|^{n}={\frac {1}{2}}\cdot {\frac {1}{1-|w|}}$ if $|w|<1.$ Therefore, the Takagi curve of parameter $w$ is defined on the unit interval (or $\mathbb {R} $) if $|w|<1$. The Takagi function of parameter $w$ is continuous. Indeed, the functions $T_{w,n}$ defined by the partial sums $T_{w,n}(x)=\sum _{k=0}^{n}w^{k}s(2^{k}x)$ are continuous and converge uniformly toward $T_{w}$, since: $\left|T_{w}(x)-T_{w,n}(x)\right|=\left|\sum _{k=n+1}^{\infty }w^{k}s(2^{k}x)\right|=\left|w^{n+1}\sum _{k=0}^{\infty }w^{k}s(2^{k+n+1}x)\right|\leq {\frac {|w|^{n+1}}{2}}\cdot {\frac {1}{1-|w|}}$ for all x when $|w|<1.$ This bound can be made as small as we want by choosing n large enough. Therefore, by the uniform limit theorem, $T_{w}$ is continuous if |w| < 1. • parameter w = 2/3 • parameter w = 1/2 • parameter w = 1/3 • parameter w = 1/4 • parameter w = 1/8 Subadditivity Since the absolute value is a subadditive function, so is the function $s(x)=\min _{n\in {\mathbf {Z} }}|x-n|$, and so are its dilations $s(2^{k}x)$; since positive linear combinations and point-wise limits of subadditive functions are subadditive, the Takagi function is subadditive for any value of the parameter $w$.
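The series definition and the functional equation can be checked numerically with a truncated sum. This is a sketch: the truncation at 60 terms is an arbitrary choice, far beyond what double precision can resolve since the terms decay geometrically for |w| < 1.

```python
def s(x):
    # triangle wave: distance from x to the nearest integer
    f = x % 1.0
    return min(f, 1.0 - f)

def T(x, w, terms=60):
    # truncated Takagi-Landsberg series; w = 1/2 gives the blancmange curve
    return sum(w ** n * s(2 ** n * x) for n in range(terms))

x, w = 0.3, 0.5
# functional equation: T(x) = s(x) + w * T(2x)
assert abs(T(x, w) - (s(x) + w * T(2 * x, w))) < 1e-9
# for w = 1/4 the series sums to the parabola 2x(1 - x), the special case
# discussed below
assert abs(T(x, 0.25) - 2 * x * (1 - x)) < 1e-9
```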
The special case of the parabola For $w=1/4$, one obtains the parabola: the construction of the parabola by midpoint subdivision was described by Archimedes. Differentiability For values of the parameter $0<w<1/2$ the Takagi function $T_{w}$ is differentiable in the classical sense at any $x\in \mathbb {R} $ which is not a dyadic rational. Precisely, by termwise differentiation of the series, for any non-dyadic rational $x\in \mathbb {R} $ one finds $T_{w}'(x)=\sum _{n=0}^{\infty }(2w)^{n}\,(-1)^{x_{-n-1}}$ where $(x_{n})_{n\in \mathbb {Z} }\in \{0,1\}^{\mathbb {Z} }$ is the sequence of binary digits in the base 2 expansion of $x$, that is, $x=\sum _{n\in \mathbb {Z} }2^{n}x_{n}$. Moreover, for these values of $w$ the function $T_{w}$ is Lipschitz of constant $1 \over 1-2w$. In particular for the special value $w=1/4$ one finds, for any non-dyadic rational $x\in [0,1]$, $T_{1/4}'(x)=2-4x$, in agreement with the parabola $T_{1/4}(x)=2x(1-x)$ noted above. For $w=1/2$ the blancmange function $T_{w}$ is of bounded variation on no non-empty open set; it is not even locally Lipschitz, but it is quasi-Lipschitz; indeed, it admits the function $\omega (t):=t(|\log _{2}t|+1/2)$ as a modulus of continuity. Fourier series expansion The Takagi–Landsberg function admits an absolutely convergent Fourier series expansion: $T_{w}(x)=\sum _{m=0}^{\infty }a_{m}\cos(2\pi mx)$ with $a_{0}=1/4(1-w)$ and, for $m\geq 1$ $a_{m}:=-{\frac {2}{\pi ^{2}m^{2}}}(4w)^{\nu (m)},$ where $2^{\nu (m)}$ is the maximum power of $2$ that divides $m$.
Indeed, the above triangle wave $s(x)$ has an absolutely convergent Fourier series expansion $s(x)={\frac {1}{4}}-{\frac {2}{\pi ^{2}}}\sum _{k=0}^{\infty }{\frac {1}{(2k+1)^{2}}}\cos {\big (}2\pi (2k+1)x{\big )}.$ By absolute convergence, one can reorder the corresponding double series for $T_{w}(x)$: $T_{w}(x):=\sum _{n=0}^{\infty }w^{n}s(2^{n}x)={\frac {1}{4}}\sum _{n=0}^{\infty }w^{n}-{\frac {2}{\pi ^{2}}}\sum _{n=0}^{\infty }\sum _{k=0}^{\infty }{\frac {w^{n}}{(2k+1)^{2}}}\cos {\big (}2\pi 2^{n}(2k+1)x{\big )}\,:$ putting $m=2^{n}(2k+1)$ yields the above Fourier series for $T_{w}(x).$ Self similarity The recursive definition allows the monoid of self-symmetries of the curve to be given. This monoid is given by two generators, g and r, which act on the curve (restricted to the unit interval) as $[g\cdot T_{w}](x)=T_{w}\left({\frac {x}{2}}\right)={\frac {x}{2}}+wT_{w}(x)$ and $[r\cdot T_{w}](x)=T_{w}(1-x)=T_{w}(x).$ A general element of the monoid then has the form $\gamma =g^{a_{1}}rg^{a_{2}}r\cdots rg^{a_{n}}$ for some integers $a_{1},a_{2},\cdots ,a_{n}$ This acts on the curve as a linear function: $\gamma \cdot T_{w}=a+bx+cT_{w}$ for some constants a, b and c. Because the action is linear, it can be described in terms of a vector space, with the vector space basis: $1\mapsto e_{1}={\begin{bmatrix}1\\0\\0\end{bmatrix}}$ $x\mapsto e_{2}={\begin{bmatrix}0\\1\\0\end{bmatrix}}$ $T_{w}\mapsto e_{3}={\begin{bmatrix}0\\0\\1\end{bmatrix}}$ In this representation, the action of g and r are given by $g={\begin{bmatrix}1&0&0\\0&{\frac {1}{2}}&{\frac {1}{2}}\\0&0&w\end{bmatrix}}$ and $r={\begin{bmatrix}1&1&0\\0&-1&0\\0&0&1\end{bmatrix}}$ That is, the action of a general element $\gamma $ maps the blancmange curve on the unit interval [0,1] to a sub-interval $[m/2^{p},n/2^{p}]$ for some integers m, n, p. The mapping is given exactly by $[\gamma \cdot T_{w}](x)=a+bx+cT_{w}(x)$ where the values of a, b and c can be obtained directly by multiplying out the above matrices. 
That is: $\gamma ={\begin{bmatrix}1&{\frac {m}{2^{p}}}&a\\0&{\frac {n-m}{2^{p}}}&b\\0&0&c\end{bmatrix}}$ Note that $p=a_{1}+a_{2}+\cdots +a_{n}$ is immediate. The monoid generated by g and r is sometimes called the dyadic monoid; it is a sub-monoid of the modular group. When discussing the modular group, the more common notation for g and r is T and S, but that notation conflicts with the symbols used here. The above three-dimensional representation is just one of many representations it can have; it shows that the blancmange curve is one possible realization of the action. That is, there are representations for any dimension, not just 3; some of these give the de Rham curves. Integrating the Blancmange curve Given that the integral of $\operatorname {blanc} (x)$ from 0 to 1 is 1/2, the identity $\operatorname {blanc} (x)=\operatorname {blanc} (2x)/2+s(x)$ allows the integral over any interval to be computed by the following relation. The computation is recursive with computing time on the order of log of the accuracy required. Defining $I(x)=\int _{0}^{x}\operatorname {blanc} (y)\,dy$ one has that $I(x)={\begin{cases}I(2x)/4+x^{2}/2&{\text{if }}0\leq x\leq 1/2\\1/2-I(1-x)&{\text{if }}1/2\leq x\leq 1\\n/2+I(x-n)&{\text{if }}n\leq x\leq (n+1)\\\end{cases}}$ The definite integral is given by: $\int _{a}^{b}\operatorname {blanc} (y)\,dy=I(b)-I(a).$ A more general expression can be obtained by defining $S(x)=\int _{0}^{x}s(y)dy={\begin{cases}x^{2}/2,&0\leq x\leq {\frac {1}{2}}\\-x^{2}/2+x-1/4,&{\frac {1}{2}}\leq x\leq 1\\n/4+S(x-n),&(n\leq x\leq n+1)\end{cases}}$ which, combined with the series representation, gives $I_{w}(x)=\int _{0}^{x}T_{w}(y)dy=\sum _{n=0}^{\infty }(w/2)^{n}S(2^{n}x)$ Note that $I_{w}(1)={\frac {1}{4(1-w)}}$ This integral is also self-similar on the unit interval, under an action of the dyadic monoid described in the section Self similarity. Here, the representation is 4-dimensional, having the basis $\{1,x,x^{2},I(x)\}$. 
Re-writing the above to make the action of g more clear: on the unit interval, one has $[g\cdot I_{w}](x)=I_{w}\left({\frac {x}{2}}\right)={\frac {x^{2}}{8}}+{\frac {w}{2}}I_{w}(x).$ From this, one can then immediately read off the generators of the four-dimensional representation: $g={\begin{bmatrix}1&0&0&0\\0&{\frac {1}{2}}&0&0\\0&0&{\frac {1}{4}}&{\frac {1}{8}}\\0&0&0&{\frac {w}{2}}\end{bmatrix}}$ and $r={\begin{bmatrix}1&1&1&{\frac {1}{4(1-w)}}\\0&-1&-2&0\\0&0&1&0\\0&0&0&-1\end{bmatrix}}$ Repeated integrals transform under a 5,6,... dimensional representation. Relation to simplicial complexes Let $N={\binom {n_{t}}{t}}+{\binom {n_{t-1}}{t-1}}+\ldots +{\binom {n_{j}}{j}},\quad n_{t}>n_{t-1}>\ldots >n_{j}\geq j\geq 1.$ Define the Kruskal–Katona function $\kappa _{t}(N)={n_{t} \choose t+1}+{n_{t-1} \choose t}+\dots +{n_{j} \choose j+1}.$ The Kruskal–Katona theorem states that this is the minimum number of (t − 1)-simplexes that are faces of a set of N t-simplexes. As t and N approach infinity, $\kappa _{t}(N)-N$ (suitably normalized) approaches the blancmange curve. See also • Cantor function (also known as the Devil's staircase) • Minkowski's question mark function • Weierstrass function • Dyadic transformation References • Weisstein, Eric W. "Blancmange Function". MathWorld. • Takagi, Teiji (1901), "A Simple Example of the Continuous Function without Derivative", Proc. Phys.-Math. Soc. Jpn., 1: 176–177, doi:10.11429/subutsuhokoku1901.1.F176 • Benoit Mandelbrot, "Fractal Landscapes without creases and with rivers", appearing in The Science of Fractal Images, ed. Heinz-Otto Peitgen, Dietmar Saupe; Springer-Verlag (1988) pp 243–260. • Linas Vepstas, Symmetries of Period-Doubling Maps, (2004) • Donald Knuth, The Art of Computer Programming, volume 4a. Combinatorial algorithms, part 1. ISBN 0-201-03804-8. See pages 372–375. 
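The recursive scheme for I(x) given earlier can be sketched directly. The depth cap below is an illustrative choice: each pass through the first branch shrinks the neglected remainder by a factor of 4, matching the claim that the computing time is on the order of the log of the required accuracy. The result is cross-checked against a midpoint Riemann sum of the truncated series.

```python
import math

def s(x):
    # triangle wave: distance from x to the nearest integer
    f = x % 1.0
    return min(f, 1.0 - f)

def blanc(x, terms=40):
    # truncated blancmange series (w = 1/2)
    return sum(s(2 ** n * x) / 2 ** n for n in range(terms))

def I(x, depth=80):
    # recursive evaluation of the integral of blanc over [0, x]
    if depth == 0 or x <= 0.0:
        return 0.0
    if x <= 0.5:
        return I(2.0 * x, depth - 1) / 4.0 + x * x / 2.0
    if x <= 1.0:
        return 0.5 - I(1.0 - x, depth - 1)
    n = math.floor(x)
    return n / 2.0 + I(x - n, depth - 1)

assert abs(I(1.0) - 0.5) < 1e-12    # integral over the unit interval is 1/2
# cross-check against a midpoint Riemann sum on [0, 0.3]
N = 20000
riemann = sum(blanc((k + 0.5) * (0.3 / N)) for k in range(N)) * (0.3 / N)
assert abs(I(0.3) - riemann) < 1e-3
```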
Further reading • Allaart, Pieter C.; Kawamura, Kiko (11 October 2011), The Takagi function: a survey, arXiv:1110.1691, Bibcode:2011arXiv1110.1691A • Lagarias, Jeffrey C. (17 December 2011), The Takagi Function and Its Properties, arXiv:1112.4205, Bibcode:2011arXiv1112.4205L External links • Takagi Explorer • (Some properties of the Takagi function)
Takagi existence theorem In class field theory, the Takagi existence theorem states that for any number field K there is a one-to-one inclusion reversing correspondence between the finite abelian extensions of K (in a fixed algebraic closure of K) and the generalized ideal class groups defined via a modulus of K. It is called an existence theorem because a main burden of the proof is to show the existence of enough abelian extensions of K. Formulation Here a modulus (or ray divisor) is a formal finite product of the valuations (also called primes or places) of K with positive integer exponents. The archimedean valuations that might appear in a modulus include only those whose completions are the real numbers (not the complex numbers); they may be identified with orderings on K and occur only to exponent one. The modulus m is a product of a non-archimedean (finite) part mf and an archimedean (infinite) part m∞. The non-archimedean part mf is a nonzero ideal in the ring of integers OK of K and the archimedean part m∞ is simply a set of real embeddings of K. Associated to such a modulus m are two groups of fractional ideals. The larger one, Im, is the group of all fractional ideals relatively prime to m (which means these fractional ideals do not involve any prime ideal appearing in mf). The smaller one, Pm, is the group of principal fractional ideals (u/v) where u and v are nonzero elements of OK which are prime to mf, u ≡ v mod mf, and u/v > 0 in each of the orderings of m∞. (It is important here that in Pm, all we require is that some generator of the ideal has the indicated form. If one does, others might not. For instance, taking K to be the rational numbers, the ideal (3) lies in P4 because (3) = (−3) and −3 fits the necessary conditions. But (3) is not in P4∞ since here it is required that the positive generator of the ideal is 1 mod 4, which is not so.) For any group H lying between Im and Pm, the quotient Im/H is called a generalized ideal class group. 
It is these generalized ideal class groups which correspond to abelian extensions of K by the existence theorem, and in fact are the Galois groups of these extensions. That generalized ideal class groups are finite is proved along the same lines of the proof that the usual ideal class group is finite, well in advance of knowing these are Galois groups of finite abelian extensions of the number field. A well-defined correspondence Strictly speaking, the correspondence between finite abelian extensions of K and generalized ideal class groups is not quite one-to-one. Generalized ideal class groups defined relative to different moduli can give rise to the same abelian extension of K, and this is codified a priori in a somewhat complicated equivalence relation on generalized ideal class groups. In concrete terms, for abelian extensions L of the rational numbers, this corresponds to the fact that an abelian extension of the rationals lying in one cyclotomic field also lies in infinitely many other cyclotomic fields, and for each such cyclotomic overfield one obtains by Galois theory a subgroup of the Galois group corresponding to the same field L. In the idelic formulation of class field theory, one obtains a precise one-to-one correspondence between abelian extensions and appropriate groups of ideles, where equivalent generalized ideal class groups in the ideal-theoretic language correspond to the same group of ideles. Earlier work A special case of the existence theorem is when m = 1 and H = P1. In this case the generalized ideal class group is the ideal class group of K, and the existence theorem says there exists a unique abelian extension L/K with Galois group isomorphic to the ideal class group of K such that L is unramified at all places of K. This extension is called the Hilbert class field. It was conjectured by David Hilbert to exist, and existence in this special case was proved by Furtwängler in 1907, before Takagi's general existence theorem. 
A further and special property of the Hilbert class field, not true of smaller abelian extensions of a number field, is that all ideals in a number field become principal in the Hilbert class field. It required Artin and Furtwängler to prove that principalization occurs. History The existence theorem is due to Takagi, who proved it in Japan during the isolated years of World War I. He presented it at the International Congress of Mathematicians in 1920, leading to the development of the classical theory of class field theory during the 1920s. At Hilbert's request, the paper was published in Mathematische Annalen in 1925. See also • Class formation References • Helmut Hasse, History of Class Field Theory, pp. 266–279 in Algebraic Number Theory, eds. J. W. S. Cassels and A. Fröhlich, Academic Press 1967. (See also the rich bibliography attached to Hasse's article.)
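The parenthetical example given earlier for K = Q (the ideal (3) lies in P4 but not in P4∞) can be made concrete with a tiny membership check. This is a minimal sketch assuming odd generators n, so that the ideal (n) is prime to 4; the helper names are illustrative, not standard API.

```python
def in_P4(n):
    """Does the ideal (n) of Z lie in P_m for m = (4)?  (n assumed odd.)
    Membership needs SOME generator g with g = 1 (mod 4); note (n) = (-n)."""
    return any(g % 4 == 1 for g in (n, -n))

def in_P4inf(n):
    """Same question for m = (4) * infinity: the generator must also be
    positive, because of the real place in the modulus."""
    return any(g > 0 and g % 4 == 1 for g in (n, -n))

assert in_P4(3) and not in_P4inf(3)   # the example from the text: -3 = 1 (mod 4)
assert in_P4inf(5)                     # 5 is positive and 5 = 1 (mod 4)
```

For every odd n either n or -n is 1 mod 4, so the first test always passes: without the infinite place the quotient I4/P4 is trivial, while with it the quotient has order 2, matching the quadratic ray class field Q(i) of the rationals for the modulus 4∞.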
Takao Nishizeki Takao Nishizeki (西関 隆夫, Nishizeki Takao, 1947 – 30 January 2022[1]) was a Japanese mathematician and computer scientist who specialized in graph algorithms and graph drawing. Education and career Nishizeki was born in 1947 in Fukushima, and was a student at Tohoku University, earning a bachelor's degree in 1969, a master's in 1971, and a doctorate in 1974. He continued at Tohoku as a faculty member, and became a full professor there in 1988.[2] He was the Dean of the Graduate School of Information Sciences, Tohoku University, from April 2008 to March 2010. He retired in 2010, becoming a professor emeritus at Tohoku University, but continued teaching as a professor at Kwansei Gakuin University until March 2015.[3] He was an Auditor of Japan Advanced Institute of Science and Technology from April 2016 to October 2018. Contributions Nishizeki made significant contributions to algorithms for series–parallel graphs,[4] finding cliques in sparse graphs,[5] planarity testing[6] and the secret sharing with any access structure. He is the co-author of two books on planar graphs and graph drawing.[7] In 1990, Nishizeki founded the annual International Symposium on Algorithms and Computation (ISAAC).[8] Awards and honors At the 18th ISAAC symposium, in 2007, a workshop was held to celebrate his 60th birthday.[8] In 1996, he became a life fellow of the IEEE "for contributions to graph algorithms with applications to physical design of electronic systems."[9] In 1996 he was selected as a fellow of the Association for Computing Machinery "for contributions to the design and analysis of efficient algorithms for planar graphs, network flows and VLSI routing".[10] Nishizeki was also a foreign fellow of the Bangladesh Academy of Sciences;[11] one of his students and frequent co-authors, Md. Saidur Rahman, is from Bangladesh. Selected publications Books • Nishizeki, T.; Chiba, N. (1988), Planar Graphs: Theory and Algorithms, North-Holland Mathematics Studies, vol. 
140, North-Holland, ISBN 978-0-444-70212-8, MR 0941967. • Nishizeki, Takao; Rahman, Md. Saidur (2004), Planar Graph Drawing, Lecture Notes Series on Computing, vol. 12, World Scientific, doi:10.1142/5648, ISBN 978-981-256-033-9, MR 2112244. Research articles • Takamizawa, K.; Nishizeki, T.; Saito, N. (1982), "Linear-time computability of combinatorial problems on series–parallel graphs", Journal of the ACM, 29 (3): 623–641, doi:10.1145/322326.322328, MR 0666771, S2CID 16082154. • Chiba, Norishige; Nishizeki, Takao (1985), "Arboricity and subgraph listing algorithms", SIAM Journal on Computing, 14 (1): 210–223, doi:10.1137/0214017, MR 0774940. • Chiba, Norishige; Nishizeki, Takao; Abe, Shigenobu; Ozawa, Takao (1985), "A linear algorithm for embedding planar graphs using PQ-trees", Journal of Computer and System Sciences, 30 (1): 54–76, doi:10.1016/0022-0000(85)90004-2, MR 0788831. • Ito, Mitsuru; Saito, Akira; Nishizeki, Takao (1989), "Secret sharing scheme realizing general access structure", Electronics and Communications in Japan (Part III: Fundamental Electronic Science), 72 (9): 56–64, doi:10.1002/ecjc.4430720906. References 1. Okamoto, Yoshio (1 February 2022), "Takao Nishizeki", GDNET 2. Biography, Tohoku University, retrieved 2015-03-19. 3. Faculty profile, Kwansei Gakuin University, retrieved 2015-03-19. 4. Takamizawa, Nishizeki & Saito (1982). 5. Chiba & Nishizeki (1985). 6. Chiba et al. (1985). 7. Nishizeki & Chiba (1988); Nishizeki & Rahman (2004). 8. ISAAC Day 1, Joachim Gudmundsson, dense outliers, December 21, 2007, retrieved 2015-03-19. 9. 1995 New Fellows, IEEE Japan Section, retrieved 2015-03-19. 10. ACM Fellow award citation, retrieved 2015-03-19. 11. Member profile, Bangladesh Academy of Sciences, retrieved 2015-03-20. 
External links • Takao Nishizeki publications indexed by Google Scholar
Takashi Ono (mathematician) Takashi Ono (小野 孝, Ono Takashi, born 18 December 1928) is a retired Japanese-born American mathematician, specializing in number theory and algebraic groups. Early life and education Ono was born in Nishinomiya, Japan. He received his Ph.D. in 1958 at Nagoya University.[1] Career Ono immigrated to the United States after receiving an invitation from J. Robert Oppenheimer to work at the Institute for Advanced Study with a fellowship for the two academic years from 1959 to 1961[2] and then went to the University of British Columbia to work as an assistant professor of mathematics[3] from 1961 to 1964. From 1964 to 1969 Ono was a tenured professor at the University of Pennsylvania. From 1969 to his retirement in 2011, he was a professor at Johns Hopkins University. In 1966 he was an invited speaker at the International Congress of Mathematicians in Moscow.[2] In 2012 he was elected a Fellow of the American Mathematical Society.[4] Personal life Ono's youngest son, Ken Ono, is also a mathematician[5] and professor at the University of Virginia as well as a former triathlete.[6] His middle son, Santa J. Ono, is serving as the 15th President of the University of Michigan (previously the President and Vice-Chancellor of the University of British Columbia) and is a biomedical researcher. His eldest son, Momoro Ono, is a music professor at Creighton University.[7] Selected publications • 1959: Ono, Takashi (1959). "On some arithmetic properties of linear algebraic groups". Annals of Mathematics. 70 (2): 266–290. doi:10.2307/1970104. JSTOR 1970104. • 1961: Ono, Takashi (1961). "Arithmetic of algebraic tori". Annals of Mathematics. 74 (1): 101–139. doi:10.2307/1970307. JSTOR 1970307. • 1963: Ono, Takashi (1963). "On the Tamagawa number of algebraic tori". Annals of Mathematics. 78 (1): 47–73. doi:10.2307/1970502. JSTOR 1970502. • 1964: Ono, Takashi (1964). "On the relative theory of Tamagawa numbers". Bulletin of the American Mathematical Society.
70 (2): 325–326. doi:10.1090/s0002-9904-1964-11140-x. MR 0156856. • 1965: Ono, Takashi (1965). "On the relative theory of Tamagawa numbers". Annals of Mathematics. 82 (1): 88–111. doi:10.2307/1970563. JSTOR 1970563. • 1965: Ono, Takashi (1965). "The Gauss-Bonnet theorem and the Tamagawa number". Bulletin of the American Mathematical Society. 71 (2): 345–348. doi:10.1090/s0002-9904-1965-11290-3. MR 0176986. • 1969: Ono, Takashi (1969). "On Gaussian sums". Bulletin of the American Mathematical Society. 75 (7): 43–45. doi:10.1090/s0002-9904-1969-12139-7. MR 0245547. PMC 220918. PMID 16590967. • 1969: Ono, Takashi (1966). "On algebraic groups and discontinuous groups". Nagoya Mathematical Journal. 27 (Pt 1): 279–322. doi:10.1017/S002776300001206X. MR 0199193. Zbl 0166.29802. • 1990: An Introduction to Algebraic Number Theory. Plenum Publishers., Ono, Takashi (6 December 2012). 2nd edition. ISBN 9781461305736. • 1994: Variations on a Theme of Euler: Quadratic Forms, Elliptic Curves and Hopf Maps. Plenum. 1994. ISBN 9780306447891. • 2008: Gauss sums and Poincaré sums (in Japanese). Nippon Hyoron Sha. References 1. Takashi Ono at the Mathematics Genealogy Project 2. "Ono, Takashi". IAS.edu. Institute for Advanced Study. 9 December 2019. Archived from the original on 28 September 2015. 3. "Inauguration Address | Office of the President". president.umich.edu. Archived from the original on 2023-03-08. Retrieved 2023-03-08. 4. "10 from JHU among inaugural fellows of American Mathematical Society". JHU.edu. Johns Hopkins University. 2 November 2012. Archived from the original on 28 September 2015. 5. Johnson, Mike (13 March 2007). "A flash of insight brings answers". Milwaukee Journal Sentinel. Archived from the original on November 12, 2011. Retrieved August 24, 2017. 6. Ono, Ken. "About Me". Emory.edu. Department of Mathematics, Emory University. Archived from the original on August 26, 2017. Retrieved August 24, 2017. 7. "Dr. Momoro Ono". Creighton.edu. 
Fine and Performing Arts, Creighton University. Archived from the original on August 1, 2017. Retrieved August 24, 2017. External links • Takashi Ono at Department of Mathematics, Johns Hopkins University
Takeo Wada Takeo Wada (Japanese: 和田健雄, Hepburn: Wada Takeo, 1882–1944) was a Japanese mathematician at Kyoto University working in analysis and topology. He suggested the Lakes of Wada to Kunizo Yoneyama, who wrote about them and named them after Wada. Publications • Wada, Takeo (1912), "The conception of a curve", The Memoirs of the College of Science and Engineering, Kyoto Imperial University, 3 (9): 265–275 References • Neoi, Makoto (2004), A Study on Educational Viewpoints of a Mathematician Kunizo Yoneyama (in Japanese), Tokyo: Tokai University, p. 12 • Mimura, Mamoru (1999), "The Japanese school of topology", in James, I. M. (ed.), History of topology, Amsterdam: North-Holland, pp. 863–882, doi:10.1016/B978-044482375-5/50032-8, ISBN 978-0-444-82375-5, MR 1721126
Takeuti's conjecture In mathematics, Takeuti's conjecture is the conjecture of Gaisi Takeuti that a sequent formalisation of second-order logic has cut-elimination (Takeuti 1953). It was settled positively: • By Tait, using a semantic technique for proving cut-elimination, based on work by Schütte (Tait 1966); • Independently by Prawitz (Prawitz 1968) and Takahashi (Takahashi 1967), using a similar technique, although Prawitz's and Takahashi's proofs are not limited to second-order logic but concern higher-order logics in general; • As a corollary of Jean-Yves Girard's syntactic proof of strong normalization for System F. Takeuti's conjecture is equivalent to the 1-consistency of second-order arithmetic, in the sense that each statement can be derived from the other in the weak system PRA. It is also equivalent to the strong normalization of the Girard–Reynolds System F. See also • Hilbert's second problem References • Dag Prawitz, 1968. Hauptsatz for higher order logic. J. Symb. Log., 33:452–457, 1968. • William W. Tait, 1966. A nonconstructive proof of Gentzen's Hauptsatz for second order predicate logic. In Bulletin of the American Mathematical Society, 72:980–983. • Gaisi Takeuti, 1953. On a generalized logic calculus. In Japanese Journal of Mathematics, 23:39–96. An erratum to this article was published in the same journal, 24:149–156, 1954. • Moto-o Takahashi, 1967. A proof of cut-elimination in simple type theory. In Japanese Mathematical Society, 10:44–45.
Takeuti–Feferman–Buchholz ordinal In the mathematical fields of set theory and proof theory, the Takeuti–Feferman–Buchholz ordinal (TFBO) is a large countable ordinal, which acts as the limit of the range of Buchholz's psi function and Feferman's theta function.[1][2] It was named by David Madore,[2] after Gaisi Takeuti, Solomon Feferman and Wilfried Buchholz. It is written as $\psi _{0}(\varepsilon _{\Omega _{\omega }+1})$ using Buchholz's psi function,[3] an ordinal collapsing function invented by Wilfried Buchholz,[4][5][6] and $\theta _{\varepsilon _{\Omega _{\omega }+1}}(0)$ in Feferman's theta function, an ordinal collapsing function invented by Solomon Feferman.[7][8] It is the proof-theoretic ordinal of several formal theories: • $\Pi _{1}^{1}-CA+BI$,[9] a subsystem of second-order arithmetic • $\Pi _{1}^{1}$-comprehension + transfinite induction[3] • IDω, the system of ω-times iterated inductive definitions[10] Despite being one of the largest named recursive (hence countable) ordinals, it is still vastly smaller than the proof-theoretic ordinal of ZFC.[11] Definition • Let $\Omega _{\alpha }$ represent the smallest uncountable ordinal with cardinality $\aleph _{\alpha }$. • Let $\varepsilon _{\beta }$ represent the $\beta $th epsilon number, equal to the $1+\beta $th fixed point of $\alpha \mapsto \omega ^{\alpha }$. • Let $\psi $ represent Buchholz's psi function References 1. "Buchholz's ψ functions". cantors-attic. Retrieved 2021-08-10. 2. "Buchholz's ψ functions". cantors-attic. Retrieved 2021-08-17. 3. "A Zoo of Ordinals" (PDF). Madore. 2017-07-29. Retrieved 2021-08-10. 4. "Collapsingfunktionen" (PDF). University of Munich. 1981. Retrieved 2021-08-10. 5. Buchholz, W. (1986-01-01). "A new system of proof-theoretic ordinal functions". Annals of Pure and Applied Logic. 32: 195–207. doi:10.1016/0168-0072(86)90052-7. ISSN 0168-0072. 6. Buchholz, W.; Schütte, K. (1988).
"Proof Theory of Impredicative Subsystems of Analysis". S2CID 118806161. Retrieved 2021-08-10. 7. "[PDF] Proof Theory Second Edition by Gaisi Takeuti | Perlego". www.perlego.com. Retrieved 2021-08-10. 8. Buchholz, W. (1975). "Normalfunktionen und Konstruktive Systeme von Ordinalzahlen". ISILC Proof Theory Symposion. Lecture Notes in Mathematics (in German). Vol. 500. Springer. pp. 4–25. doi:10.1007/BFb0079544. ISBN 978-3-540-07533-2. 9. Buchholz, Wilfried; Feferman, Solomon; Pohlers, Wolfram; Sieg, Wilfried (1981). Iterated Inductive Definitions and Subsystems of Analysis: Recent Proof-Theoretical Studies. Lecture Notes in Mathematics. Vol. 897. Springer-Verlag, Berlin-New York. doi:10.1007/bfb0091894. ISBN 3-540-11170-0. MR 0655036. 10. "ordinal analysis in nLab". ncatlab.org. Retrieved 2021-08-28. 11. "number theory - Can PA prove very fast growing functions to be total?". Mathematics Stack Exchange. Retrieved 2021-08-17.
Taking Sudoku Seriously Taking Sudoku Seriously: The math behind the world's most popular pencil puzzle is a book on the mathematics of Sudoku. It was written by Jason Rosenhouse and Laura Taalman, and published in 2011 by Oxford University Press. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.[1] It was the 2012 winner of the PROSE Awards in the popular science and popular mathematics category.[2] Authors: Jason Rosenhouse and Laura Taalman. Subject: Mathematics–Social aspects, Sudoku. Publisher: Oxford University Press. Publication date: 2011. Pages: 214. Awards: PROSE, Popular science & popular mathematics. ISBN 978-0-19-991315-2. OCLC 774293834. Dewey Decimal: 793.74. LC Class: GV1507.S83. Topics The book is centered around Sudoku puzzles, using them as a jumping-off point "to discuss a broad spectrum of topics in mathematics".[1] In many cases these topics are presented through simplified examples which can be understood by hand calculation before extending them to Sudoku itself using computers.[3] The book also includes discussions on the nature of mathematics and the use of computers in mathematics.[4] After an introductory chapter on Sudoku and its deductive puzzle-solving techniques[1] (also touching on Euler tours and Hamiltonian cycles),[5] the book has eight more chapters and an epilogue. Chapters two and three discuss Latin squares, the thirty-six officers problem, Leonhard Euler's incorrect conjecture on Graeco-Latin squares, and related topics.[1][4] Here, a Latin square is a grid of numbers with the same property as a Sudoku puzzle's solution of having each number appear once in each row and once in each column.
They can be traced back to mathematics in medieval Islam, were studied recreationally by Benjamin Franklin, and have seen more serious application in the design of experiments and in error correction codes.[6] Sudoku puzzles also constrain square blocks of cells to contain each number once, making a restricted type of Latin square called a gerechte design.[1] Chapters four and five concern the combinatorial enumeration of completed Sudoku puzzles, before and after factoring out the symmetries and equivalence classes of these puzzles using Burnside's lemma in group theory. Chapter six looks at combinatorial search techniques for finding small systems of givens that uniquely define a puzzle solution; soon after the book's publication, these methods were used to show that the minimum possible number of givens is 17.[1][4][5] The next two chapters look at two different mathematical formalizations of the problem of going from a Sudoku problem to its solution, one involving graph coloring (more precisely, precoloring extension of the Sudoku graph) and another involving using the Gröbner basis method to solve systems of polynomial equations. 
The final chapter studies questions in extremal combinatorics motivated by Sudoku, and (although 76 Sudoku puzzles of various types are scattered throughout the earlier chapters) the epilogue presents a collection of 20 additional puzzles, in advanced variations of Sudoku.[1][4] Audience and reception This book is intended for a general audience interested in recreational mathematics,[7] including mathematically inclined high school students.[4] It is intended to counter the widespread misimpression that Sudoku is not mathematical,[5][6][8] and could help students appreciate the distinction between mathematical reasoning and rote calculation.[4][5][7] Reviewer Mark Hunacek writes that "a person with very limited background in mathematics, or a person without much experience solving Sudoku puzzles, could still find something of interest here".[1] It can also be used by professional mathematicians, for instance in setting research projects for students.[7] It is unlikely to improve Sudoku puzzle-solving skills, but Keith Devlin writes that Sudoku players can still gain "a deeper appreciation for the puzzle they love".[6] However, reviewer Nicola Tilt is unsure of the book's audience, writing that "the content may be deemed a little simplistic for mathematicians, and a little too diverse for real puzzle enthusiasts".[8] Reviewer David Bevan calls the book "beautifully produced", "well written", and "highly recommended".[4] Reviewer Mark Hunacek calls it "a delightful book which I thoroughly enjoyed reading".[1] And (despite complaining that the section on graph coloring is "abstract and demanding" and overly US-centric in its approach), reviewer Donald Keedwell writes "This well-written book would be of interest to anyone, mathematician or not, who likes solving Sudoku puzzles."[5] References 1. Hunacek, Mark (January 2012), "Review of Taking Sudoku Seriously", MAA Reviews, Mathematical Association of America 2. 
"2012 Award Winners", PROSE Awards, Association of American Publishers, retrieved 2018-05-14 3. Hösli, Hansueli, "Review of Taking Sudoku Seriously", zbMATH, Zbl 1239.00014 4. Bevan, David (November 2013), "Review of Taking Sudoku Seriously", The Mathematical Gazette, 97 (540): 574–575, doi:10.1017/S0025557200000589, JSTOR 24496749 5. Keedwell, Donald (February 2018), "Review of Taking Sudoku Seriously", The Mathematical Gazette, 102 (553): 186–187, doi:10.1017/mag.2018.39 6. Devlin, Keith (January 28, 2012), "The numbers game (review of Taking Sudoku Seriously)", The Wall Street Journal 7. Li, Aihua, "Review of Taking Sudoku Seriously", Mathematical Reviews, MR 2859240 8. Tilt, Nicola (February 2013), "Review of Taking Sudoku Seriously", Significance, Royal Statistical Society, 10 (1): 43, doi:10.1111/j.1740-9713.2013.00640.x
Equating coefficients In mathematics, the method of equating the coefficients is a way of solving a functional equation between two expressions, such as polynomials, for a number of unknown parameters. It relies on the fact that two expressions are identical precisely when corresponding coefficients are equal for each different type of term. The method is used to bring formulas into a desired form. Example in real fractions Suppose we want to apply partial fraction decomposition to the expression: ${\frac {1}{x(x-1)(x-2)}},\,$ that is, we want to bring it into the form: ${\frac {A}{x}}+{\frac {B}{x-1}}+{\frac {C}{x-2}},\,$ in which the unknown parameters are A, B and C. Multiplying these formulas by x(x − 1)(x − 2) turns both into polynomials, which we equate: $A(x-1)(x-2)+Bx(x-2)+Cx(x-1)=1,\,$ or, after expansion and collecting terms with equal powers of x: $(A+B+C)x^{2}-(3A+2B+C)x+2A=1.\,$ At this point it is essential to realize that the polynomial 1 is in fact equal to the polynomial $0x^{2}+0x+1$, having zero coefficients for the positive powers of x.
Equating the corresponding coefficients now results in this system of linear equations: $A+B+C=0,\,$ $3A+2B+C=0,\,$ $2A=1.\,$ Solving it results in: $A={\frac {1}{2}},\,B=-1,\,C={\frac {1}{2}}.\,$ Example in nested radicals A similar problem, involving equating like terms rather than coefficients of like terms, arises if we wish to de-nest the nested radical ${\sqrt {a+b{\sqrt {c}}\ }}$. To obtain an equivalent expression not involving a square root of an expression itself involving a square root, we can postulate the existence of rational parameters d, e such that ${\sqrt {a+b{\sqrt {c}}\ }}={\sqrt {d}}+{\sqrt {e}}.$ Squaring both sides of this equation yields: $a+b{\sqrt {c}}=d+e+2{\sqrt {de}}.$ To find d and e we equate the terms not involving square roots, so $a=d+e,$ and equate the parts involving radicals, so $b{\sqrt {c}}=2{\sqrt {de}},$ which, when squared, implies $b^{2}c=4de.$ This gives us two equations, one quadratic and one linear, in the desired parameters d and e, and these can be solved to obtain $e={\frac {a+{\sqrt {a^{2}-b^{2}c}}}{2}},$ $d={\frac {a-{\sqrt {a^{2}-b^{2}c}}}{2}},$ which is a valid solution pair if and only if ${\sqrt {a^{2}-b^{2}c}}$ is a rational number. Example of testing for linear dependence of equations Consider this overdetermined system of equations (with 3 equations in just 2 unknowns): $x-2y+1=0,$ $3x+5y-8=0,$ $4x+3y-7=0.$ To test whether the third equation is linearly dependent on the first two, postulate two parameters a and b such that a times the first equation plus b times the second equation equals the third equation.
Since this always holds for the right sides, all of which are 0, we merely need to require it to hold for the left sides as well: $a(x-2y+1)+b(3x+5y-8)=4x+3y-7.$ Equating the coefficients of x on both sides, equating the coefficients of y on both sides, and equating the constants on both sides gives the following system in the desired parameters a, b: $a+3b=4,$ $-2a+5b=3,$ $a-8b=-7.$ Solving it gives: $a=1,\ b=1.$ The unique pair of values a, b satisfying the first two equations is (a, b) = (1, 1); since these values also satisfy the third equation, there do in fact exist a, b such that a times the original first equation plus b times the original second equation equals the original third equation; we conclude that the third equation is linearly dependent on the first two. Note that if the constant term in the original third equation had been anything other than –7, the values (a, b) = (1, 1) that satisfied the first two equations in the parameters would not have satisfied the third one (a – 8b = constant), so there would exist no a, b satisfying all three equations in the parameters, and therefore the third original equation would be independent of the first two. Example in complex numbers The method of equating coefficients is often used when dealing with complex numbers. For example, to divide the complex number a+bi by the complex number c+di, we postulate that the ratio equals the complex number e+fi, and we wish to find the values of the parameters e and f for which this is true. We write ${\frac {a+bi}{c+di}}=e+fi,$ and multiply both sides by the denominator to obtain $(ce-fd)+(ed+cf)i=a+bi.$ Equating real terms gives $ce-fd=a,$ and equating coefficients of the imaginary unit i gives $ed+cf=b.$ These are two equations in the unknown parameters e and f, and they can be solved to obtain the desired coefficients of the quotient: $e={\frac {ac+bd}{c^{2}+d^{2}}}\quad \quad {\text{and}}\quad \quad f={\frac {bc-ad}{c^{2}+d^{2}}}.$ References • Tanton, James (2005).
Encyclopedia of Mathematics. Facts on File. p. 162. ISBN 0-8160-5124-0.
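The coefficient-matching in the worked examples above can be checked mechanically. The following Python sketch is illustrative only (it is not part of the article, and the function names `lhs`, `rhs`, and `divide` are made up for this check); it verifies the partial-fraction, linear-dependence, and complex-division results using exact rational arithmetic where possible:

```python
from fractions import Fraction

# --- Partial fractions: 1/(x(x-1)(x-2)) = A/x + B/(x-1) + C/(x-2) ---
# The system obtained by equating coefficients: A+B+C=0, 3A+2B+C=0, 2A=1.
A, B, C = Fraction(1, 2), Fraction(-1), Fraction(1, 2)
assert A + B + C == 0 and 3 * A + 2 * B + C == 0 and 2 * A == 1

def lhs(x):
    # the original rational expression
    return Fraction(1, x * (x - 1) * (x - 2))

def rhs(x):
    # the decomposed form with the solved parameters
    return A / x + B / (x - 1) + C / (x - 2)

# the two expressions agree at sample points where both are defined
assert all(lhs(x) == rhs(x) for x in (3, 4, 7, -5))

# --- Linear dependence: 1*(x-2y+1) + 1*(3x+5y-8) = 4x+3y-7 ---
a, b = 1, 1
assert (a + 3 * b, -2 * a + 5 * b, a - 8 * b) == (4, 3, -7)

# --- Complex division: e = (ac+bd)/(c^2+d^2), f = (bc-ad)/(c^2+d^2) ---
def divide(a, b, c, d):
    denom = c * c + d * d
    return (a * c + b * d) / denom, (b * c - a * d) / denom

e, f = divide(3, 2, 1, -4)  # (3+2i)/(1-4i)
assert abs(complex(e, f) - complex(3, 2) / complex(1, -4)) < 1e-12
```

Exact `Fraction` arithmetic is used for the partial-fraction check so that agreement at a few sample points is a genuine identity test for these low-degree rational functions, rather than a floating-point approximation.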
Takurō Mochizuki Takurō Mochizuki (望月 拓郎, born 28 August 1972) is a Japanese mathematician at Kyoto University. Overview As a student at Kyoto University in 1994, Mochizuki left his undergraduate studies early to become a graduate student in mathematics at the same university. He completed his Ph.D. in 1999, and joined the faculty of Osaka City University, returning to Kyoto in 2004.[1] He was awarded the Japan Academy Prize in 2011 for his research on D-modules in algebraic analysis.[1][2] In 2014 he was a plenary speaker at the International Congress of Mathematicians.[3] Mochizuki was awarded the 2022 Breakthrough Prize in Mathematics for his work on "the theory of bundles with flat connections over algebraic varieties".[4] References 1. Professor Emeritus Masahiro Shogaito and Associate Professor Takuro Mochizuki of the Research Institute for Mathematical Sciences Receive the Japan Academy Prize, Kyoto University, April 12, 2011, retrieved 2015-08-01. 2. Japan Academy Prize to: Takuro Mochizuki (PDF), Japan Academy, retrieved 2015-08-01. 3. "Schedule of Plenary Lectures", Seoul ICM 2014, archived from the original on 2015-07-16, retrieved 2015-08-01. 4. "Winners of the 2022 Breakthrough Prizes in life sciences, fundamental physics and mathematics announced". Retrieved 9 September 2020. Further reading • Sabbah, Claude (January 2012), "Théorie de Hodge et correspondance de Hitchin-Kobayashi sauvages (d'après T. Mochizuki)" (PDF), Séminaire Bourbaki (in French), 1050: 1–36, MR 3087348. Also in Astérisque No. 352 (2013), Exp. No. 1050, viii, 205–241, ISBN 978-2-85629-371-3. External links • Home page
Talagrand's concentration inequality In probability theory, a field of mathematics, Talagrand's concentration inequality is an isoperimetric-type inequality for product probability spaces.[1][2] It was first proved by the French mathematician Michel Talagrand.[3] The inequality is one of the manifestations of the concentration of measure phenomenon.[2] Statement The inequality states that if $\Omega =\Omega _{1}\times \Omega _{2}\times \cdots \times \Omega _{n}$ is a product space endowed with a product probability measure and $A$ is a subset in this space, then for any $t\geq 0$ $\Pr[A]\cdot \Pr \left[{A_{t}^{c}}\right]\leq e^{-t^{2}/4}\,,$ where ${A_{t}^{c}}$ is the complement of $A_{t}$, which is defined by $A_{t}=\{x\in \Omega ~:~\rho (A,x)\leq t\}$ and where $\rho $ is Talagrand's convex distance defined as $\rho (A,x)=\max _{\alpha ,\|\alpha \|_{2}\leq 1}\ \min _{y\in A}\ \sum _{i~:~x_{i}\neq y_{i}}\alpha _{i}$ where $\alpha \in \mathbf {R} ^{n}$, $x,y\in \Omega $ are $n$-dimensional vectors with entries $\alpha _{i},x_{i},y_{i}$ respectively and $\|\cdot \|_{2}$ is the $\ell ^{2}$-norm. That is, $\|\alpha \|_{2}=\left(\sum _{i}\alpha _{i}^{2}\right)^{1/2}.$ References 1. Alon, Noga; Spencer, Joel H. (2000). The Probabilistic Method (2nd ed.). John Wiley & Sons, Inc. ISBN 0-471-37046-0. 2. Ledoux, Michel (2001). The Concentration of Measure Phenomenon. American Mathematical Society. ISBN 0-8218-2864-9. 3. Talagrand, Michel (1995). "Concentration of measure and isoperimetric inequalities in product spaces". Publications Mathématiques de l'IHÉS. Springer-Verlag. 81: 73–205. arXiv:math/9406212. doi:10.1007/BF02699376. ISSN 0073-8301. S2CID 119668709.
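A standard consequence, not part of the statement above but found in textbook treatments such as Alon and Spencer, relates the convex distance to the Hamming distance $d_{H}$ and may clarify the strength of the inequality. The sketch below takes the particular admissible vector with all entries equal in the maximum defining $\rho $:

```latex
% Sketch (a standard corollary, stated here as an illustration, not as part of the
% article's theorem). With \alpha = (1/\sqrt{n}, \dots, 1/\sqrt{n}), which satisfies
% \|\alpha\|_2 = 1, the maximum defining \rho is bounded below:
\rho(A,x) \;\ge\; \min_{y \in A}\ \sum_{i \,:\, x_i \neq y_i} \frac{1}{\sqrt{n}}
          \;=\; \frac{d_H(x,A)}{\sqrt{n}} .
% Hence A_t \subseteq \{ x : d_H(x,A) \le t\sqrt{n} \}, and substituting t = s/\sqrt{n}
% into Talagrand's inequality gives, for every s \ge 0,
\Pr[A] \cdot \Pr\bigl[\, d_H(x,A) > s \,\bigr] \;\le\; e^{-s^{2}/(4n)} .
```

So if $A$ has probability at least $1/2$, almost all points of the product space lie within Hamming distance $O({\sqrt {n}})$ of $A$.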
Talitha Washington Talitha Washington (born 1974) is an American mathematician and academic who specializes in applied mathematics and STEM education policy.[1] She was recognized by Mathematically Gifted & Black as a Black History Month 2018 Honoree.[1] Talitha Washington (photo: Washington in 2003). Born: 1974, Frankfort, Indiana, U.S. Alma mater: Spelman College (BS); University of Connecticut (MS, PhD). Thesis: Mathematical Model of Proteins Acting as On/Off Switches. Doctoral advisor: Yung-Sze Choi. Sub-disciplines: nonstandard finite difference schemes, population models, the Black–Scholes equation, one-dimensional systems. Institutions: Duke University (2001–2003), College of New Rochelle (2003–2005), University of Evansville (2005–2011), Howard University (2011–2020), National Science Foundation, Atlanta University Center Consortium (AUCC) (2020–present). Education and career Washington was born in Frankfort, Indiana, and adopted at a young age by Ruthanne and Walter Wangerin.[2][3] She was raised in Evansville, Indiana, and attended Benjamin Bosse High School.[4] After serving in Costa Rica with the American Field Service, she earned a Bachelor of Science in Mathematics from Spelman College in 1996.[5] She then attended the University of Connecticut, earning a master's degree in 1998 and a Ph.D. in 2001. Her doctoral thesis was Mathematical Model of Proteins Acting as On/Off Switches, under the supervision of Yung-Sze Choi.[6] Washington served on the faculty of Duke University from 2001 to 2003, the College of New Rochelle from 2003 to 2005, the University of Evansville from 2005 to 2011, and Howard University, starting in 2011, where she became associate professor of mathematics.[1] After a few years on leave as a program director at the National Science Foundation,[7] in 2020 she became the inaugural director of the Atlanta University Center Consortium (AUCC) Data Science Initiative.
In 2022 she became president-elect of the Association for Women in Mathematics and assumes the presidency on February 1, 2023.[8][9] Washington's research interests include nonstandard finite difference (NSFD) schemes for certain systems of differential equations, including population models, one-dimensional systems, and the Black–Scholes equation.[1][10] Education policy and awards Washington is active in education policy, especially best practices on achieving racial and ethnic diversity in STEM.[11] At the NSF, she has served as co-Lead of the Hispanic-Serving Institutions Program and is a graduate of the STEM diversity organization SACNAS.[12] She serves on the Council of the American Mathematical Society and served on the Executive Committee of the Association for Women in Mathematics (AWM).[13] Washington helped to champion the once-forgotten Evansville mathematician Elbert Frank Cox (1895–1969), from her hometown of Evansville,[1] leading to the November 2006 unveiling of a plaque honoring the longtime Howard professor as the first African-American scholar to earn a doctorate in mathematics.[14] She received the 2019 Black Engineer of the Year Awards STEM Innovator Award.[15] Talitha Washington was named a Fellow of the American Mathematical Society, Class of 2021. Her citation read "For contributions to broadening the participation of underrepresented groups, and service to the mathematical profession".[16] Washington was also named a Fellow of the AWM, Class of 2021, "for her dedication to raise awareness of African American women in STEM; for her lifelong promotion of Historically Black Colleges and Universities; and for her unwavering dedication to the National Association of Mathematicians."[17] On February 1, 2022 she became president-elect of the AWM.[9] She was elected to the 2022 class of Fellows of the American Association for the Advancement of Science (AAAS).[18] Selected publications Applied mathematics • R. E. Mickens and T. M. Washington. 
"A Note on Exact Finite Difference Schemes for the Differential Equations Satisfied by the Jacobi Cosine and Sine Functions" Journal of Difference Equations and Applications, Vol. 19, Iss. 2 (2013), pp. 1042–1047. • T. M. Washington. NSFD "Representations for Polynomial Terms Appearing in the Potential Functions of 1-Dimensional Conservative Systems" Computers & Mathematics with Applications, Vol. 66, Iss. 11 (2013), pp. 2251–2258. • R. E. Mickens and T. M. Washington. NSFD "Discretizations of Interacting Population Models Satisfying Conservation Laws" Computers & Mathematics with Applications, Vol. 66, Iss. 11 (2013), pp. 2307–2316. • E. H. Goins and T. M. Washington. "On the Generalized Climbing Stairs Problem" Ars Combinatoria, Vol. 117 (2014), pp. 183–190. • R. E. Mickens, J. Munyakazi and T. M. Washington. "A Note on the Exact Discretization for a Cauchy-Euler ODE: Application to the Black-Scholes Equation" Journal of Difference Equations and Applications, Vol. 21, Iss. 7 (2015), pp. 547–552. • O. Adekanye and T. Washington. "Numerical Comparison of Nonstandard Schemes for the Airy Equation" International Journal of Applied Mathematical Research, Vol. 6, Iss. 4 (2017), pp. 141–146. • O. Adekanye and T. Washington. "Nonstandard Finite Difference Scheme for a Tacoma Narrows Bridge Model" Applied Mathematical Modelling, Accepted May 21, 2018. STEM education policy • T. M. Washington. "Evansville Honors the First Black Ph.D. in Mathematics and His Family" The Notices of the American Mathematical Society, Vol. 55, Iss. 5 (2008), pp. 588–589. • M. J. Wolyniak, C. J. Alvarez, V. Chandrasekaran, T. M. Grana, A. Holgado, C. J. Jones, R. W. Morris, A. L. Pereira, J. Stamm, T. M. Washington, and Y. Yang. "Building Better Scientists Through Cross-disciplinary Collaboration in Synthetic Biology: A Meeting Report from the Genome Consortium for Active Teaching Workshop 2010" CBE-Life Sciences Education, Vol. 9, No. 4 (2010), pp. 399–404. • R. De Veaux, M. Agarwal, M. 
Averett, B. Baumer, A. Bray, T. Bressoud, L. Bryant, L. Cheng, A. Francis, R. Gould, A. Y. Kim, M. Kretchmar, Q. Lu, A. Moskol, D. Nolan, R. Pelayo, S. Raleigh, R. J. Sethi, M. Sondjaja, N. Tiruviluamala, P. Uhlig, T. Washington, C. Wesley, D. White, P. Ye. "Curriculum Guidelines for Undergraduate Programs in Data Science" Annual Review of Statistics and Its Application, Vol. 4 (2017), pp. 2.1-2.16. • V. R. Morris and T. Washington. "The Role of Professional Societies in STEM Diversity" Journal of the National Technical Association, Vol. 87, Iss. 1 (2017), pp. 22–31. • T. Washington. "Behind Every Successful Woman, There are a Few Good Men" The Notices of the AMS, Vol. 65, Iss. 02 (2018), pp. 132–134. References 1. Black History Month 2018 Honoree: Talitha M. Washington mathematicallygiftedandblack.com 2. "The Rev. Walter Martin Wangerin Jr. Obituary (1944 - 2021) The Times". Legacy.com. Retrieved 2021-11-05. 3. "Talitha Washington - Biography". Maths History. Retrieved 2021-11-05. 4. "Talitha Washington at Howard University until Fall 2012". AceNotes Today. University of Evansville. May 17, 2011. Retrieved October 28, 2019. 5. O'Connor, John J.; Robertson, Edmund F., "Talitha Washington", MacTutor History of Mathematics Archive, University of St Andrews 6. Talitha Washington at the Mathematics Genealogy Project 7. "Talitha M. Washington | NSF - National Science Foundation". nsf.gov. Retrieved 2019-01-26. 8. "Talitha Washington Elected President of the Association for Women in Mathematics". www.pr.com. Retrieved January 7, 2023. 9. Taylor, Tommy Jr. "Talitha Washington Elected President of the Association for Women in Mathematics | The AUC Data Science". Retrieved 2022-02-06. 10. Finite Difference Schemes that Achieve Dynamical Consistency for Population Models Thirteenth Virginia L. Chatelain Memorial Lecture presented by Talitha Washington at Kansas State University on November 9, 2017 11. 
Mathematics Professor Talitha Washington Receives Prestigious NSF DUE Appointment Howard University, College of Arts and Sciences 12. Talitha Washington's homepage 13. Hidden no more: Dr. Talitha Washington by Lango Deen, US Black Engineer Information Technology Magazine, January 9, 2018 14. Evansville Honors the First Black Ph.D. in Mathematics and His Family by Talitha M. Washington 15. Howard University Professor receives Innovation Award US Black Engineer Information Technology Magazine, January 31, 2019 16. "Class of 2021 Fellows". American Mathematical Society. Retrieved 2 November 2020. 17. "The AWM Fellows Program: 2021 Class of AWM Fellows". Association for Women in Mathematics. Retrieved 7 December 2020. 18. "2022 AAAS Fellows | American Association for the Advancement of Science (AAAS)". www.aaas.org. Retrieved 2023-03-15. External links • Talitha Washington at the Mathematics Genealogy Project • Talitha Washington homepage
Talithia Williams Talithia D. Williams is an American statistician and mathematician at Harvey Mudd College who researches the spatiotemporal structure of data.[1][2] She was the first black woman to achieve tenure at Harvey Mudd College.[2] Williams is an advocate for engaging more African Americans in engineering and science.[3] Nationality: American. Alma mater: Spelman College; Howard University; Rice University. Known for: spatial–temporal modeling of rainfall data. Fields: statistics. Institutions: Harvey Mudd College. Thesis: Real-time estimation of rainfall: A dynamic spatio-temporal model (2008). Doctoral advisor: Katherine Bennett Ensor. Education Her educational background includes a bachelor's degree in Mathematics from Spelman College, Master's degrees in Mathematics from Howard University and in Statistics from Rice University, and a Ph.D. in Statistics from Rice University.[4] Dr. Williams was in one of the first EDGE cohorts.[5] Career and research Williams has worked at the Jet Propulsion Laboratory (JPL), the National Security Agency (NSA), and NASA.[1][6] She is an associate professor of mathematics and also serves as Associate Dean for Research and Experiential Learning at Harvey Mudd College.[7][1][6] She is Secretary and Treasurer for the EDGE Foundation, which sponsors summer programs for women, and on the boards of the MAA and SACNAS.[1] Williams has done significant outreach, with the goal of bringing mathematics to life and "rebranding the field of mathematics as anything but dry, technical or male-dominated but instead a logical, productive career path that is crucial to the future of the country."[4][8] Williams has developed statistical models focused on understanding the structure of spatiotemporal data, with environmental applications.[1][9] She has partnered with the World Health Organization in developing a cataract model used to predict the cataract surgical rate for countries in Africa.[9] Williams was a host of
the six part PBS series NOVA Wonders in April 2018.[10] She is the author of the book Power in Numbers: The Rebel Women of Mathematics (Race Point Publishing, 2018).[11][12] Williams was the narrator for the five-part PBS series NOVA Universe Revealed in November 2021.[13] TED talk In 2014, Williams gave a highly viewed TED talk titled "Own Your Body's Data", discussing the potential insights to be gained from collecting personal health data.[2] Honors In 2015 Williams received the MAA Henry L. Alder Award for exemplary teaching by an early career mathematics professor.[14] Williams was honored by the Association for Women in Mathematics and the Mathematical Association of America, when they selected her to be the AWM/MAA Falconer Lecturer at MathFest 2017 in Chicago, IL.[15] The title of her talk is "Not So Hidden Figures: Unveiling Mathematical Talent." Williams was also recognized by Mathematically Gifted & Black as a Black History Month 2017 Honoree.[16] She received the 2022 Joint Policy Board for Mathematics Communication Award "for bringing mathematics and statistics into the homes of millions through her work as a TV host, renowned speaker, and author."[17][18] References 1. "Talithia Williams, Harvey Mudd College - AWM Association for Women in Mathematics". sites.google.com. Retrieved 2017-04-08. 2. Paoletta, Rae. "These Black Female Mathematicians Should Be Stars in the Blockbusters of Tomorrow". Gizmodo. Retrieved 2017-04-08. 3. Klawe, Maria. "Increasing Education Opportunities For Minorities In STEM". Forbes. Retrieved 2017-04-08. 4. "Talithia Williams : Harvey Mudd College". www.math.hmc.edu. Retrieved 2017-04-08. 5. "EDGE: A Program for Women in Mathematics - THE EDGE PROGRAM". THE EDGE PROGRAM. Retrieved 2017-04-08. 6. Williams, Talithia. "Talithia Williams | Speaker | TED.com". Retrieved 2017-04-08. 7. "Mathematics Faculty". Harvey Mudd Department of Mathematics. Retrieved 20 August 2018. 8. 
"Talithia Williams | Book for Speaking, Events and Appearances". www.apbspeakers.com. 2015-12-16. Retrieved 2017-04-08. 9. "Mosaic: Talithia Williams - Mackinac Gazette - Grand Valley State University". www.gvsu.edu. Retrieved 2017-04-08. 10. "Meet Talithia Williams". NOVA Wonders. Retrieved 20 August 2018. 11. Reviews of Power in Numbers: The Rebel Women of Mathematics: • Ackerberg-Hastings, Amy. Mathematical Reviews. MR 3929685. • Stenger, Allen (August 2018). "Review". MAA Reviews. • Schaefer, Jennifer (December 2018). "Power in Numbers: The Rebel Women of Mathematics". Math Horizons. 26 (3): 29. doi:10.1080/10724117.2018.1547039. S2CID 127006558. • Mihai, L. Angela (2019). "Review". London Mathematical Society Newsletter. 485: 49–50. • Lawrence, Emille Davie (February 2019). "Review" (PDF). Notices of the American Mathematical Society. 66 (2): 251–253. doi:10.1090/noti1800. • Cabrera Arnau, Carmen; Kalaydzhieva, Nikoleta (March 2019). "Review". Chalkdust. 12. "Williams' Book Highlights Female Mathematicians". Harvey Mudd College News. June 11, 2018. 13. "NOVA Universe Revealed". NOVA. Retrieved 25 November 2021. 14. "Henry L. Adler Award". Mathematical Association of America. Retrieved 10 April 2017. 15. "Invited Lectures at MathFest 2017". Mathematical Association of America. Retrieved 10 April 2017. 16. "Talithia Williams". Mathematically Gifted & Black. 17. "Statistician Talithia Williams is the 2022 JPBM Communications Award Recipient". SIAM News. Retrieved 2021-09-13. 18. Meetings (JMM), Joint Mathematics. "Joint Mathematics Meetings". Joint Mathematics Meetings. Retrieved 2022-06-19.
Wikipedia
Tall cardinal In mathematics, a tall cardinal is a large cardinal κ that is θ-tall for all ordinals θ, where a cardinal κ is called θ-tall if there is an elementary embedding j : V → M with critical point κ such that j(κ) > θ and $M^{\kappa }\subseteq M$. Tall cardinals are equiconsistent with strong cardinals. References • Hamkins, Joel David (2009), "Tall cardinals", Mathematical Logic Quarterly, 55 (1): 68–86, doi:10.1002/malq.200710084, ISSN 0942-5616, MR 2489293, S2CID 19062078
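In symbols, the prose definition reads as follows (with $\operatorname{crit}(j)$ the critical point of the embedding and $M^{\kappa}\subseteq M$ meaning that M is closed under κ-sequences):

```latex
\kappa \text{ is } \theta\text{-tall}
  \;\iff\;
  \exists\, j \colon V \to M \text{ elementary, }\
  \operatorname{crit}(j) = \kappa,\;
  j(\kappa) > \theta,\;
  M^{\kappa} \subseteq M .
```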
Tamagawa number In mathematics, the Tamagawa number $\tau (G)$ of a semisimple algebraic group defined over a global field k is the measure of $G(\mathbb {A} )/G(k)$, where $\mathbb {A} $ is the adele ring of k. Tamagawa numbers were introduced by Tamagawa (1966), and named after him by Weil (1959). Tsuneo Tamagawa's observation was that, starting from an invariant differential form ω on G, defined over k, the measure involved was well-defined: while ω could be replaced by cω with c a non-zero element of $k$, the product formula for valuations in k is reflected by the independence from c of the measure of the quotient, for the product measure constructed from ω on each effective factor. The computation of Tamagawa numbers for semisimple groups contains important parts of classical quadratic form theory. Definition Let k be a global field, A its ring of adeles, and G a semisimple algebraic group defined over k. Choose Haar measures on the completions kv of k such that Ov has volume 1 for all but finitely many places v. These then induce a Haar measure on A, which we further assume is normalized so that A/k has volume 1 with respect to the induced quotient measure. The Tamagawa measure on the adelic algebraic group G(A) is now defined as follows. Take a left-invariant n-form ω on G defined over k, where n is the dimension of G. This, together with the above choices of Haar measure on the kv, induces Haar measures on G(kv) for all places v. As G is semisimple, the product of these measures yields a Haar measure on G(A), called the Tamagawa measure. The Tamagawa measure does not depend on the choice of ω, nor on the choice of measures on the kv, because multiplying ω by an element of k* multiplies the Haar measure on G(A) by 1, using the product formula for valuations. The Tamagawa number τ(G) is defined to be the Tamagawa measure of G(A)/G(k).
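The independence from ω asserted above can be made explicit (a routine verification, not quoted from the references): rescaling the form by $c \in k^{\times}$ multiplies each local measure by $|c|_v^{n}$, and the product formula collapses the global factor to 1:

```latex
\omega \;\longmapsto\; c\,\omega
  \quad\Longrightarrow\quad
  \mu_v \;\longmapsto\; |c|_v^{\,n}\,\mu_v \ \text{ on } G(k_v),
\qquad
  \prod_v |c|_v^{\,n} \;=\; \Bigl(\prod_v |c|_v\Bigr)^{\!n} \;=\; 1,
```

since the product formula gives $\prod_v |c|_v = 1$ for every $c \in k^{\times}$; hence the Tamagawa measure on $G(\mathbb {A} )$, and with it $\tau (G)$, is well defined.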
Weil's conjecture on Tamagawa numbers See also: Weil conjecture on Tamagawa numbers Weil's conjecture on Tamagawa numbers states that the Tamagawa number τ(G) of a simply connected (i.e. not having a proper algebraic covering) simple algebraic group defined over a number field is 1. Weil (1959) calculated the Tamagawa number in many cases of classical groups and observed that it is an integer in all considered cases and that it was equal to 1 in the cases when the group is simply connected. Ono (1963) found examples where the Tamagawa numbers are not integers, but the conjecture about the Tamagawa number of simply connected groups was proven in general by several works culminating in a paper by Kottwitz (1988) and for the analogue over function fields over finite fields by Lurie and Gaitsgory in 2011.[1] See also • Adelic algebraic group References • "Tamagawa number", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Kottwitz, Robert E. (1988), "Tamagawa numbers", Ann. of Math., 2, Annals of Mathematics, 127 (3): 629–646, doi:10.2307/2007007, JSTOR 2007007, MR 0942522. • Ono, Takashi (1963), "On the Tamagawa number of algebraic tori", Annals of Mathematics, Second Series, 78 (1): 47–73, doi:10.2307/1970502, ISSN 0003-486X, JSTOR 1970502, MR 0156851 • Ono, Takashi (1965), "On the relative theory of Tamagawa numbers", Annals of Mathematics, Second Series, 82 (1): 88–111, doi:10.2307/1970563, ISSN 0003-486X, JSTOR 1970563, MR 0177991 • Tamagawa, Tsuneo (1966), "Adèles", Algebraic Groups and Discontinuous Subgroups, Proc. Sympos. Pure Math., vol. IX, Providence, R.I.: American Mathematical Society, pp. 113–121, MR 0212025 • Weil, André (1959), Exp. No. 186, Adèles et groupes algébriques, Séminaire Bourbaki, vol. 5, pp. 249–257 • Weil, André (1982) [1961], Adeles and algebraic groups, Progress in Mathematics, vol. 
23, Boston, MA: Birkhäuser Boston, ISBN 978-3-7643-3092-7, MR 0670072 • Lurie, Jacob (2014), Tamagawa Numbers via Nonabelian Poincaré Duality Further reading • Aravind Asok, Brent Doran and Frances Kirwan, "Yang-Mills theory and Tamagawa Numbers: the fascination of unexpected links in mathematics", February 22, 2013 • J. Lurie, The Siegel Mass Formula, Tamagawa Numbers, and Nonabelian Poincaré Duality posted June 8, 2012. 1. Lurie 2014.
Tamar Schlick Tamar Schlick is an American applied mathematician who works as a professor of chemistry, mathematics, and computer science at New York University. Her research involves developing and applying tools for modeling and simulating biomolecules.[1] Education and career Schlick did her undergraduate studies at Wayne State University, graduating in 1982 with a B.S. in mathematics.[1] She continued her graduate studies at the Courant Institute of Mathematical Sciences at New York University, completing a Ph.D. in applied mathematics in 1987 under the supervision of Charles S. Peskin.[1][2] After postdoctoral studies at NYU and the Weizmann Institute of Science, she returned as a faculty member to NYU in 1989.[1] Recognition She is a fellow of the American Association for the Advancement of Science (2004), American Physical Society (2005), Biophysical Society (2012), and Society for Industrial and Applied Mathematics (2012).[1][3] References 1. Curriculum vitae: Tamar Schlick (PDF), October 8, 2012, retrieved 2015-09-09. 2. Tamar Schlick at the Mathematics Genealogy Project 3. SIAM Fellows: Class of 2012, retrieved 2015-09-09. External links • Home page
Tamar Ziegler Tamar Debora Ziegler (Hebrew: תמר ציגלר; born 1971) is an Israeli mathematician known for her work in ergodic theory, combinatorics and number theory. She holds the Henry and Manya Noskwith Chair of Mathematics at the Einstein Institute of Mathematics at the Hebrew University.
• Citizenship: Israeli
• Alma mater: The Hebrew University
• Awards: Erdős Prize (2011)[1]
• Fields: Ergodic theory, combinatorics, number theory
• Institutions: Hebrew University; Technion
• Thesis: Nonconventional ergodic averages (2003)
• Doctoral advisor: Hillel Furstenberg
• Website: www.ma.huji.ac.il/~tamarz/
Career Ziegler received her Ph.D. in Mathematics from the Hebrew University under the supervision of Hillel Furstenberg.[2] Her thesis title was “Non conventional ergodic averages”. She spent five years in the US as a postdoc at the Ohio State University, the Institute for Advanced Study at Princeton, and the University of Michigan. She was a faculty member at the Technion during the years 2007–2013, and joined the Hebrew University in the Fall of 2013 as a full professor. Ziegler serves as an editor of several journals. Among others she is an editor of the Journal of the European Mathematical Society (JEMS), an associate editor of the Annals of Mathematics, and the Editor in Chief of the Israel Journal of Mathematics. Research Ziegler's research lies at the interface of ergodic theory with several mathematical fields, including combinatorics, number theory, algebraic geometry and theoretical computer science.
One of her major contributions, in joint work with Ben Green and Terence Tao (and combined with earlier work of theirs[3][4]), is the resolution of the generalized Hardy–Littlewood conjecture for affine linear systems of finite complexity.[5] Other important contributions include the generalization of the Green-Tao theorem to polynomial patterns,[6][7] and the proof of the inverse conjecture for the Gowers norms in finite field geometry.[8][9][10] Recognition Ziegler won the Erdős Prize of the Israel Mathematical Union in 2011,[1] and the Bruno memorial award in 2015. She was the European Mathematical Society lecturer of the year in 2013, and an invited speaker at the 2014 International Congress of Mathematicians. She was named MSRI Simons Professor for 2016-2017.[11] She was elected to the Academia Europaea in 2021.[12] References 1. 2011 Erdos Prize in Mathematics (PDF), Israel Mathematical Union, retrieved 2015-08-02. 2. Tamar Ziegler at the Mathematics Genealogy Project 3. Green, Ben; Tao, Terence (2010). "Linear equations in primes". Annals of Mathematics. 171 (3): 1753–1850. arXiv:math/0606088. doi:10.4007/annals.2010.171.1753. MR 2680398. S2CID 119596965. 4. Green, Ben; Tao, Terence (2012). "The Möbius function is strongly orthogonal to nilsequences". Annals of Mathematics. 175 (2): 541–566. arXiv:0807.1736. doi:10.4007/annals.2012.175.2.3. MR 2877066. 5. Green, Ben; Tao, Terence; Ziegler, Tamar (2012). "An inverse theorem for the Gowers $U^{s+1}[N]$-norm". Annals of Mathematics. 176 (2): 1231–1372. arXiv:1009.3998. doi:10.4007/annals.2012.176.2.11. MR 2950773. S2CID 119588323. 6. Tao, Terence; Ziegler, Tamar (2008). "The primes contain arbitrarily long polynomial progressions". Acta Mathematica. 201 (2): 213–305. arXiv:math/0610050. doi:10.1007/s11511-008-0032-5. MR 2461509. S2CID 119138411. 7. Tao, Terence; Ziegler, Tamar (2018). "Polynomial patterns in primes". Forum of Mathematics, Pi. 6. arXiv:1603.07817. doi:10.1017/fmp.2017.3. S2CID 119316066. 8. 
Bergelson, Vitaly; Tao, Terence; Ziegler, Tamar (2010). "An inverse theorem for the uniformity seminorms associated with the action of $\mathbb {F} _{p}^{\infty }$". Geom. Funct. Anal. 19 (6): 1539–1596. arXiv:0901.2602. doi:10.1007/s00039-010-0051-1. MR 2594614. S2CID 10875469. 9. Tao, Terence; Ziegler, Tamar (2010). "The inverse conjecture for the Gowers norms over finite fields via the correspondence principle". Analysis & PDE. 3 (1): 1–20. arXiv:0810.5527. doi:10.2140/apde.2010.3.1. MR 2663409. S2CID 16850505. 10. Tao, Terence; Ziegler, Tamar (2011). "The Inverse conjecture for the Gowers norms over finite fields in low characteristic". Annals of Combinatorics. 16: 121–188. arXiv:1101.1469. Bibcode:2011arXiv1101.1469T. doi:10.1007/s00026-011-0124-3. MR 2948765. S2CID 119593656. 11. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 2021-06-07. 12. "Tamar Ziegler". Members. Academia Europaea. Retrieved 2021-12-18.
Tamara Awerbuch-Friedlander Tamara Eugenia Awerbuch-Friedlander is a biomathematician and public health scientist who worked at the Harvard School of Public Health (HSPH) in Boston, Massachusetts.[1] Her primary research and publications focus on biosocial interactions that cause or contribute to disease. She also is believed to be the first female Harvard faculty member to have had a jury trial for a lawsuit filed against Harvard University for sex discrimination.[2][3]
• Born: Uruguay
• Occupation: Biomathematician, public health researcher, professor
• Nationality: Israeli, Uruguayan
• Citizenship: United States
• Alma mater: Hebrew University, Israel; Massachusetts Institute of Technology, 1979
• Period: 20th and 21st centuries
• Genre: Biomathematics, biostatistics, statistics, public health, emergent diseases, epidemiology, HIV/AIDS
• Subject: Biostatistics, statistics, public health, biomathematics, disease vectors, entomology
• Literary movement: Women's health, feminism, university women
• Notable works: The Truth is the Whole: Essays in Honor of Richard Levins (2018)
• Notable awards: Fulbright Scholarship (mathematical epidemiology), Robert Wood Johnson Foundation Investigator Award
• Relatives: Parents: Chaya Clara Goldman Friedlander and Michael Friedlander
Early life Tamara Awerbuch was born in Uruguay, lived until the age of 12 in Buenos Aires, Argentina, then moved to Israel with her parents, where her grandparents and parents had lived after they had escaped Nazi Germany just before the Holocaust began. She studied and completed two degrees at Hebrew University in Jerusalem. She studied chemistry and minored in biochemistry and completed the BSc degree in 1965. In 1967, she completed both the Master of Science (MSc) in Physiology and the Master of Education (MEd) degree from Hebrew University.
She was certified to teach grades K–12 in Israel, where she lectures and appears on panels and in workshops, as she does also in the United States and elsewhere. She also served for two years in the Israeli army. In October 1973, while visiting friends in America, she was offered employment at MIT in Cambridge, Massachusetts, to study chemical carcinogens in tissue cultures, then a recently developed technique. During this period, she worked in the lab studying carcinogenicity in tissue cultures, studied one course each semester, and lived frugally, sharing a house with MIT junior Faculty and graduate students. As one of her allotted courses per semester, in spring of 1974 she first started to study mathematics, taking mathematics and statistics. In summer 1975, she matriculated as a full-time student at MIT, where in 1979 she completed her doctorate in Nutrition and Food Science. She became a US citizen and has resided in the United States since that time. She was recruited in 1983 to the Biostatistics Department of the Harvard T.H. Chan School of Public Health by Department Chair Marvin Zelen. She was a Fulbright scholar in 1988.[4] In 1993, she began a long career in the Department of Global Health and Population at the Harvard T.H. Chan School of Public Health. Her two sons, Danny and Ari, were born in the 1980s and reared in Brookline, Massachusetts. She speaks English, Hebrew, and Spanish fluently and understands and reads German. Education • Undergraduate study at Hebrew University in Israel. 
• BSc in Chemistry (minor in biochemistry) – 1965 • MSc in Physiology – 1967 • MEd – Education (certified to teach K–12) – 1967 • PhD, MIT, Department of Nutrition and Food Science, Major in Metabolism, 1979 • Thesis: "A diffusion bioassay for the quantitative determination of mutagenicity of chemical carcinogens" (a theoretical study for determining safe threshold concentrations of food additives re: carcinogenesis) • Postdoc, MIT, in Somatic Cell Genetics 1979-1981 Career Since the early 2000s, she has organized and carried out research on conditions that lead to the emergence, maintenance, and spread of epidemics. Her research encompasses sexually-transmitted diseases (STDs) such as HIV/AIDS, as well as vector-borne diseases, such as Lyme disease, dengue, and Zika virus and Zika fever. Awerbuch-Friedlander recently researched the spread and control of rabies based on an eco-historical analysis. Her work is interdisciplinary, and some of her publications are co-authored with international scientists and members of different departments of the HSPH and the Massachusetts Institute of Technology. Some of her analytical mathematical models led to fundamental epidemiological discoveries, for example, that oscillations are an intrinsic property of tick dynamics. She presented her work in many international conferences and at the Isaac Newton Institute of Mathematical Sciences in Cambridge, England, where she was invited to participate in the Program on Models of Epidemics.[5] Awerbuch-Friedlander is a founding member of the New and Resurgent Disease Working Group.[6][7] Within this context, she was involved in organizing a conference in Woods Hole, Massachusetts, on the emergence and resurgence of diseases, where she led the workshop on Mathematical Modeling. 
In addition, she established international collaborations, such as with Israeli scientists on emerging infectious diseases in the Middle East, with Cuban scientists on infectious diseases of plants and the development of general methodologies, and with Brazilian scientists on the development of concepts to guide effective surveillance. In the late 1990s, Awerbuch-Friedlander was co-investigator in a project, "Why New and Resurgent Diseases Caught Public Health by Surprise and a Strategy to Prevent This" (supported by the Robert Wood Johnson Foundation). At Harvard T.H. Chan School of Public Health, Awerbuch-Friedlander co-chaired the committee on Bio- and Public Health Mathematics. Some of her research papers were the result of collaboration with students through the course Mathematical Models in Biology, which had large portions dedicated to infectious diseases. She is interested in public health education and has developed educational software for high school adolescents, based on models for determining the risk that an individual with certain risky sexual behaviors would become infected with HIV. These models helped risk-prone youth, parents, educators, community health leaders, and public health researchers explore how changes in sexual behavior affect the probability of contracting HIV.[5] The Truth is the Whole Awerbuch-Friedlander also chaired the planning committee for the 85th birthday celebration of Richard Levins,[8] founder of the Human Ecology program in the Global Health and Population Department of the Harvard School of Public Health. The three-day conference, with the Hegelian theme "The Truth is the Whole", was held in mid-2015 at the Harvard School of Public Health and focused on the manifold contributions in models of complexity theory and holistic research from mathematical biologist Levins and his colleagues, students, and disciples, who broadly are interested in complex systems biology.
The September 2018 book, The Truth Is the Whole: Essays in Honor of Richard Levins (ISBN 0998889105/9780998889108), in which she was co-editor with Maynard Clark and Dr. Peter Taylor, includes parts of the proceedings from over 20 contributors from that Harvard symposium. Sex-discrimination suit against Harvard Although Theda Skocpol had alleged gender bias in denial of tenure as early as 1980,[9] Awerbuch-Friedlander is believed to be the first female Harvard Faculty member to file a lawsuit against Harvard University for sex discrimination.[3][10][11] The suit was "filed with the Middlesex County Superior Court in June 1997."[12] Encouraged by her mentors, Richard Levins and Marvin Zelen, Awerbuch-Friedlander sought "nearly $1 million in lost wages and benefits, as well as a promotion at the HSPH"[13] and argued "that Fineberg refused to promote her to a tenure-track position because she is a woman, despite the positive recommendation of the HSPH's selection committee of appointment and re-appointment (SCARP)."[13] Intermittently from 1998 through 2007, the gender discrimination case was covered by the Harvard Crimson (campus media), The Boston Globe (local media), and Science magazine (professional and scientific print media). Science documented the case developments of the sex-discrimination case in its "News of the Week: Women in Science" section.[14] and in Science's SCIENCESCOPE two months later.[15] Her sex discrimination lawsuit was based upon Harvard's denial of tenure to her, despite her significant accomplishments in her fields of expertise, biomathematics, epidemiology, biostatistics and public health. The University argued that no tenure track positions were open in her new department, after she had been reassigned from one department to another. Notable students • Christl Donnelly and Wendy Leisenring. Worked on the comparison of transmission rates of HIV1 and HIV2 in a cohort of prostitutes in Senegal 1990–1991. 
Publication: Bulletin of Mathematical Biology 55:731-743, 1993. • Sandro Galea - Variability and vulnerability at the ecological level: Implications for understanding the social determinants of health. Spring 2000. Appeared in American Journal of Public Health, 92:1768-1772, 2002. References 1. "Tamara Awerbuch-Friedlander | Harvard Catalyst Profiles | Harvard Catalyst". 7 April 2014. Archived from the original on 7 April 2014. 2. 'Issues' page on Women in the Academic Profession, accessed 05/02/2013. 3. Dyer, Susan K. (2004). Tenure denied: cases of sex discrimination in academia (PDF). AAUW Educational Fund. ISBN 1-879922-34-7. OCLC 642196404. 4. "Directory of American Fulbright Scholars" (PDF). libraries.uark.edu. Council for International Exchange of Scholars. 1988–89. Archived (PDF) from the original on 12 August 2015. Retrieved 16 April 2023. 5. "Home | Tamara Awerbuch | Harvard T.H. Chan School of Public Health". 4 March 2016. Archived from the original on 4 March 2016. Retrieved 7 April 2018. 6. Awerbuch-Friedlander, T., Levins, R., Mathematical Models of Public Health Policy, Mathematical Models, Volume III, EOLSS (Encyclopedia of Life Support Systems), Note: in Biographical Sketches, Accessed online 4/2/2014 7. "Note 20 In "Health Impacts of Climate Change", Medical Journal of Australia, vol.163, 1995, pp. 570–574. By: Erwin Jackson, Climate Impacts Specialist, Greenpeace International, Accessed online 4/2/2014". Archived from the original on 8 November 2019. Retrieved 3 April 2014. 8. "Dr. Richard Levins 85th Birthday". www.hsph.harvard.edu. Archived from the original on 2 April 2015. 9. "A QUESTION OF SEX BIAS AT HARVARD". The New York Times. 18 October 1981. Retrieved 14 February 2017. 10. "AWERBUCH-FRIEDLANDER v. P | 449 Mass. 1105 (2007) | s1105251332 | Leagle.com". Leagle. 11. 'Issues' page on Women in the Academic Profession, accessed 05/07/2013. 12. Resnick, S.
A., SPH Lecturer Sues University For Gender Bias: Harvard denies allegations, says system fair to all, Harvard Crimson, June 3, 1998 13. McPherson, F. Reynolds (14 March 2001). "Fineberg Testifies in Discrimination Case | News | The Harvard Crimson". www.thecrimson.com. 14. Lawler, A., Court to Hear Charges by Harvard Researcher, Science 23 February 2001: Vol. 291 no. 5508 p. 1466, doi:10.1126/science.291.5508.1466a 15. Lawler, A., Appealing Case, SCIENCESCOPE, Science 27 April 2001: 619 External links • 'Issues' page on Women in the Academic Profession, accessed 05/02/2013. • The American Association of University Women, Tenure Denied: Cases of Sex Discrimination in Academia. 2004. • "The Truth Is the Whole" – 2-day symposium on the 85th birthday of Dr. Richard Levins • Website
Tamara G. Kolda Tamara G. Kolda is an American applied mathematician and former Distinguished Member of Technical Staff at Sandia National Laboratories. She is noted for her contributions in computational science, multilinear algebra, data mining, graph algorithms, mathematical optimization, parallel computing, and software engineering.[2][3] She is currently a member of the SIAM Board of Trustees and served as associate editor for both the SIAM Journal on Scientific Computing and the SIAM Journal on Matrix Analysis and Applications.[4]
• Alma mater: University of Maryland Baltimore County; University of Maryland College Park
• Awards: ACM Fellow (2019)[1]; Presidential Early Career Award for Scientists and Engineers (2003)
• Fields: Applied mathematics, computational science
• Institutions: Oak Ridge National Laboratory; Sandia National Laboratories
• Thesis: Limited-Memory Matrix Methods With Applications (1997)
• Doctoral advisor: Dianne P. O'Leary
Education Kolda received her bachelor's degree in mathematics in 1992 from the University of Maryland Baltimore County and her PhD in applied mathematics from the University of Maryland College Park in 1997.[5] Career and research Kolda was a Householder Postdoctoral Fellow at Oak Ridge National Laboratory from 1997 to 1999 before joining Sandia National Laboratories.
Awards and honors Kolda received a Presidential Early Career Award for Scientists and Engineers in 2003, best paper prizes at the 2008 IEEE International Conference on Data Mining and the 2013 SIAM International Conference on Data Mining, and has been a distinguished member of the Association for Computing Machinery since 2011.[2][6] She was elected a Fellow of the Society for Industrial and Applied Mathematics in 2015.[7] She was elected a Fellow of the Association for Computing Machinery in 2019 for "innovations in algorithms for tensor decompositions, contributions to data science, and community leadership."[1] She was elected to the National Academy of Engineering in 2020, for "contributions to the design of scientific software, including tensor decompositions and multilinear algebra".[8] References 1. "Tamara G Kolda". awards.acm.org. 2. Camacho-Lopez, Tara (23 June 2015). "Sandian Named Fellow of the Society for Industrial and Applied Mathematics". Sandia Energy. 3. "SC16 Invited Talk Spotlight: Dr. Tamara G. Kolda Presents "Parallel Multiway Methods for Compression of Massive Data and Other Applications"". SuperComputing16. 29 September 2016. Retrieved 15 October 2017. 4. "Q&A: Tamara Kolda on SIAM Journal Macro Update". SIAM News. 21 March 2016. 5. "Tamara G. Kolda - CV" (PDF). 6. "Tamara G Kolda". awards.acm.org. 7. "SIAM Fellows Class of 2015". fellows.siam.org. 8. "National Academy of Engineering Elects 86 Members and 18 International Members". National Academy of Engineering. February 6, 2020. Retrieved 2020-10-08.
Tamari lattice In mathematics, a Tamari lattice, introduced by Dov Tamari (1962), is a partially ordered set in which the elements consist of different ways of grouping a sequence of objects into pairs using parentheses; for instance, for a sequence of four objects abcd, the five possible groupings are ((ab)c)d, (ab)(cd), (a(bc))d, a((bc)d), and a(b(cd)). Each grouping describes a different order in which the objects may be combined by a binary operation; in the Tamari lattice, one grouping is ordered before another if the second grouping may be obtained from the first by only rightward applications of the associative law (xy)z = x(yz). For instance, applying this law with x = a, y = bc, and z = d gives the expansion (a(bc))d = a((bc)d), so in the ordering of the Tamari lattice (a(bc))d ≤ a((bc)d). In this partial order, any two groupings g1 and g2 have a greatest common predecessor, the meet g1 ∧ g2, and a least common successor, the join g1 ∨ g2. Thus, the Tamari lattice has the structure of a lattice. The Hasse diagram of this lattice is isomorphic to the graph of vertices and edges of an associahedron. The number of elements in a Tamari lattice for a sequence of n + 1 objects is the nth Catalan number Cn. The Tamari lattice can also be described in several other equivalent ways: • It is the poset of sequences of n integers a1, ..., an, ordered coordinatewise, such that i ≤ ai ≤ n and if i ≤ j ≤ ai then aj ≤ ai (Huang & Tamari 1972). • It is the poset of binary trees with n leaves, ordered by tree rotation operations. • It is the poset of ordered forests, in which one forest is earlier than another in the partial order if, for every j, the jth node in a preorder traversal of the first forest has at least as many descendants as the jth node in a preorder traversal of the second forest (Knuth 2005). • It is the poset of triangulations of a convex n-gon, ordered by flip operations that substitute one diagonal of the polygon for another. 
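The correspondence with binary trees and Catalan numbers described above can be checked with a short script. This is an illustrative sketch, not drawn from the cited references: leaves are encoded as None and internal nodes as pairs, so the grouping ((ab)c)d becomes (((None, None), None), None), and the helper names are ad hoc.

```python
from math import comb

def catalan(n):
    """nth Catalan number, C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def trees(n):
    """All binary trees with n leaves; a leaf is None, an internal node a pair."""
    if n == 1:
        return [None]
    result = []
    for k in range(1, n):  # k leaves go to the left subtree
        for left in trees(k):
            for right in trees(n - k):
                result.append((left, right))
    return result

def right_rotations(t):
    """Trees reachable from t by one application of (xy)z -> x(yz)."""
    if t is None:
        return []
    left, right = t
    out = []
    if left is not None:  # rotate at this node: ((x y) z) -> (x (y z))
        x, y = left
        out.append((x, (y, right)))
    out += [(l, right) for l in right_rotations(left)]
    out += [(left, r) for r in right_rotations(right)]
    return out

# The Tamari lattice on groupings of 4 objects has C_3 = 5 elements,
# and the bottom element ((ab)c)d is covered by exactly two groupings:
# (ab)(cd) and (a(bc))d, matching the associative-law expansions in the text.
print(len(trees(4)), catalan(3))                            # 5 5
print(len(right_rotations((((None, None), None), None))))  # 2
```

Iterating right_rotations from the bottom element (((None, None), None), None) reaches all five trees, tracing out the Hasse diagram of the pentagon-shaped lattice for n = 3.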
Notation The Tamari lattice of the Cn groupings of n+1 objects is called Tn, but the corresponding associahedron is called Kn+1. In The Art of Computer Programming T4 is called the Tamari lattice of order 4 and its Hasse diagram K5 the associahedron of order 4. References • Chapoton, F. (2005), "Sur le nombre d'intervalles dans les treillis de Tamari", Séminaire Lotharingien de Combinatoire (in French), 55 (55): 2368, arXiv:math/0602368, Bibcode:2006math......2368C, MR 2264942. • Csar, Sebastian A.; Sengupta, Rik; Suksompong, Warut (2014), "On a Subposet of the Tamari Lattice", Order, 31 (3): 337–363, arXiv:1108.5690, doi:10.1007/s11083-013-9305-5, MR 3265974. • Early, Edward (2004), "Chain lengths in the Tamari lattice", Annals of Combinatorics, 8 (1): 37–43, doi:10.1007/s00026-004-0203-9, MR 2061375. • Friedman, Haya; Tamari, Dov (1967), "Problèmes d'associativité: Une structure de treillis finis induite par une loi demi-associative", Journal of Combinatorial Theory (in French), 2 (3): 215–242, doi:10.1016/S0021-9800(67)80024-3, MR 0238984. • Geyer, Winfried (1994), "On Tamari lattices", Discrete Mathematics, 133 (1–3): 99–122, doi:10.1016/0012-365X(94)90019-1, MR 1298967. • Huang, Samuel; Tamari, Dov (1972), "Problems of associativity: A simple proof for the lattice property of systems ordered by a semi-associative law", Journal of Combinatorial Theory, Series A, 13: 7–13, doi:10.1016/0097-3165(72)90003-9, MR 0306064. • Knuth, Donald E. (2005), "Draft of Section 7.2.1.6: Generating All Trees", The Art of Computer Programming, vol. IV, p. 34. • Tamari, Dov (1962), "The algebra of bracketings and their enumeration", Nieuw Archief voor Wiskunde, Series 3, 10: 131–146, MR 0146227.
Tame group In mathematical group theory, a tame group is a certain kind of group defined in model theory. Formally, we define a bad field as a structure of the form (K, T), where K is an algebraically closed field and T is an infinite, proper, distinguished subgroup of K, such that (K, T) is of finite Morley rank in its full language. A group G is then called a tame group if no bad field is interpretable in G. References • A. V. Borovik, Tame groups of odd and even type, pp. 341–366, in Algebraic Groups and their Representations, R. W. Carter and J. Saxl, eds. (NATO ASI Series C: Mathematical and Physical Sciences, vol. 517), Kluwer Academic Publishers, Dordrecht, 1998.
Wild knot In the mathematical theory of knots, a knot is tame if it can be "thickened", that is, if there exists an extension to an embedding of the solid torus $S^{1}\times D^{2}$ into the 3-sphere. A knot is tame if and only if it can be represented as a finite closed polygonal chain. Knots that are not tame are called wild and can have pathological behavior; every closed curve containing a wild arc is a wild knot.[1] In knot theory and 3-manifold theory, the adjective "tame" is often omitted. Smooth knots, for example, are always tame. It has been conjectured that every wild knot has infinitely many quadrisecants.[2] As well as their mathematical study, wild knots have also been studied for their decorative purposes in Celtic-style ornamental knotwork.[3] See also • Eilenberg–Mazur swindle, a technique for analyzing connected sums using infinite sums of knots References 1. Voitsekhovskii, M. I. (December 13, 2014) [1994], "Wild knot", Encyclopedia of Mathematics, EMS Press 2. Kuperberg, Greg (1994), "Quadrisecants of knots and links", Journal of Knot Theory and Its Ramifications, 3: 41–50, arXiv:math/9712205, doi:10.1142/S021821659400006X, MR 1265452, S2CID 6103528 3. Browne, Cameron (December 2006), "Wild knots", Computers & Graphics, 30 (6): 1027–1032, doi:10.1016/j.cag.2006.08.021
Tame topology Not to be confused with Tame manifold. In mathematics, a tame topology is a hypothetical topology proposed by Alexander Grothendieck in his research program Esquisse d'un programme[1] under the French name topologie modérée (moderate topology). It is a topology in which the theory of dévissage can be applied to stratified structures such as semialgebraic or semianalytic sets.[2] Some authors consider an o-minimal structure to be a candidate for realizing tame topology in the real case.[3][4] There are also some other suggestions.[5] See also • Thom's first isotopy lemma References 1. Alexander Grothendieck, 1984. "Esquisse d'un Programme", (1984 manuscript), finally published in Schneps and Lochak (1997, I), pp. 5-48; English transl., ibid., pp. 243-283. MR1483107 2. A'Campo, Ji & Papadopoulos 2016, § 1. 3. Dries, L. P. D. van den (1998). Tame Topology and O-minimal Structures. London Mathematical Society Lecture Note Series, no. 248. Cambridge, New York, and Oakleigh, Victoria: Cambridge University Press. doi:10.1017/CBO9780511525919. ISBN 9780521598385. 4. Trimble, Todd (2011-06-12). "Answer to "A 'meta-mathematical principle' of MacPherson"". MathOverflow. 5. Ayala, David; Francis, John; Tanaka, Hiro Lee (5 February 2017). "Local structures on stratified spaces". Advances in Mathematics. 307: 903–1028. doi:10.1016/j.aim.2016.11.032. ISSN 0001-8708. We conceive this package of results as a dévissage of stratified structures in the sense of Grothendieck. • A'Campo, Norbert; Ji, Lizhen; Papadopoulos, Athanase (2016). "On Grothendieck's tame topology". Handbook of Teichmüller Theory, Volume VI. IRMA Lectures in Mathematics and Theoretical Physics. Vol. 27. pp. 521–533. arXiv:1603.03016. doi:10.4171/161-1/17. ISBN 978-3-03719-161-3. External links • https://ncatlab.org/nlab/show/tame+topology
Dade isometry In mathematical finite group theory, the Dade isometry is an isometry from class functions on a subgroup H with support on a subset K of H to class functions on a group G (Collins 1990, 6.1). It was introduced by Dade (1964) as a generalization and simplification of an isometry used by Feit & Thompson (1963) in their proof of the odd order theorem, and was used by Peterfalvi (2000) in his revision of the character theory of the odd order theorem. Definitions Suppose that H is a subgroup of a finite group G, K is an invariant subset of H such that if two elements in K are conjugate in G, then they are conjugate in H, and π is a set of primes containing all prime divisors of the orders of elements of K. The Dade lifting is a linear map f → fσ from class functions f of H with support on K to class functions fσ of G, which is defined as follows: fσ(x) is f(k) if there is an element k ∈ K conjugate to the π-part of x, and 0 otherwise. The Dade lifting is an isometry if, for each k ∈ K, the centralizer CG(k) is the semidirect product of a normal Hall π' subgroup I(K) with CH(k). Tamely embedded subsets in the Feit–Thompson proof The Feit–Thompson proof of the odd-order theorem uses "tamely embedded subsets" and an isometry from class functions with support on a tamely embedded subset. If K1 is a tamely embedded subset, then the subset K consisting of K1 without the identity element 1 satisfies the conditions above, and in this case the isometry used by Feit and Thompson is the Dade isometry. References • Collins, Michael J. (1990), Representations and characters of finite groups, Cambridge Studies in Advanced Mathematics, vol. 22, Cambridge University Press, ISBN 978-0-521-23440-5, MR 1050762 • Dade, Everett C. (1964), "Lifting group characters", Annals of Mathematics, Second Series, 79 (3): 590–596, doi:10.2307/1970409, ISSN 0003-486X, JSTOR 1970409, MR 0160813 • Feit, Walter (1967), Characters of finite groups, W. A.
Benjamin, Inc., New York-Amsterdam, ISBN 9780805324341, MR 0219636 • Feit, Walter; Thompson, John G. (1963), "Solvability of groups of odd order", Pacific Journal of Mathematics, 13: 775–1029, doi:10.2140/pjm.1963.13.775, ISSN 0030-8730, MR 0166261 • Peterfalvi, Thomas (2000), Character theory for the odd order theorem, London Mathematical Society Lecture Note Series, vol. 272, Cambridge University Press, doi:10.1017/CBO9780511565861, ISBN 978-0-521-64660-4, MR 1747393
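The case distinction defining the Dade lifting above can be written as a single displayed formula. The notation x_π for the π-part of x is introduced here for compactness; everything else is as in the text:

```latex
f^{\sigma}(x) =
\begin{cases}
  f(k) & \text{if } x_{\pi} \text{ is conjugate in } G \text{ to some } k \in K, \\
  0    & \text{otherwise.}
\end{cases}
```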
Tamil numerals The Tamil language has number words and dedicated symbols for them in the Tamil script. Basic numbering Zero Old Tamil possesses a special numerical character for zero (see Old Tamil numerals below), read as andru (literally, no/nothing). Modern Tamil, however, has abandoned this native character and uses the Indian symbol '0' (shunya, meaning nothingness in Indic thought). Modern Tamil words for zero include சுழியம் (suḻiyam) or பூஜ்ஜியம் (pūjjiyam). First ten numbers (முதல் எண்கள்), giving the modern Tamil numeral, its value, and the Tamil word with transliteration:
௦ 0 சுழியம் (suḻiyam); Old Tamil: பாழ் (pāḻ)[1]
௧ 1 ஒன்று (oṉṟu)
௨ 2 இரண்டு (iraṇḍu)
௩ 3 மூன்று (mūṉṟu)
௪ 4 நான்கு (nāṉku)
௫ 5 ஐந்து (aindhu)
௬ 6 ஆறு (āṟu)
௭ 7 ஏழு (ēḻu)
௮ 8 எட்டு (eṭṭu)
௯ 9 ஒன்பது (oṉpathu)
௰ 10 பத்து (paththu)
Transcribing other numbers Reproductive and attributive prefixes Tamil has a numeric prefix for each number from 1 to 9, which can be added to the words for the powers of ten (ten, hundred, thousand, etc.) to form multiples of them. For instance, the word for fifty, ஐம்பது (aimpatu), is a combination of ஐ (ai, the prefix for five) and பத்து (pattu, ten). The prefix for nine changes with respect to the succeeding power of ten: தொ + the unvoiced consonant of the succeeding power of ten forms the prefix for nine. For instance, 90 is தொ + ண் (ண் being the unvoiced version of ணூ), hence தொண்ணூறு.
Tamil prefixes (Tamil script, prefix, transliteration):
௧ ஓர் (ōr)
௨ ஈர் (īr)
௩ மூ (mū)
௪ நான் (nāṉ)
௫ ஐ (ai)
௬ ஆறு (āṟ(u))
௭ ஏழ் (ēḻ(u))
௮ எண் (eṇ)
Sanskrit-derived prefixes (such as அட்ட, aṭṭa, in the example below) are typically not used in Tamil except in some Hindu references; for example, அட்ட இலட்சுமிகள் (the eight Lakshmis). Even in religious contexts, the Tamil language is usually preferred for its more poetic nature and relatively low incidence of consonant clusters. Specific characters Unlike other Indian writing systems, Tamil has distinct digits for 10, 100, and 1000: ten ௰, hundred ௱, thousand ௲. It also has distinct characters for other number-based aspects of day-to-day life: day ௳, month ௴, year ௵, debit ௶, credit ௷, as above ௸, rupee ௹, numeral ௺. Powers of ten (பதின்பெருக்கம்) There are two numeral systems that can be used in the Tamil language. The following are the traditional numbers of the Ancient Tamil Country, Tamiḻakam.[2]
Original Tamil system (rank, character, word, translation):
10^1 ௰ பத்து (pattu), ten
10^2 ௱ நூறு (nūṟu), hundred
10^3 ௲ ஆயிரம் (āyiram), thousand
10^4 ௰௲ பத்தாயிரம் (pattāyiram), ten thousand
10^5 ௱௲ நூறாயிரம் (nūṟāyiram), hundred thousand
10^6 ௲௲ மெய்யிரம் (meyyiram), million
10^9 ௲௲௲ தொள்ளுண் (toḷḷuṇ), billion (milliard)
10^12 ௲௲௲௲ ஈகியம் (īkiyam), trillion (billion)
10^15 ௲௲௲௲௲ நெளை (neḷai), quadrillion (billiard)
10^18 ௲௲௲௲௲௲ இளஞ்சி (iḷañci), quintillion (trillion)
10^20 ௱௲௲௲௲௲௲ வெள்ளம் (veḷḷam), hundred quintillion
10^21 ௲௲௲௲௲௲௲ ஆம்பல் (āmbal), sextillion (trilliard)
Current Tamil system (rank, character, word, translation):
10^5 ௱௲ இலட்சம் (ilaṭcam), lakh
10^6 ௲௲ பத்து இலட்சம் (pattu ilaṭcam), ten lakh
10^7 ௰௲௲ கோடி (kōṭi), crore
10^8 ௱௲௲ பத்துக் கோடி (pattuk kōṭi), ten crore
10^9 ௲௲௲ அற்புதம் (aṟputam), arab
10^11 ௱௲௲௲ நிகர்ப்புதம் (nikarpputam), kharab
10^13 ௱௲௲௲௲ கர்வம் (karvam), nil / hundred kharab
10^15 ௲௲௲௲௲ சங்கம் (śaṅkam), padma
10^17 ௱௲௲௲௲௲ அர்த்தம் (arttam), shankh / hundred padma
10^19 ௰௲௲௲௲௲௲ பூரியம் (pūriyam), hundred shankh
10^21 ௲௲௲௲௲௲௲ முக்கொடி (mukkoṭi), ten thousand shankh
10^25 ௰௲௲௲௲௲௲௲௲ மாயுகம் (māyukam), ten crore shankh
Fractions (பின்னம்) Proposals
to encode Tamil fractions and symbols to Unicode were submitted.[3][4] As of version 12.0, Tamil characters used for fractional values in traditional accounting practices were added to the Unicode Standard. Transcribing fractions (பின்னம் எழுத்தல்) Any fraction can be transcribed by affixing -இல் (-il) after the denominator, followed by the numerator. For instance, 1/41 can be said as நாற்பத்து ஒன்றில் ஒன்று (nāṟpattu oṉṟil oṉṟu). Suffixing -இல் (-il) requires the final உ (u) of the denominator word to be dropped. For example, மூன்று + இல் (mūṉṟu + -il) becomes மூன்றில் (mūṉṟil); note that the உ (u) has been omitted. Common fractions (பொது பின்னங்கள்) have names already allocated to them, and these names are often used rather than the above method:
1⁄4 கால் (kāl)
1⁄2 அரை (arai)
3⁄4 முக்கால் (mukkāl)
1⁄5 நாலுமா (nālumā)
1⁄8 அரைக்கால் (araikkāl)
1⁄10 இருமா (irumā)
1⁄16 மாகாணி, வீசம் (mākāṇi, vīsam)
1⁄20 ஒருமா (orumā)
1⁄40 அரைமா (araimā)
1⁄80 காணி (kāṇi)
1⁄160 அரைக்காணி (araikkāṇi)
Other fractions include:
3⁄16 = 0.1875 மும்மாகாணி (mummākāṇi)
3⁄20 = 0.15 மும்மா (mummā)
3⁄64 = 0.046875 முக்கால்வீசம் (mukkālvīsam)
3⁄80 = 0.0375 முக்காணி (mukkāṇi)
1⁄32 = 0.03125 அரைவீசம் (araivīsam)
1⁄64 = 0.015625 கால் வீசம் (kāl vīsam)
3⁄320 = 0.009375 முக்கால்காணி (mukkālkāṇi)
1⁄320 = 0.003125 முந்திரி (muntiri)
3⁄1280 = 0.00234375 கீழ் முக்கால் (kīḻ mukkāl)
1⁄640 = 0.0015625 கீழரை (kīḻarai)
1⁄1280 = 7.8125×10^−4 கீழ் கால் (kīḻ kāl)
1⁄1600 = 0.000625 கீழ் நாலுமா (kīḻ nālumā)
3⁄5120 ≈ 5.85938×10^−4 கீழ் மூன்று வீசம் (kīḻ mūṉṟu vīsam)
3⁄6400 = 4.6875×10^−4 கீழ் மும்மா (kīḻ mummā)
1⁄2500 = 0.0004 கீழ் அரைக்கால் (kīḻ araikkāl)
1⁄3200 = 3.12500×10^−4 கீழ் இருமா (kīḻ irumā)
1⁄5120 ≈ 1.95313×10^−4 கீழ் வீசம் (kīḻ vīsam)
1⁄6400 = 1.56250×10^−4 கீழொருமா (kīḻorumā)
1⁄102400 ≈ 9.76563×10^−6 கீழ்முந்திரி (kīḻmuntiri)
1⁄2150400 ≈ 4.65030×10^−7 இம்மி (immi)
1⁄23654400 ≈ 4.22754×10^−8 மும்மி (mummi)
1⁄165580800 ≈ 6.03935×10^−9 அணு (aṇu)^
1⁄1490227200 ≈ 6.71039×10^−10 குணம் (kuṇam)
1⁄7451136000 ≈ 1.34208×10^−10 பந்தம் (pantam)
1⁄44706816000 ≈ 2.23680×10^−11 பாகம் (pāgam)
1⁄312947712000 ≈ 3.19542×10^−12 விந்தம் (vintam)
1⁄5320111104000 ≈ 1.87966×10^−13 நாகவிந்தம் (nāgavintam)
1⁄74481555456000 ≈ 1.34261×10^−14 சிந்தை (sintai)
1⁄1489631109120000 ≈ 6.71307×10^−16 கதிர்முனை (katirmuṉai)
1⁄59585244364800000 ≈ 1.67827×10^−17 குரல்வளைப்படி (kuralvaḷaippaḍi)
1⁄3575114661888000000 ≈ 2.79711×10^−19 வெள்ளம் (veḷḷam)
1⁄357511466188800000000 ≈ 2.79711×10^−21 நுண்மணல் (nuṇmaṇal)
1⁄2323824530227200000000 ≈ 4.30325×10^−22 தேர்த்துகள் (tērttugaḷ)
^ Aṇu was considered by the ancient Tamils to be the lowest fraction, being the size of the smallest physical object (similar to an atom). The term later passed into Sanskrit to refer directly to atoms.
Decimals (பதின்மம்) The decimal point is called புள்ளி (puḷḷi) in Tamil. For example, 1.1 would be read as ஒன்று புள்ளி ஒன்று (oṉṟu puḷḷi oṉṟu). In Sri Lankan Tamil, it is called தசம் (thasam). Percentage (விழுக்காடு) Percentage is known as விழுக்காடு (viḻukkāḍu) or சதவீதம் (śatavītam) in Tamil. These words are simply added after a number to form percentages. For instance, four percent is நான்கு சதவீதம் (nāṉku satavītam) or நான்கு விழுக்காடு (nāṉku viḻukkāḍu). The percentage symbol (%) is also recognised and used. Ordinal numbers (வரிசை எண்கள்) Ordinal numbers are formed by adding the suffix -ஆம் (ām) after the number, except for 'first':
First முதல் (mudal)
Second இரண்டாம் (iraṇḍām)
Third மூன்றாம் (mūṉṟām)
Fourth நான்காம் (nāṉkām)
101st நூற்று ஒன்றாம் (nūṟṟu oṉṟām)
Collective numerals (கூட்டெண்கள்):
Single ஒற்றை (oṟṟai)
Pair இரட்டை (iraṭṭai)
Reproductives: ௺ + வினைச்சொல் (numeric prefix + noun)*, e.g. single (pillar), double (pillar)... ஒருக்(கால்), இருக்(கால்) (oruk(kāl), iruk(kāl))
Distributives: ௺ + முறை (numeric prefix + muṟai), e.g. once, twice... ஒருமுறை, இருமுறை (orumuṟai, irumuṟai)
* As always, when blending two words into one, an unvoiced form of the consonant that the second word starts with is placed in between to blend them.
Traditional Tamil counting song This song lists each number with the concept it is primarily associated with.
ஒரு குலம் (oru kulam): one race
ஈரினம் (īriṉam): two sexes – male (ஆண், āṇ), female (பெண், peṇ)
முத்தமிழ் (muttamiḻ): three sections of Tamil – literature (இயல், iyal), music (இசை, isai), and drama (நாடகம், nāṭakam)
நான்மறை (nāṉmaṟai): four scriptures
ஐம்புலன் (aimpulaṉ): five senses
அறுசுவை (aṟucuvai): six tastes – sweet (iṉippu), pungent (kārppu), bitter (kasappu), sour (puḷippu), salty (uvarppu), and astringent (tuvarppu)
ஏழிசை (ēḻicai): seven musical notes (kural, tuttam, kaikkiḷai, uḻai, iḷi, viḷari, tāram)
எண் பக்கம் (eṇ pakkam): eight directions – east (kiḻakku), west (mēṟku), north (vaḍakku), south (teṟku), south-west (teṉ-mēṟku), south-east (teṉ-kiḻakku), north-west (vaḍa-mēṟku), and north-east (vaḍa-kiḻakku)
நவமணிகள் (navamaṇikaḷ): nine gems – diamond (வைரம், vairam), emerald (மரகதம், marakatam), blue sapphire (நீலம், nīlam), garnet (கோமேதகம், kōmētakam), red coral (பவளம், pavaḷam), ruby (மாணிக்கம், māṇikkam), pearl (முத்து, muttu), topaz (புட்பராகம், puṭparākam), and cat's eye (வைடூரியம், vaiṭūriyam)
தொன்மெய்ப்பாடு (toṉmeyppāṭu): the expressions, also known as navarasam in dance – joyful (uvakai), humour (nakai), cries (aḻukai), innocent (vekuḷi), proud (perumitam), fear (accam), disgust (iḷivaral), wonder (maruṭkai), and tranquility (amaiti)[5]
Influence As Tamil is the ancient classical language of the Dravidian family, its numerals influenced and shaped the numerals of the other languages in the family. The following table compares the main Dravidian languages.
Number: Tamil, Kannada, Malayalam, Tulu, Telugu, Kolami, Kurukh, Brahui, Proto-Dravidian
1: oṉṟu, ondu, onnŭ, oñji, okaṭi, okkod, oṇṭa, asiṭ, *oru(1)
2: iraṇḍu, eraḍu, raṇṭŭ, eraḍ/iraḍ, renḍu, irāṭ, indiṅ, irāṭ, *iru(2)
3: mūṉṟu, mūru, mūnnŭ, mūji, mūḍu, mūndiṅ, mūnd, musiṭ, *muC
4: nālu/nāṉku, nālku, nālŭ, nāl, nālugu, nāliṅ, nākh, čār (II), *nān
5: aintu/añju, aydu, añcŭ, ayin/ain, ayidu, ayd 3, pancē (II), panč (II), *cayN
6: āṟu, āru, āṟŭ, āji, āru, ār 3, soyyē (II), šaš (II), *caru
7: ēḻu, ēḷu, ēḻŭ, ēḍ/ēl/ēḷ, ēḍu, ēḍ 3, sattē (II), haft (II), *ēlu
8: eṭṭu, eṇṭu, eṭṭŭ, eḍma/yeḍma/eṇma/enma, enimidi, enumadī 3, aṭṭhē (II), hašt (II), *eṭṭu
9: oṉpatu, ombattu, onpatŭ, ormba, tommidi, tomdī 3, naiṃyē (II), nōh (II), *toḷ
10: pattu, hattu, pattŭ, patt, padi, padī 3, dassē (II), dah (II), *pat(tu)
Through the Pallava script, which in turn influenced the Kawi script, the Khmer script, and other South-east Asian scripts, Tamil has also shaped the numeral graphemes of most South-east Asian languages. History Before the Government of India unveiled ₹ as the new rupee symbol, people in Tamil Nadu used the Tamil letter ௹ as the symbol. This symbol continues to be used occasionally as a rupee symbol by Indian Tamils, and is also used by Tamils in Sri Lanka. ௳ is also known as the Piḷḷaiyār Suḻi (lit. 'Curl of Piḷḷaiyār'), a symbol with which most Tamil Hindus will start off any auspicious document. It is written to invoke the god Piḷḷaiyār, known otherwise as Ganesha, who is the remover of obstacles. See also • Tamil script • Tamil units of measurement References 1. N. Subrahmanian (1996). Śaṅgam polity: the administration and social life of the Śaṅgam Tamils (3 ed.). Ennes Publications. pp. 235, 416. Retrieved 2 December 2015. 2. Selvakumar, V. (2016). History of Numbers and Fractions and Arithmetic Calculations in the Tamil Region: Some Observations. HuSS: International Journal of Research in Humanities and Social Sciences, 3(1), 27-35. https://doi.org/10.15613/HIJRH/2016/V3I1/111730 3. Sharma, Shriramana. (2012). Proposal to encode Tamil fractions and symbols.
Retrieved 12 March 2019 from https://www.unicode.org/L2/L2012/12231-tamil-fractions-symbols-proposal.pdf 4. Government of Tamil Nadu. (2017). Finalized proposal to encode Tamil fractions and symbols. Retrieved 12 March 2019 from http://unicode.org/wg2/docs/n4822-tamil-frac.pdf 5. Literary theories in Tamil: with special reference to Tolka:ppiyam. Pondicherry Institute of Linguistics and Culture. 1997. p. 135.
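The fraction-reading rule described above (the denominator word with its final உ, u, replaced by the -இல், -il, suffix, followed by the numerator word) can be sketched in a few lines. The romanized word list below is a tiny illustrative subset, not a full Tamil number converter:

```python
# A minimal sketch of the Tamil fraction-reading rule: suffix "-il" on the
# denominator word (dropping its final "u"), then the numerator word.
# WORDS is a small illustrative subset in romanized transliteration.
WORDS = {
    1: "oṉṟu",
    2: "iraṇḍu",
    3: "mūṉṟu",
    41: "nāṟpattu oṉṟu",
}

def read_fraction(numerator, denominator):
    den = WORDS[denominator]
    # The final "u" of the denominator word is dropped before adding "-il".
    if den.endswith("u"):
        den = den[:-1]
    return den + "il " + WORDS[numerator]

# 1/41 → "nāṟpattu oṉṟil oṉṟu", matching the example in the text.
```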
Tamilla Nasirova Tamilla Nasirova (Azerbaijani: Tamilla Nəsirova; 20 October 1936 – 12 April 2023) was an Azerbaijani mathematician. She specialized in probability theory and is known for her discoveries pertaining to the semi-Markov process. She was a professor at Baku State University from 1980 to 2018 and at the Karadeniz Technical University from 1996 to 2000. Nasirova was the first woman to earn a doctorate in mathematics in Azerbaijan and the first Azerbaijani woman to become a professor of mathematics. Biography Tamilla Nasirova was born in Nəvahı, Azerbaijan on 20 October 1936.[1] She attended School No. 176 in Baku and graduated in 1953. She enrolled at Azerbaijan State University (now Baku State University) and graduated in 1958. After studying at the Ukrainian Academy of Sciences and Moscow State University, she earned a Doctor of Philosophy at the Tashkent University of Information Technologies in 1964.[2] In 1980, Nasirova was given a position as an associate professor of mathematics at Baku State University,[2] becoming the first woman to hold such a position in Azerbaijan.[3] She became a full professor in 1995. 
She also taught at the Karadeniz Technical University in Turkey from 1996 to 2000.[2] Nasirova continued teaching at Baku State University until 2018,[4] teaching probability theory and mathematical statistics.[5] Nasirova worked in probability theory and is known for her work involving semi-Markov processes,[2][4] proving the ergodic theorem of semi-Markov processes and advancing several related developments that influenced their study.[3][5] Nasirova was a researcher for the Azerbaijan National Academy of Sciences Institute of Control Systems for much of her life, beginning in 1958, continuing until 1980, and resuming her work in 1994.[2] During her career, she published a total of 96 scientific works and trained ten doctoral students.[4] She was awarded three Azerbaijani Certificates of Honor: two from Baku State University in 2007 and 2016, and one from the National Academy of Sciences in 2016. She was also recognized as an Honored Teacher of the Republic in 2009.[4] Nasirova died on 12 April 2023, at the age of 86.[2] References 1. "Azerbaijan's first female mathematician D.Sc and Prof. passed away". Apa.az. Archived from the original on 2023-04-14. Retrieved 2023-04-14. 2. "Tamilla Nəsirova vəfat etdi" [Tamilla Nasirova has died]. Pravda.az (in Azerbaijani). 2023-04-12. Archived from the original on 2023-04-14. Retrieved 2023-04-14. 3. "Azərbaycanın ilk qadın riyaziyyatçı professoru Tamilla Nəsirovanın 80 illik yubileyi qeyd olundu" [The 80th birthday of Azerbaijan's first female mathematics professor Tamilla Nasirova was celebrated]. isi.az (in Azerbaijani). 2016-10-20. Archived from the original on 2021-10-22. Retrieved 2023-04-14. 4. "Nasirova Tamilla Khilal – Institute of Control Systems". isi.az. Archived from the original on 2023-04-14. Retrieved 2023-04-14. 5. Zeynalova, Vafa (2017-02-15). "Making the Science". Region Plus. Archived from the original on 2020-11-24. Retrieved 2023-04-14.
Tammes problem In geometry, the Tammes problem is the problem of packing a given number of points on the surface of a sphere so that the minimum distance between points is maximized. It is named after the Dutch botanist Pieter Merkus Lambertus Tammes (the nephew of pioneering botanist Jantina Tammes), who posed the problem in his 1930 doctoral dissertation on the distribution of pores on pollen grains.[1] Mathematicians began studying circle packing on the sphere, independently of Tammes, in the early 1940s; it was not until twenty years later that the problem became associated with his name. It can be viewed as a special case of the generalized Thomson problem of minimizing the total Coulomb energy of electrons in a spherical arrangement.[2] Thus far, solutions have been proven only for small numbers of circles: 3 through 14, and 24.[3] There are conjectured solutions for many other cases, including those in higher dimensions.[4] See also • Spherical code • Kissing number problem • Cylinder sphere packings References 1. Pieter Merkus Lambertus Tammes (1930): On the number and arrangements of the places of exit on the surface of pollen-grains, University of Groningen 2. Batagelj, Vladimir; Plestenjak, Bor. "Optimal arrangements of n points on a sphere and in a circle" (PDF). IMFM/TCS. Archived from the original (PDF) on 25 June 2018. 3. Musin, Oleg R.; Tarasov, Alexey S. (2015). "The Tammes Problem for N = 14". Experimental Mathematics. 24 (4): 460–468. doi:10.1080/10586458.2015.1022842. S2CID 39429109. 4. Sloane, N. J. A. "Spherical Codes: Nice arrangements of points on a sphere in various dimensions". Bibliography Journal articles • Tarnai T; Gáspár Zs (1987). "Multi-symmetric close packings of equal spheres on the spherical surface". Acta Crystallographica. A43 (5): 612–616. doi:10.1107/S0108767387098842. • Erber T, Hockney GM (1991). "Equilibrium configurations of N equal charges on a sphere" (PDF). Journal of Physics A: Mathematical and General.
24 (23): L1369–L1377. Bibcode:1991JPhA...24L1369E. doi:10.1088/0305-4470/24/23/008. S2CID 122561279. • Melissen JBM (1998). "How Different Can Colours Be? Maximum Separation of Points on a Spherical Octant". Proceedings of the Royal Society A. 454 (1973): 1499–1508. Bibcode:1998RSPSA.454.1499M. doi:10.1098/rspa.1998.0218. S2CID 122700006. • Bruinsma RF, Gelbart WM, Reguera D, Rudnick J, Zandi R (2003). "Viral Self-Assembly as a Thermodynamic Process" (PDF). Physical Review Letters. 90 (24): 248101–1–248101–4. arXiv:cond-mat/0211390. Bibcode:2003PhRvL..90x8101B. doi:10.1103/PhysRevLett.90.248101. hdl:2445/13275. PMID 12857229. S2CID 1353095. Archived from the original (PDF) on 2007-09-15. Books • Aste T, Weaire DL (2000). The Pursuit of Perfect Packing. Taylor and Francis. pp. 108–110. ISBN 978-0-7503-0648-5. • Wells D (1991). The Penguin Dictionary of Curious and Interesting Geometry. New York: Penguin Books. p. 31. ISBN 0-14-011813-6. External links • How to Stay Away from Each Other in a Spherical Universe (PDF). • Packing and Covering of Congruent Spherical Caps on a Sphere. • Science of Spherical Arrangements (PPT). • General discussion of packing points on surfaces, with focus on tori (PDF).
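A Tammes-problem candidate can be scored by its smallest pairwise angular separation. Below is a minimal sketch (not drawn from the references above) that checks the proven optimum for N = 4, the regular tetrahedron, whose separation is arccos(−1/3) ≈ 109.47°:

```python
import itertools
import math

def min_angular_separation(points):
    """Smallest pairwise angle, in radians, among unit vectors on the sphere."""
    best = math.pi
    for p, q in itertools.combinations(points, 2):
        dot = sum(a * b for a, b in zip(p, q))
        # Clamp to [-1, 1] to guard against floating-point drift before acos.
        best = min(best, math.acos(max(-1.0, min(1.0, dot))))
    return best

# Vertices of a regular tetrahedron, normalized to the unit sphere;
# this is the proven optimal configuration for N = 4.
s = 1 / math.sqrt(3)
tetra = [(s, s, s), (s, -s, -s), (-s, s, -s), (-s, -s, s)]
```

Larger N can be explored the same way by scoring candidate point sets, though proving optimality is the hard part, as the article notes.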
Tan Eng Chye Tan Eng Chye (simplified Chinese: 陈永财; traditional Chinese: 陳永財; pinyin: Chén Yǒngcái) is a Singaporean college administrator who has been serving as the third president of the National University of Singapore since 2018.[1] Prior to his presidency, he served as Deputy President (Academic Affairs) and Provost at the National University of Singapore.
3rd President of the National University of Singapore: incumbent, assumed office 1 January 2018; preceded by Tan Chorh Chuan
Alma mater: National University of Singapore (BS); Yale University (PhD)
Profession: College administrator
Website: president.nus.edu.sg
Fields: Mathematics
Institutions: National University of Singapore
Thesis: On Some Geometrical Properties of K-Types of Representations (1989)
Doctoral advisor: Roger Evans Howe
Education Tan attended Raffles Institution between 1974 and 1979 before graduating from the National University of Singapore in 1985 with a Bachelor of Science (First Class Honours) degree in mathematics. He went on to obtain his PhD from Yale University in 1989, under the guidance of Roger Howe.[2] Career He joined NUS as a faculty member in the Department of Mathematics in 1985, as a Senior Tutor, eventually becoming the Department's Deputy Head in 1999. In June 2003, he was appointed Dean of the Faculty of Science, a post he held till March 2007. Up till 2017, he served as NUS' Deputy President (Academic Affairs) and Provost. Tan Eng Chye's research interests are representation theory of Lie groups and Lie algebras, invariant theory, and algebraic combinatorics. In collaboration with Roger Howe, he has written a well-known graduate-level textbook on non-Abelian harmonic analysis and contributed to several subjects in representation theory, including degenerate principal series representations and branching rules.
He has also been active in promoting mathematics, having established the Singapore Mathematical Society Enrichment Programmes in 1994, revamped the Singapore Mathematical Olympiad in 1995 to allow more participation from students, and initiated a series of project teaching workshops for teachers in 1998. He served as President of the Singapore Mathematical Society from 2001 to 2005 and President of the South East Asian Mathematical Society from 2004 to 2005. 2018–present: NUS presidency On 28 July 2017, Tan was named as the next president of NUS, taking over from Tan Chorh Chuan.[3] He assumed office at the start of 2018. Along with the appointment, he joined A*STAR's board, taking the seat meant for the university's president.[4] In 2020, NUS raised S$300 million through its first green bond.[5] In the same year, it established a research institute called the Asian Institute of Digital Finance along with the Monetary Authority of Singapore and the National Research Foundation.[6] In 2020, Tan said in an interview that he had plans for NUS to "tear down structures that inhibit interdisciplinarity",[7] with Professor Joanne Roberts of Yale-NUS College commenting that there were similarities between Yale-NUS and Tan's plans.[8] On 22 September 2020, NUS unveiled its plans for an interdisciplinary college, the College of Humanities and Sciences, allowing students to take courses from both the Faculty of Science and the Faculty of Arts and Social Sciences.[9] The new college admitted its first intake in 2021.[10] As part of a broader plan to introduce interdisciplinary colleges, in 2021,[11] Tan announced that Yale-NUS College would be closed by 2025, with the 2021 intake of freshmen being the last. The college would also be merged with NUS' University Scholars Programme to offer a new curriculum.
The decision was unilaterally made by NUS, and came as a surprise to Yale-NUS' students and faculty, NUS' faculty, and Yale.[12] More than 10,000 people had signed a petition calling for the reversal of the decision.[13] Questions about the decision filed in the Singapore Parliament by various members of parliament were answered on 13 September 2021.[14][11] Honours and awards Tan received the Pingat Pentadbiran Awam, Emas (Public Administration Medal, Gold) in Singapore's National Day Awards 2014.[15] He has been a Fellow of the Singapore National Academy of Science since 2011.[16] He was conferred the Wilbur Lucius Cross Medal by the Yale Graduate School Alumni Association in 2018,[17] and received an Honorary Doctor of Science from the University of Southampton, UK in the same year.[18] Tan was conferred the title of Knight of the French Order of the Legion of Honour on 5 July 2022, in recognition of his distinguished contributions in education and research.[19] Selected works • Howe, Roger; Tan, Eng-Chye (1992). Non-Abelian harmonic analysis. Applications of SL(2,R). Universitext. Springer-Verlag, New York. doi:10.1007/978-1-4613-9200-2. ISBN 0-387-97768-6. • Howe, Roger E.; Tan, Eng-Chye (1993). "Homogeneous Functions on Light Cones: \\the Infinitesimal Structure of some Degenerate Principal Series Representations". Bulletin of the American Mathematical Society. 28: 1–75. doi:10.1090/S0273-0979-1993-00360-4. • Aslaksen, Helmer; Tan, Eng-Chye; Zhu, Chen-Bo (1995). "Invariant theory of special orthogonal groups". Pacific Journal of Mathematics. 168 (2): 207–215. doi:10.2140/pjm.1995.168.207. • Li, Jian-Shu; Paul, Annegret; Tan, Eng-Chye; Zhu, Chen-Bo (2003). "The explicit duality correspondence of (Sp(p,q),O∗(2n))". Journal of Functional Analysis. 200 (1): 71–100. doi:10.1016/S0022-1236(02)00079-4. • Howe, Roger; Tan, Eng-Chye; Willenbring, Jeb F. (2005). "Stable branching rules for classical symmetric pairs". Transactions of the American Mathematical Society. 
357 (4): 1601–1627. doi:10.1090/S0002-9947-04-03722-5. • Howe, Roger; Jackson, Steven; Teck Lee, Soo; Tan, Eng-Chye; Willenbring, Jeb (2009). "Toric degeneration of branching algebras". Advances in Mathematics. 220 (6): 1809–1841. doi:10.1016/j.aim.2008.11.010. References 1. "Prof Tan Eng Chye to be next NUS President". news.nus.com. 2. Tan, Eng-Chye (1989). On some geometrical properties of K-types of representations (Ph.D.). Yale University. OCLC 54174603 – via ProQuest. 3. hermesauto (28 July 2017). "NUS provost Tan Eng Chye will take over as university's president next year". The Straits Times. Retrieved 9 September 2021. 4. hermesauto (29 December 2017). "New NUS, NTU presidents among 3 new board members for A*Star in 2018". The Straits Times. Retrieved 9 September 2021. 5. Tay, Vivienne (27 May 2020). "NUS raises S$300m in its first green bond issuance". www.businesstimes.com.sg. Retrieved 9 September 2021.{{cite web}}: CS1 maint: url-status (link) 6. hermesauto (4 August 2020). "Singapore to set up Asian digital finance research institute by end-2020". The Straits Times. Retrieved 9 September 2021. 7. "COVID-19 starts push for more interdisciplinary research". University World News. Retrieved 9 September 2021. 8. "NUS' big push for interdisciplinary learning: Timely change but there'll be practical challenges, experts say". TODAYonline. Retrieved 9 September 2021. 9. "NUS unveils draft plans to set up combined College of Humanities and Sciences in 2021". TODAYonline. Retrieved 9 September 2021. 10. "Common modules, various subject combinations for first cohort at new NUS College of Humanities and Sciences in 2021". TODAYonline. Retrieved 9 September 2021. 11. hermesauto (13 September 2021). "Yale-NUS closure part of NUS interdisciplinary road map, cost not the main motivation: Chan Chun Sing". The Straits Times. Retrieved 13 September 2021. 12. "Students, faculty angry over closure of Yale-NUS College". University World News. Retrieved 9 September 2021. 13. 
hermesauto (31 August 2021). "Over 10,000 sign petition calling for reversal of Yale-NUS merger, students and alumni seek answers at townhall". The Straits Times. Retrieved 9 September 2021. 14. "WP MPs to file parliamentary questions on Yale-NUS, USP merger for possible debate at Sept 13 sitting". TODAYonline. Retrieved 9 September 2021. 15. Prime Minister's Office, Singapore (17 November 2018). "PMO | Recipients". Prime Minister's Office Singapore. 16. "Nine NUS professors conferred prestigious SNAS Fellowships". Archived from the original on 18 November 2016. 17. "NUS President conferred Yale medal". news.nus.com. 18. "NUS President receives honorary degree from Southampton". news.nus.com. 19. "NUS President Prof Tan Eng Chye conferred Knight of the French Order of the Legion of Honour". news.nus.com. External links • Tan Eng Chye Personal Web Page • The Mathematics Genealogy Project – Eng-Chye Tan • National University of Singapore President Biography
Tanaka's formula Not to be confused with Tanaka equation. In stochastic calculus, Tanaka's formula for Brownian motion states that $|B_{t}|=\int _{0}^{t}\operatorname {sgn}(B_{s})\,dB_{s}+L_{t}$ where Bt is the standard Brownian motion, sgn denotes the sign function $\operatorname {sgn}(x)={\begin{cases}+1,&x>0;\\0,&x=0\\-1,&x<0.\end{cases}}$ and Lt is its local time at 0 (the local time spent by B at 0 before time t) given by the L2-limit $L_{t}=\lim _{\varepsilon \downarrow 0}{\frac {1}{2\varepsilon }}|\{s\in [0,t]|B_{s}\in (-\varepsilon ,+\varepsilon )\}|.$ One can also extend the formula to semimartingales. Properties Tanaka's formula is the explicit Doob–Meyer decomposition of the submartingale |Bt| into the martingale part (the integral on the right-hand side, which is a Brownian motion[1]) and a continuous increasing process (local time). It can also be seen as the analogue of Itō's lemma for the (nonsmooth) absolute value function $f(x)=|x|$, with $f'(x)=\operatorname {sgn}(x)$ and $f''(x)=2\delta (x)$; see local time for a formal explanation of the Itō term. Outline of proof The function |x| is not C2 in x at x = 0, so we cannot apply Itō's formula directly. But if we approximate it near zero (i.e. in [−ε, ε]) by the parabolas ${\frac {x^{2}}{2|\varepsilon |}}+{\frac {|\varepsilon |}{2}}$ and use Itō's formula, we can then take the limit as ε → 0, leading to Tanaka's formula. References 1. Rogers, L.G.C. "I.14". Diffusions, Markov Processes and Martingales: Volume 1, Foundations. p. 30. • Øksendal, Bernt K. (2003). Stochastic Differential Equations: An Introduction with Applications (Sixth ed.). Berlin: Springer. ISBN 3-540-04758-1. (Example 5.3.2) • Shiryaev, Albert N.; trans. N. Kruzhilin (1999). Essentials of stochastic finance: Facts, models, theory. Advanced Series on Statistical Science & Applied Probability No. 3. River Edge, NJ: World Scientific Publishing Co. Inc. ISBN 981-02-3605-0.
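The decomposition can be illustrated with a small simulation (an illustrative sketch, not taken from the references above: the step count, the window ε, and the random seed are arbitrary choices). It builds a Brownian path, forms the discretized Itō integral of sgn(B), estimates the local time by the occupation-time formula, and compares the sum with |B_T|.

```python
import numpy as np

rng = np.random.default_rng(0)

n, T = 200_000, 1.0                  # number of time steps and horizon
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Martingale part: left-endpoint Riemann sums for the Ito integral
# of sgn(B_s) dB_s (np.sign(0) = 0 matches the convention above).
S = np.sum(np.sign(B[:-1]) * dB)

# Local time at 0, approximated by the occupation-time formula
# L_T ~ (1/2eps) * |{s <= T : B_s in (-eps, eps)}|.
eps = 0.05
L = dt * np.count_nonzero(np.abs(B[:-1]) < eps) / (2 * eps)

# Tanaka's formula predicts |B_T| = S + L_T, up to discretization error.
print(abs(B[-1]), S + L)
```

Both estimators converge slowly, so the two printed values agree only roughly for moderate step counts; the increasing part L is what lifts the Itō integral up to |B_T|.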
Tanaka equation In mathematics, Tanaka's equation is an example of a stochastic differential equation which admits a weak solution but has no strong solution. It is named after the Japanese mathematician Hiroshi Tanaka (Tanaka Hiroshi). Not to be confused with Tanaka's formula. Tanaka's equation is the one-dimensional stochastic differential equation $\mathrm {d} X_{t}=\operatorname {sgn}(X_{t})\,\mathrm {d} B_{t},$ driven by canonical Brownian motion B, with initial condition X0 = 0, where sgn denotes the sign function $\operatorname {sgn}(x)={\begin{cases}+1,&x\geq 0;\\-1,&x<0.\end{cases}}$ (Note the unconventional value for sgn(0).) The signum function does not satisfy the Lipschitz continuity condition required for the usual theorems guaranteeing existence and uniqueness of strong solutions. The Tanaka equation has no strong solution, i.e. one for which the version B of Brownian motion is given in advance and the solution X is adapted to the filtration generated by B and the initial conditions. However, the Tanaka equation does have a weak solution, one for which the process X and version of Brownian motion are both specified as part of the solution, rather than the Brownian motion being given a priori. In this case, simply choose X to be any Brownian motion ${\hat {B}}$ and define ${\tilde {B}}$ by ${\tilde {B}}_{t}=\int _{0}^{t}\operatorname {sgn} {\big (}{\hat {B}}_{s}{\big )}\,\mathrm {d} {\hat {B}}_{s}=\int _{0}^{t}\operatorname {sgn} {\big (}X_{s}{\big )}\,\mathrm {d} X_{s},$ i.e. $\mathrm {d} {\tilde {B}}_{t}=\operatorname {sgn}(X_{t})\,\mathrm {d} X_{t}.$ Hence, $\mathrm {d} X_{t}=\operatorname {sgn}(X_{t})\,\mathrm {d} {\tilde {B}}_{t},$ and so X is a weak solution of the Tanaka equation. Furthermore, this solution is weakly unique, i.e. any other weak solution must have the same law. Another counterexample of this type is Tsirelson's stochastic differential equation. References • Øksendal, Bernt K. (2003). 
Stochastic Differential Equations: An Introduction with Applications (Sixth ed.). Berlin: Springer. ISBN 3-540-04758-1. (Example 5.3.2)
Contact graph In the mathematical area of graph theory, a contact graph or tangency graph is a graph whose vertices are represented by geometric objects (e.g. curves, line segments, or polygons), and whose edges correspond to two objects touching (but not crossing) according to some specified notion.[1] It is similar to the notion of an intersection graph but differs from it in restricting the ways that the underlying objects are allowed to intersect each other. The circle packing theorem[2] states that every planar graph can be represented as a contact graph of circles. The contact graphs of unit circles are called penny graphs.[3] Representations as contact graphs of triangles,[4] rectangles,[5] squares,[6] line segments,[7] or circular arcs[8] have also been studied. References 1. Chaplick, Steven; Kobourov, Stephen G.; Ueckerdt, Torsten (2013), "Equilateral L-contact graphs", in Brandstädt, Andreas; Jansen, Klaus; Reischuk, Rüdiger (eds.), Graph-Theoretic Concepts in Computer Science - 39th International Workshop, WG 2013, Lübeck, Germany, June 19-21, 2013, Revised Papers, Lecture Notes in Computer Science, vol. 8165, Springer, pp. 139–151, arXiv:1303.1279, doi:10.1007/978-3-642-45043-3_13, S2CID 13541242 2. Koebe, Paul (1936), "Kontaktprobleme der Konformen Abbildung", Ber. Sächs. Akad. Wiss. Leipzig, Math.-Phys. Kl., 88: 141–164 3. Pisanski, Tomaž; Randić, Milan (2000), "Bridges between geometry and graph theory" (PDF), in Gorini, Catherine A. (ed.), Geometry at Work, MAA Notes, vol. 53, Cambridge University Press, pp. 174–194, MR 1782654; see especially p. 176 4. de Fraysseix, Hubert; Ossona de Mendez, Patrice; Rosenstiehl, Pierre (1994), "On triangle contact graphs", Combinatorics, Probability and Computing, 3 (2): 233–246, doi:10.1017/S0963548300001139, MR 1288442, S2CID 46160405 5. 
Buchsbaum, Adam L.; Gansner, Emden R.; Procopiuc, Cecilia M.; Venkatasubramanian, Suresh (2008), "Rectangular layouts and contact graphs", ACM Transactions on Algorithms, 4 (1): Art. 8, 28, arXiv:cs/0611107, doi:10.1145/1328911.1328919, MR 2398588, S2CID 1038771 6. Klawitter, Jonathan; Nöllenburg, Martin; Ueckerdt, Torsten (2015), "Combinatorial properties of triangle-free rectangle arrangements and the squarability problem", Graph Drawing and Network Visualization: 23rd International Symposium, GD 2015, Los Angeles, CA, USA, September 24-26, 2015, Revised Selected Papers, Lecture Notes in Computer Science, vol. 9411, Springer, pp. 231–244, arXiv:1509.00835, doi:10.1007/978-3-319-27261-0_20, S2CID 18477964 7. Hliněný, Petr (2001), "Contact graphs of line segments are NP-complete" (PDF), Discrete Mathematics, 235 (1–3): 95–106, doi:10.1016/S0012-365X(00)00263-6, MR 1829839 8. Alam, Md. Jawaherul; Eppstein, David; Kaufmann, Michael; Kobourov, Stephen G.; Pupyrev, Sergey; Schulz, André; Ueckerdt, Torsten (2015), "Contact graphs of circular arcs", Algorithms and Data Structures: 14th International Symposium, WADS 2015, Victoria, BC, Canada, August 5-7, 2015, Proceedings, Lecture Notes in Computer Science, vol. 9214, Springer, pp. 1–13, arXiv:1501.00318, doi:10.1007/978-3-319-21840-3_1, S2CID 6454732
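The defining condition, edges for touching but not crossing objects, can be sketched for circles (an illustrative example; the tolerance and the test configuration are arbitrary choices, and the tangency test shown is a simplification for circles only):

```python
import math
from itertools import combinations

def contact_graph(circles, tol=1e-9):
    # circles: list of (cx, cy, r); add an edge when two circles touch
    # (externally: d = r1 + r2, or internally: d = |r1 - r2|) without crossing.
    edges = set()
    for (i, (x1, y1, r1)), (j, (x2, y2, r2)) in combinations(enumerate(circles), 2):
        d = math.hypot(x2 - x1, y2 - y1)
        if abs(d - (r1 + r2)) < tol or abs(d - abs(r1 - r2)) < tol:
            edges.add((i, j))
    return edges

# Three unit circles ("pennies") centered on an equilateral triangle of
# side 2 are mutually tangent, so their contact graph is a triangle.
pennies = [(0.0, 0.0, 1.0), (2.0, 0.0, 1.0), (1.0, math.sqrt(3.0), 1.0)]
print(sorted(contact_graph(pennies)))   # [(0, 1), (0, 2), (1, 2)]
```

This is exactly the penny-graph construction mentioned above: unit circles with tangency as adjacency.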
Tangent–secant theorem The tangent–secant theorem describes the relation of line segments created by a secant and a tangent line with the associated circle. This result is found as Proposition 36 in Book 3 of Euclid's Elements. Given a secant g intersecting the circle at points G1 and G2 and a tangent t intersecting the circle at point T, and given that g and t intersect at point P, the following equation holds: $|PT|^{2}=|PG_{1}|\cdot |PG_{2}|$ The tangent–secant theorem can be proven using similar triangles (see graphic). Like the intersecting chords theorem and the intersecting secants theorem, the tangent–secant theorem represents one of the three basic cases of a more general theorem about two intersecting lines and a circle, namely, the power of a point theorem. References • S. Gottwald: The VNR Concise Encyclopedia of Mathematics. Springer, 2012, ISBN 9789401169820, pp. 175-176 • Michael L. O'Leary: Revolutions in Geometry. Wiley, 2010, ISBN 9780470591796, p. 161 • Schülerduden - Mathematik I. Bibliographisches Institut & F.A. Brockhaus, 8. Auflage, Mannheim 2008, ISBN 978-3-411-04208-1, pp. 415-417 (German) External links • Tangent Secant Theorem at proofwiki.org • Power of a Point Theorem at cut-the-knot.org • Weisstein, Eric W. "Chord". MathWorld.
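The identity is easy to verify numerically (an illustrative check; the circle, the external point P, and the secant direction are arbitrary choices):

```python
import math

# Unit circle centered at the origin; external point P on the x-axis.
P = (3.0, 0.0)
r = 1.0

# Tangent length: |PT|^2 = |PO|^2 - r^2, since OT is perpendicular to PT.
PT_sq = P[0]**2 + P[1]**2 - r**2

# A secant through P in direction d = (cos a, sin a): substitute
# P + t*d into x^2 + y^2 = r^2 and solve the quadratic for t.
a = 0.1                                  # any direction that meets the circle
d = (math.cos(a), math.sin(a))
b = 2 * (P[0]*d[0] + P[1]*d[1])
c = P[0]**2 + P[1]**2 - r**2
disc = b*b - 4*c                         # must be >= 0 for an actual secant
t1 = (-b + math.sqrt(disc)) / 2
t2 = (-b - math.sqrt(disc)) / 2

# |PG1| * |PG2| = |t1 * t2| because |d| = 1.
print(PT_sq, abs(t1 * t2))               # both equal the power of the point
```

Both quantities equal the power of the point P with respect to the circle (here 3² − 1² = 8), independent of the secant's direction, which is exactly the content of the theorem.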
Vector field In vector calculus and physics, a vector field is an assignment of a vector to each point in a space, most commonly Euclidean space $\mathbb {R} ^{n}$.[1] A vector field on a plane can be visualized as a collection of arrows with given magnitudes and directions, each attached to a point on the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout three-dimensional space, such as the wind, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from one point to another. The elements of differential and integral calculus extend naturally to vector fields. When a vector field represents force, the line integral of a vector field represents the work done by a force moving along a path, and under this interpretation conservation of energy is exhibited as a special case of the fundamental theorem of calculus. Vector fields can usefully be thought of as representing the velocity of a moving flow in space, and this physical intuition leads to notions such as the divergence (which represents the rate of change of volume of a flow) and curl (which represents the rotation of a flow). A vector field is a special case of a vector-valued function, whose domain's dimension has no relation to the dimension of its range; for example, the position vector of a space curve is defined only for a smaller subset of the ambient space. In coordinates, a vector field on a domain in n-dimensional Euclidean space $\mathbb {R} ^{n}$ can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain. This representation of a vector field depends on the coordinate system, and there is a well-defined transformation law (covariance and contravariance of vectors) in passing from one coordinate system to the other.
Vector fields are often discussed on open subsets of Euclidean space, but also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point (a tangent vector). More generally, vector fields are defined on differentiable manifolds, which are spaces that look like Euclidean space on small scales, but may have more complicated structure on larger scales. In this setting, a vector field gives a tangent vector at each point of the manifold (that is, a section of the tangent bundle to the manifold). Vector fields are one kind of tensor field. Definition Vector fields on subsets of Euclidean space Figure: Two representations of the same vector field: v(x, y) = −r. The arrows depict the field at discrete points; however, the field exists everywhere. Given a subset S of Rn, a vector field is represented by a vector-valued function V: S → Rn in standard Cartesian coordinates (x1, …, xn). If each component of V is continuous, then V is a continuous vector field. It is common to focus on smooth vector fields, meaning that each component is a smooth function (differentiable any number of times). A vector field can be visualized as assigning a vector to individual points within an n-dimensional space.[1] One standard notation is to write ${\frac {\partial }{\partial x_{1}}},\ldots ,{\frac {\partial }{\partial x_{n}}}$ for the unit vectors in the coordinate directions. In these terms, every smooth vector field $V$ on an open subset $S$ of ${\mathbf {R} }^{n}$ can be written as $\sum _{i=1}^{n}V_{i}(x_{1},\ldots ,x_{n}){\frac {\partial }{\partial x_{i}}}$ for some smooth functions $V_{1},\ldots ,V_{n}$ on $S$.[2] The reason for this notation is that a vector field determines a linear map from the space of smooth functions to itself, $V\colon C^{\infty }(S)\to C^{\infty }(S)$, given by differentiating in the direction of the vector field.
Example: The vector field $-x_{2}{\frac {\partial }{\partial x_{1}}}+x_{1}{\frac {\partial }{\partial x_{2}}}$ describes a counterclockwise rotation around the origin in $\mathbf {R} ^{2}$. To show that the function $x_{1}^{2}+x_{2}^{2}$ is rotationally invariant, compute: ${\bigg (}-x_{2}{\frac {\partial }{\partial x_{1}}}+x_{1}{\frac {\partial }{\partial x_{2}}}{\bigg )}(x_{1}^{2}+x_{2}^{2})=-x_{2}(2x_{1})+x_{1}(2x_{2})=0.$ Given vector fields V, W defined on S and a smooth function f defined on S, the operations of scalar multiplication and vector addition, $(fV)(p):=f(p)V(p)$ $(V+W)(p):=V(p)+W(p),$ make the smooth vector fields into a module over the ring of smooth functions, where multiplication of functions is defined pointwise. Coordinate transformation law In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system. The transformation properties of vectors distinguish a vector as a geometrically distinct entity from a simple list of scalars, or from a covector. Thus, suppose that (x1, ..., xn) is a choice of Cartesian coordinates, in terms of which the components of the vector V are $V_{x}=(V_{1,x},\dots ,V_{n,x})$ and suppose that (y1,...,yn) are n functions of the xi defining a different coordinate system. Then the components of the vector V in the new coordinates are required to satisfy the transformation law $V_{i,y}=\sum _{j=1}^{n}{\frac {\partial y_{i}}{\partial x_{j}}}V_{j,x}.$ (1) Such a transformation law is called contravariant. A similar transformation law characterizes vector fields in physics: specifically, a vector field is a specification of n functions in each coordinate system subject to the transformation law (1) relating the different coordinate systems. 
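The transformation law (1) can be checked numerically for the rotation field from the example above (an illustrative sketch; the rotation angle and the sample point are arbitrary choices). For a linear coordinate change y = Rx the Jacobian ∂y_i/∂x_j is simply the matrix R:

```python
import math

# Coordinate change y = R x, a rotation by angle a; the Jacobian
# dy_i/dx_j is the rotation matrix R itself.
a = 0.7
R = [[math.cos(a), -math.sin(a)],
     [math.sin(a),  math.cos(a)]]

V_x = lambda x1, x2: (-x2, x1)          # rotation field in x-coordinates

x = (1.5, -0.3)
y = (R[0][0]*x[0] + R[0][1]*x[1], R[1][0]*x[0] + R[1][1]*x[1])

# Contravariant transformation law (1): V_{i,y} = sum_j (dy_i/dx_j) V_{j,x}
v = V_x(*x)
V_y = (R[0][0]*v[0] + R[0][1]*v[1], R[1][0]*v[0] + R[1][1]*v[1])

# This particular field is rotation-invariant, so in the new coordinates
# it is given by the same formula: V_y should equal (-y2, y1).
print(V_y, (-y[1], y[0]))
```

The two printed pairs agree because the rotation field commutes with planar rotations; for a generic field, V_y would be a different formula in the new coordinates, but it would still be determined by law (1).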
Vector fields are thus contrasted with scalar fields, which associate a number or scalar to every point in space, and are also contrasted with simple lists of scalar fields, which do not transform under coordinate changes. Vector fields on manifolds Given a differentiable manifold $M$, a vector field on $M$ is an assignment of a tangent vector to each point in $M$.[2] More precisely, a vector field $F$ is a mapping from $M$ into the tangent bundle $TM$ so that $p\circ F$ is the identity mapping where $p$ denotes the projection from $TM$ to $M$. In other words, a vector field is a section of the tangent bundle. An alternative definition: A smooth vector field $X$ on a manifold $M$ is a linear map $X:C^{\infty }(M)\to C^{\infty }(M)$ such that $X$ is a derivation: $X(fg)=fX(g)+X(f)g$ for all $f,g\in C^{\infty }(M)$.[3] If the manifold $M$ is smooth or analytic—that is, the change of coordinates is smooth (analytic)—then one can make sense of the notion of smooth (analytic) vector fields. The collection of all smooth vector fields on a smooth manifold $M$ is often denoted by $\Gamma (TM)$ or $C^{\infty }(M,TM)$ (especially when thinking of vector fields as sections); the collection of all smooth vector fields is also denoted by $ {\mathfrak {X}}(M)$ (a fraktur "X"). Examples • A vector field for the movement of air on Earth will associate for every point on the surface of the Earth a vector with the wind speed and direction for that point. This can be drawn using arrows to represent the wind; the length (magnitude) of the arrow will be an indication of the wind speed. A "high" on the usual barometric pressure map would then act as a source (arrows pointing away), and a "low" would be a sink (arrows pointing towards), since air tends to move from high pressure areas to low pressure areas. • Velocity field of a moving fluid. In this case, a velocity vector is associated to each point in the fluid. 
• Streamlines, streaklines and pathlines are three types of lines that can be made from (time-dependent) vector fields. They are: • streaklines: the line produced by particles passing through a specific fixed point over various times • pathlines: showing the path that a given particle (of zero mass) would follow. • streamlines (or fieldlines): the path of a particle influenced by the instantaneous field (i.e., the path of a particle if the field is held fixed). • Magnetic fields. The fieldlines can be revealed using small iron filings. • Maxwell's equations allow us to use a given set of initial and boundary conditions to deduce, for every point in Euclidean space, a magnitude and direction for the force experienced by a charged test particle at that point; the resulting vector field is the electromagnetic field. • A gravitational field generated by any massive object is also a vector field. For example, the gravitational field vectors for a spherically symmetric body would all point towards the sphere's center with the magnitude of the vectors reducing as radial distance from the body increases. Gradient field in Euclidean spaces Further information: Gradient Vector fields can be constructed out of scalar fields using the gradient operator (denoted by the del: ∇).[4] A vector field V defined on an open set S is called a gradient field or a conservative field if there exists a real-valued function (a scalar field) f on S such that $V=\nabla f=\left({\frac {\partial f}{\partial x_{1}}},{\frac {\partial f}{\partial x_{2}}},{\frac {\partial f}{\partial x_{3}}},\dots ,{\frac {\partial f}{\partial x_{n}}}\right).$ The associated flow is called the gradient flow, and is used in the method of gradient descent.
The path integral along any closed curve γ (γ(0) = γ(1)) in a conservative field is zero: $\oint _{\gamma }V(\mathbf {x} )\cdot \mathrm {d} \mathbf {x} =\oint _{\gamma }\nabla f(\mathbf {x} )\cdot \mathrm {d} \mathbf {x} =f(\gamma (1))-f(\gamma (0))=0.$ Central field in Euclidean spaces A C∞-vector field over Rn \ {0} is called a central field if $V(T(p))=T(V(p))\qquad (T\in \mathrm {O} (n,\mathbb {R} ))$ where O(n, R) is the orthogonal group. We say central fields are invariant under orthogonal transformations around 0. The point 0 is called the center of the field. Since orthogonal transformations are actually rotations and reflections, the invariance conditions mean that vectors of a central field are always directed towards, or away from, 0; this is an alternate (and simpler) definition. A central field is always a gradient field, since defining it on one semiaxis and integrating gives an antigradient. Operations on vector fields Line integral Main article: Line integral A common technique in physics is to integrate a vector field along a curve, also called determining its line integral. Intuitively this is summing up all vector components in line with the tangents to the curve, expressed as their scalar products. For example, given a particle in a force field (e.g. gravitation), where each vector at some point in space represents the force acting there on the particle, the line integral along a certain path is the work done on the particle when it travels along this path. Intuitively, it is the sum of the scalar products of the force vector and the small tangent vector in each point along the curve. The line integral is constructed analogously to the Riemann integral and it exists if the curve is rectifiable (has finite length) and the vector field is continuous.
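The closed-curve property of gradient fields stated above can be checked with a simple Riemann-sum approximation (an illustrative sketch; the potential f and the curve are arbitrary choices):

```python
import math

# f(x, y) = x^2 + 3y^2, so grad f = (2x, 6y).
grad_f = lambda x, y: (2 * x, 6 * y)

# Closed curve: the unit circle gamma(t) = (cos t, sin t), t in [0, 2*pi].
# Approximate the line integral of grad f along gamma by a Riemann sum.
n = 10_000
total = 0.0
for i in range(n):
    t = 2 * math.pi * i / n
    x, y = math.cos(t), math.sin(t)
    vx, vy = grad_f(x, y)
    # gamma'(t) dt = (-sin t, cos t) * (2*pi/n)
    dx, dy = -math.sin(t) * (2 * math.pi / n), math.cos(t) * (2 * math.pi / n)
    total += vx * dx + vy * dy

print(total)   # ~0: a gradient field integrates to zero over a closed curve
```

Replacing grad_f with a non-conservative field, for example (−y, x), makes the same loop integral nonzero, which is one quick way to detect that a field is not a gradient.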
Given a vector field V and a curve γ, parametrized by t in [a, b] (where a and b are real numbers), the line integral is defined as $\int _{\gamma }V(\mathbf {x} )\cdot \mathrm {d} \mathbf {x} =\int _{a}^{b}V(\gamma (t))\cdot {\dot {\gamma }}(t)\,\mathrm {d} t.$ To show vector field topology one can use line integral convolution. Divergence Main article: Divergence The divergence of a vector field on Euclidean space is a function (or scalar field). In three-dimensions, the divergence is defined by $\operatorname {div} \mathbf {F} =\nabla \cdot \mathbf {F} ={\frac {\partial F_{1}}{\partial x}}+{\frac {\partial F_{2}}{\partial y}}+{\frac {\partial F_{3}}{\partial z}},$ with the obvious generalization to arbitrary dimensions. The divergence at a point represents the degree to which a small volume around the point is a source or a sink for the vector flow, a result which is made precise by the divergence theorem. The divergence can also be defined on a Riemannian manifold, that is, a manifold with a Riemannian metric that measures the length of vectors. Curl in three dimensions Main article: Curl (mathematics) The curl is an operation which takes a vector field and produces another vector field. The curl is defined only in three dimensions, but some properties of the curl can be captured in higher dimensions with the exterior derivative. In three dimensions, it is defined by $\operatorname {curl} \mathbf {F} =\nabla \times \mathbf {F} =\left({\frac {\partial F_{3}}{\partial y}}-{\frac {\partial F_{2}}{\partial z}}\right)\mathbf {e} _{1}-\left({\frac {\partial F_{3}}{\partial x}}-{\frac {\partial F_{1}}{\partial z}}\right)\mathbf {e} _{2}+\left({\frac {\partial F_{2}}{\partial x}}-{\frac {\partial F_{1}}{\partial y}}\right)\mathbf {e} _{3}.$ The curl measures the density of the angular momentum of the vector flow at a point, that is, the amount to which the flow circulates around a fixed axis. This intuitive description is made precise by Stokes' theorem. 
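The coordinate formulas for divergence and curl can be checked against central finite differences (an illustrative sketch; the sample field F and the evaluation point are arbitrary choices):

```python
# F(x, y, z) = (x^2*y, y*z, z*x); analytically
#   div F  = 2xy + z + x
#   curl F = (-y, -z, -x^2)
def F(x, y, z):
    return (x * x * y, y * z, z * x)

def num_div_curl(x, y, z, h=1e-5):
    # central differences for the nine partials dF_i/dx_j
    def d(i, j):
        p = [x, y, z]; q = [x, y, z]
        p[j] += h; q[j] -= h
        return (F(*p)[i] - F(*q)[i]) / (2 * h)
    div = d(0, 0) + d(1, 1) + d(2, 2)
    curl = (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))
    return div, curl

div, curl = num_div_curl(1.0, 2.0, 3.0)
print(div)    # ~ 2*1*2 + 3 + 1 = 8
print(curl)   # ~ (-2, -3, -1)
```

The numeric and analytic values agree to roughly the square of the step size h, as expected for central differences.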
Index of a vector field The index of a vector field is an integer that helps describe its behaviour around an isolated zero (i.e., an isolated singularity of the field). In the plane, the index takes the value −1 at a saddle singularity but +1 at a source or sink singularity. Let n be the dimension of the manifold on which the vector field is defined. Take a closed surface (homeomorphic to the (n-1)-sphere) S around the zero, so that no other zeros lie in the interior of S. A map from this sphere to a unit sphere of dimension n − 1 can be constructed by dividing each vector on this sphere by its length to form a unit length vector, which is a point on the unit sphere Sn−1. This defines a continuous map from S to Sn−1. The index of the vector field at the point is the degree of this map. It can be shown that this integer does not depend on the choice of S, and therefore depends only on the vector field itself. The index is not defined at any non-singular point (i.e., a point where the vector is non-zero). It is equal to +1 around a source, and more generally equal to (−1)k around a saddle that has k contracting dimensions and n−k expanding dimensions. The index of the vector field as a whole is defined when it has just finitely many zeroes. In this case, all zeroes are isolated, and the index of the vector field is defined to be the sum of the indices at all zeroes. For an ordinary (2-dimensional) sphere in three-dimensional space, it can be shown that the index of any vector field on the sphere must be 2. This shows that every such vector field must have a zero. This implies the hairy ball theorem. For a vector field on a compact manifold with finitely many zeroes, the Poincaré-Hopf theorem states that the vector field’s index is the manifold’s Euler characteristic. Physical intuition Michael Faraday, in his concept of lines of force, emphasized that the field itself should be an object of study, which it has become throughout physics in the form of field theory. 
In addition to the magnetic field, other phenomena that were modeled by Faraday include the electrical field and light field. In recent decades many phenomenological formulations of irreversible dynamics and evolution equations in physics, from the mechanics of complex fluids and solids to chemical kinetics and quantum thermodynamics, have converged towards the geometric idea of "steepest entropy ascent" or "gradient flow" as a consistent universal modeling framework that guarantees compatibility with the second law of thermodynamics and extends well-known near-equilibrium results such as Onsager reciprocity to the far-nonequilibrium realm.[5] Flow curves Main article: Integral curve Consider the flow of a fluid through a region of space. At any given time, any point of the fluid has a particular velocity associated with it; thus there is a vector field associated to any flow. The converse is also true: it is possible to associate a flow to a vector field having that vector field as its velocity. Given a vector field $V$ defined on $S$, one defines curves $\gamma (t)$ on $S$ such that for each $t$ in an interval $I$, $\gamma '(t)=V(\gamma (t))\,.$ By the Picard–Lindelöf theorem, if $V$ is Lipschitz continuous there is a unique $C^{1}$-curve $\gamma _{x}$ for each point $x$ in $S$ so that, for some $\varepsilon >0$, ${\begin{aligned}\gamma _{x}(0)&=x\\\gamma '_{x}(t)&=V(\gamma _{x}(t))\qquad \forall t\in (-\varepsilon ,+\varepsilon )\subset \mathbb {R} .\end{aligned}}$ The curves $\gamma _{x}$ are called integral curves or trajectories (or less commonly, flow lines) of the vector field $V$ and partition $S$ into equivalence classes. It is not always possible to extend the interval $(-\varepsilon ,+\varepsilon )$ to the whole real number line. The flow may for example reach the edge of $S$ in a finite time. In two or three dimensions one can visualize the vector field as giving rise to a flow on $S$. 
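An integral curve γ with γ′(t) = V(γ(t)) can be traced numerically; a minimal sketch using a classical fourth-order Runge–Kutta step (the helper name is illustrative):

```python
import math

def integral_curve(V, x0, t_end, steps=1000):
    """Approximate gamma(t_end) where gamma(0) = x0 and gamma' = V(gamma),
    using the classical fourth-order Runge-Kutta method."""
    h = t_end / steps
    p = list(x0)
    n = len(p)
    for _ in range(steps):
        k1 = V(*p)
        k2 = V(*[p[i] + 0.5 * h * k1[i] for i in range(n)])
        k3 = V(*[p[i] + 0.5 * h * k2[i] for i in range(n)])
        k4 = V(*[p[i] + h * k3[i] for i in range(n)])
        p = [p[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(n)]
    return p

# V(x, y) = (-y, x): the integral curves are circles; with gamma(0) = (1, 0),
# the exact trajectory is gamma(t) = (cos t, sin t)
x, y = integral_curve(lambda x, y: (-y, x), (1.0, 0.0), math.pi / 2)
print(x, y)   # ≈ 0.0 1.0, a quarter turn around the circle
```

Here the field is Lipschitz continuous, so by the Picard–Lindelöf theorem the trajectory being approximated is unique.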
If we drop a particle into this flow at a point $p$ it will move along the curve $\gamma _{p}$ in the flow depending on the initial point $p$. If $p$ is a stationary point of $V$ (i.e., the vector field is equal to the zero vector at the point $p$), then the particle will remain at $p$. Typical applications are pathline in fluid, geodesic flow, and one-parameter subgroups and the exponential map in Lie groups. Complete vector fields By definition, a vector field on $M$ is called complete if each of its flow curves exists for all time.[6] In particular, compactly supported vector fields on a manifold are complete. If $X$ is a complete vector field on $M$, then the one-parameter group of diffeomorphisms generated by the flow along $X$ exists for all time; it is described by a smooth mapping $\mathbf {R} \times M\to M.$ On a compact manifold without boundary, every smooth vector field is complete. An example of an incomplete vector field $V$ on the real line $\mathbb {R} $ is given by $V(x)=x^{2}$. For, the differential equation $ x'(t)=x^{2}$, with initial condition $x(0)=x_{0}$, has as its unique solution $ x(t)={\frac {x_{0}}{1-tx_{0}}}$ if $x_{0}\neq 0$ (and $x(t)=0$ for all $t\in \mathbb {R} $ if $x_{0}=0$). Hence for $x_{0}\neq 0$, $x(t)$ is undefined at $ t={\frac {1}{x_{0}}}$ so cannot be defined for all values of $t$. The Lie bracket The flows associated to two vector fields need not commute with each other. Their failure to commute is described by the Lie bracket of two vector fields, which is again a vector field. The Lie bracket has a simple definition in terms of the action of vector fields on smooth functions $f$: $[X,Y](f):=X(Y(f))-Y(X(f)).$ f-relatedness Given a smooth function between manifolds, $f:M\to N$, the derivative is an induced map on tangent bundles, $f_{*}:TM\to TN$. Given vector fields $V:M\to TM$ and $W:N\to TN$, we say that $W$ is $f$-related to $V$ if the equation $W\circ f=f_{*}\circ V$ holds. 
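Returning to the incomplete field V(x) = x² above, the closed-form solution and its finite-time blow-up can be checked directly:

```python
# x'(t) = x^2 with x(0) = x0 has solution x(t) = x0 / (1 - t*x0),
# which blows up at t = 1/x0: the field V(x) = x^2 is not complete.
x0 = 2.0
x = lambda t: x0 / (1 - t * x0)

# verify the ODE at t = 0.3 with a central difference
h = 1e-6
t = 0.3
lhs = (x(t + h) - x(t - h)) / (2 * h)   # numerical x'(t)
rhs = x(t) ** 2                          # should equal x'(t)
print(lhs, rhs)                          # both ≈ 25

# the solution grows without bound as t approaches 1/x0 = 0.5
print(x(0.499))                          # ≈ 1000
```

So the flow starting at x₀ = 2 exists only for t < 1/2, confirming that completeness can fail even for a smooth field on the whole real line.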
If $V_{i}$ is $f$-related to $W_{i}$, $i=1,2$, then the Lie bracket $[V_{1},V_{2}]$ is $f$-related to $[W_{1},W_{2}]$. Generalizations Replacing vectors by p-vectors (pth exterior power of vectors) yields p-vector fields; taking the dual space and exterior powers yields differential k-forms, and combining these yields general tensor fields. Algebraically, vector fields can be characterized as derivations of the algebra of smooth functions on the manifold, which leads to defining a vector field on a commutative algebra as a derivation on the algebra, which is developed in the theory of differential calculus over commutative algebras. See also • Eisenbud–Levine–Khimshiashvili signature formula • Field line • Field strength • Gradient flow and balanced flow in atmospheric dynamics • Lie derivative • Scalar field • Time-dependent vector field • Vector fields in cylindrical and spherical coordinates • Tensor fields References 1. Galbis, Antonio; Maestre, Manuel (2012). Vector Analysis Versus Vector Calculus. Springer. p. 12. ISBN 978-1-4614-2199-3. 2. Tu, Loring W. (2010). "Vector fields". An Introduction to Manifolds. Springer. p. 149. ISBN 978-1-4419-7399-3. 3. Lerman, Eugene (August 19, 2011). "An Introduction to Differential Geometry" (PDF). Definition 3.23. 4. Dawber, P.G. (1987). Vectors and Vector Operators. CRC Press. p. 29. ISBN 978-0-85274-585-4. 5. Beretta, Gian Paolo (2020-05-01). "The fourth law of thermodynamics: steepest entropy ascent". Philosophical Transactions of the Royal Society A. 378 (2170): 20190168. arXiv:1908.05768. Bibcode:2020RSPTA.37890168B. doi:10.1098/rsta.2019.0168. ISSN 1471-2962. S2CID 201058607. 6. Sharpe, R. (1997). Differential geometry. Springer-Verlag. ISBN 0-387-94732-9. Bibliography • Hubbard, J. H.; Hubbard, B. B. (1999). Vector calculus, linear algebra, and differential forms. A unified approach. Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-657446-7. • Warner, Frank (1983) [1971]. 
Foundations of differentiable manifolds and Lie groups. New York-Berlin: Springer-Verlag. ISBN 0-387-90894-3. • Boothby, William (1986). An introduction to differentiable manifolds and Riemannian geometry. Pure and Applied Mathematics, volume 120 (second ed.). Orlando, FL: Academic Press. ISBN 0-12-116053-X. External links Wikimedia Commons has media related to Vector fields. • Online Vector Field Editor • "Vector field", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Vector field — Mathworld • Vector field — PlanetMath • 3D Magnetic field viewer • Vector fields and field lines • Vector field simulation An interactive application to show the effects of vector fields
Tangent In geometry, the tangent line (or simply tangent) to a plane curve at a given point is the straight line that "just touches" the curve at that point. Leibniz defined it as the line through a pair of infinitely close points on the curve.[1] More precisely, a straight line is said to be a tangent of a curve y = f(x) at a point x = c if the line passes through the point (c, f(c)) on the curve and has slope f'(c), where f' is the derivative of f. A similar definition applies to space curves and curves in n-dimensional Euclidean space. For the tangent function, see Tangent (trigonometry). For other uses, see Tangent (disambiguation). As it passes through the point where the tangent line and the curve meet, called the point of tangency, the tangent line is "going in the same direction" as the curve, and is thus the best straight-line approximation to the curve at that point. The tangent line to a point on a differentiable curve can also be thought of as a tangent line approximation, the graph of the affine function that best approximates the original function at the given point.[2] Similarly, the tangent plane to a surface at a given point is the plane that "just touches" the surface at that point. The concept of a tangent is one of the most fundamental notions in differential geometry and has been extensively generalized; see Tangent space. The word "tangent" comes from the Latin tangere, "to touch". History Euclid makes several references to the tangent (ἐφαπτομένη ephaptoménē) to a circle in book III of the Elements (c. 300 BC).[3] In Apollonius' work Conics (c. 225 BC) he defines a tangent as being a line such that no other straight line could fall between it and the curve.[4] Archimedes (c.  287 – c.  
212 BC) found the tangent to an Archimedean spiral by considering the path of a point moving along the curve.[4] In the 1630s Fermat developed the technique of adequality to calculate tangents and other problems in analysis and used this to calculate tangents to the parabola. The technique of adequality is similar to taking the difference between $f(x+h)$ and $f(x)$ and dividing by a power of $h$. Independently Descartes used his method of normals based on the observation that the radius of a circle is always normal to the circle itself.[5] These methods led to the development of differential calculus in the 17th century. Many people contributed. Roberval discovered a general method of drawing tangents, by considering a curve as described by a moving point whose motion is the resultant of several simpler motions.[6] René-François de Sluse and Johannes Hudde found algebraic algorithms for finding tangents.[7] Further developments included those of John Wallis and Isaac Barrow, leading to the theory of Isaac Newton and Gottfried Leibniz. An 1828 definition of a tangent was "a right line which touches a curve, but which when produced, does not cut it".[8] This old definition prevents inflection points from having any tangent. It has been superseded, and the modern definitions are equivalent to those of Leibniz, who defined the tangent line as the line through a pair of infinitely close points on the curve. Tangent line to a plane curve Further information: Differentiable curve § Tangent vector, and Frenet–Serret formulas The intuitive notion that a tangent line "touches" a curve can be made more explicit by considering the sequence of straight lines (secant lines) passing through two points, A and B, that lie on the curve. The tangent at A is the limiting position of the secant line as B tends to A. The existence and uniqueness of the tangent line depend on a certain type of mathematical smoothness, known as "differentiability."
For example, if two circular arcs meet at a sharp point (a vertex) then there is no uniquely defined tangent at the vertex because the limit of the progression of secant lines depends on the direction in which "point B" approaches the vertex. At most points, the tangent touches the curve without crossing it (though it may, when continued, cross the curve at other places away from the point of tangency). A point where the tangent (at this point) crosses the curve is called an inflection point. Circles, parabolas, hyperbolas and ellipses do not have any inflection point, but more complicated curves do, such as the graph of a cubic function, which has exactly one inflection point, or a sinusoid, which has two inflection points per period of the sine. Conversely, it may happen that the curve lies entirely on one side of a straight line passing through a point on it, and yet this straight line is not a tangent line. This is the case, for example, for a line passing through the vertex of a triangle and not intersecting it otherwise, where the tangent line does not exist for the reasons explained above. In convex geometry, such lines are called supporting lines. Analytical approach The geometrical idea of the tangent line as the limit of secant lines serves as the motivation for analytical methods that are used to find tangent lines explicitly. The question of finding the tangent line to a graph, or the tangent line problem, was one of the central questions leading to the development of calculus in the 17th century. In the second book of his Geometry, René Descartes[9] said of the problem of constructing the tangent to a curve, "And I dare say that this is not only the most useful and most general problem in geometry that I know, but even that I have ever desired to know".[10] Intuitive description Suppose that a curve is given as the graph of a function, y = f(x).
To find the tangent line at the point p = (a, f(a)), consider another nearby point q = (a + h, f(a + h)) on the curve. The slope of the secant line passing through p and q is equal to the difference quotient ${\frac {f(a+h)-f(a)}{h}}.$ As the point q approaches p, which corresponds to making h smaller and smaller, the difference quotient should approach a certain limiting value k, which is the slope of the tangent line at the point p. If k is known, the equation of the tangent line can be found in the point-slope form: $y-f(a)=k(x-a).\,$ More rigorous description To make the preceding reasoning rigorous, one has to explain what is meant by the difference quotient approaching a certain limiting value k. The precise mathematical formulation was given by Cauchy in the 19th century and is based on the notion of limit. Suppose that the graph does not have a break or a sharp edge at p and it is neither plumb nor too wiggly near p. Then there is a unique value of k such that, as h approaches 0, the difference quotient gets closer and closer to k, and the distance between them becomes negligible compared with the size of h, if h is small enough. This leads to the definition of the slope of the tangent line to the graph as the limit of the difference quotients for the function f. This limit is the derivative of the function f at x = a, denoted f ′(a). Using derivatives, the equation of the tangent line can be stated as follows: $y=f(a)+f'(a)(x-a).\,$ Calculus provides rules for computing the derivatives of functions that are given by formulas, such as the power function, trigonometric functions, exponential function, logarithm, and their various combinations. Thus, equations of the tangents to graphs of all these functions, as well as many others, can be found by the methods of calculus. How the method can fail Calculus also demonstrates that there are functions and points on their graphs for which the limit determining the slope of the tangent line does not exist. 
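The convergence of the difference quotient to the slope k, and the resulting point-slope tangent line, can be observed numerically; a minimal sketch:

```python
# secant slopes (f(a+h) - f(a)) / h approaching the tangent slope k = f'(a)
f = lambda x: x * x
a = 3.0
for h in (1.0, 0.1, 0.01, 0.001):
    print((f(a + h) - f(a)) / h)   # ≈ 7.0, 6.1, 6.01, 6.001 -> k = 6

# point-slope form of the tangent line at p = (a, f(a)): y = f(a) + k*(x - a)
k = 6.0
tangent = lambda x: f(a) + k * (x - a)
print(tangent(3.0), tangent(4.0))   # 9.0 15.0
```

For f(x) = x² the secant slope is exactly 6 + h, so the printed values march toward the limit k = 6 as h shrinks.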
For these points the function f is non-differentiable. There are two possible reasons for the method of finding the tangents based on the limits and derivatives to fail: either the geometric tangent exists, but it is a vertical line, which cannot be given in the point-slope form since it does not have a slope, or the graph exhibits one of three behaviors that preclude a geometric tangent. The graph y = x^(1/3) illustrates the first possibility: here the difference quotient at a = 0 is equal to h^(1/3)/h = h^(−2/3), which becomes arbitrarily large as h approaches 0. This curve has a tangent line at the origin that is vertical. The graph y = x^(2/3) illustrates another possibility: this graph has a cusp at the origin. This means that, when h approaches 0, the difference quotient at a = 0 approaches plus or minus infinity depending on the sign of h. Thus both branches of the curve are near to the upper half of the vertical line x = 0, but neither is near to the lower half of this line. Strictly speaking, there is no tangent at the origin in this case, but in some contexts one may consider this line as a tangent, and even, in algebraic geometry, as a double tangent. The graph y = |x| of the absolute value function consists of two straight lines with different slopes joined at the origin. As a point q approaches the origin from the right, the secant line always has slope 1. As a point q approaches the origin from the left, the secant line always has slope −1. Therefore, there is no unique tangent to the graph at the origin. Having two different (but finite) slopes is called a corner. Finally, since differentiability implies continuity, the contrapositive states discontinuity implies non-differentiability. Any such jump or point discontinuity will have no tangent line.
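Two of these failure modes are easy to observe numerically: the vertical tangent of y = x^(1/3) and the corner of y = |x|. A minimal sketch:

```python
# y = x**(1/3): the difference quotient at a = 0 is h**(-2/3), which grows
# without bound, so the tangent at the origin is the vertical line x = 0
for h in (0.1, 0.001, 0.00001):
    q = (h ** (1 / 3)) / h
    print(q)   # grows without bound as h shrinks

# y = |x|: the one-sided secant slopes disagree, so no tangent at the origin
f = abs
print((f(0.001) - f(0)) / 0.001)     # 1.0  (approach from the right)
print((f(-0.001) - f(0)) / -0.001)   # -1.0 (approach from the left)
```

The first loop shows the quotient diverging rather than settling on a slope; the second shows two distinct finite one-sided slopes, the "corner" case.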
This includes cases where one slope approaches positive infinity while the other approaches negative infinity, leading to an infinite jump discontinuity. Equations When the curve is given by y = f(x), the slope of the tangent is $dy/dx,$ so by the point–slope formula the equation of the tangent line at (X, Y) is $y-Y={\frac {dy}{dx}}(X)\cdot (x-X)$ where (x, y) are the coordinates of any point on the tangent line, and where the derivative is evaluated at $x=X$.[11] When the curve is given by y = f(x), the tangent line's equation can also be found[12] by using polynomial division to divide $f\,(x)$ by $(x-X)^{2}$; if the remainder is denoted by $g(x)$, then the equation of the tangent line is given by $y=g(x).$ When the equation of the curve is given in the form f(x, y) = 0 then the value of the slope can be found by implicit differentiation, giving ${\frac {dy}{dx}}=-{\frac {\partial f}{\partial x}}{\bigg /}{\frac {\partial f}{\partial y}}.$ The equation of the tangent line at a point (X,Y) such that f(X,Y) = 0 is then[11] ${\frac {\partial f}{\partial x}}(X,Y)\cdot (x-X)+{\frac {\partial f}{\partial y}}(X,Y)\cdot (y-Y)=0.$ This equation remains true if ${\frac {\partial f}{\partial y}}(X,Y)=0,\quad {\frac {\partial f}{\partial x}}(X,Y)\neq 0,$ in which case the slope of the tangent is infinite. If, however, ${\frac {\partial f}{\partial y}}(X,Y)={\frac {\partial f}{\partial x}}(X,Y)=0,$ the tangent line is not defined and the point (X,Y) is said to be singular. For algebraic curves, computations may be simplified somewhat by converting to homogeneous coordinates. Specifically, let the homogeneous equation of the curve be g(x, y, z) = 0 where g is a homogeneous function of degree n.
Then, if (X, Y, Z) lies on the curve, Euler's theorem implies ${\frac {\partial g}{\partial x}}\cdot X+{\frac {\partial g}{\partial y}}\cdot Y+{\frac {\partial g}{\partial z}}\cdot Z=ng(X,Y,Z)=0.$ It follows that the homogeneous equation of the tangent line is ${\frac {\partial g}{\partial x}}(X,Y,Z)\cdot x+{\frac {\partial g}{\partial y}}(X,Y,Z)\cdot y+{\frac {\partial g}{\partial z}}(X,Y,Z)\cdot z=0.$ The equation of the tangent line in Cartesian coordinates can be found by setting z=1 in this equation.[13] To apply this to algebraic curves, write f(x, y) as $f=u_{n}+u_{n-1}+\dots +u_{1}+u_{0}\,$ where each ur is the sum of all terms of degree r. The homogeneous equation of the curve is then $g=u_{n}+u_{n-1}z+\dots +u_{1}z^{n-1}+u_{0}z^{n}=0.\,$ Applying the equation above and setting z=1 produces ${\frac {\partial f}{\partial x}}(X,Y)\cdot x+{\frac {\partial f}{\partial y}}(X,Y)\cdot y+{\frac {\partial g}{\partial z}}(X,Y,1)=0$ as the equation of the tangent line.[14] The equation in this form is often simpler to use in practice since no further simplification is needed after it is applied.[13] If the curve is given parametrically by $x=x(t),\quad y=y(t)$ then the slope of the tangent is ${\frac {dy}{dx}}={\frac {dy}{dt}}{\bigg /}{\frac {dx}{dt}}$ giving the equation for the tangent line at $\,t=T,\,X=x(T),\,Y=y(T)$ as[15] ${\frac {dx}{dt}}(T)\cdot (y-Y)={\frac {dy}{dt}}(T)\cdot (x-X).$ If ${\frac {dx}{dt}}(T)={\frac {dy}{dt}}(T)=0,$ the tangent line is not defined. However, it may occur that the tangent line exists and may be computed from an implicit equation of the curve. Normal line to a curve Further information: Normal (geometry) The line perpendicular to the tangent line to a curve at the point of tangency is called the normal line to the curve at that point. 
The slopes of perpendicular lines have product −1, so if the equation of the curve is y = f(x) then the slope of the normal line is $-{\frac {1}{\frac {dy}{dx}}}$ and it follows that the equation of the normal line at (X, Y) is $(x-X)+{\frac {dy}{dx}}(y-Y)=0.$ Similarly, if the equation of the curve has the form f(x, y) = 0 then the equation of the normal line is given by[16] ${\frac {\partial f}{\partial y}}(x-X)-{\frac {\partial f}{\partial x}}(y-Y)=0.$ If the curve is given parametrically by $x=x(t),\quad y=y(t)$ then the equation of the normal line is[15] ${\frac {dx}{dt}}(x-X)+{\frac {dy}{dt}}(y-Y)=0.$ Angle between curves See also: Angle § Angles between curves The angle between two curves at a point where they intersect is defined as the angle between their tangent lines at that point. More specifically, two curves are said to be tangent at a point if they have the same tangent there, and orthogonal if their tangent lines are orthogonal.[17] Multiple tangents at a point The formulas above fail when the point is a singular point. In this case there may be two or more branches of the curve that pass through the point, each branch having its own tangent line. When the point is the origin, the equations of these lines can be found for algebraic curves by factoring the equation formed by eliminating all but the lowest degree terms from the original equation. Since any point can be made the origin by a change of variables (or by translating the curve), this gives a method for finding the tangent lines at any singular point.
For example, the equation of the limaçon trisectrix is $(x^{2}+y^{2}-2ax)^{2}=a^{2}(x^{2}+y^{2}).\,$ Expanding this and eliminating all but terms of degree 2 gives $a^{2}(3x^{2}-y^{2})=0\,$ which, when factored, becomes $y=\pm {\sqrt {3}}x.$ So these are the equations of the two tangent lines through the origin.[18] When the curve is not self-crossing, the tangent at a reference point may still not be uniquely defined because the curve is not differentiable at that point although it is differentiable elsewhere. In this case the left and right derivatives are defined as the limits of the derivative as the point at which it is evaluated approaches the reference point from respectively the left (lower values) or the right (higher values). For example, the curve y = |x| is not differentiable at x = 0: its left and right derivatives have respective slopes −1 and 1; the tangents at that point with those slopes are called the left and right tangents.[19] Sometimes the slopes of the left and right tangent lines are equal, so the tangent lines coincide. This is true, for example, for the curve y = x^(2/3), for which both the left and right derivatives at x = 0 are infinite; both the left and right tangent lines have equation x = 0. Tangent line to a space curve This section is an excerpt from Tangent vector. In mathematics, a tangent vector is a vector that is tangent to a curve or surface at a given point. Tangent vectors are described in the differential geometry of curves in the context of curves in Rn. More generally, tangent vectors are elements of a tangent space of a differentiable manifold. Tangent vectors can also be described in terms of germs. Formally, a tangent vector at the point $x$ is a linear derivation of the algebra defined by the set of germs at $x$. Tangent circles Main article: Tangent circles Two circles of non-equal radius, both in the same plane, are said to be tangent to each other if they meet at only one point.
Equivalently, two circles, with radii of ri and centers at (xi, yi), for i = 1, 2 are said to be tangent to each other if $\left(x_{1}-x_{2}\right)^{2}+\left(y_{1}-y_{2}\right)^{2}=\left(r_{1}\pm r_{2}\right)^{2}.\,$ • Two circles are externally tangent if the distance between their centres is equal to the sum of their radii. $\left(x_{1}-x_{2}\right)^{2}+\left(y_{1}-y_{2}\right)^{2}=\left(r_{1}+r_{2}\right)^{2}.\,$ • Two circles are internally tangent if the distance between their centres is equal to the difference between their radii.[20] $\left(x_{1}-x_{2}\right)^{2}+\left(y_{1}-y_{2}\right)^{2}=\left(r_{1}-r_{2}\right)^{2}.\,$ Tangent plane to a surface Further information: Differential geometry of surfaces § Tangent plane, and Parametric surface § Tangent plane See also: Normal plane (geometry) The tangent plane to a surface at a given point p is defined in an analogous way to the tangent line in the case of curves. It is the best approximation of the surface by a plane at p, and can be obtained as the limiting position of the planes passing through 3 distinct points on the surface close to p as these points converge to p. Higher-dimensional manifolds Main article: Tangent space More generally, there is a k-dimensional tangent space at each point of a k-dimensional manifold in the n-dimensional Euclidean space. See also • Newton's method • Normal (geometry) • Osculating circle • Osculating curve • Perpendicular • Subtangent • Supporting line • Tangent cone • Tangential angle • Tangential component • Tangent lines to circles • Tangent vector • Multiplicity (mathematics)#Behavior of a polynomial function near a multiple root • Algebraic curve#Tangent at a point References 1. Leibniz, G., "Nova Methodus pro Maximis et Minimis", Acta Eruditorum, Oct. 1684. 2. Dan Sloughter (2000) . "Best Affine Approximations" 3. Euclid. "Euclid's Elements". Retrieved 1 June 2015. 4. Shenk, Al. "e-CALCULUS Section 2.8" (PDF). p. 2.8. Retrieved 1 June 2015. 5. Katz, Victor J. 
(2008). A History of Mathematics (3rd ed.). Addison Wesley. p. 510. ISBN 978-0321387004. 6. Wolfson, Paul R. (2001). "The Crooked Made Straight: Roberval and Newton on Tangents". The American Mathematical Monthly. 108 (3): 206–216. doi:10.2307/2695381. JSTOR 2695381. 7. Katz, Victor J. (2008). A History of Mathematics (3rd ed.). Addison Wesley. pp. 512–514. ISBN 978-0321387004. 8. Noah Webster, American Dictionary of the English Language (New York: S. Converse, 1828), vol. 2, p. 733, 9. Descartes, René (1954) [1637]. The Geometry of René Descartes. Translated by Smith, David Eugene; Latham, Marcia L. Open Court. p. 95. 10. R. E. Langer (October 1937). "Rene Descartes". American Mathematical Monthly. Mathematical Association of America. 44 (8): 495–512. doi:10.2307/2301226. JSTOR 2301226. 11. Edwards Art. 191 12. Strickland-Constable, Charles, "A simple method for finding tangents to polynomial graphs", Mathematical Gazette, November 2005, 466–467. 13. Edwards Art. 192 14. Edwards Art. 193 15. Edwards Art. 196 16. Edwards Art. 194 17. Edwards Art. 195 18. Edwards Art. 197 19. Thomas, George B. Jr., and Finney, Ross L. (1979), Calculus and Analytic Geometry, Addison Wesley Publ. Co.: p. 140. 20. "Circles For Leaving Certificate Honours Mathematics by Thomas O'Sullivan 1997". Sources • J. Edwards (1892). Differential Calculus. London: MacMillan and Co. pp. 143 ff. External links Wikimedia Commons has media related to Tangency. Wikisource has the text of the 1921 Collier's Encyclopedia article Tangent. • "Tangent line", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Weisstein, Eric W. "Tangent Line". MathWorld. 
• Tangent to a circle With interactive animation • Tangent and first derivative — An interactive simulation
Wikipedia
Linear approximation In mathematics, a linear approximation is an approximation of a general function using a linear function (more precisely, an affine function). Linear approximations are widely used in the method of finite differences to produce first-order methods for solving or approximating solutions to equations. Definition Given a twice continuously differentiable function $f$ of one real variable, Taylor's theorem for the case $n=1$ states that $f(x)=f(a)+f'(a)(x-a)+R_{2}$ where $R_{2}$ is the remainder term. The linear approximation is obtained by dropping the remainder: $f(x)\approx f(a)+f'(a)(x-a).$ This is a good approximation when $x$ is close enough to $a$, since a curve, when closely observed, begins to resemble a straight line. Therefore, the expression on the right-hand side is just the equation for the tangent line to the graph of $f$ at $(a,f(a))$. For this reason, this process is also called the tangent line approximation. The linear approximation is further improved when the second derivative of $f$ at $a$, $f''(a)$, is sufficiently small (close to zero), i.e., at or near an inflection point. If $f$ is concave down in the interval between $x$ and $a$, the approximation will be an overestimate (since the derivative is decreasing in that interval). If $f$ is concave up, the approximation will be an underestimate.[1] Linear approximations for vector functions of a vector variable are obtained in the same way, with the derivative at a point replaced by the Jacobian matrix.
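As a concrete check of the one-variable formula, the following sketch compares $f(x)=\sqrt{x}$ with its tangent-line approximation at $a=4$; the function and expansion point are illustrative choices, not taken from the article.

```python
import math

def linear_approx(f, df, a):
    """Return the tangent-line approximation L(x) = f(a) + f'(a)(x - a)."""
    fa, dfa = f(a), df(a)
    return lambda x: fa + dfa * (x - a)

# Approximate sqrt near a = 4, where f(x) = sqrt(x) and f'(x) = 1/(2 sqrt(x)).
L = linear_approx(math.sqrt, lambda x: 0.5 / math.sqrt(x), 4.0)

for x in (4.1, 4.5, 6.0):
    print(x, math.sqrt(x), L(x))
```

The error grows roughly quadratically in $|x-a|$, consistent with the dropped remainder term $R_{2}$.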
For example, given a differentiable function $f(x,y)$ with real values, one can approximate $f(x,y)$ for $(x,y)$ close to $(a,b)$ by the formula $f\left(x,y\right)\approx f\left(a,b\right)+{\frac {\partial f}{\partial x}}\left(a,b\right)\left(x-a\right)+{\frac {\partial f}{\partial y}}\left(a,b\right)\left(y-b\right).$ The right-hand side is the equation of the plane tangent to the graph of $z=f(x,y)$ at $(a,b).$ In the more general case of Banach spaces, one has $f(x)\approx f(a)+Df(a)(x-a)$ where $Df(a)$ is the Fréchet derivative of $f$ at $a$. Applications Optics Gaussian optics is a technique in geometrical optics that describes the behaviour of light rays in optical systems by using the paraxial approximation, in which only rays which make small angles with the optical axis of the system are considered.[2] In this approximation, trigonometric functions can be expressed as linear functions of the angles. Gaussian optics applies to systems in which all the optical surfaces are either flat or are portions of a sphere. In this case, simple explicit formulae can be given for parameters of an imaging system such as focal distance, magnification and brightness, in terms of the geometrical shapes and material properties of the constituent elements. Period of oscillation The period of swing of a simple gravity pendulum depends on its length, the local strength of gravity, and to a small extent on the maximum angle that the pendulum swings away from vertical, θ0, called the amplitude.[3] It is independent of the mass of the bob. The true period T of a simple pendulum, the time taken for a complete cycle of an ideal simple gravity pendulum, can be written in several different forms (see pendulum), one example being the infinite series:[4][5] $T=2\pi {\sqrt {L \over g}}\left(1+{\frac {1}{16}}\theta _{0}^{2}+{\frac {11}{3072}}\theta _{0}^{4}+\cdots \right)$ where L is the length of the pendulum and g is the local acceleration of gravity. 
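To see how small the amplitude correction in this series is, one can truncate it after the $\theta _{0}^{4}$ term; the pendulum length and value of g below are assumed illustrative values, not from the article.

```python
import math

def pendulum_period(L, g, theta0):
    """Pendulum period from the series above, truncated after the theta0^4 term."""
    T0 = 2 * math.pi * math.sqrt(L / g)          # small-angle period
    correction = 1 + theta0**2 / 16 + 11 * theta0**4 / 3072
    return T0 * correction

# For a 1 m pendulum at g = 9.81 m/s^2, the correction is tiny at small swings:
for deg in (1, 10, 30):
    print(deg, pendulum_period(1.0, 9.81, math.radians(deg)))
```

At an amplitude of one degree the correction factor differs from 1 by only a few parts in a hundred thousand, which is why the linear approximation discussed next works so well.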
However, if one takes the linear approximation (i.e., if the amplitude is limited to small swings[Note 1]), the period is:[6] $T\approx 2\pi {\sqrt {\frac {L}{g}}}\qquad \qquad \qquad \theta _{0}\ll 1$ (1) In the linear approximation, the period of swing is approximately the same for different size swings: that is, the period is independent of amplitude. This property, called isochronism, is the reason pendulums are so useful for timekeeping.[7] Successive swings of the pendulum, even if changing in amplitude, take the same amount of time. Electrical resistivity The electrical resistivity of most materials changes with temperature. If the temperature T does not vary too much, a linear approximation is typically used: $\rho (T)=\rho _{0}[1+\alpha (T-T_{0})]$ where $\alpha $ is called the temperature coefficient of resistivity, $T_{0}$ is a fixed reference temperature (usually room temperature), and $\rho _{0}$ is the resistivity at temperature $T_{0}$. The parameter $\alpha $ is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, $\alpha $ is different for different reference temperatures. For this reason it is usual to specify the temperature at which $\alpha $ was measured with a suffix, such as $\alpha _{15}$, and the relationship only holds in a range of temperatures around the reference.[8] When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis is required. See also • Binomial approximation • Euler's method • Finite differences • Finite difference methods • Newton's method • Power series • Taylor series Notes 1. A "small" swing is one in which the angle θ is small enough that sin(θ) can be approximated by θ when θ is measured in radians. References 1. "12.1 Estimating a Function Value Using the Linear Approximation". Retrieved 3 June 2012. 2. Lipson, A.; Lipson, S. G.; Lipson, H. (2010).
Optical Physics (4th ed.). Cambridge, UK: Cambridge University Press. p. 51. ISBN 978-0-521-49345-1. 3. Milham, Willis I. (1945). Time and Timekeepers. MacMillan. pp. 188–194. OCLC 1744137. 4. Nelson, Robert; M. G. Olsson (February 1987). "The pendulum – Rich physics from a simple system" (PDF). American Journal of Physics. 54 (2): 112–121. Bibcode:1986AmJPh..54..112N. doi:10.1119/1.14703. S2CID 121907349. Retrieved 2008-10-29. 5. Beckett, Edmund; and three more (1911). "Clock" . In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 06 (11th ed.). Cambridge University Press. pp. 534–553, see page 538, second para. Pendulum.— includes a derivation 6. Halliday, David; Robert Resnick; Jearl Walker (1997). Fundamentals of Physics, 5th Ed. New York: John Wiley & Sons. p. 381. ISBN 0-471-14854-7. 7. Cooper, Herbert J. (2007). Scientific Instruments. New York: Hutchinson's. p. 162. ISBN 978-1-4067-6879-4. 8. Ward, M. R. (1971). Electrical Engineering Science. McGraw-Hill. pp. 36–40. ISBN 0-07-094255-2. Further reading • Weinstein, Alan; Marsden, Jerrold E. (1984). Calculus III. Berlin: Springer-Verlag. p. 775. ISBN 0-387-90985-0. • Strang, Gilbert (1991). Calculus. Wellesley College. p. 94. ISBN 0-9614088-2-0. • Bock, David; Hockett, Shirley O. (2005). How to Prepare for the AP Calculus. Hauppauge, NY: Barrons Educational Series. p. 118. ISBN 0-7641-2382-3.
Tangent circles In geometry, tangent circles (also known as kissing circles) are circles in a common plane that intersect in a single point. There are two types of tangency: internal and external. Many problems and constructions in geometry are related to tangent circles; such problems often have real-life applications such as trilateration and maximizing the use of materials. Two given circles Two circles are mutually and externally tangent if the distance between their centers is equal to the sum of their radii.[1] Steiner chains Main article: Steiner chain Pappus chains Main article: Pappus chain Three given circles: Apollonius' problem Main article: Problem of Apollonius Apollonius' problem is to construct circles that are tangent to three given circles. Apollonian gasket If a circle is iteratively inscribed into the interstitial curved triangles between three mutually tangent circles, an Apollonian gasket results, one of the earliest fractals described in print. Malfatti's problem Main article: Malfatti circles Malfatti's problem is to carve three cylinders from a triangular block of marble, using as much of the marble as possible. In 1803, Gian Francesco Malfatti conjectured that the solution would be obtained by inscribing three mutually tangent circles into the triangle (a problem that had previously been considered by Japanese mathematician Ajima Naonobu); these circles are now known as the Malfatti circles, although the conjecture has been proven to be false. Six circles theorem Main article: Six circles theorem A chain of six circles can be drawn such that each circle is tangent to two sides of a given triangle and also to the preceding circle in the chain. The chain closes; the sixth circle is always tangent to the first circle. Generalizations Problems involving tangent circles are often generalized to spheres.
For example, the Fermat problem of finding sphere(s) tangent to four given spheres is a generalization of Apollonius' problem, whereas Soddy's hexlet is a generalization of a Steiner chain. See also • Tangent lines to circles • Circle packing theorem, the result that every planar graph may be realized by a system of tangent circles • Hexafoil, the shape formed by a ring of six tangent circles • Feuerbach's theorem on the tangency of the nine-point circle of a triangle with its incircle and excircles • Descartes' theorem • Ford circle • Bankoff circle • Archimedes' twin circles • Archimedean circle • Schoch circles • Woo circles • Arbelos • Ring lemma References 1. Weisstein, Eric W. "Tangent Circles." From MathWorld--A Wolfram Web Resource External links • Weisstein, Eric W. "Tangent circles". MathWorld.
Tangential and normal components In mathematics, given a vector at a point on a curve, that vector can be decomposed uniquely as a sum of two vectors, one tangent to the curve, called the tangential component of the vector, and another one perpendicular to the curve, called the normal component of the vector. Similarly, a vector at a point on a surface can be broken down the same way. More generally, given a submanifold N of a manifold M, and a vector in the tangent space to M at a point of N, it can be decomposed into the component tangent to N and the component normal to N. Formal definition Surface More formally, let $S$ be a surface, and $x$ be a point on the surface. Let $\mathbf {v} $ be a vector at $x$. Then one can write uniquely $\mathbf {v} $ as a sum $\mathbf {v} =\mathbf {v} _{\parallel }+\mathbf {v} _{\perp }$ where the first vector in the sum is the tangential component and the second one is the normal component. It follows immediately that these two vectors are perpendicular to each other. To calculate the tangential and normal components, consider a unit normal to the surface, that is, a unit vector ${\hat {\mathbf {n} }}$ perpendicular to $S$ at $x$. Then, $\mathbf {v} _{\perp }=\left(\mathbf {v} \cdot {\hat {\mathbf {n} }}\right){\hat {\mathbf {n} }}$ and thus $\mathbf {v} _{\parallel }=\mathbf {v} -\mathbf {v} _{\perp }$ where "$\cdot $" denotes the dot product. Another formula for the tangential component is $\mathbf {v} _{\parallel }=-{\hat {\mathbf {n} }}\times ({\hat {\mathbf {n} }}\times \mathbf {v} ),$ where "$\times $" denotes the cross product. Note that these formulas do not depend on the particular unit normal ${\hat {\mathbf {n} }}$ used (there exist two unit normals to any surface at a given point, pointing in opposite directions, so one of the unit normals is the negative of the other one). 
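The two formulas above can be checked directly; a minimal sketch using NumPy (assumed available), with an arbitrary vector and the unit normal of the xy-plane:

```python
import numpy as np

def decompose(v, n):
    """Split v into tangential and normal components relative to unit normal n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)          # make sure n is a unit vector
    v_perp = np.dot(v, n) * n          # normal component: (v . n) n
    v_par = v - v_perp                 # tangential component: v - v_perp
    return v_par, v_perp

v = np.array([1.0, 2.0, 3.0])
n = np.array([0.0, 0.0, 1.0])          # unit normal to the xy-plane

v_par, v_perp = decompose(v, n)

# The cross-product formula -n x (n x v) yields the same tangential component:
alt_par = -np.cross(n, np.cross(n, v))
```

Replacing `n` by `-n` leaves both components unchanged, illustrating the remark that the decomposition does not depend on the choice of unit normal.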
Submanifold More generally, given a submanifold N of a manifold M and a point $p\in N$, we get a short exact sequence involving the tangent spaces: $T_{p}N\to T_{p}M\to T_{p}M/T_{p}N$ The quotient space $T_{p}M/T_{p}N$ is a generalized space of normal vectors. If M is a Riemannian manifold, the above sequence splits, and the tangent space of M at p decomposes as a direct sum of the component tangent to N and the component normal to N: $T_{p}M=T_{p}N\oplus N_{p}N:=(T_{p}N)^{\perp }$ Thus every tangent vector $v\in T_{p}M$ splits as $v=v_{\parallel }+v_{\perp }$, where $v_{\parallel }\in T_{p}N$ and $v_{\perp }\in N_{p}N:=(T_{p}N)^{\perp }$. Computations Suppose N is given by non-degenerate equations. If N is given explicitly, via parametric equations (such as a parametric curve), then the derivative gives a spanning set for the tangent bundle (it is a basis if and only if the parametrization is an immersion). If N is given implicitly (as in the above description of a surface, or more generally as a hypersurface) as a level set or intersection of level surfaces for $g_{i}$, then the gradients of $g_{i}$ span the normal space. In both cases, we can again compute using the dot product; the cross product, however, is special to 3 dimensions. Applications • Lagrange multipliers: constrained critical points are where the tangential component of the total derivative vanishes. • Surface normal • Frenet–Serret formulas • Differential geometry of surfaces § Tangent vectors and normal vectors References • Rojansky, Vladimir (1979). Electromagnetic fields and waves. New York: Dover Publications. ISBN 0-486-63834-0. • Crowell, Benjamin (2003). Light and Matter.
Tangent cone In geometry, the tangent cone is a generalization of the notion of the tangent space to a manifold to the case of certain spaces with singularities. Definitions in nonlinear analysis In nonlinear analysis, there are many definitions for a tangent cone, including the adjacent cone, Bouligand's contingent cone, and the Clarke tangent cone. These three cones coincide for a convex set, but they can differ on more general sets. Clarke tangent cone Let $A$ be a nonempty closed subset of the Banach space $X$. The Clarke tangent cone to $A$ at $x_{0}\in A$, denoted by ${\widehat {T}}_{A}(x_{0})$, consists of all vectors $v\in X$ such that for any sequence $\{t_{n}\}_{n\geq 1}\subset \mathbb {R} $ tending to zero, and any sequence $\{x_{n}\}_{n\geq 1}\subset A$ tending to $x_{0}$, there exists a sequence $\{v_{n}\}_{n\geq 1}\subset X$ tending to $v$, such that for all $n\geq 1$ it holds that $x_{n}+t_{n}v_{n}\in A$. The Clarke tangent cone is always a subset of the corresponding contingent cone (and coincides with it when the set in question is convex). It has the important property of being a closed convex cone. Definition in convex geometry Let K be a closed convex subset of a real vector space V and ∂K be the boundary of K. The solid tangent cone to K at a point x ∈ ∂K is the closure of the cone formed by all half-lines (or rays) emanating from x and intersecting K in at least one point y distinct from x. It is a convex cone in V and can also be defined as the intersection of the closed half-spaces of V containing K and bounded by the supporting hyperplanes of K at x. The boundary TK of the solid tangent cone is the tangent cone to K and ∂K at x. If this is an affine subspace of V, then the point x is called a smooth point of ∂K, ∂K is said to be differentiable at x, and TK is the ordinary tangent space to ∂K at x.
Definition in algebraic geometry Let X be an affine algebraic variety embedded into the affine space $k^{n}$, with defining ideal $I\subset k[x_{1},\ldots ,x_{n}]$. For any polynomial f, let $\operatorname {in} (f)$ be the homogeneous component of f of the lowest degree, the initial term of f, and let $\operatorname {in} (I)\subset k[x_{1},\ldots ,x_{n}]$ be the homogeneous ideal which is formed by the initial terms $\operatorname {in} (f)$ for all $f\in I$, the initial ideal of I. The tangent cone to X at the origin is the Zariski closed subset of $k^{n}$ defined by the ideal $\operatorname {in} (I)$. By shifting the coordinate system, this definition extends to an arbitrary point of $k^{n}$ in place of the origin. The tangent cone serves as the extension of the notion of the tangent space to X at a regular point, where X most closely resembles a differentiable manifold, to all of X. (The tangent cone at a point of $k^{n}$ that is not contained in X is empty.) For example, the nodal curve $C:y^{2}=x^{3}+x^{2}$ is singular at the origin, because both partial derivatives of f(x, y) = y2 − x3 − x2 vanish at (0, 0). Thus the Zariski tangent space to C at the origin is the whole plane, and has higher dimension than the curve itself (two versus one). On the other hand, the tangent cone is the union of the tangent lines to the two branches of C at the origin, $x=y,\quad x=-y.$ Its defining ideal is the principal ideal of $k[x,y]$ generated by the initial term of f, namely $y^{2}-x^{2}$. The definition of the tangent cone can be extended to abstract algebraic varieties, and even to general Noetherian schemes. Let X be an algebraic variety, x a point of X, and (OX,x, m) be the local ring of X at x.
Then the tangent cone to X at x is the spectrum of the associated graded ring of OX,x with respect to the m-adic filtration: $\operatorname {gr} _{m}O_{X,x}=\bigoplus _{i\geq 0}m^{i}/m^{i+1}.$ If we look at our previous example, then we can see that graded pieces contain the same information. So let $({\mathcal {O}}_{X,x},{\mathfrak {m}})=\left(\left({\frac {k[x,y]}{(y^{2}-x^{3}-x^{2})}}\right)_{(x,y)},(x,y)\right)$ then if we expand out the associated graded ring ${\begin{aligned}\operatorname {gr} _{m}O_{X,x}&={\frac {{\mathcal {O}}_{X,x}}{(x,y)}}\oplus {\frac {(x,y)}{(x,y)^{2}}}\oplus {\frac {(x,y)^{2}}{(x,y)^{3}}}\oplus \cdots \\&=k\oplus {\frac {(x,y)}{(x,y)^{2}}}\oplus {\frac {(x,y)^{2}}{(x,y)^{3}}}\oplus \cdots \end{aligned}}$ we can see that the polynomial defining our variety satisfies $y^{2}-x^{3}-x^{2}\equiv y^{2}-x^{2}$ in ${\frac {(x,y)^{2}}{(x,y)^{3}}}$, so the tangent cone at the origin is again cut out by $y^{2}-x^{2}$. See also • Cone • Monge cone • Normal cone References • M. I. Voitsekhovskii (2001) [1994], "Tangent cone", Encyclopedia of Mathematics, EMS Press • Aubin, J.-P., Frankowska, H. (2009). "Tangent Cones". Set-Valued Analysis. Modern Birkhäuser Classics. Birkhäuser. pp. 117–177. doi:10.1007/978-0-8176-4848-0_4. ISBN 978-0-8176-4848-0.
Smoothness In mathematical analysis, the smoothness of a function is a property measured by the number of continuous derivatives it has over some domain, called differentiability class.[1] At the very minimum, a function could be considered smooth if it is differentiable everywhere (hence continuous).[2] At the other end, it might also possess derivatives of all orders in its domain, in which case it is said to be infinitely differentiable and referred to as a C-infinity function (or $C^{\infty }$ function).[3] Differentiability classes Differentiability class is a classification of functions according to the properties of their derivatives. It is a measure of the highest order of derivative that exists and is continuous for a function. Consider an open set $U$ on the real line and a function $f$ defined on $U$ with real values. Let k be a non-negative integer. The function $f$ is said to be of differentiability class $C^{k}$ if the derivatives $f',f'',\dots ,f^{(k)}$ exist and are continuous on $U$. If $f$ is $k$-differentiable on $U$, then it is at least in the class $C^{k-1}$, since $f',f'',\dots ,f^{(k-1)}$ are continuous on $U$. The function $f$ is said to be infinitely differentiable, smooth, or of class $C^{\infty }$ if it has derivatives of all orders on $U$. (So all these derivatives are continuous functions over $U$.)[4] The function $f$ is said to be of class $C^{\omega }$, or analytic, if $f$ is smooth (i.e., $f$ is in the class $C^{\infty }$) and its Taylor series expansion around any point in its domain converges to the function in some neighborhood of the point. $C^{\omega }$ is thus strictly contained in $C^{\infty }$. Bump functions are examples of functions in $C^{\infty }$ but not in $C^{\omega }$.
To put it differently, the class $C^{0}$ consists of all continuous functions. The class $C^{1}$ consists of all differentiable functions whose derivative is continuous; such functions are called continuously differentiable. Thus, a $C^{1}$ function is exactly a function whose derivative exists and is of class $C^{0}$. In general, the classes $C^{k}$ can be defined recursively by declaring $C^{0}$ to be the set of all continuous functions, and declaring $C^{k}$ for any positive integer $k$ to be the set of all differentiable functions whose derivative is in $C^{k-1}$. In particular, $C^{k}$ is contained in $C^{k-1}$ for every $k>0$, and there are examples to show that this containment is strict ($C^{k}\subsetneq C^{k-1}$). The class $C^{\infty }$ of infinitely differentiable functions, is the intersection of the classes $C^{k}$ as $k$ varies over the non-negative integers. Example: Continuous (C0) But Not Differentiable The function $f(x)={\begin{cases}x&{\mbox{if }}x\geq 0,\\0&{\text{if }}x<0\end{cases}}$ is continuous, but not differentiable at x = 0, so it is of class C0, but not of class C1. Example: Finitely-times Differentiable (Ck) For each even integer k, the function $f(x)=|x|^{k+1}$ is continuous and k times differentiable at all x. At x = 0, however, $f$ is not (k + 1) times differentiable, so $f$ is of class Ck, but not of class Cj where j > k. Example: Differentiable But Not Continuously Differentiable (not C1) The function $g(x)={\begin{cases}x^{2}\sin {\left({\tfrac {1}{x}}\right)}&{\text{if }}x\neq 0,\\0&{\text{if }}x=0\end{cases}}$ is differentiable, with derivative $g'(x)={\begin{cases}-{\mathord {\cos \left({\tfrac {1}{x}}\right)}}+2x\sin \left({\tfrac {1}{x}}\right)&{\text{if }}x\neq 0,\\0&{\text{if }}x=0.\end{cases}}$ Because $\cos(1/x)$ oscillates as x → 0, $g'(x)$ is not continuous at zero. Therefore, $g(x)$ is differentiable but not of class C1. 
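This failure of continuity can be observed numerically: along the sequence $x_{n}=1/(2\pi n)\to 0$ the derivative stays near $-1$, even though $g'(0)=0$. A sketch of this check:

```python
import math

def g(x):
    """The function x^2 sin(1/x), extended by g(0) = 0."""
    return x**2 * math.sin(1/x) if x != 0 else 0.0

def g_prime(x):
    """Its derivative 2x sin(1/x) - cos(1/x) for x != 0, with g'(0) = 0."""
    return 2*x*math.sin(1/x) - math.cos(1/x) if x != 0 else 0.0

# g'(0) = 0, yet along x_n = 1/(2*pi*n) -> 0 the derivative stays near -1,
# so g' exists everywhere but is not continuous at 0.
for n in (1, 10, 1000):
    x = 1 / (2 * math.pi * n)
    print(x, g_prime(x))
```

At each $x_{n}$ the cosine term equals 1 while the sine term vanishes, so the derivative values do not approach $g'(0)=0$.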
Example: Differentiable But Not Lipschitz Continuous The function $h(x)={\begin{cases}x^{4/3}\sin {\left({\tfrac {1}{x}}\right)}&{\text{if }}x\neq 0,\\0&{\text{if }}x=0\end{cases}}$ is differentiable but its derivative is unbounded on a compact set. Therefore, $h$ is an example of a function that is differentiable but not locally Lipschitz continuous. Example: Analytic (Cω) The exponential function $e^{x}$ is analytic, and hence falls into the class Cω. The trigonometric functions are also analytic wherever they are defined as they are linear combinations of complex exponential functions $e^{ix}$ and $e^{-ix}$. Example: Smooth (C∞) but not Analytic (Cω) The bump function $f(x)={\begin{cases}e^{-{\frac {1}{1-x^{2}}}}&{\text{ if }}|x|<1,\\0&{\text{ otherwise }}\end{cases}}$ is smooth, so of class C∞, but it is not analytic at x = ±1, and hence is not of class Cω. The function f is an example of a smooth function with compact support. Multivariate differentiability classes A function $f:U\subset \mathbb {R} ^{n}\to \mathbb {R} $ defined on an open set $U$ of $\mathbb {R} ^{n}$ is said[5] to be of class $C^{k}$ on $U$, for a positive integer $k$, if all partial derivatives ${\frac {\partial ^{\alpha }f}{\partial x_{1}^{\alpha _{1}}\,\partial x_{2}^{\alpha _{2}}\,\cdots \,\partial x_{n}^{\alpha _{n}}}}(y_{1},y_{2},\ldots ,y_{n})$ exist and are continuous, for every $\alpha _{1},\alpha _{2},\ldots ,\alpha _{n}$ non-negative integers, such that $\alpha =\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n}\leq k$, and every $(y_{1},y_{2},\ldots ,y_{n})\in U$. Equivalently, $f$ is of class $C^{k}$ on $U$ if the $k$-th order Fréchet derivative of $f$ exists and is continuous at every point of $U$. The function $f$ is said to be of class $C$ or $C^{0}$ if it is continuous on $U$. Functions of class $C^{1}$ are also said to be continuously differentiable. 
A function $f:U\subset \mathbb {R} ^{n}\to \mathbb {R} ^{m}$, defined on an open set $U$ of $\mathbb {R} ^{n}$, is said to be of class $C^{k}$ on $U$, for a positive integer $k$, if all of its components $f_{i}(x_{1},x_{2},\ldots ,x_{n})=(\pi _{i}\circ f)(x_{1},x_{2},\ldots ,x_{n})=\pi _{i}(f(x_{1},x_{2},\ldots ,x_{n})){\text{ for }}i=1,2,3,\ldots ,m$ are of class $C^{k}$, where $\pi _{i}$ are the natural projections $\pi _{i}:\mathbb {R} ^{m}\to \mathbb {R} $ defined by $\pi _{i}(x_{1},x_{2},\ldots ,x_{m})=x_{i}$. It is said to be of class $C$ or $C^{0}$ if it is continuous, or equivalently, if all components $f_{i}$ are continuous, on $U$. The space of Ck functions Let $D$ be an open subset of the real line. The set of all $C^{k}$ real-valued functions defined on $D$ is a Fréchet vector space, with the countable family of seminorms $p_{K,m}=\sup _{x\in K}\left|f^{(m)}(x)\right|$ where $K$ varies over an increasing sequence of compact sets whose union is $D$, and $m=0,1,\dots ,k$. The set of $C^{\infty }$ functions over $D$ also forms a Fréchet space. One uses the same seminorms as above, except that $m$ is allowed to range over all non-negative integer values. The above spaces occur naturally in applications where functions having derivatives of certain orders are necessary; however, particularly in the study of partial differential equations, it can sometimes be more fruitful to work instead with the Sobolev spaces. Continuity The terms parametric continuity (Ck) and geometric continuity (Gn) were introduced by Brian Barsky to show that the smoothness of a curve could be measured by removing restrictions on the speed with which the parameter traces out the curve.[6][7][8] Parametric continuity Parametric continuity (Ck) is a concept applied to parametric curves, which describes the smoothness of the parameter's value with distance along the curve.
A (parametric) curve $s:[0,1]\to \mathbb {R} ^{n}$ is said to be of class Ck if $\textstyle {\frac {d^{k}s}{dt^{k}}}$ exists and is continuous on $[0,1]$, where derivatives at the end-points $0,1\in [0,1]$ are taken to be one-sided derivatives (i.e., at $0$ from the right, and at $1$ from the left). As a practical application of this concept, a curve describing the motion of an object with a parameter of time must have C1 continuity, and its first derivative must be differentiable, for the object to have finite acceleration. For smoother motion, such as that of a camera's path while making a film, higher orders of parametric continuity are required. Order of parametric continuity The various orders of parametric continuity can be described as follows:[9] • $C^{0}$: zeroth derivative is continuous (curves are continuous) • $C^{1}$: zeroth and first derivatives are continuous • $C^{2}$: zeroth, first and second derivatives are continuous • $C^{n}$: 0-th through $n$-th derivatives are continuous Geometric continuity The concept of geometrical continuity or geometric continuity (Gn) was primarily applied to the conic sections (and related shapes) by mathematicians such as Leibniz, Kepler, and Poncelet. The concept was an early attempt at describing, through geometry rather than algebra, the concept of continuity as expressed through a parametric function.[10] The basic idea behind geometric continuity was that the five conic sections were really five different versions of the same shape. An ellipse tends to a circle as the eccentricity approaches zero, or to a parabola as it approaches one; and a hyperbola tends to a parabola as the eccentricity drops toward one; it can also tend to intersecting lines. Thus, there was continuity between the conic sections. These ideas led to other concepts of continuity. For instance, if a circle and a straight line were two expressions of the same shape, perhaps a line could be thought of as a circle of infinite radius.
For such to be the case, one would have to make the line closed by allowing the point $x=\infty $ to be a point on the circle, and for $x=+\infty $ and $x=-\infty $ to be identical. Such ideas were useful in crafting the modern, algebraically defined, idea of the continuity of a function and of $\infty $ (see projectively extended real line for more).[10] Order of geometric continuity A curve or surface can be described as having $G^{n}$ continuity, with $n$ being the increasing measure of smoothness. Consider the segments either side of a point on a curve: • $G^{0}$: The curves touch at the join point. • $G^{1}$: The curves also share a common tangent direction at the join point. • $G^{2}$: The curves also share a common center of curvature at the join point. In general, $G^{n}$ continuity exists if the curves can be reparameterized to have $C^{n}$ (parametric) continuity.[11][12] A reparametrization of the curve is geometrically identical to the original; only the parameter is affected. Equivalently, two vector functions $f(t)$ and $g(t)$ have $G^{n}$ continuity if $f^{(n)}(t)\neq 0$ and $f^{(n)}(t)\equiv kg^{(n)}(t)$, for a scalar $k>0$ (i.e., if the direction, but not necessarily the magnitude, of the two vectors is equal). While it may be obvious that a curve would require $G^{1}$ continuity to appear smooth, for good aesthetics, such as those aspired to in architecture and sports car design, higher levels of geometric continuity are required. For example, reflections in a car body will not appear smooth unless the body has $G^{2}$ continuity. A rounded rectangle (with ninety degree circular arcs at the four corners) has $G^{1}$ continuity, but does not have $G^{2}$ continuity. The same is true for a rounded cube, with octants of a sphere at its corners and quarter-cylinders along its edges. If an editable curve with $G^{2}$ continuity is required, then cubic splines are typically chosen; these curves are frequently used in industrial design.
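The distinction between $C^{1}$ and $G^{1}$ continuity can be illustrated with two straight segments that meet with the same tangent direction but different parametric speeds; the curves and the finite-difference helper below are hypothetical illustrations, assuming NumPy is available.

```python
import numpy as np

def derivative(curve, t, h=1e-6):
    """Forward-difference estimate of the velocity of a parametric curve at t."""
    return (np.asarray(curve(t + h)) - np.asarray(curve(t))) / h

# Two straight segments meeting at (1, 0): same direction, different speeds.
s1 = lambda t: (t, 0.0)          # t in [0, 1], ends at (1, 0) with velocity (1, 0)
s2 = lambda t: (1 + 2*t, 0.0)    # t in [0, 1], starts at (1, 0) with velocity (2, 0)

d1 = derivative(s1, 1.0 - 1e-6)  # velocity at the end of s1
d2 = derivative(s2, 0.0)         # velocity at the start of s2

c1 = np.allclose(d1, d2)                   # C1 requires equal velocities: False here
k = np.linalg.norm(d2) / np.linalg.norm(d1)
g1 = k > 0 and np.allclose(d2, k * d1)     # G1 only requires matching directions: True
```

Reparametrizing `s2` as `lambda t: (1 + t, 0.0)` over `[0, 2]` restores full $C^{1}$ continuity, matching the statement that $G^{n}$ continuity holds when such a reparametrization exists.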
Other concepts Relation to analyticity While all analytic functions are "smooth" (i.e. have all derivatives continuous) on the set on which they are analytic, examples such as bump functions (mentioned above) show that the converse is not true for functions on the reals: there exist smooth real functions that are not analytic. Simple examples of functions that are smooth but not analytic at any point can be made by means of Fourier series; another example is the Fabius function. Although it might seem that such functions are the exception rather than the rule, it turns out that the analytic functions are scattered very thinly among the smooth ones; more rigorously, the analytic functions form a meagre subset of the smooth functions. Furthermore, for every open subset A of the real line, there exist smooth functions that are analytic on A and nowhere else. It is useful to compare the situation to that of the ubiquity of transcendental numbers on the real line. Both on the real line and the set of smooth functions, the examples we come up with at first thought (algebraic/rational numbers and analytic functions) are far better behaved than the majority of cases: the transcendental numbers and nowhere analytic functions have full measure (their complements are meagre). The situation thus described is in marked contrast to complex differentiable functions. If a complex function is differentiable just once on an open set, it is both infinitely differentiable and analytic on that set. Smooth partitions of unity Smooth functions with given closed support are used in the construction of smooth partitions of unity (see partition of unity and topology glossary); these are essential in the study of smooth manifolds, for example to show that Riemannian metrics can be defined globally starting from their local existence.
A simple case is that of a bump function on the real line, that is, a smooth function f that takes the value 0 outside an interval [a,b] and such that $f(x)>0\quad {\text{ for }}\quad a<x<b.\,$ Given a number of overlapping intervals on the line, bump functions can be constructed on each of them, and on semi-infinite intervals $(-\infty ,c]$ and $[d,+\infty )$ to cover the whole line, such that the sum of the functions is always 1. From what has just been said, partitions of unity don't apply to holomorphic functions; their different behavior relative to existence and analytic continuation is one of the roots of sheaf theory. In contrast, sheaves of smooth functions tend not to carry much topological information. Smooth functions on and between manifolds Given a smooth manifold $M$, of dimension $m,$ and an atlas ${\mathfrak {U}}=\{(U_{\alpha },\phi _{\alpha })\}_{\alpha },$ then a map $f:M\to \mathbb {R} $ is smooth on $M$ if for all $p\in M$ there exists a chart $(U,\phi )\in {\mathfrak {U}},$ such that $p\in U,$ and $f\circ \phi ^{-1}:\phi (U)\to \mathbb {R} $ is a smooth function from a neighborhood of $\phi (p)$ in $\mathbb {R} ^{m}$ to $\mathbb {R} $ (all partial derivatives up to a given order are continuous). Smoothness can be checked with respect to any chart of the atlas that contains $p,$ since the smoothness requirements on the transition functions between charts ensure that if $f$ is smooth near $p$ in one chart it will be smooth near $p$ in any other chart. 
If $F:M\to N$ is a map from $M$ to an $n$-dimensional manifold $N$, then $F$ is smooth if, for every $p\in M,$ there is a chart $(U,\phi )$ containing $p,$ and a chart $(V,\psi )$ containing $F(p)$ such that $F(U)\subset V,$ and $\psi \circ F\circ \phi ^{-1}:\phi (U)\to \psi (V)$ is a smooth function from $\phi (U)\subseteq \mathbb {R} ^{m}$ to $\psi (V)\subseteq \mathbb {R} ^{n}.$ Smooth maps between manifolds induce linear maps between tangent spaces: for $F:M\to N$, the pushforward (or differential) at each point $p$ maps tangent vectors at $p$ to tangent vectors at $F(p)$: $F_{*,p}:T_{p}M\to T_{F(p)}N,$ and on the level of the tangent bundle, the pushforward is a vector bundle homomorphism: $F_{*}:TM\to TN.$ The dual to the pushforward is the pullback, which "pulls" covectors on $N$ back to covectors on $M,$ and $k$-forms to $k$-forms: $F^{*}:\Omega ^{k}(N)\to \Omega ^{k}(M).$ In this way smooth functions between manifolds can transport local data, like vector fields and differential forms, from one manifold to another, or down to Euclidean space where computations like integration are well understood. Preimages and pushforwards along smooth functions are, in general, not manifolds without additional assumptions. Preimages of regular values (that is, values at each of whose preimage points the differential is surjective) are manifolds; this is the preimage theorem. Similarly, pushforwards along embeddings are manifolds.[13] Smooth functions between subsets of manifolds There is a corresponding notion of smooth map for arbitrary subsets of manifolds. If $f:X\to Y$ is a function whose domain and range are subsets of manifolds $X\subseteq M$ and $Y\subseteq N$ respectively, then
$f$ is said to be smooth if for all $x\in X$ there is an open set $U\subseteq M$ with $x\in U$ and a smooth function $F:U\to N$ such that $F(p)=f(p)$ for all $p\in U\cap X.$ See also • Discontinuity – Mathematical analysis of discontinuous points • Hadamard's lemma • Non-analytic smooth function – Mathematical functions which are smooth but not analytic • Quasi-analytic function • Singularity (mathematics) – Point where a function, a curve or another mathematical object does not behave regularly • Sinuosity – Ratio of arc length and straight-line distance between two points on a wave-like function • Smooth scheme – type of scheme • Smooth number – Integer having only small prime factors (number theory) • Smoothing – Fitting an approximating function to data • Spline – Mathematical function defined piecewise by polynomials • Sobolev mapping References 1. Weisstein, Eric W. "Smooth Function". mathworld.wolfram.com. Archived from the original on 2019-12-16. Retrieved 2019-12-13. 2. "Smooth (mathematics)". TheFreeDictionary.com. Archived from the original on 2019-09-03. Retrieved 2019-12-13. 3. "Smooth function - Encyclopedia of Mathematics". www.encyclopediaofmath.org. Archived from the original on 2019-12-13. Retrieved 2019-12-13. 4. Warner, Frank W. (1983). Foundations of Differentiable Manifolds and Lie Groups. Springer. p. 5 [Definition 1.2]. ISBN 978-0-387-90894-6. Archived from the original on 2015-10-01. Retrieved 2014-11-28. 5. Henri Cartan (1977). Cours de calcul différentiel. Paris: Hermann. 6. Barsky, Brian A. (1981). The Beta-spline: A Local Representation Based on Shape Parameters and Fundamental Geometric Measures (Ph.D.). University of Utah, Salt Lake City, Utah. 7. Brian A. Barsky (1988). Computer Graphics and Geometric Modeling Using Beta-splines. Springer-Verlag, Heidelberg. ISBN 978-3-642-72294-3. 8. Richard H. Bartels; John C. Beatty; Brian A. 
Barsky (1987). An Introduction to Splines for Use in Computer Graphics and Geometric Modeling. Morgan Kaufmann. Chapter 13. Parametric vs. Geometric Continuity. ISBN 978-1-55860-400-1. 9. van de Panne, Michiel (1996). "Parametric Curves". Fall 1996 Online Notes. University of Toronto, Canada. Archived from the original on 2020-11-26. Retrieved 2019-09-01. 10. Taylor, Charles (1911). "Geometrical Continuity" . In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 11 (11th ed.). Cambridge University Press. pp. 674–675. 11. Barsky, Brian A.; DeRose, Tony D. (1989). "Geometric Continuity of Parametric Curves: Three Equivalent Characterizations". IEEE Computer Graphics and Applications. 9 (6): 60–68. doi:10.1109/38.41470. S2CID 17893586. 12. Hartmann, Erich (2003). "Geometry and Algorithms for Computer Aided Design" (PDF). Technische Universität Darmstadt. p. 55. Archived (PDF) from the original on 2020-10-23. Retrieved 2019-08-31. 13. Guillemin, Victor; Pollack, Alan (1974). Differential Topology. Englewood Cliffs: Prentice-Hall. ISBN 0-13-212605-2. 
Wikipedia
Tangent developable In the mathematical study of the differential geometry of surfaces, a tangent developable is a particular kind of developable surface obtained from a curve in Euclidean space as the surface swept out by the tangent lines to the curve. Such a surface is also the envelope of the tangent planes to the curve. Parameterization Let $\gamma (t)$ be a parameterization of a smooth space curve. That is, $\gamma $ is a twice-differentiable function with nowhere-vanishing derivative that maps its argument $t$ (a real number) to a point in space; the curve is the image of $\gamma $. Then a two-dimensional surface, the tangent developable of $\gamma $, may be parameterized by the map $(s,t)\mapsto \gamma (t)+s\gamma {\,'}(t).$[1] The original curve forms a boundary of the tangent developable, and is called its directrix or edge of regression. This curve is obtained by first developing the surface into the plane, and then considering the image in the plane of the generators of the ruling on the surface. The envelope of this family of lines is a plane curve whose inverse image under the development is the edge of regression. Intuitively, it is a curve along which the surface needs to be folded during the process of developing into the plane. Properties The tangent developable is a developable surface; that is, it is a surface with zero Gaussian curvature. It is one of three fundamental types of developable surface; the other two are the generalized cones (the surface traced out by a one-dimensional family of lines through a fixed point), and the cylinders (surfaces traced out by a one-dimensional family of parallel lines). (The plane is sometimes given as a fourth type, or may be seen as a special case of either of these two types.) 
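The parameterization $(s,t)\mapsto \gamma (t)+s\gamma '(t)$ can be sketched numerically. The helix below is an arbitrarily chosen example curve, not one from the article; the check confirms that each parameter line t = const is a ruling, i.e. a straight line swept out by the tangent direction:

```python
import math

def gamma(t):
    """An example space curve: the helix (cos t, sin t, t)."""
    return (math.cos(t), math.sin(t), t)

def dgamma(t):
    """Its (nowhere-vanishing) derivative."""
    return (-math.sin(t), math.cos(t), 1.0)

def surface(s, t):
    """Point of the tangent developable: gamma(t) + s * gamma'(t)."""
    return tuple(g + s * dg for g, dg in zip(gamma(t), dgamma(t)))

# Each parameter line t = const is straight: the displacement from s = 0
# to s = 2 is exactly twice the displacement from s = 0 to s = 1.
t0 = 0.7
p0, p1, p2 = surface(0.0, t0), surface(1.0, t0), surface(2.0, t0)
assert all(abs((p2[i] - p0[i]) - 2 * (p1[i] - p0[i])) < 1e-12 for i in range(3))
print("ruling through t = 0.7 is a straight line")
```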
Every developable surface in three-dimensional space may be formed by gluing together pieces of these three types; it follows from this that every developable surface is a ruled surface, a union of a one-dimensional family of lines.[2] However, not every ruled surface is developable; the helicoid provides a counterexample. The tangent developable of a curve containing a point of zero torsion will contain a self-intersection. History Tangent developables were first studied by Leonhard Euler in 1772.[3] Until that time, the only known developable surfaces were the generalized cones and the cylinders. Euler showed that tangent developables are developable and that every developable surface is of one of these types.[2] Notes 1. Pressley, Andrew (2010), Elementary Differential Geometry, Springer, p. 129, ISBN 1-84882-890-X. 2. Lawrence, Snežana (2011), "Developable surfaces: their history and application", Nexus Network Journal, 13 (3): 701–714, doi:10.1007/s00004-011-0087-z. 3. Euler, L. (1772), "De solidis quorum superficiem in planum explicare licet", Novi Commentarii academiae scientiarum Petropolitanae (in Latin), 16: 3–34. References • Struik, Dirk Jan (1961), Lectures on Classical Differential Geometry, Addison-Wesley. • Hilbert, David; Cohn-Vossen, Stephan (1952), Geometry and the Imagination (2nd ed.), New York: Chelsea, ISBN 978-0-8284-1087-8 • Sabitov, I.Kh. (2001) [1994], "Developable surface", Encyclopedia of Mathematics, EMS Press • Voitsekhovskii, M.I. (2001) [1994], "Edge of regression", Encyclopedia of Mathematics, EMS Press External links • Weisstein, Eric W. "Tangent Developable". MathWorld.
Tangent half-angle formula In trigonometry, tangent half-angle formulas relate the tangent of half of an angle to trigonometric functions of the entire angle. The tangent of half an angle is the stereographic projection of the circle onto a line. Among these formulas are the following: ${\begin{aligned}\tan {\tfrac {1}{2}}(\eta \pm \theta )&={\frac {\tan {\tfrac {1}{2}}\eta \pm \tan {\tfrac {1}{2}}\theta }{1\mp \tan {\tfrac {1}{2}}\eta \,\tan {\tfrac {1}{2}}\theta }}={\frac {\sin \eta \pm \sin \theta }{\cos \eta +\cos \theta }}=-{\frac {\cos \eta -\cos \theta }{\sin \eta \mp \sin \theta }},\\[10pt]\tan {\tfrac {1}{2}}\theta &={\frac {\sin \theta }{1+\cos \theta }}={\frac {\tan \theta }{\sec \theta +1}}={\frac {1}{\csc \theta +\cot \theta }},&&(\eta =0)\\[10pt]\tan {\tfrac {1}{2}}\theta &={\frac {1-\cos \theta }{\sin \theta }}={\frac {\sec \theta -1}{\tan \theta }}=\csc \theta -\cot \theta ,&&(\eta =0)\\[10pt]\tan {\tfrac {1}{2}}{\big (}\theta \pm {\tfrac {1}{2}}\pi {\big )}&={\frac {1\pm \sin \theta }{\cos \theta }}=\sec \theta \pm \tan \theta ={\frac {\csc \theta \pm 1}{\cot \theta }},&&{\big (}\eta ={\tfrac {1}{2}}\pi {\big )}\\[10pt]\tan {\tfrac {1}{2}}{\big (}\theta \pm {\tfrac {1}{2}}\pi {\big )}&={\frac {\cos \theta }{1\mp \sin \theta }}={\frac {1}{\sec \theta \mp \tan \theta }}={\frac {\cot \theta }{\csc \theta \mp 1}},&&{\big (}\eta ={\tfrac {1}{2}}\pi {\big )}\\[10pt]{\frac {1-\tan {\tfrac {1}{2}}\theta }{1+\tan {\tfrac {1}{2}}\theta }}&=\pm {\sqrt {\frac {1-\sin \theta }{1+\sin \theta }}}\\[10pt]\tan {\tfrac {1}{2}}\theta &=\pm {\sqrt {\frac {1-\cos \theta }{1+\cos \theta }}}\\[10pt]\end{aligned}}$ From these one can derive identities 
expressing the sine, cosine, and tangent as functions of tangents of half-angles: ${\begin{aligned}\sin \alpha &={\frac {2\tan {\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}\\[7pt]\cos \alpha &={\frac {1-\tan ^{2}{\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}\\[7pt]\tan \alpha &={\frac {2\tan {\tfrac {1}{2}}\alpha }{1-\tan ^{2}{\tfrac {1}{2}}\alpha }}\end{aligned}}$ Proofs Algebraic proofs Using double-angle formulae and the Pythagorean identity $ 1+\tan ^{2}\alpha =1{\big /}\cos ^{2}\alpha $ gives $\sin \alpha =2\sin {\tfrac {1}{2}}\alpha \cos {\tfrac {1}{2}}\alpha ={\frac {2\sin {\tfrac {1}{2}}\alpha \,\cos {\tfrac {1}{2}}\alpha {\Big /}\cos ^{2}{\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}={\frac {2\tan {\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }},\quad {\text{and}}$ $\cos \alpha =\cos ^{2}{\tfrac {1}{2}}\alpha -\sin ^{2}{\tfrac {1}{2}}\alpha ={\frac {\left(\cos ^{2}{\tfrac {1}{2}}\alpha -\sin ^{2}{\tfrac {1}{2}}\alpha \right){\Big /}\cos ^{2}{\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}={\frac {1-\tan ^{2}{\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }},\quad {\text{and}}$ Taking the quotient of the formulae for sine and cosine yields $\tan \alpha ={\frac {2\tan {\tfrac {1}{2}}\alpha }{1-\tan ^{2}{\tfrac {1}{2}}\alpha }}.$ Combining the Pythagorean identity with the double-angle formula for the cosine, $ \cos 2\alpha =\cos ^{2}\alpha -\sin ^{2}\alpha =1-2\sin ^{2}\alpha =2\cos ^{2}\alpha -1,$ rearranging, and taking the square roots yields $\left|\sin \alpha \right|={\sqrt {\frac {1-\cos 2\alpha }{2}}}$ and $\left|\cos \alpha \right|={\sqrt {\frac {1+\cos 2\alpha }{2}}}$ which, upon division gives $\left|\tan \alpha \right|={\frac {\sqrt {1-\cos 2\alpha }}{\sqrt {1+\cos 2\alpha }}}={\frac {{\sqrt {1-\cos 2\alpha }}{\sqrt {1+\cos 2\alpha }}}{1+\cos 2\alpha }}={\frac {\sqrt {1-\cos ^{2}2\alpha }}{1+\cos 2\alpha }}={\frac {\left|\sin 2\alpha \right|}{1+\cos 2\alpha }}.$ Alternatively, 
$\left|\tan \alpha \right|={\frac {\sqrt {1-\cos 2\alpha }}{\sqrt {1+\cos 2\alpha }}}={\frac {1-\cos 2\alpha }{{\sqrt {1+\cos 2\alpha }}{\sqrt {1-\cos 2\alpha }}}}={\frac {1-\cos 2\alpha }{\sqrt {1-\cos ^{2}2\alpha }}}={\frac {1-\cos 2\alpha }{\left|\sin 2\alpha \right|}}.$ It turns out that the absolute value signs in these last two formulas may be dropped, regardless of which quadrant α is in. With or without the absolute value bars these formulas do not apply when both the numerator and denominator on the right-hand side are zero. Also, using the angle addition and subtraction formulae for both the sine and cosine one obtains: ${\begin{aligned}\cos(a+b)&=\cos a\cos b-\sin a\sin b\\\cos(a-b)&=\cos a\cos b+\sin a\sin b\\\sin(a+b)&=\sin a\cos b+\cos a\sin b\\\sin(a-b)&=\sin a\cos b-\cos a\sin b\end{aligned}}$ Pairwise addition of the above four formulae yields: ${\begin{aligned}&\sin(a+b)+\sin(a-b)\\[5mu]&\quad =\sin a\cos b+\cos a\sin b+\sin a\cos b-\cos a\sin b\\[5mu]&\quad =2\sin a\cos b\\[15mu]&\cos(a+b)+\cos(a-b)\\[5mu]&\quad =\cos a\cos b-\sin a\sin b+\cos a\cos b+\sin a\sin b\\[5mu]&\quad =2\cos a\cos b\end{aligned}}$ Setting $ a={\tfrac {1}{2}}(p+q)$ and $b={\tfrac {1}{2}}(p-q)$ and substituting yields: ${\begin{aligned}&\sin p+\sin q\\[5mu]&\quad =\sin \left({\tfrac {1}{2}}(p+q)+{\tfrac {1}{2}}(p-q)\right)+\sin \left({\tfrac {1}{2}}(p+q)-{\tfrac {1}{2}}(p-q)\right)\\[5mu]&\quad =2\sin {\tfrac {1}{2}}(p+q)\,\cos {\tfrac {1}{2}}(p-q)\\[15mu]&\cos p+\cos q\\[5mu]&\quad =\cos \left({\tfrac {1}{2}}(p+q)+{\tfrac {1}{2}}(p-q)\right)+\cos \left({\tfrac {1}{2}}(p+q)-{\tfrac {1}{2}}(p-q)\right)\\[5mu]&\quad =2\cos {\tfrac {1}{2}}(p+q)\,\cos {\tfrac {1}{2}}(p-q)\end{aligned}}$ Dividing the sum of sines by the sum of cosines one arrives at: ${\frac {\sin p+\sin q}{\cos p+\cos q}}={\frac {2\sin {\tfrac {1}{2}}(p+q)\,\cos {\tfrac {1}{2}}(p-q)}{2\cos {\tfrac {1}{2}}(p+q)\,\cos {\tfrac {1}{2}}(p-q)}}=\tan {\tfrac {1}{2}}(p+q)$ Geometric proofs Applying the formulae 
derived above to the rhombus figure on the right, it is readily shown that $\tan {\tfrac {1}{2}}(a+b)={\frac {\sin {\tfrac {1}{2}}(a+b)}{\cos {\tfrac {1}{2}}(a+b)}}={\frac {\sin a+\sin b}{\cos a+\cos b}}.$ In the unit circle, application of the above shows that $ t=\tan {\tfrac {1}{2}}\varphi $. By similarity of triangles, ${\frac {t}{\sin \varphi }}={\frac {1}{1+\cos \varphi }}.$ It follows that $t={\frac {\sin \varphi }{1+\cos \varphi }}={\frac {\sin \varphi (1-\cos \varphi )}{(1+\cos \varphi )(1-\cos \varphi )}}={\frac {1-\cos \varphi }{\sin \varphi }}.$ The tangent half-angle substitution in integral calculus Main article: Weierstrass substitution In various applications of trigonometry, it is useful to rewrite the trigonometric functions (such as sine and cosine) in terms of rational functions of a new variable $t$. These identities are known collectively as the tangent half-angle formulae because of the definition of $t$. These identities can be useful in calculus for converting rational functions in sine and cosine to functions of t in order to find their antiderivatives. Geometrically, the construction goes like this: for any point (cos φ, sin φ) on the unit circle, draw the line passing through it and the point (−1, 0). This line crosses the y-axis at some point y = t. One can show using simple geometry that t = tan(φ/2). The equation for the drawn line is y = (1 + x)t. The equation for the intersection of the line and circle is then a quadratic equation involving t. The two solutions to this equation are (−1, 0) and (cos φ, sin φ). This allows us to write the latter as rational functions of t (solutions are given below). The parameter t represents the stereographic projection of the point (cos φ, sin φ) onto the y-axis with the center of projection at (−1, 0). Thus, the tangent half-angle formulae give conversions between the stereographic coordinate t on the unit circle and the standard angular coordinate φ. 
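The stereographic construction just described is easy to check numerically. The sketch below, with an arbitrarily chosen angle, verifies that the chord through (−1, 0) meets the y-axis at t = tan(φ/2) and that sin φ, cos φ, and tan φ are the stated rational functions of t:

```python
import math

phi = 1.1                                    # arbitrary angle with cos(phi) != -1

# Height at which the chord through (-1, 0) and (cos phi, sin phi)
# crosses the y-axis: set x = 0 in the line equation y = (1 + x) * t.
t = math.sin(phi) / (1.0 + math.cos(phi))

assert math.isclose(t, math.tan(phi / 2))                  # t = tan(phi/2)
assert math.isclose(math.sin(phi), 2 * t / (1 + t * t))
assert math.isclose(math.cos(phi), (1 - t * t) / (1 + t * t))
assert math.isclose(math.tan(phi), 2 * t / (1 - t * t))
# (cos phi, sin phi) lies on the drawn line y = (1 + x) * t:
assert math.isclose(math.sin(phi), (1 + math.cos(phi)) * t)
print("tangent half-angle substitution verified")
```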
Then we have ${\begin{aligned}&\sin \varphi ={\frac {2t}{1+t^{2}}},&&\cos \varphi ={\frac {1-t^{2}}{1+t^{2}}},\\[8pt]&\tan \varphi ={\frac {2t}{1-t^{2}}},&&\cot \varphi ={\frac {1-t^{2}}{2t}},\\[8pt]&\sec \varphi ={\frac {1+t^{2}}{1-t^{2}}},&&\csc \varphi ={\frac {1+t^{2}}{2t}},\end{aligned}}$ and $e^{i\varphi }={\frac {1+it}{1-it}},\qquad e^{-i\varphi }={\frac {1-it}{1+it}}.$ By eliminating $\varphi $ between the identity directly above and the initial definition of $t$, one arrives at the following useful relationship for the arctangent in terms of the natural logarithm $2\arctan t=-i\ln {\frac {1+it}{1-it}}.$ In calculus, the Weierstrass substitution is used to find antiderivatives of rational functions of sin φ and cos φ. After setting $t=\tan {\tfrac {1}{2}}\varphi ,$ it follows that $\varphi =2\arctan(t)+2\pi n$ for some integer n, and therefore $d\varphi ={{2\,dt} \over {1+t^{2}}}.$ Hyperbolic identities One can play an entirely analogous game with the hyperbolic functions. A point on (the right branch of) a hyperbola is given by (cosh ψ, sinh ψ). 
Projecting this onto the y-axis from the center (−1, 0) gives the following: $t=\tanh {\tfrac {1}{2}}\psi ={\frac {\sinh \psi }{\cosh \psi +1}}={\frac {\cosh \psi -1}{\sinh \psi }}$ with the identities ${\begin{aligned}&\sinh \psi ={\frac {2t}{1-t^{2}}},&&\cosh \psi ={\frac {1+t^{2}}{1-t^{2}}},\\[8pt]&\tanh \psi ={\frac {2t}{1+t^{2}}},&&\coth \psi ={\frac {1+t^{2}}{2t}},\\[8pt]&\operatorname {sech} \,\psi ={\frac {1-t^{2}}{1+t^{2}}},&&\operatorname {csch} \,\psi ={\frac {1-t^{2}}{2t}},\end{aligned}}$ and $e^{\psi }={\frac {1+t}{1-t}},\qquad e^{-\psi }={\frac {1-t}{1+t}}.$ Finding ψ in terms of t leads to the following relationship between the inverse hyperbolic tangent $\operatorname {artanh} $ and the natural logarithm: $2\operatorname {artanh} t=\ln {\frac {1+t}{1-t}}.$ The Gudermannian function Main article: Gudermannian function Comparing the hyperbolic identities to the circular ones, one notices that they involve the same functions of t, just permuted. If we identify the parameter t in both cases we arrive at a relationship between the circular functions and the hyperbolic ones. That is, if $t=\tan {\tfrac {1}{2}}\varphi =\tanh {\tfrac {1}{2}}\psi $ then $\varphi =2\arctan {\bigl (}\tanh {\tfrac {1}{2}}\psi \,{\bigr )}\equiv \operatorname {gd} \psi ,$ where gd(ψ) is the Gudermannian function. The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers. The above descriptions of the tangent half-angle formulae (projection of the unit circle and the standard hyperbola onto the y-axis) give a geometric interpretation of this function. Rational values and Pythagorean triples Main article: Pythagorean triple Starting with a Pythagorean triangle with side lengths a, b, and c that are positive integers and satisfy a2 + b2 = c2, it follows immediately that each interior angle of the triangle has rational values for sine and cosine, because these are just ratios of side lengths. 
Thus each of these angles has a rational value for its half-angle tangent, using tan φ/2 = sin φ / (1 + cos φ). The reverse is also true. If there are two positive angles that sum to 90°, each with a rational half-angle tangent, and the third angle is a right angle, then a triangle with these interior angles can be scaled to a Pythagorean triangle. If the third angle is not required to be a right angle, but is the angle that makes the three positive angles sum to 180°, then the third angle will necessarily have a rational number for its half-angle tangent when the first two do (using angle addition and subtraction formulas for tangents) and the triangle can be scaled to a Heronian triangle. Generally, if K is a subfield of the complex numbers then tan φ/2 ∈ K ∪ {∞} implies that {sin φ, cos φ, tan φ, sec φ, csc φ, cot φ} ⊆ K ∪ {∞}. See also • List of trigonometric identities • Half-side formula External links • Tangent Of Halved Angle at Planetmath
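The correspondence between rational half-angle tangents and Pythagorean triples described above can be made concrete. In this sketch, `triple_from_half_tangent` is a hypothetical helper name, not notation from the article; it clears denominators in sin φ = 2t/(1 + t²) and cos φ = (1 − t²)/(1 + t²) for t = m/n:

```python
from fractions import Fraction
from math import gcd

def triple_from_half_tangent(m, n):
    """Pythagorean triple from the rational half-angle tangent t = m/n:
    clearing denominators in sin = 2t/(1+t^2), cos = (1-t^2)/(1+t^2)
    gives (2mn, n^2 - m^2, n^2 + m^2), reduced to lowest terms."""
    a, b, c = 2 * m * n, n * n - m * m, n * n + m * m
    g = gcd(gcd(a, b), c)
    return a // g, b // g, c // g

t = Fraction(1, 2)                            # tan(phi/2) = 1/2
a, b, c = triple_from_half_tangent(t.numerator, t.denominator)
assert a * a + b * b == c * c                 # (4, 3, 5)

# sin and cos are rational, and match the sides of the triple:
sin_phi = 2 * t / (1 + t * t)                 # 4/5
cos_phi = (1 - t * t) / (1 + t * t)           # 3/5
assert sin_phi == Fraction(a, c) and cos_phi == Fraction(b, c)
print(a, b, c)  # 4 3 5
```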
Tangent indicatrix In differential geometry, the tangent indicatrix of a closed space curve is a curve on the unit sphere intimately related to the curvature of the original curve. Let $\gamma (t)$ be a closed curve with nowhere-vanishing tangent vector ${\dot {\gamma }}$. Then the tangent indicatrix $T(t)$ of $\gamma $ is the closed curve on the unit sphere given by $T={\frac {\dot {\gamma }}{|{\dot {\gamma }}|}}$. The total curvature of $\gamma $ (the integral of curvature with respect to arc length along the curve) is equal to the arc length of $T$. References • Solomon, B. "Tantrices of Spherical Curves." American Mathematical Monthly 103, 30–39, 1996.
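As a numerical sketch (the circle and the step count are arbitrary choices), one can approximate the arc length of the tangent indicatrix of a plane circle and compare it with the total curvature, which is 2π for one loop of a circle of any radius:

```python
import math

R = 2.0   # radius of the example closed curve gamma(t) = (R cos t, R sin t, 0)

def T(t):
    """Tangent indicatrix: the unit tangent of gamma, a point on the unit sphere."""
    d = (-R * math.sin(t), R * math.cos(t), 0.0)
    norm = math.hypot(*d)
    return tuple(x / norm for x in d)

# Approximate the arc length of T by a fine inscribed polygon; for one
# loop of a circle the total curvature is 2*pi, independent of R.
n = 100_000
length = 0.0
prev = T(0.0)
for k in range(1, n + 1):
    cur = T(2 * math.pi * k / n)
    length += math.dist(prev, cur)
    prev = cur

assert abs(length - 2 * math.pi) < 1e-6
print("arc length of tangent indicatrix ~ total curvature 2*pi")
```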
Tangent lines to circles In Euclidean plane geometry, a tangent line to a circle is a line that touches the circle at exactly one point, never entering the circle's interior. Tangent lines to circles form the subject of several theorems, and play an important role in many geometrical constructions and proofs. Since the tangent line to a circle at a point P is perpendicular to the radius to that point, theorems involving tangent lines often involve radial lines and orthogonal circles. Tangent lines to one circle A tangent line t to a circle C intersects the circle at a single point T. For comparison, secant lines intersect a circle at two points, whereas another line may not intersect a circle at all. This property of tangent lines is preserved under many geometrical transformations, such as scalings, rotation, translations, inversions, and map projections. In technical language, these transformations do not change the incidence structure of the tangent line and circle, even though the line and circle may be deformed. The radius of a circle is perpendicular to the tangent line through its endpoint on the circle's circumference. Conversely, the perpendicular to a radius through the same endpoint is a tangent line. The resulting geometrical figure of circle and tangent line has a reflection symmetry about the axis of the radius. No tangent line can be drawn through a point within a circle, since any such line must be a secant line. However, two tangent lines can be drawn to a circle from a point P outside of the circle. The geometrical figure of a circle and both tangent lines likewise has a reflection symmetry about the radial axis joining P to the center point O of the circle. Thus the lengths of the segments from P to the two tangent points are equal. By the secant-tangent theorem, the square of this tangent length equals the power of the point P in the circle C. 
This power equals the product of distances from P to any two intersection points of the circle with a secant line passing through P. The tangent line t and the tangent point T have a conjugate relationship to one another, which has been generalized into the idea of pole points and polar lines. The same reciprocal relation exists between a point P outside the circle and the secant line joining its two points of tangency. If a point P is exterior to a circle with center O, and if the tangent lines from P touch the circle at points T and S, then ∠TPS and ∠TOS are supplementary (sum to 180°). If a chord TM is drawn from the tangency point T of exterior point P and ∠PTM ≤ 90° then ∠PTM = (1/2)∠TOM. Cartesian equation Suppose that the equation of the circle in Cartesian coordinates is $(x-a)^{2}+(y-b)^{2}=r^{2}$ with center at $(a,b)$. Then the tangent line of the circle at $(x_{1},y_{1})$ has Cartesian equation $(x-x_{1})(x_{1}-a)+(y-y_{1})(y_{1}-b)=0$ This can be proved by taking the implicit derivative of the circle. Say that the circle has equation of $(x-a)^{2}+(y-b)^{2}=r^{2}$, and we are finding the slope of tangent line at $(x_{1},y_{1})$ where $(x_{1}-a)^{2}+(y_{1}-b)^{2}=r^{2}$. We begin by taking the implicit derivative with respect to $x$: ${\begin{aligned}(x-a)^{2}+(y-b)^{2}&=r^{2}\\2(x-a)+2(y-b){\frac {dy}{dx}}&=0\\{\frac {dy}{dx}}&=-{\frac {x_{1}-a}{y_{1}-b}}\\\end{aligned}}$ Now that we have the slope of the tangent line, we can substitute the slope and the coordinate of the tangency point into the line equation $y=kx+m$. 
${\begin{aligned}y&=-{\frac {x_{1}-a}{y_{1}-b}}x+y_{1}+x_{1}\left({\frac {x_{1}-a}{y_{1}-b}}\right)\\y-y_{1}&=(x_{1}-x)\left({\frac {x_{1}-a}{y_{1}-b}}\right)\\(y-y_{1})(y_{1}-b)&=-(x-x_{1})(x_{1}-a)\\(x-x_{1})(x_{1}-a)+(y-y_{1})(y_{1}-b)&=0\\\end{aligned}}$ Compass and straightedge constructions It is relatively straightforward to construct a line t tangent to a circle at a point T on the circumference of the circle: • A line a is drawn from O, the center of the circle, through the radial point T; • The line t is the perpendicular line to a. Thales' theorem may be used to construct the tangent lines to a point P external to the circle C: • A circle is drawn centered on the midpoint of the line segment OP, having diameter OP, where O is again the center of the circle C. • The intersection points T1 and T2 of the circle C and the new circle are the tangent points for lines passing through P, by the following argument. The line segments OT1 and OT2 are radii of the circle C; since both are inscribed in a semicircle, they are perpendicular to the line segments PT1 and PT2, respectively. But only a tangent line is perpendicular to the radial line. Hence, the two lines from P and passing through T1 and T2 are tangent to the circle C. Another method to construct the tangent lines to a point P external to the circle using only a straightedge: • Draw any three different lines through the given point P that intersect the circle twice. • Let $A_{1},A_{2},B_{1},B_{2},C_{1},C_{2}$ be the six intersection points, with the same letter corresponding to the same line and the index 1 corresponding to the point closer to P. • Let D be the point where the lines $A_{1}B_{2}$ and $A_{2}B_{1}$ intersect, • Similarly E for the lines $B_{1}C_{2}$ and $B_{2}C_{1}$. • Draw a line through D and E. • This line meets the circle at two points, F and G. • The tangents are the lines PF and PG.[1] With analytic geometry Let $P=(a,b)$ be a point of the circle with equation $x^{2}+y^{2}=r^{2}$. 
The tangent at $P$ has equation $ax+by=r^{2}$, because $P$ lies on both curves and ${\vec {OP}}=(a,b)^{T}$ is a normal vector of the line. The tangent intersects the x-axis at point $P_{0}=(x_{0},0)$ with $ax_{0}=r^{2}$. Conversely, if one starts with point $P_{0}=(x_{0},0)$, then the two tangents through $P_{0}$ meet the circle at the two points $P_{1/2}=(a,b_{\pm })$ with $a={\frac {r^{2}}{x_{0}}},\qquad b_{\pm }=\pm {\sqrt {r^{2}-a^{2}}}=\pm {\frac {r}{x_{0}}}{\sqrt {x_{0}^{2}-r^{2}}}\quad $. Written in vector form: ${\binom {a}{b_{\pm }}}={\frac {r^{2}}{x_{0}}}{\binom {1}{0}}\pm {\frac {r}{x_{0}}}{\sqrt {x_{0}^{2}-r^{2}}}{\binom {0}{1}}\ .$ If the point $P_{0}=(x_{0},y_{0})$ does not lie on the x-axis: In the vector form one replaces $x_{0}$ by the distance $\ d_{0}={\sqrt {x_{0}^{2}+y_{0}^{2}}}\ $ and the unit base vectors by the orthogonal unit vectors $\ {\vec {e}}_{1}={\frac {1}{d_{0}}}{\binom {x_{0}}{y_{0}}},\ {\vec {e}}_{2}={\frac {1}{d_{0}}}{\binom {-y_{0}}{x_{0}}}\ $. Then the tangents through point $P_{0}$ touch the circle at the points ${\binom {x_{1/2}}{y_{1/2}}}={\frac {r^{2}}{d_{0}^{2}}}{\binom {x_{0}}{y_{0}}}\pm {\frac {r}{d_{0}^{2}}}{\sqrt {d_{0}^{2}-r^{2}}}{\binom {-y_{0}}{x_{0}}}\ .$ For $d_{0}<r$ no tangents exist. For $d_{0}=r$ point $P_{0}$ lies on the circle and there is just one tangent with equation $x_{0}x+y_{0}y=r^{2}$. In case of $d_{0}>r$ there are two tangents with equations $\ x_{1}x+y_{1}y=r^{2},\ x_{2}x+y_{2}y=r^{2}$. Relation to circle inversion: Equation $ax_{0}=r^{2}$ describes the circle inversion of point $(x_{0},0)$. Relation to pole and polar: The polar of point $(x_{0},0)$ has equation $xx_{0}=r^{2}$. Tangential polygons A tangential polygon is a polygon each of whose sides is tangent to a particular circle, called its incircle. 
Every triangle is a tangential polygon, as is every regular polygon of any number of sides; in addition, for every number of polygon sides there are an infinite number of non-congruent tangential polygons. Tangent quadrilateral theorem and inscribed circles A tangential quadrilateral ABCD is a closed figure of four straight sides that are tangent to a given circle C. Equivalently, the circle C is inscribed in the quadrilateral ABCD. By the Pitot theorem, the sums of opposite sides of any such quadrilateral are equal, i.e., ${\overline {AB}}+{\overline {CD}}={\overline {BC}}+{\overline {DA}}.$ This conclusion follows from the equality of the tangent segments from the four vertices of the quadrilateral. Let the tangent points be denoted as P (on segment AB), Q (on segment BC), R (on segment CD) and S (on segment DA). The symmetric tangent segments about each point of ABCD are equal, e.g., BP=BQ=b, CQ=CR=c, DR=DS=d, and AS=AP=a. But each side of the quadrilateral is composed of two such tangent segments ${\overline {AB}}+{\overline {CD}}=(a+b)+(c+d)={\overline {BC}}+{\overline {DA}}=(b+c)+(d+a)$ proving the theorem. The converse is also true: a circle can be inscribed into every quadrilateral in which the lengths of opposite sides sum to the same value.[2] This theorem and its converse have various uses. For example, they show immediately that no rectangle can have an inscribed circle unless it is a square, and that every rhombus has an inscribed circle, whereas a general parallelogram does not. Tangent lines to two circles For two circles, there are generally four distinct lines that are tangent to both (bitangent) – if the two circles are outside each other – but in degenerate cases there may be any number between zero and four bitangent lines; these are addressed below. For two of these, the external tangent lines, the circles fall on the same side of the line; for the two others, the internal tangent lines, the circles fall on opposite sides of the line. 
The external tangent lines intersect in the external homothetic center, whereas the internal tangent lines intersect at the internal homothetic center. Both the external and internal homothetic centers lie on the line of centers (the line connecting the centers of the two circles), closer to the center of the smaller circle: the internal center is in the segment between the two circles, while the external center is not between the points, but rather outside, on the side of the center of the smaller circle. If the two circles have equal radius, there are still four bitangents, but the external tangent lines are parallel and there is no external center in the affine plane; in the projective plane, the external homothetic center lies at the point at infinity corresponding to the slope of these lines.[3] Outer tangent The red line joining the points $(x_{3},y_{3})$ and $(x_{4},y_{4})$ is the outer tangent between the two circles. Given points $(x_{1},y_{1})$, $(x_{2},y_{2})$ the points $(x_{3},y_{3})$, $(x_{4},y_{4})$ can easily be calculated with help of the angle $\alpha $: ${\begin{aligned}x_{3}&=x_{1}\pm r\sin \alpha \\y_{3}&=y_{1}\pm r\cos \alpha \\x_{4}&=x_{2}\pm R\sin \alpha \\y_{4}&=y_{2}\pm R\cos \alpha \\\end{aligned}}$ Here R and r notate the radii of the two circles and the angle $\alpha $ can be computed using basic trigonometry. One has $\alpha =\gamma -\beta $ with $\gamma =-\arctan \left({\tfrac {y_{2}-y_{1}}{x_{2}-x_{1}}}\right)$ and $\beta =\pm \arcsin \left({\tfrac {R-r}{\sqrt {(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}}}\right)$. [4] The distance between the external homothetic center S (the point where the two outer tangents of the two circles intersect) and the center of the nearer or farther circle, O2 or O1 respectively, can be found using similarity as follows: ${\frac {dr}{r_{1}-r_{2}}}$ Here, r can be r1 or r2 depending upon the need to find distances from the centers of the nearer or farther circle, O2 and O1. 
Here d is the distance O1O2 between the centers of the two circles. Inner tangent An inner tangent is a tangent that intersects the segment joining two circles' centers. Note that the inner tangent will not be defined for cases when the two circles overlap. Construction The bitangent lines can be constructed by first constructing the homothetic centers, as described at that article, and then constructing a line through a homothetic center that is tangent to one circle, by one of the methods described above. The resulting line will then be tangent to the other circle as well. Alternatively, the tangent lines and tangent points can be constructed more directly, as detailed below. Note that in degenerate cases these constructions break down; to simplify exposition this is not discussed in this section, but a form of the construction can work in limit cases (e.g., two circles tangent at one point). Synthetic geometry Let O1 and O2 be the centers of the two circles, C1 and C2, and let r1 and r2 be their radii, with r1 > r2; in other words, C1 is the larger of the two circles. Two different methods may be used to construct the external and internal tangent lines. External tangents A new circle C3 of radius r1 − r2 is drawn centered on O1. Using the method above, two lines are drawn from O2 that are tangent to this new circle. These lines are parallel to the desired tangent lines, because the situation corresponds to shrinking both circles C1 and C2 by a constant amount, r2, which shrinks C2 to a point. Two radial lines may be drawn from the center O1 through the tangent points on C3; these intersect C1 at the desired tangent points. The desired external tangent lines are the lines perpendicular to these radial lines at those tangent points, which may be constructed as described above. Internal tangents A new circle C3 of radius r1 + r2 is drawn centered on O1. Using the method above, two lines are drawn from O2 that are tangent to this new circle. 
These lines are parallel to the desired tangent lines, because the situation corresponds to shrinking C2 to a point while expanding C1 by a constant amount, r2. Two radial lines may be drawn from the center O1 through the tangent points on C3; these intersect C1 at the desired tangent points. The desired internal tangent lines are the lines perpendicular to these radial lines at those tangent points, which may be constructed as described above. Analytic geometry Let the circles have centers c1 = (x1,y1) and c2 = (x2,y2) with radii r1 and r2 respectively. Expressing a line by the equation $ax+by+c=0,$ with the normalization a2 + b2 = 1, then a bitangent line satisfies: ax1 + by1 + c = r1 and ax2 + by2 + c = r2. To solve for $(a,b,c)$, subtract the first equation from the second to obtain aΔx + bΔy = Δr where Δx = x2 − x1, Δy = y2 − y1 and Δr = r2 − r1. If $d={\sqrt {(\Delta x)^{2}+(\Delta y)^{2}}}$ is the distance from c1 to c2 we can normalize by X = Δx/d, Y = Δy/d and R = Δr/d to simplify the equations, yielding aX + bY = R and a2 + b2 = 1; solving these gives two solutions (k = ±1) for the two external tangent lines: a = RX − kY√(1 − R2) b = RY + kX√(1 − R2) c = r1 − (ax1 + by1) Geometrically this corresponds to computing the angle formed by the tangent lines and the line of centers, and then using that to rotate the equation for the line of centers to yield an equation for the tangent line. The angle is found from the trigonometric functions of a right triangle whose vertices are the (external) homothetic center, a center of a circle, and a tangent point; the hypotenuse lies on the tangent line, the radius is opposite the angle, and the adjacent side lies on the line of centers. (X, Y) is the unit vector pointing from c1 to c2, while R is $\cos \theta $ where $\theta $ is the angle between the line of centers and a tangent line. 
$\sin \theta $ is then $\pm {\sqrt {1-R^{2}}}$ (depending on the sign of $\theta $, equivalently the direction of rotation), and the above equations are rotation of (X, Y) by $\pm \theta ,$ using the rotation matrix: ${\begin{pmatrix}R&\mp {\sqrt {1-R^{2}}}\\\pm {\sqrt {1-R^{2}}}&R\end{pmatrix}}$ k = 1 is the tangent line to the right of the circles looking from c1 to c2. k = −1 is the tangent line to the right of the circles looking from c2 to c1. The above assumes each circle has positive radius. If r1 is positive and r2 negative then c1 will lie to the left of each line and c2 to the right, and the two tangent lines will cross. In this way all four solutions are obtained. Switching signs of both radii switches k = 1 and k = −1. Vectors In general the points of tangency t1 and t2 for the four lines tangent to two circles with centers v1 and v2 and radii r1 and r2 are given by solving the simultaneous equations: ${\begin{aligned}(t_{2}-v_{2})\cdot (t_{2}-t_{1})&=0\\(t_{1}-v_{1})\cdot (t_{2}-t_{1})&=0\\(t_{1}-v_{1})\cdot (t_{1}-v_{1})&=r_{1}^{2}\\(t_{2}-v_{2})\cdot (t_{2}-v_{2})&=r_{2}^{2}\\\end{aligned}}$ These equations express that the tangent line, which is parallel to $t_{2}-t_{1},$ is perpendicular to the radii, and that the tangent points lie on their respective circles. These are four quadratic equations in two two-dimensional vector variables, and in general position will have four pairs of solutions. Degenerate cases Two distinct circles may have between zero and four bitangent lines, depending on configuration; these can be classified in terms of the distance between the centers and the radii. If counted with multiplicity (counting a common tangent twice) there are zero, two, or four bitangent lines. Bitangent lines can also be generalized to circles with negative or zero radius. 
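The closed-form solution, together with the signed-radius trick just described (negate r2 to obtain the internal tangents), can be sketched as follows; the function name and the general-position assumption are ours.

```python
import math

def bitangents(x1, y1, r1, x2, y2, r2):
    """All bitangent lines a*x + b*y + c = 0 (with a^2 + b^2 = 1) of two
    circles, following the (X, Y, R) normalization above.  Negating r2
    yields the internal pair.  A sketch for circles in general position
    (outside each other); tangent kinds that do not exist are skipped."""
    lines = []
    d = math.hypot(x2 - x1, y2 - y1)
    for s2 in (+r2, -r2):                  # external pair, then internal pair
        X, Y, R = (x2 - x1) / d, (y2 - y1) / d, (s2 - r1) / d
        if R * R > 1.0:
            continue                       # no tangent of this kind
        root = math.sqrt(1.0 - R * R)
        for k in (+1, -1):
            a = R * X - k * Y * root
            b = R * Y + k * X * root
            c = r1 - (a * x1 + b * y1)
            lines.append((a, b, c))
    return lines
```

Since a² + b² = 1, the perpendicular distance from a center to each returned line equals the corresponding radius, which is easy to verify for a concrete pair of circles.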
The degenerate cases and the multiplicities can also be understood in terms of limits of other configurations – e.g., a limit of two circles that almost touch, and moving one so that they touch, or a circle with small radius shrinking to a circle of zero radius. • If the circles are outside each other ($d>r_{1}+r_{2}$), which is general position, there are four bitangents. • If they touch externally at one point ($d=r_{1}+r_{2}$) – have one point of external tangency – then they have two external bitangents and one internal bitangent, namely the common tangent line. This common tangent line has multiplicity two, as it separates the circles (one on the left, one on the right) for either orientation (direction). • If the circles intersect in two points ($|r_{1}-r_{2}|<d<r_{1}+r_{2}$), then they have no internal bitangents and two external bitangents (they cannot be separated, because they intersect, hence no internal bitangents). • If the circles touch internally at one point ($d=|r_{1}-r_{2}|$) – have one point of internal tangency – then they have no internal bitangents and one external bitangent, namely the common tangent line, which has multiplicity two, as above. • If one circle is completely inside the other ($d<|r_{1}-r_{2}|$) then they have no bitangents, as a tangent line to the outer circle does not intersect the inner circle, or conversely a tangent line to the inner circle is a secant line to the outer circle. Finally, if the two circles are identical, any tangent to the circle is a common tangent and hence (external) bitangent, so there is a circle's worth of bitangents. 
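The case analysis above transcribes directly into a small classifier (a sketch; the `eps` tolerance is our implementation choice for floating-point input, and identical circles are excluded as in the text):

```python
def bitangent_count(d, r1, r2, eps=1e-12):
    """Number of distinct bitangents of two distinct circles with
    positive radii r1, r2 and center distance d, per the case list above."""
    lo, hi = abs(r1 - r2), r1 + r2
    if d > hi + eps:
        return 4          # circles outside each other (general position)
    if abs(d - hi) <= eps:
        return 3          # externally tangent: 2 external + 1 common tangent
    if d > lo + eps:
        return 2          # intersecting in two points: external pair only
    if abs(d - lo) <= eps:
        return 1          # internally tangent: the common tangent only
    return 0              # one circle strictly inside the other
```

Counting the common tangent with multiplicity two, as the text does, recovers the even counts 4, 4, 2, 2, 0 for the five cases.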
Further, the notion of bitangent lines can be extended to circles with negative radius (the same locus of points, $x^{2}+y^{2}=(-r)^{2},$ but considered "inside out"), in which case if the radii have opposite sign (one circle has negative radius and the other has positive radius) the external and internal homothetic centers and external and internal bitangents are switched, while if the radii have the same sign (both positive radii or both negative radii) "external" and "internal" have the same usual sense (switching one sign switches them, so switching both switches them back). Bitangent lines can also be defined when one or both of the circles has radius zero. In this case the circle with radius zero is a double point, and thus any line passing through it intersects the point with multiplicity two, hence is "tangent". If one circle has radius zero, a bitangent line is simply a line tangent to the circle and passing through the point, and is counted with multiplicity two. If both circles have radius zero, then the bitangent line is the line they define, and is counted with multiplicity four. Note that in these degenerate cases the external and internal homothetic centers do generally still exist (the external center is at infinity if the radii are equal), except if the circles coincide, in which case the external center is not defined, or if both circles have radius zero, in which case the internal center is not defined. Belt problem The internal and external tangent lines are useful in solving the belt problem, which is to calculate the length of a belt or rope needed to fit snugly over two pulleys. If the belt is considered to be a mathematical line of negligible thickness, and if both pulleys are assumed to lie in exactly the same plane, the problem reduces to summing the lengths of the relevant tangent line segments with the lengths of circular arcs subtended by the belt. 
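For the uncrossed (exterior) belt, the summation just described, two straight tangent segments plus the two wrapped arcs, can be sketched as follows; the function and its arcsin-based wrap angles are our own formulation, not from the text.

```python
import math

def belt_length(d, r1, r2):
    """Length of an uncrossed (exterior) belt around two pulleys of radii
    r1, r2 with center distance d (requires d > |r1 - r2|).  A sketch:
    two external tangent segments plus the arcs subtended on each pulley."""
    straight = 2.0 * math.sqrt(d * d - (r1 - r2) ** 2)  # tangent segments
    alpha = math.asin((r1 - r2) / d)     # tilt of belt vs. line of centers
    wrap_1 = math.pi + 2.0 * alpha       # arc angle on pulley 1
    wrap_2 = math.pi - 2.0 * alpha       # arc angle on pulley 2
    return straight + r1 * wrap_1 + r2 * wrap_2
```

For equal pulleys the tilt vanishes and the formula reduces to 2d + 2πr, two straight runs plus one full circumference split across the two pulleys, which serves as a sanity check.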
If the belt is wrapped about the wheels so as to cross, the interior tangent line segments are relevant. Conversely, if the belt is wrapped exteriorly around the pulleys, the exterior tangent line segments are relevant; this case is sometimes called the pulley problem. Tangent lines to three circles: Monge's theorem Main article: Monge's theorem For three circles denoted by C1, C2, and C3, there are three pairs of circles (C1C2, C2C3, and C1C3). Since each pair of circles has two homothetic centers, there are six homothetic centers altogether. Gaspard Monge showed in the early 19th century that these six points lie on four lines, each line having three collinear points. Problem of Apollonius Main article: Special cases of Apollonius' problem Many special cases of Apollonius's problem involve finding a circle that is tangent to one or more lines. The simplest of these is to construct circles that are tangent to three given lines (the LLL problem). To solve this problem, the center of any such circle must lie on an angle bisector of any pair of the lines; there are two angle-bisecting lines for every intersection of two lines. The intersections of these angle bisectors give the centers of solution circles. There are four such circles in general, the inscribed circle of the triangle formed by the intersection of the three lines, and the three exscribed circles. A general Apollonius problem can be transformed into the simpler problem of a circle tangent to one circle and two parallel lines (itself a special case of the LLC special case). To accomplish this, it suffices to scale two of the three given circles until they just touch, i.e., are tangent. An inversion in their tangent point with respect to a circle of appropriate radius transforms the two touching given circles into two parallel lines, and the third given circle into another circle. 
Thus, the solutions may be found by sliding a circle of constant radius between two parallel lines until it contacts the transformed third circle. Re-inversion produces the corresponding solutions to the original problem. Generalizations Main articles: Pole and polar and Inversive geometry The concept of a tangent line to one or more circles can be generalized in several ways. First, the conjugate relationship between tangent points and tangent lines can be generalized to pole points and polar lines, in which the pole points may be anywhere, not only on the circumference of the circle. Second, the union of two circles is a special (reducible) case of a quartic plane curve, and the external and internal tangent lines are the bitangents to this quartic curve. A generic quartic curve has 28 bitangents. A third generalization considers tangent circles, rather than tangent lines; a tangent line can be considered as a tangent circle of infinite radius. In particular, the external tangent lines to two circles are limiting cases of a family of circles which are internally or externally tangent to both circles, while the internal tangent lines are limiting cases of a family of circles which are internally tangent to one and externally tangent to the other of the two circles.[5] In Möbius or inversive geometry, lines are viewed as circles through a point "at infinity" and for any line and any circle, there is a Möbius transformation which maps one to the other. In Möbius geometry, tangency between a line and a circle becomes a special case of tangency between two circles. This equivalence is extended further in Lie sphere geometry. Radius and tangent line are perpendicular at a point of a circle, and hyperbolic-orthogonal at a point of the unit hyperbola. 
The parametric representation of the unit hyperbola via radius vector is $p(a)\ =\ (\cosh a,\sinh a).$ The derivative of p(a) points in the direction of tangent line at p(a), and is ${\frac {dp}{da}}\ =\ (\sinh a,\cosh a).$ The radius and tangent are hyperbolic orthogonal at a since $p(a)\ {\text{and}}\ {\frac {dp}{da}}$ are reflections of each other in the asymptote y=x of the unit hyperbola. When interpreted as split-complex numbers (where j j = +1), the two numbers satisfy $jp(a)\ =\ {\frac {dp}{da}}.$ References 1. "Finding tangents to a circle with a straightedge". Stack Exchange. August 15, 2015. 2. Alexander Bogomolny "When A Quadrilateral Is Inscriptible?" at Cut-the-knot 3. Paul Kunkel. "Tangent circles". Whistleralley.com. Retrieved 2008-09-29. 4. Libeskind, Shlomo (2007), Euclidean and Transformational Geometry: A Deductive Inquiry, pp. 110–112 (online copy, p. 110, at Google Books) 5. Kunkel, Paul (2007), "The tangency problem of Apollonius: three looks" (PDF), BSHM Bulletin: Journal of the British Society for the History of Mathematics, 22 (1): 34–46, doi:10.1080/17498430601148911, S2CID 122408307 External links • Weisstein, Eric W. "Tangent lines to one circle". MathWorld. • Weisstein, Eric W. "Tangent lines to two circles". MathWorld.
Wikipedia
Tangent bundle In differential geometry, the tangent bundle of a differentiable manifold $M$ is a manifold $TM$ which assembles all the tangent vectors in $M$. As a set, it is given by the disjoint union[note 1] of the tangent spaces of $M$. That is, ${\begin{aligned}TM&=\bigsqcup _{x\in M}T_{x}M\\&=\bigcup _{x\in M}\left\{x\right\}\times T_{x}M\\&=\bigcup _{x\in M}\left\{(x,y)\mid y\in T_{x}M\right\}\\&=\left\{(x,y)\mid x\in M,\,y\in T_{x}M\right\}\end{aligned}}$ where $T_{x}M$ denotes the tangent space to $M$ at the point $x$. So, an element of $TM$ can be thought of as a pair $(x,v)$, where $x$ is a point in $M$ and $v$ is a tangent vector to $M$ at $x$. There is a natural projection $\pi :TM\twoheadrightarrow M$ defined by $\pi (x,v)=x$. This projection maps each element of the tangent space $T_{x}M$ to the single point $x$. The tangent bundle comes equipped with a natural topology (described in a section below). With this topology, the tangent bundle to a manifold is the prototypical example of a vector bundle (which is a fiber bundle whose fibers are vector spaces). A section of $TM$ is a vector field on $M$, and the dual bundle to $TM$ is the cotangent bundle, which is the disjoint union of the cotangent spaces of $M$. By definition, a manifold $M$ is parallelizable if and only if the tangent bundle is trivial. By definition, a manifold $M$ is framed if and only if the tangent bundle $TM$ is stably trivial, meaning that for some trivial bundle $E$ the Whitney sum $TM\oplus E$ is trivial. For example, the n-dimensional sphere Sn is framed for all n, but parallelizable only for n = 1, 3, 7 (by results of Bott-Milnor and Kervaire). Role One of the main roles of the tangent bundle is to provide a domain and range for the derivative of a smooth function. Namely, if $f:M\rightarrow N$ is a smooth function, with $M$ and $N$ smooth manifolds, its derivative is a smooth function $Df:TM\rightarrow TN$. 
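The role of Df as a map of tangent bundles can be sketched concretely. The map f below and its hand-written Jacobian are our own example (M = N = R², so tangent vectors are plain coordinate vectors):

```python
import numpy as np

# A concrete smooth map f: R^2 -> R^2 and its derivative as a map
# Df: TM -> TN, sending (x, v) to (f(x), Df_x v).
def f(p):
    x, y = p
    return np.array([x * y, x + np.sin(y)])

def jacobian(p):
    x, y = p
    return np.array([[y, x],
                     [1.0, np.cos(y)]])

def Df(point, vec):
    """Tangent-bundle map: a point of TM is a pair (base point, vector)."""
    return f(point), jacobian(point) @ vec

base = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])
image_point, image_vec = Df(base, v)
```

A finite-difference derivative of f along v at the base point agrees with `image_vec`, confirming that the fiber component really is the directional derivative.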
Topology and smooth structure The tangent bundle comes equipped with a natural topology (not the disjoint union topology) and smooth structure so as to make it into a manifold in its own right. The dimension of $TM$ is twice the dimension of $M$. Each tangent space of an n-dimensional manifold is an n-dimensional vector space. If $U$ is an open contractible subset of $M$, then there is a diffeomorphism $TU\to U\times \mathbb {R} ^{n}$ which restricts to a linear isomorphism from each tangent space $T_{x}U$ to $\{x\}\times \mathbb {R} ^{n}$. As a manifold, however, $TM$ is not always diffeomorphic to the product manifold $M\times \mathbb {R} ^{n}$. When it is of the form $M\times \mathbb {R} ^{n}$, then the tangent bundle is said to be trivial. Trivial tangent bundles usually occur for manifolds equipped with a 'compatible group structure'; for instance, in the case where the manifold is a Lie group. The tangent bundle of the unit circle is trivial because it is a Lie group (under multiplication and its natural differential structure). It is not true however that all spaces with trivial tangent bundles are Lie groups; manifolds which have a trivial tangent bundle are called parallelizable. Just as manifolds are locally modeled on Euclidean space, tangent bundles are locally modeled on $U\times \mathbb {R} ^{n}$, where $U$ is an open subset of Euclidean space. If M is a smooth n-dimensional manifold, then it comes equipped with an atlas of charts $(U_{\alpha },\phi _{\alpha })$, where $U_{\alpha }$ is an open set in $M$ and $\phi _{\alpha }:U_{\alpha }\to \mathbb {R} ^{n}$ is a diffeomorphism. These local coordinates on $U_{\alpha }$ give rise to an isomorphism $T_{x}M\rightarrow \mathbb {R} ^{n}$ for all $x\in U_{\alpha }$. 
We may then define a map ${\widetilde {\phi }}_{\alpha }:\pi ^{-1}\left(U_{\alpha }\right)\to \mathbb {R} ^{2n}$ by ${\widetilde {\phi }}_{\alpha }\left(x,v^{i}\partial _{i}\right)=\left(\phi _{\alpha }(x),v^{1},\cdots ,v^{n}\right)$ We use these maps to define the topology and smooth structure on $TM$. A subset $A$ of $TM$ is open if and only if ${\widetilde {\phi }}_{\alpha }\left(A\cap \pi ^{-1}\left(U_{\alpha }\right)\right)$ is open in $\mathbb {R} ^{2n}$ for each $\alpha .$ These maps are homeomorphisms between open subsets of $TM$ and $\mathbb {R} ^{2n}$ and therefore serve as charts for the smooth structure on $TM$. The transition functions on chart overlaps $\pi ^{-1}\left(U_{\alpha }\cap U_{\beta }\right)$ are induced by the Jacobian matrices of the associated coordinate transformation and are therefore smooth maps between open subsets of $\mathbb {R} ^{2n}$. The tangent bundle is an example of a more general construction called a vector bundle (which is itself a specific kind of fiber bundle). Explicitly, the tangent bundle to an $n$-dimensional manifold $M$ may be defined as a rank $n$ vector bundle over $M$ whose transition functions are given by the Jacobian of the associated coordinate transformations. Examples The simplest example is that of $\mathbb {R} ^{n}$. In this case the tangent bundle is trivial: each $T_{x}\mathbf {\mathbb {R} } ^{n}$ is canonically isomorphic to $T_{0}\mathbb {R} ^{n}$ via the map $\mathbb {R} ^{n}\to \mathbb {R} ^{n}$ which subtracts $x$, giving a diffeomorphism $T\mathbb {R} ^{n}\to \mathbb {R} ^{n}\times \mathbb {R} ^{n}$. Another simple example is the unit circle, $S^{1}$ (see picture above). The tangent bundle of the circle is also trivial and isomorphic to $S^{1}\times \mathbb {R} $. Geometrically, this is a cylinder of infinite height. The only tangent bundles that can be readily visualized are those of the real line $\mathbb {R} $ and the unit circle $S^{1}$, both of which are trivial. 
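The chart-transition behaviour can be illustrated with two familiar charts on the punctured plane, polar and Cartesian (our own example): on the tangent bundle, vector components transform by the Jacobian of the transition function (r, t) ↦ (r cos t, r sin t), exactly as stated above.

```python
import numpy as np

def polar_to_cart_jacobian(r, t):
    # Jacobian of the chart transition (r, t) |-> (r cos t, r sin t)
    return np.array([[np.cos(t), -r * np.sin(t)],
                     [np.sin(t),  r * np.cos(t)]])

r, t = 2.0, 0.7
v_polar = np.array([0.3, -0.1])            # vector components in the polar chart
v_cart = polar_to_cart_jacobian(r, t) @ v_polar  # same vector, Cartesian chart
```

Differentiating a curve with polar velocity (0.3, −0.1) after mapping it to Cartesian coordinates reproduces `v_cart`, so the two chart descriptions name the same tangent vector.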
For 2-dimensional manifolds the tangent bundle is 4-dimensional and hence difficult to visualize. A simple example of a nontrivial tangent bundle is that of the unit sphere $S^{2}$: this tangent bundle is nontrivial as a consequence of the hairy ball theorem. Therefore, the sphere is not parallelizable. Vector fields A smooth assignment of a tangent vector to each point of a manifold is called a vector field. Specifically, a vector field on a manifold $M$ is a smooth map $V\colon M\to TM$ such that $V(x)=(x,V_{x})$ with $V_{x}\in T_{x}M$ for every $x\in M$. In the language of fiber bundles, such a map is called a section. A vector field on $M$ is therefore a section of the tangent bundle of $M$. The set of all vector fields on $M$ is denoted by $\Gamma (TM)$. Vector fields can be added together pointwise $(V+W)_{x}=V_{x}+W_{x}$ and multiplied by smooth functions on M $(fV)_{x}=f(x)V_{x}$ to get other vector fields. The set of all vector fields $\Gamma (TM)$ then takes on the structure of a module over the commutative algebra of smooth functions on M, denoted $C^{\infty }(M)$. A local vector field on $M$ is a local section of the tangent bundle. That is, a local vector field is defined only on some open set $U\subset M$ and assigns to each point of $U$ a vector in the associated tangent space. The set of local vector fields on $M$ forms a structure known as a sheaf of real vector spaces on $M$. The above construction applies equally well to the cotangent bundle – the differential 1-forms on $M$ are precisely the sections of the cotangent bundle $\omega \in \Gamma (T^{*}M)$, $\omega :M\to T^{*}M$ that associate to each point $x\in M$ a 1-covector $\omega _{x}\in T_{x}^{*}M$, which map tangent vectors to real numbers: $\omega _{x}:T_{x}M\to \mathbb {R} $. Equivalently, a differential 1-form $\omega \in \Gamma (T^{*}M)$ maps a smooth vector field $X\in \Gamma (TM)$ to a smooth function $\omega (X)\in C^{\infty }(M)$. 
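The section picture of a vector field can be illustrated on the sphere (our own example): the rotation field below assigns to each x ∈ S² a vector tangent at x, so it is a section of TS², but it vanishes at the poles, consistent with the hairy ball theorem cited above.

```python
import numpy as np

# A smooth vector field on S^2 embedded in R^3: rotation about the z-axis.
# V(x) = e_z x x is perpendicular to x, hence tangent to the sphere at x.
def V(x):
    return np.cross([0.0, 0.0, 1.0], x)

pts = [np.array([1.0, 0.0, 0.0]),
       np.array([0.0, 1.0, 0.0]),
       np.array([0.6, 0.0, 0.8]),
       np.array([0.0, 0.0, 1.0])]
for p in pts:
    assert abs(np.dot(V(p), p)) < 1e-12          # V(p) lies in T_p S^2
assert np.linalg.norm(V(pts[-1])) == 0.0         # the field vanishes at a pole
```

No smooth modification can remove both zeros while keeping the field tangent; that obstruction is exactly the nontriviality of TS².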
Higher-order tangent bundles Since the tangent bundle $TM$ is itself a smooth manifold, the second-order tangent bundle can be defined via repeated application of the tangent bundle construction: $T^{2}M=T(TM).\,$ In general, the $k$th order tangent bundle $T^{k}M$ can be defined recursively as $T\left(T^{k-1}M\right)$. A smooth map $f:M\rightarrow N$ has an induced derivative, for which the tangent bundle is the appropriate domain and range $Df:TM\rightarrow TN$. Similarly, higher-order tangent bundles provide the domain and range for higher-order derivatives $D^{k}f:T^{k}M\to T^{k}N$. A distinct but related construction is that of the jet bundles on a manifold, which are bundles consisting of jets. Canonical vector field on tangent bundle On every tangent bundle $TM$, considered as a manifold itself, one can define a canonical vector field $V:TM\rightarrow T^{2}M$ as the diagonal map on the tangent space at each point. This is possible because the tangent space of a vector space W is naturally a product, $TW\cong W\times W,$ since the vector space itself is flat, and thus has a natural diagonal map $W\to TW$ given by $w\mapsto (w,w)$ under this product structure. Applying this product structure to the tangent space at each point and globalizing yields the canonical vector field. 
Informally, although the manifold $M$ is curved, each tangent space at a point $x$, $T_{x}M\approx \mathbb {R} ^{n}$, is flat, so the tangent bundle manifold $TM$ is locally a product of a curved $M$ and a flat $\mathbb {R} ^{n}.$ Thus the tangent bundle of the tangent bundle is locally (using $\approx $ for "choice of coordinates" and $\cong $ for "natural identification"): $T(TM)\approx T(M\times \mathbb {R} ^{n})\cong TM\times T(\mathbb {R} ^{n})\cong TM\times (\mathbb {R} ^{n}\times \mathbb {R} ^{n})$ and the map $TTM\to TM$ is the projection onto the first coordinates: $(TM\to M)\times (\mathbb {R} ^{n}\times \mathbb {R} ^{n}\to \mathbb {R} ^{n}).$ Splitting the first map via the zero section and the second map by the diagonal yields the canonical vector field. If $(x,v)$ are local coordinates for $TM$, the vector field has the expression $V=\sum _{i}\left.v^{i}{\frac {\partial }{\partial v^{i}}}\right|_{(x,v)}.$ More concisely, $(x,v)\mapsto (x,v,0,v)$ – the first pair of coordinates do not change because it is a section of a bundle and these are just the point in the base space: the last pair of coordinates are the section itself. This expression for the vector field depends only on $v$, not on $x$, as only the tangent directions can be naturally identified. Alternatively, consider the scalar multiplication function: ${\begin{cases}\mathbb {R} \times TM\to TM\\(t,v)\longmapsto tv\end{cases}}$ The derivative of this function with respect to the variable $\mathbb {R} $ at time $t=1$ is a function $V:TM\rightarrow T^{2}M$, which is an alternative description of the canonical vector field. The existence of such a vector field on $TM$ is analogous to the canonical one-form on the cotangent bundle. Sometimes $V$ is also called the Liouville vector field, or radial vector field. Using $V$ one can characterize the tangent bundle. 
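The two descriptions agree in coordinates, as a finite-difference check sketches: the velocity of the scalar-multiplication curve t ↦ (x, tv) at t = 1 is the TTM point (x, v, 0, v). The particular x and v below are arbitrary choices of ours.

```python
import numpy as np

x = np.array([0.4, -1.2])     # base coordinates on M (arbitrary)
v = np.array([2.0, 0.5])      # fiber coordinates, i.e. the tangent vector

def curve(t):
    # The scalar-multiplication curve t |-> (x, t v) in local TM coordinates
    return np.concatenate([x, t * v])

# Velocity at t = 1 by central differences: should be (0, 0, v1, v2),
# matching V = sum_i v^i d/dv^i at the point (x, v).
h = 1e-6
velocity = (curve(1 + h) - curve(1 - h)) / (2 * h)
assert np.allclose(velocity, np.concatenate([np.zeros(2), v]))
```

The base components of the velocity vanish because the curve never moves the base point, which is exactly why V has no ∂/∂xⁱ terms.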
Essentially, $V$ can be characterized using four axioms, and if a manifold has a vector field satisfying these axioms, then the manifold is a tangent bundle and the vector field is the canonical vector field on it. See, for example, De León et al. Lifts There are various ways to lift objects on $M$ into objects on $TM$. For example, if $\gamma $ is a curve in $M$, then $\gamma '$ (the tangent of $\gamma $) is a curve in $TM$. In contrast, without further assumptions on $M$ (say, a Riemannian metric), there is no similar lift into the cotangent bundle. The vertical lift of a function $f:M\rightarrow \mathbb {R} $ is the function $f^{\vee }:TM\rightarrow \mathbb {R} $ defined by $f^{\vee }=f\circ \pi $, where $\pi :TM\rightarrow M$ is the canonical projection. See also • Pushforward (differential) • Unit tangent bundle • Cotangent bundle • Frame bundle • Musical isomorphism Notes 1. The disjoint union ensures that for any two points x1 and x2 of manifold M the tangent spaces T1 and T2 have no common vector. This is graphically illustrated in the accompanying picture for tangent bundle of circle S1, see Examples section: all tangents to a circle lie in the plane of the circle. In order to make them disjoint it is necessary to align them in a plane perpendicular to the plane of the circle. References • Lee, Jeffrey M. (2009), Manifolds and Differential Geometry, Graduate Studies in Mathematics, vol. 107, Providence: American Mathematical Society. ISBN 978-0-8218-4815-9 • John M. Lee, Introduction to Smooth Manifolds, (2003) Springer-Verlag, New York. ISBN 0-387-95495-3. • Jürgen Jost, Riemannian Geometry and Geometric Analysis, (2002) Springer-Verlag, Berlin. ISBN 3-540-42627-2 • Ralph Abraham and Jerrold E. Marsden, Foundations of Mechanics, (1978) Benjamin-Cummings, London. ISBN 0-8053-0102-X • M. De León, E. Merino, J.A. Oubiña, M. Salgado, A characterization of tangent and stable tangent bundles, Annales de l'institut Henri Poincaré (A) Physique théorique, Vol. 61, no. 
1, 1994, 1-15 External links • "Tangent bundle", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Wolfram MathWorld: Tangent Bundle • PlanetMath: Tangent Bundle
Pushforward (differential) In differential geometry, pushforward is a linear approximation of smooth maps on tangent spaces. Suppose that $\varphi :M\to N$ is a smooth map between smooth manifolds; then the differential of $\varphi $ at a point $x$, denoted $d\varphi _{x}$, is, in some sense, the best linear approximation of $\varphi $ near $x$. It can be viewed as a generalization of the total derivative of ordinary calculus. Explicitly, the differential is a linear map from the tangent space of $M$ at $x$ to the tangent space of $N$ at $\varphi (x)$, $d\varphi _{x}:T_{x}M\to T_{\varphi (x)}N$. Hence it can be used to push tangent vectors on $M$ forward to tangent vectors on $N$. The differential of a map $\varphi $ is also called, by various authors, the derivative or total derivative of $\varphi $. Motivation Let $\varphi :U\to V$ be a smooth map from an open subset $U$ of $\mathbb {R} ^{m}$ to an open subset $V$ of $\mathbb {R} ^{n}$. For any point $x$ in $U$, the Jacobian of $\varphi $ at $x$ (with respect to the standard coordinates) is the matrix representation of the total derivative of $\varphi $ at $x$, which is a linear map $d\varphi _{x}:T_{x}\mathbb {R} ^{m}\to T_{\varphi (x)}\mathbb {R} ^{n}$ between their tangent spaces. Note the tangent spaces $T_{x}\mathbb {R} ^{m},T_{\varphi (x)}\mathbb {R} ^{n}$ are isomorphic to $\mathbb {R} ^{m}$ and $\mathbb {R} ^{n}$, respectively. The pushforward generalizes this construction to the case that $\varphi $ is a smooth function between any smooth manifolds $M$ and $N$. The differential of a smooth map Let $\varphi \colon M\to N$ be a smooth map of smooth manifolds. 
Given $x\in M,$ the differential of $\varphi $ at $x$ is a linear map $d\varphi _{x}\colon \ T_{x}M\to T_{\varphi (x)}N\,$ from the tangent space of $M$ at $x$ to the tangent space of $N$ at $\varphi (x).$ The image $d\varphi _{x}X$ of a tangent vector $X\in T_{x}M$ under $d\varphi _{x}$ is sometimes called the pushforward of $X$ by $\varphi .$ The exact definition of this pushforward depends on the definition one uses for tangent vectors (for the various definitions see tangent space). If tangent vectors are defined as equivalence classes of the curves $\gamma $ for which $\gamma (0)=x,$ then the differential is given by $d\varphi _{x}(\gamma '(0))=(\varphi \circ \gamma )'(0).$ Here, $\gamma $ is a curve in $M$ with $\gamma (0)=x,$ and $\gamma '(0)$ is tangent vector to the curve $\gamma $ at $0.$ In other words, the pushforward of the tangent vector to the curve $\gamma $ at $0$ is the tangent vector to the curve $\varphi \circ \gamma $ at $0.$ Alternatively, if tangent vectors are defined as derivations acting on smooth real-valued functions, then the differential is given by $d\varphi _{x}(X)(f)=X(f\circ \varphi ),$ for an arbitrary function $f\in C^{\infty }(N)$ and an arbitrary derivation $X\in T_{x}M$ at point $x\in M$ (a derivation is defined as a linear map $X\colon C^{\infty }(M)\to \mathbb {R} $ that satisfies the Leibniz rule, see: definition of tangent space via derivations). By definition, the pushforward of $X$ is in $T_{\varphi (x)}N$ and therefore itself is a derivation, $d\varphi _{x}(X)\colon C^{\infty }(N)\to \mathbb {R} $. 
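The curve definition can be checked numerically: dφₓ(γ′(0)) = (φ ∘ γ)′(0). The map φ and the curve γ below are our own concrete choices on R², with the Jacobian of φ written out by hand.

```python
import numpy as np

def phi(p):                       # a smooth map R^2 -> R^2
    x, y = p
    return np.array([x**2 - y, x * y])

def gamma(t):                     # a curve with gamma(0) = (1, 2), gamma'(0) = (1, -3)
    return np.array([1.0 + t, 2.0 - 3.0 * t])

# Left side: the differential at (1, 2) applied to gamma'(0).
# Jacobian of phi at (x, y) is [[2x, -1], [y, x]]; at (1, 2) that is:
J = np.array([[2.0, -1.0],
              [2.0,  1.0]])
lhs_vec = J @ np.array([1.0, -3.0])

# Right side: the tangent vector of phi ∘ gamma at 0, by central differences.
h = 1e-6
rhs_vec = (phi(gamma(h)) - phi(gamma(-h))) / (2 * h)
assert np.allclose(lhs_vec, rhs_vec, atol=1e-5)
```

Any other curve with the same initial point and velocity gives the same right-hand side, which is why the differential is well defined on equivalence classes of curves.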
After choosing two charts around $x$ and around $\varphi (x),$ $\varphi $ is locally determined by a smooth map ${\widehat {\varphi }}\colon U\to V$ between open sets of $\mathbb {R} ^{m}$ and $\mathbb {R} ^{n}$, and $d\varphi _{x}\left({\frac {\partial }{\partial u^{a}}}\right)={\frac {\partial {\widehat {\varphi }}^{b}}{\partial u^{a}}}{\frac {\partial }{\partial v^{b}}},$ in the Einstein summation notation, where the partial derivatives are evaluated at the point in $U$ corresponding to $x$ in the given chart. Extending by linearity gives the following matrix $\left(d\varphi _{x}\right)_{a}^{\;b}={\frac {\partial {\widehat {\varphi }}^{b}}{\partial u^{a}}}.$ Thus the differential is a linear transformation, between tangent spaces, associated to the smooth map $\varphi $ at each point. Therefore, in some chosen local coordinates, it is represented by the Jacobian matrix of the corresponding smooth map from $\mathbb {R} ^{m}$ to $\mathbb {R} ^{n}$. In general, the differential need not be invertible. However, if $\varphi $ is a local diffeomorphism, then $d\varphi _{x}$ is invertible, and the inverse gives the pullback of $T_{\varphi (x)}N.$ The differential is frequently expressed using a variety of other notations such as $D\varphi _{x},\left(\varphi _{*}\right)_{x},\varphi '(x),T_{x}\varphi .$ It follows from the definition that the differential of a composite is the composite of the differentials (i.e., functorial behaviour). This is the chain rule for smooth maps. Also, the differential of a local diffeomorphism is a linear isomorphism of tangent spaces. 
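The functoriality just stated can be checked in coordinates: the Jacobian of a composite equals the product of the Jacobians. A small sketch with hypothetical maps φ: R² → R² and ψ: R² → R³, both invented for illustration, using central-difference Jacobians:

```python
import math

def phi(p):
    return (p[0] + p[1] ** 2, p[0] * p[1])

def psi(q):
    return (math.sin(q[0]), q[0] * q[1], q[1] ** 2)

def num_jacobian(f, p, out_dim, h=1e-6):
    # Central-difference Jacobian: one column per input coordinate.
    J = [[0.0] * len(p) for _ in range(out_dim)]
    for j in range(len(p)):
        pp, pm = list(p), list(p)
        pp[j] += h
        pm[j] -= h
        fp, fm = f(tuple(pp)), f(tuple(pm))
        for i in range(out_dim):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

x = (0.3, 0.7)
# d(psi o phi)_x versus d(psi)_{phi(x)} composed with d(phi)_x:
J_composite = num_jacobian(lambda p: psi(phi(p)), x, 3)
J_chain = matmul(num_jacobian(psi, phi(x), 3), num_jacobian(phi, x, 2))
```

The two 3 × 2 matrices agree to numerical precision, which is the chain rule for smooth maps expressed in a chart.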
The differential on the tangent bundle The differential of a smooth map $\varphi $ induces, in an obvious manner, a bundle map (in fact a vector bundle homomorphism) from the tangent bundle of $M$ to the tangent bundle of $N$, denoted by $d\varphi $, which fits into the following commutative diagram: where $\pi _{M}$ and $\pi _{N}$ denote the bundle projections of the tangent bundles of $M$ and $N$ respectively. $\operatorname {d} \!\varphi $ induces a bundle map from $TM$ to the pullback bundle φ∗TN over $M$ via $(m,v_{m})\mapsto (m,\operatorname {d} \!\varphi (m,v_{m})),$ where $m\in M$ and $v_{m}\in T_{m}M.$ The latter map may in turn be viewed as a section of the vector bundle Hom(TM, φ∗TN) over M. The bundle map $\operatorname {d} \!\varphi $ is also denoted by $T\varphi $ and called the tangent map. In this way, $T$ is a functor. Pushforward of vector fields Given a smooth map φ : M → N and a vector field X on M, it is not usually possible to identify a pushforward of X by φ with some vector field Y on N. For example, if the map φ is not surjective, there is no natural way to define such a pushforward outside of the image of φ. Also, if φ is not injective there may be more than one choice of pushforward at a given point. Nevertheless, one can make this difficulty precise, using the notion of a vector field along a map. A section of φ∗TN over M is called a vector field along φ. For example, if M is a submanifold of N and φ is the inclusion, then a vector field along φ is just a section of the tangent bundle of N along M; in particular, a vector field on M defines such a section via the inclusion of TM inside TN. This idea generalizes to arbitrary smooth maps. Suppose that X is a vector field on M, i.e., a section of TM. Then, $\operatorname {d} \!\phi \circ X$ yields, in the above sense, the pushforward φ∗X, which is a vector field along φ, i.e., a section of φ∗TN over M. Any vector field Y on N defines a pullback section φ∗Y of φ∗TN with (φ∗Y)x = Yφ(x). 
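A short sketch of a pushforward as a vector field along φ (the map φ and the field X below are sample choices for the illustration): at each point x, the value dφ_x(X_x) is a tangent vector of N sitting over φ(x) but indexed by the point x of M, i.e. a section of φ∗TN.

```python
import math

def phi(p):                     # hypothetical smooth map M = R^2 -> N = R^2
    return (p[0] * p[1], p[0] - p[1])

def X(p):                       # a sample vector field on M
    return (math.cos(p[1]), p[0])

def dphi(p, v, h=1e-6):         # pushforward of v at p, by central differences
    fp = phi((p[0] + h * v[0], p[1] + h * v[1]))
    fm = phi((p[0] - h * v[0], p[1] - h * v[1]))
    return tuple((fp[i] - fm[i]) / (2 * h) for i in range(2))

def phi_star_X(p):
    # The section of phi*TN over M: x |-> d(phi)_x(X_x),
    # a vector field ALONG phi (not, in general, a vector field on N).
    return dphi(p, X(p))

p = (1.0, 2.0)
v_along = phi_star_X(p)         # a tangent vector over phi(p) = (2.0, -1.0)
```

At p = (1, 2) the Jacobian of φ is [[2, 1], [1, −1]] and X(p) = (cos 2, 1), so v_along ≈ (2 cos 2 + 1, cos 2 − 1), matching the matrix-vector product.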
A vector field X on M and a vector field Y on N are said to be φ-related if φ∗X = φ∗Y as vector fields along φ. In other words, for all x in M, dφx(Xx) = Yφ(x). In some situations, given a vector field X on M, there is a unique vector field Y on N which is φ-related to X. This is true in particular when φ is a diffeomorphism. In this case, the pushforward defines a vector field Y on N, given by $Y_{y}=\phi _{*}\left(X_{\phi ^{-1}(y)}\right).$ A more general situation arises when φ is surjective (for example the bundle projection of a fiber bundle). Then a vector field X on M is said to be projectable if for all y in N, dφx(Xx) is independent of the choice of x in φ−1({y}). This is precisely the condition that guarantees that a pushforward of X, as a vector field on N, is well defined. Pushforward from multiplication on Lie groups Given a Lie group $G$, we can use the multiplication map $m(-,-):G\times G\to G$ to get left multiplication $L_{g}=m(g,-)$ and right multiplication $R_{g}=m(-,g)$ maps $G\to G$. These maps can be used to construct left- or right-invariant vector fields on $G$ from its tangent space at the identity ${\mathfrak {g}}=T_{e}G$ (which is its associated Lie algebra). For example, given $X\in {\mathfrak {g}}$ we get an associated vector field ${\mathfrak {X}}$ on $G$ defined by ${\mathfrak {X}}_{g}=(L_{g})_{*}(X)\in T_{g}G$ for every $g\in G$. This can be readily computed using the curves definition of pushforward maps. If we have a curve $\gamma :(-1,1)\to G$ where $\gamma (0)=e\,,\quad \gamma '(0)=X,$ we get ${\begin{aligned}(L_{g})_{*}(X)&=(L_{g}\circ \gamma )'(0)\\&=\left.{\frac {d}{dt}}{\bigl (}g\cdot \gamma (t){\bigr )}\right|_{t=0}\\&=g\cdot \gamma '(0)\\&=g\cdot X\end{aligned}}$ since $g$ is constant with respect to $t$. This implies we can interpret the tangent spaces $T_{g}G$ as $T_{g}G=g\cdot T_{e}G=g\cdot {\mathfrak {g}}$.
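The computation (L_g)∗(X) = g · X can be reproduced numerically for 2 × 2 matrices, differentiating the curve γ(t) = I + tX, which stays in the invertible matrices for small t (for a proper subgroup one would instead use a curve inside the group). The matrices g and X below are arbitrary sample values:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

g = [[2.0, 3.0], [0.0, 0.5]]            # sample invertible matrix
X = [[1.0, -2.0], [0.0, -1.0]]          # sample tangent vector at the identity
I = [[1.0, 0.0], [0.0, 1.0]]

def curve(t):                            # gamma(t) = I + tX: gamma(0) = I, gamma'(0) = X
    return [[I[i][j] + t * X[i][j] for j in range(2)] for i in range(2)]

h = 1e-6
A_plus, A_minus = matmul(g, curve(h)), matmul(g, curve(-h))
pushed = [[(A_plus[i][j] - A_minus[i][j]) / (2 * h) for j in range(2)]
          for i in range(2)]             # (L_g o gamma)'(0), entrywise
left_translate = matmul(g, X)            # the claimed answer g · X
```

Because L_g is linear, the finite-difference derivative matches g · X essentially exactly.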
Pushforward for some Lie groups For example, if $G$ is the Heisenberg group given by matrices $H=\left\{{\begin{bmatrix}1&a&b\\0&1&c\\0&0&1\end{bmatrix}}:a,b,c\in \mathbb {R} \right\}$ it has Lie algebra given by the set of matrices ${\mathfrak {h}}=\left\{{\begin{bmatrix}0&a&b\\0&0&c\\0&0&0\end{bmatrix}}:a,b,c\in \mathbb {R} \right\}$ since we can find a path $\gamma :(-1,1)\to H$ whose derivative at $0$ takes any prescribed real value in each strictly upper-triangular entry (the entries in the $i$-th row and $j$-th column with $i<j$). Then, for $g={\begin{bmatrix}1&2&3\\0&1&4\\0&0&1\end{bmatrix}}$ we have $T_{g}H=g\cdot {\mathfrak {h}}=\left\{{\begin{bmatrix}0&a&b+2c\\0&0&c\\0&0&0\end{bmatrix}}:a,b,c\in \mathbb {R} \right\}$ which is equal to the original set of matrices (as a set, since $b+2c$ ranges over all of $\mathbb {R} $ as $b$ does). This is not always the case. For example, in the group $G=\left\{{\begin{bmatrix}a&b\\0&1/a\end{bmatrix}}:a,b\in \mathbb {R} ,a\neq 0\right\}$ we have its Lie algebra as the set of matrices ${\mathfrak {g}}=\left\{{\begin{bmatrix}a&b\\0&-a\end{bmatrix}}:a,b\in \mathbb {R} \right\}$ hence for the matrix $g={\begin{bmatrix}2&3\\0&1/2\end{bmatrix}}$ we have $T_{g}G=\left\{{\begin{bmatrix}2a&2b-3a\\0&-a/2\end{bmatrix}}:a,b\in \mathbb {R} \right\}$ which is not the same set of matrices (its elements need not be traceless). See also • Pullback (differential geometry) • Flow-based generative model References • Lee, John M. (2003). Introduction to Smooth Manifolds. Springer Graduate Texts in Mathematics. Vol. 218. • Jost, Jürgen (2002). Riemannian Geometry and Geometric Analysis. Berlin: Springer-Verlag. ISBN 3-540-42627-2. See section 1.6. • Abraham, Ralph; Marsden, Jerrold E. (1978). Foundations of Mechanics. London: Benjamin-Cummings. ISBN 0-8053-0102-X. See sections 1.7 and 2.3.
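The two matrix examples above can be verified by direct multiplication; the coefficients a = 5, b = 7, c = 11 are arbitrary sample values chosen for the check.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

a, b, c = 5.0, 7.0, 11.0

# Heisenberg example: g * X stays strictly upper triangular, so as a SET
# T_gH = g * h coincides with the Lie algebra h (b + 2c sweeps all reals).
g = [[1.0, 2.0, 3.0], [0.0, 1.0, 4.0], [0.0, 0.0, 1.0]]
X = [[0.0, a, b], [0.0, 0.0, c], [0.0, 0.0, 0.0]]
gX = matmul(g, X)                      # [[0, a, b + 2c], [0, 0, c], [0, 0, 0]]

# Second example: g2 * Y has trace 2a - a/2 = 3a/2, nonzero for a != 0,
# so it generally leaves the (traceless) Lie algebra.
g2 = [[2.0, 3.0], [0.0, 0.5]]
Y = [[a, b], [0.0, -a]]
g2Y = matmul(g2, Y)                    # [[2a, 2b - 3a], [0, -a/2]]
```

With these sample values, gX = [[0, 5, 29], [0, 0, 11], [0, 0, 0]] and g2Y = [[10, −1], [0, −2.5]], confirming both entrywise formulas.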
Wikipedia
Tangent space In mathematics, the tangent space of a manifold is a generalization to higher dimensions of tangent lines to curves in two-dimensional space and tangent planes to surfaces in three-dimensional space. In the context of physics, the tangent space to a manifold at a point can be viewed as the space of possible velocities for a particle moving on the manifold. Informal description In differential geometry, one can attach to every point $x$ of a differentiable manifold a tangent space—a real vector space that intuitively contains the possible directions in which one can tangentially pass through $x$. The elements of the tangent space at $x$ are called the tangent vectors at $x$. This is a generalization of the notion of a vector, based at a given initial point, in a Euclidean space. The dimension of the tangent space at every point of a connected manifold is the same as that of the manifold itself. For example, if the given manifold is a $2$-sphere, then one can picture the tangent space at a point as the plane that touches the sphere at that point and is perpendicular to the sphere's radius through the point. More generally, if a given manifold is thought of as an embedded submanifold of Euclidean space, then one can picture a tangent space in this literal fashion. This was the traditional approach toward defining parallel transport. Many authors in differential geometry and general relativity use it.[1][2] More strictly, this defines an affine tangent space, which is distinct from the space of tangent vectors described by modern terminology. In algebraic geometry, in contrast, there is an intrinsic definition of the tangent space at a point of an algebraic variety $V$ that gives a vector space with dimension at least that of $V$ itself. The points $p$ at which the dimension of the tangent space is exactly that of $V$ are called non-singular points; the others are called singular points.
For example, a curve that crosses itself does not have a unique tangent line at the crossing point. The singular points of $V$ are those where the "test to be a manifold" fails. See Zariski tangent space. Once the tangent spaces of a manifold have been introduced, one can define vector fields, which are abstractions of the velocity field of particles moving in space. A vector field attaches to every point of the manifold a vector from the tangent space at that point, in a smooth manner. Such a vector field serves to define a generalized ordinary differential equation on a manifold: A solution to such a differential equation is a differentiable curve on the manifold whose derivative at any point is equal to the tangent vector attached to that point by the vector field. All the tangent spaces of a manifold may be "glued together" to form a new differentiable manifold with twice the dimension of the original manifold, called the tangent bundle of the manifold. Formal definitions The informal description above relies on a manifold's ability to be embedded into an ambient vector space $\mathbb {R} ^{m}$ so that the tangent vectors can "stick out" of the manifold into the ambient space. However, it is more convenient to define the notion of a tangent space based solely on the manifold itself.[3] There are various equivalent ways of defining the tangent spaces of a manifold. While the definition via the velocity of curves is intuitively the simplest, it is also the most cumbersome to work with. More elegant and abstract approaches are described below. Definition via tangent curves In the embedded-manifold picture, a tangent vector at a point $x$ is thought of as the velocity of a curve passing through the point $x$. We can therefore define a tangent vector as an equivalence class of curves passing through $x$ while being tangent to each other at $x$. Suppose that $M$ is a $C^{k}$ differentiable manifold (with smoothness $k\geq 1$) and that $x\in M$.
Pick a coordinate chart $\varphi :U\to \mathbb {R} ^{n}$, where $U$ is an open subset of $M$ containing $x$. Suppose further that two curves $\gamma _{1},\gamma _{2}:(-1,1)\to M$ with ${\gamma _{1}}(0)=x={\gamma _{2}}(0)$ are given such that both $\varphi \circ \gamma _{1},\varphi \circ \gamma _{2}:(-1,1)\to \mathbb {R} ^{n}$ are differentiable in the ordinary sense (we call these differentiable curves initialized at $x$). Then $\gamma _{1}$ and $\gamma _{2}$ are said to be equivalent at $0$ if and only if the derivatives of $\varphi \circ \gamma _{1}$ and $\varphi \circ \gamma _{2}$ at $0$ coincide. This defines an equivalence relation on the set of all differentiable curves initialized at $x$, and equivalence classes of such curves are known as tangent vectors of $M$ at $x$. The equivalence class of any such curve $\gamma $ is denoted by $\gamma '(0)$. The tangent space of $M$ at $x$, denoted by $T_{x}M$, is then defined as the set of all tangent vectors at $x$; it does not depend on the choice of coordinate chart $\varphi :U\to \mathbb {R} ^{n}$. To define vector-space operations on $T_{x}M$, we use a chart $\varphi :U\to \mathbb {R} ^{n}$ and define a map $\mathrm {d} {\varphi }_{x}:T_{x}M\to \mathbb {R} ^{n}$ by $ {\mathrm {d} {\varphi }_{x}}(\gamma '(0)):=\left.{\frac {\mathrm {d} }{\mathrm {d} {t}}}[(\varphi \circ \gamma )(t)]\right|_{t=0},$ where $\gamma \in \gamma '(0)$. The map $\mathrm {d} {\varphi }_{x}$ turns out to be bijective and may be used to transfer the vector-space operations on $\mathbb {R} ^{n}$ over to $T_{x}M$, thus turning the latter set into an $n$-dimensional real vector space. Again, one needs to check that this construction does not depend on the particular chart $\varphi :U\to \mathbb {R} ^{n}$ and the curve $\gamma $ being used, and in fact it does not. Definition via derivations Suppose now that $M$ is a $C^{\infty }$ manifold. 
A real-valued function $f:M\to \mathbb {R} $ is said to belong to ${C^{\infty }}(M)$ if and only if for every coordinate chart $\varphi :U\to \mathbb {R} ^{n}$, the map $f\circ \varphi ^{-1}:\varphi [U]\subseteq \mathbb {R} ^{n}\to \mathbb {R} $ is infinitely differentiable. Note that ${C^{\infty }}(M)$ is a real associative algebra with respect to the pointwise product and sum of functions and scalar multiplication. A derivation at $x\in M$ is defined as a linear map $D:{C^{\infty }}(M)\to \mathbb {R} $ that satisfies the Leibniz identity $\forall f,g\in {C^{\infty }}(M):\qquad D(fg)=D(f)\cdot g(x)+f(x)\cdot D(g),$ which is modeled on the product rule of calculus. (For every identically constant function $f={\text{const}},$ it follows that $D(f)=0$). Denote by $T_{x}M$ the set of all derivations at $x.$ Setting • $(D_{1}+D_{2})(f):={D}_{1}(f)+{D}_{2}(f)$ and • $(\lambda \cdot D)(f):=\lambda \cdot D(f)$ turns $T_{x}M$ into a vector space. Generalizations Generalizations of this definition are possible, for instance, to complex manifolds and algebraic varieties. However, instead of examining derivations $D$ from the full algebra of functions, one must instead work at the level of germs of functions. The reason for this is that the structure sheaf may not be fine for such structures. For example, let $X$ be an algebraic variety with structure sheaf ${\mathcal {O}}_{X}$. Then the Zariski tangent space at a point $p\in X$ is the collection of all $\mathbb {k} $-derivations $D:{\mathcal {O}}_{X,p}\to \mathbb {k} $, where $\mathbb {k} $ is the ground field and ${\mathcal {O}}_{X,p}$ is the stalk of ${\mathcal {O}}_{X}$ at $p$. Equivalence of the definitions For $x\in M$ and a differentiable curve $\gamma :(-1,1)\to M$ such that $\gamma (0)=x,$ define ${D_{\gamma }}(f):=(f\circ \gamma )'(0)$ (where the derivative is taken in the ordinary sense because $f\circ \gamma $ is a function from $(-1,1)$ to $\mathbb {R} $).
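A quick numerical check that this D_γ satisfies the Leibniz identity; the curve and the two functions below are sample choices for the sketch.

```python
import math

def gamma(t):                        # a sample curve with gamma(0) = x = (1, 1)
    return (1.0 + 2.0 * t, math.exp(t))

def f(p): return p[0] * p[0] + p[1]
def g(p): return math.sin(p[0]) * p[1]

h = 1e-6
def D(func):                         # D_gamma(func) := (func o gamma)'(0)
    return (func(gamma(h)) - func(gamma(-h))) / (2 * h)

x = gamma(0.0)
lhs = D(lambda p: f(p) * g(p))       # D(fg)
rhs = D(f) * g(x) + f(x) * D(g)      # D(f)·g(x) + f(x)·D(g)
```

Both sides evaluate to 7 sin 1 + 4 cos 1 for this choice of curve and functions, in agreement with the product rule.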
One can ascertain that $D_{\gamma }(f)$ is a derivation at the point $x,$ and that equivalent curves yield the same derivation. Thus, for an equivalence class $\gamma '(0),$ we can define ${D_{\gamma '(0)}}(f):=(f\circ \gamma )'(0),$ where the curve $\gamma \in \gamma '(0)$ has been chosen arbitrarily. The map $\gamma '(0)\mapsto D_{\gamma '(0)}$ is a vector space isomorphism between the space of the equivalence classes $\gamma '(0)$ and that of the derivations at the point $x.$ Definition via cotangent spaces Again, we start with a $C^{\infty }$ manifold $M$ and a point $x\in M$. Consider the ideal $I$ of $C^{\infty }(M)$ that consists of all smooth functions $f$ vanishing at $x$, i.e., $f(x)=0$. Then $I$ and $I^{2}$ are both real vector spaces, and the quotient space $I/I^{2}$ can be shown to be isomorphic to the cotangent space $T_{x}^{*}M$ through the use of Taylor's theorem. The tangent space $T_{x}M$ may then be defined as the dual space of $I/I^{2}$. While this definition is the most abstract, it is also the one that is most easily transferable to other settings, for instance, to the varieties considered in algebraic geometry. If $D$ is a derivation at $x$, then $D(f)=0$ for every $f\in I^{2}$, which means that $D$ gives rise to a linear map $I/I^{2}\to \mathbb {R} $. Conversely, if $r:I/I^{2}\to \mathbb {R} $ is a linear map, then $D(f):=r\left((f-f(x))+I^{2}\right)$ defines a derivation at $x$. This yields an equivalence between tangent spaces defined via derivations and tangent spaces defined via cotangent spaces. Properties If $M$ is an open subset of $\mathbb {R} ^{n}$, then $M$ is a $C^{\infty }$ manifold in a natural manner (take coordinate charts to be identity maps on open subsets of $\mathbb {R} ^{n}$), and the tangent spaces are all naturally identified with $\mathbb {R} ^{n}$. Tangent vectors as directional derivatives Another way to think about tangent vectors is as directional derivatives. 
Given a vector $v$ in $\mathbb {R} ^{n}$, one defines the corresponding directional derivative at a point $x\in \mathbb {R} ^{n}$ by $\forall f\in {C^{\infty }}(\mathbb {R} ^{n}):\qquad (D_{v}f)(x):=\left.{\frac {\mathrm {d} }{\mathrm {d} {t}}}[f(x+tv)]\right|_{t=0}=\sum _{i=1}^{n}v^{i}{\frac {\partial f}{\partial x^{i}}}(x).$ This map is naturally a derivation at $x$. Furthermore, every derivation at a point in $\mathbb {R} ^{n}$ is of this form. Hence, there is a one-to-one correspondence between vectors (thought of as tangent vectors at a point) and derivations at a point. As tangent vectors to a general manifold at a point can be defined as derivations at that point, it is natural to think of them as directional derivatives. Specifically, if $v$ is a tangent vector to $M$ at a point $x$ (thought of as a derivation), then define the directional derivative $D_{v}$ in the direction $v$ by $\forall f\in {C^{\infty }}(M):\qquad {D_{v}}(f):=v(f).$ If we think of $v$ as the initial velocity of a differentiable curve $\gamma $ initialized at $x$, i.e., $v=\gamma '(0)$, then instead, define $D_{v}$ by $\forall f\in {C^{\infty }}(M):\qquad {D_{v}}(f):=(f\circ \gamma )'(0).$ Basis of the tangent space at a point For a $C^{\infty }$ manifold $M$, if a chart $\varphi =(x^{1},\ldots ,x^{n}):U\to \mathbb {R} ^{n}$ is given with $p\in U$, then one can define an ordered basis $ \left\{\left.{\frac {\partial }{\partial x^{1}}}\right|_{p},\dots ,\left.{\frac {\partial }{\partial x^{n}}}\right|_{p}\right\}$ of $T_{p}M$ by $\forall i\in \{1,\ldots ,n\},~\forall f\in {C^{\infty }}(M):\qquad {\left.{\frac {\partial }{\partial x^{i}}}\right|_{p}}(f):=\left({\frac {\partial }{\partial x^{i}}}{\Big (}f\circ \varphi ^{-1}{\Big )}\right){\Big (}\varphi (p){\Big )}.$ Then for every tangent vector $v\in T_{p}M$, one has $v=\sum _{i=1}^{n}v^{i}\left.{\frac {\partial }{\partial x^{i}}}\right|_{p}.$ This formula therefore expresses $v$ as a linear combination of the basis tangent vectors $ 
\left.{\frac {\partial }{\partial x^{i}}}\right|_{p}\in T_{p}M$ defined by the coordinate chart $\varphi :U\to \mathbb {R} ^{n}$.[4] The derivative of a map Main article: Pushforward (differential) Every smooth (or differentiable) map $\varphi :M\to N$ between smooth (or differentiable) manifolds induces natural linear maps between their corresponding tangent spaces: $\mathrm {d} {\varphi }_{x}:T_{x}M\to T_{\varphi (x)}N.$ If the tangent space is defined via differentiable curves, then this map is defined by ${\mathrm {d} {\varphi }_{x}}(\gamma '(0)):=(\varphi \circ \gamma )'(0).$ If, instead, the tangent space is defined via derivations, then this map is defined by $[\mathrm {d} {\varphi }_{x}(D)](f):=D(f\circ \varphi ).$ The linear map $\mathrm {d} {\varphi }_{x}$ is called variously the derivative, total derivative, differential, or pushforward of $\varphi $ at $x$. It is frequently expressed using a variety of other notations: $D\varphi _{x},\qquad (\varphi _{*})_{x},\qquad \varphi '(x).$ In a sense, the derivative is the best linear approximation to $\varphi $ near $x$. Note that when $N=\mathbb {R} $, then the map $\mathrm {d} {\varphi }_{x}:T_{x}M\to \mathbb {R} $ coincides with the usual notion of the differential of the function $\varphi $. In local coordinates the derivative of $\varphi $ is given by the Jacobian. An important result regarding the derivative map is the following: Theorem — If $\varphi :M\to N$ is a local diffeomorphism at $x$ in $M$, then $\mathrm {d} {\varphi }_{x}:T_{x}M\to T_{\varphi (x)}N$ is a linear isomorphism. Conversely, if $\varphi :M\to N$ is continuously differentiable and $\mathrm {d} {\varphi }_{x}$ is an isomorphism, then there is an open neighborhood $U$ of $x$ such that $\varphi $ maps $U$ diffeomorphically onto its image. This is a generalization of the inverse function theorem to maps between manifolds. 
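The hypothesis in the theorem is genuinely local: invertibility of the differential everywhere does not force a global diffeomorphism. A hedged sketch with the standard example φ(x, y) = (eˣ cos y, eˣ sin y), whose Jacobian determinant is e²ˣ > 0 at every point even though φ is 2π-periodic in y:

```python
import math

def phi(p):                               # phi(x, y) = (e^x cos y, e^x sin y)
    x, y = p
    return (math.exp(x) * math.cos(y), math.exp(x) * math.sin(y))

def jacobian_det(p, h=1e-6):
    # Central-difference partials; (j == 0)/(j == 1) select the coordinate.
    d = lambda i, j: (phi((p[0] + h * (j == 0), p[1] + h * (j == 1)))[i]
                      - phi((p[0] - h * (j == 0), p[1] - h * (j == 1)))[i]) / (2 * h)
    return d(0, 0) * d(1, 1) - d(0, 1) * d(1, 0)

p = (0.3, 1.1)
det = jacobian_det(p)                     # analytically e^{2x} = e^{0.6} > 0
q = (p[0], p[1] + 2 * math.pi)            # a different point with the same image
```

So dφ is an isomorphism at every point (φ is a local diffeomorphism near each of them), yet φ(p) = φ(q) shows it is not injective globally.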
See also • Coordinate-induced basis • Cotangent space • Differential geometry of curves • Exponential map • Vector space Notes 1. do Carmo, Manfredo P. (1976). Differential Geometry of Curves and Surfaces. Prentice-Hall. 2. Dirac, Paul A. M. (1996) [1975]. General Theory of Relativity. Princeton University Press. ISBN 0-691-01146-X. 3. Chris J. Isham (1 January 2002). Modern Differential Geometry for Physicists. Allied Publishers. pp. 70–72. ISBN 978-81-7764-316-9. 4. Lerman, Eugene. "An Introduction to Differential Geometry" (PDF). p. 12. References • Lee, Jeffrey M. (2009), Manifolds and Differential Geometry, Graduate Studies in Mathematics, vol. 107, Providence: American Mathematical Society. • Michor, Peter W. (2008), Topics in Differential Geometry, Graduate Studies in Mathematics, vol. 93, Providence: American Mathematical Society. • Spivak, Michael (1965), Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus, W. A. Benjamin, Inc., ISBN 978-0-8053-9021-6.
External links • Tangent Planes at MathWorld
Tangential angle In geometry, the tangential angle of a curve in the Cartesian plane, at a specific point, is the angle between the tangent line to the curve at the given point and the x-axis.[1] (Some authors define the angle as the deviation from the direction of the curve at some fixed starting point. This is equivalent to the definition given here by the addition of a constant to the angle or by rotating the curve.[2]) Equations If a curve is given parametrically by (x(t), y(t)), then the tangential angle φ at t is defined (up to a multiple of 2π) by[3] ${\frac {{\big (}x'(t),\ y'(t){\big )}}{{\big |}x'(t),\ y'(t){\big |}}}=(\cos \varphi ,\ \sin \varphi ).$ Here, the prime symbol denotes the derivative with respect to t. Thus, the tangential angle specifies the direction of the velocity vector (x′(t), y′(t)), while the speed specifies its magnitude. The vector ${\frac {{\big (}x'(t),\ y'(t){\big )}}{{\big |}x'(t),\ y'(t){\big |}}}$ is called the unit tangent vector, so an equivalent definition is that the tangential angle at t is the angle φ such that (cos φ, sin φ) is the unit tangent vector at t. If the curve is parametrized by arc length s, so |x′(s), y′(s)| = 1, then the definition simplifies to ${\big (}x'(s),\ y'(s){\big )}=(\cos \varphi ,\ \sin \varphi ).$ In this case, the curvature κ is given by φ′(s), where κ is taken to be positive if the curve bends to the left and negative if the curve bends to the right.[1] Conversely, the tangential angle at a given point equals the definite integral of curvature up to that point:[4][1] $\varphi (s)=\int _{0}^{s}\kappa (\sigma )\,d\sigma +\varphi _{0}$ $\varphi (t)=\int _{0}^{t}\kappa (\tau )\,s'(\tau )\,d\tau +\varphi _{0}$ If the curve is given by the graph of a function y = f(x), then we may take (x, f(x)) as the parametrization, and we may assume φ is between −π/2 and π/2.
This produces the explicit expression $\varphi =\arctan f'(x).$ Polar tangential angle[5] In polar coordinates, the polar tangential angle is defined as the angle between the tangent line to the curve at the given point and the ray from the origin to the point.[6] If ψ denotes the polar tangential angle, then ψ = φ − θ, where φ is as above and θ is, as usual, the polar angle. If the curve is defined in polar coordinates by r = f(θ), then the polar tangential angle ψ at θ is defined (up to a multiple of 2π) by ${\frac {{\big (}f'(\theta ),\ f(\theta ){\big )}}{{\big |}f'(\theta ),\ f(\theta ){\big |}}}=(\cos \psi ,\ \sin \psi )$. If the curve is parametrized by arc length s as r = r(s), θ = θ(s), so |r′(s), rθ′(s)| = 1, then the definition becomes ${\big (}r'(s),\ r\theta '(s){\big )}=(\cos \psi ,\ \sin \psi )$. The logarithmic spiral can be defined as a curve whose polar tangential angle is constant.[5][6] See also • Differential geometry of curves • Whewell equation • Subtangent References 1. Weisstein, Eric W. "Natural Equation". MathWorld. 2. For example: Whewell, W. (1849). "Of the Intrinsic Equation of a Curve, and Its Application". Cambridge Philosophical Transactions. 8: 659–671. This paper uses φ to mean the angle between the tangent and tangent at the origin. This is the paper introducing the Whewell equation, an application of the tangential angle. 3. Weisstein, Eric W. "Tangential Angle". MathWorld. 4. Surazhsky, Tatiana; Surazhsky, Vitaly (2004). Sampling planar curves using curvature-based shape analysis. Mathematical methods for curves and surfaces. Tromsø. CiteSeerX 10.1.1.125.2191. ISBN 978-0-9728482-4-4. 5. Williamson, Benjamin (1899). "Angle between Tangent and Radius Vector". An Elementary Treatise on the Differential Calculus (9th ed.). p. 222. 6. Logarithmic Spiral at PlanetMath. Further reading • "Notations". Encyclopédie des Formes Mathématiques Remarquables (in French). • Yates, R. C. (1952). A Handbook on Curves and Their Properties.
Ann Arbor, MI: J. W. Edwards. pp. 123–126.
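As a numerical illustration of the parametric definition above, with the unit circle as the sample curve: for r(t) = (cos t, sin t) the tangential angle is φ(t) = t + π/2 (for t where this avoids the atan2 branch cut), and since t is already arc length, differentiating φ recovers the curvature κ = 1.

```python
import math

def tangential_angle(t):
    dx, dy = -math.sin(t), math.cos(t)    # (x'(t), y'(t)) for the unit circle
    return math.atan2(dy, dx)             # angle of the unit tangent vector

t = 0.7
phi_t = tangential_angle(t)               # expect t + pi/2 for this t
h = 1e-6
kappa = (tangential_angle(t + h) - tangential_angle(t - h)) / (2 * h)
```

The curve bends to the left, so the numerically recovered κ = φ′(s) is +1, matching the sign convention stated above.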
Line integral In mathematics, a line integral is an integral where the function to be integrated is evaluated along a curve.[1] The terms path integral, curve integral, and curvilinear integral are also used; contour integral is used as well, although that is typically reserved for line integrals in the complex plane. The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulae in physics, such as the definition of work as $W=\mathbf {F} \cdot \mathbf {s} $, have natural continuous analogues in terms of line integrals, in this case $ W=\int _{L}\mathbf {F} (\mathbf {s} )\cdot d\mathbf {s} $, which computes the work done on an object moving through an electric or gravitational field F along a path $L$. Vector calculus In qualitative terms, a line integral in vector calculus can be thought of as a measure of the total effect of a given tensor field along a given curve. For example, the line integral over a scalar field (rank 0 tensor) can be interpreted as the area under the field carved out by a particular curve. This can be visualized as the surface created by z = f(x,y) and a curve C in the xy plane. The line integral of f would be the area of the "curtain" created when the points of the surface that are directly over C are carved out.
Line integral of a scalar field Definition For some scalar field $f\colon U\to \mathbb {R} $ where $U\subseteq \mathbb {R} ^{n}$, the line integral along a piecewise smooth curve ${\mathcal {C}}\subset U$ is defined as $\int _{\mathcal {C}}f(\mathbf {r} )\,ds=\int _{a}^{b}f\left(\mathbf {r} (t)\right)\left|\mathbf {r} '(t)\right|\,dt,$ where $\mathbf {r} \colon [a,b]\to {\mathcal {C}}$ is an arbitrary bijective parametrization of the curve ${\mathcal {C}}$ such that r(a) and r(b) give the endpoints of ${\mathcal {C}}$ and a < b. Here, and in the rest of the article, the absolute value bars denote the standard (Euclidean) norm of a vector. The function f is called the integrand, the curve ${\mathcal {C}}$ is the domain of integration, and the symbol ds may be intuitively interpreted as an elementary arc length of the curve ${\mathcal {C}}$ (i.e., a differential length of ${\mathcal {C}}$). Line integrals of scalar fields over a curve ${\mathcal {C}}$ do not depend on the chosen parametrization r of ${\mathcal {C}}$.[2] Geometrically, when the scalar field f is defined over a plane (n = 2), its graph is a surface z = f(x, y) in space, and the line integral gives the (signed) cross-sectional area bounded by the curve ${\mathcal {C}}$ and the graph of f. Derivation For a line integral over a scalar field, the integral can be constructed from a Riemann sum using the above definitions of f, C and a parametrization r of C. This can be done by partitioning the interval [a, b] into n sub-intervals [ti−1, ti] of length Δt = (b − a)/n; then r(ti) denotes some point, call it a sample point, on the curve C. We can use the set of sample points {r(ti): 1 ≤ i ≤ n} to approximate the curve C as a polygonal path by introducing the straight line piece between each of the sample points r(ti−1) and r(ti). (The approximation of a curve by a polygonal path is called rectification of a curve.)
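As a quick numerical sanity check of the definition above, the right-hand side $\int _{a}^{b}f(\mathbf {r} (t))\left|\mathbf {r} '(t)\right|\,dt$ can be approximated directly. The sketch below uses the midpoint rule; the integrand f(x, y) = x² and the unit-circle parametrization are illustrative choices, not from the article:

```python
import math

def scalar_line_integral(f, r, r_prime, a, b, n=100_000):
    """Approximate ∫_C f ds = ∫_a^b f(r(t)) |r'(t)| dt by the midpoint rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        x, y = r(t)
        dx, dy = r_prime(t)
        total += f(x, y) * math.hypot(dx, dy) * h
    return total

# Illustrative check: f(x, y) = x^2 along the unit circle r(t) = (cos t, sin t),
# where |r'(t)| = 1, so the exact value is ∫_0^{2π} cos²t dt = π.
approx = scalar_line_integral(
    lambda x, y: x * x,
    lambda t: (math.cos(t), math.sin(t)),
    lambda t: (-math.sin(t), math.cos(t)),
    0.0, 2 * math.pi,
)
```

Because the value is parametrization-independent, any other bijective parametrization of the same circle would give the same result up to discretization error.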
We then label the distance of the line segment between adjacent sample points on the curve as Δsi. The product of f(r(ti)) and Δsi can be associated with the signed area of a rectangle with a height and width of f(r(ti)) and Δsi, respectively. Taking the limit of the sum of the terms as the length of the partitions approaches zero gives us $I=\lim _{\Delta s_{i}\to 0}\sum _{i=1}^{n}f(\mathbf {r} (t_{i}))\,\Delta s_{i}.$ By the mean value theorem, the distance between subsequent points on the curve is $\Delta s_{i}=\left|\mathbf {r} (t_{i}+\Delta t)-\mathbf {r} (t_{i})\right|\approx \left|\mathbf {r} '(t_{i})\Delta t\right|.$ Substituting this in the above Riemann sum yields $I=\lim _{\Delta t\to 0}\sum _{i=1}^{n}f(\mathbf {r} (t_{i}))\left|\mathbf {r} '(t_{i})\right|\Delta t,$ which is the Riemann sum for the integral $I=\int _{a}^{b}f(\mathbf {r} (t))\left|\mathbf {r} '(t)\right|dt.$ Line integral of a vector field Definition For a vector field F: U ⊆ Rn → Rn, the line integral along a piecewise smooth curve C ⊂ U, in the direction of r, is defined as $\int _{C}\mathbf {F} (\mathbf {r} )\cdot d\mathbf {r} =\int _{a}^{b}\mathbf {F} (\mathbf {r} (t))\cdot \mathbf {r} '(t)\,dt,$ where · is the dot product, and r: [a, b] → C is a bijective parametrization of the curve C such that r(a) and r(b) give the endpoints of C. A line integral of a scalar field is thus a line integral of a vector field, where the vectors are always tangential to the line of integration. Line integrals of vector fields are independent of the parametrization r in absolute value, but they do depend on its orientation. Specifically, a reversal in the orientation of the parametrization changes the sign of the line integral.[2] From the viewpoint of differential geometry, the line integral of a vector field along a curve is the integral of the corresponding 1-form under the musical isomorphism (which takes the vector field to the corresponding covector field), over the curve considered as an immersed 1-manifold.
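The definition and the orientation dependence just described can be illustrated numerically. In this sketch the field F(x, y) = (−y, x) and the unit-circle path are illustrative choices; reversing the direction of traversal flips the sign of the integral:

```python
import math

def vector_line_integral(F, r, r_prime, a, b, n=100_000):
    """Approximate ∫_C F · dr = ∫_a^b F(r(t)) · r'(t) dt by the midpoint rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        Fx, Fy = F(*r(t))
        dx, dy = r_prime(t)
        total += (Fx * dx + Fy * dy) * h
    return total

F = lambda x, y: (-y, x)  # illustrative rotational field

# Counterclockwise unit circle: F(r(t)) · r'(t) = sin²t + cos²t = 1, so the
# integral equals 2π.
forward = vector_line_integral(
    F,
    lambda t: (math.cos(t), math.sin(t)),
    lambda t: (-math.sin(t), math.cos(t)),
    0.0, 2 * math.pi,
)

# Reversed orientation, s(t) = r(-t): the integral changes sign.
backward = vector_line_integral(
    F,
    lambda t: (math.cos(-t), math.sin(-t)),
    lambda t: (math.sin(-t), -math.cos(-t)),
    0.0, 2 * math.pi,
)
```

Here `forward` ≈ 2π and `backward` ≈ −2π, matching the sign-reversal property cited above.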
Derivation The line integral of a vector field can be derived in a manner very similar to the case of a scalar field, but this time with the inclusion of a dot product. Again using the above definitions of F, C and its parametrization r(t), we construct the integral from a Riemann sum. We partition the interval [a, b] (which is the range of the values of the parameter t) into n intervals of length Δt = (b − a)/n. Letting ti be the ith point on [a, b], then r(ti) gives us the position of the ith point on the curve. However, instead of calculating the distances between subsequent points, we need to calculate their displacement vectors, Δri. As before, evaluating F at all the points on the curve and taking the dot product with each displacement vector gives us the infinitesimal contribution of each partition of F on C. Letting the size of the partitions go to zero gives us the sum $I=\lim _{\Delta t\to 0}\sum _{i=1}^{n}\mathbf {F} (\mathbf {r} (t_{i}))\cdot \Delta \mathbf {r} _{i}.$ By the mean value theorem, we see that the displacement vector between adjacent points on the curve is $\Delta \mathbf {r} _{i}=\mathbf {r} (t_{i}+\Delta t)-\mathbf {r} (t_{i})\approx \mathbf {r} '(t_{i})\,\Delta t.$ Substituting this in the above Riemann sum yields $I=\lim _{\Delta t\to 0}\sum _{i=1}^{n}\mathbf {F} (\mathbf {r} (t_{i}))\cdot \mathbf {r} '(t_{i})\,\Delta t,$ which is the Riemann sum for the integral defined above. Path independence Main article: Gradient theorem If a vector field F is the gradient of a scalar field G (i.e. if F is conservative), that is, $\mathbf {F} =\nabla G,$ then by the multivariable chain rule the derivative of the composition of G and r(t) is ${\frac {dG(\mathbf {r} (t))}{dt}}=\nabla G(\mathbf {r} (t))\cdot \mathbf {r} '(t)=\mathbf {F} (\mathbf {r} (t))\cdot \mathbf {r} '(t),$ which happens to be the integrand for the line integral of F on r(t).
It follows, given a path C, that $\int _{C}\mathbf {F} (\mathbf {r} )\cdot d\mathbf {r} =\int _{a}^{b}\mathbf {F} (\mathbf {r} (t))\cdot \mathbf {r} '(t)\,dt=\int _{a}^{b}{\frac {dG(\mathbf {r} (t))}{dt}}\,dt=G(\mathbf {r} (b))-G(\mathbf {r} (a)).$ In other words, the integral of F over C depends solely on the values of G at the points r(b) and r(a), and is thus independent of the path between them. For this reason, a line integral of a conservative vector field is called path independent. Applications The line integral has many uses in physics. For example, the work done on a particle traveling on a curve C inside a force field represented as a vector field F is the line integral of F on C.[3] Flow across a curve For a vector field $\mathbf {F} \colon U\subseteq \mathbb {R} ^{2}\to \mathbb {R} ^{2}$, F(x, y) = (P(x, y), Q(x, y)), the line integral across a curve C ⊂ U, also called the flux integral, is defined in terms of a piecewise smooth parametrization r: [a,b] → C, r(t) = (x(t), y(t)), as: $\int _{C}\mathbf {F} (\mathbf {r} )\cdot d\mathbf {r} ^{\perp }=\int _{a}^{b}{\begin{bmatrix}P{\big (}x(t),y(t){\big )}\\Q{\big (}x(t),y(t){\big )}\end{bmatrix}}\cdot {\begin{bmatrix}y'(t)\\-x'(t)\end{bmatrix}}~dt=\int _{a}^{b}\left(-Q~dx+P~dy\right).$ Here ⋅ is the dot product, and $\mathbf {r} '(t)^{\perp }=(y'(t),-x'(t))$ is the clockwise perpendicular of the velocity vector $\mathbf {r} '(t)=(x'(t),y'(t))$. The flow is computed in an oriented sense: the curve C has a specified forward direction from r(a) to r(b), and the flow is counted as positive when F(r(t)) is on the clockwise side of the forward velocity vector r'(t). Complex line integral In complex analysis, the line integral is defined in terms of multiplication and addition of complex numbers. Suppose U is an open subset of the complex plane C, f : U → C is a function, and $L\subset U$ is a curve of finite length, parametrized by γ: [a,b] → L, where γ(t) = x(t) + iy(t). 
The line integral $\int _{L}f(z)\,dz$ may be defined by subdividing the interval [a, b] into a = t0 < t1 < ... < tn = b and considering the expression $\sum _{k=1}^{n}f(\gamma (t_{k}))\,[\gamma (t_{k})-\gamma (t_{k-1})]=\sum _{k=1}^{n}f(\gamma _{k})\,\Delta \gamma _{k}.$ The integral is then the limit of this Riemann sum as the lengths of the subdivision intervals approach zero. If the parametrization γ is continuously differentiable, the line integral can be evaluated as an integral of a function of a real variable: $\int _{L}f(z)\,dz=\int _{a}^{b}f(\gamma (t))\gamma '(t)\,dt.$ When L is a closed curve (initial and final points coincide), the line integral is often denoted $ \oint _{L}f(z)\,dz,$ sometimes referred to in engineering as a cyclic integral. The line integral with respect to the conjugate complex differential ${\overline {dz}}$ is defined[4] to be $\int _{L}f(z){\overline {dz}}:={\overline {\int _{L}{\overline {f(z)}}\,dz}}=\int _{a}^{b}f(\gamma (t)){\overline {\gamma '(t)}}\,dt.$ The line integrals of complex functions can be evaluated using a number of techniques. The most direct is to split into real and imaginary parts, reducing the problem to evaluating two real-valued line integrals. The Cauchy integral theorem may be used to equate the line integral of an analytic function to the same integral over a more convenient curve. It also implies that over a closed curve enclosing a region where f(z) is analytic without singularities, the value of the integral is simply zero, or in case the region includes singularities, the residue theorem computes the integral in terms of the singularities. This also implies the path independence of complex line integral for analytic functions. Example Consider the function f(z) = 1/z, and let the contour L be the counterclockwise unit circle about 0, parametrized by z(t) = eit with t in [0, 2π] using the complex exponential. 
Substituting, we find: ${\begin{aligned}\oint _{L}{\frac {1}{z}}\,dz&=\int _{0}^{2\pi }{\frac {1}{e^{it}}}ie^{it}\,dt=i\int _{0}^{2\pi }e^{-it}e^{it}\,dt\\&=i\int _{0}^{2\pi }dt=i(2\pi -0)=2\pi i.\end{aligned}}$ This is a typical result of Cauchy's integral formula and the residue theorem. Relation of complex line integral and line integral of vector field Viewing complex numbers as 2-dimensional vectors, the line integral of a complex-valued function $f(z)$ has real and complex parts equal to the line integral and the flux integral of the vector field corresponding to the conjugate function ${\overline {f(z)}}.$ Specifically, if $\mathbf {r} (t)=(x(t),y(t))$ parametrizes L, and $f(z)=u(z)+iv(z)$ corresponds to the vector field $\mathbf {F} (x,y)={\overline {f(x+iy)}}=(u(x+iy),-v(x+iy)),$ then: ${\begin{aligned}\int _{L}f(z)\,dz&=\int _{L}(u+iv)(dx+i\,dy)\\&=\int _{L}(u,-v)\cdot (dx,dy)+i\int _{L}(u,-v)\cdot (dy,-dx)\\&=\int _{L}\mathbf {F} (\mathbf {r} )\cdot d\mathbf {r} +i\int _{L}\mathbf {F} (\mathbf {r} )\cdot d\mathbf {r} ^{\perp }.\end{aligned}}$ By Cauchy's theorem, the left-hand integral is zero when $f(z)$ is analytic (satisfying the Cauchy–Riemann equations) for any smooth closed curve L. Correspondingly, by Green's theorem, the right-hand integrals are zero when $\mathbf {F} ={\overline {f(z)}}$ is irrotational (curl-free) and incompressible (divergence-free). In fact, the Cauchy-Riemann equations for $f(z)$ are identical to the vanishing of curl and divergence for F. By Green's theorem, the area of a region enclosed by a smooth, closed, positively oriented curve $L$ is given by the integral $ {\frac {1}{2i}}\int _{L}{\overline {z}}\,dz.$ This fact is used, for example, in the proof of the area theorem. Quantum mechanics The path integral formulation of quantum mechanics actually refers not to path integrals in this sense but to functional integrals, that is, integrals over a space of paths, of a function of a possible path. 
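The $\oint _{L}{\frac {1}{z}}\,dz$ computation above, and the area formula $ {\frac {1}{2i}}\int _{L}{\overline {z}}\,dz$, can both be checked numerically. This is a sketch using the same unit-circle parametrization; the midpoint-rule approximation is an illustrative implementation choice:

```python
import cmath
import math

def complex_line_integral(f, gamma, gamma_prime, a, b, n=100_000):
    """Approximate ∫_L f(z) dz = ∫_a^b f(γ(t)) γ'(t) dt by the midpoint rule."""
    h = (b - a) / n
    total = 0j
    for i in range(n):
        t = a + (i + 0.5) * h
        total += f(gamma(t)) * gamma_prime(t) * h
    return total

gamma = lambda t: cmath.exp(1j * t)            # counterclockwise unit circle
gamma_prime = lambda t: 1j * cmath.exp(1j * t)

# ∮_L dz/z = 2πi, as computed above.
winding = complex_line_integral(lambda z: 1 / z, gamma, gamma_prime, 0.0, 2 * math.pi)

# Enclosed area via (1/2i) ∮_L conj(z) dz: π for the unit disk.
area = complex_line_integral(lambda z: z.conjugate(), gamma, gamma_prime,
                             0.0, 2 * math.pi) / 2j
```

Both results agree with the analytic values to within the discretization error of the rule.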
However, path integrals in the sense of this article are important in quantum mechanics; for example, complex contour integration is often used in evaluating probability amplitudes in quantum scattering theory. See also • Divergence theorem • Gradient theorem • Methods of contour integration • Nachbin's theorem • Surface integral • Volume element • Volume integral References 1. Kwong-Tin Tang (30 November 2006). Mathematical Methods for Engineers and Scientists 2: Vector Analysis, Ordinary Differential Equations and Laplace Transforms. Springer Science & Business Media. ISBN 978-3-540-30268-1. 2. Nykamp, Duane. "Line integrals are independent of parametrization". Math Insight. Retrieved September 18, 2020. 3. "16.2 Line Integrals". www.whitman.edu. Retrieved 2020-09-18. 4. Ahlfors, Lars (1966). Complex Analysis (2nd ed.). New York: McGraw-Hill. p. 103. External links • "Integral over trajectories", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Khan Academy modules: • "Introduction to the Line Integral" • "Line Integral Example 1" • "Line Integral Example 2 (part 1)" • "Line Integral Example 2 (part 2)" • Path integral at PlanetMath. 
• Line integral of a vector field – Interactive
Tangential polygon In Euclidean geometry, a tangential polygon, also known as a circumscribed polygon, is a convex polygon that contains an inscribed circle (also called an incircle). This is a circle that is tangent to each of the polygon's sides. The dual polygon of a tangential polygon is a cyclic polygon, which has a circumscribed circle passing through each of its vertices. All triangles are tangential, as are all regular polygons with any number of sides. A well-studied group of tangential polygons are the tangential quadrilaterals, which include the rhombi and kites. Characterizations A convex polygon has an incircle if and only if all of its internal angle bisectors are concurrent. This common point is the incenter (the center of the incircle).[1] There exists a tangential polygon of n sequential sides a1, ..., an if and only if the system of equations $x_{1}+x_{2}=a_{1},\quad x_{2}+x_{3}=a_{2},\quad \ldots ,\quad x_{n}+x_{1}=a_{n}$ has a solution (x1, ..., xn) in positive reals.[2] If such a solution exists, then x1, ..., xn are the tangent lengths of the polygon (the lengths from the vertices to the points where the incircle is tangent to the sides). Uniqueness and non-uniqueness If the number of sides n is odd, then for any given set of sidelengths $a_{1},\dots ,a_{n}$ satisfying the existence criterion above there is only one tangential polygon. But if n is even there are an infinitude of them.[3]: p. 389  For example, in the quadrilateral case where all sides are equal we can have a rhombus with any value of the acute angles, and all rhombi are tangential to an incircle. Inradius If the n sides of a tangential polygon are a1, ..., an, the inradius (radius of the incircle) is[4] $r={\frac {K}{s}}={\frac {2K}{\sum _{i=1}^{n}a_{i}}}$ where K is the area of the polygon and s is the semiperimeter. (Since all triangles are tangential, this formula applies to all triangles.) 
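The existence criterion and the inradius formula above can be put into code. The sketch below solves the tangent-length system for odd n (where the solution is unique) and checks it on a 3–4–5 right triangle, an illustrative choice with known area K = 6 and inradius r = 1:

```python
def tangent_lengths(sides):
    """Solve x1+x2 = a1, x2+x3 = a2, ..., xn+x1 = an for odd n.
    Returns the tangent lengths, or None if some length is not positive
    (in which case no tangential polygon with these sides exists)."""
    n = len(sides)
    assert n % 2 == 1, "for even n the solution is not unique"
    # The alternating sum a1 - a2 + a3 - ... + an telescopes to 2*x1.
    x1 = sum((-1) ** i * a for i, a in enumerate(sides)) / 2
    xs = [x1]
    for a in sides[:-1]:
        xs.append(a - xs[-1])          # x_{i+1} = a_i - x_i
    return xs if all(x > 0 for x in xs) else None

def inradius(sides, area):
    """r = K/s, with s the semiperimeter."""
    return 2 * area / sum(sides)

# 3-4-5 right triangle: area K = 6, so r = 2*6/12 = 1.
xs = tangent_lengths([3, 4, 5])   # -> [2.0, 1.0, 3.0]
r = inradius([3, 4, 5], 6.0)      # -> 1.0
```

For side lengths that violate the criterion, such as (1, 1, 5), the solver returns None: the system has no solution in positive reals, so no such tangential polygon exists.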
Other properties • For a tangential polygon with an odd number of sides, all sides are equal if and only if all angles are equal (so the polygon is regular). A tangential polygon with an even number of sides has all sides equal if and only if the alternate angles are equal (that is, angles A, C, E, ... are equal, and angles B, D, F, ... are equal).[5] • In a tangential polygon with an even number of sides, the sum of the odd numbered sides' lengths is equal to the sum of the even numbered sides' lengths.[2] • A tangential polygon has a larger area than any other polygon with the same perimeter and the same interior angles in the same sequence.[6]: p. 862 [7] • The centroid of any tangential polygon, the centroid of its boundary points, and the center of the inscribed circle are collinear, with the polygon's centroid between the others and twice as far from the incenter as from the boundary's centroid.[6]: pp. 858–9  Tangential triangle While all triangles are tangential to some circle, a triangle is called the tangential triangle of a reference triangle if the tangencies of the tangential triangle with the circle are also the vertices of the reference triangle. Tangential quadrilateral Main article: Tangential quadrilateral Tangential hexagon • In a tangential hexagon ABCDEF, the main diagonals AD, BE, and CF are concurrent according to Brianchon's theorem. See also • Circumgon References 1. Owen Byer, Felix Lazebnik and Deirdre Smeltzer, Methods for Euclidean Geometry, Mathematical Association of America, 2010, p. 77. 2. Dušan Djukić, Vladimir Janković, Ivan Matić, Nikola Petrović, The IMO Compendium, Springer, 2006, p. 561. 3. Hess, Albrecht (2014), "On a circle containing the incenters of tangential quadrilaterals" (PDF), Forum Geometricorum, 14: 389–396. 4. Alsina, Claudi and Nelsen, Roger, Icons of Mathematics. An exploration of twenty key images, Mathematical Association of America, 2011, p. 125. 5. De Villiers, Michael. 
"Equiangular cyclic and equilateral circumscribed polygons," Mathematical Gazette 95, March 2011, 102–107. 6. Tom M. Apostol and Mamikon A. Mnatsakanian (December 2004). "Figures Circumscribing Circles" (PDF). American Mathematical Monthly. 111 (10): 853–863. doi:10.2307/4145094. JSTOR 4145094. Retrieved 6 April 2016. 7. Apostol, Tom (December 2005). "erratum". American Mathematical Monthly. 112 (10): 946. doi:10.1080/00029890.2005.11920274. S2CID 218547110.
Region connection calculus The region connection calculus (RCC) is intended to serve for qualitative spatial representation and reasoning. RCC abstractly describes regions (in Euclidean space, or in a topological space) by their possible relations to each other. RCC8 consists of 8 basic relations that are possible between two regions: • disconnected (DC) • externally connected (EC) • equal (EQ) • partially overlapping (PO) • tangential proper part (TPP) • tangential proper part inverse (TPPi) • non-tangential proper part (NTPP) • non-tangential proper part inverse (NTPPi) From these basic relations, combinations can be built. For example, proper part (PP) is the union of TPP and NTPP. Axioms RCC is governed by two axioms.[1] • for any region x, x connects with itself • for any regions x and y, if x connects with y, then y connects with x Remark on the axioms The two axioms describe two features of the connection relation (reflexivity and symmetry), but not the characteristic feature of the connection relation.[2] For example, we can say that an object is less than 10 meters away from itself, and that if object A is less than 10 meters away from object B, then object B is less than 10 meters away from object A. So the relation 'less-than-10-meters' also satisfies the above two axioms, but it is not the connection relation in the intended sense of RCC.
Composition table The composition table of RCC8 is as follows; each entry gives the possible relations R(a, c) when R1(a, b) is the row relation and R2(b, c) is the column relation, and "*" denotes the universal relation (no relation can be discarded):

R1\R2 | DC | EC | PO | TPP | NTPP | TPPi | NTPPi | EQ
DC | * | DC,EC,PO,TPP,NTPP | DC,EC,PO,TPP,NTPP | DC,EC,PO,TPP,NTPP | DC,EC,PO,TPP,NTPP | DC | DC | DC
EC | DC,EC,PO,TPPi,NTPPi | DC,EC,PO,TPP,TPPi,EQ | DC,EC,PO,TPP,NTPP | EC,PO,TPP,NTPP | PO,TPP,NTPP | DC,EC | DC | EC
PO | DC,EC,PO,TPPi,NTPPi | DC,EC,PO,TPPi,NTPPi | * | PO,TPP,NTPP | PO,TPP,NTPP | DC,EC,PO,TPPi,NTPPi | DC,EC,PO,TPPi,NTPPi | PO
TPP | DC | DC,EC | DC,EC,PO,TPP,NTPP | TPP,NTPP | NTPP | DC,EC,PO,TPP,TPPi,EQ | DC,EC,PO,TPPi,NTPPi | TPP
NTPP | DC | DC | DC,EC,PO,TPP,NTPP | NTPP | NTPP | DC,EC,PO,TPP,NTPP | * | NTPP
TPPi | DC,EC,PO,TPPi,NTPPi | EC,PO,TPPi,NTPPi | PO,TPPi,NTPPi | PO,TPP,TPPi,EQ | PO,TPP,NTPP | TPPi,NTPPi | NTPPi | TPPi
NTPPi | DC,EC,PO,TPPi,NTPPi | PO,TPPi,NTPPi | PO,TPPi,NTPPi | PO,TPPi,NTPPi | PO,TPP,NTPP,TPPi,NTPPi,EQ | NTPPi | NTPPi | NTPPi
EQ | DC | EC | PO | TPP | NTPP | TPPi | NTPPi | EQ

Usage example: if a TPP b and b EC c, the TPP row and EC column of the table say that a DC c or a EC c. Examples The RCC8 calculus is intended for reasoning about spatial configurations. Consider the following example: two houses are connected via a road. Each house is located on its own property. The first house possibly touches the boundary of the property; the second one surely does not. What can we infer about the relation of the second property to the road? The spatial configuration can be formalized in RCC8 as the following constraint network:

house1 DC house2
house1 {TPP, NTPP} property1
house1 {DC, EC} property2
house1 EC road
house2 {DC, EC} property1
house2 NTPP property2
house2 EC road
property1 {DC, EC} property2
road {DC, EC, TPP, TPPi, PO, EQ, NTPP, NTPPi} property1
road {DC, EC, TPP, TPPi, PO, EQ, NTPP, NTPPi} property2

Using the RCC8 composition table and the path-consistency algorithm, we can refine the network in the following way:

road {PO, EC} property1
road {PO, TPP} property2

That is, the road either overlaps (PO) property2, or is a tangential proper part of it.
But, if the road is a tangential proper part of property2, then the road can only be externally connected (EC) to property1. That is, road PO property1 is not possible when road TPP property2. This fact is not obvious, but can be deduced once we examine the consistent "singleton labelings" of the constraint network. The following paragraph briefly describes singleton labelings. First, we note that the path-consistency algorithm will also reduce the possible relations between house2 and property1 from {DC, EC} to just DC. So, the path-consistency algorithm leaves multiple possible constraints on 5 of the edges in the constraint network. Since each of these edges has 2 possible constraints, there are 32 (2^5) candidate constraint networks in which every edge carries a single label ("singleton labelings"). However, of the 32 possible singleton labelings, only 9 are consistent. (See qualreas for details.) Only one of the consistent singleton labelings has the edge road TPP property2, and that same labeling includes road EC property1. Other versions of the region connection calculus include RCC5 (with only five basic relations – the distinction of whether two regions touch each other is ignored) and RCC23 (which allows reasoning about convexity). RCC8 use in GeoSPARQL RCC8 has been partially implemented in GeoSPARQL. Implementations • GQR is a reasoner for RCC-5, RCC-8, and RCC-23 (as well as other calculi for spatial and temporal reasoning) • qualreas is a Python framework for qualitative reasoning over networks of relation algebras, such as RCC-8, Allen's interval algebra and more. See also • Spatial relation • DE-9IM References 1. Randell et al. 1992 2. Dong 2008 Bibliography • Randell, D.A.; Cui, Z; Cohn, A.G. (1992). "A spatial logic based on regions and connection". 3rd Int. Conf. on Knowledge Representation and Reasoning. Morgan Kaufmann. pp. 165–176. • Anthony G.
Cohn; Brandon Bennett; John Gooday; Nicholas Mark Gotts (1997). "Qualitative Spatial Representation and Reasoning with the Region Connection Calculus". GeoInformatica. 1 (3): 275–316. doi:10.1023/A:1009712514511. S2CID 14841370. • Renz, J. (2002). Qualitative Spatial Reasoning with Topological Information. Lecture Notes in Computer Science. Vol. 2293. Springer Verlag. doi:10.1007/3-540-70736-0. ISBN 978-3-540-43346-0. S2CID 8236425. • Dong, Tiansi (2008). "A Comment on RCC: From RCC to RCC⁺⁺". Journal of Philosophical Logic. 34 (2): 319–352. doi:10.1007/s10992-007-9074-y. JSTOR 41217909. S2CID 6243376.
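The table lookup used in the worked example above can be sketched in code. Only the handful of composition-table entries needed for that example are encoded here; this is an illustration, not a full RCC8 reasoner:

```python
# Partial RCC8 composition table: entry (R1, R2) -> possible relations R(a, c)
# given R1(a, b) and R2(b, c). Only a few entries are encoded for illustration.
COMPOSE = {
    ("TPP", "EC"): {"DC", "EC"},
    ("TPP", "DC"): {"DC"},
    ("NTPP", "EC"): {"DC"},
    ("EC", "EC"): {"DC", "EC", "PO", "TPP", "TPPi", "EQ"},
}

def compose_sets(rels_ab, rels_bc):
    """Compose two sets of possible relations: the union of the entry-wise
    compositions, as used by the path-consistency algorithm."""
    out = set()
    for r1 in rels_ab:
        for r2 in rels_bc:
            out |= COMPOSE[(r1, r2)]
    return out

# Usage example from the text: a TPP b and b EC c implies a DC c or a EC c.
result = compose_sets({"TPP"}, {"EC"})   # -> {"DC", "EC"}
```

A path-consistency algorithm repeatedly intersects each edge's label set with such compositions over all two-edge paths until a fixed point is reached.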
Tangential trapezoid In Euclidean geometry, a tangential trapezoid, also called a circumscribed trapezoid, is a trapezoid whose four sides are all tangent to a circle within the trapezoid: the incircle or inscribed circle. It is the special case of a tangential quadrilateral in which at least one pair of opposite sides are parallel. As for other trapezoids, the parallel sides are called the bases and the other two sides the legs. The legs can be equal (see isosceles tangential trapezoid below), but they don't have to be. Special cases Examples of tangential trapezoids are rhombi and squares. Characterization If the incircle is tangent to the sides AB and CD at W and Y respectively, then a tangential quadrilateral ABCD is also a trapezoid with parallel sides AB and CD if and only if[1]: Thm. 2  ${\overline {AW}}\cdot {\overline {DY}}={\overline {BW}}\cdot {\overline {CY}}$ and AD and BC are the parallel sides of a trapezoid if and only if ${\overline {AW}}\cdot {\overline {BW}}={\overline {CY}}\cdot {\overline {DY}}.$ Area The formula for the area of a trapezoid can be simplified using Pitot's theorem to get a formula for the area of a tangential trapezoid. If the bases have lengths a, b with a ≠ b, and any one of the other two sides has length c, then the area K is given by the formula[2] $K={\frac {a+b}{|b-a|}}{\sqrt {ab(a-c)(c-b)}}.$ The area can be expressed in terms of the tangent lengths e, f, g, h as[3]: p.129  $K={\sqrt[{4}]{efgh}}(e+f+g+h).$ Inradius Using the same notations as for the area, the radius of the incircle is[2] $r={\frac {K}{a+b}}={\frac {\sqrt {ab(a-c)(c-b)}}{|b-a|}}.$ The diameter of the incircle is equal to the height of the tangential trapezoid.
The inradius can also be expressed in terms of the tangent lengths as[3]: p.129  $r={\sqrt[{4}]{efgh}}.$ Moreover, if the tangent lengths e, f, g, h emanate respectively from vertices A, B, C, D and AB is parallel to DC, then[1] $r={\sqrt {eh}}={\sqrt {fg}}.$ Properties of the incenter If the incircle is tangent to the bases at P, Q, then P, I, Q are collinear, where I is the incenter.[4] The angles ∠ AID and ∠ BIC in a tangential trapezoid ABCD, with bases AB and DC, are right angles.[4] The incenter lies on the median (also called the midsegment; that is, the segment connecting the midpoints of the legs).[4] Other properties The median (midsegment) of a tangential trapezoid equals one fourth of the perimeter of the trapezoid. It also equals half the sum of the bases, as in all trapezoids. If two circles are drawn, each with a diameter coinciding with one of the legs of a tangential trapezoid, then these two circles are tangent to each other.[5] Right tangential trapezoid A right tangential trapezoid is a tangential trapezoid where two adjacent angles are right angles. If the bases have lengths a, b, then the inradius is[6] $r={\frac {ab}{a+b}}.$ Thus the diameter of the incircle is the harmonic mean of the bases. The right tangential trapezoid has the area[6] $\displaystyle K=ab$ and its perimeter P is[6] $\displaystyle P=2(a+b).$ Isosceles tangential trapezoid An isosceles tangential trapezoid is a tangential trapezoid where the legs are equal. Since an isosceles trapezoid is cyclic, an isosceles tangential trapezoid is a bicentric quadrilateral. That is, it has both an incircle and a circumcircle. If the bases are a, b, then the inradius is given by[7] $r={\tfrac {1}{2}}{\sqrt {ab}}.$ Deriving this formula was a simple Sangaku problem from Japan. From Pitot's theorem it follows that the lengths of the legs are half the sum of the bases.
Since the diameter of the incircle is the square root of the product of the bases, an isosceles tangential trapezoid gives a nice geometric interpretation of the arithmetic mean and geometric mean of the bases as the length of a leg and the diameter of the incircle respectively. The area K of an isosceles tangential trapezoid with bases a, b is given by[8] $K={\tfrac {1}{2}}{\sqrt {ab}}(a+b).$ References 1. Josefsson, Martin (2014), "The diagonal point triangle revisited" (PDF), Forum Geometricorum, 14: 381–385. 2. H. Lieber and F. von Lühmann, Trigonometrische Aufgaben, Berlin, Dritte Auflage, 1889, p. 154. 3. Josefsson, Martin (2010), "Calculations concerning the tangent lengths and tangency chords of a tangential quadrilateral" (PDF), Forum Geometricorum, 10: 119–130. 4. "Problem Set 2.2". jwilson.coe.uga.edu. Retrieved 2022-02-10. 5. "Empire-Dental - Здоровая и счастливая улыбка!". math.chernomorsky.com. Retrieved 2022-02-10. 6. "Math Message Boards FAQ & Community Help | AoPS". artofproblemsolving.com. Retrieved 2022-02-10. 7. "Inscribed Circle and Trapezoid | Mathematical Association of America". www.maa.org. Retrieved 2022-02-10. 8. Abhijit Guha, CAT Mathematics, PHI Learning Private Limited, 2014, p. 7-73. 
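The area and inradius formulas above can be cross-checked numerically for an isosceles tangential trapezoid. The bases a = 4, b = 1 are an illustrative choice, with the leg c = (a + b)/2 forced by Pitot's theorem:

```python
import math

def tangential_trapezoid_area(a, b, c):
    """K = (a+b)/|b-a| * sqrt(ab(a-c)(c-b)); bases a != b, leg c."""
    return (a + b) / abs(b - a) * math.sqrt(a * b * (a - c) * (c - b))

def isosceles_area(a, b):
    """Isosceles case: K = (1/2) sqrt(ab) (a+b)."""
    return 0.5 * math.sqrt(a * b) * (a + b)

a, b = 4.0, 1.0
c = (a + b) / 2                         # leg length, by Pitot's theorem
K = tangential_trapezoid_area(a, b, c)  # ≈ 5.0, agreeing with isosceles_area(a, b)
r = K / (a + b)                         # inradius ≈ 1.0 = sqrt(a*b)/2
height = 2 * r                          # the incircle diameter equals the height
```

Here the leg c = 2.5 is the arithmetic mean of the bases and the incircle diameter 2r = 2 is their geometric mean, as described above.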
Wikipedia
Cobordism hypothesis

In mathematics, the cobordism hypothesis, due to John C. Baez and James Dolan,[1] concerns the classification of extended topological quantum field theories (TQFTs). In 2008, Jacob Lurie outlined a proof of the cobordism hypothesis, though the details of his approach have yet to appear in the literature as of 2022.[2][3][4] In 2021, Daniel Grady and Dmitri Pavlov claimed a complete proof of the cobordism hypothesis, as well as a generalization to bordisms with arbitrary geometric structures.[4]

Formulation

For a symmetric monoidal $(\infty ,n)$-category ${\mathcal {C}}$ which is fully dualizable and every $k$-morphism of which is adjointable, for $1\leq k\leq n-1$, there is a bijection between the ${\mathcal {C}}$-valued symmetric monoidal functors on the cobordism category and the objects of ${\mathcal {C}}$.

Motivation

Symmetric monoidal functors from the cobordism category correspond to topological quantum field theories. The cobordism hypothesis plays the same role for topological quantum field theories that the Eilenberg–Steenrod axioms play for homology theories: just as those axioms imply that a homology theory is uniquely determined by its value on the point, the cobordism hypothesis states that a topological quantum field theory is uniquely determined by its value on the point. In other words, the bijection between ${\mathcal {C}}$-valued symmetric monoidal functors and objects of ${\mathcal {C}}$ is given by evaluating a functor at the point.

See also

• Cobordism

References

1. Baez, John C.; Dolan, James (1995). "Higher‐dimensional algebra and topological quantum field theory". Journal of Mathematical Physics. 36 (11): 6073–6105. arXiv:q-alg/9503002. Bibcode:1995JMP....36.6073B. doi:10.1063/1.531236. ISSN 0022-2488. S2CID 14908618.
2. Hisham Sati; Urs Schreiber (2011). Mathematical Foundations of Quantum Field Theory and Perturbative String Theory. American Mathematical Soc. p. 18. ISBN 978-0-8218-5195-1.
3. Ayala, David; Francis, John (2017-05-05). "The cobordism hypothesis". arXiv:1705.02240 [math.AT].
4. Grady, Daniel; Pavlov, Dmitri (2021-11-01). "The geometric cobordism hypothesis". arXiv:2111.01095 [math.AT].

Further reading

• Freed, Daniel S. (11 October 2012). "The Cobordism hypothesis". Bulletin of the American Mathematical Society. American Mathematical Society (AMS). 50 (1): 57–92. doi:10.1090/s0273-0979-2012-01393-9. ISSN 0273-0979.
• Seminar on the Cobordism Hypothesis and (Infinity,n)-Categories, 2013-04-22
• Jacob Lurie (4 May 2009). On the Classification of Topological Field Theories

External links

• cobordism hypothesis at the nLab
Tango bundle

In algebraic geometry, a Tango bundle is one of the indecomposable vector bundles of rank n − 1 constructed on n-dimensional projective space Pn by Tango (1976).

References

• Tango, Hiroshi (1976), "An example of indecomposable vector bundle of rank n − 1 on Pn", Journal of Mathematics of Kyoto University, 16 (1): 137–141, ISSN 0023-608X, MR 0401766
• Kumar, N.; Peterson, Chris; Rao, A. (2003-05-08). "Degenerating families of rank two bundles". Proceedings of the American Mathematical Society. 131 (12): 3681–3688. doi:10.1090/s0002-9939-03-07071-0. ISSN 0002-9939.
Tanh-sinh quadrature

Tanh-sinh quadrature is a method for numerical integration introduced by Hidetoshi Takahashi and Masatake Mori in 1974.[1] It is especially useful where singularities or infinite derivatives exist at one or both endpoints. The method uses hyperbolic functions in the change of variables $x=\tanh \left({\frac {1}{2}}\pi \sinh t\right)\,$ to transform an integral on the interval x ∈ (−1, 1) to an integral on the entire real line t ∈ (−∞, ∞), the two integrals having the same value. After this transformation, the integrand decays at a double exponential rate, and thus this method is also known as the double exponential (DE) formula.[2] For a given step size $h$, the integral is approximated by the sum $\int _{-1}^{1}f(x)\,dx\approx \sum _{k=-\infty }^{\infty }w_{k}f(x_{k}),$ with the abscissas $x_{k}=\tanh \left({\frac {1}{2}}\pi \sinh kh\right)$ and the weights $w_{k}={\frac {{\frac {1}{2}}h\pi \cosh kh}{\cosh ^{2}\left({\frac {1}{2}}\pi \sinh kh\right)}}.$

Use

The Tanh-Sinh method is quite insensitive to endpoint behavior. Should singularities or infinite derivatives exist at one or both endpoints of the (−1, 1) interval, these are mapped to the (−∞, ∞) endpoints of the transformed interval, forcing the endpoint singularities and infinite derivatives to vanish. This results in a great enhancement of the accuracy of the numerical integration procedure, which is typically performed by the trapezoidal rule. In most cases, the transformed integrand displays a rapid roll-off (decay), enabling the numerical integrator to quickly achieve convergence. Like Gaussian quadrature, Tanh-Sinh quadrature is well suited for arbitrary-precision integration, where an accuracy of hundreds or even thousands of digits is desired. The convergence is exponential (in the discretization sense) for sufficiently well-behaved integrands: doubling the number of evaluation points roughly doubles the number of correct digits.
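The sum above can be sketched directly in code. The following is an illustration, not a library-grade implementation: the `level` parameter and the truncation rule (stop once the abscissas round to ±1, so endpoint singularities are never sampled) are choices of this sketch, not part of the original formulation.

```python
import math

def tanh_sinh(f, level=6):
    """Approximate the integral of f over (-1, 1) by the tanh-sinh rule
    with step size h = 2**-level, using the abscissas
    x_k = tanh((pi/2) sinh(k h)) and the weights
    w_k = (h pi / 2) cosh(k h) / cosh((pi/2) sinh(k h))**2."""
    h = 2.0 ** -level
    total = 0.5 * h * math.pi * f(0.0)   # k = 0 term: x_0 = 0, w_0 = h*pi/2
    k = 1
    while True:
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        if x >= 1.0:                     # abscissa indistinguishable from the endpoint
            break
        w = 0.5 * h * math.pi * math.cosh(t) / math.cosh(u) ** 2
        total += w * (f(x) + f(-x))      # add the symmetric pair +k and -k
        k += 1
    return total
```

For example, `tanh_sinh(lambda x: 1 / math.sqrt(1 - x))` approximates the integral of 1/√(1 − x) over (−1, 1), whose exact value is 2√2, despite the singularity at x = 1, since that endpoint is never evaluated.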
However, Tanh-Sinh quadrature is not as efficient as Gaussian quadrature for smooth integrands; unlike Gaussian quadrature, though, it tends to work equally well with integrands having singularities or infinite derivatives at one or both endpoints of the integration interval, as already noted. Furthermore, Tanh-Sinh quadrature can be implemented in a progressive manner, with the step size halved each time the rule level is raised, reusing the function values calculated at previous levels. A further advantage is that the abscissas and weights are relatively simple to compute. The cost of calculating abscissa–weight pairs for n-digit accuracy is roughly n² log² n, compared to n³ log n for Gaussian quadrature.

Bailey and others have done extensive research on Tanh-Sinh quadrature, Gaussian quadrature and Error Function quadrature, as well as several of the classical quadrature methods, and found that the classical methods are not competitive with the first three, particularly when high-precision results are required. In a conference paper presented at RNC5 on Real Numbers and Computers (Sept 2003), comparing Tanh-Sinh quadrature with Gaussian quadrature and Error Function quadrature, Bailey and Li found: "Overall, the Tanh-Sinh scheme appears to be the best. It combines uniformly excellent accuracy with fast run times. It is the nearest we have to a truly all-purpose quadrature scheme at the present time."

Comparing the scheme to Gaussian quadrature and Error Function quadrature, Bailey et al. (2005) found that the Tanh-Sinh scheme "appears to be the best for integrands of the type most often encountered in experimental math research". Bailey (2006) found that: "The Tanh-Sinh quadrature scheme is the fastest currently known high-precision quadrature scheme, particularly when one counts the time for computing abscissas and weights. It has been successfully employed for quadrature calculations of up to 20,000-digit precision."
In summary, the Tanh-Sinh quadrature scheme is designed so that it gives the most accurate result for the minimum number of function evaluations. In practice, the Tanh-Sinh quadrature rule is almost invariably the best rule and is often the only effective rule when extended precision results are sought.

Implementations

• Tanh-sinh, exp-sinh, and sinh-sinh quadrature are implemented in the C++ library Boost[3]
• Tanh-sinh quadrature is implemented in a macro-enabled Excel spreadsheet by Graeme Dennes.[4]
• Tanh-sinh quadrature is implemented in the Haskell package integration.[5]
• Tanh-sinh quadrature is implemented in the Python library mpmath.[6]
• An effective implementation of Tanh-sinh quadrature in C# by Ned Ganchovski.[7]

Notes

1. Takahashi & Mori (1974)
2. Mori (2005)
3. Thompson, Nick; Maddock, John. "Double-exponential quadrature". boost.org.
4. Dennes, Graeme. "Numerical Integration With Tanh-Sinh Quadrature". Newton Excel Bach, not (just) an Excel Blog.
5. Kmett, Edward. "integration: Fast robust numeric integration via tanh-sinh quadrature". Hackage.
6. "mpmath library for real and complex floating-point arithmetic with arbitrary precision". mpmath.
7. Ganchovski, Ned. "Tanh-Sinh Integration with C#". GitHub repository.

References

• Bailey, David H, "Tanh-Sinh High-Precision Quadrature" (2006).
• Molin, Pascal, Intégration numérique et calculs de fonctions L (in French), doctoral thesis (2010).
• Bailey, David H, Karthik Jeyabalan, and Xiaoye S. Li, "A comparison of three high-precision quadrature schemes". Experimental Mathematics, 14.3 (2005).
• Bailey, David H, Jonathan M. Borwein, David Broadhurst, and Wadim Zudlin, Experimental mathematics and mathematical physics, in Gems in Experimental Mathematics (2010), American Mathematical Society, pp. 41–58.
• Jonathan Borwein, David H. Bailey, and Roland Girgensohn, Experimentation in Mathematics—Computational Paths to Discovery. A K Peters, 2003. ISBN 1-56881-136-5.
• Mori, Masatake; Sugihara, Masaaki (15 January 2001). "The double-exponential transformation in numerical analysis". Journal of Computational and Applied Mathematics. 127 (1–2): 287–296. doi:10.1016/S0377-0427(00)00501-X. ISSN 0377-0427.
• Mori, Masatake (2005), "Discovery of the Double Exponential Transformation and Its Developments", Publications of the Research Institute for Mathematical Sciences, 41 (4): 897–935, doi:10.2977/prims/1145474600, ISSN 0034-5318.
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 4.5. Quadrature by Variable Transformation", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
• Takahashi, Hidetoshi; Mori, Masatake (1974), "Double Exponential Formulas for Numerical Integration", Publications of the Research Institute for Mathematical Sciences, 9 (3): 721–741, doi:10.2977/prims/1195192451, ISSN 0034-5318.

External links

• Cook, John D, "Double Exponential Integration" with source code.
• Dennes, Graeme, "Numerical Integration With Tanh-Sinh Quadrature": a Microsoft Excel workbook containing fourteen quadrature programs which demonstrate the Tanh-Sinh and other quadrature methods, exercised on a wide, diverse range of test integrals with results. Full open VBA source code and documentation are provided.
• van Engelen, Robert A, "Improving the Double Exponential Quadrature Tanh-Sinh, Sinh-Sinh and Exp-Sinh Formulas": compares Tanh-Sinh implementations and introduces optimizations to improve convergence speed and accuracy. Includes Tanh-Sinh, Sinh-Sinh and Exp-Sinh methods with source code.